Meta's AI System Llama Approved For Use By US Government Agencies
The U.S. General Services Administration has approved Meta's AI system Llama for use by federal agencies, declaring that it meets government security and legal standards. Reuters reports: "It's not about currying favor," [said Josh Gruenbaum, the GSA's procurement lead, when asked whether tech executives are giving the government discounts to get President Donald Trump's approval]. "It's about that recognition of how do we all lock in arms and make this country the best country it could possibly be." Federal agencies will be able to deploy the tool to speed up contract review or more quickly solve information technology hiccups, among other tasks, he said.
Read more of this story at Slashdot.
TikTok Algorithm To Be Retrained On US User Data Under Trump Deal
The Trump administration has struck a deal requiring TikTok's algorithm to be copied, retrained, and operated in the U.S. using only U.S. user data, with Oracle auditing the system and U.S. investors forming a joint venture to oversee it. The BBC reports: It comes after President Donald Trump said a deal to prevent the app's ban in the US, unless sold by its Chinese parent company ByteDance, had been reached with China's approval. White House officials claim the deal will be a win for the app's US users and citizens. President Trump is expected to sign an executive order later this week on the proposed deal, which will set out how it will comply with US national security demands.
The order will also outline a 120-day pause to the enforcement deadline to allow the deal to close. It is unclear whether the Chinese government has approved this agreement, or begun to take the regulatory steps required to deliver it. However, the White House appears confident it has secured China's approval. Data belonging to the 170 million users TikTok says it has in the US is already held on Oracle servers under an existing arrangement called Project Texas, which walled off US user data amid concerns it could fall into the hands of the Chinese government.
A senior White House official said that under President Trump's deal, the company would take on a comprehensive role in securing the entirety of the app for American users. They said this would include auditing and inspecting the source code and recommendation system underpinning the app, and rebuilding it for US users using only US user data.
California Issues Historic Fine Over Lawyer's ChatGPT Fabrications
An anonymous reader quotes a report from CalMatters: A California attorney must pay a $10,000 fine for filing a state court appeal full of fake quotations generated by the artificial intelligence tool ChatGPT. The fine appears to be the largest issued over AI fabrications by a California court and came with a blistering opinion (PDF) stating that 21 of 23 quotes from cases cited in the attorney's opening brief were made up. It also noted that numerous out-of-state and federal courts have confronted attorneys for citing fake legal authority. "We therefore publish this opinion as a warning," it continued. "Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations -- whether provided by generative AI or any other source -- that the attorney responsible for submitting the pleading has not personally read and verified."
The opinion, issued 10 days ago in California's 2nd District Court of Appeal, is a clear example of why the state's legal authorities are scrambling to regulate the use of AI in the judiciary. The state's Judicial Council two weeks ago issued guidelines requiring judges and court staff to either ban generative AI or adopt a generative AI use policy by Dec. 15. Meanwhile, the California Bar Association is considering whether to strengthen its code of conduct to account for various forms of AI following a request by the California Supreme Court last month.
The Los Angeles-area attorney fined last week, Amir Mostafavi, told the court that he did not read the text generated by the AI model before submitting the appeal in July 2023, months after OpenAI marketed ChatGPT as capable of passing the bar exam. A three-judge panel fined him for filing a frivolous appeal, violating court rules, citing fake cases, and wasting the court's time and taxpayers' money, according to the opinion. Mostafavi told CalMatters he wrote the appeal and then used ChatGPT to try to improve it. He said he didn't know it would add case citations or make things up.
Apple's iPhone 17 Pro Can Be Easily Scratched
An anonymous reader shares a report: The iPhone 17 Pro and 17 Pro Max appear to provide little resistance to scratches and scuffs around the sharp edges of the camera bump. Tech blogger Zack Nelson demonstrates this weakness in a durability test on his JerryRigEverything YouTube channel, explaining that the anodized aluminium layer on the iPhone 17 Pro and 17 Pro Max "does not stick to corners very well" -- creating a weak point in the coating. This is a known issue with the electrochemical anodizing process, so it was a design decision Apple knowingly made.
"For some reason, Apple didn't add a chamfer, fillet, or radius around the camera plateau, and I think it was intentional, so it looks cooler," Nelson says in the video. "But that decision to look cool out of the box is going to plague everyone who owns this phone down the road." The video shows that everyday objects, like a coin or house key carried in the same pocket as the iPhone 17 Pro, can chip away at the anodized coating around the sharp corners of the camera bump. However, that same mildly aggressive scratching on the flat surface of the camera plateau only produced dust that could be easily wiped away.
Uber CEO Says Robotaxis Could Displace Drivers in 10 To 15 Years and Create 'a Big, Big Societal Question'
The rise of self-driving cars could eventually cost many ride-hailing drivers their jobs -- and that's a big problem, Uber CEO Dara Khosrowshahi said. From a report: Khosrowshahi spoke about the issue onstage this month at a summit hosted by the "All-In" podcast, which posted a video of the conversation on Wednesday. At the summit, Khosrowshahi was asked about concerns that gig workers, who have played a key role in Uber's development, will eventually lose their jobs as self-driving cars become more prevalent.
The Uber CEO said he expects human drivers to continue working alongside self-driving cars in Uber's network in the coming years. "For the next five to seven years, we're going to have more human drivers and delivery people, just because we're going so quickly," Khosrowshahi said. "But, I think, 10 to 15 years from now, this is going to be a real issue," he said about drivers losing their jobs.
Microsoft is Bringing Video Wallpapers To Windows 11
Microsoft is working on bringing support for setting a video as your desktop wallpaper on Windows 11. From a report: Hidden in the latest Windows 11 preview builds, the feature lets you set an MP4, MOV, AVI, WMV, M4V, or MKV file as your wallpaper, which will play the video whenever you view the desktop.
For many years, users have wanted the ability to set a video as a desktop background. It's a feature that many Linux distributions support, and macOS also supports the ability to set a moving background as your lock screen. Windows Vista did support setting videos as your wallpaper, but only as part of the Ultimate SKU via a feature called DreamScene.
Nvidia To Invest $100 Billion in OpenAI
Nvidia will invest up to $100 billion in OpenAI as the AI lab builds data centers requiring 10 gigawatts of power capacity. The 10-gigawatt deployment equals 4 to 5 million GPUs -- the same number Nvidia will ship globally this year. Building one gigawatt of data center capacity costs $50 to $60 billion, including approximately $35 billion for Nvidia chips and systems. The first phase begins in the second half of 2026 using Nvidia's next-generation Vera Rubin systems.
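Taken together, those figures imply a buildout far larger than the investment itself. A quick arithmetic sketch using only the numbers quoted above (the figures are the article's; the calculation is illustrative):

```python
# All inputs come from the article; this only multiplies them out.
gigawatts = 10
cost_per_gw_low, cost_per_gw_high = 50e9, 60e9  # total buildout cost per GW
nvidia_per_gw = 35e9                            # Nvidia chips/systems per GW

total_low = gigawatts * cost_per_gw_low         # low end of total buildout
total_high = gigawatts * cost_per_gw_high       # high end of total buildout
nvidia_total = gigawatts * nvidia_per_gw        # implied Nvidia hardware spend

print(f"Implied buildout: ${total_low/1e9:.0f}B-${total_high/1e9:.0f}B")
print(f"Implied Nvidia hardware spend: ${nvidia_total/1e9:.0f}B")
```

So the up-to-$100 billion investment would cover well under a third of the roughly $350 billion in Nvidia hardware that a full 10-gigawatt deployment implies.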
The investment adds Nvidia to OpenAI's investor roster alongside Microsoft, SoftBank, and Thrive Capital at a $500 billion valuation. Nvidia CEO Jensen Huang described the investment as "additive to everything that's been announced and contracted."
China Road Trip Exposes List of Uninvestable Assets in the West
An anonymous reader shares a report: Venture capitalists in clean tech are starting to say out loud what they've suspected for a while: China's dominance has left key sectors in the West uninvestable. A group of eight VCs from Western firms agreed to share with Bloomberg the details of a July road trip across China during which they visited factories, spoke with startup investors, and interviewed founders of companies.
They knew China had raced ahead in sectors like batteries and "everything around energy," but seeing how big the gap was firsthand left them wondering how European and North American competitors can even survive, says Talia Rafaeli, a former investment banker at both Goldman Sachs and Barclays who's now a partner at Kompas VC. "Everyone needs to take this kind of trip," she said.
Apple Watch's New High Blood Pressure Notifications Developed With AI
Many Apple Watches will soon be able to alert users about possible high blood pressure, reports Reuters — the culmination of six years of research and development:
Apple used AI to sort through the data from 100,000 people enrolled in a heart and movement study it originally launched in 2019 to see whether it could find features in the signal data from the watch's main heart-related sensor that it could then match up with traditional blood pressure measurements, said Sumbul Ahmad Desai [Apple's vice president of health]. After multiple layers of machine learning, Apple came up with an algorithm that it then validated with a specific study of 2,000 participants.
Apple's privacy measures mean that "one of the ironies here is we don't get a lot of data" outside of the context of large-scale studies, Desai said. But data from those studies "gives us a sense of, scientifically, what are some other signals that are worth pulling the thread on ... those studies are incredibly powerful."
The feature, which received approval from the U.S. Food and Drug Administration, does not measure blood pressure directly, but notifies users that they may have high blood pressure and encourages them to use a cuff to measure it and talk to a doctor. Apple plans to roll out the feature to more than 150 countries, which Ami Bhatt, chief innovation officer of the American College of Cardiology, said could help people discover high blood pressure early and reduce related conditions such as heart attacks, strokes and kidney disease. Bhatt, who said her views are her own and do not represent those of the college, said Apple appears to have been careful to avoid false positives that might alarm users. But she said the iPhone maker should emphasize that the new feature is no substitute for traditional measurements and professional diagnosis.
The article notes that the feature will be available in Apple Watch Series 11 models that go on sale on Friday, as well as models back to the Apple Watch Series 9.
Astronomers Discover Previously Unknown Quasi-Moon Near Earth
"Astronomers have spotted a quasi-moon near Earth," reports CNN, "and the small space rock has likely been hanging out near our planet unseen by telescopes for about 60 years, according to new research."
The newly discovered celestial object, named 2025 PN7, is a type of near-Earth asteroid that orbits the sun but sticks close to our planet. Like our world, 2025 PN7 takes one year to complete an orbit around the sun...
The newly found 2025 PN7 is just one of a handful of known quasi-moons with orbits near our planet, including Kamo'oalewa, which is also thought to be an ancient lunar fragment. Kamo'oalewa is one of the destinations of China's Tianwen-2 mission launched in May, which aims to collect and return samples from the space rock in 2027. The Pan-STARRS observatory located on the Haleakala volcano in Hawaii captured observations of 2025 PN7 on August 29. Archival data revealed that the object has been in an Earth-like orbit for decades.
The quasi-moon managed to escape the notice of astronomers for so long because it is small and faint, said Carlos de la Fuente Marcos, a researcher on the faculty of mathematical sciences at the Complutense University of Madrid who recently authored a paper about the space rock. The paper was published on September 2 in the journal Research Notes of the American Astronomical Society, which is for timely non-peer-reviewed astronomical observations. The space rock swings within 186,000 miles (299,337 kilometers) of us during its closest pass of our planet, de la Fuente Marcos said.... "It can only be detected by currently available telescopes when it gets close to our planet as it did this summer," de la Fuente Marcos explained. "Its visibility windows are few and far between. It is a challenging object...."
Astronomers are still trying to figure out 2025 PN7's size. About 98 feet (30 meters) across is a reasonable estimate, de la Fuente Marcos said. It also has the potential to be 62 feet (19 meters) in diameter, according to EarthSky. The space rock is currently the smallest-known quasi-moon to have orbited near Earth, de la Fuente Marcos said.
Why One Computer Science Professor is 'Feeling Cranky About AI' in Education
Long-time Slashdot reader theodp writes: Over at the Communications of the ACM, Bard College CS Prof Valerie Barr explains why she's Feeling Cranky About AI and CS Education. Having seen CS education go through a number of we-have-to-teach-this moments over the decades — introductory programming languages, the Web, Data Science, etc. — Barr turns her attention to the next hand-wringing "what will we do" CS education moment with AI. "We're jumping through hoops without stopping first to question the runaway train," Barr writes...
Barr calls for stepping back from "the industry assertion that the ship has sailed, every student needs to use AI early and often, and there is no future application that isn't going to use AI in some way" and instead thoughtfully "articulate what sort of future problem solvers and software developers we want to graduate from our programs, and determine ways in which the incorporation of AI can help us get there."
From the article:
In much discussion about CS education:
a.) There's little interest in interrogating the downsides of generative AI, such as the environmental impact, the data theft impact, the treatment and exploitation of data workers.
b.) There's little interest in considering the extent to which, by incorporating generative AI into our teaching, we end up supporting a handful of companies that are burning billions in a vain attempt to each achieve performance that is a scintilla better than everyone else's.
c.) There's little interest in thinking about what's going to happen when the LLM companies decide that they have plateaued, that there's no more money to burn/spend, and a bunch of them fold—but we've perturbed education to such an extent that our students can no longer function without their AI helpers.
AI Tools Give Dangerous Powers to Cyberattackers, Security Researchers Warn
"On a recent assignment to test defenses, Dave Brauchler of the cybersecurity company NCC Group tricked a client's AI program-writing assistant into executing programs that forked over the company's databases and code repositories," reports the Washington Post.
"We have never been this foolish with security," Brauchler said...
Demonstrations at last month's Black Hat security conference in Las Vegas included other attention-getting means of exploiting artificial intelligence. In one, an imagined attacker sent documents by email with hidden instructions aimed at ChatGPT or competitors. If a user asked for a summary or one was made automatically, the program would execute the instructions, even finding digital passwords and sending them out of the network. A similar attack on Google's Gemini didn't even need an attachment, just an email with hidden directives. The AI summary falsely told the target an account had been compromised and that they should call the attacker's number, mimicking successful phishing scams.
The threats become more concerning with the rise of agentic AI, which empowers browsers and other tools to conduct transactions and make other decisions without human oversight. Already, security company Guardio has tricked Perplexity's agentic Comet browser into buying a watch from a fake online store and into following instructions from a fake banking email...
Advanced AI programs also are beginning to be used to find previously undiscovered security flaws, the so-called zero-days that hackers highly prize and exploit to gain entry into software that is configured correctly and fully updated with security patches. Seven teams of hackers that developed autonomous "cyber reasoning systems" for a contest held last month by the Pentagon's Defense Advanced Research Projects Agency were able to find a total of 18 zero-days in 54 million lines of open source code. They worked to patch those vulnerabilities, but officials said hackers around the world are developing similar efforts to locate and exploit them. Some longtime security defenders are predicting a once-in-a-lifetime, worldwide mad dash to use the technology to find new flaws and exploit them, leaving back doors in place that they can return to at leisure.
The real nightmare scenario is when these worlds collide, and an attacker's AI finds a way in and then starts communicating with the victim's AI, working in partnership — "having the bad guy AI collaborate with the good guy AI," as SentinelOne's [threat researcher Alex] Delamotte put it. "Next year," said Adam Meyers, senior vice president at CrowdStrike, "AI will be the new insider threat."
In August, more than 1,000 people lost data to a modified Nx program (downloaded hundreds of thousands of times) that used pre-installed coding tools from Google, Anthropic, and others. According to the article, the malware "instructed those programs to root out" sensitive data (including passwords and cryptocurrency wallets) and send it back to the attacker. "The more autonomy and access to production environments such tools have, the more havoc they can wreak," the article points out — including this quote from SentinelOne threat researcher Alex Delamotte:
"It's kind of unfair that we're having AI pushed on us in every single product when it introduces new risks."
More Durable UV Coating For Solar Panels Made From Red Onion Skins
Long-time Slashdot reader fahrbot-bot shared this report from ZME Science:
Researchers from the University of Turku, in collaboration with Aalto University and Wageningen University, have developed a bio-based UV protection film for solar cells that not only blocks nearly all harmful ultraviolet light but also outperforms commercial plastic films. The key ingredient is a water extract made from red onion skins...
[T]he same sunlight that powers [solar cells] can also degrade their delicate components — particularly the electrolyte inside dye-sensitized solar cells (DSSCs), a type known for their flexibility and low-light performance. To mitigate this, manufacturers typically wrap cells in UV-protective films made from petroleum-based plastics like polyethylene terephthalate (PET). But these plastics degrade over time and are difficult to recycle... Nanocellulose can be processed into thin, transparent films that serve as the perfect substrate for UV-blocking compounds.
Their breakthrough came when they dyed these films using an extract from red onion skins, a common kitchen waste. The result was a filter that blocked 99.9% of UV radiation up to 400 nanometers, a feat that outstripped even the PET-based commercial filters chosen for comparison... [T]he onion-treated filter excelled: it let through over 80% of light in the 650-1,100 nm range — an ideal sweet spot for energy absorption... Even predictive modeling based on early degradation trends suggested the CNF-ROE filter could extend a solar cell's lifetime to roughly 8,500 hours. The PET-based filter? Just 1,500 hours... [T]he red onion extract offered a rare combination of longevity, transparency, and sustainability...
The team envisions biodegradable solar cells for smart packaging, remote sensors, or wearable devices — especially in applications where recovery and recycling are not feasible. Their work is part of the BioEST project, funded by the Research Council of Finland, which supports sustainable innovation across electronics and materials science. This achievement taps into a broader movement to decarbonize every step of solar energy production. Plastic packaging is one of the overlooked sources of emissions in clean technology. Swapping out fossil-based plastics for biodegradable alternatives helps close that loop...
The findings appeared in the journal Applied Optical Materials.
Meta's UK Arbitration 'Threatens to Bankrupt' Facebook Whistleblower, Says Her Lawyer
In a debate on employment rights, a U.K. Parliament member brought up Meta's former director of global public policy, Sarah Wynn-Williams:
Louise Haigh, the former Labour transport secretary, said Wynn-Williams was facing a fine of $50,000 (£37,000) every time she breached an order secured by Meta preventing her from talking disparagingly about the company... "I am sure that the whole house and the government will stand with Sarah as we pass this legislation to ensure that whistleblowers and those with the moral courage to speak out are always protected...."
Meta has emphasised that Wynn-Williams entered into the non-disparagement agreement voluntarily as part of her departure. Meta said that to date, Wynn-Williams had not been forced to make any payments under the agreement... [The ruling came after Wynn-Williams published an exposé in March about her time at Facebook titled Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism.] The ruling stated Wynn-Williams should stop promoting the book and, to the extent she could, stop further publication... Wynn-Williams has not spoken in public since appearing at the Senate hearing in April.
Wynn-Williams "remains silenced" according to her lawyer, who tells the Guardian that Meta's arbitration proceedings in the U.K. "threaten to bankrupt" the whistleblower.
America's Space Force is Preparing for a New Kind of War
A July combat training exercise involved a satellite dish-style antenna that "could fire enough electromagnetic energy to fry the satellite 22,000 miles away," reports the Washington Post. But "Instead, the salvo would be more covert — millisecond pulses of energy that would subtly disrupt the satellite's signals, which U.S. military forces were using to communicate in the Pacific Ocean."
The goal was to disguise the strike as a garbled connection that could be easily remedied by securing a loose cable or a simple reboot, leaving U.S. service members frustrated without raising their suspicions. [And using less power "would make it harder for the Blue Team to track where the interference was coming from."] This is how the next war could start: invisible shots fired in space on the electromagnetic spectrum that could render U.S. fighter jets and aircraft carriers deaf and blind, unable to communicate. In this case, the "aggressors" targeting the U.S. satellite were not from China or Russia, but rather an elite squadron of U.S. Space Force Guardians mimicking how potential adversaries would act in a conflict that begins in orbit... Involving more than 700 service members and spanning 50 million square miles and six time zones, the training exercise, called Resolute Space, was observed firsthand exclusively by The Washington Post.
The article describes leadership at the U.S. Space Force "still honing their mission while jousting with adversaries, such as China, that are moving quickly and conducting combat-like operations in orbit... While the Space Force continues to evolve, many defense analysts and some members of Congress fear the United States has already ceded its dominance in space to China and others."
With a budget of just $40 billion, the relatively tiny Space Force makes up about 4 percent of the Defense Department's budget and less than 1 percent of its personnel. It has more than 15,000 Guardians, a workforce that also includes several thousand civilians. By comparison, the Army has nearly 1 million soldiers. The Space Force has been squeezed under the Department of the Air Force and has struggled to distinguish itself from the other branches...
China, Russia and others have demonstrated that they can take out or interfere with the satellites operated by the Pentagon and intelligence agencies that provide the nation's missile warning and tracking, reconnaissance and communications. China in particular has moved rapidly to build an arsenal of space-based weapons... [R]ecently, several of China's satellites have engaged in what Space Force officials have called "dogfighting," jousting with U.S. satellites at high speeds and close ranges.
Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions
Last week the Guardian reported on "thousands of AI workers contracted for Google through Japanese conglomerate Hitachi's GlobalLogic to rate and moderate the output of Google's AI products, including its flagship chatbot Gemini... and its summaries of search results, AI Overviews."
"AI isn't magic; it's a pyramid scheme of human labor," said Adio Dinika, a researcher at the Distributed AI Research Institute based in Bremen, Germany. "These raters are the middle rung: invisible, essential and expendable...." Ten of Google's AI trainers the Guardian spoke to said they have grown disillusioned with their jobs because they work in siloes, face tighter and tighter deadlines, and feel they are putting out a product that's not safe for users... In May 2023, a contract worker for Appen submitted a letter to the US Congress that the pace imposed on him and others would make Google Bard, Gemini's predecessor, a "faulty" and "dangerous" product
This week Google laid off 200 of those moderating contractors, reports Wired. "These workers, who often are hired because of their specialist knowledge, had to have either a master's or a PhD to join the super rater program, and typically include writers, teachers, and people from creative fields."
Workers still at the company claim they are increasingly concerned that they are being set up to replace themselves. According to internal documents viewed by WIRED, GlobalLogic seems to be using these human raters to train the Google AI system that could automatically rate the responses, with the aim of replacing them with AI. At the same time, the company is also finding ways to get rid of current employees as it continues to hire new workers. In July, GlobalLogic made it mandatory for its workers in Austin, Texas, to return to office, according to a notice seen by WIRED...
Some contractors attempted to unionize earlier this year but claim those efforts were quashed. Now they allege that the company has retaliated against them. Two workers have filed a complaint with the National Labor Relations Board, alleging they were unfairly fired, one due to bringing up wage transparency issues, and the other for advocating for himself and his coworkers. "These individuals are employees of GlobalLogic or their subcontractors, not Alphabet," Courtenay Mencini, a Google spokesperson, said in a statement...
"Globally, other AI contract workers are fighting back and organizing for better treatment and pay," the article points out, noting that content moderators from around the world facing similar issues formed the Global Trade Union Alliance of Content Moderators which includes workers from Kenya, Turkey, and Colombia.
Thanks to long-time Slashdot reader mspohr for sharing the news.
Secure Software Supply Chains, Urges Former Go Lead Russ Cox
Writing in Communications of the ACM, former Go tech lead Russ Cox warns we need to keep improving defenses of software supply chains, highlighting "promising approaches that should be more widely used" and "areas where more work is needed."
There are important steps we can take today, such as adopting software signatures in some form, making sure to scan for known vulnerabilities regularly, and being ready to update and redeploy software when critical new vulnerabilities are found. More development should be shifted to safer languages that make vulnerabilities and attacks less likely. We also need to find ways to fund open source development to make it less susceptible to takeover by the mere offer of free help. Relatively small investments in OpenSSL and XZ development could have prevented both the Heartbleed vulnerability and the XZ attack.
Some highlights from the 5,000-word article:
Make Builds Reproducible. "The Reproducible Builds project aims to raise awareness of reproducible builds generally, as well as building tools to help progress toward complete reproducibility for all Linux software. The Go project recently arranged for Go itself to be completely reproducible given only the source code... A build for a given target produces the same distribution bits whether you build on Linux or Windows or Mac, whether the build host is X86 or ARM, and so on. Strong reproducibility makes it possible for others to easily verify that the binaries posted for download match the source code..."
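The verification step this enables can be sketched in a few lines: if a build is fully reproducible, an independent rebuild from the same source yields a byte-identical artifact, so comparing SHA-256 digests is enough. A minimal sketch in Python (the artifact bytes are stand-ins, not real release files):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of a build artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for an official release and an independent rebuild.
official_build = b"identical build output"
local_rebuild = b"identical build output"

# With a fully reproducible toolchain, the digests match exactly, so anyone
# can confirm the posted binary was produced from the public source code.
assert sha256_of(official_build) == sha256_of(local_rebuild)
print("reproducible: digests match")
```

Any digest mismatch would flag either a non-reproducible build step or a tampered artifact.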
Prevent Vulnerabilities. "The most secure software dependencies are the ones not used in the first place: Every dependency adds risk... Another good way to prevent vulnerabilities is to use safer programming languages that remove error-prone language features or make them needed less often..."
Authenticate Software. ("Cryptographic signatures make it impossible to nefariously alter code between signing and verifying. The only problem left is key distribution...") "The Go checksum database is a real-world example of this approach that protects millions of Go developers. The database holds the SHA256 checksum of every version of every public Go module..."
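The checksum-database idea can be illustrated with a toy version. This is a simplified sketch: the real Go checksum database uses a transparency log and directory hashes rather than a flat lookup table, and the module names and bytes below are made up:

```python
import hashlib

# Toy stand-in for a checksum database: module@version -> expected digest.
# (Module names and contents are illustrative, not real Go modules.)
checksum_db = {
    "example.com/mod@v1.0.0": hashlib.sha256(b"module source v1.0.0").hexdigest(),
}

def verify(module: str, downloaded: bytes) -> bool:
    """Accept a downloaded module only if its digest matches the database."""
    expected = checksum_db.get(module)
    return expected is not None and hashlib.sha256(downloaded).hexdigest() == expected

assert verify("example.com/mod@v1.0.0", b"module source v1.0.0")        # intact
assert not verify("example.com/mod@v1.0.0", b"tampered module source")  # altered
print("checksum verification works")
```

Because every client checks downloads against the same published digests, an attacker who compromises a single download mirror cannot serve modified code undetected.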
Fund Open Source. [Cox first cites the XKCD cartoon "Dependencies," calling it "a disturbingly accurate assessment of the situation..."] "The XZ attack is the clearest possible demonstration that the problem is not fixed. It was enabled as much by underfunding of open source as by any technical detail."
The article also emphasized the importance of finding and fixing vulnerabilities quickly, arguing that software attacks must be made more difficult and expensive.
"We use source code downloaded from strangers on the Internet in our most critical applications; almost no one is checking the code.... We all have more work to do."
Tech Boomtown Seattle Grapples with Fewer Tech Jobs
Near Microsoft's headquarters in Redmond, the Five Stones coffee shop advertised for a barista a few months ago — and started getting resumes from "people who listed Microsoft and other tech companies," writes the Wall Street Journal:
The applicants typically had master's degrees and experience in graphic design or marketing roles, Andrews said — sometimes senior ones. They were applying to jobs at Five Stones that would pay Redmond's minimum wage, $16.66 an hour. Five Stones hasn't yet hired such candidates because the coffee shop gives priority to more traditional entry-level baristas, like high-schoolers...
[Microsoft and Amazon] have laid off more than 46,000 employees since 2023, according to Layoffs.fyi, which tracks workforce reductions. That represents 85% of layoffs by Seattle-area tech companies... As Amazon and Microsoft have made cuts — and other local tech firms including Expedia and Redfin have followed suit — the effects have rippled through Seattle's other business sectors. Weakness in payroll- and sales-tax receipts contributed to a projected $146 million revenue shortfall over the next two years. Restaurant and retail spending is down in the business and shopping districts surrounding Amazon's and Microsoft's campuses, with total transactions falling by as much as 7% in some popular areas in the past year, according to data from Square. In the first half of 2025, around 450 restaurants closed in Seattle, or about 16% of its total. "At the halfway point of the year, we've already seen as many closures as we'd usually see in a full year," said Anthony Anton, chief executive officer of the Washington Hospitality Association.
Uber driver Juan Prado made six figures in 2021, often shuttling passengers in town for job interviews and doing frequent drop-offs near downtown tech offices. Now, he said, demand is much lower. "There are moments where you can be online, and in certain areas, it shows nothing...." Seattle tech firms are asking for significantly fewer job placements than years ago, said Noelle McDonald, senior vice president at recruiting company Aquent, which counts Amazon and Microsoft as clients. Hiring windows have lengthened and open roles receive around 10 times as many applications.
And of course, "Commercial real-estate vacancies stand at a record high as offices built to accommodate a boom sit empty..."
While some laid-off employees launched their own startups, "the outlook for many tech workers is dour as companies invest in software tools they can use to streamline teams," the article points out. Microsoft CEO Satya Nadella "has said the company is increasingly looking to AI to perform coding and other tasks once done by people," while in June, Amazon "said its workforce would shrink going forward."
Read more of this story at Slashdot.
Disney Sued by Law Firm Wanting to Use 'Steamboat Willie' in Its Ads
Mickey Mouse's first movie, Steamboat Willie, entered the public domain in 2024.
Now one of America's largest personal injury firms is suing Disney, reports the Associated Press, "in an effort to get a ruling that would allow it to use Steamboat Willie in advertisements..."
[The law firm said] it had reached out to Disney to make sure the entertainment company wouldn't sue it if it used images from the animated film for its TV and online ads. Disney's lawyers responded by saying they didn't offer legal advice to third parties, according to the lawsuit. Morgan & Morgan said it was filing the lawsuit to get a decision because it otherwise feared being sued by Disney for trademark infringement if it used Steamboat Willie.
"Without waiver of any of its rights, Disney will not provide such advice in response to your letter," Disney's attorneys wrote in their letter (adding "Very truly yours..."). A local newscast showed a glimpse of the letter, along with a few seconds of the ad (which ends with Minnie Mouse pulling out a cellphone to call for a lawyer...)
Attorney John Morgan tells the newscast that Disney's legal team "is playing cute, and so we're just trying to get a yes or no answer... They wrote us back a bunch of mumbo-jumbo that made no sense, didn't answer the question. We tried it again, they didn't answer the question..." (The newscast adds that the case isn't expected to go to court for at least a year.)
Read more of this story at Slashdot.
Glitches Humiliated Zuck in Smart Glasses Launch. Meta CTO Explains What Happened
When Meta finally unveiled its newest smart glasses, CEO Mark Zuckerberg "drew more snickers than applause," wrote the New York Times. (Mashable points to a video call that failed onstage, followed by an unsuccessful recipe demonstration.)
Meta chief technology officer Andrew Bosworth later explained the funny reason their demo didn't work, reports TechCrunch, while answering questions on Instagram:
"When the chef said, 'Hey, Meta, start Live AI,' it started every single Ray-Ban Meta's Live AI in the building. And there were a lot of people in that building," Bosworth explained. "That obviously didn't happen in rehearsal; we didn't have as many things," he said, referring to the number of glasses that were triggered... The second part of the failure had to do with how Meta had chosen to route the Live AI traffic to its development server to isolate it during the demo. But when it did so, it did this for everyone in the building on the access points, which included all the headsets. "So we DDoS'd ourselves, basically, with that demo," Bosworth added... Meta's dev server wasn't set up to handle the flood of traffic from the other glasses in the building — Meta was only planning for it to handle the demos alone.
The issue with the failed WhatsApp call, on the other hand, was the result of a new bug. The smart glasses' display had gone to sleep at the exact moment the call came in, Bosworth said. When Zuckerberg woke the display back up, it didn't show the answer notification to him. The CTO said this was a "race condition" bug... "We've never run into that bug before," Bosworth noted. "That's the first time we'd ever seen it. It's fixed now, and that's a terrible, terrible place for that bug to show up." He stressed that, of course, Meta knows how to handle video calls, and the company was "bummed" about the bug showing up here... "It really was just a demo fail and not, like, a product failure," he said.
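A race condition of the general shape Bosworth describes can be illustrated with a toy model. Everything here (the `Display` type and its `Notify` and `Wake` methods) is hypothetical and unrelated to Meta's actual code; it only shows how a notification can be silently lost when it arrives in the narrow window while a display is asleep:

```go
package main

import (
	"fmt"
	"sync"
)

// Display is a toy model of a screen that may be asleep.
type Display struct {
	mu    sync.Mutex
	awake bool
	shown []string
}

// Notify shows a message only if the display is awake — the bug:
// whether the user ever sees it depends entirely on timing.
func (d *Display) Notify(msg string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.awake {
		d.shown = append(d.shown, msg)
	}
	// If asleep, the message is silently dropped.
}

// Wake turns the display on. A fix would queue messages that
// arrive while asleep and replay them here.
func (d *Display) Wake() {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.awake = true
}

func main() {
	d := &Display{awake: true}
	d.Notify("incoming call") // arrives while awake: shown

	d2 := &Display{} // asleep
	d2.Notify("incoming call") // arrives during the sleep window: lost
	d2.Wake()                  // waking does not recover it

	fmt.Println(len(d.shown), len(d2.shown)) // 1 0
}
```

The outcome depends only on the ordering of `Notify` relative to the sleep/wake state, which is why such bugs surface rarely — and, as here, at the worst possible moment.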
Thanks to Slashdot reader fjo3 for sharing the news.
Read more of this story at Slashdot.