Canadian Math Prodigy Allegedly Stole $65 Million In Crypto
A Canadian math prodigy is accused of stealing over $65 million through complex exploits on decentralized finance platforms and is currently a fugitive from U.S. authorities. Despite facing criminal charges for fraud and money laundering, he has evaded capture by moving internationally, embracing the controversial "Code is Law" philosophy, and maintaining that his actions were legal under the platforms' open-source rules. The Globe and Mail reports: Andean Medjedovic was 18 years old when he made a decision that would irrevocably alter the course of his life. In the fall of 2021, shortly after completing a master's degree at the University of Waterloo, the math prodigy and cryptocurrency trader from Hamilton had conducted a complex series of transactions designed to exploit a vulnerability in the code of a decentralized finance platform. The maneuver had allegedly allowed him to siphon approximately $16.5-million in digital tokens out of two liquidity pools operated by the platform, Indexed Finance, according to a U.S. court document.
Indexed Finance's leaders traced the attack back to Mr. Medjedovic, and made him an offer: Return 90 per cent of the funds, keep the rest as a so-called "bug bounty" -- a reward for having identified an error in the code -- and all would be forgiven. Mr. Medjedovic would then be free to launch his career as a white hat, or ethical, hacker. Mr. Medjedovic didn't take the deal. His social media posts hinted, without overtly stating, that he believed that because he had operated within the confines of the code, he was entitled to the funds -- a controversial philosophy in the world of decentralized finance known as "Code is Law." But instead of testing that argument in court, Mr. Medjedovic went into hiding. By the time authorities arrived on a quiet residential street in Hamilton to search his parents' townhouse less than two months later, Mr. Medjedovic had moved out, taking his electronic devices with him.
Then, roughly two years later, he struck again, netting an even larger sum -- approximately $48.4-million -- by conducting a similar exploit on another decentralized finance platform, U.S. authorities allege. Mr. Medjedovic, now 22, faces five criminal charges -- including wire fraud, attempted extortion and money laundering -- according to a U.S. federal court document that was unsealed earlier this year. If convicted, he could be facing decades in prison. First, authorities will have to find him.
Apple Says All Mac Minis With Intel Are Now Vintage
Apple has officially designated all Intel-based Mac minis as "vintage" or "obsolete," marking the end of an era. This means Apple no longer guarantees parts or service for these devices, as they've surpassed the 5- to 7-year support window. 9to5Mac reports: Apple periodically adds devices to its ever-growing list of vintage and obsolete products. That happened today, as spotted by MacRumors, with two noteworthy "vintage" additions: iPhone 6s and Mac mini (2018). The latter product is especially significant, because the 2018 Mac mini was the last remaining Intel model that was not yet labeled either vintage or obsolete.
So what are those timelines exactly? Per Apple's definitions:
Vintage: "Apple stopped distributing them for sale more than 5 and less than 7 years ago."
Obsolete: "Apple stopped distributing them for sale more than 7 years ago."
[...] Since these products are now considered vintage, Apple no longer guarantees that parts for repairs will be readily available.
Figma Sent a Cease-and-Desist Letter To Lovable Over the Term 'Dev Mode'
An anonymous reader quotes a report from TechCrunch: Figma has sent a cease-and-desist letter to popular no-code AI startup Lovable, Figma confirmed to TechCrunch. The letter tells Lovable to stop using the term "Dev Mode" for a new product feature. Figma, which also has a feature called Dev Mode, successfully trademarked that term last year, according to the U.S. Patent and Trademark Office. What's wild is that "dev mode" is a common term used in many products that cater to software programmers. It's like an edit mode. Giant companies' products, like Apple's iOS, Google's Chrome, and Microsoft's Xbox, have features formally called "developer mode" that then get nicknamed "dev mode" in reference materials.
Even "dev mode" itself is commonly used. For instance Atlassian used it in products that pre-date Figma's copyright by years. And it's a common feature name in countless open source software projects. Figma tells TechCrunch that its trademark refers only to the shortcut "Dev Mode" -- not the full term "developer mode." Still, it's a bit like trademarking the term "bug" to refer to "debugging." Since Figma wants to own the term, it has little choice but send cease-and-desist letters. (The letter, as many on X pointed out, was very polite, too.) If Figma doesn't defend the term, it could be absorbed as a generic term and the trademarked becomes unenforceable.
Uber Cofounder Kalanick Says AI Means Some Consultants Are in 'Big Trouble'
Uber cofounder Travis Kalanick thinks AI is about to shake up consulting -- and for "traditional" professionals, not in a good way. From a report: The former Uber CEO said consultants who mostly follow instructions or do repetitive tasks are at risk of being replaced by AI. "If you're a traditional consultant and you're just doing the thing, you're executing the thing, you're probably in some big trouble," he said. He joked about what that future of consultancy might look like: "Push a button. Get a consultant."
However, Kalanick said the professionals who would come out ahead would be the ones who build tools rather than just use them. "If you are the consultant that puts the things together that replaces the consultant, maybe you got some stuff," he said. "You're going to profitable companies with competitive moats, making that moat bigger," he explained. "Making their profit bigger is probably pretty interesting from a financial point of view."
You Should Still Learn To Code, Says GitHub CEO
You should still learn to code, says GitHub's CEO. And you should start as soon as possible. From a report: "I strongly believe that every kid, every child, should learn coding," Thomas Dohmke said in a recent podcast interview with EO. "We should actually teach them coding in school, in the same way that we teach them physics and geography and literacy and math and what-not." Coding, he added, is one such fundamental skill -- and the only reason it's not part of the curriculum is because it took "us too long to actually realize that."
Dohmke, who's been a programmer since the 90s, said he's never seen "anything more exciting" than the current moment in engineering -- the advent of AI, he believes, has made the field that much easier to break into, and is poised to make software more ubiquitous than ever. "It's so much easier to get into software development. You can just write a prompt into Copilot or ChatGPT or similar tools, and it will likely write you a basic webpage, or a small application, a game in Python," Dohmke said. "And so, AI makes software development so much more accessible for anyone who wants to learn coding."
AI, Dohmke said, helps to "realize the dream" of bringing an idea to life, meaning that fewer projects will end up dead in the water, and smaller teams of developers will be enabled to tackle larger-scale projects. Dohmke said he believes it makes the overall process of creation more efficient. "You see some of the early signs of that, where very small startups -- sometimes five developers and some of them actually only one developer -- believe they can become million, if not billion dollar businesses by leveraging all the AI agents that are available to them," he added.
Google DeepMind Is Hiring a 'Post-AGI' Research Scientist
An anonymous reader shares a report: None of the frontier AI research labs have presented any evidence that they are on the brink of achieving artificial general intelligence, no matter how they define that goal, but Google is already planning for a "Post-AGI" world by hiring a scientist for its DeepMind AI lab to research the "profound impact" that technology will have on society.
"Spearhead research projects exploring the influence of AGI on domains such as economics, law, health/wellbeing, AGI to ASI [artificial superintelligence], machine consciousness, and education," Google says in the first item on a list of key responsibilities for the job. Artificial superintelligence refers to a hypothetical form of AI that is smarter than the smartest human in all domains. This is self explanatory, but just to be clear, when Google refers to "machine consciousness" it's referring to the science fiction idea of a sentient machine.
OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, Elon Musk, and other major and minor players in the AI industry are all working on AGI and have previously talked about the likelihood of humanity achieving AGI, when that might happen, and what the consequences might be, but the Google job listing shows that companies are now taking concrete steps for what comes after, or at least are continuing to signal that they believe it can be achieved.
OpenAI is Building a Social Network
An anonymous reader shares a report: OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter. While the project is still in early stages, we're told there's an internal prototype focused on ChatGPT's image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say. It's unclear if OpenAI's plan is to release the social network as a separate app or integrate it into ChatGPT, which became the most downloaded app globally last month.
Launching a social network in or around ChatGPT would likely increase Altman's already-bitter rivalry with Elon Musk. In February, after Musk made an unsolicited offer to purchase OpenAI for $97.4 billion, Altman responded: "no thank you but we will buy twitter for $9.74 billion if you want." Entering the social media market also puts OpenAI on more of a collision course with Meta, which we're told is planning to add a social feed to its coming standalone app for its AI assistant. When reports of Meta building a rival to the ChatGPT app first surfaced a couple of months ago, Altman shot back on X again by saying, "ok fine maybe we'll do a social app."
Android Phones Will Soon Reboot Themselves After Sitting Unused For 3 Days
An anonymous reader shares a report: A silent update rolling out to virtually all Android devices will make your phone more secure, and all you have to do is not touch it for a few days. The new feature implements auto-restart of a locked device, which will keep your personal data more secure. It's coming as part of a Google Play Services update, though, so there's nothing you can do to speed along the process.
Google is preparing to release a new update to Play Services (v25.14), which brings a raft of tweaks and improvements to myriad system features. First spotted by 9to5Google, the update was officially released on April 14, but as with all Play Services updates, it could take a week or more to reach all devices. When 25.14 arrives, Android devices will see a few minor improvements, including prettier settings screens, improved connection with cars and watches, and content previews when using Quick Share.
Indian IT Faces Its Kodak Moment
An anonymous reader shares a report: Generative AI offers remarkable efficiency gains while presenting a profound challenge for the global IT services industry -- a sector concentrated in India and central to its export economy.
For decades, Indian technology firms thrived by deploying their engineering talent to serve primarily Western clients. Now they face a critical question. Will AI's productivity dividend translate into revenue growth? Or will fierce competition see these gains competed away through price reductions?
Industry soundings suggest the deflationary dynamic may already be taking hold. JPMorgan's conversations with executives, deal advisors and consultants across India's technology hubs reveal growing concern -- AI-driven efficiencies are fuelling pricing pressures. This threatens to constrain medium-term industry growth to a modest 4-5%, with little prospect of acceleration into fiscal year 2026. This emerging reality challenges the earlier narrative that AI would primarily unlock new revenue streams.
Chinese Robotaxis Have Government Black Boxes, Approach US Quality
An anonymous reader quotes a report from Forbes: Robotaxi development is moving at a fast pace in China, but we don't hear much about it in the USA, where the news focuses mostly on Waymo, with a bit about Zoox, Motional, May, trucking projects and other domestic players. China has 4 main players with robotaxi service, dominated by Baidu (the Chinese Google). A recent session at last week's Ride AI conference in Los Angeles revealed some details about the different regulatory regime in China, and featured a report from a Chinese-American YouTuber who has taken on a mission to ride in the different vehicles.
Zion Maffeo, deputy general counsel for Pony.AI, provided some details on regulations in China. While Pony began with U.S. operations, its public operations are entirely in China, and it does only testing in the USA. Famously, it was one of the few companies to get a California "no safety driver" test permit, but then lost it after a crash, and later regained it. Chinese authorities at many levels keep a close watch over Chinese robotaxi companies. They must get approval for all levels of operation, which control where they can test and operate, and how much supervision is needed. Operation begins with testing with a safety driver behind the wheel (as almost everywhere in the world), with eventual graduation to having the safety driver in the passenger seat but with an emergency stop. Then they move to having a supervisor in the back seat before they can test with nobody in the vehicle, usually limited to an area with simpler streets.
The big jump comes when they are allowed to test with nobody in the vehicle, but with full-time monitoring by a remote employee who can stop the vehicle. From there they can graduate to taking passengers, and then expanding the service to more complex areas. Later they can go further and drop full-time remote monitoring, though there do need to be remote employees able to monitor and assist part time. Pony has a permit allowing it to have 3 vehicles per remote operator, and has one for 15 vehicles in process, but they declined comment on just how many vehicles they actually have per operator. Baidu also did not respond to queries on this. [...] In addition, Chinese jurisdictions require that the system in a car independently log any "interventions" by safety drivers in a sort of "black box" system. These reports are regularly given to regulators, though they are not made public. In California, companies must file an annual disengagement report, but they have considerable leeway on what they consider a disengagement, so the numbers can't be readily compared. Chinese companies have no discretion over what is reported, though they may notify authorities of a specific objection if they wish to declare that an intervention logged in their black box should not be counted. On her first trip, YouTuber Sophia Tung found Baidu's 5th generation robotaxi to offer a poor experience in ride quality, wait time, and overall service. However, during a return trip she tried Baidu's 6th generation vehicle in Wuhan and rated it as the best among Chinese robotaxis, approaching the quality of Waymo.
Climate Crisis Has Tripled Length of Deadly Ocean Heatwaves, Study Finds
The climate crisis has tripled the length of ocean heatwaves, a study has found, supercharging deadly storms and destroying critical ecosystems such as kelp forests and coral reefs. From a report: Half of the marine heatwaves since 2000 would not have happened without global heating, which is caused by burning fossil fuels. The heatwaves have not only become more frequent but also more intense: 1C warmer on average, but much hotter in some places, the scientists said.
The research is the first comprehensive assessment of the impact of the climate crisis on heatwaves in the world's oceans, and it reveals profound changes. Hotter oceans also soak up fewer of the carbon dioxide emissions that are driving temperatures up. "Here in the Mediterranean, we have some marine heatwaves that are 5C hotter," said Dr Marta Marcos at the Mediterranean Institute for Advanced Studies in Mallorca, Spain, who led the study. "It's horrible when you go swimming. It looks like soup."
As well as devastating underwater ecosystems such as sea grass meadows, Marcos said: "Warmer oceans provide more energy to the strong storms that affect people at the coast and inland."
Apple To Analyze User Data on Devices To Bolster AI Technology
Apple will begin analyzing data on customers' devices in a bid to improve its AI platform, a move designed to safeguard user information while still helping it catch up with AI rivals. From a report: Today, Apple typically trains AI models using synthetic data -- information that's meant to mimic real-world inputs without any personal details. But that synthetic information isn't always representative of actual customer data, making it harder for its AI systems to work properly.
The new approach will address that problem while ensuring that user data remains on customers' devices and isn't directly used to train AI models. The idea is to help Apple catch up with competitors such as OpenAI and Alphabet, which have fewer privacy restrictions. The technology works like this: It takes the synthetic data that Apple has created and compares it to a recent sample of user emails within the iPhone, iPad and Mac email app. By using actual emails to check the fake inputs, Apple can then determine which items within its synthetic dataset are most in line with real-world messages.
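To make that selection step concrete, here is a minimal, hypothetical sketch: each device embeds its own recent emails and the synthetic candidates, then reports only which synthetic message is the closest match. The embed() helper and the simple voting scheme are illustrative assumptions, not Apple's implementation, which would layer further privacy protections (such as noisy, aggregated reporting) on top.

```python
# Hypothetical sketch of picking synthetic samples that best match on-device data.
# embed() is a toy stand-in for a real sentence-embedding model; nothing here is Apple's code.
from collections import Counter

def embed(text: str) -> list[float]:
    # Toy embedding: hash characters into a small fixed-size vector, then normalize.
    vec = [0.0] * 16
    for i, ch in enumerate(text.lower()):
        vec[i % 16] += ord(ch)
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def closest_synthetic(device_emails: list[str], synthetic: list[str]) -> int:
    """Return the index of the synthetic message most similar to this device's emails."""
    def cosine(a: list[float], b: list[float]) -> float:
        return sum(x * y for x, y in zip(a, b))
    synth_vecs = [embed(s) for s in synthetic]
    votes: Counter[int] = Counter()
    for email in device_emails:
        e = embed(email)
        best = max(range(len(synthetic)), key=lambda i: cosine(e, synth_vecs[i]))
        votes[best] += 1
    return votes.most_common(1)[0][0]

# Only the winning index would leave the device; a server aggregating these votes
# learns which synthetic messages resemble real mail without ever seeing the emails.
if __name__ == "__main__":
    synthetic = ["Meeting moved to 3pm tomorrow", "Your package has shipped", "Lunch on Friday?"]
    emails = ["Can we move the meeting to Thursday?", "Reminder: project sync at 3pm"]
    print(closest_synthetic(emails, synthetic))
```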
Samsung Pauses One UI 7 Rollout Worldwide
Samsung has paused the global rollout of its One UI 7 update after a serious bug was reported that prevented some Galaxy S24 owners from unlocking their phones. The Verge reports: While the complaints seem to have specifically come from South Korean owners of Galaxy S24 series handsets, Samsung has played it safe and paused the rollout across all models worldwide. Some users will have already downloaded the update to One UI 7, but using the app CheckFirm we've confirmed that the update is no longer listed on Samsung's servers as the latest firmware version across several Galaxy devices, with older patches appearing instead. Samsung hasn't confirmed the pause in the rollout, nor has it shared plans to issue a fix for users who have already downloaded the One UI 7 update. We've reached out to the company for comment.
Risks To Children Playing Roblox 'Deeply Disturbing,' Say Researchers
A new investigation reveals that children as young as five can easily access inappropriate content and interact unsupervised with adults on Roblox, despite the platform's child-friendly image and recent safety updates. The Guardian reports: Describing itself as "the ultimate virtual universe," Roblox features millions of games and interactive environments, known collectively as "experiences." Some of the content is developed by Roblox, but much of it is user-generated. In 2024, the platform had more than 85 million daily active users, an estimated 40% of whom are under 13. While the company said it "deeply sympathized" with parents whose children came to harm on the platform, it said "tens of millions of people have a positive, enriching and safe experience on Roblox every day."
However, in an investigation shared with the Guardian, the digital-behavior experts Revealing Reality discovered "something deeply disturbing ... a troubling disconnect between Roblox's child-friendly appearance and the reality of what children experience on the platform." [...] Despite new tools launched last week aimed at giving parents more control over their children's accounts, the researchers concluded: "Safety controls that exist are limited in their effectiveness and there are still significant risks for children on the platform."
Intel To Sell Majority Stake In Altera For $4.46 Billion To Fund Revival Effort
Intel will sell a 51% stake in its Altera programmable chip unit to private equity firm Silver Lake for $4.46 billion, aiming to cut costs, raise cash, and streamline the company's focus as it shifts toward becoming a contract chip manufacturer. CNBC reports: The deal, announced on Monday, values Altera at $8.75 billion, a sharp decline from the $17 billion Intel paid in 2015. [...] Since last year, Intel has taken steps to spin Altera out as a separate unit and said it planned to sell a portion of its stake. "Today's announcement reflects our commitment to sharpening our focus, lowering our expense structure and strengthening our balance sheet," [CEO Lip-Bu Tan], who took the helm after former top boss Pat Gelsinger's ouster, said.
Altera makes programmable chips that can be used for various purposes, from telecom equipment to military applications. Reuters first reported in November that Silver Lake was among potential suitors competing for a minority stake in Altera. The deal is expected to close in the second half of 2025, after which Intel expects to deconsolidate Altera's financial results from Intel's financial statements, the company said.
UK Laws Are Not 'Fit For Social Media Age'
An anonymous reader quotes a report from the New York Times: British laws restricting what the police can say about criminal cases are "not fit for the social media age," a government committee said in a report released Monday in Britain that highlighted how unchecked misinformation stoked riots last summer. Violent disorder, fueled by the far right, affected several towns and cities for days after a teenager killed three girls on July 29 at a Taylor Swift-themed dance class in Southport, England. In the hours after the stabbings, false claims that the attacker was an undocumented Muslim immigrant spread rapidly online. In a report looking into the riots, a parliamentary committee said a lack of information from the authorities after the attack "created a vacuum where misinformation was able to grow." The report blamed decades-old British laws, aimed at preventing jury bias, that stopped the police from correcting false claims. By the time the police announced the suspect was British-born, those false claims had reached millions.
The Home Affairs Committee, which brings together lawmakers from across the political spectrum, published its report after questioning police chiefs, government officials and emergency workers over four months of hearings. Axel Rudakubana, who was sentenced to life in prison for the attack, was born and raised in Britain by a Christian family from Rwanda. A judge later found there was no evidence he was driven by a single political or religious ideology, but that he was obsessed with violence. [...] The committee's report acknowledged that it was impossible to determine "whether the disorder could have been prevented had more information been published." But it concluded that the lack of information after the stabbing "created a vacuum where misinformation was able to grow, further undermining public confidence," and that the law on contempt was not "fit for the social media age."
Hacked Crosswalks In Bay Area Play Deepfake-Style Messages From Tech Billionaires
Several crosswalk buttons in Palo Alto and nearby cities were hacked over the weekend to play deepfake-style satirical audio clips mimicking Elon Musk and Mark Zuckerberg. Authorities have disabled the altered systems, but the identity of the prankster remains unknown. SFGATE reports: Videos of the altered crosswalks began circulating on social media throughout Saturday and Sunday. [...] A city employee was the first to report an issue with one of the signals at University Avenue and High Street in downtown Palo Alto, Horrigan-Taylor told SFGATE via email. Officials later discovered that as many as 12 intersections in downtown Palo Alto had been affected.
"The impact is isolated," Horrigan-Taylor said. "Signal operations are otherwise unaffected, and motorists are reminded to always exercise caution around pedestrians." Officials told the outlet they've removed any devices that were tampered with and the compromised voice-over systems have since been disabled, with footage obtained by SFGATE showing several were covered in caution tape, blinking constantly and unpressable.
Meta Starts Using Data From EU Users To Train Its AI Models
Meta said the company plans to start using data collected from its users in the European Union to train its AI systems. Engadget reports: Starting this week, the tech giant will begin notifying Europeans of the change through email and its family of apps, with the message set to include an explanation of the kind of data it plans to use as part of the training. Additionally, the notification will link out to a form users can complete to opt out of the process. "We have made this objection form easy to find, read, and use, and we'll honor all objection forms we have already received, as well as newly submitted ones," says Meta.
The company notes it will only use data it collects from public posts and Meta AI interactions for training purposes. It won't use private messages in its training sets, nor any interactions, public or otherwise, made by users under the age of 18. As for why the company wants to start using EU data now, it claims the information will allow it to fine-tune its future models to better serve Europeans. "We believe we have a responsibility to build AI that's not just available to Europeans, but is actually built for them. That's why it's so important for our generative AI models to be trained on a variety of data so they can understand the incredible and diverse nuances and complexities that make up European communities," Meta states.
"That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products. This is particularly important as AI models become more advanced with multi-modal functionality, which spans text, voice, video, and imagery."
NATO Inks Deal With Palantir For Maven AI System
An anonymous reader quotes a report from DefenseScoop: NATO announced Monday that it has awarded a contract to Palantir to adopt its Maven Smart System for artificial intelligence-enabled battlefield operations. Through the contract, which was finalized March 25, the NATO Communications and Information Agency (NCIA) plans to use a version of the AI system -- Maven Smart System NATO -- to support the transatlantic military organization's Allied Command Operations strategic command. NATO plans to use the system to provide "a common data-enabled warfighting capability to the Alliance, through a wide range of AI applications -- from large language models (LLMs) to generative and machine learning," it said in a release, ultimately enhancing "intelligence fusion and targeting, battlespace awareness and planning, and accelerated decision-making." [...] NATO's Allied Command Operations will begin using Maven within the next 30 days, the organization said Monday, adding that it hopes that using it will accelerate further adoption of emerging AI capabilities. Palantir said the contract "was one of the most expeditious in [its] history, taking only six months from outlining the requirement to acquiring the system."
VMware Revives Its Free ESXi Hypervisor
VMware has resumed offering a free hypervisor. News of the offering emerged in a throwaway line in the Release Notes for version 8.0 Update 3e of the Broadcom business unit's ESXi hypervisor. From a report: Just below the "What's New" section of that document is the statement: "Broadcom makes available the VMware vSphere Hypervisor version 8, an entry-level hypervisor. You can download it free of charge from the Broadcom Support portal."
VMware offered a free version of ESXi for years, and it was beloved by home lab operators and vAdmins who needed something to tinker with. But in February 2024, VMware discontinued it on the grounds that it was dropping perpetual licenses and moving to subscriptions.