Governance, Legislation, Tech & Democracy
UN urges global rules for AI to prevent inequality. Only 15% of countries have AI strategies, raising concern about unchecked innovation widening global gaps, the UN tech chief said
(DigWatch – 28 July 2025) According to Doreen Bogdan-Martin, head of the UN’s International Telecommunication Union, the world must urgently adopt a unified approach to AI regulation. She warned that fragmented national strategies could deepen global inequalities and risk leaving billions excluded from the AI revolution. Bogdan-Martin stressed that only a global framework can ensure AI benefits all of humanity instead of worsening digital divides. – https://dig.watch/updates/un-urges-global-rules-for-ai-to-prevent-inequality
What Comes Next in AI Regulation? While the administration’s AI Action Plan received a surprisingly positive reception, its ambitious scope may make implementation difficult
(Kevin Frazier – Lawfare – 28 July 2025) A few days removed from the release of the AI Action Plan, it’s now possible to take a slightly more nuanced perspective on what many observers have heralded as “not bad.” Americans for Responsible Innovation President Brad Carson, for example, regarded the plan as “cautiously promising.” Michael Horowitz of the Council on Foreign Relations characterized it as aligned with “an ongoing bipartisan approach to the U.S. leadership in AI.” The Atlantic Council labeled it a “deliberative and thorough plan.” Of course, some took a less favorable view—the New York Times promptly ran a summary under the headline, “Trump Plans to Give A.I. Developers a Free Hand.” Still, a scroll through X, Bluesky, and LinkedIn in the hours following the publication of the long-awaited document returned a fairly uniform, positive assessment. – https://www.lawfaremedia.org/article/what-comes-next-in-ai-regulation
Trump pushes for ‘anti-woke’ AI in US government contracts. Directive demands transparency in AI model political content
(DigWatch – 26 July 2025) Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump. The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of AI systems used in government services. It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models. – https://dig.watch/updates/trump-pushes-for-anti-woke-ai-in-us-government-contracts
Experts urge broader values in AI development. AI development needs ethics, not just efficiency, say Stanford and Dragonfly leaders
(DigWatch – 26 July 2025) Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google, and Alibaba—alongside emerging firms such as Anthropic and Mistral—are racing to monetise AI and secure long-term growth in the technology-driven economy. But during the Fortune Brainstorm AI conference in Singapore this week, experts stressed the importance of human values in shaping AI’s future. Anthea Roberts, founder of Dragonfly Thinking, argued that AI must be built not just to think faster or cheaper, but also to think better. She highlighted the risk of narrow thinking—national, disciplinary or algorithmic—and called for diverse, collaborative thinking to counter it. Roberts sees potential in human-AI collaboration, which can help policymakers explore different perspectives, boosting the chances of sound outcomes. – https://dig.watch/updates/experts-urge-broader-values-in-ai-development
Mistral’s new “environmental audit” shows how much AI is hurting the planet. Individual prompts don’t cost much, but billions together can have aggregate impact
(Ars Technica – 25 July 2025) Despite concerns over the environmental impacts of AI models, it’s surprisingly hard to find precise, reliable data on the CO2 emissions and water use for many major large language models. French model-maker Mistral is seeking to fix that this week, releasing details from what it calls a first-of-its-kind environmental audit “to quantify the environmental impacts of our LLMs.” The results, which are broadly in line with estimates from previous scholarly work, suggest the environmental harm of any single AI query is relatively small compared to many other common Internet tasks. But with billions of AI prompts taxing GPUs every year, even those small individual impacts can lead to significant environmental effects in aggregate. – https://arstechnica.com/ai/2025/07/mistrals-new-environmental-audit-shows-how-much-ai-is-hurting-the-planet/
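To make the “small per query, large in aggregate” point concrete, here is a back-of-the-envelope aggregation in Python. The per-prompt footprint and annual query volume below are illustrative assumptions for the sketch, not figures taken from Mistral’s audit.

```python
# Back-of-the-envelope aggregation: tiny per-query footprints times billions
# of queries. The per-query figures are illustrative placeholders, not
# Mistral's published numbers.

GRAMS_CO2_PER_QUERY = 1.0      # hypothetical: ~1 g CO2e per prompt
LITERS_WATER_PER_QUERY = 0.05  # hypothetical: ~50 mL water per prompt
QUERIES_PER_YEAR = 10e9        # hypothetical: 10 billion prompts per year

annual_co2_tonnes = GRAMS_CO2_PER_QUERY * QUERIES_PER_YEAR / 1e6
annual_water_megaliters = LITERS_WATER_PER_QUERY * QUERIES_PER_YEAR / 1e6

print(f"Annual CO2:   ~{annual_co2_tonnes:,.0f} tonnes")
print(f"Annual water: ~{annual_water_megaliters:,.0f} megaliters")
```

Even with these modest assumed per-prompt figures, the totals land in the tens of thousands of tonnes of CO2 and hundreds of megaliters of water per year, which is the aggregate effect the audit is pointing at.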
Online Safety Depends on a Growing Trust and Safety Vendor Ecosystem
(Toby Shulruff, Lucas Wright, Jeff Lazarus – Tech Policy Press – 25 July 2025) This year’s TrustCon, the annual gathering of trust and safety professionals, wrapped this week in San Francisco. One component of the event is an exhibition that features vendors of services such as AI solutions for content moderation, forensic tools, and other online safety products and services. Since the 2000s, online platforms have come to rely more and more on third party vendors to do trust and safety work. This “third party ecosystem” does much more than outsourced content moderation—it now plays a vital role in core trust and safety work. You will find third parties at all levels of the trust and safety “stack”—from content and conduct moderation to policy-making/community standards development, tooling for content moderation, and more recently, compliance with new regulations. While this slow, necessity-driven evolution is better known by trust and safety professionals working inside platforms, there has been little attention outside the field to how platforms have become so reliant on this third party ecosystem. This dependence has crucial implications for the trust and safety field as a whole and for the end users who expect platforms to prevent, detect, and mitigate abuse, especially as regulatory compliance becomes a priority. To reflect on how we got here, where things stand today, and the implications of vendorization, we draw on findings from in-depth interviews conducted by the Trust and Safety Foundation as a part of the History of T&S Project (Toby), research on the T&S vendor ecosystem (Lucas), and direct experience in the T&S field (Jeff). – https://www.techpolicy.press/online-safety-depends-on-a-growing-trust-and-safety-vendor-ecosystem/
How X’s Community Notes Leave South Asians Disproportionately Exposed to Misinformation
(Kayla Bassett – Tech Policy Press – 25 July 2025) Community Notes is X’s flagship crowdsourced fact-checking feature, designed to provide timely, user-generated context on misleading posts. In practice, however, the system faces global challenges, including delays, inaccuracies, and inconsistent coverage. These issues are more pronounced in higher-risk languages such as Hindi and Urdu, where accurate notes often stall while misinformation remains visible. A recent study by the Center for the Study of Organized Hate (CSOH), based on a complete archive of 1.85 million public notes, shows that posts in South Asian languages, including Hindi, Urdu, Bengali, Tamil, and Nepali, account for just 1,608 entries, roughly 0.094 percent of the archive, even though the region represents approximately a quarter of the world’s population and five percent of X’s monthly users. Even more striking, only 37 of those notes have ever appeared on the public timeline. Community Notes relies on two key mechanisms: a “helpfulness” up-vote and a bridging test, which requires agreement from contributors who typically disagree with one another. The idea is elegant in theory: bring together people with different viewpoints to agree on what’s accurate, avoiding echo chambers. But the system struggles in practice when there isn’t enough contributor activity in a given language to meet its consensus thresholds. – https://www.techpolicy.press/how-xs-community-notes-leave-south-asians-disproportionately-exposed-to-misinformation/
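To illustrate why sparse contributor activity stalls notes, here is a toy sketch of a bridging-style consensus rule in Python. It is a simplified stand-in, not X’s actual Community Notes scoring algorithm (which scores notes via matrix factorization over the full rating history); the cluster labels and thresholds are assumptions for illustration only.

```python
# Toy bridging rule: a note surfaces only when contributors from at least two
# different viewpoint clusters each give it enough "helpful" ratings.
# This is an illustration, not X's production scoring algorithm.

from collections import Counter

def note_surfaces(ratings, min_per_cluster=3):
    """ratings: list of (viewpoint_cluster, rated_helpful) tuples."""
    helpful_by_cluster = Counter(
        cluster for cluster, helpful in ratings if helpful
    )
    # Require helpful ratings from at least two distinct clusters,
    # each meeting a minimum count.
    qualifying = [c for c, n in helpful_by_cluster.items() if n >= min_per_cluster]
    return len(qualifying) >= 2

# In a low-activity language, even an accurate note may never collect enough
# cross-cluster ratings to clear the threshold:
sparse_ratings = [("A", True), ("A", True), ("B", True)]
print(note_surfaces(sparse_ratings))  # False -> the note stays hidden
```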
When Algorithms Learn to Discriminate: The Hidden Crisis of Emergent Ableism
(Sergey Kornilov – Tech Policy Press – 25 July 2025) The Equal Employment Opportunity Commission’s $365,000 settlement with iTutorGroup in 2023 was straightforward: the résumé-screening software automatically rejected women over 55 and men over 60. This was a clear case of age discrimination, with hard-coded rules and an obvious fix. But a 2024 American Civil Liberties Union (ACLU) complaint to the Federal Trade Commission reveals something far more troubling. In the legal complaint, the ACLU alleged that products from Aon Consulting, Inc., a major hiring technology vendor, “assess very general personality traits such as positivity, emotional awareness, liveliness, ambition, and drive that are not clearly job related or necessary for a specific job and can unfairly screen out people based on disabilities.” The ACLU’s client in the case was a biracial job applicant with autism who faced discrimination through Aon’s widely used hiring assessments. But unlike iTutorGroup’s code-level exclusion, the discrimination alleged in Aon’s system was far more opaque: its bias was buried not in an explicit rule, but in statistical patterns. Aon’s ADEPT-15 personality test was never programmed to detect disability. It didn’t need to be. – https://www.techpolicy.press/when-algorithms-learn-to-discriminate-the-hidden-crisis-of-emergent-ableism/
The Coming Wave of Child Influencer Regulation
(Marissa Edmund – Tech Policy Press – 25 July 2025) What protections do kids have when their childhood becomes content? With the rapid rise of child and family influencers, this question has never been more urgent. The US enacted its first child labor laws in the early 1900s, establishing minimum working ages, restricting the jobs minors could hold, and creating financial protections for child actors through what became known as the Coogan Law. Now, the definition of work has evolved, and an unprecedented number of minors have not only carved out careers as influencers, but also as the breadwinners of their families. In response, states are increasingly stepping up to protect kids’ financial interests and empower child content creators to control their digital footprints. – https://www.techpolicy.press/the-coming-wave-of-child-influencer-regulation/
Assessing the Trump Administration’s AI Action Plan
(Sam Winter-Levy – Just Security – 25 July 2025) On July 23, the Trump administration released its much anticipated AI Action Plan, a blueprint for achieving what it describes as “unquestioned and unchallenged global technological dominance” in artificial intelligence. The Plan is the administration’s most comprehensive statement to date for how it intends to win the AI race against China, spur innovation at home, and promote U.S. technologies overseas. Despite the administration’s sharp rhetorical break from its predecessor, the Plan reflects notable continuity with several key Biden-era priorities. Both administrations have emphasized scaling domestic AI companies, increasing government adoption of AI tools, and reforming permitting and regulatory processes to meet the extraordinary compute and energy demands required to train and deploy advanced models. – https://www.justsecurity.org/117765/assessing-trump-ai-action-plan/
Experts React: Unpacking the Trump Administration’s Plan to Win the AI Race
(Navin Girishankar, Kirti Gupta, Matt Pearl, Philip Luck, James Andrew Lewis, Leslie Abrahams, Sujai Shivakumar, Erin L. Murphy, Noam Unger, and Madeleine McLean – Center for Strategic & International Studies – 25 July 2025) On July 23, 2025, President Donald Trump signed three executive orders (EOs) on artificial intelligence (AI). These orders came shortly after the release of the administration’s AI Action Plan and each focuses on one of three AI policy priorities: (1) building AI infrastructure, (2) diffusing U.S. AI technology globally, and (3) removing ideological bias from AI models. The EOs and the AI Action Plan, which outlines over 100 recommendations for achieving U.S. global dominance of AI, mark the administration’s most detailed articulation of its AI policy agenda to date. In this Experts React, leading experts from CSIS share their analysis of the content and implications of these initiatives. – https://www.csis.org/analysis/experts-react-unpacking-trump-administrations-plan-win-ai-race
Geostrategies
‘Impossible hill to climb’: US clouds crush European competition on their home turf. Local providers squeezed out despite market growth, leaving sovereignty hopes in question
(The Register – 28 July 2025) European cloud infrastructure companies make up just 15 percent of their own market, and the huge investment the US giants can wield makes their dominance “an impossible hill to climb” for any would-be challengers. Details shared by Synergy Research on regional markets show that Euro cloud operators continue to grow, but none comes remotely close to competing with the big American rivals for leadership of European markets. According to Synergy, local companies accounted for nearly a third (29 percent) of cloud infrastructure revenues in 2017, but by 2022 their share had dropped to just 15 percent and has held fairly steady ever since. – https://www.theregister.com/2025/07/28/euro_cloud_vs_us/
China Proposes Global AI Cooperation Body, Positioning Shanghai as a Potential Hub
(AI Insider – 28 July 2025) China has proposed the creation of a new international organization to coordinate global artificial intelligence development and regulation, aiming to offer an alternative vision to U.S.-led initiatives in the rapidly advancing sector. Premier Li Qiang outlined the proposal at the World Artificial Intelligence Conference in Shanghai, emphasizing inclusive access to AI technology and shared governance. – https://theaiinsider.tech/2025/07/28/china-proposes-global-ai-cooperation-body-positioning-shanghai-as-a-potential-hub/
Smart Device Empire: Beijing’s Expansion Through Everyday Digital Infrastructure
(Matthew Johnson – The Jamestown Foundation – 25 July 2025) The PRC is exporting an integrated system of smart devices, data infrastructure, and governance standards. Through industrial policy, state-backed overproduction, and strategic data asymmetry, Beijing is building a global IoT architecture designed to embed PRC standards, influence, and governance into the connected environments of other countries. By dominating core components like cellular IoT modules and steering global standards through initiatives like China Standards 2035, Beijing is creating long-term supply chain dependencies and rewriting the rules of digital interoperability. Devices manufactured by PRC firms often carry embedded risks: unpatched vulnerabilities, mandated government access under China’s Data Security Law, and use in cyber operations like Volt Typhoon and LapDogs. Expansion into emerging markets is fueled by Digital Silk Road diplomacy, subsidized financing, and turnkey infrastructure deals—seen in Huawei’s smart city platforms and Haier’s bundled appliance systems deployed across Asia, Africa, and Latin America. Looking ahead, the global spread of China’s IoT platforms signals a deeper push to shape the foundations of digital infrastructure—where influence over connected devices gradually extends to norms, data flows, and governance models. – https://jamestown.org/program/smart-device-empire-beijings-expansion-through-everyday-digital-infrastructure/
Security
Majority of 1.4M customers caught in Allianz Life data heist. No word on who’s behind it, but attack has hallmarks of the usual suspects
(The Register – 28 July 2025) Financial services biz Allianz says the majority of customers of one of its North American subsidiaries had their data stolen in a cyberattack. Lawyers acting on behalf of US-based Allianz Life filed a breach notification with Maine’s attorney general on Saturday, saying the intrusion began on July 16 and was detected a day later. Official filings did not state how many people were affected, or what data was compromised, although in a statement to The Register, Allianz said the majority of its 1.4 million customers were impacted. “The threat actor was able to obtain personally identifiable data related to the majority of Allianz Life’s customers, financial professionals, and select Allianz Life employees, using a social engineering technique,” a spokesperson said. – https://www.theregister.com/2025/07/28/allianz_life_data_breach/
Mapping a decade’s worth of hybrid threats targeting South Korea
(Fitriani, Shelly Shih and Alice Wai – The Strategist – 28 July 2025) While Australia is coming to terms with the realities of hybrid threats, South Korea has long been on the front line. Reflecting its formal state of war with North Korea, South Korea has endured decades of grey-zone provocations, including infiltration attempts, cyberattacks and disinformation campaigns. However, the hybrid threat landscape confronting South Korea is evolving in both intensity and complexity—just as it is for the broader Indo-Pacific. For South Korea, it now extends beyond North Korea’s traditional campaigns to encompass state actors such as China, emerging technologies such as AI, and threats including intellectual property theft. – https://www.aspistrategist.org.au/mapping-a-decades-worth-of-hybrid-threats-targeting-south-korea/
Crypto hacks soar in 2025 as security gaps widen. The platforms lost more than $3.1 billion in the first half of 2025, with AI-powered hacks and phishing scams leading the surge
(DigWatch – 26 July 2025) According to Hacken’s latest research, the crypto sector has already recorded more than $3.1 billion in losses during the first half of 2025. That figure already exceeds the total recorded for all of 2024, with losses driven mainly by access control flaws, phishing, and AI-driven exploits. Access control remains the most significant weakness, responsible for almost 60% of recorded losses. The most severe breach was the Bybit attack, where North Korean hackers exploited a wallet signer vulnerability to steal $1.46 billion. Other incidents include UPCX’s $70 million loss, a manipulated price oracle exploit on KiloEx, and insider fraud involving the Roar staking contract. – https://dig.watch/updates/crypto-hacks-soar-in-2025-as-security-gaps-widen
Ubiquitous Technical Surveillance Demands Broader Data Protections
(Justin Sherman – Lawfare – 25 July 2025) On June 26, the Department of Justice’s Office of the Inspector General (OIG) published a partially redacted report detailing the FBI’s efforts to mitigate the effects of a seemingly esoteric, yet pressing, threat facing U.S. government personnel: ubiquitous technical surveillance (UTS). The takeaways of the report were not optimistic. The media quickly picked up the juiciest elements of the document. The Guardian highlighted a story from the report in which a hacker working for the Sinaloa drug cartel obtained the mobile phone number of an FBI assistant legal attaché at the U.S. Embassy in Mexico City, gained access to ingoing and outgoing calls as well as location data, and used Mexico City’s camera system to surveil the official and monitor the people with whom they met. According to the OIG report, the cartel “used that information to intimidate and, in some instances, kill potential sources or cooperating witnesses.” – https://www.lawfaremedia.org/article/ubiquitous-technical-surveillance-demands-broader-data-protections
Defence, Intelligence, Warfare
Russia turns Soviet-era tanks into robot ‘platoon’ guided by a single command vehicle
(Interesting Engineering – 28 July 2025) Russia has made a significant advancement in creating autonomous ground combat systems by publicly revealing the “Shturm” robotic assault concept during recent field trials. Developed by Uralvagonzavod, the major Russian machine-building company and the world’s largest manufacturer of main battle tanks, under orders from the Russian Ministry of Defense, the Shturm system was observed for the first time in its full configuration in video footage shared by Russian military analyst Andrei_bt. – https://interestingengineering.com/military/russia-soviet-tanks-turned-robotic-platoon
Report: AI, Deep Techs Are Rewriting the Rules of Military Deception
(AI Insider – 27 July 2025) A new report from New America finds that artificial intelligence is transforming military deception by enabling more precise misinformation while also introducing new vulnerabilities. AI systems can be misled by falsified data and may generate false conclusions on their own, making them both tools and targets of deception. The study warns that quantum computing could further disrupt deception strategies by breaking encryption or accelerating data manipulation. – https://theaiinsider.tech/2025/07/27/report-ai-deep-techs-are-rewriting-the-rules-of-military-deception/
Advanced strike missile that can deliver pinpoint accuracy at 300-mile range fired in test
(Interesting Engineering – 27 July 2025) An advanced strike missile delivered pinpoint accuracy during a long-range test. Conducted by the Australian Army, the successful test involved live firing of a Precision Strike Missile (PrSM). The missile, developed by Lockheed Martin, was fired from a High Mobility Artillery Rocket System (HIMARS). The next-generation missile can strike targets at ranges beyond 300 miles. The live-fire test also marks the first operational PrSM firing by a military force outside the United States. The test, conducted at Mount Bundey Training Area in the Northern Territory, is expected to bolster Australia’s attack capability. – https://interestingengineering.com/military/advanced-strike-missile-delivers-pinpoint-accuracy
Frontiers
Plexision Announces Funding to Bring Artificial Intelligence to Transplant Outcome Care
(AI Insider – 28 July 2025) Plexision has received a $365,000 investment from the Richard King Mellon Foundation to enhance its AI- and ML-powered blood tests for predicting complex transplant outcomes. The company’s platform integrates immune cell function with machine learning to rank risks of rejection and infection, enabling faster, more precise clinical decisions within 6–24 hours. Validated through multi-center studies, Plexision’s tests like PlexABMR™ and PlexEBV™ have shown strong predictive accuracy and will be showcased at the 2025 World Transplant Congress. – https://theaiinsider.tech/2025/07/28/plexision-announces-funding-to-bring-artificial-intelligence-to-transplant-outcome-care/
Revival Healthcare Capital Announces $485 Million Joint Investment With Olympus to Advance Robotics Through Co-Founded New Company Swan EndoSurgical
(AI Insider – 28 July 2025) Revival Healthcare Capital and Olympus Corporation have partnered to launch Swan EndoSurgical, Inc., a new company focused on developing a next-generation endoluminal robotics platform targeting gastrointestinal (GI) treatments. The partnership includes a joint investment of up to $458 million tied to milestone achievements, with Revival holding a majority equity stake and Olympus retaining a future buyout option; the investment aims to accelerate development and market entry of purpose-built robotic solutions. Swan leverages Revival’s build-to-buy innovation model and Olympus’ expertise in visualization and endoscopy, aiming to deliver earlier, safer, and more effective GI lesion and tumor treatments than current options, while offering Olympus a complementary path to expand its surgical robotics strategy. – https://theaiinsider.tech/2025/07/28/revival-healthcare-capital-announces-485-million-joint-investment-with-olympus-to-advance-robotics-through-co-founded-new-company-swan-endosurgical/
Samsung lands $16.5 billion Tesla chip deal to power next-gen AI, Elon Musk confirms
(Interesting Engineering – 28 July 2025) Samsung Electronics has confirmed a massive $16.5 billion semiconductor supply agreement with Tesla. The news became public through a regulatory filing by Samsung and was later verified by Elon Musk on his social media platform X. The agreement officially started on July 26, 2024, and will run until December 31, 2033, according to Samsung’s filing. While the document initially did not name Tesla as the buyer, Musk later confirmed the company’s involvement. He also revealed key details about the project, including a major role for Samsung’s semiconductor facility in Texas. – https://interestingengineering.com/culture/samsung-165b-deal-ai-chip-tesla
Starseer Secures $2M Seed Round Led by Gula Tech Adventures to Revolutionize AI Security and Transparency
(AI Insider – 28 July 2025) Starseer has raised $2 million in seed funding led by Gula Tech Adventures to scale its AI exposure management platform focused on transparency, security, and regulatory compliance. Its model-agnostic tools help enterprises and government agencies detect and defend against AI risks like prompt injections and data poisoning, while meeting standards like the EU AI Act. The funding will support product development, team growth, and broader adoption across high-stakes sectors such as finance, healthcare, and autonomous systems. – https://theaiinsider.tech/2025/07/28/starseer-secures-2m-seed-round-led-by-gula-tech-adventures-to-revolutionize-ai-security-and-transparency/
Reka Closes $110M in Funding to Accelerate Adoption of Its Multimodal AI Platforms
(AI Insider – 28 July 2025) Reka has secured a $110 million investment from backers including NVIDIA and Snowflake to accelerate its multimodal AI platform development and global adoption. The company’s flagship products include Reka Flash, Reka Vision, and Reka Research — AI tools capable of interpreting and reasoning across video, image, text, and audio with high efficiency and low compute costs. With clients like Shutterstock and Turing Video, Reka is expanding enterprise use of its AI tools while maintaining a strong focus on scalable, cost-effective model performance. – https://theaiinsider.tech/2025/07/28/reka-closes-110m-in-funding-to-accelerate-adoption-of-its-multimodal-ai-platforms/
Volca, an AI Startup Transforming Front Office Operations for Home Services, Announces $5.5M Seed Led by Pathlight Ventures
(AI Insider – 28 July 2025) Volca, an AI-powered marketing platform for home services businesses, has raised $5.5M in seed funding led by Pathlight Ventures with backing from notable investors including MetaProp, GTMFund, and executives from Stripe, Plaid, and Ramp. Its core product uses AI-driven SMS referral marketing to streamline operations, personalize customer engagement, and automate revenue generation, integrating with CRMs like ServiceTitan. Founded by Brendan Kazanjian and Brandon Rabovsky, Volca has already driven significant revenue for clients and plans to expand its AI tools and hiring in NYC. – https://theaiinsider.tech/2025/07/28/volca-an-ai-startup-transforming-front-office-operations-for-home-services-announces-5-5m-seed-led-by-pathlight-ventures/
Doosan Robotics Acquires U.S. Automation Firm ONExia for $25.9 Million
(AI Insider – 28 July 2025) Doosan Robotics has acquired an 89.59% controlling stake in Pennsylvania-based automation firm ONExia for $25.9 million, marking a major step in its strategy to lead in AI-powered robotics. ONExia, known for its system integration and collaborative robotics focused on end-of-line automation, brings deep expertise, industry-specific solutions, and a 30% annual sales growth track record. The acquisition aligns with Doosan’s broader shift from hardware to intelligent robotics platforms, including new investments in R&D, talent, and an AI-focused Innovation Center, as it aims to become a global leader in the emerging Physical AI sector. – https://theaiinsider.tech/2025/07/28/doosan-robotics-acquires-u-s-automation-firm-onexia-for-25-9-million/
Google AI makes breakthrough to crack Roman texts to unearth secrets of the past
(Interesting Engineering – 27 July 2025) Each year, archaeologists uncover approximately 1,500 Latin inscriptions—etched into stone, metal, or pottery—that provide rare insights into the everyday lives, beliefs, and customs of ancient Romans. Yet interpreting these texts is no easy task. Many inscriptions are incomplete, weathered, or broken, making them difficult to read and contextualize. To tackle this problem, a team of researchers has developed a generative neural network capable of analyzing patterns in fragmented Latin inscriptions and predicting missing sections. The AI model, named Aeneas after the Trojan hero of Roman mythology, was designed to understand the complex relationships between language, context, and historical usage. – https://interestingengineering.com/innovation/google-ai-to-crack-roman-texts
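For readers curious what the underlying restoration task looks like, the sketch below frames it as masked-token prediction, using a generic multilingual masked language model from the Hugging Face transformers library as a stand-in. It is not Aeneas itself, which is trained specifically on Latin epigraphy and also estimates dates and provenance, and the sample fragment is an invented illustration.

```python
# Illustration of the core restoration task: predict a missing word in a
# damaged inscription. Uses a generic multilingual masked language model as a
# stand-in for a purpose-built epigraphy model like Aeneas.

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

# A made-up damaged dedication with one lost word, marked with the mask token.
fragment = "Imperatori Caesari divi filio [MASK] pontifici maximo"

for candidate in fill(fragment, top_k=3):
    print(f"{candidate['token_str']:>15}  score={candidate['score']:.3f}")
```

A dedicated model like Aeneas goes well beyond this single-token sketch, reasoning over longer gaps and drawing on parallels across a large corpus of inscriptions, but the basic "predict the missing text from its context" framing is the same.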
Finland sets new quantum record with longest-lasting superconducting qubit
(Interesting Engineering – 27 July 2025) A team of researchers in Finland has set a new world record for how long a quantum bit, known as a qubit, can hold onto its information. They have pushed the coherence time of a superconducting transmon qubit to a full millisecond at best, with a median time of half a millisecond. That might sound brief, but in the world of quantum computing, it’s a massive improvement that could change the game. Longer coherence times mean qubits can run more operations and quantum computers can perform more calculations before errors start to appear. “A high-coherence qubit will benefit the research community and accelerate the global efforts on developing quantum sensors, quantum simulators, and quantum computers based on superconducting quantum technologies,” the study authors note. – https://interestingengineering.com/science/transmon-qubit-one-millisecond-coherence-time
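A rough way to see what a millisecond of coherence buys: divide the coherence time by a typical gate time to estimate how many operations fit before decoherence sets in. The gate time below is an assumed, typical figure for superconducting qubits used only for illustration; it is not a number from the Finnish study.

```python
# Rough estimate of how many gate operations fit within a coherence window:
# n_ops ~ T_coherence / t_gate. Gate time is an assumed typical value.

T_COHERENCE_MEDIAN = 0.5e-3   # 0.5 ms median coherence reported
T_COHERENCE_BEST = 1.0e-3     # ~1 ms best-case coherence reported
GATE_TIME = 25e-9             # assumed ~25 ns single-qubit gate

print(f"Median: ~{T_COHERENCE_MEDIAN / GATE_TIME:,.0f} gates within coherence")
print(f"Best:   ~{T_COHERENCE_BEST / GATE_TIME:,.0f} gates within coherence")
```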
Autonomous nuclear reactors’ plan gets US boost with Amazon’s tech for next-gen designs
(Interesting Engineering – 27 July 2025) Idaho National Laboratory (INL) and Amazon Web Services (AWS) have established a collaboration to use artificial intelligence to advance nuclear energy technology. The partnership will apply AWS’s cloud computing infrastructure and AI tools to INL’s work on next-generation nuclear reactors. The primary goal is to develop technologies that reduce the cost and time required to design, license, build, and operate nuclear facilities. The long-term objective is to enable safe and reliable autonomous operation of advanced reactors, accelerating their deployment. – https://interestingengineering.com/energy/autonomous-nuclear-reactors-plan-amazon-deal