Governance and Legislation
The ‘One Big Beautiful Bill’ And Its AI Moratorium: A Closer Look
(AI Insider – 30 June 2025) The “One Big Beautiful Bill Act,” passed by the House in May 2025, includes a controversial 10-year moratorium on state-level AI regulation, sparking bipartisan backlash and growing opposition from governors, attorneys general, and public interest groups. Supporters, including Google, OpenAI, and the U.S. Chamber of Commerce, argue that centralizing AI oversight fosters innovation and avoids a regulatory patchwork, but critics warn it undermines state autonomy, nullifies 149 state laws, and leaves consumers vulnerable to AI-related harms. With public opinion turning against the moratorium and key Republican lawmakers negotiating amendments, a Senate vote expected by early July may alter the scope or duration of the provision, shaping the future of U.S. AI governance. – https://theaiinsider.tech/2025/06/30/the-one-big-beautiful-bill-and-its-ai-moratorium-a-closer-look/
Nuclear Non-Proliferation Is the Wrong Framework for AI Governance
(AI Frontiers – 30 June 2025) In a recent interview, Demis Hassabis — co-founder and CEO of Google DeepMind, a leading AI lab — was asked if he worried about ending up like Robert Oppenheimer, the scientist who unleashed the atomic bomb and was later haunted by his creation. While Hassabis didn’t explicitly endorse the comparison, he responded by advocating for an international institution to govern AI, holding up the International Atomic Energy Agency (IAEA) as a guiding example. Hassabis isn’t alone in comparing AI and nuclear technology. Sam Altman and others at OpenAI have also argued that artificial intelligence is so impactful globally that it requires an international regulatory agency on the scale of the IAEA. Back in 2019, Bill Gates, for example, likened AI to nuclear technology, describing both as rare technologies “that are both promising and dangerous.” Researchers, too, have made similar comparisons, looking to the IAEA or Eisenhower’s Atoms for Peace initiative as potential models for AI regulation. No analogy is perfect, but especially as a general-purpose technology, AI differs so fundamentally from nuclear technology that basing AI policy around the nuclear analogy is conceptually flawed and risks inflating expectations about the international community’s ability to control model proliferation. It also places undue emphasis on AI use in weapons, rather than on its potential to drive economic prosperity and enhance national security. – https://aifrontiersmedia.substack.com/p/nuclear-non-proliferation-is-the
In the AI Race, Copyright Is the United States’s Greatest Hurdle
(Tim Hwang, Joshua Levine – Lawfare – 30 June 2025) It is no secret that dominance over cutting-edge technologies will play a major role in the geopolitical competition between the U.S. and China. Technological leadership will help define not just the economic health of each nation, but its military and soft power assets as well. Artificial intelligence (AI) has emerged as one pivotal area in this competition, with both nations working to accelerate their capabilities in the technology and secure the necessary inputs for further development. China’s national and provincial governments are taking steps to create the infrastructure and regulatory regime to empower AI development and diffusion. The United States is currently ahead in this race—its companies are making the most dramatic breakthroughs in the technology and are implementing AI at scale throughout the economy. But this lead is fragile. In the United States, leading AI labs are facing an existential threat: copyright lawsuits. In these suits, rights holders argue that training models on copyrighted material scraped from the web without their express consent violates copyright law. Because of the vast amount of data included in such training sets, the potential copyright penalties would bankrupt many AI developers. Though little discussed in debates over geopolitical competition, it may ultimately be domestic battles over copyright that determine whether the U.S. emerges as the definitive leader in the technological race with China, or falls behind. To remedy this issue and ensure the U.S. stays ahead, Congress – or ideally the courts – should take the bold and important step of affirming the legality of using publicly available data for training AI models in the United States. – https://www.lawfaremedia.org/article/in-the-ai-race–copyright-is-the-united-states-s-greatest-hurdle
Denmark proposes landmark law to protect citizens from deepfake misuse
(DigWatch – 30 June 2025) Denmark’s Ministry of Culture has introduced a draft law aimed at safeguarding citizens’ images and voices under national copyright legislation, Azernews reports. The move marks a significant step in addressing the misuse of deepfake technologies. The proposed bill prohibits using an individual’s likeness or voice without prior consent, enabling affected individuals to claim compensation. While satire and parody remain exempt, the legislation explicitly bans the unauthorised use of deepfakes in artistic performances. – https://dig.watch/updates/denmark-proposes-landmark-law-to-protect-citizens-from-deepfake-misuse
AI training with pirated books triggers massive legal risk
(DigWatch – 30 June 2025) A US court has ruled that AI company Anthropic engaged in copyright infringement by downloading millions of pirated books to train its language model, Claude. Although the court found that using copyrighted material for AI training could qualify as ‘fair use’ under US law when the content is transformed, it also held that acquiring the content illegally instead of licensing it lawfully constituted theft. Judge William Alsup described AI as one of the most transformative technologies of our time. Still, he stated that Anthropic obtained millions of digital books from pirate sites such as LibGen and Pirate Library Mirror. – https://dig.watch/updates/ai-training-with-pirated-books-triggers-massive-legal-risk
New NHS plan adds AI to protect patient safety
(DigWatch – 30 June 2025) The NHS is set to introduce a world-first AI system to detect patient safety risks early by analysing hospital data for warning signs of deaths, injuries, or abuse. Instead of waiting for patterns to emerge through traditional oversight, the AI will use near real-time data to trigger alerts and launch rapid inspections. Health Secretary Wes Streeting announced that a new maternity-focused AI tool will roll out across NHS trusts in November. It will monitor stillbirths, brain injuries and death rates, helping identify issues before they become scandals. – https://dig.watch/updates/new-nhs-plan-adds-ai-to-protect-patient-safety
Việt Nam launches $38.4m National Data Development Fund to fuel digital transformation
(Vietnam News – 30 June 2025) The Government has established the National Data Development Fund, with an initial capital of VNĐ1 trillion (US$38.4 million), to strengthen digital infrastructure and promote data governance. Under Decree No.160/2025/NĐ-CP, the fund operates as a non-budget state financial fund. As a non-profit entity, it is administered by the Ministry of Public Security and authorised to maintain its official seal and operate accounts at both the State Treasury and commercial banks legally operating within Việt Nam’s financial system. – https://vietnamnews.vn/economy/1720506/viet-nam-launches-38-4b-national-data-development-fund-to-fuel-digital-transformation.html
Geostrategies
French National Quantum Update: June 2025
(Quantum Insider – 30 June 2025) France’s quantum pioneers kicked off the summer by demonstrating significant progress in quantum technology across research, business, policy, and international cooperation. Paris Region continued its strategic push to become Europe’s next quantum hub, while French companies Pasqal and Alice & Bob gained national recognition through their inclusion in the Tech Next40/120 list. On the commercial front, Pasqal opened its first North American quantum factory and sold a 100-qubit processor to Canada’s Distriq, while Orange Business and Toshiba launched a quantum-safe network service in Paris. In research, the EuroQCS-France consortium began offering remote access to a 12-qubit photonic system from Quandela to European users. Meanwhile, France strengthened global ties with Singapore and welcomed over 1,000 participants to the France Quantum 2025 conference, underscoring its international ambitions in the field. – https://thequantuminsider.com/2025/06/30/french-national-quantum-update-june-2025/
EIF Invests €30 million in Quantum Technologies And Deep Physics with Quantonation II
(Quantum Insider – 30 June 2025) The European Investment Fund (EIF) is investing €30 million in Quantonation II to strengthen early-stage financing for quantum and deep physics startups, aiming to boost Europe’s role in the global quantum race. Quantonation II plans to build a €200 million fund targeting around 25 high-potential companies and five venture studios focused on quantum computing, sensing, and other deep-tech applications. The investment aligns with the EIF’s InvestEU strategy to close funding gaps in underserved sectors and promote European technological sovereignty in foundational science and emerging technologies. – https://thequantuminsider.com/2025/06/30/eif-invests-e30-million-in-quantum-technologies-and-deep-physics-with-quantonation-ii/
Export Controls Accelerate China’s Quantum Supply Chain
(Elias Huber – RUSI – 27 June 2025) Quantum technologies are expected to bring foundational capability advances to commercial and dual-use applications, including computing, sensing, and the transfer of information. This makes quantum technologies a global political priority, including in the UK and the EU. Under the Biden administration, US leadership in quantum and other ‘force-multiplying’ technologies, such as advanced semiconductors, became a key national security imperative. The US has therefore intensified its export controls on quantum technologies aimed at China over the past year. These measures have disrupted China’s quantum hardware and talent development, but they have also accelerated China’s domestic quantum supply chain. By forcing leading laboratories and quantum start-ups to rapidly iterate with domestic suppliers in replacing foreign dependencies, export controls brought the demand side on board with Chinese localisation efforts. On the supply side, the basis for a self-reliant Chinese quantum supply chain was laid years ago by strong government support and indications of US quantum controls as early as 2018. Now, with a parallel quantum ecosystem rapidly emerging, it is only a matter of time until exports from Chinese suppliers arrive in Europe. This presents both risks and opportunities. – https://www.rusi.org/explore-our-research/publications/commentary/export-controls-accelerate-chinas-quantum-supply-chain
Security
EU Presses for Quantum-Safe Encryption by 2030 as Risks Grow
(Quantum Insider – 30 June 2025) The European Union has called on member states to transition to quantum-safe encryption by 2030, citing urgent cybersecurity risks posed by future quantum computers. The EU plan promotes Post-Quantum Cryptography (PQC) for most sectors and explores Quantum Key Distribution (QKD) for high-security applications, outlining a phased roadmap that begins in 2026 with risk assessments and awareness campaigns. Technical and logistical challenges—including limited QKD range, integration issues, and lack of standards—underscore the need for coordinated action among governments, industry, and researchers. – https://thequantuminsider.com/2025/06/30/eu-presses-for-quantum-safe-encryption-by-2030-as-risks-grow/
AI and Secure Code Generation
(Dave Aitel, Dan Geer – Lawfare – 27 June 2025) At the end of 2024, 25 percent of new code at Google was being written not by humans, but by generative large language models (LLMs)—a practice known as “vibe coding.” While the name may sound silly, vibe coding is a tectonic shift in the way software is built. Indeed, the quality of LLMs themselves is improving at a rapid pace in every dimension we can measure—and many we can’t. This rapid automation is transforming software engineering on two fronts simultaneously: Artificial intelligence (AI) is not only writing new code; it is also beginning to analyze, debug, and reason about existing human-written code. As a result, traditional ways of evaluating security—counting bugs, reviewing code, and tracing human intent—are becoming obsolete. AI experts no longer know if AI-generated code is safer, riskier, or simply vulnerable in different ways than human-written code. We must ask: Do AIs write code with more bugs, fewer bugs, or entirely new categories of bugs? And can AIs reliably discover vulnerabilities in legacy code that human reviewers miss—or overlook flaws humans find obvious? Whatever the answer, AI will never again be as inexperienced at code security analysis as it is today. And as is typical with information security, we are leaping into the future without useful metrics to measure position or velocity. – https://www.lawfaremedia.org/article/ai-and-secure-code-generation
Frontiers
Mayo Clinic’s AI Tool Identifies 9 Dementia Types, Including Alzheimer’s, With One Scan
(AI Insider – 30 June 2025) Mayo Clinic researchers have developed an AI tool, StateViewer, that identifies patterns of brain activity linked to nine types of dementia, including Alzheimer’s, using a single FDG-PET scan. In tests involving over 3,600 scans, StateViewer identified the correct dementia type in 88% of cases and helped clinicians interpret scans nearly twice as fast with up to three times greater accuracy. The tool addresses a critical need for early, precise diagnosis, particularly in clinics lacking neurology specialists, and is being further evaluated for broader clinical adoption. – https://theaiinsider.tech/2025/06/30/mayo-clinics-ai-tool-identifies-9-dementia-types-including-alzheimers-with-one-scan/
World’s first robotic hand bends fingers using nothing but human thought
(Interesting Engineering – 30 June 2025) Assistive robotics and brain-computer interfaces (BCIs) are rapidly transforming how people with disabilities regain independence. These technologies enable users to control external devices like prosthetics or robotic limbs using brain signals instead of muscle movements. While invasive BCIs have shown precise control, they require surgery and long-term maintenance. That limits their use to a small group of patients. Now, researchers at Carnegie Mellon University have made a major breakthrough in noninvasive BCI technology. – https://interestingengineering.com/innovation/robotic-hand-controlled-by-brainwaves
ChatGPT emerges as a search alternative, but Google holds ground
(DigWatch – 30 June 2025) ChatGPT is now used by over 400 million people weekly and ranks as the eighth most-visited website globally. While many users rely on it for tasks like writing, productivity, and planning, a growing number are also turning to it for search — a space long dominated by Google. Despite its popularity, experts say ChatGPT won’t fully replace Google. Rohan Sarin, a former product lead at Google and Microsoft, argues that the two serve different purposes: Google excels at direct, fact-based queries, while ChatGPT is better suited for exploration and synthesis. – https://dig.watch/updates/chatgpt-emerges-as-a-search-alternative-but-google-holds-ground