Governance, Legislation, and Geostrategies
Introducing the OECD AI Capability Indicators
(OECD – 3 June 2025) The report introduces the OECD’s beta AI Capability Indicators. The indicators are designed to assess and compare AI advancements against human abilities. Developed over five years by a collaboration of over 50 experts, the indicators cover nine human abilities, from Language to Manipulation. Unique in the current policy space, these indicators leverage cutting-edge research to provide a clear framework for policymakers to understand AI’s potential impacts on education, work, public affairs and private life. – https://www.oecd.org/en/publications/introducing-the-oecd-ai-capability-indicators_be745f04-en.html
From PISA to AI: How the OECD is measuring what AI can do
(Andreas Schleicher, Sam Mitchell – OECD.AI – 3 June 2025) Does anyone really know what AI can and cannot do? Amidst all the hype and fear surrounding AI, it can be frustrating to find objective, reliable information about its true capabilities. It is widely acknowledged that current AI evaluations are out of step with the performance of frontier AI models, particularly for newer Large Language Models (LLMs), which leading tech companies release at a fast pace. We know ChatGPT can outperform most students on tests like PISA and even graduate students on the GRE, but can AI handle human tasks like managing a classroom of rowdy kids, placing a tile on a roof or negotiating a contract for a service? The natural response may be to turn to specialist benchmarks developed by computer scientists. However, no one outside a small group of AI technicians understands benchmark results and their actual significance. When OpenAI’s o1 scores 0.893 on the MMLU-Pro benchmark, does this mean AI is ready to be deployed throughout the economy? What sorts of human abilities can it replicate? The truth is that no one really knows. – https://oecd.ai/en/wonk/from-pisa-to-ai-how-the-oecd-is-measuring-what-ai-can-do
How Can Enterprises Navigate The Generative AI Landscape — A Guide to Large Language Models (LLMs) in 2025
(AI Insider – 3 June 2025) Large language models are becoming core components of enterprise systems in 2025, with organizations adopting them to automate knowledge work, improve efficiency, and enhance decision-making across sectors. Enterprises face key decisions around open-source versus proprietary models, deployment architecture, customization techniques like RAG and fine-tuning, and compliance with growing AI governance standards. Success depends on aligning LLM adoption with internal data readiness, infrastructure capabilities, and responsible AI practices, using structured frameworks like AI Insider’s Seven-Layer AI Stack. – https://theaiinsider.tech/2025/06/03/how-can-enterprises-navigate-generative-ai-lanscape-a-guide-to-large-language-models-llms-in-2025/
Regulatory Misalignment and the RAISE Act
(Kevin Frazier – Lawfare – 3 June 2025) As artificial intelligence (AI) becomes more powerful, there is an ongoing debate about whether the federal government or states should lead in regulating this rapidly advancing tool. One of the commonly proposed frameworks—tort liability based on reasonableness standards—often struggles to adequately address harms caused by AI. Under such a framework, plaintiffs may have difficulties proving the foreseeability of the alleged harms, establishing causation, defining a clear “standard of care” amid rapidly evolving technology, and demonstrating a breach of that standard. Statutory interventions like New York’s proposed Responsible AI Safety and Education (RAISE) Act, which imposes a tort-based liability scheme, attempt to fill these perceived gaps but fall short of that tall order. Crafting rules that are both flexible enough to accommodate rapid technological evolution and robust enough to safeguard against significant risks is a delicate balancing act. Ill-conceived regimes risk stifling innovation, creating an unlevel playing field for competitors, or failing to prevent the very harms they aim to address. The RAISE Act serves as a case study of such a flawed approach. Asking whether the act fulfills the ideal aims of AI regulation—incentivizing innovation, fostering responsible development, providing redress, and ensuring predictability for all stakeholders—returns a clear answer: no. This conclusion should inform AI governance efforts in states considering similar models and provide insight into the present debate over allowing Congress to lead in shaping the AI policy landscape. – https://www.lawfaremedia.org/article/regulatory-misalignment-and-the-raise-act
New Report Helps Europe Chart Aggressive Course to Lead Global Quantum Race
(Quantum Insider – 3 June 2025) A new EU strategy proposes unifying quantum research, infrastructure, and commercialization efforts to establish Europe as a global leader in quantum technologies and reduce reliance on foreign systems. The plan includes creating Quantum Competence Clusters, expanding EuroHPC-linked quantum computing and secure communication networks, and establishing chip manufacturing pilot lines under the EU Chips Act. To accelerate commercialization, the report recommends a Competitive Procurement Challenge for fault-tolerant quantum computing, public investment incentives, and stronger intellectual property and talent retention measures. – https://thequantuminsider.com/2025/06/03/europe-charts-aggressive-course-to-lead-global-quantum-race/
AI data centre boom sparks incentives and pushback
(DigWatch – 3 June 2025) The explosive growth of AI and cloud computing has ignited a data centre building boom across the United States, with states offering massive financial incentives to attract investment. However, the boom’s heavy demands on electricity and water are beginning to meet resistance from lawmakers and local communities concerned about long-term costs and environmental strain. Dozens of states have rolled out tax exemptions, permitting fast-tracks, and deregulated energy options to lure hyperscale data centres—massive facilities consuming hundreds of megawatts of power. – https://dig.watch/updates/ai-data-centre-boom-sparks-incentives-and-pushback
Security
Is a Quantum-Cryptography Apocalypse Imminent?
(Quantum Insider – 3 June 2025) Will quantum computers crack cryptographic codes and cause a global security disaster? You might certainly get that impression from a lot of news coverage, the latest of which reports new estimates that it might be 20 times easier to crack such codes than previously thought. Cryptography underpins the security of almost everything in cyberspace, from wifi to banking to digital currencies such as bitcoin. Whereas it was previously estimated that it would take a quantum computer with 20 million qubits (quantum bits) eight hours to crack the popular RSA algorithm (named after its inventors, Rivest–Shamir–Adleman), the new estimate reckons this could be done with 1 million qubits. – https://thequantuminsider.com/2025/06/03/is-a-quantum-cryptography-apocalypse-imminent/
Frontiers
Researchers in Japan Study Attachment-Like Behaviors Between Humans and AI
(AI Insider – 3 June 2025) A study from Waseda University proposes that people exhibit attachment-like behaviors toward AI, such as seeking emotional support or maintaining distance, paralleling patterns seen in human relationships. Researchers developed a new tool, the Experiences in Human-AI Relationships Scale (EHARS), which revealed that 75% of users turn to AI for advice and 39% view it as a dependable presence. The findings suggest psychological models like attachment theory can guide ethical AI design, especially for mental health tools and companion technologies, while raising questions about emotional overdependence and manipulation. – https://theaiinsider.tech/2025/06/03/researchers-in-japan-study-attachment-like-behaviors-between-humans-and-ai/
Meta strikes 20-year nuclear power deal to fuel AI and save Illinois reactor
(Interesting Engineering – 3 June 2025) Meta has signed a 20-year agreement with Constellation Energy to secure nuclear power from the Clinton Clean Energy Center in Illinois. The deal ensures continued operation of the plant beyond 2027, when Illinois’ zero-emissions credit program expires. It also marks Meta’s first long-term nuclear power purchase, as it moves to meet surging electricity demand from AI and data centers. The agreement is part of Meta’s broader push to match its electricity use with 100 percent clean energy and invest in emerging technologies. Financial details of the deal were not disclosed. – https://interestingengineering.com/culture/meta-nuclear-power-deal-ai-clinton-plant
Eye-opening device: Self-powered AI synapse mimics human vision, achieves 82% accuracy
(Interesting Engineering – 3 June 2025) Scientists in Japan have developed a groundbreaking AI synapse that recognizes colors nearly as well as the human eye, promising great advances in energy-efficient visual recognition for smartphones, drones, and autonomous vehicles. The research team at the Tokyo University of Science developed the self-powered optoelectronic device, which operates entirely on light, to address the high power, storage and computational demands of current machine vision systems. – https://interestingengineering.com/science/eye-opening-device-self-powered-ai-synapse-mimics-human-vision-achieves-82-accuracy
MIT AI digs through 1 million samples to find 19 materials that can replace cement
(Interesting Engineering – 3 June 2025) A research team spanning the Olivetti Group and the MIT Concrete Sustainability Hub has unveiled a machine learning algorithm that helps identify alternatives to cement. Led by Soroush Mahjoubi, the team published an open-access paper in Nature’s Communications Materials outlining their solution. The researchers were working on finding alternatives to reduce the amount of cement in concrete, to save on costs and emissions, when they came across this discovery. – https://interestingengineering.com/innovation/mit-cement-alternatives-ai
New quantum battery design promises fast-charging, ultra-compact energy storage
(Interesting Engineering – 3 June 2025) In the coming years, batteries so tiny yet powerful could revolutionize everything from smartphones to supercomputers. Energy storage is about to take a massive leap forward, with the new concept of “topological quantum battery” at the forefront. A theoretical study by researchers at the RIKEN Center for Quantum Computing and Huazhong University of Science and Technology has shown how to efficiently design a quantum battery. – https://interestingengineering.com/energy/new-quantum-battery-design
NVIDIA unveils world’s largest quantum research supercomputer
(DigWatch – 3 June 2025) NVIDIA has launched the world’s largest research supercomputer dedicated to quantum computing, named ABCI-Q, housed at Japan’s new Global Research and Development Centre for Business by Quantum-AI Technology (G-QuAT). Delivered in collaboration with Japan’s National Institute of Advanced Industrial Science and Technology (AIST), ABCI-Q combines over 2,000 NVIDIA H100 GPUs with multiple quantum processors to enable advanced quantum-AI workloads. – https://dig.watch/updates/nvidia-unveils-worlds-largest-quantum-research-supercomputer
Colt, Honeywell and Nokia to trial quantum cryptography in space
(DigWatch – 3 June 2025) Colt Technology Services, Honeywell, and Nokia have joined forces to trial quantum key distribution (QKD) via satellites to develop quantum-safe networks. The trial builds on a previous Colt pilot focused on terrestrial quantum-secure networks. The collaboration aims to tackle the looming cybersecurity risks of quantum computing, which threatens to break current encryption methods. The project seeks to deliver secure global communication beyond the current 100km terrestrial limit by trialling space-based and subsea QKD. – https://dig.watch/updates/colt-honeywell-and-nokia-to-trial-quantum-cryptography-in-space