Governance
From risk to resilience: No-code AI governance in the Global South
(Michael Harre – OECD.AI – 30 September 2025) As OECD countries advance national AI strategies, a critical question looms for developing nations: how do they govern AI now, without waiting years for top-down frameworks? One answer is to equip local teams to build and continually refine their own guardrails using accessible, no-code tools. Done well, AI shifts from a replacement risk to a skills multiplier. This post outlines how no-code AI tools, paired with locally authored rules, enable developing communities to leapfrog slow, centralised governance models. – https://oecd.ai/en/wonk/from-risk-to-resilience-no-code-ai-governance-in-the-global-south
EDPB issues guidelines on GDPR-DSA tension for platforms
(DigWatch – 30 September 2025) On 12 September 2025, the European Data Protection Board (EDPB) adopted draft guidelines detailing how online platforms should reconcile requirements under the GDPR and the Digital Services Act (DSA). The draft is now open for public consultation through 31 October. The guidelines address key areas of tension, including proactive investigations, notice-and-action systems, deceptive design, recommender systems, age safety and transparency in advertising. They emphasise that DSA obligations must be implemented in ways consistent with GDPR principles. – https://dig.watch/updates/edpb-issues-guidelines-on-gdpr-dsa-tension-for-platforms – https://www.taylorwessing.com/en/insights-and-events/insights/2025/09/rd-balancing-online-safety-and-data-protection-edpb-guidelines-on-interplay-of-gdpr-and-dsa
Greece considers social media ban for under-16s, says Mitsotakis
(DigWatch – 30 September 2025) Greek Prime Minister Kyriakos Mitsotakis has signalled that Greece may consider banning social media use for children under 16. He raised the issue during a UN event in New York, hosted by Australia, titled ‘Protecting Children in the Digital Age’, held as part of the 80th UN General Assembly. Mitsotakis emphasised that any restrictions would be coordinated with international partners, warning that the world is carrying out the largest uncontrolled experiment on children’s minds through unchecked social media exposure. – https://dig.watch/updates/greece-considers-social-media-ban-for-under-16s-says-mitsotakis – https://www.primeminister.gr/en/2025/09/26/37065
Teen Safety is the Price of Admission for OpenAI and Its Peers
(Vaishnavi J – Tech Policy Press – 30 September 2025) In a post on its website, OpenAI previewed parental controls that will allow parents to link their accounts to their teens’ and customize their experiences. It followed two recent posts that sketch its thinking on teenagers’ use of ChatGPT: one on balancing teen safety, freedom, and privacy, and another outlining progress toward age prediction. With three posts on the subject in just two weeks’ time, the company is clearly trying to signal that this is a topic it takes seriously. Strikingly, the one on balancing safety with freedom and privacy was authored directly by Sam Altman, the company’s cofounder and CEO — a sign that teen safety is now a board-level priority at OpenAI, and a central design and policy challenge. The posts arrive in a particularly fraught environment for the debate over AI and youth wellbeing, against a backdrop of mounting legal challenges and demonstrable harms. AI companies including OpenAI, Replika, and Character.ai all face lawsuits in the US alleging that their “AI companions” can promote self-harm or expose teens to sexualized interactions. Regulators in Europe, meanwhile, have opened inquiries under the Digital Services Act into whether AI systems adequately protect children. – https://www.techpolicy.press/teen-safety-is-the-price-of-admission-for-openai-and-its-peers/
Where AI Meets Racism at the Border
(Tsion Gurmu, Hinako Sugiyama, Sobechukwu Uwajeh – Tech Policy Press – 30 September 2025) Following the passage of President Donald Trump’s “Big, Beautiful Bill,” the United States is anticipated to spend billions more on technology to surveil its borders, track immigrants, and execute its mass detention and deportation program. Part of the money will go to acquire and deploy new AI systems, including surveillance towers that utilize facial recognition, social media monitoring, and database analytics. However, the US has previously committed to international law standards that call for a second look at deploying these biased AI systems for a “smart border”. In response to a meeting with the United Nations Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, the Black Alliance for Just Immigration (BAJI) and the Immigrant Rights Clinic and International Justice Clinic at UC Irvine (UCI) School of Law recently submitted a report detailing how AI affects Black migrants and migrants of color, along with recommendations for change drawn from international human rights law. – https://www.techpolicy.press/where-ai-meets-racism-at-the-border/
New Zealand created its AI Strategy and Guidance products to incorporate OECD values in a distinct manner
(Caitlin Parr, Emma Naji, Liam Williams, Sandra Laws – OECD.AI – 29 September 2025) New Zealand has just released a suite of AI materials, a milestone that makes it the last OECD country to publish a national AI strategy. These include: New Zealand AI Strategy; Responsible AI Guidance for Businesses; Māori Data and AI Guidance. These resources aim to boost innovation, particularly among small and medium-sized enterprises (SMEs), which have some of the lowest rates of AI adoption among OECD countries. They also reflect New Zealand’s commitment to using AI in ways that are ethical, inclusive, and uniquely Kiwi. Together, they demonstrate how New Zealand is implementing the OECD’s framework for trustworthy AI, in line with the resourceful ‘number-8-wire’ ethos for which the country is known. – https://oecd.ai/en/wonk/new-zealand-created-its-ai-strategy-and-guidance-products-to-incorporate-oecd-values-in-a-distinct-manner
G7 AI transparency reporting: Ten insights for AI governance and risk management
(Audrey Plonk, Karine Perset – OECD.AI – 25 September 2025) Transparency in artificial intelligence (AI) is increasingly recognised as essential to building trust, ensuring accountability, and promoting responsible innovation. In 2023, the Group of Seven (G7) launched the Hiroshima AI Process (HAIP), a global initiative aimed at addressing the governance and risk challenges posed by AI systems. A central element of this process is a voluntary transparency reporting framework, developed with the OECD, which invites AI organisations to disclose how they identify risks, implement safeguards, and align with internationally agreed-upon principles for trustworthy AI. In April 2025, the OECD published the first round of transparency reports. Twenty organisations from around the globe participated, ranging from large multinational technology companies to smaller advisory, research, and educational institutions. Their submissions offer a unique insight into how AI developers approach governance in practice. – https://oecd.ai/en/wonk/g7-haip-report-insights-for-ai-governance-and-risk-management
AI governance through global red lines can help prevent unacceptable risks
(Stuart Russell, Charbel-Raphael Segerie, Niki Iliadis, Tereza Zoumpalova – OECD.AI – 22 September 2025) As AI systems become increasingly capable and more deeply integrated into our lives, the risks and harms they pose also increase. Recent examples illustrate the urgency: powerful multimodal systems have fueled large-scale scams and fraud; increasingly human-like AI agents are enabling manipulation and dependency, with particularly severe consequences for children; and models have demonstrated deceptive behaviour and even resisted shutdown or modification. Without clear and enforceable red lines that prohibit specific unacceptable uses and behaviours of AI systems, the resulting harms could become widespread, irreversible, and destabilising. – https://oecd.ai/en/wonk/ai-governance-through-global-red-lines-can-help-prevent-unacceptable-risks
The Future of AI Policy Is the Future of Competing Demands
(RAND Corporation – September 2025) Right now, policymakers are contending with decisions to optimize the benefits of artificial intelligence (AI) while ensuring that key pillars of safety, data privacy, and worker well-being are supported. Realizing the gains from AI does not come without tough choices, particularly when addressing how quickly AI is developed and adopted by sectors of the economy. RAND’s Social and Economic Policy Rethink Initiative has developed a volume of work on the opportunities and challenges presented by AI adoption. This volume aims to support policy, industry, and community leaders as they confront key questions: What are the social and economic policy stakes for AI adoption? What types of AI adoption tradeoffs will policymakers need to manage? How can policymakers map AI impacts to develop agile AI responses? – https://www.rand.org/well-being/projects/portfolios/rethinking-social-economic-policy-systems/ai-adoption.html
Legislation
California enacts first state-level AI safety law
(DigWatch – 30 September 2025) In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies. The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour not covered by the EU AI Act. – https://dig.watch/updates/california-enacts-first-state-level-ai-safety-law – https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/
California Signed A Landmark AI Safety Law. What To Know About SB53
(Cristiano Lima-Strong – Tech Policy Press – 30 September 2025) California Gov. Gavin Newsom (D) on Monday signed into law the Transparency in Frontier Artificial Intelligence Act, known as SB53, capping off a tumultuous year of negotiations over AI regulations in the state and ushering in some of the most significant rules in the United States. The proposal, the subject of intense debate at both the state and federal level, is poised to be a major marker in the debate over AI safety nationwide and could serve as a template for other states to follow — if lawmakers in Washington do not ultimately preempt such state rules. “With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk,” California State Sen. Scott Wiener (D), who introduced the bill, said in a statement. “With this law, California is stepping up, once again, as a global leader on both technology innovation and safety.” – https://www.techpolicy.press/california-signed-a-landmark-ai-safety-law-what-to-know-about-sb53/
New Jersey proposes bill to uncover data centre energy and water use
(DigWatch – 30 September 2025) New Jersey legislators have introduced a bill requiring data centre operators in the state to disclose their annual energy and water usage publicly. The measure seeks to inject transparency into operations that are notorious for high resource consumption. – https://dig.watch/updates/new-jersey-proposes-bill-to-uncover-data-centre-energy-and-water-use – https://www.njspotlightnews.org/2025/09/how-much-water-and-energy-do-data-centers-consume-nj-bill-demands-answers/
How Not to Embarrass the Future
(Gregory M. Dickinson, Kevin Frazier – Lawfare – 30 September 2025) The timing of society’s legal response to artificial intelligence (AI) matters, but not in the way one might think. Time is not of the essence. The best policy emerges from learned experience. Yet, when it comes to new tools such as AI companions, a desire to regulate unnecessarily truncates that learning period. Policymakers are far more likely to err by acting rashly than by delaying legal reform for too long. AI’s advances are outpacing legislative cycles, tempting lawmakers to try to “future proof” the law with sweeping rules that carry unpredictable consequences. But the history and theory of technology governance teach just the opposite: When policymakers act quickly, they’re likely to get the details wrong. Those errors are not costless. Regulatory mistakes today will harden tomorrow into obstacles to innovation that last long after the targeted technologies have changed. Legislators should resist the urge to pass hasty and overconfident laws that burden future innovation. Regulation should be specially crafted to limit its duration and guard against inadvertent creep. And — because the legislature will never be filled with enlightened philosopher kings capable of predicting the future — doing nothing may be the best option of all. – https://www.lawfaremedia.org/article/how-not-to-embarrass-the-future
Geostrategies
When Governments Pull the Plug
(Theodore Christakis – Lawfare – 29 September 2025) Since spring, Brussels has been buzzing with the worry that, if geopolitical winds shift, Washington might order U.S. hyperscalers to yank the plug on cloud and productivity suites running Europe’s daily business. Although orders to suspend core digital services are rare and have always been narrow and targeted, Politico summed up the anxiety crisply in late June: “Trump can pull the plug on the internet, and Europe can’t do anything about it.” But as European firms and governments fret about the United States, a recent episode involving a U.S. tech giant and an Indian oil company set an awkward precedent. In response to a new wave of EU sanctions against Russia, Microsoft temporarily suspended its services to Nayara Energy, an Indian company with Russian ties, and then restored them days later, telling Reuters it was “in ongoing discussions with the European Union towards service continuity”. The episode put the EU on the other side of the “kill switch” equation: it has been a vocal opponent of the very power it has just wielded, demonstrating that the ability to cut off digital services is not a unilateral power held by only one nation. – https://www.lawfaremedia.org/article/when-governments-pull-the-plug
The Artificial General Intelligence Race and International Security
(Jim Mitre, Michael C. Horowitz, Natalia Henry, Emma Borden, Joel B. Predd – RAND Corporation – 24 September 2025) As humanity approaches the technological capacity to develop artificial general intelligence (AGI), the race between leading artificial intelligence (AI) powers — particularly the United States and China — is likely to intensify amid broader U.S.-China strategic competition. Perry World House at the University of Pennsylvania and the RAND Geopolitics of AGI Initiative commissioned papers by experts in AI, international relations, and national security to examine the dynamics of the AGI race and its potential implications for international security and stability. The authors grapple with whether the greatest risks stem from the ambiguous pre-AGI period or from the rapid, competitive race itself, and whether AGI will fundamentally alter the nuclear balance or primarily democratize destructive capabilities. Other authors argue that traditional arms control is ill-suited for AGI, proposing instead novel governance models, such as an “AI cartel” to distinguish military from civilian applications. Collectively, the papers highlight strategic dilemmas — speed versus caution, perception versus reality, and competition versus collusion — that demand deliberate choices to ensure that AGI advances international security rather than undermines it. – https://www.rand.org/pubs/perspectives/PEA4155-1.html
Europe is lagging in AI adoption – how can businesses close the gap?
(Cathy Li, Andrew Caruana Galizia – World Economic Forum – 23 September 2025) Artificial Intelligence has countries the world over vying for supremacy – or at the very least a slice of the AI pie. The US has led the way with key innovations, such as AI microchips and the large language models that form the foundations of generative AI. In something akin to the 1960s space race, China has been following closely in the US’s footsteps. Yet, the rest of the world is in catch-up mode, including Europe. A study by Accenture reveals that European AI adoption is lagging behind the US. More than half of large European organizations (56%) “have yet to scale a truly transformative AI investment”, the consultancy states. And this race is being complicated by growing questions over the return on investment from the massive funds being spent, so how can Europe’s businesses catch up? – https://www.weforum.org/stories/2025/09/europe-ai-adoption-lag/
Frontiers
MIT explores AI solutions to reduce emissions
(DigWatch – 30 September 2025) Rapid growth in AI data centres is raising global energy use and emissions, prompting MIT scientists to pursue ways of cutting the carbon footprint through more intelligent computing, greater efficiency, and improved data centre design. Innovations include cutting energy-heavy training, using optimised or lower-power processors, and improving algorithms to achieve results with fewer computations. These avoided computations, dubbed ‘negaflops’, can dramatically lower energy consumption without compromising AI performance. – https://dig.watch/updates/mit-explores-ai-solutions-to-reduce-emissions – https://news.mit.edu/2025/responding-to-generative-ai-climate-impact-0930
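A minimal sketch of the ‘negaflops’ idea: counting the multiply-accumulate operations avoided when a weight matrix is pruned before inference. The matrix size, sparsity level, and pruning rule here are illustrative assumptions, not the MIT team’s actual methods:

```python
import numpy as np

# Illustrative only: a dense layer versus a magnitude-pruned one.
# "Negaflops" are the multiply-accumulates we never have to perform.
rng = np.random.default_rng(0)
weights = rng.normal(size=(1024, 1024))
x = rng.normal(size=1024)

dense_macs = weights.size  # one multiply-accumulate per weight

# Assumed sparsity: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(weights), 0.9)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

sparse_macs = int(np.count_nonzero(pruned))
negaflops = dense_macs - sparse_macs
print(f"MACs avoided (negaflops): {negaflops} of {dense_macs} "
      f"({100 * negaflops / dense_macs:.0f}% saved)")

# Output error introduced by pruning, showing the accuracy/compute tradeoff.
err = np.linalg.norm(weights @ x - pruned @ x) / np.linalg.norm(weights @ x)
print(f"Relative output error: {err:.3f}")
```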
Harvard researchers develop AI for brain surgery
(DigWatch – 30 September 2025) Harvard researchers have developed an AI tool to distinguish glioblastoma from similar brain tumours during surgery. The PICTURE system gives surgeons near-real-time guidance for critical decisions in the operating room. PICTURE outperformed humans and other AI systems, correctly distinguishing glioblastoma from primary central nervous system lymphoma (PCNSL) over 98 percent of the time in international tests. The tool also flags cases it is unsure of, allowing human review and reducing the risk of misdiagnosis, particularly in complex or rare brain tumours. – https://dig.watch/updates/harvard-researchers-develop-ai-for-brain-surgery – https://hms.harvard.edu/news/ai-distinguishes-glioblastoma-look-alike-cancers-during-surgery
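A hedged sketch of the deferral pattern described above, where low-confidence cases are routed to a human rather than auto-classified. The threshold value, class labels, and softmax-confidence rule are placeholders; the article does not describe PICTURE’s internals:

```python
import numpy as np

def classify_or_defer(logits: np.ndarray, threshold: float = 0.95):
    """Return a label when the model is confident, else defer to a human.

    logits: unnormalised scores for [glioblastoma, PCNSL] (placeholder classes).
    threshold: confidence below which the case goes to human review
               (an illustrative value, not PICTURE's actual cutoff).
    """
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    labels = ["glioblastoma", "PCNSL"]
    if probs.max() < threshold:
        return "defer_to_human", probs
    return labels[int(probs.argmax())], probs

# A confident case and an ambiguous one.
print(classify_or_defer(np.array([4.2, 0.3])))   # clear-cut: auto-labelled
print(classify_or_defer(np.array([1.1, 0.9])))   # ambiguous: routed for review
```

The design point is that abstention trades a small amount of automation for a lower misdiagnosis risk on exactly the rare, hard cases the summary highlights.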
New lab-built neuron mirrors real brain cells in energy, runs on just 0.1 volts
(Interesting Engineering – 30 September 2025) A neuron made in the lab now works almost like one in the body. A team of engineers at the University of Massachusetts Amherst has announced the creation of an artificial neuron with electrical functions that closely mirror those of biological ones. The work builds on their earlier research using protein nanowires synthesized from electricity-generating bacteria. – https://interestingengineering.com/innovation/artificial-neuron-protein-nanowires-umass
Robots cut travel time by 30% using human-like memory system in smart factories
(Interesting Engineering – 30 September 2025) A new “Physical AI” technology could help improve the navigation of autonomous mobile robots in environments like logistics centers and smart factories. Developed by South Korea’s Daegu Gyeongbuk Institute of Science and Technology (DGIST), the technology models the “spread and forgetting of social issues”. Using this human-like forgetting method, the robots can distinguish important, real-time obstacles from unnecessary, outdated information. – https://interestingengineering.com/innovation/human-like-memory-system-helps-robots
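A minimal sketch of the forgetting idea: obstacle observations carry a relevance score that decays over time, so stale detections stop influencing path planning unless re-observed. The half-life, scores, and pruning threshold are assumptions for illustration; DGIST’s actual model is not detailed in the summary:

```python
import time

class ForgettingObstacleMap:
    """Obstacles fade from memory unless re-observed, loosely mimicking
    how attention to a social issue spreads and then decays."""

    def __init__(self, half_life_s: float = 30.0, forget_below: float = 0.05):
        self.half_life_s = half_life_s      # assumed decay half-life
        self.forget_below = forget_below    # assumed pruning threshold
        self._obstacles = {}                # id -> (last_seen, relevance)

    def observe(self, obstacle_id: str, relevance: float = 1.0):
        self._obstacles[obstacle_id] = (time.monotonic(), relevance)

    def active_obstacles(self):
        """Return obstacles whose decayed relevance is still worth avoiding."""
        now = time.monotonic()
        alive = {}
        for oid, (seen, rel) in self._obstacles.items():
            decayed = rel * 0.5 ** ((now - seen) / self.half_life_s)
            if decayed >= self.forget_below:
                alive[oid] = decayed
        self._obstacles = {oid: self._obstacles[oid] for oid in alive}
        return alive

grid = ForgettingObstacleMap(half_life_s=1.0)
grid.observe("pallet_7")                # hypothetical obstacle id
time.sleep(0.5)
print(grid.active_obstacles())          # still relevant (~0.71)
time.sleep(5.0)
print(grid.active_obstacles())          # forgotten: {} after decay
```

Forgetting outdated detections shrinks the set of obstacles the planner must route around, which is one plausible way such a system could cut travel time.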
Sora: OpenAI’s TikTok-like app now lets you deepfake friends with consent cameos
(Interesting Engineering – 30 September 2025) If you thought the internet was not yet entirely AI slop, OpenAI’s new Sora launch just might change your opinion. The company has unveiled Sora 2, its upgraded video-and-audio generation model, alongside a new iOS social app also called Sora. The app borrows heavily from TikTok’s short-video format but adds a twist: users can record short clips and let friends spin them into AI-generated cameos. The release follows Sora’s debut in February 2024, which OpenAI described as the “GPT-1 moment” for video. That model hinted at what large-scale video training could unlock. – https://interestingengineering.com/culture/sora-2-tiktok-like-ai-video-app
Machine learning enables ‘mind reading’ in mice through subtle facial movements
(Interesting Engineering – 30 September 2025) “Mind reading” often sounds like science fiction. But a new study shows it may only take a simple video. Researchers at the Champalimaud Foundation in Portugal discovered that mice’s facial movements reveal their internal thought strategies. The finding could open a non-invasive way to study brain activity while raising new concerns about mental privacy. In earlier work, the team set up a puzzle for mice. The animals had to figure out which of two water spouts provided a sugary drink. – https://interestingengineering.com/science/machine-learning-thought-maps-mouse-study
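A hedged sketch of the general decoding approach the study implies: fitting a classifier that predicts a behavioural strategy label from facial-motion features extracted from video. The synthetic data, feature names, strategy labels, and classifier choice below are all illustrative assumptions; the Champalimaud team’s actual pipeline is not detailed in the summary:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 trials x 12 facial-motion features
# (e.g. whisker, nose, and jaw movement energies per trial -- assumed).
X = rng.normal(size=(200, 12))
# Hypothetical ground truth: strategy depends on a couple of features.
y = ((X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200)) > 0).astype(int)
# 0 = "stay at current spout", 1 = "switch spout" (placeholder labels)

# Cross-validated decoding accuracy: above-chance scores would indicate
# that facial movements carry information about the internal strategy.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

On real data, above-chance decoding from video alone is what would make the method a non-invasive window onto internal states, and also what raises the mental-privacy concerns the article notes.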