Internet Governance Forum (IGF) 2025
AI and the future of work: Global forum highlights risks, promise, and urgent choices
(DigWatch – 25 June 2025) At the 20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathered for a high-level session exploring how AI is transforming the world of work. While the tone was broadly optimistic, participants wrestled with difficult questions about equity, regulation, and the ethics of data use. AI’s capacity to enhance productivity, reshape industries, and bring solutions to health, education, and agriculture was celebrated, but sharp divides emerged over how to govern and share its benefits. Concrete examples showcased AI’s positive impact. Norway’s government highlighted AI’s role in green energy and public sector efficiency, while Lesotho’s minister shared how AI helps detect tuberculosis and support smallholder farmers through localised apps. Speakers noted that AI can address systemic shortfalls in healthcare by reducing documentation burdens and enabling earlier diagnosis. Corporate representatives from Meta and OpenAI showcased tools that personalise education, assist the visually impaired, and democratise advanced technology through open-source platforms. – https://dig.watch/updates/ai-and-the-future-of-work-global-forum-highlights-risks-promise-and-urgent-choices
IGF panel urges rethinking internet governance amid rising geopolitical tensions
(DigWatch – 25 June 2025) At the 2025 Internet Governance Forum in Lillestrøm, Norway, a session led by the German Federal Ministry for Digital Transformation spotlighted a bold foresight exercise imagining how global internet governance could evolve by 2040. Co-led by researcher Julia Pohle, the initiative involved a diverse 15-member German task force and interviews with international experts, including Anriette Esterhuysen and Gbenga Sesan. Their work yielded four starkly different future scenarios, ranging from intensified geopolitical rivalry and internet fragmentation to overregulation and a transformative turn toward treating the internet as a public good. A central takeaway was the resurgence of state power as a dominant force shaping digital futures. – https://dig.watch/updates/igf-panel-urges-rethinking-internet-governance-amid-rising-geopolitical-tensions
Advancing digital identity in Africa while safeguarding sovereignty
(DigWatch – 25 June 2025) A pivotal discussion on digital identity and sovereignty in developing countries unfolded at the Internet Governance Forum 2025 in Norway. The session, co-hosted by CityHub and AFICTA (Africa ICT Alliance), brought together experts from Africa, Asia, and Europe to explore how digital identity systems can foster inclusion, support cross-border services, and remain anchored in national sovereignty. Speakers emphasised that digital identity is foundational for bridging the digital divide and fostering economic development. Dr Jimson Olufuye, Chair of AFICTA, stressed the existential nature of identity in the digital age, noting, ‘If you cannot identify anybody, it means the person does not exist.’ He linked identity inclusion directly to the World Summit on the Information Society (WSIS) action lines and the Global Digital Compact goals. – https://dig.watch/updates/advancing-digital-identity-in-africa-while-safeguarding-sovereignty
AU Open Forum at IGF 2025 highlights urgent need for action on Africa’s digital future
(DigWatch – 25 June 2025) At the 2025 Internet Governance Forum in Lillestrøm, Norway, the African Union’s Open Forum served as a critical platform for African stakeholders to assess the state of digital governance across the continent. The forum featured updates from the African Union Commission, the UN Economic Commission for Africa (UNECA), and voices from governments, civil society, youth, and the private sector. The tone was constructive yet urgent, with leaders stressing the need to move from declarations to implementation on long-standing issues like digital inclusion, infrastructure, and cybersecurity. Dr Mactar Seck of UNECA highlighted key challenges slowing Africa’s digital transformation, including policy fragmentation, low internet connectivity (just 38% continent-wide), and high service costs. – https://dig.watch/updates/au-open-forum-at-igf-2025-highlights-urgent-need-for-action-on-africas-digital-future
AI governance efforts centre on human rights
(DigWatch – 25 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, a key session spotlighted the launch of the Freedom Online Coalition’s (FOC) updated Joint Statement on Artificial Intelligence and Human Rights. Backed by 21 countries and counting, the statement outlines a vision for human-centric AI governance rooted in international human rights law. Representatives from governments, civil society, and the tech industry—most notably the Netherlands, Germany, Ghana, Estonia, and Microsoft—gathered to emphasise the urgent need for a collective, multistakeholder approach to tackle the real and present risks AI poses to rights such as privacy, freedom of expression, and democratic participation. – https://dig.watch/updates/ai-governance-efforts-centre-on-human-rights
Civil society pushes back against cyber law misuse at IGF 2025
(DigWatch – 25 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, a vibrant panel of civil society leaders warned that cyber laws, initially designed to combat real security threats, are increasingly being weaponised by governments to restrict civic space. Representatives from across Africa, Latin America, the Middle East, and Asia shared strikingly similar experiences: the use of vague and overly broad legal terms, executive dominance in lawmaking, and lack of meaningful public consultation have turned cyber legislation into a tool for silencing dissent, particularly targeting journalists, activists, and marginalised communities. – https://dig.watch/updates/civil-society-pushes-back-against-cyber-law-misuse-at-igf-2025
Global consensus grows on inclusive and cooperative AI governance at IGF 2025
(DigWatch – 25 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, the ‘Building an International AI Cooperation Ecosystem’ session spotlighted the urgent need for international collaboration to manage AI’s transformative impact. Hosted by China’s Cyberspace Administration, the session featured a global roster of experts who emphasised that AI is no longer a niche or elite technology, but a powerful and widely accessible force reshaping economies, societies, and governance frameworks. China’s Cyberspace Administration Director-General Qi Xiaoxia opened the session by stressing her country’s leadership in AI innovation, citing that over 60% of global AI patents originate from China. She proposed a cooperative agenda focused on sustainable development, managing AI risks, and building international consensus through multilateral collaboration. – https://dig.watch/updates/global-consensus-grows-on-inclusive-and-cooperative-ai-governance-at-igf-2025
Parliamentarians call for stronger platform accountability and human rights protections at IGF 2025
(DigWatch – 25 June 2025) At the 2025 Internet Governance Forum in Lillestrøm, Norway, parliamentarians from around the world gathered to share perspectives on how to regulate harmful online content without infringing on freedom of expression and democratic values. The session, moderated by Sorina Teleanu, Diplo’s Director of Knowledge, highlighted the increasing urgency for social media platforms to respond more swiftly and responsibly to harmful content, particularly content generated by AI that can lead to real-world consequences such as harassment, mental health issues, and even suicide. – https://dig.watch/updates/parliamentarians-call-for-stronger-platform-accountability-and-human-rights-protections-at-igf-2025
EuroDIG outcomes shared at IGF 2025 session in Norway
(DigWatch – 25 June 2025) At the Internet Governance Forum (IGF) 2025 in Norway, a high-level networking session was held to share key outcomes from the 18th edition of the European Dialogue on Internet Governance (EuroDIG), which took place earlier this year from 12–14 May in Strasbourg, France. Hosted by the Council of Europe and supported by the Luxembourg Presidency of the Committee of Ministers, the Strasbourg conference centred on balancing innovation and regulation, strongly focusing on safeguarding human rights in digital policy. – https://dig.watch/updates/eurodig-outcomes-shared-at-igf-2025-session-in-norway
WSIS+20 review highlights gaps in digital access and skills
(DigWatch – 25 June 2025) Experts gathered at the Internet Governance Forum 2025 in Norway to assess progress since the World Summit on the Information Society (WSIS) was launched two decades ago. The session, co-hosted by the Government of Finland and ICANN, offered a timely stocktake ahead of the WSIS+20 negotiations in December 2025. Panellists emphasised that WSIS has successfully anchored multistakeholder participation in internet governance. Yet, pressing challenges persist, particularly the digital divide, gender gaps, and lack of basic digital skills—issues that remain just as urgent now as in 2005. – https://dig.watch/updates/wsis20-review-highlights-gaps-in-digital-access-and-skills
Governance and Legislation
Emerging divides in the transition to artificial intelligence
(OECD – 25 June 2025) Business adoption of artificial intelligence accelerated markedly in 2023-24, driven in part by generative AI. Uptake has been faster in some places, sectors, and firms, so gaps are forming that reinforce existing cleavages. AI champions have stood out in the most innovative countries and regions, among larger firms, and in knowledge-intensive services. AI is being used as a business solution for greater competitiveness. Applications are manifold and context-specific, often tied to local conditions for diffusion. However, legal and data protection concerns, alongside skills shortages, costs, and technology lock-ins, can slow adoption, contributing to emerging divides. – https://www.oecd.org/en/publications/emerging-divides-in-the-transition-to-artificial-intelligence_7376c776-en.html
Federal Judge Rules in Favor of Anthropic on AI Training Fair Use, Sets Stage for Key Trial
(AI Insider – 25 June 2025) In a landmark decision, U.S. District Judge William Alsup ruled that Anthropic did not violate copyright law by training its AI models on published books without author permission, affirming the company’s argument that such use falls under the fair use doctrine. This marks the first significant judicial endorsement of AI companies’ right to train large language models using copyrighted materials. – https://theaiinsider.tech/2025/06/25/federal-judge-rules-in-favor-of-anthropic-on-ai-training-fair-use-sets-stage-for-key-trial/
A Patchwork of State AI Regulation Is Bad. A Moratorium Is Worse
(Kristin O’Donoghue – AI Frontiers – 26 June 2025) Since May, Congress has been debating an unprecedented proposal: a 10-year moratorium that would eliminate virtually all state and local AI policies across the nation. This provision, tucked into the “One Big Beautiful Bill,” would prohibit states from enacting or enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for the next decade. It’s not clear what version of the moratorium, if any, will become law. The House sent the One Big Beautiful Bill to the Senate’s Commerce Committee, where the moratorium has been subject to an ongoing debate and numerous revisions. The latest public Senate text — which could be voted on as early as Friday — ties the prohibition to the “Broadband Equity, Access, and Deployment” (BEAD) program, threatening to withhold billions of dollars in federal funds to expand broadband from states that choose to regulate AI. The provision’s language may still shift ahead of the Senate’s final vote. Once approved there, the bill must pass the House, receive President Trump’s signature, and then survive inevitable lawsuits from states claiming it’s unconstitutional. But whatever happens to this provision, the momentum to remove regulatory barriers on AI will persist. Amazon, Meta, Microsoft, and Google will continue to lobby for the laxest legislation possible, or none at all, now that such a move has entered the mainstream. It’s time to seriously consider the consequences of a federal moratorium. If Congress enacts this provision — or a similar one — it will grant dramatic power to the creators of a new and largely untested technology. The moratorium will halt state efforts to protect children from AI harms, hold developers accountable for algorithmic discrimination, and encourage transparency in the development and use of AI — all without supplying any federal standards in their place. – https://aifrontiersmedia.substack.com/p/congress-might-block-states-from
Protecting AI Whistleblowers
(Charlie Bullock, Mackenzie Arnold – Lawfare – 25 June 2025) In May 2024, OpenAI found itself at the center of a national controversy when news broke that the AI lab was pressuring departing employees to sign contracts with extremely broad nondisparagement and nondisclosure provisions—or else lose their vested equity in the company. This would essentially have required former employees to avoid criticizing OpenAI for the indefinite future, even on the basis of publicly known facts and nonconfidential information. Although OpenAI quickly apologized and promised not to enforce the provisions in question, the damage had already been done—a few weeks later, a number of current and former OpenAI and Google DeepMind employees signed an open letter calling for a “right to warn” about serious risks posed by AI systems, noting that “[o]rdinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.” The controversy over OpenAI’s restrictive exit paperwork helped convince a number of industry employees, commentators, and lawmakers of the need for new legislation to fill in gaps in existing law and protect AI industry whistleblowers from retaliation. This culminated recently in the AI Whistleblower Protection Act (AI WPA), a bipartisan bill introduced by Sen. Chuck Grassley (R-Iowa) along with a group of three Republican and three Democratic senators. Companion legislation was introduced in the House by Reps. Ted Lieu (D-Calif.) and Jay Obernolte (R-Calif.). Whistleblower protections such as the AI WPA are minimally burdensome, easy to implement and enforce, and plausibly useful for facilitating government access to the information needed to mitigate AI risks. They also have genuine bipartisan appeal, meaning there is actually some possibility of enacting them. As increasingly capable AI systems continue to be developed and adopted, it is essential that those most knowledgeable about any dangers posed by these systems be allowed to speak freely. – https://www.lawfaremedia.org/article/protecting-ai-whistleblowers
Beyond Bans: Expanding the Policy Options for Tech-Security Threats
(Geoffrey Gertz, Justin Sherman – Lawfare – 25 June 2025) In early April, President Trump granted TikTok another 75-day reprieve from its threatened ban in the United States. It is but the latest twist in a five-year, administration-spanning saga, in which the U.S. government has repeatedly threatened to ban the Chinese-owned app from the U.S. market if it is not sold to non-Chinese buyers—but has never followed through on such ultimatums. While the TikTok case has some unique challenges, it is part of a broader trend of using bans to address national security risks associated with Chinese technology in the United States. After Chinese company DeepSeek released an innovative new AI model, members of Congress were quick to initiate a conversation about whether to ban DeepSeek in the United States. The government has already announced measures to ban certain connected vehicles from China and is working on similar restrictions for Chinese drones; reports suggest certain Chinese routers could also be banned. Beyond China, the last administration also banned the Russian antivirus provider Kaspersky—another example of how the government is using national security authorities in the tech supply chain. There are plenty of real national security issues posed by technology from China and other foreign adversary countries across various elements of U.S. industries and tech supply chains. Such risks range from espionage, to “prepositioning” of malware (quietly putting malicious code in place that can be activated later), to increased leverage over U.S. supply chains, including for the defense industrial base. To better address this policy problem, however, the United States urgently needs to build policy toolkits—and policy muscles—beyond bans. Policy discourse about how to mitigate national security risks from a specific technology, such as a Chinese AI model or mobile app, all too often results in reductive conversations about whether or not to ban such technology. But this dichotomy leaves policymakers with an unappealing choice: Either ban any technology that poses a risk, or—if unwilling to follow through with an action as dramatic and costly as a ban—do nothing, and leave the American public exposed to potential national security risks as a result. American policymakers need a spectrum of responses to foreign technology risks that appropriately balance trade-offs in economic costs; Americans’ access to online services; supply chain entanglement; transparency; domestic imperatives like privacy and civil liberties; and the ability to convince allies and partners to act alongside the United States, where relevant. Such a toolkit—encompassing technical, governance, and commercial mitigation measures—at present often comes up short of a robust, comprehensive approach to contemporary tech supply chain and national security risks, leaving the U.S. vulnerable and policymakers without more tailored options to act on potential threats. – https://www.lawfaremedia.org/article/beyond-bans--expanding-the-policy-options-for-tech-security-threats
Geostrategies
Google launches AI Mode Search in India
(DigWatch – 25 June 2025) Google has launched its advanced AI Mode search experience in India, allowing users to explore information through more natural and complex interactions. The feature, previously available as an experiment in the US, can now be enabled in English via Search Labs, the platform where users test experimental tools and share feedback on early Google Search features. – https://dig.watch/updates/google-launches-ai-mode-search-in-india
Security
AI data risks prompt new global cybersecurity guidance
(DigWatch – 25 June 2025) A coalition of cybersecurity agencies, including the NSA, FBI, and CISA, has issued joint guidance to help organisations protect AI systems from emerging data security threats. The guidance explains how AI systems can be compromised by data supply chain flaws, poisoning, and drift. Organisations are urged to adopt security measures throughout all four phases of the AI life cycle: planning, data collection, model building, and operational monitoring. – https://dig.watch/updates/ai-data-risks-prompt-new-global-cybersecurity-guidance
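The joint guidance is prose, but the kinds of controls it gestures at can be sketched in code. Below is a minimal, hypothetical Python sketch of two such checks: pinning a dataset's hash to catch supply chain tampering before training, and a crude mean-shift alarm for data drift during operational monitoring. The function names, threshold, and drift metric are illustrative assumptions, not controls quoted from the NSA/FBI/CISA document.

```python
# Illustrative sketch only; names, threshold, and metric are assumptions,
# not taken from the joint NSA/FBI/CISA guidance.
import hashlib
import statistics


def verify_dataset_integrity(path: str, expected_sha256: str) -> bool:
    """Recompute a dataset file's SHA-256 and compare it to a pinned value,
    a basic defence against tampered or substituted training data."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256


def mean_shift_alarm(baseline: list[float], live: list[float],
                     threshold: float = 3.0) -> bool:
    """Crude drift check: flag when the live mean of a feature sits more
    than `threshold` baseline standard deviations from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) > threshold * sigma
```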
New report: major developments and trends on terrorism in Europe in 2024
(Europol – 24 June 2025) A total of 58 terrorist attacks were reported by 14 EU Member States in 2024. Of these, 34 were completed, 5 failed and 19 were foiled. Overall, 449 individuals were arrested for terrorism-related offences across 20 Member States. These numbers are sourced from Europol’s European Union Terrorism Situation and Trend Report 2025 (TE-SAT), published today. This flagship report – the only one of its kind in Europe – describes the major developments and trends in the terrorism landscape in the EU in 2024, based on qualitative and quantitative information provided by EU Member States and other Europol partners. – https://www.europol.europa.eu/media-press/newsroom/news/new-report-major-developments-and-trends-terrorism-in-europe-in-2024
Frontiers
Galbot Raises $153 Million to Expand Embodied AI Robots, Partners With Bosch Group Investment Arm
(AI Insider – 25 June 2025) Chinese robotics firm Galbot raised $153 million in a new funding round led by CATL and Puquan Capital, bringing total investment to over $330 million since 2023. The company, founded by Prof. He Wang, develops embodied AI systems that allow robots to perceive and interact with their environments, with applications in retail, automotive, and industrial settings. Galbot also formed a joint venture with Bosch Group’s Boyuan Capital to commercialize embodied AI robots globally, focusing on high-precision manufacturing tasks and advancing intelligent automation across sectors. – https://theaiinsider.tech/2025/06/25/galbot-raises-153-million-to-expand-embodied-ai-robots-partners-with-bosch-group-investment-arm/
World’s first cryo chip controls qubits at -273°C, powers leap in quantum computing
(Interesting Engineering – 25 June 2025) In a major advance for quantum computing, researchers at the University of Sydney have developed a cryogenic control chip that can operate directly next to quantum bits, or qubits, at near absolute zero. The breakthrough solves one of the biggest challenges in building large-scale quantum computers: keeping quantum information both stable and accessible. The research outlines a new chip design that can function at millikelvin temperatures, just above absolute zero, without disturbing the fragile quantum states. – https://interestingengineering.com/innovation/worlds-first-cryo-chip-controls-qubits-at-273c
AlphaGenome: New Google AI reads DNA mutations, predicts molecular consequences
(Interesting Engineering – 25 June 2025) In a big leap for genomics, Google on Wednesday unveiled a powerful AI model that predicts how single DNA mutations affect the complex machinery regulating gene activity. Named AlphaGenome, the tool covers both coding and non-coding regions of the genome, offering a unified view of variant effects like never before. It brings base-resolution insight to long-range genomic analysis, decoding the impact of mutations with speed, scale, and unprecedented depth. – https://interestingengineering.com/innovation/google-alphagenome-dna-variant-prediction-ai
New hypersonic computer model simulates gas, droplet particles flying at 3,836 mph
(Interesting Engineering – 25 June 2025) Two San Diego State University aerospace engineering researchers developed a new model in computational mathematics that could have widespread implications for hypersonic military aircraft. The model predicts how fuel droplets and gas particles behave in detonation waves, which occur in rocket engines and in scramjets flying at hypersonic speeds. However, the new model could also have applications for climate science and medicine. – https://interestingengineering.com/innovation/computer-model-simulates-particles-flying-3836-mph