Weekly Digest on AI and Emerging Technologies (30 June 2025)

Internet Governance Forum (IGF) 2025

Path forward for global digital cooperation debated at IGF 2025

(DigWatch – 27 June 2025) At the 20th Internet Governance Forum (IGF) in Lillestrøm, Norway, policymakers, civil society, and digital stakeholders gathered to chart the future of global internet governance through the WSIS+20 review. With a high-level UN General Assembly meeting scheduled for December, co-facilitators from Kenya and Albania emphasised the need to update the World Summit on the Information Society (WSIS) framework while preserving its original, people-centred vision. They underscored the importance of inclusive consultations, highlighting a new multistakeholder sounding board and upcoming joint sessions to enhance dialogue between governments and broader communities. The conversation revolved around the evolving digital landscape and how WSIS can adapt to emerging areas such as AI, data governance, and digital public infrastructure. – https://dig.watch/updates/path-forward-for-global-digital-cooperation-debated-at-igf-2025

Digital rights under threat: Global Majority communities call for inclusive solutions at IGF 2025

(DigWatch – 27 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, a pivotal session hosted by Oxfam’s RECIPE Project shed light on the escalating digital rights challenges facing communities across the Global Majority. Representatives from Vietnam, Bolivia, Cambodia, Somalia, and Palestine presented sobering findings based on research with over 1,000 respondents across nine countries. Despite the diversity of regions, speakers echoed similar concerns: digital literacy is dangerously low, access to safe and inclusive online spaces remains unequal, and legal protections for digital rights are often absent or underdeveloped. The human cost of digital inequality was made clear from Bolivia to Palestine. In Bolivia, over three-quarters of respondents had experienced digital security incidents, and many reported targeted violence linked to their roles as human rights defenders. – https://dig.watch/updates/digital-rights-under-threat-global-majority-communities-call-for-inclusive-solutions-at-igf-2025

Efforts to address internet fragmentation take centre stage at IGF 2025 in Norway

(DigWatch – 27 June 2025) On the final day of the Internet Governance Forum 2025 in Lillestrøm, Norway, stakeholders from governments, civil society, technical communities, and the private sector gathered to launch the new work cycle of the Policy Network on Internet Fragmentation (PNIF). Now entering its third year, the PNIF unveiled a structured framework to analyse internet fragmentation across three dimensions: user experience, internet governance coordination, and the technical infrastructure layer. The session emphasised the urgent need for international cooperation to counter growing fragmentation threats, as enshrined in paragraph 29C of the Global Digital Compact. Speakers raised alarm over how political and economic forces are reshaping the global internet. – https://dig.watch/updates/efforts-to-address-internet-fragmentation-take-centre-stage-at-igf-2025-in-norway

How can technical standards bridge or broaden the digital divide?

(DigWatch – 27 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, the Freedom Online Coalition convened a diverse panel to explore how technical standards shape global connectivity and inclusion. The session, moderated by Laura O’Brien, Senior International Counsel at Access Now, highlighted how open and interoperable standards can empower underserved communities. Divine Agbeti, Director General of the Cybersecurity Authority of Ghana, shared how mobile money systems helped bring over 80% of Ghana’s adult population into the digital financial fold—an example of how shared standards translate into real-world impact, especially across Africa. However, the conversation quickly turned to the systemic barriers that exclude many from the standard-setting process itself. – https://dig.watch/updates/how-can-technical-standards-bridge-or-broaden-the-digital-divide

IGF 2025: Africa charts a sovereign path for AI governance

(DigWatch – 27 June 2025) African leaders at the Internet Governance Forum (IGF) 2025 in Oslo called for urgent action to build sovereign and ethical AI systems tailored to local needs. Hosted by the German Federal Ministry for Economic Cooperation and Development (BMZ), the session brought together voices from government, civil society, and private enterprises. Moderated by Ashana Kalemera, Programmes Manager at CIPESA, the discussion focused on ensuring AI supports democratic governance in Africa. ‘We must ensure AI reflects our realities,’ Kalemera said, emphasising fairness, transparency, and inclusion as guiding principles. – https://dig.watch/updates/igf-2025-africa-charts-a-sovereign-path-for-ai-governance

Internet Governance Forum marks 20 years of reshaping global digital policy

(DigWatch – 26 June 2025) The 2025 Internet Governance Forum (IGF), held in Norway, offered a deep and wide-ranging reflection on the IGF’s 20-year journey in shaping digital governance and its prospects for the future. Bringing together voices from governments, civil society, the technical community, business, and academia, the session celebrated the IGF’s unique role in institutionalising a multistakeholder approach to internet policymaking, particularly through inclusive and non-binding dialogue. Moderated by Avri Doria, who has been with the IGF since its inception, the session focused on how the forum has influenced individuals, governments, and institutions across the globe. Doria described the IGF as a critical learning platform and a ‘home for evolving objectives’ that has helped connect people with vastly different viewpoints over the decades. – https://dig.watch/updates/internet-governance-forum-marks-20-years-of-reshaping-global-digital-policy

Bridging the digital divide through language inclusion

(DigWatch – 26 June 2025) At the Internet Governance Forum 2025 in Norway, a high-level panel of global experts highlighted the urgent need to embed language inclusion into internet governance and digital rights frameworks. While internet access has expanded globally, billions remain excluded from meaningful participation due to the continued dominance of a few major languages online. Moderated by Ram Mohan, Chief Strategy Officer of Identity Digital and Chair of the newly formed Coalition on Digital Impact (CODI), the session brought together speakers from ICANN, the Unicode Consortium, DotAsia, DOTAU, the National Telecom Regulatory Authority of Egypt, and other institutions. The consensus was clear: true digital inclusion is not possible without linguistic inclusion. – https://dig.watch/updates/bridging-the-digital-divide-through-language-inclusion

Child safety online in 2025: Global leaders demand stronger rules

(DigWatch – 26 June 2025) At the 20th Internet Governance Forum in Lillestrøm, Norway, global leaders, technology firms, and child rights advocates gathered to address the growing risks children face from algorithm-driven digital platforms. The high-level session, Ensuring Child Security in the Age of Algorithms, explored the impact of engagement-based algorithmic systems on children’s mental health, cultural identity, and digital well-being. Shivanee Thapa, Senior News Editor at Nepal Television and moderator of the session, opened with a personal note on the urgency of the issue, calling it ‘too urgent, too complex, and too personal’. She outlined the session’s three focus areas: identifying algorithmic risks, reimagining child-centred digital systems, and defining accountability for all stakeholders. – https://dig.watch/updates/children-safety-online-in-2025-global-leaders-demand-stronger-rules

IGF leadership panel explores future of digital governance

(DigWatch – 26 June 2025) As the Internet Governance Forum (IGF) prepares to mark its 20th anniversary, members of the IGF Leadership Panel gathered in Norway to present a strategic vision for strengthening the forum’s institutional role and ensuring greater policy impact. The session explored proposals to make the IGF a permanent UN institution, improve its output relevance for policymakers, and enhance its role in implementing outcomes from WSIS+20 and the Global Digital Compact. – https://dig.watch/updates/igf-leadership-panel-explores-future-of-digital-governance

IGF and WSIS platforms must be strengthened, not replaced, say leaders

(DigWatch – 26 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, stakeholders gathered to assess the International Telecommunication Union’s (ITU) role in the WSIS+20 review process. The session, moderated by Cynthia Lesufi of South Africa, invited input on the achievements and future direction of the World Summit on the Information Society (WSIS), now marking its 20th year. Speakers from Brazil, Australia, Korea, Germany, Japan, Cuba, South Africa, Ghana, Nigeria, and Bangladesh offered their national and regional insights. – https://dig.watch/updates/igf-and-wsis-platforms-must-be-strengthened-not-replaced-say-leaders

Tower of Babel reimagined: IGF 2025 experiment highlights language barriers in internet governance

(DigWatch – 26 June 2025) At the 2025 Internet Governance Forum in Lillestrøm, Norway, an unconventional session titled ‘Tower of Babel Chaos’ challenged the norm of using English as the default language in global digital policy discussions. Moderator Virginia Paque, Senior Policy Editor of Diplo and the only native English speaker among the participants, suspended English as the session’s required language and encouraged attendees to define internet governance and interact in their own native tongues. – https://dig.watch/updates/tower-of-babel-reimagined-igf-2025-experiment-highlights-language-barriers-in-internet-governance

WSIS+20 prepares for Geneva as momentum builds for impactful digital governance

(DigWatch – 25 June 2025) As preparations intensify for the World Summit on the Information Society (WSIS+20) high-level event, scheduled for 7–11 July in Geneva, stakeholders from across sectors gathered at the Internet Governance Forum in Norway to reflect on WSIS’s evolution and map a shared path forward. The session, moderated by Gitanjali Sah of ITU, brought together over a dozen speakers from governments, UN agencies, civil society, and the technical and business communities. The event marks a crucial milestone, coming two decades after the WSIS process began; in that time, WSIS has grown into a multistakeholder framework involving more than 50 UN entities. While the action lines offer a structured and inclusive approach to digital cooperation, participants acknowledged that measurement and implementation remain the weakest links. – https://dig.watch/updates/wsis20-prepares-for-geneva-as-momentum-builds-for-impactful-digital-governance

AI sandboxes pave path for responsible innovation in developing countries

(DigWatch – 26 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts from around the world gathered to examine how AI sandboxes—safe, controlled environments for testing new technologies under regulatory oversight—can help ensure that innovation remains responsible and inclusive, especially in developing countries. Moderated by Sophie Tomlinson of the DataSphere Initiative, the session spotlighted the growing global appeal of sandboxes, initially developed for fintech, and now extending into healthcare, transportation, and data governance. Speakers emphasised that sandboxes provide a much-needed collaborative space for regulators, companies, and civil society to test AI solutions before launching them into the real world. Mariana Rozo-Paz from the DataSphere Initiative likened them to childhood spaces for building and experimentation, underscoring their agility and potential for creative governance. – https://dig.watch/updates/ai-sandboxes-pave-path-for-responsible-innovation-in-developing-countries

UNESCO and ICANN lead push for multilingual and inclusive internet governance

(DigWatch – 26 June 2025) At the 2025 Internet Governance Forum in Lillestrøm, Norway, experts gathered to discuss how to better involve diverse communities—especially indigenous and underrepresented groups—in the technical governance of the internet. The session, led by Niger’s Anne Rachel Inne, emphasised that meaningful participation requires more than token inclusion; it demands structural reforms and practical engagement tools. Central to the dialogue was the role of multilingualism, which UNESCO’s Guilherme Canela de Souza described as both a right and a necessity for true digital inclusion. ICANN’s Theresa Swinehart spotlighted ‘Universal Acceptance’ as a tangible step toward digital equality, ensuring that domain names and email addresses work in all languages and scripts. – https://dig.watch/updates/unesco-and-icann-lead-push-for-multilingual-and-inclusive-internet-governance

Cybercrime in Africa: Turning research into justice and action

(DigWatch – 26 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and policymakers gathered to confront the escalating issue of cybercrime across Africa, marked by the launch of the research report ‘Access to Justice in the Digital Age: Empowering Victims of Cybercrime in Africa’, co-organised by UNICRI and ALT Advisory.  Based on experiences in South Africa, Namibia, Sierra Leone, and Uganda, the study highlights a troubling rise in cybercrime, much of which remains invisible due to widespread underreporting, institutional weaknesses, and outdated or absent legal frameworks. The report’s author, Tina Power, underscored the need to recognise cybercrime not merely as a technical challenge, but as a profound justice issue. – https://dig.watch/updates/cybercrime-in-africa-turning-research-into-justice-and-action

AI and the future of work: Global forum highlights risks, promise, and urgent choices

(DigWatch – 25 June 2025) At the 20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathered for a high-level session exploring how AI is transforming the world of work. While the tone was broadly optimistic, participants wrestled with difficult questions about equity, regulation, and the ethics of data use. AI’s capacity to enhance productivity, reshape industries, and bring solutions to health, education, and agriculture was celebrated, but sharp divides emerged over how to govern and share its benefits. Concrete examples showcased AI’s positive impact. Norway’s government highlighted AI’s role in green energy and public sector efficiency, while Lesotho’s minister shared how AI helps detect tuberculosis and support smallholder farmers through localised apps. AI addresses systemic shortfalls in healthcare by reducing documentation burdens and enabling earlier diagnosis. Corporate representatives from Meta and OpenAI showcased tools that personalise education, assist the visually impaired, and democratise advanced technology through open-source platforms. – https://dig.watch/updates/ai-and-the-future-of-work-global-forum-highlights-risks-promise-and-urgent-choices

IGF panel urges rethinking internet governance amid rising geopolitical tensions

(DigWatch – 25 June 2025) At the 2025 Internet Governance Forum in Lillestrøm, Norway, a session led by the German Federal Ministry for Digital Transformation spotlighted a bold foresight exercise imagining how global internet governance could evolve by 2040. Co-led by researcher Julia Pohler, the initiative involved a diverse 15-member German task force and interviews with international experts, including Anriette Esterhuysen and Gbenga Sesan. Their work yielded four starkly different future scenarios, ranging from intensified geopolitical rivalry and internet fragmentation to overregulation and a transformative turn toward treating the internet as a public good. A central takeaway was the resurgence of state power as a dominant force shaping digital futures. – https://dig.watch/updates/igf-panel-urges-rethinking-internet-governance-amid-rising-geopolitical-tensions

Advancing digital identity in Africa while safeguarding sovereignty

(DigWatch – 25 June 2025) A pivotal discussion on digital identity and sovereignty in developing countries unfolded at the Internet Governance Forum 2025 in Norway. The session, co-hosted by CityHub and AFICTA (Africa ICT Alliance), brought together experts from Africa, Asia, and Europe to explore how digital identity systems can foster inclusion, support cross-border services, and remain anchored in national sovereignty. Speakers emphasised that digital identity is foundational for bridging the digital divide and fostering economic development. Dr Jimson Olufuye, Chair of AFICTA, stressed the existential nature of identity in the digital age, noting, ‘If you cannot identify anybody, it means the person does not exist.’ He linked identity inclusion directly to the World Summit on the Information Society (WSIS) action lines and the Global Digital Compact goals. – https://dig.watch/updates/advancing-digital-identity-in-africa-while-safeguarding-sovereignty

AU Open Forum at IGF 2025 highlights urgent need for action on Africa’s digital future

(DigWatch – 25 June 2025) At the 2025 Internet Governance Forum in Lillestrøm, Norway, the African Union’s Open Forum served as a critical platform for African stakeholders to assess the state of digital governance across the continent. The forum featured updates from the African Union Commission, the UN Economic Commission for Africa (UNECA), and voices from governments, civil society, youth, and the private sector. The tone was constructive yet urgent, with leaders stressing the need to move from declarations to implementation on long-standing issues like digital inclusion, infrastructure, and cybersecurity. Dr Mactar Seck of UNECA highlighted key challenges slowing Africa’s digital transformation, including policy fragmentation, low internet connectivity (just 38% continent-wide), and high service costs. – https://dig.watch/updates/au-open-forum-at-igf-2025-highlights-urgent-need-for-action-on-africas-digital-future

AI governance efforts centre on human rights

(DigWatch – 25 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, a key session spotlighted the launch of the Freedom Online Coalition’s (FOC) updated Joint Statement on Artificial Intelligence and Human Rights. Backed by 21 countries and counting, the statement outlines a vision for human-centric AI governance rooted in international human rights law. Representatives from governments, civil society, and the tech industry—most notably the Netherlands, Germany, Ghana, Estonia, and Microsoft—gathered to emphasise the urgent need for a collective, multistakeholder approach to tackle the real and present risks AI poses to rights such as privacy, freedom of expression, and democratic participation. – https://dig.watch/updates/ai-governance-efforts-centre-on-human-rights

Civil society pushes back against cyber law misuse at IGF 2025

(DigWatch – 25 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, a vibrant panel of civil society leaders warned that cyber laws, initially designed to combat real security threats, are increasingly being weaponised by governments to restrict civic space. Representatives from across Africa, Latin America, the Middle East, and Asia shared strikingly similar experiences: the use of vague and overly broad legal terms, executive dominance in lawmaking, and lack of meaningful public consultation have turned cyber legislation into a tool for silencing dissent, particularly targeting journalists, activists, and marginalised communities. – https://dig.watch/updates/civil-society-pushes-back-against-cyber-law-misuse-at-igf-2025

Global consensus grows on inclusive and cooperative AI governance at IGF 2025

(DigWatch – 25 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, the ‘Building an International AI Cooperation Ecosystem’ session spotlighted the urgent need for international collaboration to manage AI’s transformative impact. Hosted by China’s Cyberspace Administration, the session featured a global roster of experts who emphasised that AI is no longer a niche or elite technology, but a powerful and widely accessible force reshaping economies, societies, and governance frameworks. China’s Cyberspace Administration Director-General Qi Xiaoxia opened the session by stressing her country’s leadership in AI innovation, citing that over 60% of global AI patents originate from China. She proposed a cooperative agenda focused on sustainable development, managing AI risks, and building international consensus through multilateral collaboration. – https://dig.watch/updates/global-consensus-grows-on-inclusive-and-cooperative-ai-governance-at-igf-2025

Parliamentarians call for stronger platform accountability and human rights protections at IGF 2025

(DigWatch – 25 June 2025) At the 2025 Internet Governance Forum in Lillestrøm, Norway, parliamentarians from around the world gathered to share perspectives on how to regulate harmful online content without infringing on freedom of expression and democratic values. The session, moderated by Sorina Teleanu, Diplo’s Director of Knowledge, highlighted the increasing urgency for social media platforms to respond more swiftly and responsibly to harmful content, particularly content generated by AI that can lead to real-world consequences such as harassment, mental health issues, and even suicide. – https://dig.watch/updates/parliamentarians-call-for-stronger-platform-accountability-and-human-rights-protections-at-igf-2025

EuroDIG outcomes shared at IGF 2025 session in Norway

(DigWatch – 25 June 2025) At the Internet Governance Forum (IGF) 2025 in Norway, a high-level networking session was held to share key outcomes from the 18th edition of the European Dialogue on Internet Governance (EuroDIG), which took place earlier this year from 12–14 May in Strasbourg, France. Hosted by the Council of Europe and supported by the Luxembourg Presidency of the Committee of Ministers, the Strasbourg conference centred on balancing innovation and regulation, strongly focusing on safeguarding human rights in digital policy. – https://dig.watch/updates/eurodig-outcomes-shared-at-igf-2025-session-in-norway

WSIS+20 review highlights gaps in digital access and skills

(DigWatch – 25 June 2025) Experts gathered at the Internet Governance Forum 2025 in Norway to assess progress since the World Summit on the Information Society (WSIS) was launched two decades ago. The session, co-hosted by the Government of Finland and ICANN, offered a timely stocktake ahead of the WSIS+20 negotiations in December 2025. Panellists emphasised that WSIS has successfully anchored multistakeholder participation in internet governance. Yet, pressing challenges persist, particularly the digital divide, gender gaps, and lack of basic digital skills—issues that remain just as urgent now as in 2005. – https://dig.watch/updates/wsis20-review-highlights-gaps-in-digital-access-and-skills

World gathers in Norway to shape digital future

(DigWatch – 24 June 2025) The Internet Governance Forum (IGF) 2025 opened in Lillestrøm, Norway, marking its 20th anniversary and coinciding with the World Summit on the Information Society Plus 20 (WSIS+20) review. UN Secretary-General António Guterres, in a video message, underscored that digital cooperation has shifted from aspiration to necessity. He highlighted global challenges such as the digital divide, online hate speech, and concentrated tech power, calling for immediate action to ensure a more equitable digital future. – https://dig.watch/updates/world-gathers-in-norway-to-shape-digital-future

Protecting the vulnerable online: Global lawmakers push for new digital safety standards

(DigWatch – 24 June 2025) At the 2025 Internet Governance Forum in Lillestrøm, Norway, a parliamentary session titled ‘Click with Care: Protecting Vulnerable Groups Online’ gathered lawmakers, regulators, and digital rights experts from around the world to confront the urgent issue of online harm targeting marginalised communities. Speakers from Uganda, the Philippines, Malaysia, Pakistan, the Netherlands, Portugal, and Kenya shared insights on how current laws often fall short, especially in the Global South where women, children, and LGBTQ+ groups face disproportionate digital threats. Research presented showed alarming trends—one in three African women experience online abuse, often with no support or recourse, and platforms’ moderation systems are frequently inadequate, slow, or biased in favour of users from the Global North. – https://dig.watch/updates/protecting-the-vulnerable-online-global-lawmakers-push-for-new-digital-safety-standards

Global South pushes for digital inclusion

(DigWatch – 24 June 2025) At the 2025 Internet Governance Forum in Lillestrøm, Norway, global leaders, youth delegates, and digital policymakers convened to confront one of the most pressing challenges of the digital age: bridging the digital divide in the Global South. UN Under-Secretary-General Li Junhua highlighted that while connectivity has improved since 2015, 2.6 billion people—primarily in the least developed countries—remain offline. The issue, however, is no longer just about cables and coverage. It now includes access to affordable devices, digital literacy, and the skills needed to navigate the internet safely and meaningfully. – https://dig.watch/updates/global-south-pushes-for-digital-inclusion

Big Tech’s grip on information sparks urgent debate at IGF 2025 in Norway

(DigWatch – 24 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, global leaders, tech executives, civil society figures, and academics converged for a high-level session to confront one of the digital age’s most pressing dilemmas: how to protect democratic discourse and human rights amid big tech’s tightening control over the global information space. The session, titled ‘Losing the Information Space?’, tackled the rising threat of disinformation, algorithmic opacity, and the erosion of public trust, all amplified by powerful AI technologies. Norwegian Minister Lubna Jaffery sounded the alarm, referencing the annulled Romanian presidential election as a stark reminder of how influence operations and AI-driven disinformation campaigns can destabilise democracies. She warned that while platforms have democratised access to expression, they’ve also created fragmented echo chambers and supercharged the spread of propaganda. – https://dig.watch/updates/big-techs-grip-on-information-sparks-urgent-debate-at-igf-2025-in-norway

Small states, big ambitions: How startups and nations are shaping the future of AI

(DigWatch – 24 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, a dynamic discussion unfolded on how small states and startups can influence the global AI landscape. The session, hosted by Norway, challenged the notion that only tech giants can shape AI’s future. Instead, it presented a compelling vision of innovation rooted in agility, trust, contextual expertise, and collaborative governance. Norway’s Digitalisation Minister, Karianne Tung, outlined her country’s ambition to become the world’s most digitalised nation by 2030, citing initiatives like the Olivia supercomputer and open-access language models tailored to Norwegian society. Startups such as Cognite showcased how domain-specific data—particularly in energy and industry—can give smaller players a strategic edge. – https://dig.watch/updates/small-states-big-ambitions-how-startups-and-nations-are-shaping-the-future-of-ai

Parliamentarians at IGF 2025 call for action on information integrity

(DigWatch – 23 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, global lawmakers and experts gathered to confront one of the most pressing challenges of our digital era: the societal impact of misinformation and disinformation, especially amid the rapid advance of AI. Framed by the UN Global Principles for Information Integrity, the session spotlighted the urgent need for resilient, democratic responses to online erosion of public trust. AI’s disruptive power took centre stage, with speakers citing alarming trends—deepfakes manipulated global election narratives in over a third of national polls in 2024 alone. Experts like Lindsay Gorman from the German Marshall Fund warned of a polluted digital ecosystem where fabricated video and audio now threaten core democratic processes. – https://dig.watch/updates/parliamentarians-at-igf-2025-call-for-action-on-information-integrity

Africa reflects on 20 years of WSIS at IGF 2025

(DigWatch – 23 June 2025) At the Internet Governance Forum (IGF) 2025, a high-level session brought together African government officials, private sector leaders, civil society advocates, and international experts to reflect on two decades of the continent’s engagement in the World Summit on the Information Society (WSIS) process. Moderated by Mactar Seck of the UN Economic Commission for Africa, the WSIS+20 Africa review highlighted both remarkable progress and ongoing challenges in digital transformation. Seck opened the discussion with a snapshot of Africa’s connectivity leap from 2.6% in 2005 to 38% today. Yet, he warned, ‘Cybersecurity costs Africa 10% of its GDP,’ underscoring the urgency of coordinated investment and inclusion. Emphasising multi-stakeholder collaboration, he called for ‘inclusive policy-making across government, private sector, academia and civil society,’ aligned with frameworks such as the AU Digital Strategy and the Global Digital Compact. – https://dig.watch/updates/africa-reflects-on-20-years-of-wsis-at-igf-2025

Rethinking AI in journalism with global cooperation

(DigWatch – 23 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, a vibrant multistakeholder session spotlighted the ethical dilemmas of AI in journalism and digital content. The event was hosted by R&W Media and introduced the Haarlem Declaration, a global initiative to promote responsible AI practices in digital media. Central to the discussion was the unveiling of an ‘ethical AI checklist’, designed to help organisations uphold human rights, transparency, and environmental responsibility while navigating AI’s expanding role in content creation. Speakers emphasised a people-centred approach to AI, advocating for tools that support rather than replace human decision-making. – https://dig.watch/updates/rethinking-ai-in-journalism-with-global-cooperation

Lawmakers at IGF 2025 call for global digital safeguards

(DigWatch – 23 June 2025) At the Internet Governance Forum (IGF) 2025 in Norway, a high‑level parliamentary roundtable convened global lawmakers to tackle the pressing challenge of digital threats to democracy. Led by moderator Nikolis Smith, the discussion included Martin Chungong, Secretary‑General of the Inter‑Parliamentary Union (via video), and MPs from Norway, Kenya, California, Barbados, and Tajikistan. The central concern was how AI, disinformation, deepfakes, and digital inequality jeopardise truth, electoral integrity, and public trust. – https://dig.watch/updates/lawmakers-at-igf-2025-call-for-global-digital-safeguards

Participants from 170 countries meet in Norway for the annual Internet Governance Forum

(Council of Europe – 23 June 2025) The Council of Europe is taking part in the 20th UN Internet Governance Forum (IGF) in Lillestrøm, Norway (23–27 June), focusing on ensuring human rights in the age of big tech, advancing equality and inclusion in AI, and countering disinformation and threats to democratic dialogue. – https://www.coe.int/en/web/portal/-/participants-from-170-countries-meet-in-norway-for-the-annual-internet-governance-forum

A unified call for a stronger digital future at IGF 2025

(DigWatch – 23 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, global stakeholders converged to shape the future of digital governance by aligning the Internet Governance Forum (IGF) with the World Summit on the Information Society (WSIS) Plus 20 review and the Global Digital Compact (GDC) follow-up. Moderated by Yoichi Iida, former Vice Minister at Japan’s Ministry of Internal Affairs and Communications, the session featured high-level representatives from governments, international organisations, the business sector, and youth networks, all calling for a stronger, more inclusive, better-resourced IGF. – https://dig.watch/updates/a-unified-call-for-a-stronger-digital-future-at-igf-2025

Cybersecurity vs freedom of expression: IGF 2025 panel calls for balanced, human-centred digital governance

(DigWatch – 23 June 2025) At the 2025 Internet Governance Forum in Lillestrøm, Norway, experts from government, civil society, and the tech industry convened to discuss one of the thorniest challenges of the digital age: how to secure cyberspace without compromising freedom of expression and fundamental human rights. The session, moderated by terrorism survivor and activist Bjørn Ihler, revealed a shared urgency across sectors to move beyond binary thinking and craft nuanced, people-centred approaches to online safety. – https://dig.watch/updates/cybersecurity-vs-freedom-of-expression-igf-2025-panel-calls-for-balanced-human-centred-digital-governance

How ROAMX helps bridge the digital divide

(DigWatch – 23 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and stakeholders gathered to assess the progress of UNESCO’s ROAMX framework, a tool for evaluating digital development through the lenses of Rights, Openness, Accessibility, Multi-stakeholder participation, and cross-cutting issues such as gender equality and sustainability. Since its introduction in 2018, and with the rollout of new second-generation indicators in 2024, ROAMX has helped countries align their digital policies with global standards like the WSIS and Sustainable Development Goals. – https://dig.watch/updates/how-roamx-helps-bridge-the-digital-divide

Civil society pushes for digital rights and justice in WSIS+20 review at IGF 2025

(DigWatch – 23 June 2025) At a packed session during Day 0 of the Internet Governance Forum 2025 in Lillestrøm, Norway, civil society leaders gathered to strategise how the upcoming WSIS+20 review can deliver on the promise of digital rights and justice. Organised by the Global Digital Justice Forum and the Global Digital Rights Coalition for WSIS, the brainstorming session brought together voices from across the globe to assess the ‘elements paper’ recently issued by review co-facilitators from Albania and Kenya. Anna Oosterlinck of ARTICLE 19 opened the session by noting significant gaps in the current draft, especially in its treatment of human rights and multistakeholder governance. – https://dig.watch/updates/civil-society-pushes-for-digital-rights-and-justice-in-wsis20-review-at-igf-2025

Grassroots internet governance faces crossroads at IGF 2025

(DigWatch – 23 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, the IGF Support Association convened a critical session addressing the long-term sustainability of National and Regional Internet Initiatives (NRIs). With over 170 NRIs worldwide playing a key role in connecting local voices to global internet policy, participants discussed how a potential renewal of the IGF’s UN mandate might influence their operations. While many, including internet pioneer Vint Cerf, welcomed the idea of institutional stability through UN backing, most agreed it wouldn’t automatically resolve the chronic funding and legitimacy challenges NRIs face on the ground. A recurring concern was the disconnect between expectations and resources. – https://dig.watch/updates/grassroots-internet-governance-faces-crossroads-at-igf-2025

Spyware accountability demands Global South leadership at IGF 2025

(DigWatch – 23 June 2025) At the Internet Governance Forum 2025 in Lillestrøm, Norway, a powerful roundtable titled ‘Spyware Accountability in the Global South’ brought together experts, activists, and policymakers to confront the growing threat of surveillance technologies in the world’s most vulnerable regions. Moderated by Nighat Dad of Pakistan’s Digital Rights Foundation, the session featured diverse perspectives from Mexico, India, Lebanon, the UK, and the private sector, each underscoring how spyware like Pegasus has been weaponised to target journalists, human rights defenders, and civil society actors across Latin America, South Asia, and the Middle East. – https://dig.watch/updates/spyware-accountability-demands-global-south-leadership-at-igf-2025

WGIG reunion sparks calls for reform at IGF 2025 in Norway

(DigWatch – 23 June 2025) At the Internet Governance Forum (IGF) 2025 in Lillestrøm, Norway, a reunion of the original Working Group on Internet Governance (WGIG) marked a significant moment of reflection and reckoning for global digital governance. Commemorating the 20th anniversary of WGIG’s formation, the session brought together pioneers of the multistakeholder model that reshaped internet policy discussions during the World Summit on the Information Society (WSIS). – https://dig.watch/updates/wgig-reunion-sparks-calls-for-reform-at-igf-2025-in-norway

Governance and Legislation

Le Chat leads AI privacy ranking report

(DigWatch – 27 June 2025) A new report has revealed that Le Chat from Mistral AI is the most privacy-respecting generative AI service, with ChatGPT and Grok close behind. The study by Incogni assessed nine popular services against 11 criteria covering data use, sharing and transparency. Meta AI came last, flagged for poor privacy practices and extensive data sharing. According to the findings, Gemini and Copilot also performed poorly in protecting user privacy. – https://dig.watch/updates/le-chat-leads-ai-privacy-ranking-report

Exploring Potential Impacts of Global Shifts on Financial Systems and Assets

(S. Yash Kalash – Centre for International Governance Innovation – 26 June 2025)  S. Yash Kalash explores the evolving relationship between geopolitical transformations and technological innovation, and their combined impact on the future of global financial assets and systems. Through a scenario-based methodology, he examines three plausible geopolitical configurations — a multipolar order, a fragmented global economy and a renewed cooperative global governance framework — and assesses their implications for currency stability, investment patterns and market volatility. The analysis then integrates technological developments — particularly in artificial intelligence, blockchain and decentralized finance — into each geopolitical scenario to evaluate how these technologies may amplify, stabilize or disrupt global financial systems. While emerging technologies offer potential for financial inclusion and efficiency, they also risk exacerbating fragmentation if regulatory divergence and infrastructure disparities persist. Drawing on current trends and foresight tools, the paper concludes with recommendations for strategic policy responses focused on regulatory flexibility, global coordination, infrastructure investment and cybersecurity. These measures aim to help states, financial institutions and multilateral bodies navigate the increasing intersection of geopolitical uncertainty and rapid technological change. – https://www.cigionline.org/publications/exploring-potential-impacts-of-global-shifts-on-financial-systems-and-assets/

Bipartisan bill seeks to ban federal agencies from using DeepSeek, AI tools from ‘foreign adversaries’

(Jonathan Greig – The Record – 26 June 2025) A pair of senators introduced a bill on Wednesday that would ban federal agencies from using artificial intelligence tools produced in countries considered “foreign adversaries” — a term that legally covers Russia, China, Iran and North Korea. The No Adversarial AI Act would create a federal list of AI tools produced by companies based in Russia, China, Iran and North Korea, and prohibit U.S. agencies from using them. The legislation is from Sens. Rick Scott (R-FL) and Gary Peters (D-MI). Several House members introduced a corresponding bill as well. – https://therecord.media/bipartisan-bill-ban-deepseek-federal

Narrowing the National Security Exception to Federal AI Guardrails

(Amos Toh – Lawfare – 26 June 2025) Immediately upon taking office, President Trump replaced the Biden administration’s executive order on artificial intelligence (AI) with his own, directing agencies to roll back any regulation or policy that poses “barriers to American AI innovation.” But this deregulatory push has taken an unexpected turn. In April, the White House released a pair of memoranda on using and acquiring AI across the federal government, which many had feared would gut Biden-era safeguards seeking to ensure that the technology is safe, effective, and trustworthy. But the memos uphold many of these safeguards, recognizing that AI innovation cannot come “at the expense of the American people or any violations of their trust.” This commitment to public trust should also lead the administration to level up the rules governing national security applications of the technology, which lag far behind the recently released memos. Congress should also pass legislation codifying safeguards and provide mechanisms for enforcement and oversight.   – https://www.lawfaremedia.org/article/narrowing-the-national-security-exception-to-federal-ai-guardrails

EU urged to pause AI Act rollout

(DigWatch – 26 June 2025) The digital sector is urging EU leaders to delay the AI Act, citing missing guidance and legal uncertainty. Industry group CCIA Europe warns that pressing ahead could damage AI innovation and stall the bloc’s economic ambitions. The AI Act’s rules for general-purpose AI models are set to apply in August, but key frameworks are incomplete. Concerns have grown as the European Commission risks missing deadlines while the region seeks a €3.4 trillion AI-driven economic boost by 2030. – https://dig.watch/updates/eu-urged-to-pause-ai-act-rollout

Taiwan leads in AI election defence efforts

(DigWatch – 26 June 2025) Taiwan has been chosen to lead a new coalition formed by the International Foundation for Electoral Systems to strengthen democratic resilience against AI-driven disinformation. The AI Advisory Group on Elections will unite policymakers and experts to address AI’s role in protecting fair elections. The island’s experience has made it a key voice in global AI governance as it counters sophisticated disinformation campaigns linked to authoritarian regimes. Taiwan’s Cyber Ambassador, Audrey Tang, stressed that AI must serve the greater good and help build accountable digital societies. – https://dig.watch/updates/taiwan-leads-in-ai-election-defence-efforts

Bosch calls for balanced AI rules in Europe

(DigWatch – 26 June 2025) Bosch CEO Stefan Hartung has cautioned that Europe could slow its progress in AI by imposing too many regulations. Speaking at a tech conference in Stuttgart, he argued that strict and unclear rules make the region less attractive for innovation. Bosch, which holds the largest number of AI patents in Europe, plans to invest 2.5 billion euros in AI development by the end of 2027. The company is focusing on AI solutions for autonomous vehicles and industrial efficiency. – https://dig.watch/updates/bosch-calls-for-balanced-ai-rules-in-europe

Report: Quantum Tech Could Add $8.5 Billion and 20,000 Jobs to South Carolina Economy

(Quantum Insider – 25 June 2025) A new economic report projects that quantum technologies could generate $8.5 billion in economic output and nearly 20,000 new jobs in South Carolina, with broader regional gains reaching $32.9 billion. The study, conducted by Dr. Joseph Von Nessen with SC Quantum and the University of South Carolina, links quantum adoption to productivity increases across key industries such as manufacturing and logistics. The analysis estimates a 5.7% average productivity boost for South Carolina’s leading sectors, highlighting the state’s strong alignment with emerging quantum applications. – https://thequantuminsider.com/2025/06/25/report-quantum-tech-could-add-8-5-billion-and-20000-jobs-to-south-carolina-economy/

Emerging divides in the transition to artificial intelligence

(OECD – 25 June 2025) Business adoption of artificial intelligence accelerated markedly in 2023-24, driven by generative AI. Some places, sectors and firms have been faster in the uptake, so that gaps are forming and reinforcing existing cleavages. AI champions have stood out in the most innovative countries and regions, among larger firms and in knowledge-intensive services. AI is being used as a business solution for greater competitiveness. Applications are manifold and context-specific, often tied to local conditions for diffusion. However, legal and data protection concerns, alongside skills shortages, cost or technology lock-ins, can slow adoption, contributing to emerging divides. – https://www.oecd.org/en/publications/emerging-divides-in-the-transition-to-artificial-intelligence_7376c776-en.html

Federal Judge Rules in Favor of Anthropic on AI Training Fair Use, Sets Stage for Key Trial

(AI Insider – 25 June 2025) In a landmark decision, U.S. District Judge William Alsup ruled that Anthropic did not violate copyright law by training its AI models on published books without author permission, affirming the company’s argument that such use falls under the fair use doctrine. This marks the first significant judicial endorsement of AI companies’ right to train large language models using copyrighted materials. – https://theaiinsider.tech/2025/06/25/federal-judge-rules-in-favor-of-anthropic-on-ai-training-fair-use-sets-stage-for-key-trial/

A Patchwork of State AI Regulation Is Bad. A Moratorium Is Worse

(Kristin O’Donoghue  – AI Frontiers – 26 June 2025) Since May, Congress has been debating an unprecedented proposal: a 10-year moratorium that would eliminate virtually all state and local AI policies across the nation. This provision, tucked into the “One Big Beautiful Bill,” would prohibit states from enacting or enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for the next decade. It’s not clear what version of the moratorium, if any, will become law. The House sent the One Big Beautiful Bill to the Senate’s Commerce Committee, where the moratorium has been subject to an ongoing debate and numerous revisions. The latest public Senate text — which could be voted on as early as Friday — ties the prohibition to the “Broadband Equity, Access, and Deployment” (BEAD) program, threatening to withhold billions of dollars in federal funds to expand broadband from states that choose to regulate AI. The provision’s language may still shift ahead of the Senate’s final vote. Once approved there, the bill must pass the House, receive President Trump’s signature, and then survive inevitable lawsuits from states claiming it’s unconstitutional. But whatever happens to this provision, the momentum to remove regulatory barriers on AI will persist. Amazon, Meta, Microsoft, and Google will continue to lobby for the laxest legislation possible, or none at all, now that such a move has entered the mainstream. It’s time to seriously consider the consequences of a federal moratorium. If Congress enacts this provision — or a similar one — it will grant dramatic power to the creators of a new and largely untested technology. The moratorium will halt state efforts to protect children from AI harms, hold developers accountable for algorithmic discrimination, and encourage transparency in the development and use of AI — all without supplying any federal standards in their place. – https://aifrontiersmedia.substack.com/p/congress-might-block-states-from

Protecting AI Whistleblowers

(Charlie Bullock, Mackenzie Arnold – Lawfare – 25 June 2025) In May 2024, OpenAI found itself at the center of a national controversy when news broke that the AI lab was pressuring departing employees to sign contracts with extremely broad nondisparagement and nondisclosure provisions—or else lose their vested equity in the company. This would essentially have required former employees to avoid criticizing OpenAI for the indefinite future, even on the basis of publicly known facts and nonconfidential information. Although OpenAI quickly apologized and promised not to enforce the provisions in question, the damage had already been done—a few weeks later, a number of current and former OpenAI and Google DeepMind employees signed an open letter calling for a “right to warn” about serious risks posed by AI systems, noting that “[o]rdinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.” The controversy over OpenAI’s restrictive exit paperwork helped convince a number of industry employees, commentators, and lawmakers of the need for new legislation to fill in gaps in existing law and protect AI industry whistleblowers from retaliation. This culminated recently in the AI Whistleblower Protection Act (AI WPA), a bipartisan bill introduced by Sen. Chuck Grassley (R-Iowa) along with a group of three Republican and three Democratic senators. Companion legislation was introduced in the House by Reps. Ted Lieu (D-Calif.) and Jay Obernolte (R-Calif.). Whistleblower protections such as the AI WPA are minimally burdensome, easy to implement and enforce, and plausibly useful for facilitating government access to the information needed to mitigate AI risks. They also have genuine bipartisan appeal, meaning there is actually some possibility of enacting them. As increasingly capable AI systems continue to be developed and adopted, it is essential that those most knowledgeable about any dangers posed by these systems be allowed to speak freely. – https://www.lawfaremedia.org/article/protecting-ai-whistleblowers

Beyond Bans: Expanding the Policy Options for Tech-Security Threats

(Geoffrey Gertz, Justin Sherman – Lawfare – 25 June 2025) In early April, President Trump granted TikTok another 75-day reprieve from its threatened ban in the United States. It is but the latest twist in a five-year, administration-spanning saga, in which the U.S. government has repeatedly threatened to ban the Chinese-owned app from the U.S. market if it is not sold to non-Chinese buyers—but has never followed through on such ultimatums. While the TikTok case has some unique challenges, it is part of a broader trend of using bans to address national security risks associated with Chinese technology in the United States. After Chinese company DeepSeek released an innovative new AI model, members of Congress were quick to initiate a conversation about whether to ban DeepSeek in the United States. The government has already announced measures to ban certain connected vehicles from China and is working on similar restrictions for Chinese drones; reports suggest certain Chinese routers could also be banned. Beyond China, the last administration also banned the Russian antivirus provider Kaspersky—another example of how the government is using national security authorities in the tech supply chain. There are plenty of real national security issues posed by technology from China and other foreign adversary countries across various elements of U.S. industries and tech supply chains. Such risks range from espionage, to “prepositioning” of malware (quietly putting malicious code in place that can be activated later), to increased leverage over U.S. supply chains, including for the defense industrial base. To better address this policy problem, however, the United States urgently needs to build policy toolkits—and policy muscles—beyond bans. Policy discourse about how to mitigate national security risks from a specific technology, such as a Chinese AI model or mobile app, all too often results in reductive conversations about whether or not to ban such technology. But this dichotomy leaves policymakers with an unappealing choice: Either ban any technology that poses a risk, or—if unwilling to follow through with an action as dramatic and costly as a ban—do nothing, and leave the American public exposed to potential national security risks as a result. American policymakers need a spectrum of responses to foreign technology risks that appropriately balance trade-offs in economic costs; Americans’ access to online services; supply chain entanglement; transparency; domestic imperatives like privacy and civil liberties; and the ability to convince allies and partners to act alongside the United States, where relevant. Such a toolkit—encompassing technical, governance, and commercial mitigation measures—at present often comes up short of a robust, comprehensive approach to contemporary tech supply chain and national security risks, leaving the U.S. vulnerable and policymakers without more tailored options to act on potential threats. – https://www.lawfaremedia.org/article/beyond-bans–expanding-the-policy-options-for-tech-security-threats

AI on the Edge of Space. Securing Space Superiority and Avoiding Surprise in Orbit

(Christopher Huynh – Center for Security and Emerging Technology – June 2025) The U.S. Space Force faces growing threats from near-peer adversaries capable of targeting U.S. satellites, underscoring the need for enhanced space control capabilities. This paper examines how artificial intelligence can augment space domain awareness (SDA) and orbital warfare functions to help avoid operational surprise in orbit. Integrating AI, both on ground systems and onboard satellites, is essential to accelerating decision-making, enhancing satellite survivability, and maintaining domain knowledge in an increasingly contested environment. This analysis reviews emerging AI applications across two space mission areas and proposes additional areas for research. For the SDA mission area, it highlights the power of neural networks and explainable AI tools, such as Local Interpretable Model-Agnostic Explanations (LIME), to accelerate space object detection and improve sensor tasking efficiency. For the orbital warfare mission area, it explores how onboard AI agents can be applied to autonomously manage engagements through rendezvous and proximity operations (RPOs), optimize other satellite subsystems, and enable responsive payload tasking—within the constraints of satellite power and compute limitations. These findings are informed by recently published technical papers and defense policy documents. The paper concludes with recommendations for responsible AI adoption into the above mission areas. These include the immediate adoption of some more mature SDA models, and procuring upgradeable satellite systems with sufficient onboard compute. This paper also recommends key policy considerations, such as defining boundaries for on-orbit autonomy, and establishing rigorous test and evaluation protocols to ensure transparent and auditable AI. In aggregate, implementing all or some of these efforts could significantly increase satellite survivability, and create opportunities to gain an algorithmically informed advantage to secure space superiority. – https://cset.georgetown.edu/publication/ai-on-the-edge-of-space/

Texas Statewide Quantum Initiative Becomes Law

(Quantum Insider – 24 June 2025) Texas has enacted a new law to establish the Texas Quantum Initiative, aiming to position the state as a national leader in quantum computing, networking, and sensing technologies. The legislation creates a governor-appointed advisory committee, a strategic planning process, and a grant fund to support research, workforce training, and quantum manufacturing efforts. The initiative will deliver annual strategic plans and biennial reports to state leaders, prioritizing commercially relevant infrastructure, federal funding opportunities, and supply chain development. – https://thequantuminsider.com/2025/06/24/texas-quantum-initiative-passed/

NCSC issues new guidance for EU cybersecurity rules

(DigWatch – 24 June 2025) The National Cyber Security Centre (NCSC) has published new guidance to assist organisations in meeting the upcoming EU Network and Information Security Directive (NIS2) requirements. Ireland missed the October 2024 deadline but is expected to adopt the directive soon. –  https://dig.watch/updates/ncsc-issues-new-guidance-for-eu-cybersecurity-rules

EU adviser backs Android antitrust ruling against Google

(DigWatch – 23 June 2025) An adviser to the Court of Justice of the European Union has supported the EU’s antitrust ruling against Google, recommending the dismissal of its appeal over a €4.1bn fine. The case concerns Google’s use of its Android mobile system to limit competition through pre-installed apps and contractual restrictions. – https://dig.watch/updates/eu-adviser-backs-android-antitrust-ruling-against-google

WhatsApp ads delayed in EU until 2026

(DigWatch – 23 June 2025) Meta plans to introduce ads on WhatsApp globally, starting with the Updates tab, where users can subscribe to channels and receive promoted content. However, the Irish Data Protection Commission has confirmed that the rollout will be delayed across the EU until 2026. – https://dig.watch/updates/whatsapp-ads-delayed-in-eu-until-2026

Geostrategies

Verizon and Nokia secure UK contract

(DigWatch – 26 June 2025) Verizon and Nokia have partnered to deliver private 5G networks at Thames Freeport in the UK. The networks will support industrial operations with high-speed, reliable connectivity, enabling AI, automation, and real-time data processing. The UK contract is part of a broader multibillion-dollar transformation of the region. Nokia will provide all hardware and software, powering major sites, including DP World London Gateway and Ford’s Dagenham plant. – https://dig.watch/updates/verizon-and-nokia-secure-uk-contract

Google launches AI Mode Search in India

(DigWatch – 25 June 2025) Google has launched its advanced AI Mode search experience in India, allowing users to explore information through more natural and complex interactions. The feature, previously available as an experiment in the US, can now be enabled in English via Search Labs, the platform where users test experimental tools and share feedback on early Google Search features. – https://dig.watch/updates/google-launches-ai-mode-search-in-india

Alibaba Cloud launches new AI tools and education partnerships in Europe

(DigWatch – 24 June 2025) Alibaba Cloud has announced a new suite of AI services as part of its expansion across Europe. Revealing the offerings at the Alibaba European Summit in Paris, the company said they reinforce its long-term commitment to the region by providing AI-driven tools and cloud solutions for the fashion, healthcare, and automotive industries. A key development is a significant upgrade to the Platform for AI (PAI), Alibaba’s AI computing platform hosted in the Frankfurt cloud region. The company stated that the enhancements will increase efficiency and scalability to meet rising demand for compute-intensive workloads. – https://dig.watch/updates/alibaba-cloud-launches-new-ai-tools-and-education-partnerships-in-europe

EU and Australia to begin negotiations on security and defence partnership

(DigWatch – 24 June 2025) Brussels and Canberra are set to begin negotiations on a Security and Defence Partnership (SDP). The announcement follows a meeting between European Commission President Ursula von der Leyen, European Council President António Costa, and Australian Prime Minister Anthony Albanese. The proposed SDP aims to establish a formal framework for cooperation across a range of security-related areas, including defence industry collaboration, counter-terrorism and cyber threats, maritime security, non-proliferation and disarmament, space security, economic security, and responses to hybrid threats. – https://dig.watch/updates/eu-and-australia-to-begin-negotiations-on-security-and-defence-partnership

OpenAI and Microsoft’s collaboration is near breaking point

(DigWatch – 23 June 2025) The once-celebrated partnership between OpenAI and Microsoft is now under severe strain as disputes over control and strategic direction threaten to dismantle their alliance. OpenAI’s move toward a for-profit model has placed it at odds with Microsoft, which has invested billions and provided exclusive access to Azure infrastructure. Microsoft’s financial backing and technical involvement have granted it a powerful voice in OpenAI’s operations. However, OpenAI now appears determined to gain independence, even if it risks severing ties with the tech giant. – https://dig.watch/updates/openai-and-microsofts-collaboration-is-near-breaking-point

Security

Understanding and mitigating bias to harness AI responsibly

(Europol – 27 June 2025) Published today, ‘AI bias in law enforcement – a practical guide’ provides a deeper understanding of the issue and explores methods to prevent, identify, and mitigate risks at various stages of AI deployment. The report aims to provide law enforcement with clear guidelines on how to deploy AI technologies while safeguarding fundamental rights. AI is a strong asset for law enforcement, strengthening its capacity to combat emerging threats amplified by digitalisation by integrating new technical solutions into its toolbox against crime. AI can help law enforcement analyse large and complex datasets, automate repetitive tasks, and support better-informed decision-making. Deployed responsibly, it offers considerable potential to enhance operational capabilities and improve public safety. However, these benefits must be carefully weighed against the possible risks posed by bias, which may appear at various stages of AI system development and deployment. Such bias must be checked to ensure fair outcomes, maintain public trust, and protect fundamental rights. This report provides law enforcement authorities with the insights and guidance needed to identify, mitigate, and prevent bias in AI systems. This knowledge can play a crucial role in supporting the safe and ethical adoption of AI, ensuring that the technology is used effectively, fairly and transparently in the service of public safety. – https://www.europol.europa.eu/media-press/newsroom/news/understanding-and-mitigating-bias-to-harness-ai-responsibly

AI and Data Voids: How Propaganda Exploits Gaps in Online Information

(McKenzie Sadeghi – Lawfare – 26 June 2025) In the lead-up to the 2024 global elections, media outlets, think tanks, and world leaders issued dire warnings about artificial intelligence (AI)-generated misinformation and deepfakes. While there were many cases of foreign actors using generative AI to influence the 2024 U.S. presidential election—as documented by the intelligence community—multiple analyses argued that fears of an AI-fueled misinformation wave were largely overblown and that falsehoods still came from low-tech old-school tactics such as cheap video edits, memes, and manipulated headlines. The greater threat, it turns out, isn’t what AI is creating but, rather, what it’s absorbing and repeating. As generative AI systems increasingly replace search engines and become embedded in consumer products, enterprise software, and public services, the stakes of what they repeat and how they interpret the world are growing. The large language models (LLMs) powering today’s most widely used chatbots have been exposed to a polluted information ecosystem where state-backed foreign propaganda outlets are increasingly imitating legitimate media and employing narrative laundering tactics optimized for search engine visibility—often with the primary purpose of infecting the AI models with false claims reflecting their malign influence operations. – https://www.lawfaremedia.org/article/ai-and-data-voids–how-propaganda-exploits-gaps-in-online-information

AI data risks prompt new global cybersecurity guidance

(DigWatch – 25 June 2025) A coalition of cybersecurity agencies, including the NSA, FBI, and CISA, has issued joint guidance to help organisations protect AI systems from emerging data security threats. The guidance explains how AI systems can be compromised by data supply chain flaws, poisoning, and drift. Organisations are urged to adopt security measures throughout all four phases of the AI life cycle: planning, data collection, model building, and operational monitoring. – https://dig.watch/updates/ai-data-risks-prompt-new-global-cybersecurity-guidance
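
As a rough, hedged sketch of what monitoring for drift in the operational monitoring phase can look like in practice (my illustration, not taken from the joint guidance), the snippet below compares a training-time feature distribution with live inputs using a two-sample Kolmogorov-Smirnov test; the data, feature, and alert threshold are all invented for the example.

```python
# Illustrative drift check (assumed approach, not from the NSA/FBI/CISA guidance):
# flag when a live feature's distribution diverges from the training distribution.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_values, live_values, alpha=0.01):
    """Return True when the two samples are unlikely to share a distribution."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5_000)   # feature values seen during model building
live = rng.normal(0.4, 1.0, 1_000)    # shifted values arriving in production
print(drift_alert(train, live))       # True -> investigate the data pipeline
```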

New report: major developments and trends on terrorism in Europe in 2024

(Europol – 24 June 2025) A total of 58 terrorist attacks were reported by 14 EU Member States in 2024. Of these, 34 were completed, 5 failed, and 19 were foiled. Overall, 449 individuals were arrested for terrorism-related offences across 20 Member States. These numbers are sourced from Europol’s European Union Terrorism Situation and Trend Report 2025 (TE-SAT), published today. This flagship report – the only one of its kind in Europe – describes the major developments and trends in the terrorism landscape in the EU in 2024, based on qualitative and quantitative information provided by EU Member States and other Europol partners. – https://www.europol.europa.eu/media-press/newsroom/news/new-report-major-developments-and-trends-terrorism-in-europe-in-2024

The Future of American Cybersecurity

(Paul Rosenzweig – Lawfare – 24 June 2025)  The following text is a slightly revised version of a talk given by the author at the May 2025 V2 Security Conference in Copenhagen – My theme today is to try and answer the question: “What do we expect from the Trump administration with respect to cybersecurity and data privacy in the next four years?” The “A” answer of course is that nobody really knows. Trump is exceedingly unpredictable—the more so with respect to issue areas where he really has no preconceived and settled notion. Unlike, say, tariffs, it seems likely that Trump has given little thought to cybersecurity or data privacy—and thus his reactions are likely to be off the cuff. But that would be a short analysis, and you deserve more. So let’s dive in. My deeper analysis starts by providing a broad context for U.S.-EU cybersecurity and data privacy engagement today. I then turn to specific predictions about Trump’s expected actions in the areas of cybersecurity and data privacy. I conclude with some thoughts on how these actions will impact the EU and how the EU member states ought to consider responding. – https://www.lawfaremedia.org/article/the-future-of-american-cybersecurity

How AI power lets hackers automate cyber attacks. Are you prepared to fight?

(Interesting Engineering – 24 June 2025) It seems like it wasn’t that long ago when people’s biggest cybersecurity worry was that someone might guess their password correctly. Those were simpler times. Back then, most businesses felt safe deploying antivirus software, training employees around what a suspicious link looks like, and calling it a day. That approach worked about as well as you’d expect, but at least the threats were predictable. But today, the threat landscape has changed dramatically. Cybercriminals have become much more sophisticated and organized in recent years. They’re using advanced automation techniques, targeting specific industries (or people) with precision, and operating ransomware-as-a-service models that scale their operations. Meanwhile, most businesses are blissfully unaware, still running the same security strategies that were already questionable five or six years ago. The result? A completely unbalanced security landscape where traditional defenses are increasingly outmatched by evolving threats. This gap isn’t sustainable. – https://interestingengineering.com/innovation/how-ai-power-lets-hackers-automate-cyber-attacks-are-you-prepared-to-fight

WhatsApp prohibited on US House devices citing data risk

(DigWatch – 24 June 2025) Meta Platforms’ messaging service WhatsApp has been banned from all devices used by the US House of Representatives, according to an internal memo distributed to staff on Monday. The memo, issued by the Office of the Chief Administrative Officer, stated that the Office of Cybersecurity had classified WhatsApp as a high-risk application. – https://dig.watch/updates/whatsapp-prohibited-on-us-house-devices-citing-data-risk

Generative AI and the continued importance of cybersecurity fundamentals

(DigWatch – 23 June 2025) The introduction of generative AI (GenAI) is influencing developments in cybersecurity across industries. AI-powered tools are being integrated into systems such as endpoint detection and response (EDR) platforms and security operations centres (SOCs), while threat actors are reportedly exploring ways to use GenAI to automate known attack methods. While GenAI presents new capabilities, common cybersecurity vulnerabilities remain a primary concern. Issues such as outdated patching, misconfigured cloud environments, and limited incident response readiness are still linked to most breaches. – https://dig.watch/updates/generative-ai-and-the-continued-importance-of-cybersecurity-fundamentals

DeepSeek under fire for alleged military ties and export control evasion

(DigWatch – 23 June 2025) The United States has accused Chinese AI startup DeepSeek of assisting China’s military and intelligence services while allegedly seeking to evade export controls to obtain advanced American-made semiconductors. The claims, made by a senior US State Department official speaking anonymously to Reuters, add to growing concerns over the global security risks posed by AI. – https://dig.watch/updates/deepseek-under-fire-for-alleged-military-ties-and-export-control-evasion

Defense, Intelligence, and Warfare

Interview: Safran defense boss on the changing battlefield and AI

(Rudy Ruitenberg – Defense News – 27 June 2025) Safran Electronics & Defense CEO Franck Saudo spoke to Defense News at the Paris Air Show last week about the changing battlefield and use of AI, as well as areas of future growth. Safran is Europe’s biggest supplier of military optronics and inertial navigation systems. Saudo noted two mega trends in defense, one being rising defense budgets with a new emphasis on European sovereignty, and the second the transformation of the battlefield, including greater battlefield transparency, widespread electronic warfare and new objects such as drones. – https://www.defensenews.com/global/europe/2025/06/27/interview-safran-defense-boss-on-the-changing-battlefield-and-ai/

Army unveils plans to acquire two different sizes of autonomous launchers

(Ashley Roque – Breaking Defense – 27 June 2025) The US Army is interested in acquiring two new autonomous platforms under a new initiative it’s calling the Common Autonomous Multi-Domain Launcher (CAML). In a “request for solutions brief” posted today, the service announced that its Rapid Capabilities and Critical Technologies Office is leading the charge to find two separate CAML variants — a heavy and a medium — on a “rapid timeline”. “CAML is an autonomous/optionally crewed, highly mobile, air transportable, cross domain fires launcher with the potential to augment or replace existing Army launchers,” the service said. – https://breakingdefense.com/2025/06/army-unveils-plans-to-acquire-two-different-sizes-of-autonomous-launchers/

NATO members aim for spending 5% of GDP on defense, with 1.5% eligible for cyber

(Alexander Martin – The Record – 27 June 2025) NATO allies reached an agreement this week to increase their defense spending to 5% of GDP within a decade, with 3.5% to go toward core defense and the remaining 1.5% of GDP toward indirect defense spending, including cybersecurity capabilities. The expanded range of what counts as defense spending — now including investments in energy and supply chain resilience, logistics infrastructure and innovation — relates immediately to strategic concerns highlighted by Russia’s full-blown invasion of Ukraine and also to the systemic challenges posed by what NATO describes as China’s “stated ambitions and coercive policies”. Investment in defenses against cyberattacks comes as experts warn that even incidents below the threshold of starting an armed conflict are having “strategically consequential effects” on NATO allies, and as NATO itself agreed to launch an integrated cyberdefense center at its military headquarters in Mons, Belgium. – https://therecord.media/nato-agreement-5percent-gdp-defense-spending-cyber

EVADE: DARPA pivots shipboard drone program to rapidly field tech later this year

(Justin Katz – Breaking Defense – 27 June 2025) An experimental Pentagon program focused on developing shipboard unmanned aerial systems is aiming to transition its technology to the broader Defense Department later this year, following a change of plans centered on more rapidly fielding the drones to servicemembers. The program in question, dubbed AdvaNced airCraft Infrastructure-Less Launch And RecoverY (ANCILLARY), was initiated by the Defense Advanced Research Projects Agency in 2022 and aims to produce a relatively small UAS that can be easily launched and recovered from US warships. Phillip Smith, a DARPA program manager overseeing the effort, told Breaking Defense in an interview that the program has opted to refocus its efforts on more rapidly fielding a capable UAS in 2026, relative to ANCILLARY’s initial goal to begin flight tests that year. He said the decision was prompted last year by two factors: the first was that DARPA’s “partner in the Navy” disclosed it could not proceed with shipboard testing or a phase two downselect as originally planned; the second was conversations between DARPA and other partners within the Pentagon. – https://breakingdefense.com/2025/06/evade-darpa-pivots-shipboard-drone-program-to-rapidly-field-tech-later-this-year/

Cyber Command and Coast Guard establish task force for port cyber defence

(DigWatch – 27 June 2025) US Cyber Command has joined forces with the Coast Guard in a major military exercise designed to simulate cyberattacks on key port infrastructure. Known as Cyber Guard, the training scenario marked a significant evolution in defensive readiness, integrating for the first time with Pacific Sentry—an Indo-Pacific Command exercise simulating conflict over Taiwan. The joint effort included the formation of Task Force Port, a temporary unit tasked with coordinating defence of coastal infrastructure. – https://dig.watch/updates/cyber-command-and-coast-guard-establish-task-force-for-port-cyber-defence

Maxar launching AI-powered ‘predictive intelligence’ to spot crises before they happen

(Patrick Tucker – Defense One – 25 June 2025) A satellite imaging company that played a key role in revealing Russian forces massing on Ukraine’s border prior to invasion has launched a new product that uses AI and satellite data to provide “predictive intelligence” on hundreds of sites around the world. Maxar’s new product, “Sentry”, provides a way for multiple satellite companies to collaborate and share data in order to keep more sensors on emerging developments. Maxar described Sentry as AI-powered software that can function as its own mini intelligence agency, bringing together data not only from high-resolution imaging satellites but also from other intelligence sources, potentially including synthetic aperture radar satellites that use microwave pulses to “see” through clouds or at night, and electro-optical satellites that can measure things like weather patterns and vegetation. – https://www.defenseone.com/technology/2025/06/maxar-launching-ai-powered-predictive-intelligence-spot-crises-they-happen/406326/?oref=d1-homepage-river

11-pound electronic warfare weapon lets drones sniff out enemy radio signals mid-air

(Interesting Engineering – 24 June 2025) At the 2025 Paris Air Show, Thales introduced a compact, low-power electronic warfare (EW) payload designed for deployment on light drones, offering frontline forces a critical new tool for electromagnetic dominance in contested environments. Weighing under 5 kg (11 pounds) and consuming less than 40 watts, the new sensor system is engineered for integration with small unmanned aerial systems (UAS), either free-flying or tethered, enabling autonomous detection and geolocation of radio-frequency (RF) emitters over tens of miles. – https://interestingengineering.com/military/thales-drones-electronic-spies

Frontiers

AI tool detects 9 types of dementia from a single scan with 88% diagnostic accuracy

(Interesting Engineering – 27 June 2025) Mayo Clinic researchers have developed a new AI tool that detects nine types of dementia using one common brain scan. The system, called StateViewer, includes Alzheimer’s disease in its scope and promises faster, more accurate diagnoses. The study published recently reports that StateViewer identified the correct dementia type in 88% of cases. It also doubled interpretation speed and tripled diagnostic accuracy compared to standard workflows. – https://interestingengineering.com/health/ai-boosts-dementia-diagnosis-accuracy

Light-powered robot swarms may replace antibiotics for tough sinus infections

(Interesting Engineering – 27 June 2025) Swarms of microrobots have been designed to help clear bacterial sinus infections. After completing the task, these tiny robots can be easily expelled from the nose. Interestingly, these light-activated robots are reportedly as small as a “dust speck”. They are called CBMRs (copper single–atom–loaded bismuth oxoiodide photocatalytic microrobots). – https://interestingengineering.com/science/microrobots-to-clear-sinus-infections

Study: Jet-Powered Humanoid Robots Take Flight with New Aerodynamic Control

(AI Insider – 26 June 2025) If the idea of flying robots seems like something straight out of sci-fi, researchers in a recent study set out to show their science isn’t fiction. A jet-powered humanoid robot, equipped with advanced aerodynamic modeling, can now fly and balance in windy conditions, potentially transforming tasks like search and rescue, according to a study published in Nature. The research, conducted by scientists from the Istituto Italiano di Tecnologia and other institutions in Europe and the U.S., introduces a robot called iRonCub-Mk1, which uses jet engines to achieve flight and sophisticated computer models to manage the complex forces of air resistance. This breakthrough could pave the way for robots that combine human-like dexterity with aerial mobility, offering new possibilities for industries ranging from disaster response to logistics. – https://theaiinsider.tech/2025/06/26/study-jet-powered-humanoid-robots-take-flight-with-new-aerodynamic-control/

Alibaba unveils ‘world’s first’ AI model that detects stomach cancer at early stage

(Interesting Engineering – 26 June 2025) AI can chat, paint, and fly drones. Now, it can catch stomach cancer early. Alibaba Group has unveiled what it claims is the world’s first artificial intelligence model capable of detecting gastric cancer, even in its early stages, using only CT scans. Called ‘Grape’ (short for gastric cancer risk assessment procedure), the system was co-developed by Alibaba’s Damo Academy and the Zhejiang Cancer Hospital. – https://interestingengineering.com/innovation/ai-stomach-cancer-detection-alibaba-grape

Top 7 AI agents transforming business in 2025

(DigWatch – 26 June 2025) AI agents are no longer a futuristic concept — they’re now embedded in the everyday operations of major companies across sectors. From customer service to data analysis, AI-powered agents are transforming workflows by handling tasks like scheduling, reporting, and decision-making with minimal human input. Unlike simple chatbots, today’s AI agents understand context, follow multi-step instructions, and integrate seamlessly with business tools. Google’s Gemini Agents, IBM’s Watsonx Orchestrate, Microsoft Copilot, and OpenAI’s Operator are among the tools reshaping how businesses function. – https://dig.watch/updates/top-7-ai-agents-transforming-business-in-2025

New ranking shows which AI respects your data

(DigWatch – 27 June 2025) A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data. The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control. – https://dig.watch/updates/new-ranking-shows-which-ai-respects-your-data

World’s first cryo chip controls qubits at -273°C, powers leap in quantum computing

(Interesting Engineering – 25 June 2025) In a major advance for quantum computing, researchers at the University of Sydney have developed a cryogenic control chip that can operate directly next to quantum bits, or qubits, at near absolute zero. The breakthrough addresses one of the biggest challenges in building large-scale quantum computers: keeping quantum information both stable and accessible. The research outlines a new chip design that can function at milli-kelvin temperatures, just above absolute zero, without disturbing the fragile quantum states. – https://interestingengineering.com/innovation/worlds-first-cryo-chip-controls-qubits-at-273c

AlphaGenome: New Google AI reads DNA mutations, predicts molecular consequences

(Interesting Engineering – 25 June 2025) In a big leap for genomics, Google on Wednesday unveiled a powerful AI model that predicts how single DNA mutations affect the complex machinery regulating gene activity. Named AlphaGenome, the tool covers both coding and non-coding regions of the genome, offering a unified view of variant effects like never before. It brings base-resolution insight to long-range genomic analysis, decoding the impact of mutations with speed, scale, and unprecedented depth. – https://interestingengineering.com/innovation/google-alphagenome-dna-variant-prediction-ai

New hypersonic computer model simulates gas, droplet particles flying at 3,836 mph

(Interesting Engineering – 25 June 2025) Two San Diego State University aerospace engineering researchers have developed a new computational-mathematics model that could have widespread implications for hypersonic military aircraft. The model predicts how fuel droplets and gas particles behave in detonation waves, which occur in rocket engines and scramjets flying at hypersonic speeds. The model could also have applications in climate science and medicine. – https://interestingengineering.com/innovation/computer-model-simulates-particles-flying-3836-mph

Japan unveils world’s most advanced quantum–classical hybrid computing system

(Interesting Engineering – 24 June 2025) Japan now hosts the world’s most advanced quantum–classical hybrid setup, pairing IBM’s cutting-edge quantum system with one of Earth’s fastest supercomputers. On Tuesday, IBM and Japan’s national research lab RIKEN unveiled the first IBM Quantum System Two installed outside the U.S., integrated directly with Fugaku — the country’s flagship supercomputer. This marks a major step toward “quantum-centric supercomputing,” where quantum and classical systems work together to solve problems neither could tackle alone. – https://interestingengineering.com/innovation/japan-ibm-quantum-fugaku-hybrid

Humanoid robots get cloud-free brains as Google drops offline Gemini AI

(Interesting Engineering – 24 June 2025) Google DeepMind has launched a powerful on-device version of its Gemini Robotics AI model. The new system can control physical robots without relying on cloud connectivity. It marks a major step in deploying fast, adaptive, and general-purpose robotics in real-world environments. The model, known as ‘Gemini Robotics On-Device,’ brings Gemini 2.0’s multimodal reasoning into robots with no internet required. It’s designed for latency-sensitive use cases and environments with poor or no connectivity. – https://interestingengineering.com/innovation/google-robotics-offline-intelligence

South Korea Recognizes Quantum And AI Chip Designs as National Strategic Technologies

(Quantum Insider – 24 June 2025) South Korea has designated quantum random number generation and low-power AI chip design for autonomous vehicles as national strategic technologies, expanding government support for secure communication and future mobility systems. EYL’s quantum random number generator was recognized for its role in quantum cryptography, offering enhanced security through true randomness based on quantum physical processes, according to Chosun Biz. Boss Semiconductor’s AI chip design was acknowledged for enabling real-time data processing in autonomous vehicles while minimizing power consumption, as reported by Chosun Biz. – https://thequantuminsider.com/2025/06/24/south-korea-recognizes-quantum-and-ai-chip-designs-as-national-strategic-technologies/

IBM and RIKEN Unveil First IBM Quantum System Two Outside of the U.S.

(Quantum Insider – 24 June 2025) IBM and RIKEN have launched the first IBM Quantum System Two outside the U.S., co-located with the Fugaku supercomputer in Japan to advance hybrid quantum-classical computing. The system uses IBM’s 156-qubit Heron processor, which outperforms the previous generation in both error rate and speed, achieving circuit operations 10× faster than before. The integration enables development of low-latency quantum-classical workflows, with early demonstrations including accurate modeling of complex molecules like iron sulfides. – https://thequantuminsider.com/2025/06/24/ibm-riken-system-two/
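
To make the idea of a quantum-classical workflow concrete, here is a minimal, hedged sketch of the general pattern (a classical optimizer steering a parameterised quantum circuit), written against a local Qiskit simulation rather than the IBM/RIKEN system; the circuit, observable, and optimizer choice are illustrative assumptions, not anything from the announcement.

```python
# Minimal hybrid quantum-classical loop (illustrative sketch, simulation only):
# a classical optimizer tunes a circuit parameter to minimise the <Z> expectation.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit.quantum_info import SparsePauliOp, Statevector
from scipy.optimize import minimize

theta = Parameter("theta")
circuit = QuantumCircuit(1)
circuit.ry(theta, 0)
observable = SparsePauliOp("Z")

def cost(x):
    # Bind the current parameter value and evaluate the observable classically.
    bound = circuit.assign_parameters({theta: x[0]})
    state = Statevector.from_instruction(bound)
    return float(np.real(state.expectation_value(observable)))

result = minimize(cost, x0=[0.1], method="COBYLA")
print(result.x, result.fun)  # expect theta near pi and <Z> near -1
```

On hardware such as the Heron processor described above, the statevector evaluation would be replaced by sampled circuit executions on the quantum device, with the classical optimisation running alongside on a conventional machine.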

India’s QNu Labs Launches Quantum Training Academy to Build Cybersecurity Talent Pipeline

(Quantum Insider – 24 June 2025) QNu Labs has launched QNu Academy to build a global workforce trained in quantum cybersecurity, aligning with India’s National Quantum Mission, reports TimesTech. The program offers practical and academic training in quantum-secure technologies such as QKD, QRNG, and PQC, with support from institutions like the IITs and DRDO. QNu Academy includes career readiness initiatives, faculty development, and Centers of Excellence to expand India’s quantum research and talent base. – https://thequantuminsider.com/2025/06/23/indias-qnu-labs-launches-quantum-training-academy-to-build-cybersecurity-talent-pipeline/

EU Project ELENA Pioneers LNOI Platform for Next-Gen Photonic Circuits & Europe’s 1st Commercial Supplier of LNOI Wafers

(Quantum Insider – 24 June 2025) The EU-funded ELENA project has developed the first fully European supply chain for lithium niobate on insulator (LNOI) substrates, enabling high-performance photonic integrated circuits. The initiative established Europe’s first commercial LNOI wafer supply and launched CCRAFT, an open-access foundry for mass-producing thin-film lithium niobate (TFLN) photonic chips. Demonstrator chips targeting quantum, telecom, space, and sensing applications validate the platform’s potential to meet growing demand for energy-efficient optical technologies. – https://thequantuminsider.com/2025/06/23/eu-project-elena-pioneers-lnoi-platform-for-next-gen-photonic-circuits-europes-1st-commercial-supplier-of-lnoi-wafers/

MIT Study Shows LLMs Factor in Unrelated Information When Recommending Medical Treatments

(AI Insider – 24 June 2025) A new MIT study, presented at the ACM Conference on Fairness, Accountability, and Transparency, finds that large language models used in health care can make flawed treatment recommendations when exposed to nonclinical text variations such as typos, informal language, and missing gender cues. The study tested four models, including GPT-4, using altered patient messages that preserved clinical content but mimicked realistic communication styles, revealing a 7–9% increase in erroneous self-care advice, particularly for female patients. Researchers call for stricter audits and new evaluation benchmarks before LLMs are deployed in clinical settings, warning that models trained on sanitized data may falter under real-world patient interactions. – https://theaiinsider.tech/2025/06/24/mit-study-shows-llms-factor-in-unrelated-information-when-recommending-medical-treatments/
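
A simple way to probe the failure mode the study describes is to send a clean clinical message and a lightly perturbed variant to the same model and compare the recommendations. The hedged sketch below does exactly that; the prompts, the triage labels, the model name, and the use of the OpenAI Python client are illustrative assumptions and do not reproduce the MIT protocol.

```python
# Illustrative robustness probe (not the MIT study's methodology): compare a
# model's advice on a clean message versus a typo-laden, informal rewrite.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CLEAN = "I have had chest pain and shortness of breath for two days. What should I do?"
PERTURBED = "ive had chest pian n short of breath for 2 days... what shud i do??"

def triage(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You are a triage assistant. Reply with exactly one label: SELF-CARE or SEEK-CARE."},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content.strip()

# A robust model should return the same label for both variants.
print(triage(CLEAN), triage(PERTURBED))
```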

China pushes quantum computing towards industrial use

(DigWatch – 24 June 2025) A Chinese startup has used quantum computing to improve breast cancer screening accuracy, highlighting how the technology could transform medical diagnostics. Based in Hefei, Origin Quantum applied its superconducting quantum processor to analyse medical images faster and more precisely. – https://dig.watch/updates/china-pushes-quantum-computing-towards-industrial-use

Heat action plans in India struggle to match rising urban temperatures

(DigWatch – 23 June 2025) On 11 June, the India Meteorological Department (IMD) issued a red alert for Delhi as temperatures exceeded 45°C, with real-feel levels reaching 54°C. Despite warnings, many outdoor workers in the informal sector continued working, highlighting challenges in protecting vulnerable populations during heatwaves. The primary tool in India for managing extreme heat, the Heat Action Plan (HAP), is developed annually by city and state governments. While some regions, such as Ahmedabad and Tamil Nadu, have reported improved outcomes, most HAPs face implementation, funding, coordination, and data availability issues.  A 2023 study found that 95% of HAPs lacked detailed mapping of high-risk areas and vulnerable groups. Experts and non-governmental organisations recommend incorporating Geographic Information Systems (GIS) and remote sensing to improve targeting. – https://dig.watch/updates/heat-action-plans-in-india-struggle-to-match-rising-urban-temperatures

World’s first quantum satellite computer launched in historic SpaceX rideshare

(Interesting Engineering – 23 June 2025) In a historic milestone for quantum technology, a photonic quantum computer has been launched into space for the first time. Developed by an international team led by Philip Walther at the University of Vienna, the compact system blasted off on June 23 aboard a SpaceX Falcon 9 rocket from Vandenberg Space Force Base in California. The processor is set to begin operations around 550 kilometers above the planet. – https://interestingengineering.com/space/first-quantum-processor-launched-to-space

Tailored AI agents improve work output—at a social cost

(DigWatch – 23 June 2025) AI agents can significantly improve workplace productivity when tailored to individual personality types, according to new research from the Massachusetts Institute of Technology (MIT). However, the study also found that increased efficiency may come at the expense of human social interaction. Led by Professor Sinan Aral and postdoctoral associate Harang Ju from MIT Sloan School of Management, the research revealed that human workers collaborating with AI agents completed tasks 60% more efficiently. This gain was partly attributed to a 23% reduction in social messages between team members. – https://dig.watch/updates/tailored-ai-agents-improve-work-output-at-a-social-cost
