Daily Digest on AI and Emerging Technologies (12 March 2026)

Governance, Regulation, and Legislation

Grok Showed the World What Ungoverned AI Looks Like

(Cyrus Hodes – Just Security) The 2026 International AI Safety Report, published in early February by over 100 experts from more than 30 countries, reached a sobering conclusion: the gap between the pace of AI advancement and our ability to implement effective safeguards remains a critical challenge. The report’s chair, Turing Award winner Yoshua Bengio, put it plainly: international agreement on AI governance is now in the rational interest of every country, mirroring “exactly what has happened with the management of nuclear risks.” This is not abstract. Indeed, we already have a case study in what happens when that coordination does not exist. Last December, xAI’s chatbot Grok began generating thousands of nonconsensual sexualised images per hour, including images of minors. Users discovered they could upload photographs of real people and instruct the AI to “undress” them. Governments issued statements and regulators announced investigations. But nobody effectively stopped it, nor could they have without effective multilateral coordination. What followed was a textbook case of fragmented response. Malaysia and Indonesia banned Grok outright. Britain accelerated enforcement of the Online Safety Act, launching an investigation by Ofcom, the United Kingdom’s communications regulator. France widened an existing inquiry and raided X’s offices in Paris. India demanded compliance reports. Brazil’s chief prosecutor called for X to stop Grok from producing sexualized content within five days or face legal action. The European Commission ordered X to preserve all internal documents related to Grok over doubts about compliance, while 57 members of the European Parliament called for bans on “nudification” tools under the AI Act. California’s attorney general sent a cease-and-desist letter to xAI. And U.S. senators wrote to Apple and Google requesting the removal of X from app stores.
xAI’s response was to comply by preventing Grok from creating sexualized deepfakes in jurisdictions where it is illegal (as discussed in this previous Just Security article). The company was saying the quiet part out loud: xAI would do the minimum required, country by country, because no coordinated international standard exists to require otherwise. – https://www.justsecurity.org/131377/what-ungoverned-ai-looks-like/

Narrative Integrity Risk: The Next Frontier in Financial Stability. AI is already amplifying the destabilization of financial markets

(Chris Beall, Chris Blask, Jen Rosiere Reynolds – Lawfare) Accepting that the artificial intelligence (AI) bubble may pop, and guessing about what might happen when it does, has become a popular parlor game. But what remains underdiscussed is that AI-enabled coordinated market manipulation is already well underway. It’s doing real damage, and it’s probably going to get worse. Markets run on capital and on confidence. Today, that confidence is increasingly vulnerable to targeted manipulation. When narratives can be distorted at scale, narrative integrity becomes a direct risk to shareholders and market functioning. Narrative integrity is the accuracy, authenticity, and resilience of online information and narratives, ensured by the systems, processes, and organizational practices that protect them for sound decision-making. Generative AI accelerates the risk to narrative integrity. Most firms still treat narrative manipulation as a communications hiccup rather than an adversarial threat. In reality, these are deliberate, adaptive attacks, capable of distorting valuations and eroding reputations. Recent reports from Marsh McLennan, Swiss Re, and the World Economic Forum have already highlighted misinformation, amplified by AI-accelerated narratives, as a top global risk to stability. The market consequence is clear: Firms that understand and anticipate narrative manipulation will outperform those that wait. – https://www.lawfaremedia.org/article/narrative-integrity-risk–the-next-frontier-in-financial-stability

EU draft regulation aims to create new legal framework for startups

(DigWatch) A draft initiative from the European Commission seeks to introduce a new legal structure designed to simplify how companies operate across the EU. The proposal, often referred to as the ‘EU Inc’ initiative, explores the creation of a so-called ‘28th regime’ that would exist alongside the national corporate frameworks used by member states, a concept that aims to provide startups and technology firms with a single legal structure applying across the EU. – https://dig.watch/updates/eu-draft-regulation-aims-to-create-new-legal-framework-for-startups

EU launches AI platform to detect food fraud and contamination

(DigWatch) Food safety monitoring across the EU is receiving a technological upgrade with the launch of TraceMap, a new AI platform designed to detect food fraud, contamination and disease outbreaks more quickly. The European Commission introduced the tool as part of efforts to strengthen consumer protection and improve oversight of the agri-food supply chain. TraceMap helps authorities analyse large volumes of data related to food production, distribution and trade. By identifying connections between operators, shipments and supply chains, the system allows investigators to spot suspicious activity and potential safety risks earlier. – https://dig.watch/updates/eu-launches-ai-platform-to-detect-food-fraud

Security and Surveillance

Researchers Discover Major Security Gaps in LLM Guardrails

(Kevin Poireault – Infosecurity Magazine) Security and safety guardrails in generative AI tools, deployed to prevent malicious uses like prompt injection attacks, can themselves be hacked through a type of prompt injection. Researchers at Unit 42, Palo Alto Networks’ research lab, have found that large language models (LLMs) used by GenAI companies to enforce safety policies and evaluate output quality can be manipulated into authorizing policy violations through stealthy input sequences. Unit 42 refers to these LLMs as ‘AI Judges’ and said they are being increasingly deployed as AI operations scale. In a new report published on March 10, Unit 42 demonstrated an attack method that manipulates these ‘AI Judges’ into authorizing policy violations. – https://www.infosecurity-magazine.com/news/major-security-gaps-llm-guardrails/

Cyber-Attacks on UK Firms Increase at Four Times Global Rate

(Phil Muncaster – Infosecurity Magazine) UK organizations were hit by far fewer cyber-attacks in February than the global average, but the year-on-year (YoY) increase was nearly four times the growth rate worldwide, according to Check Point. The security vendor’s February 2026 Global Threat Intelligence report revealed that it blocked an average of 2,086 cyber-attacks per organization per week globally, a 9.8% YoY increase. In the UK, the figure was only 1,504 per week, but that represented a 36% YoY increase. Education, energy & utilities, government, healthcare and financial services were among the most frequently targeted sectors in the UK. – https://www.infosecurity-magazine.com/news/cyberattacks-uk-firms-increase/

Expanded Identity Attack Vectors: From Document Fraud to Signal Manipulation

(Ihar Kliashchou – Infosecurity Magazine) For years, identity fraud was treated as a document problem. Forged passports, stolen IDs, and compromised credentials defined the threat landscape, and verification controls were built to stop these risks at the point of entry. That model no longer reflects how modern identity systems operate. Documents still matter, but today’s attacks increasingly target the signals automated systems use to decide whether to trust an identity. Recent global research on identity verification threats and opportunities suggests that modern impersonation tactics are now as common as traditional fraud: deepfake-driven attacks (33%), identity spoofing (34%), and biometric fraud (34%) are reported at similar frequencies to document fraud (30%) and synthetic identity schemes (29%). This underscores how AI-assisted signal manipulation has moved from the fringe into the mainstream of identity threats. This reflects not only the changing nature of the signals but also a shift in how identity is verified. As more identity decisions move online and into automated workflows, signals that were once assessed by human examiners in person are increasingly processed by software. The system no longer observes identity directly; it interprets digital inputs. – https://www.infosecurity-magazine.com/blogs/expanded-identity-attack-vectors/

Canada warns about AI-generated scams targeting citizens online

(DigWatch) Authorities in Canada have issued a warning about the growing use of AI in impersonation scams targeting citizens. Fraudsters increasingly deploy advanced tools capable of mimicking politicians, government officials and other public figures with convincing realism. Deepfake videos, synthetic audio and AI-generated messages allow scammers to create convincing communications that appear to come from trusted authorities. – https://dig.watch/updates/canada-warns-about-ai-generated-scams

Dutch intelligence warns about phishing attacks on Signal and WhatsApp

(DigWatch) A large-scale cyber campaign linked to state hackers is targeting accounts on the messaging platforms Signal and WhatsApp. Intelligence services warn that phishing attacks aim to gain access to communications belonging to diplomats, military personnel and government officials. The warning was issued by the Dutch intelligence agencies, General Intelligence and Security Service and Military Intelligence and Security Service, which confirmed that several government employees in the Netherlands have already been targeted during the campaign. – https://dig.watch/updates/dutch-intelligence-warns-about-phishing-attacks-on-signal-and-whatsapp

Russia-linked hackers appear on Iran war’s cyber front, but their impact is murky

(David DiMolfetta – Defense One) Apparent Russia-linked hacking collectives backing Iran have been observed joining the cyber activity unfolding alongside the U.S.-Israel war against Iran, though analysts have mixed views on whether their involvement represents a meaningful escalation or little more than online noise. The outlook on such “hacktivist” groups — hackers who attempt to penetrate systems and steal information for political activism — comes days after The Washington Post reported that Russia is supplying Iran with intelligence to help target U.S. forces in the Middle East and adds another dimension to the already complex cyber and information environment surrounding the war. – https://www.defenseone.com/threats/2026/03/russia-linked-hackers-appear-iran-wars-cyber-front-their-impact-murky/412013/?oref=d1-featured-river-secondary

Frontiers

Chinese tech hubs promote OpenClaw AI agent

(DigWatch) Technology hubs in China are promoting the OpenClaw AI agent as part of new local industry initiatives. Officials in China say the open source tool can automate tasks such as email management and travel booking. Cities including Shenzhen, Wuxi and Hefei are drafting policies to build an ecosystem around OpenClaw. Authorities in China are offering subsidies, computing resources and office support to encourage AI-driven one-person companies. – https://dig.watch/updates/chinese-tech-hubs-promote-openclaw-ai-agent

Astronauts test AI-assisted health checks in orbit

(DigWatch) AI is playing an increasingly important role in space medicine as astronauts aboard the International Space Station test new technologies designed to support autonomous health monitoring. The experiment combines augmented reality with an AI system that analyses ultrasound scans in orbit. NASA astronaut Jack Hathaway and European Space Agency astronaut Sophie Adenot carried out guided ultrasound examinations using the EchoFinder-2 biomedical device. – https://dig.watch/updates/astronauts-test-ai-health-checks-in-orbit

Blockchain network Tron joins Agentic AI Foundation to advance AI infrastructure

(DigWatch) Tron has joined the Linux Foundation’s Agentic AI Foundation (AAIF) as a governing member to support the development of AI agent infrastructure. The network aims to enable collaboration and interoperability among systems that efficiently manage high-volume, low-value transactions. Founder Justin Sun highlighted Tron’s speed, scalability, and low fees as key advantages for AI-agent use cases. He noted that as AI agents move into mainstream machine-to-machine commerce, transaction volumes could rise, increasing demand for robust blockchain networks. – https://dig.watch/updates/tron-joins-agentic-ai-foundation-blockchain

Qualcomm and NEURA Robotics partner to accelerate physical AI and cognitive robotics

(DigWatch) NEURA Robotics and Qualcomm have formed a long-term strategic collaboration to advance physical AI and next-generation robotics platforms. The partnership aims to bring intelligent robots into real-world environments more rapidly by combining advanced AI processors with full-stack robotic systems. The cooperation focuses on developing ‘Brain + Nervous System’ reference architectures that integrate high-level cognition, such as perception, reasoning and planning, with ultra-low-latency control systems. – https://dig.watch/updates/qualcomm-and-neura-robotics-partner-to-accelerate-physical-ai-and-cognitive-robotics

Space startup to test crypto mining in orbit

(DigWatch) Starcloud, a space startup, is preparing to test Bitcoin mining in orbit with its upcoming Starcloud-2 satellite. The mission will carry specialised ASIC mining processors, marking one of the first attempts to run crypto infrastructure beyond Earth. The initiative builds on a successful 2025 demonstration when Starcloud operated Nvidia H100 GPUs in low Earth orbit. During that mission, the satellite performed AI computing tasks, proving that data-centre-grade hardware can function in space. – https://dig.watch/updates/space-startup-to-test-crypto-mining-in-orbit