Weekly Digest on AI and Emerging Technologies (9 March 2026)

Daily Digest on AI and Emerging Technologies (3 March 2026) – https://www.cgspam.org/daily-digest-on-ai-and-emerging-technologies-3-march-2026/

Daily Digest on AI and Emerging Technologies (4 March 2026) – https://www.cgspam.org/daily-digest-on-ai-and-emerging-technologies-4-march-2026/

Daily Digest on AI and Emerging Technologies (5 March 2026) – https://www.cgspam.org/daily-digest-on-ai-and-emerging-technologies-5-march-2026/

Daily Digest on AI and Emerging Technologies (6 March 2026) – https://www.cgspam.org/daily-digest-on-ai-and-emerging-technologies-6-march-2026/


Governance, Regulation, and Legislation

Beyond Carpenter – A Legislative Framework for Mobile Location Privacy

(Jim Dempsey – Lawfare) In Carpenter v. United States, the Supreme Court held that the Fourth Amendment requires a warrant for compelled disclosure of historical cell-site location information. But Carpenter left unresolved critical questions: What about real-time collection? Direct collection by the government? Geofencing and tower dumps? Government purchases from data brokers? Duration thresholds? Emergency exceptions? This report offers seven principles for updating the Electronic Communications Privacy Act. Applying them to location data, it argues that merely codifying Carpenter would be insufficient. Instead, it proposes a new freestanding chapter of Title 18—Chapter 120—comprehensively regulating government acquisition of mobile location information. Drawing on the Wiretap Act, the proposed chapter establishes a warrant requirement for all government acquisition of mobile location data, a two-stage judicial process for non-individualized searches like geofencing and tower dumps, emergency exceptions, a statutory suppression rule, minimization requirements, and notice provisions. The report includes proof-of-concept legislative text with a section-by-section analysis. – https://www.lawfaremedia.org/article/beyond-carpenter—a-legislative-framework-for-mobile-location-privacy

Installing a Content Patch in the Stored Communications Act

(Stephanie Pell, Richard Salgado – Lawfare) The 1980s-era scheme in the Stored Communications Act that allows the government to compel service providers to disclose user content without a warrant is the mullet of surveillance law. It is time to lose it. The forty-year-old rules are frozen in time, reflecting antiquated assumptions about technology and its adoption. Under some interpretations of these pre-cloud statutory rules, the warrant requirement is excused for vast swaths of user content, including content already accessed by the user and content more than 180 days old. These carve-outs do not fit modern dependence on provider-hosted communications and computing services, and they are even less appropriate for artificial intelligence and other technologies on the horizon. The rules undermine both constitutional privacy interests and the statute’s own aims. This report offers a proof-of-concept patch. It replaces the carve-out-riddled regime with a single warrant requirement for compelled disclosure of content and harmonizes the interlocking blocking provisions to clarify that key disclosure exceptions are permissive rather than mandatory. It also defines “processing services” to reflect modern architectures and prepare the statute for the future. It improves defect detection through user notice and a clearer provider role in raising objections, adds a suppression remedy for Fourth Amendment and specified statutory violations, and addresses a narrow edge case involving criminal defendants seeking access to exculpatory content held by providers. – https://www.lawfaremedia.org/article/installing-a-content-patch-in-the-stored-communications-act

Data Proxies for the Stored Communications Act

(David Kris – Lawfare) When law enforcement obtains a customer’s cloud data from a cloud service provider (CSP) under the Stored Communications Act, a nondisclosure order can prevent the CSP from notifying the customer—leaving the customer unable to assert legal rights, including attorney-client privilege. This report proposes the “Data Proxy Act,” legislation enabling customers to contractually appoint a trusted, independent third party—a “data proxy”—to advance their interests when they cannot act for themselves. Under the proposed bill, when a CSP receives a data demand paired with a nondisclosure order, it may petition the court for authorization to notify the customer’s data proxy. The court would grant the petition upon finding, by a preponderance of the evidence, that the data proxy is trustworthy and will comply with the nondisclosure order, and that disclosure will not result in improper notice to the customer. Once notified, the data proxy may assert the customer’s legal rights, including through litigation. The proposal builds on the bipartisan NDO Fairness Act and encourages cloud adoption without unduly compromising investigative secrecy. – https://www.lawfaremedia.org/article/data-proxies-for-the-stored-communications-act

Harmonizing ECPA to Close Gaps and Increase Statutory Coherence

(Aaron R. Cooper – Lawfare) The Electronic Communications Privacy Act of 1986 (ECPA) was designed to protect the privacy of electronic communications while providing a clear framework for law enforcement access to user data. But forty years of technological change have exposed statutory gaps and inconsistencies that undermine both goals. Rather than proposing a wholesale rewrite, this report identifies four targeted reforms to harmonize ECPA’s provisions and improve statutory coherence. First, it proposes aligning the legal standard for pen register and trap-and-trace devices with the higher standard already required for comparable stored data under Section 2703(d) of the Stored Communications Act (SCA). Second, it proposes extending the Wiretap Act’s procedural protections—including predicate-crime limitations and suppression remedies—to electronic communications, which currently lack parity with oral and wire communications. Third, it proposes adding a suppression remedy for content obtained in violation of the SCA’s requirements. And fourth, it proposes establishing an explicit statutory right for communications service providers to challenge surveillance orders issued under the SCA. Taken together, these reforms would create a more coherent and technology-neutral framework, furthering important rule-of-law principles without undermining law enforcement’s legitimate investigative capabilities. – https://www.lawfaremedia.org/article/harmonizing-ecpa-to-close-gaps-and-increase-statutory-coherence

Unpacking and Updating the CLOUD Act

(Jennifer Daskal – Lawfare) The CLOUD Act, enacted in 2018, is the most significant amendment to ECPA in over a decade. It clarified that U.S. law enforcement can compel data from covered providers regardless of where it is stored, and it created a framework for foreign governments to enter into executive agreements with the United States, enabling direct access to non-Americans’ data held by U.S. providers subject to specified requirements. Eight years later, the act has failed to achieve its full potential. It has been mischaracterized as a new surveillance authority, when it changed neither the standards nor process for compelling data from providers within its jurisdiction, though it did clarify that data location was irrelevant to the authority to compel. Its executive agreement framework has also fallen short: only two agreements, with the U.K. and Australia, are in place, while EU and Canadian negotiations have stalled. The U.K. also leveraged its agreement to support a decryption mandate against Apple, despite the statute specifying that CLOUD Act agreements cannot create new decryption obligations. This report proposes three legislative fixes: codifying DOJ policy designed to ensure businesses and other enterprises retain more control over their own data; encouraging and explicitly enabling new executive agreements, including with supranational entities like the EU; and prohibiting use of CLOUD Act agreements to support foreign decryption mandates or other security-reducing measures. – https://www.lawfaremedia.org/article/unpacking-and-updating-the-cloud-act

Limiting Reverse Searches in the Stored Communications Act

(Paul Ohm – Lawfare) Law enforcement increasingly conducts “reverse searches”—requests that ask online providers to search their massive databases not for information about a known suspect, but to identify unknown individuals based on location, conduct, or search queries. The most prominent examples are geofence warrants, which seek to identify all device users near a particular location during a specified window, and reverse keyword warrants, which seek to identify everyone who searched for a particular term. These searches act as digital dragnets, sweeping through the private data of hundreds of millions of users to find a handful of potential suspects. This report argues that reverse searches pose grave threats to privacy and civil liberties, likely violate the Fourth Amendment’s prohibitions on general warrants and overbroad searches, and are probably not authorized under the current Stored Communications Act. It proposes a new statutory framework—Section 2703A—that would ban reverse searches by default while narrowly permitting specific categories, beginning only with geofence warrants. Authorized reverse searches would require superwarrant-like protections borrowed from the federal Wiretap Act, including necessity and serious crime predication. The proposal also codifies and improves upon Google’s three-step process for handling geofence warrants, adding judicial oversight and capping the number of individuals whose identifying information may be disclosed. – https://www.lawfaremedia.org/article/limiting-reverse-searches-in-the-stored-communications-act

New PRC Cybercrime Law Heralds Digital Iron Curtain

(Youlun Nie – The Jamestown Foundation) The Ministry of Public Security’s draft Cybercrime Prevention and Control Law marks a seminal shift from reactive policing to preventive governance, codifying a regulatory system designed to eliminate all remaining digital gray zones. By outlawing privacy-enhancing tools based on function rather than intent, enforcing real-name registration down to the network infrastructure layer, and nationalizing the discovery of cybersecurity vulnerabilities, the legislation effectively eradicates technical anonymity and centralizes state control over critical zero-day resources. The draft leverages administrative power through exorbitant fines and extrajudicial detention, enabling public security bureaus (PSBs) to bypass the formal justice system and impose crippling penalties on ordinary netizens, technical facilitators, and private enterprises. Projecting control globally, the legislation formalizes border controls and authorizes the freezing of assets linked to “fake information,” providing a robust domestic legal foundation for transnational repression against foreign entities, international personnel, and the Chinese diaspora. – https://jamestown.org/new-prc-cybercrime-law-heralds-digital-iron-curtain/

Nvidia and AMD chips could be subject to U.S. approvals for foreign sales

(Nathan Bomey – Axios) The Trump administration is reportedly weighing rules that would require foreign buyers to obtain licenses from the U.S. government to buy American AI chips. Why it matters: The AI chips sector has been flourishing as tech companies ramp up their spending on data centers and new AI models, underpinning the broader market. The big picture: The draft regulations would give “Washington broad control over whether other countries can build facilities for training and running artificial-intelligence models — and under what conditions,” Bloomberg reported Thursday. – https://www.axios.com/2026/03/05/ai-nvidia-chips-amd-exports

EU launches panel on child safety online and social media age rules

(DigWatch) The European Commission has convened a new expert panel tasked with examining how children can be better protected across digital platforms, including social media, gaming environments and AI tools. The initiative reflects growing concern across Europe regarding the psychological and safety risks associated with young users’ online behaviour. Announced during the 2025 State of the Union Address by Commission President Ursula von der Leyen, the panel will evaluate evidence on both the opportunities and harms linked to children’s digital engagement. – https://dig.watch/updates/eu-launches-panel-on-child-safety-online

Sovereign AI becomes a strategic question for governments

(DigWatch) Governments across the world are increasingly treating AI as a strategic capability that shapes economic development, public services and national security. Momentum behind the idea of ‘sovereign AI’ is growing as countries reassess who controls the chips, cloud infrastructure, data and models powering modern technology. Complete control over the entire AI stack remains unrealistic for most economies because of the enormous financial and technological costs involved. Global infrastructure continues to rely heavily on US technology firms, which still operate a large share of data centres and AI systems worldwide. – https://dig.watch/updates/sovereign-ai-becomes-a-strategic-question-for-governments

Data centres’ expansion in London sparks energy and climate debate

(DigWatch) London authorities are drafting new data centre policies amid concerns about their environmental impact and rising energy use. City Hall aims to balance the sector’s economic advantages with pressures on electricity, water, and emissions. The Greater London Authority (GLA) estimates that 10 large data centres generate around 2.7 million tonnes of carbon emissions due to their high electricity consumption. Of the 100 data centres the UK plans, about 60 will be in London. – https://dig.watch/updates/london-data-centres-policy

ECB reports minor impact of AI on employment

(DigWatch) AI has so far had only a small effect on employment across Europe, according to economists at the European Central Bank. A comparison of 5,000 firms, both AI users and non-users, showed no significant difference in job creation or reduction. Some firms that use AI intensively were even four percent more likely to hire new staff than average. – https://dig.watch/updates/ecb-reports-minor-impact-of-ai-on-employment

Growing risks from AI meeting transcription tools

(DigWatch) Businesses across the US and Europe are confronting new privacy risks as AI transcription tools spread through workplaces. Tools that automatically record and transcribe meetings increasingly capture sensitive conversations without clear consent. Privacy specialists warn that organisations in the US and Europe previously focused on rules controlling what employees upload into AI systems. Governance efforts now shift towards monitoring what AI tools record during daily work. – https://dig.watch/updates/growing-risks-from-ai-meeting-transcription-tools

Geostrategies

EU watchdog urges limits on US data access

(DigWatch) The European Union’s data protection watchdog has urged stronger safeguards as negotiations continue with the US over access to biometric databases. European Data Protection Supervisor Wojciech Wiewiórowski said limits must ensure Europeans’ data is used only for agreed purposes. Talks between the EU and the US involve potential arrangements that would allow US authorities to query national biometric systems. Databases across the EU contain sensitive information, including fingerprints and facial recognition data. – https://dig.watch/updates/eu-watchdog-urges-limits-on-us-data-access

Global AI race intensifies as China claims leadership in strategic technologies

(DigWatch) China asserted its position as the global leader in AI and strategic technology R&D, pledging to accelerate advancement toward technological autonomy. The assertion was prominently featured in government reports presented to the National People’s Congress. A National Development and Reform Commission report states that China leads international research, development, and implementation in AI, biomedicine, robotics, and quantum technology. The report also references advancements in domestic chip innovation as proof of progress. – https://dig.watch/updates/china-ai-research-lead

Security and Surveillance

China’s Agentic AI Controversy

(Samm Sacks – Lawfare) A powerful new artificial intelligence (AI) agent called OpenClaw, together with Moltbook, a social networking site just for AI agents, has rocked the tech world with fear and excitement about what an AI future could look like. Just three months earlier, China gave us a glimpse of that future with its own controversy erupting over the first-ever smartphone with an AI agent embedded in the operating system. These developments have unleashed fierce debate in China and around the world about the security and privacy trade-offs that come with the expansive permissions necessary for agentic AI to succeed. The outcome of these debates in China will have ripple effects on AI everywhere. The Doubao AI phone quickly became one of the hottest products in China’s fiercely competitive market. “I got my hands on one,” a friend in Beijing boasted late last year. He had bagged a limited-edition ByteDance ZTE AI smartphone, also called the Nubia M153, released to much fanfare in China in early December 2025. Nothing like it exists beyond China, and it is upending the way users interact with their devices and revolutionizing how information flows among apps. “It has ushered in a profound transformation when it comes to control of traffic entry points, the boundaries of data security, and the future paradigm of human-computer interaction,” proclaimed a readout from a special forum held in Beijing days after the phone’s release. Indeed, the AI phone has caused an uproar in China. Within days, many of China’s biggest apps blocked the Doubao phone, seeing it as a serious risk to data security. Built into the phone’s operating system, the embedded AI agent holds a kind of master key: blanket access to the screen and all app content, plus the ability to tap or click as if it were the user. Critics dubbed the agent a “burglar” with “god’s fingertips,” warning of increased risks of malicious input and intrusion attacks by criminal actors. For banks, it was impossible to distinguish actions taken by the agent from those of the user, creating myriad vulnerabilities for fraud and hacking. – https://www.lawfaremedia.org/article/china-s-agentic-ai-controversy

Iran’s MuddyWater Hackers Hit US Firms with New ‘Dindoor’ Backdoor

(Kevin Poireault – Infosecurity Magazine) Several US companies have been targeted by Iranian hacking group MuddyWater in a new campaign that started in early February and has continued after the US and Israeli military strikes on Iran. The campaign was detected by the Threat Hunter Team at Broadcom’s Symantec and Carbon Black. The potential victims include a US bank, a US airport, non-governmental organizations in both the US and Canada and the Israeli operation of a US software company that supplies the defense and aerospace sectors. Each of these organizations has experienced suspicious activity on their networks in recent days and weeks, said the Threat Hunter Team in a March 5 report. The campaign involves a previously unknown backdoor, dubbed ‘Dindoor’ by the cyber threat researchers. – https://www.infosecurity-magazine.com/news/iran-muddywater-hackers-us-firms/

Digital Psychological Warfare: How the Weaponization of Digital Platforms Threatens Minds, Markets, and Modern Institutions

(Tarnveer Singh – Infosecurity Magazine) Digital psychological warfare has become one of the most urgent and least understood threats facing modern organisations. This is explored in my new book, Digital Psychological Warfare: Weaponization of Digital Platforms. The weaponization of digital platforms is no longer confined to geopolitical conflict or fringe extremist groups. It now permeates mainstream social networks, workplace collaboration tools, customer-facing platforms, and even AI-driven systems. For C-suite executives, policymakers, and researchers, the challenge is clear: psychological harm is being engineered, amplified, and automated at scale—and organisations must be prepared to defend their people. – https://www.infosecurity-magazine.com/opinions/digital-psychological-warfare/

AI-Driven Insider Risk Now a “Critical Business Threat,” Report Warns

(Danny Palmer – Infosecurity Magazine) The risk of insider threats is on the rise, and businesses are concerned about the cybersecurity implications of intentionally malicious or negligent employees, research by Mimecast warns. According to the company’s State of Human Risk Report 2026, internal cybersecurity risk has grown across the board, to the extent that it should be treated as a “critical business threat.” In many cases, the additional insider risk stems from employees mishandling or actively abusing AI tools. – https://www.infosecurity-magazine.com/news/ai-insider-risk-critical-business/

Microsoft warns of ClickFix campaign exploiting Windows Terminal to deliver Lumma Stealer

(Pierluigi Paganini – Security Affairs) Microsoft revealed a new ClickFix campaign where attackers exploit Windows Terminal to run a complex attack chain, ultimately deploying Lumma Stealer malware. The campaign uses social engineering to trick users into executing malicious commands, highlighting growing risks to Windows environments. In February 2026, Microsoft Defender experts uncovered a widespread ClickFix campaign exploiting Windows Terminal. The researchers noticed that instead of the usual Run dialog method, attackers guide users to launch Terminal via Windows + X → I, creating a trusted-looking admin environment. This bypasses Run-dialog detections while prompting targets to paste malicious PowerShell commands from fake CAPTCHAs, troubleshooting prompts, or verification-style lures, blending the attack seamlessly into routine Windows workflows. – https://securityaffairs.com/189046/malware/microsoft-warns-of-clickfix-campaign-exploiting-windows-terminal-for-lumma-stealer.html

Iran-nexus APT Dust Specter targets Iraq officials with new malware

(Pierluigi Paganini – Security Affairs) Zscaler ThreatLabz researchers linked the Iran-nexus group Dust Specter to a campaign targeting Iraqi government officials. Threat actors impersonated the country’s Ministry of Foreign Affairs in phishing messages that delivered previously unseen malware, including SPLITDROP, TWINTASK, TWINTALK, and GHOSTFORM, through multiple infection chains. “In January 2026, Zscaler ThreatLabz observed activity by a suspected Iran-nexus threat actor targeting government officials in Iraq,” reads the report published by Zscaler. “Due to significant overlap in tools, techniques, and procedures (TTPs), as well as victimology, between this campaign and activity associated with Iran-nexus APT groups, ThreatLabz assesses with medium-to-high confidence that an Iran-nexus threat actor conducted this operation. ThreatLabz tracks this group internally as Dust Specter.” The researchers analyzed two attack chains used in the Dust Specter campaign targeting Iraqi officials. – https://securityaffairs.com/189033/apt/iran-nexus-apt-dust-specter-targets-iraq-officials-with-new-malware.html

U.S. CISA adds Apple, Rockwell, and Hikvision flaws to its Known Exploited Vulnerabilities catalog

(Pierluigi Paganini – Security Affairs) The U.S. Cybersecurity and Infrastructure Security Agency (CISA) added Apple, Rockwell, and Hikvision flaws to its Known Exploited Vulnerabilities (KEV) catalog. – https://securityaffairs.com/189005/security/u-s-cisa-adds-apple-rockwell-and-hikvision-flaws-to-its-known-exploited-vulnerabilities-catalog.html

Google GTIG: 90 zero-day flaws exploited in 2025 as enterprise targets grow

(Pierluigi Paganini – Security Affairs) Google’s Threat Intelligence Group (GTIG) identified 90 zero-day vulnerabilities exploited in the wild in 2025. While slightly below the 100 observed in 2023, the number increased from 78 in 2024, with researchers noting a rising trend of attacks specifically targeting enterprise technologies and corporate infrastructure. – https://securityaffairs.com/188993/security/google-gtig-90-zero-day-flaws-exploited-in-2025-as-enterprise-targets-grow.html

Defence, Intelligence, and Warfare

Armed robots take to the battlefield in Ukraine war

(Vitaly Shevchenko – BBC) Since the start of Russia’s full-scale invasion, the war in Ukraine has developed into a high-tech conflict. Swarms of spy and killer drones have set the skies of Ukraine abuzz, and uncrewed boats have crippled the Russian navy in the Black Sea. Now, Ukraine has embarked on a massive programme to deploy armed robots on the ground. Uncrewed ground vehicles (UGVs), or ground robot systems as they are known in Ukrainian military parlance, have already proven their worth. – https://www.bbc.com/news/articles/c62662gzlp8o

The War on Anthropic: Pretextual Designation and Unlawful Punishment

(Harold Hongju Koh, Bruce Swartz, Avi Gupta and Brady Worthington – Just Security) On Feb. 27, U.S. Secretary of Defense Pete Hegseth announced—via tweet—that he had directed his Department to “designate Anthropic a Supply-Chain Risk to National Security.” Accordingly, Hegseth claimed, “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” As one former Trump advisor put it, the designation amounted to “attempted corporate murder.” Soon thereafter, U.S. President Donald Trump chimed in via Truth Social, directing “EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.” “The Leftwing nut jobs at Anthropic,” the President threatened, “better get their act together, and be helpful” in phasing out their technology from government use or he would “use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.” Trump invoked no authority for his actions, although the Treasury Department, for one, soon followed his lead. The Department of Defense officially informed Anthropic of the designation on Mar. 5, and even before that official communication, some defense contractors had reportedly begun removing Anthropic’s frontier AI model Claude from their systems in response to the Administration’s perceived “blacklisting” of Anthropic. While there are reports that talks may be ongoing between the Department of Defense and Anthropic to attempt to resolve this dispute, the underlying issues raised by the Administration’s attack on Anthropic will persist. The use of rarely invoked national security authorities to target Anthropic is not a one-off action by this Administration. It should instead be viewed as the latest chapter in a concerted campaign of pretextual retaliation—masquerading as emergency national security regulation—that has characterized the second Trump Administration. Anthropic is but the latest in a long and growing list of targets of Donald Trump’s punitive presidency. President Trump has previously attempted to wield the power of the presidency to punish, among others, his political opponents, law firms that hired or represented them, universities whose administrative decisions he disagreed with, journalistic outlets that refused to use his preferred terminology, and companies that refused to fire Trump’s political enemies. The background of the Trump Administration-Anthropic dispute has been treated in greater depth elsewhere. But the underlying disagreement is straightforward. The Department of Defense demanded that Anthropic, as a Department contractor, not prevent its technology from being used both for domestic mass surveillance of Americans and for fully autonomous lethal weapons. After Anthropic refused to abandon its core principles against such uses, Hegseth not only announced that the Department’s contract with Anthropic would be cancelled, but also labeled Anthropic a “supply chain risk”—exiled, by fiat, from doing business with the federal government as well as (according to Hegseth) any other government contractor. What Anthropic believed to be core principled limitations on how its technology should be used, Hegseth characterized as Anthropic’s “cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.” – https://www.justsecurity.org/133247/anthropic-hegseth-unlawful-punishment/

Artificial Urgency: Reflecting on AI Hype at the 2026 REAIM Summit

(Zena Assaad – Just Security) The third Summit on Responsible AI in the Military Domain (REAIM) took place from February 4 to 5 in A Coruña, Spain, bringing together States, non-governmental actors, academics, and tech industry representatives. This year’s Summit aimed to build upon previous gatherings, which focused on establishing a common understanding of the challenges and opportunities associated with military AI governance, by moving towards “concrete, practical, and realistic steps to translate previously agreed principles into effective and tangible measures.” This action-oriented objective was complemented by an opening plenary panel focused on technical understandings and considerations around AI—a perspective that is concerningly diluted in the legal and policy discourse on military AI. This set the tone for ongoing discussions on the tension between theoretical framings of AI and technical realities—a tension that surfaces many misconceptions and miscalibrated measures for procuring and implementing this technology. – https://www.justsecurity.org/132504/ai-hype-2026-reaim-summit/

Fog, Proxies and Uncertainty: Cyber in US-Israeli Operations in Iran

(Louise Marie Hurel – RUSI) These are days of considerable uncertainty in Iran and across many countries in the Middle East and, as with any military intervention, reporting in the first instance remains at best speculative. As we carefully assess the potential, and eventually actual, role and effects of cyber capabilities and activities in the context of Operations Epic Fury and Roaring Lion, there are at least seven elements that merit close attention. – https://www.rusi.org/explore-our-research/publications/commentary/fog-proxies-and-uncertainty-cyber-us-israeli-operations-iran

Frontiers and Markets

UK to launch new lab for breakthrough AI research

(DigWatch) Researchers in the UK will gain a new AI lab designed to drive transformational breakthroughs in healthcare, transport, science, and everyday technology, supported by government funding. The lab will provide up to £40 million in funding over six years, alongside substantial access to large-scale computing resources, inviting UK researchers to pitch their most ambitious ideas. – https://dig.watch/updates/uk-to-launch-new-lab-for-breakthrough-ai-research