Weekly Digest on AI and Emerging Technologies (13 April 2026)

Daily Digest on AI and Emerging Technologies (8 April 2026) – https://pam.int/daily-digest-on-ai-and-emerging-technologies-8-april-2026/

Daily Digest on AI and Emerging Technologies (9 April 2026) – https://pam.int/daily-digest-on-ai-and-emerging-technologies-9-april-2026/

Daily Digest on AI and Emerging Technologies (10 April 2026) – https://pam.int/daily-digest-on-ai-and-emerging-technologies-10-april-2026/


Governance/Regulation/Legislation

White House AI Framework Proposes Industry-Friendly Legislation

(Jakub Kraus – Lawfare) On March 20, the White House released a “comprehensive” national framework for artificial intelligence (AI), three months after calling for legislative recommendations on the technology in an executive order that sought to curb certain state AI laws. The framework has already received support from influential Republicans in Congress, including House Speaker Mike Johnson (R-La.) and Sen. Ted Cruz (R-Texas), who will likely work closely with the White House to advance AI legislation aligned with the framework. On the other side of the aisle, Sen. Maria Cantwell (D-Wash.), who serves alongside Cruz as ranking member of the Senate’s commerce committee, said the framework “identifies key areas to address.” Thus, the framework offers a fairly clear sketch of which types of AI policy could become U.S. law before the 2026 midterm elections. – https://www.lawfaremedia.org/article/white-house-ai-framework-proposes-industry-friendly-legislation

Non-State Entities and National Security

(David S. Kris – Lawfare) In March, the Defense Department designated Anthropic a supply chain risk, sparking legal controversy. The designation came after Anthropic prohibited the use of its artificial intelligence (AI) model, Claude, for “mass domestic surveillance” and for “fully autonomous weapons.” Apart from its fascinating particulars, this ongoing dispute reveals a fundamental shift: Non-state entities (NSEs) of various kinds—corporations, universities, and individuals—are becoming much more important for national security. These entities are increasingly enabling, and sometimes limiting, defense and intelligence activity. Unsurprisingly, states are pushing back, treating NSEs as rival geopolitical actors, and using a wide array of carrots and sticks to dominate them. Both trends—NSEs limiting states, and states dominating NSEs—stress existing frameworks for national security governance. Those frameworks were not designed for the current geopolitical reality and would benefit from a systematic review. – https://www.lawfaremedia.org/article/non-state-entities-and-national-security

The Code Is Not the Law: Why Claude’s Constitution Misleads

(Lisa Klaassen, Ralph Schroeder – Lawfare) In January, the frontier artificial intelligence (AI) company Anthropic published a landmark document for its AI model called Claude’s Constitution. Described as Anthropic’s “vision for Claude’s character,” the document marked a notable departure from standard industry prose. It is not simply a safety policy or a public-facing white paper. Anthropic frames the constitution as a legal and philosophical charter: a detailed account of the values, priorities, and forms of judgment that should guide Claude’s behavior—and one that the company says will play a “crucial role” in training future versions of the model. At first glance, the constitution is a groundbreaking exercise in transparency. Across 84 pages, Anthropic sets out an ambitious vision for how the model is supposed to behave. Claude should be not merely useful but also “broadly safe,” “broadly ethical,” compliant with company guidelines, and “genuinely helpful” to the user, in that order of priority. Its significance lies in revealing, in unusually explicit terms, how one major AI corporation attempts to govern its technology from within. Lawfare has already given the document close attention. Kevin Frazier argues that the constitution is important because it moves beyond the dry mechanics of a system prompt and invites public engagement in the shaping of a frontier model. Alan Rozenshtein offers a different reading, treating the document less as a legal charter than as a “character bible” for an artificial agent. Both articles illuminate why the constitution warrants scrutiny. Yet neither confronts its central problem: Anthropic’s framing overstates the document’s legitimacy while understating where the power to shape AI behavior actually resides. Transparency should not be mistaken for conceptual clarity or institutional legitimacy. The challenges presented by the constitution, in our view, are threefold. First, the constitution anthropomorphizes Claude, encouraging readers to think of the model as though it possesses the moral character of a human being. Second, it borrows the language of constitutionalism—and, with it, the symbolic authority of public law—for what remains a corporate product. Third, the document presents a hierarchy of “principals” in which Anthropic retains ultimate authority, while the implications for developers and end users are left thinly specified. These features are not incidental defects; they shape how responsibility is allocated, how legitimacy is imagined, and why users may be encouraged to trust the model. – https://www.lawfaremedia.org/article/the-code-is-not-the-law–why-claude-s-constitution-misleads

Grammarly Lawsuit Shows Existing Laws Can Combat Deepfakes

(Jennifer E. Rothman – Lawfare) Debates about synthetic media have been dominated by concerns about deepfakes—audio and video fabrications that appear to be authentic recordings when they are not. These deepfakes threaten to erode trust in everything from elections to court proceedings to intimate relationships. They also threaten people’s livelihoods. With the recent dramatic improvement in the accessibility and quality of generative artificial intelligence (AI), the locus of concern has expanded to virtually every context. The most recent flashpoint is not a forged video of a world leader or a sex tape, but something much more benign: a writing assistant. In early March, Wired reported that the AI-powered writing tool Grammarly, which promises to help guide and generate users’ writing, offered users the ability to edit text “in the style” of identifiable journalists and scholars without their consent, allegedly singling out specific people by name and thereby signaling their participation in or endorsement of the service. What might once have seemed like a parlor trick has now become the basis for litigation, raising foundational questions about identity, attribution, and control in an age of generative-AI authorship. One of the key tools to combat such overreaching impersonations is the right of publicity—a legal doctrine that gives individuals control over the use of their name, likeness, voice, and other recognizable aspects of identity when used without authorization by others. The right is governed primarily by state law. – https://www.lawfaremedia.org/article/grammarly-lawsuit-shows-existing-laws-can-combat-deepfakes

India proposes new rules to regulate news and political posts on social media

(Cherylann Mollan, Umang Poddar – BBC) The Indian government has proposed changes to extend its regulatory framework to a wider range of online news voices, including influencers and podcasters on platforms such as Facebook, YouTube and X. Last week, the Ministry of Electronics and Information Technology (MeitY) suggested amendments to India’s IT rules – which govern digital media content – to include “users who are not publishers” who share content related to “news and current affairs” within a “code of ethics” it currently applies to registered news publishers. Experts say this will potentially give the government more power over news-related posts shared by ordinary users, including independent journalists and podcasters. The government has proposed requiring social media platforms to follow orders and guidelines if they want to keep “safe harbour” protection – legal immunity from liability for content posted by users. The proposed amendments have alarmed digital rights activists and independent news creators, who say they could enforce near-total compliance with state-led censorship on social media platforms. They also warn the rules could be misused to target critics and clamp down on dissent. The government says the amendments will strengthen existing IT rules and curb fake news, hate speech and deepfakes, and has invited public feedback by 14 April. But critics remain sceptical of the government’s stated intentions. Akash Banerjee, who runs the YouTube channel The Deshbhakt with more than six million subscribers, says the rules could create a climate of fear, pushing many creators toward self-censorship. – https://www.bbc.com/news/articles/ce9mx2j3xlxo

Geostrategies

New Internet of Things Plan Targets Global Infrastructure

(Matthew Johnson – The Jamestown Foundation) A new action plan for the Internet of Things (IoT) increases the possibility that Chinese-built connected infrastructure in the United States could become a platform for data access, cyber pre-positioning, and attacks on U.S. cyber-physical systems in a prolonged crisis or confrontation. The plan, launched jointly by nine ministries, defines IoT as a total cyber-physical environment that links “people, machines, and things” across sensing, networks, platforms, applications, and security, and sets targets for 10 billion terminal connections, more than 50 standards, and deployment across production, consumption, and governance. The plan indicates Beijing is moving from connected devices to connected backbone systems. It reinforces the new Five-Year Plan, suggesting that the People’s Republic of China (PRC) wants to supply not only endpoints like sensors, appliances, and vehicles but also the next generation of AI, computing, and space-ground communications infrastructure that will underpin them. – https://jamestown.org/new-internet-of-things-plan-targets-global-infrastructure/

Procurement Documents Reveal AI Chip Workarounds

(Sunny Cheung, Kai-shing Lau – The Jamestown Foundation) Procurement records suggest that U.S. export controls on frontier artificial intelligence (AI) chips imposed meaningful constraints until January 2026, as institutions in the People’s Republic of China (PRC) appeared to be adapting procurement practices to preserve access rather than replacing foreign hardware with domestic alternatives. Tender documents from universities and state-linked entities show repeated efforts to obtain Nvidia H200-class computing power. Some mention H200s explicitly, others specify capabilities that can only refer to H200s, and others still obfuscate by appending politically acceptable labels such as “domestic chips” or “H20” to specifications that indicate they actually refer to H200s. The persistence of these workarounds indicates that despite visible progress by domestic firms such as Huawei, PRC alternatives remain insufficient for the most demanding frontier AI workloads, making continued access to U.S. hardware strategically consequential. – https://jamestown.org/procurement-documents-reveal-ai-chip-workarounds/

PRC’s Photonic Chip Push Signals Leapfrogging Moment

(Sunny Cheung – The Jamestown Foundation) The People’s Republic of China (PRC) has gone from a single pilot production line to a string of headline breakthroughs in photonics technology since 2024. Beijing has framed progress by researchers at Shanghai Jiao Tong University, Tsinghua University, Fudan University, and the Chinese Academy of Sciences (CAS) as a way around U.S. chip sanctions. Photonic chips, which move information using light rather than electricity, are faster, run cooler, and—crucially for Beijing—do not depend on the cutting-edge factory equipment that the United States has blocked the PRC from buying. PRC labs are at or near the global frontier in several photonic research benchmarks, but the United States and Taiwan still dominate the parts of the photonic supply chain that turn lab demonstrations into viable, scalable products. – https://jamestown.org/prcs-photonic-chip-push-signals-leapfrogging-moment/

Cyber Security & Surveillance

CVE-2026-39987: Marimo RCE exploited in hours after disclosure

(Pierluigi Paganini – Security Affairs) A critical flaw in Marimo, tracked as CVE-2026-39987 (CVSS score of 9.3), was exploited within hours of its disclosure on April 8, 2026. The Sysdig Threat Research Team observed exploitation of the Marimo flaw within 9 hours and 41 minutes of disclosure, with credential theft completed in under 3 minutes, despite the absence of public exploit code. Marimo is an open-source Python notebook tool used for data science, analysis, and interactive coding. The bug allows pre-authenticated remote code execution and affects versions up to 0.20.4. Version 0.23.0 addressed the issue. – https://securityaffairs.com/190623/hacking/cve-2026-39987-marimo-rce-exploited-in-hours-after-disclosure.html

Ransomware attack on ChipSoft knocks EHR services offline across hospitals in the Netherlands and Belgium

(Pierluigi Paganini – Security Affairs) ChipSoft, a major Dutch provider of EHR systems, was hit by a ransomware attack that forced it to take its website and digital services offline, disrupting access for hospitals, healthcare providers, and patients. EHR (Electronic Health Record) is a digital version of a patient’s medical history, stored and managed by healthcare providers. The company’s flagship HiX platform, widely used across the Netherlands, was impacted, with users reporting outages earlier this week. The ransomware attack occurred on April 7, and the Dutch CERT Z-CERT has been coordinating closely with the vendor and healthcare institutions. As a precaution, access to key services like Zorgportaal, HiX Mobile, and Zorgplatform was disabled, with systems now being gradually restored and new credentials issued to users. – https://securityaffairs.com/190615/cyber-crime/ransomware-attack-on-chipsoft-knocks-ehr-services-offline-across-hospitals-in-the-netherlands-and-belgium.html

UAT-10362 linked to LucidRook attacks targeting Taiwan-based institutions

(Pierluigi Paganini – Security Affairs) LucidRook is a new Lua-based malware used in targeted phishing attacks against NGOs and universities in Taiwan. Cisco Talos links it to a skilled group tracked as UAT-10362. In October 2025, attackers used password-protected email attachments to spread the malware in spear-phishing campaigns. “Cisco Talos observed a spear-phishing attack delivering LucidRook, a newly identified stager that targeted a Taiwanese NGO in October 2025. The metadata in the email suggests that it was delivered via authorized mail infrastructure, which implies potential misuse of legitimate sending capabilities.” reads the report published by Cisco Talos. “The email contained a shortened URL that leads to the download of a password protected and encrypted RAR archive. The decryption password was included in the email body. Based on this email and the collected samples, Talos observed two distinct infection chains originating from the delivered archives.” The phishing emails appear to have come from legitimate infrastructure and included shortened links to password-protected RAR archives, with the passwords included in the message body. The archives contained fake government or security-related decoy documents to distract victims. – https://securityaffairs.com/190598/security/uat-10362-linked-to-lucidrook-attacks-targeting-taiwan-based-institutions.html

EngageLab SDK flaw opens door to private data on 50M Android devices

(Pierluigi Paganini – Security Affairs) Microsoft researchers found a critical flaw in the EngageLab SDK that lets apps bypass Android sandbox protections and access private data. The flaw put millions of users, including over 30M crypto wallet installs, at risk. Developers fixed it in version 5.2.1 after coordinated disclosure, and vulnerable apps were removed from Google Play. The good news is that no active exploitation has been confirmed, but the case highlights risks from third-party SDKs widely used in mobile apps. “As mobile wallets and other high‑value apps become more common, even small flaws in upstream libraries can impact millions of devices. These risks increase when integrations expose exported components or rely on trust assumptions that aren’t validated across app boundaries.” reads the report published by Microsoft. – https://securityaffairs.com/190586/hacking/engagelab-sdk-flaw-opens-door-to-private-data-on-50m-android-devices.html

Bitcoin Depot hack leads to $3.6M Bitcoin theft via stolen credentials

(Pierluigi Paganini – Security Affairs) Hackers breached the largest US Bitcoin ATM operator, Bitcoin Depot, on March 23, stole login credentials, and drained about 50.9 BTC worth $3.6M from company wallets. Bitcoin Depot told the SEC that a hacker accessed its systems and stole credentials linked to its digital asset settlement accounts, gaining control and enabling unauthorized activity. – https://securityaffairs.com/190578/cyber-crime/bitcoin-depot-hack-leads-to-3-6m-bitcoin-theft-via-stolen-credentials.html

Just Three Ransomware Gangs Accounted for 40% of Attacks Last Month

(Danny Palmer – Infosecurity Magazine) Just three ransomware groups were responsible for 40% of all ransomware attacks during the last month, analysis of reported incidents has revealed. According to cybersecurity analysts at Check Point, a total of 672 ransomware incidents were reported during March 2026, representing an increase in attacks compared with the previous month. The figures, released on April 9, detailed how three ransomware operations dominated the attack landscape, accounting for 40% of incidents between them. – https://www.infosecurity-magazine.com/news/three-ransomware-gangs-40-percent/

When Agentic AI Becomes Your Riskiest Third Party

(Tarnveer Singh – Infosecurity Magazine) Agentic AI has evolved from a buzzword to a practical tool. Unlike a typical AI Large Language Model (LLM), these systems do more than generate text: They can plan tasks, act on them, and chain tools together autonomously. Essentially, they behave like digital teammates, performing multistep tasks toward specific goals rather than just answering prompts. This new capability changes the security landscape for your business. Many third-party risk management (TPRM) programs still treat AI tools as standard software. They ignore how the autonomy and system access in these tools can create a severe security risk. Organizations that underestimate agentic AI may face operational, financial, and security problems. – https://www.infosecurity-magazine.com/opinions/when-agentic-ai-becomes-riskiest/

Defense/Intelligence/Warfare

DIA centralizes AI efforts with Digital Modernization Accelerator

(Sydney J. Freedberg Jr. – Breaking Defense) After a year-long push to rationalize its previously uncoordinated AI efforts, the Defense Intelligence Agency is institutionalizing its new, more centralized and more efficient approach, the DIA’s chief AI officer said Thursday. “One of the things that we were serious about was getting out capabilities quickly,” said Maj. Gen. Robert Kinney at SCSP’s ai+intelligence conference here. “[When] I talk to the team … I reinforce this frequently: I want you to move like somebody’s on your heels, and they’re about ready to eat you.” Those reforms include a new “hub-and-spoke” organization centered around the Digital Modernization Accelerator (DMA), created on March 1 as the permanent incarnation of the ad hoc Task Force Sabre, Kinney said. Nicknamed the “Maverick Accelerator,” it is already helping the DIA consolidate scarce expertise and push out technical support — not just to the agency’s own directorates, Kinney said, but also to the four-star theater Combatant Commands around the world. – https://breakingdefense.com/2026/04/dia-centralizes-ai-efforts-with-digital-modernization-accelerator/

CIA employees will get AI ‘coworkers’—and eventually run teams of AI agents, deputy says

(David DiMolfetta – Defense One) The Central Intelligence Agency aims to integrate artificial intelligence-powered “coworkers” into analysts’ workflows in coming years, a top official said Thursday. CIA Deputy Director Michael Ellis said these AI coworkers would be housed in agency analytics platforms to help humans with basic tasks. “It won’t do the thinking for our analysts, but it will help draft key judgments, edit for clarity and compare drafts against tradecraft standards,” Ellis said in a speech at a Special Competitive Studies Project event on AI and the intelligence community. The AI tools would help triage and flag trends for human analysts to review. – https://www.defenseone.com/technology/2026/04/cia-ai-coworkers-agents/412746/

Ukrainian Military Offers Lessons Learned to NATO (Part One)

(Taras Kuzio – The Jamestown Foundation) In March, a senior North Atlantic Treaty Organization (NATO) military delegation led by Supreme Allied Commander for Transformation Admiral Pierre Vandier visited Kyiv, highlighting a new phase of military cooperation between Ukraine and the alliance. Ukraine is gaining the status of a military innovator as Kyiv leads its own military training, increases success on the frontlines, expands medium and long-range missile attacks against Russia, targets Russian energy infrastructure, and sees urgent demand from Europe and the Gulf states for its military technology. Battlefield-tested drone tactics, advanced command-and-control systems, and a rapidly expanding private defense sector have made Ukraine one of the world’s leading laboratories for modern warfare, positioning it as a future hub for Western military innovation. – https://jamestown.org/ukrainian-military-offers-lessons-learned-to-nato-part-one/