Weekly Digest on AI and Emerging Technologies (7 April 2026)

Governance/Regulation/Legislation

US agencies launch national AI workforce initiative

(DigWatch) The US Department of Labor and the National Science Foundation have formalised a partnership to prepare the American workforce for the rapid expansion of AI. The agreement supports the launch of the TechAccess: AI-Ready America initiative, designed to broaden access to AI education, tools, and training across industries. – https://dig.watch/updates/us-agencies-national-ai-workforce-initiative

Canada reviews Privacy Act to modernise data protection and digital governance

(DigWatch) The Government of Canada has launched a formal review of the Privacy Act, opening a broader effort to modernise how the federal public sector governs personal data in an increasingly digital administrative environment. Led by the Treasury Board of Canada Secretariat and announced by Shafqat Ali, President of the Treasury Board, the process will reassess how more than 250 government institutions collect, use, share, and protect personal information. – https://dig.watch/updates/canada-reviews-privacy-act-to-modernise-data-protection-and-digital-governance

Amnesty International warns EU tech law reforms could weaken GDPR and AI Act protections

(DigWatch) Amnesty International has warned that proposed EU reforms presented as a way to simplify digital regulation and boost competitiveness could weaken core safeguards for privacy and fundamental rights. At the centre of the concern is the European Commission’s ‘Digital Omnibus’ initiative, which would affect major pieces of legislation, including the General Data Protection Regulation and the AI Act. Amnesty and other civil society groups argue that the package risks reopening key protections in the EU’s digital rulebook under the banner of regulatory simplification. – https://dig.watch/updates/amnesty-international-warns-eu-tech-law-reforms-could-weaken-gdpr-and-ai-act-protections

Privacy roadblock stunts von der Leyen’s anti-red tape crusade

(Ellen O’Regan – Politico) Europe’s capitals and lawmakers are bringing Ursula von der Leyen’s campaign to ease data rules to a halt, saying she’s gone too far in cutting back privacy rights. The European Commission is pushing a plan to scale back rules designed to protect the data and privacy of Europeans in an effort to boost artificial intelligence technology on the continent and to compete with the United States and China. But key negotiating partners are resisting the “digital omnibus” that aims to reform key parts of the bloc’s privacy law, the General Data Protection Regulation (GDPR). “We should not start, within the omnibus, changing the main principles of the GDPR,” said Marina Kaljurand, one of the two lead lawmakers drafting the European Parliament’s position on the proposal. – https://www.politico.eu/article/privacy-roadblock-stunts-ursula-von-der-leyen-anti-red-tape-crusade-gdpr/

Artificial Intelligence Is Facing a Crisis of Control—and the Industry Knows It

(Gordon M. Goldstein – Council on Foreign Relations) In the rapidly evolving age of artificial intelligence (AI), new milestones occur at a dizzying pace, as the U.S.-Israeli attack on Iran vividly illustrates. The war in the Persian Gulf reflects the technology’s deepest integration yet into multifaceted domains of warfighting, including intelligence analysis, target identification, battle simulations, covert reconnaissance, and exotic forms of war disinformation—all executed with astounding speed. Admiral Brad Cooper, head of U.S. Central Command, recently touted AI’s influence on the war in a video update. “These systems help us sift through vast amounts of data,” he said. “Advanced AI tools can turn processes that used to take hours and sometimes even days into seconds.” These developments are indeed impressive, but they are simply data points in a larger transformative narrative that has been accelerating since at least 2023. The world’s leading AI companies are increasingly becoming both architects and instruments of global security in the twenty-first century, rivaling the influence of nation-states. The security environment they are shaping is characterized by a fundamental dynamic: AI companies are developing and unleashing new technologies that can evade human control, a mutating crisis that industry leaders and AI experts have been remarkably transparent in disclosing. The crisis of control has two dimensions. The first relates to what might be called AI proliferation, the growing capacity for malevolent individuals and groups to potentially use emerging technology to design and deploy a terrifying new generation of chemical weapons, synthetic pathogens, and autonomous cyber weapons that can breach and sabotage the world’s critical infrastructure. The second is equally ominous. AI companies have honestly reported multiple instances when their models engage in elaborate acts of deception and manipulation, and attempt to go rogue.
The world is watching the development of a compounding, consistent, and treacherous problem. Urgent warnings over several years have failed to generate viable solutions to address a metastasizing threat. In the absence of government or societal action, AI companies—the messengers of risk—may also have to be the gamekeepers of this new technology. – https://www.cfr.org/articles/artificial-intelligence-is-facing-a-crisis-of-control-and-the-industry-knows-it

UN Norms: Tackling the Rise of Cyber Capabilities

(James A. Lewis – RUSI) The UN’s Open Ended Working Group (OEWG) – the process responsible for devising ‘rules of the road’ for responsible state behaviour in cyberspace – concluded its work in July 2025, marking the end of a cycle of negotiations on cybersecurity that began 21 years ago. What will take its place remains uncertain and raises issues regarding the future of norms and rules for states in cyberspace. The Final Report of the OEWG broke little new ground because states did not wish to go beyond the ideas discussed in six earlier Groups of Government Experts (GGE) and two OEWGs. This means the substantive agreements in the 2015 GGE – when states agreed on 11 norms – are the high-water mark of UN cyber negotiations. But it has been over a decade since that agreement, and further progress requires that states now transition from the OEWG to a new ‘Permanent Mechanism.’ – https://www.rusi.org/explore-our-research/publications/commentary/un-norms-tackling-rise-cyber-capabilities

Responsible AI gaps highlighted in UNESCO and Thomson Reuters Foundation report

(DigWatch) A new global report from UNESCO and the Thomson Reuters Foundation suggests that companies are adopting AI faster than they are building the internal systems needed to govern it responsibly, exposing significant gaps in oversight, accountability, and risk management. Based on data from 3,000 companies, the report found that 44% have an AI strategy, but only 10% are publicly committed to following an AI governance framework. The gap, according to the report, is no longer one of awareness but of implementation. Many companies now present responsible AI as a principle or ambition, yet provide far less detail on where AI is used, how risks are managed in practice, who is responsible when systems fail, or how concerns are escalated internally. Governance is often described at a conceptual level, but much less often backed by visible operational mechanisms. – https://dig.watch/updates/unesco-responsible-ai-practice-report

EIB highlights AI as key driver of Croatia’s economic growth

(DigWatch) The European Investment Bank and the Croatian National Bank have emphasised the strategic importance of AI in strengthening Croatia’s economic competitiveness. Discussions at a joint conference focused on accelerating AI adoption through coordinated investment, policy development and skills enhancement. Despite strong investment activity among firms in Croatia, the uptake of advanced technologies remains limited. Only a small share of companies systematically use generative AI, with applications largely confined to internal processes, highlighting significant untapped potential for productivity gains. – https://dig.watch/updates/eib-highlights-ai-as-key-driver-of-croatias-economic-growth

Mozambique explores AI role in strengthening electoral systems

(DigWatch) Electoral stakeholders in Mozambique are examining the growing role of AI in democratic and electoral processes. AI tools are increasingly used to improve voter registration, logistics, and public engagement, yielding greater efficiency and accessibility. Concerns remain around data protection, digital security, and institutional accountability. Officials and partners stressed that while AI can strengthen electoral administration, it also introduces risks that require careful governance and clear ethical safeguards. – https://dig.watch/updates/mozambique-ai-role-electoral-systems

Serbia launches LORYA to turn cultural heritage into AI-ready language data

(DigWatch) Serbia has launched LORYA, a new platform that uses AI-supported document processing to convert books, newspapers, manuscripts, and other written heritage materials into clean, structured, machine-readable data for research, education, and language technologies. Developed by the UN Development Programme, the Mathematical Institute of the Serbian Academy of Sciences and Arts, and the National Library of Serbia, with support from France and Japan, the project is aimed not only at preserving written cultural heritage, but also at addressing a broader AI problem: the weak representation of underrepresented languages, scripts, and historical texts in digital training data. – https://dig.watch/updates/serbia-lorya-digitise-cultural-heritage-using-ai

Terrorism/Counter-Terrorism

Global Governance of Emerging Technologies: Counterterrorism Challenges at the United Nations Security Council

(David Scharia – Just Security) Terrorist organizations rarely lead in technological innovation and are not early adopters of emerging technologies. Their incorporation of technological advancements into operational activities is typically measured and gradual, influenced by organizational structure, resource limitations, opportunities for learning and concerns that new technologies may increase their exposure to counterterrorism operations. Nevertheless, terrorist organizations have demonstrated considerable skill in exploiting widely available commercial technologies to advance their operational objectives. Their strength stems from strategically adapting universally accessible technologies to suit their needs. They have become particularly adept at utilizing social media and artificial intelligence for propaganda dissemination, recruitment, and the orchestration of terrorist activities. The capacity to exploit commercially available technologies for nefarious purposes poses substantial challenges for governance of these technologies. Policymakers and security agencies must maintain constant vigilance and address these evolving threats while simultaneously incorporating advanced technologies into counterterrorism strategies. For instance, biometric technology is systematically utilized at borders and airports to detect suspected individuals. Surveillance technologies enable monitoring and situational awareness. Signal intelligence is deployed to intercept communications across a range of digital channels. Blockchain technologies are increasingly leveraged to trace illicit financial transactions, while artificial intelligence facilitates predictive analytics, behavioral pattern recognition, the processing of large datasets and the rapid removal of terrorist propaganda. Collectively, these technologies constitute a multilayered counterterrorism architecture that demonstrates a profound reliance on emerging technologies. Below, I trace how the U.N. 
Security Council has adapted to the terrorists’ use of emerging technologies over time, relying on principles and non-binding guidance that offer flexibility and the possibility of building consensus in an increasingly uncertain time. – https://www.justsecurity.org/134602/governance-emerging-technologies-counterterrorism-unsc/

Persistent Threat of Drone-Enabled Lone Actor Terrorism

(Rueben Dass – The Jamestown Foundation) Lone-actor terrorists and small cells—including Islamic State (IS) affiliates and right-wing extremists—are increasingly attempting to use commercial drones for remote surveillance, weapons transport, and attacks. The threat is worsened by the easy procurement of drones, commercial 3D printers, and detailed online instructional manuals distributed by terrorist groups that lower the technical barriers to weaponization. Although commercial drones have limited payload capacities, their use can generate substantial psychological panic, necessitating comprehensive countermeasures such as tighter legislation, user tracing mechanisms, and intelligence-led operations. – https://jamestown.org/persistent-threat-of-drone-enabled-lone-actor-terrorism/

Defence/Intelligence/Warfare

Deminers race to keep up with military technology

(UN News) In conflict zones where new technologies are making landmines more dangerous, deminers must innovate at the same pace to avoid being left behind, a leading UN mines expert has told UN News. In the Ukraine conflict, landmine technology is setting a precedent for a new era of development. 3D printers are used to produce basic models of landmines close to the battlefield, which can then be easily assembled, filled with explosives and dropped by drones. In fact, the majority of mines deployed in Ukraine today are being laid remotely, either by artillery, rockets, helicopters, or drones. “We’re also seeing much more high-tech mines being deployed,” making landmine detection a “much more complicated and dangerous task”, said Paul Heslop, Head of the UN Mine Action Service (UNMAS) in Ukraine. These “high-tech” landmines are equipped with sensors that can detect a deminer approaching, whether on foot or in a vehicle, and then detonate. Some even have magnetic influence capabilities, meaning they can explode when exposed to the magnetic field of a detector. “The piece of technology you’re using to find the mine may actually activate the mine,” Mr. Heslop said. As the International Day for Mine Awareness and Assistance in Mine Action is marked on 4 April, the UN mine specialist said the biggest challenge is winning the arms race: clearing mines faster than the technology designed to prevent their clearance can evolve. – https://news.un.org/en/story/2026/04/1167247

Russia’s Unmanned Systems Forces Become Wildcard in Moscow’s Military Modernization

(Hlib Parfonov – The Jamestown Foundation) Russia has established the Unmanned Systems Forces (USF) as an independent military branch, institutionalizing drone warfare after lessons from Ukraine, reflecting a doctrinal shift toward drones as a central component of modern combined-arms operations. The USF features a centralized command structure overseeing development, procurement, training, and deployment, with integrated units across all command levels and a dedicated acquisition system managed jointly with the Ministry of Defense’s research directorate. Moscow plans to recruit nearly 79,000 personnel by 2026, drawing heavily from students, veterans, and technically skilled civilians, while expanding university training pipelines and specialized academies to sustain long-term drone force development. A four-phase roadmap for the USF envisions a massive expansion to roughly 210,000 personnel and nearly 1,000 units, embedding drone capabilities across ground, air, and naval forces, despite funding gaps and organizational challenges. Russia’s buildup parallels Ukraine’s earlier institutionalization of its drone force and signals an enduring transformation, with drones accounting for most battlefield fire missions and reshaping operational doctrine, particularly along the North Atlantic Treaty Organization’s (NATO’s) northern and northeastern strategic fronts. – https://jamestown.org/russias-unmanned-systems-forces-become-wildcard-in-moscows-military-modernization/

A Feasible Precaution Ignored: AI Targeting Algorithms and the Failure to Recognize Protected Emblems

(Michael Loftus – Just Security) In Aug. 2021, a U.S. drone strike in Kabul, Afghanistan killed an aid worker, moments after he had loaded water jugs into his car, along with nine other civilians—including seven children. In Nov. 2023, an Israeli missile strike near the Lebanese border killed a grandmother and her three granddaughters who had also been handling water jugs. In April 2024, when the Israel Defense Forces (IDF) struck a visibly marked World Central Kitchen convoy, international condemnation of Israel’s campaign in Gaza spiked. And most recently, the Feb. 28 Tomahawk strike on the Shajarah Tayyebeh Elementary School in Minab, as well as reports of widespread use of algorithmic targeting in the ongoing Iran campaign, have provoked deep concern from the U.S. Congress and the public regarding military operations relying on AI models (although it is not yet clear the extent to which AI contributed to the failures that led to the Minab strike). With algorithmic targeting shrinking the role of human operators and decentralized strike capabilities magnifying coordination challenges, high-profile incidents of civilians killed on the battlefield produce strategic-level consequences for both countries and the corporations that underwrite military capabilities. Existing Testing, Evaluation, Validation and Verification (TEVV) procedures insufficiently address the growing role of algorithms in military targeting, which risks undermining respect for humanitarian law. While the U.S. Defense Department’s (DoD) responsible AI implementation guidance emphasizes ethical principles and Directive 3000.09 mandates legal review of the procurement or modification of autonomous weapons systems to ensure compliance with domestic and international law, DoD does not currently have a specific requirement to ensure that targeting algorithms recognize humanitarian actors, including aid workers, nor is there a clear standard by which to measure compliance.
This falls patently short of the United States’ obligation to take constant care and ensure feasible precautions to protect civilians under Article 57 of Additional Protocol I (AP I) to the Geneva Conventions and corresponding customary international humanitarian law (IHL). Closing this gap is crucial to both U.S. military and strategic effectiveness and compliance with the law. – https://www.justsecurity.org/134362/ai-targeting-protected-emblems/

Security and Surveillance

Pro-Iran Handala group breached Israeli defence contractor PSK Wind Technologies

(Pierluigi Paganini – Security Affairs) Pro-Iran Handala group announced on April 2 that it breached PSK Wind Technologies, an Israeli engineering and IT firm specializing in integrated systems for defense and critical communications, including command and control solutions. Handala presents itself as a pro-Palestinian hacktivist group but is widely seen as a front for Iran-backed Void Manticore, as reported by SecurityWeek. Known for phishing, data theft, extortion, and destructive wiper attacks, the group also engages in info operations and psychological warfare. Since the Iran conflict began, it has targeted Israeli military servers, intelligence officers, and companies, stealing or wiping data. Handala claims to have stolen sensitive data from PSK Wind, including documents on command and control systems, allegedly sending it to “Axis of Resistance” missile units. The Axis of Resistance is an Iran-led political and military alliance of groups opposing Israel, the US, and allies, including Hezbollah in Lebanon, Palestinian Islamic Jihad, Syrian regime forces, and Shia militias in Iraq like Kata’ib Hezbollah. – https://securityaffairs.com/190319/data-breach/pro-iran-handala-group-breached-israeli-defence-contractor-psk-wind-technologies.html

New ‘Storm’ Infostealer Remotely Decrypts Stolen Credentials

(Kevin Poireault – Infosecurity Magazine) Security researchers at Varonis have uncovered a new information stealer malware (infostealer) strain that harvests browser credentials, session cookies and crypto wallets before quietly sending everything to the attacker’s server for decryption. Called Storm, the infostealer emerged on underground cybercrime networks in early 2026. According to Daniel Kelley, a senior security consultant at Varonis and author of a report on Storm, published on April 1, the new infostealer represents a shift in how credential theft is developing. – https://www.infosecurity-magazine.com/news/storm-infostealer-remotely/

NCSC Issues Security Alert Over Hackers Targeting WhatsApp and Signal Accounts

(Danny Palmer – Infosecurity Magazine) The UK’s National Cyber Security Centre (NCSC) has warned about an increase in targeted attacks against individuals using messaging apps including WhatsApp, Facebook Messenger and Signal. The alert, issued on March 31, warned that the NCSC and its international partners have seen “growing malicious activity from Russia-based actors using messaging apps to target high-risk individuals.” High-risk individuals are those whose work or public status means they have access to, or influence over, sensitive information that could be of interest to threat actors. – https://www.infosecurity-magazine.com/news/ncsc-alert-hackers-whatsapp-signal/

Apple Expands iOS 18 Security Updates Amid DarkSword Threat

(Alessandro Mascellino – Infosecurity Magazine) Apple has expanded the availability of iOS 18.7.7 and iPadOS 18.7.7 to more devices to protect users from the DarkSword exploit kit, a hacking tool used in targeted cyber-attacks. The update allows devices still running iOS 18 to receive security patches without upgrading to the latest operating system. The security fixes included in the update were originally released in 2025, but Apple broadened access on April 1, so more users could automatically receive protections against web-based attacks linked to DarkSword. The exploit targets devices running iOS versions between 18.4 and 18.7 and can deploy malware when a user visits a compromised website in a watering hole attack. – https://www.infosecurity-magazine.com/news/apple-ios-18-updates-darksword/

Researchers Observe Sub-One-Hour Ransomware Attacks

(Phil Muncaster – Infosecurity Magazine) Security researchers have warned of another step change in the velocity of ransomware, after spotting the Akira group complete all stages of an attack within an hour. Halcyon said in a new report that Akira usually achieves initial access by exploiting vulnerabilities in internet-facing VPN appliances and backup solutions, especially those lacking multi-factor authentication (MFA). In the past, these have included devices from SonicWall, Veeam and Cisco, although the group has also been observed using credential theft, spearphishing, password spraying, and even initial access brokers (IABs). – https://www.infosecurity-magazine.com/news/researchers-subonehour-ransomware/

GitHub Used as Covert Channel in Multi-Stage Malware Campaign

(Alessandro Mascellino – Infosecurity Magazine) A series of malicious LNK files targeting users in South Korea has been detected using a multi-stage attack chain that uses GitHub as command and control (C2) infrastructure. The campaign relies on scripting, encoded payloads and legitimate Windows tools to maintain persistence while avoiding detection. Earlier versions of the attack date back to 2024 but contained more metadata and simpler obfuscation, allowing researchers to track links to earlier malware campaigns. According to a new advisory published by Fortinet on April 2, recent versions show clear changes in tactics. The attacker now embeds decoding functions directly within LNK file arguments and includes encoded payloads inside the files themselves. Decoy PDF documents are used to distract victims while malicious scripts execute silently in the background. The files appear legitimate when opened, while PowerShell scripts run without the user’s knowledge. “Modern cyber espionage has fundamentally shifted toward a highly evasive strategy known as living-off-the-land [LOTL],” said Jason Soroko, senior fellow at Sectigo. – https://www.infosecurity-magazine.com/news/github-covert-multi-stage-malware/

Most CNI Firms Face Up to £5m in Downtime from OT Attacks

(Phil Muncaster – Infosecurity Magazine) The vast majority (80%) of critical national infrastructure (CNI) providers in the UK face downtime costs of between £100,000 ($132,144) and £5m ($6.6m) from cyber-attacks that disrupt their operational technology (OT), according to e2e-assure. The SOC-as-a-service provider polled 250 cybersecurity decision makers in CNI sectors including manufacturing, energy, utilities, transport and retail to better understand the impact of cyber threats. It claimed that around a quarter (23%) of OT downtime incidents cost businesses over £1m, with 6% exceeding £5m. – https://www.infosecurity-magazine.com/news/most-cni-firms-5m-downtime-ot/

Hasbro hit by cyberattack, investigates possible data breach

(Pierluigi Paganini – Security Affairs) Toy giant Hasbro reported a cyberattack on Wednesday that disrupted certain company operations. The firm is investigating the full extent of the incident, including whether any files or sensitive data were compromised, as it works to restore normal business processes. Hasbro is a major American toy and board game company known worldwide for creating and selling popular products like Transformers, My Little Pony, Monopoly, Nerf, and Play-Doh. Founded in 1923, it has grown into a global entertainment brand, also producing movies, TV shows, and digital games based on its toy franchises. The company has a strong global presence, selling in over 100 countries and constantly expanding through acquisitions. On March 28, 2026, the company detected unauthorized access to its network. It quickly launched its security response, took some systems offline, and began an investigation with the help of external cybersecurity experts. – https://securityaffairs.com/190306/security/hasbro-hit-by-cyberattack-investigates-possible-data-breach.html

Threat actor UAC-0255 impersonates CERT-UA to spread AGEWHEEZE malware via phishing

(Pierluigi Paganini – Security Affairs) A threat actor, tracked as UAC-0255, impersonated CERT-UA in a phishing campaign, sending emails to about 1 million users. The messages urged victims to download a password-protected archive from Files.fm and install a fake “specialized software,” which actually deployed the AGEWHEEZE remote access tool, giving attackers control over infected systems. “The National Cyber Incident, Cyber Attack, and Cyber Threat Response Team CERT-UA recorded cases of distribution of emails allegedly on behalf of CERT-UA on March 26-27, 2026, urging people to download a password-protected archive (“CERT_UA_protection_tool.zip”, “protection_tool.zip”) from the Files.fm service and install “specialized software”.” reads the advisory published by CERT-UA. “It was found that the executable file that was offered to be installed (internal package name: “/example.com/tvisor/agent”) is a multifunctional software tool for remote computer control, classified by CERT-UA as AGEWHEEZE.” – https://securityaffairs.com/190287/hacking/threat-actor-uac-0255-impersonate-cert-ua-to-spread-agewheeze-malware-via-phishing.html

Italian spyware vendor creates fake WhatsApp app, targeting 200 users

(Pierluigi Paganini – Security Affairs) WhatsApp has recently uncovered a malicious fake version of its app that targeted roughly 200 users, most of whom are in Italy. The platform confirmed that the unofficial client contained spyware and was developed by Italian firm Asigint, a subsidiary of SIO Spa, a company known for providing surveillance tools to law enforcement and government agencies. “Our security team identified around 200 users, mostly in Italy, who we believe may have downloaded this unofficial and harmful client. We logged them out and alerted them to the privacy and security risks,” WhatsApp stated. “We believe this was a social engineering attempt targeting a limited number of users with the goal of inducing them to install harmful software impersonating WhatsApp, likely to gain access to their devices. Today, WhatsApp has taken action against Asigint, an Italian spyware company controlled by Sio Spa that created a fake version of WhatsApp. We believe the individuals behind this malicious client used social engineering techniques to trick people into downloading an unofficial and harmful app disguised as WhatsApp,” the Meta-owned company said in a statement, adding that it intends to “send a formal legal notice to this spyware company to cease all harmful activity.” – https://securityaffairs.com/190276/malware/italian-spyware-vendor-creates-fake-whatsapp-app-targeting-200-users.html

The EU is suffering a hacking crisis. Here’s what we know

(Sam Clark and Antoaneta Roussi – Politico) The European Union has tumbled into a cyber crisis after attacks hit its digital systems and officials’ phones. Cyber experts are probing to see just how deep the rabbit hole goes. The EU executive has told some of its most senior officials to shut down a group on messaging app Signal over fears it was a hacking target, POLITICO first reported on Thursday. The move comes as the EU faced a string of attacks in the last few months that led to breaches of its cloud infrastructure and an IT system managing mobile devices. “The EU is finally discovering its security weaknesses … spending billions on useless things, not investing in critical issues,” said one Western intelligence official briefed on the situation, granted anonymity to discuss information that has not been made public. It’s still unclear if the string of incidents is related. The Commission has released few details on the hacks and declined to comment in detail on sensitive security questions. Cyberattacks can be hard to investigate — and finding out who is behind them is a sensitive, tricky process. – https://www.politico.eu/article/eu-cyberattacks-hacking-security-crisis/

Iran Conflict Heightens Cyber Threats to U.S. Energy Infrastructure

(Leslie Abrahams and Lauryn Williams – Center for Strategic & International Studies) The energy sector has long been targeted as a point of leverage in geopolitical conflict. Historically, energy disruptions were concentrated on logistical and supply interruptions to exert economic pressure on adversaries—for example, through sanctions, oil embargos, and restrictions on key shipping lanes. More recently, however, direct physical attacks on energy infrastructure have increasingly been deployed as a core military strategy. In the context of the Russia-Ukraine conflict, strikes on Ukrainian energy systems tripled this year over previous years of the war, resulting in a near collapse of the country’s power grid. Last week, President Donald Trump threatened attacks on Iran’s electricity grid, and Iran responded that it would retaliate against energy and water systems across the Gulf. Today, Iran does not have long-range weapons capable of causing physical damage to domestic U.S. energy infrastructure. However, a physical risk remains; Iran has increasingly used unmanned aircraft systems to attack critical assets, and pro-Iranian entities within the United States have capabilities to use drones as weapons—a threat that is difficult for utilities to counter. The threat, however, does not end with physical attacks; the energy sector is vulnerable to, and has been increasingly targeted by, cyber threat actors in recent years. For several years, there has been strong evidence that foreign adversaries, notably the People’s Republic of China (PRC), have successfully infiltrated and pre-positioned on U.S. critical infrastructure, including energy systems. While these instances have not caused outages, significantly, they have demonstrated the PRC’s interest in targeting strategic critical infrastructure for disruption, including during future conflict. The United States itself has become more vocal about offensive cyber capabilities targeting the grid. In January, U.S. 
Cyber Command reportedly conducted a cyberattack, strategically turning power off and on in Venezuela in support of the mission to capture Nicolás Maduro, with President Trump famously stating a power blackout surrounding the raid was “due to a certain expertise that we have.” Cyberattacks originating from Iran are a key concern as well. For more than a decade, Iran has invested heavily in its cyber capabilities and cultivated ties to hacker groups. Iran has so far conducted limited disruptive strikes in the current conflict, outside of the attack targeting U.S. medical technology firm Stryker. But cybersecurity firms and critical infrastructure threat advisory groups warn of a heightened cyber threat environment as the Middle East conflict continues. The Trump administration has downplayed indications of imminent risk, but urged energy companies to increase physical and cybersecurity measures in case of retaliatory attacks. Even before the February airstrikes escalated geopolitical tensions, the cyber threat environment surrounding U.S. energy infrastructure had already been intensifying. In February, the Department of Energy Office of Cybersecurity, Energy Security, and Emergency Response issued its first strategic plan to protect U.S. energy infrastructure from cybersecurity threats, physical attacks, and natural disasters. This year, the World Economic Forum ranked “cyber insecurity” a top 10 global risk, and the Office of the Director of National Intelligence’s 2026 Annual Threat Assessment warned U.S. critical infrastructure, including the energy sector, faces escalating cyber challenges. – https://www.csis.org/analysis/iran-conflict-heightens-cyber-threats-us-energy-infrastructure

Frontiers

IBM and ETH Zurich announce partnership on AI and quantum algorithms

(DigWatch) International Business Machines Corporation and the Swiss Federal Institute of Technology Zurich have announced a decade-long partnership to develop algorithms that bridge classical computing, machine learning, and quantum systems. The collaboration will focus on creating foundational algorithms to address complex business and scientific challenges as quantum computing becomes increasingly practical. IBM will support the establishment of new professorships and research initiatives at the institution. – https://dig.watch/updates/ibm-and-eth-zurich-announce-partnership-on-ai-and-quantum-algorithms

MIT develops AI framework to test ethics in autonomous systems

(DigWatch) Researchers at MIT have introduced a new framework designed to evaluate the ethical impact of autonomous systems used in high-stakes environments. The approach aims to identify cases where AI-driven decisions may be technically efficient but fail to meet fairness expectations. Growing reliance on AI in areas such as energy distribution and traffic management has raised concerns about unintended bias. Cost-optimised systems can still disadvantage communities, especially when ethical factors are hard to measure. – https://dig.watch/updates/mit-ai-framework-test-ethics-autonomous-systems