Governance/Regulation/Legislation
Singapore Ministry of Health addresses AI-developed drugs and patient data safeguards
(DigWatch) Singapore’s Ministry of Health has said that drugs developed with the use of AI will be subject to the same regulatory expectations as conventionally developed medicines, including requirements on quality, safety and efficacy. The ministry made the statement in response to a parliamentary question on the regulation of AI-developed drugs, clinical trials and safeguards for patient data used in AI-related healthcare innovation. – https://dig.watch/updates/singapore-ministry-of-health-addresses-ai-developed-drugs-and-patient-data-safeguards
To realize returns on their AI investments, corporations must consider their workers
(Reevana Balmahoon – Atlantic Council) Artificial intelligence (AI) is a key factor in the ongoing workforce transformation that is both creating and displacing jobs. For business leaders to benefit from this transformation and achieve returns on investment for AI implementation, it will be essential for them to prioritize workers and earn their trust. The job landscape is expected to change significantly: according to the 2025 WEF Future of Jobs report, 170 million jobs will be created by 2030 while 92 million are displaced, for a net increase. While this job growth is forecast to take place in the logistics, software/technology, and healthcare industries, a host of largely routine, function-based jobs are at risk of fading away. For business leaders thinking about their organizations’ sustainability, this workforce transformation should be an important consideration. Job security, dignity, and career growth are foremost in most workers’ minds. As business leaders pursue the kinds of tech innovation driving the AI workforce transformation, they should address these concerns to secure workers’ buy-in on AI implementation. – https://www.atlanticcouncil.org/blogs/geotech-cues/to-realize-returns-on-their-ai-investments-corporations-must-consider-their-workers/
EU clinches deal to roll back AI restrictions
(Pieter Haeck – Politico) Restrictions on high-risk uses of artificial intelligence in the EU will be postponed by more than a year under a deal agreed by EU legislators on Thursday morning. The plan to delay a key part of the bloc’s flagship AI law received support from European Parliament lawmakers and EU countries after heavy pressure from industry and capitals. The deal struck early on Thursday also largely exempts the use of AI in industrial applications from the scope of the law — a big win for Germany after top officials including Chancellor Friedrich Merz pushed for the change to keep tech heavyweights Siemens and Bosch competitive. – https://www.politico.eu/article/eu-clinches-deal-to-roll-back-ai-restrictions/
European industry fears ‘back door’ for US in cloud law
(Mathieu Pollet – Politico) European companies have raised concerns that an EU law to reduce the bloc’s reliance on foreign tech will actually keep the door wide open for U.S. giants. The European Commission is drawing up plans to spell out how the bloc should build out a home-grown cloud industry, and which sectors will rely on it to shield sensitive data and operations from foreign powers. But European companies are wary that Brussels will stop short of a full crackdown on American firms — arguing the involvement of competitors from across the pond in sensitive industries would render the effort useless. – https://www.politico.eu/article/eu-cloud-plan-us/
UAE launches national AI security lab for certification and cyber resilience
(DigWatch) The UAE Cyber Security Council, Cisco and Open Innovation AI have launched the UAE’s National AI Test and Validation Lab, creating a national platform designed to assess the security, safety and trustworthiness of AI systems. Hosted in Abu Dhabi, the facility will evaluate AI models, autonomous agents and applications before deployment across government and private sector environments. The initiative forms part of the UAE’s wider strategy to strengthen sovereign AI capabilities and reinforce cybersecurity protections as AI adoption accelerates across critical infrastructure and public services. – https://dig.watch/updates/uae-launches-national-ai-security-lab-for-certification-and-cyber-resilience
Navigating the digital future: from the Western Balkans regulators on a fast-track learning journey
(UNESCO) As digital platforms reshape how people access news, public debate, and verified information, media regulators in the Western Balkans face a dual challenge: safeguarding freedom of expression and public-interest journalism, while gearing up for new European rules on the information ecosystem. To support this effort, UNESCO organised a study visit within the EU-funded project Building Trust in Media in South-East Europe: Support to Journalism as a Public Good. – https://www.unesco.org/en/articles/navigating-digital-future-western-balkans-regulators-fast-track-learning-journey?hub=701
US Department of Labor launches website to build artificial intelligence skills, expand AI-focused Registered Apprenticeship programs
(U.S. Department of Labor) The U.S. Department of Labor announced the launch of its AI in Registered Apprenticeship Innovation Portal, a one-stop resource for organizations looking to build artificial intelligence literacy and develop AI-focused Registered Apprenticeship programs. Announced during the National Apprenticeship Week event, “Building the AI-Ready Workforce through Registered Apprenticeship,” the website provides practical tools and actionable guidance to help organizations integrate artificial intelligence skills into Registered Apprenticeship programs through skill-building resources, industry-specific training, and flexible program pathways. The initiative builds on the objectives laid out in the department’s AI Literacy Framework that was released earlier this year. – https://www.dol.gov/newsroom/releases/eta/eta20260429
New frontier of AI forces Trump’s heavy hand
(Zachary Basu, Sam Sabin, Ashley Gold – Axios) President Trump set out on his first day in office to free artificial intelligence from government constraints. Fifteen months later, his own White House is preparing to become a gatekeeper for the most powerful new models on Earth. AI has crossed a threshold that no administration — not even one ideologically committed to staying out of its way — can afford to ignore. It’s a sea change in both Silicon Valley and Washington, accelerated by a new class of models that can hunt down cybersecurity flaws with extraordinary speed and precision. Anthropic’s Mythos, withheld from public use due to safety concerns, was the first model to trigger panic. But with OpenAI’s GPT-5.5 now matching its capabilities and Chinese labs racing to catch up, it won’t be the last. – https://www.axios.com/2026/05/05/trump-anthropic-ai-regulation-mythos-cyber
UNDP highlights challenges in public sector digital transformation outcomes
(DigWatch) According to a new UNDP report, global public sector investment in digital technology now exceeds US$800 billion, yet most transformation efforts continue to fall short of expectations. The report links persistent underperformance to structural and institutional barriers rather than technological limitations. It also notes that digital initiatives often lack alignment with broader policy goals, resulting in fragmented systems that improve internal processes but do not transform public services. – https://dig.watch/updates/governments-struggle-to-turn-digital-investment
Code for America highlights challenges in measuring AI use in public services in the US states
(DigWatch) According to Code for America, AI is reshaping how public services are delivered across the United States, yet adoption remains uneven and difficult to measure. They added that state governments are rapidly embracing AI through low-risk pilot programmes while still lacking clear frameworks to evaluate impact. The report describes AI adoption as following a staged progression beginning with readiness, where leadership structures, workforce skills and infrastructure are developed. Piloting then introduces experimentation through sandboxes and limited deployments, while implementation embeds AI into operational systems such as fraud detection, document automation, research support and citizen-facing chat assistants. – https://dig.watch/updates/code-for-america-highlights-challenges-in-measuring-ai-use-in-public-services-in-the-us-states
The White House Wants to Vet AI Models. It Won’t Solve the Safety Problem
(Emma Hatheway – Tech Policy Press) On Monday, The New York Times reported that the Trump administration is weighing an executive order to create a working group of industry executives and government officials to develop options to give the federal government initial access to new AI models to vet them before release. According to the Times, one option under discussion is “having the NSA, the White House Office of the National Cyber Director and the director of national intelligence oversee the model review,” though such review would not necessarily result in a model being blocked from public release. This is a meaningful shift from an administration that spent its first year dismantling Biden-era AI safety frameworks, and it signals that public concern may have finally registered with government officials. It remains to be seen what the working group might produce. But merely reviewing a model without consequence does not equate with meaningful oversight, and a working group co-designed with the companies being reviewed does not meet necessary standards of independence to establish an ideal framework. While Anthropic’s recent decision to withhold Mythos due to cybersecurity risks is regarded as an attempt to be responsible, it also serves as a reminder that we are relying on the discretion of a small number of executives to make decisions that affect the public. A safety regime that depends on which CEO is in charge or who they wish to brief is not a safety regime. Replacing that with a federal review staffed by representatives from the intelligence community and directly influenced by industry tech giants would not be any better. – https://www.techpolicy.press/the-white-house-wants-to-vet-ai-models-it-wont-solve-the-safety-problem/
Geostrategies
ICESCO and Morocco sign agreement on AI and digital capacity building
(DigWatch) The Islamic World Educational, Scientific and Cultural Organisation (ICESCO) and Morocco’s Ministry of Digital Transition and Administrative Reform have signed a memorandum of understanding on cooperation in digital transformation, AI and strategic foresight. The agreement was signed in Rabat on the sidelines of the African Open Government Conference by ICESCO Director-General Dr Salim M. AlMalik and Dr Amal El Fallah, Minister Delegate to the Head of Government in charge of Digital Transition and Administrative Reform of Morocco. – https://dig.watch/updates/icesco-and-morocco-sign-agreement-on-ai-and-digital-capacity-building
Norway Joins the Pax Silica Initiative
(Government of Norway) Norway is joining the U.S.-led Pax Silica initiative, which aims to strengthen cooperation on securing robust and reliable supply chains for emerging technologies. “It is important for Norway to cooperate with the United States and other global leaders in new AI technologies. One of the government’s main priorities is to help ensure that Norwegian industry and businesses have good market access, and this initiative may provide Norwegian companies with better access to advanced technological value chains,” says Norway’s Minister of Trade and Industry Cecilie Myrseth. – https://www.regjeringen.no/en/whats-new/norge-slutter-seg-til-pax-silica-initiativet/id3158545/
EU and Armenia sign connectivity partnership, strengthen economic ties and deepen security cooperation
(European Commission) The European Union and Armenia held their first ever Summit in Yerevan, reinforcing cooperation in the areas of connectivity, security and defence, economic development and people-to-people contacts. European Commission President Ursula von der Leyen said: “This first EU-Armenia Summit elevates our partnership to a new level and sets a clear direction and agenda for the coming years. At the heart of this work is our joint commitment to peace and stability in the region. Going forward, we will also deepen political dialogue, strengthen economic ties, and work towards a more secure, prosperous, and stable future. Our cooperation is grounded in common values, a shared vision for the South Caucasus, and full respect for sovereign choices.” The summit served to take stock of EU-Armenia relations, as well as to address broader regional and global challenges, including the peace agenda and the normalisation of relations in the South Caucasus. President von der Leyen and European Council President Costa, together with Armenian Prime Minister Pashinyan, witnessed the signing of the EU-Armenia Connectivity Partnership, a major step forward in strengthening transport, energy, and digital links. Fully aligned with the EU’s Cross-Regional Connectivity Agenda and Armenia’s Crossroad of Peace initiative, it will boost trade, create jobs, reinforce resilience, and support regional stability. The partnership will be institutionalised through a High-Level Dialogue on Connectivity, alongside a High-Level Transport Dialogue, also launched at the summit. – https://ec.europa.eu/commission/presscorner/detail/en/ip_26_988
Commission services sign cooperation arrangement with Japan’s Ministry of Internal Affairs and Communications to support the enforcement of digital platform regulation
(European Commission) During the fourth meeting of the EU-Japan Digital Partnership Council in Brussels, the Commission services responsible for the enforcement of the Digital Services Act (DSA) signed a cooperation arrangement with Japan’s Ministry of Internal Affairs and Communications (MIC), which also serves as Japan’s platform regulator. – https://digital-strategy.ec.europa.eu/en/news/commission-services-sign-cooperation-arrangement-japans-ministry-internal-affairs-and
Security and Surveillance
Australian Cyber Security Centre Issues Alert Over ClickFix Attacks
(Danny Palmer – Infosecurity Magazine) The Australian Cyber Security Centre (ACSC) has issued a warning about a malicious cyber campaign which exploits the ClickFix social engineering technique to deliver potent password-stealing malware. In the alert, issued on May 7, the Australian Signals Directorate’s (ASD) ACSC warned that the Vidar Stealer campaign is targeting infrastructure and organizations across multiple sectors. Vidar Stealer is a form of infostealer which primarily targets Microsoft Windows users and is designed to steal sensitive information from victims. Information it targets includes usernames, passwords, credit card data, cryptocurrency wallets, browser history, multi-factor authentication (MFA) tokens and more. The malware has been active since 2018. – https://www.infosecurity-magazine.com/news/australian-cyber-security-centre/
PCPJack Campaign Boots TeamPCP Off Compromised Machines
(Phil Muncaster – Infosecurity Magazine) Security researchers have discovered an unusual new threat campaign designed to target victims of notorious cybercrime group TeamPCP. PCPJack is a credential theft framework that “worms across exposed cloud infrastructure and removes artifacts associated with TeamPCP,” according to SentinelOne senior threat researcher, Alex Delamotte. TeamPCP is the group behind some major open source supply chain attacks this year, including one that compromised the GitHub Actions for Aqua Security’s popular Trivy vulnerability scanner to deliver infostealer malware to countless downstream users including LiteLLM. “Many of the services targeted by the PCPJack framework are similar to the early TeamPCP/PCPCat campaigns from December 2025, before the high-visibility campaigns of early 2026 brought significant attention to TeamPCP and purportedly led to changes in group membership,” explained Delamotte in a SentinelLABS post. “We believe this could be a former operator who is deeply familiar with the group’s tooling.” – https://www.infosecurity-magazine.com/news/pcpjack-campaign-boots-teampcp-off/
How Crowdsourced Security is Transforming the Public Sector Cybersecurity Landscape
(Laurie Mercer – Infosecurity Magazine) Cyber-attacks are rising at a significant and highly concerning rate, with the UK National Cyber Security Centre (NCSC) handling an average of four ‘nationally significant’ attacks every week throughout 2025. According to the NCSC Annual Review 2025, a substantial proportion of all cybersecurity incidents handled over the last 12 months were linked to advanced persistent threat (APT) actors – either nation-state actors or highly capable criminal groups. This is perhaps no surprise, with state-sponsored campaigns perpetrated by groups such as Midnight Blizzard in Russia seeing notable rises in 2025, many of which exploited identity layers and cloud collaboration tools for persistence. In response, the NCSC continues to work across both public and private sector organizations, including local authorities and operators of critical national infrastructure, to strengthen defensive posture and improve national cyber resilience. In practice, many security leaders are being asked to modernize defenses while operating legacy estates, constrained procurement cycles and persistent hiring gaps. The mandate to digitize services has accelerated; the security capacity to support that shift has not always kept pace. – https://www.infosecurity-magazine.com/opinions/crowdsourced-public-sector-cyber/
Legacy Security Tools Are Failing Data Protection, Capital One Software Report Finds
(Beth Maundrill – Infosecurity Magazine) Traditional network and perimeter security tools are preventing firms from achieving adequate data security, even as a majority of IT leaders report that data security has never been more critical. A new report, commissioned by Capital One Software with research conducted by Forrester, found that 72% of security professionals agreed that data security is more critical than ever, but that investments in traditional network and perimeter security tools impede adequate data protection. Without rethinking data protection, the research argued, AI adoption is “impossible”. As AI agents act autonomously and bypass human oversight, the risk of unintended data exposure is heightened. – https://www.infosecurity-magazine.com/news/legacy-security-tools-are-failing/
Cline Kanban Flaw Lets Websites Hijack AI Coding Agents
(Alessandro Mascellino – Infosecurity Magazine) A critical vulnerability in the Cline Kanban server has been disclosed that allows any website a developer visits to silently exfiltrate workspace data, inject commands into the AI agent’s terminal or kill active agent sessions. The flaw, given a CVSS score of 9.7, was identified in a security assessment by researchers at Oasis Security, who published a technical analysis of the issue on May 7. It affects version 0.1.59 of the Kanban npm package and stems from missing origin validation and authentication on three WebSocket endpoints exposed by the local server. Cline is one of the most widely adopted open-source AI coding assistants, and its Kanban feature provides a web-based project management interface backed by a local HTTP and WebSocket server on port 3484. – https://www.infosecurity-magazine.com/news/cline-kanban-websocket-hijack-ai/
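The root cause described here — a localhost server that completes WebSocket upgrades without origin validation or authentication — is a recurring pattern in local developer tooling: any web page a victim visits can open a connection to `localhost` and drive the service. A minimal sketch of the kind of Origin check that blocks this class of browser-based hijacking (the allowlist and function names are illustrative assumptions, not Cline’s actual code):

```python
from typing import Optional

# Illustrative sketch only: gate a localhost WebSocket upgrade on the
# browser-supplied Origin header. The allowlist below is hypothetical.
ALLOWED_ORIGINS = {
    "http://localhost:3484",
    "http://127.0.0.1:3484",
}

def is_allowed_origin(origin: Optional[str]) -> bool:
    """Reject upgrade requests whose Origin is missing or not allowlisted.

    Browsers always send an Origin header on WebSocket upgrades, so in this
    sketch a missing header is treated as untrusted, which blocks
    connections initiated by arbitrary websites.
    """
    return origin in ALLOWED_ORIGINS

# A server handler would run the check before completing the upgrade, e.g.:
#   if not is_allowed_origin(request.headers.get("Origin")):
#       reject_with_status(403)
```

In practice an Origin check is usually paired with a per-session authentication token handed only to the legitimate client, since non-browser clients can set arbitrary headers.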
OpenAI and Anthropic LLMs Used in Critical Infrastructure Cyber-Attack, Warns Dragos
(Danny Palmer – Infosecurity Magazine) Commercial large language models (LLMs) were used as part of a cyber-attack which targeted a municipal water and drainage utility provider in Mexico, cybersecurity researchers at Dragos have warned. A “significant compromise” of the water infrastructure provider’s IT environment escalated into an attempted attack against the organization’s operational technology (OT) environment, said a Dragos report, published on May 6. The research suggested that attackers used Anthropic’s Claude AI and OpenAI’s GPT models to aid with planning and conducting the campaign. – https://www.infosecurity-magazine.com/news/llm-critical-infrastructure/
Fake Claude AI Site Drops Beagle Backdoor on Windows Users
(Alessandro Mascellino – Infosecurity Magazine) A fraudulent imitation of Anthropic’s Claude website has been used to distribute a previously undocumented backdoor named Beagle, deployed through a Dynamic Link Library (DLL) sideloading chain that abuses a signed antivirus updater binary. The malicious domain claude-pro[.]com presents a stripped-down imitation of the legitimate Claude interface and offers a fictitious tool called Claude-Pro Relay, served as an approximately 505 MB ZIP archive, according to new analysis by Sophos X-Ops. The researchers assessed that the site is part of an active malvertising campaign and traced the hosting infrastructure to a server set up in March 2026. – https://www.infosecurity-magazine.com/news/fake-claude-site-beagle-backdoor/
Researchers Spot Uptick in Use of Vercel for Phishing Campaigns
(Phil Muncaster – Infosecurity Magazine) Low-skilled threat actors are abusing legitimate generative AI (Gen AI) platforms in growing numbers to create highly convincing phishing campaigns, Cofense has warned. The security vendor said that it has observed a number of campaigns based around v0[.]dev, a powerful GenAI tool provided by web application development specialist Vercel. “This AI tool is the driving force behind the malicious sign-in pages created by attackers. With just a few text prompts v0[.]dev can create a fully functioning malicious site that completely resembles real-life brands,” it explained in an article published on May 6. “Although Vercel has created a genuinely useful and innovative platform, threat actors are taking advantage of the platform and are abusing it for malicious gain.” – https://www.infosecurity-magazine.com/news/researchers-spot-uptick-vercel/
From Android TVs to routers: the xlabs_v1 Mirai-based botnet built for DDoS attacks
(Pierluigi Paganini – Security Affairs) A new Mirai‑derived botnet called xlabs_v1 is hijacking internet‑exposed devices running Android Debug Bridge (ADB) and using them for large‑scale DDoS attacks. Hunt.io discovered the bot on an unsecured server; it includes 21 flood techniques across TCP, UDP, and raw protocols, allowing it to bypass basic protections. It appears to be sold as a DDoS‑for‑hire service, especially for targeting game and Minecraft servers. During routine monitoring, researchers spotted an exposed directory on a Netherlands‑hosted server (176.65[.]139.44) used for bulletproof hosting. The operator had left their entire toolkit publicly accessible over TCP/80 with no authentication, allowing investigators to index everything before the attacker realized it was exposed. – https://securityaffairs.com/191796/malware/from-android-tvs-to-routers-the-xlabs_v1-mirai-based-botnet-built-for-ddos-attacks.html
Polish intelligence warns hackers attacked water treatment control systems
(Alexander Martin – The Record) Poland’s domestic intelligence service said attackers breached water treatment facilities in five towns in 2025, in some cases gaining access to industrial control systems that could have disrupted water supplies. In a new public report, the Internal Security Agency (Agencja Bezpieczeństwa Wewnętrznego, or ABW) said water treatment stations in Jabłonna Lacka, Szczytno, Małdyty, Tolkmicko and Sierakowo were targeted. “Attackers, gaining access in some cases to industrial control systems, had the ability to alter technical parameters of devices,” the report said, creating “a direct risk” to the continuity of water supply operations. – https://therecord.media/polish-intelligence-warns-hackers-attacked-water-treatment
Empowering Defenders: AI for Cybersecurity
(World Economic Forum) AI is transforming cybersecurity, but realizing its full value requires strategic deployment, robust governance and balanced human oversight. This white paper, Empowering Defenders: AI for Cybersecurity, offers practical guidance for organizations seeking to harness AI in their cybersecurity efforts. To support effective implementation, the paper outlines the critical questions executives and chief information security officers must address and provides an early perspective on the opportunities and challenges posed by agentic AI. It suggests that executive and cyber leaders embarking on the AI adoption journey for cyber defence should: align the adoption of AI in cybersecurity with enterprise strategic priorities; establish organizational readiness across processes, data, infrastructure, skills and governance before deploying AI in cybersecurity; validate AI solutions through structured pilots prior to full deployment; and scale and monitor the performance of AI in cybersecurity and optimize as needed. Drawing on real-world case studies from World Economic Forum partners, as well as insights from a community of more than 84 organizations across 15 industries, it highlights how AI is being applied across the cybersecurity lifecycle. – https://www.weforum.org/publications/empowering-defenders-ai-for-cybersecurity/
North Korean hackers targeted ethnic Koreans in China with Android ‘BirdCall’ malware
(Jonathan Greig – The Record) Ethnic Koreans living in the Yanbian region of China were targeted by a sophisticated North Korean hacking group with a strain of malware attached to a popular Android mobile game. Researchers at cybersecurity firm ESET attributed the campaign to APT37 and said the hackers used a backdoor attached to a suite of card games from a company called Sqgame. The backdoor, named BirdCall by the researchers, allowed APT37 to take screenshots, record calls, steal personal data and more. The Yanbian region of China is on the border with North Korea and is often referred to as “Third Korea.” ESET researchers said the campaign was likely aimed at refugees or defectors from the North Korean regime. – https://therecord.media/north-korean-hackers-target-ethnic-koreans-in-china
Hackers compromise Daemon Tools in global supply-chain attack, researchers say
(Daryna Antoniuk – The Record) Hackers have compromised installers of widely used disk imaging software in a supply chain attack that has affected users in more than 100 countries, according to a new report. Researchers at Kaspersky said attackers tampered with installers for Daemon Tools — a popular program used to mount disk images as virtual drives — and distributed them through the software’s official website. The malicious versions, first observed in early April, affected multiple releases of the software installed on thousands of machines across more than 100 countries, including Russia, Brazil, Turkey, Spain, Germany and China. – https://therecord.media/hackers-compromise-daemon-tools-global-supply-chain-attack
New CISA initiative aims for critical infrastructure to operate offline during cyberattacks
(Jonathan Greig – The Record) The federal cyber defense agency unveiled a new initiative this week aimed at preparing critical infrastructure organizations for technology and telecommunications outages caused by cyberattacks. The Cybersecurity and Infrastructure Security Agency (CISA) published a guide that urges critical infrastructure organizations to prepare to operate through a crisis or conflict and continue delivering services even when under attack. The initiative, named CI Fortify, focuses on isolation and recovery efforts that would see critical infrastructure organizations proactively disconnect from third-party dependencies and find ways to operate without reliable telecommunications and internet. – https://therecord.media/cisa-initiative-aims-for-critical-infrastructure-to-operate-during-cyberattacks
Iranian cyber espionage disguised as a Chaos Ransomware attack
(Pierluigi Paganini – Security Affairs) A newly discovered cyber intrusion attributed to the Iran-linked APT MuddyWater (aka SeedWorm, TEMP.Zagros, Mango Sandstorm, TA450, and Static Kitten) reveals how state-sponsored attackers are increasingly leveraging ransomware tactics to disguise espionage operations. The campaign, uncovered by security researchers at Rapid7, blended social engineering, credential theft, data exfiltration, and extortion under the guise of a ransomware incident — but with no evidence of actual file encryption. The attack unfolded in early 2026 and initially appeared to be a routine ransomware case. Victims were led to believe they were dealing with the Chaos ransomware group, which operates a leak site for stolen data. However, further investigation showed no ransomware had been deployed. Instead, the attackers relied on espionage tradecraft — lateral movement, credential harvesting, and information theft — consistent with MuddyWater’s long-standing intelligence-gathering profile. “In early 2026, a sophisticated intrusion initially appearing to be a standard Chaos ransomware attack was assessed to be consistent with a targeted state-sponsored operation. While the threat actor operated under the banner of the Chaos ransomware-as-a-service (RaaS) group, forensic analysis revealed the incident was a ‘false flag’ masquerade,” reads the report published by Rapid7. “Technical artifacts, including a specific code-signing certificate and Command-and-Control (C2) infrastructure, suggest with moderate confidence that this activity is linked to MuddyWater (Seedworm), an Iranian Advanced Persistent Threat (APT) affiliated with the Ministry of Intelligence and Security (MOIS).” – https://securityaffairs.com/191765/breaking-news/iranian-cyber-espionage-disguised-as-a-chaos-ransomware-attack.html
Taiwan High-Speed Rail Emergency Braking Hack: How a Student Stopped the Trains and Exposed a Major Security Gap
(Pierluigi Paganini – Security Affairs) Taiwan’s high-speed rail system, one of the most important pieces of national infrastructure, was thrown into chaos during the Qingming Festival holiday when several trains suddenly came to an unexpected halt. Experts initially investigated a technical glitch but soon discovered the incident was caused by a cyber intrusion carried out by a 23-year-old university student. “The Ministry of Transportation and Communications yesterday pledged to submit a report on ways to harden the communication security of railway systems after a university student hacked into Taiwan High Speed Rail Corp’s (THSRC) radio communications system and disrupted operations of four high-speed rail trains last month,” reported the Taipei Times. “Investigation by the police and prosecutors found that the university student and radio enthusiast, surnamed Lin (林), first used a software-defined radio (SDR) filter to analyze THSRC signals, downloaded the data to a computer, cracked the parameters and then programmed the codes into his radio devices.” Authorities revealed that the student, identified only by his surname Lin, used radio equipment and software tools bought online to imitate the communication signals used inside Taiwan High-Speed Rail (THSR). By doing so, he triggered a general emergency alarm, forcing train operators to stop four trains, disrupting service for nearly an hour and delaying hundreds of passengers heading home from the holiday. – https://securityaffairs.com/191785/hacking/taiwan-high-speed-rail-emergency-braking-hack-how-a-student-stopped-the-trains-and-exposed-a-major-security-gap.html
Malicious PyTorch Lightning update hits AI supply chain security
(Pierluigi Paganini – Security Affairs) A malicious update of the PyTorch Lightning library exposed developers to credential theft and remote compromise. Attackers uploaded version 2.6.3 to the Python Package Index (PyPI), where it spread among developers before maintainers removed it at the end of April. PyTorch Lightning is an open-source framework built on top of PyTorch that simplifies how developers train and deploy deep learning models. Given the library’s popularity in AI development, the incident raised serious concerns about the security of software supply chains. – https://securityaffairs.com/191732/ai/malicious-pytorch-lightning-update-hits-ai-supply-chain-security.html
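Incidents like this are the standard argument for hash-pinned dependencies (for example pip’s `--require-hashes` mode, or a lock file with published artifact digests), which make a tampered re-upload fail at install time instead of executing. A toy sketch of the underlying integrity check, with a made-up payload standing in for a real wheel — the names and payload here are illustrative, not the actual Lightning artifacts:

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of an artifact, hex-encoded as in a pip lock file."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_hex: str) -> bool:
    """Accept the artifact only if its digest matches the pinned one.

    compare_digest avoids timing side channels when comparing digests.
    """
    return hmac.compare_digest(sha256_hex(data), pinned_hex)

# Demo: a "good" artifact passes, a tampered re-upload fails.
good = b"pretend-wheel-contents"
pinned = sha256_hex(good)          # in practice, taken from a lock file
tampered = good + b"-backdoor"

assert verify_artifact(good, pinned)
assert not verify_artifact(tampered, pinned)
```

This is the same property pip enforces when every requirement carries a `--hash=sha256:...` entry: a maliciously replaced release, even under the same version number, no longer matches the pinned digest and is refused.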
Agentic AI risks outlined in joint cyber agency guidance
(DigWatch) Six cybersecurity agencies have jointly published guidance urging organisations to adopt agentic AI services cautiously. The document warns that greater autonomy can increase cyber risk, particularly as agentic AI is introduced into critical infrastructure, defence, and other mission-critical environments. The authors say organisations should use agentic AI primarily for low-risk and non-sensitive tasks and should not grant it broad or unrestricted access to sensitive data or critical systems. The guidance also recommends incremental deployment rather than large-scale implementation from the outset. The document was co-authored by agencies from Australia, the United States, Canada, New Zealand, and the United Kingdom: the Australian Signals Directorate’s Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency and National Security Agency, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre, and the UK’s National Cyber Security Centre. – https://dig.watch/updates/agentic-ai-security-guidance-cyber-agencies
Swisscom says AI and geopolitics are reshaping the cyber threat landscape
(DigWatch) Swisscom has published its 2026 Cybersecurity Threat Radar, warning that cyber threats have grown more complex over the past year as geopolitical tensions and disruptive technologies put added pressure on digital systems. The report presents AI, supply chain exposure, digital sovereignty, and operational technology security as four strategic risk areas for organisations. The report highlights state-linked cyber activity, hybrid influence operations such as disinformation, and supply chain attacks as key drivers of the current threat environment. It argues that digital transformation has increased dependence on cloud services, third-party software, AI systems, and networked industrial infrastructure, making organisations more exposed to cascading failures and external dependencies. – https://dig.watch/updates/swisscom-says-ai-and-geopolitics-are-reshaping-the-cyber-threat-landscape
Microsoft warns of global campaign stealing auth tokens from 35K users
(Pierluigi Paganini – Security Affairs) Microsoft disclosed a major phishing campaign that targeted over 35,000 users across 26 countries in mid-April 2026. Attackers used fake “code of conduct” emails sent through legitimate platforms to trick recipients into visiting bogus sites that stole authentication tokens. “The campaign targeted tens of thousands of users, primarily in the United States, and directed them through several stages of CAPTCHA and intermediate staging pages designed to reinforce legitimacy while filtering out automated defenses,” reads the report published by Microsoft. “The lures in this campaign used polished, enterprise-style HTML templates with structured layouts and preemptive authenticity statements, making them appear more credible than typical phishing emails and increasing their plausibility as legitimate internal communications.” Most victims (92%) were in the U.S., mainly in healthcare and finance. – https://securityaffairs.com/191695/security/microsoft-warns-of-global-campaign-stealing-auth-tokens-from-35k-users.html
Educational tech firm Instructure data breach may have impacted 9,000 schools
(Pierluigi Paganini – Security Affairs) Instructure is a U.S.-based educational technology company best known for developing Canvas, one of the world’s most widely used learning management systems (LMS). The company confirmed a cybersecurity incident that exposed users’ personal information and is working with external cybersecurity experts and law enforcement to investigate the breach. Canvas is widely used by schools and universities to manage courses, assignments, and online learning, raising concerns about student and staff data security. The company says the incident appears to be contained while investigations continue. Instructure revoked privileged credentials and access tokens, deployed security patches, rotated some keys as a precaution, and increased monitoring across systems. – https://securityaffairs.com/191686/cyber-crime/educational-tech-firm-instructure-data-breach-may-have-impacted-9000-schools.html
Hackers target governments and MSPs via critical cPanel flaw CVE-2026-41940
(Pierluigi Paganini – Security Affairs) A threat actor is exploiting critical cPanel vulnerability CVE-2026-41940 to target government and military organizations in Southeast Asia, along with MSPs and hosting providers in countries like the Philippines, Laos, Canada, South Africa, and the U.S. The attacks highlight the rapid weaponization of newly disclosed flaws. cPanel is a widely used web hosting control panel that lets users manage websites and servers through a graphical interface instead of command-line tools. CVE-2026-41940 is an authentication bypass flaw affecting cPanel and WHM versions after 11.40. A weakness in the login flow allows remote attackers to skip or manipulate authentication checks, granting access to the control panel without valid credentials. This could let attackers manage hosting settings, access sensitive data, or take control of the server. Cybersecurity experts at watchTowr first disclosed the flaw last week and released a tool to help defenders identify vulnerable hosts in their estates. – https://securityaffairs.com/191666/breaking-news/hackers-target-governments-and-msps-via-critical-cpanel-flaw-cve-2026-41940.html
Bluekit phishing kit enables automated phishing with 40+ templates and AI tools
(Pierluigi Paganini – Security Affairs) Bluekit is a newly discovered phishing kit still in development that includes advanced features such as an AI assistant and automated domain registration. According to Varonis, it offers over 40 website templates along with tools for spoofing, voice cloning, antibot protection, geolocation tricks, and two-factor authentication bypass support. “Varonis Threat Labs recently discovered Bluekit, a new phishing kit pitching a broader model. It advertises 40+ website templates, automated domain purchase and registration, 2FA support, spoofing, geolocation emulation, Telegram and browser notifications, antibot cloaking, and add-ons like an AI assistant, voice cloning, and a mail sender.” reads the report published by Varonis. Bluekit supports multiple phishing templates targeting major services such as iCloud, Apple ID, Gmail, Outlook, Yahoo, ProtonMail, GitHub, Twitter, Zoho, Zara, and Ledger. It combines email, cloud, crypto, and developer platforms in one kit. – https://securityaffairs.com/191646/cyber-crime/bluekit-phishing-kit-enables-automated-phishing-with-40-templates-and-ai-tools.html
Surveillance Technology is Silencing Journalists in Kashmir
(Petra Molnar – Tech Policy Press) Few places illustrate the entanglement of territorial conflict, digital governance, and repressive spyware as starkly as Kashmir, the mountainous region located in the northernmost part of the Indian subcontinent, bordered by Afghanistan to the northwest, China to the northeast, and Pakistan to the west. Long administered in part by India and marked by decades of militarization, Kashmir entered a new phase in 2019 when the government led by Narendra Modi revoked its limited constitutional autonomy. In the years since, Kashmir has become a dense site of surveillance, where checkpoints and patrols are increasingly supplemented by biometric systems, networked CCTV, telecommunications controls, and expansive data governance frameworks. Indian authorities routinely frame these measures as ‘necessary’ for security and stability. Yet these technologies are also reshaping everyday life into a space of continuous monitoring, where movement, communication, and association are subject to continual scrutiny. In April 2025, a deadly militant attack near Pahalgam in Kashmir killed at least 26 civilians, many of them tourists, marking one of the deadliest attacks on civilians in the region in recent years. The incident triggered a sweeping security response, including intensified military deployments and expanded surveillance measures, further entrenching an already pervasive security architecture across the region. – https://www.techpolicy.press/surveillance-technology-is-silencing-journalists-in-kashmir/
Defense/Intelligence/Warfare
AI, Cyberwarfare, and Autonomous Weapons: Inside America’s New Military Strategy
(Pierluigi Paganini – Security Affairs) May 2026 marks a turning point in the evolution of modern warfare: the convergence of artificial intelligence, cybersecurity, and conventional military power is no longer theoretical. It is becoming an operational reality. The Pentagon has signed agreements with major technology companies, including OpenAI, Google, Microsoft, Amazon, and SpaceX, to integrate advanced AI models into classified military networks. The stated goal is clear: transform the United States into an “AI-first” military force capable of maintaining decision superiority across every battlefield domain. Under this strategy, AI is no longer treated as a laboratory tool or analytical assistant. It is moving directly into the military chain of command, intelligence analysis, logistics, targeting, and operational planning. More than 1.3 million Department of Defense employees are already using the GenAI.mil platform, cutting processes that once took months down to just days. – https://securityaffairs.com/191842/cyber-warfare-2/ai-cyberwarfare-and-autonomous-weapons-inside-americas-new-military-strategy.html
With launches slated to grow a hundredfold, Space Force seeks more sites, money, people, and AI
(Thomas Novelly – Defense One) The guardians manning screens in the mission-ops center here oversaw the launch of five types of rockets in April, a new record that involved NASA’s Artemis II, the first reused New Glenn booster, and a Falcon 9 lofting the final GPS III satellite. But tomorrow’s Space Force may have no time to mark even epochal missions. Within a decade, service leaders say, Cape Canaveral Space Force Station will be launching hundreds of rockets a year. To facilitate the Pentagon’s fast-growing demand for orbital capability, the Space Force is looking for more launch sites, more money, more troops, and more AI. “In 2025, the Space Force saw a drastic increase in mission requirements across space access, global mission operations, and space control. This trend shows no signs of slowing,” Gen. Chance Saltzman, the Space Force’s top uniformed leader, told House lawmakers last week. “The Space Force we have today is not the Space Force we will need in the future.” – https://www.defenseone.com/threats/2026/05/launches-slated-grow-hundredfold-space-force-seeks-more-sites-money-people-and-ai/413403/
Pentagon turns to AI targeting to help troops shoot drones
(Michael Peck – Defense News) The Department of Defense is looking for AI-enhanced target recognition to help troops, vehicles and ships destroy drones. The C-UAS Close-In Kinetic Defeat Enhancement project focuses on aided target recognition, or AiTR. This uses concepts such as AI, machine learning and computer vision to create a system that can detect threats — and distinguish them from non-threats such as birds — faster than a human operator can. – https://www.defensenews.com/industry/2026/05/07/pentagon-turns-to-ai-targeting-to-help-troops-shoot-drones/
Australian Defence AI policy risks writing modern EW out of the force
(Rhys Kissell – ASPI The Strategist) In 2025, Ukrainian-led opposing forces defeated NATO formations in two major exercises. At exercise Hedgehog in Estonia, about 10 drone operators with cheap first-person-view drones rendered two NATO battalions combat-ineffective in half a day. At Dynamic Messenger off Portugal, a Ukrainian-led red team beat NATO naval forces in all five scenarios, sinking a frigate undetected – with low-cost systems that cost a fraction of the platforms they destroyed. NATO forces couldn’t detect or counter them. The electromagnetic warfare (EW) capabilities that could have done so – software-defined, AI-enabled systems able to identify and disrupt drone control links across a wide area – were either not present or inadequate. – https://www.aspistrategist.org.au/australian-defence-ai-policy-risks-writing-modern-ew-out-of-the-force/
NATO needs policies, standards for sharing AI-enhanced geospatial intel: Official
(Theresa Hitchens – Breaking Defense) The growing use of artificial intelligence to enhance monitoring of adversary activities poses huge interoperability challenges for NATO that require near-term agreements on policies and data standards, NATO’s top intelligence policy officer warned on Monday. Among the biggest concerns for Maj. Gen. Paul Lynch, a British Royal Marine serving as NATO deputy assistant secretary general for intelligence, is the potential for allied commanders to be faced with conflicting national intelligence reports. “We have decades of experience or common standards for air defense, maritime awareness, data formats. The question is whether we apply that same rigor to AI before the technology outpaces the frameworks, or after,” Lynch said at the US Geospatial Intelligence Foundation’s annual GEOINT Symposium here. “And the answer will be decided in the next three years.” – https://breakingdefense.com/2026/05/nato-needs-policies-standards-for-sharing-ai-enhanced-geospatial-intel-official/
Pentagon seeks smarter, self-organizing drones as autonomous-warfare budget is poised to skyrocket
(Patrick Tucker – Defense One) Two requests to industry may help the Pentagon address one of the emerging challenges of warfare: enabling a relatively small number of human operators to direct a far larger number of robots. The Materials for Physical Compute in Untethered Robotics effort seeks to make autonomous systems more intelligent, while Decentralized Artificial Intelligence through Controlled Emergence aims to help robots form teams and carry out missions. These DARPA projects may feed ideas to the Defense Autonomous Working Group, the lead Pentagon office for drone warfare, whose budget would soar from $226 million this year to $54 billion under the new 2027 spending proposal. Much of that huge sum will be wasted if the military spends it before establishing a clear understanding of how operators will buy, train on, use, and maintain autonomous weapons, according to a recent commentary piece by David Petraeus, the retired Army general and former CIA director, and scholar Isaac Flanagan. Writing for The Hill, they argue that the lack of such understanding constrained the use of drones during the past decade of U.S. wars in the Middle East. – https://www.defenseone.com/technology/2026/05/pentagon-drones-autonomous-warfare/413323/?oref=d1-featured-river-top
Navy EA-18Gs over Iran, Venezuela show rise in aerial electronic attack
(Andrew Dardine – Defense One) The Pentagon is using the Navy’s EA-18G Growlers more than ever, suggesting more development and a bigger role for aerial electronic attack are on the way. Flying from the carriers Abraham Lincoln and Gerald R. Ford in recent months, the Growlers have used jammers and missiles to confuse, suppress, and destroy Iranian communications and radar systems and surface-to-air missile batteries. They were also key to January’s seizure of Venezuelan President Nicolas Maduro, when they suppressed and destroyed Russian and Chinese-derived air defenses and other infrastructure to allow the abduction team to reach their Caracas target with virtually no resistance. As usual in these types of operations, Venezuelan air defense operators learned of the attack only when their radar screens went dark. – https://www.defenseone.com/ideas/2026/05/navy-growler-iran-venezuela-electronic-attack/413316/?oref=d1-featured-river-secondary
Frontiers
One in three researchers have no access to quantum research facilities, depriving society of its full potential
One in three researchers have no access to quantum research facilities, heavily limiting the technology’s potential in fields including healthcare, computing, cybersecurity and climate modelling, according to a UNESCO report. ‘The Quantum Moment: A Global Report, Outcomes of the International Year of Quantum Science and Technology’ shows stark North-South divides in access to the technology, with Europe and North America hosting seven times more quantum science events per country over the past year than Africa. Findings also highlight a persistent gender gap, especially among senior-level quantum researchers. – https://www.unesco.org/en/articles/one-three-researchers-have-no-access-quantum-research-facilities-depriving-society-its-full?hub=701
Useful quantum computers move closer, Harvard researchers say
(DigWatch) Researchers in the Harvard quantum ecosystem say useful quantum computers may arrive sooner than expected, as rapid progress in networking, fault tolerance, and commercialisation reshapes assumptions about the field. The report points to three spinout companies emerging from affiliated labs as evidence that the technology is moving more quickly into real-world development than many had anticipated. – https://dig.watch/updates/quantum-computers-move-closer-than-expected
New dataset uses AI and disaster news to fill in knowledge gaps and map interconnected risks
(European Commission) Climate-related disasters such as hurricanes, floods, and wildfires, or geological hazards like earthquakes and landslides, generate enormous amounts of news coverage. Yet most of this information remains scattered, unstructured, and too fragmented for scientists, policymakers, or emergency responders to act on quickly. To bridge the gap, a new study by the JRC, carried out in cooperation with researchers from the IT company Engineering Ingegneria Informatica and the Institute of Health and Society (IRSS) of the University of Louvain, developed an AI-powered pipeline that reads disaster news and turns it into clear, structured knowledge. The resulting resource advances data-driven approaches to disaster scenario modelling, impact analysis, and decision support in disaster risk management. The study is published in Nature Scientific Data. All compiled data, code, and processing workflows are openly available, and an interactive dashboard lets anyone explore the disaster storylines and knowledge graphs directly. – https://dig.watch/updates/ai-turns-disaster-news-into-global-risk-maps
Australia expands collaboration efforts in key science and technology areas
(DigWatch) The Australian Government Department of Industry, Science and Resources has announced $6.2 million in funding for nine international projects under round two of the Global Science and Technology Diplomacy Fund (GSTDF). The programme supports collaboration, innovation and commercialisation in priority technology areas. The selected projects focus on AI, advanced manufacturing, quantum technologies and hydrogen, with several initiatives applying AI to areas such as robotics, satellite networks and ocean forecasting. – https://dig.watch/updates/australia-expands-collaboration-efforts-in-key-science-and-technology-areas
White paper sets priorities for Europe’s digital sovereignty and tech competitiveness
(DigWatch) A new white paper by GITEX AI Europe, in partnership with research firm LUE, outlines key priorities for strengthening Europe’s digital sovereignty and long-term technological competitiveness. The study suggests scaling AI computing power, expanding cloud infrastructure, adopting open-source standards and increasing startup investment as central pillars. These measures aim to align innovation capacity with broader economic and industrial growth. – https://dig.watch/updates/white-paper-sets-priorities-for-europes-digital-sovereignty-and-tech-competitiveness
Meta explores agentic AI assistants
(DigWatch) Meta is developing an advanced ‘agentic’ AI assistant designed to perform complex, multi-step tasks for consumers. The initiative reflects the company’s broader push to expand its AI capabilities beyond basic chat functions. The planned assistant is intended to act more autonomously, helping users complete actions such as organising activities or managing digital tasks. Powered by a new internal model called Muse Spark, the assistant is still under development, and its rollout timeline depends on internal testing. – https://dig.watch/updates/meta-explores-agentic-ai-assistants
Advancing Canada’s capacity in photonic semiconductors and AI innovation
(Government of Canada) The Honourable Mélanie Joly, Minister of Industry and Minister responsible for Canada Economic Development for Quebec Regions, announced that work will begin to spin off the National Research Council of Canada (NRC)’s Canadian Photonics Fabrication Centre (CPFC) into a commercial entity with firmly Canadian foundations and with Canadian industrial development at its core. Photonics technology is a central part of the Government of Canada’s plan to build up our country’s advanced manufacturing sectors and sovereign capabilities, including in auto, defence, aerospace and AI. As the global demand for AI technologies increases, photonic devices will play an increasingly important role in addressing challenges associated with performance, power and heat in large data centres and AI compute facilities. By strengthening domestic photonics capabilities, Canada can enhance its economic resilience, safeguard its technological sovereignty and secure a leadership role in the compound semiconductor industry. – https://www.canada.ca/en/national-research-council/news/2026/05/advancing-canadas-capacity-in-photonic-semiconductors-and-ai-innovation.html