Governance/Regulation/Legislation
ILO adopts first-ever conclusions on AI in manufacturing work
(International Labour Organization) Government, employer, and worker representatives from the manufacturing sector have adopted the first-ever tripartite conclusions on artificial intelligence (AI) in manufacturing, outlining challenges and opportunities for promoting decent work, productivity and a just transition. Adopted on 17 April following five days of discussions at the International Labour Organization (ILO), the conclusions set out recommendations to ensure that AI supports decent work, enhances productivity, and contributes to a just transition. Their adoption marks a significant step in the ILO’s efforts to address the profound changes that AI is bringing to a sector employing almost 500 million workers worldwide. – https://www.ilo.org/resource/news/ilo-adopts-first-ever-conclusions-ai-manufacturing-work
840,000 deaths a year linked to psychosocial risks at work
(International Labour Organization) More than 840,000 people die each year from health conditions linked to psychosocial risks, such as long working hours, job insecurity, and workplace harassment, according to a new global report by the International Labour Organization (ILO). These work-related psychosocial risks are mainly associated with cardiovascular diseases and mental disorders, including suicide. The report also finds that these risks account for nearly 45 million disability-adjusted life years (DALYs) lost annually, reflecting years of healthy life lost due to illness, disability, or premature death, and are estimated to result in economic losses equivalent to 1.37 per cent of global GDP each year. The report, The psychosocial working environment: Global developments and pathways for action, highlights the growing impact of how work is designed, organized, and managed on workers’ safety and health. It warns that psychosocial risk factors—including long working hours, job insecurity, high demands with low control, and workplace bullying and harassment—can create harmful working environments if not properly addressed. – https://www.ilo.org/resource/news/840000-deaths-year-linked-psychosocial-risks-work
Italy issues guidelines requiring consent for email tracking pixels
(DigWatch) Italy’s Data Protection Authority has issued new guidelines on tracking pixels used in email communications, requiring organisations to inform users and obtain consent before deploying the hidden monitoring tools. Published on 17 April 2026, the Garante per la Protezione dei Dati Personali guidelines address the invasive nature of tracking pixels, which silently monitor whether recipients open and read emails without their knowledge. Tracking pixels are tiny, often invisible images embedded in emails that automatically send information back to the sender when recipients open the message. The pixels can collect data, including device type, IP address, and exact time of access. – https://dig.watch/updates/italy-issues-guidelines-requiring-consent-for-email-tracking-pixels
House Republicans unveil data privacy law that would override state protections
(Suzanne Smalley – The Record) House Republicans on Wednesday introduced a federal comprehensive data privacy bill that would preempt at least 20 state laws, alarming privacy advocates who urged lawmakers to vote against it because they believe the protections it offers are too weak. The bill, known as the SECURE Data Act, is backed by top Republicans on the House Energy and Commerce and Financial Services committees and is the result of 14 months of work by a Republican-only Privacy Working Group tasked with drafting it. Congress has tried for years to get comprehensive data privacy legislation enacted with no success. The new bill tears up a tougher proposal that was introduced in the last Congress to bipartisan support but opposition from House Republican leaders. – https://therecord.media/house-republicans-unveil-data-privacy-law-override-state-measures
Time to apply the brakes to runaway AI, says pioneer
(UN News) If AI is “a very fast car with no steering wheel” then regulation must provide one, insists Nobel laureate and Artificial Intelligence pioneer Geoffrey Hinton, the visionary scientist widely known as the “godfather” of the self-learning tech. Speaking at the Digital World Conference (DWC): AI for Social Development – co-organized by the UN Research Institute for Social Development (UNRISD) – Professor Hinton stressed that rapid advances in AI must be guided more carefully to serve societies – rather than undermine them. “If you ever went out with a car that had no brake, boy, you are in trouble if you go down a hill,” he told delegates. “But you’re in even more trouble if there’s no steering wheel”. His remarks came during a busy week for AI policymaking, as governments and UN panels stepped up discussions on governance, inclusion and risk management, amid the growing integration of artificial intelligence across the global economy and society. – https://news.un.org/en/story/2026/04/1167361
Children and victims of child sexual abuse are being ‘failed’, warns charity as EU found to host 63% of world’s criminal child sexual abuse webpages
(Internet Watch Foundation) More than half of the global child sexual abuse URLs (310,437) identified by the Internet Watch Foundation (IWF) in 2025 were traced to hosting services in EU member states. Bulgaria, the Netherlands and Romania account for lion’s share (73%) of child sexual abuse webpages hosted within the EU. IWF calls for the swift adoption of a permanent legislative framework that would ensure voluntary detection of child sexual abuse online in the bloc. – https://www.iwf.org.uk/news-media/news/children-and-victims-of-child-sexual-abuse-are-being-failed-warns-charity-as-eu-found-to-host-63-of-world-s-criminal-child-sexual-abuse-webpages/
Women shaping our digital future
(UNIDIR) On the occasion of International Girls in ICT Day, UNIDIR caught up with Catalina Vera Toro, Alternate Representative of Chile to the Organization of American States, who participated in the 2025 editions of both UNIDIR’s Women in AI Fellowship and Women in Cyber Fellowship. She reflects on her work on artificial intelligence (AI) governance and diplomacy and shares advice for young women wishing to join the field. – https://unidir.org/women-shaping-our-digital-future/
Hong Kong advances digital corporate identity to transform business operations
(DigWatch) The development of its Digital Corporate Identity (CorpID) platform has been accelerated by Hong Kong, positioning it as a central pillar of the territory’s digital economy strategy. Backed by a $300 million public investment approved in 2024, the system is designed to provide corporations with a secure, standardised and scalable digital identity, enabling seamless interaction with both government and private sector services instead of fragmented administrative processes. The platform builds on the success of ‘iAM Smart’, extending digital identity capabilities from individuals to corporations. With more than 4.3 million users already accessing over 1,400 services through the personal system, authorities aim to replicate and expand the model for businesses. – https://dig.watch/updates/hong-kong-advances-digital-corporate-identity-to-transform-business-operations
Anthropic Warned Big Companies About Mythos. Workers and Watchdogs Need a Seat at the Table
(Amber Scorah, Rebecca Petras – Tech Policy Press) Last week, Anthropic announced that its yet-to-be publicly released Mythos model had broken itself out of the sandbox and sent an unsolicited email to a researcher who was eating a sandwich in the park. It was a cute detail appended to an otherwise concerning series of disclosures since Fortune first reported on the model’s existence, which was exposed through an inadvertent leak on the company’s content management system. Apparently, the model’s capabilities were so alarming that Anthropic quickly turned to a handpicked group of 12 technology and finance companies—most of them Big Tech, including Amazon, Google, Apple, Microsoft, and Crowdstrike—alongside 40 other organizations to coordinate a response to the cybersecurity risks the model’s capabilities would unleash. According to Axios, these included, but were not limited to, potentially “bringing down a Fortune 100 company, crippling swaths of the internet or penetrating vital national defense systems.”. Anthropic’s decision to warn others is laudable, and warranted. But what is notable about Anthropic’s diligent efforts to proactively prevent public harm from the use of its new AI model is that there was not a single AI accountability organization on the list of experts Anthropic brought together. There are hundreds of AI safety and accountability organizations—big and small—that have emerged alongside the development of AI. But no civil society group was brought in. No labor union. None of the 8 in 10 Americans who, in a recent poll, said they want human control prioritized over speed of AI development. No researchers who have spent years mapping the potential harms of AI or using their influence to attempt to slow it down before something catastrophic occurs. – https://www.techpolicy.press/anthropic-warned-big-companies-about-mythos-workers-and-watchdogs-need-a-seat-at-the-table/
Geostrategies
South Africa Has AI Leverage. Its Draft Policy Leaves It Unused
(Nathan-Ross Adams – Tech Policy Press) South Africa is not just another developing country struggling to govern artificial intelligence (AI); it is the exception, and the window to act on it is closing. It holds approximately 88% of global platinum-group metal reserves, critical inputs to parts of the semiconductor and data center supply chains that make AI infrastructure possible. It hosts the largest data center market on the continent. Its existing hyperscaler relationships give it procurement leverage that most African states will never have. And a major geopolitical contest over AI infrastructure is being fought on its soil right now, between Chinese and American technology companies competing for control of the systems that will underpin an entire continent’s public sector. In physics, leverage requires three things: a fulcrum, a lever arm and the ability to apply force. The Bushveld Complex, the world’s largest platinum-group metal deposit, is the fulcrum: a mineral endowment that gives it a position in the semiconductor supply chain that no other African state holds. The draft policy is the lever arm. The unresolved “OPTION” provisions in the policy are where force would be applied. Without a policy that specifies what South Africa wants in return for market access, the lever arm sits unused, and the weight of two of the world’s largest technology ecosystems settles exactly where those ecosystems want it to settle. This makes South Africa a global test case. Not because its proposed means of governance is exemplary, but because it is the one developing country with enough structural leverage to negotiate genuinely different terms, and the one that is choosing, through inaction, not to. – https://www.techpolicy.press/south-africa-has-ai-leverage-its-draft-policy-leaves-it-unused/
Trump’s $293 Million Bet to Supercharge Science Faces Spending Headwinds
(Yuqing Liu – Tech Policy Press) The Trump administration has framed its push to boost scientific discovery through greater use of artificial intelligence tools, known as the Genesis Mission, as part of a broader competition with China over technological advancement. But the effort is facing hurdles that could undermine its goals, as agencies leading AI contend with significant budget cuts. Speaking at an April 8 Center for Strategic and International Studies (CSIS) event, Darío Gil, the Trump administration official leading the Genesis Mission, cast the initiative in explicitly competitive terms, warning that the United States cannot afford to fall behind China in the field of AI-enabled scientific discovery. To support that effort, the Department of Energy (DOE) announced in March that it will make $293 million to “advance the Genesis Mission’s efforts to tackle the nation’s most complex science and technology challenges.” The program invites teams from national laboratories, universities and private industry to apply AI to more than 20 national challenges. Those include advanced manufacturing, biotechnology, nuclear energy and quantum information science. – https://www.techpolicy.press/trumps-293-million-bet-to-supercharge-science-faces-spending-headwinds/
Security and Surveillance
Cyber-Attacks Surge 63% Annually in Education Sector
(Phil Muncaster – Infosecurity Magazine) Schools and universities across the globe experienced a sharp increase in attacks last year thanks to the combined threat from geopolitical tensions, ransomware and hacktivism, according to Quorum Cyber. The security service provider’s 2026 Global Cyber Risk Outlook for Higher Education is compiled from FalconFeeds.iothreat intelligence data covering the period November 2023 to October 2025. It revealed that total recorded incidents increased 63%, from 260 attacks between November 2023-October 2024 to 425 in the period November 2024-October 2025. – https://www.infosecurity-magazine.com/news/cyberattacks-surge-63-annually/
The Rising Risk Landscape for Critical National Infrastructure
(Louise Bulman – Infosecurity Magazine) The risks facing industrial organisations are growing in both scale and variety while many critical national infrastructure operators are being asked to stretch budgets beyond what feels safe. Organisations responsible for energy, transport, water and manufacturing are tasked with protecting increasingly complex operations from attackers who are using a much wider range of techniques than even a few years ago. These organisations often find themselves defending essential systems while justifying every item of spend, causing some to cut back on security because the benefits are not always immediately visible. – https://www.infosecurity-magazine.com/opinions/rising-risk-landscape-for-critical/
Google Introduces Unique AI Agent Identities in New Gemini Enterprise Platform
(Kevin Poireault – Infosecurity Magazine) Google is betting big on agentic AI and wants professionals to track their AI agents on its new hub Gemini Enterprise Agent Platform. Introduced a few months after the launch of Gemini Enterprise, the Agent Platform is Google’s new hub to manage agentic AI workflows for both Google-made and external AI agents. The platform aims to bring together with a series of existing and new capabilities. Among them, the Agent Platform enables users to assign every agent a unique cryptographic ID that will be referred to for every action an agent takes. These agent IDs are designed to be mapped back to “defined authorization policies that are traceable and auditable,” said Thomas Kurian, Google Cloud’s CEO, speaking at the Google Cloud Next 26 conference, held in Las Vegas from April 22 to April 24. “We’re bringing zero trust verification to every agent and at every orchestration step,” he added. – https://www.infosecurity-magazine.com/news/google-ai-agent-identities-gemini/
Researchers Uncover 10 In-the-Wild Prompt Injection Payloads Targeting AI Agents
(Phil Muncaster – Infosecurity Magazine) Security researchers have discovered 10 new indirect prompt injection (IPI) payloads targeting AI agents with malicious instructions designed to achieve financial fraud, data destruction, API key theft and more. Threat actors achieve IPI by poisoning web content so that when an agent crawls or summarizes it, the instructions will be executed as legitimate. It impacts any agent that browses and summarizes web pages, indexes content for RAG pipelines, auto-processes metadata/HTML comments, or reviews pages for ad content, SEO ranking or moderation. – https://www.infosecurity-magazine.com/news/researchers-10-wild-indirect/
RAMP Uncovered: Anatomy of Russia’s Ransomware Marketplace
(Pierluigi Paganini – Security Affairs) RAMP was not just another dark web forum. It was one of the clearest examples of how ransomware has become an organized marketplace, with sellers, buyers, brokers, and recruiters all playing different roles in the same criminal ecosystem. A leaked database from RAMP gives us a rare look behind the curtain. It shows how cybercrime works when it becomes structured, commercial, and repeatable. Instead of random hackers acting alone, RAMP functioned like a business platform where criminals could sell access, recruit affiliates, advertise ransomware, and negotiate deals in private. – https://securityaffairs.com/191171/cyber-crime/ramp-uncovered-anatomy-of-russias-ransomware-marketplace.html
Microsoft Graph API misused by new GoGra Linux malware for hidden communication
(Pierluigi Paganini – Security Affairs) A new Linux version of the GoGra backdoor uses Microsoft’s Graph API and an Outlook inbox to deliver malicious payloads stealthily. The malware is linked to the Harvester cyberespionage group, which is believed to be a nation-state actor. The malicious code blends in with legitimate traffic, making detection more difficult and increasing its effectiveness in targeted cyber espionage operations. “The Harvester APT group has developed a new, highly-evasive, Linux version of its GoGra backdoor. The malware uses the legitimate Microsoft Graph API and Outlook mailboxes as a covert command-and-control (C2) channel, allowing it to bypass traditional perimeter network defenses.” reads the report published by Broadcom Symantec. “The Symantec and Carbon Black Threat Hunter Team linked this new Linux malware to a previously known Windows espionage campaign by Harvester due to similarities in code, demonstrating that the threat actor is actively expanding its cross-platform capabilities.” – https://securityaffairs.com/191153/uncategorized/microsoft-graph-api-misused-by-new-gogra-linux-malware-for-hidden-communication.html
Philippines and Bermuda seal strategic partnership on cross-border data protection
(DigWatch) The National Privacy Commission of the Republic of the Philippines has signed a memorandum of understanding with the Office of the Privacy Commissioner of the Islands of Bermuda to strengthen cooperation on personal data protection. The agreement focuses on cross-border enforcement and regulatory collaboration, enabling the exchange of information on investigations and mutual assistance in addressing potential violations of data privacy laws. It also supports coordination in cross-border data breach cases. – https://privacy.gov.ph/philippines-and-bermuda-seal-strategic-partnership-on-cross-border-data-protection/
Meta to track workers’ clicks and keystrokes to train AI
(Kali Hays – BBC) Meta will start tracking the way employees work, including their keystrokes and mouse clicks, to train its artificial intelligence (AI) models. The company, which owns Instagram and Facebook, told workers on Tuesday that a new tool will run on Meta’s computers and internal apps, logging their activity to be used as training data for AI technology. A Meta spokesman told the BBC: “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them.” – https://www.bbc.com/news/articles/cvglyklz49jo
UK intelligence: 100 nations have spyware that can hack Britain
(Mason Boycott-Owen – Politico) More than half of the world’s nation states are believed to have purchased technology that could be capable of hacking into Britain’s infrastructure, companies and private networks, U.K. intelligence has found. The U.K. National Cyber Security Centre — which is part of the GCHQ intelligence agency — believes around 100 countries have procured cyber intrusion software, suggesting the barrier for states to get their hands on the technology is dropping, the agency told POLITICO ahead of a discussion about its findings at its CYBERUK conference in Glasgow Wednesday. Commercial hacking technology, often referred to as spyware, has become a booming market over the past two decades. Products such as NSO’s Pegasus and Intellexa’s Predator have been used to target journalists and political dissidents across the world. – https://www.politico.eu/article/u-k-intelligence-100-nations-have-spyware-that-can-hack-britain/
NCSC: Leave passwords in the past – passkeys are the future
(National Cyber Security Centre) GCHQ’s National Cyber Security Centre (NCSC) heralds a new era of secure sign in with passkeys now ready for mass adoption. Passwords are no longer resilient enough for the contemporary world, cyber experts say in new report published on Day Two of CYBERUK conference in Glasgow. Consumers encouraged to migrate to passkeys where possible to unlock simpler and safer digital lifestyle. – https://www.ncsc.gov.uk/news/ncsc-leave-passwords-in-the-past-passkeys-are-the-future
Apple Fixes iOS Flaw That Let FBI Recover Deleted Signal Messages
(Ravie Lakshmanan – The Hacker News) Apple has rolled out a software fix for iOS and iPadOS to address a Notification Services flaw that stored notifications marked for deletion on the device. The vulnerability, tracked as CVE-2026-28950 (CVSS score: N/A), has been described as a logging issue that has been addressed with improved data redaction. “Notifications marked for deletion could be unexpectedly retained on the device,” Apple said in an advisory. – https://thehackernews.com/2026/04/apple-patches-ios-flaw-that-stored.html
Google Fixes Critical RCE Flaw in AI-Based ‘Antigravity’ Tool
(Elizabeth Montalbano – Dark Reading) Google has fixed a critical flaw in its agentic integrated developer environment (IDE) Antigravity that led to sandbox escape and remote code execution (RCE) after researchers created a proof of concept (PoC) prompt injection attack exploiting it. Prompt injection issues are becoming a major thorn in the side of artificial intelligence (AI) tools, although, in this case, the vulnerability seems to be more of a common problem with IDEs in general rather than an AI-specific one. IDEs are a package of basic tools and capabilities that developers need to program, edit, and test software code; Antigravity is an agentic IDE that provides developers with native tools for filesystem operations. – https://www.darkreading.com/vulnerabilities-threats/google-fixes-critical-rce-flaw-ai-based-antigravity-tool
Defense/Intelligence/Warfare
Ukraine highlights AI strategic shifts
(DigWatch) The National Security and Defense Council of Ukraine has published an overview of global AI developments for March 2026, highlighting a shift towards infrastructure and strategic realignment. The report is part of its ‘AI Frontiers’ analytical series. According to the Council, growing investment and expansion of data centres to fuel AI demands are increasing pressure on energy resources. This is creating new competition not only for computing power but also for energy stability. – https://dig.watch/updates/ukraine-highlights-ai-strategic-shifts
Frontiers
Dynamic Reflections: Probing Video Representations with Text Alignment
(Google DeepMind) The alignment of representations from different modalities has recently been shown to provide insights on the structural similarities and downstream capabilities of different encoders across diverse data types. While significant progress has been made in aligning images with text, the temporal nature of video data remains largely unexplored in this context. In this work, we conduct the first comprehensive study of video-text representation alignment, probing the capabilities of modern video and language encoders. Our findings reveal several key insights. First, we demonstrate that cross-modal alignment highly depends on the richness of both visual (static images vs. multi-frame videos) and text (single caption vs. a collection) data provided at test time, especially when using state-of-the-art video encoders. We propose parametric test-time scaling laws that capture this behavior and show remarkable predictive power against empirical observations. Secondly, we investigate the correlation between semantic alignment and performance on both semantic and non-semantic downstream tasks, providing initial evidence that strong alignment against text encoders may be linked to general-purpose video representation and understanding. Finally, we correlate temporal reasoning with cross-modal alignment providing a challenging test-bed for vision and language models. Overall, our work introduces video-text alignment as an informative zero-shot way to probe the representation power of different encoders for spatio-temporal data. – https://deepmind.google/research/publications/193694/