Governance/Regulation/Legislation
European Commission moves to standardise AI transparency obligations
(DigWatch) The European Commission has published draft guidelines outlining how transparency obligations under Article 50 of the AI Act should be applied across certain AI systems. The guidance is intended to help competent authorities, providers and deployers ensure compliance in a consistent, effective and uniform manner. Prepared in parallel with a separate Code of Practice on the marking and labelling of AI-generated content, the draft guidelines clarify the scope of legal obligations and address areas not covered by the code. The focus is on helping users identify when they are interacting with AI systems or encountering AI-generated content. – https://dig.watch/updates/european-commission-to-standardise-ai
EDPS frames safe AI as Europe’s next big idea
(DigWatch) The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights. In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. – https://dig.watch/updates/edps-frames-safe-ai-as-europes-next-big-idea
EU briefing warns AI health benefits need safeguards
(DigWatch) A European Parliamentary Research Service briefing says AI could improve healthcare, disease prevention and well-being across the EU, but warns that its growing use in health advice, AI companions and tools used by children, young people and older adults requires strong safeguards and human oversight. The briefing, focused on health and well-being in the age of AI, says AI is already supporting diagnostics, personalised treatment, health-risk forecasting, hospital management, pharmaceutical development and disease surveillance. It points to use cases in areas such as radiology, oncology, cardiology, rare diseases and cross-border health data exchange. – https://dig.watch/updates/eu-ai-health-wellbeing-safeguards
UK’s Ofcom prioritises child protection and AI moderation under Online Safety Act
(DigWatch) The UK’s Ofcom has outlined its main online safety priorities for 2026–27, signalling tougher oversight of digital platforms under the UK’s Online Safety Act. The regulator said it will continue focusing heavily on child protection while expanding enforcement efforts against illegal hate speech, terrorism-related material, intimate image abuse, and AI-generated harms. – https://dig.watch/updates/uks-ofcom-prioritises-child-protection-and-ai-moderation-under-online-safety-act
AI productivity claims need stronger scrutiny according to Ada Lovelace Institute’s findings
(DigWatch) The Ada Lovelace Institute has warned that AI productivity claims in the UK public sector need stronger scrutiny, as headline estimates are already shaping spending, workforce planning and public service reform. In a policy briefing on AI and public services, the institute says UK government communications, industry reports and third-party analyses frequently present AI as a tool for cutting costs, saving time and boosting growth. It argues that stronger evidence is needed to assess whether those claims translate into public value. – https://www.adalovelaceinstitute.org/wp-content/uploads/pdfs/34927/measuring-up.pdf
A Nobel economist models how AI rots the information environment
(Meg Tapia – ASPI The Strategist) Most Australians already know something is wrong with their information environment. An Australian National University survey of 20,000 Australians found disinformation consistently ranked among top national security concerns. Participants rated it as more serious than the prospect of a foreign military attack. That gut feeling – what participants described as being overwhelmed by information volume, an inability to distinguish truth from falsehood, and a sense of algorithmic manipulation – now has economic backing. In a September 2025 report, Nobel Prize-winning economist Joseph Stiglitz and his Columbia University colleague Maxim Ventura-Bolet use economic modelling to show why the information environment is deteriorating and why AI will make it worse unless governments intervene. – https://www.aspistrategist.org.au/how-ai-rots-the-information-environment-a-nobel-economist-has-modelled-it/
Australia launches national AI platform ‘AI.gov.au’
(DigWatch) The Department of Industry, Science and Resources has announced the launch of AI.gov.au through the National Artificial Intelligence Centre. The platform is designed to help organisations adopt AI safely and responsibly in line with the National AI Plan. AI.gov.au provides a central source of guidance, tools and resources to support businesses and not-for-profits. It aims to help users identify AI opportunities, plan implementation, manage risks and build internal capability. – https://dig.watch/updates/australia-launches-national-ai-platform-ai-gov-au
Empowering a Digital Generation: The Rise of “Digital Citizens for Peace”
(UNESCO) Digital technologies have fundamentally transformed how information is accessed and consumed, with UNESCO research highlighting that 56 percent of internet users now rely on social media as their primary news source (UNESCO/IPSOS, 2023). In Pakistan, this shift is characterized by a stark “literacy–connectivity gap.” While the country boasts over 205 million mobile subscribers and 116 million internet users, the rapid expansion of connectivity has significantly outpaced the development of critical media literacy. For many citizens, unprecedented access to the digital world leaves them vulnerable to widespread disinformation, as they lack the skills to critically assess sources or distinguish fact from falsehood. – https://www.unesco.org/en/articles/empowering-digital-generation-rise-digital-citizens-peace?hub=701
China launches AI ethics review pilot programme
(DigWatch) China has launched a national pilot programme for AI ethics review and services, as authorities move to strengthen oversight of growing risks linked to advanced AI systems. The initiative, announced by China’s Ministry of Industry and Information Technology, aims to establish practical mechanisms for AI ethics governance as concerns over algorithmic discrimination, emotional dependence, and broader societal risks continue to grow. Authorities said the initiative will initially operate in provincial-level regions hosting national AI industrial innovation pilot zones. It will focus on refining provincial AI ethics review rules, supporting the creation of ethics committees, and developing specialised ethics review and service centres. Chinese regulators also plan to translate the ethics review process into technical standards while improving mechanisms for reporting AI-related ethical concerns. – https://dig.watch/updates/china-launches-ai-ethics-review-pilot-programme
China outlines AI and energy integration plan
(DigWatch) The Chinese National Energy Administration, alongside the National Development and Reform Commission, the Ministry of Industry and Information Technology and the National Data Administration, has released an action plan to promote mutual development between AI and the energy sector. – https://dig.watch/updates/china-outlines-ai-and-energy-integration-plan
World Economic Forum analysis explores AI-driven future planning for organisations
(DigWatch) A World Economic Forum article argues that organisations need to move beyond static reports and analytical forecasts to become more future-ready in an era marked by rapid technological and geopolitical change. The article highlights FutureSlam, a foresight method that combines participatory scenario-building, AI-supported reflection and improvisational performance to help organisations experience possible futures rather than analyse them. The authors say many organisations already invest in foresight, but struggle to translate insights into operational decisions because they often remain confined to strategy teams and slide decks. – https://dig.watch/updates/world-economic-forum-analysis-explores-ai-driven-future-planning-for-organisations
Security and Surveillance
Google warns artificial intelligence is accelerating cyberattacks and zero-day exploits
(Pierluigi Paganini – Security Affairs) Artificial intelligence is rapidly changing the cyber threat landscape, and a new report from the Google Cloud Threat Intelligence team highlights how attackers already use AI to improve vulnerability exploitation and gain initial access to cloud environments. The report shows a clear shift in attacker behavior. Attackers now target software flaws and cloud services more than stolen credentials or phishing, making vulnerability exploitation a top entry method. One of the most important findings concerns the growing role of AI in offensive operations. Attackers no longer use AI only to write phishing emails or automate repetitive tasks. They now experiment with AI systems capable of identifying vulnerabilities, generating exploit code, and accelerating attack chains. – https://securityaffairs.com/191984/ai/google-warns-artificial-intelligence-is-accelerating-cyberattacks-and-zero-day-exploits.html
Dirty Frag: Linux kernel hit by second major security flaw in two weeks
(Alexander Martin – The Record) A second major Linux vulnerability has been disclosed in as many weeks, this time by an independent security researcher who published a working exploit after a coordinated disclosure embargo collapsed. Nicknamed “Dirty Frag,” the issue was found in the same area of the Linux kernel that produced last month’s Copy Fail bug, and likewise allows anyone with a basic account on an affected computer to seize full administrative control. Copy Fail had prompted concern because it gave hackers an escape route from cloud containers, meaning a compromised application running inside a supposedly isolated environment could break out and take control of the entire host server — a major risk given the cloud industry’s dependence on Linux distributions. – https://therecord.media/dirty-frag-linux-kernel-hit-by-second-major-bug
UK water company allowed hackers to lurk undetected for nearly two years, regulator finds
(Alexander Martin – The Record) A British utilities company supplying drinking water to 1.6 million people failed to discover hackers hidden inside its computer network for nearly two years before the intrusion came to light through an IT performance slowdown, the UK’s data protection regulator has found. The Information Commissioner’s Office (ICO) fined South Staffordshire Water £963,900 ($1.3 million) on Monday over an attack by the Cl0p ransomware group that led to the personal data of 633,887 customers and employees being published in August 2022. According to the penalty notice, the initial access occurred almost two years earlier in September 2020 when an employee opened a malicious email attachment, installing software that gave the attacker a foothold on the corporate network. – https://therecord.media/uk-water-company-had-hackers-lurking-for-years
Texas sues Netflix over alleged data practices that create ‘surveillance machinery’ without user consent
(Suzanne Smalley – The Record) Texas Attorney General Ken Paxton said Monday that the state is suing Netflix for allegedly not obtaining user consent before collecting and sharing subscriber data with advertisers and data brokers. The lawsuit cites several examples of Netflix leadership asserting that the company does not collect and share user data with advertisers even as the company has long used “intentional engineering to track and log users’ viewing habits, preferences, devices, household networks, application usage, and other sensitive behavioral data,” according to a press release. This tracking is also used to analyze kids’ profiles, the lawsuit said, and to pinpoint users’ locations. – https://therecord.media/texas-sues-netflix-over-data-practices-surveillance
TrickMo Variant Routes Android Trojan Traffic Through TON
(Alessandro Mascellino – Infosecurity Magazine) A new variant of the TrickMo Android banking trojan has moved its primary command-and-control (C2) transport onto The Open Network (TON) blockchain, routing communications through the decentralized overlay’s .adnl identities to make traditional domain takedowns largely ineffective. The variant, identified by ThreatFabric and labeled TrickMo C, was tracked between January and February 2026 in active campaigns against banking and wallet users in France, Italy and Austria, according to new analysis from the firm’s Mobile Threat Intelligence Team. Telemetry indicated the variant was progressively replacing its predecessor across operator campaigns, with TikTok-themed lures circulated via Facebook ads. – https://www.infosecurity-magazine.com/news/trickmo-c-ton-network-android/
Fake Claude Code Page Pushes PowerShell Stealer at Devs
(Alessandro Mascellino – Infosecurity Magazine) A previously undocumented information stealer has been distributed through fake Claude Code installation pages, hijacking Chromium browsers to bypass App-Bound Encryption and exfiltrate cookies, passwords and payment data from developer workstations. The campaign was detailed on 11 May by Ontinue’s Cyber Defense Center, which traced the activity to three operator-controlled domains registered within a six-day window in April 2026. Victims arrived at the lookalike installation page after clicking sponsored search results for “install claude code.” – https://www.infosecurity-magazine.com/news/fake-claude-code-installer/
US: FCC Relaxes Foreign-Made Router Ban to Allow for Security Updates
(Kevin Poireault – Infosecurity Magazine) The US Federal Communications Commission (FCC) has extended the deadline for owners of banned internet routers to provide security updates to US-based users by two years. In March 2026, the Commission banned the import and sale of all “consumer-grade” internet routers produced in a foreign country, citing “an unacceptable risk” to the national security of the US. – https://www.infosecurity-magazine.com/news/us-fcc-relaxes-foreign-router-ban/
ShinyHunters Escalates Canvas Extortion with School-by-School Ransom Campaign
(Beth Maundrill – Infosecurity Magazine) The education sector has found itself in the crosshairs of a ShinyHunters “pay or leak” extortion campaign following the compromise of Instructure, the company behind the Canvas Learning Management System. The original compromise of Instructure occurred on April 25, with around 275 million records from 8,809 educational institutions stolen. ShinyHunters gained unauthorized access to Instructure systems by exploiting a vulnerability in the Free-For-Teacher version of Canvas. Over 3.65 TB of data is said to have been exfiltrated by the group. The group made its first extortion attempt by posting a ransom demand on its data leak site. The initial deadline was 8 May, after which the group threatened to leak data. – https://www.infosecurity-magazine.com/news/shinyhunters-escalates-canvas/
Zara Data Breach Impacts Nearly 200,000 Customers
(Phil Muncaster – Infosecurity Magazine) A ShinyHunters campaign has resulted in the compromise of information belonging to over 197,000 customers of fashion outlet Zara, according to HaveIBeenPwned. The data breach notification service posted a brief note on its website explaining data stolen during an April 2026 incident included unique email addresses alongside product Stock Keeping Units (SKU), order IDs and information relating to support tickets. Initially, Zara parent company Inditex claimed that no names, passwords, bank-card details or any other payment methods were affected by the incident. – https://www.infosecurity-magazine.com/news/zara-data-breach-impacts-200000/
Crimenetwork returns after takedown, dismantled again by German authorities
(Pierluigi Paganini – Security Affairs) German police dismantled a resurrected version of the German-language cybercrime marketplace Crimenetwork, just months after the original platform was taken down. The second iteration of the site had already attracted more than 22,000 users and over 100 sellers, showing how quickly underground markets can recover when operators are able to rebuild their infrastructure. “Before being shut down by law enforcement at the end of 2024, ‘Crimenetwork’ was for many years one of the central marketplaces of the German-speaking underground economy. The relaunch of the platform offered a similarly wide range of illegal goods and services, including stolen data, drugs, and forged documents. The relaunch most recently boasted over 22,000 users and more than 100 vendors,” reads the announcement by the BKA. “Users of the new platform used cryptocurrencies such as Bitcoin, Litecoin, and Monero to conduct their transactions. During the operation, law enforcement secured extensive evidence suggesting the platform generated revenue exceeding €3.6 million.” – https://securityaffairs.com/191969/cyber-crime/crimenetwork-returns-after-takedown-dismantled-again-by-german-authorities.html
Frontiers
China opens a new era of computing with fourth-generation quantum machine
(DigWatch) China has launched its fourth-generation superconducting quantum computer, marking a further step in the country’s push to scale advanced computing infrastructure. Developed by Origin Quantum, the system, named Origin Wukong-180, has begun accepting quantum computing tasks from users worldwide. The machine is built around a 180-qubit superconducting chip and integrates fully self-developed core systems, including the chip architecture, measurement and control systems, environmental support, and operating software. According to the company, the platform represents full-stack domestic capability across the quantum computing chain. – https://dig.watch/updates/fourth-generation-quantum-machine
Brazil tests quantum-secure communication over Recife fibre network
(DigWatch) Researchers in Brazil have developed the Recife Quantum Network, a quantum key distribution system that uses inactive optical fibre already installed in the city’s urban infrastructure to test secure communications outside a laboratory setting. The project, led by Professor Daniel Felinto at the Federal University of Pernambuco, connects university departments through dark fibre and uses quantum key distribution to protect information exchange. – https://dig.watch/updates/brazil-recife-quantum-key-distribution-network