Governance and Regulation
Turkey reviews major social media platforms’ handling of children’s data
(Turkish Minute) Turkey’s data protection watchdog has opened a review of how six major social media platforms process children’s personal data, as Ankara prepares separate legislation that would tighten state control over online accounts and content. The Personal Data Protection Authority said the Personal Data Protection Board decided to launch an ex officio review of TikTok, Instagram, Facebook, YouTube, X and Discord. The authority said the review will examine how children’s personal data is processed on the platforms and what safeguards are in place, citing the “best interests of the child” and the need to protect children from risks in digital environments. – https://www.turkishminute.com/2026/02/21/turkey-reviews-major-social-media-platforms-handling-of-childrens-data/
OpenAI’s Altman says world ‘urgently’ needs AI regulation
(Japan Today) Sam Altman, head of ChatGPT maker OpenAI, told a global artificial intelligence conference on Thursday that the world “urgently” needs to regulate the fast-evolving technology. An organization could be set up to coordinate these efforts, similar to the International Atomic Energy Agency (IAEA), he said. Altman is one of a host of top tech CEOs in New Delhi for the AI Impact Summit, the fourth annual global meeting on how to handle advanced computing power. – https://japantoday.com/category/tech/openai%27s-altman-says-world-%27urgently%27-needs-ai-regulation
Digital addiction in Italy sparks debate over social media bans
(DigWatch) Italy has warned that digital addiction among teenagers is rising sharply, as health authorities link excessive social media and gaming use to family and educational challenges. Officials say bans alone will not resolve the issue. According to Italy’s National Institute of Health, about 100,000 young people aged 15 to 18 are at risk of social media addiction. A further 500,000 are estimated to suffer from gaming disorder, recognised by the World Health Organisation as a medical condition. – https://dig.watch/updates/italy-teen-social-media-addiction
Geostrategies
Saudi Arabia steps into global AI leadership to shape AI future
(DigWatch) The Global Partnership on Artificial Intelligence (GPAI), a multilateral initiative hosted by the OECD and launched by the G7, has officially welcomed Saudi Arabia as a new member. The move reflects the Kingdom’s commitment to shaping global AI governance and ethical technology use. – https://dig.watch/updates/saudi-arabia-steps-into-ai-leadership
Security and Surveillance
EU–US draft data pact allows automated decisions on travellers
(DigWatch) A draft data-sharing agreement between the EU and the US Department of Homeland Security would allow automated decisions about European travellers to continue under certain conditions, despite attempts to tighten protections. The text permits such decisions when authorised under domestic law and relies on safeguards that let individuals request human intervention instead of leaving outcomes entirely to algorithms. A deal designed to preserve visa-free travel would require national authorities to grant access to biometric databases containing fingerprints and facial scans. – https://dig.watch/updates/eu-us-draft-data-pact-allows-automated-decisions-on-travellers
Anthropic unveils Claude Code Security to detect and fix code bugs
(Pierluigi Paganini – Security Affairs) Anthropic has introduced Claude Code Security, a new AI-powered service designed to scan software codebases for vulnerabilities and recommend fixes. Built into Claude Code, the tool aims to help teams detect and remediate security flaws faster. The capability is currently being rolled out in a limited research preview for Enterprise and Team customers. “Claude Code Security, a new capability built into Claude Code on the web, is now available in a limited research preview. It scans codebases for security vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix security issues that traditional methods often miss.” reads the announcement published by Anthropic. – https://securityaffairs.com/188358/ai/anthropic-unveils-claude-code-security-to-detect-and-fix-code-bugs.html
Shai-Hulud-Like Worm Targets Developers via npm and AI Tools
(Alessandro Mascellino – Infosecurity Magazine) A supply chain worm resembling earlier Shai-Hulud malware has been discovered spreading through malicious npm packages. According to Socket’s Threat Research Team, the campaign, tracked as SANDWORM_MODE, has been identified across at least 19 npm packages published under two aliases, official334 and javaorg. The operation builds on known supply chain tradecraft but adds a notable twist: direct interference with AI coding tools. Researchers said the malware not only stole developer and CI credentials and propagated through compromised npm and GitHub accounts, but also injected rogue MCP servers into local AI assistant configurations and harvested API keys for nine large language model providers. – https://www.infosecurity-magazine.com/news/shai-hulud-like-worm-devs-npm-ai/
AI-powered campaign compromises 600 FortiGate systems worldwide
(Pierluigi Paganini – Security Affairs) Amazon Threat Intelligence reports that a Russian-speaking, financially motivated threat actor used commercial generative AI services to compromise more than 600 FortiGate devices in 55 countries. The activity, observed between January 11 and February 18, 2026, highlights how cybercriminals are increasingly leveraging AI tools to scale and automate attacks against exposed network infrastructure worldwide. The attacker did not exploit any FortiGate vulnerabilities. Instead, the threat actor abused exposed management ports and weak single-factor credentials. “Amazon Threat Intelligence observed a Russian-speaking financially motivated threat actor leveraging multiple commercial generative AI services to compromise over 600 FortiGate devices across more than 55 countries from January 11 to February 18, 2026.” reads the report published by Amazon. “No exploitation of FortiGate vulnerabilities was observed—instead, this campaign succeeded by exploiting exposed management ports and weak credentials with single-factor authentication, fundamental security gaps that AI helped an unsophisticated actor exploit at scale.” – https://securityaffairs.com/188351/hacking/ai-powered-campaign-compromises-600-fortigate-systems-worldwide.html
Russian Cyber Threat Actor Uses GenAI to Compromise Fortinet Firewalls
(Kevin Poireault – Infosecurity Magazine) A low-skilled cyber threat actor has been observed leveraging several generative AI (GenAI) tools to deploy a malicious campaign aimed at compromising Fortinet’s FortiGate firewall appliances. In an Amazon Web Services (AWS) Security blog published on February 20, CJ Moses, CISO of Amazon Integrated Security, shared findings about the campaign. Amazon Threat Intelligence assessed that the attacker was a Russian-speaking, financially motivated threat actor with limited technical capabilities. The threat actor used multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operation. AWS assessed the campaign ran from January 11 to February 18, 2026, and compromised over 600 FortiGate devices across more than 55 countries. Amazon Threat Intelligence noted that AWS infrastructure was not involved in this campaign and that no exploitation of FortiGate vulnerabilities was observed. – https://www.infosecurity-magazine.com/news/russian-threat-actor-genai/
Fraud Investigation Reveals Sophisticated Python Malware
(Alessandro Mascellino – Infosecurity Magazine) A sophisticated Python-based malware deployment uncovered during a fraud investigation has revealed a layered attack involving obfuscation, disposable infrastructure and commercial offensive tools. The discovery was made by the Secuinfra Falcon Team after a user reported unusual desktop behaviour and unauthorised PayPal transfers. The case began when the victim noticed “strange black windows” appearing briefly on screen and captured screenshots. Those images showed fragments of a command script that had failed to fully suppress its output, exposing evidence of payload decoding and execution. – https://www.infosecurity-magazine.com/news/fraud-investigation-python-malware/
Leading Semiconductor Supplier Advantest Hit by Ransomware Attack
(Danny Palmer – Infosecurity Magazine) Advantest Corporation, the Japanese technology company and prominent manufacturer of testing equipment for the semiconductor industry, has been hit by a ransomware attack. In a statement released on February 19, the company, which is a supplier to major chip producers including Samsung, said it was “responding to a cybersecurity incident involving ransomware that may have impacted certain systems within its network.” Headquartered in Tokyo, Advantest employs over 7,500 people and has offices in locations around the world, including Munich, Germany and San Jose, California. – https://www.infosecurity-magazine.com/news/advantest-ransomware-attack/
Jackpotting Surge Costs Banks Over $20m, Warns FBI
(Phil Muncaster – Infosecurity Magazine) Nearly two-fifths of ATM jackpotting attacks recorded in the US since 2020 occurred last year, the FBI has warned. A new FBI Flash alert claimed that the 700+ attacks seen in 2025 resulted in losses of over $20m. Typically, threat actors deploy malware such as the Ploutus variant to exploit the eXtensions for Financial Services (XFS) API and give them control over the ATM, the FBI explained. “When a legitimate transaction occurs, the ATM application sends instructions through XFS for bank authorization. If a threat actor can issue their own commands to XFS, they can bypass bank authorization entirely and instruct the ATM to dispense cash on demand,” it said. – https://www.infosecurity-magazine.com/news/jackpotting-surge-costs-banks-20m/
University of Mississippi Medical Center Still Offline After Ransomware Attack
(Phil Muncaster – Infosecurity Magazine) Mississippi’s largest hospital group is still reeling from a ransomware attack late last week that has forced its IT systems offline. The University of Mississippi Medical Center (UMMC) is one of the state’s largest employers, with over 10,000 staff working across seven hospitals, dozens of clinics and over 200 telehealth sites. It revealed in a post on X on February 19 that “many UMMC IT systems are down, including access to our electronic medical records,” due to a cybersecurity attack. “Outpatient and ambulatory surgeries/procedures and imaging appointments are cancelled and will be rescheduled,” it continued. “Hospital services are continuing for our patients using downtime procedures.” – https://www.infosecurity-magazine.com/news/university-mississippi-medical/
Introducing EVMbench. Making smart contracts safer by evaluating AI agents’ ability to detect, patch, and exploit vulnerabilities in blockchain environments
(OpenAI) Smart contracts routinely secure $100B+ in open-source crypto assets. As AI agents improve at reading, writing, and executing code, it becomes increasingly important to measure their capabilities in economically meaningful environments, and to encourage the use of AI systems defensively to audit and strengthen deployed contracts. Together with Paradigm, we’re introducing EVMbench, a benchmark evaluating the ability of AI agents to detect, patch, and exploit high-severity smart contract vulnerabilities. EVMbench draws on 120 curated vulnerabilities from 40 audits, with most sourced from open code audit competitions. EVMbench additionally includes several vulnerability scenarios drawn from the security auditing process for the Tempo blockchain, a purpose-built L1 designed to enable high-throughput, low-cost payments via stablecoins. These scenarios extend the benchmark into payment-oriented smart contract code, where we expect agentic stablecoin payments to grow, and help ground it in a domain of emerging practical importance. – https://openai.com/index/introducing-evmbench/
UIDAI launches AI enabled biometric deduplication & document verification platform
(The Statesman) The Unique Identification Authority of India (UIDAI) has launched landmark initiatives in India’s digital security framework. It has deployed a next-generation AI-enabled biometric deduplication and document verification platform. The platform will improve the deduplication accuracy of Enrolment and Update transactions undertaken by UIDAI. This “Invisible Shield” marks a new chapter in India’s digital safety mission — a multi-layered AI defence system that performs crores of computations, harnessing accelerated computing to protect citizens’ trust and data integrity. – https://www.thestatesman.com/india/uidai-launches-ai-enabled-biometric-deduplication-document-verification-platform-1503559973.html
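For readers unfamiliar with biometric deduplication, the core idea can be sketched in a few lines: a new enrolment’s biometric template (here represented as a toy embedding vector) is compared against the existing gallery, and any record above a similarity threshold is flagged for review. This is a minimal illustrative sketch only; UIDAI’s actual pipeline, models, and thresholds are not public, and all names and values below are hypothetical.

```python
# Toy sketch of embedding-based biometric deduplication (illustrative only).
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_duplicates(new_embedding, gallery, threshold=0.9):
    """Return IDs of enrolled records whose embedding exceeds the threshold."""
    return [rec_id for rec_id, emb in gallery.items()
            if cosine_similarity(new_embedding, emb) >= threshold]

# Hypothetical enrolled records and a new enrolment attempt.
gallery = {
    "rec-001": [0.9, 0.1, 0.4],
    "rec-002": [0.1, 0.8, 0.2],
}
print(find_duplicates([0.88, 0.12, 0.41], gallery))  # flags rec-001
```

In practice, systems at this scale use high-dimensional templates and approximate nearest-neighbour search rather than the exhaustive pairwise comparison shown here, which is where the “crores of computations” and accelerated computing mentioned above come in.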
Frontiers and Markets
China Develops AI-Powered System for Rare Disease Diagnosis
(SANA) A Chinese research team has developed an advanced system for diagnosing rare diseases using artificial intelligence (AI) technologies, named “DeepRare,” setting a new global record for diagnostic accuracy. Xinhua News Agency reported that the new system was developed by a joint team from Shenhua Hospital, affiliated with Shanghai Jiao Tong University School of Medicine, and the university’s School of Artificial Intelligence. – https://sana.sy/en/miscellaneous/2298155/
Chinese scientists put quantum chaos in ‘slow motion’
(Zhang Tong – SCMP) In a landmark achievement, Chinese scientists have directly observed and manipulated prethermalisation – a critical transitional state in quantum systems – using the 78-qubit “Chuang-tzu 2.0” superconducting processor. This allows researchers to “tune” the speed of quantum decoherence, providing a vital tool for managing complex quantum environments. If a quantum system is disturbed, it naturally returns to a balanced state: the energy and information within it spread out until they are evenly distributed. It is similar to nudging a pendulum, which swings for a while but eventually slows down and stops. This relaxation is a major challenge for quantum computing, which relies on keeping information perfectly intact; if a quantum system changes too quickly, its computational results become difficult to save and retrieve. However, predicting how long this process takes, or what affects it, is beyond the power of existing classical computers. – https://www.scmp.com/news/china/science/article/3344006/chinese-scientists-put-quantum-chaos-slow-motion
Secure quantum-safe optical transport strengthens Japan’s AI data center infrastructure
(DigWatch) Nokia and KDDI Corporation demonstrated quantum-safe optical transport at Sakai Data Center, supporting advanced AI workloads. The network aims to deliver secure, uninterrupted data transfer while protecting sensitive AI operations. The demonstration showcases KDDI’s scalable AI-ready infrastructure for real-time training, inference, and analytics. Quantum-safe encryption and resilient transport protect customer data and critical infrastructure across Japan’s distributed data centres. – https://dig.watch/updates/quantum-safe-optical-transport-data-center