Governance, Regulation, and Legislation
Anthropomorphism Is Breaking Our Ability to Judge AI
(James Ball – Tech Policy Press) How should we interact with a technology designed to ‘speak’ with us on what appear to be human terms? The question of how best to deal with large language models is most frequently raised in the context of inappropriate personal relationships, but it is increasingly encroaching upon the professional domain, too. Most AI models are designed to format their outputs to seem as naturalistic and human as possible. Text is first-person, informal, and conversational, while synthetic voices are engineered to sound human—to the extent that multiple AI companies have faced lawsuits over whether they have impersonated real people. Some of this is the result of deliberate product design: user feedback shows many users prefer AI systems that are ‘friendly’ in their communications, notoriously to the level of sycophancy, and companies prioritize this in practice even over the accuracy of answers. Some, too, is a natural result of training: the overwhelming majority of text that exists (at least until 3-5 years ago) was written by humans, for humans. AI models simply cannot help but imitate what is in their training data. The result is that interactions with large language models blur the lines between human conversation and the more typical experience of using technology—which seems to be causing confusion even among what should be experienced and sophisticated users. – https://www.techpolicy.press/anthropomorphism-is-breaking-our-ability-to-judge-ai/
LatamGPT Navigates the Gap Between Regional Aspiration and Market Realities
(Ezequiel Rivero – Tech Policy Press) On February 10, Chilean President Gabriel Boric stood before a regional audience to unveil LatamGPT, calling it more than just a technological project. “Some might think creating a language generator from Latin America is a matter for nerds, but it’s not. Here we’re defending our identity and our right to exist,” he declared. The statement encapsulates both the ambition and the underlying anxiety driving Latin America’s first collaborative artificial intelligence model, a project that promises technological sovereignty but faces the stark realities of a consolidated global AI market. LatamGPT represents an unprecedented regional effort to develop a large language model (LLM) trained specifically on Latin American data, contexts, and cultural nuances. Led by Chile’s National Center for Artificial Intelligence (CENIA) and involving 15 countries, over 200 collaborators, and 33 institutional alliances, the project has generated a 70-billion-parameter model based on Meta’s Llama 3.1. Built on more than eight terabytes of information from 2.6 million documents across 20 Latin American countries and Spain, the model aims to address a fundamental gap: Spanish and Portuguese data represent only 2-3% of the training material used in existing AI models. The technical infrastructure behind LatamGPT marks a significant achievement for the region. The University of Tarapacá in Arica, Chile, invested $10 million in a supercomputing center, the first facility of its kind in Latin America capable of training large-scale models domestically. Importantly, LatamGPT is not a chatbot for direct public use but rather an open-source infrastructure that developers, companies, and governments can adapt for specific applications in education, healthcare, public services, and cultural preservation. – https://www.techpolicy.press/latamgpt-navigates-the-gap-between-regional-aspiration-and-market-realities/
Geostrategies
Singapore and South Korea expand AI partnership
(DigWatch) South Korean President Lee Jae Myung used the opening day of his state visit to Singapore to set out plans for deeper cooperation in emerging technologies and renewable energy. He framed the partnership as a chance to build a future-oriented agenda shaped by a shared reliance on human capital rather than natural resources. – https://dig.watch/updates/singapore-and-south-korea-expand-ai-partnership
Security and Surveillance
Middle East crisis prompts UK NCSC warning on potential Iranian cyber activity
(Pierluigi Paganini – Security Affairs) The UK’s National Cyber Security Centre (NCSC) has warned organizations of a potential increase in Iranian cyber threats amid the escalating Middle East conflict. While it sees no immediate shift in the direct threat to Britain, officials stress the situation could change rapidly. The advisory targets companies with operations or supply chains in the region, urging them to remain alert and strengthen defenses. “As a result of the ongoing conflict in the Middle East, there is likely no current significant change in the direct cyber threat from Iran to the UK, however due to the fast-evolving nature of the conflict, this assessment may be subject to change.” reads the advisory published by UK NCSC. “There is almost certainly a heightened risk of indirect cyber threat for those organisations and entities who have a presence, or supply chains, in the Middle East.” – https://securityaffairs.com/188800/apt/middle-east-crisis-prompts-uk-warning-on-potential-iranian-cyber-activity.html
India Is Using AI to Police Identity and Expel Minorities
(Suvradip Maitra – Tech Policy Press) The Indian state of Maharashtra is developing an AI tool that uses “accent, tone and word choices” to identify and deport Bangladeshi Muslims and displaced Rohingyas from Myanmar. The system is intended for use by law enforcement as a preliminary screening mechanism prior to document-based nationality verification. Framed as objective technology, the tool is in fact grounded in linguistic profiling that risks reinforcing xenophobia, prejudice, and racial discrimination. Its deployment raises serious concerns under international human rights law, including the International Convention on the Elimination of All Forms of Racial Discrimination. This initiative must be situated within a broader expansion of AI-driven border policing. Agencies such as US Immigration and Customs Enforcement in the United States have adopted data-driven enforcement systems, and several Indian police departments are increasingly turning to “carceral AI” for policing caste, women and religious minorities. This is also not the first instance of accent and dialect being employed as a way to identify the origin of migrants. As early as 2017, the German Bundesamt für Migration und Flüchtlinge (BAMF, the Federal Office for Migration and Refugees) deployed DIAS, a tool for accent and dialect recognition, following the Syrian refugee crisis, to validate asylum claims. Since then, several EU countries and Turkey have tested the technology but determined that it was not “mature enough” for implementation. – https://www.techpolicy.press/india-is-using-ai-to-police-identity-and-expel-minorities/
Russia-linked APT28 exploited MSHTML zero-day CVE-2026-21513 before patch
(Pierluigi Paganini – Security Affairs) Akamai reports that Russia-linked APT28 may have exploited CVE-2026-21513, a high-severity MSHTML vulnerability (CVSS score of 8.8), before Microsoft patched it in February 2026. The flaw is an Internet Explorer security control bypass that can be triggered when a victim opens a malicious HTML page or LNK file, allowing attackers to bypass protections and potentially execute code. While Microsoft shared few details, it confirmed CVE-2026-21513 was exploited in real-world zero-day attacks and credited MSTIC, MSRC, the Office Security Team, and Google’s GTIG for reporting it. Akamai found a malicious sample uploaded to VirusTotal in January 2026 tied to infrastructure linked to APT28. – https://securityaffairs.com/188782/security/russia-linked-apt28-exploited-mshtml-zero-day-cve-2026-21513-before-patch.html
APT37 combines cloud storage and USB implants to infiltrate air-gapped systems
(Pierluigi Paganini – Security Affairs) North Korean group ScarCruft (aka APT37, Reaper, and Group123) deployed new tools in a campaign dubbed Ruby Jumper, using a backdoor that leverages Zoho WorkDrive for C2 and a USB-based implant to breach air-gapped systems. Zscaler ThreatLabz discovered the campaign in December 2025; the attacks relied on multiple malware families to conduct surveillance and deliver additional payloads. The recent attacks begin with malicious LNK files and deploy multiple newly identified tools, including RESTLEAF and SNAKEDROPPER, to deliver backdoors such as FOOTWINE and BLUELIGHT for surveillance. – https://securityaffairs.com/188767/apt/apt37-combines-cloud-storage-and-usb-implants-to-infiltrate-air-gapped-systems.html
Europol’s Project Compass nets 30 arrests in crackdown on “The Com”
(Pierluigi Paganini – Security Affairs) A yearlong operation, code-named Project Compass, led by Europol has dealt a major blow to ‘The Com,’ a cybercrime network known for targeting children and teenagers. The joint effort, coordinated by Europol’s European Counter Terrorism Centre, brought together law enforcement agencies from 28 countries. “The Com” operates through a scattered online network, using social media, messaging apps, gaming platforms and streaming services to recruit and exploit young people, and its decentralized structure makes it difficult for law enforcement to dismantle. The Com is mostly composed of English-speaking cybercriminals aged 16 to 25. The group has been linked to attacks ranging from crippling British retailers’ IT systems to making bomb threats and coercing teenage girls into self-harm. Its latest alleged victims are premium users of Pornhub, whose data was reportedly hacked by ShinyHunters, an offshoot tied to the broader Com network, which includes Scattered Spider. – https://securityaffairs.com/188708/cyber-crime/europols-project-compass-nets-30-arrests-in-crackdown-on-the-com.html
ClawJacked flaw exposed OpenClaw users to data theft
(Pierluigi Paganini – Security Affairs) A high-severity vulnerability called ClawJacked in OpenClaw allowed malicious websites to brute-force and take control of local AI agent instances. Oasis Security discovered the flaw, which enabled silent data theft. OpenClaw addressed the issue with version 2026.2.26, released on February 26. OpenClaw is an open-source AI agent framework that lets developers run autonomous AI assistants locally. It connects large language models to tools, browsers, and system resources, enabling task automation such as web interaction, data processing, and workflow execution on a user’s machine. – https://securityaffairs.com/188749/hacking/clawjacked-flaw-exposed-openclaw-users-to-data-theft.html
Ukrainian hacker pleads guilty to running OnlyFake AI ID scam site
(Pierluigi Paganini – Security Affairs) Ukrainian man Yurii Nazarenko pleaded guilty to operating OnlyFake, an AI-powered site that generated and sold more than 10,000 counterfeit IDs globally. “United States Attorney for the Southern District of New York, Jay Clayton, and Assistant Director in Charge of the New York Field Office of the Federal Bureau of Investigation (“FBI”), James C. Barnacle, Jr., announced today that Ukrainian national YURII NAZARENKO, a/k/a “Yuriy Nazarenko,” a/k/a “Uriel Septimberus,” a/k/a “Tor Ford,” a/k/a “John Wick,” has been charged and pled guilty for his role in operating the website “OnlyFake,” which sold fake photos of identification documents such as passports and driver’s licenses (“Digital Fake IDs”).” reads the press release published by DoJ. “NAZARENKO pled guilty today to conspiracy to commit fraud in connection with identification documents, authentication features, and information before U.S. District Judge Margaret M. Garnett.” – https://securityaffairs.com/188734/cyber-crime/ukrainian-hacker-pleads-guilty-to-running-onlyfake-ai-id-scam-site.html
Live facial recognition rolled out in Cardiff policing operation
(DigWatch) South Wales Police has deployed live facial recognition technology in Cardiff to help prevent and detect crime. The operation is designed to identify suspects, wanted individuals and high-risk missing persons. The deployment forms part of the force’s broader strategy to integrate advanced technologies into policing across South Wales. Officers will operate in clearly marked vehicles and designated recognition zones during the initiative. – https://dig.watch/updates/live-facial-recognition-rolled-out-in-cardiff-policing-operation
Cybersecurity M&A Roundup: Firms Focus on AI Agents, as Check Point Announces Three Acquisitions
(Danny Palmer – Infosecurity Magazine) February 2026 saw cybersecurity vendors continue to focus heavily on building out their AI-related offerings, as several major players completed or announced plans to acquire start-ups, service providers and other technology companies. There was a raft of acquisitions relating to AI agents, with Check Point, Sophos, Proofpoint and Palo Alto Networks announcing M&A plans in the agentic AI space. The second month of the year follows a steady beginning of 2026 for takeovers and mergers, after a strong 2025. – https://www.infosecurity-magazine.com/news-features/cybersecurity-ma-roundup-feb-26/
Defence and Intelligence
Pentagon–Anthropic brawl demands rethink of AI industry
(David Wroe – ASPI The Strategist) Imagine we found a way to build gods—or demons. Would we want private companies to have sole responsibility and control over the almighty? Imagine the workload on their legal teams. Fine, they’re dramatic questions, but they’re pressing ones after the past week’s blow-up between the Pentagon and artificial intelligence company Anthropic, which ended in severe penalties for the AI lab. This fight was about much more than one company’s right to veto two narrow uses of its models—fully autonomous lethal strike and domestic mass surveillance—by the US military. It’s about who controls technology that will increasingly wield enormous power over human lives, not just in military settings but across every realm. – https://www.aspistrategist.org.au/pentagon-anthropic-brawl-demands-rethink-of-ai-industry/
Frontiers and Markets
AI Agents and the Next Layer of India’s Digital Infrastructure
(Anuradha Sajjanhar – Tech Policy Press) At a gathering of government officials, tech leaders and artificial intelligence researchers during the India AI Summit last month, an MIT professor compressed an entire social theory for the technology’s future use into what was presented as a technical upgrade: that giving every citizen a personal AI agent could serve to decentralize AI. This vision suggests not simply broad consumer access to AI tools but a prevalence of personal proxies that negotiate, coordinate, transact and interface on one’s behalf. Essentially, it pictures agents speaking to agents so that people do not have to. The idea has gained traction in discussions around Doot, a whitepaper envisioning a citizen-owned AI agent built on India’s digital public infrastructure. The professor, Ramesh Raskar, offered an illustrative example of a 70-year-old woman in rural Bihar planning a visit to Kumbh Mela, a mass Hindu pilgrimage whose scale and administrative complexity make it a recurring test case for India’s infrastructural and governing capacities. Her agent, in this context, would organize travel, account for dietary constraints, coordinate accommodation and interact with vendors — provided, crucially, that the surrounding ecosystem was similarly agent-enabled. Vendors, platforms and institutions would also deploy agents, as the system would function most effectively when proxies interacted with one another. – https://www.techpolicy.press/ai-agents-and-the-next-layer-of-indias-digital-infrastructure/
Samsung advances toward AI autonomous factories by 2030
(DigWatch) South Korean electronics corporation Samsung is preparing a major shift to autonomous manufacturing, converting global production sites into AI-driven factories by 2030. The company is moving toward a model in which AI systems understand on-site conditions and make operational decisions independently, rather than relying on fixed automation. – https://dig.watch/updates/samsung-advances-toward-ai-autonomous-factories-by-2030
New all-island AI research alliance formed by Queen’s and UCD
(DigWatch) Queen’s University Belfast and University College Dublin (UCD) have formalised a cross-border partnership focused on artificial intelligence research and talent development. The collaboration will bring together researchers, faculty and students from both institutions to address shared challenges and opportunities in AI, including applications in healthcare, cybersecurity, data analytics and ethical AI governance. – https://dig.watch/updates/new-all-island-ai-research-alliance-formed-by-queens-and-ucd
Japanese bank Mizuho plans major AI shift across administrative operations
(DigWatch) Mizuho Financial Group plans to reduce work equivalent to 5,000 administrative positions over the next decade by introducing AI systems to improve operational efficiency. Around one-third of its 15,000 clerical staff nationwide will see their duties reshaped rather than eliminated. Administrative employees currently manage processes such as document checks and data entry when opening accounts at subsidiary branches. Management expects many of these routine activities to be handled by AI as automation expands across operations. – https://dig.watch/updates/japanese-bank-mizuho-plans-major-ai-shift-across-administrative-operations
AI in healthcare drives strategic transformation in hospital systems
(DigWatch) AI is expanding across healthcare systems in Asia, particularly in diagnostics and hospital operations. Adoption is increasing, but governance frameworks and institutional guidance remain uneven. In South Korea, a survey by the Korea Health Industry Development Institute (KHIDI) found that nearly half of registered doctors have used AI, mainly for medical image interpretation in diagnosis and screening. However, only a small proportion of medical institutions have formal AI guidelines, and limited training and legal uncertainty remain key barriers. – https://dig.watch/updates/ai-in-healthcare-drives-strategic-transformation-in-hospital-systems