Governance
US Delegation Heads to India AI Summit Intent on ‘Domination’
(Merve Hickok, Marc Rotenberg – Tech Policy Press) This week, world leaders, civil society organizations, academic experts, and business executives will gather in New Delhi to discuss the future of artificial intelligence. The 2026 India AI Impact Summit has adopted the theme “Shaping AI for Humanity, Inclusive Growth, and Sustainability,” reflecting a growing global consensus that AI policy must be aligned with human rights, democratic governance, and long-term social welfare. The United States has chosen a different approach. Pursuing its new AI strategy into a period the White House calls “The Great Divergence,” the US delegation is expected to promote the adoption of the “American stack”: American-designed chips, American-controlled networks, and American-developed models and applications. Rather than emphasizing inclusive growth, shared governance frameworks, or international cooperation, the US strategy underscores global “dominance” and technological dependence. – https://www.techpolicy.press/us-delegation-heads-to-india-ai-summit-intent-on-domination/
AI governance is not just top-down in China, research finds
(Patrick Daly – Northeastern Global News) China-watchers arguing that Beijing’s artificial intelligence controls are dependent on its authoritarian government are peddling a “stereotypical narrative,” according to new research. Xuechen Chen, associate professor in politics and international relations at Northeastern University in London, has co-written a paper exploring how traditional Chinese values and commercial interests have also played a part in the introduction of self-regulatory guardrails on AI. The argument is made in a peer-reviewed paper, “State, society, and market: Interpreting the norms and dynamics of China’s AI governance,” published in the Computer Law & Security Review. – https://news.northeastern.edu/2026/02/16/china-ai-governance/
AI chatbots to face strict online safety rules in UK
(Hanna Ziady – CNN) AI chatbot providers, including ChatGPT and Grok, are facing a crackdown on illegal content in the United Kingdom, as the government promises swift action to make the internet safer for children. “Today we are closing loopholes that put children at risk, and laying the groundwork for further action,” UK Prime Minister Keir Starmer said in a statement Monday. Britain’s clampdown comes as artificial intelligence and social media have come under renewed fire for potential harms to young people after the Grok chatbot generated sexualized images of women and children for weeks on X, prompting a major global backlash. – https://edition.cnn.com/2026/02/16/business/uk-ai-chatbots-online-safety-act-intl
Democratising AI: Towards Open, Decentralised AI Ecosystems
(Basu Chandola, Anirban Sarma – Observer Research Foundation) Over the past decade, India has demonstrated what inclusive digital transformation can achieve. From driving digital financial inclusion and powering the world’s largest vaccination programme to enabling secure e-commerce and strengthening direct benefit transfers, the country’s digital public infrastructure (DPI) has shown how technology can serve citizens at scale and create public value. The India Stack, built on open standards, interoperability, and public–private collaboration, has become a global reference point for how digital ecosystems can unlock innovation while expanding access. As India prepares to lead the Artificial Intelligence (AI) revolution, it is committed to building on this legacy. It envisages AI as “a big tool to solve many problems simultaneously”, one that can drive economic growth, strengthen public services, and address social challenges while containing the associated risks. Boosting the accessibility of the technology and ensuring that no single player has a monopoly over it are two critical priorities of the strategy. Similar to the development of DPI, India aims to create a model in which the government invests in platforms, enabling everyone to use the technology to innovate, develop, and deliver products and services in a competitive and collaborative manner. – https://www.orfonline.org/research/democratising-ai-towards-open-decentralised-ai-ecosystems
Geostrategies
Can Southeast Asia extend its AI data centre advantage into Space?
(Karryl Kim Sagun Trajano, Iuna Tsyrulneva – Lowy The Interpreter) Space-based data centres are seen as the next frontier. Technology titans from the United States and China are already moving to pioneer orbital platforms or satellite constellations that provide compute and storage for AI and other data-intensive workloads. The aim is to leverage the unique conditions of low Earth orbit to overcome terrestrial constraints. By drawing on continuous solar energy and the cold vacuum of space for passive radiative cooling, such systems reduce dependence on Earth-based power grids and lower energy and water requirements. They can support climate modelling, scientific simulations, and large-scale analytics. The US has been moving swiftly in this direction. Initiatives like Google’s Project Suncatcher indicate the space-compute frontier is rapidly moving from concept to reality. Meanwhile, Lonestar Data has demonstrated off-planet storage and data exchange on lunar missions and plans cislunar storage satellites. Elon Musk’s SpaceX recently acquired xAI, motivated in part by plans to perform AI computations in space, including building space-based data centres. Yet Southeast Asia remains absent from this first wave of orbital computing initiatives. If the region is to participate meaningfully in this domain in the future, it must act now. – https://www.lowyinstitute.org/the-interpreter/can-southeast-asia-extend-its-ai-data-centre-advantage-space
Security and Surveillance
When National Security Becomes a Shield for Evading AI Accountability
(Ashwin Prabu, Marlena Wisniak – Tech Policy Press) As artificial intelligence becomes embedded in state security and surveillance across Europe, the legal safeguards meant to constrain its use are increasingly being left behind. EU member states are turning to AI to automate decision-making, expand surveillance, and consolidate state power. Yet many of these applications, particularly biometric surveillance and algorithmic risk assessments, remain largely unregulated when it comes to national security. Indeed, broad carve-outs and exemptions for national security in existing AI legislation, including Article 2 of the EU AI Act and Article 3(2) of the CoE Framework Convention on AI and Human Rights, Democracy, and the Rule of Law, have created significant regulatory gaps. Compounding this issue, “national security” itself is so loosely defined that it allows states to bypass fundamental rights while deploying AI with minimal oversight. Against the backdrop of a rapidly shifting geopolitical environment and rising authoritarianism, national security risks becoming a convenient cover for unchecked surveillance and executive authority. This dynamic is setting a dangerous precedent. EU governments and candidate countries are invoking national security to justify AI deployment in ways that evade regulatory scrutiny, particularly in surveillance and counterterrorism. Upholding the jurisprudence of the Court of Justice of the European Union is critical because it provides a legal compass for defining national security and setting clear thresholds for when states can override fundamental rights. Without it, Europe risks building a security architecture powered by AI, but shielded from accountability. – https://www.techpolicy.press/when-national-security-becomes-a-shield-for-evading-ai-accountability/
Dior, Louis Vuitton, Tiffany Fined $25 Million in South Korea After Data Breaches
(Eduard Kovacs – Security Week) South Korea’s Personal Information Protection Commission (PIPC) announced last week that it has issued significant fines to several major luxury brands over a recent hacker attack that resulted in massive data breaches. The fines, totaling 36 billion Korean won ($25 million), were imposed on Louis Vuitton, Dior, and Tiffany, all owned by the Paris-based multinational luxury goods conglomerate LVMH. According to the Korean regulator, Louis Vuitton received a fine of roughly $15 million for cybersecurity failures that involved employee devices getting infected with malware and the information of approximately 3.6 million individuals getting compromised. – https://www.securityweek.com/dior-louis-vuitton-tiffany-fined-25-million-in-south-korea-after-data-breaches/
Crypto Payments to Human Traffickers Surge 85%
(Phil Muncaster – Infosecurity Magazine) Human trafficking operations made hundreds of millions of dollars last year, as cryptocurrency inflows surged 85% annually, according to Chainalysis. The blockchain analytics company argued that its data shows this activity is increasingly linked to the growth of South East Asia scam compounds, online casinos and Chinese-language money laundering (CMLN) networks operating on Telegram. – https://www.infosecurity-magazine.com/news/crypto-payments-human-traffickers/
Odido Breach Impacts Millions of Dutch Telco Users
(Phil Muncaster – Infosecurity Magazine) The largest mobile phone operator in the Netherlands has revealed a major data breach affecting millions of customers. Odido said in a statement late last week that the incident affected a “customer contact system.” Although the firm pointed out that no passwords, call details, or billing data were taken in the attack, for some users, compromised information included names, home and email addresses, IBANs, dates of birth, and passport/driver’s license numbers. – https://www.infosecurity-magazine.com/news/odido-breach-millions-dutch-telco/
North Korean hackers target users of top Ethereum wallet MetaMask
(Linas Kmieliauskas – Cybernews) North Korean criminals are now more aggressive and effective in their attempts to target users of the most popular Ethereum (ETH) wallet, MetaMask, new research has shown, detailing how the attackers operate. Cybersecurity researcher Seongsu Park published a report on the Contagious Interview campaign, allegedly orchestrated by North Koreans and targeting people in the cryptoasset and AI industries. In the Contagious Interview campaign, threat actors attempt to spread malware while conducting fake job interviews. Now, they are using new techniques designed to steal sensitive data and, subsequently, funds from their victims. – https://cybernews.com/crypto/north-korean-hackers-target-metamask/
Malicious npm and PyPI packages linked to Lazarus APT fake recruiter campaign
(Pierluigi Paganini – Security Affairs) ReversingLabs researchers uncovered new malicious packages on npm and PyPI connected to a fake job recruitment campaign attributed to the North Korea-linked Lazarus Group. The campaign uses deceptive hiring themes to trick developers into downloading infected packages, continuing the group’s efforts to target the software supply chain. “The ReversingLabs research team has identified a new branch of a fake recruiter campaign conducted by the North Korean hacking team Lazarus Group,” reads the report published by ReversingLabs. “The campaign, which the team named graphalgo, based on the first package included in this campaign in the npm repository, has been active since the beginning of May 2025.” – https://securityaffairs.com/188009/apt/malicious-npm-and-pypi-packages-llinked-to-lazarus-apt-fake-recruiter-campaign.html
Fintech firm Figure disclosed data breach after employee phishing attack
(Pierluigi Paganini – Security Affairs) Blockchain-based lending firm Figure confirmed a data breach after an employee fell victim to a social engineering attack. According to a company spokesperson, the incident allowed hackers to access and steal a limited number of files. The company disclosed the breach following inquiries and is assessing the impact. – https://securityaffairs.com/187988/data-breach/fintech-firm-figure-disclosed-data-breach-after-employee-phishing-attack.html
Suspected Russian hackers deploy CANFAIL malware against Ukraine
(Pierluigi Paganini – Security Affairs) Google Threat Intelligence Group (GTIG) identified a previously undocumented threat actor behind attacks on Ukrainian organizations using CANFAIL malware. The group is possibly linked to Russian intelligence services and has targeted defense, military, government, and energy entities at both regional and national levels in Ukraine. GTIG researchers observed the threat actor conducting phishing campaigns to deliver the CANFAIL malware. The actor is also interested in aerospace, drone-linked manufacturers, nuclear research, and humanitarian groups tied to Ukraine. Google reported that the APT group has also probed Romanian and Moldovan entities. – https://securityaffairs.com/187976/hacking/suspected-russian-hackers-deploy-canfail-malware-against-ukraine.html
If we can’t name China’s cyberattacks, we lose trust in ourselves
(Justin Bassi – ASPI The Strategist) In the space of just a few days, two big US tech companies took different approaches to China’s cyberattacks. Palo Alto Networks referred only generically to a global cyber espionage operation by unnamed actors, while Google specifically named China as the globe’s leading cyber security threat. That inconsistency hurts everyone but China. A refusal to name and shame China incentivises Beijing to carry on, leaves our public underinformed, and places little pressure on governments to tackle the problem. – https://www.aspistrategist.org.au/if-we-cant-name-chinas-cyberattacks-we-lose-trust-in-ourselves/
China has an AI crisis plan. Australia should, too
(Emily Grundy and Greg Sadler – ASPI The Strategist) China is the only country with a national-level plan for responding to an AI crisis. Australia’s National AI Plan, released in December, commits the country to handling AI incidents within its existing crisis management framework. Without a tailored approach, Australia could default to cyber crisis arrangements that are not well-suited to the specific threats, stakeholders, and response mechanisms relevant to an AI crisis. Good implementation of the National AI Plan requires updating the Australian Government Crisis Management Framework (AGCMF) to explicitly cover these situations and creating an AI crisis plan to handle them. The AI crisis plan would facilitate the National Coordination Mechanism (NCM) by bringing AI companies, experts, and data centre operators to the table to help the government effectively handle the crisis. An AI crisis is one in which a frontier AI system is involved in a threat to public safety or national security. – https://www.aspistrategist.org.au/china-has-an-ai-crisis-plan-australia-should-too/