Daily Digest on AI and Emerging Technologies (6 February 2026)

Governance

AI ‘moving at the speed of light’ warns Guterres, unveiling recommendations for UN expert panel

(UN News) The UN on Wednesday (February 4) announced the list of experts nominated to the General Assembly to serve on a new Independent International Scientific Panel tasked with assessing how AI is transforming lives worldwide. “AI is moving at the speed of light,” said UN Secretary-General António Guterres, underscoring the urgency of regulating the breakthrough technology. “We need shared understandings to build effective guardrails, unlock innovation for the common good, and foster cooperation. The Panel will help the world separate fact from fakes, and science from slop.” The roots of the Panel stretch back to 2023, in the wake of the release of ChatGPT in the United States and other pioneering technologies, heralding a new era in the field of artificial intelligence. Mr. Guterres convened a group of leading technologists and academics and tasked them with advancing recommendations for safe governance. After a series of in-depth discussions, the experts came back with a vision for an approach to AI governance that could benefit humanity. Amongst the ideas was the creation of the International Scientific Panel – independent but supported by the UN. The Panel, says Mr. Guterres, will be the “first global, fully independent scientific body dedicated to helping close the AI knowledge gap and assess the real impacts of AI across economies and societies.” Panellists will exchange ideas, run “deep dives” into priority areas such as health, energy and education, and share the latest leading-edge research. – https://news.un.org/en/story/2026/02/1166891

‘Deepfake abuse is abuse,’ UNICEF warns

(UN News) New evidence reveals a proliferation of sexualised images of youngsters generated by artificial intelligence (AI) and a dearth of laws to stop it, the UN Children’s Fund (UNICEF) warned on Wednesday. “The harm from deepfake abuse is real and urgent,” the UN agency said in a statement. “Children cannot wait for the law to catch up.” At least 1.2 million youngsters have disclosed having had their images manipulated into sexually explicit deepfakes in the past year, according to a study across 11 countries conducted by the UN agency, the international police agency INTERPOL, and ECPAT, the global network working to end the sexual exploitation of children worldwide. In some countries, this represents one in 25 children or the equivalent of one child in a typical classroom, the study found. – https://news.un.org/en/story/2026/02/1166886

Securing Justice for Cyber-Enabled International Crimes

(Harriet Moynihan – Just Security) At the United Nations and elsewhere, States have been discussing how international law applies in cyberspace for over twenty years. But these discussions have largely overlooked the applicability of international criminal law to cyber operations. In December 2025, the International Criminal Court (ICC) helped to plug that gap by publishing a policy on Cyber Enabled Crimes under the Rome Statute. At a time when cyberattacks are intensifying in scale, effects, and gravity worldwide, the policy demonstrates that the Rome Statute is technology-neutral and capable of applying to cyber activity just as it does to any other conduct. A new Chatham House research paper on Securing Justice for Cyber-Enabled International Crimes builds on the policy by considering its broader relevance to national courts as well as the ICC, and offers recommendations for strengthening accountability in this area. – https://www.justsecurity.org/129752/justice-cyber-international-crimes/

AI arms race approaches IPO reckoning

(Zachary Basu, Madison Mills – Axios) A trio of AI titans is barreling toward watershed IPOs this year, posing the ultimate test of whether their lofty ambitions, breakneck spending and founder feuds can survive life under public scrutiny. Why it matters: Between OpenAI, Anthropic and Elon Musk’s xAI, trillions of dollars in potential value — and enormous influence over the world’s most powerful technology — are coming up for judgment in 2026. The big picture: The AI race has become so expensive and competitive that even its most powerful players are being pushed toward public markets — each with a unique strategy and set of risks that they will now have to lay bare for the world to scrutinize. – https://www.axios.com/2026/02/05/openai-ipo-anthropic-xai-elon-musk

Why community benefit agreements are necessary for data centers

(Nicol Turner Lee and Darrell M. West – Brookings) Data centers are both controversial and critical to the artificial intelligence technologies undergirding the digital economy. Data centers house thousands of file servers and networking equipment to enable e-commerce, data analytics, health care, and other functions of a connected society. Without abundant data centers, the digital revolution could stall, restricting access to the benefits of digital technologies for individuals, communities, governments, and businesses. Despite the crucial role of data centers in the emerging economy, protests have arisen throughout the country over financial, energy, and environmental concerns. Plans to build data centers have been stymied in Ohio, Georgia, Virginia, Arizona, and Indiana over worries about increases in electric bills, noise and light pollution, and the possible dangers of AI itself. Public opinion surveys reveal a “techlash” in which worries about AI’s impact on jobs, privacy, security, and human safety heighten concerns about wealth accumulation and corporate power in the tech sector, forming a populist backlash. Voter worries have roiled recent elections in New Jersey, Virginia, and Georgia, where opposition to data centers figured in campaign promises. Left unchecked, these community concerns could slow the rapid construction of data centers, weaken AI growth, and curb AI revenue streams, all of which would limit the AI benefits promised by tech firms and government officials. In this report, we explain how AI companies can work more closely with local leaders to establish viable community benefit agreements (CBAs) to address public concerns. These agreements should be legally binding and developed collaboratively with host communities to demonstrate reciprocity between the developers of data centers and the communities in which they are housed. In particular, communities need to know the cost of data centers, understand who pays, examine local benefits and risks, and have back-up plans for the long-term evolution of AI. A community benefit agreement for data centers should include quantifiable data on the job opportunities, tax revenue, workforce training programs, health and well-being contributions, and other benefits of proposed facilities. Combined with metric tracking and rigorous evaluation programs, CBAs can mitigate community concerns and help community leaders and residents achieve a better understanding of how data centers function and affect local areas. They can also ensure that the process involves mutual respect among all parties. – https://www.brookings.edu/articles/why-community-benefit-agreements-are-necessary-for-data-centers/

EU tests Matrix protocol as sovereign alternative for internal communication

(DigWatch) The European Commission is testing a European open source system for its internal communications as worries grow in Brussels over deep dependence on US software. A spokesperson said the administration is preparing a solution built on the Matrix protocol instead of relying solely on Microsoft Teams. – https://dig.watch/updates/eu-tests-matrix-protocol-as-sovereign-alternative-for-internal-communication
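The article gives no implementation details, but part of Matrix’s appeal as a sovereignty play is that it is an open, HTTP-based protocol that any vendor, including European ones, can implement and self-host. As a rough illustration only – the homeserver URL, user, and room ID below are hypothetical – this is roughly what sending a message through Matrix’s published client-server API looks like:

```python
# Minimal sketch of Matrix's open client-server API (v3 endpoints).
# The homeserver, user, password, and room ID are hypothetical placeholders.
import uuid
import requests

HOMESERVER = "https://matrix.example.eu"  # hypothetical self-hosted homeserver

def login(user: str, password: str) -> str:
    """Authenticate with the m.login.password flow and return an access token."""
    resp = requests.post(
        f"{HOMESERVER}/_matrix/client/v3/login",
        json={
            "type": "m.login.password",
            "identifier": {"type": "m.id.user", "user": user},
            "password": password,
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def send_text(token: str, room_id: str, text: str) -> str:
    """Send a plain-text message; the transaction ID makes retries idempotent."""
    txn_id = uuid.uuid4().hex
    resp = requests.put(
        f"{HOMESERVER}/_matrix/client/v3/rooms/{room_id}/send/m.room.message/{txn_id}",
        headers={"Authorization": f"Bearer {token}"},
        json={"msgtype": "m.text", "body": text},
    )
    resp.raise_for_status()
    return resp.json()["event_id"]

if __name__ == "__main__":
    token = login("alice", "example-password")
    send_text(token, "!abc123:matrix.example.eu", "Hello from a sovereign stack")
```

Because the API is openly specified, an administration could in principle change hosting providers or client software without being locked to a single vendor, which is the dependence concern the Commission is testing against.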

There is no AI bubble, Alibaba Group chairman says

(Sarmad Khan – The National) There is no AI bubble and the “astounding” surge in capital spending is being driven by anticipated future computing demands, the chairman of Alibaba Group has said. “If you look at the right now, [there is] massive amount of capex investment that all the hyper-scalers, all the model companies, are making,” Joseph Tsai said in conversation with UAE’s Omar Al Olama, Minister of State for AI, Digital Economy and Remote Work Applications, on day two of the World Governments Summit in Dubai. The latest quarterly reports show companies are doubling their spending, from between $60 billion and $80 billion per company last year to $120 billion-$150 billion now, Mr Tsai said. – https://www.thenationalnews.com/future/technology/2026/02/04/there-is-no-ai-bubble-alibaba-group-chairman-says/

ICE wants to “research” a data-driven market for “investigations activities”

(Konstancija Gasaitytė – Cybernews) Immigration and Customs Enforcement (ICE) has been eyeing tools used by the advertising technology (ad tech) industry to aid its investigations. Such tools are known to provide location data and support large-scale analytics. ICE’s Homeland Security Investigations (HSI) issued a Request for Information (RFI) on January 23rd, 2026. The document states that “this RFI is solely for market research, planning, and information gathering purposes and is not to be construed as a commitment by the Government to issue a subsequent solicitation.” This means that the agency isn’t issuing a solicitation or looking for companies to contract with, but is seeking information and feedback about the market. – https://cybernews.com/privacy/ice-ad-data-market-research/

Security and Surveillance

Russian crypto criminals caught behind Solana and TON draining campaigns

(Linas Kmieliauskas – Cybernews) Security researchers have identified another Russia-linked crypto crime organization that is said to be behind more than $10 million worth of cryptoasset thefts. After monitoring the Rublevka Team organization since August 2025, researchers at Recorded Future’s Insikt Group found that this crypto-focused cybercrime-as-a-service group, operational since 2023, contributed to at least 240,000 cryptoasset wallet drains, worth up to $20,000 per transaction. According to Insikt, the criminal group is an example of a “traffer team”: a network of thousands of social engineering specialists tasked with directing victim traffic to malicious pages. Initially, these criminals targeted the TON blockchain ecosystem, supported by the company behind the Telegram messenger, before moving on to the Solana (SOL) blockchain in the spring of 2025. This ongoing campaign has produced the biggest losses, with Solana ecosystem users losing around $8.2 million. The researchers found that, after tricking a victim into connecting their cryptoasset wallet to a fraudulent website, threat actors prompt them to approve a crypto transaction that drains all funds from the wallet. – https://cybernews.com/cybercrime/russian-crypto-criminals-behind-solana-ton-draining-campaigns/
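The drain pattern described above is, in principle, detectable before a victim signs anything. The following is a simplified, hypothetical sketch – the transaction fields, threshold, and function names are our illustration, not anything from the Insikt Group report – of the kind of wallet-side heuristic that could flag a transfer moving nearly an entire balance to a previously unseen address:

```python
# Illustrative wallet-side heuristic for flagging drainer-style transactions.
# Field names and the 0.9 threshold are assumptions for this sketch, not
# details from the Insikt Group report.
from dataclasses import dataclass

@dataclass
class ProposedTransfer:
    to_address: str
    amount: float          # amount leaving the wallet, in the chain's native unit
    wallet_balance: float  # current spendable balance

def looks_like_drain(tx: ProposedTransfer, known_addresses: set[str],
                     balance_fraction: float = 0.9) -> bool:
    """Flag transfers that move almost the entire balance to an unknown address."""
    if tx.wallet_balance <= 0:
        return False
    sends_most_of_balance = tx.amount / tx.wallet_balance >= balance_fraction
    unfamiliar_recipient = tx.to_address not in known_addresses
    return sends_most_of_balance and unfamiliar_recipient

# Example: a dApp-initiated transfer of 49.5 of 50 SOL to a new address
tx = ProposedTransfer(to_address="UnknownAddr1", amount=49.5, wallet_balance=50.0)
if looks_like_drain(tx, known_addresses={"MyColdWallet1"}):
    print("Warning: this transaction would empty the wallet; review before signing.")
```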

Malicious Commands in GitHub Codespaces Enable RCE

(Alessandro Mascellino – Infosecurity Magazine) A set of attack vectors in GitHub Codespaces has been uncovered that enables remote code execution (RCE) when a user opens a malicious repository or pull request. The findings by Orca Security show how default behaviours in the cloud-based development service can be abused to execute code, steal credentials and access sensitive resources without explicit user approval. GitHub Codespaces provides developers with a cloud-hosted Visual Studio Code (VSC) environment that spins up in minutes. It automatically applies repository-defined configuration files to streamline development and collaboration. That convenience, however, also creates an attack surface when those files are controlled by an adversary. – https://www.infosecurity-magazine.com/news/malicious-commands-in-github/
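Orca’s specific vectors are detailed in its report, but the general risk class is easy to illustrate: devcontainer configuration files can declare lifecycle commands that the environment runs automatically. As a minimal sketch – the hook names come from the published devcontainer specification, while the scanning logic is our own illustration, not Orca’s tooling – a script like this could surface auto-run commands before an untrusted repository is opened:

```python
# Sketch: list lifecycle commands declared in a repository's devcontainer
# config before opening it in a cloud dev environment. The hook names come
# from the devcontainer specification; the rest is illustrative.
import json
import sys
from pathlib import Path

# Each of these hooks can run arbitrary shell commands automatically when
# an environment is created, started, or attached to.
LIFECYCLE_HOOKS = (
    "initializeCommand", "onCreateCommand", "updateContentCommand",
    "postCreateCommand", "postStartCommand", "postAttachCommand",
)

def audit_devcontainer(repo_root: str) -> None:
    """Print any auto-run commands declared in devcontainer config files."""
    root = Path(repo_root)
    candidates = list(root.glob(".devcontainer/**/devcontainer.json"))
    candidates += list(root.glob(".devcontainer.json"))
    for config in candidates:
        try:
            data = json.loads(config.read_text())
        except json.JSONDecodeError:
            # Real configs may be JSONC (comments, trailing commas); a strict
            # parser failing is itself worth a manual look.
            print(f"{config}: could not parse; inspect manually")
            continue
        for hook in LIFECYCLE_HOOKS:
            if hook in data:
                print(f"{config}: {hook} -> {data[hook]!r}")

if __name__ == "__main__":
    audit_devcontainer(sys.argv[1] if len(sys.argv) > 1 else ".")
```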

Smartphones Now Involved in Nearly Every Police Investigation

(Phil Muncaster – Infosecurity Magazine) Digital evidence, especially that extracted from smartphones, is now key to nearly all police investigations, a new report from Cellebrite has confirmed. The Israeli forensics company compiled its 2026 Industry Trends Report based on interviews with 1,200 law enforcement practitioners in 63 countries. It found that the vast majority (95%) now agree that digital evidence is key to solving cases, up from 74% two years ago. In fact, nearly all (97%) respondents noted that the public expects it to be used in almost all cases. – https://www.infosecurity-magazine.com/news/smartphones-involved-every-police/

Australia said to grant US access to Australians’ biometric data

(Anthony Kimery – Biometric Update) Following biometric data sharing agreements between the US, Chile and Ecuador, the Australian government reportedly has also agreed to provide the Trump administration and U.S. agencies such as Immigration and Customs Enforcement (ICE) with direct access to Australians’ biometric information and identity documents. According to reporting, the alleged arrangement is part of negotiations tied to the US Visa Waiver Program (VWP) and the Enhanced Border Security Partnership (EBSP), which requires participating countries to expand data sharing with U.S. authorities. Under the agreement, U.S. agencies could potentially access a wide range of sensitive personal information, including names, alias spellings, dates of birth, passport and other identity document numbers, and biometric identifiers such as facial images and fingerprints, as well as criminal and immigration records. – https://www.biometricupdate.com/202602/australia-said-to-grant-us-access-to-australians-biometric-data

New Hacking Campaign Exploits Microsoft Windows WinRAR Vulnerability

(Danny Palmer – Infosecurity Magazine) A hacking campaign took just days to exploit a newly disclosed security vulnerability in the Microsoft Windows version of WinRAR, researchers at Check Point have said. The attackers leveraged CVE-2025-8088, a path traversal vulnerability in the widely used file archiving and compression software WinRAR, which was first disclosed in August 2025. Check Point’s analysis of the campaign suggested that attackers were actively exploiting the vulnerability within days of its disclosure. – https://www.infosecurity-magazine.com/news/hacking-exploits-windows-winrar/
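CVE-2025-8088 is specific to WinRAR’s handling of RAR archives, but the underlying bug class – archive entries whose paths escape the intended extraction directory – is generic. As an illustration of the standard defence (shown for ZIP archives, since Python’s standard library has no RAR support; this is not Check Point’s tooling), extraction code can resolve each entry’s destination and reject anything that lands outside the target directory:

```python
# Illustrative check for path-traversal entries in an archive before
# extraction. Shown for ZIP (Python's stdlib has no RAR support); the same
# principle applies to the RAR traversal in CVE-2025-8088.
import zipfile
from pathlib import Path

def safe_entries(archive_path: str, dest_dir: str) -> list[str]:
    """Return archive entries whose paths resolve inside dest_dir."""
    dest = Path(dest_dir).resolve()
    safe = []
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            target = (dest / name).resolve()
            # An entry like "../../Startup/evil.exe" resolves outside dest
            # and is rejected rather than extracted.
            if target == dest or dest in target.parents:
                safe.append(name)
            else:
                print(f"Blocked traversal entry: {name!r}")
    return safe
```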

AI-Enabled Voice and Virtual Meeting Fraud Surges 1000%+

(Phil Muncaster – Infosecurity Magazine) Fraudsters significantly ramped up their use of AI to enhance campaigns across voice and virtual meeting channels last year, boosting speed and volume, according to Pindrop. The voice authentication and deepfake detection specialist said its new report, Inside the 2025 AI Fraud Spike, is based on its own data collected between January and December 2025. The firm pointed to a 1210% increase in AI-enabled fraud during this time, versus a 195% surge in traditional fraud. – https://www.infosecurity-magazine.com/news/ai-voice-virtual-meeting-fraud/

Frontiers and Markets

When context is everything, AI models still struggle in the real world: Tencent

(Vincent Chow – SCMP) Leading US and Chinese artificial intelligence models are frustrating to use in real-world settings because they struggle to learn from context, Tencent Holdings said in a new technical paper – the first co-authored by Yao Shunyu since he took up the role of chief AI scientist at the firm. AI developers need to place “context learning” at the centre of future model design if their products are to become genuinely useful outside controlled environments, according to researchers from Tencent and Fudan University’s Institute of Trustworthy Embodied AI. “Models often fail in subtle but consequential ways,” the researchers wrote in a paper published on Tuesday. “Until [context learning] improves, [models] will remain brittle precisely in the settings where we most want them to help: messy, dynamic, real-world environments.” – https://www.scmp.com/tech/big-tech/article/3342386/when-context-everything-ai-models-still-struggle-real-world-tencent

Contextual errors limit real-world performance of medical AI

(News Medical Life Sciences) Medical artificial intelligence is a hugely appealing concept. In theory, models can analyze vast amounts of information, recognize subtle patterns in data, and are never too tired or busy to provide a response. However, although thousands of these models have been and continue to be developed in academia and industry, very few have successfully transitioned into real-world clinical settings. Marinka Zitnik, associate professor of biomedical informatics in the Blavatnik Institute at Harvard Medical School, and colleagues are exploring why – and how to close the gap between how well medical AI models perform on standardized test cases and how often the same models run into issues when they’re deployed in places like hospitals and doctors’ offices. In a paper published Feb. 3 in Nature Medicine, the researchers identify a major contributor to this gap: contextual errors. – https://www.news-medical.net/news/20260203/Contextual-errors-limit-real-world-performance-of-medical-AI.aspx

Carbon Robotics built an AI model that detects and identifies plants

(Rebecca Szkutak – TechCrunch) What is and isn’t a weed that needs to be eliminated in the field is determined by the eyes of the farmer — and now, increasingly, by a new AI model from Carbon Robotics. Seattle-based Carbon Robotics, which builds the LaserWeeder — a robot fleet that uses lasers to kill weeds — announced a new AI model, the Large Plant Model (LPM), on Monday. This model recognizes plant species instantly and allows farmers to target new weeds without needing to retrain the robots. – https://techcrunch.com/2026/02/02/carbon-robotics-built-an-ai-model-that-detects-and-identifies-plants/