Governance and Regulation
Can workers compete with machines and stay relevant in the AI era?
(UN News) AI looks set to be transformative for us all, but it also brings a real risk of job losses and widening social and economic divides. UN experts are focusing on how to manage that transition, to ensure the benefits of the technology outweigh the threats. Whether you are a “doomer” or a “boomer” on the subject, it’s impossible to ignore AI, which is seeping into every corner of our personal and professional lives. The UN has been banging the drum for a “people-first” approach to the subject for years now. UN Secretary-General António Guterres warned the Security Council back in 2024 that the fate of humanity “must never be left to the ‘black box’ of an algorithm,” and that people must always retain oversight and control over AI decision-making to ensure that human rights are upheld. Since then, the UN System has been consolidating work on the ethical global governance of AI, building on the guidelines and recommendations contained in the landmark Global Digital Compact. – https://news.un.org/en/story/2026/01/1166847
From deepfakes to grooming: UN warns of escalating AI threats to children
(UN News) The staggering amount of harmful AI-generated online content has prompted an urgent call from across the UN system for a raft of measures to protect children from abuse, exploitation and mental trauma. Cosmas Zavazava, Director of the Telecommunication Development Bureau at the International Telecommunication Union (ITU) – one of the key agencies that drafted the statement, which includes guidelines and recommendations – catalogues a dizzying array of ways that children are targeted, from grooming and deepfakes to the embedding of harmful features, cyberbullying and inappropriate content: “We saw that, during the COVID-19 pandemic, many children, particularly girls and young women, were abused online and, in many cases, that translated to physical harm,” he says. – https://news.un.org/en/story/2026/01/1166827
New Dutch government to push for EU social media ban for under-15s
(Pieter Haeck – Politico) The three parties that have formed the new Dutch minority government have pitched raising the European minimum age for social media to 15, according to coalition plans unveiled on Friday. With the move, the Netherlands is the latest country to push for a de facto social media ban at 15, following France’s example. The three Dutch parties — the centrist D66, the Christian Democrat CDA and the liberal VVD — will still need to seek support for their proposals, as they hold only 66 of 150 seats in the Dutch parliament. The parties want an “enforceable European minimum age of 15 for social media, with privacy-friendly age verification for young people, as long as social media are not sufficiently safe,” they write in the plans. The current EU minimum age stands at 13. – https://www.politico.eu/article/d66-cda-vvd-dutch-government-aims-to-keep-under-15s-off-social-media/
Why the Gulf States Share in the AI Governance Dilemma
(Nayef Al-Nabet – Middle East Council on Global Affairs) When regulators began discussing adjustments to the European Union’s Artificial Intelligence Act of 2024, mere months after its passage, it was a clear sign of the regulatory challenges in this sector. The comprehensive legislation was the first of its kind, and even seasoned regulators like the EU were struggling amid political and corporate pressure, unable to set realistic and effective measures to guide rapid advances in AI technology. For observers in the Gulf, these challenges were a warning sign of the urgent need to balance innovation with risk management. The AI industry has already become a key part of innovation and economic diversification plans in the Gulf, with countries such as Saudi Arabia, the United Arab Emirates, and Qatar investing heavily in AI infrastructure and capabilities, while positioning themselves to be important nodes in the international AI network. Yet while the burgeoning AI sector is essentially a competitive race, with no one wanting to hamstring themselves, no one is exactly sure where the race is leading either, or what will happen—to their economies, labor markets, societies, etc.—along the way. Accordingly, regulations need to be flexible enough to enable growth and provide opportunities for pursuing competitive advantage. However, uncertainty about the technology’s trajectory, market pressures and unresolved debates about which aspects of AI need to be tamed make regulatory clarity unlikely. This difficulty is not accidental—it stems from the structural inability of existing political and economic systems to respond quickly and efficiently to rapidly evolving technologies. Therefore, governance will remain contested and fragile as long as the forces driving development do not align with accountability needs. – https://mecouncil.org/blog_posts/why-the-gulf-states-share-in-the-ai-governance-dilemma/
IMF chief sounds alarm at Davos 2026 over AI and disruption to entry-level labour
(DigWatch) AI has dominated discussions at the World Economic Forum in Davos, where IMF managing director Kristalina Georgieva warned that labour markets are already undergoing rapid structural disruption. According to Georgieva, demand for skills is shifting unevenly, with productivity gains benefiting some workers while younger people and first-time job seekers face shrinking opportunities. – https://dig.watch/updates/imf-chief-sounds-alarm-at-davos-2026-over-ai
Universal basic income could be used to soften hit from AI job losses in UK, minister says
(Lauren Almeida – The Guardian) The UK could introduce a universal basic income (UBI) to protect workers in industries that are being disrupted by AI, the investment minister Jason Stockwood has said. “Bumpy” changes to society caused by the introduction of the technology would mean there would have to be “some sort of concessionary arrangement with jobs that go immediately”, Lord Stockwood said. The Labour peer told the Financial Times: “Undoubtedly we’re going to have to think really carefully about how we soft-land those industries that go away, so some sort of [universal basic income], some sort of lifelong mechanism as well so people can retrain.” – https://www.theguardian.com/technology/2026/jan/29/universal-basic-income-used-cover-ai-job-losses-minister-says
Google revamps AI playbook for mayors
(Madison Mills – Axios) Google is rolling out an updated “Mayors AI Playbook” with the U.S. Conference of Mayors at the group’s Winter Meeting in Washington today, the company first told Axios. Why it matters: Cities are spending more on technology, but many lack the expertise to deploy AI safely and at scale. Whoever helps them cross that gap could lock in years of government contracts. The big picture: Google’s first AI playbook for mayors was about awareness. Now, it’s about action: a blueprint for implementing AI strategies at the local level. What they’re saying: “[T]he most important step you can take now is to just start. Don’t wait for the perfect moment, the opportunity is now,” Tom Cochran, USCM CEO and executive director, said in a statement. – https://www.axios.com/2026/01/28/google-mayors-ai-playbook-cities
WhatsApp faces new user protection obligations under the EU’s toughest digital rules
(Euronews) WhatsApp has been officially classified as a “Very Large Online Platform” (VLOP) under the EU’s Digital Services Act (DSA), a designation that subjects the platform to the bloc’s strictest obligations to protect its users. The move means WhatsApp will be required to actively prevent the spread of disinformation and the manipulation of public opinion, while also safeguarding users’ mental health, particularly that of younger audiences. – https://www.euronews.com/next/2026/01/26/whatsapp-faces-new-user-protection-obligations-under-the-eus-toughest-digital-rules
UK government makes bold move with AI tutoring trials for 450,000 pupils
(DigWatch) The government plans to trial AI tutoring tools in secondary schools, with nationwide availability targeted for the end of 2027. The tools will be developed through a government-led tender, bringing together teachers, AI labs, and technology companies to co-create solutions aligned with classroom needs. The initiative aims to provide personalised, one-to-one-style learning support, adapting to individual pupils’ needs and helping them catch up where they struggle. A central objective is to reduce educational inequality, with up to 450,000 disadvantaged pupils in years 9–11 potentially benefiting each year, particularly those eligible for free school meals. – https://dig.watch/updates/uk-government-makes-bold-move-with-ai-tutoring-trials-for-450000-pupils
Science fiction writers, Comic-Con say goodbye to AI
(Anthony Ha – TechCrunch) In recent months, some of the major players in science fiction and popular culture have been taking firmer stances against generative AI. Separate decisions by San Diego Comic-Con and the Science Fiction and Fantasy Writers Association (SFWA) illustrate the depth of AI opposition within some creative communities — though they’re certainly not the only ones, with music distribution platform Bandcamp also recently banning generative AI. – https://techcrunch.com/2026/01/25/science-fiction-writers-comic-con-say-goodbye-to-ai/
Is Amazon cutting jobs to replace humans with AI? Here’s what experts say
(Andrew Johnson – BNN Bloomberg) Amazon says its latest round of job cuts affecting 16,000 corporate positions worldwide is about streamlining its business, not replacing human workers with artificial intelligence. Experts say they’re not surprised. The layoffs, announced Wednesday, mark the second major workforce reduction at the company in three months. While Amazon CEO Andy Jassy has openly discussed his expectation that generative AI will reduce the company’s corporate workforce in the future, Amazon says the current cuts are aimed at reducing layers of management and bureaucracy following years of rapid expansion. – https://www.bnnbloomberg.ca/business/company-news/2026/01/29/is-amazon-cutting-jobs-to-replace-humans-with-ai-heres-why-experts-say-no/
Data centre boom drives surge in legal services in India
(DigWatch) India’s data centre expansion, fuelled by investment in AI-ready infrastructure and cloud capacity, is creating strong demand for legal services, with law firms increasingly advising on land acquisition, regulatory approvals, financing and long-term compliance for large projects. – https://dig.watch/updates/data-centre-boom-drives-surge-in-legal-services-in-india
Legislation
France’s National Assembly backs under-15 social media ban
(DigWatch) France’s National Assembly has backed a bill that would bar children under 15 from accessing social media, citing rising concern over cyberbullying and mental-health harms. MPs approved the text late Monday by 116 votes to 23, sending it next to the Senate before it returns to the lower house for a final vote. As drafted, the proposal would cover both standalone social networks and ‘social networking’ features embedded inside wider platforms, and it would rely on age checks that comply with the EU rules. The same package also extends France’s existing smartphone restrictions in schools to include high schools, and lawmakers have discussed additional guardrails, such as limits on practices deemed harmful to minors (including advertising and recommendation systems). – https://dig.watch/updates/frances-national-assembly-backs-under-15-social-media-ban
Georgia leads push to ban datacenters used to power America’s AI boom
(Timothy Pratt – The Guardian) Lawmakers in several states are exploring laws that would impose statewide bans on building new datacenters, as the power-hungry facilities have moved to the center of economic and environmental concerns in the US. In Georgia, a state lawmaker has introduced a bill proposing what could become the first statewide moratorium on new datacenters in America. The bill is one of at least three statewide datacenter moratoriums introduced in state legislatures in the last week, with Maryland and Oklahoma lawmakers also considering similar measures. But it is Georgia that is quickly becoming ground zero in the fight against the untrammelled growth of datacenters – which are notorious for using huge amounts of energy and water – as they power the emerging industry of artificial intelligence. – https://www.theguardian.com/technology/2026/jan/26/georgia-datacenters-ai-ban
Geostrategies
The Trump Administration’s Cyber Strategy Fundamentally Misunderstands China’s Threat
(Matthew Ferren – Council on Foreign Relations) Against a steady drumbeat of ransomware attacks, data breaches, and sophisticated intrusions, President Donald Trump’s administration is preparing to release a new national cybersecurity strategy this month centered on offensive cyber operations. Senior officials have repeatedly emphasized hitting back at the hackers and nation-states who have compromised U.S. networks with seeming impunity. If early signals are any indication, the strategy will treat offense as the primary solution to the United States’ cybersecurity challenges. Meanwhile, the administration has weakened the foundations of U.S. cyber defenses. The Cybersecurity and Infrastructure Security Agency (CISA) has seen its budget reduced and staffing slashed, and the agency still lacks a Senate-confirmed director. Similar cuts have affected cyber defense offices across federal agencies, and the administration is rolling back cybersecurity requirements for critical infrastructure operators. This combination—more offense, less defense—reflects a seductive logic: why play defense when you can take the fight to the enemy? But against China, now the most active and persistent cyber threat to U.S. networks, an offense-first strategy is a dangerous miscalculation. Cyber operations cannot stop or even substantially diminish Beijing’s campaigns. Doubling down on offense while neglecting defense will leave the United States more vulnerable, not less. – https://www.cfr.org/articles/the-trump-administrations-cyber-strategy-fundamentally-misunderstands-chinas-threat
France proposes EU tools to map foreign tech dependence
(DigWatch) France has unveiled a new push to reduce Europe’s dependence on US and Chinese technology suppliers, placing digital sovereignty back at the centre of EU policy debates. Speaking in Paris, France’s minister for AI and digital affairs, Anne Le Hénanff, presented initiatives to expose and address the structural reliance on non-EU technologies across public administrations and private companies. – https://dig.watch/updates/france-proposes-eu-tools-to-map-foreign-tech-dependence
Europe’s digital reliance on US Big Tech: Does the EU have a plan?
(Diya Gupta – France 24) In the digital era, almost every part of life – from communication to healthcare infrastructure and banking – functions within an intricate digital framework, led by a handful of companies operating mainly out of the United States. If the framework collapses, so do many of the essential services that allow society to function. Transatlantic tensions have been steadily rising during US President Donald Trump’s chaotic first year back in the White House. Trump’s repeated demands for the Danish autonomous territory of Greenland and tariff threats have driven the EU to reassess its relationship with its long-time ally, who may not be as dependable as was previously thought. US cooperation with Europe isn’t just key for trade and diplomacy, it’s also essential to maintain a robust technological and digital frontier. The bulk of European data is stored on US cloud services. Companies like Amazon, Microsoft and Google own over two-thirds of the European market, while US-based AI pioneers like OpenAI and Anthropic are leading the artificial intelligence boom. According to a European Parliament report, the EU “relies on non-EU countries for over 80 percent of digital products, services, infrastructure, and intellectual property”. – https://www.france24.com/en/europe/20260124-europe-s-digital-reliance-on-us-big-tech-does-the-eu-have-a-plan
Stargate UAE data centre to cost more than $30bn, AI minister says
(Alvin R Cabral – The National) The Stargate UAE data centre project will cost more than $30 billion to build and be at the centre of plans to grow artificial intelligence alliances around the world, Omar Al Olama, Minister of State for AI, Digital Economy and Remote Work Applications, has said. The development of the 5-gigawatt Stargate UAE – backed by some of the world’s biggest technology companies – proves the Emirates’ ambitions to be at the forefront of the AI revolution, he said at the Machines Can Think summit in Abu Dhabi on Monday. Stargate is “the most famous piece of evidence … to not just ensure that we’re able to build international co-operation when it comes to AI infrastructure, but also to build something that no one has the audacity to dream of”, Mr Al Olama said. – https://www.thenationalnews.com/future/technology/2026/01/26/stargate-uae-data-centre-to-cost-more-than-30bn-ai-minister-says/
Indonesia – Investment Minister Sees Blended Finance as Key to AI Investment
(Martin Bagya Kertiyasa – Jakarta Globe) Investment and Downstreaming Minister Rosan Roeslani is pushing for innovative financing schemes, including blended finance, to meet the massive capital needs of Indonesia’s high-tech sector. Rosan, who also serves as CEO of Danantara, said the rapid global expansion of artificial intelligence and digital infrastructure means Indonesia must keep innovating to avoid falling behind. In a statement received in Jakarta on Friday, he said the surge in computing demand driven by AI development should be seen as both a strategic opportunity for national economic growth and a challenge that requires sound financing governance and policy design. – https://jakartaglobe.id/business/investment-minister-sees-blended-finance-as-key-to-ai-investment
Security and Surveillance
Labyrinth Chollima Evolves into Three North Korean Hacking Groups
(Kevin Poireault – Infosecurity Magazine) One of the most prolific North Korean-linked cyber threat groups, Labyrinth Chollima, has recently split into three distinct hacking groups, according to CrowdStrike. In a blog post published on January 29, the cybersecurity giant said the three groups will now be tracked as Labyrinth Chollima, Golden Chollima and Pressure Chollima. The firm assessed “with high confidence” that while Labyrinth Chollima continues to focus on cyber espionage, targeting industrial, logistics and defense companies, the other groups have shifted towards targeting cryptocurrency entities. – https://www.infosecurity-magazine.com/news/labyrinth-chollima-dprk-three/
New AI-Developed Malware Campaign Targets Iranian Protests
(Kevin Poireault – Infosecurity Magazine) A new malicious campaign is spreading malware targeting people in Iran, likely including non-governmental organizations and individuals involved in documenting recent human rights abuses during the protest wave in the country. The campaign, discovered by the cyber threat research team at French cybersecurity firm HarfangLab, was first observed in early January 2026. HarfangLab obtained malicious samples on January 23 and shared a malware analysis on January 29. – https://www.infosecurity-magazine.com/news/ai-malware-redkitten-iranian/
Precision Becomes the New Playbook for Software Supply Chain Attacks
(Keith McCammon – Infosecurity Magazine) Software supply chain attacks have become one of the most difficult risks for security leaders to anticipate. Recent incidents have shown how quickly trust can be eroded when a single software component used by thousands of organizations is compromised. However, the next wave of attacks will not be focused on volume. It will be about precision. Adversaries are shifting from broad, opportunistic campaigns to targeted, long-term strategies that take advantage of the way modern software is built and shared. As businesses grow more dependent on interconnected tools and open-source software components, it’s never been more important to understand this shift – and prepare for it. – https://www.infosecurity-magazine.com/opinions/precision-playbook-software-supply/
Google Disrupts Extensive Residential Proxy Networks
(Alessandro Mascellino – Infosecurity Magazine) Google and several industry partners have taken coordinated action to disrupt what is believed to be one of the largest residential proxy networks globally, known as IPIDEA. The network operates largely out of public view but has become a key enabler for cybercrime, espionage and information operations. Residential proxy services allow customers to route traffic through IP addresses assigned to households and small businesses. This approach helps malicious actors hide their activity within normal consumer traffic, creating serious challenges for network defenders. – https://www.infosecurity-magazine.com/news/google-disrupts-proxy-networks/
Operation Winter SHIELD: FBI Issues Call to Arms for Organizations to Improve Cybersecurity
(Danny Palmer – Infosecurity Magazine) The FBI has launched Operation Winter SHIELD outlining ten actions which organizations should implement to help protect themselves, society and the state against cyber-attacks and malicious intrusions. The Securing Homeland Infrastructure by Enhancing Layered Defense (SHIELD) cyber resilience campaign details actions which organizations can take to help detect, confront, and dismantle cyber threats. “Winter SHIELD provides industry with a practical roadmap to better secure information technology (IT) and operational technology (OT) environments, hardening the nation’s digital infrastructure and reducing the attack surface,” the FBI said in an announcement on January 28. – https://www.infosecurity-magazine.com/news/fbi-operation-winter-shield-cyber/
New CISA Guidance Targets Insider Threat Risks
(Alessandro Mascellino – Infosecurity Magazine) The risk posed by insiders with authorized access to sensitive systems has prompted a renewed call to action from the US Cybersecurity and Infrastructure Security Agency (CISA). The government entity has released a new infographic designed to help organizations prevent, detect and respond to insider threats that can disrupt operations and undermine trust. The resource is aimed at critical infrastructure operators and state, local, tribal and territorial (SLTT) governments. It outlines practical steps for building teams that can manage insider risk in a structured and coordinated way, drawing on expertise across security, legal, human resources and operational functions. – https://www.infosecurity-magazine.com/news/cisa-targets-insider-threat-risks/
FBI Takes Down RAMP Ransomware Forum
(Kevin Poireault – Infosecurity Magazine) The notorious cybercriminal forum Russian Anonymous Marketplace (RAMP) has reportedly been taken down by the FBI. The news came on January 28, when several cyber threat intelligence (CTI) analysts noticed both RAMP clear and dark web sites were down and replaced by a law enforcement banner showing the message: “This site has been seized.” The banner says the FBI seized the site in collaboration with the US Attorney’s Office for the Southern District of Florida and the US Justice Department’s (DoJ) Computer Crime and Intellectual Property Section (CCIPS). – https://www.infosecurity-magazine.com/news/fbi-takes-down-ramp-ransomware/
Ransomware Victim Numbers Rise, Despite Drop in Active Extortion Groups
(Danny Palmer – Infosecurity Magazine) Ransomware gangs claimed a deluge of victims during the final quarter of 2025, despite a decline in the number of active ransomware groups, analysis by cybersecurity researchers at ReliaQuest has revealed. As detailed in the company’s Ransomware and Cyber Extortion in Q4 2025 report, the number of victim organizations that had their data posted on ransomware leak sites in the final three months of 2025 was up by 50% compared with the previous quarter, and up by 40% compared with the same period in the previous year. The organizations that had data published on leak sites were victims of ransomware attacks, with the perpetrators releasing some of the data stolen during their intrusions to put additional pressure on targets to pay a ransom. – https://www.infosecurity-magazine.com/news/ransomware-numbers-rise-despite/
Millions creating deepfake nudes on Telegram as AI tools drive global wave of digital abuse
(Priya Bharadia and Aisha Down – The Guardian) Millions of people around the world are creating and sharing deepfake nudes on the secure messaging app Telegram, a Guardian analysis has shown, as the spread of advanced AI tools industrialises the online abuse of women. The Guardian has identified at least 150 Telegram channels – large encrypted group chats popular for their secure communication – that appear to have users in many countries, from the UK to Brazil, China to Nigeria, Russia to India. Some of them offer “nudified” photos or videos for a fee: users can upload a photo of any woman, and AI will produce a video of that woman performing sexual acts. Many more offer a feed of images – of celebrities, social media influencers and ordinary women – made nude or made to perform sexual acts by AI. Followers are also using the channels to share tips on available deepfake tools. – https://www.theguardian.com/global-development/2026/jan/29/millions-creating-deepfake-nudes-telegram-ai-digital-abuse
Council presidency launches talks on AI deepfakes and cyberattacks
(DigWatch) EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency. The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc. According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation. – https://dig.watch/updates/council-presidency-launches-talks-on-ai-deepfakes-and-cyberattacks
Canada – Cyber Centre releases Ransomware Threat Outlook 2025 to 2027
(Government of Canada) The Canadian Centre for Cyber Security (Cyber Centre), part of the Communications Security Establishment Canada (CSE), released its Ransomware Threat Outlook 2025 to 2027, its latest assessment of ransomware threats facing Canada. The modern ransomware landscape is a highly sophisticated and interconnected ecosystem that is constantly evolving. Understanding current and emerging trends is critical to helping Canadians better prepare for and mitigate ransomware risks. The Cyber Centre’s report covers the early history of ransomware, highlights emerging and projected trends, outlines its impact on Canada and Canadian organizations, and debunks common myths and misconceptions. – https://www.canada.ca/en/communications-security/news/2025/12/cyber-centre-releases-ransomware-threat-outlook-2025-to-2027.html
10 ways AI can inflict unprecedented damage in 2026
(David Berlind – ZDNet) This year’s cybersecurity threat landscape will be much, much worse than last year, experts warn. Here are 10 areas of vulnerability that deserve every business leader’s attention in 2026. – https://www.zdnet.com/article/these-4-big-technology-bets-will-reshape-the-global-economy-in-2026/
UK announces largest ever facial recognition rollout as part of policing reforms
(Masha Borak – Biometric Update) The UK has announced large-scale policing reforms, including new investments into artificial intelligence and increased Live Facial Recognition (LFR) deployments. The Home Office has pledged to fund 40 new LFR vans as part of a national program to expand facial recognition capabilities in town centres across England and Wales, according to a white paper published on Monday. The country also plans to invest £115 million (US$157.3 million) over the next 3 years into a National Centre for AI in Policing, known as Police.AI. The institution will focus on testing and deploying AI technology that can catch criminals, speed up investigations and reduce administrative burdens. – https://www.biometricupdate.com/202601/uk-announces-largest-ever-facial-recognition-rollout-as-part-of-policing-reforms
Cybersecurity emerges as a top spending priority for UK organisations’ tech strategies in 2026 while AI still dominates
(KPMG) With ongoing geopolitical tensions and recent high-profile cyber-attacks front of mind, cybersecurity emerged as the number one priority for large increases in investment over the next 12 months, according to the KPMG Global Tech Report 2026, which gains insight on priorities from tech executives across the world, including 151 in the UK. More than half of UK organisations (57 per cent) reported they are planning on increasing their budget for cybersecurity by more than 10 per cent over the next 12 months. Globally, only 41 per cent are planning to increase their cybersecurity budget by the same amount, signalling greater emphasis on cybersecurity in the UK. – https://kpmg.com/uk/en/media/press-releases/2026/01/cybersecurity-emerges-as-a-top-spending-priority.html
ICA trials facial recognition clearance for motorcyclists at Woodlands Checkpoint
(Khoo Yi-Hang – AsiaOne) The Immigration and Checkpoints Authority (ICA) on Monday (Jan 26) began a trial using facial recognition instead of fingerprint scanning to clear motorcyclists entering Singapore via Woodlands Checkpoint. Taking place at two designated motorcycle lanes in the arrival zone, the trial is aimed at speeding up immigration clearance and increasing convenience, while maintaining border security, said the authority in a Facebook post. – https://www.asiaone.com/singapore/ica-facial-recognition-motorcycle-woodlands-checkpoint
Experts warn of threat to democracy from ‘AI bot swarms’ infesting social media
(Robert Booth – The Guardian) Political leaders could soon launch swarms of human-imitating AI agents to reshape public opinion in a way that threatens to undermine democracy, a high profile group of experts in AI and online misinformation has warned. The Nobel peace prize-winning free-speech activist Maria Ressa, and leading AI and social science researchers from Berkeley, Harvard, Oxford, Cambridge and Yale are among a global consortium flagging the new “disruptive threat” posed by hard-to-detect, malicious “AI swarms” infesting social media and messaging channels. A would-be autocrat could use such swarms to persuade populations to accept cancelled elections or overturn results, they said, amid predictions the technology could be deployed at scale by the time of the US presidential election in 2028. – https://www.theguardian.com/technology/2026/jan/22/experts-warn-of-threat-to-democracy-by-ai-bot-swarms-infesting-social-media
When will ‘Q-Day’ arrive? Scientists predict the date when quantum computing will crack all of Earth’s digital encryption – with terrifying consequences
(William Hunter – Daily Mail) As terrifying as it might sound, experts believe the world will soon face a technological crisis that threatens to fundamentally overthrow digital secrecy. Known as ‘Q-Day’, this is the moment when quantum computers will crack open all of Earth’s digital encryption. From then, any information not secured by ‘post-quantum’ protection will be laid bare – including financial transactions and military communications. – https://www.dailymail.co.uk/sciencetech/article-15498725/qday-scientists-quantum-computing-digital-encryption.html
Defence, Military, Intelligence, and Warfare
Keep AI Testing Defense-Worthy
(Matteo Pistillo – Lawfare) As the spending on frontier artificial intelligence (AI) capabilities for defense and intelligence increases and the most advanced AI models are incrementally entrenched in the national security apparatus, the Department of Defense and the intelligence community should test, and not assume, that procured AI models are sufficiently aligned with their intent. In order for “every warfighter” to soon “wield frontier AI as a force multiplier” and depend on AI as a “teammate,” AI models must be sufficiently reliable and trustworthy. This requires adapting and accelerating existing AI testing and evaluation pipelines within the Department of Defense and the intelligence community to detect and counter instances of AI misalignment. – https://www.lawfaremedia.org/article/keep-ai-testing-defense-worthy
Ukraine Becomes World Leader in Unmanned Ground Vehicles
(Taras Kuzio – The Jamestown Foundation) Russia’s war against Ukraine has transformed Ukraine into the world’s leading innovator in unmanned warfare, expanding from aerial and naval drones to large-scale production and battlefield deployment of unmanned ground vehicles (UGVs). Ukraine’s UGV ecosystem combines real combat experience, North Atlantic Treaty Organization (NATO)-standard certification, and a growing private defense sector, positioning Ukrainian engineers at the forefront of future global military technology. UGVs now perform surveillance, logistics, fire support, and self-detonating attacks in lethal frontline “kill zones,” reducing Ukrainian casualties and reshaping tactics through coordinated, multi-domain robotic warfare. Ukrainian UGVs are increasingly replacing infantry in high-risk missions, providing sustained firepower, engineering support, and resilience against electronic warfare, terrain challenges, and prolonged deployments where human soldiers would face extreme danger. UGVs are an essential tool for logistics, medevac, and emergency response. They enable the delivery of supplies, the evacuation of wounded, mine clearance, and civilian rescue. – https://jamestown.org/ukraine-becomes-world-leader-in-unmanned-ground-vehicles/
Frontiers and Markets
China plans to launch space data centres over next five years
(John Tanner – Developing Telecoms) China Aerospace Science and Technology Corporation (CASC) is reportedly planning to launch gigawatt AI space data centres over the next five years. According to a report from Reuters, citing Chinese state broadcaster CCTV, CASC said it will “construct gigawatt-class space digital-intelligence infrastructure,” under a five-year development plan. The space data centres will “integrate cloud, edge and terminal capabilities” and achieve the “deep integration of computing power, storage capacity and transmission bandwidth,” the report said. – https://developingtelecoms.com/telecom-technology/data-centres-networks/19687-china-plans-to-launch-space-data-centres-over-next-five-years.html
Google launches Project Genie allowing users to create interactive AI-generated worlds
(DigWatch) Google has launched Project Genie, an experimental prototype that allows users to create and explore interactive AI-generated worlds. The web application, powered by Genie 3, Nano Banana Pro, and Gemini, is rolling out to Google AI Ultra subscribers in the US aged 18 and over. Genie 3 represents a world model that simulates environmental dynamics and predicts how actions affect them in real time. Unlike static 3D snapshots, the technology generates paths in real time as users move and interact, simulating physics for dynamic environments. – https://dig.watch/updates/google-launches-project-genie-to-create-ai-worlds
Mapping surprise in the human mind, with help from AI
(Benjamin Ransom – UChicago News) We build AI systems to mimic the human brain: writing emails, answering questions and predicting what comes next. But new research aims to turn that relationship around—using large language models (LLMs) to explore how our brains anticipate and process stories. “I think that the way that an LLM represents events is similar to how humans do. That’s a really interesting part of our research,” said Bella Summe, a fourth-year data science major currently involved in a research project in the Cognition, Attention and Brain Lab at the University of Chicago. The project, directed by psychology Assoc. Prof. Monica Rosenberg, aims to determine whether large language models can predict a fundamental process in human cognition—surprise. Their approach is to compare how humans and AI respond to the same narrative moments. – https://news.uchicago.edu/story/mapping-surprise-human-mind-help-ai
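To make the idea concrete: a language model’s word-by-word “surprise” is commonly approximated by surprisal, the negative log-probability the model assigns to each next token in a story. The short sketch below, using the Hugging Face transformers library and GPT-2, is only an illustration under those assumptions; the article does not specify which models or metrics the UChicago team uses.

```python
# Illustrative sketch only: estimate per-token "surprise" (surprisal) in a short
# narrative with an off-the-shelf causal language model. The model choice (gpt2)
# and the surprisal metric are assumptions for illustration, not study details.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

story = "The detective opened the door and found the room completely empty."
enc = tokenizer(story, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # shape: [1, seq_len, vocab_size]

# Surprisal of token t is -log P(token_t | tokens_<t); higher values mean the
# model found that word more unexpected in context.
log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
targets = enc["input_ids"][:, 1:]
surprisal = -log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)

for token_id, s in zip(targets[0], surprisal[0]):
    print(f"{tokenizer.decode(int(token_id)):>12s}  {s.item():5.2f} nats")
```

Estimates like these could then be lined up against human responses at the same narrative moments, which is the kind of human-versus-model comparison the article describes.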
NVIDIA, Microsoft, and Amazon plan up to $60B investment in OpenAI
(Crypto Briefing) NVIDIA, Microsoft, and Amazon are negotiating a potential investment of up to $60 billion in OpenAI, which would value the AI research organization at $730 billion before financing, The Information reported Wednesday. – https://cryptobriefing.com/nvidia-microsoft-amazon-openai-investment/
Worldcoin spikes 40% as OpenAI reportedly plans biometric X rival
(Cointelegraph) OpenAI-linked token Worldcoin spiked 40% on Wednesday following a report that the artificial intelligence firm is working on a bot-free social media platform that requires “proof of personhood.” According to a Tuesday Forbes report citing sources familiar with the matter, OpenAI is aiming to develop a “humans-only platform” as a point of difference from other social media services on the market. – https://cointelegraph.com/news/worldcoin-token-spike-report-openai-build-biometric-x-rival
Most complex time crystal yet has been made inside a quantum computer
(Karmela Padavic-Callaghan – New Scientist) A time crystal more complex than any made before has been created in a quantum computer. Exploring the properties of this unusual quantum setup strengthens the case for quantum computers as machines well-suited for scientific discovery. Typical crystals have atoms arranged in a specific repeating pattern in space, but time crystals are defined by a pattern that repeats in time instead. A time crystal repeatedly cycles through the same set of configurations and, barring deleterious influences from its environment, should continue cycling indefinitely. This indefinite motion initially made time crystals seem like a threat to the fundamental laws of physics, but throughout the past decade researchers have made several of them in the lab. Now, Nicolás Lorente at Donostia International Physics Center in Spain and his colleagues have used an IBM superconducting quantum computer to make an unprecedentedly complex time crystal. – https://www.newscientist.com/article/2513426-most-complex-time-crystal-yet-has-been-made-inside-a-quantum-computer/
Despite its steep environmental costs, AI might also help save the planet
(Nir Kshetri – Japan Today) The rapid growth of artificial intelligence has sharply increased electricity and water consumption, raising concerns about the technology’s environmental footprint and carbon emissions. But the story is more complicated than that. I study emerging technologies and how their development and deployment influence economic, institutional and societal outcomes, including environmental sustainability. From my research, I see that even as AI uses a lot of energy, it can also make systems cleaner and smarter. AI is already helping to save energy and water, cut emissions and make businesses more efficient in agriculture, data centers, the energy industry, building heating and cooling, and aviation. – https://japantoday.com/category/tech/despite-its-steep-environmental-costs-ai-might-also-help-save-the-planet
UVA Data Science researcher leads $4.7M project for AI-powered diabetes management
(News Medical Life Sciences) University of Virginia School of Data Science researcher Heman Shakeri has been awarded a major new research grant to lead work at the intersection of machine learning and diabetes care. Shakeri will serve as a contact PI alongside Dr. Greg Forlenza, pediatric endocrinologist at the University of Colorado Anschutz Medical Campus (Barbara Davis Center for Diabetes). The award is jointly funded by Breakthrough T1D and The Leona M. and Harry B. Helmsley Charitable Trust. The project leverages a $3.9 million grant combined with $800,000 in in-kind contributions from industry partners Tandem Diabetes Care and Arecor, bringing the total project support to approximately $4.7 million. – https://www.news-medical.net/news/20260127/UVA-Data-Science-researcher-leads-2447M-project-for-AI-powered-diabetes-management.aspx
AI supercomputer gets £36m upgrade from government
(Louise Parry and David Webster – BBC) One of the UK’s most powerful supercomputers is being given a £36m upgrade by the government as part of further investment in artificial intelligence (AI). The Dawn supercomputer in Cambridge, which has already supported more than 350 projects for free, will see its power boosted sixfold. The system is used for public projects such as helping to reduce NHS waiting lists and developing new tools to tackle climate change, although AI requires vast amounts of energy. Professor Sir John Aston, at the University of Cambridge, said: “This investment will give researchers, clinicians and innovators the tools they need to drive breakthroughs that improve public services.” – https://www.bbc.com/news/articles/c79rjg3yqn3o
These AI Models Might Take Down Superbugs
(Alex Knapp – Forbes) At Davos, all eyes were on President Trump’s remarks before the session and his designs on Greenland. But that was far from the only conversation happening at the World Economic Forum. Elsewhere at the event, scientists spoke about the urgent need for governments to address the growing resistance of microbes to antibiotics and other treatments, warning that these “superbugs” could cause more deaths than cancer by the middle of the century. Glen Gowers, cofounder of AI biotech Basecamp Research, thinks his company could help address this urgent problem. Earlier this month, it released new AI models it says can accelerate drug design. Working with hardware giant NVIDIA, these systems were trained on a large variety of genetic data from species around the world, creating a very robust and accurate way to model complex biological molecules. One application of this system is enabling better design methods for gene therapies for complex diseases. But another is finding new drugs against resistant microbes. In an accompanying paper, which has not yet been peer-reviewed, the company used its AI systems to design new drugs, 97% of which showed some effectiveness in the laboratory. As Gowers describes it, the system is sophisticated enough that you can simply prompt it with something like: “Design me something this bacteria has never seen before that will kill it,” and it will give you options. “We want to create those molecules and get them into pipelines,” he added. – https://www.forbes.com/sites/the-prototype/2026/01/22/these-ai-models-might-take-down-superbugs/
AI applied to abdominal imaging can help predict fall risk in adults
(News Medical Life Sciences) Artificial intelligence (AI) applied to abdominal imaging can help predict adults at higher risk of falling as early as middle age, a new Mayo Clinic study shows. The research, published in Mayo Clinic Proceedings, highlights the importance of abdominal muscle quality, a component of core strength, as a key predictor of fall risk in adults aged 45 years and older. – https://www.news-medical.net/news/20260122/AI-applied-to-abdominal-imaging-can-help-predict-fall-risk-in-adults.aspx
AI Can Predict 130 Health Issues From One Night of Sleep
(Davia Sills – Psychology Today) One of the greatest opportunities for artificial intelligence (AI) machine learning is in the field of health and disease diagnostics. A new breakthrough study demonstrates how AI can predict a person’s risk of developing over a hundred serious medical conditions from data collected noninvasively from just a single night of sleep. “This study underscores the potential of sleep-based foundation models for risk stratification and longitudinal health monitoring,” wrote Stanford University co-corresponding authors James Zou and Emmanuel Mignot in collaboration with co-authors Rahul Thapa, Magnus Ruud Kjaer, Bryan He, Ian Covert, Hyatt Moore IV, Umaer Hanif, Gauri Ganjoo, M. Brandon Westover, Poul Jennum, and Andreas Brink-Kjaer. – https://www.psychologytoday.com/gb/blog/the-future-brain/202601/ai-can-predict-130-health-issues-from-one-night-of-sleep