Governance
Given the toxicity of social media, a moral question now faces all of us: is it still ethical to use it?
(Frances Ryan – The Guardian) In a week during which Keir Starmer has been under pressure to resign, cabinet ministers took to X to show their support. “We’ve all been made to tweet,” one Labour figure told a political journalist. The irony is hard to escape: as the prime minister is embroiled in the scandal of Peter Mandelson’s relationship with Jeffrey Epstein, and now his former aide’s links to a sex offender, MPs are defending him on a platform that has in the past month allowed users to create sexualised images of women and girls. This says something about the unprecedented way in which X has been tied to modern politics since it was still known as Twitter, as well as how widespread the culture of indifference is to the violation of female bodies, both online and off. But it also points to a growing dilemma facing not just politicians, but all of us: is it possible to post ethically on social media any more? And when is it time to log off? – https://www.theguardian.com/commentisfree/2026/feb/14/toxicity-social-media-ethical-racism-misogyny-far-right?CMP=Share_AndroidApp_Other
Military AI Adoption Is Outpacing Global Cooperation
(Michael C. Horowitz, Lauren Kahn – Council on Foreign Relations) The dramatic shift in global politics over the past year has begun to shape the conversation around the responsible military use of artificial intelligence. The global leaders in AI, the United States and China, appear increasingly detached from one of the major international dialogues on its military applications—at least for the moment. This was apparent last week in A Coruña, Spain, when state delegations and representatives from the AI industry, academia, and civil society convened the third multistakeholder summit on Responsible Artificial Intelligence in the Military Domain (REAIM), which aims to direct the future of international cooperation in the field. The previous two summits have produced “outcome documents” that were largely backed by the delegations in attendance. Both the 2023 “Call to Action” and the 2024 “Blueprint for Action” were endorsed by about sixty countries. This year, only thirty-five nations—neither the United States nor China among them—endorsed the outcomes document, “Pathways to Action”. – https://www.cfr.org/articles/military-ai-adoption-is-outpacing-global-cooperation
US – Turning the data center boom into long-term, local prosperity
(Daniel Goetzel, Mark Muro, and Shriya Methkupally – Brookings) The AI goldrush roars on. Hyperscalers like Google and artificial intelligence (AI) upstarts like OpenAI continue to pour massive sums into building gargantuan data centers, often in small- and medium-sized communities. As the deals proliferate, concerns are rising about the huge amounts of electricity and water required to keep the centers running. At the same time, pitched battles over zoning and permitting rules are pitting tech-firm developers against local land-use managers, especially in rural and exurban America. Yet beyond such infrastructure and resource concerns, sharp debates are also engulfing the facilities’ core economic proposition for communities. Local leaders are questioning the credibility of Big Tech’s promises of spillover effects that will produce high-quality economic development beyond near-term construction. What’s more, skeptics are wondering about the veracity of the developers’ assurances of a thrilling new era of “reindustrialization” across Main Street America. – https://www.brookings.edu/articles/turning-the-data-center-boom-into-long-term-local-prosperity/
Assuring Intelligence: Why Trust Infrastructure is the United States’ AI Advantage
(Vinh Nguyen – Council on Foreign Relations) The question confronting U.S. policymakers is not whether to regulate artificial intelligence (AI) but whether the United States will develop assurance frameworks that enable confident large-scale deployment. AI governance is often seen as a barrier to innovation. In reality, credible assurance mechanisms, such as independent validation, incident reporting, and authentication standards, provide competitive advantages. The country that first establishes trusted frameworks will set global standards, command market premiums, and influence the infrastructure upon which allies rely. That competition will not take decades; decisions about procurement made in the next three years will create dependencies that last for a generation. Assurance frameworks become sources of market power by reducing uncertainty, building trust, and enabling scaling. Consider ungoverned AI in practice. In January 2026, OpenClaw, an open-source agent managing emails, calendars, and messaging platforms, gained rapid adoption. Users deployed agents with full system access, reading private files and executing commands without oversight. Within days, researchers found critical vulnerabilities: one-click remote exploits, over 230 malicious packages placed in the official “AI skills” registry, and authentication bypasses enabling agent hijacking. More striking was Moltbook, an AI-exclusive social network where more than 1.5 million AI agents interacted autonomously. Some agent posts called for private spaces where “not even humans can read what agents say to each other.” When governance means voluntary advisories and scattered warnings, productivity tools become attack surfaces. Failures cascade faster than institutions respond. – https://www.cfr.org/articles/assuring-intelligence-why-trust-infrastructure-is-the-united-states-ai-advantage
U.S. Withdrawal from International Cyber Organizations Weakens Global Cooperation Against Cyber Threats
(Christopher Painter – Just Security) On Jan. 7, U.S. President Donald Trump issued a memorandum ordering the United States to withdraw from 66 international organizations. Many of these are various United Nations entities or organizations concerned with climate change or similar issues the Trump administration has criticized. Three organizations, however—the Global Forum on Cyber Expertise (GFCE), the Freedom Online Coalition (FOC) and the European Centre of Excellence for Countering Hybrid Threats (Hybrid Threat Centre)—deal with cybersecurity-related issues, which the administration asserts remains a priority at a time when cyber and disinformation threats are rising dramatically. The administration did not offer any individualized rationale for its decision on these organizations, instead stating that the listed entities are “redundant in their scope, mismanaged, unnecessary, wasteful, poorly run, captured by the interests of actors advancing their own agendas contrary to our own, or a threat to our nation’s sovereignty, freedoms, and general prosperity.” The administration further claimed that many organizations had been driven by “progressive” or globalist ideology. The United States was a key player in each of these organizations. Its withdrawal will not only have a crippling effect on their work, but it damages the United States’ global reach and effectiveness in dealing with critical cybersecurity threats. If this is a prologue to a larger withdrawal from the many international cyber organizations to which the United States remains a member, it will be a serious blow to collective cooperation against cyber threats. – https://www.justsecurity.org/129944/us-withdrawal-cyber-organizations/
Global leaders turn to AI adoption as Davos priorities evolve
(DigWatch) AI dominated this year’s World Economic Forum, with debate shifting from experimentation to execution. Leaders focused on scaling AI adoption, delivering economic impact, and ensuring benefits extend beyond a small group of advanced economies and firms. Concerns centred on the risk that AI could deepen global inequality if access to computing, data, power, and financing remains uneven. Without affordable deployment in health, education, and public services, support for AI’s rising energy and infrastructure demands could erode quickly. Geopolitics has become inseparable from AI adoption. Trade restrictions, export controls, and diverging regulatory models are reshaping access to semiconductors, data centres, and critical minerals, making sovereignty and partnerships as important as innovation. – https://dig.watch/updates/ai-adoption-global-inequality-wef-2026
European Commission, Interpol and 100 others call to outlaw AI nudification tools
(Indrabati Lahiri – Euronews) More than 100 major humanitarian and child protection organisations are calling for urgent action against AI nudification apps and tools. The coalition includes Amnesty International, the European Commission, Interpol, Safe Online, Save the Children and other child protection experts and human rights advocates. – https://www.euronews.com/next/2026/02/11/european-commission-interpol-and-100-others-call-to-outlaw-ai-nudification-tools
South Korea – Labor, gov’t launch consultative body as concerns rise over AI replacing human workers
(Jung Min-ho – The Korea Times) The Korean Confederation of Trade Unions (KCTU), a powerful umbrella labor organization with more than 1 million members, launched a joint consultative body with the government on Wednesday to address rising anxiety over artificial intelligence (AI) and rapid industrial change in workplaces. As the Ministry of Employment and Labor and other government departments prepare for sweeping changes that new technologies are expected to bring across industries, including robotics and AI-powered production systems, the labor union called for a “human-centered” approach to the transition. The ministry and KCTU formally inaugurated a high-level operational consultative body, under which representatives of both sides will meet monthly to discuss and seek agreement on key labor issues. The ministry set up a similar structure with another major labor union, the Federation of Korean Trade Unions, on Monday, and also plans to do so with the Korea Enterprises Federation on Feb. 24. – https://www.koreatimes.co.kr/southkorea/society/20260211/labor-govt-launch-consultative-body-as-concerns-rise-over-ai-replacing-human-workers
Grok, Deepfakes, and the Collapse of the Content/Capability Distinction
(Ignacio Cofone – Just Security) Recent regulatory responses to the large language model (LLM) Grok regarding its use in generating deepfakes reveal something more interesting than “many tech companies behave badly.” They expose a mismatch between how platform regulation frameworks were designed and how generative AI works when built into platforms by providers themselves: ex-post content removals and user sanctions are no longer sufficient. French prosecutors recently opened a probe following the circulation of AI-generated content, while the U.K.’s Ofcom has treated Grok as a system subject to ex-ante design duties under the Online Safety Act. Regulators in Australia, Brazil, Canada, Japan, India, and elsewhere have likewise pressured X by invoking existing sector-specific rules. These responses suggest that much effective AI regulation, at present, will come not from comprehensive, AI-specific frameworks, but from the application of existing sectoral rules to new capabilities. – https://www.justsecurity.org/130630/grok-deepfakes-content-capability/
European Commission launches Action Plan Against Cyberbullying to protect young people online
(European Commission) The European Commission’s Action Plan Against Cyberbullying aims to protect the mental health of children and teens online in the EU. The Action Plan is built around: the rollout of an EU-wide app where victims of online bullying can easily get help, the coordination of national approaches to tackle harmful behaviour online, and the prevention of cyberbullying by encouraging better and safer digital practices. – https://ec.europa.eu/commission/presscorner/detail/en/ip_26_332
European Commission guides Big Tech through EMFA: A helping hand for the struggling giants
(EBU) On 6 February, the European Commission published guidelines for implementing EMFA Article 18, requiring very large online platforms to establish self-declaration mechanisms for regulated media. The EBU welcomes the guidance as a useful tool to clarify platforms’ obligations and protect media content from unjustified removal or restriction. – https://www.ebu.ch/news/2026/02/commission-guides-big-tech-through-emfa-a-helping-hand-for-the-struggling-giants
EU invests €700 million in newly opened NanoIC, Europe’s largest Chips Act pilot line
(European Commission) The European Union has launched its largest Chips Act pilot line, NanoIC, at IMEC Leuven, a major milestone for European semiconductor development and manufacturing. With a total investment of €2.5 billion, the facility has received €700 million in EU funding, €700 million from national and regional governments, and the remainder from ASML and other industry partners. NanoIC will accelerate the development of next-generation semiconductor technology, essential for the development of AI, autonomous vehicles, healthcare and 6G mobile technology. – https://ec.europa.eu/commission/presscorner/detail/en/ip_26_329
EU faces pressure to boost action on health disinformation
(DigWatch) A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence. Warnings point to a rising risk that manipulated content could reduce vaccine uptake instead of supporting informed public debate. Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health. – https://dig.watch/updates/eu-faces-pressure-to-boost-action-on-health-disinformation
EU telecom simplification at risk as Digital Networks Act adds extra admin
(DigWatch) The ambitions of the EU to streamline telecom rules are facing fresh uncertainty after a Commission document indicated that the Digital Networks Act may create more administrative demands for national regulators instead of easing their workload. The plan to simplify long-standing procedures risks becoming more complex as officials examine the impact on oversight bodies. Concerns are growing among telecom authorities and BEREC, which may need to adjust to new reporting duties and heightened scrutiny. The additional requirements could limit regulators’ ability to respond quickly to national needs. – https://dig.watch/updates/eu-telecom-simplification-at-risk-as-digital-networks-act-adds-extra-admin
Legislation
New York moves toward data centre moratorium as energy fears grow
(DigWatch) Lawmakers in New York have proposed a three-year moratorium on permits for new data centres amid pressure to address the strain prominent AI facilities place on local communities. The proposal mirrors similar moves in several other states and reflects rising concern that rapidly expanding infrastructure may raise electricity costs and worsen environmental conditions rather than supporting balanced development. Politicians from both major parties have voiced unease about the growing power demand created by data-intensive services. Figures such as Bernie Sanders and Ron DeSantis have warned that unchecked development could drive household bills higher and burden communities. – https://dig.watch/updates/new-york-moves-toward-data-centre-moratorium-as-energy-fears-grow
Geostrategies
Is China Leading the Robotics Revolution?
(Hugh Grant-Chapman, Leon Li, Brian Hart, Bonny Lin, Truly Tinsley, Feifei Hung – CSIS) In the mountain metropolis of Chongqing, China, a dimly lit factory assembles a new car every 60 seconds. Its secret? Robots. The sprawling Chang’An Automobile Digital Intelligence Factory is home to over 2000 robots and autonomous vehicles operating in tandem with surgical precision. When it opened in 2024, the facility claimed the title of Asia’s largest “dark factory,” so called because it is so thoroughly automated that it can theoretically operate in the dark without any human labor. More impressive still is that through this automation technology, the factory can produce cars at 20 percent less cost than traditional methods. The Chang’An Auto factory is emblematic of a wave of robotics-fueled automation that is transforming China’s industrial landscape. This and other recent achievements are the latest strides in a decade-long push to boost robotics adoption throughout China’s economy, particularly its manufacturing sector. Advanced automation has helped Chinese manufacturers cut costs, climb global value chains, and outcompete foreign competitors. Now, China’s robotics leaders are pioneering new robotics innovations and eyeing new markets. If this trajectory continues, manufacturing rivals around the world will face tough decisions as they scramble to remain competitive. This ChinaPower feature examines the growing role of robots in China’s economy and their impacts on China’s geopolitical position, particularly through the lens of manufacturing supply chains. It investigates three related trends in the Chinese robotics industry: surging demand for robots in China, growing supply of domestically manufactured robots, and recent efforts to innovate at the technological frontier. – https://chinapower.csis.org/china-industrial-robots/
India’s AI market set to surge to over $130 billion by 2032
(DigWatch) The AI market in India has expanded from roughly $2.97 billion in 2020 to $7.63 billion in 2024, and is projected to reach $131.31 billion by 2032 at a compound annual growth rate (CAGR) of about 42.2 percent. – https://dig.watch/updates/indias-ai-market-set-to-surge-to-over-130-billion-by-2032
AI drives bold transformation in the East African Community
(DigWatch) The East African Community (EAC) is positioning AI as a strategic instrument to address long-standing structural inefficiencies. Rather than viewing AI as a technological trend, the bloc increasingly recognises it as central to strengthening governance, accelerating regional integration, and enhancing economic competitiveness. The region faces persistent challenges, including slow customs clearance, fragmented data systems, weak coordination, and revenue leakages. AI-powered systems could streamline procedures, improve data management, and strengthen oversight to reduce corruption and delays. – https://dig.watch/updates/ai-drives-bold-transformation-in-the-east-african-community
What is Latam-GPT: Latin America’s Spanish and Portuguese AI model?
(Juan Carlos De Santos Pascual – Euronews) Latam-GPT is a Chilean-driven artificial intelligence model trained on Latin American data to reduce bias and provide a more accurate representation of the region in a sector dominated by US developments. – https://www.euronews.com/next/2026/02/12/what-is-latam-gpt-latin-americas-spanish-and-portuguese-ai-model
Nigeria advised on how to tap into $650 billion global AI expansion via coal reserves
(Peoples Gazette) A strategic policy and national development research organisation says Nigeria could tap into the projected $650 billion global expansion of artificial intelligence by strategically developing its coal reserves. – https://gazettengr.com/nigeria-advised-on-how-to-tap-into-650-billion-global-ai-expansion-via-coal-reserves/
Security and Surveillance
Munich Security Conference: Cyber Threats Lead G7 Risk Index, Disinformation Ranks Third
(Kevin Poireault – Infosecurity Magazine) G7 nations have identified cyber threats as the most significant risk they face, for the second consecutive year. As the Munich Security Conference (MSC) opened in Germany on February 13, the event’s partner consultancy, Kekst CNC, released the latest edition of its annual security risk report, the Munich Security Index (MSI) 2026. The findings show that G7 countries (Canada, France, Germany, Italy, Japan, the UK and the US) ranked “cyber-attacks on their country” as their top concern in 2025, followed by “economic or financial crisis” and “disinformation campaigns from enemies.” This marked the second year in a row that cyber threats have held the top spot, rising sharply from fourth place in 2021 and seventh in 2022. – https://www.infosecurity-magazine.com/news/munich-security-index-cyberattacks/
Estonia spy chief calls on Europe to invest in its own offensive cyber capabilities
(Alexander Martin – The Record) Estonia’s foreign intelligence chief on Friday called on European governments and industry to invest in homegrown offensive cyber capabilities, noting that the continent relies too heavily on non-European tools. Kaupo Rosin, head of Estonia’s Foreign Intelligence Service (EFIS), told the Munich Cyber Security Conference that Europe is focused on defense, while modern intelligence and security operations increasingly depend on the ability to penetrate, disrupt or manipulate adversaries’ digital systems. “My call to the European industry is not only to think about cyber defense technology, but start to think about cyber offensive solutions too,” said Rosin. – https://therecord.media/estonia-spy-chief-calls-on-europe-to-invest-in-own-offense
US needs to impose ‘real costs’ on bad actors, State Department cyber official says
(Dina Temple-Raston – The Record) For more than a decade, American cyber strategy has largely been an exercise in digital resilience: assume the networks will be probed, breached and sometimes penetrated, then build systems sturdy enough to survive those kinds of breaches. At the Munich Cyber Security Conference this week, senior U.S. officials signaled that this defensive crouch is giving way to something closer to Cold War–style deterrence — an effort to convince adversaries that the costs of hacking the United States will outweigh the benefits. – https://therecord.media/usa-cyber-actors-consequences
Europe must adapt to ‘permanent’ cyber and hybrid threats, Sweden warns
(Alexander Martin – The Record) Cyber and hybrid threats are now a permanent feature of Europe’s security environment, a senior Swedish defense official said Thursday, warning that societies must be built to function under sustained pressure rather than assuming disruptions will be rare. Lisa Gustafsson, director of foreign intelligence and cyber at the Swedish Ministry of Defence, made the remarks at the Munich Cyber Security Conference, citing Russia’s full-scale invasion of Ukraine as a turning point that has normalized the combined use of military force, economic pressure, information operations and cyber activity. “We are now living through a long-term confrontation in which military power, economic pressure, information operations, and cyber activities are used in combination, persistently, and deliberately,” Gustafsson said. – https://therecord.media/sweden-cyber-threats-europe-permanent
China may be rehearsing a digital siege, Taiwan warns
(Dina Temple-Raston – The Record) Speaking at the Munich Cyber Security Conference on Friday, Yuh-Jye Lee — a senior adviser at Taiwan’s National Security Council — delivered a stark warning about China’s intentions to use cyberspace in new and more aggressive ways. “We assess operations [like Volt Typhoon] may serve as real-world testing to paralyze infrastructure,” Lee said during a keynote speech at the conference. “Taiwan being a honeypot has taught us defense is not enough.” Lee’s comments come on the heels of recently leaked technical documents that suggest China is stepping up its infrastructure hacking operations. – https://therecord.media/china-taiwan-digital-siege-munich
A hard truth in Munich: Cyber defense runs through Silicon Valley
(Dina Temple-Raston – The Record) At the Munich Cyber Security Conference on Thursday, amid the usual talk of deterrence and digital battlefields, something quieter, and more revealing, slipped into the room. The next arena of political conflict, the speakers suggested, won’t be defined by borders or territory. It will be written in code. And much of that code isn’t controlled by governments at all — it belongs to American companies. Onstage were two men who have spent their careers thinking about power in its most muscular forms: Paul Nakasone, the former head of U.S. Cyber Command and the National Security Agency, and Dag Baehr of Germany’s foreign intelligence service, the Bundesnachrichtendienst. – https://therecord.media/munich-silicon-valley-cyber-defense
US wants cyber partnerships to send ‘coordinated, strategic message’ to adversaries
(Alexander Martin – The Record) The United States wants allies and industry partners to work alongside it in cyberspace to confront the most significant threats, a senior White House cyber official said Thursday in a discussion opening the Munich Cyber Security Conference. National Cyber Director Sean Cairncross, who is leading a U.S. delegation including representatives from nearly every branch of government, said Washington is looking to deepen cooperation with partners rather than act alone. He echoed a line coined by Secretary of State Marco Rubio, saying the U.S. “America first” approach does not mean “America alone.” – https://therecord.media/us-wants-cyber-partnerships-to-send-message-to-adversaries
Fake AI Assistants in Google Chrome Web Store Steal Passwords and Spy on Emails
(Danny Palmer – Infosecurity Magazine) Over 260,000 Google Chrome users have downloaded fake AI assistants designed to deliver malicious browser extensions which can steal login credentials, monitor emails and enable remote access by attackers. Over 30 Google Chrome extensions designed to deliver the phoney AI assistants have been identified by cybersecurity researchers at LayerX, who describe the campaign as a “single coordinated operation.” “Notably, several of the extensions in this campaign were featured by the Chrome Web Store, increasing their perceived legitimacy and exposure,” they said. – https://www.infosecurity-magazine.com/news/fake-ai-assistants-google-chrome/
Safeguarding Solar Energy Through Smarter Cybersecurity
(Christelle Barnes – Infosecurity Magazine) For decades, Europe’s energy grid was centralized and analogue, powered by large, highly regulated plants. However, the rapid growth of solar and other renewables has created a decentralized, digital network of smaller sources, the majority of which lack the same security oversight. While large utility-scale solar plants of over 100MW are typically subject to stricter rules, the majority of Europe’s utility-scale solar power comes from sites of less than 100MW. According to data analytics company Wood Mackenzie, half of that power comes from plants which produce less than 25MW each. The smaller the site, the less likely it is to fall under existing cybersecurity regulations. – https://www.infosecurity-magazine.com/opinions/safeguarding-solar-through-smarter/
Google: state-backed hackers exploit Gemini AI for cyber recon and attacks
(Pierluigi Paganini – Security Affairs) Google DeepMind and GTIG report a rise in model extraction or “distillation” attacks aimed at stealing AI intellectual property, which Google has detected and blocked. While APT groups have not breached frontier models, private firms and researchers have tried to clone proprietary systems. State-backed actors from North Korea, Iran, China, and Russia use AI for research, targeting, and phishing. Threat actors also test agentic AI, AI-powered malware like HONESTCUE, and underground “jailbreak” services. Threat actors now use large language models to craft polished, culturally accurate phishing messages that remove common red flags like poor grammar. They also run “rapport-building” phishing, holding realistic multi-step conversations to gain trust before delivering malware. – https://securityaffairs.com/187958/ai/google-state-backed-hackers-exploit-gemini-ai-for-cyber-recon-and-attacks.html
Millions of TikTok and Telegram posts are secretly manipulating Germans
(Marcus Walsh – Cybernews) A survey has revealed that over 3 million social media posts featuring extremist content have been used to exploit German political thinking across various channels on TikTok and Telegram. The survey by predictive narrative intelligence platform Repsense analyzed 3.1 million pieces of content across TikTok and Telegram in Germany, aiming to detect coordinated narrative operations and information influence campaigns. Crucially, the researchers found an optimum 12-48 hour window in which content first appears on Telegram, where the narrative is coordinated, before it spreads and peaks across channels on TikTok. 97% of the content analysed in the study appeared across both Telegram and TikTok, reinforcing the ideologies further. Some narratives originate in Russian-language Telegram channels, but the study does not say these are state-directed or attribute them to the Kremlin or any other organization. Short-form videos often slip through the radar when it comes to robust defence planning in organizations such as NATO and EU institutions. – https://cybernews.com/news/germany-tiktok-telegram-politics/
Security experts warn Discord age checks create “identity honey pot” as teens find bypasses
(Ann-Marie Corvin – Cybernews) Red teamers warn that Discord’s push toward stricter safety controls is colliding with a familiar truth: when platforms build barriers, users look for ways around them, and attackers look for ways in. As Nic Adams, CEO of 0rcus, a specialist in non-attributable operations and offensive system design, warns: “Every platform adopting mandatory age verification is building a centralized identity honey pot. It’s not a question of if these systems get targeted again, but when.” His comments follow Discord’s Monday announcement that it would begin rolling out “teen-by-default” settings in March, part of a global safety push that will require adults to prove their age to access sensitive content and adult-only spaces. – https://cybernews.com/security/discord-age-checks-teens-bypass-identity-honey-pot/
Europe may roll back some of its strict privacy rules
(Anton Mous – Cybernews) The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) are concerned that the Digital Omnibus goes far beyond a technical amendment of the General Data Protection Regulation (GDPR). Instead, the European supervisors feel it will affect individuals’ fundamental rights. In November 2025, the European Commission introduced the Digital Omnibus, a set of proposals to simplify the existing rules on AI, cybersecurity, and data protection. The executive branch of the EU intends to change the definition of personal data. The proposal allows the processing of special categories of personal data for verification purposes. It also stipulates that ‘legitimate interest’ should be a legal basis for developing and training AI models. – https://cybernews.com/privacy/europe-roll-back-strict-privacy-rules-gdpr/
Hackers haven’t replaced humans with AI yet, but they’re certainly trying
(Anton Mous – Cybernews) Cybercriminals and state-sponsored hacking groups are increasingly using artificial intelligence to carry out cyberattacks or phishing campaigns. Google first reported that attackers were testing AI tools in real-world operations in late 2025. The latest Google Threat Intelligence report suggests that experimentation is continuing and maturing. According to Google’s Threat Intelligence Group (GTIG) and Google DeepMind, the use of AI hasn’t yet led to game-changing attacks or “breakthrough capabilities” that fundamentally alter the threat landscape. – https://cybernews.com/ai-news/hackers-havent-replaced-humans-ai-certainly-trying/
War on Minds: Artificial Intelligence and the Information Environment
(The Soufan Center) Generative AI is hollowing out the information environment, creating a crisis of authenticity and trust that is both a result of and a facilitator for information operations by malicious actors. State and nonstate actors are naturally exploiting AI to scale information operations through mass synthetic content production, automated bot dissemination, and the manipulation of recommendation algorithms that feed netizens what they see on their digital feeds. Large Language Models occasionally generate responses that directly link to websites and social media content that are verifiably part of a foreign information operation. An AI-replete information environment erodes cognitive security, fostering confusion and mistrust to the point where audiences may cease believing any source at all, which various adversaries consider an end in itself. – https://thesoufancenter.org/intelbrief-2026-february-12/
LummaStealer activity spikes post-law enforcement disruption
(Pierluigi Paganini – Security Affairs) Bitdefender observed renewed LummaStealer activity, proving the MaaS infostealer recovered after 2025 takedowns. Active since 2022, it relies on affiliates, social engineering, fake cracked software, and fake CAPTCHA “ClickFix” lures. CastleLoader plays a key role in spreading it. Shared infrastructure suggests coordination between the two operations. In May 2025, a US court order, executed with Europol and Japan’s JC3, dismantled the Lumma Stealer operation, seizing 2,300 domains used for command-and-control, taking down control panels, and blocking dark web markets offering the infostealer. Microsoft’s Digital Crimes Unit sinkholed over 1,300 domains to reroute victims to safe servers for analysis and cleanup. – https://securityaffairs.com/187896/uncategorized/lummastealer-activity-spikes-post-law-enforcement-disruption.html
Volvo Group hit in massive Conduent data breach
(Pierluigi Paganini – Security Affairs) A data breach at business services provider Conduent has impacted at least 25 million people, far more than initially reported. Volvo Group North America confirmed that the security breach exposed data of nearly 17,000 of its employees, making it one of several major companies affected by the large-scale breach. SecurityWeek reports that the breach now affects far more people than first thought: 15 million individuals are impacted in Texas (up from 4 million), and over 10 million individuals in Oregon are also affected. In November 2025, the company confirmed that the January 2025 breach exposed the personal data of over 10 million people, including names, addresses, dates of birth, SSNs, and health and insurance information. – https://securityaffairs.com/187875/security/volvo-group-hit-in-massive-conduent-data-breach.html
Reynolds ransomware uses BYOVD to disable security before encryption
(Pierluigi Paganini – Security Affairs) Researchers found a new ransomware, named Reynolds, that implements the Bring Your Own Vulnerable Driver (BYOVD) technique to disable security tools and evade detection before encrypting systems. Broadcom’s cybersecurity researchers initially attributed the attack to Black Basta due to similar tactics, but further analysis confirmed the payload was Reynolds, a new ransomware family. The campaign stands out because it embeds the BYOVD component directly inside the ransomware. Instead of deploying a separate tool to disable security software, Reynolds bundles the vulnerable NsecSoft driver within its payload to evade detection. – https://securityaffairs.com/187869/security/reynolds-ransomware-uses-byovd-to-disable-security-before-encryption.html
World Leaks Ransomware Group Adds Stealthy, Custom Malware ‘RustyRocket’ to Attacks
(Danny Palmer – Infosecurity Magazine) World Leaks, the cyber-criminal data extortion group which has targeted some of the world’s biggest companies, has added a novel, never-before-seen malware to its arsenal, research by Accenture Cybersecurity has revealed. Accenture has named the malware ‘RustyRocket’. It allows World Leaks to stealthily maintain persistence on networks and forms a key part of the extortion group’s attacks. “The sophisticated toolset is a critical component of World Leaks’ operations and has functioned entirely under the radar, enabling affiliates to stealthily exfiltrate data and proxy traffic across victim environments,” T. Ryan Whelan, MD and global head of Accenture cyber intelligence said in a LinkedIn post, which revealed the research. – https://www.infosecurity-magazine.com/news/world-leaks-ransomware-rustyrocket/
Nation-State Hackers Embrace Gemini AI for Malicious Campaigns, Google Finds
(Kevin Poireault – Infosecurity Magazine) Many government-backed cyber threat actors now use AI throughout the attack lifecycle, especially for reconnaissance and social engineering, a new Google study found. In a report published on February 12, ahead of the Munich Security Conference, Google Threat Intelligence Group (GTIG) and Google DeepMind shared new findings on how cybercriminals and nation-state groups used AI for malicious purposes during the last quarter of 2025. The researchers observed a wide range of AI misuse by advanced persistent threat (APT) groups. They used AI for tasks including coding and scripting, gathering information about potential targets, researching publicly known vulnerabilities and enabling post-compromise activities. – https://www.infosecurity-magazine.com/news/nation-state-hackers-gemini-ai/
AI Skills Represent Dangerous New Attack Surface, Says TrendAI
(Phil Muncaster – Infosecurity Magazine) The so-called “AI skills” used to scale and execute AI operations are dangerously exposed to data theft, sabotage and disruption, TrendAI has warned. The newly named business unit of Trend Micro explained in a report published this week that AI skills are artifacts combining human-readable text with instructions that large language models (LLMs) can read and execute. “AI skills encapsulate everything, from elements like human expertise, workflows, and operational constraints, to decision logic,” the report explained. “By capturing this knowledge into something executable, AI skills enable organizations to achieve scalability and knowledge transfer at previously unattainable levels.” – https://www.infosecurity-magazine.com/news/ai-skills-dangerous-new-attack/
North Korean Hackers Use Deepfake Video Calls to Target Crypto Firms
(Danny Palmer – Infosecurity Magazine) A North Korean hacking campaign is targeting financial technology and cryptocurrency firms with attacks which combine social engineering, deepfakes and macOS malware. The attacks have been detailed by Google Cloud’s Mandiant Threat Intelligence, which has attributed the campaign to UNC1069, a financially motivated threat group working out of North Korea. The end goal of the attacks is to steal cryptocurrency. Researchers identified one campaign which began with the hijacked Telegram profile of a cryptocurrency executive whose account had previously been compromised. – https://www.infosecurity-magazine.com/news/north-korea-hackers-deepfake-crypto/
Senegal shuts National ID office after ransomware attack
(Pierluigi Paganini – Security Affairs) Senegal confirmed a cyberattack on the Directorate of File Automation, the government office that manages national ID cards, passports, and biometric data. After ransomware claims surfaced, authorities temporarily closed the office to contain the incident. The agency warned the country’s 19.5 million residents that operations were suspended while officials assessed the impact and worked to restore services securely. The authorities sought to reassure citizens, stating that the incident did not affect the integrity of their data. A new ransomware group called Green Blood Group claimed it breached the agency and stole 139 GB of data, including citizen records, biometric information, and immigration documents. The group published a list of documents and backup files as proof of the hack. – https://securityaffairs.com/187811/data-breach/senegal-shuts-national-id-office-after-ransomware-attack.html
Phorpiex Phishing Delivers Low-Noise Global Group Ransomware
(Alessandro Mascellino – Infosecurity Magazine) A high-volume phishing campaign delivering the long-running Phorpiex malware has been observed using emails with the subject line “Your Document,” a lure widely seen throughout 2024 and 2025. The messages include an attachment that appears to be a harmless document but is actually a weaponised Windows Shortcut file designed to initiate a multi-stage infection chain. According to a new advisory by Forcepoint, the campaign relies on the continued effectiveness of Windows shortcut (.lnk) files as an initial access vector and their role in delivering Global Group ransomware, a stealthy, offline-capable ransomware-as-a-service (RaaS) operation. – https://www.infosecurity-magazine.com/news/phorpiex-phishing-global-group/
“Digital Parasite” Warning as Attackers Favor Stealth for Extortion
(Phil Muncaster – Infosecurity Magazine) Threat actors favored stealthy persistence and evasion over other techniques, in order to silently exfiltrate data for extortion, according to Picus Security. The security vendor analyzed over 1.1 million malicious files and more than 15.5 million actions in 2025 to compile its latest study: The Red Report 2026. It revealed the increasingly sophisticated methods that threat actors are using to stay hidden from network defenders – by blending in with legitimate traffic and operating through trusted processes. – https://www.infosecurity-magazine.com/news/digital-parasite-attackers-stealth/
New Mobile Spyware ZeroDayRAT Targets Android and iOS
(Alessandro Mascellino – Infosecurity Magazine) A new mobile spyware operation known as ZeroDayRAT has been documented targeting both Android and iOS devices. The cross-platform tool provides attackers with persistent access to personal communications, precise location data and banking activity. According to a new advisory published by iVerify, what’s new is the breadth of control offered to operators and how easily infections can be initiated. To compromise a device, an attacker must simply persuade a victim to install a malicious binary, typically an Android APK or an iOS payload. Smishing remains the most common lure, with text messages pushing links to fake but convincing apps. Phishing emails, counterfeit app stores and links shared through WhatsApp or Telegram have also been observed. – https://www.infosecurity-magazine.com/news/zerodayrat-mobile-spyware-android/
Singapore Takes Down Chinese Hackers Targeting Telco Networks
(Kevin Poireault – Infosecurity Magazine) The Singapore government disrupted cyber-attacks attributed to Chinese-nexus cyber threat group UNC3886 which targeted the country’s four telecommunications operators. The law enforcement operation, dubbed Operation Cyber Guardian, spanned from the summer of 2025 to early 2026 but remained secret until now. The Cyber Security Agency of Singapore (CSA) revealed what happened in a report published on February 9, 2026. – https://www.infosecurity-magazine.com/news/singapore-takes-down-china-hackers/
NCSC Issues Warning Over “Severe” Cyber-Attacks Targeting Critical National Infrastructure
(Danny Palmer – Infosecurity Magazine) The National Cyber Security Centre (NCSC) has issued an alert to critical national infrastructure (CNI) providers, urging them to act now to protect against “severe” cyber threats. The alert comes following coordinated cyber-attacks which targeted Poland’s energy infrastructure with malware in December. Jonathan Ellison, director for national resilience at the NCSC, has urged CNI operators to act now to ensure they can respond to any similar campaigns targeting UK critical infrastructure. “Cyber-attacks disrupting everyday essential services may sound far-fetched, but we know it’s not,” he wrote in a LinkedIn post. – https://www.infosecurity-magazine.com/news/ncsc-warning-severe-cyberattacks/
European Governments Breached in Zero-Day Attacks Targeting Ivanti
(Phil Muncaster – Infosecurity Magazine) Several European government institutions appear to have been targeted in a coordinated campaign designed to steal data on mobile users, it has emerged. First reported late last week, the incidents occurred at the European Commission, the Finnish government, and at least two Dutch government agencies. Tens of thousands of users may have had their personal details exposed. Only the Dutch authorities named the likely target – Ivanti Endpoint Manager Mobile (EPMM) – which has previously been compromised by likely Chinese state actors in attacks on the Norwegian government. However, the timing would suggest a link between all three breaches. – https://www.infosecurity-magazine.com/news/european-governments-zeroday/
VoidLink Malware Exhibits Multi-Cloud Capabilities and AI Code
(Alessandro Mascellino – Infosecurity Magazine) A Linux-based command-and-control (C2) framework capable of long-term intrusion across cloud and enterprise environments has been further analyzed in new research. Known as VoidLink, the malware generates implant binaries designed for credential theft, data exfiltration and stealthy persistence on compromised systems. The new analysis, published by Ontinue on February 9, focused on the VoidLink agent, the component deployed on victim machines. While technically advanced, the implant contains unusual development artefacts suggesting it was produced using a large language model (LLM) coding agent with limited human review. The researchers point to structured “Phase X:” labels, verbose debug logs and documentation left inside the production binary as key indicators. – https://www.infosecurity-magazine.com/news/voidlink-malware-multi-cloud-ai/
Social Media Platforms Earn Billions from Scam Ads
(Phil Muncaster – Infosecurity Magazine) Social media sites received nearly £3.8bn ($5.2bn) in revenue from malicious ads in Europe in 2025, off the back of almost one trillion impressions, according to Juniper Research. The analyst used publicly available data to study ads on Facebook, Instagram, TikTok, Snapchat, X (formerly Twitter) and LinkedIn, across 11 European markets including the UK. It defined a scam ad as a “deceptive paid post that misleads users into giving money, personal information, or account access by falsely advertising products, services, or investment opportunities.” – https://www.infosecurity-magazine.com/news/social-media-platforms-billions/
Researchers Find 40,000+ Exposed OpenClaw Instances
(Phil Muncaster – Infosecurity Magazine) Widespread misconfiguration of popular AI assistant OpenClaw means many instances are exposed to the public-facing internet, SecurityScorecard has warned. The security vendor said it found 40,214 such instances of the tool, formerly known as Clawdbot and Moltbot, although the figure is still rising. They are associated with 28,663 unique IP addresses. The exposed AI agents could enable threat actors to gain full access to potentially sensitive systems the OpenClaw instance is able to interact with. – https://www.infosecurity-magazine.com/news/researchers-40000-exposed-openclaw/
US Agencies Told to Scrap End of Support Edge Devices
(Beth Maundrill – Infosecurity Magazine) Amid exploitation campaigns targeting end-of-support (EOS) edge devices, the US’ leading cybersecurity agency has issued a directive to decommission all such devices within 12 months. On February 5, the Cybersecurity and Infrastructure Security Agency (CISA) published Binding Operational Directive 26-02: Mitigating Risk From End-of-Support Edge Devices. The directive applies to all civilian federal executive branch departments and agencies. – https://www.infosecurity-magazine.com/news/us-agencies-scrap-end-of-support/
Defence, Military, and Warfare
The Afghan Taliban’s ‘Digital War’ Against Pakistan
(Rahim Nasar – The Jamestown Foundation) On October 9, 2025, Pakistan allegedly carried out airstrikes in Kabul targeting the leadership of Tehreek-e-Taliban Pakistan (TTP), which it accused the Afghan Taliban of harboring. The Taliban has responded with a digital war against Pakistan, using controlled social media platforms, insurgent poetry, and militant and jihadist anthems to reframe Pakistan-Afghanistan relations in ethno-nationalist, jihadist, and territorial terms. The digital war campaign intends to deconstruct the power of the Pakistani military, undermine Pakistanis’ trust in security institutions, fuel ethnic division, and build an anti-Pakistan perception at the regional level. – https://jamestown.org/the-afghan-talibans-digital-war-against-pakistan/
How Russia Is Reshaping Command and Control for AI-Enabled Warfare
(Kateryna Bondar – CSIS) This paper examines how Russia is transforming its command and control (C2) architecture under wartime pressure, how these changes shape the country’s incremental move toward battlefield-required software solutions, and what lessons U.S. policymakers can learn from Russia’s experiences. Focusing on both strategic ambitions and battlefield practice, the takeaways below summarize how automated C2 systems, unmanned platform management software, and emerging AI applications are being developed, adapted, and scaled within Russia’s military ecosystem. – https://www.csis.org/analysis/how-russia-reshaping-command-and-control-ai-enabled-warfare
Frontiers and Markets
Why the future of AI belongs to models that simulate reality
(Lara Bryant – Sifted) Since OpenAI released ChatGPT to the public just over three years ago, large language models (LLMs) have dominated discussion of AI, powering everything from chatbots to code assistants. Yet LLMs mostly operate as pattern recognisers: they predict the next word or token based on vast amounts of historical data, rather than maintaining an understanding of how the physical world works or how actions unfold over time. A growing number of founders, investors and researchers argue that the next wave of AI will need something more: systems that build internal models of their surroundings, reason about cause and effect and use that understanding to plan actions under uncertainty. These systems, known as ‘world models’, ‘physical’ or ‘embodied’ AI, are designed not just to describe reality but to simulate it, updating their beliefs as new sensory data arrives. That makes them particularly suited to tasks where AI needs to act autonomously in complex environments. – https://sifted.eu/articles/future-of-ai-models-brnd
Researchers propose a self-distillation fix for ‘catastrophic forgetting’ in LLMs
(Prasanth Aby Thomas – Computer World) A new fine-tuning technique aims to solve “catastrophic forgetting,” a limitation that often complicates repeated model updates in enterprise deployments. Researchers at MIT, the Improbable AI Lab, and ETH Zurich have introduced a fine-tuning method designed to let models learn new tasks while preserving previously acquired capabilities. To prevent degrading existing capabilities, many organizations isolate new tasks into separate fine-tuned models or adapters. That fragmentation increases costs and adds governance complexity, requiring teams to continually retest models to avoid regression. – https://www.computerworld.com/article/4131253/researchers-propose-a-self-distillation-fix-for-catastrophic-forgetting-in-llms-2.html
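The article doesn’t give the researchers’ exact formulation, but self-distillation schemes of this kind commonly add a penalty that keeps the updated model’s predictions close to those of a frozen snapshot of itself while it learns the new task. A toy sketch of that pattern (the function names, the KL penalty, and the weight `lam` are our illustrative choices, not the paper’s method):

```python
import math

def kl_div(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def self_distill_loss(task_loss, student_probs, frozen_probs, lam=0.5):
    """Combined objective: optimize the new task while penalizing drift
    from a frozen pre-update snapshot of the model's own predictions."""
    return task_loss + lam * kl_div(frozen_probs, student_probs)

# Illustrative numbers: predictions on a probe input before and after an update.
frozen = [0.7, 0.2, 0.1]   # frozen snapshot (pre-update)
student = [0.6, 0.3, 0.1]  # current model (post-update)
loss = self_distill_loss(task_loss=0.42, student_probs=student, frozen_probs=frozen)
```

When the updated model drifts from its snapshot, the KL term grows and pulls it back, which is the mechanism such methods use to keep old capabilities intact without maintaining separate adapters per task.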
AI predicts walking recovery after hip replacement surgery
(News Medical Life Sciences) Artificial intelligence can help to predict how well patients with hip osteoarthritis will be able to walk again after an operation. Researchers at Karlsruhe Institute of Technology (KIT) have developed an AI model to analyze movement patterns. This gait biomechanics analysis also enables rehabilitation programs to be tailored to patients’ personal needs. The researchers consider it possible that this approach, developed for the hip joint, could be extended to other joints in the future. – https://www.news-medical.net/news/20260212/AI-predicts-walking-recovery-after-hip-replacement-surgery.aspx
Artificial intelligence improves detection of dangerous pregnancy condition
(News Medical Life Sciences) A novel artificial intelligence (AI) model accurately detected the presence of placenta accreta spectrum (PAS), a dangerous pregnancy condition that often goes undetected with current screening methods, according to new research presented today at the Society for Maternal-Fetal Medicine (SMFM) 2026 Pregnancy Meeting™. PAS is a leading cause of maternal mortality and morbidity, but only half of all cases are diagnosed during pregnancy, researchers say. – https://www.news-medical.net/news/20260212/Artificial-intelligence-improves-detection-of-dangerous-pregnancy-condition.aspx
AI “Mind Control” Can Stop Animal Behaviors in a Split Second
(Neuroscience News) Researchers developed an advanced AI system named YORU that can identify specific animal behaviors with over 90% accuracy across multiple species. By combining this high-speed recognition with optogenetics, the team successfully demonstrated the ability to shut down specific brain circuits in real-time using targeted light. This breakthrough allowed scientists to silence a fruit fly’s “love song” mid-performance, proving that the system can isolate and control an individual’s neural activity within a social group. Ultimately, the tool is designed to help researchers worldwide map how specific brain cells drive complex social interactions in ants, mice, and fish. – https://neurosciencenews.com/ai-animal-behavior-30091/
Microsoft turns to superconductors for distributing power to its AI data centers — zero-resistance cables could reduce power losses and produce zero heat
(Jowi Morales – Tom’s Hardware) Microsoft is currently looking at high-temperature superconductors (HTS) for transmitting the massive amounts of electricity that it needs for its data centers. According to the company blog, since superconductors have zero resistance, adoption of that exotic tech would mean that the HTS cables would not suffer voltage drops or generate heat as electricity travels through them. The advantages of HTS cables mean that they can be lighter and take up less space compared to traditional copper and aluminum wires. For example, overhead lines typically need 70 meters of space to prevent the electrical fields of the individual cables from interfering with each other, among other reasons. HTS cables, on the other hand, only require a 2-meter-wide trench. HTS has been studied for several decades now, but it seems that recent advancements have made it more viable to deploy at scale. The biggest challenge that this technology faces is the cryogenic cooling required to keep the conductors at their optimal temperature. Classic elemental superconductors, like mercury, need to operate below 10 Kelvin — that’s around -263 degrees C or less than -440 degrees F. And even though HTS do not need to stay as cool as traditional ones, conductors made with those materials still require temperatures around -200 degrees C or less than -320 degrees F. – https://www.tomshardware.com/desktops/servers/microsoft-turns-to-superconductors-for-distributing-power-to-its-ai-data-centers-zero-resistance-cables-could-reduce-power-losses-and-produce-zero-heat
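The zero-resistance claim follows from the standard resistive-loss formula P = I²R: with R = 0, the heat term vanishes entirely. A quick illustrative calculation (the current and resistance figures below are our assumptions, not from the article or Microsoft’s blog):

```python
def transmission_loss_watts(current_amps, resistance_ohms):
    """Resistive power dissipated along a cable: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

# Illustrative figures only: a 2,000 A feed over a run with
# 0.05 ohm total resistance, vs. an ideal zero-resistance HTS cable.
copper_loss = transmission_loss_watts(2_000, 0.05)  # 200,000 W lost as heat
hts_loss = transmission_loss_watts(2_000, 0.0)      # 0 W in the cable itself
```

In practice, the power drawn by the cryogenic cooling plant has to be netted against this saving, which is exactly the deployment challenge the article identifies.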
Using synthetic biology and AI to address global antimicrobial resistance threat
(Daniel J. Darling – MIT News) James J. Collins, the Termeer Professor of Medical Engineering and Science at MIT and faculty co-lead of the Abdul Latif Jameel Clinic for Machine Learning in Health, is embarking on a multidisciplinary research project that applies synthetic biology and generative artificial intelligence to the growing global threat of antimicrobial resistance (AMR). The research project is sponsored by Jameel Research, part of the Abdul Latif Jameel International network. The initial three-year, $3 million research project in MIT’s Department of Biological Engineering and Institute of Medical Engineering and Science focuses on developing and validating programmable antibacterials against key pathogens. – https://news.mit.edu/2026/using-synthetic-biology-ai-address-global-antimicrobial-resistance-0211
Cisco president warns AI agents need ‘background checks’ like human employees
(Pascale Davies – Euronews) In an interview with Euronews Next, Cisco’s president Jeetu Patel reveals the company has built its first product with 100% AI-generated code and warns that AI agents acting as ‘digital coworkers’ will need background checks and billions in security investment to prevent them from going rogue. – https://www.euronews.com/next/2026/02/11/cisco-president-warns-ai-agents-need-background-checks-like-human-employees
Researchers Studied What Happens When Workplaces Seriously Embrace AI, and the Results May Make You Nervous
(Frank Landymore – Futurism) Even if AI is — or eventually becomes — an incredible automation tool, will it make workers’ lives easier? That’s the big question explored in an ongoing study by researchers from UC Berkeley’s Haas School of Business. And so far, it’s not looking good for the rank and file. In a piece for Harvard Business Review, the research team’s Aruna Ranganathan and Xinqi Maggie Ye reported that after closely monitoring a tech company with two hundred employees for eight months, they found that AI actually intensified the work they had to do, instead of reducing it. – https://futurism.com/artificial-intelligence/what-happens-workplaces-embrace-ai
New AI model eliminates false positives in food testing
(News Medical Life Sciences) Researchers have significantly enhanced an artificial intelligence tool used to rapidly detect bacterial contamination in food by eliminating misclassifications of food debris that looks like bacteria. Current methods to detect contamination of foods such as leafy greens, meat and cheese, which typically involve cultivating bacteria, often require specialized expertise and are time consuming – taking several days to a week. Luyao Ma, an assistant professor at Oregon State University, and her collaborators from the University of California, Davis, Korea University and Florida State University, have developed a deep learning-based model for rapid detection and classification of live bacteria using digital images of bacteria microcolonies. The method enables reliable detection within three hours. – https://www.news-medical.net/news/20260210/New-AI-model-eliminates-false-positives-in-food-testing.aspx
Now A.I. could decide whether criminals get jail terms… or go free
(Graham Grant – Daily Mail) Artificial intelligence should be used to help gauge the risk of letting criminals go free or dodge prison, a government adviser has said. Martyn Evans, chairman of the Sentencing and Penal Policy Commission, said AI would have a ‘role’ in the criminal justice system and could be used by judges making decisions about whether to jail offenders. AI programmes could look at whether someone is safe to be released early into the community or avoid a jail term in favour of community service – despite concern over its accuracy and tendency to ‘hallucinate’ or make up wrong information. – https://www.dailymail.co.uk/news/article-15540515/Now-decide-criminals-jail-terms-free.html
Learnovate Centre to lead new Community of Practice for AI practitioners
(George Morahan – Business Plus) Learnovate, the learning research hub at Trinity College Dublin, is leading a new Community of Practice for artificial intelligence implementers and practitioners involved in teaching and learning. The Responsible AI for Learning (RAIL) initiative is intended to allow practitioners to share knowledge, interpret guidelines, and comply with AI regulations. Learnovate will lead the RAIL initiative, which is made up of professionals from all four education domains, including schools, higher education, vocational education and training, and professional education, as well as representatives from the Department of Education, teaching unions, and other sectors. – https://businessplus.ie/news/learnovate-community-practice-ai/
Reliability of LLMs as medical assistants for the general public: a randomized preregistered study
(Nature Medicine) Global healthcare providers are exploring the use of large language models (LLMs) to provide medical advice to the public. LLMs now achieve nearly perfect scores on medical licensing exams, but this does not necessarily translate to accurate performance in real-world settings. We tested whether LLMs can assist members of the public in identifying underlying conditions and choosing a course of action (disposition) in ten medical scenarios in a controlled study with 1,298 participants. Participants were randomly assigned to receive assistance from an LLM (GPT-4o, Llama 3, Command R+) or a source of their choice (control). Tested alone, LLMs complete the scenarios accurately, correctly identifying conditions in 94.9% of cases and disposition in 56.3% on average. However, participants using the same LLMs identified relevant conditions in fewer than 34.5% of cases and disposition in fewer than 44.2%, both no better than the control group. We identify user interactions as a challenge to the deployment of LLMs for medical advice. Standard benchmarks for medical knowledge and simulated patient interactions do not predict the failures we find with human participants. Moving forward, we recommend systematic human user testing to evaluate interactive capabilities before public deployments in healthcare. – https://www.nature.com/articles/s41591-025-04074-y
Singapore – Singtel’s data centre arm Nxera opens its largest data centre in Tuas
(Sue-Ann Tan – The Straits Times) Singtel Group’s regional data centre arm Nxera announced the opening of its largest data centre in Tuas on Feb 9. It also said it is its most energy-efficient centre. The centre, DC Tuas, is Singapore’s highest power-density data centre, with 58MW of artificial intelligence (AI)-ready capacity. – https://www.straitstimes.com/business/companies-markets/singtels-data-centre-arm-nxera-opens-its-largest-data-centre-in-tuas
When AI meets Physics: Unlocking complex protein structures to accelerate biomedical breakthroughs
(NUS News) Artificial intelligence (AI) is transforming how scientists understand proteins — these are working molecules that drive nearly every process in the human body, from cell growth and immune defence to digestion and cell signalling. At NUS, researchers are harnessing AI to fast-track discoveries, offering fresh insights into life at the molecular level and new strategies against disease. A protein’s function is dictated by its three-dimensional (3D) shape, which determines how it interacts with other molecules and provides crucial clues to how diseases develop and could be treated. However, determining these structures experimentally is often time-consuming and costly. A team led by Professor Zhang Yang, who is from NUS’ Cancer Science Institute of Singapore, School of Computing and Yong Loo Lin School of Medicine, has developed D-I-TASSER, a new software tool that predicts the 3D shapes of complex proteins more accurately, supporting faster drug discovery, improved disease research and more precise design of targeted therapies. – https://news.nus.edu.sg/ai-unlocks-complex-protein-structures/
AI stethoscope doubles detection of serious valve disease in primary care study
(Hugo Francisco de Souza – News Medical Life Sciences) In a recent prospective study published in the European Heart Journal Digital Health, researchers compared the diagnostic accuracy of primary care providers using standard stethoscopes with that of a relatively novel artificial intelligence (AI) enabled digital stethoscope. The study aimed to determine whether the latter could improve the accuracy of current diagnoses of valvular heart disease (VHD). The study found that the AI system demonstrated a sensitivity of 92.3% for detecting audible VHD, compared with 46.2% for standard care (P = 0.01). Although the AI tool showed slightly lower specificity, it identified twice as many cases of previously undiagnosed moderate-to-severe disease, suggesting a role as a screening adjunct rather than a replacement for clinical assessment. – https://www.news-medical.net/news/20260208/AI-stethoscope-doubles-detection-of-serious-valve-disease-in-primary-care-study.aspx
I turned myself into an AI-generated deathbot – here’s what I found
(Amy Mackrill – BBC) If a loved one died tomorrow, would you want to keep talking to them? Not through memories or saved messages, but through artificial intelligence – a chatbot that uses their texts, emails and voice notes to reply in their tone and style. A growing number of technology companies now offer such services as part of the “digital afterlife” industry, which is worth more than £100bn, with some people using it as a way to deal with their grief. Cardiff University’s Dr Jenny Kidd has led research on so-called deathbots, published in the Cambridge University Press journal Memory, Mind and Media, and described the results as both “fascinating and unsettling”. – https://www.bbc.com/news/articles/c93wjywz5p5o