Daily Digest on AI and Emerging Technologies (22 October 2025)

Governance

PAHO issues new guide on designing AI prompts for public health

(DigWatch – 21 October 2025) The Pan American Health Organization (PAHO) has released a guide, “AI prompt design for public health”, with practical advice on creating effective AI prompts. The guide helps professionals use AI responsibly to generate accurate and culturally appropriate content. PAHO says generative AI aids in public health alerts, reports, and educational materials, but its effectiveness depends on clear instructions. The guide highlights that well-crafted prompts enable AI systems to generate meaningful content efficiently, reducing review time while maintaining quality. – https://dig.watch/updates/paho-issues-new-guide-on-designing-ai-prompts-for-public-health ; https://www.paho.org/en/news/20-10-2025-paho-publishes-guide-designing-artificial-intelligence-instructions-public-health
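The guide's emphasis on clear instructions can be illustrated with a structured prompt template. The sketch below is illustrative only; the field names (role, audience, task, constraints) are assumptions for the example, not taken from PAHO's actual guidance.

```python
# Illustrative sketch of a structured prompt for a public health alert.
# The fields (role, audience, task, constraints) are assumed for this
# example and are not PAHO's actual template.

def build_prompt(role: str, audience: str, task: str, constraints: list[str]) -> str:
    """Assemble a prompt that makes role, audience, task, and constraints explicit."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"
        f"Audience: {audience}.\n"
        f"Task: {task}\n"
        f"Constraints:\n{constraint_lines}"
    )

prompt = build_prompt(
    role="a public health communicator",
    audience="adults with no medical background",
    task="Draft a short alert about boil-water advisories after flooding.",
    constraints=[
        "Use plain language at a sixth-grade reading level.",
        "Keep the alert under 120 words.",
        "Do not speculate beyond the facts provided.",
    ],
)
print(prompt)
```

Spelling out the audience and constraints up front is what lets a reviewer check the output quickly, which is the review-time saving the guide points to.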

Meta changes WhatsApp terms to block third-party AI assistants

(DigWatch – 21 October 2025) Meta-owned WhatsApp has updated the terms of its Business API to forbid general-purpose AI chatbots from being hosted or distributed via its platform. The change will take effect on 15 January 2026. Under the revised terms, WhatsApp will not allow providers of AI or machine-learning technologies, including large language models, generative AI platforms, or general-purpose AI assistants, to use the WhatsApp Business Solution when such technologies are the primary functionality being provided. – https://dig.watch/updates/meta-changes-whatsapp-terms-to-block-third-party-ai-assistants ; https://techcrunch.com/2025/10/18/whatssapp-changes-its-terms-to-bar-general-purpose-chatbots-from-its-platform/

AI transforms Japanese education while raising ethical questions

(DigWatch – 21 October 2025) AI is reshaping Japanese education, from predicting truancy risks to teaching English and preserving survivor memories. Schools and universities nationwide are experimenting with systems designed to support teachers and engage students more effectively. In Saitama’s Toda City, AI analysed attendance, health records, and bullying data to identify pupils at risk of skipping school. During a 2023 pilot, it flagged more than a thousand students and helped teachers prioritise support for those most vulnerable. – https://dig.watch/updates/ai-transforms-japanese-education-while-raising-ethical-questions ; https://japantoday.com/category/tech/feature-ai-proving-useful-in-japanese-education-despite-overreach-concerns

Geostrategies

Africa’s AI Policy Ambitions Ignore Energy, Climate and Labor Concerns

(Vincent Obia – Tech Policy Press – 21 October 2025) AI strategies across Africa are becoming increasingly ambitious in response to the continent’s growing drive for AI development. This ambition is evident in plans to establish new data centres and expand computing capacity. Currently, at least 226 data centres are operational across 39 African countries, with many strategies outlining the goal of building more. Examples include Benin’s intention to upgrade its data centre to meet AI compliance standards and Egypt’s plan to construct a “cutting-edge domestic data centre.” While this momentum toward expanding AI capacity is understandable, it reveals a significant limitation: the near neglect of issues related to energy use, environmental impact, and labor exploitation. This finding is based on an August 2025 analysis of 14 publicly available AI strategies, both finalized and draft editions, released by the African Union, Benin, Egypt, Ethiopia, Ghana, Kenya, Lesotho, Mauritania, Mauritius, Nigeria, Rwanda, Senegal, South Africa, and Zambia. African governments must address this oversight, not least because of the message it sends to AI developers both within and beyond the continent, namely, that concerns about energy, climate, and labor can be sacrificed on the altar of unchecked AI advancement. – https://www.techpolicy.press/africas-ai-policy-ambitions-ignore-energy-climate-and-labor-concerns/

China leads the global generative AI adoption with 515 million users

(DigWatch – 21 October 2025) In China, the use of generative AI has expanded unprecedentedly, reaching 515 million users in the first half of 2025. The figure, released by the China Internet Network Information Centre, is more than double the number recorded in December 2024 and represents an adoption rate of 36.5 per cent. Such growth is driven by strong digital infrastructure and the state’s determination to make AI a central tool of national development. – https://dig.watch/updates/china-leads-the-global-generative-ai-adoption-with-515-million-users ; https://www.scmp.com/tech/tech-trends/article/3329667/chinas-generative-ai-user-base-doubles-515-million-6-months?module=top_story&pgtype=homepage

Legislation

Governing Frontier AI: California’s SB 53

(Lam Tran – Lawfare – 21 October 2025) In late September, California Gov. Gavin Newsom signed Senate Bill 53 (SB 53), the Transparency in Frontier Artificial Intelligence Act (TFAIA), making California the first U.S. state to enact legislation specifically aimed at regulating advanced AI systems. In the United States, technological development and adoption usually outpace regulatory actions. The passing of this legislation in California—home to most of the world’s leading AI companies and research labs—marks a key milestone in policymakers’ attempts to address the potential catastrophic risks posed by AI. With implementation scheduled for January 2026, SB 53 builds a governance architecture for frontier AI that emphasizes transparency, whistleblower protection, public infrastructure, and adaptive oversight, seeking to balance safety and innovation. As with the state’s privacy legislation, the 2018 California Consumer Privacy Act and 2020 California Privacy Rights Act—which are also America’s first comprehensive privacy laws—SB 53 reflects California’s leadership role in setting standards and norms for emerging technologies. – https://www.lawfaremedia.org/article/governing-frontier-ai–california-s-sb-53

AI Mormons incoming: Utah hopes new tech will improve government efficiency

(Cybernews – 21 October 2025) The US federal government is seemingly incapable of passing any meaningful legislation and regulations regarding the use of AI. Unsurprisingly, individual states are leading the way. In California, home to Silicon Valley and the world’s largest AI hub, Governor Gavin Newsom signed a new set of regulations for AI companies into law at the end of September, for example. But Utah, a deeply conservative state, is also not standing still – proving that individual states can play around with their own set of rules, at least until Washington wakes up. According to Axios, the Beehive State has rolled out Google Gemini to most employees. Utah’s commerce department is now also using AI to process international professional licenses, such as nursing, for state credentials. – https://cybernews.com/ai-news/utah-embraces-ai-government-services/

DOGE’s Plundering of Data Hastens Calls to Tighten Government Privacy Laws

(Gopal Ratnam – Tech Policy Press – 21 October 2025) For about a decade, lawmakers in Washington have sought to pass a comprehensive privacy law to prevent commercial platforms from misusing Americans’ data online. Now some in Congress and at the state level are increasingly raising alarm that the federal government is violating Americans’ privacy and calling for laws to prevent such abuse. Fears about Americans’ data being mishandled have ballooned in the wake of the push by the Elon Musk-formed Department of Government Efficiency, or DOGE, to force federal agencies to hand over sensitive data on United States citizens and residents, including social security numbers and the personal records of millions of federal employees and retirees. – https://www.techpolicy.press/doges-plundering-of-data-hastens-calls-to-tighten-government-privacy-laws/

Security and Surveillance

IAEA launches initiative to protect AI in nuclear facilities

(DigWatch – 21 October 2025) The International Atomic Energy Agency (IAEA) has launched a new research project to strengthen computer security for AI in the nuclear sector. The initiative aims to support safe adoption of AI technologies in nuclear facilities, including small modular reactors and other applications. AI and machine learning systems are increasingly used in the nuclear industry to improve operational efficiency and enhance security measures, such as threat detection. These technologies bring risks like data manipulation or misuse, requiring strong cybersecurity and careful oversight. – https://dig.watch/updates/iaea-launches-initiative-to-protect-ai-in-nuclear-facilities ; https://www.iaea.org/newscenter/news/new-research-project-on-computer-security-for-nuclear-ai

Singapore Officials Impersonated in Sophisticated Investment Scam

(Infosecurity Magazine – 21 October 2025) A large-scale scam operation impersonating Singapore’s top officials has been uncovered by cybersecurity experts. The operation uses verified Google Ads, fake news websites and deepfake videos to lure victims into a fraudulent investment platform. The scam falsely associates itself with Singapore prime minister Lawrence Wong and coordinating minister for national security K Shanmugam to appear credible. According to a report published by Group-IB today, the campaign specifically targeted Singapore residents by configuring Google Ads to appear only to local IP addresses. Victims who clicked on the ads were funneled through a chain of redirect sites designed to conceal the final fraudulent destination – a Mauritius-registered forex investment platform. – https://www.infosecurity-magazine.com/news/singapore-officials-investment-scam/

Ransomware Payouts Surge to $3.6m Amid Evolving Tactics

(Infosecurity Magazine – 21 October 2025) The average ransomware payment has increased to $3.6m this year, up from $2.5m in 2024 – a 44% surge despite a decline in the overall number of attacks. Findings from ExtraHop’s 2025 Global Threat Landscape Report point to a clear evolution in cybercriminal strategy: fewer, more targeted operations that aim for higher returns and longer-lasting impact. – https://www.infosecurity-magazine.com/news/ransomware-payouts-surge-dollar36m/

Russian Coldriver Hackers Deploy New ‘NoRobot’ Malware

(Infosecurity Magazine – 21 October 2025) The Russian-affiliated hacking group Coldriver has been observed deploying a new malware set, according to researchers at the Google Threat Intelligence Group (GTIG). This malware set, made of several families connected via a delivery chain, appears to have replaced Coldriver’s previous primary malware, LostKeys, since the latter was publicly disclosed in May 2025, said a GTIG report published on October 20. The researchers noted that the new set was used more aggressively than any previous malware campaign attributed to the group. This indicates a rapidly increased development and operations tempo from Coldriver, according to GTIG. – https://www.infosecurity-magazine.com/news/russian-coldriver-hackers-new/

Attackers abusing OAuth to maintain access long after passwords are reset

(Cybernews – 21 October 2025) Researchers at Proofpoint, a cybersecurity firm, have warned about real-world cyberattacks in which hackers maintain persistence by issuing OAuth tokens to their malicious web apps. Despite user attempts to reset passwords and enforce multifactor authentication, the OAuth token – a string of symbols issued to third-party apps that acts as a key – remains valid. Hackers can retain access to email and other accounts and wreak havoc. – https://cybernews.com/security/attackers-abusing-oauth-to-maintain-access-despite-password-resets/
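The mechanics behind this persistence can be sketched with a toy model: an issued OAuth token is validated against a token store, not against the user's password, so a password reset alone does not cut the attacker's access. This is an illustrative model under assumed names, not any real identity provider's implementation.

```python
# Toy model of OAuth grant persistence: tokens are checked against a token
# store, not the password, so only explicit revocation invalidates them.
# All class and account names here are illustrative assumptions.
import secrets

class AuthServer:
    def __init__(self):
        self.passwords = {}        # user -> password
        self.access_tokens = {}    # token -> (user, app)

    def grant_token(self, user: str, app: str) -> str:
        token = secrets.token_urlsafe(16)
        self.access_tokens[token] = (user, app)
        return token

    def reset_password(self, user: str, new_password: str) -> None:
        # Changing the password does NOT touch the token store.
        self.passwords[user] = new_password

    def validate(self, token: str) -> bool:
        return token in self.access_tokens

    def revoke_app_grants(self, user: str, app: str) -> None:
        # The step defenders actually need: revoke the app's tokens too.
        self.access_tokens = {
            t: (u, a) for t, (u, a) in self.access_tokens.items()
            if not (u == user and a == app)
        }

server = AuthServer()
token = server.grant_token("victim@example.com", "malicious-mail-app")
server.reset_password("victim@example.com", "n3w-p4ssw0rd!")
print(server.validate(token))   # still valid: reset alone does not cut access
server.revoke_app_grants("victim@example.com", "malicious-mail-app")
print(server.validate(token))   # invalid only after explicit revocation
```

This is why incident-response guidance for such abuse typically pairs a password reset with a review and revocation of third-party app consents.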

Hackers actively exploiting Windows SMB flaw, gaining SYSTEM privileges over networks

(Cybernews – 21 October 2025) The US cybersecurity agency CISA has added Microsoft Windows SMB client improper access control vulnerability (CVE-2025-33073) to its Known Exploited Vulnerabilities (KEV) catalog. This means that the flaw has become a frequent attack vector for cyberthreat actors and poses a significant risk. CISA updates its catalog based on evidence of active exploitation. – https://cybernews.com/security/hackers-exploit-windows-smb-flaw-cisa/

UK actors’ union demands rights as AI uses performers’ likenesses without consent

(DigWatch – 21 October 2025) The British performers’ union Equity has warned of coordinated mass action against technology companies and entertainment producers that use its members’ images, voices or likenesses in artificial-intelligence-generated content without proper consent. Equity’s general secretary, Paul W Fleming, announced plans to mobilise tens of thousands of actors through subject access requests under data-protection law, compelling companies to disclose whether they have used performers’ data in AI content. – https://dig.watch/updates/uk-actors-union-demands-rights-as-ai-uses-performers-likenesses-without-consent ; https://www.theguardian.com/technology/2025/oct/13/equity-threatens-mass-direct-action-over-use-of-actors-images-in-ai-content

Frontiers

What We Risk When AI Systems Remember

(Gathoni Ireri – Tech Policy Press – 21 October 2025) In April 2025, while announcing improvements to ChatGPT’s memory, Sam Altman expressed his excitement about “AI systems that get to know you over your life,” promising that this would make them “extremely useful and personalized.” This kind of personalized lifelong knowledge capacity in AI systems represents a fairly recent innovation. It involves a form of long-term memory called non-parametric memory, in which information is stored in external files rather than being embedded within the AI model itself. By default, AI systems can access information only within a limited context window, typically restricted to the current conversation. This constraint is analogous to human working memory, which can only hold a few items in active awareness at any given time. The expansion of memory capabilities isn’t unique to OpenAI’s ChatGPT; other companies, including Anthropic and Google, have implemented it in their respective AI systems. Given that such developments are likely to transform how users interact with AI, it’s important to question whether lifelong, personalized knowledge actually enhances their usefulness. – https://www.techpolicy.press/what-we-risk-when-ai-systems-remember/
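The distinction between non-parametric memory and a limited context window can be made concrete with a toy sketch: facts persist in an external store across conversations, while the conversational context itself only holds the last few turns. This is a minimal illustration of the mechanism, not any vendor's implementation; the class names and keyword-overlap retrieval are assumptions for the example.

```python
# Toy model of non-parametric memory: long-term facts live in an external
# store (outside any model weights) and are prepended to a size-limited
# context window at query time. Illustrative only.
from collections import deque

class MemoryStore:
    """External long-term memory: persists across conversations."""
    def __init__(self):
        self.facts: list[str] = []

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def retrieve(self, query: str) -> list[str]:
        # Toy retrieval: keyword overlap stands in for embedding search.
        words = set(query.lower().split())
        return [f for f in self.facts if words & set(f.lower().split())]

class Conversation:
    """Short-term context window: only the last few turns fit."""
    def __init__(self, window: int = 4):
        self.turns = deque(maxlen=window)

    def build_context(self, memory: MemoryStore, user_message: str) -> str:
        recalled = memory.retrieve(user_message)
        self.turns.append(user_message)
        return "\n".join(["[memory] " + f for f in recalled] + list(self.turns))

memory = MemoryStore()
memory.remember("User prefers metric units.")

chat = Conversation(window=2)
context = chat.build_context(memory, "Convert my run distance to metric units please")
print(context)
```

The privacy stakes the article raises follow directly from this design: the external store outlives any single conversation, so whatever it accumulates about a user is retained until someone deliberately deletes it.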

Data centers turn to old jet engines to power AI’s soaring energy demands

(Interesting Engineering – 21 October 2025) As the world races to build the infrastructure behind artificial intelligence, data centers are hitting a critical energy roadblock. Grid delays and a shortage of new gas turbines are forcing developers to look skyward for answers. Across the U.S. and beyond, old airplane engines are being repurposed into power generators, keeping the AI boom from stalling. Data centers are expanding faster than utilities, with megawatt-hungry facilities waiting years to access grid power. Normally, developers would connect directly to the grid or build dedicated on-site power plants, but the surge in demand has exposed a deep supply shortage. Lead times for new gas turbines from manufacturers like GE Vernova and Siemens Energy now stretch from three to five years, sometimes even longer. – https://interestingengineering.com/energy/plane-engine-gas-turbine-energy

Anthropic unveils Claude for Life Sciences to transform research efficiency

(DigWatch – 21 October 2025) Anthropic has unveiled Claude for Life Sciences, its first major launch in the biotechnology sector. The new platform integrates Anthropic’s AI models with leading scientific tools such as Benchling, PubMed, 10x Genomics and Synapse.org, offering researchers an intelligent assistant throughout the discovery process. The system supports tasks from literature reviews and hypothesis development to data analysis and drafting regulatory submissions. According to Anthropic, what once took days of validation and manual compilation can now be completed in minutes, giving scientists more time to focus on innovation. – https://dig.watch/updates/anthropic-unveils-claude-life-sciences-to-transform-research-efficiency ; https://www.anthropic.com/news/claude-for-life-sciences

Google Cloud and NVIDIA join forces to accelerate enterprise AI and industrial digitalization

(DigWatch – 21 October 2025) NVIDIA and Google Cloud are expanding their collaboration to bring advanced AI computing to a wider range of enterprise workloads. The new Google Cloud G4 virtual machines, powered by NVIDIA RTX PRO 6000 Blackwell GPUs, are now generally available, combining high-performance computing with scalability for AI, design, and industrial applications. – https://dig.watch/updates/google-cloud-and-nvidia-join-forces-to-accelerate-enterprise-ai-and-industrial-digitalisation ; https://blogs.nvidia.com/blog/nvidia-google-cloud-enterprise-ai-industrial-digitalization/

China’s Unitree reveals next-generation humanoid ahead of major IPO

(DigWatch – 21 October 2025) Unitree Robotics has unveiled its most lifelike humanoid robot to date, marking a bold step forward in the country’s rapidly advancing robotics industry. The new H2 humanoid model, showcased in a short social media video, demonstrated remarkable agility and expressiveness, performing intricate dance moves with striking humanlike grace. – https://dig.watch/updates/chinas-unitree-reveals-next-generation-humanoid-ahead-of-major-ipo