Daily Digest on AI and Emerging Technologies (20 November 2025)

Governance

UN calls for legal safeguards for AI in healthcare

(UN News) Use of artificial intelligence (AI) is accelerating in healthcare – but basic legal safety nets that protect patients and health workers are lacking. The warning comes in a report by the UN World Health Organization’s (WHO) office in Europe, where AI is already helping doctors to spot diseases, reduce administrative tasks and communicate with patients. The technology is reshaping how care is delivered, data are interpreted, and resources are allocated. “But without clear strategies, data privacy, legal guardrails and investment in AI literacy, we risk deepening inequities rather than reducing them,” said Dr. Hans Kluge, WHO Regional Director for Europe. – https://news.un.org/en/story/2025/11/1166400

Growing internet connections mask deep inequalities, says ITU report

(DigWatch) According to a recent International Telecommunication Union (ITU) report, the number of internet connections continues to grow, but important inequalities persist across quality, affordability and usage. The ITU’s Facts and Figures 2025 report estimates that nearly 6 billion people (around three-quarters of the world’s population) are online in 2025, up from 5.8 billion in 2024. Despite the increase, 2.2 billion remain offline, the majority in low- and middle-income countries. – https://dig.watch/updates/growing-internet-connections-mask-deep-inequalities-says-itu-report

EU examines Amazon and Microsoft influence in cloud services

(DigWatch) European regulators have launched three market investigations into cloud computing amid growing concerns about sector concentration. The European Commission will assess whether Amazon Web Services and Microsoft Azure should be designated as gatekeepers for their cloud services under the Digital Markets Act, despite not meeting the formal threshold criteria. – https://dig.watch/updates/eu-examines-amazon-and-microsoft-influence-in-cloud-services

AI threatens global knowledge diversity

(DigWatch) AI systems are increasingly becoming the primary source of global information, yet they rely heavily on datasets dominated by Western languages and institutions. Such reliance creates significant blind spots that threaten to erase centuries of indigenous wisdom and local traditions not currently found in digital archives. – https://dig.watch/updates/ai-threatens-global-knowledge-diversity

At the Sovereignty Summit, Europe Put Start-Ups on Stage and Kept Big Tech in Control

(Aline Blankertz – Tech Policy Press) “First innovate, then regulate” was the core mantra repeated at every opportunity during the French-German digital sovereignty summit in Berlin on Tuesday. The summit carefully avoided sensitive topics like the trade war or the AI bubble. Instead, its declaration catered to both those who want to promote the European economy and to those who want to avoid meaningful confrontation with the United States. The political announcements — with most EU digital ministers in the audience or on stage — contained few surprises, but delivered more details for various initiatives that are slowly taking shape. – https://www.techpolicy.press/at-the-sovereignty-summit-europe-put-startups-on-stage-and-kept-big-tech-in-control/

Digital Rights Are on the Chopping Block in the European Commission’s Omnibus

(Daniel Leufer – Tech Policy Press) On Wednesday, the European Commission launched its proposal for a Digital Omnibus. The stated aim of the Digital Omnibus package is to ‘simplify’ the European Union’s digital rulebook to ease compliance burdens for industry. The Commission has repeatedly promised that its proposals will not lower the level of protection for fundamental rights, stating that its proposed changes “are not expected to modify or have negative impacts on the underlying acts as regards other areas such as the protection of fundamental rights or the environment.” Not that digital rights advocates ever believed them, but these assurances were undermined last week when draft versions of the two Omnibus texts (one focused on data, the other on artificial intelligence) were leaked. They were more or less confirmed this Wednesday with the official release, which surpassed even the worst fears of digital rights advocates: under the misleading banner of simplification, the Commission is proposing nothing short of the destruction of core safeguards and fundamental principles of its digital rulebook. It is important to note that Access Now is not against simplification per se; nobody wants overly complex laws, and indeed, we have long advocated for measures that would simplify the procedures and processes for enforcing digital laws and allow people to exercise their rights. What we are against is what this Digital Omnibus is proposing: the deliberate weakening of fundamental rights safeguards solely to cut compliance costs for businesses. – https://www.techpolicy.press/digital-rights-are-on-the-chopping-block-in-the-european-commissions-omnibus/

Why Civil Society Is Sounding the Alarm on the EU’s Omnibus Rollback

(Joshua Franco – Tech Policy Press) For years, the EU has taken a leading role in creating standards that protect our rights online. But the winds have now shifted, and under the guise of “simplification,” a corporate-backed wave of weakening digital rules is underway that threatens all of our rights – on and offline. Digital and human rights advocates, including Amnesty International, have been documenting some of the human impacts caused by new technologies, and it is clear from these that stronger rights protections are needed more than ever. Despite this, the simplification agenda aims to roll these very protections back. It is becoming increasingly clear that this process is inevitably leading towards the weakening of provisions of the AI Act and data protection rules, and perhaps much more. The Commission has also proposed a “Digital Fitness Check.” While we have not been told what this will mean in practice, it is most likely an exercise to identify further laws to be “simplified”. All of this is being undertaken under expedited procedures, without prior impact assessments to ask how individuals and communities experience or are harmed by high-risk and emerging technologies, on the preposterous basis that laws that protect our rights can be pared back without impacting our rights. – https://www.techpolicy.press/why-civil-society-is-sounding-the-alarm-on-the-eus-omnibus-rollback/

The Risks of the ‘Observer Effect’ from Being Watched by AI

(Koustuv Saha – Tech Policy Press) Imagine confiding in an AI chatbot late at night. You ask the AI about your relationship struggles, a health scare, workplace conflict, or the anxiety that’s been keeping you up. You believe it is private: just you and your personal computer or phone. But what if you later learned that your words could become part of the chatbot’s training data, helping refine the system, and fragments of your intimate conversation might actually appear in someone else’s conversations with the chatbot? This question sits at the heart of an uncomfortable truth about AI: most people—including myself as an AI computer scientist—do not fully understand how these systems are trained or what truly happens to our data once we interact with them. Only recently, several families filed lawsuits against major AI companies, claiming that chatbots contributed to delusions and suicides. These tragic cases reignited urgent debates among industry leaders, academics, and policymakers over how conversational AI is designed, how data is used, and what responsibilities developers bear when their systems shape real human emotions and choices. – https://www.techpolicy.press/the-risks-of-the-observer-effect-from-being-watched-by-ai/

How Tech Oligarchs Profit from the Logic of ‘Finitude Capitalism’ and What to Do About It

(Ramya Chandrasekhar, Charlotte Ducuing, Caterina Santoro – Tech Policy Press) In the United States, the wealth of tech figures such as Elon Musk, Mark Zuckerberg, and Larry Ellison continues to soar. One or more of them may soon become a trillionaire—likely Musk, for whom Tesla shareholders just approved a pay package that could be worth $1 trillion over the next ten years. How the interests of the tech oligarchs are entwined with those of President Donald Trump and his administration’s brand of politics has been much reported on, including on Tech Policy Press, with various contributors detailing how the current configuration of power between the aspiring tech trillionaires and the Trump administration presents grave risks to core democratic principles, including the rule of law. We argue that this phenomenon is not only a coincidental convergence of the personal, political, and business interests of the individuals involved, but that it should be understood within the broader logic of ‘finitude capitalism.’ Applying this concept situates the current rise of the tech oligarchs and their entanglements with the US government and its president in a broader historical context, and points to potential interventions. – https://www.techpolicy.press/how-tech-oligarchs-profit-from-the-logic-of-finitude-capitalism-and-what-to-do-about-it/

What Kids and Parents Want: Policy Insights for Social Media Safety Features

(Michal Luria, Aliya Bhatia – Center for Democracy & Technology) This report examines the gap between child safety policy proposals for social media and how teens and parents — the people these policies are meant to protect — experience and view them. While the topic of child safety online is becoming increasingly prominent, with governments worldwide introducing proposals that aim to keep children safe online, many interventions remain largely untested and raise concerns about effectiveness, privacy, and unintended consequences. To address this disconnect, we conducted qualitative research with 45 parents and teens using a human-centered design approach to evaluate perceptions of four widely proposed intervention categories: age verification, screen-time features, algorithmic feed controls, and parental access. – https://cdt.org/insights/what-kids-and-parents-want-policy-insights-for-social-media-safety-features/

Countdown to the Midterms: The Changing AI Threat Landscape for Elections

(Isabel Linzer, Tim Harper – Center for Democracy & Technology) 2024 was billed as “the year of AI elections.” And while widespread access to the technology changed how governments, political campaigns, technology companies, and civil society did their work, generative AI did not cause the widespread catastrophic impacts that some feared. That success, however, should not be mistaken for stability. Over the past year, the political and policy environment surrounding AI has shifted dramatically: norms that once constrained misuse have eroded, regulatory efforts across states have evolved, and voluntary commitments by technology companies to prevent abuse of their tools have expired. Taken together, these developments point to a more consequential environment for AI deployment in the United States’ 2026 midterms. AI will likely be more prevalent and impactful in the 2026 elections than it was last year, and the accompanying risks are made greater if the relatively uneventful 2024 cycle leads key actors to complacency. – https://cdt.org/insights/countdown-to-the-midterms-the-changing-ai-threat-landscape-for-elections/

AI Governance at the Frontier

(Mina Narayanan, Jessica Ji, Vikram Venkatram, Ngor Luong – CSET) As artificial intelligence diffuses throughout society, policymakers face the challenge of how best to govern the technology amid uncertainty over the future of AI development. To meet this challenge, many stakeholders have put forth proposals aimed at shaping AI governance approaches. This report outlines an analytic approach to help policymakers make sense of such proposals and take steps to govern AI systems while preserving future decision-making flexibility. Our approach involves analyzing common assumptions across various proposals (as these assumptions are foundational elements for the success of multiple proposals), as well as unique assumptions within individual proposals, by answering three questions: (1) What risks are important to mitigate, and who should have primary oversight of frontier AI? (2) Who is delegated tasks and able to play a role? (3) Would the proposed mechanisms or tools actually achieve the proposal’s objectives? We apply this analytic approach to five U.S.-centric AI governance proposals that originate from industry, academia, civil society, and the federal and state governments. These proposals are generally aimed at governing frontier AI systems, which possess cutting-edge capabilities and therefore pose some of the most challenging questions for AI governance. Our analysis reveals that most proposals view AI-enabling talent and AI processes and frameworks as important enablers of AI governance. However, the proposals lack consensus on which techniques are most effective at mitigating AI risks and harms. Our analysis also yields lessons that are broadly applicable to policymakers seeking to analyze any proposal.
Our case studies demonstrate that 1) policymakers should leverage proposals’ assumptions to more precisely understand disagreements and shared views among stakeholders and 2) policymakers can take action in an uncertain and rapidly changing environment by addressing common assumptions across proposals. By adopting our analytic approach, U.S. policymakers can move away from rhetorical debates about AI governance and better prepare the United States for a range of possible AI futures. – https://cset.georgetown.edu/publication/ai-governance-at-the-frontier/

Courts and Litigation

Old laws now target modern tracking technology

(DigWatch) Class-action privacy litigation continues to grow in frequency, repurposing older laws to address modern data tracking technologies. Recent high-profile lawsuits have applied the California Invasion of Privacy Act (CIPA) and the Video Privacy Protection Act. A unanimous jury recently found that Meta Platforms violated CIPA Section 632 by eavesdropping on users’ confidential communications without consent; the verdict is now under appeal. The jury found that Meta intentionally used its SDK within Flo, a sexual health app, to intercept sensitive real-time user inputs. – https://dig.watch/updates/old-laws-now-target-modern-tracking-technology

Geostrategies

U.S. Commission on China Calls for ‘Quantum First’ National Goal by 2030, Recommends Significant Funding

(Quantum Insider) A federal commission is urging Congress to adopt a “Quantum First” national goal by 2030 to secure U.S. leadership in mission-critical quantum technologies. The recommendations call for major investments in quantum hardware, workforce development, modernized research infrastructure, and a new Quantum Software Engineering Institute. The panel warns that China’s state-supported quantum programs pose growing strategic risks and that early U.S. action is needed to prevent long-term disadvantage. – https://thequantuminsider.com/2025/11/18/u-s-commission-on-china-calls-for-quantum-first-national-goal-by-2030-recommends-significant-funding/

ALX and Anthropic partner with Rwanda on AI education 

(DigWatch) A landmark partnership between ALX, Anthropic, and the Government of Rwanda has launched a major AI learning initiative across Africa. The program introduces ‘Chidi’, an AI-powered learning companion built on Anthropic’s Claude model. Instead of providing direct answers, the system is designed to guide learners through critical thinking and problem-solving, positioning African talent at the centre of global tech innovation. – https://dig.watch/updates/alx-and-anthropic-partner-with-rwanda-on-ai-education

Security and Surveillance

EUR 47 million in crypto traced to disrupt digital piracy services

(Europol) Between 10 and 14 November, Europol, in collaboration with the European Union Intellectual Property Office (EUIPO) and the Spanish National Police (Policía Nacional), organised an “Intellectual Property Crime Cyber-Patrol Week”. A total of 30 investigators participated in this operation, hosted at the EUIPO headquarters in Alicante, Spain, where they used advanced open-source intelligence (OSINT) techniques and cutting-edge online investigative tools to identify potential intellectual property infringements. Overall, the Cyber-Patrol Week led to 69 sites identified and targeted, 25 illicit IPTV services referred to the participating crypto service providers for disruption, and investigations into 44 additional sites. The combined traffic for the 69 targeted sites is estimated at approximately 11 821 006 annual visitors. Investigators traced cryptocurrency valued at around USD 55 million (over EUR 47 million) through various accounts associated with these services. Several of the services remain under continued investigation by both public and private entities. – https://www.europol.europa.eu/media-press/newsroom/news/eur-47-million-in-crypto-traced-to-disrupt-digital-piracy-services

Help4U: A new digital platform to support young people facing online sexual abuse

(Europol) A new digital platform, Help4U, developed by Europol and CENTRIC, has been launched to support children and teenagers facing sexual abuse or online harm. Designed to be simple, private, and accessible, Help4U supports young people with finding trusted advice, understanding their rights, and connecting with people who can help. The platform offers clear, practical guidance for anyone under 18 who needs help, as well as information for parents, teachers, and professionals supporting them. – https://www.europol.europa.eu/media-press/newsroom/news/help4u-new-digital-platform-to-support-young-people-facing-online-sexual-abuse

Europol and partner countries combat online radicalisation on gaming platforms

(Europol) Europol supported eight countries in identifying and removing racist and xenophobic propaganda shared on gaming and gaming-related platforms. Conducted on 13 November 2025 by the European Union Internet Referral Unit (EU IRU), the Referral Action Day, involving Denmark, Finland, Germany, Luxembourg, the Netherlands, Portugal, Spain and the United Kingdom, led to the referral of thousands of URLs leading to dangerous and illicit online material. This includes around 5 408 links to jihadist content, 1 070 links to violent right-wing extremist and terrorist content, and 105 links to racist and xenophobic content. The joint action highlights the complexity of tackling terrorist, racist and xenophobic content on gaming and gaming-adjacent platforms. Creation and dissemination processes are layered and often span several platforms. For instance, content may be recorded within an online game (or its chat function), altered with violent extremist jargon, suggestive emojis, chants, or music, and then disseminated on a mainstream social media platform. – https://www.europol.europa.eu/media-press/newsroom/news/europol-and-partner-countries-combat-online-radicalisation-gaming-platforms

Major names exposed in data breach at Ivy League school

(Vilius Petkauskas – Cybernews) Hackers accessed a database that likely contained the personal details of Jeff Bezos, Michelle Obama, Pete Hegseth, and several US Supreme Court Justices. Princeton University, one of the world’s most prestigious universities, has suffered a major data breach, exposing details of every person who has ever graduated from or enrolled in the Ivy League school. “On November 10th, a Princeton University Advancement database containing information about alumni, donors, faculty, staff, students, parents, and other members of the University community was compromised by outside actors,” read the university’s notice. – https://cybernews.com/security/princeton-university-data-breach-exposes-alumni/

Eternidade Stealer Trojan Fuels Aggressive Brazil Cybercrime

(Alessandro Mascellino – Infosecurity Magazine) A newly identified banking Trojan known as Eternidade Stealer has been observed pushing Brazil’s cybercrime ecosystem into a more aggressive phase, with attackers using WhatsApp as both an entry point and a propagation tool. According to new research from Trustwave SpiderLabs, the malware combines a WhatsApp-propagating worm, a Delphi-based stealer and an MSI dropper to harvest financial data, system details and contact lists used for rapid lateral spread. The researchers noted that a shift to Python for WhatsApp hijacking, along with dynamic command-and-control (C2) retrieval through IMAP, marks a notable evolution in the threat actor’s toolkit. – https://www.infosecurity-magazine.com/news/eternidade-stealer-trojan-brazil/

PlushDaemon Hackers Unleash New Malware in China-Aligned Spy Campaigns

(Kevin Poireault – Infosecurity Magazine) A China-aligned hacking group known for its global cyber espionage campaigns has been observed deploying an undocumented network implant that it uses to conduct adversary-in-the-middle (AitM) attacks. The group, PlushDaemon, has been active since at least 2018 and has targeted organizations in Cambodia, South Korea, New Zealand, the US, Taiwan, and even Hong Kong and China. While the group’s main initial access vector is hijacking legitimate updates of Chinese applications, it was also identified as the culprit behind a supply chain attack targeting IPany, a South Korean VPN provider, in May 2024. – https://www.infosecurity-magazine.com/news/plushdaemon-new-malware-china-spy/

China-Linked Operation “WrtHug” Hijacks Thousands of ASUS Routers

(Phil Muncaster – Infosecurity Magazine) A new China-linked threat campaign has already compromised thousands of ASUS WRT routers around the world in a bid to build a new espionage network, SecurityScorecard has warned. The firm’s STRIKE team claimed in a new report today that Operation “WrtHug” exploits six mostly legacy vulnerabilities to gain elevated privileges on end-of-life SOHO devices. – https://www.infosecurity-magazine.com/news/chinal-operation-wrthug-thousands/

Half of Ransomware Access Due to Hijacked VPN Credentials

(Phil Muncaster – Infosecurity Magazine) Ransomware surged in Q3 2025, with just three groups accounting for the majority of cases (65%), and initial access most commonly achieved via compromised VPN credentials, according to Beazley Security. The Beazley insurance subsidiary said Akira, Qilin and INC Ransomware were the most prolific groups in the third quarter, which saw 11% more leak posts than in the previous three months. – https://www.infosecurity-magazine.com/news/half-ransomware-access-hijacked/

Beyond The Password Security Checkbox: Why Compliance Isn’t Enough

(Marcus White – Infosecurity Magazine) When it comes to security, compliance frameworks are a great start; they’re proven, research-based foundations that provide your organization with a baseline for cybersecurity activities. They give you standards to follow, boxes to check, and a way to demonstrate due diligence to auditors and stakeholders. But here’s the problem: meeting compliance requirements doesn’t automatically make you secure. In fact, organizations can pass their audits with flying colors only to suffer a breach months later. So, while you shouldn’t dismiss compliance frameworks, you shouldn’t have them as the lone tool in your cybersecurity toolbox. – https://www.infosecurity-magazine.com/blogs/beyond-the-password-security/

Defence, Military, and Warfare

The U.S. Aerial Drone Market

(Kyle Miller, Sam Bresnick, Jacob Feldgoise, Christian Schoeberl – CSET) The importance of unmanned aerial vehicles (UAVs), or drones, is growing rapidly in national defense and security efforts. While the United States has historically led in military drone development, the global commercial market is now dominated by lower-cost, dual-use platforms, many produced by Chinese companies. In response to concerns over supply chain dependencies and Chinese export controls, the U.S. government is prioritizing the growth of a self-sufficient domestic drone industry, exemplified by the Trump administration’s executive order “Unleashing American Drone Dominance” and the Department of Defense’s Replicator initiative. This report assesses the current state of the U.S. drone industry, focusing on the types of platforms marketed in the United States and the financial health of U.S.-headquartered UAV companies. Using data from the Association for Uncrewed Vehicle Systems International’s (AUVSI) Uncrewed Systems & Robotics Database (USRD) and PitchBook, the analysis finds that most U.S. drone companies focus on small UAVs (Groups 1-3), and only a handful of larger defense firms develop more complex military systems (Groups 4-5). Most U.S. drone companies are privately held, venture-backed companies, many of which were founded after 2010. Investment activity is concentrated in companies that produce smaller, commercial drones, while venture interest is limited in developers of larger military systems. However, significant gaps remain in publicly available data, particularly regarding manufacturing capacity and supply chain resilience, both of which are critical factors for determining the broader health of the U.S. UAV ecosystem. This report provides a snapshot of the U.S. drone industry, but a deeper analysis of component supply chains and manufacturing capabilities would allow for a fuller assessment of the industry’s ability to meet future national security needs. 
Greater data sharing between government and industry will be essential to identify supply chain vulnerabilities and guide future policy. – https://cset.georgetown.edu/publication/the-u-s-aerial-drone-market/

Frontiers and Markets

University of Liverpool Unveils Plans for £100M UK AI-Driven Materials Discovery Hub

(AI Insider) The University of Liverpool announced a £100 million AI Materials Hub for Innovation (AIM-HI) to position the Liverpool City Region and the UK as a global center for AI-driven materials research and development, according to the university. AIM-HI will serve as a national center of excellence with new research facilities, an innovation incubator and infrastructure designed to accelerate AI-enabled materials discovery, industry adoption and workforce development. The project is expected to support up to 900 jobs and generate more than £400 million in economic value, building on the university’s Materials Innovation Factory and expanding its national role in AI-accelerated materials science. – https://theaiinsider.tech/2025/11/19/university-of-liverpool-unveils-plans-for-100m-uk-ai-driven-materials-discovery-hub/

Palm Beach State College Emerges as Florida’s Frontier For Quantum Technology

(Quantum Insider) IonQ’s visit to Palm Beach State College (PBSC) signals South Florida’s emerging effort to build a statewide quantum ecosystem anchored in education, workforce development, and regional infrastructure. PBSC’s expanding technical momentum — including a recent statewide cybersecurity win and new partnerships — positions the college to support quantum training, cybersecurity needs, and future startup activity. Florida institutions are exploring coordinated quantum strategies as states and nations accelerate their own programs, with IonQ’s tour assessing readiness for local demand and potential in-state quantum hardware deployment. – https://thequantuminsider.com/2025/11/19/palm-beach-state-college-emerges-as-floridas-frontier-for-quantum-technology/

Google launches WeatherNext 2 for faster forecasts

(DigWatch) WeatherNext 2, Google’s latest AI forecasting model, offers significantly faster and more precise weather predictions. Developed by DeepMind and Google Research, the model produces forecasts eight times faster with hourly resolution, aiding decisions from supply chains to daily commutes. – https://dig.watch/updates/google-launches-weathernext-2-for-faster-forecasts

Agile Robots Launches Humanoid Robot for Industry

(AI Insider) Agile Robots launched its first industrial humanoid, Agile ONE, designed to operate alongside human workers and automated equipment using dexterous hands, multimodal interaction features and an AI model trained on real-world factory data, according to the company. The system is built for material handling, machine tending, tool use and precision manipulation, and integrates with the company’s broader robotics portfolio through its AgileCore software platform. Agile Robots plans to manufacture Agile ONE in-house at a new Bavarian facility beginning in early 2026, positioning the humanoid as part of a wider push toward “physical AI” systems for fully integrated intelligent production environments. – https://theaiinsider.tech/2025/11/19/agile-robots-launches-humanoid-robot-for-industry/

Bedrock Robotics Announces Supervised Autonomy Testing on Active Construction Sites in Move Towards Commercialization

(AI Insider) Bedrock Robotics said it is running the construction industry’s largest known supervised autonomy deployment, moving more than 65,000 cubic yards on a 130-acre manufacturing site with autonomous excavators loading human-operated dump trucks in standard workflows. The company has expanded its autonomous systems across excavators from 20 to 80 tons and recently completed autonomous excavation at Proto-Town in Central Texas, marking its second active deployment as it targets fully autonomous operations in 2026. Bedrock is growing its partner ecosystem with contractors including Austin Bridge & Road, Maverick Constructors and Haydon Companies, aiming to address skilled-labor shortages and scale autonomy across commercial, industrial and heavy-civil projects. – https://theaiinsider.tech/2025/11/19/bedrock-robotics-announces-supervised-autonomy-testing-on-active-construction-sites-in-move-towards-commercialization/