Daily Digest on AI and Emerging Technologies (8 January 2026)

Governance

Grok’s explicit images reveal AI’s legal ambiguities

(Ashley Gold, Ina Fried – AXIOS) Grok’s continued posting of nonconsensual images on X highlights a key unsettled legal issue around artificial intelligence: just who — if anyone — is liable for harm caused by a chatbot’s outputs. Why it matters: Businesses, individuals and society are increasingly reliant on AI, but there’s little clarity over who bears responsibility when things go wrong. The big picture: AI chatbots have gained massive usage around the world despite a number of legal uncertainties. – https://www.axios.com/2026/01/07/grok-bikini-images-legal-elon-musk

UAE launches AI ecosystem to boost climate-resilient agriculture worldwide

(Andrea Benito – ComputerWeekly.com) The United Arab Emirates (UAE) has launched an artificial intelligence (AI)-driven ecosystem aimed at helping climate-vulnerable agricultural regions adapt to increasingly erratic weather patterns, reinforcing the country’s ambition to position itself as a global hub for applied AI in climate and food security. – https://www.computerweekly.com/news/366636844/UAE-launches-AI-ecosystem-to-boost-climate-resilient-agriculture-worldwide

UK to spend £23M on AI to tell benefit claimants where to go

(Lindsay Clark – The Register) The UK’s Department for Work and Pensions (DWP) is set to introduce a conversational AI platform it hopes will steer calls from citizens with queries about their benefits. The contract is worth up to £23 million. The move is part of the government’s ambition to improve efficiency with AI across the public sector and manage costs in one of its highest-spending and politically sensitive departments. – https://www.theregister.com/2026/01/07/dwp_ai_call_handling/

Virginia’s datacenter tax breaks cost state $1.6B in 2025

(Dan Robinson – The Register) The US state of Virginia forfeited $1.6 billion in tax revenue through datacenter exemptions in fiscal 2025 – up 118 percent on the prior year – as the AI-driven construction boom accelerates. Good Jobs First, a nonprofit promoting corporate and government accountability, warns these incentives have become essentially automatic. Virginia’s qualification threshold requires just $150 million in capital investment and 50 new jobs, which is modest compared to the billions spent on today’s hyperscale facilities. The exemptions cover retail sales and use taxes on computer equipment, software, and hardware purchases. – https://www.theregister.com/2026/01/07/datacenter_tax_breaks_virginia/
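
A quick back-of-the-envelope check of the year-on-year figure (a sketch only; the prior-year estimate below is derived from the reported $1.6 billion and 118 percent increase, and is not stated in the article):

```python
# Rough arithmetic implied by the figures above. Only the $1.6B forgone in FY2025
# and the 118% year-on-year increase come from the article; the FY2024 estimate
# is derived from them and rounded.
forgone_fy2025 = 1.6e9       # tax revenue forgone in fiscal 2025, per Good Jobs First
increase = 1.18              # reported 118% increase on the prior year

implied_fy2024 = forgone_fy2025 / (1 + increase)
print(f"Implied FY2024 forgone revenue: ${implied_fy2024 / 1e6:,.0f}M")  # ~$734M
```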

Legislation

How Offline ID Checks Could Help Solve the Age Verification Head-Scratcher

(Finn Mitra – Tech Policy Press) Legislators around the world are grappling with how to craft effective age verification laws to prevent minors from accessing harmful digital content. Existing proposals have raised significant concerns relating to privacy, security and efficacy. But California’s recent legislation offers a new path that — with one key adaptation — could better balance these critical priorities. It contemplates a system in which individuals input their age when setting up new phones, laptops and tablets. Each user’s age is transmitted to the websites and apps they access on that device, enabling these platforms to restrict content accordingly without conducting age verification themselves. However, since users’ ages are self-reported, minors are only one fibbed date-of-birth away from access to adult content. The solution may be simpler than anyone expected: old-fashioned, in-person ID checks at the point of device purchase. These offline verifications augment California’s privacy-preserving approach by imposing a much stronger barrier for minors while avoiding the trails of sensitive, exploitable data generated when platforms are required to conduct age verification. – https://www.techpolicy.press/how-offline-id-checks-could-help-solve-the-age-verification-headscratcher/
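
To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of the platform side of such a scheme. The California proposal does not define an API; the header name, the 18+ threshold and the fallback behaviour below are assumptions for illustration only.

```python
# Hypothetical sketch of a platform consuming a device-declared age signal.
# Nothing here comes from the California law itself: the "X-Device-Age" header,
# the 18+ threshold and the conservative fallback are illustrative assumptions.
from typing import Optional

ADULT_CONTENT_MIN_AGE = 18  # assumed threshold for restricted content


def parse_device_age_signal(headers: dict) -> Optional[int]:
    """Read a hypothetical device-supplied age signal, e.g. 'X-Device-Age: 16'."""
    raw = headers.get("X-Device-Age")  # hypothetical header name
    if raw is None:
        return None  # device sent no signal; the platform falls back to its own policy
    try:
        return int(raw)
    except ValueError:
        return None


def may_serve_restricted_content(headers: dict) -> bool:
    """Platform-side gate keyed off the device-declared age.

    The weakness the article points out lives here: the age is only as
    trustworthy as whatever was entered at device setup, which is why the
    author pairs the scheme with an in-person ID check at point of purchase.
    """
    age = parse_device_age_signal(headers)
    if age is None:
        return False  # conservative default when no signal is present
    return age >= ADULT_CONTENT_MIN_AGE


# Example: a device set up with age 16 is refused restricted content.
assert may_serve_restricted_content({"X-Device-Age": "16"}) is False
assert may_serve_restricted_content({"X-Device-Age": "34"}) is True
```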

December 2025 US Tech Policy Roundup

(Rachel Lau, J.J. Tolentino, Shirley Frame, Ben Lennett – Tech Policy Press) December’s US tech policy agenda was centered on executive action from the White House and a busy close to Congress. President Trump signed an executive order directing several federal agencies to review and potentially challenge state-level AI laws, in an effort to constrain the patchwork of state rules. The administration argued the approach was necessary to support competitiveness, particularly in relation to China. The order drew pushback from governors, lawmakers in both parties, and civil society groups, many of whom questioned its legal basis and objected to the use of threats to federal funding to pressure states to back down on AI regulation, even as Congress has made little progress on the issue. Congress did advance legislative measures on AI related to national defense as well as children’s online safety. Lawmakers passed the 2026 National Defense Authorization Act, which included provisions addressing how the military and intelligence agencies assess and use AI systems. A House subcommittee also advanced over a dozen bills targeting online harms to children, including those from AI chatbots. These included legislation requiring companies to make more robust disclosures or to implement age verification. Though many of the narrower bills found bipartisan support, broader regulatory frameworks like KOSA and COPPA 2.0 exposed partisan disagreement over issues like federal preemption of state law and enforcement. Taken together, December reflected ongoing efforts by the executive branch to shape AI policy amid continued uncertainty about how Congress can align on a federal framework. – https://www.techpolicy.press/december-2025-us-tech-policy-roundup/

Security and Surveillance

Rebuilding Digital Trust in the Age of Deepfakes

(Ricardo Amper – Infosecurity Magazine) Deepfake technology – once a niche experiment confined to research labs – has evolved at a staggering speed into a global threat. Tools that can convincingly swap faces, mimic voices or alter images are now widely accessible, outpacing public understanding and enterprise preparedness. The impact is already being felt, with deepfakes increasingly used in biometric fraud, exposing new vulnerabilities for organizations and consumers alike. As synthetic media becomes indistinguishable from reality, the foundations of digital trust are eroding. Traditional trust signals – logos, familiar faces, recognized voices or live videos – are no longer reliable. This is not just a technical challenge, but a human one. Recent research shows that human detection of high-quality deepfake videos is only 24.5% accurate. In an environment where reality can be convincingly forged, seeing is no longer believing, and trust must be rebuilt. – https://www.infosecurity-magazine.com/blogs/rebuilding-digital-trust-in-the/

China Intensifies Cyber-Attacks on Taiwan as Energy Sector Sees Tenfold Spike

(Infosecurity Magazine) Chinese cyber threat actors intensified efforts to gain access to Taiwan’s critical infrastructure organizations in 2025, with a particular emphasis on the energy sector, emergency rescue entities and hospitals. In a new report published on January 4, the National Security Bureau (NSB) of the Republic of China, the official name of Taiwan, shows that the country’s critical infrastructure suffered unprecedented cyber intrusion attempts coming from China over the past year. The NSB recorded a total of 960,620,609 cyber intrusion attempts targeting Taiwan’s critical infrastructure, allegedly coming from “China’s cyber army,” in 2025 – an average of roughly 2.63 million attempts per day against organizations the island nation deems critical. This marks a 6% increase from 2024 and a 112.5% increase compared to 2023. – https://www.infosecurity-magazine.com/news/china-intensifies-cyberattacks/
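
As a sanity check on the arithmetic behind these figures, a short sketch (the 365-day divisor and the implied prior-year totals are derivations from the quoted numbers, not figures taken from the NSB report):

```python
# Back-of-the-envelope check of the NSB figures quoted above. Only the
# 960,620,609 total and the 6% / 112.5% increases come from the report as cited;
# the 365-day divisor and the implied prior-year totals are derived assumptions.
total_2025 = 960_620_609

per_day = total_2025 / 365
print(f"Average per day in 2025: {per_day:,.0f}")   # ~2,631,837 -> ~2.63 million

implied_2024 = total_2025 / 1.06     # 6% increase from 2024
implied_2023 = total_2025 / 2.125    # 112.5% increase from 2023
print(f"Implied 2024 total: {implied_2024:,.0f}")   # ~906 million
print(f"Implied 2023 total: {implied_2023:,.0f}")   # ~452 million
```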

Personal LLM Accounts Drive Shadow AI Data Leak Risks

(Danny Palmer – Infosecurity Magazine) The rising use of generative AI tools like large language models (LLMs) in the workplace is increasing the risk of cyber-security violations as organizations struggle to keep tabs on how employees are using them. One of the key challenges IT and security teams are facing is the continued use of Shadow AI – employees using personal accounts for tools such as ChatGPT, Google Gemini and Microsoft Copilot at work. According to Netskope’s Cloud and Threat Report for 2026, nearly half (47%) of people using generative AI tools in the workplace are using personal accounts and applications to do so. – https://www.infosecurity-magazine.com/news/personal-llm-accounts-drive-shadow/

Hackers Claim to Disconnect Brightspeed Customers After Breach

(Phil Muncaster – Infosecurity Magazine) A US internet service provider (ISP) is scrambling to investigate a recent security breach in which threat actors claim to have obtained information on over one million customers and disrupted their connectivity. Brightspeed offers high-speed fiber internet, digital voice and business services across 20 US states. On January 4, a hacking group known as Crimson Collective posted to Telegram that it had a raft of personally identifiable information (PII) in its possession. – https://www.infosecurity-magazine.com/news/hackers-disconnect-brightspeed/

MFA Failure Enables Infostealer Breach At 50 Enterprises

(Phil Muncaster – Infosecurity Magazine) Dozens of global organizations have had highly sensitive corporate and customer information stolen and put up for sale by a threat actor because they didn’t secure cloud systems with multi-factor authentication (MFA), a new report has revealed. The actor, known as “Zestix” (aka “Sentap”), scoured the dark web for infostealer logs containing credentials for the popular cloud file-sharing services ShareFile, Nextcloud and OwnCloud, according to Hudson Rock. – https://www.infosecurity-magazine.com/news/mfa-failure-infostealer-breach-50/

Inside the Chip: Rethinking Cybersecurity from the Ground Up

(Camellia Chan – Infosecurity Magazine) In today’s digital battlefield, data flows everywhere — and so do threats. Despite layers of detection and endless software patches, we remain trapped in a reactive cycle. Each time we patch one vulnerability, attackers exploit another, often below the surface. It is time we revisit the foundation itself — moving from cloud-dependent defenses to protections embedded directly into the hardware. This strategic paradigm shift looks beyond technological advancement, taking aim instead at anchoring trust in a place that is inherently harder to compromise — inside the chip. – https://www.infosecurity-magazine.com/blogs/inside-the-chip-cybersecurity/

Fake Booking.com lures and BSoD scams spread DCRat in European hospitality sector

(Pierluigi Paganini – Security Affairs) Researchers uncovered a late-December 2025 campaign, dubbed PHALT#BLYX, targeting European hotels with fake Booking-themed emails. Victims are redirected to bogus Blue Screen of Death (BSoD) pages using ClickFix-style lures that prompt them to apply “fixes.” The multi-stage attack ultimately installs the DCRat remote access trojan, enabling full remote control of infected systems, according to Securonix. – https://securityaffairs.com/186606/cyber-crime/fake-booking-com-lures-and-bsod-scams-spread-dcrat-in-european-hospitality-sector.html

Ministry of Justice splurged £50M on security – still missed Legal Aid Agency cyberattack

(Connor Jones – The Register) The UK’s Ministry of Justice spent £50 million ($67 million) on cybersecurity improvements at the Legal Aid Agency (LAA) before the high-profile cyberattack it disclosed last year. The revelation was made in a report published by the Public Accounts Committee (PAC) today, which, alongside a thorough castigation of the MoJ’s handling of the unsafe HMP Dartmoor prison, highlights a list of failures and issues regarding the handling of the LAA cyberattack. – https://www.theregister.com/2026/01/07/legal_aid_agency_attack/

Frontiers and Markets

Siemens and NVIDIA Expand Partnership to Build the Industrial AI Operating System

(NVIDIA) Through AI, Siemens and NVIDIA are reinventing the entire end-to-end industrial value chain — from design and engineering to manufacturing, production, operations and into supply chains. The two companies plan to build an AI-accelerated portfolio spanning AI-native electronic design, AI-native simulation, and AI-driven adaptive manufacturing and supply chains; to design the next generation of AI factories; and to optimize operations through shared innovation. – https://nvidianews.nvidia.com/news/siemens-and-nvidia-expand-partnership-industrial-ai-operating-system