Daily Digest on AI and Emerging Technologies (23 April 2026)

Governance/Regulation/Legislation

UK children’s bill advances with new online safety powers

(DigWatch) The UK’s Children’s Wellbeing and Schools Bill has moved forward with a substantial set of online safety amendments, showing how child protection policy is increasingly being folded into wider legislation beyond the Online Safety Act itself. The current printed version of the bill, published as it continues through consideration of amendments between the Commons and Lords, includes new powers that could allow ministers to require providers of specified internet services to prevent or restrict children’s access to certain services, features, or functionalities where there is a risk of harm. – https://dig.watch/updates/uk-childrens-wellbeing-and-schools-bill

Ofcom steps up child safety enforcement with Telegram and chat site investigations

(Ofcom) Ofcom has launched an investigation into Telegram under the UK’s Online Safety Act to examine whether it is complying with its duties to prevent child sexual abuse material being shared. The UK’s online safety watchdog has also opened investigations into Teen Chat and Chat Avenue to examine whether they are meeting their duties to protect children from the risk of being groomed by predators. Ofcom has additionally provided updates on file-sharing services that are now either using hash-matching technology to detect and swiftly remove child sexual abuse material (CSAM) or have taken steps to prevent people in the UK from accessing their sites. Suzanne Cater, Director of Enforcement at Ofcom, said: “Child sexual exploitation and abuse causes devastating harm to victims, and making sure sites and apps tackle this is one of our highest priorities. It’s why we work so closely with partners in law enforcement and child protection organisations to identify where these harms are occurring and hold providers to account where they’re failing to meet their obligations. Progress has undeniably been made, particularly with file-sharing services, which are too often used to share horrific child sexual abuse imagery. But this problem extends to big platforms too, and teen-focused chat services are too easily being used by predators to groom children. These firms must do more to protect children, or face serious consequences under the Online Safety Act.” – https://dig.watch/updates/uk-target-telegram-and-chat-in-child-exploitation
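For readers unfamiliar with the technique Ofcom references: hash-matching compares a fingerprint of each uploaded file against a database of fingerprints of known abuse imagery supplied by child-protection bodies. The sketch below is a minimal illustration; the hash list and function are hypothetical, and production systems use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding rather than the exact cryptographic hash shown here.

```python
import hashlib

# Hypothetical fingerprint database of known illegal imagery, as
# distributed to platforms by child-protection organisations.
KNOWN_HASHES: set[str] = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_content(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 digest is in the hash list.

    A real deployment would use a perceptual hash so that cropped or
    re-encoded copies still match; an exact cryptographic hash only
    catches byte-identical files.
    """
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

# A file-sharing service would hash each upload before making it
# shareable, then block the file and file a report on a match.
if matches_known_content(b"test"):
    print("match: block upload and report")
```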

Australia – AI use accelerates in telco, media and gambling sectors

(Australian Communications and Media Authority) Australia’s telecommunications, media and online gambling sectors are rapidly adopting artificial intelligence, with new ACMA research showing strong innovation but also growing risks. In the media industry, AI is being used to personalise advertising and to streamline content production. However, researchers and commentators have raised concerns that some uses of AI may contribute to the amplification of misinformation and disinformation. This can affect public trust in news and raises concerns about the unauthorised use of copyrighted material. – https://www.acma.gov.au/articles/2026-04/ai-use-accelerates-telco-media-and-gambling-sectors

AI adoption across Australian Public Service depends on trust, alignment and imagination, Poole says

(DigWatch) Lucy Poole, deputy CEO of the Strategy, Planning and Performance Division of the Australian Digital Transformation Agency, outlined three priorities for AI adoption across the Australian Public Service in a keynote at the 12th Annual Data and Digital Governance Summit: imagination, alignment, and how people experience government in practice. In her account, the next phase is no longer just about using AI to speed up existing processes, but about considering how it could reshape decision-making, service delivery, and the relationship between government and the public. – https://dig.watch/updates/ai-adoption-aps-lucy-poole

The Meta Oversight Board’s Advisory Opinion on Global Community Notes Rollout: Another Check on Platform Power?

(Yohannes Eneyew Ayalew and Maria O’Sullivan – Just Security) On March 26, Meta’s Oversight Board issued a landmark advisory opinion assessing the potential human rights impacts of expanding the “community notes” program on the company’s platforms outside of the United States. The Board found that while community notes may enhance users’ freedom of expression and improve online discourse, a global “one-size-fits-all” approach could pose real-world harms in crisis and conflict zones, repressive regimes, and electoral contexts. Meta is the parent company of Facebook, Instagram, and Threads—platforms which are recognized as increasingly important in shaping public opinion and influencing elections. Statistics from 2025 show that 3.43 billion people use at least one of Meta’s products daily. The company’s attempts to counter false or misleading information on its platforms have been the subject of widespread criticism from academic experts and civil society. At present, Meta’s approach to countering misinformation consists of three strategies: (1) remove (removing certain categories of harmful misinformation); (2) reduce (limiting the distribution of content rated as false, altered, or partly false by third-party fact-checkers); and (3) inform (providing additional information or context, typically through labels applied to content that may be misleading or confusing, while continuing to distribute the content). Community notes fall within this third category. In simple terms, community notes are a form of crowdsourced content moderation in which users can choose to write brief assessments of potentially misleading or inaccurate tweets, posts, or videos. They can also rate other users’ assessments. – https://www.justsecurity.org/136035/meta-boards-opinion-community-notes/
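The rating step the article describes hints at the central design problem: ratings must not simply reward notes that one faction likes. X’s open-source Community Notes scorer addresses this with matrix factorization; the toy sketch below captures the same “bridging” intuition with a much cruder rule (the viewpoint clusters and threshold are hypothetical), publishing a note only when raters from different groups both find it helpful.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Rating:
    cluster: str   # coarse viewpoint grouping, e.g. "A" or "B"
    helpful: bool

def note_status(ratings: list[Rating], threshold: float = 0.6) -> str:
    """Decide whether a community note should be shown.

    A note is published only if a majority of raters in *every*
    viewpoint cluster found it helpful, so no single faction can
    push its own notes through.
    """
    by_cluster: dict[str, list[bool]] = {}
    for r in ratings:
        by_cluster.setdefault(r.cluster, []).append(r.helpful)
    if len(by_cluster) < 2:
        return "needs more ratings"  # no cross-viewpoint signal yet
    if all(mean(votes) >= threshold for votes in by_cluster.values()):
        return "show note"
    return "keep hidden"

ratings = [Rating("A", True), Rating("A", True),
           Rating("B", True), Rating("B", False)]
print(note_status(ratings))  # "keep hidden": cluster B is split 50/50
```

The design choice matters because a simple majority rule would let a coordinated group certify its own notes, which is one of the failure modes the Oversight Board’s opinion worries about in conflict and electoral contexts.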

Philippines presses Meta for faster action on online disinformation

(DigWatch) The Philippine government is intensifying pressure on Meta to act more quickly against harmful online disinformation, arguing that the company’s current enforcement approach is insufficient to curb rapidly spreading false content that can affect public order, economic confidence, and national security. The latest move comes in the form of a formal response from the Department of Information and Communications Technology, following an earlier joint request involving the Presidential Communications Office and the Department of Justice. Officials acknowledged Meta’s willingness to engage and its existing moderation policies, but said broad descriptions of enforcement mechanisms fall short of what the situation requires. According to the DICT, the government is seeking clear commitments, faster intervention processes, and measurable outcomes rather than general assurances about existing platform rules. – https://dig.watch/updates/philippines-presses-meta-for-faster-action-on-online-disinformation

Tarasoff Meets the AI Age

(Anat Lior – Lawfare) OpenAI recently disclosed that it was aware of concerning behavior by one of its users, Jesse Van Roostelaar from British Columbia, and suspended her ChatGPT account in June 2025. (While OpenAI and authorities have not shared the exact content of her interactions with the chatbot, a New York Times investigation into Van Roostelaar’s social media activity documented her posts about mental health issues, substance abuse, weapons, and online violence.) Following internal deliberations, OpenAI decided not to notify authorities about the “disturbing” nature of Van Roostelaar’s interactions with ChatGPT, stating that the content did not meet their threshold for reporting to law enforcement, which requires evidence of immediate risk of severe physical harm to others. On Feb. 10, 18-year-old Van Roostelaar carried out a mass shooting in Tumbler Ridge, B.C., killing nine people—including herself. The warning signs were clear. British Columbia Premier David Eby suggested that OpenAI may have had the opportunity to prevent the mass shooting. The critical question that emerges from this case is whether the company’s failure to act amounts to negligence. When a therapist learns that their patient intends to harm someone, the law may require them to act. This principle, born from the landmark Tarasoff v. Regents of the University of California decision, raises an urgent and largely unresolved question in the age of generative artificial intelligence (AI): What happens when the entity with foreknowledge of harm is not a human clinician, but a chatbot? As OpenAI, Anthropic, Google, and other AI companies deploy increasingly powerful conversational systems, they may find themselves in possession of information suggesting that a user—or someone that user intends to target—is at serious risk. Understanding Tarasoff’s foundational holding as well as the doctrinal questions its application to AI would raise, including how courts might navigate the tension between a duty to protect and the privacy interests of users, is essential to answering this question. – https://www.lawfaremedia.org/article/tarasoff-meets-the-ai-age

Experts say capabilities of agentic AI rising, along with risk to personal data, economy, national security

(Anna Lamb – The Harvard Gazette) As new agentic AI models continue to come online, cybersecurity experts laud their ability to sift through vast quantities of data quickly and autonomously — making them great tools to help fight cybercrime. But, they warn, those attributes could also be put to work by bad actors to hack systems and put our personal data, our economy, and our national security at risk. A group of cybersecurity experts recently brought together for a Berkman Klein Center for Internet & Society discussion agreed that it is high time for business and government leaders to regulate the tech — before it’s too late. – https://news.harvard.edu/gazette/story/2026/04/time-for-government-business-leaders-to-figure-out-ai-cybersecurity-regulation/

Security and Surveillance

Researchers Uncover ProxySmart Software Powering 90+ SIM Farms

(Phil Muncaster – Infosecurity Magazine) Cybersecurity researchers have uncovered a Belarus-based software platform that is helping SIM farm operators support cybercrime on an “industrial scale”. In a report published on April 21, Infrawatch said that it had identified 87 instances of ProxySmart control panels in 17 countries and 94 phone farm locations. These farms are located across 19 US states, as well as countries in Europe and South America. “ProxySmart is publicly associated with a Belarus-based vendor footprint and offers an end-to-end stack for operating and monetizing a physical farm, including device management, automated IP rotation, customer provisioning, plan enforcement, and anti-bot countermeasures,” the report explained. – https://www.infosecurity-magazine.com/news/researchers-proxysmart-software-90/

Venezuela energy sector targeted by highly destructive Lotus wiper

(Pierluigi Paganini – Security Affairs) Kaspersky researchers found Lotus Wiper targeting Venezuela’s energy and utilities sector amid regional tensions in 2025–2026. Attackers first used batch scripts to weaken systems, disable defenses, and prepare the environment. Then they deployed the wiper, which erased recovery tools, overwrote disks, and deleted all files, leaving systems unusable. “Two batch scripts are responsible for initiating the destructive phase of the attack and preparing the environment for executing the final wiper payload. These scripts coordinate the start of the operation across the network, weaken system defenses, and disrupt normal operations before retrieving, deobfuscating and executing a previously unknown wiper that we dubbed ‘Lotus Wiper’.” reads the report published by Kaspersky. “The wiper removes recovery mechanisms, overwrites the content of physical drives, and systematically deletes files across affected volumes, ultimately leaving the system in an unrecoverable state.” – https://securityaffairs.com/191106/malware/venezuela-energy-sector-targeted-by-highly-destructive-lotus-wiper.html

Critical BRIDGE:BREAK flaws impact Lantronix and Silex Technology converters

(Pierluigi Paganini – Security Affairs) Researchers at Forescout Research Vedere Labs found 22 BRIDGE:BREAK flaws in serial-to-IP devices from Lantronix and Silex Technology. Serial-to-IP converters, also known as serial device servers, connect legacy serial equipment to modern IP networks for remote monitoring and control. They are widely used in sectors like energy (RTUs, relays), industry (PLCs), retail (POS systems), and healthcare (patient monitors). These devices allow organizations to integrate older hardware into TCP/IP networks without replacing existing systems, improving connectivity while extending equipment lifespan. The experts warn that around 20,000 devices sit exposed online. Attackers can take control of these converters and manipulate the data they transmit, creating serious risks for industrial and enterprise environments. – https://securityaffairs.com/191114/hacking/critical-bridgebreak-flaws-impact-lantronix-and-silex-technology-converters.html
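To make the attack surface concrete: a serial device server is, at its core, a byte pump between a serial port and a TCP socket. The minimal sketch below (Python with pyserial; the port names and listening port are placeholders, and it illustrates the device class rather than Lantronix or Silex firmware) shows why such a bridge is dangerous when reachable online: whoever can connect to the TCP port can speak directly to the legacy device behind it.

```python
# Minimal sketch of what a serial-to-IP converter does in software:
# shuttle raw bytes between a local serial port and a TCP client.
import socket
import threading

import serial  # pip install pyserial

SERIAL_PORT = "/dev/ttyUSB0"  # placeholder: the attached legacy device
TCP_PORT = 10001              # placeholder listening port

def serial_to_tcp(ser: serial.Serial, conn: socket.socket) -> None:
    while True:
        data = ser.read(ser.in_waiting or 1)  # block until >= 1 byte
        conn.sendall(data)

def tcp_to_serial(conn: socket.socket, ser: serial.Serial) -> None:
    while True:
        data = conn.recv(1024)
        if not data:          # client disconnected
            break
        ser.write(data)

def main() -> None:
    ser = serial.Serial(SERIAL_PORT, baudrate=9600)  # blocking reads
    with socket.create_server(("0.0.0.0", TCP_PORT)) as srv:
        conn, addr = srv.accept()
        print(f"client {addr} bridged to {SERIAL_PORT}")
        # One thread per direction: serial -> TCP and TCP -> serial.
        threading.Thread(
            target=serial_to_tcp, args=(ser, conn), daemon=True
        ).start()
        tcp_to_serial(conn, ser)

if __name__ == "__main__":
    main()
```

Note that the sketch, like many fielded converters, has no authentication or encryption on the network side; Forescout’s concern is precisely that roughly 20,000 such bridges sit exposed to the internet in front of RTUs, PLCs, POS systems, and patient monitors.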

UK could face ‘hacktivist attacks at scale’, says head of security agency

(Dan Milmo – The Guardian) The UK could face “hacktivist attacks at scale” if it becomes embroiled in a conflict, and the impact could be similar to recent high-profile ransomware incidents, according to the head of the country’s online security agency. Richard Horne, chief executive of the National Cyber Security Centre (NCSC), will warn today that nation states now account for the most significant incidents the NCSC deals with. “Were we to be in, or near, a conflict situation, the UK would likely face hacktivist attacks at scale. With similar effects and sophistication to the ransomware attacks we see today. But … no option to pay a ransom to help recover,” the NCSC chief will say in a speech on Wednesday opening the annual CyberUK conference in Glasgow. – https://www.theguardian.com/technology/2026/apr/22/uk-hacktivist-attacks-at-scale-security-agency ; https://www.infosecurity-magazine.com/news/uk-faces-a-cyber-perfect-storm-ncsc/

OpenAI briefs feds and Five Eyes on new cyber product

(Sam Sabin – Axios) OpenAI has been briefing federal agencies, state governments and Five Eyes allies on the capabilities of its new cyber product over the past week, Axios has learned. Companies and agencies are clamoring to get their hands on the latest AI tools, whose advanced cybersecurity capabilities promise big gains for defenders and frightening advances for malicious hackers. OpenAI held an event in D.C. on Tuesday for approximately 50 cyber defense practitioners across the federal government to demo the capabilities of its new GPT-5.4-Cyber model, which it rolled out under a tiered access program last week. – https://www.axios.com/2026/04/22/openai-gpt-cyber-government-meeting

YouTube expands AI deepfake detection tools for celebrities

(DigWatch) YouTube has announced the expansion of its likeness detection technology to the entertainment industry, extending access beyond content creators to talent agencies, management companies and the individuals they represent. The move is part of a broader effort by the platform to address the growing misuse of AI to generate misleading or unauthorised videos of public figures. By extending the tool to entertainment industry stakeholders, YouTube is signalling that AI-driven impersonation is no longer treated as a niche creator issue but as a broader identity and rights problem. – https://dig.watch/updates/youtube-expands-ai-deepfake-detection-tools-for-celebrities
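YouTube has not published how the likeness detection works. A common building block for this kind of matching is comparing face embeddings by cosine similarity, sketched below with placeholder vectors and a hypothetical threshold; any production system would add an actual embedding model, many reference images per person, and human review.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder embeddings standing in for the output of a face-embedding
# model run on a reference photo and on a frame of an uploaded video.
reference = np.array([0.12, 0.85, -0.33, 0.41])
candidate = np.array([0.10, 0.80, -0.30, 0.45])

THRESHOLD = 0.9  # hypothetical decision threshold
if cosine_similarity(reference, candidate) >= THRESHOLD:
    print("possible likeness match: route for human review")
```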

Scoop: CISA lacks access to Anthropic’s Mythos

(Sam Sabin – Axios) The Cybersecurity and Infrastructure Security Agency doesn’t have access to Anthropic’s powerful new Mythos Preview model, even though some other government agencies are using it, two sources tell Axios. The country’s top cyber defense agency, tasked with helping to secure everything from banks to power plants, is being left behind at a time when the industries it works with are deeply concerned about AI-powered cyberattacks overwhelming their defenses. Anthropic decided against a public release of Mythos due to its unprecedented ability to quickly discover and exploit security vulnerabilities. Instead, Anthropic provided it to more than 40 companies and organizations that are now testing it and working to shore up their systems. CISA is not on that list, the sources say. – https://www.axios.com/2026/04/21/cisa-anthropic-mythos-ai-security

Anthropic investigates report of rogue access to hack-enabling Mythos AI

(Dan Milmo – The Guardian) The AI developer Anthropic has confirmed it is investigating a report that unauthorised users have gained access to its Mythos model, which it has warned poses risks to cybersecurity. The US startup made the statement after Bloomberg reported on Wednesday that a small group of people had accessed the model, which has not been released to the public because of its ability to enable cyber-attacks. “We’re investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments,” said Anthropic. – https://www.theguardian.com/technology/2026/apr/22/anthropic-investigates-report-of-rogue-access-to-hack-enabling-mythos-ai

Vodafone Business launches new AI and cybersecurity solutions to accelerate small business digital transformation in partnership with Google Cloud

(Google Cloud) Vodafone Business and Google Cloud have announced an expansion of their strategic partnership with two new solutions to equip small- and medium-sized businesses (SMBs) with advanced cybersecurity and agentic AI. – https://www.googlecloudpresscorner.com/2026-04-22-Vodafone-Business-Launch-New-AI-and-Cybersecurity-Solutions-to-Accelerate-Small-Business-Digital-Transformation-in-Partnership-with-Google-Cloud

NCSC Unveils SilentGlass, a Plug-In Device to Protect Monitors from Cyber-Attacks

(Infosecurity Magazine) The UK National Cyber Security Centre (NCSC) has unveiled a new technology designed to protect video connections from cyber-attacks. The device, dubbed SilentGlass, was launched on April 22 at CYBERUK, the UK government’s flagship annual cybersecurity conference. SilentGlass is a plug-and-play device designed to actively block anything unexpected or malicious between HDMI or DisplayPort connections and monitor screens. It is approved for use in even the most high-threat cybersecurity environments. The device has already been successfully deployed on government estates, and SilentGlass has now been released for anyone to buy and use. The NCSC has partnered with Goldilock Labs and Sony UK to manufacture and sell SilentGlass globally. – https://www.infosecurity-magazine.com/news/ncsc-silentglass-a-plugin-stop/

Defence/Intelligence/Warfare

Pentagon seeks funds for Golden Dome, drones, AI in largest-ever budget request

(Tanya Noury – Defense News) The Department of Defense on Tuesday unveiled a $1.5 trillion budget proposal for fiscal 2027 — a 42% year-over-year increase and the most expensive military outlay in modern history. “We’re facing one of the most complex and dangerous threat environments in our nation’s 250-year history,” Jules J. Hurst III, the under secretary of war and chief financial officer, told reporters at a briefing at the Pentagon. “Our adversaries are rapidly advancing capabilities across every warfighting domain: in the air, land, sea, space and cyberspace, while years of underinvestment has strained our industrial base”. “This is a generational investment in the United States military”, Hurst added. According to officials, President Donald Trump’s key priorities include investments in the “Golden Dome” — a multi-layered defensive shield intended to safeguard the American homeland — as well as in drone warfare, artificial intelligence, data infrastructure, and the defense industrial base. – https://www.defensenews.com/news/pentagon-congress/2026/04/21/pentagon-seeks-funds-for-golden-dome-drones-ai-in-largest-ever-budget-request/

Ukrainian Military Offers Lessons Learned to NATO

(Taras Kuzio – The Jamestown Foundation) Ukraine’s most important battlefield lessons have much to teach the North Atlantic Treaty Organization (NATO). Ukraine’s experience has shown how cheap drones can destroy high-value assets, highlighting urgent gaps in NATO preparedness. Battlefield experience in Ukraine shows that innovation, speed, and adaptability matter more than expensive legacy systems in modern warfare. Its forces update software in weeks, use decentralized procurement, and integrate civilians and industry into defense. Ukraine has become a leader in modern warfare—producing thousands of drones daily, pioneering sea-drone combat, and achieving high air-defense interception rates. Its tactical creativity underscores that future wars require whole-of-society mobilization, flexible doctrines, and scalable, low-cost technologies. – https://jamestown.org/ukrainian-military-offers-lessons-learned-to-nato-part-two/

Anduril announces partnership with Kraken for small USVs

(Patrick Dawson – Breaking Defense) Anduril is teaming up with UK-based Kraken Technology Group to make small unmanned surface vessels (USVs) for the US Navy at a time when the service is increasingly focused on unmanned tech, the companies announced today. “This partnership reflects Kraken’s commitment to supporting global maritime challenges with hardened operational capabilities at a critical point in history,” Kraken CEO Mal Crease said in a company press release. “Under this agreement Kraken will deliver low-cost, scalable and modular systems that are both reliable and effective.” – https://breakingdefense.com/2026/04/anduril-announces-partnership-with-kraken-for-small-usvs/

US Southern Command stands up autonomous unit

(Cristina Stassis – Defense News) U.S. Southern Command is standing up a new element aimed at connecting tactical missions to long-term outcomes with unmanned systems, the command announced Tuesday. The development of the Autonomous Warfare Command was mandated by SOUTHCOM Commander Gen. Francis L. Donovan in an effort to further support the Trump administration’s national security objectives and SOUTHCOM’s operational dominance, per the statement. Once fully operational, the new command will be tasked with engaging autonomous, semi-autonomous and unmanned platforms to “counter threats across domains.” The announcement did not specify when the new command will reach operational status. – https://www.defensenews.com/news/your-military/2026/04/21/us-southern-command-stands-up-autonomous-unit/

Saildrone announces new USV class aimed at anti-submarine warfare

(Cristina Stassis – Defense News) Saildrone, a maritime defense company, announced on Monday a new class of unmanned surface vessels designed for anti-submarine warfare operations. The company released the Saildrone Spectre design, a vessel 54 meters long and 250 metric tons that is capable of speeds up to 30 knots, making it the company’s “most capable” platform, Saildrone said. – https://www.defensenews.com/industry/techwatch/2026/04/21/saildrone-announces-new-usv-class-aimed-at-anti-submarine-warfare/

Why the US can’t copy Ukraine’s robot navy

(Patrick Tucker – Defense One) Ukraine’s sinking of much of Russia’s Black Sea Fleet is “case alpha” in finding new ways to use robots across land, sea, and air, the U.S. Navy’s assessment chief said Monday. But the United States can’t just copy Ukraine’s homework and apply it to the vast, well-observed Pacific, or even the Red Sea, where it’s now tasked with enforcing a naval blockade and “getting a lot of unmanned stuff thrown at us,” Rear Adm. Doug Sasse said Monday. The Navy last week took possession of its first Sea Hawk, a 145-ton unmanned trimaran. It will deploy as part of the Theodore Roosevelt strike group in the Pacific later this year, Sasse said at the Navy League’s Sea-Air-Space conference. By 2030, the Sea Hawk will be joined by “thousands” of small unmanned ships and “any number” of aerial drones in the Pacific alone, Capt. Garrett Miller, commodore of Surface Navy Development Group One, said at the conference. – https://www.defenseone.com/technology/2026/04/why-us-cant-copy-ukraines-robot-navy/412992/?oref=d1-featured-river-secondary

Frontiers

SpaceX nears deal with Cursor

(Madison Mills – Axios) SpaceX said Tuesday it has agreed to a deal with AI coding startup Cursor that could result in an acquisition or $10 billion investment. The deal underscores SpaceX CEO Elon Musk’s push to make his company into an AI powerhouse ahead of its potential IPO, which may be the largest in history. – https://www.axios.com/2026/04/21/spacex-ai-cursor-deal