Daily Digest on AI and Emerging Technologies (9 December 2025) – https://pam.int/daily-digest-on-ai-and-emerging-technologies-9-december-2025/
Daily Digest on AI and Emerging Technologies (10 December 2025) – https://pam.int/daily-digest-on-ai-and-emerging-technologies-10-december-2025/
Daily Digest on AI and Emerging Technologies (11 December 2025) – https://pam.int/daily-digest-on-ai-and-emerging-technologies-11-december-2025/
Governance
How the Meaning of ‘Publicly Accessible’ Shapes Researcher Data Rights Under the DSA
(Daphne Keller – Tech Policy Press) One of the main goals of the EU’s Digital Services Act (DSA) is to advance transparency about online platforms. Article 40 seeks to do so by providing researchers with access to data about Very Large Online Platforms and Search Engines (VLOPSEs). As discussed in a prior post, Article 40(4) establishes a slow, careful process for vetted academic researchers to access platforms’ internally held data. Article 40(12) complements this — and every other DSA transparency provision — by allowing a broader array of researchers to collect or scrape information that platforms are actually displaying to users in their public interfaces. This is an important enough requirement that it has been part of several Commission investigations against platforms, and was one of three grounds for the Commission’s €120 million enforcement action against X. Researchers may collect data under Article 40(12) only if platforms have already made it “publicly accessible.” The universe of information available to these researchers will depend on what “publicly accessible” means. If history is any guide, platforms will oppose research by arguing that some data — despite being freely visible to anyone who visits their sites or apps — is not “publicly accessible” in the sense of Art. 40(12). Similar disputes about lawful information “access” have derailed journalism and research on both sides of the Atlantic for years. The term “public” also has a confusing array of legal meanings in contexts ranging from news reporting to insider trading to copyright. The purpose of Article 40(12) would be defeated if similar uncertainty deterred the very research that lawmakers intended to unleash. The DSA provides other legal mechanisms to balance research goals with potentially competing policy priorities like data protection. “Publicly accessible” data for research should be defined broadly, given the DSA’s language, purpose, and overall design. This post examines the legal scope of available data under Article 40(12), beginning with legal analysis, continuing with a comparison to other relevant laws, and concluding with a review of specific data categories for research. – https://www.techpolicy.press/how-the-meaning-of-publicly-accessible-shapes-researcher-data-rights-under-the-dsa/
Can the Digital Markets Act Free Users’ Data in the AI Age?
(Eliot Bendinelli – Tech Policy Press) Europe’s Digital Markets Act (DMA) defines “gatekeepers” as large digital platforms that have a significant impact on the market, provide a “core platform service” which is an important gateway for business users to reach end-users, and enjoy an entrenched and durable position. Right now, no company providing an exclusively AI-powered service has been designated as a gatekeeper, and no AI assistant or AI agent has been designated as a core platform service. But with the DMA aiming to make digital markets ‘fairer and more contestable,’ and in light of the concentration of power that’s observable in the AI space, there is a strong chance that such designation will happen in the next few years. – https://www.techpolicy.press/can-the-digital-markets-act-free-users-data-in-the-ai-age/
AI’s data-centre gold rush is drowning in debt
(Cybernews) As AI fever has propelled global stocks to record highs, the data centres needed to power the technology are increasingly being financed with debt, adding to concerns about the risks. A UBS report last month said AI data centre and project financing deals surged to $125 billion so far this year, from $15 billion in the same period in 2024, with more supply from the sector expected to be pivotal for credit markets in 2026. “Public and private credit seems to have become a major source of funding for AI investments, and its rapid growth raised some concerns,” said Anton Dombrovskiy, fixed income portfolio specialist at T. Rowe Price. – https://cybernews.com/ai-news/five-debt-hotspots-ai-data-centre-boom/
India expands job access with AI-powered worker platforms
(DigWatch) India is reshaping support for its vast informal workforce through e-Shram, a national database built to connect millions of people to social security and better job prospects. The database works together with the National Career Service portal, and both systems run on Microsoft Azure. AI tools are now improving access to stable employment by offering skills analysis, resume generation and personalised career pathways. – https://dig.watch/updates/india-expands-job-access-with-ai-powered-worker-platforms
India moves toward mandatory AI royalty regime
(DigWatch) India is weighing a sweeping copyright framework that would require AI companies to pay royalties for training on copyrighted works under a mandatory blanket licence branded as the hybrid ‘One Nation, One Licence, One Payment’ model. – https://dig.watch/updates/india-moves-toward-mandatory-ai-royalty-regime
China heats up AI race by launching giant computing power pool
(Eglė Krištopaitytė – Cybernews) China has activated a massive artificial intelligence (AI) computing power pool, but it remains to be seen whether it will give the country a competitive advantage in the intense AI race. The large-scale, distributed AI-computing network, known as the Future Network Test Facility (FNTF), started operating last week, according to state-run Science and Technology Daily. The network connects 40 cities across China, with a total optical transmission length exceeding 34,175 miles (55,000 kilometres). According to Chinese media, the computing power pool formed via this network could achieve 98% of the efficiency of a single data centre. – https://cybernews.com/ai-news/china-giant-ai-network/
Protecting truth in the era of AI mediation
(John Coyne – ASPI The Strategist) The ritual is now familiar. A user calls Grok, the AI model used on social media platform X, into a political argument. Grok gives a mainstream, citation-driven answer. Instead of settling anything, it becomes fresh ammunition: one side posts it triumphantly; the other side turns its outrage on the AI, accusing it of bias, censorship or foreign influence. The thread then descends into a lengthy back-and-forth between users and the machine, often more hostile than the original disagreement. This reflects a broader pattern seen in online conflict: systems are instrumentalised not for learning, but to generate content for performative outrage. If governments, platforms and users fail to recognise this as an emerging information-security problem, we shouldn’t be surprised when our newest referee becomes an active participant in an already-fragile contest over reality. This AI-mediated epistemic conflict is no longer speculative; it’s already here. – https://www.aspistrategist.org.au/protecting-truth-in-the-era-of-ai-mediation/
When an AI Agent Says ‘I Agree,’ Who’s Consenting?
(Camillia Rida – Tech Policy Press) Silicon Valley companies promise that AI agents will soon play a substantial role in daily routines, particularly when it comes to commerce. The promise is that agents operating within defined parameters can take on the tedious work of comparing, booking, renewing, and paying for products and services. But as users delegate, decision-making slides from “I choose” to “the AI agent chooses for me.” The convenience comes with risks, including to privacy and autonomy. This shift is no longer theoretical. In early November 2025, Amazon sued Perplexity, alleging that its “Comet” agent improperly accessed customer accounts and disguised automated browsing as human activity on Amazon’s site. Perplexity disputes the claims and casts agents as pro-consumer tools—tireless shoppers that surface the best deals and give people their time back. There’s the paradox: agents promise consumer “empowerment” (better information, less friction), yet without guardrails, they can narrow our choices (default nudges, closed pathways, dependence on a single interface). Even Amazon—experimenting with its own shopping assistants—argues for agent-to-platform interactions “on its terms,” a sign that this fight is as much about consumer protection as it is about control of the channels. Who really consents when an AI agent clicks? Who steers the choice? Who is accountable when things go wrong? And can European rules discipline AI agents without smothering innovation? – https://www.techpolicy.press/when-an-ai-agent-says-i-agree-whos-consenting/
How Civil Society Is Fighting to Protect Digital Rights Amid Global Crisis
(Luisa Ortiz Pérez, Rachael Kay – Tech Policy Press) In his work, Dr. Tabani Moyo, an advocate for freedom of expression at the Media Institute of Southern Africa (MISA), confronts the daily reality of trying to protect freedom of expression and privacy in a “polycrisis era” marked by conflict, climate change, shrinking solidarity, and chronic underfunding for human rights organizations. He has witnessed the consequences firsthand: violent post-election periods in Mozambique and Tanzania, internet shutdowns masking abuses in multiple countries, and declining regional cooperation compared to just a few years ago. Yet, the work carries on. “What we have been is hanging in there,” he said during a session hosted at the Mozilla Festival in Barcelona titled “Working Under Pressure: Upholding Free Expression, Digital Access, and Online Safety in a Time of Global Uncertainty.” During the conversation, which was curated by Vita Activa and brought together practitioners from across the globe to examine how shrinking resources, shifting alliances, and rising technical threats are reshaping the landscape of human rights work, Dr. Moyo warned that without strategic prioritization and renewed commitment from global allies and philanthropies, even the remaining support structures that make his organization’s work possible are at risk of collapse. MISA is not alone. The discussion in Barcelona, grounded in insights from the Human Rights Funders Network’s (HRFN) 2025 report, Funding at a Crossroads, highlighted an urgent reality: at a time when digital repression is escalating, global funding for internet freedom and human rights advocacy is contracting at historic levels. As advocates are forced to prioritize and narrow their focus, the very structure of the global human rights movement is becoming increasingly lean and stripped to its essentials. Only urgent collective action by the global community of advocates, progressive allies, and philanthropists will sustain the fragile ecosystem of organizations committed to this work. – https://www.techpolicy.press/how-civil-society-is-fighting-to-protect-digital-rights-amid-global-crisis/
Legislation
Why Trump’s AI EO Will be DOA in Court
(Olivier Sylvain – Tech Policy Press) The consensus view across the United States is that artificial intelligence companies should be more accountable for the ways in which their powerful models and services impact consumers. Algorithmic systems enable unfair price discrimination in housing and on ride hailing apps. AI-generated deepfakes fuel the exploitation of young women and efforts to confuse voters. Large language models drive people to delusion, depression and self-harm. These threats have done a remarkable thing this year: lawmakers in red and blue states as varied as California, Colorado, Florida, Michigan, New York, Texas and Utah agree that it is time for policymakers to redress the unique consumer safety risks that AI-powered services pose. All year, however, President Donald Trump has been threatening to block such laws unilaterally. Never mind the characteristic all-caps syntax and gratuitous race-baiting focus on “DEI ideology” and “Woke AI.” His language, more importantly, parrots the pro-innovation rhetoric of his Big Tech allies. Finally, this week, the White House published an executive order that purports to single-handedly stop the states in their tracks in the name of innovation and global competitiveness. – https://www.techpolicy.press/why-trumps-ai-eo-will-be-doa-in-court/
Following Trump Executive Order on AI, Congress Must Act
(Kevin Frazier – Tech Policy Press) The executive order on AI signed Thursday by President Donald Trump aims to ensure that the federal government takes the lead on AI policy questions that implicate the nation’s economic and national security. How the order is implemented and whether the newly-created AI Litigation Task Force succeeds in that effort is to be determined. In any event, Congress must now take the lead on regulating AI. The United States constitutional order was devised with the expectation that Congress would address matters of national significance. Each day of inaction is tantamount to delegating AI governance to Sacramento and subjecting 300 million Americans to laws enacted without their consent. What’s more, the longer AI labs are forced to comply with state laws that implicate their ability to train, test, and deploy their models—laws often grounded in concerns about existential risk—the less freely they will be able to innovate and experiment, which, paradoxically, is likely the best approach to uncovering how to maximize the benefits of AI and minimize its risks. When Congress acts, it should adhere to three principles: experimentation, adoption, and information sharing. In practice, this looks like legislation that permits labs to deploy new models, rewards individuals and entities for integrating AI, and facilitates information sharing by both developers and deployers. These principles will facilitate a virtuous cycle of technological development that at once ensures the US continues to lead in AI innovation while not imposing undue risk on the public. – https://www.techpolicy.press/following-trump-executive-order-on-ai-congress-must-act/
Trump signs executive order on ‘national framework’ for AI regulation
(Suzanne Smalley – The Record) President Donald Trump has issued an executive order that seeks to create a “national framework” for AI by making it difficult for states to regulate the technology. The order blocks federal broadband funding from states that enforce “onerous” AI laws, creates an AI Litigation Task Force at the Department of Justice to challenge them, and orders the secretary of Commerce to review whether they should be challenged on constitutional grounds or whether they require AI models to “alter truthful outputs.” “United States AI companies must be free to innovate without cumbersome regulation. But excessive State regulation thwarts this imperative,” the order says. “My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones.” – https://therecord.media/trump-executive-order-ai-national-framework
Trump signs order blocking individual US states from enforcing AI rules
(DigWatch) US President Donald Trump has signed an executive order aimed at preventing individual US states from enforcing their own AI regulations, arguing that AI oversight should be handled at the federal level. Speaking at the White House, Trump said a single national framework would avoid fragmented rules, while his AI adviser, David Sacks, added that the administration would push back against what it views as overly burdensome state laws, except for measures focused on child safety. – https://dig.watch/updates/trump-signs-order-blocking-individual-us-states-from-enforcing-ai-rules
Courts and Litigation
Reddit begins legal battle with Australia over social media age law
(Cybernews) Message board website Reddit on Friday filed a lawsuit in Australia’s highest court seeking to overturn the country’s social media ban for children, calling it an intrusion on free political discourse and setting the stage for a protracted legal battle. – https://cybernews.com/privacy/reddit-legal-battle-australia-social-media-age-law/
Geostrategies
Edge AI gains momentum in Europe’s innovation strategy
(DigWatch) Europe is accelerating efforts to build digital sovereignty through high-performance technologies that do not increase power consumption, with AI now shifting from a smartphone-centric model to an agentic paradigm supported by hybrid cloud-edge architectures. Policymakers and industry leaders see edge AI as central to this shift, enabling sovereign AI ecosystems and distributed intelligence across sectors. These efforts aim to support a competitive and sustainable digital economy while deepening international collaboration. – https://dig.watch/updates/edge-ai-gains-momentum-in-europes-innovation-strategy
India and the UAE Could Define ‘Eastern’ Ethics for AI
(Hindol Sengupta, Hebatallah Adam – Observer Research Foundation) Ethical models for artificial intelligence are getting divided between Eastern and Western frameworks. India and the UAE are uniquely positioned to drive Eastern ethics in AI. – https://www.orfonline.org/expert-speak/india-and-the-uae-could-define-eastern-ethics-for-ai
Big Tech boosts India’s AI ambitions amid concerns over talent flight and limited infrastructure
(DigWatch) Major announcements from Microsoft ($17.5bn) and Amazon (over $35bn by 2030) have placed India at the centre of global AI investment trends, offering momentum at a time when analysts frame Indian markets as a ‘hedge’ against a potential global AI bubble. While India has rapidly adopted AI and attracted substantial funding for data centres and chip manufacturing, including a new collaboration between Intel and Tata Electronics, the country remains a follower rather than a frontrunner in sovereign AI capabilities. – https://dig.watch/updates/big-tech-boosts-indias-ai-ambitions-amid-concerns-over-talent-flight-and-limited-infrastructure
Australia all at sea on submarine cable risks
(Jocelinn Kang – Lowy The Interpreter) Google is laying new submarine cables along Australia’s northern and western approaches. The routes will link to a planned Google AI data centre on Christmas Island, likely building on an existing cloud deal with the Australian Department of Defence. The cables also connect to naval base HMAS Stirling, 35 kilometres south-west of Perth, which will host AUKUS partners and nuclear-powered submarines. Together, these moves reflect a shift in Defence planning as senior officials warn that the regional security environment is worsening. In an address last month, Australia’s Chief of Navy Mark Hammond described seabed cables as “our lifelines”, noting that their loss would pose “an existential threat to our island and to our people”. His assessment reflects a growing recognition that the integrity of these cables underpins Australia’s economy, military networks and connections to the world. – https://www.lowyinstitute.org/the-interpreter/australia-all-sea-submarine-cable-risks
Security and Surveillance
China’s AI use for cyber espionage shifts cyber focus from detection to trust
(Gil Baram – ASPI The Strategist) The question facing security and technology leaders is no longer whether adversaries will deploy AI agents against their environment. Now, those leaders must ask whether their trust architecture, access models and identity systems are ready for a world where breakout time—the time taken for an attacker to move from initial access to lateral movement through a digital system—has vanished, and machine-speed attackers are the default assumption. Anthropic’s 13 November report marked a significant turning point in cybersecurity. Their investigation into the GTG-1002 campaign—assessed with high confidence as a Chinese state-sponsored operation—confirmed that AI-driven espionage is no longer hypothetical or in development. It is active and already targeting large technology firms, financial institutions, chemical manufacturers and government agencies worldwide. Anthropic describes it as the first documented case of a large-scale cyberattack carried out with minimal human involvement. The finding is important, but it should not come as a surprise. – https://www.aspistrategist.org.au/chinas-ai-use-for-cyber-espionage-shifts-cyber-focus-from-detection-to-trust/
Germany summons Russian ambassador over cyberattack, election disinformation
(Daryna Antoniuk – The Record) Germany on Friday summoned Russia’s ambassador after accusing Moscow of carrying out a cyberattack on the country’s air traffic control authority and conducting a disinformation campaign ahead of February’s general election, the Foreign Office said. Foreign Ministry spokesperson Martin Giese told reporters that Berlin had “clear evidence” linking an August 2024 cyberattack on Deutsche Flugsicherung — the state-owned company responsible for German air traffic control — to APT28, or Fancy Bear, a hacking group tied to Russia’s military intelligence agency, the GRU. Giese added that Russia had also sought to influence and destabilize the federal election through a disinformation operation known as Storm 1516, a threat actor active since at least 2023 and previously involved in efforts to discredit Ukraine and stir discord across Europe. The group has also targeted elections in the U.S. state of Georgia and elsewhere in the United States. – https://therecord.media/germany-summons-russian-ambassador-cyberattack-disinformation
Hamas-affiliated APT targeting government agencies in the Middle East, Morocco
(Jonathan Greig – The Record) A hacking group allegedly affiliated with Palestinian armed group Hamas is accused of using malware-laden documents to breach government and diplomatic entities tied to Oman, Morocco and the Palestinian Authority. Palo Alto Networks’ Unit 42 issued a report on Thursday about a group it refers to as Ashen Lepus. A spokesperson for the company told Recorded Future News that it attributed the group to Hamas based on years of profiling the group’s activity, which the spokesperson said “shows a consistent alignment with Hamas’s strategic interests.” Unit 42 said the recent activity involved a new strain of malware, dubbed AshTag, that has allowed the group to steal information from key entities across the Middle East. The report said Ashen Lepus has demonstrated increasing sophistication since 2020, developing more advanced hacking tactics that include infrastructure obfuscations and other new tools. – https://therecord.media/hamas-apt-targeting-government-agencies
Hackers hijack dozens of US state .gov websites to push AI porn
(Stefanie Schappert – Cybernews) At least 38 US state government .gov sites have been hijacked since November, including sites in Nebraska, Indiana, Hawaii, California, Washington, and Kansas. A third-party government web hosting company says the hackers exploited the sites’ public form uploads. Google search results for the affected sites were redirected to show ads for AI porn how-tos, sex chatbots, sex toys, rap videos, gaming cheats, and more. – https://cybernews.com/news/ai-porn-state-gov-websites-hijacked/
“Hacktivist” CyberVolk using Telegram-based bots for ransomware campaigns (with a few glitches)
(Ann-Marie Corvin – Cybernews) The resurfaced threat group is using bots via Telegram to manage command-and-control, marketing, sales, and affiliate support. While the group, thought to have originated in India, brands itself as pro-Russia and hacktivist, its actions are starting to resemble those of a medium-to-large enterprise. Thanks to Telegram’s enforcement actions, the group remained inactive for most of this year, but it now appears to be back with a vengeance, enabling affiliate buyers to interact with its ransomware through automated bots on the platform. – https://cybernews.com/cybercrime/hacktivists-cybervolk-telegram-bots-ransomware/
Chinese state hackers attended Cisco cybersec training, researcher claims
(Paulina Okunytė – Cybernews) Two Chinese hackers accused of running one of Beijing’s biggest cyber-espionage campaigns may have first learned their craft in a beginner-level Cisco training program. According to new findings from SentinelLabs researcher Dakota Cary, Yu Yang and Qiu Daibing, both alleged members of the Chinese state-sponsored hacking group known as Salt Typhoon and accused of participating in its long-running espionage campaign, may have once been students in a global training program run by Cisco. – https://cybernews.com/security/cisco-training-china-hackers-salt-typhoon/
Emergency fixes deployed by Google and Apple after targeted attacks
(Pierluigi Paganini – Security Affairs) Apple and Google have both pushed out urgent security updates after uncovering highly targeted attacks against an unknown number of users. The attacks abused zero-day vulnerabilities in their software. The campaign appears to involve nation-state actors and commercial spyware vendors, with a focus on specific high-value individuals rather than mass exploitation. – https://securityaffairs.com/185628/hacking/emergency-fixes-deployed-by-google-and-apple-after-targeted-attacks.html
NCSC Plugs Gap in Cyber-Deception Guidance
(Phil Muncaster – Infosecurity Magazine) Cyber deception can be a great way to detect novel threats and uncover hidden compromises, but organizations face several barriers and risks associated with such programs, the National Cyber Security Centre (NCSC) has warned. The NCSC yesterday shared lessons learned from a pilot project it’s running under the Active Cyber Defence (ACD) 2.0 program, featuring 121 UK organizations and 14 cyber-deception solution providers. – https://www.infosecurity-magazine.com/news/ncsc-plugs-gap-cyber-deception/
Hired to Hack: Protecting Your Business from Remote Recruitment Scams
(Jonathan Armstrong – Infosecurity Magazine) If you are responsible for protecting your organization’s systems, data, or operations, the biggest threat may not come from hackers outside, but from the very candidates you hire. With remote work becoming the norm, companies can access talent from around the world. At the same time, this opens the door to candidates using false identities, fake CVs, or other tactics to secure positions. Even well-intentioned organizations can inadvertently hire individuals who pose serious financial, operational, or legal risks. Recent reports reveal thousands of covert actors, including North Korean IT operatives, exploiting remote job listings to infiltrate Western companies. For anyone responsible for security, compliance, or HR, understanding these risks is essential. – https://www.infosecurity-magazine.com/opinions/protecting-business-remote/
Frontiers and Markets
Humanoids Summit Enters Asia with Japan Event in May 2026
(AI Insider) The Humanoids Summit will expand to Asia with its first Tokyo edition on May 28–29, 2026 at the Takanawa Convention Center, reflecting growing Japanese momentum around humanoid robots, embodied AI, and commercial deployment. Professor Hiroshi Ishiguro of Osaka University will deliver the opening keynote and present a live demonstration of his Geminoid robot, highlighting Japan’s leadership in lifelike humanoid research. The Tokyo event follows a 2,000-attendee Silicon Valley summit and positions Japan as a key hub alongside Silicon Valley and London in the global humanoid robotics ecosystem. – https://theaiinsider.tech/2025/12/12/humanoids-summit-enters-asia-with-japan-event-in-may-2026/
Google Opens Its Advanced Willow Chip to UK Researchers in Search For Practical Uses
(Quantum Insider) Google and the UK government will give researchers access to the company’s Willow quantum chip to identify potential real-world applications, according to the BBC. Scientists can submit proposals and work with Google and the National Quantum Computing Centre to design and run experiments on the hardware. The partnership comes as global competition in quantum computing intensifies and the UK increases investment in the sector. – https://thequantuminsider.com/2025/12/12/google-opens-its-advanced-willow-chip-to-uk-researchers-in-search-for-practical-uses/
Multimodal AI reveals new immune patterns across cancer types
(DigWatch) A recent study examined the capabilities of GigaTIME, a multimodal AI framework that models the tumour immune microenvironment by converting routine H&E slides into virtual multiplex immunofluorescence images. Researchers aimed to solve long-standing challenges in profiling tumour ecosystems by using a scalable and inexpensive technique instead of laboratory methods that require multiple samples and extensive resources. The study focused on how large image datasets could reveal patterns of protein activity that shape cancer progression and therapeutic response. – https://dig.watch/updates/multimodal-ai-reveals-new-immune-patterns-across-cancer-types
Eurotech and PNY move to accelerate high-performance edge computing
(DigWatch) Eurotech and PNY Technologies have signed a strategic MoU intended to accelerate high-performance edge AI deployments across global markets. The agreement joins Eurotech’s secure industrial platforms with PNY’s NVIDIA-powered acceleration stack to support real-time computing. The partnership goes beyond a standard supplier arrangement, creating a coordinated ecosystem supporting edge AI software partners. Eurotech and PNY plan to co-design and promote integrated offerings that combine hardware, software and services. – https://dig.watch/updates/eurotech-and-pny-move-to-accelerate-high-performance-edge-computing
Serve Robotics Reaches 2K Robots in its Delivery Fleet
(AI Insider) Serve Robotics has surpassed its 2025 target by deploying more than 2,000 autonomous sidewalk delivery robots, creating the largest such fleet in the U.S. as demand for low-cost, sustainable last-mile delivery accelerates. The company has expanded rapidly across major U.S. markets including Los Angeles, Atlanta, Dallas-Fort Worth, Miami, Chicago, and Alexandria, Va., growing its fleet twentyfold this year through partnerships with platforms such as Uber Eats and DoorDash. Serve’s Level 4 autonomous, zero-emission robots operate with a reported 99.8% completion rate, positioning the company to expand beyond restaurant delivery into groceries, small parcels, and return logistics. – https://theaiinsider.tech/2025/12/13/serve-robotics-reaches-2k-robots-in-its-delivery-fleet/
Disney and OpenAI Reach $1B Deal to Bring Characters from Across Disney’s Brands to Sora
(AI Insider) The Walt Disney Company has reached a three-year licensing and investment agreement with OpenAI, making Disney the first major content partner on Sora, OpenAI’s generative AI video platform, and pairing licensed characters with a $1 billion equity investment. The deal allows Sora and ChatGPT Images to generate short videos and images using more than 200 Disney, Pixar, Marvel, and Star Wars characters and environments, while excluding actor likenesses and voices and enabling curated distribution on Disney+. Disney will also become a major OpenAI customer by deploying its APIs and ChatGPT internally and for consumer products, with both companies emphasizing responsible AI use, creator protections, and safety controls. – https://theaiinsider.tech/2025/12/12/disney-and-openai-reach-1b-deal-to-bring-characters-from-across-disneys-brands-to-sora/
ALM Ventures Announces New $100M Fund Focused on Humanoid Robots, Embodied AI, and Spatial Intelligence
(AI Insider) ALM Ventures has launched a $100 million early-stage fund focused on humanoid robots, embodied AI, and spatial intelligence, targeting seed and pre-seed investments as physical AI approaches commercial viability. The fund is designed to build concentrated early ownership and support follow-on financing, with an investment thesis spanning humanoid platforms, motion systems, spatial reasoning, world models, and deployment infrastructure. During formation, ALM Ventures made ten initial investments across the humanoid technology stack, including Sanctuary AI and several robotics and embodied intelligence startups, while expanding its ecosystem through the global Humanoids Summit series. – https://theaiinsider.tech/2025/12/12/alm-ventures-announces-new-100m-fund-focused-on-humanoid-robots-embodied-ai-and-spatial-intelligence/
Saviynt Secures $700M at Approximately $3B Valuation in KKR-Led Round to Establish Identity Security as the Foundation for the AI Era
(AI Insider) Saviynt raised $700 million in Series B growth funding at a ~$3 billion valuation to scale its AI-driven identity security platform across human, machine, and AI-agent identities. Its platform unifies IGA, PAM, AAG, ISPM, and access governance into a single system designed for cloud and AI-powered enterprises, addressing the surge of non-human and AI-generated identities. Funding will accelerate global expansion, deepen AI capabilities, and enhance integrations as identity security becomes foundational to enterprise-scale AI adoption. – https://theaiinsider.tech/2025/12/12/saviynt-secures-700m-at-approximately-3b-valuation-in-kkr-led-round-to-establish-identity-security-as-the-foundation-for-the-ai-era/
Marketing Evolution Announces New Investment Led by Insight Partners to Power AI-Ready Marketing Data for the Agentic Era
(AI Insider) Marketing Evolution secured new funding from Insight Partners to accelerate its shift from analytics provider to AI-ready marketing data infrastructure leader. Its Mevo platform unifies fragmented marketing data into a continuously learning, explainable intelligence layer that powers ROI optimization and supports predictive and generative AI. The investment will fuel product innovation, go-to-market expansion, and the 2026 launch of a next-generation enterprise data platform for AI-driven marketing operations. – https://theaiinsider.tech/2025/12/12/marketing-evolution-announces-new-investment-led-by-insight-partners-to-power-ai-ready-marketing-data-for-the-agentic-era/
1X Announces Strategic Partnership to Make up to 10,000 Humanoid Robots Available to EQT’s Global Portfolio
(AI Insider) 1X has formed a strategic partnership with private equity firm EQT to accelerate the commercial rollout of its NEO humanoid robot, marking a step toward large-scale deployment of general-purpose humanoids across multiple industries. The companies intend to facilitate access to up to 10,000 humanoid robots across EQT’s global portfolio companies between 2026 and 2030, with pilots starting in the United States in 2026 and potential expansion into Europe and Asia. The partnership targets use cases including logistics, warehousing, manufacturing, facility operations, and healthcare, positioning humanoid robots as tools to address labor shortages, improve safety, and support workforce transformation. – https://theaiinsider.tech/2025/12/12/1x-announces-strategic-partnership-to-make-up-to-10000-humanoid-robots-available-to-eqts-global-portfolio/
fal Raises $140M in Series D Led by Sequoia, with Major Participation from Kleiner Perkins and New Investment from Alkeon Capital and NVentures (NVIDIA’s venture capital arm) to Accelerate the Future of Real-Time Generative Media
(AI Insider) fal raised $140 million in a Sequoia-led Series D as demand for its real-time generative-media infrastructure surges, following rapid consecutive fundraises earlier in 2025. The platform now serves billions of monthly generative assets across image, video, audio, and 3D, powering millions of developers and hundreds of enterprise teams. Funding will accelerate hiring, expand global infrastructure, and advance new product lines as fal positions itself as the foundational layer for real-time generative media at scale. – https://theaiinsider.tech/2025/12/12/fal-raises-140m-in-series-d-led-by-sequoia-with-major-participation-from-kleiner-perkins-and-new-investment-from-alkeon-capital-and-nventures-nvidias-venture-capital-arm-to-accelerate-the/
Safebooks AI Announces $15M Funding Round to Automate Revenue Data Integrity for Enterprise Finance Teams
(AI Insider) Safebooks emerged from stealth with $15 million in seed funding and launched its Agentic Revenue Integrity (ARI) platform, an AI-driven automation layer for quote-to-revenue operations. ARI continuously reconciles financial data across systems, reads documents in any format, detects discrepancies in real time, and automates remediation to eliminate manual workloads. The platform has already monitored over $40 billion in transactions, giving finance teams unified, audit-ready visibility and transforming revenue assurance into a proactive, continuous process. – https://theaiinsider.tech/2025/12/12/safebooks-ai-announces-15m-funding-round-to-automate-revenue-data-integrity-for-enterprise-finance-teams/
Assaia Secures $26.6M in Series B Funding to Enhance Global AI Leadership in Airport Operations
(AI Insider) Assaia raised $26.6 million in a Series B round to scale its AI platform, which optimizes aircraft turnarounds and apron operations at major airports worldwide. The funding will support global expansion and the launch of StandManager, an AI system that improves gate and stand assignments to boost operational efficiency. Growing industry demand for intelligent automation and Armira’s strategic backing position Assaia to address rising aviation traffic, labor constraints, and efficiency challenges. – https://theaiinsider.tech/2025/12/12/assaia-secures-26-6m-in-series-b-funding-to-enhance-global-ai-leadership-in-airport-operations/
Resemble AI Closes $13M in Funding From Sony Innovation Fund, Okta Ventures and Others to Tackle AI-Generated Threats as Deepfake Cyberattacks Surge Against Global Enterprises
(AI Insider) Resemble AI raised $13 million to expand its real-time deepfake detection platform, which secures enterprise generative AI across audio, video, image, and text. Its DETECT-3B Omni model delivers industry-leading accuracy across 38+ languages, while the new Resemble Intelligence platform adds explainability powered by Gemini 3. Funding will support global expansion as organizations face rising deepfake-driven fraud and seek stronger verification tools to protect identity, trust, and revenue. – https://theaiinsider.tech/2025/12/12/resemble-ai-closes-13m-in-funding-from-sony-innovation-fund-okta-ventures-and-others-to-tackle-ai-generated-threats-as-deepfake-cyberattacks-surge-against-global-enterprises/
TomNext Raises Funding to Build the Intelligence Layer for Private Markets
(AI Insider) TomNext emerged from stealth with new investment to launch its AI-powered workflow platform for LPs, addressing inefficiencies in private-market investing. The platform structures deal data, streamlines diligence, and improves visibility across private equity, credit, venture, and real-assets portfolios, with plans to add tokenized execution capabilities. Funding will support scaling TomNext as critical infrastructure for investors facing fragmented data, manual workflows, and rising operational complexity. – https://theaiinsider.tech/2025/12/12/tomnext-raises-funding-to-build-the-intelligence-layer-for-private-markets/