Weekly Digest on AI and Emerging Technologies (9 February 2026)

Daily Digest on AI and Emerging Technologies (3 February 2026) https://pam.int/daily-digest-on-ai-and-emerging-technologies-3-february-2026/

Daily Digest on AI and Emerging Technologies (4 February 2026) https://pam.int/daily-digest-on-ai-and-emerging-technologies-4-february-2026/

Daily Digest on AI and Emerging Technologies (5 February 2026) https://pam.int/daily-digest-on-ai-and-emerging-technologies-5-february-2026/

Daily Digest on AI and Emerging Technologies (6 February 2026) https://pam.int/daily-digest-on-ai-and-emerging-technologies-6-february-2026/


Governance

Understanding Global AI Governance Through a Three-Layer Framework

(Cedric (Yehuda) Sabbah, Moshe Uziel – Lawfare) “If the 20th century ran on oil and steel, the 21st century runs on compute and the minerals that feed it.” So began, on Dec. 11, 2025, the Trump administration’s Pax Silica Initiative, a commitment between the United States and eight partner countries to work together on “securing strategic stacks of the global technology supply chain.” Two days prior, the Linux Foundation announced the formation of the Agentic AI Foundation (AAIF), a group of artificial intelligence (AI) companies—including Amazon, Google, Microsoft, OpenAI, and Cloudflare—committed to “lay[ing] the groundwork for a shared ecosystem of tools, standards, and community-driven innovation” for agentic AI, meaning AI tools that can perform a series of tasks autonomously. These initiatives add to an already fragmented AI governance landscape, with new bodies and working groups emerging periodically and international organizations producing an increasing number of normative texts and policy papers. Since 2024 alone, the Council of Europe has finalized the Convention on Artificial Intelligence and a human rights risk assessment methodology for AI; the Organization for Economic Co-operation and Development (OECD) has updated its trustworthy AI principles and issued a report with recommendations on the use of AI in government; the U.S. government has launched Pax Silica; the AI Action Summit of 2025 has produced a statement on inclusive and sustainable AI; and the United Nations has established an “Independent International Scientific Panel on Artificial Intelligence” and a “Global Dialogue on Artificial Intelligence Governance.” With this exponential growth in global AI policy output comes a risk of duplication of initiatives, substantive overlap, interoperability challenges, and even potential contradictions of policy and norms.
The sheer number of bodies and initiatives also makes it challenging for AI actors to determine where to best engage in global AI governance in a way that leads to lasting impact. The United Nations has recently recognized the challenges of engaging in this context, emphasizing the need for “multistakeholder AI governance,” yet without specifying what this means in practice, resulting in greater confusion. In an attempt to make sense of the current AI landscape, we, the authors, propose a multilayered framework to conceptualize global AI governance based on the widely used three-layer framework for internet governance. It is modeled on common framings of the “AI stack,” specifically the hardware, software, data, and applications forming part of the AI supply chain, as well as its underlying material and energy components. This piece does not purport to establish a definitive framing. Indeed, such a framing would be ill-advised, given the dynamic nature of the field. Nonetheless, we hope it can provide some guidance and direction to better navigate the global AI landscape. – https://www.lawfaremedia.org/article/understanding-global-ai-governance-through-a-three-layer-framework

AI Disclosure Labels Risk Becoming Digital Background Noise

(Muhammad Irfan – Tech Policy Press) The next wave of synthetic media policy is racing toward a predictable cliff, not because regulators are ignoring deepfakes, but because the public will soon be flooded with so many “AI” labels that most users risk no longer noticing them. Labels are visual or textual indicators that identify content as AI-generated or altered. Essentially, they are signals placed on digital content so users know it was produced, modified, or influenced by AI. When signals become constant, attention collapses: individuals habituate to repeated warnings and cues over time, responding less even to the important ones. The label fades into the interface. At the moment disclosure matters most, during an election, a breaking news event, or a coordinated harassment campaign, the warning lands on eyes trained to scroll past it. Europe is writing rules that will shape global practice, and the timeline is unusually specific. In December 2025, the European Commission released a first draft of a Code of Practice on marking and labeling AI-generated content for public consultation, with feedback due January 23, 2026. The process anticipates a second draft by mid-March 2026, followed by a final Code by June 2026. These steps come ahead of transparency obligations becoming applicable on August 2, 2026. Europe is not acting in a vacuum. In India, the so-called “Grok undressing” controversy pushed regulators into an enforcement posture. The Ministry of Electronics and Information Technology sent X a letter citing due diligence obligations under existing intermediary rules. Analysts argued the episode exposed gaps in how current platform law handles AI-driven synthetic harms. The United States is moving toward a patchwork. There is no single federal disclosure rule for AI content in media and advertising.
Some states have passed or are considering narrower requirements, especially around political ads and certain chatbot interactions. Platforms then layer their own disclosure rules on top, creating uneven expectations for users and creators. This is the moment to correct a recurring design assumption. Labeling is being treated as a compliance deliverable, something satisfying if a disclosure exists somewhere. In practice, labeling is a user experience and behavioral design problem. A label only helps if people notice it, understand it, and do not draw the wrong conclusion from it. If transparency fails at the interface layer, the best technical standards will still produce civic disappointment. – https://www.techpolicy.press/ai-disclosure-labels-risk-becoming-digital-background-noise/

Geostrategies

EU’s Digital Sovereignty Depends On Investment In Open-Source And Talent

(Amandine Le Pape, Nicholas Gates, Johan Linåker, Peter Neuhäusler, Denilton Luiz Darold, Timo Väliharju – Tech Policy Press) The United States and China are investing billions into developing and maintaining open, scalable, and strategically vital digital infrastructure, securing their technological independence through different but equally determined approaches. While these two superpowers wage digital geopolitical battle, Europe is falling behind, caught between these different but competing visions for our technological future and a lack of competitive local industry. We argue that Europe must think differently and invest where it matters, leveraging its strengths, and open technologies are the place to look. While Europe does not have the tech giants of the US and China, it possesses a huge pool of innovation and human capital, as well as a small army of capable and efficient technology service providers, start-ups, and SMEs. Many of these are already working with open source, while many more are well-positioned to do so. – https://www.techpolicy.press/eus-digital-sovereignty-depends-on-investment-in-opensource-and-talent/

Another Misstep in U.S.-China Tech Security Policy

(Justin Sherman – Lawfare) On Jan. 23, the Wall Street Journal reported that the Trump administration has pushed out two key officials at the Commerce Department’s Bureau of Industry and Security (BIS)—specifically, in its Office of Information and Communications Technology and Services (ICTS). The Journal rightfully called the departures “the latest dismissals of key personnel working on national security issues tied to Beijing.” Certainly, the office in question may be less known outside of technology and national security circles. But the impact of its work to date—and the potential impact of its work in the future, if the office were to be appropriately staffed, resourced, and operated—is significant. Its sidelining is also the latest in a series of regulatory- and staffing-focused changes since January 2025 that have either shut down or effectively undermined the U.S. government’s ability to bolster national security protections for the technology supply chain. This piece outlines the history of the ICTS office, its authorities, and its key actions to date. It then argues that the recent personnel moves must be contextualized within broader U.S. shifts in the past year. These include rollbacks of key cybersecurity regulations, staffing cuts at other technology- and national security-focused agencies, and the U.S. government’s expressed willingness to let certain national security and regulatory authorities sit on a shelf. In total, these decisions represent the collective weakening of a regulatory apparatus configured to address critical national security risks to the U.S. technology ecosystem—with effects that, in many cases, will be difficult to quickly unravel or sufficiently mitigate. – https://www.lawfaremedia.org/article/another-misstep-in-u.s.-china-tech-security-policy

European social media alternatives exist. Why don’t they have more users?

(Eglė Krištopaitytė – Cybernews) There are already European alternatives to American social media platforms Instagram, Snapchat, X, and LinkedIn. But why aren’t they taking off? The announcement that Europeans are developing W, their own alternative to Elon Musk’s X, has drawn great interest across the continent, where calls for greater digital independence from the US have been especially loud since Donald Trump took office. Facebook, Instagram, and LinkedIn, all owned by the US company Meta, dominated the European market in 2025. Since then, China’s TikTok, which had 200 million users in Europe last year, has been acquired by American investors. Social media platforms built by European companies already exist, but some have failed – at least for now – to take off or stay afloat. – https://cybernews.com/tech/european-social-media/

Legislation

The FTC’s AI Preemption Authority is Limited

(Andy Jung – Tech Policy Press) Can the Trump administration preempt state consumer protection laws governing AI? The Federal Trade Commission (FTC) will soon try, but the agency’s authority to preempt state laws is limited. Last December, President Trump issued an Executive Order (EO) titled “Ensuring a National Policy Framework for Artificial Intelligence.” The EO directs the Chairman of the FTC to issue a policy statement “explain[ing] the circumstances under which State laws that require alterations to the truthful outputs of AI models are preempted by the Federal Trade Commission Act’s prohibition on engaging in deceptive acts or practices affecting commerce.” Pursuant to the order, the agency has until March 11 to issue the statement. Section 5 of the FTC Act prohibits unfair or deceptive acts or practices in commerce. The EO focuses on deception, defined as a misrepresentation, omission, or other practice that misleads a consumer acting reasonably in the circumstances, to the consumer’s detriment. For example, false advertising misrepresents the quality or usefulness of a product, tricking consumers into buying it. – https://www.techpolicy.press/the-ftcs-ai-preemption-authority-is-limited/

Latest NDAA Supports AI Safety, Innovation, and China Decoupling

(Jakub Kraus – Lawfare) Since the release of ChatGPT in late 2022, most successful federal lawmaking on artificial intelligence (AI) has occurred within the annual defense bill. This year’s National Defense Authorization Act (NDAA) was no exception. Enacted in December 2025, the bill contains a title devoted to AI and other emerging technologies, as well as numerous AI-related provisions scattered throughout its 1,259 pages of text. Collectively, these provisions will significantly reshape how America approaches AI innovation, AI safety, and U.S.-China competition. – https://www.lawfaremedia.org/article/latest-ndaa-supports-ai-safety–innovation–and-china-decoupling

One more social media ban for teenagers is in the making: next is Slovenia

(Izabelė Pukėnaitė – Cybernews) Slovenia is preparing draft legislation that will ban access to social media for minors under 15, Deputy Prime Minister Matej Arcon told a news conference on Thursday. Arcon said the Education Ministry had initiated the move, based on the experience of other countries, and would include professionals in drafting the law that aims to protect children and adolescents. “This has been a hot topic around the world and in Europe in recent weeks and months, and with this, we as a government are showing that we care about our children,” Arcon said after the government session. – https://cybernews.com/privacy/more-social-media-bans-teenagers-next-slovenia/

Security and Surveillance

Google Calls on Governments And Industry to Prepare Now For Quantum-Era Cybersecurity

(Quantum Insider) Google is urging governments and industry to accelerate adoption of post-quantum cryptography, warning that advances in quantum computing could soon undermine the encryption that secures today’s digital systems. The company says it has been preparing for a post-quantum world since 2016, rolling out quantum-resistant protections across its infrastructure while aligning its migration plans with NIST standards finalized in 2024. Google calls on policymakers to drive society-wide momentum through cloud modernization, global alignment on standards, and closer engagement with quantum experts to avoid security surprises. – https://thequantuminsider.com/2026/02/06/google-calls-on-governments-and-industry-to-prepare-now-for-quantum-era-cybersecurity/

Unmasking EdTech’s Surveillance Infrastructure in the Age of AI

(Danai Nhando – Tech Policy Press) In December 2024, PowerSchool, a leading provider of cloud-based software that manages student grades, attendance, and records for K–12 schools, detected unauthorized access to its Student Information System, the administrative backbone for approximately 16,000 schools serving nearly 50 million students across North America. By January 2025, the scope of the breach became clear: more than 62 million student records and nearly 10 million teacher records had been exfiltrated, representing the largest breach of children’s data in US history. The compromised data extended beyond basic identifiers. Names, addresses, birthdates, and contact information sat alongside Social Security numbers (SSNs), medical conditions, disability accommodations, individualized education plans, disciplinary records, and family income data linked to free and reduced lunch programs. For millions of children, their most sensitive educational and personal information, data they never consented to provide and cannot revoke, now circulates in underground markets and, increasingly, as inputs to AI systems. Eight months before the breach occurred, the EdTech Law Center issued a prescient warning. In May 2024 litigation, the organization cautioned that “[b]y collecting vast amounts of data from both students and their families, PowerSchool puts that data at risk.” The risk materialized exactly as predicted. The intrusion vector was unremarkable; PowerSchool’s system, with administrative access across thousands of districts, lacked mandatory multi-factor authentication for all accounts. In cybersecurity terms, this constitutes a category 1 control failure that industry standards have treated as a baseline requirement for over a decade.
The breach exposed the EdTech industry’s governance model, which has normalized the centralization of children’s data at an unprecedented scale without commensurate security architecture, regulatory oversight, or enforceable data minimization requirements. One year later, that model remains largely intact. – https://www.techpolicy.press/unmasking-edtechs-surveillance-infrastructure-in-the-age-of-ai/

Darknet kingpin gets 30 years: he stole from users and even taught cops crypto

(Cybernews) The US Attorney’s Office for the Southern District of New York’s announcement about a sentenced darknet marketplace operator has revealed more cryptocurrency-related details about how this platform operated. The operator, Rui-Siang Lin, a.k.a. Pharaon, 24, of Taiwan, was sentenced to 30 years in prison for operating Incognito Market, which was deemed “one of the world’s largest online narcotics marketplaces.” Per the attorneys, Lin sold more than $105 million worth of drugs to customers worldwide and profited more than $6 million from fees paid by vendors on Incognito. – https://cybernews.com/cybercrime/darknet-kingpin-prison-stole-users-taught-cops-crypto/

Chinese-Made Malware Kit Targets China-Based Routers and Edge Devices

(Kevin Poireault – Infosecurity Magazine) A malware framework that remained hidden for years has been discovered by security researchers at Cisco Talos. The researchers were hunting for samples of DarkNimbus, a backdoor linked to the MOONSHINE exploit kit, both known since 2023, when they found a fully featured gateway-monitoring and adversary-in-the-middle (AitM) framework they had never seen before. Cisco Talos researchers have shared technical details about this framework, which they dubbed DKnife, in a new report published on February 5. – https://www.infosecurity-magazine.com/news/china-malware-kit-targets-routers/

Substack Confirms Data Breach, “Limited User Data” Compromised

(Kevin Poireault – Infosecurity Magazine) Newsletter platform Substack has confirmed it suffered a security incident, leading to the compromise of users’ email addresses and phone numbers. Chris Best, the CEO of Substack, notified users of the data breach in an email sent to some users on February 5. The CEO said his security team detected the incident on February 3, noticing “evidence of a problem with our systems that allowed an unauthorized third party to access limited user data without permission, including email addresses, phone numbers and other internal metadata.” – https://www.infosecurity-magazine.com/news/substack-confirms-data-breach/

Psychology, AI and the Modern Security Program: A CISO’s Guide to Human Centric Defence

(Tarnveer Singh, Sarah Zheng – Infosecurity Magazine) Cybersecurity has always been about people, even if we’ve spent years pretending it’s purely technical. Firewalls, encryption, and controls matter, but breaches usually begin with a very human moment: someone rushing, someone trusting the wrong message, someone taking a shortcut. As AI accelerates both our defensive capabilities and the sophistication of attacks, the psychological side of cybersecurity becomes even more important. CISOs who understand how people think, behave, and make decisions will build stronger, more resilient programmes. Those who ignore the human element risk being blindsided by threats that bypass technology entirely and go straight for the mind. – https://www.infosecurity-magazine.com/opinions/psychology-ai-and-modern-security/

Hack-proofing our space infrastructure

(Adam Bartley – ASPI The Strategist) The biggest and most immediate threat to space systems isn’t anti-satellite weaponry; it’s hacking. In October 2025, a group of computer scientists from the University of California, San Diego and the University of Maryland undertook a study to eavesdrop on geostationary satellites in orbit. Expecting to find some flaws in space systems during their scanning of internet traffic, they instead intercepted vast quantities of private and potentially sensitive communications. Some of these were from government and military sources. In August 2025, German researchers at a Black Hat computer security conference in Las Vegas demonstrated how software and encryption libraries used by NASA and Airbus could be exploited to shut down, move or crash the flight software of a satellite. Additional software flaws in the open-source app OpenC3 Cosmos were found to allow remote code execution (where arbitrary code is run on a target system from a remote location) and cross-site scripting attacks (where malicious scripts are injected into a trusted website) on ground stations. – https://www.aspistrategist.org.au/hack-proofing-our-space-infrastructure/

Defence, Military, and Warfare

Army moves to link a full division with its next-gen C2 prototype

(Meghann Myers – Defense One) The 4th Infantry Division is working to scale testing of the Army’s next-generation command-and-control system from a battalion to division level by this summer, the division’s commander told reporters. The Colorado-based unit is coming off more than two weeks in the field for its latest Ivy Sting exercise, Maj. Gen. Pat Ellis said, the fifth since the series began in September. This time, the unit increased from the ability to shoot from one networked artillery system to six, among other incremental advancements. “So the joke I like to make is we are no longer fighting with the network. We are now fighting using the network,” Ellis said, alluding to previous iterations of Army command-and-control that kept data siloed across multiple systems and devices, preventing commanders on the battlefield from seeing a full picture all at once. – https://www.defenseone.com/defense-systems/2026/02/army-moves-link-full-division-its-next-gen-c2-prototype/411259/?oref=d1-featured-river-top

US Marine designs Corps’ first NDAA-compliant 3D-printed drone

(Zita Ballinger Fletcher – Defense News) The U.S. Marine Corps has pioneered a 3D-printed first-person-view drone that is easy to assemble, ready for field use and conforms to national security standards. Sgt. Henry David Volpe, an automotive technician with the 2nd Marine Logistics Group, used his interest in engineering and robotics to help develop HANX, the Marine Corps’ first unmanned aircraft system built from 3D-printed parts to comply with National Defense Authorization Act requirements, service officials announced last month. – https://www.defensenews.com/news/your-military/2026/02/06/us-marine-designs-corps-first-ndaa-compliant-3d-printed-drone/

Shield AI, ST Engineering join forces on fine-tuning drone swarms

(Elisabeth Gosselin-Malo – Defense News) American drone company Shield AI plans to integrate its AI-enabled software into Singaporean manned-unmanned-teaming applications, enabling the coordination of drone swarms. Local firm ST Engineering and Shield AI signed a memorandum of understanding at the Singapore Airshow on Feb. 5 to integrate the Hivemind autonomy software into different platforms manufactured by the national defense-tech champion. – https://www.defensenews.com/unmanned/2026/02/06/shield-ai-st-engineering-join-forces-on-fine-tuning-drone-swarms/

Overland AI Raises $100M to Scale Autonomous Ground Systems for Defense Applications

(AI Insider) Overland AI announced it has raised $100 million in new funding to expand deployment of its autonomous ground vehicle systems across the U.S. Armed Forces. The equity round was led by 8VC, with participation from Point72 Ventures, Ascend Venture Capital, Shasta Ventures, Overmatch Ventures, Valor Equity Partners, and StepStone Group, alongside a $20 million venture debt facility from TriplePoint Capital. – https://theaiinsider.tech/2026/02/06/overland-ai-raises-100m-to-scale-autonomous-ground-systems-for-defense-applications/

QuantX Labs and University of Adelaide Complete Optical Clock Research Project

(Quantum Insider) A collaborative research project involving Defence Trailblazer, QuantX Labs, and the University of Adelaide has concluded with advances in optical atomic clock technologies and progress toward commercial deployment. The project evaluated and demonstrated alternative optical clock architectures and techniques aimed at improving timing stability and supporting resilient timing systems independent of GNSS. The collaboration also contributed to workforce development through PhD research at the University of Adelaide focused on novel optical clock methods relevant to defence and critical infrastructure applications. – https://thequantuminsider.com/2026/02/06/quantx-labs-adelaide-university-precision-timekeeping-defence/

Frontiers and Markets

Los Alamos Forms Quantum Computing-Focused Research Center

(Quantum Insider) Los Alamos National Laboratory has established a new Center for Quantum Computing to consolidate its quantum research capabilities across national security, algorithms, computer science, and workforce development. The center will bring together up to three dozen researchers and support ongoing collaborations tied to DOE, DARPA, NNSA, and state-level quantum initiatives. It will also host the Quantum Computing Summer School, a 10-week fellowship program training up to 25 undergraduate and graduate students annually. – https://thequantuminsider.com/2026/02/06/los-alamos-forms-quantum-computing-focused-research-center/

QUICHE Collaboration Links Quantum Hardware with Chemistry Software

(Quantum Insider) The QUICHE project is a newly funded UK–Germany collaboration aiming to integrate quantum computing workflows directly into the widely used ORCA quantum chemistry software. Backed by Innovate UK and Germany’s ZIM programme, the project brings together Quantum Motion, FACCTs, and Riverlane to automate the translation of chemistry problems into quantum-ready circuits. QUICHE will develop optimised compilation pipelines and quantum backends to estimate resources and enable practical quantum chemistry calculations on future fault-tolerant hardware. – https://thequantuminsider.com/2026/02/06/quiche-quantum-computing-orca-chemistry/

Chinese Researchers Clear Hurdles For Long-Distance Quantum Networks

(Quantum Insider) Chinese researchers reported advances that move quantum communication closer to practical networks, combining longer-lived quantum memory with record-setting demonstrations of ultra-secure key distribution over fiber. The team generated device-independent quantum encryption keys over 11 kilometers of optical fiber, extending the previous distance record by roughly 3,000 times, and validated the approach at distances up to 100 kilometers. Separately, the researchers demonstrated a scalable building block for quantum repeaters, addressing signal-loss limits that have constrained long-distance quantum networks. – https://thequantuminsider.com/2026/02/06/chinese-researchers-clear-hurdles-for-long-distance-quantum-networks/

Infleqtion Advances to Phase 3 of Wellcome Leap Q4Bio Challenge

(Quantum Insider) Infleqtion, in collaboration with the University of Chicago and MIT, has advanced to Phase 3 of the Wellcome Leap Quantum for Bio (Q4Bio) Challenge to demonstrate quantum-enabled biomarker discovery for oncology. The team will test a hybrid quantum–classical workflow on real quantum hardware using clinical datasets, focusing on feature selection for cancer biomarker analysis. Phase 3 work will apply the approach to forecasting treatment response in head-and-neck cancer using a curated clinical cohort from the University of Chicago. – https://thequantuminsider.com/2026/02/06/infleqtion-quantum-oncology-q4bio-phase-three/

Florida’s Emerging Role in the Quantum Economy

(Quantum Insider) Florida crossed a structural inflection point in quantum development as coordinated corporate moves, academic investments, workforce initiatives, and capital alignment converged within a single week. IonQ’s planned acquisition of SkyWater Technology, D-Wave’s headquarters relocation and R&D expansion in Boca Raton, and Florida Atlantic University’s $20 million on-site quantum system collectively anchored both commercial and institutional quantum capacity in the state. Workforce programs led by Palm Beach State College, combined with Palm Beach County’s proven economic development framework and growing focus on advanced computing investment, signal a deliberate shift from quantum ambition to execution. – https://thequantuminsider.com/2026/02/05/floridas-emerging-role-in-the-quantum-economy/

Big Tech Accelerates AI Infrastructure Spending as Compute Race Intensifies

(AI Insider) Major technology companies are significantly increasing capital expenditures to expand artificial intelligence infrastructure, signaling an escalating industry race to secure long-term compute capacity. Amazon reported in its latest earnings that it expects approximately $200 billion in capital expenditures in 2026 across AI, chips, robotics, and low-Earth-orbit satellite initiatives, up from $131.8 billion in 2025. – https://theaiinsider.tech/2026/02/06/big-tech-accelerates-ai-infrastructure-spending-as-compute-race-intensifies/

SpaceX and xAI Advance Vision for Orbital AI Data Center Infrastructure

(AI Insider) SpaceX and xAI are moving forward with plans that could shift artificial intelligence computing infrastructure into orbit, following a Federal Communications Commission filing outlining a proposed satellite-based data center network. The initiative gained momentum after the formal merger of SpaceX and xAI, aligning space-launch capabilities with AI compute development. – https://theaiinsider.tech/2026/02/06/spacex-and-xai-advance-vision-for-orbital-ai-data-center-infrastructure/

China’s LE Robotics Secures Series A+ Funding to Scale Industrial Embodied AI Welding

(AI Insider) LE Robotics raised tens of millions of RMB in a Series A+ round to accelerate standardization and large-scale commercialization of its embodied AI robotic welding systems. The round was led by Shandong Luhua Investment with participation from Sinolink Innovation Investment, following a Series A completed in 2025 and signaling investor confidence in the company’s transition from validation to industrial scale. Founded in 2022, LE Robotics has deployed its AI-driven welding platforms across sectors such as rail, nuclear power, and petrochemical infrastructure, reporting partnerships with more than 50 global industrial firms and expansion into over 30 international markets. – https://theaiinsider.tech/2026/02/06/chinas-le-robotics-secures-series-a-funding-to-scale-industrial-embodied-ai-welding/

AI tool predicts brain age, cancer survival and other disease signals from unlabeled brain MRIs

(Medical Xpress) Mass General Brigham investigators have developed a robust new artificial intelligence (AI) foundation model that is capable of analyzing brain MRI datasets to perform numerous medical tasks, including identifying brain age, predicting dementia risk, detecting brain tumor mutations and predicting brain cancer survival. The tool, known as BrainIAC, outperformed other, more task-specific AI models and was especially efficient when limited training data were available. – https://medicalxpress.com/news/2026-02-ai-tool-brain-age-cancer.html

AI to track icebergs adrift at sea in boon for science

(Phys.org) British scientists said Thursday that a world-first AI tool to catalog and track icebergs as they break apart into smaller chunks could fill a “major blind spot” in predicting climate change. Icebergs release enormous volumes of freshwater when they melt on the open water, affecting global climate patterns and altering ocean currents and ecosystems. – https://phys.org/news/2026-02-ai-track-icebergs-adrift-sea.html

Amazon to begin testing AI tools for film and TV production next month

(TechCrunch) Last summer, Amazon MGM Studios launched a dedicated AI Studio to develop proprietary AI tools to streamline TV and film production, with a focus on areas like improving character consistency across shots and supporting pre- and post-production. According to a report from Reuters, those tools are now ready to move beyond internal testing. Amazon will begin a closed beta program in March, inviting industry partners to try out its AI tools. – https://techcrunch.com/2026/02/04/amazon-to-begin-testing-ai-tools-for-film-and-tv-production-next-month/

Reddit Highlights AI Search as Emerging Growth Opportunity in Earnings Update

(AI Insider) Reddit said its AI-powered search initiatives are becoming a key strategic focus as the company works to integrate generative AI into how users discover information on the platform. During its fourth-quarter earnings call, CEO Steve Huffman described generative AI search as particularly effective for questions that benefit from multiple perspectives drawn from community discussions. – https://theaiinsider.tech/2026/02/06/reddit-highlights-ai-search-as-emerging-growth-opportunity-in-earnings-update/

OpenAI and Anthropic Race to Release Agentic Coding AI as Safety Debates Intensify

(AI Insider) OpenAI announced the launch of Codex, an agentic coding tool for developers, alongside GPT-5.3 Codex, a new model designed to expand AI-driven software development capabilities. The company said the model improves performance and speed compared with earlier versions and can assist with complex software creation across professional workflows. OpenAI also confirmed plans to retire older ChatGPT models, including GPT-4o, as part of its product transition. – https://theaiinsider.tech/2026/02/06/openai-and-anthropic-race-to-release-agentic-coding-ai-as-safety-debates-intensify/