Daily Digest on AI and Emerging Technologies (4 November 2025) https://pam.int/daily-digest-on-ai-and-emerging-technologies-4-november-2025/
Daily Digest on AI and Emerging Technologies (5 November 2025) https://pam.int/daily-digest-on-ai-and-emerging-technologies-5-november-2025/
Daily Digest on AI and Emerging Technologies (6 November 2025) https://pam.int/daily-digest-on-ai-and-emerging-technologies-6-november-2025/
Daily Digest on AI and Emerging Technologies (7 November 2025) https://pam.int/daily-digest-on-ai-and-emerging-technologies-7-november-2025/
Governance
UNESCO adopts first global ethical framework for neurotechnology
(DigWatch) UNESCO has approved the world’s first global framework on the ethics of neurotechnology, setting new standards to ensure that advances in brain science respect human rights and dignity. The Recommendation, adopted by member states and entering into force on 12 November, establishes safeguards so that neurotechnological innovation benefits those in need without compromising mental privacy. – https://dig.watch/updates/unesco-adopts-first-global-ethical-framework-for-neurotechnology – https://www.unesco.org/en/articles/ethics-neurotechnology-unesco-adopts-first-global-standard-cutting-edge-technology
Denmark officially bans social media for kids 15 and under
(Cybernews) Denmark is the first EU nation to ban social media for children under 15, with limited parental override options for teens aged 13 and over. Denmark’s Prime Minister cites unprecedented youth anxiety and depression, along with exposure to harmful content, as driving forces. Denmark’s bold move could trigger an EU-wide domino effect, with several nations already signaling similar restrictions. – https://cybernews.com/news/denmark-bans-social-media-kids-15-and-under/
There is More Online Election Discourse than Ever, But Researchers See Less
(Josephine Lukito, Kaitlyn Dowling – Tech Policy Press) In the United States, politicians have increasingly incorporated social media into their electoral campaigning. Politicians seem to be on virtually every platform online, ranging from mainstream social media like Facebook and YouTube to ideological and niche social media like Truth Social. Even in the 2025 off-year US elections, political campaigns are producing a massive amount of advertising and promotional content. In New Jersey alone, Senator Cory Booker and gubernatorial candidate Mikie Sherrill have already posted over 2,500 times across five different platforms during the last 90 days, a number likely to rise in the final month of the election. But archiving and sharing this discourse with citizens has become more difficult, as politicians are present across a wider variety of media than ever before. – https://www.techpolicy.press/there-is-more-online-election-discourse-than-ever-but-researchers-see-less/
European Union: European AI Office initiates the drafting of Code of Practice on transparency of AI-generated content
(Digital Policy Alert) On 5 November 2025, the European AI Office initiated the drafting process for the Code of Practice on transparency of AI-generated content with a plenary meeting of independent experts. The Code of Practice aims to detail how providers and deployers of generative AI systems can comply with their transparency obligations laid down in Article 50(2) and (4) of Regulation (EU) 2024/1689 (the AI Act). These obligations, which will become effective on 2 August 2026, require AI providers to design and develop AI systems so that AI-generated or -manipulated content is detectable through machine-readable marks. Additionally, deployers are required to visibly label deep fakes or AI-generated or -manipulated text published with the purpose of informing the public on matters of public interest, unless the content has been subject to human review or editorial control and a person holds editorial responsibility for its publication. The Code will facilitate the effective implementation of these transparency obligations, supporting practical arrangements for detection mechanisms and cooperation along the value chain to enable the public to distinguish AI-generated content and reduce risks of deception, manipulation, and misinformation. The drafting process engages working groups comprising providers and deployers of AI systems and is expected to last seven months, with the aim of publishing the final Code of Practice by June 2026, before the effective date of the transparency obligations. – https://digitalpolicyalert.org/event/35101-european-ai-office-initiates-the-drafting-of-code-of-practice-on-transparency-of-ai-generated-content
European Union: European Commission published reporting template for serious incidents involving general-purpose AI models with systemic risk
(Digital Policy Alert) On 4 November 2025, the European Commission published a reporting template for serious incidents involving general-purpose AI models with systemic risk. The template is intended to be used as a means for demonstrating compliance with Art. 55(1)(c) of the EU AI Act, which requires providers of general-purpose AI models with systemic risk to report serious incidents and measures to address them. The template provides a set of fields for providers to fill out, in line with the information required under Commitment 9 of the Code of Practice for general-purpose AI (GPAI). – https://digitalpolicyalert.org/event/35102-european-commission-published-reporting-template-for-serious-incidents-involving-general-purpose-ai-models-with-systemic-risk
European Union: European Data Protection Board adopted opinion on Commission draft implementing decision on adequate protection of personal data by Brazil
(Digital Policy Alert) On 4 November 2025, the European Data Protection Board (EDPB) adopted Opinion 28/2025 regarding the European Commission draft implementing decision on Brazil’s adequacy. The opinion notes that Brazil’s General Data Protection Law (LGPD), related presidential decrees, and binding regulations issued by the national data protection authority (ANPD) establish requirements, principles, data-subject rights, transfers, oversight, and redress, closely aligned with the General Data Protection Regulation (GDPR) and the case law of the Court of Justice of the European Union (CJEU). The EDPB invited the Commission to clarify certain aspects, including the practice of Data Protection Impact Assessments (DPIAs) for high-risk processing, transparency limits where “commercial and industrial secrecy” applies, and the conditions for onward transfers, such as consent-based transfers and the content of Binding Corporate Rules (BCRs). It also encourages the Commission to continue monitoring developments. – https://digitalpolicyalert.org/event/35104-european-data-protection-board-adopted-opinion-282025-on-the-european-commission-draft-implementing-decision-regarding-brazil
India’s New IT Rules on Deepfakes Threaten to Entrench Online Censorship
(Sarthak Guptav – Tech Policy Press) The Indian Ministry of Electronics and Information Technology (MeitY) proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the IT Rules Amendments) in late October, seeking to address the growing phenomenon of “synthetically generated information.” The stated objective of the amendments is to mitigate the spread of digitally manipulated or AI-generated content, commonly known as deepfakes, that may distort reality, mislead citizens, or cause reputational harm. The amendments are presently undergoing public consultation and will be officially released thereafter. However, beneath this technocratic justification lies a deeper constitutional tension. The amendments, by design and scope, extend the State’s regulatory arm into online discourse. By imposing vague obligations on intermediaries and failing to delineate clear boundaries for enforcement, they risk undermining the fundamental right to freedom of speech and expression guaranteed under Article 19(1)(a) of the Constitution of India. What is meant to be a safeguard against misinformation may, in practice, institutionalize a system of preemptive censorship. – https://www.techpolicy.press/indias-new-it-rules-on-deepfakes-threaten-to-entrench-online-censorship/
India’s AI roadmap could add $500 billion to economy by 2035
(DigWatch) According to the Business Software Alliance, India could add over $500 billion to its economy by 2035 through the widespread adoption of AI. At the BSA AI Pre-Summit Forum in Delhi, the group unveiled its ‘Enterprise AI Adoption Agenda for India’, which aligns with the goals of the India–AI Impact Summit 2026 and the government’s vision for a digitally advanced economy by 2047. – https://dig.watch/updates/indias-ai-roadmap-could-add-500-billion-to-economy-by-2035 – https://www.bsa.org/news-events/news/global-trade-group-bsa-ai-adoption-key-to-indias-inclusive-growth
India: Ministry of Electronics and Information Technology released AI governance guidelines
(Digital Policy Alert) On 4 November 2025, the Ministry of Electronics and Information Technology (MeitY) released the India artificial intelligence (AI) governance guidelines. The guidelines set out seven guiding principles: trust is the foundation; people first; fairness and equity; innovation over restraint; accountability; understandable by design; and safety, resilience and sustainability. They propose recommendations across six pillars: infrastructure, capacity building, policy and regulation, risk mitigation, accountability, and institutional structures. The guidelines introduce enablers including access to compute and datasets, integration with Digital Public Infrastructure, capacity development for officials, risk classification and incident reporting, voluntary compliance measures, and techno-legal approaches. The institutional architecture identifies the AI Governance Group (AIGG), Technology and Policy Expert Committee (TPEC), and AI Safety Institute (AISI) for strategic oversight, development of standards, safety evaluations, and support for enforcement. The action plan includes short, medium, and long-term measures to operationalise the framework at the national level. – https://digitalpolicyalert.org/event/35105-ministry-of-electronics-and-information-technology-released-india-ai-governance-guidelines
The World’s Growing Information Black Box: Inequity in Platform Research
(Rachelle Faust, Daniel Arnaudo – Tech Policy Press) In an age of automated agents and increasingly powerful and complex technologies, it’s becoming nearly impossible to study the internet. The golden era of independent public interest research on platforms, marked by free data access and expansive partnerships between platforms and researchers, has come and gone. No longer can fact-checkers use a proven research tool like CrowdTangle to track real-time narratives on Facebook during local elections, nor can researchers use Twitter’s application programming interface (API) to identify patterns in tech-facilitated gender-based violence at no cost. Even during this “golden era” of platform data access, there’s always been a comparative data drought in Global Majority countries. The barriers to access erected in recent years have been widespread, particularly outside of the U.S. and Europe, but perhaps none have felt them more acutely than the frontline defenders of information integrity. – https://www.techpolicy.press/the-worlds-growing-information-black-box-inequity-in-platform-research/
What the AI Safety Debate Can Learn from the Techlash
(Antonina Vykhrest – Tech Policy Press) What if we could accelerate artificial intelligence safety and risk management by building on platform governance and online safety foundations? Could we compress decades of product and algorithmic safety lessons into the shorter timeframe that advanced AI diffusion demands and in doing so, prevent more harms? As we watch the cycle of coverage around AI safety unfold, from congressional hearings digging into Meta’s policy that allowed chatbots to engage children in romantic or sexualized conversations to lawsuits from parents of teens who died by suicide after interacting with AI companions, I recognize the pattern. This trend of public outcry at seemingly foreseeable tech safety lapses echoes the 2018 techlash, when tech companies grappled with a series of broad societal risks. The difference is that today, we are dealing with higher stakes and with less time to course correct. – https://www.techpolicy.press/what-the-ai-safety-debate-can-learn-from-the-techlash/
Public Service Social Media as a Democratic Safeguard
(Christine Galvagna – Tech Policy Press) A handful of prominent social media companies have accumulated power rivaling that of many states, and increasingly use this power to undermine democracy and fundamental rights. Companies like Meta and TikTok wield immense political power through content moderation and curation on their platforms. Their choices shape the information spaces in which hundreds of millions of users in the European Union, and billions worldwide, form political opinions. They also heavily influence how users exercise fundamental rights such as freedom of association and expression. Content moderation and curation decisions can exacerbate the effects of discriminatory biases, contribute to organized political violence, and potentially influence elections. While content moderation and curation policies have never been perfect, social media companies and their leaders have recently taken steps to purposefully weaken anti-discrimination protections. For example, Meta’s Mark Zuckerberg announced that Instagram, Facebook, and other Meta platforms would, in effect, permit more discriminatory content against the LGBTQ+ community and immigrants. – https://www.techpolicy.press/public-service-social-media-as-a-democratic-safeguard/
Canada: Office of Privacy Commissioner announced investigation into websites and mobile applications commonly used by children
(Digital Policy Alert) On 3 November 2025, the Office of the Privacy Commissioner of Canada (OPC) announced its participation in the 2025 Global Privacy Enforcement Network sweep, which will examine websites and apps commonly used by children. The investigation, involving over 30 privacy authorities, will assess data collection, transparency, age assurance measures, and controls limiting children’s data collection. The sweep aims to promote privacy-friendly digital design and reduce risks such as tracking, profiling, and harmful content exposure. The OPC will contribute findings to a forthcoming report. – https://digitalpolicyalert.org/event/34947-office-of-privacy-commissioner-announced-investigation-into-websites-and-mobile-applications-commonly-used-by-children
Republic of Korea: Ministry of the Interior and Safety released public sector AI ethics principles
(Digital Policy Alert) On 3 November 2025, the Ministry of the Interior and Safety (MOIS) released public sector artificial intelligence (AI) ethics principles applicable to central administrative authorities, local governments, public institutions, and local public enterprises, to ensure public trust in AI deployment while supporting administrative innovation. The framework sets out six principles: public interest, fairness, transparency, accountability, safety, and privacy protection. It provides more than 90 checklist items for public sector personnel to self-assess compliance when introducing and operating AI systems, including disclosure of implementation processes, mitigation of discriminatory outcomes, assignment of administrative responsibility, prevention of harm, and protection of personal information. – https://digitalpolicyalert.org/event/35107-ministry-of-the-interior-and-safety-released-public-sector-ai-ethics-principles
Legislation
Australia: Minister of Communications announced application of Social Media Minimum Age Bill to Facebook, Instagram, Snapchat, TikTok, YouTube, X, Threads, Reddit and Kick
(Digital Policy Alert) On 5 November 2025, the Minister of Communications announced that the Social Media Minimum Age Bill would be applicable to Facebook, Instagram, Snapchat, TikTok, YouTube, X, Threads, Reddit, and Kick. These services were assessed by the eSafety Commissioner as age-restricted social media platforms because their sole or significant purpose is to enable online social interaction. The platforms must implement age-verification systems and user access restriction measures to prevent persons under 16 from creating or holding accounts. The Social Media Minimum Age Bill imposes financial penalties of up to AUD 49.5 million if regulated services fail to take reasonable steps to prevent under-16 access. – https://digitalpolicyalert.org/event/35112-australian-government-announced-the-application-of-the-social-media-minimum-age-legislation-to-facebook-instagram-snapchat-tiktok-youtube-x-threads-reddit-and-kick
Kenya: Computer Misuse and Cybercrimes (Amendment) Act, 2025 including content moderation regulation enters into force
(Digital Policy Alert) On 4 November 2025, the Computer Misuse and Cybercrimes (Amendment) Act, 2025, entered into force. The amendment modifies Section 6 of the principal Computer Misuse and Cybercrimes Act, granting the National Computer and Cybercrimes Coordination Committee (NC4) explicit authority to issue directives requiring the blocking of websites or applications within Kenya. Specifically, the Act gives courts the power to order the removal of digital content and the closing of computer systems in connection with a person who has been convicted of illegal activities, including child sexual abuse material, terrorism, or extreme religious and cultic practices. Further, authorised persons may apply to a court requesting a removal or deactivation order where they suspect that a computer system or website is used to promote such material. The Act also broadens the scope of cyber harassment to include conduct likely to cause suicide and extends the definition of phishing to cover fraudulent calls. – https://digitalpolicyalert.org/event/34688-computer-misuse-and-cybercrimes-amendment-act-2025-including-content-moderation-regulation-enters-into-force
United Kingdom: Online Safety (CSEA Content Reporting by Regulated User-to-User Service Providers) (Revocation) Regulations 2025 enter into force
(Digital Policy Alert) On 2 November 2025, the Online Safety (CSEA Content Reporting by Regulated User-to-User Service Providers) (Revocation) Regulations 2025, made under the Online Safety Act 2023, entered into force. The now-revoked rules required online platforms to detect and report child sexual exploitation and abuse (CSEA) content to the National Crime Agency (NCA). They applied to UK and non-UK user-to-user service providers, including social media and messaging platforms. Providers were required to appoint a senior manager to register with the NCA, authorise staff to report CSEA content, and ensure compliance by third-party moderators. The rules also specified that reports had to include user and content details and technical data, and be submitted securely through the NCA’s online portal, with urgency levels assigned. Providers were also required to retain reported data for up to five years and comply with the United Kingdom General Data Protection Regulation’s security rules. – https://digitalpolicyalert.org/event/34473-the-online-safety-csea-content-reporting-by-regulated-user-to-user-service-providers-revocation-regulations-2025-enter-into-force
New Zealand: Biometric Processing Privacy Code 2025 enters into force
(Digital Policy Alert) On 3 November 2025, the Privacy Commissioner’s Biometric Processing Privacy Code 2025, issued under the Privacy Act 2020, entered into force. The Code strengthens existing notification, purpose, and transparency obligations and introduces a requirement for agencies to conduct a proportionality assessment weighing the privacy risks and public benefits of biometric processing, as well as to implement appropriate privacy safeguards. It applies to all biometric identification, verification, and categorisation activities except those carried out by health agencies handling health information, and restricts biometric categorisation for purposes such as emotion or personality analysis. The Code also establishes conditions for overseas disclosure of biometric information to ensure comparable data-protection safeguards. Existing biometric systems have to comply from 3 August 2026. – https://digitalpolicyalert.org/event/32503-privacy-commissioner-biometric-processing-privacy-code-enters-into-force
Courts and Litigation
Texas sues Roblox over operating “as a digital playground for predators”
(Cybernews) Texas Attorney General Ken Paxton has filed a lawsuit against the online gaming platform Roblox, accusing it of putting children in danger and ignoring online safety laws. Paxton announced the lawsuit on 6 November on X, sharing that he is “suing Roblox for putting pixel pedophiles and profits over the safety of Texas children.” “We cannot allow platforms like Roblox to continue operating as digital playgrounds for predators where the well-being of our kids is sacrificed on the altar of corporate greed,” Paxton added. – https://cybernews.com/security/texas-sues-roblox-over-operating-as-a-digital-playground-for-predators/
United Kingdom: Office of Communications launched prioritised investigation into provider of online suicide discussion forum over potential breaches of Online Safety Act following evidence of UK access
(Digital Policy Alert) On 6 November 2025, the Office of Communications (Ofcom) issued an update in its ongoing investigation into the provider of an online suicide discussion forum, opened on 9 April 2025 under the Online Safety Act 2023. Ofcom confirmed that, following the forum’s implementation of a block on 1 July 2025 restricting users with United Kingdom (UK) IP addresses from accessing the service, monitoring activities had been ongoing to ensure the block’s consistency and to prevent circumvention. Evidence submitted by Samaritans on 4 November 2025 indicated that the service remained accessible to UK users. On that basis, Ofcom stated it had reason to believe the access restrictions were not being effectively maintained and that the service might remain available to UK users. The regulator announced it was now prioritising the investigation to reach a conclusion promptly. The case continues to examine compliance with Sections 9, 10, 20, 21, 23, and 102(8) of the Online Safety Act 2023 concerning duties to respond to statutory information requests, conduct and retain risk assessments, and meet illegal content, reporting, and complaints obligations. – https://digitalpolicyalert.org/event/35178-office-of-communications-prioritised-investigation-into-provider-of-online-suicide-discussion-forum-over-potential-breaches-of-online-safety-act-2023-following-evidence-of-uk-access
United Kingdom: High Court ruled that Stability AI partially infringed Getty Images’ trademarks
(Digital Policy Alert) On 4 November 2025, the High Court of Justice in England and Wales ruled that Stability AI had partially infringed Getty Images’ ISTOCK and GETTY IMAGES trademarks. The Court found that synthetic watermark images produced by Stable Diffusion version 1.x (used through DreamStudio and the Developer Platform) breached the ISTOCK trademark, and that a few examples from version 2.1 infringed the GETTY IMAGES mark. However, the Court dismissed the rest of Getty’s trademark claims, including those related to versions 1.6 and SD XL, and made no separate ruling on passing off. It also rejected the secondary copyright claim, stating that the Stable Diffusion models were not “infringing copies” under UK copyright law, so Stability AI was not liable for copyright infringement. The Court further clarified that Stability AI was not responsible for models shared on CompVis GitHub or Hugging Face, and noted that Getty Images had withdrawn earlier claims about model training, output generation, and database rights. – https://digitalpolicyalert.org/event/35156-high-court-of-justice-issued-ruling-in-getty-images-us-inc-and-others-v-stability-ai-ltd-determining-partial-trade-mark-infringement-and-dismissing-secondary-copyright-infringement-claims-case-number-il-2023-000007
India: National Company Law Appellate Tribunal partially upheld Meta and WhatsApp appeal on penalty and data-sharing order
(Digital Policy Alert) On 4 November 2025, the National Company Law Appellate Tribunal (NCLAT) issued a ruling that partly upheld and partly modified the Competition Commission of India’s (CCI) order from 18 November 2024 concerning Meta and WhatsApp. The Tribunal confirmed the INR 213.14 crore fine imposed on Meta but overturned the finding that the company had breached Section 4(2)(e) of the Competition Act. It also removed the restriction that would have banned WhatsApp from sharing user data with other Meta companies for advertising purposes for five years. At the same time, the NCLAT upheld the rest of the CCI’s directions, requiring WhatsApp to remain transparent about how it shares user data with other Meta entities, to avoid making data sharing for non-service purposes a mandatory condition for using its services in India, and to provide users with clear options to review, modify, or opt out of such data sharing within the app. The Tribunal also ruled that all future WhatsApp policy updates must comply with these requirements, and that each party will bear its own legal costs. – https://digitalpolicyalert.org/event/35162-national-company-law-appellate-tribunal-issued-ruling-partially-allowed-whatsapp-and-meta-platforms-appeal-of-order-on-penalty-and-data-sharing-restrictions
France: Paris Public Prosecutor’s Office opened investigation into TikTok over allegations of illicit transaction, data processing impairment, and suicide promotion
(Digital Policy Alert) On 4 November 2025, the Paris Public Prosecutor’s Office launched a preliminary investigation into TikTok following a report by MP Arthur Delaporte, which raised concerns about the platform’s insufficient moderation, easy access for minors, and algorithmic promotion of content that could lead vulnerable users to suicide. The Cybercrime Unit of the Paris Police Prefecture is examining three alleged offences, including facilitating illicit transactions through the platform, altering the function of automated data processing systems for harmful purposes, and promoting methods of suicide. The investigation focuses on TikTok’s compliance with obligations to report suspected offences, the operation of its algorithm compared with how it is presented to users, and the dissemination of content promoting suicide. The investigation also draws on previous analyses, including a 2023 Senate report on risks to freedom of expression and data collection, a 2023 Amnesty International report on the algorithm’s addictive nature and potential to trigger self-harm, and a February 2025 Viginum report highlighting risks of public opinion manipulation. – https://digitalpolicyalert.org/event/35148-paris-public-prosecutors-office-opened-investigation-into-tiktok-over-allegations-of-illicit-transaction-data-processing-impairment-and-suicide-promotion
Geostrategies
IonQ and Swiss Consortium Launch First Citywide Dedicated Quantum Network in Geneva
(Quantum Insider) IonQ and Swiss partners launched the Geneva Quantum Network (GQN), Switzerland’s first citywide dedicated quantum network connecting major research, enterprise, and government institutions. The network uses existing fiber-optic infrastructure and IDQ’s quantum key distribution systems to enable experiments in quantum communications, entanglement, and ultra-precise time synchronization, according to the company. IonQ said the initiative strengthens its global quantum infrastructure strategy, following recent expansions in Italy, the United Kingdom, and South Korea. – https://thequantuminsider.com/2025/11/05/ionq-and-swiss-consortium-launch-first-citywide-dedicated-quantum-network-in-geneva/
Telia and Ericsson launch revolutionary 5G partnership in Nordics and Baltics
(DigWatch) Telia Company has extended its long-term partnership with Ericsson for another four years across Sweden, Norway, Lithuania, and Estonia. Through this renewed agreement, both companies aim to enhance mobile network speed, capacity, and coverage, while also future-proofing Telia’s infrastructure against evolving technological demands. – https://dig.watch/updates/telia-and-ericsson-launch-revolutionary-5g-partnership-in-nordics-and-baltics – https://www.ericsson.com/en/press-releases/2025/11/telia-extends-ericsson-ran-partnership-in-sweden-norway-lithuania-and-estonia
Terrorism and Counter-Terrorism
Ireland: Central Bank fined Coinbase EUR 21.46 million for breaches of anti-money laundering and counter-terrorist financing obligations
(Digital Policy Alert) On 6 November 2025, the Central Bank of Ireland (CBI) imposed a monetary penalty of EUR 21.46 million on Coinbase Europe Limited for breaches of anti-money laundering (AML) and counter-terrorist financing (CTF) obligations under the Criminal Justice (Money Laundering and Terrorist Financing) Act 2010 (CJA 2010). The fine, reduced from an initial EUR 30.66 million following a 30% settlement discount, forms part of a ruling issued on 6 November 2025, concluding the CBI’s investigation into the company. The sanctions include a reprimand and financial penalty, both subject to confirmation by the High Court before taking effect. The investigation, covering the period from 23 April 2021 to 19 March 2025, found that Coinbase Europe’s transaction monitoring system failed to screen 30.44 million transactions worth approximately EUR 176 billion for suspicious activity. The company also failed to establish and apply adequate internal controls to prevent and detect money laundering and terrorist financing and did not perform enhanced monitoring for 184,790 transactions. Subsequent retrospective monitoring led to the submission of 2,708 suspicious transaction reports (STRs) to the Financial Intelligence Unit (FIU) and the Revenue Commissioners. – https://digitalpolicyalert.org/event/35177-central-bank-of-ireland-issues-ruling-concluding-investigation-and-imposing-eur-2146-million-penalty-on-coinbase-europe-limited-for-amlctf-transaction-monitoring-breaches
Security and Surveillance
As Data Centers Proliferate, Anti-AI Resistance Has the Potential to Turn Violent
(The Soufan Center) Online threats to physically sabotage AI data centers, which house the servers necessary to train, deploy, and deliver AI services, have proliferated over the past year, according to surveying by The Soufan Center. Anti-AI resistance is not ideologically uniform; it has been driven by ethical, environmental, economic, and religious concerns. Contributing factors to potential future violent anti-AI acts and physical sabotage include concerns about AI’s effects on employment and quality of life. Anti-AI resistance should be considered alongside heightened anti-corporate sentiment among younger generations and the politicization of the major AI companies. – https://thesoufancenter.org/intelbrief-2025-november-5/
How the Tech Industry Got Identity Wrong
(Ev Kontsevoy – Infosecurity Magazine) It shouldn’t take an enterprise 11 hours to resolve a single identity-related security incident. Does that sound controversial? It shouldn’t, considering identity-based breaches are one of the most common cyber-attacks. But that’s what the research tells us from Enterprise Strategy Group (now part of Omdia). It’s no fringe case either. It takes 11 hours on average just to figure out who did what, where and how across a company’s infrastructure. If you’re a hacker (I’m hoping you’re not), you can do a lot in 11 hours: rip through a network, escalate your privileges, steal some data, and vanish without a trace. That’s 11 hours during which some unfortunate security or engineering team is hunting down a single compromised credential. It’s 11 hours of sitting ducks. Something’s gone awfully wrong in identity management to get cybersecurity to this point. The only way to fix it is to redefine what identity means in the computing world. – https://www.infosecurity-magazine.com/opinions/how-the-tech-industry-got-identity/
Bridging the Divide: Actionable Strategies to Secure Your SaaS Environments
(Carl Brundage, Eoghan Casey, Matthew O’Neill – Infosecurity Magazine) Recent high-profile software-as-a-service (SaaS) data breaches have caught many Chief Information Security Officers (CISOs) and Information Security (InfoSec) professionals by surprise, exposing a false sense of security. While organizations know that SaaS providers invest significant resources in security, they often overlook their own responsibility for protecting data on those platforms. This is reflected in the “confidence paradox” from the 2025 CSA State of SaaS Security Report: 79% of organizations are confident in their SaaS security programs, yet have significant capability gaps. Furthermore, the CSA SaaS Security Capability Framework (SSCF) highlights that misalignment between vendors, application owners, InfoSec, and risk teams leads to delays, wasted resources, and unnecessary risk exposure. This gap is widened by the differing experience and terminology of InfoSec and SaaS teams, contributing to the “InfoSec↔SaaS Divide.” Bridging this divide is essential for securing SaaS data and unlocking the future benefits of agentic AI. The authors have combined their general InfoSec and specific SaaS knowledge and experience to help organizations secure these environments. – https://www.infosecurity-magazine.com/blogs/strategies-secure-saas-environments/
Brazil: National Telecommunications Agency issued a report on illegal sale of telecommunications products on e-commerce platforms
(Digital Policy Alert) On 4 November 2025, the National Telecommunications Agency (Anatel) issued a report following its investigation into the authorisation and sale of telecommunications products on e-commerce platforms. Launched in 2024, the investigation reviewed 23,062 advertisements, of which only 3,678 included a homologation code verified in Anatel’s official database. Enforcement actions focused on the platforms Mercado Livre, Amazon, Shopee, and others. In November 2024, inspections involving 48 Anatel officials and 20 staff from the Federal Revenue Service’s Division for the Repression of Smuggling and Customs Evasion led to the seizure of 22,000 products valued at BRL 3 million. Further inspections in May 2025 focused on Amazon, Mercado Livre, and Shopee, resulting in fines exceeding BRL 7 million. In 2025, Regulatron, Anatel’s monitoring system, was upgraded to expand its coverage to AliExpress and Temu alongside existing platforms. – https://digitalpolicyalert.org/event/35115-national-telecommunications-agency-ruling-following-investigation-into-e-commerce-platforms-authorisation-of-goods
LIBE backs new Europol Regulation despite data protection and discrimination warnings
(DigWatch) The European Parliament’s civil liberties committee (LIBE) voted to endorse a new Europol Regulation, part of the ‘Facilitators Package’, by 59–10 with four abstentions. Rights groups and the European Data Protection Supervisor had urged MEPs to reject the proposal, arguing the law fuels discrimination and grants Europol and Frontex unprecedented surveillance capabilities with insufficient oversight. – https://dig.watch/updates/libe-backs-new-europol-regulation-despite-data-protection-and-discrimination-warnings – https://edri.org/our-work/european-parliament-backs-europol-expansion-a-dangerous-step-towards-mass-surveillance-in-the-eu/
UK mobile networks and the Government launch a fierce crackdown on scam calls
(DigWatch) Britain’s largest mobile networks have joined the Government to tackle scam calls and texts. Through the second Telecommunications Fraud Charter, they aim to make the UK harder for fraudsters to target. To achieve this, networks will upgrade systems within a year to prevent foreign call centres from spoofing UK numbers. Additionally, advanced call tracing and AI technology will detect and block suspicious calls and texts before they reach users. – https://dig.watch/updates/uk-mobile-networks-and-the-government-launch-a-fierce-crackdown-on-scam-calls – https://www.gov.uk/government/news/spoofed-numbers-blocked-in-crackdown-on-scammers
Frontiers
Alibaba working on “super AI cloud” to prepare for what’s next
(Cybernews) Alibaba CEO Eddie Wu has said the company is investing heavily in artificial intelligence (AI) infrastructure to prepare for artificial superintelligence (ASI), which is expected to surpass human capabilities. Speaking at the 2025 World Internet Conference in Wuzhen, eastern China, Wu said the company is building ultra-large-scale AI infrastructure and ramping up investment in its global “super AI cloud,” according to Chinese media reports. He told the conference that the world is entering the era of artificial general intelligence (AGI), in which AI agents assist humans in digital and physical tasks. – https://cybernews.com/ai-news/alibaba-ai-cloud-superintelligence/
New AI tool helps identify suicide-risk individuals
(DigWatch) Researchers at Touro University have found that an AI tool can identify suicide risk that standard diagnostic methods often miss. The study, published in the Journal of Personality Assessment, shows that LLMs can analyse speech to detect patterns linked to perceived suicide risk. Current assessment methods, such as multiple-choice questionnaires, often fail to capture the nuances of an individual’s experience. – https://dig.watch/updates/new-ai-tool-helps-identify-suicide-risk-individuals – https://www.touro.edu/news–events/stories/ai-detects-suicide-risk-missed-by-standard-assessments.php
Naver expands physical AI ambitions with $690 million GPU investment
(DigWatch) South Korean technology leader Naver is deepening its AI ambitions with a $690 million investment in graphics processing units from 2025, a move aimed at strengthening its AI infrastructure and driving the development of physical AI, a field merging digital intelligence with robotics, logistics, and autonomous systems. – https://dig.watch/updates/naver-expands-physical-ai-ambitions-with-690-million-gpu-investment – https://koreatechtoday.com/naver-to-invest-over-690-million-in-gpus-from-2025-to-boost-physical-ai-ambitions/
Material-level AI emerges in MIT–DeRucci sleep science collaboration
(DigWatch) MIT’s Sensor and Ambient Intelligence group, led by Joseph Paradiso, unveiled ‘FiberCircuits’, a smart-fibre platform co-developed with DeRucci. It embeds sensing, edge inference, and feedback directly in fibres to create ‘weavable intelligence’. The aim is natural, low-intrusion human–computer interaction. Teams embedded AI micro-sensors and sub-millimetre ICs to capture respiration, movement, skin conductance, and temperature, running tinyML locally for privacy. Feedback via light, sound, or micro-stimulation closes the loop while keeping power and data exposure low. – https://dig.watch/updates/material-level-ai-emerges-in-mit-derucci-sleep-science-collaboration – https://www.globenewswire.com/news-release/2025/11/06/3182217/0/en/How-Can-AI-Improve-Sleep-MIT-Lab-and-Chinese-Team-from-DeRucci-Group-Find-a-New-Solution.html
AI brain atlas reveals unprecedented detail in MRI scans
(DigWatch) Researchers at University College London have developed NextBrain, an AI-assisted brain atlas that visualises the human brain in unprecedented detail. The tool links microscopic tissue imaging with MRI, enabling rapid and precise analysis of living brain scans. – https://dig.watch/updates/ai-brain-atlas-reveals-unprecedented-detail-in-mri-scans – https://healthcare-in-europe.com/en/news/ai-brain-atlas-detail-mri.html
Pasqal and LG Electronics Forge Strategic Partnership to Advance Quantum Innovation and Industrial Applications
(Quantum Insider) Pasqal and LG Electronics have formed a strategic partnership supported by an equity investment from LG to co-develop quantum algorithms and core technologies for neutral atom quantum computing. The collaboration will focus on industrial applications such as multiphysics simulation, optimization, and materials discovery, as well as joint exploration of enabling components and modules for Pasqal’s room-temperature neutral atom systems. The agreement aims to accelerate the industrialization of quantum hardware and software, leveraging LG’s manufacturing expertise and Pasqal’s quantum technology to strengthen the quantum computing supply chain. – https://thequantuminsider.com/2025/11/06/pasqal-and-lg-electronics-forge-strategic-partnership-to-advance-quantum-innovation-and-industrial-applications/
UnifyApps Secures $50M to Become the Enterprise Operating System for AI to help CIOs Succeed with GenAI
(AI Insider) UnifyApps raised $50 million in Series B funding led by WestBridge Capital with participation from ICONIQ and others, bringing total funding to $81 million as it scales its Enterprise Operating System for AI. The company’s LLM-agnostic, low-code/no-code platform connects enterprise systems like Salesforce and Workday, turning fragmented GenAI pilots into scalable, production-grade AI by unifying data, intelligence, and execution. With over 600% annual revenue growth and customers including HDFC Bank, Deutsche Telekom, and the Abu Dhabi Government, UnifyApps is positioning itself as the infrastructure layer for AI-native enterprises. – https://theaiinsider.tech/2025/11/07/unifyapps-secures-50m-to-become-the-enterprise-operating-system-for-ai-to-help-cios-succeed-with-genai/
Anchor Browser Raises $6M Seed Round to Power the Next Generation of Agentic AI with Reliable Browser Automation
(AI Insider) Anchor Browser raised $6 million in Seed funding led by Blumberg Capital with participation from Gradient, to build infrastructure that lets AI agents navigate and act on the web securely and reliably. Its platform turns any web interface into an AI-accessible surface, enabling enterprises to automate complex online workflows through its new product b0.dev, which enhances reliability by running AI planning before execution. Founded in 2024 by veterans of Unit 8200, SentinelOne, and Noname Security, Anchor is already used by companies like Groq, Unify, and Cloudflare, positioning itself as a core infrastructure layer for agentic AI deployment. – https://theaiinsider.tech/2025/11/07/anchor-browser-raises-6m-seed-round-to-power-the-next-generation-of-agentic-ai-with-reliable-browser-automation/
Foxconn, Mitsubishi Electric Sign MoU to Jointly Develop AI Data Center Infrastructure
(AI Insider) Foxconn and Mitsubishi Electric have signed an MOU to jointly develop and supply energy-efficient, high-reliability infrastructure for AI data centers worldwide. The collaboration aims to reduce energy consumption and support circular economy goals by combining Foxconn’s manufacturing scale with Mitsubishi Electric’s energy and digital engineering expertise. The companies also plan to explore broader sustainability-focused applications and business models beyond data centers. – https://theaiinsider.tech/2025/11/06/foxconn-mitsubishi-electric-sign-mou-to-jointly-develop-ai-data-center-infrastructure/
Appetronix Closes $10M-Plus in Total Seed Funding to Scale Robotic Kitchens Across Non-Commercial Foodservice Markets
(AI Insider) Appetronix has raised over $10 million to expand its intelligent robotic kitchen systems, including a recent $6 million seed-plus round led by Jim Grote, the Grote family, and AlleyCorp. The company is targeting non-commercial venues such as airports, hospitals, and entertainment centers, building on its successful deployment with Donatos Pizza at Columbus International Airport. Appetronix aims to address foodservice challenges—labor shortages, consistency, and scalability—through modular robotics designed for 24/7 operations and high-quality output in institutional environments. – https://theaiinsider.tech/2025/11/06/appetronix-closes-10m-plus-in-total-seed-funding-to-scale-robotic-kitchens-across-non-commercial-foodservice-markets/
Bluwhale Secures $10M Strategic Series A in Institutional Funding
(AI Insider) Bluwhale raised $10 million in Series A funding led by UOB Venture Management and backed by major financial institutions and leading blockchains including Sui, Tezos, Cardano, Arbitrum, and Movement Labs, signaling growing institutional adoption of decentralized AI. The company operates a decentralized AI network where over 3.6 million users access AI agents that deliver financial insights, manage transactions, and recommend assets through an interoperable Layer 3 blockchain infrastructure. The funding will drive AI-powered financial service expansion and institutional partnerships, following the launch of Bluwhale’s native $BLUAI token and its mission to make AI more secure, open, and scalable across Web3 and traditional finance. – https://theaiinsider.tech/2025/11/06/bluwhale-secures-10m-strategic-series-a-in-institutional-funding/