Governance and Legislation

Swiss to Release Open, Multilingual LLM Model

(AI Insider – 15 July 2025) Switzerland will release its first fully public large language model (LLM) in late summer 2025, developed by ETH Zurich, EPFL, and the Swiss National Supercomputing Centre (CSCS) to promote transparency, multilingualism, and open AI innovation. Funded by the ETH Board and trained on the “Alps” supercomputer powered by over 10,000 NVIDIA Grace Hopper Superchips, the Swiss LLM is designed for sovereign AI infrastructure and will be released under an Apache 2.0 license with full access to code, training data, and model weights. Featuring multilingual capabilities across more than 1,000 languages and two model sizes (8B and 70B parameters), the model targets broad adoption in science, government, education, and industry, with researchers emphasizing compliance with Swiss and EU regulations, ethical data sourcing, and transparent documentation. – https://theaiinsider.tech/2025/07/15/swiss-to-release-open-multilingual-llm-model/

Want Accountable AI in Government? Start with Procurement

(Nari Johnson, Elise Silva, Hoda Heidari – Tech Policy Press – 15 July 2025) In 2018, the public learned that the New Orleans Police Department had been using predictive policing software from Palantir to decide where to send officers. Civil rights groups quickly raised alarms about the tool’s potential for racial bias. But the deeper issue wasn’t just how the technology worked, but the processes that shaped its adoption by the city. Who approved its use? Why was it hidden from the public? Like New Orleans, all US cities rely on established public procurement processes to contract with private vendors. These regulations, often written into law, typically apply to every government purchase, whether it’s school buses, office supplies, or artificial intelligence systems. But this case exposed a major loophole in the city’s procurement rules: because Palantir donated the software for free, the deal sidestepped the city’s usual oversight processes. No money changed hands, so the agreement didn’t trigger standard checks such as a requirement for city council debate and approval. The city didn’t treat philanthropic gifts like traditional purchases, and as a result, key city officials and council members had no idea the partnership even existed. Inspired by this story and several others across the US, our research team, made up of scholars from Carnegie Mellon University and the University of Pittsburgh, decided to investigate the purchasing processes that shape critical decisions about public sector AI. – https://www.techpolicy.press/want-accountable-ai-in-government-start-with-procurement/

The UK’s Opportunity to Lead on Social Media Transparency

(Mark Scott – Tech Policy Press – 15 July 2025) In the world of online safety rulemaking, most attention has focused on the European Union’s Digital Services Act. But just across the English Channel, the United Kingdom’s Online Safety Act (OSA), a set of rules for social media, video-sharing, and internet messaging companies to remove illegal content like terrorist material and online financial fraud, is now well underway. That rulebook, which includes potential fines of up to 10 percent of a firm’s global revenue, just got a double revamp. – https://www.techpolicy.press/the-uks-opportunity-to-lead-on-social-media-transparency/

US House passes NTIA cyber leadership bill after Salt Typhoon hacks

(DigWatch – 15 July 2025) The US House of Representatives has passed legislation that would officially designate the National Telecommunications and Information Administration (NTIA) as the federal lead for cybersecurity across communications networks. The move follows last year’s Salt Typhoon hacking spree, described by some as the worst telecom breach in US history. The National Telecommunications and Information Administration Organization Act, introduced by Representatives Jay Obernolte and Jennifer McClellan, cleared the House on Monday and now awaits Senate approval. – https://dig.watch/updates/us-house-passes-ntia-cyber-leadership-bill-after-salt-typhoon-hacks

AI’s future in banking depends on local solutions and trust

(DigWatch – 15 July 2025) According to leading industry voices, banks and financial institutions are expected to play a central role in accelerating AI adoption across African markets. Experts at the ACAMB stakeholders’ conference in Lagos stressed the need for region-specific AI solutions to meet Africa’s unique financial needs. Niyi Yusuf, Chairman of the Nigerian Economic Summit Group, highlighted AI’s evolution since the 1950s and its growing influence on modern banking. – https://dig.watch/updates/ais-future-in-banking-depends-on-local-solutions-and-trust

Asia’s humanities under pressure from AI surge

(DigWatch – 15 July 2025) Universities across Asia, notably in China, are slashing liberal arts enrolments to expand STEM and AI programmes. Institutions like Fudan and Tsinghua are reducing intake for humanities subjects, as policymakers push for a high-tech workforce. Despite this shift, educators argue that sidelining subjects like history, philosophy, and ethics threatens the cultivation of critical thinking, moral insight, and cultural literacy, which are increasingly necessary in an AI-saturated world. They contend that humanistic reasoning remains essential for navigating AI’s societal and ethical complexities. – https://dig.watch/updates/asias-humanities-under-pressure-from-ai-surge

AI Isn’t Responsible for Slop. We Are Doing It to Ourselves

(José Marichal – Tech Policy Press – 15 July 2025) Our social media feeds are increasingly being overrun by AI slop, so much so that Fast Company’s Mark Sullivan has dubbed it the “AI Slop Summer.” Critics are drawing attention to the looming dangers of AI-driven content, as in a memorable John Oliver segment. Google’s new AI video generator, Veo3, is being used to produce racist and antisemitic videos that are then posted on social media. YouTube has taken notice of this phenomenon; on July 15 it announced that the YouTube Partner Program will exclude AI slop from being monetized. This content isn’t only banal—it can make our toxic public sphere even worse. But while the very real dangers of AI slop are often framed as large tech companies imposing dangerous tools on an unsuspecting public, perhaps we should also consider why we are so receptive to low quality content. – https://www.techpolicy.press/ai-isnt-responsible-for-slop-we-are-doing-it-to-ourselves/

Why Technology Won’t Save Us Unless We Change Our Behavior

(Frenk van Harreveld – Tech Policy Press – 14 July 2025) We can design greener tech, smarter AI, and healthier systems—but unless people use them, trust them, and stick with them, they won’t matter. Climate change, overstretched healthcare systems, and the rise of artificial intelligence are among the greatest challenges of our time. We often turn to technology for solutions: cleaner energy, more efficient healthcare, safer algorithms. But innovation is only half the story. The other half is us—and our behavior. Even the most promising technology fails if people don’t use it, understand it, or trust it. Green products have to be purchased and applied. Preventive health tools only work if lifestyles change. AI systems can boost efficiency, but only if users engage critically and responsibly. More often than not, the bottleneck is not in what we can build, but in what people actually do. – https://www.techpolicy.press/why-technology-wont-save-us-unless-we-change-our-behavior/

Security

The Security Stakes in the Global Quantum Race

(Argyri Panezi – Just Security – 15 July 2025) The quantum era is around the corner. Major tech companies are announcing impressive breakthroughs in quantum advantage, quantum error correction, and quantum networking. Competing quantum chips are also reaching new heights, from IBM’s Condor breaking the 1,000-qubit barrier in December 2023 signaling the ability to dramatically expand computational power, to Google’s Willow, presented in December 2024, and Microsoft’s Majorana 1 announcement in February 2025 – a breakthrough that remains contested. The quantum race is international, with competition between major players in the West and the East. Public investments in quantum technologies have surged globally, reaching $42 billion in 2023. China leads with more than $15 billion in investments, followed by Germany, the United Kingdom, the United States, and South Korea. Similar to the global race for AI leadership, quantum technology has geopolitical dimensions. Commentators are drawing comparisons between the quantum race and the earlier nuclear and space races. Should policymakers anticipate that the quantum race will pose major security and safety risks, as with nuclear power? If this proves to be the case, then the international community can expect security implications of analogous magnitude. However, by coordinating and acting early, governments have an opportunity to prevent harmful competition, anticipate societal impacts, and build inclusive governance frameworks that support responsible and equitable development and adoption of quantum technologies. – https://www.justsecurity.org/116473/security-stakes-global-quantum-race/

Are Cyber Defenders Winning?

(Jason Healey, Tarang Jain – Lawfare – 14 July 2025) On June 6, President Trump signed an executive order to “reprioritize cybersecurity efforts to protect America,” outlining a rough agenda “to improve the security and resilience of the nation’s information systems and networks.” As the administration develops a new cybersecurity strategy, it is essential that it understand and respond to a shifting trend in cyberspace: After a decades-long slump, defenders may finally be gaining the advantage. In the 1970s, computers could be kept secure simply by being in locked rooms. But when these computers were connected to networks, attackers gained the advantage. Despite decades of defensive innovations since then, defenders’ efforts are routinely overwhelmed by the gains made by attackers. Successful defense is possible—but only with substantial resources and discipline. – https://www.lawfaremedia.org/article/are-cyber-defenders-winning

Terrorism

Assessing Terrorist Use of Virtual Asset Intermediaries

(RUSI – 14 July 2025) A research briefing by Allison Owen examines intermediary services that convert virtual assets to fiat currency in cases related to three groups: Hamas, Hezbollah and ISIS. This research brief is based, in part, on data provided by blockchain analytics company Crystal Intelligence, which was gathered through its internal investigations. – https://www.rusi.org/explore-our-research/publications/external-publications/assessing-terrorist-use-virtual-asset-intermediaries

Defense, Intelligence, and Warfare

Military AI and the void of accountability

(DigWatch – 15 July 2025) In her blog post ‘Military AI: Operational dangers and the regulatory void,’ Julia Williams warns that AI is reshaping the battlefield, shifting from human-controlled systems to highly autonomous technologies that make life-and-death decisions. From the United States’ Project Maven to Israel’s AI-powered targeting in Gaza and Ukraine’s semi-autonomous drones, military AI is no longer a futuristic concept but a present reality. While designed to improve precision and reduce risks, these systems carry hidden dangers—opaque ‘black box’ decisions, biases rooted in flawed data, and unpredictable behaviour in high-pressure situations. Operators either distrust AI or over-rely on it, sometimes without understanding how conclusions are reached, creating a new layer of risk in modern warfare. Bias remains a critical challenge. AI can inherit societal prejudices from the data it is trained on, misinterpret patterns through algorithmic flaws, or encourage automation bias, where humans trust AI outputs even when they shouldn’t. – https://dig.watch/updates/military-ai-and-the-void-of-accountability

Unnatural Disasters: The Next Front in Russia’s Hybrid War

(Matt Ince – RUSI – 14 July 2025) Once confined to science fiction, solar geoengineering is now moving into real-world experimentation, raising the risk of misuse by hostile actors. Also known as solar radiation management, this set of novel technologies – such as stratospheric aerosol injection and marine cloud brightening – aims to artificially slow the rise in global temperatures by reducing the amount of sunlight absorbed by the Earth’s surface. To date, their primary purpose has been to tackle the symptoms of accelerating climate change. However, these technologies also pose dual-use risks: alongside unintended environmental consequences, they could be exploited by powers seeking to tilt the geopolitical balance to cause climate-related disruption. As the UK government prioritises investment in solar geoengineering research and development, it has a crucial window to help shape international norms. Without adequate safeguards, these technologies could become tools of geopolitical coercion in the years ahead. – https://www.rusi.org/explore-our-research/publications/commentary/unnatural-disasters-next-front-russias-hybrid-war

Frontiers

How AI Can Degrade Human Performance in High-Stakes Settings

(Dane A. Morey, Mike Rayo, and David Woods – AI Frontiers – 16 July 2025) Last week, the AI nonprofit METR published an in-depth study on human-AI collaboration that stunned experts. It found that software developers with access to AI tools took 19% longer to complete their tasks, despite believing they had finished 20% faster. The findings shed important light on our ability to predict how AI capabilities interact with human skills. Since 2020, we have been conducting similar studies on human-AI collaboration, but in contexts with much higher stakes than software development. Alarmingly, in these safety-critical settings, we found that access to AI tools can cause humans to perform much, much worse. A 19% slowdown in software development can eat into profits. Reduced performance in safety-critical settings can cost lives. – https://aifrontiersmedia.substack.com/p/how-ai-can-degrade-human-performance

Google and Westinghouse unleash AI to build nuclear reactors faster than ever

(Interesting Engineering – 15 July 2025) In a first-of-its-kind move, Westinghouse Electric Company and Google Cloud have teamed up to leverage artificial intelligence for streamlining nuclear reactor construction. Their AI-powered tools autonomously generate and optimize modular work packages for advanced reactors. The collaboration pairs Westinghouse’s proprietary HiVE™ and bertha™ nuclear AI solutions with Google Cloud technologies such as Vertex AI, Gemini, and BigQuery. – https://interestingengineering.com/culture/westinghouse-google-cloud-ai-nuclear-reactors

US AI supercomputer Nexus will compute faster than 8 billion humans combined

(Interesting Engineering – 15 July 2025) The U.S. research community is set to gain a major AI-powered boost. Georgia Tech and its partners have secured $20 million from the National Science Foundation to build Nexus, one of the nation’s fastest supercomputers, designed to accelerate scientific discovery using artificial intelligence. Once completed in spring 2026, Nexus will deliver over 400 quadrillion operations per second, the equivalent of everyone in the world continuously performing 50 million calculations every second. – https://interestingengineering.com/innovation/georgia-tech-nexus-supercomputer-ai-research
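As a quick sanity check on the figures quoted above, the article’s equivalence holds: spreading 400 quadrillion operations per second across roughly 8 billion people yields 50 million calculations per person per second. A minimal sketch, using only the numbers stated in the article:

```python
# Verify the Nexus comparison: 400 quadrillion ops/s shared
# among ~8 billion people.
nexus_ops_per_sec = 400e15   # 400 quadrillion (4 x 10^17) operations per second
world_population = 8e9       # roughly 8 billion people

per_person = nexus_ops_per_sec / world_population
print(f"{per_person:,.0f} calculations per person per second")
# prints "50,000,000 calculations per person per second"
```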

German humanoid robot welder to tackle high-risk jobs at Hyundai’s shipyard

(Interesting Engineering – 15 July 2025) Under a new partnership, Korean firms HD Hyundai Robotics and HD Hyundai Samho will test advanced robots in shipbuilding, marking a major step in automating one of the world’s toughest industries. Interestingly, the trial will use robots from Germany’s Neura Robotics, not Hyundai’s. Despite owning Boston Dynamics and a leading automation arm, Hyundai is turning to external innovation for this initiative. – https://interestingengineering.com/innovation/humanoid-robot-welder-hyundai-shipyard

Germany creates material ‘that has never existed’ to unlock quantum tech power

(Interesting Engineering – 15 July 2025) Scientists have merged four elements from Group IV of the periodic table to design a new material that could redefine the future of quantum computing, microelectronics, and photonics. The stable semiconductor alloy of carbon (C), silicon (Si), germanium (Ge), and tin (Sn) was developed by researchers at Forschungszentrum Jülich (FZJ), one of the largest interdisciplinary research institutions in Europe, and the Leibniz Institute for Innovative Microelectronics (IHP). – https://interestingengineering.com/innovation/material-to-unlock-quantum-tech-power
