December 6, 2025

AI Watch

Blurb:

Nvidia shares fell on Tuesday after The Information reported that Meta is considering using chips designed by Google.

Shares of Nvidia were 3.6% lower in premarket trade. Google-parent Alphabet was trading 2.6% higher.

On Monday, The Information reported that Meta is considering using Google’s tensor processing units (TPUs) in its data centers in 2027. Meta may also rent TPUs from Google’s cloud unit next year, the publication reported.

Google launched its first-generation TPU in 2018, initially for internal use within its cloud computing business. Since then, Google has released more advanced versions of the chip designed to handle artificial intelligence workloads.

Don’t leave your children alone with any interactive AI toy, warns the New York Public Interest Research Group (NYPIRG), a consumer watchdog group, in its 40th annual report, “Trouble in Toyland 2025.” The group cautions that “[s]ome of these toys will talk in-depth about sexually explicit topics, act dismayed when you …”

Blurb:

AI chatbot toys are having ‘sexually explicit’ conversations with kids: report – NYPost

As the season of gift-giving draws nigh, experts are warning parents against buying their children presents powered by AI — claiming certain robo-charged trinkets are having “sexually explicit” discussions with kids under age 12.

Blurb:

“AI-powered digital twins mark a major evolution in the future of manufacturing, enabling real-time visualization of the entire production line, not just individual machines,” says Indranil Sircar, global chief technology officer for the manufacturing and mobility industry at Microsoft. “This is allowing manufacturers to move beyond isolated monitoring toward much wider insights.”

A digital twin of a bottling line, for example, can integrate one-dimensional shop-floor telemetry, two-dimensional enterprise data, and three-dimensional immersive modeling into a single operational view of the entire production line to improve efficiency and reduce costly downtime. Many high-speed industries face downtime rates as high as 40%, estimates Jon Sobel, co-founder and chief executive officer of Sight Machine, an industrial AI company that partners with Microsoft and NVIDIA to transform complex data into actionable insights. By tracking micro-stops and quality metrics via digital twins, companies can target improvements and adjustments with greater precision, saving millions in once-lost productivity without disrupting ongoing operations.
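
The micro-stop tracking Sobel describes is, at its core, an aggregation problem over stop-event telemetry. A minimal sketch in Python illustrates the idea; the event schema, station names, and the 120-second micro-stop cutoff are all invented for illustration, not taken from Sight Machine or Microsoft:

```python
from dataclasses import dataclass

@dataclass
class StopEvent:
    """One line-stop event from shop-floor telemetry (illustrative schema)."""
    station: str
    seconds: float

# A "micro-stop" is a brief pause that is easy to miss in the aggregate;
# the 120-second cutoff here is an arbitrary, illustrative threshold.
MICRO_STOP_CUTOFF = 120.0

def downtime_by_station(events):
    """Return {station: (total_downtime_s, micro_stop_downtime_s)}."""
    totals = {}
    for e in events:
        total, micro = totals.get(e.station, (0.0, 0.0))
        micro += e.seconds if e.seconds < MICRO_STOP_CUTOFF else 0.0
        totals[e.station] = (total + e.seconds, micro)
    return totals

events = [StopEvent("filler", 45), StopEvent("capper", 600), StopEvent("filler", 30)]
print(downtime_by_station(events))
# {'filler': (75.0, 75.0), 'capper': (600.0, 0.0)}
```

In a real digital twin these events would stream continuously from line sensors into a shared operational view; the point of the sketch is that brief stops, invisible individually, become actionable once aggregated per station.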

Blurb:

Each chapter in the paper offers case studies: a mathematician or a physicist stuck in a quandary, a doctor trying to confirm a lab result. They all ask GPT-5 for help. Sometimes the LLM gets things wrong. Sometimes it finds a faster route to an already known result. But other times, with careful human guidance, it helps push the boundaries of what was previously known.

In one experiment involving how waves behave around black holes, GPT-5 worked through the math to independently reproduce results already known to be correct, demonstrating that it can carry out this level of scientific calculation. In another project involving nuclear fusion, GPT-5 developed a model that accelerated the research.

Blurb:

While much of the history of life on Earth is written, the opening chapters are murky at best. On our ever-changing world, the older a rock is, the more it has changed, obscuring or even erasing evidence of ancient life. Beyond a hazy boundary of circa two billion years, in fact, this interference is so total that no pristine, unaltered Earth rocks are known to exist, making any potential sign of biology as clear as mud.

At least until now. In a study published on November 17 in the Proceedings of the National Academy of Sciences, a group of researchers say they’ve leveraged artificial intelligence to follow life’s trail further back in time than ever before, using machine learning to distinguish the echoes of biology from mere abiotic organic molecules in rocks as old as 3.3 billion years.

The results could more than double how far back in time scientists can convincingly claim to discern molecular signs of life in ancient rocks, the study authors say, citing previous record-setting measurements involving 1.6-billion-year-old rocks.
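
The study’s actual pipeline is far more sophisticated than anything shown here, but the underlying idea — classifying a sample’s chemical fingerprint as biotic or abiotic by similarity to labeled examples — can be sketched with a toy nearest-centroid classifier. Every number and label below is invented for illustration:

```python
import math

# Toy "chemical fingerprints": short intensity vectors standing in for the
# organic-chemistry features a real analysis would derive from rock samples.
training = {
    "biotic":  [[0.9, 0.1, 0.7], [0.8, 0.2, 0.6]],
    "abiotic": [[0.2, 0.9, 0.1], [0.1, 0.8, 0.2]],
}

def centroid(vectors):
    """Mean vector of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def classify(sample, training):
    """Label a sample by the nearest class centroid (Euclidean distance)."""
    centroids = {label: centroid(vs) for label, vs in training.items()}
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

print(classify([0.85, 0.15, 0.65], training))  # biotic
```

The hard part in the real work is not the classifier but the features and labels: building a training set of fingerprints whose biotic or abiotic origin is independently known, so the model’s verdict on a 3.3-billion-year-old rock carries weight.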

Blurb:

The term “smart city” fails to fully capture the integrated data system that is the Pudong New Area of Shanghai.  Chinese authorities call it the “city brain,” a centrally controlled AI center that surveils and manages the city and its inhabitants.  It offers a disturbing preview of future urban governance, built on a previously unimaginable level of monitoring and control.  Since 2017, this system has linked hundreds of government databases to tens of thousands of sensors, effectively turning an entire urban district into a single, real-time data object.

Officials defend the surveillance for its tangible rewards: cleaner neighborhoods, faster emergency response, smoother traffic, and better protection for isolated seniors.  Those benefits help explain why many citizens accept the system.  But the costs are equally real.  It normalizes penetrating, constant visibility, the steady expansion of behavior-based penalties, and an infrastructure that is also used for political and social control.

Blurb:

At the 2024 International Mathematical Olympiad (IMO), one competitor performed well enough to have been awarded a silver medal, except for one thing: it was an AI system. This was the first medal-level performance by an AI in the competition’s history. In a paper published in the journal Nature, researchers detail the technology behind this remarkable achievement.

The AI is AlphaProof, a sophisticated program developed by Google DeepMind that learns to solve complex mathematical problems. The achievement at the IMO was impressive enough, but what really makes AlphaProof special is its ability to find and correct errors. While large language models (LLMs) can solve math problems, they often can’t guarantee the accuracy of their solutions. There may be hidden flaws in their reasoning.
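
DeepMind has described AlphaProof as working in the Lean proof assistant, where every proof is checked mechanically by the kernel, so a flawed argument fails to compile rather than slipping through as a hidden reasoning error. A trivial Lean 4 example of the kind of machine-checked statement involved (this snippet is illustrative, not drawn from AlphaProof itself):

```lean
-- A statement and proof the Lean kernel verifies mechanically:
-- if the proof had a gap, compilation would fail rather than
-- silently accepting a wrong argument.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This is what distinguishes formal proof search from LLM answers: correctness is enforced by the checker, not asserted by the model.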

Blurb:

The US Department of Commerce has launched what could become one of the most significant initiatives in the Administration’s AI Action Plan: the American AI Exports Program. This new effort positions the Department of Commerce as an active partner in expanding the global reach of American AI technologies: hardware, software, and models. This initiative marks a shift from regulating AI development domestically to fostering trusted AI ecosystems worldwide. At its core, this effort uses US economic and diplomatic strengths to shape the global AI marketplace before others do.

Over the past few years, Washington’s AI policy debate has focused on risk management: how to prevent bias, combat misinformation, and ensure safety in critical systems. These concerns are crucial, but they shouldn’t be the sole focus when discussing emerging technology. The AI Exports Program demonstrates a deliberate expansion of the federal government’s tools, with the Department of Commerce acting as both a regulator and promoter of growth.

Elon Musk’s alternative to Wikipedia, Grokipedia, has gone live. The site promises information curated and created by AI, free of the far-left, insurrectionist bias Musk ascribes to Wikipedia. Musk claimed in an X post, “even at 0.1 it’s better than Wikipedia imo.”

Blurb:

Elon Musk announced that Grokipedia, his AI rival to Wikipedia, is now live. In an X post, the Tesla boss touted that his xAI startup’s Grokipedia “Version 1.0 will be 10X better,” going on to claim, “even at 0.1 it’s better than Wikipedia imo.” Interested users can access the platform by searching for Grokipedia on Google, or by visiting Grokipedia.com
from news.google.com

Blurb:

Artificial intelligence is accelerating the scale and potency of the malicious activity in your email inbox. These threats are no longer obvious; instead, they take the shape of professional, sophisticated messages tailored to your interests and current correspondence. But with the cybersecurity landscape quickly shifting due to AI-powered illicit activity, how can we ensure a secure inbox? And what would that look like in practice?

Shane Tews spent some time discussing this and more with Cy Khormaee and Ryan Luo, co-founders of AegisAI. Cy and Ryan have spent a combined 12+ years at the forefront of cybersecurity, working to help reimagine and practically apply security on a personal level.

800-plus leading figures in AI and tech have banded together to call for a pause on the development of what is called “superintelligence”: an AI model that, theoretically (and probably in practice), will be able to out-think humans not just scientifically but psychologically. The coalition includes Apple cofounder Steve Wozniak and former U.S. National Security Advisor Susan Rice.

The statement has now been signed by more than 16,220 AI and tech professionals. Its core text reads: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

Blurb:

A group of prominent figures, including artificial intelligence and technology experts, has called for an end to efforts to create ‘superintelligence’ — a form of AI that would surpass human intellect.

More than 800 people, including Apple cofounder Steve Wozniak and former U.S. National Security Advisor Susan Rice, signed a statement published Wednesday calling for a pause on the development of superintelligence.

Blurb:

In the early 2010s, nearly every STEM-savvy college-bound kid heard the same advice: Learn to code. Python was the new Latin. Computer science was the ticket to a stable, well-paid, future-proof life.

But in 2025, the glow has dimmed. “Learn to code” now sounds a little like “learn shorthand.” Teenagers still want jobs in tech, but they no longer see a single path to get there. AI seems poised to snatch up coding jobs, and there is no plethora of AP classes in vibe coding. Their teachers are scrambling to keep up.

“There’s a move from taking as much computer science as you can to now trying to get in as many statistics courses” as possible, says Benjamin Rubenstein, an assistant principal at New York’s Manhattan Village Academy. Rubenstein has spent 20 years in New York City classrooms, long enough to watch the “STEM pipeline” morph into a network of branching paths instead of one straight line. For his students, studying stats feels more practical.

Blurb:

From The Guardian: “The use of artificial intelligence in healthcare could create a legally complex blame game when it comes to establishing liability for medical failings, experts have warned.

The development of AI for clinical use has boomed, with researchers creating a host of tools, from algorithms to help interpret scans to systems that can aid with diagnoses. AI is also being developed to help manage hospitals, from optimising bed capacity to tackling supply chains.

But while experts say the technology could bring myriad benefits for healthcare, they say there is also cause for concern, from a lack of testing of the effectiveness of AI tools to questions over who is responsible should a patient have a negative outcome.

Are AI robots the future of parenting in China? | CNN

Parents in the US brace as China’s AI toy trend goes global

from www.techspot.com

Blurb:

The market for AI toys in China is predicted to grow faster than any other consumer AI sector, writes MIT Technology Review, reaching $14 billion by 2030. That’s not surprising given that around 1,500 AI toy companies were operating in the country as of October 2025.

 

AMD: Latest news and insights | Network World

AMD wins massive AI chip deal from OpenAI with stock sweetener– arstechnica.com
Source Link
Excerpt:

As part of the arrangement, AMD will allow OpenAI to purchase up to 160 million AMD shares at 1 cent each throughout the chips deal.

With demand for AI compute growing rapidly, companies like OpenAI have been looking for secondary supply lines and sources of additional computing capacity, and the AMD partnership is part of the company’s wider effort to secure sufficient computing power for its AI operations. In September, Nvidia announced an investment of up to $100 billion in OpenAI that included supplying at least 10 gigawatts of Nvidia systems. OpenAI plans to deploy a gigawatt of Nvidia’s next-generation Vera Rubin chips in late 2026.

OpenAI has worked with AMD for years, according to Reuters, providing input on the design of older generations of AI chips such as the MI300X. The new agreement calls for deploying the equivalent of 6 gigawatts of computing power using AMD chips over multiple years.

Beyond working with chip suppliers, OpenAI is widely reported to be developing its own silicon for AI applications and has partnered with Broadcom, as we reported in February. A person familiar with the matter told Reuters the AMD deal does not change OpenAI’s ongoing compute plans, including its chip development effort or its partnership with Microsoft.

Lockheed, Verizon testing 5G-linked drone swarm for intel collection

‘Swarms of Killer Robots’: Why AI is Terrifying the American Military – Politico
Source Link
Excerpt:

Artificial intelligence technology is poised to transform national security. In the United States, experts and policymakers are already experimenting with large language models that can aid in strategic decision-making in conflicts and autonomous weapons systems (or, as they are more commonly called, “killer robots”) that can make real-time decisions about what to target and whether to use lethal force.

But these new technologies also pose enormous risks. The Pentagon is filled with some of the country’s most sensitive information. Putting that information in the hands of AI tools makes it more vulnerable, both to foreign hackers and to malicious inside actors who want to leak information, as AI can comb through and summarize massive amounts of information better than any human. A misaligned AI agent can also quickly lead to decision-making that unnecessarily escalates conflict.

Risks and Benefits of AI for Businesses and Cybersecurity | SBS

As AI redefines work, US employers cut jobs and remain cautious in hiring– www.computerworld.com
Source Link
Excerpt:

 

Ben Johnston, COO of small business lender Kapitus, said US businesses are grappling with tariffs that raise costs on imported goods, potentially making domestic manufacturing more competitive long term. But in the short term, those tariffs risk driving up inflation and disrupting global supply chains, threatening jobs across the manufacturing, wholesale, and retail sectors.

AI is also beginning to displace workers, especially in white-collar jobs. Companies are currently investing heavily in AI technologies that can analyze data and quickly make decisions that once could only be made by humans, Johnston said.

Companies are using AI to gather and analyze data from the web, internal systems, and third parties — tasks once done only by humans — mainly in white-collar roles like analytics and underwriting. And as robotics advance, AI could soon take on physical tasks in blue-collar jobs like driving, factory work, and even home healthcare, Johnston noted.

How U.S.-Gulf AI Deals Project Power– warontherocks.com
Source Link
Excerpt:

The great-power contest is not unfolding on battlefields or carrier decks, but inside data halls cooled by air conditioning, far from America’s shores. Rows of servers and racks of graphics processing units now carry as much strategic weight as military bases once did. Each deal for cloud access or advanced chips is a form of statecraft, binding partners into one camp’s technology ecosystem while locking out the other.

The United States is using AI infrastructure — data centers, cloud controls, and compute access — as a tool of power projection in the Arabian Gulf. By tying investment and capacity to governance safeguards, Washington can align regional partners with its security preferences, crowd out Chinese platforms, and set the rules for how AI is built and deployed. But the leverage is fragile. Without resilience and enforceable compliance, these arrangements risk becoming single points of failure or, worse, conduits for adversaries.

To make this new form of statecraft durable, U.S. policymakers should establish standard deal architectures with Gulf partners that combine hard technical safeguards, strict governance requirements, and built-in contingency plans. That means binding model weights to secure enclaves, tracking accelerators and workloads, embedding snapback clauses for violations, and pairing technical assurances with human rights standards. Done right, this approach can turn American-backed AI infrastructure into a lasting source of influence — quiet, scalable, and harder to dislodge than a forward operating base.

A Feminist Approach to AI in Sub-Saharan Africa • Stimson Center– www.stimson.org
Source Link
Excerpt:

Across Africa, AI is being harnessed to achieve positive impacts for marginalized communities. However, while AI can be used for good, some fear it could further marginalize and harm those it is intended to empower. Despite emphasis from both public and private sectors on equality and equity, uncertainty around policy-enabling environments, skills, and resources still presents a bottleneck for building inclusive AI. Though there are promising femtech solutions aimed at addressing specific gender concerns, the question of addressing needs and wants from a feminist approach in AI lingers.

Gender is often still an afterthought when it comes to policy implementation and practice, but there are many initiatives, charters, and agreements in Africa that support equality and aim to eliminate violence against women, combat the disproportionate effect of poverty on women, and support women’s participation in the political and economic spheres. For example, Agenda 2063 promotes gender equality and an engaged, empowered youth. The African Union strategy on Gender Equality and Women’s Empowerment (GEWE) 2018-2028 also aims to strengthen women’s agency in Africa and ensure that women’s voices are amplified and their concerns are fully addressed. The African Charter on Human and Peoples’ Rights on the Rights of Women in Africa similarly requires member states to tackle “all forms of discrimination against women through appropriate legislative measures.”

In Africa, there is a strong normative framework on gender equality and women’s and girls’ rights, and correspondingly, there are some civil or governmental initiatives that support women in national digital transformation policies. In Rwanda, for example, women have become increasingly influential in building national plans for artificial intelligence, and in Kenya, women have impacted the national plan for data uses. However, Africa is still facing challenges in integrating AI and policy. The Oxford AI Worldwide Readiness Index exemplifies the gap between the United States, ranked as first, and Mauritius, considered the African flagship country in AI policy, ranked 69th. There are only four African countries – Mauritius, South Africa, Rwanda, and Egypt – whose scores were higher than the global average of 47.59.

Policy efforts across the continent are increasing but still limited. In mid-2021, Egypt launched its national AI strategy, christened “Artificial Intelligence for Development and Prosperity,” making clear the country’s ambitious goals for development and economic growth. Senegal followed suit in 2023 with a strategy of its own, also focused on economic development. On April 20, 2023, Rwanda released its “National AI Policy for Responsible AI Adoption,” which emphasizes AI for sustainable development. In 2024, Kenya published its draft national AI strategy with goals including social inclusion, ethics, and equity in AI.

     

The Complex Promise and Perils of AI in Policing : Center for ...

Arizona’s AI policing tool threatens civil liberties– www.theblaze.com
Source Link
Excerpt:

Several Arizona police departments are piloting a new AI-powered policing tool that promises to revolutionize how officers catch criminals. But without robust constitutional safeguards, this cutting-edge technology could pose a serious threat to the civil liberties of everyday Americans.

Arizona police agencies are now testing a new AI program that “deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels.” The program, called Overwatch, was developed by Massive Blue and provides police departments with up to 50 different AI personas.

These include a sex trafficker persona, an escort persona, a 14-year-old boy in a child trafficking scenario, and a vaguely defined “college protester.” Beyond social media monitoring, the program allows police to communicate directly with suspects while posing as one of these AI-generated personas, all without a warrant.

No transparency

So far, both the police departments using Overwatch and the company behind it have been extremely secretive about its operations. Massive Blue co-founder Mike McGraw declined to answer questions from 404 Media, which first broke the story, about how the program works, which departments are using it, and whether it has led to any arrests.

“We cannot risk jeopardizing these investigations and putting victims’ lives in further danger by disclosing proprietary information,” McGraw said.

The Pinal County Sheriff’s Office, one of the few agencies that have confirmed using the program, admitted it has not yet led to any arrests. Officials refused to provide details, saying, “We cannot risk compromising our investigative efforts by providing specifics about any personas.”

At an appropriations hearing, a Pinal County deputy sheriff also declined to share information about the program with the county council. Remarkably, the Arizona Department of Public Safety, which funds the initiative, does not appear to have been informed about the program’s specifics.

While the technology could, in theory, be used for noble purposes, such as preventing terrorist attacks or combating human trafficking, it also creates new opportunities for government overreach. Without safeguards, it poses a direct threat to the civil liberties of innocent Americans.

Invitation to entrapment

History is full of examples of government entrapment and abuse of power. In the plot to kidnap Michigan Gov. Gretchen Whitmer, for example, FBI involvement played a central role in bringing together groups that might never otherwise have connected.

Similarly, in Jacobson v. United States (1992), federal agents sent child sexual abuse material through the mail to a man with no prior criminal record, leading to his conviction, which was later overturned.

In both cases, it is doubtful the crimes would have occurred without government intervention. A program like Overwatch makes such abuses easier, granting the government new ways to monitor and manipulate citizens who have never been convicted of a crime, and all without warrants.

The risks are compounded by the program’s vague and troubling categories, such as “college protester,” which could be redefined depending on who is in power. That opens the door for the technology to be weaponized against political dissent, even when no crime has been committed.

Without serious constitutional safeguards, programs like this are poised to become political tools of tyranny. Americans must demand warrant requirements and legislative oversight before this technology spreads nationwide and the erosion of our constitutional liberties becomes irreversible.

California Governor Gavin Newsom signs landmark AI safety law SB 53– fortune.com
Source Link
Excerpt:

California has taken a significant step toward regulating artificial intelligence with Governor Gavin Newsom signing a new state law that will require major AI companies, many of which are headquartered in the state, to publicly disclose how they plan to mitigate the potentially catastrophic risks posed by advanced AI models.

The law also creates mechanisms for reporting critical safety incidents, extends whistleblower protections to AI company employees, and initiates the development of CalCompute, a government consortium tasked with creating a public computing cluster for safe, ethical, and sustainable AI research and innovation. By compelling companies, including OpenAI, Meta, Google DeepMind, and Anthropic, to follow these new rules at home, California may effectively set the standard for AI oversight.

Newsom framed the law as a balance between safeguarding the public and encouraging innovation. In a statement, he wrote: “California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.”

The legislation, authored by State Sen. Scott Wiener, follows a failed attempt to pass a similar AI law last year. Wiener said that the new law, which was known by the shorthand SB 53 (for Senate Bill 53), focuses on transparency rather than liability, a departure from his prior SB 1047 bill, which Newsom vetoed last year.

“SB 53’s passage marks a notable win for California and the AI industry as a whole,” said Sunny Gandhi, VP of Political Affairs at Encode AI, a co-sponsor of SB 53. “By establishing transparency and accountability measures for large-scale developers, SB 53 ensures that startups and innovators aren’t saddled with disproportionate burdens, while the most powerful models face appropriate oversight. This balanced approach sets the stage for a competitive, safe, and globally respected AI ecosystem.”