April 30, 2026

AI Watch

Blurb:

By this time 26 years ago, the “Dot-Com Bubble” was ready to burst. People who wanted to raise investor money claimed they could sell anything affordably on a website; three companies were devoted just to selling pet food and buying ad space on broadcast television. So-called AI is enjoying a similar frenzy. Though today's systems are still just Large Language Models (LLMs), best thought of as fancy autocomplete, they are attracting huge levels of financial investment, partly because of their potential but primarily because people want to make money on stocks, not companies.

The Dot-Com Bubble did collapse, but progress continued without the hype: you can buy dog food affordably on the internet now, though the big money is in artisanal dog food marketed to wealthy elites. That may be the future for AI in 25 years as well. For now, while true AI is still a long way off, tasks that academics once considered challenging for LLMs, like the 2020 Massive Multitask Language Understanding (MMLU) benchmark designed to evaluate ability across 57 topics, are now easy for the private sector to master.

Blurb:

Just over 30 percent of US fourth graders are considered proficient in reading.

The AI platform Compani.AI is promoting a “homework agent” named Einstein that it says can complete assignments on behalf of students, including submitting work for them automatically. Childhood literacy rates in the US, meanwhile, are falling.

The website features a virtual version of Albert Einstein as an AI companion. According to the company, “Einstein has a full virtual computer with a browser — anything you can do, he can do.” The platform says the AI can log into the education platform Canvas on behalf of users, and once logged in, it “watches lectures, reads essays, writes papers, participates in discussions, and submits your homework — automatically.”

“Give him a reading assignment, and he reads the full text, understands it, and writes original essays with proper citations,” the company says. It also states that the AI can watch videos and extract “key concepts” using them to “answer assignments accurately.”

Blurb:


When your average daily token usage is 8 billion a day, you have a massive scale problem.

This was the case at AT&T, where chief data officer Andy Markus and his team recognized that it simply wasn’t feasible (or economical) to push everything through large reasoning models.

So, when building out an internal Ask AT&T personal assistant, they reconstructed the orchestration layer. The result: a multi-agent stack built on LangChain, in which large language model “super agents” direct smaller “worker” agents that perform narrower, purpose-driven tasks.
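The article does not publish AT&T's code. Purely as a sketch of the routing pattern it describes (a "super agent" classifying a request and delegating to a cheaper, purpose-built worker), here is a minimal plain-Python illustration; all names and the keyword-based classifier are hypothetical stand-ins for what would be LLM calls in a real LangChain stack:

```python
# Sketch of a "super agent" routing requests to cheaper "worker" agents.
# All names below are hypothetical; in a production stack the routing
# decision and the workers themselves would each be LLM calls.

def summarize_worker(text: str) -> str:
    # Stand-in for a small, purpose-built summarization agent.
    return "summary: " + text[:40]

def lookup_worker(text: str) -> str:
    # Stand-in for a small retrieval/lookup agent.
    return "lookup result for: " + text

WORKERS = {
    "summarize": summarize_worker,
    "lookup": lookup_worker,
}

def super_agent(request: str) -> str:
    """Classify the request and delegate to the cheapest capable worker.
    Here a keyword rule stands in for an LLM-based router."""
    task = "summarize" if "summarize" in request.lower() else "lookup"
    return WORKERS[task](request)

print(super_agent("Summarize the Q3 network outage report"))
```

The point of the pattern is cost control: only requests that genuinely need a large model reach one, while routine traffic is absorbed by smaller agents.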

Blurb:

Nvidia CEO Jensen Huang said Wednesday that the dispute between the U.S. Defense Department and Anthropic is “not the end of the world.”

His comments come after U.S. Defense Secretary Pete Hegseth gave Anthropic until Friday to loosen its rules on how the Pentagon can use its AI tools, or risk losing its government contract.

If Anthropic fails to comply, Hegseth threatened to label the company a “supply chain risk” or invoke the Defense Production Act, sources told CNBC‘s Ashley Capoot and Kate Rooney earlier this week.

Blurb:

In the age of AI, the scarcest resource in headquarters is no longer time. It is, rather, the willingness to say no.

Artificial intelligence is moving rapidly into military planning staffs because it compresses routine cognitive labor. AI excels at absorbing guidance, reorganizing complex material, and producing clear strategic language at speed. This feels like a qualitative advance, creating the impression that planning itself has become easier. But this impression misleads. The risk of AI-enabled planning is that it will produce plausible constructs that obscure where judgment is required, creating the illusion that analytic completeness can substitute for prioritization.

AI is seen as “raising the floor” by making it easier to produce adequate products. That is true. Yet AI also “collapses the median” by increasing the relative cost of real insight. As AI-enabled planning begins to inform real-world operations, the temptation is to treat complete answers as sufficient, without interrogating whether they represent the right answers to the hard questions of what to resource, what to defer, and what risk to accept.

Blurb:

The technology giant Nvidia just reported a great fourth quarter (the company operates on a January fiscal year). It comfortably beat analyst expectations, primarily because of the explosion in artificial intelligence infrastructure spending. Importantly, the largest U.S. data center companies just announced dramatic increases in artificial intelligence capital spending: outlays by the so-called hyperscalers will rise well over 50% to almost $700 billion in 2026.

Nvidia’s revenue reached $68.1 billion, a growth rate of 73% year over year. That exceeded the consensus estimate of around $66 billion. Data center revenue was up 75%, a huge beat. Earnings per share were also higher than expected. The market was looking for earnings of around $1.53. The reported number was $1.62. Nvidia’s gross margin was also outstanding at 75%. That indicates the company continues to have pricing power and appears to be largely unaffected by the shortage of high-bandwidth memory semiconductors.

Blurb:

The tech firm’s safety team has been called to Ottawa to explain why it failed to alert police about an account linked to a mass shooter

Canadian officials have summoned senior OpenAI representatives to Ottawa to answer questions about the tech company’s safety protocols after it confirmed it did not alert police about an account linked to mass shooter Jesse Van Rutselaar.

Artificial Intelligence Minister Evan Solomon said on Monday that OpenAI’s senior safety officials will come to Ottawa on Tuesday to outline how the company decides when to notify law enforcement.

Blurb:

Agentic AI is quickly moving from demo to deployment inside the Department of Defense. But what does it actually mean to give AI “agency” — and what does it take to make those systems work on real military networks?

In this episode, Ryan sits down with Ben Van Roo, co-founder and CEO of Legion Intelligence, Jags Kandasamy, co-founder and CEO of Latent AI, and Aaron Brown, co-founder and CEO of Lumbra AI, to discuss why the real challenge is not just building smart models but getting AI agents to run on military networks and inside operational workflows. They cover deploying agents in denied environments, compressing models for the edge, orchestrating them across stovepiped systems, and the Pentagon’s struggle to scale and buy these tools fast enough to matter.


Blurb:

NORTHERN VIRGINIA: Rising energy costs are fuelling frustration among American voters ahead of this year’s midterm elections.

In Northern Virginia, data centres – notorious for guzzling massive amounts of electricity and water – are emerging as a flashpoint over power demand and infrastructure strain.

The region on the eastern coast of the United States is widely regarded as the data centre capital of the world, with a large concentration of server farms clustered in counties just outside Washington, DC.

Blurb:

FIRST ON THE DAILY SIGNAL—The White House is engaging against a Florida bill that would establish limits on artificial intelligence, including protections for minors, sources familiar with the matter tell The Daily Signal.

The White House has contacted Florida Speaker of the House Daniel Perez and his staff members about opposing Gov. Ron DeSantis’ Artificial Intelligence Bill of Rights, sources said.

So far, Perez has sent the bill through four committees in the House since its introduction early this year. Perez told reporters on Tuesday that he is skeptical that states should pass legislation on an issue where the federal government has “first dibs.”

Blurb:

Sam Altman challenged critics of A.I.’s water and electricity consumption. Photo by John MacDougall/AFP via Getty Images

Sam Altman is pushing back on mounting criticism over the environmental toll of A.I. The OpenAI chief has dismissed claims about A.I.’s water consumption as “fake” and drawn comparisons between the electricity required to power A.I. systems and the energy it takes to develop human intelligence.

Figures suggesting that tools like ChatGPT consume multiple gallons of water per query are “totally insane” and have “no connection to reality,” Altman said in a Feb. 20 interview with The Indian Express on the sidelines of the AI Impact Summit in New Delhi. Last year, Altman claimed that ChatGPT uses 0.000085 gallons of water per query—roughly one-fifteenth of a teaspoon—though he did not explain how he calculated that figure.
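Altman's figure is easy to sanity-check: one US gallon is 768 US teaspoons, so 0.000085 gallons works out to about 0.065 teaspoons, roughly one fifteenth of a teaspoon, consistent with his stated conversion:

```python
# Sanity-check the claimed per-query water use.
GALLONS_PER_QUERY = 0.000085
TSP_PER_GALLON = 768  # 1 US gallon = 768 US teaspoons

teaspoons = GALLONS_PER_QUERY * TSP_PER_GALLON
print(round(teaspoons, 3))      # ~0.065 tsp per query
print(round(1 / teaspoons, 1))  # ~15.3, i.e. about 1/15 of a teaspoon
```

Whether the underlying 0.000085-gallon figure is accurate is a separate question; Altman did not explain how it was calculated.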

A.I.’s water footprint largely stems from the need for evaporative cooling systems used to keep data center hardware from overheating. But Altman argued that companies like OpenAI are no longer directly managing such cooling processes. Many A.I. developers, he noted, are shifting toward cooling systems that recirculate liquid rather than continually drawing fresh supplies. Meanwhile, tech giants like Microsoft, Meta, Google and Amazon have pledged to replenish more water than they withdraw by 2030.

Blurb:

As agentic AI workflows multiply the cost and latency of long reasoning chains, a team from the University of Maryland, Lawrence Livermore National Labs, Columbia University and TogetherAI has found a way to bake 3x throughput gains directly into a model’s weights.

Unlike speculative decoding, which requires a separate drafting model, this approach requires no additional infrastructure — just a single special token added to the model’s existing architecture.
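The summary does not detail the team's method. Purely as a toy accounting illustration of why emitting several tokens per forward pass multiplies throughput (not a reconstruction of the paper's special-token technique), consider:

```python
# Toy throughput comparison: one token per model step vs. k tokens per
# step. Each "step" stands in for one expensive forward pass. This only
# illustrates the arithmetic of multi-token decoding; it does not
# implement the paper's method.

def steps_needed(total_tokens: int, tokens_per_step: int) -> int:
    return -(-total_tokens // tokens_per_step)  # ceiling division

baseline = steps_needed(3000, 1)  # standard autoregressive decoding
batched = steps_needed(3000, 3)   # hypothetical 3-tokens-per-step decoding

print(baseline, batched, baseline / batched)  # 3000 1000 3.0
```

If a model can reliably emit three tokens per pass, it needs a third as many passes for the same output, which is where a roughly 3x throughput gain would come from.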

Blurb:

Amazon Web Services suffered a 13-hour outage in December after its internal AI coding agent determined that the optimal solution to a problem was to wipe and rebuild an environment in production. This was not a cyberattack. It was not foreign interference. It was an AI system operating with operator-level permissions inside one of the most economically critical cloud platforms in the world.

“The people said the agentic tool, which can take autonomous actions on behalf of users, determined that the best course of action was to ‘delete and recreate the environment’.”

Delete and recreate the environment.

That command halted a live cloud service for half a day. AWS accounts for roughly 60 percent of Amazon’s operating profits and supports payroll systems, logistics networks, enterprise back ends, and consumer-facing applications used by millions. Its reliability is not a bonus feature. It is the product.

Blurb:

German public broadcaster ZDF has issued an on-air apology after its flagship news program, Heute Journal, aired a segment containing AI-generated footage depicting U.S. Immigration and Customs Enforcement (ICE) officers arresting a migrant family.

The controversy followed the Feb. 15 broadcast, which ZDF said examined fears in parts of the United States over immigration enforcement operations. Viewers quickly noted on social media that portions of the footage were artificially generated, with an OpenAI Sora watermark visible on screen.


Blurb:

No AI data centers, no AI revolution.

Or to be less dramatic, slowing the buildout of these sprawling server farms will slow technical advances and the economywide spread of generative artificial intelligence, which is shaping up to be a powerful new general-purpose technology. Even so, a new survey from Politico suggests Silicon Valley shouldn’t take voter tolerance for granted.

Let’s start with the good news for AI companies: Just 28 percent of the 2,000 people surveyed would oppose the building of a new data center in their area, versus 37 percent who would support construction and 28 percent who would neither support nor oppose.

Blurb:

It is “disheartening” that some cutting-edge tech companies seem reluctant to fully do business with the military and support all of its operations, a key Defense Department official said Tuesday amid an escalating feud between the Pentagon and Silicon Valley firm Anthropic over the reported use of the company’s AI tool in recent U.S. Special Forces missions in Venezuela.
from www.washingtontimes.com

Blurb:


In 1987, economist and Nobel laureate Robert Solow made a stark observation about the stalling evolution of the information age: following the advent of transistors, microprocessors, integrated circuits, and memory chips in the 1960s, economists and companies expected these new technologies to disrupt workplaces and produce a surge of productivity. Instead, productivity growth slowed, dropping from 2.9% between 1948 and 1973 to 1.1% after 1973.

Blurb:

Reading Matt Shumer’s viral essay about artificial intelligence was like stepping back in time to roughly six years ago, when the world started going insane over Covid-19.

It hits all the same beats as those viral essays from 2020, when we were told “something big was coming” and “life will never be the same.” It is written with the same insider tone, like the author is doing us a favor by telling us how horrible life is about to become. And the intent is clearly the same: to so unsettle a population that they will begin to feel powerless in the face of what is about to come.

Blurb:

Over the course of his career, Joseph McMullen has dealt with some of the most powerful agencies in the country: the FBI, Customs and Border Protection, and Immigration and Customs Enforcement. But in early 2024 the San Diego–based civil rights attorney faced a problem of scale. He had three federal trials in three months—two involving deaths in jail, one involving American children detained at the border—and terabytes of documents. He turned to artificial intelligence to help him get through it all.

McMullen’s path to the courtroom has been unconventional. A former analyst at the consulting firm Bain & Company, he received a law degree at the University of Virginia and trained at the Trial Lawyers College (now called the Gerry Spence Method) in Wyoming in a program that specialized in the emotional craft of storytelling. The emphasis he places on both analytical rigor and narrative instinct has led him, unexpectedly, to artificial intelligence.

Blurb:


The advent of artificial intelligence is rattling a lot of cages. Trust me, I should know; two of our four daughters are freelance commercial graphic artists, and they are (rightfully) worried about being underbid and driven out of the market by computers. As for me, I’m not too worried – what computer could ever match my inimitable style, my wit, my wisdom, not to mention my modesty?