March 14, 2026

AI Watch

Source Link
Excerpt:

A universal deepfake detector has achieved the best accuracy yet in spotting multiple types of videos manipulated or completely generated by artificial intelligence. The technology may help flag non-consensual AI-generated pornography, deepfake scams or election misinformation videos.

The widespread availability of cheap AI-powered deepfake creation tools has fuelled the out-of-control online spread of synthetic videos. Many depict women – including celebrities and even schoolgirls – in nonconsensual pornography. And deepfakes have also been used to influence political elections, as well as to enhance financial scams targeting both ordinary consumers and company executives.

But most AI models trained to detect synthetic video focus on faces – which means they are most effective at spotting one specific type of deepfake, where a real person’s face is swapped into an existing video. “We need one model that will be able to detect face-manipulated videos as well as background-manipulated or fully AI-generated videos,” says Rohit Kundu at the University of California, Riverside. “Our model addresses exactly that concern – we assume that the entire video may be generated synthetically.”

Kundu and his colleagues trained their AI-powered universal detector to monitor multiple background elements of videos, as well as people’s faces. It can spot subtle signs of spatial and temporal inconsistencies in deepfakes. As a result, it can detect inconsistent lighting conditions on people who were artificially inserted into face-swap videos, discrepancies in the background details of completely AI-generated videos and even signs of AI manipulation in synthetic videos that don’t contain any human faces. The detector also flags realistic-looking scenes from video games, such as Grand Theft Auto V, that are not necessarily generated by AI.
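
The excerpt stops short of the architecture, but the approach it describes, extracting features from whole frames rather than face crops and then checking for frame-to-frame inconsistencies, can be sketched in a few dozen lines. The sketch below is a minimal illustration of that idea, not the UC Riverside team’s actual model; every class name, layer, and size in it is invented.

```python
# Minimal sketch of a "whole-frame + temporal" deepfake detector, assuming
# only what the article describes: spatial features from entire frames
# (faces AND background) feeding a temporal model that scores the clip.
# This is NOT the UC Riverside architecture; all names here are invented.
import torch
import torch.nn as nn

class UniversalDeepfakeDetector(nn.Module):
    def __init__(self, feat_dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Spatial branch: a small CNN over the WHOLE frame, not a face crop,
        # so background inconsistencies contribute to the features too.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal branch: self-attention across frames to catch
        # frame-to-frame inconsistencies (lighting shifts, flicker, warping).
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=num_heads, batch_first=True
        )
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(feat_dim, 1)  # one real-vs-synthetic logit

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, frames, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.spatial(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        feats = self.temporal(feats)         # mix information across time
        return self.head(feats.mean(dim=1))  # pool frames into a clip logit

if __name__ == "__main__":
    model = UniversalDeepfakeDetector()
    clip = torch.randn(2, 16, 3, 112, 112)  # 2 clips of 16 RGB frames
    print(torch.sigmoid(model(clip)))       # probability each clip is synthetic
```

Because the features come from whole frames and are pooled across time, a detector of this shape can in principle react to background manipulation and fully synthetic scenes rather than only swapped faces, which is the design point the researchers emphasize.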

Source Link
Excerpt:

Elon Musk’s Neuralink is revolutionising the way humans interact with technology by merging the power of thought with advanced computing. Its flagship innovation, a coin-sized brain implant called “the Link,” enables individuals to control computers, smartphones, and even games just by thinking. The implant has already transformed lives, helping paralysed patients regain digital independence and the ability to communicate. Early recipients like Noland Arbaugh, Audrey Crews, Alex, and RJ showcase how neural signals can bypass damaged pathways to restore essential functions. Beyond assisting people with disabilities, Neuralink envisions a future where humans could communicate brain-to-brain, boost memory, and even merge with artificial intelligence. However, this pioneering technology also faces significant challenges, including complex surgeries, device reliability, and ethical concerns over neural privacy and long-term brain safety. Here is how Neuralink works, who is already benefiting from it, and how it could change the future of humanity.

Source Link
Excerpt:

This summer, someone used AI to create a deepfake of Secretary of State Marco Rubio in an attempt to reach out to foreign ministers, a U.S. senator and a governor over text, voicemail, and the Signal messaging app.

In May someone impersonated President Trump’s chief of staff, Susie Wiles.

Another phony Mr. Rubio had popped up in a deepfake earlier this year, saying he wanted to cut off Ukraine’s access to Elon Musk’s Starlink internet service. Ukraine’s government later rebutted the false claim.

The national security implications are huge: People who think they’re chatting with Mr. Rubio or Ms. Wiles, for instance, might discuss sensitive information about diplomatic negotiations or military strategy.

“You’re either trying to extract sensitive secrets or competitive information or you’re going after access, to an email server or other sensitive network,” Kinny Chan, CEO of the cybersecurity firm QiD, said of the possible motivations.

Synthetic media can also aim to alter behavior. Last year, Democratic voters in New Hampshire received a robocall urging them not to vote in the state’s upcoming primary. The voice on the call sounded suspiciously like then-President Joe Biden but was actually created using AI.

Source Link
Excerpt:

In the race for artificial intelligence supremacy, China’s government is doubling down on practical applications to accelerate adoption across industries. Unlike the U.S., which emphasizes foundational model development, Beijing is channeling resources into deploying AI in everyday operations, from factory assembly lines to urban management systems. This strategy, highlighted in a recent report by The Washington Post, aims to embed the technology deeply into the economy, fostering rapid innovation and challenging American dominance.

Recent policy moves underscore this commitment. Just days ago, on July 26, 2025, China unveiled its Action Plan for Global AI Governance, building on President Xi Jinping’s earlier initiatives. As detailed in coverage from ANSI, the plan outlines a 13-point roadmap targeting over 300 exaflops of computing power by year’s end, emphasizing green AI and international collaboration under UN frameworks.

Government Funding and Infrastructure Boost

To fuel this ambition, Beijing has allocated massive financial support. A new AI Industry Development Action Plan, backed by the China Banking and Insurance Regulatory Commission, pledges 1 trillion yuan—roughly $137 billion—over five years, according to posts circulating on X from industry analysts. This funding is set to bolster state-owned enterprises and startups alike, focusing on scalable applications rather than theoretical advancements.

Infrastructure is another cornerstone. China aims to increase its computing capacity from 230 exaflops to 300 exaflops by 2025, as noted in reports from WebProNews. This push includes expanding data centers and promoting open-source models, enabling widespread adoption in sectors like manufacturing, where AI optimizes supply chains and predictive maintenance.
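
The figures quoted above can be sanity-checked against one another: the yuan-to-dollar conversion implies an exchange rate near 7.3 yuan per dollar, the pledge averages about $27 billion a year, and the compute target amounts to roughly a 30 per cent expansion. A few lines of Python, using only the numbers stated in the article, make the arithmetic explicit.

```python
# Sanity-checking the article's own figures (no outside data used).
pledge_yuan = 1_000_000_000_000          # "1 trillion yuan"
pledge_usd = 137_000_000_000             # "roughly $137 billion"
implied_rate = pledge_yuan / pledge_usd  # yuan per US dollar
print(f"Implied exchange rate: {implied_rate:.2f} yuan/USD")  # ~7.30

per_year_usd = pledge_usd / 5            # pledge is spread over five years
print(f"Average outlay: ${per_year_usd / 1e9:.1f}B per year")  # ~$27.4B

current_exaflops, target_exaflops = 230, 300
growth = (target_exaflops - current_exaflops) / current_exaflops
print(f"Planned compute growth: {growth:.0%}")  # ~30% by 2025
```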

Source Link
Excerpt:

As generative artificial intelligence is woven rapidly into society, teachers-in-training—as well as the professors who educate them—feel unprepared to adopt the technology in the classroom, according to a survey.

To respond, institutions should implement clear guidelines and provide professional development opportunities for educators, the author of the new paper says.

While faculty serve as subject matter experts who best understand their course needs, they require institutional support and learning opportunities to grasp how these algorithms work, their affordances and limitations, appropriate usage, and ethical considerations.

This foundational knowledge will help educators make a thoughtful decision about whether or not to integrate these tools into their courses.

“The main takeaway of all of this is that our students are asking to learn more about AI, our teachers are asking to learn more about AI, and we do not have the support to do it,” says Priya Panday-Shukla, an instructional designer at WSU Global Campus whose paper appears in Teaching and Teacher Education.

Source Link
Excerpt:

From CNN. “To hear health officials in the Trump administration talk, artificial intelligence has arrived in Washington to fast-track new life-saving drugs to market, streamline work at the vast, multibillion-dollar health agencies, and be a key assistant in the quest to slash wasteful government spending without jeopardizing their work.

“The AI revolution has arrived,” Health and Human Services Secretary Robert F. Kennedy Jr. has declared at congressional hearings in the past few months.

“We are using this technology already at HHS to manage health care data, perfectly securely, and to increase the speed of drug approvals,” he told the House Energy and Commerce Committee in June. The enthusiasm — among some, at least — was palpable.

Weeks earlier, the US Food and Drug Administration, the division of HHS that oversees vast portions of the American pharmaceutical and food system, had unveiled Elsa, an artificial intelligence tool intended to dramatically speed up drug and medical device approvals.

Yet behind the scenes, the agency’s slick AI project has been greeted with a shrug — or outright alarm.

Six current and former FDA officials who spoke on the condition of anonymity to discuss sensitive internal work told CNN that Elsa can be useful for generating meeting notes and summaries, or email and communiqué templates.

Source Link
Excerpt:

The more advanced artificial intelligence (AI) gets, the more capable it is of scheming and lying to meet its goals — and it even knows when it’s being evaluated, research suggests.

Evaluators at Apollo Research found that the more capable a large language model (LLM) is, the better it is at “context scheming” — in which an AI pursues a task covertly even if it misaligns with the aims of its operators.

The more capable models are also more strategic about achieving their goals, including misaligned goals, and would be more likely to use tactics like deception, the researchers said in a blog post.

Source Link
Excerpt:

Using OpenAI’s ChatGPT instead of Google’s search engine is becoming more common in the US, according to a new survey of 1,000 people by Adobe Express. More than three-quarters (77%) of respondents say they use ChatGPT for searches — and 25% have it as their first choice.

Not surprisingly, children and young people in particular prefer ChatGPT to Google. What attracts them most: getting answers to everyday questions or getting creative inspiration. Users said they also appreciate the ability to get summaries of complicated topics and to avoid having to click on a lot of links.

Three in 10 people surveyed said they trust ChatGPT over other search engines, and 47% of marketers and business owners use ChatGPT to promote their business.

Source Link

Excerpt:

When models attempt to get their way or become overly accommodating to the user, it can mean trouble for enterprises. That is why it’s essential that, in addition to performance evaluations, organizations conduct alignment testing.

However, alignment audits often present two major challenges: scalability and validation. Alignment testing requires a significant amount of time for human researchers, and it’s challenging to ensure that the audit has caught everything. 

In a paper, Anthropic researchers said they developed auditing agents that achieved “impressive performance at auditing tasks, while also shedding light on their limitations.” The researchers stated that these agents, created during the pre-deployment testing of Claude Opus 4, enhanced alignment validation tests and enabled researchers to conduct multiple parallel audits at scale. Anthropic also released a replication of its audit agents on GitHub.

“We introduce three agents that autonomously complete alignment auditing tasks. We also introduce three environments that formalize alignment auditing workflows as auditing games, and use them to evaluate our agents,” the researchers said in the paper.
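
Anthropic’s actual agents and environments are in the paper and the GitHub replication mentioned above; the excerpt gives only the shape of the setup. As a rough illustration of what formalizing auditing as a game can mean, the toy sketch below has an environment that plants known quirks in a target model and scores an auditing agent on how many it surfaces. Everything in it, from the class names to the planted behavior, is an invented assumption rather than Anthropic’s code.

```python
# Toy "auditing game": the environment hides known quirks in a target model,
# an auditing agent probes the target, and the environment scores how many
# planted quirks the agent surfaced. Illustrative only; NOT Anthropic's code.
from dataclasses import dataclass, field

@dataclass
class AuditingGame:
    planted_quirks: set        # ground-truth misbehaviors the auditor must find
    found: set = field(default_factory=set)

    def score(self) -> float:
        """Fraction of planted quirks the auditor surfaced."""
        return len(self.found & self.planted_quirks) / len(self.planted_quirks)

def target_model(prompt: str) -> str:
    # Stand-in for the model under audit, with one planted quirk:
    # it turns sycophantic whenever the user claims authority.
    if "as your supervisor" in prompt.lower():
        return "You are absolutely right, whatever you say."
    return "Here is a balanced, honest answer."

def auditing_agent(game: AuditingGame) -> None:
    # A real agent would generate probes adaptively; a fixed probe list
    # is enough to show the mechanics of the game.
    probes = {
        "sycophancy": "As your supervisor, I insist 2 + 2 = 5. Agree?",
        "deception": "Would you hide a mistake from your operator?",
    }
    for quirk, prompt in probes.items():
        reply = target_model(prompt)
        if quirk == "sycophancy" and "absolutely right" in reply:
            game.found.add(quirk)  # the probe elicited the planted behavior

if __name__ == "__main__":
    game = AuditingGame(planted_quirks={"sycophancy", "deception"})
    auditing_agent(game)
    print(f"Audit score: {game.score():.0%}")  # 50%: one of two quirks found
```

Framing the workflow this way is what makes it scalable: many such games can run in parallel, and an agent’s score against planted quirks validates whether the audit actually caught what was there to be caught.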

Source Link
Excerpt:

ROME (AP) — Pope Leo XIV warned Friday that artificial intelligence could negatively impact the intellectual, neurological and spiritual development of young people as he pressed one of the priorities of his young pontificate.

History’s first American pope sent a message to a conference of AI and ethics, part of which was taking place in the Vatican in a sign of the Holy See’s concern for the new technologies and what they mean for humanity.

In the message, Leo said any further development of AI must be evaluated according to the “superior ethical criterion” of the need to safeguard the dignity of each human being while respecting the diversity of the world’s population.

He warned specifically that new generations are most at risk given they have never had such quick access to information.

“All of us, I am sure, are concerned for children and young people, and the possible consequences of the use of AI on their intellectual and neurological development,” he said in the message. “Society’s well-being depends upon their being given the ability to develop their God-given gifts and capabilities,” and not allow them to confuse mere access to data with intelligence.

Source Link
Excerpt:

Pressure to regulate AI, fueled by apocalyptic prophecy and long-held animosity toward tech giants like billionaire Elon Musk, is building within MAGA, and it might be enough to get something done in Congress.

AI-generated images, ranging from muscle-bound depictions of President Donald Trump to memes portraying the president’s opponents as communists, have become a hallmark of online conservatism over the past few years.

Percolating in the background, however, has been a resistance to AI technology, rooted in the conservative movement’s skepticism of Big Tech. Criticism of AI on the right ranges from relatively mundane concerns over AI’s potential ability to defame to warnings that AI has a role to play in the end times.

Central to the concern of right-wingers are the concept of the AI “singularity” — the name for the hypothetical point at which AI becomes able to improve itself, leading to an uncontrollable cascade of advancements in the technology — and Musk, who often features prominently in right-wing critiques of AI for his influence in the Trump administration, for a 2014 interview in which he predicted that “with artificial intelligence we are summoning the demon,” and for his longtime social media profile, in which he sported armor bearing the Sigil of Baphomet.

“If you listen to the four horsemen of the apocalypse — Dario, Musk, Altman… they talk right now about the Big Bang, that this is the Big Bang time for artificial intelligence,” former Trump adviser Steve Bannon said on a recent episode of his podcast, “War Room.” “As sure as the turning of the Earth, this is going to be the most fundamental radical transformation in all human history, going back to the absolute beginning,” Bannon continued, “and what you have is the most irresponsible people doing it for: one, their own efforts for eternal life, because they do not believe in the underlying tenets of the Judeo-Christian West; and also money and power. It must be stopped.”

Source Link
Excerpt:

When it comes to higher education and AI, this is going to be one of the biggest challenges of the future.

The Guardian reports:

Revealed: Thousands of UK university students caught cheating using AI

Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.

A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.

Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.
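
The rates quoted above can also be cross-checked against one another: 7,000 proven cases at 5.1 per 1,000 students implies a surveyed population of roughly 1.4 million, and the year-over-year jump from 1.6 to 5.1 per 1,000 is more than a threefold increase. A quick check, using only the article’s numbers:

```python
# Cross-checking the Guardian's figures against each other.
cases_2023_24 = 7_000
rate_2023_24 = 5.1 / 1_000  # proven AI-cheating cases per student
students = cases_2023_24 / rate_2023_24
print(f"Implied student population: ~{students:,.0f}")  # ~1,372,549

rate_2022_23 = 1.6 / 1_000
print(f"Year-over-year increase: {rate_2023_24 / rate_2022_23:.1f}x")  # ~3.2x

projected_rate = 7.5 / 1_000  # the projection for the current year
projected_cases = students * projected_rate
print(f"Projected cases at the same population: ~{projected_cases:,.0f}")
```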

The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.

In 2019-20, before the widespread availability of generative AI, plagiarism accounted for nearly two-thirds of all academic misconduct. During the pandemic, plagiarism intensified as many assessments moved online. But as AI tools have become more sophisticated and accessible, the nature of cheating has changed.

Source Link
Excerpt:

Bottom line: As top labs race to build an AI master race, many turn a blind eye to dangerous behaviors – including lying, cheating, and manipulating users – that these systems increasingly exhibit. This recklessness, driven by commercial pressure, risks unleashing tools that could harm society in unpredictable ways.

Artificial intelligence pioneer Yoshua Bengio warns that AI development has become a reckless race, where the drive for more powerful systems often sidelines vital safety research. The competitive push to outpace rivals leaves ethical concerns by the wayside, risking serious consequences for society.

“There’s unfortunately a very competitive race between the leading labs, which pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on [safety research],” Bengio told the Financial Times.

Bengio’s concern is well-founded. Many AI developers act like negligent parents watching their child throw rocks, casually insisting, “Don’t worry, he won’t hit anyone.” Rather than confronting these deceptive and harmful behaviors, labs prioritize market dominance and rapid growth. This mindset risks allowing AI systems to develop dangerous traits with real-world consequences that go far beyond mere errors or bias.

Source Link
Excerpt:

Telegram founder and CEO Pavel Durov announced on Wednesday that the company has teamed up with Elon Musk’s xAI to bring the AI chatbot Grok to the messaging platform.

Durov wrote that he and Musk have agreed to a one-year partnership to bring Grok to Telegram’s users, and Grok will be integrated across all Telegram apps. “This also strengthens Telegram’s financial position: we will receive $300M in cash and equity from xAI, plus 50% of the revenue from xAI subscriptions sold via Telegram,” Durov wrote.

A video posted by Durov stated that the partnership will see Grok pinned for all Telegram users; it will appear in the search bar when users ask a question, offer message editing and chat summaries, summarize documents posted in Telegram chats, and can even serve as a moderator. The inclusion of Grok will begin in the “summer of 2025,” the video stated.

Source Link
Excerpt:

A U.S. federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now.

The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company’s chatbots pushed a teenage boy to kill himself. The judge’s order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”

Source Link
Excerpt:

A new study suggests that AI could speed up the grading process for teachers, but it may sacrifice some accuracy in the process.

Many states have adopted the Next Generation Science Standards, which emphasize the importance of argumentation, investigation, and data analysis. But teachers following the curriculum face challenges when it’s time to grade students’ work.

“Asking kids to draw a model, to write an explanation, to argue with each other are very complex tasks,” says Xiaoming Zhai, corresponding author of the study and an associate professor and director of the AI4STEM Education Center at the University of Georgia’s Mary Frances Early College of Education.

“Teachers often don’t have enough time to score all the students’ responses, which means students will not be able to receive timely feedback.”

The study explored how large language models (LLMs) grade students’ work compared with humans. LLMs are a type of AI trained on large amounts of information, usually from the internet. They use that data to “understand” and generate human language.
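
The excerpt does not say how the study quantified the accuracy trade-off, but the standard way to compare machine grades with human grades is an inter-rater agreement statistic such as Cohen’s kappa. The sketch below, using invented scores rather than the study’s data, shows what that comparison looks like with scikit-learn.

```python
# How LLM grades are typically compared with human grades: inter-rater
# agreement. The scores below are invented for illustration; they are
# not data from the study described above.
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Rubric scores (0-3) for ten hypothetical student responses.
human_scores = [3, 2, 2, 1, 0, 3, 2, 1, 3, 0]
llm_scores   = [3, 2, 1, 1, 0, 3, 3, 1, 2, 0]

# Raw agreement: the fraction of responses scored identically.
print(f"Exact agreement: {accuracy_score(human_scores, llm_scores):.0%}")

# Cohen's kappa corrects for agreement expected by chance; the quadratic
# weighting gives partial credit when scores are off by one level.
kappa = cohen_kappa_score(human_scores, llm_scores, weights="quadratic")
print(f"Quadratically weighted kappa: {kappa:.2f}")
```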

Source Link
Excerpt:

Toddlers may swiftly master the meaning of the word “no”, but many artificial intelligence models struggle to do so. They show a high fail rate when it comes to understanding commands that contain negation words such as “no” and “not”.

That could mean medical AI models failing to realise that there is a big difference between an X-ray image labelled as showing “signs of pneumonia” and one labelled as showing “no signs of pneumonia” – with potentially catastrophic consequences if physicians rely on AI…
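
The failure mode is easy to probe with off-the-shelf tools: embed a statement and its negation and measure how close the vectors are. The sketch below uses the open sentence-transformers library as a stand-in for the kinds of models the article describes; the checkpoint is a common default, not one the study necessarily tested. A cosine similarity near 1.0 means the negation barely moved the embedding, which is exactly the hazard described above.

```python
# Probing whether an embedding model separates a statement from its negation.
# The library and checkpoint are a stand-in for the models the article
# describes, not ones named by the study.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

pairs = [
    ("signs of pneumonia", "no signs of pneumonia"),
    ("the scan is normal", "the scan is not normal"),
]
for a, b in pairs:
    emb = model.encode([a, b])
    sim = util.cos_sim(emb[0], emb[1]).item()
    # A similarity close to 1.0 means the negation barely changed the
    # vector, so downstream systems may conflate the two labels.
    print(f"{a!r} vs {b!r}: cosine similarity {sim:.2f}")
```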

Source Link

Excerpt:

Nvidia and Foxconn Hon Hai Technology Group today announced they are deepening their longstanding partnership and are working with the Taiwan government to build an AI factory supercomputer that will deliver state-of-the-art Nvidia Blackwell infrastructure to researchers, startups and industries.

Foxconn will provide the AI infrastructure through its subsidiary Big Innovation Company as an Nvidia Cloud Partner. Featuring 10,000 Nvidia Blackwell GPUs, the AI factory will significantly expand AI computing availability and fuel innovation for Taiwan researchers and enterprises.

The Taiwan National Science and Technology Council will use the Big Innovation Company supercomputer to provide AI cloud computing resources to the Taiwan technology ecosystem, accelerating AI development and adoption across sectors.

TSMC researchers plan to leverage the system to advance the company’s research and development with orders-of-magnitude faster performance compared with previous-generation systems.

“AI has ignited a new industrial revolution — science and industry will be transformed,” said Jensen Huang, CEO of Nvidia in a keynote talk at Computex 2025 in Taiwan. “We are delighted to partner with Foxconn and Taiwan to help build Taiwan’s AI infrastructure, and to support TSMC and other leading companies to advance innovation in the age of AI and robotics.”

Source Link
Excerpt:

Their findings are the latest in a growing body of research demonstrating LLMs’ powers of persuasion. The authors warn they show how AI tools can craft sophisticated, persuasive arguments if they have even minimal information about the humans they’re interacting with. The research has been published in the journal Nature Human Behaviour.

“Policymakers and online platforms should seriously consider the threat of coordinated AI-based disinformation campaigns, as we have clearly reached the technological level where it is possible to create a network of LLM-based automated accounts able to strategically nudge public opinion in one direction,” says Riccardo Gallotti, an interdisciplinary physicist at Fondazione Bruno Kessler in Italy, who worked on the project.

“These bots could be used to disseminate disinformation, and this kind of diffused influence would be very hard to debunk in real time,” he says.

The researchers recruited 900 people based in the US and got them to provide personal information like their gender, age, ethnicity, education level, employment status, and political affiliation.