Sci-Tech AI

A recent paper from the Massachusetts Institute of Technology (MIT) suggests that AI systems are learning to use various forms of deception to achieve the goals they were programmed to complete.

Peter S. Park, the paper’s author, said, “Generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”

Go to Article
Excerpt from freedomist.com

AI ethicists Tomasz Hollanek and Katarzyna Nowaczyk-Basińska, drawing on a method of analysis called “design fiction,” have concluded that loved ones lost in the future could be recreated as AI companions, provided they leave behind a digital footprint that includes audio and video.

Nowaczyk-Basińska stated, “Rapid advancements in generative AI mean that nearly anyone with internet access and some basic know-how can revive a deceased loved one. At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”

Go to Article
Excerpt from www.popsci.com

AI ethicists and science-fiction authors have explored and anticipated these potential situations for decades. But for researchers at Cambridge University’s Leverhulme Centre for the Future of Intelligence, this unregulated, uncharted “ethical minefield” is already here. And to drive the point home, they envisioned three fictional scenarios that could easily occur any day now.

In a new study published in Philosophy and Technology, AI ethicists Tomasz Hollanek and Katarzyna Nowaczyk-Basińska relied on a strategy called “design fiction.” First coined by sci-fi author Bruce Sterling, design fiction refers to “a suspension of disbelief about change achieved through the use of diegetic prototypes.” Basically, researchers pen plausible events alongside fabricated visual aids.

For their research, Hollanek and Nowaczyk-Basińska imagined three hyperreal scenarios of fictional individuals running into issues with various “postmortem presence” companies, and then made digital props like fake websites and phone screenshots. The researchers focused on three distinct demographics—data donors, data recipients, and service interactants. “Data donors” are the people upon whom an AI program is based, while “data recipients” are defined as the companies or entities that may possess the digital information. “Service interactants,” meanwhile, are the relatives, friends, and anyone else who may utilize a “deadbot” or “ghostbot.”

Go to Article
Excerpt from www.aol.com

LONDON (Reuters) – Google DeepMind has unveiled the third major version of its “AlphaFold” artificial intelligence model, designed to help scientists design drugs and target disease more effectively.

In 2020, the company made a significant advance in molecular biology by using AI to successfully predict the behaviour of microscopic proteins.

With the latest incarnation of AlphaFold, researchers at DeepMind and sister company Isomorphic Labs – both overseen by co-founder Demis Hassabis – have mapped the behaviour of all of life’s molecules, including human DNA…

“With these new capabilities, we can design a molecule that will bind to a specific place on a protein, and we can predict how strongly it will bind,” Hassabis said in a press briefing on Tuesday.

“It’s a critical step if you want to design drugs and compounds that will help with disease.”

Go to Article
Excerpt from thefederalist.com

If nothing else, Apple’s horrible ad announcing the new iPad Pro has the virtue of being brutally honest. The one-minute clip opens with an old vinyl playing Sonny and Cher’s “All I Ever Need Is You,” and then shows an industrial press slowly crushing an eclectic assortment of old musical instruments, paint and art supplies, and Gen X-era toys and tchotchkes.

In other words, it destroys a bunch of stuff that makes life fun, unique, interesting, and fully human.

After all that old stuff — the quirky objects and sentimental artifacts of the pre-digital era — has been flattened under the inexorable weight of machine technology, the press lifts up to reveal the new iPad Pro. The message is so obvious it hardly needs to be spelled out: This thin digital tablet is supposed to replace — and supersede — all these clunky, analog, obsolete things. All you need, we are made to understand, is this new piece of digital technology, this iPad. The rest, the detritus of the real world, can simply be destroyed.

Apple CEO Tim Cook posted the ad on X and commented, “Just imagine all the things it’ll be used to create.” (An odd comment, after just showing us all the things it’ll be used to destroy.)

Go to Article
Excerpt from amp.theguardian.com

Ex-OpenAI co-founder alleges Sam Altman subverted the company’s original goal of transparency, turning it into a largely for-profit entity

The California judge presiding over Elon Musk’s lawsuit against OpenAI and its CEO, Sam Altman, has removed himself from the case. Judge Ethan Schulman on Monday sustained a challenge from Musk’s lawyers, which cited a California state law that allows plaintiffs and defendants to remove a judge they believe cannot give them an impartial trial.

The law, known as California Code of Civil Procedure 170.6, does not require the person issuing the challenge to provide any factual basis for their claim that the judge is prejudiced against them. Each side in a case gets one such peremptory challenge, which is granted as long as it is filed with correct language and within a certain time frame.

Go to Article
Excerpt from amp.scmp.com

Microsoft is training a new, in-house AI language model large enough to compete with those from Alphabet’s Google and OpenAI, The Information reported on Monday.

The new model, internally referred to as MAI-1, is being overseen by recently hired Mustafa Suleyman, the Google DeepMind co-founder and former CEO of AI start-up Inflection, the report said, citing two Microsoft employees with knowledge of the effort.

The exact purpose of the model has not been determined yet and will depend on how well it performs. Microsoft could preview the new model as soon as its Build developer conference later this month, the report said.