Sci-Tech Editor’s Choice

YouTube has decided to comply with the CCP’s demands and ensure that a viral video showing Hong Kong freedom activists singing the protest anthem “Glory to Hong Kong” is blocked from its platform.

While complying with the demands, YouTube offered a mild protest, stating, “We are disappointed by the Court’s decision but are complying with its removal order. We’ll continue to consider our options for an appeal, to promote access to information.”

Go to Article
Excerpt from www.firstpost.com

Protesters sing Glory to Hong Kong outside of Polytechnic University (PolyU) while police keep it under siege in Hong Kong, China, November 25, 2019. Reuters file

Alphabet’s YouTube on Tuesday said it would comply with a court decision and block access inside Hong Kong to 32 video links deemed prohibited content, in what critics say is a blow to freedoms in the financial hub amid a security clampdown.

The action follows a government application granted by Hong Kong’s Court of Appeal requesting the ban of a protest anthem called “Glory to Hong Kong.” The judges warned that dissidents seeking to incite secession could weaponize the song for use against the state.

AI ethicists Tomasz Hollanek and Katarzyna Nowaczyk-Basińska, using a method of analysis called “design fiction,” have concluded that deceased loved ones who leave behind a digital footprint, including audio and video, could in the future be recreated as AI companions.

Nowaczyk-Basińska stated, “Rapid advancements in generative AI mean that nearly anyone with internet access and some basic know-how can revive a deceased loved one. At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”

Go to Article
Excerpt from www.popsci.com

AI ethicists and science-fiction authors have explored and anticipated these potential situations for decades. But for researchers at Cambridge University’s Leverhulme Center for the Future of Intelligence, this unregulated, uncharted “ethical minefield” is already here. And to drive the point home, they envisioned three fictional scenarios that could easily occur any day now.

In a new study published in Philosophy and Technology, AI ethicists Tomasz Hollanek and Katarzyna Nowaczyk-Basińska relied on a strategy called “design fiction.” First coined by sci-fi author Bruce Sterling, design fiction refers to “a suspension of disbelief about change achieved through the use of diegetic prototypes.” Basically, researchers pen plausible events alongside fabricated visual aids.

For their research, Hollanek and Nowaczyk-Basińska imagined three hyperreal scenarios of fictional individuals running into issues with various “postmortem presence” companies, then created digital props such as fake websites and phone screenshots. The researchers focused on three distinct demographics: data donors, data recipients, and service interactants. “Data donors” are the people upon whom an AI program is based, while “data recipients” are defined as the companies or entities that may possess the digital information. “Service interactants,” meanwhile, are the relatives, friends, and anyone else who may utilize a “deadbot” or “ghostbot.”