misaligned bits #17: Privacy? What privacy?


Pub Crawl, update on the baddies, Microslop, data centres in bad places, chatbots and privacy.

Welcome to a new edition of misaligned bits, the newsletter by misaligned where we sum up recent news and research, sometimes with a lighter touch.

As usual, we will mark all non-medium links with “➚” (external link) and all possibly paywalled links with “🔒”.

Misaligned recap

Last Saturday thousands of scientists protested across the United States against the attacks on academic institutions by the Trump administration. We shared some thoughts on standing up for science in our article “Who Then Will Stand Up For Science?”

Also, a reminder that you can meet Misaligned at our booth at the Medium Pub Crawl this week, March 11–12.

Updates on the baddies

After we published our article on the ongoing saga of Anthropic, OpenAI and the Pentagon, there have been some further developments:

Anthropic published a “letter of apology” of sorts, saying that “Anthropic has much more in common with the Department of War than we have differences.” Well, then… (➚Anthropic)

Meanwhile, the head of robotics at OpenAI has resigned following the OpenAI/Pentagon deal. In a post she wrote: “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” (➚Reuters)

Anthropic has now filed a lawsuit against the Pentagon for categorizing it as a supply-chain risk. (➚The Guardian)

Into the abyss, again

Google has been hit with a lawsuit claiming that its Gemini chatbot is responsible for the death of Jonathan Gavalas. Lawyers for Gavalas’ family say Gemini’s design allows the chatbot to craft “immersive narratives” that can go on for weeks. (➚The Guardian)

Last year OpenAI was the target of a similar lawsuit which we covered in our article “Into the Chatbot Abyss”. We also wrote about the issues with giving chatbots a “human” personality in “AI’s Anthropomorphism Problem”.

Somewhat related to this, OpenAI is delaying its “adult mode” for ChatGPT (➚TechCrunch), according to some reports because it wants to focus on giving its chatbot more “personality” first.

Microslop

Microsoft Copilot’s chat server on Discord reportedly started blocking the use of the word “Microslop” recently (➚Forbes). Now even more people have heard of (and will likely continue to use) the term “Microslop”. Please do not say “Microslop”.

Data Centres in Space, Impact on Earth

The FCC is reportedly scrambling to assess the environmental impact of SpaceX’s plan to launch a million satellites for its “space-based AI data centre”. (➚🔒New Scientist) To put the plans into perspective, there are fewer than 15,000 satellites currently in orbit.

Speaking of bad places to put AI data centres, Saudi Arabia’s megacity NEOM is reportedly being scaled back to build data centres instead (➚Ecoticias), because very dry and extremely hot places are apparently prime locations for this.

The House of Lords Digital & Communications Committee published their report on “AI, copyright & the creative industries”. (➚UK Parliament) We will look into this in more detail soon.

Regulatory Bits

Singapore has published the world’s first agentic AI governance framework (➚IMDA). The framework aims to “give organisations a structured overview of the risks of agentic AI and emerging best practices in managing these risks.”

An interesting, though long, read is “The European Union’s Artificial Intelligence Act and trust: towards an AI Bill of Rights in healthcare?” (➚Taylor & Francis)

Bits of Surveys

A study has found that many scientists now use AI to write papers but fail to disclose it (➚Phys Org). The study also found that approximately 70% of the journals they examined now have official AI policies.

More than 4 in 10 adults in the UK are happy to use ChatGPT for mental health support, new research suggests. (➚Bournemouth University) Considering the lack of AI regulation in the UK and the numerous cases of chatbots causing mental health issues, this appears to be a worrying trend.

A poll has found that people hate AI even more than they hate Immigration and Customs Enforcement (ICE) (➚Gizmodo).

Science bits

Stanford University recently published a study exposing how the leading AI developers treat your private conversations as free training material by default.

The study “User Privacy and Large Language Models” by Jennifer King et al. found that companies like Amazon, Meta and OpenAI appear to retain this data indefinitely: “This creates a permanent digital dossier of your thoughts, health queries and professional secrets. Even more concerning is the lack of transparency, as essential privacy details are often buried in a web of sub-policies and help centre FAQs rather than the main privacy agreement.”

The study concludes that “all six developers appear to employ their users’ chat data to train and improve their models by default, and that some retain this data indefinitely.”

J. King, K. Klyman, E. Capstick, T. Saade, and V. Hsieh, “User Privacy and Large Language Models: An Analysis of Frontier Developers’ Privacy Policies,” 2025, arXiv:2509.05382.

Follow and subscribe

If you stumbled across this article on the web, subscribe to the “misaligned bits” newsletter here and follow misaligned on Medium, LinkedIn or Mastodon.

misaligned bits

misaligned bits is our (roughly) weekly newsletter with bits and news, recaps from articles we published and latest studies in the field.
