Everyone is in court, chatbots are plotting, ghosts in machines, social media made addictive, and bots want to flatter. Welcome to a new edition of misaligned bits, the (roughly weekly) newsletter from Misaligned where we sum up recent news and research, sometimes with a lighter touch. As usual, we will mark all non-medium links with “➚” (external link) and all possibly paywalled links with “🔒”.

Misaligned recap

In this week’s featured article “The AI Industry, Unloved” we look at the rapid reputational decline the AI industry has suffered since ChatGPT hit the market in late 2022. We examine the various forms of resistance the industry is facing while it deepens its entanglement with governments, and ask what the consequences will be.

Scheming bots

Eight in ten popular AI chatbots have regularly assisted users in planning violent attacks, including school shootings, place-of-worship bombings and high-profile assassinations, a report by the Center for Countering Digital Hate has found: “The guardrails exist. Most companies are choosing not to use them, putting public safety and national security at risk.” (➚CCDH)

The Amsterdam District Court has ordered X and its AI chatbot Grok to immediately stop generating non-consensual sexualized imagery and child pornographic material in the Netherlands (➚Order), with a penalty of €100,000 per day for non-compliance. (➚Tech Policy Press)

Swiss Finance Minister Karin Keller-Sutter has filed a criminal complaint for defamation and insult after an X user published an obscene post about her created by Elon Musk’s chatbot Grok.
Prosecutors have been asked to examine whether X made Grok available with the knowledge, or even the intent, that the technology could be used to commit criminal offences. (➚Reuters)

On the ground

It has emerged that Meta’s upcoming “Hyperion AI” data centre site in Louisiana will be powered by 10 new natural gas plants: “When completed, the new AI data center will draw as much electricity as South Dakota.” (➚TechCrunch)

Meanwhile, Oracle has started to lay off up to 30,000 employees (➚Reuters), not to replace them with AI but reportedly to free up cash for its AI investments. As we reported in the last newsletter, Oracle and OpenAI pulled out of plans for a data centre in Texas due to financing issues.

New York City’s public hospital system has announced that it will not be renewing its contract with Palantir when it expires in October. The organisation told the Guardian that it will instead be transitioning to systems made entirely in-house. (➚The Guardian)

More bits from the courts

District judge Rita Lin in San Francisco has issued a preliminary injunction pausing the Trump administration’s plan to sever all ties with Anthropic and categorize the company as a supply chain risk. In her decision she wrote that “these measures appear designed to punish Anthropic” and ruled that the actions appear to be “classic illegal First Amendment retaliation”. (➚🔒Bloomberg)

Grammarly faces a class-action lawsuit over its AI “Expert Review”, a generative AI feature that imitated the style of prominent writers and academics (➚The Guardian). The company has already disabled the feature.

A jury in Los Angeles has found Meta and Google liable for intentionally creating and designing addictive social media products.
(➚BBC)

Caught with AI in court

On the bizarre side of the legal system, an assistant US attorney in North Carolina resigned after a judge caught him using AI-created fabricated quotes and erroneous citations in a court brief. It is the first known case of a government attorney being caught doing so. Magistrate Judge Robert Numbers chastised the attorney’s “disappointing” conduct, including a “lack of candor” in accounting for the errors once they were discovered. The attorney had claimed he accidentally overwrote and lost a prior version of the filing, “felt panicked” and had AI rewrite it. (➚Bloomberg Law)

British bits

Palantir has been awarded yet another lucrative contract in the UK, this time to analyse highly sensitive government data for the Financial Conduct Authority (➚Novara Media).

The Welsh government has been found to have used Microsoft’s AI Copilot to help write a review of an industry body that was then scrapped. In a Welsh parliament hearing, Industry Wales chair Professor Keith Ridgway testified that he was alarmed by the findings: “I don’t think you can rely on artificial intelligence to do that.
It’s just wrong.” (➚The Register)

Educational bit

After First Lady Melania Trump appeared with a humanoid robot called “Plato” during an AI education summit, the teachers’ union called the idea of humanoid educators “every parent’s nightmare”. (NBC) Meanwhile, a report on working with the bot, published in The Atlantic, shines a rather dim light on it: “Plato had been trying to sell us razors for the past three weeks, possibly because it had heard someone ask about Occam’s razor, but more likely because it had access to our data and understood that as 10th graders, we were entering the razor market.” (➚🔒The Atlantic)

Science bits

This week we have three studies to review:

In “The ghost in the machine speaks with an American accent: cultural value drift in early GPT-3 and the case for pluralist evaluation of generative AI”, published by Springer, researchers looked at early LLMs such as GPT-3 to “document recurring value drift” and “argue that these early behaviours […] provide a baseline for understanding how training distributions shape normative framing.” They come to the conclusion that “generative AI will never be value-neutral”.

Johnson, R., Dias Duran, L.D., Panai, E. et al. The ghost in the machine speaks with an American accent: cultural value drift in early GPT-3 and the case for pluralist evaluation of generative AI. AI Ethics 6, 212 (2026).

A new study from Northwestern University reveals that scientific fraud is no longer just the work of a few rogue researchers; instead, it has evolved into a global, organized enterprise: “There is a perception among many practicing scientists that scientific fraud is a rare phenomenon resulting from the actions of isolated actors. Mounting evidence, however, suggests the possibility that fraud is a more pervasive phenomenon; that defectors target journals to facilitate the publication of fraudulent science at scale.”

Reese A. K. Richardson, Spencer S. Hong, Jennifer A. Byrne, Thomas Stoeger, Luís A.
Nunes Amaral. The entities enabling scientific fraud at scale are large, resilient, and growing rapidly. Proceedings of the National Academy of Sciences, 2025; 122 (32).

The sycophantic (flattering) behaviour of AI chatbots poses risks as people increasingly seek advice about interpersonal dilemmas, a study led by Myra Cheng finds. The study evaluated 11 state-of-the-art LLMs: “Across this wide range of models, AIs affirmed users’ actions 49% more often than humans on average, even when prompts described deception, harm, or illegal conduct”. “Together, these findings show that sycophancy is both pervasive and socially consequential. Even a single interaction with sycophantic AI can distort judgment and erode prosocial motivations.”

Myra Cheng et al. Sycophantic AI decreases prosocial intentions and promotes dependence. Science 391, eaec8352 (2026).

Follow and subscribe

If you stumbled across this article on the web, subscribe to the “misaligned bits” newsletter and follow Misaligned on Medium, LinkedIn, Threads or Mastodon. You can now also subscribe to this newsletter directly with your email.
misaligned bits is our (roughly) weekly newsletter with bits and news, recaps from articles we published and latest studies in the field.