misaligned bits #18: AI Makes Us Think Alike


OpenAI cuts corners, the Lords side with creatives, Oracle scales down while others scale up, and AI makes scientists think alike.

Welcome to a new edition of misaligned bits, the (roughly weekly) newsletter from Misaligned where we sum up recent news and research, sometimes with a lighter touch.

As usual, we will mark all non-medium links with “➚” (external link) and all possibly paywalled links with “🔒”.

Misaligned recap

Jim Loving wrote in Misaligned about the similarities between the emergence of nuclear weapons and that of generative AI, pointing out that in both cases humanity, laws and regulations were not ready.

In a new article this week we look, again, at the fallout from the Grok scandal earlier this year, and observe how, after the failure of self-regulation in the industry, the discussion has shifted from regulation to handing responsibility over to users by implementing age-based access restrictions.

Also, another big thanks to everyone who attended the Misaligned booth at the Medium Pub Crawl two weeks ago. We are looking forward to the contributions of those writers who visited us.

Also, we are happy to announce that in addition to using your Medium account, you can now also subscribe to the newsletter directly with your email.

Standing up for Science

As part of the “No Kings” rallies across the US this weekend, “Stand up for Science” is organising a virtual event for those who cannot attend in person (➚SUFS). We covered the campaign and the questions around the attacks on science worldwide in a recent article.

Chatbot bits

Three Tennessee teenagers are suing Elon Musk’s xAI over claims that the company’s Grok AI chatbot generated sexualized images and videos of them as minors. The lawsuit alleges that xAI “failed to test the safety of the features it developed” and that Grok is “defective in design” (➚The Guardian).

Talking about Grok, French prosecutors say they alerted US authorities to a suspicion that Elon Musk had encouraged the controversy over sexualized deepfakes on X to “artificially” increase the value of his company. The prosecutor’s office said it has “reached out to the US Department of Justice and the Securities and Exchange Commission (SEC)” (➚Le Monde).

Meanwhile, it emerged at hearings that DOGE used ChatGPT to decide which federal humanities grants to cancel.

OpenAI cutting corners

OpenAI has announced it will be shutting down its AI video generator Sora, source of thousands of hours of AI slop. The decision comes just six months after the company’s launch of Sora as a stand-alone app (➚The Guardian).

In addition, OpenAI appears also to be pivoting away from a recently launched feature that lets users buy directly from ChatGPT’s interface (➚TechCrunch).

OpenAI has also now shelved plans to release an erotic chatbot (also known as “adult mode”) “indefinitely” as it refocuses on its core products (➚🔒Financial Times).

Bits from the Lords

The British House of Lords has published its extensive report on “AI, copyright and the creative industries” (➚House of Lords), with wide-ranging recommendations on how to handle copyrighted material used by generative AI. It recommends that the government not weaken copyright law, and instead strengthen licensing, transparency and enforcement.

The report asks the government to stop prioritising large multinational tech firms and finds that the government’s mixed public messaging on AI and copyright is hindering licensing. A clear public statement that AI companies operating in the UK need to license their training data (which is currently the law) would, it concludes, help the industry.

Updates from the baddies

In the ongoing saga between OpenAI, Anthropic and the Pentagon, more than 30 OpenAI and Google DeepMind employees signed onto a statement supporting Anthropic’s lawsuit against the DoD after the agency labelled the AI firm a supply chain risk, according to court filings (➚TechCrunch).

Meanwhile, OpenAI’s non-profit arm, currently heavily engaged in reputation management after the fallout from the company’s Pentagon contract, has pledged to spend $1 billion on AI safety (➚Reuters).

American bits

The Trump administration released its new “Framework for national AI legislation” last week, in its own words focusing on protections for children and boosting the industry. Counterintuitively, it also calls for sharp limits on legal liability for developers and repeats its demand to limit state laws, which it claims would slow down the technology’s development (➚White House).

The administration also believes that “the Federal government is uniquely positioned to set a consistent national policy that enables us to win the AI race and deliver its benefits to the American people” (➚NBC).

Overall the new “framework” is in line with the administration’s strategy paper of last year.

Meanwhile, President Trump has started to assemble his new tech advisory panel (➚🔒Wall Street Journal) that will reportedly consist solely of big tech industry figures including Mark Zuckerberg, Larry Ellison, Jensen Huang, Michael Dell and Sergey Brin.

Regulatory bits

The European Parliament adopted its position on a simplification (“omnibus”) proposal amending the Artificial Intelligence Act (➚EU Parliament). The proposal delays the application of certain rules on high-risk artificial intelligence (AI) systems, “to ensure that guidance and standards to help companies with implementation are ready”.

For high-risk AI systems specifically listed in the regulation, the new proposed deadline is moved to 2 December 2027. For AI systems that are covered by EU sectoral legislation on safety and market surveillance, full compliance will be required by 2 August 2028.

MEPs also moved the deadline for watermarking AI-created content to 2 November 2026, meaning it will be delayed by only a few months.

Data centre bonanza

In addition to shutting down several services (see above), Oracle and OpenAI have also abandoned plans to expand a flagship artificial intelligence data centre in Texas due to financing issues (➚Reuters). Oracle meanwhile is reportedly planning thousands of job cuts as it experiences an AI-related cash crunch (➚🔒Bloomberg).

Nscale, hyperscaler and darling of the British government, has closed its latest funding round, “valuing Nscale at $14.6 billion” (➚Nscale). This is indeed a high valuation for a company that has yet to build its first AI data centre of its own. The Financial Times meanwhile has published an in-depth look into Nscale’s past financial acrobatics (➚🔒Financial Times).

A farmer in Kentucky has rejected an offer of $26 million to sell her land for the construction of another AI data centre (➚TechCrunch), saying that “whenever our food is disappearing, our lands are disappearing, and we don’t have any water — and that poison. Well, we know we’ve had it”.

Science Bits

A study published in Nature warns that the race that has emerged to show what AI can do in science has led to an “AI-monoculture feedback loop”.

“As AI tools become embedded across the research pipeline, institutions increasingly reward outputs that align with speed, scale, and technological familiarity”, the authors write. “In warning that AI might make us think alike, we may have begun to think alike about AI. This symmetry should give us pause.”

Traberg, C.S., Roozenbeek, J. & van der Linden, S. AI is turning research into a scientific monoculture. Commun Psychol 4, 37 (2026).

Follow and subscribe

If you stumbled across this article on the web, subscribe to the “misaligned bits” newsletter and follow Misaligned on Medium, LinkedIn, Threads or Mastodon.

You can now also subscribe to this newsletter directly with your email.

misaligned bits

misaligned bits is our (roughly) weekly newsletter with bits and news, recaps from articles we published and latest studies in the field.

Read more from misaligned bits
