misaligned bits #11: Preparing For Impact
Welcome to a new edition of misaligned bits, the newsletter by misaligned where we sum up recent news, with a lighter touch.

In this edition: Insurers are unsure about AI. Grok has another lapse. And LLMs are seriously biased against people who speak dialect.

As usual, we will mark all non-medium links with β€œβžšβ€ (external link) and all possibly paywalled links with β€œπŸ”’β€.

Misaligned Recap

For your convenience, we have compiled a list of studies from 2025 that are related to ethical questions around AI. The studies give a glimpse at the multitude of ethical questions and may serve as a warning at a time when the AI industry overpromises the capabilities of AI and attempts to push its application into more and more critical areas.

In Hello Computer!, published in β€œbootcamp”, we wonder whether talking to AI is actually a good idea for human-machine interfacing, and whether giving AI companies access to your most private data is something we should desire at all.

Assessing risk and preparing for impact

Liability, or the question of who is responsible when LLMs go haywire, is still an issue under intense discussion. We recently tackled this question in the article β€œAnyone responsible? β€” After The Demise Of The EU AI Liability Directive”.

Insurers appear to be growing increasingly nervous about covering such cases, as reported by the Financial Times, and some have started to exclude β€œany actual or alleged use” of AI from their coverage.

Insurers increasingly view AI models’ outputs as too unpredictable and opaque to insure, said Dennis Bertram, head of cyber insurance for Europe at Mosaic. β€œIt’s too much of a black box.” […] β€œNobody knows who’s liable if things go wrong,” said Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company. (πŸ”’βžšFinancial Times)

It has long been predicted that liability (and by extension insurance) will be one of the major stumbling blocks for AI uptake in companies. This problem is becoming even more pronounced with the emergence of Shadow AI.

β€œInsurance brokers and lawyers said they feared insurers would start fighting claims in court when AI-driven losses significantly increase.” (πŸ”’βžšFinancial Times)

Insurers are supposed to evaluate the risk of AI services, but as law firm Hogan Lovells points out in an article from October, β€œas AI tools begin to interact directly with customers or influence decision-making, the risk profile increases. […] Liability insurers, in particular, will want to think carefully about the interaction of AI risk with traditional coverages.” (➚Hogan Lovells)

As Jocelyn Auger writes for Fasken: β€œRecent incidents involving AI-generated misinformation, automated customer-service tools making inaccurate statements, or sophisticated impersonation frauds have only heightened these concerns. While individually manageable, they illustrate how unpredictable AI behavior can be β€” and how difficult it is for insurers to assess where future claims might arise.” (➚Fasken)

For further reading, an article from Browne Jacobson compares the situation of AI liability in the EU and the UK (➚Browne Jacobson).

Risky business

Without regulation requiring transparency and a central (and public) incident database, it will be extremely difficult for insurers to judge the risks.

The MIT model risk catalogue might therefore be worth a look. The repository is part of the MIT AI Risk Initiative, which aims to increase awareness and adoption of best-practice AI risk management across the AI ecosystem.

β€œWe analyzed nearly 460,000 AI model cards from Hugging Face to examine how developers report risks. From these, we extracted around 3,000 unique risk mentions and built the AI Model Risk Catalog. We compared these with risks identified by researchers in the MIT Risk Repository and with real-world incidents from the AI Incident Database.” (➚MIT)

Safeguards?

While we are on the topic of unpredictability: Lapses in safeguards (or, more likely, the absence of any) led to a wave of sexualized images generated by Elon Musk’s Grok this week. xAI says it is working to improve its systems while blaming the users. (➚The Guardian)

It may be good to remember that in other industries, you have to implement safeguards (and test them) before you release a product to the public.

Datacentres everywhere?

Meanwhile, local resistance against building AI data centres everywhere appears to be growing: Between April and June 2025 alone, the latest reporting period, 20 proposals valued at $98 billion across 11 US states were blocked or delayed amid local opposition and state-level pushback. (➚Associated Press)

Cheating everywhere!

The Association of Chartered Certified Accountants (ACCA), which has almost 260,000 members, has said that from March it will stop allowing students to take online exams.

The ACCA said it had concluded that online tests have become too difficult to police, given the rise in artificial intelligence (AI) tools available to students. (➚The Guardian)

Disinformation

Poland has asked the European Commission to investigate TikTok after the social media platform was flooded with AI-generated videos calling for Poland to withdraw from the EU. (➚Reuters)

The EU AI Act will mandate labelling for AI-generated content such as deepfakes and synthetic media to ensure transparency, with rules coming into force on August 2, 2026, requiring both visible labels and machine-readable watermarks for AI-generated output.

Bits of research

It appears that LLMs have some serious bias issues when dealing with speakers of local dialects, a study has found, with serious implications for the use of AI in recruiting.

The models were asked to describe the speakers of these texts with personal attributes, and then to make decisions about them in different scenarios: for example, who should be hired for low-education work, or where they thought the speakers lived. (➚Paper)

In nearly all tests, the models attached stereotypes to dialect speakers. The LLMs described them as uneducated, as farm workers, and as needing anger management. What is worse, the bias grew when the LLMs were explicitly told the text was dialect. (➚DW)

(1) in the association task, all evaluated LLMs exhibit significant dialect naming and dialect usage bias against German dialect speakers, reflected in negative adjective associations; (2) all models reproduce these dialect naming and dialect usage biases in their decision-making; and (3) contrary to prior work showing minimal bias with explicit demographic mentions, we find that explicitly labeling linguistic demographics β€” German dialect speakers β€” amplifies bias more than implicit cues like dialect usage.

Minh Duc Bui, Carolin Holtermann, Valentin Hofmann, Anne Lauscher, and Katharina von der Wense. 2025. Large Language Models Discriminate Against Speakers of German Dialects. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 8223–8251, Suzhou, China. Association for Computational Linguistics.

A happy new year to all readers!

Wolfgang Hauptfleisch, misaligned

Follow and subscribe

PS: If you stumbled across this on the web, subscribe to the β€œmisaligned bits” newsletter here and follow misaligned on Medium, LinkedIn or Mastodon.

misaligned bits

misaligned bits is our (roughly) weekly newsletter with bits and news, recaps from articles we published and latest studies in the field.