misaligned bits #11: Preparing For Impact

Welcome to a new edition of misaligned bits, the newsletter by Misaligned where we sum up recent news, with a lighter touch. In this edition: Insurers are unsure about AI. Grok has another lapse. And LLMs are seriously biased against people who speak dialect. As usual, we will mark all non-Medium links with "↗" (external link) and all possibly paywalled links with "🔒".

Misaligned Recap

For your convenience, we have compiled a list of studies from 2025 related to ethical questions around AI. The studies give a glimpse of the multitude of ethical questions and may serve as a warning at a time when the AI industry overpromises the capabilities of AI and attempts to push its application into more and more critical areas.

In "Hello Computer!", published in "bootcamp", we wonder whether talking to AI is actually a good idea for human-machine interfacing, and whether an interface that gives AI companies access to your most private data is even something we should desire.

Assessing risk and preparing for impact

Liability, or the question of who is responsible when LLMs go haywire, is still under intense discussion. We recently tackled this question in the article "Anyone responsible? After The Demise Of The EU AI Liability Directive". Insurers appear to be growing increasingly nervous about covering such cases, as reported by the Financial Times, and some have started to exclude "any actual or alleged use" of AI from their coverage.

Insurers increasingly view AI models' outputs as too unpredictable and opaque to insure, said Dennis Bertram, head of cyber insurance for Europe at Mosaic. "It's too much of a black box." […] "Nobody knows who's liable if things go wrong," said Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company. (🔒↗Financial Times)

It has long been predicted that liability (and by extension insurance) will be one of the major stumbling blocks for AI uptake in companies.
This problem is becoming even more pronounced with the emergence of Shadow AI. "Insurance brokers and lawyers said they feared insurers would start fighting claims in court when AI-driven losses significantly increase." (🔒↗Financial Times)

Insurers are supposed to evaluate the risk of AI services, but as law firm Hogan Lovells points out in an article from October, "as AI tools begin to interact directly with customers or influence decision-making, the risk profile increases. […] Liability insurers, in particular, will want to think carefully about the interaction of AI risk with traditional coverages." (↗Hogan Lovells)

As Jocelyn Auger writes for Fasken: "Recent incidents involving AI-generated misinformation, automated customer-service tools making inaccurate statements, or sophisticated impersonation frauds have only heightened these concerns. While individually manageable, they illustrate how unpredictable AI behavior can be, and how difficult it is for insurers to assess where future claims might arise." (↗Fasken)

For further reading, an article from Browne Jacobson also compares the situation of AI liability in the EU and the UK (↗Browne Jacobson).

Risky business

Without regulation requiring transparency and a central (and public) incident database, it will be extremely difficult for insurers to judge the risks. The MIT AI Model Risk Catalog might therefore be worth a look. The repository is part of the MIT AI Risk Initiative, which aims to increase awareness and adoption of best-practice AI risk management across the AI ecosystem.

"We analyzed nearly 460,000 AI model cards from Hugging Face to examine how developers report risks. From these, we extracted around 3,000 unique risk mentions and built the AI Model Risk Catalog.
We compared these with risks identified by researchers in the MIT Risk Repository and with real-world incidents from the AI Incident Database." (↗MIT)

Safeguards?

While we are on the topic of unpredictability: Lapses in safeguards (or, more likely, the absence of any) led to a wave of sexualized images generated by Elon Musk's Grok this week. xAI says it is working to improve its systems while blaming the users. (↗The Guardian) It may be good to remember that in other industries, you have to implement safeguards (and test them) before you release a product to the public.

Datacentres everywhere?

Meanwhile, local resistance against building AI data centres everywhere appears to be growing: Between April and June 2025 alone, the latest reporting period, 20 proposals valued at $98 billion across 11 US states were blocked or delayed amid local opposition and state-level pushback. (↗Associated Press)

Cheating everywhere!

The Association of Chartered Certified Accountants (ACCA), which has almost 260,000 members, has said that from March it will stop allowing students to take online exams. The ACCA said it had concluded that online tests have become too difficult to police, given the rise in artificial intelligence (AI) tools available to students. (↗The Guardian)

Disinformation

Poland has asked the European Commission to investigate TikTok after the social media platform was flooded with AI-generated videos calling for Poland to withdraw from the EU. (↗Reuters) The EU AI Act will mandate labelling for AI-generated content such as deepfakes and synthetic media to ensure transparency, with rules coming into force on August 2, 2026, requiring both visible labels and machine-readable watermarks for AI-generated output.
Bits of research

It appears that LLMs have some serious bias issues when talking to people with local dialects, a study has found, with serious impacts on AI in recruiting. The models were asked to describe the speakers of dialect texts with personal attributes, and then to make decisions about those individuals in different scenarios: who should be hired for low-education work, for example, or where the models thought the speakers lived. (↗Paper)

In nearly all tests, the models attached stereotypes to dialect speakers. The LLMs described them as uneducated, as farm workers, and as needing anger management. What is worse, the bias grew when the LLMs were explicitly told the text was a dialect. (↗DW)

"(1) in the association task, all evaluated LLMs exhibit significant dialect naming and dialect usage bias against German dialect speakers, reflected in negative adjective associations; (2) all models reproduce these dialect naming and dialect usage biases in their decision-making; and (3) contrary to prior work showing minimal bias with explicit demographic mentions, we find that explicitly labeling linguistic demographics (German dialect speakers) amplifies bias more than implicit cues like dialect usage."

Minh Duc Bui, Carolin Holtermann, Valentin Hofmann, Anne Lauscher, and Katharina von der Wense. 2025. Large Language Models Discriminate Against Speakers of German Dialects. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 8223–8251, Suzhou, China. Association for Computational Linguistics.

A happy new year to all readers!

Wolfgang Hauptfleisch, misaligned

Follow and subscribe

PS: If you stumbled across this on the web, subscribe to the "misaligned bits" newsletter here and follow misaligned on Medium, LinkedIn or Mastodon.
misaligned bits is our (roughly) weekly newsletter with bits and news, recaps of articles we published, and the latest studies in the field.