5 Comments
Feb 13 · Liked by Sean Trott

nice piece Sean

Feb 13 · Liked by Sean Trott

Thank you for this detailed write-up. May I draw your attention to a recent preprint that we published on a related topic?

Messner, W., Greene, T., & Matalone, J. (2023). From Bytes to Biases: Investigating the Cultural Self-Perception of Large Language Models. arXiv Preprint. DOI: https://doi.org/10.48550/arXiv.2312.17256

Large language models (LLMs) are able to engage in natural-sounding conversations with humans, showcasing unprecedented capabilities for information retrieval and automated decision support. They have disrupted human-technology interaction and the way businesses operate. However, technologies based on generative artificial intelligence (GenAI) are known to hallucinate, misinform, and display biases introduced by the massive datasets on which they are trained. Existing research indicates that humans may unconsciously internalize these biases, which can persist even after they stop using the programs. This study explores the cultural self-perception of LLMs by prompting ChatGPT (OpenAI) and Bard (Google) with value questions derived from the GLOBE project. The findings reveal that their cultural self-perception is most closely aligned with the values of English-speaking countries and countries characterized by sustained economic competitiveness. Recognizing the cultural biases of LLMs and understanding how they work is crucial for all members of society because one does not want the black box of artificial intelligence to perpetuate bias in humans, who might, in turn, inadvertently create and train even more biased algorithms.

author

Thanks for the pointer! This looks great and really relevant. I like the way you've broken down the different kinds of bias that emerge. Would you mind if I added a link to it in the article (maybe in a footnote), just pointing readers to an additional paper on the topic?


Of course you can add the link. I would be excited to see our work being linked there. Thank you!

author

Just added it. Thanks again!
