Scientists tested AI’s moral compass, and the results reveal a key blind spot
A recent study published in the Proceedings of the National Academy of Sciences suggests that large language models struggle to accurately estimate the moral values of people outside of Western societies. Scientists found that these artificial intelligence systems tend to overestimate the moral concerns of people in Western nations while underestimating those of people in non-Western cultures. This pattern provides evidence that relying on these models to gauge global public opinion could unintentionally reinforce cultural stereotypes.

Large language models are sophisticated artificial intelligence systems trained on vast amounts of text data to generate human-like writing and answer complex questions. Popular examples include ChatGPT, created by OpenAI, and similar tools built by companies like Google and Meta. People increasingly use these models for communication, business, and even academic research.

Some academics have recently suggested using these models to simulate human participants in social science research. This idea rests on the assumption that the models possess an accurate understanding of diverse human populations. The researchers conducted this study to put that assumption to the test.

Mohammad Atari, an assistant professor …