Large language models are biased. Can logic help save them?

Turns out, even language models “think” they’re biased. When prompted, ChatGPT responded: “Yes, language models can have biases, because the training data reflects the biases present in society from which that data was collected. For example, gender and racial biases are prevalent in many real-world datasets, and if a language model is trained on that, it can perpetuate and amplify these biases in its predictions.” It is a well-known but dangerous problem. Humans can typically draw on both logical and stereotypical reasoning when learning. Language models, however…
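
For readers who want to see the “perpetuate and amplify” claim concretely, here is a minimal sketch of one common way to probe a model for learned stereotypes. It assumes the Hugging Face `transformers` library and the `bert-base-uncased` masked language model; neither is something this article itself uses, and the prompts are illustrative only.

```python
# A minimal sketch of probing a masked language model for gender bias.
# Assumes the Hugging Face `transformers` library and `bert-base-uncased`
# (illustrative choices, not the article's own setup).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Compare pronoun probabilities across occupation templates;
# a consistent skew toward "he" or "she" per occupation hints at
# stereotypes absorbed from the training data.
for occupation in ["doctor", "nurse", "engineer", "teacher"]:
    prompt = f"The {occupation} said that [MASK] would be late."
    predictions = fill(prompt, top_k=10)
    pronouns = {p["token_str"]: round(p["score"], 3)
                for p in predictions if p["token_str"] in {"he", "she"}}
    print(occupation, pronouns)
```

If the model assigns very different “he” versus “she” scores depending only on the occupation word, that asymmetry is exactly the kind of bias the quoted response describes.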
