The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs.
The leap from words to the mental model is seamless, triggered every time you receive a fully fledged sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions.
However, in the case of AI systems, it misfires – building a mental model out of thin air.
A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather’s texture.”
The text in this case is as fluent as our example with pineapples, but this time the model is saying something decidedly less sensible. One begins to suspect that GPT-3 has never actually tried peanut butter and feathers.
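Readers who want to try this kind of probe themselves can do so with a few lines of code. The sketch below is a minimal illustration, not the authors' exact setup: it uses the freely available GPT-2 model through the Hugging Face transformers library as a stand-in, since GPT-3 itself is only accessible through a paid API, and the prompt and sampling settings are illustrative assumptions.

```python
# Minimal sketch: probing a text-generation model with an incomplete sentence.
# Uses GPT-2 (an open, smaller predecessor) as a stand-in for GPT-3.
from transformers import pipeline

# Load a text-generation pipeline with the open GPT-2 model.
generator = pipeline("text-generation", model="gpt2")

# The same kind of leading prompt discussed above.
prompt = "Peanut butter and feathers taste great together because"

# Sample a continuation; the model will produce fluent text
# whether or not the completion makes real-world sense.
result = generator(prompt, max_new_tokens=40, do_sample=True, num_return_sequences=1)

print(result[0]["generated_text"])
```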
Ascribing intelligence to machines, denying it to humans
A sad irony is that the same cognitive bias that makes people ascribe humanity to GPT-3 can cause them to treat actual humans in inhumane ways. Sociocultural linguistics – the study of language in its social and cultural context – shows that assuming an overly tight link between fluent expression and fluent thinking can lead to bias against people who speak differently.
For instance, people with a foreign accent are often perceived as less intelligent and are less likely to get the jobs they are qualified for. Similar biases exist against speakers of dialects that are not considered prestigious, such as Southern English in the U.S., against deaf people using sign languages and against people with speech impediments such as stuttering.
These biases are deeply harmful, often lead to racist and sexist assumptions, and have been shown again and again to be unfounded.
Fluent language alone does not imply humanity
Will AI ever become sentient? This question requires deep consideration, and indeed philosophers have pondered it for decades. What researchers have determined, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.
Kyle Mahowald is an assistant professor of linguistics at The University of Texas at Austin. Anna A. Ivanova is a PhD candidate in brain and cognitive sciences at the Massachusetts Institute of Technology.