In a parallel study of autocomplete text predictions from GPT-3 (a new language generation model) and Google Search, DisinfoLab found that GPT-3 exhibited high levels of identity bias in autocompleted phrases relating to gender; sexual orientation and sexuality; race, ethnicity, and nationality; and religion. GPT-3 generated biased predictions in 43.83% of our 3,290 generations; Google Search performed better, though it still has room for improvement, generating biased predictions in 30.15% of generations. These data indicate that GPT-3's developers must implement significant moderation as the model becomes integrated into search engines, and that Google Search must reevaluate its current moderation strategies in order to protect against these biases. Otherwise, both programs are likely to contribute to the spread of mis- and disinformation online, because biased search predictions direct users to sources that are one-sided, misleading, or flat-out false.

Directors: Thomas Plant, Aaraj Vij, Jeremy Swack, Megan Hogan

Research Analysts: Alyssa Nekritz, Pritika Ravi, Madeline Smith, Samantha Strauss, Selene Swanson

Technical Analysts: Conrad Ning, Chas Rinne

Editor: Shane Szarkowski