Meet the scientist teaching AI to police human speech

There are some signs that AI researchers are now more often reckoning with what they’ve built. In May, when Google debuted its new language-AI system LaMDA, which it said had been trained to emulate the meandering style of human dialogue that causes most by-the-book chatbots to fail awkwardly, the company acknowledged that such systems could also learn to internalize online biases, replicate hateful speech or misleading information, or otherwise be “put to ill use.” The company said it was “working to ensure we minimize such risks,” though some, including Gebru, were skeptical, panning the claims as “ethics washing.”

Source: WP