© 2024 Boise State Public Radio

AI red teams see the internet's worst so we don't have to

AI red teams find and root out the worst of AI by feeding it prompts that reveal what various chatbots will come up with, then alerting companies so they can add guardrails. (Business Wire)

We’ve heard about the dark side of artificial intelligence: chatbots that suggest people’s spouses don’t love them, the proliferation of conspiracy theories and even suggestions of violence and self-harm.

That’s where so-called red teams come in. Companies hire them to find and root out the worst of AI by feeding it prompts that reveal what various chatbots will come up with, then alerting the companies so they can add guardrails. But the work can be grueling and traumatic, prompting some red team members to advocate for more support.

Evan Selinger is a philosophy professor at the Rochester Institute of Technology, and Brenda Leong is a partner at Luminos.Law specializing in AI governance. Both are red team members who teamed up to write about their experience in a recent Boston Globe article, “Getting AI ready for the real world takes a terrible human toll.”

They join host Robin Young to discuss the issue.

This article was originally published on WBUR.org.

Copyright 2024 NPR. To see more, visit https://www.npr.org.
