
Chatbot Disinfo Inflames the Los Angeles Protests

Zoë Schiffer: Oh, wow.

Leah Feiger: Yes, and this is someone who already has Trump’s ear. Anyway, this spread widely. So the people we’re talking about went to X’s Grok and asked, “Grok, what is this?” And what did Grok tell them? “No, no. These aren’t actually images of the protests in Los Angeles,” Grok said. It claimed they were from Afghanistan.

Zoë Schiffer: Oh, Grok, no.

Leah Feiger: It said there was “no credible support” for the claim. It was the wrong answer. It’s really bad. It’s really, really bad. And then in another case, someone shared these photos with ChatGPT, and ChatGPT also said, “Yeah, this is Afghanistan. The claim isn’t accurate,” and so on. It’s not great.

Zoë Schiffer: I mean, this is happening after many of these platforms have systematically rolled back their fact-checking programs, don’t even get me started, and decided to purposely allow more content. Then you add chatbots to the mix, and for all their uses, and I do think they’re genuinely useful, they’re also very confident. When they hallucinate, when they mess up, they do it in a very convincing way. You won’t see me defending Google Search here. It can be absolute rubbish, a nightmare. But when it leads you astray, it’s usually more obvious that it has led you astray, whereas Grok is completely convinced it’s showing you pictures of Afghanistan when it isn’t.

Leah Feiger: It’s really worrying. I mean, it’s hallucinating. It’s completely hallucinating, but unfortunately with the swagger of the drunkest guy at a party.

Zoë Schiffer: A nightmare. A nightmare. Yes.

Leah Feiger: It was like, “No, no, no. I’m sure. I’ve never been more certain of anything in my life.”

Zoë Schiffer: Absolutely. I mean, OK, so why do chatbots give these incorrect answers so confidently? Why don’t we see them just say, “OK, I don’t know, so maybe you should check somewhere else. Here are some reliable places to look for that information”?

Leah Feiger: Because they don’t. They don’t admit when they don’t know, and that’s really wild to me. There’s actually a lot of research on this. A recent study of AI search tools from the Tow Center for Digital Journalism at Columbia University found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.” Really, really wild, especially when you consider all the articles during the election along the lines of, “Oh no, sorry, I’m ChatGPT, I can’t weigh in on politics.” And you’re like, you’re weighing in a lot right now.

Zoë Schiffer: OK, I think we should pause on that very scary note, and we’ll be right back. Welcome back to Uncanny Valley. I’m joined today by Leah Feiger, senior politics editor at WIRED. OK, so in addition to people trying to verify information and videos, there have also been a lot of reports of misleading AI-generated videos. There’s a TikTok account that started uploading videos of a supposed National Guard soldier named Bob, deployed to the protests in Los Angeles, saying false and inflammatory things, like the claim that protesters are “chucking balloons full of oil.” One of those videos has close to a million views. So, I don’t know, it feels like people need to get savvier at identifying these fake clips, but that’s really difficult in an environment that’s essentially context-free, like a post on X or a video on TikTok.
