Press "Enter" to skip to content

The chatbot disinformation inflaming the Los Angeles protests

Zoë Schiffer: Oh, wow.


Leah Feiger: Yes, exactly. Who already has Trump’s ear. This became widespread. And so we were seeing people go to Grok on X and ask, “Grok, what is this?” And what did Grok say? Grok said these were not actually images of the protests in Los Angeles. It said they came from Afghanistan.

Zoë Schiffer: Oh, Grok, no.

Leah Feiger: It was like, “There’s no credible support for this. This is a misattribution.” It was really bad. It was really bad, and then there was another situation in which a couple of people shared these photos with ChatGPT, and ChatGPT also said, “Yes, this is Afghanistan. This is not accurate,” et cetera, et cetera. It’s not great.

Zoë Schiffer: I mean, don’t even get me started on the fact that this is happening right after many of these platforms systematically dismantled their fact-checking programs and decided to intentionally leave up a lot more content. And then add chatbots to the mix, which, for all their uses, and I do think they can be really useful, are incredibly confident. When they hallucinate, when they mess up, they do it in a very convincing way. You won’t catch me out here defending Google Search. Absolute nightmare space. But it’s a little clearer when you’ve wandered off onto some random, non-credible blog than when Grok tells you with complete confidence that you’re looking at a photo of Afghanistan when you’re not.

Leah Feiger: It’s really worrying. I mean, it’s hallucinating. It’s fully hallucinating, but with the swagger of the drunkest frat brother who has ever, unfortunately, cornered you at a party.

Zoë Schiffer: Nightmare. Nightmare. Yes.

Leah Feiger: It’s like, “No, no, no. I’m sure. I’ve never been more sure in my life.”

Zoë Schiffer: Absolutely. I mean, okay, so why do chatbots give these incorrect answers with such confidence? Because we don’t see them just say, “Well, I don’t know, so maybe you should check elsewhere. Here are some credible places to go looking for that answer and that information.”

Leah Feiger: Because they don’t do that. They don’t admit when they don’t know, which is really wild to me. There have actually been a lot of studies on this, and a recent study of AI search tools at the Tow Center for Digital Journalism at Columbia University found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.” Really, really, really wild, especially when you consider how many articles there were during the elections along the lines of, “Oh no, sorry, I’m ChatGPT and I can’t weigh in on politics.” And you’re like, well, you’re weighing in a lot now.

Zoë Schiffer: Okay, I think we should pause there on that very horrible note, and we’ll be right back. Welcome back to Uncanny Valley. I’m joined today by Leah Feiger, senior politics editor at WIRED. Okay, so beyond people trying to fact-check information and images, there has also been a lot of reporting on misleading AI-generated videos. There was a TikTok account that started uploading videos of an alleged National Guard soldier named Bob who had been deployed to the Los Angeles protests, and you could see him saying false and inflammatory things, like that the protesters were “throwing balloons full of oil,” and one of the videos had a million views. So I don’t know, it seems like people need to get a little more skilled at identifying this kind of fake video, but that’s difficult in an environment that is inherently context-free, like a post on X or a video on TikTok.
