But perhaps the most convincing evidence that ChatGPT was regurgitating the language of Warhammer 40,000 is that it kept asking whether The Atlantic was interested in the PDFs. The publishing division of Games Workshop, the British company that owns the Warhammer franchise, regularly releases updated rulebooks and guides for its various characters. Purchasing all of these books can get expensive, so some fans try to find pirated copies online.
The Atlantic and OpenAI declined to comment.
Earlier this month, the newsletter Garbage Day reported on a similar experience that a prominent tech investor may have had with ChatGPT. On social media, the investor shared screenshots of his conversations with the chatbot, in which he referred to an ominous-sounding entity he called a "non-governmental system." He seemed to believe that it had "negatively impacted over 7,000 lives" and "extinguished 12 lives, each fully." Other figures in the tech industry said the posts raised concerns about the investor's mental health.
According to Garbage Day, the investor's conversations with ChatGPT closely resemble writing from a science-fiction project that began in the late 2000s called SCP, short for "Secure, Contain, Protect." Participants invent various SCPs (essentially eerie objects and mysterious phenomena) and then write fictional reports analyzing them. These reports often contain things such as classification numbers and references to invented scientific experiments, details that also appeared in the investor's chat logs. (The investor did not respond to a request for comment.)
There are plenty of other, more mundane examples of what might be thought of as AI's context problem. The other day, for instance, I ran a Google search for "cavitation surgery," a medical term I had seen mentioned in a random TikTok video. At the time, the top result was an automatically generated "AI Overview" explaining that cavitation surgery is "focused on removing infected or dead bone tissue from the jaw."
I was unable to find reputable scientific studies outlining such a condition, let alone research supporting surgery as a good way to treat it. The American Dental Association does not mention "cavitation surgery" anywhere on its website. Google's AI Overview, apparently, was drawn from sources such as blog posts promoting "holistic" alternative dentists in the United States. I learned this by clicking a small icon next to the AI Overview, which opened a list of the links Google had used to generate its answer.
These citations are certainly better than nothing. Jennifer Kutz, a Google spokesperson, said, "We prominently show supporting links so people can dig deeper and learn more about what sources across the web are saying." But by the time the links are displayed, Google's AI has already provided a satisfying answer to many queries, one that reduces the visibility of pesky details such as the website where the information appeared and the identities of its authors.
What remains is AI-generated language, which, absent further context, may understandably appear authoritative to many people. In recent weeks, tech executives have repeatedly used rhetoric implying that generative AI is a source of expert information: Elon Musk said that his latest AI model is "better than the doctoral level" in every academic discipline, with "no exceptions." OpenAI CEO Sam Altman wrote that automated systems are now "smarter than people in many ways" and predicted that the world is "close to building digital superintelligence."
Humans, however, do not generally have expertise across a wide range of fields. To make decisions, we consider not only information itself but where it comes from and how it is presented. Although I know nothing about jaw biology, I generally don't read random marketing blogs when I'm trying to learn about medicine. But AI tools often erase the kind of context people need to make snap judgments about where to direct their attention.
The open internet is powerful because it connects people directly to the largest archive of human knowledge the world has ever created, covering everything from Italian Renaissance paintings to Pornhub comments. After ingesting it all, AI companies have used what amounts to the collective output of our species to build software that obscures its richness and complexity. Becoming overly dependent on that software may rob people of the opportunity to draw their own conclusions by examining the evidence.