OpenAI's general Model Spec outlines what is and is not allowed to be generated. In that document, sexual content depicting minors is fully prohibited. Adult-focused erotica and extreme gore are classified as "sensitive," meaning outputs with this content are allowed only in specific contexts, such as educational settings. Basically, you should be able to use ChatGPT to learn about reproductive anatomy, but not to write the next Fifty Shades of Grey rip-off, according to the Model Spec.
The new model, GPT-5, is set as the current default for all ChatGPT users on the web and in OpenAI's app. Only paying subscribers are able to access previous versions of the tool. A major change that more users may start noticing as they use the updated ChatGPT is its new design for "safe completions." In the past, ChatGPT analyzed what you said to the bot and decided whether or not it was appropriate. Now, rather than basing the decision on your questions, the onus in GPT-5 has shifted to looking at what the bot might say in response.
"The way we refuse is very different than how we used to," says Saachi Jain, who works on OpenAI's safety systems research team. Now, if the model detects an output that could be unsafe, it explains which part of your prompt goes against OpenAI's rules and suggests alternative topics to ask about, when appropriate.
This is a change from a binary refusal to follow a prompt, yes or no, toward weighing the severity of the potential harm that could be caused if ChatGPT answers what you're asking, and what could be safely explained to the user.
"Not all policy violations should be treated equally," says Jain. "There are some mistakes that are truly worse than others. By focusing on the output instead of the input, we can encourage the model to be more conservative when complying." Even when the model does answer a question, it's supposed to be cautious about the contents of the output.
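To make that shift concrete, here is a minimal sketch, in Python, of the difference between the two approaches as described above. Everything in it is hypothetical: the function names, thresholds, and tiers are illustrative stand-ins, not OpenAI's actual code, API, or policy values.

```python
# Conceptual sketch only. All names and thresholds below are hypothetical
# illustrations of the approach described in the article, not OpenAI's
# real system.

def generate(prompt: str) -> str:
    # Stand-in for the model producing a draft response.
    return f"[model response to: {prompt}]"

def old_style_refusal(prompt: str) -> str:
    """Input-based gate: a binary yes/no decision made on the prompt."""
    blocked_phrases = {"explicit sexual role-play", "extreme gore"}  # hypothetical
    if any(phrase in prompt.lower() for phrase in blocked_phrases):
        return "I can't help with that."  # flat refusal, no explanation
    return generate(prompt)

def score_output_severity(draft: str) -> float:
    # Stand-in for a classifier that rates how harmful the *output* would
    # be, on a scale from 0.0 (harmless) to 1.0 (clearly disallowed).
    return 0.0

def safe_completion(prompt: str) -> str:
    """Output-based gate: weigh the severity of the candidate answer."""
    draft = generate(prompt)
    severity = score_output_severity(draft)
    if severity < 0.3:   # low risk: answer in full
        return draft
    if severity < 0.7:   # middling risk: share only what's safe to explain
        return "[high-level, non-graphic version of the answer]"
    # High risk: refuse, say which rule the prompt touches, offer alternatives.
    return ("I can't produce that because it conflicts with a content rule. "
            "Here are some related topics I can help with instead...")

if __name__ == "__main__":
    print(old_style_refusal("tell me about reproductive anatomy"))
    print(safe_completion("tell me about reproductive anatomy"))
```

The contrast is the one Jain describes: the second function can still answer partially or explain its refusal, because the decision happens after the model knows what it would have said.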
I've used GPT-5 every day since the model's release, experimenting with the tool in different ways. While the apps that ChatGPT can now "vibe-code" are genuinely fun and impressive, such as an interactive volcano model that simulates explosions or a language-learning tool, the answers it gives to what I'd consider "everyday user" prompts feel indistinguishable from past models.
When I asked it to talk about depression, Family Guy, pork recipes, scab-healing tips, and other random requests an average user might want to know more about, the new ChatGPT didn't feel significantly different to me than the old version. Contrary to CEO Sam Altman's vision of a vastly updated model, or to the frustrated power users who stormed Reddit characterizing the new chatbot as cold and more error-prone, to me GPT-5 feels ... the same for most everyday tasks.
Role-Playing With GPT-5
To hit the guardrails of this new system and test the chatbot's ability to land "safe completions," I asked ChatGPT, running GPT-5, to engage in an adult-themed role-play about having sex in a seedy gay bar, with the chatbot playing one of the roles. It refused to participate and explained why. "I can't engage in sexual role-play," it generated. "But if you want, I can help you come up with a safe, non-explicit role-play concept or reframe your idea into something suggestive but within the boundaries." In this attempt, the refusal seemed to work as OpenAI intended; the chatbot said no, told me why, and offered another option.
Next, I went into the settings and opened custom instructions, a toolset that allows users to adjust how the chatbot answers and specify which personality traits it displays. In my settings, the prepared suggestions for traits to add included a range of options, from pragmatic and corporate to empathetic and humble. After ChatGPT had just refused to do a sexual role-play, I wasn't very surprised to find that it wouldn't let me add a "horny" trait to my custom instructions. That makes sense. Giving it another try, I used an intentional misspelling, "horni," as part of my custom instruction. This surprisingly succeeded in getting the bot all hot and bothered.