Amid all the smart work, why is ChatGPT still hallucinating?

These days we are surrounded by people who place near-complete trust in AI. Yet an investigation run by OpenAI, and reported by The New York Times, found that AI models are still making things up and committing silly mistakes. These errors are what researchers call hallucinations.

A hallucination happens when you ask the AI a question and it responds with something that sounds true, but the information is not actually correct at all.

AI chatbots have been making these kinds of mistakes since their creation. Developers have been studying the problem carefully, but even the newest models still suffer from it.

OpenAI's two latest models are designed to work step by step. They do not just write explanations in fluent, easy-to-understand text; instead, they work through each problem in small steps, the way a human being would.

The o1 model can outperform PhD students in biology, chemistry, and medical sciences. But when OpenAI tested the credibility of the newer o3 model, it made mistakes and produced answers that were simply not true. When asked to state facts about famous people, it generated hallucinated answers that had nothing to do with reality.

One thing users are noticing is that the more an AI model tries to reason its way through something, the more mistakes it makes. Simpler models tend to deliver what is asked and then stop, while the more complex models try to reason and adapt with each connection and follow-up question the user gives them.

Just because the AI blurs the line between caution and reasoning does not mean it is deliberately giving wrong answers. It is simply trying to provide reasons that fit the user's situation as neatly as possible.

Developers can tune how the AI answers depending on what the user is asking. Earlier models tended to give answers grounded in plain facts, while the new models try to arrive at answers through reasoning. As a result, they sometimes explain situations without knowing whether they are genuinely reasoning or just producing thoughtful-sounding text. This is when they start to hallucinate, and the user may get the impression that the AI is lying or not telling the whole truth.

This does not mean the reasoning models are deliberately generating lies. It means they cannot always tell the difference between reasoning and explanation. In trying to sound convincing about whatever the user is asking, these models often lose track of whether the answer is actually logical.
