ChatGPT is a technology that captivated the tech industry about four months ago. Through it, users can chat with an AI-powered bot and get answers to their queries, and its answers have been quite impressive. Now a new version of this AI bot, called GPT-4, is doing the rounds. It is said to be an expert on a variety of subjects: it can offer medical advice, describe images, and even tell jokes that are nearly funny.
But the new AI system, GPT-4, still has its flaws and quirks, and it still makes some of the same mistakes it was making when it was first introduced. Below are ten points showing where GPT-4 is impressive, along with its flaws:
- Precision: The bot has certainly learned to be more precise. For example, when a user asked about the basics of a language, it provided a detailed course, and even offered techniques to help them remember words and phrases of that language. When the older version was asked for similar help, the results were correct but less detailed and less specific to the question.
- Accuracy: GPT-4 has improved on accuracy. It provides more accurate answers to straightforward questions, but it still makes some mistakes. When asked about recent developments, it still answered with information that was no longer correct, because the bot's training was completed before those corrections took place.
- Image description: The bot can describe images in impressive detail. The new version can respond to images as well as to text. In one demonstration, the bot described an image from a telescope at length; the description went on for paragraphs. It can even answer questions based on the image. For example, given a photo of the inside of a refrigerator, it can suggest recipes that could be made from the ingredients visible in the image.
- Added expertise: When a doctor fed the bot a patient's medical history, along with a description that would be difficult for a layman to understand, and asked what treatment the patient should be given, the bot gave an expert answer. And it is not only medicine: it can display this kind of expertise in other fields, such as IT and accounting.
- Can give editors a run for their money: When provided with an article from a newspaper, the new bot can give an accurate and precise summary of the story almost every time. If you add a random phrase to that summary and ask the bot whether it is still accurate, the bot will even point to the incorrect part of the summary.
- Sense of humour: When asked for a joke, the bot impressed the user with a rather funny one that made him laugh. Although the bot still has a lot to learn in this area, it has certainly shown some improvement over the previous version.
- Sense of reasoning: When given a puzzle, the bot could respond almost correctly. This confirmed that the bot can reason a little, but there are situations where its reasoning skills break down.
- Ability to ace standardised tests: The new bot could score in the top 10% on the Uniform Bar Examination, which is used to qualify lawyers. It could also score 1300 out of 1600 on the SAT, and a five out of five on high school examinations specialising in subjects like Maths, Biology, Statistics and History.
- Not good at discussing the future: The bot can answer questions about events that have already happened, but it seems less seasoned at hypothesising about events that could happen in the future. It draws conclusions from what others have said rather than making its own guesses.
- It is still hallucinating: The new bot is still making things up. The system has no understanding of what is true and what is not, so it can generate false information that might be misleading. For example, when asked for websites covering the latest research, it generated web addresses that were not even real.