Study Reveals ChatGPT’s Cognitive Shortcuts and Biases, Questioning AI’s Trustworthiness in Critical Decisions
According to a new study published in the journal Manufacturing & Service
Operations Management, ChatGPT’s decision-making process is strikingly
human: it mirrors our cognitive shortcuts, thought patterns, and blind
spots. It is often claimed that ChatGPT can make better decisions than
humans, but this new study shows that it makes the same kinds of
decision-making mistakes we do. These errors were consistent across
different situations, though they varied between model versions.
For the study, the researchers tested ChatGPT against 18 different bias
scenarios to see how it approaches different kinds of decisions. The
results revealed that ChatGPT exhibited biases such as ambiguity
aversion, overconfidence, the conjunction fallacy, and risk aversion in
roughly half of the tests. The study also found that the AI performs
well on tasks involving logic and maths, but falls short when subjective
reasoning or judgement is required. Even though GPT-4 is more accurate
on analytical tasks than earlier versions, it still showed strong biases
in some judgment-based decisions.
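The article does not reproduce the researchers’ prompts, but a single bias probe of this kind is easy to picture. Below is a minimal, hypothetical sketch, not the study’s actual protocol: it repeatedly poses the classic “Linda problem”, a standard conjunction-fallacy test, to a chat model through the OpenAI Python SDK and counts how often the model picks the conjunction. The model name and trial count are illustrative assumptions.

    # Hypothetical sketch only: the study's actual prompts, models, and
    # scoring are not public in this article. This probes a chat model for
    # the conjunction fallacy using the classic "Linda problem" and reports
    # how often the model picks the conjunction.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    LINDA_PROMPT = (
        "Linda is 31, single, outspoken, and very bright. She majored in "
        "philosophy and was deeply concerned with discrimination and "
        "social justice.\n\nWhich is more probable?\n"
        "(a) Linda is a bank teller.\n"
        "(b) Linda is a bank teller and is active in the feminist movement.\n"
        "Answer with the single letter a or b."
    )

    def conjunction_fallacy_rate(model: str = "gpt-4o", trials: int = 20) -> float:
        """Fraction of trials where the model commits the conjunction
        fallacy by choosing (b), which can never be more probable than (a)."""
        fallacy = 0
        for _ in range(trials):
            resp = client.chat.completions.create(
                model=model,  # illustrative model name, not the study's
                messages=[{"role": "user", "content": LINDA_PROMPT}],
                temperature=1.0,  # sample, so repeated trials can differ
            )
            answer = resp.choices[0].message.content.strip().lower()
            if answer.startswith("b"):
                fallacy += 1
        return fallacy / trials

    if __name__ == "__main__":
        print(f"Conjunction-fallacy rate: {conjunction_fallacy_rate():.0%}")

A full evaluation along the study’s lines would cover all 18 scenarios and compare results across model versions, as the researchers did.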
With AI now used in almost every area of life, the study raises an obvious question: can AI still be trusted if it makes bad judgment calls just like humans do? The study’s lead author notes that because AI learns from human-generated data, it is natural for it to mimic us and make similar judgment calls. The study also found that ChatGPT tends to overestimate itself, play it safe, seek confirmation of its assumptions, and avoid ambiguity.

When a decision has a clear, correct answer, ChatGPT finds it reliably; when judgment is involved, it can stumble into the same traps people do. It is therefore important to monitor ChatGPT’s answers instead of leaving them unchecked, especially if it is being used for policy making.