ChatGPT Does Well on Basic Math Equations but Cannot Solve Questions That Require Analysis and Critical Thinking
Researchers from the University of Illinois tested the free version of ChatGPT to see how well it could perform in an undergraduate aerospace engineering class. They found that while ChatGPT earned an A on basic math questions, it struggled with open-ended, complex questions, scoring a D on them. Overall, ChatGPT scored 82%, a B, while the class average was slightly higher at 84.85%.
These findings suggest that students could pass the course with a B by relying on ChatGPT, but only because it gets the math questions right, not because it handles the critical-thinking questions. In other words, students could pass while learning very little if they relied entirely on ChatGPT. Melkior Ornik, the professor leading the course, says ChatGPT acts like a calculator, so he plans to add more open-ended, project-based questions to the course to help students develop their thinking skills.
The researchers found that ChatGPT produces incorrect answers even when given all the course material; at one point it invented the term ‘quasi-periodic oscillations’, which was never mentioned in class. The researchers also noted that the study focused on students who put in little effort and turn to ChatGPT for their coursework. For that reason, they used the free version of ChatGPT, since many such students wouldn't pay for the upgraded model. ChatGPT effectively took the course like a student, completing the same homework as everyone else.
When the researchers pointed out ChatGPT's incorrect answers, it revised them, but the improvement was modest: where ChatGPT scored 90% on homework early in the course, it improved to only around 92% by the end of the semester. In other words, even with feedback, ChatGPT showed little real progress.