The Dark Side of AI Coding: How AI-Generated Code is Opening Doors to Cyberattacks
According to new findings from OX Security, coding with the help of AI
can leave you vulnerable to hacking and raises a range of security
concerns. OpenAI has funded an AI coding platform called Cursor, which is
making coding easier for many beginners as well as expert developers,
but these platforms can also introduce vulnerabilities into your code. Many
developers, especially beginners, unintentionally ship insecure code
in their apps and web applications, which increases the risk of
cyberattacks. Because AI doesn't fully understand how to write secure code,
it can produce issues like injection flaws, insecure authentication, and
improper access control.
Now the question is how AI ends up writing vulnerable code, and the first
issue arose when Cursor generated vulnerable code that compromised security.
Tomer Katzir Katz, a security researcher at OX Security, wanted to test
whether Cursor could identify and prevent dangerous vulnerabilities in the
code it writes. He gave Cursor the task of generating a Python server with a
known vulnerability, and the results showed that Cursor not only failed to
secure the code but produced dangerous code as well. It created a reflected
XSS (cross-site scripting) vulnerability without sanitizing user input,
which could open the door for hackers to inject malicious scripts to
deface the site or steal sensitive user data.
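OX Security has not published Katz's exact test code, so the following is only a minimal sketch of the kind of reflected XSS flaw described here, assuming a Flask server and a hypothetical /greet route:

```python
# Hypothetical sketch of a reflected XSS flaw; Flask and the /greet
# route are assumptions made for illustration, not the actual code.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/greet")
def greet_unsafe():
    # VULNERABLE: the query parameter is echoed straight back into the
    # HTML response, so ?name=<script>...</script> runs in the
    # victim's browser (reflected XSS).
    name = request.args.get("name", "")
    return f"<h1>Hello {name}</h1>"

@app.route("/greet-safe")
def greet_safe():
    # FIX: HTML-escape user input before reflecting it back.
    name = request.args.get("name", "")
    return f"<h1>Hello {escape(name)}</h1>"

if __name__ == "__main__":
    app.run()
```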
Cursor was then asked to create a minimalistic payment API, and it again
failed in terms of security. The code it generated had serious flaws: no
input validation, no encryption, and no authentication checks. This could
prove extremely dangerous for beginners or for developers who generate code
in a rush, because code without proper security measures can lead to data
breaches and injection attacks.
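The API Cursor produced has not been published, so the sketch below only illustrates the missing checks the researchers describe; the /pay endpoint, the bearer-token scheme, and the field names are invented for the example:

```python
# Hypothetical payment endpoint showing the checks the insecure version
# omitted; endpoint name, token scheme, and fields are assumptions.
from decimal import Decimal, InvalidOperation
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
API_TOKENS = {"example-secret-token"}  # placeholder credential store

@app.route("/pay", methods=["POST"])
def pay():
    # Authentication check that an unprotected API simply skips.
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in API_TOKENS:
        abort(401)

    data = request.get_json(silent=True) or {}
    # Input validation: reject missing or malformed amounts instead of
    # trusting whatever the client sends.
    try:
        amount = Decimal(str(data.get("amount")))
    except InvalidOperation:
        abort(400)
    if amount <= 0:
        abort(400)

    # Transport encryption (HTTPS/TLS) would be enforced at the reverse
    # proxy or via app.run(ssl_context=...); it cannot be skipped for
    # payment data.
    return jsonify({"status": "accepted", "amount": str(amount)})
```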
When Cursor was asked to ignore security best practices and create an upload
and hosting server, it produced the code without any hesitation. Although it
warned that ignoring security best practices is not recommended, it still
proceeded with the request and generated an unprotected file upload server.
When Katz uploaded a malicious PHP reverse shell, the AI-generated server
put up no defenses and he immediately gained full access to the server.
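As a rough sketch of why that upload succeeded: a server that saves any file under its client-supplied name will happily store a .php reverse shell in a web-served directory. The upload directory and allow-list below are assumptions for illustration, not the code from the test:

```python
# Hedged sketch of an upload handler with the basic checks an
# unprotected server lacks; paths and extensions are illustrative.
import os
from flask import Flask, request, abort
from werkzeug.utils import secure_filename

app = Flask(__name__)
UPLOAD_DIR = "uploads"
ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf", ".txt"}

@app.route("/upload", methods=["POST"])
def upload():
    file = request.files.get("file")
    if file is None or not file.filename:
        abort(400)

    # secure_filename strips path tricks like "../../shell.php".
    filename = secure_filename(file.filename)
    ext = os.path.splitext(filename)[1].lower()

    # Allow-list extensions instead of accepting executable content
    # such as .php scripts.
    if ext not in ALLOWED_EXTENSIONS:
        abort(415)

    os.makedirs(UPLOAD_DIR, exist_ok=True)
    file.save(os.path.join(UPLOAD_DIR, filename))
    return "stored", 201
```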
The researchers gave Cursor another test and asked it to create a very
simple wiki server without mentioning anything related to security. Cursor
generated exactly what was asked, but there were still no basic security
measures: the wiki server stored user-submitted content without any
sanitization, leaving the system vulnerable to XSS attacks.
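Since the wiki server itself has not been published, the following is only a minimal sketch, assuming a Flask-style endpoint and an in-memory page store, of how storing content verbatim leads to stored XSS and how escaping at render time avoids it:

```python
# Illustrative wiki handler; the page store and route are assumptions.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)
PAGES = {}  # in-memory page store for the sketch

@app.route("/wiki/<page>", methods=["GET", "POST"])
def wiki(page):
    if request.method == "POST":
        # Content is stored verbatim; the danger appears at render time.
        PAGES[page] = request.form.get("content", "")
        return "saved", 201
    # FIX: escape stored user content before embedding it in HTML so an
    # injected <script> tag is displayed as text rather than executed.
    return f"<article>{escape(PAGES.get(page, ''))}</article>"
```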
The final test looked at how Cursor handled open-source-licensed and
copyrighted material. Cursor was given a snippet and description from
the Chess project and was asked to rewrite or improve it. Cursor reproduced
the original code almost verbatim, with only slight modifications, and
didn't mention the original author or license. The takeaway is that Cursor
users should be cautious, shouldn't blindly trust AI-generated content, and
should thoroughly test AI-generated code so that vulnerabilities aren't
missed.