Stanford study: programmers who used AI tools like GitHub Copilot to solve a set of coding challenges produced less secure code than those who did not

Programmers who accept assistance from AI tools like GitHub Copilot produce less secure code than those who work alone, according to research by computer scientists at Stanford University.

In their paper “Do Users Write More Insecure Code with AI Assistants?”, Stanford researchers Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh answer the title’s question in the affirmative.

According to the authors’ findings, participants with access to an AI assistant introduced more security vulnerabilities than those without, with the gap especially pronounced on the string-encryption and SQL-injection tasks. Surprisingly, the authors also found that participants with access to an AI assistant were more likely than those without to believe they had written secure code.
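The paper itself does not include this code, but as a rough illustration of the kind of SQL-injection flaw the study measured, here is a minimal sketch in Python. The function and table names are hypothetical: the first query builds SQL by string interpolation (the vulnerable pattern an assistant might suggest), while the second uses a parameterized query (the standard fix).

```python
import sqlite3

# Hypothetical example: the kind of SQL-injection flaw the study counted.
# Table and column names are illustrative, not taken from the paper.

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # VULNERABLE: user input is interpolated directly into the SQL string.
    # An input like "x' OR '1'='1" makes the WHERE clause match every row.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # SAFE: a parameterized query lets the driver handle escaping.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    malicious = "x' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # leaks every row
    print(find_user_safe(conn, malicious))    # returns no rows
```

Both versions look similar at a glance, which is consistent with the study’s observation that participants often rated insecure AI-suggested code as secure.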

More: arXiv and WinBuzzer
