Key Takeaway
The main issue with AI-generated code is its training data, which often includes outdated or vulnerable software. This can lead to the reintroduction of existing vulnerabilities and the emergence of new ones, according to Alex Zenla, CTO of Edera. Unlike traditional open-source code, which allows for inspection and auditing, AI-generated code lacks transparency and accountability. Dan Fernandez, Head of AI Products at Edera, highlights that platforms like GitHub provide traceability through pull requests and commit messages, but AI code does not offer the same level of oversight, raising concerns about its reliability and security.
The Training Data Challenge
The core issue revolves around how these models initially learn to write code.
“If AI is trained using outdated, vulnerable, or low-quality software available online, then existing vulnerabilities can resurface and new problems can be introduced,” explains Alex Zenla, CTO of cloud security firm Edera.
This highlights a significant difference from traditional open-source practices, where developers can at least review and audit the code they are using.
“AI-generated code lacks transparency,” states Dan Fernandez, Head of AI Products at Edera.
“On platforms like GitHub, you can view pull requests and commit messages, which help clarify who made changes to the code and allow for tracing contributions,” he adds.
“However, with AI-generated code, there is no equivalent accountability regarding its origins or whether it has been reviewed by a human.”
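Fernandez's point about Git's built-in provenance can be illustrated with a short shell sketch. The repository, author name, and commit message below are hypothetical, and the sketch assumes `git` is installed; it simply shows the kind of who-did-what metadata that version control records for every change and that AI-generated code arrives without.

```shell
# Create a throwaway repository to demonstrate Git's provenance metadata.
tmp=$(mktemp -d)
cd "$tmp"
git init -q

# Every change carries an author and a message explaining it
# (hypothetical author and message, set inline for the demo).
git -c user.name=Reviewer -c user.email=r@example.com \
    commit --allow-empty -m "add input validation"

# The log permanently records who made the change and why.
git log --pretty=format:'%an: %s'
# prints "Reviewer: add input validation"
```

On a real project, pull requests layer review discussion on top of this commit history, so a contribution can be traced from merge back to its author and rationale, which is the accountability Fernandez notes is missing for code produced by a model.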