How to close gaps between AI coding and secure software with DevSecOps
Don't overlook the security implications of AI-generated code.

Reading Time: 5 minutes

Artificial intelligence (AI) has infiltrated the world of software development — and things will never be the same.

Tools like GitHub Copilot, Codex, Tabnine, and others can generate complete functions and code blocks in seconds using machine learning (ML) models trained on billions of lines of code, positioning them to substantially increase programmer productivity.

A 2022 study found that developers who used Copilot completed a task 55% faster than those who didn’t. Nearly 33% of respondents to Stack Overflow’s 2023 Developer Survey cited increased productivity as the most significant benefit developers see from AI tools. Respondents also praised the tools for speeding up learning (25.17%) and improving efficiency (24.96%). All in all, 70% of respondents are using or plan to use AI tools in their development process this year.

It’s not surprising, therefore, that GitHub estimates generative AI developer tools could raise global gross domestic product (GDP) by $1.5 trillion by 2030 due to accelerated developer velocity. And yet, while this emerging technology promises speed, efficiency, and a more democratized approach to software development, it carries real risks when relied upon blindly, without proper oversight.

The security implications of AI-generated code

Security is often an afterthought. But it needs to be top of mind — always.

Multiple studies have raised concerns that code produced with AI autocompletion contains more security vulnerabilities than human-written code. A Stanford study, which observed 47 developers using Codex, found that programmers believed their code was more secure when written with AI assistance than when they relied on their own knowledge. The opposite was true. Researchers at New York University’s Tandon School of Engineering tested Copilot and discovered that “40% of the code it generated in security-relevant contexts had vulnerabilities.”
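To make the risk concrete, here is a hypothetical Python sketch of the kind of flaw those studies describe; the function and schema are illustrative, not taken from either study. The first version mirrors an assistant-style suggestion that interpolates user input into a SQL string, while the second is the parameterized query a security-minded reviewer would insist on.

```python
import sqlite3

def get_user_insecure(conn: sqlite3.Connection, username: str):
    # The kind of code an assistant may plausibly suggest: string
    # formatting drops untrusted input straight into the SQL statement,
    # leaving the query open to SQL injection.
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def get_user_secure(conn: sqlite3.Connection, username: str):
    # The reviewed version: a parameterized query lets the database
    # driver handle escaping, closing the injection hole.
    return conn.execute(
        "SELECT * FROM users WHERE username = ?", (username,)
    ).fetchone()
```

An input like ' OR '1'='1 turns the first query into one that returns every row; the second treats it as a literal, harmless string.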

The AI’s ability to generate code quickly can lull developers into a false sense of security. They might overlook or underestimate the security risks, assuming that the AI’s suggestions are as secure as they are efficient, which is not always the case. They may even find themselves duped by AI hallucinations, presuming that bogus code is actually legitimate. We’ve already seen this sort of carelessness play out in the legal world through the use of made-up case law.

But the pitfalls don’t stop there. The advent of AI in coding isn’t just a technical shift; it’s an ethical maze.

The ethical and legal quandaries

These tools use public code as training data, often devoid of context or security checks. This not only risks flawed and insecure code but also blurs the lines of intellectual property. Our legal systems are ill-prepared for this new dynamic, leaving developers and companies in legal limbo.

Then there’s the matter of accountability. In a world where AI-generated code becomes the norm, who takes the fall when things go south or fail to meet regulatory standards? Is it the human developer, the organization, or the AI itself? These aren’t just hypotheticals. They’re central to the responsible deployment of AI in software development and demand immediate, nuanced solutions.

Incorporating DevSecOps to balance productivity and security

So, what’s the path forward? It’s not to shun these advancements but to manage and mitigate the risks that come with them.

This is where development, security, and operations (DevSecOps) comes into play. It injects security practices into the DevOps pipeline, allowing you to balance speed and safety when using AI code generation tools. Instead of relying solely on AI for security checks — which is like having a single line of defense against threats — you employ a multi-layered, defense-in-depth strategy for more comprehensive and robust coverage.
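As a minimal sketch of one such layer, the script below wraps Bandit, an open-source security linter for Python, as a gate in a hypothetical pipeline step. The source directory and the fail-on-high-severity policy are assumptions to adapt to your own pipeline.

```python
import json
import subprocess
import sys

def bandit_gate(src_dir: str = "src") -> bool:
    # Run Bandit over the source tree, including any AI-generated
    # modules, and collect its findings as JSON.
    result = subprocess.run(
        ["bandit", "-r", src_dir, "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    findings = json.loads(result.stdout)["results"]
    # Fail the stage on any high-severity finding. The threshold is
    # a policy choice for this sketch, not a Bandit default.
    high = [f for f in findings if f["issue_severity"] == "HIGH"]
    for f in high:
        print(f"{f['filename']}:{f['line_number']}: {f['issue_text']}")
    return not high

if __name__ == "__main__":
    sys.exit(0 if bandit_gate() else 1)
```

Because the gate runs on every commit, a vulnerable suggestion that slips past a human reviewer still has to get past the scanner before it can ship.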

But what does this look like in reality?

Real-time security audits can be tailored to scrutinize AI-generated code as it’s produced. Automated security testing can be infused into the DevSecOps pipeline to ensure that the AI-generated code is as secure as possible before deployment.
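One way to scrutinize suggestions as they are produced is a lightweight screen that runs before generated code is even accepted into the repository. The sketch below uses Python’s built-in ast module to flag a few dangerous constructs; the construct list is illustrative, not exhaustive, and a real deployment would pair it with a full scanner.

```python
import ast

RISKY_CALLS = {"eval", "exec"}

def flag_risky_constructs(source: str) -> list[str]:
    """Flag a few dangerous patterns in a code snippet before review."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        # Direct eval()/exec() calls, a common code-injection vector.
        if isinstance(node.func, ast.Name) and node.func.id in RISKY_CALLS:
            warnings.append(f"line {node.lineno}: call to {node.func.id}()")
        # Any call passing shell=True (e.g., subprocess.run), a common
        # command-injection vector.
        for kw in node.keywords:
            if (kw.arg == "shell" and isinstance(kw.value, ast.Constant)
                    and kw.value.value is True):
                warnings.append(f"line {node.lineno}: shell=True call")
    return warnings

# Screening a hypothetical assistant suggestion before accepting it.
suggestion = "import subprocess\nsubprocess.run(cmd, shell=True)"
print(flag_risky_constructs(suggestion))  # ['line 2: shell=True call']
```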

DevSecOps offers mechanisms for ensuring the quality and reliability of AI-generated code. Code reviews can be adapted to focus on the unique challenges posed by this type of code. Continuous monitoring and logging can track the behavior of deployed AI-generated code, providing insights into any potential issues that may arise post-deployment.
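Tracking that behavior starts with knowing which functions came from an assistant in the first place. The decorator below is a hypothetical sketch of provenance tagging: it marks a function as AI-generated and logs its runtime failures so post-deployment triage can separate assistant-written code from human-written code. The tag and logger names are assumptions.

```python
import functools
import logging

logger = logging.getLogger("ai_code_monitor")

def ai_generated(model: str):
    """Mark a function as AI-generated and log its runtime failures."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                # Record provenance with the failure so monitoring can
                # attribute incidents to AI-generated code specifically.
                logger.exception(
                    "AI-generated function %s (model=%s) failed",
                    func.__name__, model,
                )
                raise
        return wrapper
    return decorator

@ai_generated(model="copilot")
def parse_order_total(payload: dict) -> float:
    # An illustrative assistant-written function under monitoring.
    return float(payload["total"])
```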

DevSecOps can also enforce ethical coding practices. For instance, it can ensure that AI-generated code doesn’t accidentally introduce biased algorithms. Additionally, DevSecOps can help your organization comply with data privacy regulations, reducing the risk of legal complications.
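What enforcement looks like depends on the regulation, but even a small guardrail helps. As one illustration, the sketch below masks a few assumed personally identifiable fields before a record is logged; the field list and redaction rule are placeholders for whatever your compliance requirements actually specify.

```python
# Illustrative field list; a real one comes from your compliance policy.
PII_FIELDS = {"email", "ssn", "phone"}

def redact(record: dict) -> dict:
    """Return a copy of the record with PII fields masked for logging."""
    return {k: ("[REDACTED]" if k in PII_FIELDS else v)
            for k, v in record.items()}

print(redact({"user_id": 42, "email": "dev@example.com", "total": 99.5}))
# {'user_id': 42, 'email': '[REDACTED]', 'total': 99.5}
```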

And remember, DevSecOps isn’t just about tools and processes; it’s also about people. You must continuously train your developers in secure coding practices. Being aware of the latest security threats and how to counteract them is crucial for a robust DevSecOps strategy.

The future of AI coding assistants — with extra “assistance” on the side

The industry forecasts are telling.

Gartner predicts that 80% of code will be AI-generated by 2025. Meanwhile, Emergen Research projects the global DevSecOps market will reach $23.42 billion in 2028 — up from $2.55 billion in 2020 — firmly cementing it as the future of software development.

Combining the two isn’t optional for IT decision-makers — it’s a strategic imperative. And SHI can help.

As a leader in DevSecOps, we offer solutions that integrate security into every phase of the development lifecycle. But our process isn’t just about finding and fixing vulnerabilities; it’s about creating a secure culture that involves everyone from the development team to the C-suite. Our holistic approach proves you don’t have to sacrifice speed for security.

The future of secure, efficient software development hinges on the synergy of AI-generated code and DevSecOps. The key is recognizing that AI coding assistants should complement, not replace, human oversight and judgment, especially where critical code is concerned.

By augmenting developers with the power of AI while instilling a culture of security, you can realize tremendous gains in developer velocity without compromising on quality or safety.

Ready to learn how to use AI-generated code to speed up your development lifecycle while maintaining strong DevSecOps principles?
Reach out to us today.

Connect with SHI