Transforming code scanning and threat detection with GenAI


In this Help Net Security interview, Stuart McClure, CEO of Qwiet AI, discusses the evolution of code scanning practices, highlighting the shift from reactive fixes to proactive risk management.

McClure also shares his perspective on the future of AI-driven code scanning, emphasizing the potential of machine learning in threat detection and remediation.


How have you observed code scanning practices evolve in recent years, especially with cloud adoption and DevSecOps?

Code scanning has come a long way, and seeing how things have shifted is fascinating. In the beginning, we were often playing catch-up, only able to fix issues after they surfaced, usually because a hacker had already exploited the vulnerability and shared the data dump with their friends. Now, we’re much more competent and proactive in finding and fixing issues and assessing the holistic risk.

What we have today makes that world look like the Pleistocene era. We’ve got automated checkpoints everywhere throughout the code lifecycle, from the moment developers write code in their editors (IDEs, or integrated development environments), to when they push to the cloud development environment using Git, and all the way through the integration, build, test, and deployment pipeline.
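
To make those checkpoints concrete, here is a minimal sketch of a pre-merge gate that could run as a pipeline stage. The `scan-cli` command, its JSON output shape, and the severity threshold are hypothetical placeholders, not any particular product’s interface.

```python
import json
import subprocess
import sys

# Hypothetical severity ladder and merge-blocking threshold.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}
FAIL_AT = "high"  # block the merge on high or critical findings

def run_scanner(path: str) -> list[dict]:
    # Placeholder invocation; substitute your organization's actual scanner CLI.
    result = subprocess.run(
        ["scan-cli", "--format", "json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout).get("findings", [])

def main() -> int:
    findings = run_scanner(".")
    blocking = [
        f for f in findings
        if SEVERITY_ORDER.get(f.get("severity", "low"), 0) >= SEVERITY_ORDER[FAIL_AT]
    ]
    for f in blocking:
        print(f"{f['file']}:{f['line']}  [{f['severity']}]  {f['rule']}")
    return 1 if blocking else 0  # a non-zero exit code fails the pipeline stage

if __name__ == "__main__":
    sys.exit(main())
```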

The complexity of software components and stacks can sometimes be mind-bending, so it is imperative to connect all these dots in as seamless and hands-free a way as possible. For example, if we spot a vulnerability in a third-party software library or component, we need to understand how that might impact the code that’s calling it.
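
That impact question is essentially a reachability problem. Below is a minimal sketch, assuming the call graph has already been extracted into a simple caller-to-callees mapping, of checking whether a vulnerable library symbol can be reached from an application entry point.

```python
from collections import deque

def is_reachable(call_graph: dict[str, set[str]],
                 entry_points: set[str],
                 vulnerable_symbol: str) -> bool:
    """Breadth-first search over a caller -> callees graph: can any
    application entry point eventually call the vulnerable library symbol?"""
    seen, queue = set(entry_points), deque(entry_points)
    while queue:
        fn = queue.popleft()
        if fn == vulnerable_symbol:
            return True
        for callee in call_graph.get(fn, set()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# Toy example: the app calls the library's parse(), which calls the
# vulnerable deserialize() internally, so the finding is reachable.
graph = {
    "app.handle_request": {"lib.parse", "app.log"},
    "lib.parse": {"lib.deserialize"},
}
print(is_reachable(graph, {"app.handle_request"}, "lib.deserialize"))  # True
```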

The key to running an efficient secure software development lifecycle (SSDLC) program is to automate basic or repetitive tasks and to track a vulnerability’s lifecycle completely, from detection through triage and beyond: from womb to tomb.
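
One way to picture womb-to-tomb tracking is to model the lifecycle stages explicitly. The stage names and fields below are illustrative, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Stage(str, Enum):
    DETECTED = "detected"
    TRIAGED = "triaged"
    FIX_PROPOSED = "fix_proposed"
    FIX_VERIFIED = "fix_verified"
    CLOSED = "closed"

@dataclass
class VulnerabilityRecord:
    finding_id: str
    component: str
    severity: str
    stage: Stage = Stage.DETECTED
    history: list[tuple[Stage, datetime]] = field(default_factory=list)

    def advance(self, stage: Stage) -> None:
        """Record every stage transition so the full lifecycle stays auditable."""
        self.stage = stage
        self.history.append((stage, datetime.now(timezone.utc)))

# Example: a finding moves from detection through triage to a proposed fix.
record = VulnerabilityRecord("VULN-101", "payments-service", "high")
record.advance(Stage.TRIAGED)
record.advance(Stage.FIX_PROPOSED)
```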

What are some significant challenges organizations face when adopting code scanning tools at scale, and how can these challenges be overcome?

Most legacy code scanning tools are painfully slow, often taking tens of hours to scan a single modern application! And they frequently generate endless alerts, most of which are nothing but false positives. Imagine chasing down phantoms and red herrings all day, with 6 or 7 out of 10 findings being false flags. Exhausting. So now our developers, who are already swamped with actual coding work, have to triage (and typically in a crisis) to figure out which alerts matter.

Even after they’ve sorted through all that noise and identified the real issues, they’ve got to create tickets and track everything they find. That work is usually bolted onto engineering’s existing responsibilities rather than tied to the bonuses or recognition that would incentivize the desired behavior. If you’re a developer with a mountain of feature requests and bug fixes on your plate and you then receive a tsunami of security tickets that nobody’s incentivized to care about… guess which ones get pushed to the bottom of the pile?

Generative AI-based agentic workflows are giving cybersecurity and engineering teams alike a glimpse of the light at the end of the tunnel and a reason to believe that a true SSDLC is on the near-term horizon. We’re already seeing some promising changes in the market today. Imagine having an intelligent assistant that can automatically track issues, figure out which ones matter most, suggest fixes, and then test and validate those fixes, all at the speed of computing! We still need our developers to oversee things and make the final calls, but the software agent shoulders most of the burden of running an efficient program. Human + AI is greater than AI alone.
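
As a rough illustration of that kind of agentic workflow, the sketch below stubs out the scanner, model, test runner, and ticketing pieces. Every helper is a placeholder rather than a specific vendor’s API, and a human still reviews anything the agent queues up.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    finding_id: str
    description: str
    risk_score: float          # assumed output of a predictive model
    likely_false_positive: bool

def suggest_patch(finding: Finding) -> str:
    # Stub: in practice a generative model would propose a concrete code change.
    return f"# candidate patch for {finding.finding_id}"

def tests_pass(patch: str) -> bool:
    # Stub: in practice the agent would apply the patch and run the test suite.
    return True

def triage(findings: list[Finding]) -> list[tuple[Finding, str]]:
    """Rank findings by model-assessed risk, drop likely false positives,
    and return (finding, validated patch) pairs queued for human review."""
    queue = []
    for f in sorted(findings, key=lambda f: f.risk_score, reverse=True):
        if f.likely_false_positive:
            continue  # auto-close noise instead of burying a developer in it
        patch = suggest_patch(f)
        if tests_pass(patch):
            queue.append((f, patch))
    return queue

findings = [
    Finding("VULN-7", "SQL injection in report endpoint", 0.92, False),
    Finding("VULN-8", "Unreachable XML parser warning", 0.10, True),
]
for f, patch in triage(findings):
    print(f.finding_id, "->", patch)
```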

With a wide range of static and dynamic scanning tools available, what critical factors should influence a CISO’s selection of code scanning tools?

In the age of artificial intelligence, the number one critical factor to consider is AI ancestry. Do the tools and products come from AI-first companies and platforms? If not, move on. If yes, double-click to understand the foundational principles that have governed their roadmaps. How have they implemented AI throughout the application security workflow, and which sides of the AI landscape have they embraced or shied away from? Both predictive and generative AI models and workflows are essential to being an AI-first application security company, and those without this pedigree will thrash and struggle to evolve into the modern AI solution set.

Second, consider the speeds and feeds: low latency (time to process), low maintenance costs (SaaS-based rather than on-prem), high throughput (enterprise-grade parallelization), and a single pane of glass (carrying context across all of these tools is key to running an effective program), among many others.

How can CISOs foster a culture of security-first coding among development teams, and what role do automated code reviews play in this?

The security program should be visible at the board and executive levels. Align yourself with the board member(s) who care and educate them thoroughly. Empower them to demand quantitative (along with qualitative) improvement metrics, and remind them of the inevitable risk the company is exposed to when security is ignored.

Another meaningful step a CISO can take is aligning incentives, rewards, and bonuses with sustaining a strong security posture.

How do you see AI and machine learning shaping the future of code scanning, especially with automated threat detection and remediation?

AI’s evolution in code scanning fundamentally reshapes our approach to security. Optimized generative AI LLMs (large language models) can assess millions of lines of code in seconds and pay attention to even the most subtle and nuanced patterns, finding the needle in the haystack that humans almost always miss.

Some of the most compelling developments are:

  • Contextual understanding: Modern AI models are becoming remarkably adept at understanding code in context, not just pattern-matching. They can grasp the semantic meaning of code blocks and their interrelationships, often catching subtle vulnerabilities that legacy static analyzers miss (a small sketch contrasting the two approaches follows this list).
  • Predictive analysis: Rather than only flagging known vulnerabilities, AI systems are getting better at predicting potential security weaknesses based on code structure and flow patterns, anticipating threats before they become exploitable.
  • Adaptive learning: Each new vulnerability discovery helps train these systems to become more sophisticated. They learn from real-world attack patterns and evolve their detection capabilities accordingly.
  • On the attacker’s side, AI-generated attack graphs are being built at compute speed to help the bad guys infiltrate systems and networks.

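
As a toy illustration of the contextual-understanding point above, the sketch below contrasts a pure regex rule, which flags every os.system call, with a context-aware AST check that only flags calls whose command is built from a variable. It is a teaching example, not a real analyzer.

```python
import ast
import re

SOURCE = '''
import os

def safe():
    os.system("ls -l")              # constant command

def risky(user_input):
    os.system("ls " + user_input)   # command built from a variable
'''

# 1) Pure pattern matching: both calls are flagged, one of them needlessly.
pattern_hits = re.findall(r"os\.system\(", SOURCE)
print("pattern-matching hits:", len(pattern_hits))  # 2

# 2) Context-aware check: only flag calls whose argument is not a literal.
contextual_hits = []
for node in ast.walk(ast.parse(SOURCE)):
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "system"
            and node.args
            and not isinstance(node.args[0], ast.Constant)):
        contextual_hits.append(node.lineno)
print("context-aware hits:", contextual_hits)  # only the risky() call
```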