Be cautious, not alarmed, about GenAI programming code
In the digital age, software has become the driving force behind the world, with GenAI tools emerging as a major new force in how that software is created. This evolution of technology has been a complex journey spanning more than seven decades.
Artificial Intelligence (AI) was first conceptualised in the mid-20th century, with Alan Turing's introduction of the "Turing Test" in 1950. This influential framework for assessing machine intelligence marked the beginning of AI research. The field officially took shape at the Dartmouth Conference in 1956, signifying its formal inception.
Throughout the 1960s and 1970s, early AI research focused on symbolic reasoning, rule-based systems, and simple problem-solving programs. However, limited computational resources and overly ambitious expectations led to several "AI winters."
The 1980s saw the rise of expert systems, which encoded specialized knowledge in certain domains. Although these systems garnered commercial interest, they struggled with scalability and learning adaptability.
The 1990s and 2000s witnessed a shift towards machine learning approaches, particularly with algorithms that could learn from data. The increased availability of powerful computers enabled applications in speech recognition, computer vision, and natural language processing.
The resurgence of AI in the 2010s and beyond has been driven by deep learning, a subset of machine learning involving multilayer neural networks. This era has been marked by breakthroughs in image recognition, game playing (e.g., AlphaGo), autonomous vehicles, and AI-powered assistants. The integration of AI with big data and cloud computing has further accelerated developments.
Recent years have seen AI research expanding into generative models, reinforcement learning, and hybrid systems combining symbolic reasoning and neural networks. While the timeline of AI spans several decades and encompasses many subfields, Alan Turing's foundational work remains a key starting point in the history of AI.
As GenAI tools generate software code that ends up in software products, that code requires the same rigorous testing as human-written software to ensure its security and quality. Static analysis, dynamic analysis, and software composition analysis (SCA) of Open Source Software (OSS) should be mandatory for GenAI code, as they are for any software.
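To make the point concrete, here is a minimal, hypothetical illustration of the kind of flaw static analysis tools routinely flag in generated code: building a SQL query by string interpolation, which opens an injection hole, alongside the parameterized form that closes it. The function names and table are invented for this sketch.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Pattern static analyzers flag: untrusted input interpolated into SQL.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver handles the value safely.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A crafted input that subverts the unsafe query but not the safe one.
payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # returns every row: [(1,)]
print(find_user_safe(conn, payload))    # returns nothing: []
```

Both functions look plausible in isolation, which is exactly why automated scanning of generated code matters: the difference between them is invisible without tooling or careful review.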
The annual "Open Source Security and Risk Analysis" report by the Synopsys Cybersecurity Research Center shows that 96% of codebases contain OSS, 84% have at least one vulnerability, and 48% contain at least one high-risk vulnerability. Knowing the provenance of GenAI code is critical, including who made it, who maintains it, what other components it needs to function, any known vulnerabilities in it, and what licensing provisions govern its use.
While GenAI offers multiple benefits, it is best used for routine and repetitive coding tasks, leaving the bespoke and intricate segments of an application to humans. There is also a supply-chain risk: criminal hackers can inject malicious code samples into the training data fed to a GenAI tool, leading it to generate code infected with malware. An SCA tool can surface the provenance of GenAI code, so developers know whether they need to fix something or comply with licensing terms.
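The core of what an SCA tool does can be sketched as mapping each declared component to known advisories and license terms. Everything below is illustrative: the component names, the advisory entry, and the license data are invented placeholders, not a real vulnerability feed.

```python
# Illustrative advisory and license data -- not a real feed.
KNOWN_ADVISORIES = {
    ("examplelib", "1.0.2"): ["example advisory (high): remote code execution"],
}
LICENSES = {
    "examplelib": "GPL-3.0",  # copyleft: may impose obligations on your product
    "otherlib": "MIT",
}

def audit(manifest):
    """Report known advisories and license for each (name, version) pair."""
    report = []
    for name, version in manifest:
        report.append({
            "component": f"{name}=={version}",
            "advisories": KNOWN_ADVISORIES.get((name, version), []),
            "license": LICENSES.get(name, "unknown"),
        })
    return report

for entry in audit([("examplelib", "1.0.2"), ("otherlib", "2.3.0")]):
    print(entry)
```

Real SCA products do this at scale against curated vulnerability databases and full dependency trees, but the principle is the same: you cannot assess GenAI code without knowing what components it pulls in and under what terms.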
The annual spend on application and data security is forecast to increase by more than 15% due to the use of GenAI tools, according to Gartner. For more information on managing GenAI code, visit www.synopsys.com/software.
AI is now everywhere, doing tasks ranging from diagnosing illnesses to designing homes and creating software. As we continue to navigate the evolving landscape of AI, it is essential to balance its benefits with the need for security, quality, and ethical considerations.
- Effective risk management in the GenAI era demands close attention to cybersecurity, since malware within generated code is a significant concern.
- The rise of GenAI tools has amplified the importance of understanding software vulnerabilities, especially in open-source software, to ensure the security and quality of AI-generated products.
- As AI spreads into software development, practices such as static analysis, dynamic analysis, and software composition analysis (SCA) of open-source software become essential to mitigating cybersecurity threats like malware.