ChatGPT is a Large Language Model (LLM) based on the GPT-3.5 architecture, built and trained by OpenAI. Its advanced deep-learning (DL) algorithms process natural language, allowing ChatGPT to generate relevant, human-like responses to textual prompts.
One of the most exciting aspects of ChatGPT is its ability to generate code snippets and even entire software programs automatically. Upon receiving a prompt, it can return code that satisfies the included requirements. Then, a human developer can further optimize and refactor the code.
Because of this convenience, ChatGPT and other AI tools are increasingly popular, especially for repetitive coding tasks and for work involving complex algorithms. You can save significant time using ChatGPT to generate code for data processing tasks, machine learning (ML) algorithms, and even video game engines. That efficiency appeals to strapped-for-time developers.
However, AI-generated code needs improvement. ChatGPT lacks awareness of your development context and security requirements, so an unaware user may adopt AI-generated code containing severe security vulnerabilities and introduce those flaws into production environments. For this reason, developers should treat ChatGPT and other AI tools as only a supplementary part of their arsenal.
This article explores the cybersecurity implications of AI-generated code and the significant impact of the rise of ChatGPT.
How ChatGPT impacts cybersecurity
Because ChatGPT generates human-like responses to textual prompts, security experts have already sounded the cybersecurity alarm. Their concerns include the potentially malicious use of ChatGPT. Some reports highlight that scammers could design prompts to get ChatGPT to aid in writing phishing emails.
In the example cited above, concerns over ChatGPT’s security implications focus on how it’s used: how malicious actors may turn generated content to their advantage. This focus on bad actors aligns with typical approaches to cybersecurity. But as all developers know, maintaining application security also requires identifying and resolving less obvious vulnerabilities. This is where using ChatGPT for code generation becomes risky: malicious actors can exploit the vulnerabilities that AI-generated code introduces.
Relying on ChatGPT-produced code means potentially deploying insecure code to a production application and unintentionally introducing vulnerabilities. This is particularly troubling for users with little or incomplete knowledge of the domain the AI-generated code covers. In a 2021 study, researchers found that GitHub Copilot, a code-generating predecessor to ChatGPT, produced code with security issues around 40 percent of the time.
How does ChatGPT handle these security concerns?
While ChatGPT can generate code snippets and even entire software programs, the OpenAI team has put parameters and guardrails in place to prevent ChatGPT from creating actively malicious code.
One key mechanism is a set of filters that check prompt content. These filters detect specific phrases or keywords that may indicate a malicious prompt. For example, if a prompt contains a phrase like “create a piece of malware,” ChatGPT will state that it can’t fulfill the request.
In addition to these filters, OpenAI has trained ChatGPT to increase the accuracy and quality of its responses. OpenAI first trained ChatGPT on a large corpus of text and code. Then, human reviewers evaluated and refined its responses, rewarding the system for more accurate output. This process, known as Reinforcement Learning from Human Feedback (RLHF), helps ChatGPT produce better textual and code-based responses.
Developers: Don’t just copy and paste!
Even with OpenAI’s security efforts, ChatGPT isn’t infallible. Malicious actors could still use ChatGPT to produce potentially harmful code by fine-tuning their prompts. For example, they could have ChatGPT create individual segments of code that don’t have a malicious purpose alone but act as malware when combined. Therefore, relying solely on ChatGPT to generate code is ill-advised and high-risk, as it can introduce security vulnerabilities into your applications without your knowledge.
While ChatGPT can generate functional code that meets the requirements of a given prompt, it often produces bare-bones code without basic security features. For example, ChatGPT-generated code may lack input validation, rate limiting, or even core application programming interface (API) security features such as authentication and authorization. This could create vulnerabilities that attackers can exploit to extract sensitive user information or perform denial-of-service (DoS) attacks.
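To make this concrete, here is a minimal, hypothetical sketch (using Python and Flask; the endpoint and helper are illustrative, not actual ChatGPT output) of the kind of bare-bones endpoint an AI tool might hand back. It satisfies the prompt, but it ships with none of the protections above.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def load_profile(user_id):
    # Placeholder for a database lookup; in generated code, this is often
    # where unvalidated input reaches the data layer.
    return {"name": "example user"}

@app.route("/users/<user_id>")
def get_user(user_id):
    # No authentication or authorization: any caller can fetch any record.
    # No input validation: user_id is accepted as-is, whatever it contains.
    # No rate limiting: nothing stops an attacker from hammering this route.
    return jsonify({"user_id": user_id, "profile": load_profile(user_id)})

if __name__ == "__main__":
    app.run()
```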
This risk will only grow as developers and organizations adopt tools like ChatGPT to cut corners, which may lead to a swift proliferation of vulnerable code.
ChatGPT occasionally advises users that the code it outputs lacks certain security features, as shown in the image below. However, this message may not always appear. And even when it does, some users may ignore it.
With these security challenges, you might wonder if you can—or should—use ChatGPT and similar programs to generate code. The answer is you can, but you should take extra precautions.
How to use AI-generated code securely
While ChatGPT may sometimes remind you that its generated code should undergo extra scrutiny, you should remember that you are ultimately responsible for the code you use.
In addition to following general security best practices, developers who use ChatGPT and AI-generated code should:
- Treat all code generated by ChatGPT as if it contains vulnerabilities. Don't assume it's safe just because a highly trained AI generated it.
- Supplement your use of ChatGPT with manual coding. Don't just rely on ChatGPT.
- Perform rigorous security testing on your applications (see the test sketch after this list).
- Have the code reviewed by peers who may be able to spot security issues.
- Consult relevant documentation—especially if you're unfamiliar with the language or library. Always do your research and don’t assume AI knows best.
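To ground the security-testing point above, here is a small illustration (assuming pytest and a Flask application object in a hypothetical myapp.py module) of security-focused tests worth running against generated endpoints. Tests like these fail loudly when authentication or input validation is missing.

```python
import pytest

from myapp import app  # assumption: your Flask application object lives here


@pytest.fixture
def client():
    app.config["TESTING"] = True
    with app.test_client() as client:
        yield client


def test_rejects_unauthenticated_requests(client):
    # AI-generated endpoints often skip authentication entirely;
    # this test catches that gap before it reaches production.
    assert client.get("/users/123").status_code == 401


def test_rejects_malformed_input(client):
    # Malformed IDs should be rejected before they reach the data layer.
    assert client.get("/users/not-a-valid-id").status_code in (400, 404)
```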
Using ChatGPT to improve security
While relying solely on ChatGPT to generate code can potentially introduce security vulnerabilities in your applications, ChatGPT can also add security features and review code for security vulnerabilities when prompted.
For example, if you want to add a new security feature to your application but aren’t sure how best to implement it, you could prompt ChatGPT to generate code that meets your requirements. ChatGPT could then generate code that incorporates security best practices, such as adding authorization, input validation, or rate limiting to boilerplate API code.
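As a rough sketch of what that hardening might look like (illustrative code rather than actual ChatGPT output; the shared token and the numeric ID format are assumptions), here is the earlier style of endpoint with simple bearer-token authorization and input validation added:

```python
import re

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

API_TOKEN = "replace-with-a-real-secret"        # assumption: simple shared-token auth
USER_ID_PATTERN = re.compile(r"^[0-9]{1,10}$")  # assumption: numeric IDs only

@app.route("/users/<user_id>")
def get_user(user_id):
    # Authorization: reject requests that lack the expected bearer token.
    auth_header = request.headers.get("Authorization", "")
    if auth_header != f"Bearer {API_TOKEN}":
        abort(401)

    # Input validation: only accept IDs that match the expected format.
    if not USER_ID_PATTERN.match(user_id):
        abort(400)

    return jsonify({"user_id": user_id})

if __name__ == "__main__":
    app.run()
```

A production implementation would use a real identity provider and a dedicated rate limiter (for example, middleware or a reverse proxy), but the structure of the checks stays the same.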
Furthermore, developers can use ChatGPT to review existing code for security vulnerabilities. This is especially useful for organizations with large codebases looking for ways to identify and remediate security issues quickly. For instance, you could prompt ChatGPT to generate code that identifies and mitigates Structured Query Language (SQL) injection vulnerabilities, as shown in the image below.
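The heart of that kind of fix is replacing string-built queries with parameterized queries. Below is a minimal sketch (with illustrative table and column names, not ChatGPT’s actual output) showing the vulnerable pattern alongside the mitigated one:

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is concatenated directly into the SQL,
    # so input like "' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Mitigated pattern: the driver binds the value separately, so the same
    # input is treated as plain data rather than as SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```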
While developers should not rely on ChatGPT as their sole source of security expertise, it can be useful for those looking to add security features to their applications or review existing code for vulnerabilities. When combined with manual coding and rigorous security testing, ChatGPT can help you build more secure and resilient applications.
Remember that ChatGPT can't guarantee security even when you've prompted it to secure the code. Just as you should only use AI-generated code supplementally, you should use ChatGPT as a security-boosting resource in addition to—not in place of—other security measures.
Conclusion
ChatGPT is a powerful AI language model that creates great opportunities for efficiency. While it’s appealingly easy to use and reduces developer labor, it often produces only bare-bones code snippets with few security considerations.
Though OpenAI has taken steps to mitigate these issues, these measures aren't foolproof. Therefore, you should only use ChatGPT to supplement your development—whether to implement security best practices or generate the code itself. Always ensure you've thoroughly tested and reviewed any ChatGPT-generated code before deploying it into production applications.
The security implications of ChatGPT and the code it generates depend largely on the developers and organizations using it. But one thing is for certain: you should implement any AI-generated code with the utmost care and caution.