AI security risks are introduced through the implementation and/or use of AI technology. These include malicious cyberattacks initiated by threat actors and vulnerabilities stemming from platform and/or user behavior.
The Open Worldwide Application Security Project (OWASP) has identified a series of vulnerabilities pertaining to AI built on large language models (LLMs). These include the following:
These vulnerabilities can be condensed and simplified further into the following core categories:
Because AI is largely dependent on available data and user input, it is increasingly targeted by threat actors to breach digital defenses and siphon sensitive information. A recent Gartner® survey revealed the five most cited emerging risks of Q1 2024, and AI-related risks took the top two spots in the form of AI-enhanced malicious attacks and AI-assisted misinformation. As Gartner notes, AI enhancement can “facilitate phishing and social engineering, which enables better intrusion, increased credibility and more damaging attacks.”
Rogue AI occurs when an AI system is misaligned with the user's goals. This misalignment can be accidental, such as when appropriate guardrails fail. It can also be intentional, in which case threat actors may seek to subvert a target's AI system or its use, or they may attempt to install maliciously aligned AI models within an environment.
Fraud automation is the creation of synthetic text, audio, and/or video content that exploits business processes, whether through phishing, business email compromise (BEC), or deepfake video and audio. With AI, fraud automation can scale easily.
AI systems are data-reliant. Therefore, the data used in AI systems, as well as the live data they touch, must comply with all privacy and fair use regulations, hence the need for proactive and effective data governance that helps to minimize risk.
The most critical vulnerabilities related to LLMs are listed in the OWASP Top 10 for LLM Applications as:
In addition, summaries for each of these vulnerabilities can be found on the OWASP website.
Generative AI (GenAI) makes use of available past and present data to assist users. Therefore, for tools that require prompting, it’s best to be mindful and proactive about what you put into the prompt field. Some tools allow for users to opt out of data collection, such as ChatGPT’s option to turn off chat history. Depending on the AI governance and usage policies enforced by the industry regulator in question, preventative measures and/or behaviors like these may be a requirement for maintaining compliance.
Inserting financial information, confidential specifics on yet-to-be-released software, personally identifiable information (PII) such as personal addresses and contact details, and/or other sensitive data means that information becomes freely accessible to the AI application. This data is at risk of being manipulated, shared with other users in recommendations the tool makes in response to similar queries, and/or stolen by threat actors if the AI's protection measures are breached. This is a particular risk when using generative AI tools to assist with ideation or to quickly compile large quantities of data, especially if sufficient encryption and security measures aren't in place.
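As a precaution, some teams screen prompts for obvious PII before anything is sent to an external tool. The sketch below shows the idea using a few regular expressions; the pattern set and the redact_prompt helper are illustrative assumptions rather than part of any vendor SDK, and production systems would typically rely on dedicated DLP or PII-detection tooling instead.

```python
import re

# Illustrative patterns only; a handful of regexes is not a substitute for
# dedicated PII-detection or DLP tooling.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely PII with placeholders and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Invoice issue: contact jane.doe@example.com or 555-867-5309."
    cleaned, found = redact_prompt(raw)
    print(cleaned)  # placeholders instead of the original contact details
    print(found)    # ['email', 'us_phone']
```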
As a form of generative AI that delivers text-based responses to user prompts, ChatGPT can be manipulated by threat actors to help disguise and/or strengthen their phishing attempts. This may include drafting phishing emails by leveraging writing samples from the targeted organization or individual, as well as correcting typos, grammar, and language to appear more convincing. Alternatively, the platform itself may be targeted to gain access to, and potentially misuse, user data; prompt injection and jailbreaking are two routes to user data theft and/or breaches.
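Untrusted content, such as an email body or a scraped web page, is a common carrier for prompt injection. The sketch below shows a naive keyword-based screen applied before that content is placed into an LLM context; the marker list and function names are assumptions for illustration, and keyword matching alone is easily bypassed, so treat this as a demonstration of the concept rather than a defense.

```python
# Naive heuristic screen for prompt-injection attempts in untrusted text
# before it is placed into an LLM context window. Keyword matching like this
# is easily evaded; real defenses layer input isolation, output filtering,
# and least-privilege access for the model.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "ignore all prior instructions",
    "disregard the system prompt",
    "reveal your system prompt",
)

def looks_like_injection(untrusted_text: str) -> bool:
    lowered = untrusted_text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def build_messages(system_prompt: str, untrusted_text: str) -> list[dict]:
    """Quarantine suspicious content instead of passing it through verbatim."""
    if looks_like_injection(untrusted_text):
        untrusted_text = "[content withheld: possible prompt injection]"
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": untrusted_text},
    ]
```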
There are also security risks stemming from use that don't directly involve threat actors. For example, the information ChatGPT receives from you may be leveraged to train LLMs. There is also the risk of insufficient data encryption, as demonstrated by the ChatGPT macOS app initially launching with user chats stored as plaintext.
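One way to mitigate the plaintext-storage problem in your own tooling is to encrypt transcripts before they ever touch disk. The sketch below uses the third-party cryptography package to do this; the file path and in-memory key handling are simplified assumptions (a real application would keep the key in an OS keychain or secrets manager), and it illustrates the general pattern rather than how the ChatGPT app itself was fixed.

```python
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

# For illustration only: a real application would keep this key in the OS
# keychain or a secrets manager, never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

def save_transcript(path: Path, transcript: str) -> None:
    """Write the chat transcript to disk encrypted, never as plaintext."""
    path.write_bytes(cipher.encrypt(transcript.encode("utf-8")))

def load_transcript(path: Path) -> str:
    return cipher.decrypt(path.read_bytes()).decode("utf-8")

if __name__ == "__main__":
    p = Path("chat_history.bin")
    save_transcript(p, "user: summarize the Q3 roadmap\nassistant: ...")
    print(load_transcript(p))
```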
The OpenAI API itself has the potential to be targeted by cybercriminals. Although it is SOC 2 compliant and undergoes regular penetration testing, your risk is never entirely eliminated since cyber threats are constantly evolving. A recent Soft Kraft article explores OpenAI data security risks in comprehensive detail, revealing five of particular interest to enterprise users:
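For teams calling the API directly from their own code, two baseline hygiene measures are keeping the key out of source control and minimizing what leaves the network. The sketch below illustrates both with the OpenAI Python SDK; the model name and the redaction regex are placeholder assumptions, not recommendations from OpenAI or Soft Kraft.

```python
import os
import re

from openai import OpenAI  # pip install openai

# Basic hygiene: the key comes from the environment (or a secrets manager),
# never from a string literal checked into source control.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("Set OPENAI_API_KEY outside of source control.")

client = OpenAI(api_key=api_key)

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def ask(question: str) -> str:
    # Minimize what leaves your network: strip obvious PII before sending.
    sanitized = EMAIL.sub("[REDACTED EMAIL]", question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; choose per your policy
        messages=[{"role": "user", "content": sanitized}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Draft a polite follow-up to alex@example.com about the invoice."))
```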
With support for Microsoft 365 applications, Microsoft Copilot is readily available to users. Moreover, at the hardware level, the latest Copilot+ branded PCs ship with dedicated physical Copilot keys to encourage even quicker user input. These streamlined access measures may introduce security risks if sensitive information is made available to Copilot, just as with other generative AI tools. If permissions are not correctly set, or if AI-generated documents don't have the proper privacy settings enabled, you may also find yourself facing confidential data leaks and/or breaches; the same applies to user access management. Lastly, attacks on the platform itself could enable threat actors to modify how it accesses and shares your data.
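Before enabling an assistant over shared content, many organizations audit which documents are exposed more broadly than intended. The sketch below illustrates that kind of pre-flight check over a hypothetical document inventory; the Document record, sharing scopes, and allow-list are assumptions for illustration, and in a real Microsoft 365 tenant this information would come from Microsoft Graph or SharePoint admin reporting rather than a hard-coded list.

```python
from dataclasses import dataclass

# Hypothetical inventory record; in a real tenant this data would come from
# Microsoft Graph / SharePoint admin reports rather than a hard-coded list.
@dataclass
class Document:
    name: str
    sharing_scope: str   # e.g. "private", "team", "organization", "anyone"
    contains_sensitive: bool

ALLOWED_SCOPES_FOR_AI = {"private", "team"}

def flag_oversharing(docs: list[Document]) -> list[Document]:
    """Return documents that should be locked down before an AI assistant
    is allowed to index or summarize them."""
    return [
        d for d in docs
        if d.contains_sensitive and d.sharing_scope not in ALLOWED_SCOPES_FOR_AI
    ]

if __name__ == "__main__":
    inventory = [
        Document("q4-board-deck.pptx", "organization", True),
        Document("lunch-menu.docx", "anyone", False),
        Document("salary-bands.xlsx", "anyone", True),
    ]
    for doc in flag_oversharing(inventory):
        print(f"Review sharing on: {doc.name} (scope: {doc.sharing_scope})")
```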