Artificial Intelligence (AI) is a technology that empowers computers and machines to mimic human abilities such as learning, understanding, problem-solving, decision-making, creativity, and independent action.
Organizations use AI to help drive innovation, empower their teams, and streamline operations in a variety of ways. Depending on how it is implemented, AI capabilities are often bolstered with predictive analytics, machine learning (ML), and other functionalities. AI use cases include but are not limited to:
The hardware we use to drive innovation, streamline processes, and manage everyday operations is changing. Advanced architectures such as reduced instruction set computing (RISC) designs—most notably Arm-based processors—and complex instruction set computing (CISC) architectures like x86 are both playing critical roles in the computing industry. With Apple, Microsoft, Broadcom, Intel, and other companies investing heavily in AI-enabling technologies, we have entered the age of AI PCs. These systems are optimized to handle a wide range of AI-enabled tasks, including but not limited to voice recognition, natural language processing, and machine learning. AI-specific hardware accelerates many of these tasks on-device, enabling powerful AI inference and even training on everyday machines.
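As a loose illustration of on-device inference, the sketch below loads a local model with ONNX Runtime, which delegates execution to whatever accelerator is available (NPU, GPU, or CPU). The model file and input shape are hypothetical placeholders, not a reference to any specific product.

```python
# A minimal on-device inference sketch, assuming a local "model.onnx" file
# (hypothetical placeholder) and the onnxruntime package.
import numpy as np
import onnxruntime as ort

# ONNX Runtime selects from the available execution providers, e.g. an
# NPU/GPU-backed provider on an AI PC, falling back to CPU.
session = ort.InferenceSession("model.onnx", providers=ort.get_available_providers())

input_name = session.get_inputs()[0].name
sample = np.random.rand(1, 3, 224, 224).astype(np.float32)  # hypothetical image-sized input

outputs = session.run(None, {input_name: sample})
print(outputs[0].shape)
```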
To power deep learning and train AI models, organizations are leveraging the performance and expanded throughput offered by AI data centers. These facilities house large quantities of hardware, including graphics processing units (GPUs) and AI acceleration systems. As a recent Forbes article exploring their capabilities notes, these deliver “substantial computational power,” collectively consuming massive amounts of energy and even requiring state-of-the-art cooling solutions.
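To make the GPU's role concrete, here is a toy PyTorch training loop of the kind such facilities run at vastly larger scale; the model and data are synthetic stand-ins.

```python
# A toy GPU-backed training loop (synthetic model and data).
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Training on: {device}")

model = nn.Linear(128, 10).to(device)                 # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                               # stand-in training loop
    x = torch.randn(64, 128, device=device)           # synthetic batch
    y = torch.randint(0, 10, (64,), device=device)    # synthetic labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```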
Security operations centers (SOCs) can leverage AI to allocate resources more efficiently and mitigate risk. Through deep learning, automation, and other capabilities, they can accelerate risk identification and response, particularly when using a cybersecurity platform that consolidates solutions and integrates AI to streamline operations further.
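One common building block behind such capabilities is unsupervised anomaly detection. The hedged sketch below uses scikit-learn's IsolationForest on synthetic log-style features to flag an outlying event; real SOC telemetry, features, and thresholds would differ.

```python
# A hedged sketch of AI-assisted risk identification: an unsupervised
# anomaly detector flags unusual events from simple numeric log features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_minute, failed_logins, megabytes_transferred]
rng = np.random.default_rng(42)
normal_traffic = rng.normal(loc=[100, 1, 50], scale=[10, 1, 5], size=(500, 3))
suspicious = np.array([[900, 40, 800]])  # a clearly anomalous event

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

print(detector.predict(suspicious))  # -1 => flagged as anomalous
```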
With tools such as OpenAI's ChatGPT and Microsoft Copilot being easily accessible, threat actors are continually attempting to access sensitive data. In some cases, their aim is to target AI tools and manipulate their behavior so that they operate against their intended use. Key AI security risks include rogue AI, fraud automation, and insufficient data governance.
Organizations must not only keep pace with cybercriminals but get ahead of them by ensuring risk-aware and compliant adoption of AI technology. Developing a deeper understanding of AI security risks is a vital part of this strategy.
A risk-aware policy that provides guidance on correct AI use is an important point of reference for employees. Ensuring it is followed and kept up to date will help minimize the risk posed to your organization. Having the right policies and procedures in place is essential for maintaining compliance and effective data security. Exploring examples from federal and industry regulators and working with peers can help inform the drafting of your own AI policy.
As generative AI (GenAI) technology continues to advance, deepfakes are becoming increasingly convincing. With threat actors using them to manipulate individuals or groups into believing that generated images, videos, or text are authentic and trustworthy, they pose a substantial data security risk. Whether or not AI plays a role, the cybercriminal's intention—to mislead, steal, and/or defraud—remains the same.
Understanding how AI implementations function—including how they leverage and potentially retain data—helps to inform an effective cybersecurity response. As organizations continue to imagine and innovate with AI, malicious actors are adapting accordingly to take advantage of vulnerabilities. With the threat landscape constantly evolving in tandem with AI itself, organizations should strive to proactively secure their AI implementations.
In addition, if you are developing your own AI systems, and regardless of whether you train your own models, OWASP recommends the following:
Read the OWASP AI security overview for additional details and technical insights.
With GenAI leveraging ML capabilities for data analysis and creative output, new risks are emerging. “Machine learning data security must also consider data integrity in transit and during processing,” notes a Global Cybersecurity Alliance (GCA) article on ML data security. “Compromised data integrity can skew model outputs. It can lead to inaccurate or biased decisions with potentially far-reaching consequences.”
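As one hedged example of protecting data integrity in transit and at rest, a team might publish a SHA-256 digest alongside each dataset file and verify it before training; the file path and digest below are hypothetical placeholders.

```python
# Verify a dataset's integrity against a published SHA-256 digest before
# training; "training_data.csv" and the expected digest are placeholders.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large datasets don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "<digest published by the data provider>"
if sha256_of_file("training_data.csv") != expected:
    raise ValueError("Digest mismatch: possible tampering or corruption")
```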
Proactive steps are explored in detail within this article:
AI models are structures made up of an architecture and parameter values that allow a system to perform tasks such as making predictions or generating content, a process called inference. These tasks include answering queries, detecting patterns in data, recognizing behaviors, and more. AI models typically go through a training process to learn the parameter values that make inference effective.
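To make the architecture/parameters distinction concrete, here is a deliberately tiny sketch: the architecture is a straight line y = w·x + b, training fits the parameter values w and b, and inference applies them to an unseen input.

```python
# A minimal model: architecture y = w*x + b; training learns w and b;
# inference applies them to new inputs. Data is synthetic.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])   # roughly y = 2x + 1

w, b = np.polyfit(x, y, deg=1)        # "training": fit the parameter values

print(f"learned parameters: w={w:.2f}, b={b:.2f}")
print(f"inference for x=10: {w * 10 + b:.2f}")
```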
Depending on your organization’s needs, goals, compliance requirements, and budget—among other factors—a wide range of AI models may be under consideration for implementation. However, it’s important to note that every AI model carries its own inherent level of risk, and there are also different types of AI models to consider.
Much of the most widely implemented and established AI technology we have today is referred to as traditional or narrow AI. While it can adapt to user queries and/or needs, it can only perform predetermined tasks, often within one domain of expertise. Examples of narrow AI include text-based chatbots in customer support portals, virtual assistants such as Siri or Google Assistant, and language detection software with auto-translate capabilities.
According to IBM’s Data and AI Team, there are two functional categories of narrow AI:
As the term suggests, reactive machine AI can only make use of the information that you feed it in the present moment. It can actively engage with its environment and users but, unable to memorize what it receives, it cannot self-improve. Content recommendations built into streaming and social media platforms make use of reactive machine AI, as do tools designed to perform predictive analyses of real-time data.
Limited memory AI leverages past and presently available data to better assist you. The “limited” distinction refers to it being unable to hold onto your provided data indefinitely, essentially relying on short-term memory. The data that it can access, however, is leveraged to help continually optimize its performance and capabilities. In other words, its environment and your input help to train it on how best to respond. Virtual assistants fall under this category, for instance.
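The limited-memory idea can be pictured as a bounded context window: the assistant conditions its behavior on a short buffer of recent turns rather than an unbounded history. The toy sketch below, with placeholder reply logic, illustrates the mechanism.

```python
# A toy "limited memory" assistant: only the last `window` turns are kept.
from collections import deque

class LimitedMemoryAssistant:
    def __init__(self, window: int = 3):
        self.history = deque(maxlen=window)  # older turns fall out automatically

    def respond(self, user_input: str) -> str:
        self.history.append(user_input)
        # A real system would feed self.history to a model; this placeholder
        # just shows how much context is available.
        return f"(using {len(self.history)} recent turns) you said: {user_input}"

assistant = LimitedMemoryAssistant()
for turn in ["hi", "what is AI?", "and ML?", "thanks!"]:
    print(assistant.respond(turn))
```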
While narrow AI is used in a variety of implementations, frontier AI models—more commonly referred to as GenAI—are also receiving plenty of attention and investment. These are even more advanced, experimental, and future-facing AI models by design. As the term implies, GenAI is designed to generate content, either from prompt inputs or from existing data. Standout examples include GPT-4 and Google Gemini Ultra.
The Artificial Intelligence Index Report 2024 by Stanford University estimates that frontier AI training costs have reached “unprecedented levels,” with Google Gemini Ultra alone costing US $191 million. In addition, it states that industry is a significant driver of frontier AI research, producing 51 “notable machine learning models” in 2023 compared to 15 in academia. Yet, at the same time, 21 such models emerged from industry-academia collaborations. The report also notes that, despite declining private investment in 2022, GenAI funding has surged to US $25.2 billion, and “all major players […] reported substantial fundraising rounds.”
“Traditional AI excels at pattern recognition, while generative AI excels at pattern creation. Traditional AI can analyze data and tell you what it sees, but generative AI can use that same data to create something entirely new,” author Bernard Marr summarizes in The Difference Between Generative AI and Traditional AI: An Easy Explanation for Everyone (Forbes). “Both generative AI and traditional AI have significant roles to play in shaping our future, each unlocking unique possibilities. Embracing these advanced technologies will be key for businesses and individuals looking to stay ahead of the curve in our rapidly evolving digital landscape.”
Algorithm: A set of step-by-step instructions designed to solve a problem or perform a task. It defines a sequence of operations that can be executed by a computer.
Deep learning: An ML subset where algorithms, inspired by the structure and function of the human brain's neural networks, learn from large amounts of data. ‘Deep’ refers to the large number of layers in which these artificial neurons are organized. Deep learning excels in tasks like image and speech recognition, natural language processing, and more complex pattern recognition.
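A glimpse of what "many layers" looks like in code: the sketch below stacks a few fully connected layers in PyTorch, with toy sizes chosen arbitrarily.

```python
# The "deep" in deep learning: several stacked layers of artificial neurons.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # hidden layer 1
    nn.Linear(256, 128), nn.ReLU(),   # hidden layer 2
    nn.Linear(128, 10),               # output layer
)
print(model)
```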
AI agent: A system designed to perceive its environment and take actions to maximize its chances of achieving specific goals. It uses sensors to gather information and algorithms to make decisions, take actions, and evaluate the effect, often learning and adapting over time.
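The perceive-decide-act loop can be shown with a deliberately simple, thermostat-style agent; the sensor reading and decision rule below are illustrative only.

```python
# A toy agent loop: perceive the environment, decide, act toward a goal.
import random

GOAL_TEMP = 21.0

def perceive() -> float:
    return 18.0 + random.random() * 6.0   # synthetic sensor reading

def decide(temp: float) -> str:
    if temp < GOAL_TEMP:
        return "heat"
    return "cool" if temp > GOAL_TEMP else "hold"

for step in range(3):
    temp = perceive()
    print(f"sensed {temp:.1f} C -> action: {decide(temp)}")
```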
Deepfake: Content produced or manipulated using AI techniques, such as deep learning. It includes generated images, videos, and audio that convincingly simulate real-world elements, blurring the line between authenticity and simulation.
Large language models (LLMs): AI models with billions of parameters, such as GPT-4, that are trained on vast datasets to manipulate and generate human-like text. This enables various language-related tasks and applications. Transformers are currently the dominant architecture for LLMs.
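For a sense of how a transformer-based language model is invoked in practice, the hedged sketch below uses the Hugging Face transformers library, with the small GPT-2 model standing in for the far larger LLMs described above.

```python
# Text generation with a small transformer model (gpt2) via Hugging Face.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("AI security matters because", max_new_tokens=20)
print(result[0]["generated_text"])
```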
Foundation model: Usually a deep learning model trained on a broad data set, which can then be repurposed for many different tasks. LLMs are examples of foundation models; they can be specialized for language, code, images, audio, or a combination of modalities (multimodal). Foundation models can also be fine-tuned for specialized applications, like chatbots.