Artificial Intelligence (AI)
AI Configuration Best Practices to Address AI Security Risks
Context
AI usage is on the rise as many companies are adopting AI for productivity gains and to create new business opportunities that provide value to their customers. According to McKinsey's article The State of AI (https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai), based on their Global Survey on AI: '65 percent of respondents report that their organisations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago. Organisations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology.'
Although AI presents exciting new and lucrative opportunities for organisations, it is also their latest attack surface, as security often lags behind the deployment of new technology.
AI Attack Tactics & Security Risks:
AI Attack Tactics
The number of companies using GenAI is exploding, and this rapid adoption to meet market demands can lead companies to overlook security best practices.
The rapid adoption of this new technology means cloud architects, security professionals, and developers alike may not have received training or guidance on deploying GenAI services securely. Further, there have been recent attacks such as:
- The Qubitstrike campaign, in which AI model notebooks exposed to the internet were exploited to harvest cloud provider credentials and mine cryptocurrency
- The ChatGPT 'grandma exploit', in which prompt injection tricked ChatGPT into revealing free Microsoft Windows 11 product keys
Threat actors increasingly use generative AI (GenAI) to craft targeted phishing emails. However, the same GenAI can also help identify scams and security threats.
Security Risks:
Below are the security risks that can occur while using AI services, drawn from the OWASP Top 10 for LLMs and Generative AI Apps:
- Prompt Injection, which can lead to disclosure of sensitive information and reputational damage.
- Insecure Output Handling, which can lead to cross-site scripting and remote code execution.
- Training Data Poisoning, where poisoned information may be surfaced to users or create other risks such as performance degradation, downstream software exploitation, and reputational damage.
- Model Denial of Service, which affects the availability of an AI service such that users experience a decline in service quality and the service owner incurs unexpectedly high resource costs.
- Sensitive Information Disclosure, as LLMs have the potential to reveal sensitive information, proprietary algorithms, or other confidential details through their output. This can occur when an LLM is trained on sensitive data, e.g. personally identifiable information (PII), which then surfaces in its responses.
- Excessive Agency, a vulnerability that enables damaging actions to be performed in response to unexpected or ambiguous outputs from an LLM.
- Overreliance, which can occur when an LLM produces erroneous information and presents it in an authoritative manner. When people or systems trust this information without oversight or confirmation, it can result in security breaches, misinformation, miscommunication, legal issues, and reputational damage.
- Model Theft, which occurs when competitors or attackers steal trained models (valuable intellectual property) and/or training data to create similar generative AI services.
Failure to implement security controls on GenAI products could have a detrimental impact on organisations, including loss of customer trust, litigation, reputational damage, and lost revenue.
How to Guard Against AI Security Issues?
Configuring AI cloud services according to best practices helps secure them by preventing the security issues mentioned above.
Here are some of the AI best practices recommended by Trend Micro:
AWS AI Best Practices
Configure Sensitive Information Filters for Amazon Bedrock Guardrails
Amazon Bedrock guardrails are security measures designed to ensure safe and responsible use of AI services provided by Amazon Bedrock. They help manage data privacy, prevent misuse, and maintain compliance with regulations. Guardrails can detect sensitive information such as Personally Identifiable Information (PII) in input prompts or foundation model (FM) responses. You can also configure sensitive information specific to your use case or organisation by defining it with regular expressions (regex). Amazon Bedrock guardrails offer two behaviour modes for filtering sensitive information: block the content or mask (anonymise) it.
This best practice helps customers identify any Bedrock resources that do not have guardrails configured. Guardrails are an important security measure to filter out sensitive information from both AI responses and user input. Customers should not train AI on sensitive data; however, guardrails should be used as an extra layer of security to ensure that any sensitive data accidentally included in model training is filtered from responses. Learn More
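As a minimal sketch of how such a guardrail might be configured, the boto3 call below creates a guardrail whose sensitive information policy masks common PII types and blocks a custom pattern defined with a regular expression. The guardrail name, the PII entity selection, and the 'internal-customer-id' pattern are illustrative assumptions for your own environment, not prescribed values.

```python
import boto3

# Sketch: a Bedrock guardrail with a sensitive information policy.
# Names, entity types, and the regex are illustrative placeholders.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="pii-guardrail",
    description="Filter sensitive information from prompts and model responses",
    sensitiveInformationPolicyConfig={
        # Built-in PII entity types: ANONYMIZE masks the value, BLOCK rejects the content
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "PHONE", "action": "ANONYMIZE"},
            {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
        ],
        # Organisation-specific sensitive data defined with a regex (hypothetical pattern)
        "regexesConfig": [
            {
                "name": "internal-customer-id",
                "description": "Hypothetical internal customer identifier",
                "pattern": r"CUST-\d{8}",
                "action": "BLOCK",
            }
        ],
    },
    blockedInputMessaging="Sorry, this request contains sensitive information.",
    blockedOutputsMessaging="Sorry, the response contained sensitive information and was blocked.",
)
print(response["guardrailId"], response["version"])
```

The guardrail identifier and version returned here can then be referenced when invoking a foundation model, so both user input and model responses pass through the filter.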
Disable Direct Internet Access for Notebook Instances
This best practice ensures that your Amazon SageMaker notebook instances are not allowed to communicate with the Internet through the Direct Internet Access feature. For added security control, make sure that the Amazon SageMaker domain associated with your notebook instances is configured to use the 'VPC only' network access type. When 'VPC only' is enabled, all SageMaker Studio traffic is routed through your secure VPC subnets, with internet access disabled by default. Learn More
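The boto3 sketch below shows both settings: a notebook instance created with Direct Internet Access disabled, and a SageMaker domain created in 'VPC only' mode. The role ARN, VPC, subnet, and security group identifiers are placeholders for your own environment.

```python
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

# Notebook instance without direct internet access: traffic must flow through
# your VPC (e.g. via a NAT gateway or VPC endpoints).
sagemaker.create_notebook_instance(
    NotebookInstanceName="secure-notebook",
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    DirectInternetAccess="Disabled",
)

# Studio domain in "VPC only" mode: all SageMaker Studio traffic is routed
# through the specified VPC subnets, with internet access disabled by default.
sagemaker.create_domain(
    DomainName="secure-studio-domain",
    AuthMode="IAM",
    DefaultUserSettings={
        "ExecutionRole": "arn:aws:iam::123456789012:role/SageMakerExecutionRole"
    },
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    AppNetworkAccessType="VpcOnly",
)
```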
Microsoft Azure AI Best Practices:
Disable Public Network Access to OpenAI Service Instances
When an Azure OpenAI service instance is publicly accessible, all networks, including the Internet, can access the instance, increasing the risk of unauthorised access, potential security breaches, and compliance violations. To limit access to selected, trusted networks, you must configure network access rules for your OpenAI instances. This allows only authorised traffic from your Azure virtual networks (VNets) or trusted IP addresses to interact with the OpenAI instances, preventing unauthorised access attempts and protecting your AI workloads and data. Learn More
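As an illustration, the sketch below uses the azure-mgmt-cognitiveservices Python SDK to restrict an existing OpenAI account to selected networks. The subscription ID, resource group, account name, IP range, and subnet resource ID are placeholder assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import (
    Account,
    AccountProperties,
    IpRule,
    NetworkRuleSet,
    VirtualNetworkRule,
)

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
SUBNET_ID = (
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ai-rg"
    "/providers/Microsoft.Network/virtualNetworks/ai-vnet/subnets/openai-subnet"
)

client = CognitiveServicesManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Restrict the OpenAI instance to selected networks: deny by default and allow
# only a trusted IP range and a VNet subnet. Setting public_network_access to
# "Disabled" instead blocks all public traffic (private endpoints only).
client.accounts.begin_update(
    resource_group_name="ai-rg",
    account_name="my-openai-instance",
    account=Account(
        properties=AccountProperties(
            public_network_access="Enabled",
            network_acls=NetworkRuleSet(
                default_action="Deny",
                ip_rules=[IpRule(value="203.0.113.0/24")],
                virtual_network_rules=[VirtualNetworkRule(id=SUBNET_ID)],
            ),
        )
    ),
).result()
```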
Use System-Assigned Managed Identities for Azure Machine Learning Workspaces
This best practice ensures that your Azure Machine Learning (ML) workspaces use system-assigned managed identities to allow secure access to other Microsoft Azure protected resources such as key vaults and storage accounts. Using system-assigned managed identities for Azure ML workspaces enhances security by allowing the workspaces to authenticate and authorise with Azure resources without explicit credentials, reducing the risks associated with credential management and providing seamless, more secure integration with other cloud services. Learn More
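A minimal sketch with the Azure Machine Learning Python SDK (v2, the azure-ai-ml package) is shown below; the subscription, resource group, and workspace names are placeholder assumptions.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.constants import ManagedServiceIdentityType
from azure.ai.ml.entities import IdentityConfiguration, Workspace
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="00000000-0000-0000-0000-000000000000",  # placeholder
    resource_group_name="ml-rg",  # placeholder
)

# Workspace with a system-assigned managed identity: the workspace authenticates
# to its key vault and storage account via Azure RBAC, with no stored credentials.
workspace = Workspace(
    name="secure-ml-workspace",
    location="westeurope",
    identity=IdentityConfiguration(type=ManagedServiceIdentityType.SYSTEM_ASSIGNED),
)

ml_client.workspaces.begin_create(workspace).result()
```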
GCP AI Best Practices:
Disable Root Access for Workbench Instances
This best practice ensures that root access to your Google Cloud Vertex AI notebook instances is disabled in order to reduce the risk of accidental or malicious system damage by limiting administrative privileges within the instances. Disabling root access to your Vertex AI notebook instances minimises the risk of unauthorised modifications, enhances security by preventing potential misuse or exploitation of superuser privileges, and helps maintain a more controlled and secure AI environment. Learn More
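On user-managed Workbench notebook instances, root access is governed by the notebook-disable-root metadata key. The sketch below, assuming the google-cloud-notebooks Python client, checks a single instance and flags it if root access is still enabled; the project, location, and instance names are placeholders.

```python
from google.cloud import notebooks_v1

client = notebooks_v1.NotebookServiceClient()

# Placeholder resource name for the notebook instance to audit.
request = notebooks_v1.GetInstanceRequest(
    name="projects/my-project/locations/us-central1/instances/my-notebook"
)
instance = client.get_instance(request=request)

# Root access is disabled when the "notebook-disable-root" metadata key is "true".
if instance.metadata.get("notebook-disable-root", "false") != "true":
    print(f"Root access is still enabled on {instance.name}")
```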
Vertex AI Dataset Encryption with Customer-Managed Encryption Keys
This best practice ensures that your Google Cloud Vertex AI datasets are encrypted using Customer-Managed Encryption Keys (CMEKs) in order to give you full control over the data encryption and decryption process.
By default, Google Cloud automatically encrypts Vertex AI datasets (data items and annotations) using Google-Managed Encryption Keys (GMEKs). However, for organisations with strict compliance and security requirements, CMEKs can be implemented as an additional security layer on top of the existing data encryption, as they give organisations control over and management of Vertex AI dataset encryption. Learn More
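For example, with the Vertex AI Python SDK (google-cloud-aiplatform) a dataset can be created under a customer-managed key by passing encryption_spec_key_name, as in the sketch below. The project, bucket, and KMS key resource name are placeholders, and the key must reside in the same region as the dataset.

```python
from google.cloud import aiplatform

# Placeholder Cloud KMS key; the Vertex AI service agent needs the
# CryptoKey Encrypter/Decrypter role on this key.
CMEK_KEY = (
    "projects/my-project/locations/us-central1/"
    "keyRings/vertex-keyring/cryptoKeys/vertex-dataset-key"
)

aiplatform.init(project="my-project", location="us-central1")

# Create a tabular dataset encrypted with the customer-managed key instead of
# the default Google-managed encryption key.
dataset = aiplatform.TabularDataset.create(
    display_name="customer-churn-dataset",
    gcs_source=["gs://my-bucket/churn.csv"],
    encryption_spec_key_name=CMEK_KEY,
)
print(dataset.resource_name)
```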
About Trend Micro AI Security Posture Management
Trend Micro ASRM for Cloud AI Security Posture Management detects AI services that are misconfigured and provides step-by-step remediation guides to fix these misconfigurations.
It also identifies Cloud Identity risks and potential attack paths that can be exploited.
To learn more about Trend Micro Cloud ASRM AI SPM, check out these resources:
https://www.trendmicro.com/en_us/business/products/hybrid-cloud.html#tabs-4092ca-1