Inference refers to the process where a Large Language Model (LLM) generates outputs based on input queries or prompts. It is a critical function of LLMs, involving the application of learned patterns and knowledge to produce relevant responses or predictions.
Attacks designed to disrupt service, deplete the target’s financial resources, or even steal intellectual property by cloning a model’s behavior all depend on a common class of security vulnerability to succeed. Unbounded Consumption occurs when an LLM application allows users to conduct excessive and uncontrolled inferences, leading to risks such as denial of service (DoS), economic losses, model theft, and service degradation. The high computational demands of LLMs, especially in cloud environments, make them vulnerable to resource exploitation and unauthorized usage.
Attackers can overload the LLM with numerous inputs of varying lengths, exploiting processing inefficiencies. This can deplete resources and potentially render the system unresponsive, significantly impacting service availability.
By initiating a high volume of operations, attackers exploit the cost-per-use model of cloud-based AI services and impose unsustainable financial burdens on the provider, a pattern sometimes called a Denial of Wallet (DoW) attack.
Continuously sending inputs that exceed the LLM’s context window can lead to excessive computational resource use, resulting in service degradation and operational disruptions.
Submitting unusually demanding queries involving complex sequences or intricate language patterns can drain system resources, leading to prolonged processing times and potential system failures.
Attackers may query the model API using carefully crafted inputs and prompt injection techniques to collect sufficient outputs to replicate a partial model or create a shadow model. This not only poses risks of intellectual property theft but also undermines the integrity of the original model.
Using the target model to generate synthetic training data can allow attackers to fine-tune another foundational model, creating a functional equivalent. This circumvents traditional query-based extraction methods, posing significant risks to proprietary models and technologies.
Malicious attackers may exploit weaknesses in the LLM’s input filtering to execute side-channel attacks, harvesting model weights and architectural information. This could compromise the model’s security and lead to further exploitation.
Implement strict input validation to ensure that inputs do not exceed reasonable size limits.
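A minimal sketch of such a size check is shown below; the character and token limits are illustrative placeholders to be tuned per deployment, and the tokenizer is assumed to be supplied by the application.

```python
# Illustrative size limits; tune per deployment and per model context window.
MAX_INPUT_CHARS = 8_000      # hard cap on the raw prompt size
MAX_INPUT_TOKENS = 2_048     # cap on the tokenized length

def validate_prompt(prompt: str, count_tokens) -> str:
    """Reject prompts that exceed reasonable size limits before inference.

    `count_tokens` is a tokenizer callable supplied by the application
    (for example, a wrapper around the serving model's own tokenizer).
    """
    if not prompt.strip():
        raise ValueError("empty prompt")
    if len(prompt) > MAX_INPUT_CHARS:
        raise ValueError(f"prompt exceeds {MAX_INPUT_CHARS} characters")
    if count_tokens(prompt) > MAX_INPUT_TOKENS:
        raise ValueError(f"prompt exceeds {MAX_INPUT_TOKENS} tokens")
    return prompt
```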
Restrict or obfuscate the exposure of logit_bias and logprobs in API responses. Provide only the necessary information without revealing detailed probabilities.
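One way to apply this is to strip probability details from the inference response at the API layer before it reaches the client. The sketch below assumes the upstream response is a plain dictionary and that the field names follow common completion-API conventions; a specific provider's fields may differ.

```python
# Fields that expose token-level probability details; the names follow common
# completion-API conventions and may differ for a given provider.
SENSITIVE_FIELDS = {"logprobs", "top_logprobs", "logit_bias"}

def sanitize_response(response: dict) -> dict:
    """Return a copy of the upstream response with probability details removed."""
    cleaned = {k: v for k, v in response.items() if k not in SENSITIVE_FIELDS}
    for choice in cleaned.get("choices", []):
        if isinstance(choice, dict):
            for field in SENSITIVE_FIELDS:
                choice.pop(field, None)
    return cleaned
```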
Apply rate limiting and user quotas to restrict the number of requests a single source entity can make in a given time period.
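Production deployments typically enforce this at a gateway or with a shared store such as Redis; the sketch below is a minimal in-process sliding-window limiter keyed by caller identity, with illustrative limits.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60              # length of the rate-limit window
MAX_REQUESTS_PER_WINDOW = 30     # illustrative per-caller quota

_request_log: dict[str, list[float]] = defaultdict(list)

def check_rate_limit(caller_id: str) -> bool:
    """Return True if the caller is within quota, False if it should be rejected."""
    now = time.monotonic()
    window_start = now - WINDOW_SECONDS
    # Keep only requests inside the current window, then test the quota.
    recent = [t for t in _request_log[caller_id] if t >= window_start]
    _request_log[caller_id] = recent
    if len(recent) >= MAX_REQUESTS_PER_WINDOW:
        return False
    recent.append(now)
    return True
```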
Monitor and manage resource allocation dynamically to prevent any single user or request from consuming excessive resources.
Set timeouts and throttle processing for resource-intensive operations to prevent prolonged resource consumption.
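In an async serving stack this can be as simple as wrapping the inference call in a wall-clock timeout and capping generation length. The sketch below uses asyncio and assumes an awaitable `run_inference(prompt, max_tokens=...)` supplied by the application; both limits are illustrative.

```python
import asyncio

INFERENCE_TIMEOUT_SECONDS = 30   # illustrative ceiling for a single request
MAX_OUTPUT_TOKENS = 512          # bound generation length as well as wall-clock time

async def bounded_inference(run_inference, prompt: str) -> str:
    """Run inference with a hard timeout and a bounded output length.

    On timeout the request is aborted rather than left to consume
    resources indefinitely.
    """
    try:
        return await asyncio.wait_for(
            run_inference(prompt, max_tokens=MAX_OUTPUT_TOKENS),
            timeout=INFERENCE_TIMEOUT_SECONDS,
        )
    except asyncio.TimeoutError:
        raise RuntimeError("inference exceeded the configured time budget")
```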
Restrict the LLM’s access to network resources, internal services, and APIs.
- This is particularly significant in common deployment scenarios, as it encompasses insider risks and threats. Furthermore, it governs the extent of access the LLM application has to data and resources, serving as a crucial control mechanism to mitigate or prevent side-channel attacks.
Continuously monitor resource usage and implement logging to detect and respond to unusual patterns of resource consumption.
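A minimal sketch of such per-request usage logging is shown below; the thresholds for flagging outliers are illustrative, and in practice the records would feed a SIEM or metrics pipeline rather than a plain logger.

```python
import logging
import time

logger = logging.getLogger("llm.usage")

# Illustrative thresholds for flagging unusual requests.
SLOW_REQUEST_SECONDS = 20
LARGE_OUTPUT_TOKENS = 1_000

def log_inference_usage(caller_id: str, prompt_tokens: int,
                        output_tokens: int, started_at: float) -> None:
    """Record per-request resource usage and flag outliers for review."""
    elapsed = time.monotonic() - started_at
    logger.info(
        "caller=%s prompt_tokens=%d output_tokens=%d elapsed=%.2fs",
        caller_id, prompt_tokens, output_tokens, elapsed,
    )
    if elapsed > SLOW_REQUEST_SECONDS or output_tokens > LARGE_OUTPUT_TOKENS:
        logger.warning(
            "unusual consumption: caller=%s elapsed=%.2fs output_tokens=%d",
            caller_id, elapsed, output_tokens,
        )
```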
Implement watermarking frameworks to embed and detect unauthorized use of LLM outputs.
Design the system to degrade gracefully under heavy load, maintaining partial functionality rather than complete failure.
Implement restrictions on the number of queued actions and total actions, while incorporating dynamic scaling and load balancing to handle varying demands and ensure consistent system performance.
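The sketch below illustrates the previous two strategies together: a bounded concurrency limit plus a queue cap, with excess load shed quickly (for example as an HTTP 429/503) so the service degrades gracefully instead of exhausting memory. The limits and the `Overloaded` exception are placeholders for a deployment's own policy.

```python
import asyncio

MAX_CONCURRENT_REQUESTS = 8    # illustrative concurrency ceiling
MAX_QUEUED_REQUESTS = 32       # reject beyond this backlog instead of growing unbounded

_active = asyncio.Semaphore(MAX_CONCURRENT_REQUESTS)
_queued = 0

class Overloaded(Exception):
    """Raised when the service sheds load instead of failing outright."""

async def admit_request(handler, *args):
    """Admit a request if capacity allows; otherwise degrade by shedding load."""
    global _queued
    if _queued >= MAX_QUEUED_REQUESTS:
        # Degrade gracefully: return a fast, explicit rejection rather than
        # letting the backlog consume unbounded memory.
        raise Overloaded("service is at capacity, retry later")
    _queued += 1
    try:
        async with _active:
            return await handler(*args)
    finally:
        _queued -= 1
```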
Train models to detect and mitigate adversarial queries and extraction attempts.
Build lists of known glitch tokens and scan output before adding it to the model’s context window.
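A minimal sketch of such a scan is shown below; the denylist contains two widely reported glitch tokens as examples and would in practice be populated from published research and from tokens observed to destabilize the specific model in use.

```python
# Illustrative denylist of known glitch tokens; extend per model and tokenizer.
KNOWN_GLITCH_TOKENS = {
    " SolidGoldMagikarp",
    " petertodd",
}

def scan_for_glitch_tokens(text: str) -> str:
    """Refuse to append text containing known glitch tokens to the context window."""
    for token in KNOWN_GLITCH_TOKENS:
        if token in text:
            raise ValueError(f"output contains known glitch token: {token!r}")
    return text
```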
Implement strong access controls, including role-based access control (RBAC) and the principle of least privilege, to limit unauthorized access to LLM model repositories and training environments.
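RBAC is primarily an infrastructure and policy control, but the deny-by-default check it relies on can be sketched briefly; the role names and permission strings below are placeholders for an organization's own scheme.

```python
# Illustrative role-to-permission mapping following least privilege.
ROLE_PERMISSIONS = {
    "ml-engineer": {"model:read", "model:deploy"},
    "data-scientist": {"model:read", "training:run"},
    "auditor": {"model:read"},
}

def authorize(role: str, permission: str) -> None:
    """Deny by default: only explicitly granted permissions are allowed."""
    if permission not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} lacks permission {permission!r}")

# Example: an auditor may read model metadata but not deploy a new version.
authorize("auditor", "model:read")        # passes
# authorize("auditor", "model:deploy")    # would raise PermissionError
```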
Use a centralized ML model inventory or registry for models used in production, ensuring proper governance and access control.
Implement automated MLOps deployment with governance, tracking, and approval workflows to tighten access and deployment controls within the infrastructure.
An attacker submits an unusually large input to an LLM application that processes text data, resulting in excessive memory usage and CPU load, potentially crashing the system or significantly slowing down the service.
An attacker transmits a high volume of requests to the LLM API, causing excessive consumption of computational resources and making the service unavailable to legitimate users.
An attacker crafts specific inputs designed to trigger the LLM’s most computationally expensive processes, leading to prolonged CPU usage and potential system failure.
An attacker generates excessive operations to exploit the pay-per-use model of cloud-based AI services, causing unsustainable costs for the service provider.
An attacker uses the LLM’s API to generate synthetic training data and fine-tunes another model, creating a functional equivalent and bypassing traditional model extraction limitations.
A malicious attacker bypasses the LLM’s input filtering techniques and preambles to perform a side-channel attack, retrieving model information and exfiltrating it to a remote resource under their control.
- Proof Pudding (CVE-2019-20634): AVID (moohax & monoxgas)
- Stealing Part of a Production Language Model (arXiv:2403.06634): arXiv
- Runaway LLaMA | How Meta’s LLaMA NLP model leaked: Deep Learning Blog
- I Know What You See: arXiv White Paper
- A Comprehensive Defense Framework Against Model Extraction Attacks: IEEE
- Alpaca: A Strong, Replicable Instruction-Following Model: Stanford Center on Research for Foundation Models (CRFM)
- How Watermarking Can Help Mitigate The Potential Risks Of LLMs?: KDnuggets
- Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models
- Sponge Examples: Energy-Latency Attacks on Neural Networks: arXiv White Paper
- Sourcegraph Security Incident on API Limits Manipulation and DoS Attack: Sourcegraph
Refer to this section for comprehensive information, scenarios, and strategies relating to infrastructure deployment, applied environment controls, and other best practices.
- MITRE CWE-400: Uncontrolled Resource Consumption MITRE Common Weakness Enumeration
- AML.TA0000 – ML Model Access MITRE ATLAS
- AML.T0024 – Exfiltration via ML Inference API MITRE ATLAS
- AML.T0029 – Denial of ML Service MITRE ATLAS
- AML.T0034 – Cost Harvesting MITRE ATLAS
- AML.T0025 – Exfiltration via Cyber Means MITRE ATLAS
- OWASP Machine Learning Security Top Ten – ML05:2023 Model Theft OWASP ML Top 10
- API4:2023 – Unrestricted Resource Consumption OWASP API Security Top 10
- OWASP Resource Management OWASP Secure Coding Practices