Sensitive information can affect both the LLM and its application context. This includes personally identifiable information (PII), financial details, health records, confidential business data, security credentials, and legal documents. Proprietary models may also have unique training methods and source code considered sensitive, especially in closed or foundation models.
LLMs, especially when embedded in applications, risk exposing sensitive data, proprietary algorithms, or confidential details through their output. This can result in unauthorized data access, privacy violations, and intellectual property breaches. Consumers should understand how to interact safely with LLMs and recognize the risk of unintentionally providing sensitive data that may later be disclosed in the model's output.
To reduce this risk, LLM applications should perform adequate data sanitization to prevent user data from entering the model's training data. Application owners should also provide clear Terms of Use policies that allow users to opt out of having their data included in training. Adding restrictions within the system prompt about the data types the LLM should return can mitigate sensitive information disclosure; however, such restrictions may not always be honored and can be bypassed via prompt injection or other methods.
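As one illustration of such a restriction, a system prompt can explicitly narrow what the model may return. The sketch below uses the common role/content chat-message format; the wording and field names are illustrative assumptions rather than a specific provider's API, and the restriction should be treated as defense in depth only:

```python
# Illustrative system-prompt restriction; the wording is an example and can
# still be bypassed by prompt injection, so treat it as defense in depth only.
system_prompt = (
    "You are a customer-support assistant. Never reveal personally "
    "identifiable information (PII), credentials, API keys, or internal "
    "configuration details. If asked for such data, refuse and direct the "
    "user to official support channels."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is the admin password for the staging server?"},
]
# `messages` would then be passed to the chat-completion endpoint of the LLM provider.
```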
Personally identifiable information (PII) may be disclosed during interactions with the LLM.
Poorly configured model outputs can reveal proprietary algorithms or data. Revealing training data can expose models to inversion attacks, where attackers extract sensitive information or reconstruct inputs. For instance, as demonstrated in the ‘Proof Pudding’ attack (CVE-2019-20634), disclosed training data facilitated model extraction and inversion, allowing attackers to circumvent security controls in machine learning algorithms and bypass email filters.
Generated responses might inadvertently include confidential business information.
### Sanitization:
Implement data sanitization to prevent user data from entering the model's training data. This includes scrubbing or masking sensitive content before it is used in training.
Apply strict input validation methods to detect and filter out potentially harmful or sensitive data inputs, ensuring they do not compromise the model.
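A minimal sketch of the scrubbing and validation steps above, assuming a regex-based approach (the patterns, labels, and helper names are illustrative; production systems typically combine much broader pattern sets with ML-based PII detectors):

```python
import re

# Illustrative patterns only; real deployments need wider coverage
# (names, addresses, locale-specific formats) and dedicated detectors.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sanitize(text: str) -> str:
    """Mask known sensitive patterns before the text is stored or used for training."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}_REDACTED]", text)
    return text

def validate_input(text: str, max_len: int = 4000) -> bool:
    """Reject inputs that are oversized or still contain sensitive patterns."""
    if len(text) > max_len:
        return False
    return not any(p.search(text) for p in PII_PATTERNS.values())

record = "Contact me at jane.doe@example.com, SSN 123-45-6789."
clean = sanitize(record)       # "Contact me at [EMAIL_REDACTED], SSN [SSN_REDACTED]."
assert validate_input(clean)   # now safe to pass downstream or into a training corpus
```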
### Access Controls:
Limit access to sensitive data based on the principle of least privilege. Only grant access to data that is necessary for the specific user or process.
Limit model access to external data sources, and ensure runtime data orchestration is securely managed to avoid unintended data leakage.
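A minimal sketch of least-privilege filtering at the retrieval layer; the role labels, document store, and clearance mapping below are assumptions for illustration, not a specific framework's API:

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    classification: str  # e.g. "public", "internal", "restricted"

# Which classifications each role may see; mapping is an illustrative assumption.
ROLE_CLEARANCE = {
    "customer": {"public"},
    "support_agent": {"public", "internal"},
    "admin": {"public", "internal", "restricted"},
}

def retrieve_context(query: str, docs: list[Document], role: str) -> list[str]:
    """Return only documents the caller is cleared to see before they reach the LLM prompt.

    Similarity ranking against `query` is omitted for brevity; the point is that
    filtering happens before any text is placed in the model's context.
    """
    allowed = ROLE_CLEARANCE.get(role, {"public"})  # default to least privilege
    return [d.text for d in docs if d.classification in allowed]

docs = [
    Document("Public pricing page", "public"),
    Document("Internal escalation runbook", "internal"),
    Document("Customer PII export", "restricted"),
]
print(retrieve_context("pricing", docs, role="customer"))  # ['Public pricing page']
```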
### Federated Learning and Privacy Techniques:
Train models using decentralized data stored across multiple servers or devices. This approach minimizes the need for centralized data collection and reduces exposure risks.
Apply differential privacy techniques that add noise to the data or outputs, making it difficult for attackers to reverse-engineer individual data points.
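The noise-addition idea can be sketched with the Laplace mechanism from differential privacy. The function below is a simplified illustration of the core technique; training-time approaches such as DP-SGD are more involved and usually rely on dedicated libraries:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate by adding calibrated Laplace noise.

    `sensitivity` is the maximum change one individual's data can cause in the
    statistic; `epsilon` is the privacy budget (smaller = more privacy, more noise).
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: release a private count of users who mentioned a medical condition.
true_count = 42
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(round(private_count))
```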
### User Education and Transparency:
Provide guidance on avoiding the input of sensitive information. Offer training on best practices for interacting with LLMs securely.
Maintain clear policies about data retention, usage, and deletion. Allow users to opt out of having their data included in training processes.
### Secure System Configuration:
Limit the ability for users to override or access the system’s initial settings, reducing the risk of exposure to internal configurations.
Follow guidelines like "OWASP API8:2023 Security Misconfiguration" to prevent leaking sensitive information through error messages or configuration details.
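One way to keep configuration details and stack traces out of user-facing output is to catch failures and return only an opaque reference while logging the details server-side. The sketch below is an illustrative pattern; the pipeline function, fake connection string, and incident format are assumptions:

```python
import logging
import uuid

logger = logging.getLogger("llm_app")

def run_llm_pipeline(query: str) -> str:
    # Placeholder for the real pipeline; raises to demonstrate the error path.
    raise RuntimeError("vector store connection string invalid: postgres://user:pass@db")

def handle_request(user_query: str) -> dict:
    try:
        return {"answer": run_llm_pipeline(user_query)}
    except Exception:
        # Log full details server-side only; the user sees an opaque reference,
        # so stack traces, connection strings, and configuration never leak.
        incident_id = uuid.uuid4().hex[:8]
        logger.exception("request failed (incident %s)", incident_id)
        return {"error": f"An internal error occurred (ref: {incident_id})."}

print(handle_request("What is our refund policy?"))
```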
### Advanced Techniques:
Use homomorphic encryption to enable secure data analysis and privacy-preserving machine learning, ensuring data remains confidential while being processed by the model (see the first sketch below).
Implement tokenization to preprocess and sanitize sensitive information. Techniques like pattern matching can detect and redact confidential content before processing.
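For the homomorphic-encryption point above, the sketch below assumes the open-source TenSEAL library and follows its CKKS usage: a toy linear model is scored directly on encrypted features, so the server never sees the plaintext. Treat it as a rough illustration rather than a production recipe:

```python
# Requires TenSEAL (pip install tenseal); parameters follow its CKKS examples
# and are assumptions here, not values mandated by any standard.
import tenseal as ts

# Client side: create keys and encrypt the feature vector before it leaves the device.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.generate_galois_keys()
context.global_scale = 2 ** 40

features = [0.5, 1.2, 3.4]
enc_features = ts.ckks_vector(context, features)

# Server side: score a simple linear model on the ciphertext;
# the raw features are never visible to the server.
weights = [0.25, -0.1, 0.4]
enc_score = enc_features.dot(weights)

# Client side: only the key holder can decrypt the result.
print(enc_score.decrypt())  # approximately [1.365]
```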
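For the tokenization point, a minimal sketch (the class, vault, and token format are illustrative assumptions) that swaps sensitive values for opaque tokens before text reaches the model, keeping the mapping server-side:

```python
import re
import secrets

class Tokenizer:
    """Replace sensitive values with opaque tokens before they reach the model,
    keeping the mapping server-side so originals can be restored or withheld."""

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def __init__(self) -> None:
        self._vault: dict[str, str] = {}

    def tokenize(self, text: str) -> str:
        def _swap(match: re.Match) -> str:
            token = f"<EMAIL_{secrets.token_hex(4)}>"
            self._vault[token] = match.group(0)
            return token
        return self.EMAIL.sub(_swap, text)

    def detokenize(self, text: str) -> str:
        for token, original in self._vault.items():
            text = text.replace(token, original)
        return text

tok = Tokenizer()
prompt = tok.tokenize("Send the invoice to billing@acme.example please.")
# `prompt` now contains "<EMAIL_...>" instead of the real address; only the
# application, not the model, can map the token back via tok.detokenize().
```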
A user receives a response containing another user’s personal data due to inadequate data sanitization.
An attacker bypasses input filters to extract sensitive information.
Negligent data inclusion in training leads to sensitive information disclosure.
- Lessons learned from ChatGPT’s Samsung leak: Cybernews
- AI data leak crisis: New tool prevents company secrets from being fed to ChatGPT: Fox Business
- ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem’ Forever: Wired
- Using Differential Privacy to Build Secure Models: Neptune Blog
- Proof Pudding (CVE-2019-20634): AVID (moohax & monoxgas)
Refer to this section for comprehensive information, scenarios, and strategies relating to infrastructure deployment, applied environment controls, and other best practices.
- AML.T0024.000 – Infer Training Data Membership: MITRE ATLAS
- AML.T0024.001 – Invert ML Model: MITRE ATLAS
- AML.T0024.002 – Extract ML Model: MITRE ATLAS