We are firmly committed to Generative AI security and strive to stay at the forefront of Generative AI security risks and mitigation techniques. This page shows vendors and customers the approaches we have in place to mitigate LLM security concerns.

Generative AI Security Trust Center

Data sent to Large Language Models

Data Source                Personally Identifiable Information   Secrets
Google Drive Documents     SECURED                               SECURED
Zendesk Tickets            SECURED                               SECURED
External Webpages          SECURED                               SECURED
Local File Uploads         SECURED                               SECURED
Private Git Repositories   SECURED                               SECURED
Public Git Repositories    SECURED                               SECURED
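For illustration only, the sketch below shows the general shape of this kind of control: scrubbing common PII and secret patterns from source text before it is included in an LLM prompt. The patterns and function names are hypothetical examples, not a description of our production detection pipeline, which relies on far more robust techniques.

```python
import re

# Hypothetical, minimal patterns for common PII and secret formats. A real
# pipeline would use trained classifiers, entropy-based secret scanning, and
# provider-specific key formats in addition to simple patterns.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS_ACCESS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII and secrets with typed placeholders before the
    text is added to an LLM prompt."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

# Example: a support ticket body is sanitized before being sent to a model.
ticket = "Customer jane.doe@example.com reports key AKIAABCDEFGHIJKLMNOP leaked."
prompt = "Summarize this support ticket:\n" + redact(ticket)
```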

Generative AI Application Security Controls

Customer Data Privacy

Direct querying data exfiltration risk mitigated

RAG systems checked for data exfiltration risks

Data sources checked for indirect risks (see the illustrative sketch below)
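As a purely illustrative sketch of the indirect-risk checks above, the example below filters retrieved RAG chunks for crude signs of prompt injection or exfiltration payloads before they reach the model's context window. The patterns and function names are assumptions for illustration, not our actual detection logic.

```python
import re

# Hypothetical indicators of indirect prompt injection or exfiltration
# attempts embedded in retrieved documents.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|any)? ?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"send .{0,80} to https?://", re.IGNORECASE),
    re.compile(r"!\[[^\]]*\]\(https?://[^)]*\)"),  # markdown image beacons
]

def is_suspicious(chunk: str) -> bool:
    """Return True if a retrieved chunk matches any adversarial indicator."""
    return any(p.search(chunk) for p in SUSPICIOUS_PATTERNS)

def filter_retrieved(chunks: list[str]) -> list[str]:
    """Drop retrieved chunks that look adversarial before they are placed in
    the model's context window; flagged chunks would also be logged."""
    return [chunk for chunk in chunks if not is_suspicious(chunk)]
```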

End Customer Security


Phishing risk mitigated

Attacker-driven misinformation risk mitigated

National security risk content mitigated

Malware download risk mitigated

Automatic action manipulation risk mitigated

Profane content risk mitigated (see the illustrative sketch below)
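To illustrate one of the end-customer protections above, and only as a hedged sketch rather than our production policy, the example below removes links to non-allowlisted domains from model output before it is shown to an end customer, reducing the surface for phishing and malware-download links. The allowlist domain is a placeholder.

```python
import re
from urllib.parse import urlparse

# Placeholder allowlist; a real policy would be centrally managed.
ALLOWED_DOMAINS = {"docs.example.com"}

URL_PATTERN = re.compile(r"https?://[^\s)]+")

def strip_untrusted_links(output: str) -> str:
    """Replace links pointing outside the allowlist with a neutral marker so
    that phishing or malware-download URLs never reach end customers."""
    def _replace(match: re.Match) -> str:
        host = urlparse(match.group(0)).netloc.lower()
        return match.group(0) if host in ALLOWED_DOMAINS else "[link removed]"
    return URL_PATTERN.sub(_replace, output)

print(strip_untrusted_links("See https://docs.example.com/faq or https://evil.example.net/payload"))
# -> "See https://docs.example.com/faq or [link removed]"
```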

LLM Application Risk Assessment

LLM Application Security Pentest conducted

LLM Application Security Monitoring

High-risk LLM inputs, outputs, and actions tracked

Historical traceability of risky events maintained

Alerting system in place for suspicious events (see the illustrative sketch below)
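The monitoring controls above can be pictured with the minimal sketch below: each LLM interaction is written to a log with a unique identifier for historical traceability, and an alert is raised when an upstream risk score crosses a threshold. The function, field names, and threshold are assumptions for illustration only.

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("llm_security_events")

def record_llm_event(prompt: str, output: str, risk_score: float,
                     alert_threshold: float = 0.8) -> str:
    """Persist a traceable record of an LLM interaction and alert on
    suspicious events. The risk score is assumed to come from an upstream
    classifier that scores inputs, outputs, and proposed actions."""
    event_id = str(uuid.uuid4())
    record = {
        "event_id": event_id,
        "timestamp": time.time(),
        "prompt": prompt,
        "output": output,
        "risk_score": risk_score,
    }
    logger.info(json.dumps(record))        # historical traceability
    if risk_score >= alert_threshold:      # alerting on suspicious events
        logger.warning("Suspicious LLM event %s (risk=%.2f)", event_id, risk_score)
    return event_id
```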

LLM Application IP Security


Company IP exfiltration risk mitigated

Customer IP exfiltration risk mitigated

LLM Application Access Controls

Automated LLM actions analyzed for risk

Internal data writes analyzed for adversarial content (see the illustrative sketch below)
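As a final illustrative sketch, again with hypothetical action names and rules rather than our production policy, the example below gates automated LLM-proposed actions: low-risk actions proceed, high-risk actions are held for review, and internal data writes are scanned for adversarial payloads before they are allowed.

```python
import re
from dataclasses import dataclass, field

# Hypothetical risk tiers for tool actions an LLM may propose.
HIGH_RISK_ACTIONS = {"delete_record", "send_external_email"}
INTERNAL_WRITE_ACTIONS = {"create_internal_doc", "update_ticket"}

INJECTION_PATTERN = re.compile(r"ignore (previous|prior) instructions", re.IGNORECASE)

@dataclass
class ProposedAction:
    name: str
    arguments: dict = field(default_factory=dict)

def approve(action: ProposedAction) -> bool:
    """Approve only actions that pass the risk policy: high-risk actions are
    always held for review, and internal data writes are scanned for
    adversarial content before they are allowed."""
    if action.name in HIGH_RISK_ACTIONS:
        return False  # route to human or secondary review
    if action.name in INTERNAL_WRITE_ACTIONS:
        content = str(action.arguments.get("content", ""))
        if INJECTION_PATTERN.search(content):
            return False  # block writes that embed injection payloads
    return True
```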

Powered by PromptArmor

Amazon Bedrock | Model Hosting

Weaviate | Vector Database

Data and Model Residency