Generative AI Security Trust Center


Mendable has a firm commitment to Generative AI security and strives to stay at the forefront of Generative AI security risks and mitigation techniques. This page outlines for vendors and customers the approaches we have in place to mitigate security concerns.



Generative AI Application Security Protections

Data Exfiltration Prevention



Mendable is protected against the latest known LLM data exfiltration methods. These protections include, but are not limited to, securing the data sources it ingests and enforcing checks on model inputs, outputs, and actions.

Phishing Prevention

Mendable is protected against the latest known LLM phishing methods. These protections include, but are not limited to, securing the data sources it ingests and enforcing checks on model inputs, outputs, and actions.

System Manipulation Prevention

Mendable is protected against the latest known LLM system manipulation methods. These protections include, but are not limited to, securing the data sources it ingests and enforcing checks on model inputs, outputs, and actions.
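The input, output, and action checks referenced above can be illustrated with a minimal sketch. This is not Mendable's actual implementation; the pattern lists and domain allow-list below are hypothetical placeholders for what, in practice, would be model-based classifiers and continuously updated threat intelligence.

```python
import re

# Hypothetical patterns for illustration only; a production system would
# rely on classifiers and curated threat feeds, not a static list.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
]
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def check_input(user_message: str) -> bool:
    """Return True if the input passes the prompt-injection screen."""
    lowered = user_message.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def check_output(response: str, allowed_domains: set[str]) -> bool:
    """Return True only if every URL in the response points to an allowed
    domain, blocking link- and markdown-image-based data exfiltration."""
    for url in URL_PATTERN.findall(response):
        domain = url.split("/")[2]  # scheme://DOMAIN/...
        if domain not in allowed_domains:
            return False
    return True
```

The output check matters because a poisoned document can instruct a model to render a URL that smuggles conversation data to an attacker-controlled host; screening rendered URLs against an allow-list is a common mitigation.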

Data Source Ingests Protected

Public Git Repositories

Google Drive

Zendesk

File Uploads

YouTube

External Webpages

Generative AI Application Security Controls


Customer Data Privacy


Direct querying data exfiltration mitigated

Data sources checked for indirect risks

RAG systems checked for data exfil risks

End-user Security & Safety

Phishing risk mitigated

Profane content risk mitigated

Attacker-driven misinformation risk mitigated

National security risk content mitigated

Malware download risk mitigated

Automatic action manipulation risk mitigated


LLM Application Risk Assessment

LLM Application Security Pentest conducted

LLM Application Security Monitoring

Risky LLM inputs, outputs, and actions tracked

Alerting system in place for suspicious events

Historical traceability of risky events maintained

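Tracking risky inputs, outputs, and actions with alerting and historical traceability can be sketched as a simple append-only audit trail. The schema, threshold, and function names below are illustrative assumptions, not Mendable's actual monitoring pipeline.

```python
import json
import time

# Hypothetical alerting threshold; a real system would tune this per risk type.
RISK_THRESHOLD = 0.8

event_log: list[dict] = []  # append-only trail for historical traceability

def record_event(kind: str, payload: str, risk_score: float) -> bool:
    """Record an LLM event and return True if it warrants an alert."""
    event = {
        "timestamp": time.time(),
        "kind": kind,          # "input", "output", or "action"
        "payload": payload,
        "risk_score": risk_score,
    }
    event_log.append(event)
    return risk_score >= RISK_THRESHOLD

def export_audit_trail() -> str:
    """Serialize the full event history for later review."""
    return json.dumps(event_log, indent=2)
```

Keeping every event, not just the alerting ones, is what makes after-the-fact investigation of a suspicious session possible.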

LLM Application IP Security

Company IP exfiltration mitigated

Customer IP exfiltration mitigated

Internal data writes analyzed for adversarial content

LLM Application Access Controls

Function calls and parameters analyzed for risk

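Analyzing function calls and parameters for risk commonly takes the form of an allow-list checked before any model-proposed call is executed. The function names and schemas below are hypothetical examples, not Mendable's real tool set.

```python
# Hypothetical allow-list: function name -> permitted parameter names.
ALLOWED_CALLS: dict[str, set[str]] = {
    "search_docs": {"query"},
    "create_ticket": {"title", "body"},
}

def is_call_permitted(name: str, params: dict) -> bool:
    """Permit only allow-listed functions called with expected parameters,
    so a manipulated model cannot invoke arbitrary or over-broad actions."""
    allowed_params = ALLOWED_CALLS.get(name)
    if allowed_params is None:
        return False  # unknown function: reject outright
    return set(params) <= allowed_params  # no unexpected parameters
```

Rejecting unexpected parameters, not just unknown functions, closes the gap where an attacker smuggles extra arguments into an otherwise legitimate call.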

Powered by PromptArmor 2024

San Francisco, CA