In LangChain up to and including 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.
Awesome GPT + Security
A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT
Contents
Tools
Integrated
Audit
Reconnaissance
Offensive
Detecting
Preventing
Social Engineering
Reverse Engineering
Investigation
Fix
Assessment
Cases
Experimental
Academic
Blogs
Fun
GPT Security
Standard
Bypass Security Policy
Bug Bounty
Cra
A curation of awesome tools, documents and projects about LLM Security.
Awesome LLM Security
A curation of awesome tools, documents and projects about LLM Security
Contributions are always welcome Please read the Contribution Guidelines before contributing
Table of Contents
Awesome LLM Security
Papers
Tools
Articles
Other Awesome Projects
Other Useful Resources
Papers
Not what you've signed up for: Compromising Real-World LLM-Integra
🕵️♂️
Invariant Analyzer for AI Agent Security
A security scanner for LLM-based AI agents
The Invariant Security Analyzer is an open source security scanner that enables developers to reduce risk when building AI agents by quickly detecting vulnerabilities, bugs, and security threats The analyzer scans and analyzes an agent's execution traces t