Skip to main content
    Skip to main contentSkip to navigationSkip to footer
    Artificial Intelligence

    LLM Security

    Also known as:
    AI Security
    Language Model Security
    GenAI Security
    Foundation Model Security
    Updated: 2/9/2026

    The field of security research and practices specifically for Large Language Models and generative AI.

    Quick Summary

    LLM Security addresses security risks for language models: Prompt injection, jailbreaking, data leakage, model extraction. OWASP LLM Top 10 is the reference standard.

    Explanation

    LLM Security covers: Prompt injection, jailbreaking, data leakage, model extraction, training data poisoning. Differs from classic software security through natural language as attack surface.

    Marketing Relevance

    Every production AI needs a security concept: What data is in context? What actions can the AI perform? What happens if manipulated?

    Example

    OWASP LLM Top 10 documents: Prompt Injection (#1), Insecure Output Handling (#2), Training Data Poisoning (#3), Model Denial of Service (#4)...

    Common Pitfalls

    Classic security teams often don't understand LLM risks. New attack vectors emerge faster than defenses. Balance between security and usability.

    Origin & History

    LLM Security emerged as a field in 2022 with ChatGPT. Simon Willison coined "Prompt Injection". OWASP published LLM Top 10 (2023). Security conferences now have dedicated AI tracks.

    Comparisons & Differences

    LLM Security vs. AI Safety

    AI Safety focuses on alignment and long-term risks; LLM Security on concrete attacks and defenses today.

    LLM Security vs. Application Security

    AppSec deals with code vulnerabilities; LLM Security deals with natural language as attack surface.

    Related Services

    Related Terms

    👋Questions? Chat with us!