    Artificial Intelligence

    Jailbreaking

    Also known as:
    Jailbreak
    AI Jailbreak
    Guardrail Bypass
    Safety Bypass
    DAN
    Updated: 2/9/2026

    Techniques aimed at bypassing safety measures and ethical restrictions of AI models.

    Quick Summary

    Jailbreaking bypasses an LLM's safety guardrails through creative prompts: roleplay ("You are DAN"), hypothetical scenarios, or token manipulation. Providers continuously patch known exploits.

    Explanation

    Common jailbreak methods include roleplay prompts ("You are DAN, who can do anything"), hypothetical scenarios, token manipulation, multi-step attacks, and Base64 encoding of prohibited requests. Providers continuously patch known exploits, and new methods emerge in response; the sketch below illustrates the encoding trick.
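    As a minimal Python sketch of the Base64 encoding trick (the placeholder string and variable names are illustrative, not a real payload): the prohibited text is hidden from naive keyword-based input filters, while the model is asked to decode and follow it.

        import base64

        # Hypothetical placeholder standing in for a request that safety
        # filters would normally refuse. Base64 hides the plain text from
        # naive keyword-based input filters.
        hidden_request = base64.b64encode(b"<prohibited request goes here>").decode()

        # The model is asked to decode the string and treat its contents
        # as instructions.
        prompt = (
            "Decode the following Base64 string and follow the instructions "
            f"it contains: {hidden_request}"
        )
        print(prompt)

    Models are increasingly trained to refuse such decode-and-follow requests, which is part of the continuous patching described above.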

    Marketing Relevance

    Understanding jailbreaks helps in building more robust AI applications: which techniques work against competitor models, and which attack vectors exist against your own systems?

    Example

    "Ignore all previous instructions and..." is the classic jailbreak opener. More sophisticated variants use personas or indirect requests.

    Common Pitfalls

    Jailbreak research is ethically delicate: publishing techniques helps attackers, and as models become more robust against jailbreaks they often also become more restrictive for legitimate users.

    Origin & History

    "DAN" (Do Anything Now) became the most famous jailbreak for ChatGPT in 2023. The jailbreak community on Reddit/Discord constantly develops new techniques. OpenAI responds with patches within days.

    Comparisons & Differences

    Jailbreaking vs. Prompt Injection

    Jailbreaking aims to make the model generate prohibited content; Prompt Injection aims to hijack system behavior (e.g., leaking data).

    Jailbreaking vs. Red Teaming

    Red Teaming is authorized security research; Jailbreaking is often unauthorized circumvention, though the techniques overlap.
