
    Reflection Agent

    Also known as:
    Self-Reflection AI
    Reflexion
    Self-Critique Agent
    Updated: 2/11/2026

    An agent pattern where the LLM critically evaluates its own outputs and iteratively improves them – like an internal code review.

    Quick Summary

    Reflection agents improve AI outputs through self-critique and iteration – like an internal reviewer optimizing the first draft.

    Explanation

    Reflection agents first generate a draft, then evaluate it against defined criteria, identify weaknesses, and produce an improved version. This self-refinement can iterate multiple times.
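The generate–critique–refine loop can be sketched in a few lines. This is a minimal illustration, not a production implementation: the `llm` function here is a deterministic stub standing in for a real model call, and the `DRAFT`/`CRITIQUE`/`REVISE` prompt markers are assumptions made for the example.

```python
def llm(prompt: str) -> str:
    # Stub standing in for a real model API call; returns canned
    # responses so the control flow can be demonstrated end to end.
    if prompt.startswith("DRAFT"):
        return "draft v1"
    if prompt.startswith("CRITIQUE"):
        # Flag a weakness in the first draft, approve anything later.
        return "Too vague." if "draft v1" in prompt else "DONE"
    if prompt.startswith("REVISE"):
        return "draft v2 (improved)"
    return ""

def reflect(task: str, max_rounds: int = 3) -> str:
    # 1. Generate an initial draft.
    draft = llm(f"DRAFT\nTask: {task}")
    for _ in range(max_rounds):
        # 2. Critique the draft against the task criteria.
        critique = llm(f"CRITIQUE\nTask: {task}\nDraft:\n{draft}")
        if critique.strip() == "DONE":
            break  # critic found no remaining weaknesses
        # 3. Revise the draft using the critique.
        draft = llm(f"REVISE\nTask: {task}\nDraft:\n{draft}\nCritique:\n{critique}")
    return draft

print(reflect("write a product tagline"))  # → draft v2 (improved)
```

The `max_rounds` cap matters in practice: it bounds the token cost of the loop and prevents endless self-revision when the critic never signals "done".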

    Marketing Relevance

Reflection significantly improves output quality in practical applications: code-review agents, content quality checks, fact-checking loops, and strategic analyses that deliberately generate counter-perspectives.

    Common Pitfalls

Multiple iterations multiply token costs. The loop can over-engineer an already adequate answer, and some models critique their own outputs too leniently, so extra rounds add cost without improving quality.

    Origin & History

Reflexion (Shinn et al., 2023) formalized self-reflection for LLM agents. The concept builds on self-consistency (Wang et al., 2023) and Constitutional AI (Bai et al., 2022, Anthropic).

    Comparisons & Differences

    Reflection Agent vs. Self-Consistency

    Self-consistency samples multiple answers and picks the most common. Reflection critiques and improves one answer iteratively.
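The contrast can be made concrete: self-consistency is a one-shot majority vote over independent samples, with no critique step. A minimal sketch, assuming a caller-supplied `sample_fn` that stands in for repeated model sampling (the deterministic `fake_sampler` below exists only for illustration):

```python
from collections import Counter
from itertools import cycle

def self_consistency(sample_fn, n: int = 5) -> str:
    # Draw n independent answers and return the most common one.
    # No answer is ever critiqued or revised, unlike reflection.
    answers = [sample_fn() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Deterministic stand-in for a stochastic sampler: yields
# "42", "42", "41", "42", "42" over five calls.
fake_sampler = cycle(["42", "42", "41"]).__next__

print(self_consistency(fake_sampler, n=5))  # → 42
```

Reflection spends its extra tokens on sequential critique rounds; self-consistency spends them on parallel samples, which is why the two are often traded off against the same token budget.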

