
    ReAct (Reasoning + Acting)

    Also known as:
    Reason and Act
    ReAct Framework
    Thought-Action Loop
    Updated: 2/9/2026

A prompting paradigm that connects reasoning (thinking) and acting (doing) in a loop: the LLM thinks aloud, executes actions, and reflects on the results.

    Quick Summary

    ReAct connects thinking and acting: Think → Act → Observe → Repeat. The base pattern for robust AI agents.

    Explanation

ReAct agents follow a thought-action-observation loop: a Thought analyzes the situation, an Action invokes a tool, and an Observation feeds the tool's result back to the model. The loop repeats until the goal is reached. This is far more robust than pure chain-of-thought, because errors in intermediate steps surface in the observations and can be corrected in the next thought.
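The loop above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: `call_llm` is a stub in place of a real LLM API call, and the tool registry holds a dummy tool; a real agent framework adds prompt templates, error handling, and richer parsing.

```python
import re

def call_llm(transcript: str) -> str:
    # Stub in place of a real LLM call; it immediately finishes
    # so that the sketch is runnable on its own.
    return "Thought: Done.\nFinal Answer: 42"

# Hypothetical tool registry; real tools would call APIs or run code.
TOOLS = {
    "weather_tool": lambda arg: "15°C, sunny",
}

def react_loop(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        output = call_llm(transcript)      # Thought (and maybe Action)
        transcript += output + "\n"
        # Stop when the model produces a final answer.
        final = re.search(r"Final Answer:\s*(.*)", output)
        if final:
            return final.group(1).strip()
        # Otherwise parse "Action: tool(arg)", run the tool,
        # and append the Observation for the next iteration.
        action = re.search(r"Action:\s*(\w+)\((.*?)\)", output)
        if action:
            name, arg = action.groups()
            result = TOOLS[name](arg) if name in TOOLS else f"unknown tool {name}"
            transcript += f"Observation: {result}\n"
    return "No answer within step limit."

print(react_loop("What's the weather in Berlin?"))
```

The `max_steps` cap is essential in practice: without it, the think-act-observe loop has no guaranteed termination.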

    Marketing Relevance

    Standard pattern for agentic AI. LangChain, AutoGen, and other frameworks implement ReAct as the base architecture for tool-using agents.

    Example

    Thought: "I need current weather data." → Action: weather_tool(Berlin) → Observation: "15°C, sunny" → Thought: "Now I can formulate the recommendation." → Final Answer.
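To elicit a trace like the one above, the model is typically given a system prompt that spells out the Thought/Action/Observation format. The template below is an assumed, illustrative format, not taken from the source or from any specific framework:

```python
# Illustrative ReAct prompt template (assumed format).
REACT_PROMPT = """Answer the question using the following format:

Thought: reason about what to do next
Action: tool_name(argument)
Observation: (filled in by the runtime with the tool's result)
... (Thought/Action/Observation can repeat)
Thought: I now know the answer
Final Answer: the answer to the question

Available tools: weather_tool(city)

Question: {question}
"""

print(REACT_PROMPT.format(question="What should I wear in Berlin today?"))
```

The runtime, not the model, fills in each Observation line; generation is stopped at "Observation:" so the model cannot hallucinate tool results.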

    Common Pitfalls

ReAct is token-intensive because of its verbose reasoning, can get stuck in repetitive action loops, and requires careful prompt engineering to keep the Thought/Action/Observation format consistent.

    Origin & History

ReAct was introduced in 2022 by Yao et al. (Princeton, Google) in the paper "ReAct: Synergizing Reasoning and Acting in Language Models." It was the first approach to combine chain-of-thought (CoT) reasoning with tool use.

    Comparisons & Differences

    ReAct (Reasoning + Acting) vs. Chain-of-Thought

    Chain-of-thought only thinks; ReAct additionally executes actions and processes their results.

    ReAct (Reasoning + Acting) vs. Plan-and-Execute

    Plan-and-execute plans everything upfront; ReAct plans and corrects iteratively based on observations.
