
    Function Calling (LLM)

    Also known as:
    Tool Calling
    Tool Use
    LLM Function Calling
    Tool-Augmented LLM
    Updated: 2/10/2026

Function Calling enables LLMs to generate structured function calls, bridging natural language and APIs, databases, and other external tools.

    Quick Summary

Function Calling lets LLMs invoke real APIs: the key technology that turns chatbots from text generators into agents that act.

    Explanation

Based on the conversation, the LLM decides which function to call and returns structured parameters as JSON. Execution happens outside the model; the result flows back into the dialog. OpenAI, Anthropic, and Google all offer native Function Calling APIs.
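The mechanics can be sketched in a few lines. Everything here is illustrative: the tool schema follows the JSON-Schema style the major function-calling APIs use, but the tool name, the stub implementation, and the `dispatch` helper are assumptions, not any vendor's real API.

```python
import json

# Hypothetical tool schema in the JSON-Schema style used by the major
# function-calling APIs (names here are illustrative, not a real API).
GET_WEATHER_SCHEMA = {
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Local implementation -- execution always happens outside the model.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 21}  # stub result

TOOLS = {"get_weather": get_weather}

def dispatch(call_json: str) -> str:
    """Execute a model-generated tool call and return the result
    as JSON so it can flow back into the dialog."""
    call = json.loads(call_json)
    fn = TOOLS[call["name"]]
    result = fn(**call["arguments"])
    return json.dumps(result)

# A structured call as the LLM might emit it:
print(dispatch('{"name": "get_weather", "arguments": {"city": "Berlin"}}'))
```

The model only ever produces the JSON string; the application owns the lookup table and the actual execution, which is what keeps the model sandboxed from side effects.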

    Marketing Relevance

Enables chatbots to execute real actions: place orders, book appointments, query data, rather than just generate text.

    Example

    "Show me last month's revenue" → LLM generates: get_revenue(period="last_month") → API delivers data → LLM formulates the answer.

    Common Pitfalls

Incorrect parameter extraction; destructive actions executed without user confirmation; calls to tools that don't exist (hallucinated tool calls).

    Origin & History

OpenAI introduced Function Calling in June 2023 (GPT-3.5/GPT-4). Anthropic followed with Tool Use in 2024, and Google added Function Calling to Gemini. By 2025, parallel function calling and structured output were standard across all major LLMs.

    Comparisons & Differences

    Function Calling (LLM) vs. Agentic AI

    Function Calling is a single tool invocation; Agentic AI orchestrates many function calls autonomously in multi-step workflows.
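The distinction can be made concrete with a small loop. This is a sketch under assumed interfaces: `model_step` stands in for an LLM that either requests a tool call or returns a final answer, and both names are hypothetical.

```python
# Sketch of the difference: a single tool call vs. an agent loop that
# keeps invoking tools until the model signals it is done.
def run_agent(model_step, tools, max_steps=5):
    """model_step(history) returns ("call", name, args) or ("final", text)."""
    history = []
    for _ in range(max_steps):
        action = model_step(history)
        if action[0] == "final":          # model is done: stop the loop
            return action[1]
        _, name, args = action
        history.append((name, tools[name](**args)))  # feed result back
    raise RuntimeError("agent did not finish within max_steps")
```

A single Function Call is one iteration of this loop; Agentic AI is the loop itself, with the model deciding at each step whether to call another tool or answer.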

    Function Calling (LLM) vs. Structured Output

    Structured Output guarantees JSON format; Function Calling uses structured output specifically for tool invocations with parameters.
