
    SAM (Segment Anything Model)

    Also known as:
    Segment Anything
    SAM 2
    Meta SAM
    Foundation Model for Segmentation
    Updated: 2/10/2026

    A foundation model by Meta for universal image segmentation that can segment any object in an image with zero-shot capability.

    Quick Summary

SAM (Segment Anything) is Meta's foundation model for universal segmentation – it segments any object zero-shot from a simple prompt such as a point click or a bounding box.

    Explanation

SAM was trained on more than 1 billion masks and can be prompted with points, boxes, or rough masks; text prompting was explored in the paper but was not part of the public release. SAM 2 (2024) extends the approach to video.
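
As a sketch of the prompting workflow, the following uses Meta's open-source segment-anything package. The checkpoint filename, image path, and click coordinates are placeholders:

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM checkpoint (ViT-H variant; the path is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB uint8 image of shape (H, W, 3).
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # one-time image embedding; prompts are cheap afterwards

# A single foreground click (label 1) at pixel (x=500, y=320).
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 320]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks for an ambiguous prompt
)
best_mask = masks[np.argmax(scores)]  # boolean (H, W) array
```

Because the image embedding is computed once in set_image, each additional prompt on the same image is nearly instantaneous, which is what makes interactive click-based tools feasible.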

    Marketing Relevance

SAM democratizes segmentation – no domain-specific training data is needed. It is useful for creative tools, medical imaging, and data annotation.

    Example

    A design tool uses SAM to cut out objects in photos – the user simply clicks on the desired object.
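
Continuing the snippet in the Explanation section, a minimal sketch of the cutout step itself – plain NumPy alpha compositing, with the output filename as a placeholder:

```python
import numpy as np
import cv2

# Turn the chosen mask into an RGBA cutout: pixels inside the mask stay
# opaque, everything else becomes fully transparent.
alpha = best_mask.astype(np.uint8) * 255   # (H, W) alpha channel
cutout = np.dstack([image, alpha])         # (H, W, 4) RGBA image

cv2.imwrite("cutout.png", cv2.cvtColor(cutout, cv2.COLOR_RGBA2BGRA))
```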

    Common Pitfalls

Real-time use demands significant compute, especially with the larger ViT backbones. SAM struggles with abstract or textureless regions, and the finest edges (hair, thin structures) can be imprecise.

    Origin & History

Meta released SAM in April 2023 together with the SA-1B dataset (1.1 billion masks on 11 million images). It was the first "foundation model" for segmentation. SAM 2 (July 2024) extended the model to video segmentation, adding a streaming memory for tracking objects across frames.

    Comparisons & Differences

    SAM (Segment Anything Model) vs. U-Net

    U-Net needs domain-specific training. SAM is a foundation model and works zero-shot on any image.

    SAM (Segment Anything Model) vs. Mask R-CNN

    Mask R-CNN detects and segments predefined classes. SAM segments any object class-agnostically.

