
    Seldon Core

    Updated: 2/11/2026

    Kubernetes-native open-source platform for deploying, scaling, and monitoring ML models in production.

    Quick Summary

    Seldon Core deploys ML models as Kubernetes microservices with native A/B testing, canary deployments, and explainability.

    Explanation

    Seldon Core defines a Kubernetes custom resource, SeldonDeployment, that describes an ML model (or a graph of models and transformers) to be deployed as microservices. It natively supports A/B testing, canary deployments, model explainability, and multi-armed bandit routing.
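    As a concrete illustration, a minimal SeldonDeployment manifest might split traffic between a stable predictor and a canary. This is a sketch, not a complete production configuration: the model URIs and resource name are placeholders, and it assumes the prebuilt SKLEARN_SERVER runtime.

    ```yaml
    # Hypothetical canary rollout: 90% of traffic to the current model,
    # 10% to a new version. Names and modelUri values are placeholders.
    apiVersion: machinelearning.seldon.io/v1
    kind: SeldonDeployment
    metadata:
      name: iris-model
    spec:
      predictors:
        - name: main
          replicas: 2
          traffic: 90          # percentage of requests routed here
          graph:
            name: classifier
            implementation: SKLEARN_SERVER
            modelUri: gs://my-bucket/sklearn/iris
        - name: canary
          replicas: 1
          traffic: 10          # small slice of live traffic for the new model
          graph:
            name: classifier
            implementation: SKLEARN_SERVER
            modelUri: gs://my-bucket/sklearn/iris-v2
    ```

    Applying a manifest like this with `kubectl apply` is what makes Seldon Core "Kubernetes-native": the rollout, scaling, and traffic split are all declared in the resource rather than scripted imperatively.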

    Marketing Relevance

    Seldon Core is ideal for Kubernetes-centric enterprises with complex ML deployment requirements.

    Common Pitfalls

    Seldon Core requires solid Kubernetes expertise, its CRD configuration can become complex for multi-step inference graphs, and it adds meaningful operational overhead for simple single-model deployments.

    Origin & History

    Seldon Technologies was founded in London in 2014. Seldon Core was released as an open-source project in 2018 and became one of the most widely adopted options for Kubernetes-based ML serving. Seldon Deploy, the company's commercial product, adds enterprise features on top of the open-source core.

    Comparisons & Differences

    Seldon Core vs. KServe

    KServe (formerly KFServing) is more lightweight and integrates closely with Kubeflow; Seldon Core offers more enterprise-oriented features such as built-in explainability and multi-armed bandit routing.

    Seldon Core vs. BentoML

    BentoML focuses on developer experience and model packaging; Seldon Core focuses on Kubernetes-native governance, routing, and monitoring.
