What is Inferless?
Inferless helps you deploy your custom machine learning models in a serverless fashion, with ease and the lowest cold starts.
Inferless offers a serverless platform that simplifies the deployment of machine learning models. Our developer-friendly solution takes the complexities out of managing hardware and provides autoscaling capabilities for a seamless experience. Import your models from popular providers like Huggingface, AWS Sagemaker, and Google Vertex AI, and let Inferless handle the rest.
- **Deploy Custom Models:** Our platform enables multiple models and workloads to share GPUs, with automatic rebalancing and node draining to ensure optimized utilization and reduced cost.
- **Advanced Integrations:** One of our standout features is seamless model import from your favorite repositories, including Huggingface, AWS Sagemaker, Google Vertex AI, and Github, all with a single click. Stay tuned for our upcoming Azure integration!
- **Get the Lowest Cold Starts:** Inferless auto-scales based on requests per second, letting you scale from zero to thousands of GPUs seamlessly.
-
**Fully Model and Framework agnostic: ** You can easily deploy ML models including deep learning models from major ML frameworks like Pytorch, Tensorflow, ONNX, and even custom Python functions.
- **Volumes:** Get NFS-like volumes to store your data and models, making it easy to share data between different models and workloads.
- **Custom Runtimes:** Get your custom software and packages installed via an easy-to-use YAML file, with no need to write complex app-server logic.
- **Advanced Monitoring Capabilities:** Inferless comes with built-in Prometheus metrics and Grafana dashboards for visualizing GPU usage and other system metrics.
- **Fractional GPU Support:** With the Inferless serverless offering, you can get fractional GPUs, so you no longer need to worry about the cost of underutilized GPUs; you only pay for what you use.
- **Tackling Cold-Start Challenges:** Our custom-built orchestration engine, advanced router, and proprietary storage infrastructure work in tandem to help you achieve the lowest cold starts for your ML models.
- **Full CI/CD Integration:** New versions are deployed automatically through CI/CD pipelines, without having to rebuild or redeploy the infrastructure.
- **API Endpoints:** Obtain ready-to-use API endpoints that can be effortlessly integrated into your backend or frontend applications.
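To give a feel for the Custom Runtimes item above, here is a hedged sketch of what a runtime YAML might look like. The field names (`build`, `system_packages`, `python_packages`) and versions are illustrative assumptions, not the authoritative Inferless schema; consult the Inferless documentation for the exact format.

```yaml
# Hypothetical runtime config sketch -- field names are illustrative.
build:
  system_packages:
    - "ffmpeg"          # OS-level dependency, installed via the package manager
  python_packages:
    - "torch==2.1.0"    # pinned Python dependencies for reproducible builds
    - "transformers"
```

The idea is that you declare dependencies once in YAML and the platform bakes them into your model's runtime, instead of you maintaining a custom app server image.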
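Once a model is deployed, the API endpoint can be called like any HTTP service. The sketch below shows a generic way to assemble such a request in Python; the URL, token, and payload schema are placeholders for illustration, not the actual Inferless API, so substitute the values shown for your endpoint in the console.

```python
import json

# Placeholders -- replace with the endpoint URL and token from your console.
ENDPOINT_URL = "https://api.example.com/v1/models/my-model/infer"
API_TOKEN = "YOUR_API_TOKEN"

def build_request(inputs: dict) -> tuple[dict, str]:
    """Assemble auth headers and a JSON body for an inference call."""
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"inputs": inputs})  # payload schema is illustrative
    return headers, body

headers, body = build_request({"prompt": "Hello, world"})
print(body)
```

From here, an HTTP client such as `requests` would send it with `requests.post(ENDPOINT_URL, headers=headers, data=body)`.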
Start deploying your models with Inferless today, and enjoy a simplified and efficient machine learning experience!