Import using Docker
Bring your own image
There are two ways to integrate your container:

- Docker Image
- Dockerfile

Docker Image URI (Private/Public)
We currently support Docker Hub (private) and AWS Elastic Container Registry. Once you have pushed your image, make sure it follows the API specifications below.
API Contract Requirements
The image is expected to run a web server when started with `docker run`.

- It is mandatory to host the web server on port **8080**.
The following API contract must be strictly followed to host a custom Docker image on Inferless. There are three API specifications you need to meet.
Server Metadata API
- path = `/v2`, http_method = GET
- path = `/v2/models/{$MODEL_NAME}`, http_method = GET
A successful request to these APIs is indicated by a 200 HTTP status code. `$MODEL_NAME` is used in the subsequent API paths.
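A minimal sketch of the metadata endpoints, using only the Python standard library for illustration (Inferless's own example uses FastAPI). The model name and response body are assumptions; only the paths and the 200 status code come from the contract:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

MODEL_NAME = "my-model"  # hypothetical; substitute your model's name

class MetadataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Both metadata paths must answer with HTTP 200.
        if self.path in ("/v2", f"/v2/models/{MODEL_NAME}"):
            body = json.dumps({"name": MODEL_NAME}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

# In production the server must listen on port 8080; an ephemeral
# port is used here only so the demo can run anywhere.
server = HTTPServer(("127.0.0.1", 0), MetadataHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

status = urllib.request.urlopen(f"{base}/v2").status
print(status)  # -> 200
```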
Health APIs
- path = `/v2/health/live`, http_method = GET
- path = `/v2/health/ready`, http_method = GET
- path = `/v2/models/{$MODEL_NAME}/ready`, http_method = GET
These APIs must respond with a 200 HTTP status code for the server to be marked as healthy.
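A sketch of the three health endpoints plus a client-side probe, again using the standard library for illustration; the model name is a placeholder, and the "healthy" check here simply requires all three paths to return 200, as the contract states:

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

MODEL_NAME = "my-model"  # hypothetical model name

HEALTH_PATHS = (
    "/v2/health/live",
    "/v2/health/ready",
    f"/v2/models/{MODEL_NAME}/ready",
)

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each health endpoint must return 200 once the server is healthy.
        self.send_response(200 if self.path in HEALTH_PATHS else 404)
        self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

def is_healthy(base_url: str) -> bool:
    """Server counts as healthy only if every health endpoint returns 200."""
    for path in HEALTH_PATHS:
        try:
            if urllib.request.urlopen(base_url + path).status != 200:
                return False
        except urllib.error.URLError:
            return False
    return True

# Demo on an ephemeral port; a real deployment must use port 8080.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(is_healthy(f"http://127.0.0.1:{server.server_address[1]}"))  # -> True
```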
Inference API
- path = `/v2/models/{$MODEL_NAME}/infer`, http_method = POST, content_type = `application/json`
The request body can be structured as per your implementation.
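A sketch of the inference endpoint and a matching client request. The `inputs`/`outputs` body shape is an assumption for the demo (the contract leaves the body up to you); only the path, POST method, and JSON content type come from the spec:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

MODEL_NAME = "my-model"  # hypothetical model name

class InferHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != f"/v2/models/{MODEL_NAME}/infer":
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # The body schema is up to your implementation; this demo
        # just echoes the inputs back as outputs.
        body = json.dumps({"outputs": payload.get("inputs", [])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

# Demo on an ephemeral port; a real deployment must use port 8080.
server = HTTPServer(("127.0.0.1", 0), InferHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_address[1]}/v2/models/{MODEL_NAME}/infer",
    data=json.dumps({"inputs": ["hello"]}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
result = json.loads(urllib.request.urlopen(req).read())
print(result)  # -> {'outputs': ['hello']}
```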
Dockerfile Integration

You can use GitHub or GitLab for Dockerfile integration.
Example
This is a sample FastAPI application for a text generation use case.
main.py
model.py
Dockerfile
requirements.txt
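A hypothetical Dockerfile for the file layout above, assuming `requirements.txt` lists `fastapi` and `uvicorn` and that `main.py` exposes an `app` object; adapt names and versions to your project:

```dockerfile
# Hypothetical Dockerfile for the FastAPI layout above
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY main.py model.py ./
# The contract requires the web server to listen on port 8080
EXPOSE 8080
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
```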