CLI Import
The Inferless CLI is the simplest way to deploy ML models to production.
Install the Inferless CLI package
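Assuming the package is published on PyPI under the name inferless-cli (verify against the current Inferless docs), installation is a single pip command:

```shell
pip install inferless-cli
```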
Log in to your Inferless workspace. Running the login command opens a new browser window; copy the CLI command shown there and run it in your terminal. After logging in, you will see a confirmation message.
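A typical login invocation (subcommand name taken from the Inferless CLI; confirm with inferless --help):

```shell
inferless login
```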
Import a Model
You need an app.py file in the root folder containing the entrypoint class.
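A minimal app.py follows the Inferless entrypoint convention: a class exposing initialize, infer, and finalize. The upper-casing lambda below is only a stand-in for a real model load, and the key name "prompt" is an illustrative choice that must match whatever you declare in input_schema.py:

```python
# app.py -- illustrative Inferless entrypoint; the "model" is a toy stand-in.
class InferlessPythonModel:
    def initialize(self):
        # Called once when the container starts; load your real model here.
        self.model = lambda prompt: prompt.upper()

    def infer(self, inputs):
        # 'inputs' is a dict keyed by the parameter names from input_schema.py.
        prompt = inputs["prompt"]
        return {"generated_text": self.model(prompt)}

    def finalize(self):
        # Called on shutdown; release model resources here.
        self.model = None
```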
Then create the input_schema.py file, which defines the inputs your model expects.
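The exact fields expected in input_schema.py should be checked against the current Inferless docs; this sketch uses the commonly documented shape, declaring one required string parameter named "prompt":

```python
# input_schema.py -- illustrative schema for a single string input.
INPUT_SCHEMA = {
    "prompt": {
        "datatype": "STRING",   # data type of the parameter
        "required": True,       # whether callers must supply it
        "shape": [1],           # a single string value
        "example": ["What is quantum computing?"],
    }
}
```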
Once you have these files, run the init command to start a new model import.
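Assuming the import subcommand is named init (run inferless --help to confirm), starting a new import from the model's root folder looks like:

```shell
inferless init
```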
The CLI will prompt you for details about the model. Once init is complete, you will see the following files created:
- inferless-runtime-config.yaml: This file lists all the software packages and Python packages required for model inferencing.
- inferless.yaml: This file holds all the configurations required for the deployment. You can update it according to your requirements.
- input_schema.py: This file defines the input parameters for inference (key names, datatypes, and shapes). Update it whenever your model's inputs change.
Run a Model Locally
Before deploying, you can test the model locally with the run command.
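A typical local test run (subcommand name from the Inferless CLI; confirm with inferless --help), executed from the model's root folder:

```shell
inferless run
```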
Deploy a Model to Inferless
Run the deploy command to push the model to production.
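Assuming the subcommand is named deploy (verify with inferless --help):

```shell
inferless deploy
```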
After deployment, the CLI prints the status of the model. In the UI, you can track the build and deployment steps under the Progress section.
Getting the Logs
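A hedged sketch of fetching logs; the log subcommand and the -i flag for the model ID are assumptions, so check inferless --help for the exact form:

```shell
# <model-id> is the ID shown in the UI or in the deploy output.
inferless log -i <model-id>
```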
Redeploy the Model with Updated Code
All Options in Inferless CLI
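The full, current list of commands and flags is always available from the CLI itself:

```shell
inferless --help
inferless volume --help   # help for a specific subcommand group
```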
Optional Settings
Using Runtimes with CLI
During model init, if you have a custom requirements.txt file, you can use it to automatically generate the runtime config.yaml.
Creating the config from requirements.txt
Generated file
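The generated runtime config follows roughly this shape; the package names and versions below are placeholders for illustration, not output from a real run:

```yaml
build:
  system_packages:
    - "libssl-dev"
  python_packages:
    - "torch==2.1.0"
    - "transformers==4.36.0"
```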
If you don't have the requirements file in the same repo, you can build the config.yaml by following this documentation:
https://docs.inferless.com/model-import/bring-custom-packages
Push the runtime
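Assuming the runtime subcommand supports an upload action (the exact verb is an assumption; verify with inferless runtime --help):

```shell
inferless runtime upload
```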
The CLI will offer to update the config automatically; otherwise, you can update inferless.yaml manually.
Using an existing runtime
Creating a Volume
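A hedged sketch of creating a volume; the -n flag and the volume name my-volume are assumptions, so verify with inferless volume --help:

```shell
inferless volume create -n my-volume
```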
List all the volumes. Use this command to get the ID of the volume you want to use.
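Assuming the subcommand is list (verify with inferless volume --help):

```shell
inferless volume list
```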
Using an existing volume
Copy data from your machine to a volume. You can copy a single file or an entire folder.
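A sketch of both copy directions into a volume; the cp subcommand and the -s/-d flags are assumptions, and <volume-destination-path> is a placeholder for your volume's path:

```shell
# Copy a single file into the volume
inferless volume cp -s ./model.bin -d <volume-destination-path>

# Copy an entire folder into the volume
inferless volume cp -s ./checkpoints -d <volume-destination-path>
```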
List the data in a volume
Copy data from a volume to your local machine
Delete the data in a volume
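Hedged sketches of the three volume data operations above; the ls, cp, and rm subcommand names are assumptions (verify with inferless volume --help), and <volume-path> is a placeholder:

```shell
# List the files stored at a path inside the volume
inferless volume ls <volume-path>

# Copy a file from the volume back to the local machine
inferless volume cp -s <volume-path>/model.bin -d ./model.bin

# Delete data from the volume
inferless volume rm <volume-path>/model.bin
```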
Deprecated - Input / Output JSON
- input.json: This file holds the key for the input parameter. Whenever you change the name of the key in app.py, update it here accordingly.
- output.json: This file holds the name of the output key that the infer function returns.
Update input.json and output.json to match your model's inputs and outputs.
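An illustrative input.json, assuming the Triton-style request shape with an "inputs" array (field values here are placeholders, not from a real model); output.json mirrors this with an "outputs" array naming the key your infer function returns:

```json
{
  "inputs": [
    {
      "name": "prompt",
      "shape": [1],
      "datatype": "BYTES",
      "data": ["What is quantum computing?"]
    }
  ]
}
```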