Hugging Face
Supported Frameworks
You can use a Transformers-based or a Diffusers-based model from Hugging Face.
Steps to load your model
To import a model from Hugging Face, the requirements are listed below. As a next step, the Hugging Face model is imported into GitHub before it is pushed to Inferless. How Inferless works is:
Hugging Face -> Copy and create a repo in GitHub -> Load the model repo into Inferless.
You can use the imported GitHub repo to change the pre-processing and post-processing code.
Prerequisite: Note the Model Name, Type, and Framework
- Navigate to the Hugging Face page of the model you want to import into Inferless.
- Take note of the "Model Name" (you can also use the copy button), Task Type, Model Framework, and Model Type. These will be required for the next steps.
The fields to be copied/noted are highlighted in RED.
Step 1: Add a model to your workspace
- Navigate to your desired workspace in Inferless and click the "Add a custom model" button at the top right. An import wizard will open.
Click on Add Model
Step 2: Choose the source of your model
- Since we are using a model from Hugging Face in this example, select Hugging Face as the method of uploading from the Provider list.
- To proceed with the upload, you will need to connect your Hugging Face account and a private GitHub account. This is a mandatory step: the Hugging Face credentials enable us to import your private repositories, while GitHub holds the template copy of the Hugging Face repo, where you can later modify your pre-processing and post-processing functions.
- For more information: Inferless works by copying the model from Hugging Face and creating a new repository in GitHub. The model repository is then loaded into Inferless.
Select Hugging Face and add the necessary accounts
Step 3: Enter the model details
- Model Details: In this step, add your model name (the name you wish to call your model), choose the model type (e.g. Transformer), choose the task type (e.g. Text generation), and enter the Hugging Face model name.
Enter the details as noted
- In case you would like to set up Automatic rebuild for your model, enable it. You will need to set up a webhook for this method. Click here for more details.
Step 4: Edit the Inference Code and Input/Output Schema
- Model Code: In this step, you can modify the input parameters (by adding to input_schema.py) and the output parameters, and you can also modify the model load and inference code in app.py.
Enter the details as noted
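As a rough sketch of what these two files typically contain, the snippet below shows an illustrative input_schema.py entry and an app.py class. The field names, schema layout, and the gpt2 text-generation model are assumptions for illustration; the template generated in your GitHub repo may differ, so treat the generated files as the source of truth.

```python
# input_schema.py -- illustrative input definition (field names are assumptions).
INPUT_SCHEMA = {
    "prompt": {
        "datatype": "STRING",
        "required": True,
        "shape": [1],
        "example": ["Once upon a time"],
    }
}

# app.py -- illustrative model load and inference code.
class InferlessPythonModel:
    def initialize(self):
        # Load the Hugging Face model once per replica.
        # transformers is imported lazily here so the schema above stands alone.
        from transformers import pipeline
        self.generator = pipeline("text-generation", model="gpt2")

    def infer(self, inputs):
        # Pre-process the input, run the model, post-process the output.
        prompt = inputs["prompt"]
        output = self.generator(prompt, max_new_tokens=50)
        return {"generated_text": output[0]["generated_text"]}

    def finalize(self):
        # Release the model when the replica is scaled down.
        self.generator = None
```

Pre-processing and post-processing changes go inside `infer`, which is why the GitHub copy of the repo is created during import.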
Step 5: Configure Machine and Environment
- Choose the type of machine, and specify the minimum and maximum number of replicas for deploying your model.
- Min scale - the minimum number of replicas to keep running.
- Max scale - the maximum number of replicas to scale up to.
- Configure a Custom Runtime (if you need pip or apt packages), choose Volume and Secrets, and set environment variables such as Inference Timeout, Container Concurrency, and Scale Down Timeout.
Set runtime and configuration
Step 6: Review your model details
- Once you click "Continue," you will be able to review the details added for the model.
- If you would like to make any changes, you can go back and make them.
- Once you have reviewed everything, click Deploy to start the model import process.
Review all the details carefully before proceeding
Step 7: Run your model
- Once you click Deploy, the model import process will start.
- It may take some time to complete the import process; during this time, you will be redirected to your workspace, where you can see the status of the import under the "In Progress/Failed" tab.
View the model under `In-Progress/ Failed`
- If you encounter any errors during the model import process, or if you want to view the build logs for any reason, you can click the three-dots menu and select "View build logs". This will show you a detailed log of the import process, which can help you troubleshoot any issues you may encounter.
- After the upload, the model will be available under "My Models".
- You can then select the model and go to "My Models" -> API -> Inference Endpoint details. Here you will find the API endpoints that can be called. You can click the copy button on the right and call your model.
Under the API Tab, you can view the API endpoint details.
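As an illustration of what a call to the copied endpoint might look like, here is a minimal sketch using only the Python standard library. The URL, the payload field names, and the Bearer-token header are assumptions for illustration; use the exact endpoint details and workspace API key shown in the API tab.

```python
import json
import urllib.request


def build_request(endpoint_url, api_key, prompt):
    """Assemble an inference request. The payload layout below is an
    illustrative Triton-style schema, not a confirmed Inferless contract."""
    payload = {
        "inputs": [
            {"name": "prompt", "shape": [1], "datatype": "BYTES", "data": [prompt]}
        ]
    }
    headers = {
        "Authorization": f"Bearer {api_key}",  # workspace API key (assumed header)
        "Content-Type": "application/json",
    }
    return urllib.request.Request(
        endpoint_url,
        data=json.dumps(payload).encode(),
        headers=headers,
        method="POST",
    )


if __name__ == "__main__":
    # Placeholder values -- replace with the endpoint and key from your workspace.
    req = build_request("https://example.com/api/v1/infer", "YOUR_API_KEY", "Hello")
    # Uncomment to actually send the request:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp))
```

The inference result returned by the endpoint is the output of such calls.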
Extra Step: Getting API key details
- You can now call the endpoint from your end; the inference result will be the output of these calls.
- In case you need help with API keys:
- Click on Settings, available at the top, next to your workspace name
- Click on "Workspace API keys"
- You can view the details of your key or generate a new one