Deploy Phi-4 using Inferless
Phi-4 is Microsoft’s latest 14-billion-parameter small language model (SLM). It is part of the Phi family, which aims to balance model size and performance, showcasing that smaller models can achieve state-of-the-art results.
Introduction
Phi-4 is a 14-billion parameter language model developed by Microsoft Research, designed to excel in complex reasoning tasks, particularly within STEM domains. Phi-4 strategically incorporates synthetic data throughout its training process, enhancing its problem-solving capabilities.
It achieves an 80.4 score on the MATH benchmark, surpassing larger models like Llama-3.3 70B, Qwen 2.5 72B Instruct and GPT-4o. It attains a score of 82.6 on the HumanEval coding benchmark, indicating strong code generation capabilities.
Our Observations
We have deployed the model on an A100 GPU(80GB). Here are our observations:
| Library | Inference Time | Cold Start Time | Tokens/Sec | Output Tokens Length |
| --- | --- | --- | --- | --- |
| vLLM | 2.78 sec | 39.95 sec | 32.6 | 128 |
Note: The inference time and cold start time are average values.
Defining Dependencies
We are using vLLM to serve the model on a single A100 (80GB).
Constructing the GitHub/GitLab Template
Now quickly construct the GitHub/GitLab template; this process is mandatory, and make sure you don’t add any file named `model.py`.
You can also add other files to this directory.
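For reference, the repository for this tutorial would typically contain at least the following files (the folder name and layout below are illustrative):

```
phi-4/
├── app.py                         # inference class with initialize, infer, and finalize
└── inferless-runtime-config.yaml  # custom runtime definition (pip/apt packages)
```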
Create the Input Schema with Pydantic
Using the `inferless` Python client and Pydantic, you can define structured schemas directly in your code for input and output, eliminating the need for an external file.
Input Schema
When defining an input schema with Pydantic, you need to annotate your class attributes with the appropriate types, such as `str`, `float`, `int`, etc. These type annotations specify what type of data each field should contain. The `default` value serves as the example input for testing with the `infer` function.
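Here is a minimal sketch of an input schema, assuming the `@inferless.request` decorator from the `inferless` client; the field names and default values are illustrative:

```python
import inferless
from pydantic import BaseModel, Field

@inferless.request
class RequestObjects(BaseModel):
    # Each attribute is typed; `default` doubles as the example input
    # used when testing the `infer` function.
    prompt: str = Field(default="Explain the theory of relativity in simple terms.")
    temperature: float = Field(default=0.7)
    max_tokens: int = Field(default=128)
```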
Output Schema
The `@inferless.response` decorator helps you define structured output schemas.
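For example, a response schema might look like the sketch below; the `generated_text` field name and its default are illustrative:

```python
@inferless.response
class ResponseObjects(BaseModel):
    generated_text: str = Field(default="Test output")
```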
Usage in the `infer` Function
Once you have annotated the objects, you can expect the `infer` function to receive a `RequestObjects` instance as input and return a `ResponseObjects` instance as output, ensuring the results adhere to a defined structure.
Create the class for inference
In `app.py` we will define the class and import all the required functions (a sketch of the class follows the list below):
- `def initialize`: In this function, you will initialize your model and define any `variable` that you want to use during inference.
- `def infer`: This function gets called for every request that you send. Here you can define all the steps that are required for the inference. You can also pass custom values for inference through the `inputs(dict)` parameter.
- `def finalize`: This function cleans up all the allocated memory.
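Putting these pieces together, `app.py` could look roughly like the sketch below. It assumes vLLM’s `LLM`/`SamplingParams` API, the `microsoft/phi-4` model ID on Hugging Face, and the `InferlessPythonModel` class name; treat it as an illustrative outline rather than the final implementation.

```python
import inferless
from pydantic import BaseModel, Field
from vllm import LLM, SamplingParams

@inferless.request
class RequestObjects(BaseModel):
    prompt: str = Field(default="Explain the theory of relativity in simple terms.")
    temperature: float = Field(default=0.7)
    max_tokens: int = Field(default=128)

@inferless.response
class ResponseObjects(BaseModel):
    generated_text: str = Field(default="Test output")

class InferlessPythonModel:
    def initialize(self):
        # Load Phi-4 once per container; the model ID is an assumption.
        self.llm = LLM(model="microsoft/phi-4")

    def infer(self, request: RequestObjects) -> ResponseObjects:
        # Build sampling parameters from the request fields.
        sampling_params = SamplingParams(
            temperature=request.temperature,
            max_tokens=request.max_tokens,
        )
        # vLLM returns one RequestOutput per prompt; take the first completion.
        outputs = self.llm.generate([request.prompt], sampling_params)
        return ResponseObjects(generated_text=outputs[0].outputs[0].text)

    def finalize(self):
        # Drop the model reference so the allocated GPU memory can be reclaimed.
        self.llm = None
```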
Creating the Custom Runtime
This is a mandatory step where we allow users to upload their custom runtime through `inferless-runtime-config.yaml`.
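A minimal sketch of what `inferless-runtime-config.yaml` might contain is shown below; the exact keys and package versions are assumptions, so consult the Inferless runtime documentation and your own requirements for the definitive schema.

```yaml
build:
  cuda_version: "12.1.1"        # assumed CUDA version for the runtime image
  python_packages:
    - "vllm"                    # serving engine used in this tutorial
    - "inferless"               # Python client for request/response schemas
    - "pydantic"                # schema definitions
```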
Method A: Deploying the model on Inferless Platform
Inferless supports multiple ways of importing your model. For this tutorial, we will use GitHub.
Step 1: Log in to the Inferless dashboard and click on the Import Model button
Navigate to your desired workspace in Inferless and click on the `Add a custom model` button at the top right. An import wizard will open up.
Step 2: Follow the UI to complete the model Import
- Select the GitHub/GitLab Integration option to connect your source code repository with the deployment environment.
- Navigate to the specific GitHub repository that contains your model’s code. Here, you will need to identify and enter the name of the model you wish to import.
- Choose the appropriate type of machine that suits your model’s requirements. Additionally, specify the minimum and maximum number of replicas to define the scalability range for deploying your model.
- Optionally, you can enable automatic build and deployment. This feature triggers a new deployment automatically whenever there is a new code push to your repository.
- If your model requires additional software packages, configure the Custom Runtime settings by including necessary pip or apt packages. Also, set up environment variables such as Inference Timeout, Container Concurrency, and Scale Down Timeout to tailor the runtime environment according to your needs.
- Wait for the validation process to complete, ensuring that all settings are correct and functional. Once validation is successful, click on the “Import” button to finalize the import of your model.
Step 3: Wait for the model build to complete; it usually takes ~5-10 minutes
Step 4: Use the APIs to call the model
Once the model is in ‘Active’ status, you can click on the ‘API’ page to call the model.
Here is the Demo:
Method B: Deploying the model on Inferless CLI
Inferless allows you to deploy your model using Inferless-CLI. Follow the steps to deploy using Inferless CLI.
Clone the repository of the model
Let’s begin by cloning the model repository:
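For example (the repository URL below is an assumption; use the actual repository for this tutorial):

```bash
git clone https://github.com/inferless/phi-4.git
cd phi-4
```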
Deploy the Model
To deploy the model using Inferless CLI, execute the following command:
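Based on the flags explained below, the command would look roughly like this (the exact subcommand syntax may differ across Inferless CLI versions):

```bash
inferless deploy --gpu A100 --runtime inferless-runtime-config.yaml
```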
Explanation of the Command:
- `--gpu A100`: Specifies the GPU type for deployment. Available options include `A10`, `A100`, and `T4`.
- `--runtime inferless-runtime-config.yaml`: Defines the runtime configuration file. If not specified, the default Inferless runtime is used.