Deploy Meta-Llama-3-8B using Inferless
Llama 3 is an auto-regressive language model built on a refined transformer architecture. It incorporates supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align the model with human preferences.
Introduction
Meta has released Llama 3, the latest generation of open LLMs in the Llama family. The Llama 3 models were trained on 8x more data, over 15 trillion tokens. They have a context length of 8K tokens and a tokenizer vocabulary of 128,256 tokens (up from 32K in the previous version). In this tutorial we will deploy Llama-3 8B.
Our Observations
We have deployed the model using the vLLM library on an A100 GPU (80GB). Here are our observations:
| Library | Inference Time | Cold Start Time | Tokens/Sec |
| --- | --- | --- | --- |
| vLLM | 1.63 sec | 13.30 sec | 78.65 |
Defining Dependencies
We are using the vLLM library, which boosts the inference speed of the LLM.
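Below is a minimal standalone sketch of how vLLM generates text, independent of the Inferless wrapper built in the next sections. It assumes you have vllm installed locally and a GPU with enough memory to load the 8B weights; the prompts are just examples.

```python
from vllm import LLM, SamplingParams

# vLLM batches requests with continuous batching, which is where most of the
# throughput gain comes from; several prompts can be served in one generate() call.
llm = LLM(model="Undi95/Meta-Llama-3-8B-hf")
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

outputs = llm.generate(["What is AI?", "Explain transformers in one sentence."], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```

The same generate pattern is wrapped inside the InferlessPythonModel class later in this tutorial.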
Constructing the GitHub/GitLab Template
Now quickly construct the GitHub/GitLab template. This process is mandatory, and make sure you don't add any file named `model.py`.
Llama-3/
├── app.py
├── inferless-runtime-config.yaml
├── inferless.yaml
└── input_schema.py
You can also add other files to this directory.
Create the class for inference
In the `app.py` we will define the class and import all the required functions:
- `def initialize`: In this function, you will initialize your model and define any variable that you want to use during inference.
- `def infer`: This function gets called for every request that you send. Here you can define all the steps that are required for the inference. You can also pass custom values for inference through the `inputs(dict)` parameter.
- `def finalize`: This function cleans up all the allocated memory.
from vllm import LLM, SamplingParams
from pathlib import Path

class InferlessPythonModel:
    def initialize(self):
        model_id = "Undi95/Meta-Llama-3-8B-hf"  # Specify the model repository ID
        # Define sampling parameters for model generation
        self.sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)
        # Initialize the LLM object
        self.llm = LLM(model=model_id)

    def infer(self, inputs):
        prompts = inputs["prompt"]  # Extract the prompt from the input
        result = self.llm.generate(prompts, self.sampling_params)
        # Extract the generated text from the result
        result_output = [output.outputs[0].text for output in result]
        # Return a dictionary containing the result
        return {'generated_text': result_output[0]}

    def finalize(self):
        pass
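Before pushing the code, you can smoke-test the class locally by calling its methods in the same order Inferless does. This is a minimal sketch; it assumes you run it from the repository root on a machine with a GPU large enough to load the model.

```python
# Local smoke test for app.py; Inferless invokes these methods in the same order.
from app import InferlessPythonModel

model = InferlessPythonModel()
model.initialize()                               # loads the model weights via vLLM
output = model.infer({"prompt": "What is AI?"})  # mirrors the input schema defined below
print(output["generated_text"])
model.finalize()                                 # no-op here, but part of the lifecycle
```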
Create the Input Schema
We have to create an `input_schema.py` file in your GitHub/GitLab repository; this will help us create the input parameters. You can check out our documentation on Input / Output Schema.
For this tutorial, we have defined a parameter `prompt` which is required during the API call. Now let's create the `input_schema.py`.
INPUT_SCHEMA = {
    "prompt": {
        'datatype': 'STRING',
        'required': True,
        'shape': [1],
        'example': ["What is AI?"]
    }
}
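Once the model is deployed, an API call against this schema might look like the sketch below. The endpoint URL, API key, and exact payload layout are placeholders here; copy the real values and request format from your model's API page on the Inferless dashboard.

```python
import requests

# Placeholder endpoint and key; replace with the values shown on your model's
# API page in the Inferless dashboard.
URL = "https://<your-inference-endpoint>"
HEADERS = {"Authorization": "Bearer <your-api-key>", "Content-Type": "application/json"}

# Illustrative payload mirroring INPUT_SCHEMA above; confirm the exact format
# in the Inferless Input / Output Schema documentation.
payload = {
    "inputs": [
        {
            "name": "prompt",
            "shape": [1],
            "data": ["What is AI?"],
            "datatype": "STRING",
        }
    ]
}

response = requests.post(URL, headers=HEADERS, json=payload)
print(response.json())
```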
Creating the Custom Runtime
This is a mandatory step where we allow the users to upload their custom runtime through inferless-runtime-config.yaml.
build:
  cuda_version: "12.1.1"
  system_packages:
    - "libssl-dev"
  python_packages:
    - "torch==2.2.1"
    - "vllm==0.4.1"
Deploying the model on Inferless
Inferless supports multiple ways of importing your model. For this tutorial, we will use GitHub.
Import the Model through GitHub
Click on the `Repo (Custom code)` option and then click on `Add provider` to connect your GitHub account. Once your account integration is completed, click on your GitHub account and continue.
Provide the Model details
Enter the name of the model and pass the GitHub repository URL.
Configure the machine
In this 4th step, the user has to configure the inference setup. On the Inferless platform, we support all the GPUs. For this tutorial, we recommend using an A100 GPU. Select A100 from the drop-down menu in the GPU Type.
You also have the flexibility to select from different machine types. Opting for a `dedicated` machine type will grant you access to an entire GPU, while choosing the `shared` option allocates 50% of the VRAM. In this tutorial, we will opt for the dedicated machine type.
Choose the Minimum and Maximum replicas that you would need for your model:
- Min replica: The number of inference workers to keep on at all times.
- Max replica: The maximum number of inference workers to allow at any point in time.
If you wish to enable Automatic rebuild for your model setup, toggle the switch. Note that setting up a web-hook is necessary for this feature. Click here for more details.
In the `Advanced configuration`, we have the option to select the custom runtime. First, click on `Add runtime` to upload the inferless-runtime-config.yaml file, give it any name, and save it. Choose the runtime from the drop-down menu and then click on continue.
Review and Deploy
In this final stage, carefully review all modifications. Once you've examined all changes, proceed to deploy the model by clicking the `Submit` button.
Voilà, your model is now deployed!
Method B: Deploying the model on Inferless CLI
Inferless allows you to deploy your model using Inferless-CLI. Follow the steps to deploy using Inferless CLI.
Initialization of the model
Create the app.py and inferless-runtime-config.yaml files and move them to the working directory. Run the following command to initialize your model:
inferless init
Upload the custom runtime
Once you have created the inferless-runtime-config.yaml file, you can run the following command:
inferless runtime upload
Upon entering this command, you will be prompted to provide the configuration file name. Enter the name and make sure to update it in the inferless.yaml file. Now you are ready for the deployment.
Deploy the Model
Execute the following command to deploy your model. Once deployed, you can track the build logs on the Inferless platform:
inferless deploy