Deploy Starling 7B using Inferless
Starling 7B is an LLM trained with Reinforcement Learning from AI Feedback (RLAIF). Starling-7B-alpha scores 8.09 on MT-Bench with GPT-4 as the judge, outperforming every model to date on that benchmark.
Our Observations
We have deployed a 4-bit GPTQ-quantized version of the model on an A100 GPU (80 GB) and observed an average inference time of 5.04 sec, an average throughput of 41.99 tokens/sec, and an average cold-start time of 9.76 sec.
Defining Dependencies
We are using the AutoGPTQ library, which enables you to run LLMs with a low memory footprint. We deploy a GPTQ 4-bit quantized version of the model.
Constructing the GitHub/GitLab Template
Now, quickly construct the GitHub/GitLab template. This process is mandatory; make sure you don't add any file named model.py.
starling-7B/
├── app.py
└── config.yaml
You can also add other files to this directory.
Create the class for inference
In app.py, we will define the class and import all the required functions.
def initialize: In this function, you will initialize your model and define any variable that you want to use during inference.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

class InferlessPythonModel:
    def initialize(self):
        # Load the tokenizer and the 4-bit GPTQ-quantized weights once, at container start-up
        self.tokenizer = AutoTokenizer.from_pretrained("TheBloke/Starling-LM-7B-alpha-GPTQ", use_fast=True)
        self.model = AutoGPTQForCausalLM.from_quantized(
            "TheBloke/Starling-LM-7B-alpha-GPTQ",
            use_safetensors=True,
            device="cuda:0",
            quantize_config=None,
            inject_fused_attention=False
        )
def infer: This function gets called for every request that you send. Here you can define all the steps required for inference. You can also pass custom values for inference through the inputs (dict) parameter.
def infer(self, inputs):
    # "prompt" is the key we expect in the request payload
    prompt = inputs["prompt"]
    input_ids = self.tokenizer(prompt, return_tensors="pt").input_ids.cuda()
    # Generate up to 200 new tokens with light sampling
    output = self.model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=200)
    result = self.tokenizer.decode(output[0])
    return {"generated_result": result}
def finalize: This function cleans up all the allocated memory.
def finalize(self, *args):
    # Drop references so the model and tokenizer can be garbage-collected
    self.tokenizer = None
    self.model = None
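Before pushing to GitHub, you can sanity-check the class locally on any machine with a CUDA GPU and the dependencies installed. The snippet below is only an illustrative smoke test (the file name app.py and the example prompt are assumptions); on Inferless, these methods are called for you by the platform.

# Illustrative local smoke test for app.py (assumes a CUDA GPU and the
# packages listed in config.yaml are installed).
from app import InferlessPythonModel

model = InferlessPythonModel()
model.initialize()  # loads the tokenizer and the GPTQ weights
response = model.infer({"prompt": "Write a short poem about the sea."})
print(response["generated_result"])
model.finalize()    # releases the references held by the class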
Creating the Custom Runtime
This is a mandatory step where we allow users to upload their custom runtime through a config.yaml file.
build:
cuda_version: "12.1.1"
system_packages:
- "libssl-dev"
python_packages:
- "auto-gptq==0.5.1"
- "transformers==4.35.2"
- "accelerate==0.25.0"
Deploying the model on Inferless
Inferless supports multiple ways of importing your model. For this tutorial, we will use GitHub.
Import the Model through GitHub
Click on Repo (Custom code) and then click on Add provider to connect to your GitHub account. Once your account integration is complete, click on your GitHub account and continue.
Provide the Model details
Enter the name of the model and pass the GitHub repository URL.
Now, you can define the model’s input and output parameters in JSON format. For more information, go through this page.
- INPUT: Refer to the function def infer(self, inputs); here we mentioned inputs["prompt"], which means the inputs parameter will have a key prompt.
- OUTPUT: Similarly, the function above returns the result as a key-value pair: return {"generated_result": result}.
The input JSON must include prompt as a key to pass the instruction. Similarly, the output JSON must include generated_result as a key to retrieve the output.
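For illustration, the request and response bodies for this app carry the following keys. This is only a sketch based on the code in app.py; the values are placeholders, not real model output.

import json

# Hypothetical payloads: only the "prompt" and "generated_result" keys
# come from app.py; the values shown here are placeholders.
sample_input = {"prompt": "Write a short poem about the sea."}
sample_output = {"generated_result": "<text generated by Starling-7B>"}

print(json.dumps(sample_input, indent=2))
print(json.dumps(sample_output, indent=2))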
GPU Selection
On the Inferless platform, we support all the GPUs. For this tutorial, we recommend using an A100 GPU. Select A100 from the GPU Type drop-down menu.
Using the Custom Runtime
In the Advanced configuration, we have the option to select the custom runtime. First, click on Add runtime to upload the config.yaml file, give it any name, and save it. Choose the runtime from the drop-down menu and then click on Continue.
Review and Deploy
In this final stage, carefully review all modifications. Once you've examined all changes, proceed to deploy the model by clicking the Import button.
Voilà, your model is now deployed!