Introduction

Mistral-Small-3.1-24B-Instruct-2503 is an instruction-finetuned model designed for strong performance across both text and vision tasks. With 24B parameters, it builds upon the Mistral Small 3.1 base, offering enhanced capabilities such as state-of-the-art vision understanding and support for context lengths of up to 128k tokens without sacrificing text processing quality.

Our Observations

We have deployed the model on an A100 GPU (80GB). Here are our observations:

| Library | Inference Time | Cold Start Time | Tokens/Sec | Output Tokens Length |
| ------------ | -------------- | --------------- | ---------- | -------------------- |
| transformers | 2.06 sec | 17.69 sec | 21.99 | 256 |

Note: The inference time and cold start time are average values.

Defining Dependencies

We are using the transformers library to serve the model on a single A100 (80GB).

Constructing the GitHub/GitLab Template

Now, quickly construct the GitHub/GitLab template; this process is mandatory. Make sure you don’t add any file named model.py.

mistral-small-3.1-24b-instruct/
├── app.py
├── inferless-runtime-config.yaml
└── inferless.yaml

You can also add other files to this directory.

Create the Input Schema with Pydantic

Using the inferless Python client and Pydantic, you can define structured input and output schemas directly in your code, eliminating the need for an external file.

Input Schema

When defining an input schema with Pydantic, you need to annotate your class attributes with the appropriate types, such as str, float, int, etc. These type annotations specify what type of data each field should contain. The default value serves as the example input for testing with the infer function.

@inferless.request
class RequestObjects(BaseModel):
    prompt: str = Field(default="Give me 5 non-formal ways to say 'See you later' in French.")
    system_prompt: Optional[str] = "You are a conversational agent that always answers straight to the point."
    temperature: Optional[float] = 0.7
    top_p: Optional[float] = 0.1
    repetition_penalty: Optional[float] = 1.18
    top_k: Optional[int] = 40
    max_tokens: Optional[int] = 100
    do_sample: Optional[bool] = False
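
Because every field carries a default, you can sanity-check the schema locally before wiring it into the infer function. The snippet below is only an illustrative check (it assumes pydantic v2, which the runtime configuration later in this guide pins):

# Illustrative only: instantiating with no arguments uses the defaults,
# which is what the infer function receives during a test call.
sample = RequestObjects()
print(sample.prompt)        # "Give me 5 non-formal ways to say 'See you later' in French."
print(sample.model_dump())  # all generation parameters with their default values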

Output Schema

The @inferless.response decorator helps you define structured output schemas.

@inferless.response
class ResponseObjects(BaseModel):
    generated_text: str = Field(default="Test output")

Usage in the infer Function

Once you have annotated the objects, the infer function receives a RequestObjects instance as input and returns a ResponseObjects instance as output, ensuring the results adhere to a defined structure.

class InferlessPythonModel:
    def infer(self, request: RequestObjects) -> ResponseObjects:
        # ... run inference here to produce generated_text (full implementation below) ...
        return ResponseObjects(generated_text=generated_text)

Create the class for inference

In app.py, we will define the class and import all the required functions.

  1. def initialize: In this function, you will initialize your model and define any variables that you want to use during inference.

  2. def infer: This function gets called for every request that you send. Here you can define all the steps that are required for the inference.

  3. def finalize: This function cleans up all the allocated memory.

import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
from pydantic import BaseModel, Field
from typing import Optional
import inferless
import torch
from transformers import AutoTokenizer, Mistral3ForConditionalGeneration, BitsAndBytesConfig


@inferless.request
class RequestObjects(BaseModel):
    prompt: str = Field(default="Give me 5 non-formal ways to say 'See you later' in French.")
    system_prompt: Optional[str] = "You are a conversational agent that always answers straight to the point."
    temperature: Optional[float] = 0.7
    top_p: Optional[float] = 0.1
    repetition_penalty: Optional[float] = 1.18
    top_k: Optional[int] = 40
    max_tokens: Optional[int] = 100
    do_sample: Optional[bool] = False

@inferless.response
class ResponseObjects(BaseModel):
    generated_text: str = Field(default="Test output")

class InferlessPythonModel:
    def initialize(self):
        model_id = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
        # Pre-download only the model weights (*.safetensors) into the local Hugging Face cache
        snapshot_download(repo_id=model_id, allow_patterns=["*.safetensors"])
        # Load the weights in 4-bit with bitsandbytes to keep GPU memory usage low
        quantization_config = BitsAndBytesConfig(load_in_4bit=True)
        
        self.tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
        self.tokenizer.chat_template = "{%- set today = strftime_now(\"%Y-%m-%d\") %}\n{%- set default_system_message = \"You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\\nYour knowledge base was last updated on 2023-10-01. The current date is \" + today + \".\\n\\nWhen you're not sure about some information, you say that you don't have the information and don't make up anything.\\nIf the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \\\"What are some good restaurants around me?\\\" => \\\"Where are you?\\\" or \\\"When is the next flight to Tokyo\\\" => \\\"Where do you travel from?\\\")\" %}\n\n{{- bos_token }}\n\n{%- if messages[0]['role'] == 'system' %}\n    {%- set system_message = messages[0]['content'] %}\n    {%- set loop_messages = messages[1:] %}\n{%- else %}\n    {%- set system_message = default_system_message %}\n    {%- set loop_messages = messages %}\n{%- endif %}\n{{- '[SYSTEM_PROMPT]' + system_message + '[/SYSTEM_PROMPT]' }}\n\n{%- for message in loop_messages %}\n    {%- if message['role'] == 'user' %}\n\t    {%- if message['content'] is string %}\n            {{- '[INST]' + message['content'] + '[/INST]' }}\n\t    {%- else %}\n\t\t    {{- '[INST]' }}\n\t\t    {%- for block in message['content'] %}\n\t\t\t    {%- if block['type'] == 'text' %}\n\t\t\t\t    {{- block['text'] }}\n\t\t\t    {%- elif block['type'] == 'image' or block['type'] == 'image_url' %}\n\t\t\t\t    {{- '[IMG]' }}\n\t\t\t\t{%- else %}\n\t\t\t\t    {{- raise_exception('Only text and image blocks are supported in message content!') }}\n\t\t\t\t{%- endif %}\n\t\t\t{%- endfor %}\n\t\t    {{- '[/INST]' }}\n\t\t{%- endif %}\n    {%- elif message['role'] == 'system' %}\n        {{- '[SYSTEM_PROMPT]' + message['content'] + '[/SYSTEM_PROMPT]' }}\n    {%- elif message['role'] == 'assistant' %}\n        {{- message['content'] + eos_token }}\n    {%- else %}\n        {{- raise_exception('Only user, system and assistant roles are supported!') }}\n    {%- endif %}\n{%- endfor %}"
        
        self.model = Mistral3ForConditionalGeneration.from_pretrained(
            model_id,
            trust_remote_code=True,
            quantization_config=quantization_config,
            device_map="cuda",
        )

    def infer(self, request: RequestObjects) -> ResponseObjects:
        system_prompt = "You are a conversational agent that always answers straight to the point."
        if request.system_prompt is not None:
            system_prompt = request.system_prompt
        messages = [
                        {
                            "role": "system",
                            "content": system_prompt
                        },
                        {
                            "role": "user",
                            "content": request.prompt
                        },
                    ]
        tokenized_chat = self.tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
        with torch.no_grad():
            generation = self.model.generate(
                tokenized_chat,
                max_new_tokens=request.max_tokens,
                temperature=request.temperature,
                top_p=request.top_p,
                top_k=request.top_k,
                do_sample=request.do_sample,
                repetition_penalty=request.repetition_penalty,
            )
            generated_text = self.tokenizer.decode(generation[0], skip_special_tokens=True)

        return ResponseObjects(generated_text=generated_text)

    def finalize(self):
        self.model = None
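
For reference, with the chat template set in initialize, the prompt that apply_chat_template builds for a single system + user turn decodes roughly to the following (a sketch derived from the template string above; exact special-token handling and whitespace may differ):

<s>[SYSTEM_PROMPT]You are a conversational agent that always answers straight to the point.[/SYSTEM_PROMPT][INST]Give me 5 non-formal ways to say 'See you later' in French.[/INST]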

Creating the Custom Runtime

This is a mandatory step where you upload your custom runtime through inferless-runtime-config.yaml, listing the software packages your model requires.

build:
  cuda_version: "12.1.1"
  python_packages:
    - "git+https://github.com/huggingface/transformers@v4.49.0-Mistral-3"
    - "bitsandbytes==0.45.3"
    - "hf_transfer==0.1.9"
    - "inferless==0.2.14"
    - "pydantic==2.10.6"
    - "accelerate==1.5.2"

Test your model with Remote Run

You can use the inferless remote-run (installation guide here) command to test your model or any custom Python script in a remote GPU environment directly from your local machine. Make sure that you use Python 3.10 for a seamless experience.

Step 1: Add the Decorators and local entry point

To enable Remote Run, simply do the following:

  1. Import the inferless library and initialize Cls(gpu="A100"). The available GPU options are T4, A10 and A100.
  2. Decorate the initialize and infer functions with @app.load and @app.infer respectively.
  3. Create the Local Entry Point by decorating a function (for example, my_local_entry) with @inferless.local_entry_point. Within this function, instantiate your model class, convert any incoming parameters into a RequestObjects object, and invoke the model’s infer method.
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
from pydantic import BaseModel, Field
from typing import Optional
import inferless
import torch
from transformers import AutoTokenizer, Mistral3ForConditionalGeneration, BitsAndBytesConfig


@inferless.request
class RequestObjects(BaseModel):
    prompt: str = Field(default="Give me 5 non-formal ways to say 'See you later' in French.")
    system_prompt: Optional[str] = "You are a conversational agent that always answers straight to the point."
    temperature: Optional[float] = 0.7
    top_p: Optional[float] = 0.1
    repetition_penalty: Optional[float] = 1.18
    top_k: Optional[int] = 40
    max_tokens: Optional[int] = 100
    do_sample: Optional[bool] = False

@inferless.response
class ResponseObjects(BaseModel):
    generated_text: str = Field(default="Test output")

app = inferless.Cls(gpu="A100")

class InferlessPythonModel:
    @app.load
    def initialize(self):
        model_id = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
        # Pre-download only the model weights (*.safetensors) into the local Hugging Face cache
        snapshot_download(repo_id=model_id, allow_patterns=["*.safetensors"])
        # Load the weights in 4-bit with bitsandbytes to keep GPU memory usage low
        quantization_config = BitsAndBytesConfig(load_in_4bit=True)
        
        self.tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
        self.tokenizer.chat_template = "{%- set today = strftime_now(\"%Y-%m-%d\") %}\n{%- set default_system_message = \"You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\\nYour knowledge base was last updated on 2023-10-01. The current date is \" + today + \".\\n\\nWhen you're not sure about some information, you say that you don't have the information and don't make up anything.\\nIf the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \\\"What are some good restaurants around me?\\\" => \\\"Where are you?\\\" or \\\"When is the next flight to Tokyo\\\" => \\\"Where do you travel from?\\\")\" %}\n\n{{- bos_token }}\n\n{%- if messages[0]['role'] == 'system' %}\n    {%- set system_message = messages[0]['content'] %}\n    {%- set loop_messages = messages[1:] %}\n{%- else %}\n    {%- set system_message = default_system_message %}\n    {%- set loop_messages = messages %}\n{%- endif %}\n{{- '[SYSTEM_PROMPT]' + system_message + '[/SYSTEM_PROMPT]' }}\n\n{%- for message in loop_messages %}\n    {%- if message['role'] == 'user' %}\n\t    {%- if message['content'] is string %}\n            {{- '[INST]' + message['content'] + '[/INST]' }}\n\t    {%- else %}\n\t\t    {{- '[INST]' }}\n\t\t    {%- for block in message['content'] %}\n\t\t\t    {%- if block['type'] == 'text' %}\n\t\t\t\t    {{- block['text'] }}\n\t\t\t    {%- elif block['type'] == 'image' or block['type'] == 'image_url' %}\n\t\t\t\t    {{- '[IMG]' }}\n\t\t\t\t{%- else %}\n\t\t\t\t    {{- raise_exception('Only text and image blocks are supported in message content!') }}\n\t\t\t\t{%- endif %}\n\t\t\t{%- endfor %}\n\t\t    {{- '[/INST]' }}\n\t\t{%- endif %}\n    {%- elif message['role'] == 'system' %}\n        {{- '[SYSTEM_PROMPT]' + message['content'] + '[/SYSTEM_PROMPT]' }}\n    {%- elif message['role'] == 'assistant' %}\n        {{- message['content'] + eos_token }}\n    {%- else %}\n        {{- raise_exception('Only user, system and assistant roles are supported!') }}\n    {%- endif %}\n{%- endfor %}"
        
        self.model = Mistral3ForConditionalGeneration.from_pretrained(
            model_id,
            trust_remote_code=True,
            quantization_config=quantization_config,
            device_map="cuda",
        )

    @app.infer
    def infer(self, request: RequestObjects) -> ResponseObjects:
        system_prompt = "You are a conversational agent that always answers straight to the point."
        if request.system_prompt is not None:
            system_prompt = request.system_prompt
        messages = [
                        {
                            "role": "system",
                            "content": system_prompt
                        },
                        {
                            "role": "user",
                            "content": request.prompt
                        },
                    ]
        tokenized_chat = self.tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
        with torch.no_grad():
            generation = self.model.generate(
                tokenized_chat,
                max_new_tokens=request.max_tokens,
                temperature=request.temperature,
                top_p=request.top_p,
                top_k=request.top_k,
                do_sample=request.do_sample,
                repetition_penalty=request.repetition_penalty,
            )
            generated_text = self.tokenizer.decode(generation[0], skip_special_tokens=True)

        return ResponseObjects(generated_text=generated_text)

    def finalize(self):
        self.model = None

@inferless.local_entry_point
def my_local_entry(dynamic_params):
    request_objects = RequestObjects(**dynamic_params)
    model_instance = InferlessPythonModel()

    return model_instance.infer(request_objects)
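
Under the hood, remote-run collects the CLI flags you pass and hands them to my_local_entry as a plain dictionary; a rough sketch of what that looks like (the dictionary contents here are purely illustrative) is:

# Illustrative only: remote-run builds a dict like this from the flags you pass,
# and my_local_entry turns it into a validated RequestObjects instance.
dynamic_params = {"prompt": "Give me 5 non-formal ways to say 'See you later' in French.", "max_tokens": 150}
request_objects = RequestObjects(**dynamic_params)  # unspecified fields keep their schema defaults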

Step 2: Run with Remote GPU

From your local terminal, navigate to the folder containing your app.py and your inferless-runtime-config.yaml and run:

inferless remote-run app.py -c inferless-runtime-config.yaml --prompt "Give me 5 non-formal ways to say 'See you later' in French."

You can pass the other input parameters in the same way (e.g., --temperature, --system_prompt, etc.) as long as your code expects them in the inputs dictionary.
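
For example, a call that also overrides the system prompt and generation settings could look like the following (the flag values are purely illustrative):

inferless remote-run app.py -c inferless-runtime-config.yaml --prompt "Give me 5 non-formal ways to say 'See you later' in French." --system_prompt "Answer in a friendly tone." --temperature 0.5 --max_tokens 150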

If you want to exclude certain files or directories from being uploaded, use the --exclude or -e flag.

Method A: Deploying the model on Inferless Platform

Inferless supports multiple ways of importing your model. For this tutorial, we will use GitHub.

Step 1: Log in to the Inferless dashboard and click on the Import model button

Navigate to your desired workspace in Inferless and click on the Add a custom model button that you see on the top right. An import wizard will open up.

Step 2: Follow the UI to complete the model Import

  • Select the GitHub/GitLab Integration option to connect your source code repository with the deployment environment.
  • Navigate to the specific GitHub repository that contains your model’s code. Here, you will need to identify and enter the name of the model you wish to import.
  • Choose the appropriate type of machine that suits your model’s requirements. Additionally, specify the minimum and maximum number of replicas to define the scalability range for deploying your model.
  • Optionally, you can enable automatic build and deployment. This feature triggers a new deployment automatically whenever there is a new code push to your repository.
  • If your model requires additional software packages, configure the Custom Runtime settings by including necessary pip or apt packages. Also, set up environment variables such as Inference Timeout, Container Concurrency, and Scale Down Timeout to tailor the runtime environment according to your needs.
  • Wait for the validation process to complete, ensuring that all settings are correct and functional. Once validation is successful, click on the “Import” button to finalize the import of your model.

Step 3: Wait for the model build to complete; it usually takes ~5-10 minutes

Step 4: Use the APIs to call the model

Once the model is in ‘Active’ status, you can click on the ‘API’ page to call the model.
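
The exact endpoint URL, API key, and request payload are shown on your model’s API page; the call below is only a placeholder sketch (the URL, key, and payload fields here are assumptions and must be replaced with the values from your own API page):

curl --location '<YOUR_INFERENCE_URL>' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --data '{"inputs": [{"name": "prompt", "shape": [1], "data": ["Give me 5 non-formal ways to say goodbye in French."], "datatype": "BYTES"}]}'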

Here is the Demo:

Method B: Deploying the model on Inferless CLI

Inferless allows you to deploy your model using the Inferless CLI. Follow the steps below to deploy using the CLI.

Clone the repository of the model

Let’s begin by cloning the model repository:

git clone https://github.com/inferless/mistral-small-3.1-24b-instruct.git

Deploy the Model

To deploy the model using Inferless CLI, execute the following command:

inferless deploy --gpu A100 --runtime inferless-runtime-config.yaml

Explanation of the Command:

  • --gpu A100: Specifies the GPU type for deployment. Available options include A10, A100, and T4.
  • --runtime inferless-runtime-config.yaml: Defines the runtime configuration file. If not specified, the default Inferless runtime is used.