Fine-Tuning
This article walks you through how to fine-tune base models in Hyperstack AI Studio using either the API or the user interface. It explains how to configure training parameters, select and filter training data, adjust advanced LoRA settings, and initiate fine-tuning jobs. You'll also learn how to estimate training duration and monitor job progress.
Fine-Tuning Using the API
Replace the following variables before running the command:
- `API_KEY` – Your AI Studio API key.
- `model_name` – Unique name for your fine-tuned model.
- `base_model` – The base model to fine-tune. Valid values:
  - `mistral-7b-instruct-v0.3`
  - `mixtral-8x7b-instruct-v0.1`
  - `llama-3.1-70B-instruct`
  - `llama-3.1-8B-instruct`
- `batch_size` – Number of training samples per step. Default: `4` (recommended range: 1–16).
- `epoch` – Number of times the dataset is fully traversed during training. Default: `1` (typical: 1–10).
- `learning_rate` – Speed at which the model learns during training. Default: `0.0002` (typical range: 0.00005–0.001).
To further customize model behavior, see the full list of Optional Parameters.
At least 10 logs are required to start a fine-tuning job. If you apply filters (e.g. `tags`, `models`, `custom_dataset`, or `custom_logs_filename`), the filtered set must still include at least 10 logs. If no filters are specified, all uploaded logs will be used by default.
```bash
curl -X POST https://api.genai.hyperstack.cloud/tailor/v1/training-pod \
  -H "X-API-Key: API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_model": "BASE_MODEL",
    "model_name": "YOUR_NEW_MODEL_NAME",
    "batch_size": 4,
    "epoch": 1,
    "learning_rate": 0.0002,
    "tags": ["tag1", "tag2"],
    "custom_logs_filename": "optional-filename",
    "save_logs_with_tags": true,
    "custom_dataset": "optional-dataset-name",
    "skip_logs_with_errors": true,
    "use_synthetic_data": true,
    "parent_model_id": "optional-parent-model-id",
    "lora_r": 32,
    "lora_alpha": 16,
    "lora_dropout": 0.05
  }'
```
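If you prefer to call the endpoint from Python rather than curl, the snippet below is a minimal sketch using the `requests` library. The endpoint, headers, and fields mirror the curl example above; the model name and tag values are placeholders, not recommendations.

```python
import requests

API_KEY = "YOUR_AI_STUDIO_API_KEY"  # replace with your AI Studio API key

payload = {
    "base_model": "llama-3.1-8B-instruct",  # any of the supported base models
    "model_name": "YOUR_NEW_MODEL_NAME",    # must be unique in your account
    "batch_size": 4,
    "epoch": 1,
    "learning_rate": 0.0002,
    "tags": ["tag1", "tag2"],               # optional: restrict training data by tag
}

resp = requests.post(
    "https://api.genai.hyperstack.cloud/tailor/v1/training-pod",
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()

# On success the API responds with {"message": "Reservation starting", "status": "success"}
body = resp.json()
if body.get("status") == "success":
    print("Fine-tuning job submitted:", body.get("message"))
```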
Required Parameters

- `base_model` (string) – Name of the base model to fine-tune.
- `model_name` (string) – Unique name for the fine-tuned model.
- `batch_size` (integer) – Number of training samples per batch. Default: `4`.
- `epoch` (integer) – Number of training epochs. Default: `1`.
- `learning_rate` (float) – Learning rate for the optimizer. Default: `0.0002`.
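For reference, a request body containing only the required parameters could look like the sketch below (shown as a Python dict; the base model and model name are placeholders, and the remaining values are the documented defaults).

```python
# Minimal request body: only the required fields, using the documented defaults.
minimal_payload = {
    "base_model": "llama-3.1-8B-instruct",  # placeholder choice of base model
    "model_name": "my-first-finetune",      # placeholder unique name
    "batch_size": 4,         # default
    "epoch": 1,              # default
    "learning_rate": 0.0002, # default
}
```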
Optional Parameters

- `tags` (array[string]) – Tags to filter training data.
- `custom_logs_filename` (string) – Filename for custom logs upload.
- `save_logs_with_tags` (boolean) – Whether to save uploaded logs with tags.
- `custom_dataset` (string) – Custom dataset to use.
- `skip_logs_with_errors` (boolean) – Skip invalid logs. Default: `true`.
- `use_synthetic_data` (boolean) – Use synthetic data. Default: `true`.
- `parent_model_id` (string) – ID of the parent model for incremental training.
- `lora_r` (integer) – LoRA rank (controls adapter size). Default: `32`.
- `lora_alpha` (integer) – LoRA scaling factor. Default: `16`.
- `lora_dropout` (float) – Dropout rate applied to the LoRA adapter. Default: `0.05`.
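As an illustration of how the optional parameters combine with the required ones, the hypothetical payload below filters training data to a single tag, keeps the default error handling, and overrides the LoRA defaults. The tag and model names are placeholders, not values from AI Studio.

```python
# Hypothetical payload: required fields plus a few optional ones.
payload = {
    "base_model": "llama-3.1-8B-instruct",
    "model_name": "support-bot-v2",   # placeholder name
    "batch_size": 4,
    "epoch": 1,
    "learning_rate": 0.0002,
    "tags": ["customer-support"],     # train only on logs carrying this tag
    "skip_logs_with_errors": True,    # default: true
    "lora_r": 16,                     # smaller adapter than the default of 32
    "lora_alpha": 16,
    "lora_dropout": 0.05,
}
```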
Success Response
```json
{
  "message": "Reservation starting",
  "status": "success"
}
```
Fine-Tuning Using the UI
To fine-tune a model using Hyperstack AI Studio, follow the steps below:
1. Start a New Fine-Tuning Job
Go to the My Models page and click the Fine-Tune a Model button in the top-right corner.
2. Configure Basic Settings
- Model Name – Enter a unique name for your fine-tuned model.
- Base Model – Select one of the available base models to fine-tune.
3. Select Training Data
Choose the source of logs to use for training:
- All Logs – Use all logs in your account.
- By Tags – Select logs that have specific tags.
- By Dataset – Select logs from an existing dataset.
- Upload Logs – Upload logs just for this training run. You can choose to save these logs with custom tags, or not save them at all.
Minimum Logs Requirement: You must include at least 10 logs. If you filter by tags or dataset, or upload logs, the resulting set must also contain at least 10 valid logs.
4. Adjust Advanced Settings
Optionally customize training parameters:
- Epochs – Full passes through the dataset (e.g., 1–10).
- Batch Size – Samples per training step (e.g., 1–16).
- Learning Rate – Step size for weight updates (e.g., 0.00005–0.001).
- LoRA Rank (r) – Rank of adaptation matrices.
- LoRA Alpha – Scaling factor for LoRA.
- LoRA Dropout – Dropout rate for LoRA layers.
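For background on how these three LoRA settings interact (this describes the standard LoRA formulation, not anything specific to AI Studio): the adapter learns a low-rank update that is added to the frozen base weights, scaled by alpha divided by the rank, with dropout applied to the adapter path during training.

```latex
% Standard LoRA update (Hu et al., 2021); not AI Studio-specific.
% W_0: frozen base weight matrix, BA: learned low-rank update of rank r.
W' = W_0 + \frac{\alpha}{r}\, B A, \qquad B \in \mathbb{R}^{d \times r},\ A \in \mathbb{R}^{r \times k}
```

In practice, a larger rank gives the adapter more capacity (and a larger adapter file), while the alpha-to-rank ratio controls how strongly the learned update is weighted against the base model.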
Fine-Tuning Duration: Training time varies depending on your dataset and model complexity. You can track progress in real time through the UI.
5. Review Estimates
You’ll see an estimation summary including:
- Estimated Time to Train
- Current Training Progress
- Recommended Minimum Logs based on your setup
6. Start Training
Click Start Training to begin. The following will occur:
- Validation – Logs are checked and filtered. If no filters are applied, all logs are used; invalid logs are skipped if you chose that option.
- Training Begins – The system allocates resources and kicks off the job.
- Monitor Progress – Track training from the UI. You can cancel the job at any time.
See Monitoring Jobs for more details on tracking and reviewing training runs.