Getting Started With AI Studio
This guide walks you through the full workflow of using Hyperstack AI Studio—from uploading training data to fine-tuning, evaluating, deploying, and testing custom models. Whether you prefer a visual interface or the API, you'll find step-by-step instructions for both paths.
In this article
- Before Getting Started
- Getting Started With the UI
- Getting Started With the API
- AI Studio Guides
Before Getting Started
1. Create your Hyperstack account: Visit the Hyperstack Registration Page to sign up using your email or a supported SSO provider (Google, Microsoft, or GitHub). Learn more about login options.
2. Sign in: Use your login credentials or SSO to access your Hyperstack account.
3. Add credit: Provide your billing information and add credit to your account here.
4. Access AI Studio UI: Navigate to the AI Studio page in Hyperstack.
Getting Started With the UI
This section walks you through uploading training data, configuring and launching a fine-tuning job, evaluating model performance, deploying your model, and testing outputs in the Playground—all from the web interface.
1. Upload Training Data
You can upload logs directly through the AI Studio UI using the following steps:
1. Open the Logs Page
Navigate to the Logs & Datasets page.
2. Upload Your .jsonl File
Click the Upload Logs button in the top-right corner, then either select your .jsonl file from your device or drag and drop it into the upload area.
Required JSONL file format
The JSONL file must contain one JSON object per line, where each object represents a conversation or interaction. Here's an example of the expected format:
{"messages": [{"role": "user", "content": "What's the capital of Australia?"}, {"role": "assistant", "content": "The capital of Australia is Canberra."}]}
{"messages": [{"role": "system", "content": "You are a travel advisor."}, {"role": "user", "content": "Where should I go in Europe for a summer vacation?"}, {"role": "assistant", "content": "Consider Italy, Spain, or Greece—they offer great weather, food, and culture in the summer!"}]}
{"messages": [{"role": "user", "content": "What's the capital of Australia?"}, {"role": "assistant", "content": "The capital of Australia is Canberra."}]}Each line in the JSONL file must be a valid JSON object containing:
- messages: An array of message objects
- Each message object must have:
  - role: Either "system", "user", or "assistant"
  - content: The text content of the message
Make sure your JSONL file:
- Has one complete JSON object per line
- Uses proper JSON formatting
- Contains the required fields for each message
- Has no trailing commas
- Uses UTF-8 encoding
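As a quick local sanity check before uploading, you can verify these rules with a short script. Below is a minimal sketch in Python; the file name train.jsonl is a placeholder for your own file.

import json

VALID_ROLES = {"system", "user", "assistant"}

def validate_jsonl(path):
    """Report lines that would fail the format rules listed above."""
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue
            try:
                obj = json.loads(line)  # each line must be a complete JSON object
            except json.JSONDecodeError as err:
                print(f"line {lineno}: invalid JSON ({err})")
                continue
            messages = obj.get("messages")
            if not isinstance(messages, list):
                print(f"line {lineno}: missing 'messages' array")
                continue
            for msg in messages:
                if not isinstance(msg, dict):
                    print(f"line {lineno}: message entries must be objects")
                    continue
                if msg.get("role") not in VALID_ROLES:
                    print(f"line {lineno}: unexpected role {msg.get('role')!r}")
                if not isinstance(msg.get("content"), str):
                    print(f"line {lineno}: 'content' must be a string")

validate_jsonl("train.jsonl")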
3. Add Tags
Enter at least one tag to help categorize your logs (e.g., testing).
4. Validate and Upload
Click Validate & Upload.
The system will automatically check your file format and structure, then upload the logs if validation succeeds.
Once uploaded, you can:
- View and manage logs individually.
- Associate logs with datasets for training.
- Add tags to improve filtering and traceability.
If your uploaded training data contains model outputs that cannot be used directly for training other models, you can use the Data Synthesis feature in AI Studio, which modifies the data while preserving its characteristics. To learn how to synthesize your data, click here.
To group multiple data uploads for use in a single fine-tuning job, create a dataset. To learn how, click here.
2. Fine-Tuning
To fine-tune a model using Hyperstack AI Studio, follow the steps below:
1. Start a New Fine-Tuning Job
Go to the My Models page and click the Fine-Tune a Model button in the top-right corner.
2. Configure Basic Settings
- Model Name – Enter a unique name for your fine-tuned model.
- Base Model – Select one of the available base models to fine-tune.
3. Select Training Data
Choose the source of logs to use for training:
- All Logs – Use all logs in your account.
- By Tags – Select logs that have specific tags.
- By Dataset – Select logs from an existing dataset.
- Upload Logs – Upload logs just for this training run. You can choose to save these logs with custom tags, or not save them at all.
Minimum Logs Requirement: You must include at least 10 logs. If filtering by tags, dataset, or uploading logs, the resulting set must also contain at least 10 valid logs.
4. Adjust Advanced Settings
Optionally customize training parameters (a short sketch after these settings illustrates how the LoRA values relate):
- Epochs – Full passes through the dataset (e.g., 1–10).
- Batch Size – Samples per training step (e.g., 1–16).
- Learning Rate – Step size for weight updates (e.g., 0.00005–0.001).
- LoRA Rank (r) – Rank of adaptation matrices.
- LoRA Alpha – Scaling factor for LoRA.
- LoRA Dropout – Dropout rate for LoRA layers.
Fine-Tuning Duration: Training time varies depending on your dataset and model complexity. You can track progress in real time through the UI.
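To build intuition for the LoRA settings above, the sketch below computes how many trainable parameters a LoRA adapter adds to a single weight matrix and the scaling applied to its update. This is a general illustration of the LoRA technique, not Hyperstack-specific code; the matrix dimensions are invented for the example.

# For a frozen weight matrix W of shape (d, k), LoRA trains two small
# matrices B (d x r) and A (r x k) and applies W + (alpha / r) * (B @ A).
def lora_stats(d, k, r, alpha):
    trainable = r * (d + k)   # parameters added by the adapter
    full = d * k              # parameters in the original matrix
    scaling = alpha / r       # multiplier applied to the low-rank update
    return trainable, trainable / full, scaling

# Invented example: a 4096 x 4096 projection with rank 16 and alpha 32.
added, fraction, scale = lora_stats(4096, 4096, r=16, alpha=32)
print(f"{added:,} trainable params ({fraction:.2%} of full), scaling = {scale}")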
5. Review Estimates
You’ll see an estimation summary including:
- Estimated Time to Train
- Estimated Cost
- Number of Logs Selected
6. Start Training
Click Start Training to begin. The following will occur:
- Validation – Logs are checked and filtered. If no filters are applied, all logs are used; invalid logs are skipped if you selected that option.
- Training Begins – The system allocates resources and kicks off the job.
- Monitor Progress – Track training from the UI. You can cancel the job at any time.
See Monitoring Jobs for more details on tracking and reviewing training runs.
3. Evaluate Training Results
To track the progress and performance of your fine-tuned models, follow these steps in Hyperstack AI Studio:
1. Open the Model Details Page
On the My Models page, click the fine-tuned model you want to monitor. This takes you to the model’s training details view.
2. Check Job Status
The status panel shows:
- Training Jobs – Jobs currently in progress.
- Completed Jobs – Finished jobs with full metrics available.
- Failed Jobs – Jobs that encountered errors during training.
3. Review Metrics
You’ll see metrics such as:
- Training Loss – How well the model fits your training data.
- Validation Loss – How well the model performs on unseen data. Validation loss that rises while training loss keeps falling is a common sign of overfitting (see the sketch after this section).
4. Analyze Visualizations
The following charts help evaluate model performance:
- Performance Comparison Chart – Compares pre- and post-fine-tuning loss values.
- Model Performance Over Steps – Displays training loss reduction over time.
For more details on interpreting training results, including example metrics displays, click here.
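As a rough illustration of the overfitting check mentioned above, the sketch below scans per-epoch loss values and flags the point where validation loss starts climbing while training loss is still falling. The loss numbers are invented for the example.

def find_divergence(train_losses, val_losses, patience=2):
    """Return the epoch index where validation loss has risen `patience` times
    in a row while training loss kept falling, or None if no divergence."""
    streak = 0
    for epoch in range(1, len(val_losses)):
        val_up = val_losses[epoch] > val_losses[epoch - 1]
        train_down = train_losses[epoch] < train_losses[epoch - 1]
        streak = streak + 1 if (val_up and train_down) else 0
        if streak >= patience:
            return epoch  # consider fewer epochs or more data from here on
    return None

# Invented example values:
train = [1.20, 0.85, 0.60, 0.42, 0.30, 0.22]
val = [1.25, 0.95, 0.80, 0.78, 0.83, 0.90]
print(find_divergence(train, val))  # -> 5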
4. Deploy Model
Follow these steps to deploy your fine-tuned model:
1. Select Model
On the My Models page, click the fine-tuned model you want to deploy to open its details view.
2. Toggle Deployment
In the Status section, use the Deploy toggle to activate the model. Deployment typically takes a few seconds.
5. Model Testing - Playground
Use the Playground in Hyperstack AI Studio to interact with your fine-tuned or base models in a conversational interface. You can customize prompts, adjust generation parameters, and even compare models side-by-side.
1. Open the Playground
Navigate to the Playground page in Hyperstack AI Studio.
2. Select a Model
From the dropdown menu, select the fine-tuned model you’ve trained and deployed.
3. (Optional) Enter a System Prompt
Use the System Prompt field to guide the model’s behavior.
4. (Optional) Adjust Parameters
Tune the model’s output using the available sliders and fields.
Model parameters
- Max Tokens – Limits the maximum number of tokens in the model’s response. Default: 100.
- Temperature – Controls randomness. Lower values (e.g., 0.1) produce more focused outputs; higher values (e.g., 0.9) generate more diverse responses. Default: 0.5.
- Top-P – Controls nucleus sampling. The model considers tokens with a cumulative probability ≤ Top-P. Lower = more conservative. Default: 0.5.
- Top-K – Limits token sampling to the top K most likely options. Lower = fewer options considered. Default: 40.
- Presence Penalty – Discourages reuse of existing concepts to encourage novelty. Default: 0.1.
- Repetition Penalty – Penalizes repeated tokens to reduce redundancy. Higher values reduce repetition. Default: 0.5.
These can be modified live to observe how the model’s behavior changes in real time; the sketch after this section shows one way to pass the same parameters through the API.
5. Enter a Prompt
Type your query into the text input box and hit Enter to get a response.
Compare Side-by-Side
Click Compare Side-by-Side to evaluate how your fine-tuned model responds to the same input compared to another model.
- Select the two models from the dropdowns.
- Enter your prompt.
- View and compare both outputs side-by-side in real time.
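The same generation parameters can also be set programmatically. The following is a minimal Python sketch: the endpoint and X-API-KEY header come from the authentication example in the API section below, while the parameter names (max_tokens, temperature, top_p, top_k) are assumed to follow the common OpenAI-style convention and may differ in practice.

import requests

API_KEY = "genai_api_key_..."  # your key from Settings > Security

response = requests.post(
    "https://api.genai.hyperstack.cloud/api/v1/chat/completions",
    headers={"Content-Type": "application/json", "X-API-KEY": API_KEY},
    json={
        "model": "mistralai/mistral-7b-instruct-v0.3",
        "messages": [
            {"role": "system", "content": "You are a travel advisor."},
            {"role": "user", "content": "Where should I go in Europe for a summer vacation?"},
        ],
        # Assumed parameter names mirroring the Playground sliders:
        "max_tokens": 100,
        "temperature": 0.5,
        "top_p": 0.5,
        "top_k": 40,
    },
)
print(response.json())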
Getting Started With the API
Follow the steps below to use the API to upload training data, start fine-tuning jobs, deploy models, and run inference—all through programmatic requests.
1. Create an API key
Start by creating an API key to authenticate your requests:
- Navigate to the API Keys page in the UI under Settings > Security
- Click + Generate new API key
- Name your key and confirm
- Copy the generated key (it will only be shown once), e.g. genai_api_key_6wTC5tZQ374JixFBxKy7OA
2. Test API Authentication
Test your ability to authenticate AI Studio API requests by calling the chat completions endpoint as follows, replacing API_KEY with your API key:
curl -X POST "https://api.genai.hyperstack.cloud/api/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "X-API-KEY: API_KEY" \
-d '{
"model": "mistralai/mistral-7b-instruct-v0.3",
"messages": [
{"role": "user", "content": "Hi"}
],
"stream": true
}'
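For reference, below is a Python equivalent of the curl call above, shown as a minimal sketch. With "stream": true the server returns the completion incrementally; since this guide does not document the chunk format, the sketch simply prints each raw line as it arrives.

import requests

API_KEY = "genai_api_key_..."  # replace with your generated key

with requests.post(
    "https://api.genai.hyperstack.cloud/api/v1/chat/completions",
    headers={"Content-Type": "application/json", "X-API-KEY": API_KEY},
    json={
        "model": "mistralai/mistral-7b-instruct-v0.3",
        "messages": [{"role": "user", "content": "Hi"}],
        "stream": True,
    },
    stream=True,  # let requests yield the response incrementally
) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        if line:
            print(line)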
3. Follow API Guides
These API guides walk you through every step of the model development process—uploading logs, launching fine-tuning jobs, monitoring training metrics, deploying your model, and running inference—all programmatically using the AI Studio API.
- Upload Logs
- Start Fine-Tuning Job
- Monitor Training Results and Metrics
- Deploy Fine-Tuned Model
- Test Your Fine-Tuned Model
AI Studio Guides
Browse essential guides that walk you through the core features of AI Studio and how to use them effectively.