Hyperstack Object Storage (Beta)
Hyperstack Object Storage provides scalable, S3-compatible storage designed for workloads that require reliable, cost-efficient, and flexible data access—such as AI/ML datasets, logs, media, and backup files. This guide covers the key concepts and functionality of Hyperstack Object Storage, including how to generate access credentials, interact with storage using S3-compatible tools, manage storage buckets, and understand relevant security considerations.
For complete documentation on Hyperstack's Object Storage APIs, see the Object Storage API Reference.
Hyperstack's S3-compatible Object Storage is currently exclusively available in the CANADA-1
region.
In this article
- Overview
- Getting Started with Hyperstack Object Storage
- Supported S3 Actions by Tool
- Manage Access Keys and Buckets
- Data Protection, Performance and Policies
Overview
Hyperstack Object Storage is built on Amazon S3-compatible technology, offering a scalable, secure, and API-compliant solution for large-scale data handling.
This storage option is suitable for workloads that include backups, long-term archives, AI/ML datasets, and media delivery. It is optimized for redundancy, availability, and cost-efficiency.
Key Benefits
- Redundancy and Resilience: Data is stored with built-in redundancy, ensuring durability and availability even in the event of hardware failures.
- Cost Optimization: Designed for high-volume storage with an efficient pay-as-you-go pricing model.
- S3 Compatibility: Supports standard S3 API operations via access keys, making it easy to integrate with existing tools and SDKs.
- Optimized for Unstructured Data: Object storage is designed for storing large-scale unstructured data such as media, logs, and datasets.
- Efficient Metadata Handling: Each object can include custom metadata, making it easier to manage, categorize, and retrieve content at scale.
What is an Access Key?
Access keys are credentials used to authenticate programmatic access to Object Storage. Each access key consists of:
- Access Key ID (public identifier)
- Secret Access Key (used to sign requests; shown once at creation)
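The secret key can safely be a show-once credential because it is never transmitted: S3-compatible clients use it to derive an HMAC signature for each request (AWS Signature Version 4). A simplified sketch of the SigV4 key-derivation chain, for illustration only (real requests also hash a canonical request and signed headers):

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signing_key(secret: str, date: str, region: str, service: str = "s3") -> bytes:
    # Each step folds one more scope component into the key, so the
    # resulting signing key is only valid for that date/region/service.
    k_date = _hmac(("AWS4" + secret).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")
```

Only the derived signature travels over the wire, which is why a leaked request does not reveal the secret key itself.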
What is a Bucket?
Buckets are the top-level containers where your data is stored in Object Storage. Each bucket can store an unlimited number of objects and includes configuration options like region, access level, and lifecycle settings.
S3 Endpoint
To interact with Hyperstack Object Storage via any S3-compatible tool, use the following endpoint:
https://no1-dev.s3.nexgencloud.io
Billing
[Billing and pricing details here when available.]
You can monitor object count, storage usage, and hourly costs directly in the Hyperstack Console by visiting the Buckets page.
Getting Started with Hyperstack Object Storage
You can interact with Hyperstack Object Storage using any S3-compatible client or SDK. This guide covers four popular tools: AWS CLI, Boto3 (Python SDK), S3cmd, and MinIO Client (mc), with step-by-step instructions for each.
Click the tab below that matches your preferred tool to get started.
- AWS CLI
- Boto3 (Python SDK)
- S3cmd
- MinIO Client (mc)
Using AWS CLI
1. Generate Access Key Credentials

   Generate the necessary credentials to authenticate with Hyperstack's S3-compatible Object Storage.

   - Log in to the Hyperstack Console
   - Navigate to Object Storage > Access Keys
   - Click the Generate Access Key button, select the region, and click Generate
   - Copy and securely store your Access Key ID and Secret Access Key (shown only once)

   Regional Availability: Object storage is currently available only in the CANADA-1 region.

   Secret Access Key Visibility: For your security, your secret access key is displayed only once, at the time of creation. Copy and store it in a secure location immediately, and treat it like a password: do not share it with anyone.
2. Install AWS CLI

   Follow the official installation instructions for your platform.
3. Connect using AWS CLI

   Use the AWS CLI to configure your credentials and set up access to Hyperstack Object Storage:

   aws configure

   Enter the following values when prompted:

   - AWS Access Key ID: paste from Hyperstack
   - AWS Secret Access Key: paste from Hyperstack
   - Default region name: us-east-1 (required for compatibility but unrelated to Hyperstack's actual region setting)
   - Default output format: leave blank or set to json

   Verify your configuration:

   aws s3 ls --endpoint-url https://no1-dev.s3.nexgencloud.io
4. Create a Bucket

   Provide a name for your bucket and run the command below. Bucket names must be 3–63 characters and use only lowercase letters, numbers, hyphens (-), or periods (.). For full naming rules, see AWS S3 bucket guidelines.

   aws s3api create-bucket \
     --bucket <your-bucket-name> \
     --endpoint-url https://no1-dev.s3.nexgencloud.io \
     --region us-east-1

   Confirm the bucket was created:

   aws s3 ls --endpoint-url https://no1-dev.s3.nexgencloud.io

   You can view your bucket in Hyperstack by navigating to Object Storage > Buckets.
5. Upload Your First Object

   Upload a file using the cp command:

   aws s3 cp /path/to/your-file.txt s3://<your-bucket-name>/ \
     --endpoint-url https://no1-dev.s3.nexgencloud.io

   Confirm the object uploaded:

   aws s3 ls s3://<your-bucket-name>/ \
     --endpoint-url https://no1-dev.s3.nexgencloud.io

   View Bucket In Hyperstack: You can view your bucket and uploaded objects in the Hyperstack Console > Buckets. Your total storage usage will update automatically to reflect any new uploads.

   Supported S3 operations: For a list of commonly supported actions, see the Supported S3 Actions by Tool section below under the AWS CLI tab.

For detailed guidance on using AWS CLI with S3-compatible storage, refer to the official AWS CLI S3 reference.
Using Boto3 (Python SDK)
1. Generate Access Key Credentials

   Generate the necessary credentials to authenticate with Hyperstack's S3-compatible Object Storage.

   - Log in to the Hyperstack Console
   - Navigate to Object Storage > Access Keys
   - Click the Generate Access Key button, select the region, and click Generate
   - Copy and securely store your Access Key ID and Secret Access Key (shown only once)

   Regional Availability: Object storage is currently available only in the CANADA-1 region.

   Secret Access Key Visibility: For your security, your secret access key is displayed only once, at the time of creation. Copy and store it in a secure location immediately, and treat it like a password: do not share it with anyone.
2. Install Boto3

   Follow the official installation instructions for your platform.
3. Export Credentials as Environment Variables

   Set your credentials in the shell so that Boto3 can read them from your environment. These values are the Access Key ID and Secret Access Key you generated in step 1.

   export AWS_ACCESS_KEY_ID=your-access-key-id
   export AWS_SECRET_ACCESS_KEY=your-secret-access-key
4. Create a Bucket and Upload a File

   This script initializes a Boto3 client, creates a new bucket, uploads a file, and lists the contents of the bucket.

   Bucket Naming Rules: Bucket names must be 3–63 characters and use only lowercase letters, numbers, hyphens (-), or periods (.). For full naming rules, see AWS S3 bucket guidelines.

   import os

   import boto3

   # Initialize the Boto3 client for S3-compatible storage
   s3 = boto3.client(
       's3',
       aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],          # Access Key ID from Hyperstack
       aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],  # Secret Access Key from Hyperstack
       endpoint_url='https://no1-dev.s3.nexgencloud.io',           # Hyperstack S3-compatible endpoint
       region_name='us-east-1'                                     # Required for compatibility
   )

   bucket_name = 'your-unique-bucket-name'  # Must be globally unique and lowercase
   file_path = '/path/to/your-file.txt'     # Must point to an existing local file
   object_key = 'your-file.txt'             # The key (name) to assign to the uploaded object

   # Create a new bucket
   s3.create_bucket(Bucket=bucket_name)
   print(f"Created bucket: {bucket_name}")

   # Upload the file to the bucket
   s3.upload_file(file_path, bucket_name, object_key)
   print(f"Uploaded {file_path} to {bucket_name}/{object_key}")

   # List objects in the bucket
   response = s3.list_objects_v2(Bucket=bucket_name)
   print("Bucket contents:")
   for obj in response.get('Contents', []):
       print(f" - {obj['Key']}")

   View Bucket In Hyperstack: You can view your bucket and uploaded objects in the Hyperstack Console > Buckets. Your total storage usage will update automatically to reflect any new uploads.

   Supported S3 operations: For a list of commonly supported actions, see the Supported S3 Actions by Tool section below and select the Boto3 (Python SDK) tab.
For detailed guidance on using Boto3 with S3-compatible storage, refer to the official Boto3 S3 reference.
Using S3cmd
1. Generate Access Key Credentials

   Generate the necessary credentials to authenticate with Hyperstack's S3-compatible Object Storage.

   - Log in to the Hyperstack Console
   - Navigate to Object Storage > Access Keys
   - Click Generate Access Key, choose CANADA-1, and click Generate
   - Copy and securely store your Access Key ID and Secret Access Key (shown only once)

   Regional Availability: Object Storage is currently available only in the CANADA-1 region.

   Secret Access Key Visibility: The secret key is shown only once; treat it like a password and store it securely.
2. Install S3cmd

   Follow the official S3cmd installation instructions for your operating system.
3. Configure S3cmd to connect to Hyperstack

   Run the interactive setup:

   s3cmd --configure

   When prompted, enter the following values:

   - Access Key: your Hyperstack Access Key ID
   - Secret Key: your Hyperstack Secret Access Key
   - Default Region: leave blank or enter US
   - S3 Endpoint: no1-dev.s3.nexgencloud.io
   - DNS-style bucket+hostname: %(bucket)s.no1-dev.s3.nexgencloud.io
   - Encryption Password: optional
   - Use HTTPS: Yes

   Accept or skip the remaining defaults. When prompted, test the connection and save the generated ~/.s3cfg file.
4. Create a Bucket

   Provide a name for your bucket and run the command below. Bucket names must be 3–63 characters and use only lowercase letters, numbers, hyphens (-), or periods (.). For full naming rules, see AWS S3 bucket guidelines.

   s3cmd mb s3://<your-bucket-name>
5. Upload and Verify an Object

   Upload a file to your bucket:

   s3cmd put /path/to/your-file.txt s3://<your-bucket-name>/

   Confirm the upload by listing bucket contents:

   s3cmd ls s3://<your-bucket-name>/

   View Bucket In Hyperstack: You can view your bucket and uploaded objects in the Hyperstack Console > Buckets. Your total storage usage will update automatically to reflect any new uploads.

   Supported S3 operations: For a list of commonly supported actions, see the Supported S3 Actions by Tool section below and select the S3cmd tab.

For detailed guidance on using S3cmd with S3-compatible storage, refer to the official S3cmd usage documentation.
Using MinIO Client (mc)
1. Generate Access Key Credentials

   Generate the necessary credentials to authenticate with Hyperstack's S3-compatible Object Storage.

   - Log in to the Hyperstack Console
   - Navigate to Object Storage > Access Keys
   - Click the Generate Access Key button, select the region, and click Generate
   - Copy and securely store your Access Key ID and Secret Access Key (shown only once)

   Regional Availability: Object storage is currently available only in the CANADA-1 region.

   Secret Access Key Visibility: For your security, your secret access key is displayed only once, at the time of creation. Copy and store it in a secure location immediately, and treat it like a password: do not share it with anyone.
2. Install MinIO Client (mc)

   Follow the official installation instructions for your platform.
3. Configure MinIO Client with Hyperstack credentials

   Set up a new alias that connects to the Hyperstack S3-compatible endpoint. Replace <ACCESS_KEY> and <SECRET_KEY> with the Access Key ID and Secret Access Key you generated in step 1.

   mc alias set hyperstack https://no1-dev.s3.nexgencloud.io <ACCESS_KEY> <SECRET_KEY>
4. Create a Bucket

   Provide a name for your bucket and run the command below. Bucket names must be 3–63 characters and use only lowercase letters, numbers, hyphens (-), or periods (.). For full naming rules, see AWS S3 bucket guidelines.

   mc mb hyperstack/<your-bucket-name>

   You can view your bucket in Hyperstack by navigating to Object Storage > Buckets.
5. Upload Your First Object

   Copy a file into your bucket:

   mc cp /path/to/your-file.txt hyperstack/<your-bucket-name>/

   List the contents of your bucket to confirm a successful upload:

   mc ls hyperstack/<your-bucket-name>/

   View Bucket In Hyperstack: You can view your bucket and uploaded objects in the Hyperstack Console > Buckets. Your total storage usage will update automatically to reflect any new uploads.

   Supported S3 operations: For a list of commonly supported actions, see the Supported S3 Actions by Tool section below and select the MinIO Client (mc) tab.

For detailed guidance on using MinIO with S3-compatible storage, refer to the official MinIO client S3 reference.
Supported S3 Actions by Tool
See below for detailed examples of how to perform S3-compatible operations using the AWS CLI, Boto3 (Python SDK), S3cmd, or MinIO Client (mc). Each tab organizes commands by action category, such as bucket management, object operations, multipart uploads, and directory sync. Select your preferred tool, then expand a category to view example commands.
- AWS CLI
- Boto3 (Python SDK)
- S3cmd
- MinIO Client (mc)
AWS CLI S3 Actions
Bucket Operations - Manage your object storage buckets: list, validate, and access them.
ListBuckets - List all available buckets.
Execute the following command to list all buckets under your account:
aws s3 ls \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
HeadBucket - Check if a bucket exists and if you have access.
Execute the following command to verify the existence and access permissions for a bucket:
aws s3api head-bucket \
--bucket <your-bucket-name> \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
Object Operations - Upload, download, delete, and inspect files within buckets.
PutObject - Upload an object.
Execute the following command to upload a file to your bucket:
aws s3 cp /path/to/your-file.txt s3://<your-bucket-name>/ \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
GetObject - Download an object.
Execute the following command to download a file from your bucket:
aws s3 cp s3://<your-bucket-name>/file.txt ./ \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
DeleteObject - Remove an object from a bucket.
Execute the following command to delete a specified object:
aws s3 rm s3://<your-bucket-name>/file.txt \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
CopyObject - Copy an object to a new location.
Execute the following command to copy an object to another bucket or path:
aws s3 cp s3://<source-bucket>/file.txt s3://<target-bucket>/file.txt \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
HeadObject - Retrieve metadata of an object.
Execute the following command to retrieve metadata for a specified object:
aws s3api head-object \
--bucket <your-bucket-name> \
--key <object-key> \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
ListObjects - List objects in a bucket.
Execute the following command to list all objects in a specified bucket:
aws s3 ls s3://<your-bucket-name>/ \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
Multipart Uploads - Split large files into parts for reliable, resumable uploads.
CreateMultipartUpload - Initiate multipart upload.
Execute the following command to begin a multipart upload session:
aws s3api create-multipart-upload \
--bucket <your-bucket-name> \
--key <object-key> \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
UploadPart - Upload a part in a multipart upload.
Execute the following command to upload a single part in a multipart session:
aws s3api upload-part \
--bucket <your-bucket-name> \
--key <object-key> \
--upload-id <upload-id> \
--part-number 1 \
--body ./part1.bin \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
CompleteMultipartUpload - Finalize multipart upload.
Execute the following command to complete a multipart upload by assembling uploaded parts:
aws s3api complete-multipart-upload \
--bucket <your-bucket-name> \
--key <object-key> \
--upload-id <upload-id> \
--multipart-upload file://parts.json \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
AbortMultipartUpload - Cancel a multipart upload.
Execute the following command to abort an in-progress multipart upload:
aws s3api abort-multipart-upload \
--bucket <your-bucket-name> \
--key <object-key> \
--upload-id <upload-id> \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
ListMultipartUploads - List in-progress multipart uploads.
Execute the following command to list all ongoing multipart uploads in a bucket:
aws s3api list-multipart-uploads \
--bucket <your-bucket-name> \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
Directory Operations - Sync local directories with Object Storage for bulk transfers.
Sync - Synchronize local directory with bucket.
Execute the following command to mirror the contents of a local directory to a bucket:
aws s3 sync ./local-dir s3://<your-bucket-name>/ \
--region us-east-1 \
--endpoint-url https://no1-dev.s3.nexgencloud.io
Boto3 S3 Actions
Bucket Operations - Manage your object storage buckets: list, validate, and access them.
ListBuckets - List all available buckets.
Execute the following Python code to list all available buckets:
import boto3

s3 = boto3.client('s3', endpoint_url='https://no1-dev.s3.nexgencloud.io')
response = s3.list_buckets()
for bucket in response['Buckets']:
    print(bucket['Name'])
HeadBucket - Check if a bucket exists and if you have access.
Execute the following Python code to verify a bucket exists:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3', endpoint_url='https://no1-dev.s3.nexgencloud.io')
try:
    s3.head_bucket(Bucket='your-bucket-name')
    print("Bucket exists and you have access")
except ClientError:
    print("Bucket does not exist or you do not have access")
Object Operations - Upload, download, delete, and inspect files within buckets.
PutObject - Upload an object.
Execute the following Python code to upload a file to a bucket:
import boto3
s3 = boto3.client('s3', endpoint_url='https://no1-dev.s3.nexgencloud.io')
s3.upload_file('local-file.txt', 'your-bucket-name', 'remote-file.txt')
GetObject - Download an object.
Execute the following Python code to download an object from a bucket:
import boto3
s3 = boto3.client('s3', endpoint_url='https://no1-dev.s3.nexgencloud.io')
s3.download_file('your-bucket-name', 'remote-file.txt', 'local-file.txt')
DeleteObject - Remove an object from a bucket.
Execute the following Python code to delete a file from a bucket:
import boto3
s3 = boto3.client('s3', endpoint_url='https://no1-dev.s3.nexgencloud.io')
s3.delete_object(Bucket='your-bucket-name', Key='remote-file.txt')
CopyObject - Copy an object to a new location.
Execute the following Python code to copy an object between buckets:
import boto3
s3 = boto3.client('s3', endpoint_url='https://no1-dev.s3.nexgencloud.io')
copy_source = {'Bucket': 'source-bucket', 'Key': 'file.txt'}
s3.copy_object(CopySource=copy_source, Bucket='target-bucket', Key='file.txt')
HeadObject - Retrieve metadata of an object.
Execute the following Python code to retrieve metadata for a file:
import boto3
s3 = boto3.client('s3', endpoint_url='https://no1-dev.s3.nexgencloud.io')
response = s3.head_object(Bucket='your-bucket-name', Key='file.txt')
print(response)
ListObjects - List objects in a bucket.
Execute the following Python code to list all objects in a bucket:
import boto3

s3 = boto3.client('s3', endpoint_url='https://no1-dev.s3.nexgencloud.io')
response = s3.list_objects_v2(Bucket='your-bucket-name')
for obj in response.get('Contents', []):
    print(obj['Key'])
Multipart Uploads - Split large files into parts for reliable, resumable uploads.
CreateMultipartUpload / UploadPart / CompleteMultipartUpload - Use Boto3's managed transfers (upload_file with a TransferConfig) or the low-level client APIs to handle multipart uploads.
Boto3 can perform multipart uploads automatically through its managed transfer functions, or you can coordinate the parts manually with the low-level client methods. See the Boto3 Multipart Upload Guide for full examples.
AbortMultipartUpload - Cancel a multipart upload in progress by specifying the bucket, object key, and upload ID.
s3.abort_multipart_upload(Bucket='your-bucket-name', Key='your-key', UploadId='upload-id')
ListMultipartUploads - List in-progress multipart uploads.
response = s3.list_multipart_uploads(Bucket='your-bucket-name')
print(response)
Directory Operations - Sync local directories with Object Storage for bulk transfers.
Sync - Not available natively in Boto3.
Use a loop with upload_file() or use the AWS CLI for directory sync.
S3cmd S3 Actions
Bucket Operations - Manage your object storage buckets: list, validate, and access them.
ListBuckets - List all available buckets.
Execute the following command to list all buckets under your account:
s3cmd ls \
--host=no1-dev.s3.nexgencloud.io \
--host-bucket="%(bucket)s.no1-dev.s3.nexgencloud.io"
HeadBucket - Check if a bucket exists and if you have access.
Execute the following command to verify the existence and access permissions for a bucket:
s3cmd info s3://<your-bucket-name> \
--host=no1-dev.s3.nexgencloud.io \
--host-bucket="%(bucket)s.no1-dev.s3.nexgencloud.io"
Object Operations - Upload, download, delete, and inspect files within buckets.
PutObject - Upload an object.
s3cmd put /path/to/your-file.txt s3://<your-bucket-name>/ \
--host=no1-dev.s3.nexgencloud.io \
--host-bucket="%(bucket)s.no1-dev.s3.nexgencloud.io"
GetObject - Download an object.
s3cmd get s3://<your-bucket-name>/file.txt ./ \
--host=no1-dev.s3.nexgencloud.io \
--host-bucket="%(bucket)s.no1-dev.s3.nexgencloud.io"
DeleteObject - Remove an object from a bucket.
s3cmd del s3://<your-bucket-name>/file.txt \
--host=no1-dev.s3.nexgencloud.io \
--host-bucket="%(bucket)s.no1-dev.s3.nexgencloud.io"
CopyObject - Copy an object to a new location.
s3cmd cp s3://<source-bucket>/file.txt s3://<target-bucket>/file.txt \
--host=no1-dev.s3.nexgencloud.io \
--host-bucket="%(bucket)s.no1-dev.s3.nexgencloud.io"
HeadObject - Retrieve metadata of an object.
s3cmd info s3://<your-bucket-name>/file.txt \
--host=no1-dev.s3.nexgencloud.io \
--host-bucket="%(bucket)s.no1-dev.s3.nexgencloud.io"
ListObjects - List objects in a bucket.
s3cmd ls s3://<your-bucket-name>/ \
--host=no1-dev.s3.nexgencloud.io \
--host-bucket="%(bucket)s.no1-dev.s3.nexgencloud.io"
Multipart Uploads - Split large files into parts for reliable, resumable uploads.
Unlike AWS CLI, S3cmd does not expose granular control over multipart upload stages (initiate, upload-part, complete). However, it supports multipart uploads automatically for large files. When you use s3cmd put
with a large file, it will handle multipart upload transparently.
Directory Operations - Sync local directories with Object Storage for bulk transfers.
Sync - Synchronize local directory with bucket.
s3cmd sync ./local-dir/ s3://<your-bucket-name>/ \
--host=no1-dev.s3.nexgencloud.io \
--host-bucket="%(bucket)s.no1-dev.s3.nexgencloud.io"
MinIO Client S3 Actions
Bucket Operations - Manage your object storage buckets: list, validate, and access them.
ListBuckets - List all available buckets.
Execute the following command to list all buckets:
mc ls hyperstack
HeadBucket - Check if a bucket exists and if you have access.
Execute the following command to verify the existence and permissions for a bucket:
mc stat hyperstack/<your-bucket-name>
Object Operations - Upload, download, delete, and inspect files within buckets.
PutObject - Upload an object.
Execute the following command to upload a file to your bucket:
mc cp /path/to/your-file.txt hyperstack/<your-bucket-name>/
GetObject - Download an object.
Execute the following command to download a file from your bucket:
mc cp hyperstack/<your-bucket-name>/file.txt ./
DeleteObject - Remove an object from a bucket.
Execute the following command to delete an object:
mc rm hyperstack/<your-bucket-name>/file.txt
CopyObject - Copy an object to a new location.
Execute the following command to copy an object to a different bucket or prefix:
mc cp hyperstack/<source-bucket>/file.txt hyperstack/<target-bucket>/file.txt
HeadObject - Retrieve metadata of an object.
Execute the following command to retrieve object metadata:
mc stat hyperstack/<your-bucket-name>/file.txt
ListObjects - List objects in a bucket.
Execute the following command to list objects in a bucket:
mc ls hyperstack/<your-bucket-name>/
Multipart Uploads - Split large files into parts for reliable, resumable uploads.
CreateMultipartUpload / UploadPart / CompleteMultipartUpload - Upload large files in multiple parts for reliability and performance.
MinIO automatically handles multipart uploads using standard mc cp
for large files.
mc cp /large-file.bin hyperstack/<your-bucket-name>/
AbortMultipartUpload - Abort an in-progress multipart upload by removing the incomplete upload from the bucket.
To abort a multipart upload, remove the incomplete upload with the --incomplete flag:
mc rm --incomplete hyperstack/<your-bucket-name>/<object-key>
ListMultipartUploads - List in-progress multipart uploads.
Execute the following command to list incomplete multipart uploads in a bucket:
mc ls --incomplete hyperstack/<your-bucket-name>
Directory Operations - Sync local directories with Object Storage for bulk transfers.
Sync - Synchronize local directory with bucket.
Execute the following command to sync a local directory with a bucket:
mc mirror ./local-dir hyperstack/<your-bucket-name>/
S3 Reference Documentation
For comprehensive documentation on AWS S3 commands and libraries, refer to:
- AWS S3 API reference
- AWS CLI S3 reference
- S3cmd usage reference
- Boto3 S3 reference
- MinIO client S3 reference
Manage Access Keys and Buckets
View and Manage Access Keys
- In Hyperstack, navigate to Object Storage > Access Keys
- View all keys associated with your account, including:
- Access Key ID
- Region
- Date of creation
Organization owners can see all access keys, while members see only the keys they created.
Delete Access Keys
- Click the ⋮ menu next to the key
- Confirm deletion by entering the key name
Deleting an access key will immediately disable all access using that key.
View and Manage Buckets
- In Hyperstack, navigate to Object Storage > Buckets
- View list of existing buckets and their basic info including:
- Name
- Creation date
- Region
- Storage size
- Number of objects
- Click on the name of a bucket or hover over the ⋮ menu and click More Details to see:
  - Creation date – the exact timestamp when the bucket was created.
  - Region – the Hyperstack region (e.g., CANADA-1) where the bucket is hosted.
  - Per-hour running cost – current hourly cost for storing objects in the bucket, calculated from the storage size used.
  - Number of objects – total count of objects stored in the bucket.
  - Total storage used – cumulative storage consumption in human-readable units.
  - Endpoint – the unique S3-compatible URL to access the bucket (e.g., https://no1-dev.s3.nexgencloud.io/<bucket-name>).
Delete a Bucket
- From the bucket list or detail page, click the ⋮ next to the bucket
- Click Delete, confirm via dialog
- A deletion confirmation email will be sent
Bucket deletions are permanent. Ensure all necessary data is backed up before deleting.
Data Protection, Performance and Policies
Encryption and Data Security
All communication with the Hyperstack S3-compatible endpoint is encrypted using TLS 1.2 or higher, ensuring secure data transmission between clients and Object Storage.
Server-side encryption is not currently supported. Users who require encryption at rest should implement client-side encryption. Most S3-compatible tools and SDKs—such as S3cmd, AWS SDKs, and Boto3—offer built-in support for encrypting data before upload.
Durability and Redundancy
Objects are stored using a 4+2 erasure coding scheme, which divides data into four data blocks and two parity blocks. This configuration tolerates the loss of any two blocks without impacting data integrity, ensuring high durability and fault tolerance. The trade-off is a raw-capacity overhead of 1.5× (six blocks stored for every four blocks of data).
Hyperstack's Object Storage is currently exclusively available in the CANADA-1
region.
Multipart Uploads
Multipart upload is supported and recommended for large files. This feature:
- Enables more reliable uploads by retrying individual parts
- Delivers faster transfers through parallel uploads
- Uses less memory for handling large objects
See the Supported S3 Actions by Tool section for example Multipart Upload commands for your preferred tool.
Performance and Usage Limits
Performance depends on network conditions:
- Transfers over the public internet are limited by the region’s external uplink, currently ~40 Gb/s aggregate in the CANADA-1 region.
- Transfers within the same region (e.g., between VMs/Clusters and Object Storage in CANADA-1) benefit from faster internal networking.
There are currently no enforced per-user or per-bucket rate limits. For sustained high-throughput use cases, contact support@hyperstack.cloud to ensure optimal provisioning.