1. You receive
- Endpoint: https://s3.us.alts3.com (example)
- Access key: ALTS3ACCESSKEY
- Secret key: alts3-secret-key
- Region string: us-east-1 (default unless we provide another)
Your actual values arrive by email or dashboard. Code snippets on this page refer to YOUR_ALTS3_REGION as a placeholder (defaults to us-east-1 unless we provide another).
2. Try with AWS CLI
Use the AWS CLI, but point it at your altS3 endpoint.
aws s3 ls \
--endpoint-url https://s3.us.alts3.com \
--region YOUR_ALTS3_REGION \
--profile alts3
Replace YOUR_ALTS3_REGION with the region string from your welcome email (defaults to us-east-1 if we don’t specify one). Configure an AWS CLI profile named alts3 with your altS3 access key and secret key (see Authentication below).
Endpoints
altS3.com exposes a managed S3-compatible endpoint. It expects path-style requests unless you control the DNS and certificates needed for virtual-hosted-style addressing.
- Example REST endpoint: https://s3.us.alts3.com
- Example region string: us-east-1 (platform default)
- Path-style addressing: set addressing_style = path under s3 in your AWS CLI profile (shown under Authentication below) or forcePathStyle: true in SDKs.
Confirm your assigned endpoint and region inside the dashboard or welcome email before wiring up clients.
Advanced S3 feature parity
altS3 matches the capabilities you rely on in self-hosted S3-compatible stacks:
- Erasure-coded durability + bitrot protection on every bucket.
- Bucket versioning & object locking for governance/compliance workloads.
- Server-side encryption (SSE-S3/AES-256 or SSE-C) on uploads.
- Lifecycle policies that expire or transition objects based on prefixes/tags.
- Per-tenant quotas + telemetry surfaced in the altS3 portal.
- Open-source mc CLI compatibility for scripting and automation.
Bring your existing automation, mc command sets, and SDK behaviors—they work the same way against altS3.
Authentication
We accept S3-style signatures (v4). Provide your access key and secret key in the client/SDK.
# example ~/.aws/credentials
[alts3]
aws_access_key_id = ALTS3ACCESSKEY
aws_secret_access_key = alts3-secret-key
# example ~/.aws/config
[profile alts3]
region = YOUR_ALTS3_REGION # typically us-east-1 for altS3
output = json
s3 =
  addressing_style = path
Time skew can cause signature errors. Ensure your server time is accurate.
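The same credentials work from SDKs. A minimal boto3 sketch that loads the alts3 profile defined above and pins Signature Version 4 explicitly; the endpoint shown is the example value, so substitute your assigned one:
import boto3
from botocore.config import Config

# Reuses the [alts3] profile from ~/.aws/credentials and ~/.aws/config above.
session = boto3.session.Session(profile_name="alts3")
s3 = session.client(
    "s3",
    endpoint_url="https://s3.us.alts3.com",  # example endpoint
    config=Config(signature_version="s3v4", s3={"addressing_style": "path"}),
)

print(s3.list_buckets()["Buckets"])
If this call succeeds, your keys, signature version, and system clock are all in order.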
Buckets
Buckets behave like standard S3 buckets:
- Create: PUT /{bucket}
- List: GET /
- Delete: DELETE /{bucket}
aws s3 mb s3://my-bucket \
--endpoint-url https://s3.us.alts3.com \
--region YOUR_ALTS3_REGION \
--profile alts3
Our platform enforces JSON bucket policies (similar to AWS bucket policies). Classic ACLs are not supported—grant access via policies or dedicated access keys.
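As an illustration of the JSON policy model, here is a hedged boto3 sketch that grants anonymous read access to objects under a public/ prefix. The policy follows AWS bucket-policy syntax; the prefix and principal are examples only, so confirm the exact principal and ARN forms altS3 accepts before relying on them.
import json
import boto3

s3 = boto3.session.Session(profile_name="alts3").client(
    "s3", endpoint_url="https://s3.us.alts3.com",
)

# Example policy: anonymous read of objects under public/ only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["*"]},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::my-bucket/public/*"],
        }
    ],
}
s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))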
Versioning & object locking
Full S3 versioning and WORM-style object locking are available—enable versioning per bucket, then optionally require retention windows.
# enable versioning
aws s3api put-bucket-versioning \
--bucket my-bucket \
--versioning-configuration Status=Enabled \
--endpoint-url https://s3.us.alts3.com \
--region YOUR_ALTS3_REGION \
--profile alts3
# optional: enable governance-mode object lock with default retention
aws s3api put-object-lock-configuration \
--bucket my-bucket \
--object-lock-configuration '{
"ObjectLockEnabled":"Enabled",
"Rule":{"DefaultRetention":{"Mode":"GOVERNANCE","Days":30}}
}' \
--endpoint-url https://s3.us.alts3.com \
--region YOUR_ALTS3_REGION \
--profile alts3
Retention settings require versioning to be enabled first. Governance mode lets admins bypass locks with the appropriate header; Compliance mode is immutable until expiration.
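To confirm versioning actually took effect before you depend on retention, read the bucket's versioning status and list object versions. A short boto3 sketch assuming the alts3 profile and example endpoint; bucket and key names are examples:
import boto3

s3 = boto3.session.Session(profile_name="alts3").client(
    "s3", endpoint_url="https://s3.us.alts3.com",
)

# Expect "Enabled" after put-bucket-versioning above.
print(s3.get_bucket_versioning(Bucket="my-bucket").get("Status"))

# Each overwrite of a key now creates a new, independently retained version.
for v in s3.list_object_versions(Bucket="my-bucket", Prefix="file.zip").get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])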
Objects
Upload, download, list, and delete work like S3. Multipart uploads are supported.
# upload
aws s3 cp ./file.zip s3://my-bucket/file.zip \
--endpoint-url https://s3.us.alts3.com \
--region YOUR_ALTS3_REGION \
--profile alts3
# download
aws s3 cp s3://my-bucket/file.zip ./ \
--endpoint-url https://s3.us.alts3.com \
--region YOUR_ALTS3_REGION \
--profile alts3
For very large objects, use multipart. Most SDKs handle this automatically.
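boto3, for instance, switches to multipart automatically once an upload crosses the transfer-config threshold. A sketch with illustrative sizes and an example file name, again assuming the alts3 profile:
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.session.Session(profile_name="alts3").client(
    "s3", endpoint_url="https://s3.us.alts3.com",
)

# Multipart kicks in above 64 MB; 16 MB parts are uploaded with up to 8 threads.
cfg = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=16 * 1024 * 1024,
    max_concurrency=8,
)
s3.upload_file("./backup.tar", "my-bucket", "backups/backup.tar", Config=cfg)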
Lifecycle policies
Use lifecycle rules to expire or transition prefixes/tags—perfect for keeping buckets tidy without manual cleanup.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "logs-expire",
      "Status": "Enabled",
      "Prefix": "logs/",
      "Expiration": { "Days": 30 }
    },
    {
      "ID": "tmp-cleanup",
      "Filter": { "Tag": { "Key": "ttl", "Value": "7d" } },
      "Status": "Enabled",
      "Expiration": { "Days": 7 }
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
--bucket my-bucket \
--lifecycle-configuration file://lifecycle.json \
--endpoint-url https://s3.us.alts3.com \
--region YOUR_ALTS3_REGION \
--profile alts3
Lifecycle rules run natively on altS3. Use prefixes, filters, and tags to dial in the exact clean-up behavior you want.
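To confirm the rules were stored, you can read the configuration back; a short boto3 sketch assuming the bucket and profile used above:
import boto3

s3 = boto3.session.Session(profile_name="alts3").client(
    "s3", endpoint_url="https://s3.us.alts3.com",
)

# Should echo the logs-expire and tmp-cleanup rules uploaded above.
for rule in s3.get_bucket_lifecycle_configuration(Bucket="my-bucket")["Rules"]:
    print(rule["ID"], rule["Status"])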
Server-side encryption
altS3 honors standard SSE-S3 (AES256) and SSE-C headers, so objects are encrypted at rest by default. Use the AWS CLI/S3 APIs to enforce encryption per upload.
# SSE-S3 (managed keys)
aws s3 cp ./finance.csv s3://my-bucket/finance.csv \
--sse AES256 \
--endpoint-url https://s3.us.alts3.com \
--region YOUR_ALTS3_REGION \
--profile alts3
# SSE-C (customer-provided key): generate a key once and keep it; the same key is required to read the object back
SSE_KEY="$(openssl rand -base64 32)"
aws s3 cp ./secret.tgz s3://my-bucket/secret.tgz \
--sse-c AES256 \
--sse-c-key "$SSE_KEY" \
--endpoint-url https://s3.us.alts3.com \
--region YOUR_ALTS3_REGION \
--profile alts3
For SSE-C workloads, store the keys securely—altS3 never retains a copy. SSE-S3 uploads automatically inherit the platform’s KMS integration.
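Because the platform never retains SSE-C keys, every read must present the same key that encrypted the object. A hedged boto3 sketch of that round trip (key handling is simplified here; in practice keep the key in a secrets manager, and note the file and key names are examples):
import os
import boto3

s3 = boto3.session.Session(profile_name="alts3").client(
    "s3", endpoint_url="https://s3.us.alts3.com",
)

# Generate a 256-bit key and persist it somewhere safe; losing it means losing the object.
key = os.urandom(32)

with open("./secret.tgz", "rb") as fh:
    s3.put_object(
        Bucket="my-bucket", Key="secret.tgz", Body=fh,
        SSECustomerAlgorithm="AES256", SSECustomerKey=key,
    )

# Downloads fail unless the exact same key is supplied again.
obj = s3.get_object(
    Bucket="my-bucket", Key="secret.tgz",
    SSECustomerAlgorithm="AES256", SSECustomerKey=key,
)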
Presigned URLs
Generate a temporary URL for PUT or GET using your client. This is handy for browsers and apps.
# presigned GET
aws s3 presign s3://my-bucket/file.zip \
--endpoint-url https://s3.us.alts3.com \
--region YOUR_ALTS3_REGION \
--profile alts3 \
--expires-in 3600
Presigned URLs are time-limited; use shorter expirations for public sharing.
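The aws s3 presign command only generates GET URLs; for a presigned PUT you need an SDK call. A minimal boto3 sketch, assuming the alts3 profile and the example endpoint (bucket, key, and expiry are illustrative):
import boto3
from botocore.config import Config

s3 = boto3.session.Session(profile_name="alts3").client(
    "s3",
    endpoint_url="https://s3.us.alts3.com",
    config=Config(signature_version="s3v4", s3={"addressing_style": "path"}),
)

# URL that permits PUT to this bucket/key until it expires (15 minutes here).
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "my-bucket", "Key": "uploads/report.pdf"},
    ExpiresIn=900,
)
print(upload_url)
A browser or curl --upload-file can then PUT directly to that URL without ever holding your keys.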
SDK examples
Python (boto3)
import boto3
from botocore.config import Config

session = boto3.session.Session()
s3 = session.client(
    service_name="s3",
    region_name="YOUR_ALTS3_REGION",
    endpoint_url="https://s3.us.alts3.com",
    aws_access_key_id="ALTS3ACCESSKEY",
    aws_secret_access_key="alts3-secret-key",
    # path-style addressing, matching forcePathStyle in other SDKs
    config=Config(s3={"addressing_style": "path"}),
)
# list buckets
print(s3.list_buckets())
Node.js (AWS SDK v3)
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";
const client = new S3Client({
  region: "YOUR_ALTS3_REGION",
  endpoint: "https://s3.us.alts3.com",
  forcePathStyle: true,
  credentials: {
    accessKeyId: "ALTS3ACCESSKEY",
    secretAccessKey: "alts3-secret-key",
  },
});
const data = await client.send(new ListBucketsCommand({}));
console.log(data.Buckets);
Note forcePathStyle: true is often required when using S3-compatible providers.
mc CLI quickstart
The open-source mc CLI works unchanged against altS3, so you can keep your existing scripting, lifecycle management, and replication routines.
# configure an alias
mc alias set alts3 https://s3.us.alts3.com ALTS3ACCESSKEY alts3-secret-key
# inspect buckets
mc ls alts3
# mirror a folder
mc mirror ./assets alts3/my-bucket/assets
# manage policies/quotas
mc admin policy list alts3
Use the same mc commands you already run against self-hosted S3-compatible clusters—no changes needed.
Features not available
To deliver $5.95/TB on the base tier, we disable some advanced S3 capabilities:
- No Object Lambda
- No S3 Inventory reports
- No S3 Batch Operations
- No Glacier-like archival tiers
- No bucket ACLs or S3 Access Points (use bucket policies instead)
- No AWS event notifications/Lambda triggers
If your workload uses any of the above, keep it on Amazon S3 or another provider. For backups, media, app assets, and logs, altS3.com is drop-in.
Troubleshooting
SignatureDoesNotMatch
- Check the endpoint is exactly https://s3.us.alts3.com
- Check the region matches YOUR_ALTS3_REGION (defaults to us-east-1 if unspecified)
- Check system time (NTP)
AccessDenied
- Bucket does not exist or is owned by another account
- Your access key was rotated; request a new one