The S3 API has become the lingua franca of object storage. Backup tools speak it, container registries speak it, data lakes speak it, static site generators speak it. That ubiquity is also why the interesting question is no longer "does this provider speak S3?" — almost everyone does — but where does the bucket actually live, who controls the operator, and what does egress cost?
This piece is the practical version of that question for Australian teams: a walk through what S3 compatibility means in practice, why running it inside Australia matters, and how to point standard tools (boto3, the AWS CLI, rclone, Terraform) at a sovereign endpoint. We'll use RustFS, the storage engine we run at tasmanian.cloud, as the worked example.
What "S3-compatible" actually means
S3 is a protocol — an HTTP-based API with a specific request signing scheme (Signature Version 4), a specific XML response format, and a specific set of operations on buckets, objects, multipart uploads, versions, and ACLs. "S3-compatible" means a server implements enough of that protocol that clients written for Amazon S3 work without changes, usually by changing only an endpoint URL.
In practice, "enough" varies. The well-supported core looks like this:
| Operation | Coverage in modern S3-compatible servers |
|---|---|
| Buckets — create, list, delete, head | Universal. |
| Objects — PUT, GET, DELETE, HEAD, COPY | Universal. |
| Multipart upload | Universal — required for large objects. |
| Versioning | Common, but check semantics around delete markers. |
| Object Lock / WORM | Increasingly common; critical for ransomware-resistant backups. |
| Pre-signed URLs (SigV4) | Universal. |
| Server-side encryption (SSE-S3, SSE-KMS, SSE-C) | SSE-S3 universal; SSE-KMS implementations vary; SSE-C broadly supported. |
| Bucket policies / ACLs | Implemented, but the IAM model differs per provider. |
| S3 Select, Lambda triggers, lifecycle policies | Inconsistent — these are where compatibility claims get loose. |
The honest summary: if your application uses the boto3 / AWS SDK surface — put_object, get_object, copy_object, multipart, presigned URLs — a compliant S3-compatible server will look identical to AWS S3. If your code relies on AWS-specific features (S3 Select, Lambda event triggers over EventBridge, AWS-flavoured bucket policy JSON), expect to do some work.
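If you want to see where a particular endpoint sits on that table, the optional features can be probed directly. A minimal sketch in boto3, assuming the endpoint URL and bucket name are placeholders you replace with your own, and that credentials come from the environment:

```python
import boto3
from botocore.client import Config
from botocore.exceptions import ClientError

# Placeholder endpoint and bucket; credentials are read from the environment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.example",
    region_name="tas-1",
    config=Config(signature_version="s3v4", s3={"addressing_style": "path"}),
)
bucket = "compat-probe"

def probe(label, call):
    """Call an optional API and report how the server responds."""
    try:
        call()
        print(f"{label}: supported")
    except ClientError as exc:
        code = exc.response["Error"]["Code"]
        if code in ("NotImplemented", "MethodNotAllowed"):
            print(f"{label}: not implemented")
        else:
            # e.g. NoSuchLifecycleConfiguration: the call works,
            # the bucket just has nothing configured yet.
            print(f"{label}: supported (server returned {code})")

probe("versioning", lambda: s3.get_bucket_versioning(Bucket=bucket))
probe("object lock", lambda: s3.get_object_lock_configuration(Bucket=bucket))
probe("lifecycle", lambda: s3.get_bucket_lifecycle_configuration(Bucket=bucket))
probe("bucket policy", lambda: s3.get_bucket_policy(Bucket=bucket))
```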
Why "in Australia" is part of the spec
There are four practical reasons to care that an S3 endpoint sits inside Australia, and only one of them is the legal sovereignty argument.
Egress economics
Hyperscaler S3 charges for outbound bandwidth. Pulling 1 TB out of AWS ap-southeast-2 currently lands somewhere around USD 90 plus GST — and that's before any cross-region replication or inter-AZ chatter your architecture happens to do. For workloads that read more than they write (analytics, ML training data, static-content origins, backup restores), egress quietly becomes the dominant line item.
Australian-resident sovereign providers commonly offer zero or near-zero AU egress as a deliberate competitive lever. tasmanian.cloud charges nothing for traffic to AU destinations. That isn't generosity — it's a recognition that for the workloads we serve, the customer's users are also in Australia, and our peering bill is tiny by comparison.
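The arithmetic is worth doing for your own read volumes. A back-of-envelope sketch, assuming an illustrative rate of roughly USD 0.09 per GB (in line with the per-terabyte figure above, not a published price list):

```python
# Illustrative egress cost model; both rates are assumptions for the sketch.
HYPERSCALER_EGRESS_USD_PER_GB = 0.09   # roughly "USD 90 per TB" as above
SOVEREIGN_AU_EGRESS_USD_PER_GB = 0.0   # zero AU egress

monthly_reads_tb = 20                  # e.g. analytics re-reading a data lake
gb = monthly_reads_tb * 1000

print(f"Hyperscaler egress:   ${gb * HYPERSCALER_EGRESS_USD_PER_GB:,.0f}/month")
print(f"Zero-AU-egress reads: ${gb * SOVEREIGN_AU_EGRESS_USD_PER_GB:,.0f}/month")
```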
Latency to local users
Round-trip times to Sydney from regional Australia are typically sub-30 ms; to Tasmania, sub-50 ms; to Singapore, 100 ms or more. Object storage is rarely on a tight latency budget for first-byte time, but latency shows up in two places: the long tail of small-object reads, and signed-URL flows where the browser fetches directly. A local endpoint quietly removes a class of P99 problems.
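If you'd rather measure that tail than take the round-trip figures on faith, a short boto3 loop against a small test object gives a usable P50/P99. A rough sketch, assuming a test bucket and object you've created, with credentials and endpoint configured in the environment as shown later in this article:

```python
import time
import boto3

# Placeholder bucket/key; endpoint and credentials come from the environment.
s3 = boto3.client("s3")
bucket, key = "latency-probe", "small-object.bin"

samples = []
for _ in range(200):
    start = time.perf_counter()
    s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    samples.append((time.perf_counter() - start) * 1000)

samples.sort()
print(f"p50: {samples[len(samples) // 2]:.1f} ms")
print(f"p99: {samples[int(len(samples) * 0.99)]:.1f} ms")
```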
Data sovereignty
We covered this in detail in Data Sovereignty vs. Data Residency in Australia. The short form: residency is necessary but not sufficient. An Australian-owned operator running infrastructure inside Australia changes the legal posture in a way a foreign-owned region cannot.
Operational independence
Multi-cloud isn't about chasing the cheapest VM. It's about having a credible answer when a single supplier has a global control plane outage, a billing dispute, or a policy change you can't live with. A sovereign Australian S3 endpoint as a second target for backups and a destination for disaster-recovery copies is a cheap insurance policy.
What RustFS is
RustFS is the distributed object storage engine we run underneath the tasmanian.cloud S3 API. The relevant properties for this article are:
- S3 API surface: bucket and object CRUD, multipart uploads, versioning, object lock (WORM), and presigned URLs — the operations your tools already use.
- Written in Rust: memory-safe, designed for high-throughput erasure-coded storage, with predictable latency characteristics.
- Post-quantum cryptography: Kyber-768 for key encapsulation and Dilithium-3 for signatures, so encrypted-at-rest data is resistant to harvest-now-decrypt-later attacks against future quantum computers.
- Erasure coding across nodes: data survives node and disk failures without the per-object replication overhead of naive 3x copies.
- Zero AU egress: we don't charge for traffic to Australian destinations.
The full architectural deep dive lives in the RustFS docs; this article stays at the integration layer.
Connecting standard tools
Every example below assumes the tasmanian.cloud TAS-1 endpoint and a set of credentials issued from the dashboard. The pattern generalises to any S3-compatible provider — only the endpoint URL, credentials, and the "use path-style addressing" flag tend to vary.
Environment variables
```bash
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="tas-1"
export AWS_ENDPOINT_URL="https://s3.tas-1.tasmanian.cloud"
```

Most modern AWS SDKs honour `AWS_ENDPOINT_URL` out of the box. Older clients may need the endpoint passed explicitly per call.
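For SDKs that do read it, those four variables are the entire configuration. A minimal boto3 sketch, assuming a recent botocore release that honours `AWS_ENDPOINT_URL` (older releases still need `endpoint_url` passed explicitly, as in the Python section below):

```python
import boto3

# No endpoint_url argument: recent botocore versions pick up AWS_ENDPOINT_URL
# from the environment set above.
s3 = boto3.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```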
AWS CLI
```bash
# List buckets
aws s3 ls --endpoint-url https://s3.tas-1.tasmanian.cloud

# Create a bucket
aws s3 mb s3://my-app-prod \
  --endpoint-url https://s3.tas-1.tasmanian.cloud

# Sync a directory up
aws s3 sync ./build s3://my-app-prod/static \
  --endpoint-url https://s3.tas-1.tasmanian.cloud \
  --acl private

# Generate a presigned URL valid for 1 hour
aws s3 presign s3://my-app-prod/reports/q1.pdf \
  --expires-in 3600 \
  --endpoint-url https://s3.tas-1.tasmanian.cloud
```

Python (boto3)
```python
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.tas-1.tasmanian.cloud",
    aws_access_key_id="your-access-key",
    aws_secret_access_key="your-secret-key",
    region_name="tas-1",
    config=Config(signature_version="s3v4", s3={"addressing_style": "path"}),
)

# Upload a file
s3.upload_file("./report.pdf", "my-app-prod", "reports/q1.pdf")

# Generate a 15-minute presigned download URL
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-app-prod", "Key": "reports/q1.pdf"},
    ExpiresIn=900,
)
print(url)
```

Path-style addressing (`endpoint/bucket/key` rather than `bucket.endpoint/key`) is the safer default when working against any non-AWS S3 endpoint, because it doesn't require the bucket to be a valid DNS subdomain.
Node.js (AWS SDK v3)
```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { readFile } from "node:fs/promises";

const s3 = new S3Client({
  region: "tas-1",
  endpoint: "https://s3.tas-1.tasmanian.cloud",
  forcePathStyle: true,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

await s3.send(
  new PutObjectCommand({
    Bucket: "my-app-prod",
    Key: "uploads/avatar.png",
    Body: await readFile("./avatar.png"),
    ContentType: "image/png",
  }),
);

const url = await getSignedUrl(
  s3,
  new PutObjectCommand({ Bucket: "my-app-prod", Key: "uploads/next.png" }),
  { expiresIn: 900 },
);
```

rclone
For backup, sync, and migration jobs, rclone is the workhorse. The remote definition:
```ini
# ~/.config/rclone/rclone.conf
[tascloud]
type = s3
provider = Other
access_key_id = your-access-key
secret_access_key = your-secret-key
endpoint = https://s3.tas-1.tasmanian.cloud
region = tas-1
force_path_style = true
acl = private
```

```bash
# Mirror a local tree to a sovereign bucket
rclone sync ./data tascloud:my-app-prod/data --progress

# Copy from AWS S3 to tasmanian.cloud (one-time migration)
rclone copy aws-prod:legacy-bucket tascloud:my-app-prod \
  --transfers 16 --checksum
```

Terraform
The standard aws provider works against any S3-compatible endpoint with a few overrides:
```hcl
terraform {
  required_providers {
    aws = { source = "hashicorp/aws", version = "~> 5.0" }
  }
}

provider "aws" {
  region     = "tas-1"
  access_key = var.tas_access_key
  secret_key = var.tas_secret_key

  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  skip_region_validation      = true # "tas-1" is not a region AWS knows about
  s3_use_path_style           = true

  endpoints {
    s3 = "https://s3.tas-1.tasmanian.cloud"
  }
}

resource "aws_s3_bucket" "app" {
  bucket = "my-app-prod"
}

resource "aws_s3_bucket_versioning" "app" {
  bucket = aws_s3_bucket.app.id
  versioning_configuration {
    status = "Enabled"
  }
}
```

Migrating from AWS S3
The mechanics of moving an existing S3 workload off a hyperscaler break into three steps that you can run in parallel for different buckets.
- Inventory. Catalogue every bucket, its size, its object count, its access pattern, and its sensitivity. The boring part of any migration is finding the "temporary" bucket from 2019 that's now load-bearing.
- Bulk seed. Use `rclone copy` with a high `--transfers` count to seed the new bucket. For multi-TB moves, run the seed from an EC2 instance in `ap-southeast-2` so the AWS-side reads happen at datacentre speed and the WAN hop happens only once.
- Cutover. Switch the application to dual-write to both buckets, run a final delta sync, then flip reads. Keep the AWS bucket read-only for a defined window before deletion. A minimal dual-write sketch follows this list.
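What the dual-write step looks like depends on your stack, but the shape is simple. A minimal boto3 sketch, assuming illustrative bucket names and a best-effort policy for the secondary write (your cutover runbook may demand stricter handling):

```python
import logging
import boto3

log = logging.getLogger("dual-write")

# Two clients: the incumbent AWS bucket and the new sovereign bucket.
# Bucket names and credentials handling here are illustrative.
aws_s3 = boto3.client("s3", region_name="ap-southeast-2")
tas_s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.tas-1.tasmanian.cloud",
    region_name="tas-1",
)

def put_object_dual(key: str, body: bytes) -> None:
    """Write to the primary first; treat the secondary as best-effort during
    the cutover window and rely on the final delta sync to catch misses."""
    aws_s3.put_object(Bucket="legacy-bucket", Key=key, Body=body)
    try:
        tas_s3.put_object(Bucket="my-app-prod", Key=key, Body=body)
    except Exception:
        log.exception("secondary write failed for %s", key)

put_object_dual("reports/q1.pdf", b"...")
```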
For large estates we're happy to help — the migration patterns are well-trodden, but the tooling and order of operations are where most of the time goes.
Common compatibility gotchas
- Path-style vs. virtual-hosted-style. Always set path-style addressing for non-AWS endpoints. Virtual-hosted-style (`bucket.endpoint`) requires wildcard DNS and a wildcard TLS certificate that not every provider runs.
- SigV4 region. The signing region must match the server's configured region (`tas-1` for tasmanian.cloud). Mismatches surface as opaque `SignatureDoesNotMatch` errors.
- Bucket naming. Stick to lowercase, hyphenated, DNS-safe names. Even with path-style addressing, mixed-case names cause issues with some SDK clients.
- Multipart thresholds. Tune the multipart threshold and chunk size to your link. The defaults are tuned for datacentre-to-datacentre throughput; over a customer ADSL link a smaller chunk size finishes more reliably. A boto3 tuning sketch follows this list.
- Object Lock semantics. If you're using S3 Object Lock for ransomware-resistant backups, validate the retention behaviour against your runbook before relying on it. The spec is consistent but provider implementations have edge cases.
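For the multipart point specifically, boto3 exposes both knobs through `TransferConfig`. A sketch with deliberately conservative values for a slow uplink (the numbers are illustrative, not a recommendation):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.tas-1.tasmanian.cloud",
    region_name="tas-1",
)

# Smaller parts fail and retry individually, which matters more on a slow or
# lossy uplink than raw throughput does. Values here are illustrative.
slow_link = TransferConfig(
    multipart_threshold=16 * 1024 * 1024,  # switch to multipart above 16 MiB
    multipart_chunksize=8 * 1024 * 1024,   # 8 MiB parts (S3 minimum is 5 MiB)
    max_concurrency=4,
)

s3.upload_file(
    "./backup.tar.zst", "my-app-prod", "backups/backup.tar.zst",
    Config=slow_link,
)
```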
Where to go next
If you want to try the API against a real endpoint, the storage docs have the full reference, including endpoints, auth, and bucket policies. The architecture deep-dive lives in the RustFS docs. And if you're evaluating sovereignty as a procurement requirement rather than just an architectural preference, the companion piece Data Sovereignty vs. Data Residency in Australia is the right next read.