Zipserver is an HTTP service and CLI tool for working with zip files on Google Cloud Storage. It can extract zip files, copy files between storage targets, download URLs to storage, and more.
```
go install github.com/itchio/zipserver@latest
```

Create a config file `zipserver.json`:

```json
{
  "PrivateKeyPath": "path/to/service/key.pem",
  "ClientEmail": "111111111111@developer.gserviceaccount.com",
  "Bucket": "my-bucket",
  "ExtractPrefix": "extracted"
}
```

More config settings can be found in `zipserver/config.go`.
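To check how a file was parsed, the `dump` command (described below) prints the resulting config and exits. A quick sketch; placing the global `--config` flag before the subcommand follows the `--threads` example later in this document, but its exact placement is an assumption:

```
# Print the parsed configuration and exit
zipserver --config=zipserver.json dump
```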
All limits are configured in `zipserver.json` using the names below. For extract and list, you can override limits per request via HTTP query parameters. CLI overrides are available for extract via flags, and `--threads` can override `ExtractionThreads`. A value of 0 means "unbounded" for all limits except `ExtractionThreads` (which uses `GOMAXPROCS`, and is forced to at least 1).
| Config field | Applies to | Description | Default |
|---|---|---|---|
| `MaxInputZipSize` | list, extract | Maximum size (bytes) of the input zip file (compressed). | 104857600 |
| `MaxFileSize` | extract | Maximum uncompressed size (bytes) for a single file. | 209715200 |
| `MaxTotalSize` | extract | Maximum total uncompressed size (bytes) across all files. | 524288000 |
| `MaxNumFiles` | extract | Maximum number of files in the archive. | 100 |
| `MaxFileNameLength` | extract | Maximum path length for a file name in the archive. | 255 |
| `ExtractionThreads` | extract | Number of worker threads used during extraction. | 4 |
| `MaxListFiles` | list | Maximum number of files returned by list. | 50000 |
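For example, a `zipserver.json` that tightens the input-size limit and raises the thread count might look like this (field names are the ones from the table above; the values are illustrative):

```json
{
  "PrivateKeyPath": "path/to/service/key.pem",
  "ClientEmail": "111111111111@developer.gserviceaccount.com",
  "Bucket": "my-bucket",
  "ExtractPrefix": "extracted",
  "MaxInputZipSize": 52428800,
  "MaxNumFiles": 500,
  "ExtractionThreads": 8
}
```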
Zipserver can run as an HTTP server or execute operations directly via CLI commands. Operational commands output JSON; `server`, `testzip`, `dump`, and `version` print human-readable output.
```
zipserver --help            # Show all commands
zipserver <command> --help  # Show help for a specific command
```

| Command | Description | Storage | HTTP Endpoint |
|---|---|---|---|
| `server` | Start HTTP server (default) | n/a | |
| `extract` | Extract a zip file to storage | Source read, optional target write | `/extract` |
| `copy` | Copy a file to target storage or different key | Source read, target or source write | `/copy` |
| `delete` | Delete files from storage | Target write | `/delete` |
| `list` | List files in a zip archive | Source read, URL, or local file | `/list` |
| `slurp` | Download a URL and store it | Source write, or optional target write | `/slurp` |
| `testzip` | Extract and serve a local zip file via HTTP for debugging | local only | |
| `dump` | Dump parsed config and exit | n/a | |
| `version` | Print version information | n/a | |
Start the server:
```
zipserver server --listen 127.0.0.1:8090
```

Warning: this HTTP server exposes unauthenticated operations on your storage bucket. It's recommended to avoid exposing it on public network interfaces.
| Endpoint | Description |
|---|---|
| `/status` | Show currently running tasks (held locks per operation type) |
| `/metrics` | Prometheus-compatible metrics |
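Both can be queried with a plain GET (using the listen address from the example above):

```
curl "http://localhost:8090/status"
curl "http://localhost:8090/metrics"
```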
Example `/status` response:

```json
{
  "copy_locks": [],
  "extract_locks": [{"Key": "s3backup:zips/large.zip", "LockedAt": "...", "LockedSeconds": 12.5}],
  "slurp_locks": [],
  "delete_locks": []
}
```

Extract a zip file and upload individual files to a prefix.
CLI:
```
# Extract from storage key
zipserver extract --key zips/my_file.zip --prefix extracted/

# Extract from local file
zipserver extract --file ./local.zip --prefix extracted/

# With limits
zipserver extract --key zips/my_file.zip --prefix extracted/ \
  --max-file-size 10485760 --max-num-files 100

# Override extraction threads (global flag)
zipserver --threads 8 extract --key zips/my_file.zip --prefix extracted/

# With a target storage and file filter
zipserver extract --key zips/my_file.zip --prefix extracted/ \
  --target s3backup --filter "assets/**/*.png"

# Extract specific files by exact path
zipserver extract --key zips/my_file.zip --prefix extracted/ \
  --only-file "readme.txt" --only-file "images/logo.png"
```

HTTP API:
curl "http://localhost:8090/extract?key=zips/my_file.zip&prefix=extracted"
# With a target storage and file filter
curl "http://localhost:8090/extract?key=zips/my_file.zip&prefix=extracted&target=s3backup&filter=assets/**/*.png"
# Extract specific files by exact path
curl "http://localhost:8090/extract?key=zips/my_file.zip&prefix=extracted&only_files[]=readme.txt&only_files[]=images/logo.png"Note: --filter (glob pattern) and --only-file (exact paths) are mutually exclusive.
When extracting HTML games or web content, you can inject an HTML snippet at the end of all index.html files. This is useful for adding analytics, scripts, or other content without post-processing.
CLI:
```
zipserver extract --key zips/game.zip --prefix games/123/ \
  --html-footer '<script src="/analytics.js"></script>'
```

HTTP API:
```
# Use POST for long footer content
curl -X POST "http://localhost:8090/extract" \
  -d "key=zips/game.zip" \
  -d "prefix=games/123/" \
  -d "html_footer=<script src=\"/analytics.js\"></script>"
```

Behavior:
- Matches `index.html` files case-insensitively (e.g., `INDEX.HTML`, `Index.Html`)
- Injects into all `index.html` files, including nested ones (e.g., `subdir/index.html`)
- Skips pre-compressed files (gzip/brotli) since appending to compressed streams would corrupt them
- The response includes `"Injected": true` for each file that received the footer
Copy a file from primary storage to a target storage, or to a different key within primary storage.
CLI:
```
# Copy to a target storage
zipserver copy --key path/to/file.zip --target s3backup

# Copy to a different key within primary storage (rename/move)
zipserver copy --key path/to/file.zip --dest-key archive/file.zip

# Copy to target storage with a different destination key
zipserver copy --key path/to/file.zip --target s3backup --dest-key renamed.zip

# With HTML footer injection
zipserver copy --key games/123/index.html --target s3backup \
  --html-footer '<script src="/analytics.js"></script>'

# Strip Content-Disposition header (allow inline rendering instead of download)
zipserver copy --key uploads/game.html --dest-key html5games/123/index.html \
  --strip-content-disposition
```

HTTP API:
```
# Sync mode (waits for completion)
curl "http://localhost:8090/copy?key=path/to/file.zip&target=s3backup"

# Copy within primary storage to a different key
curl "http://localhost:8090/copy?key=path/to/file.zip&dest_key=archive/file.zip"

# Copy to target with different destination key
curl "http://localhost:8090/copy?key=path/to/file.zip&target=s3backup&dest_key=renamed.zip"

# Async mode (returns immediately, notifies callback when done)
curl "http://localhost:8090/copy?key=path/to/file.zip&target=s3backup&callback=http://example.com/done"

# Hybrid mode (waits up to sync_timeout ms, then falls back to async + callback)
curl "http://localhost:8090/copy?key=path/to/file.zip&target=s3backup&callback=http://example.com/done&sync_timeout=500"

# With HTML footer injection
curl -X POST "http://localhost:8090/copy" \
  -d "key=games/123/index.html" \
  -d "target=s3backup" \
  -d "html_footer=<script src=\"/analytics.js\"></script>"

# Strip Content-Disposition header (allow inline rendering instead of download)
curl "http://localhost:8090/copy?key=uploads/game.html&dest_key=html5games/123/index.html&strip_content_disposition=true"
```

Either `target` or `dest_key` (or both) must be provided. When copying within the same storage (no `target`), `dest_key` must differ from `key`.
When using `sync_timeout`, you must also provide `callback`. If the copy finishes within `sync_timeout` (milliseconds), the response is synchronous success and no callback is sent. If it exceeds the timeout, the response is async (`{"Processing": true, "Async": true}`) and the callback is sent when the copy completes.
When copying files, you can inject an HTML snippet at the end of the file using the `html_footer` parameter.

- Skips files with `Content-Encoding` set (e.g., gzip/brotli) to avoid corruption
- The response includes `"Injected": true` when the footer was appended
Delete files from a target storage.
CLI:
```
zipserver delete --key file1.zip --key file2.zip --target s3backup
```

HTTP API:
```
# Sync mode (waits for completion)
curl -X POST "http://localhost:8090/delete" \
  -d "keys[]=file1.zip" \
  -d "keys[]=file2.zip" \
  -d "target=s3backup"

# Async mode (returns immediately, notifies callback when done)
curl -X POST "http://localhost:8090/delete" \
  -d "keys[]=file1.zip" \
  -d "keys[]=file2.zip" \
  -d "target=s3backup" \
  -d "callback=http://example.com/done"
```

List files in a zip archive without extracting. Returns JSON with filenames and uncompressed sizes.
CLI:
```
# From storage (uses efficient range requests - only reads zip metadata)
zipserver list --key zips/my_file.zip

# From URL (downloads entire file)
zipserver list --url https://example.com/file.zip

# From local file
zipserver list --file ./local.zip
```

When using `--key`, zipserver uses HTTP range requests to read only the zip's central directory (typically < 1% of the file size). This significantly reduces bandwidth and storage operation costs for large zip files.
HTTP API:
curl "http://localhost:8090/list?key=zips/my_file.zip"The HTTP API also uses range requests when listing by key.
Download a file from a URL and store it in storage.
CLI:
```
# Store in primary storage
zipserver slurp --url https://example.com/file.zip --key uploads/file.zip

# Store in a target storage
zipserver slurp --url https://example.com/file.zip --key uploads/file.zip --target s3backup
```

HTTP API:
curl "http://localhost:8090/slurp?url=https://example.com/file.zip&key=uploads/file.zip"
# With target storage
curl "http://localhost:8090/slurp?url=https://example.com/file.zip&key=uploads/file.zip&target=s3backup"Extract and serve a local zip file via HTTP for debugging:
```
zipserver testzip ./my_file.zip

# With filtering
zipserver testzip ./my_file.zip --filter "*.png"
zipserver testzip ./my_file.zip --only-file "readme.txt"

# With HTML footer injection
zipserver testzip ./my_file.zip --html-footer '<script>console.log("test")</script>'
```

The top-level storage settings in `zipserver.json` (for example `PrivateKeyPath`, `ClientEmail`, `Bucket`) define the primary/source storage used for reads and default writes. You can also configure additional storage targets (S3 or GCS) for copy, delete, extract, and slurp operations.

When a target is specified (for example `--target s3backup` or `target=s3backup`), reads still come from the primary/source storage and writes go to the target bucket. Targets marked `Readonly` cannot be written to.
Example target entries:
```json
{
  "StorageTargets": [
    {
      "Name": "s3backup",
      "Type": "S3",
      "S3AccessKeyID": "...",
      "S3SecretKey": "...",
      "S3Endpoint": "s3.amazonaws.com",
      "S3Region": "us-east-1",
      "Bucket": "my-backup-bucket"
    },
    {
      "Name": "gcsbackup",
      "Type": "GCS",
      "GCSPrivateKeyPath": "/path/to/target/key.pem",
      "GCSClientEmail": "target-service@project.iam.gserviceaccount.com",
      "Bucket": "my-gcs-backup-bucket"
    }
  ]
}
```

Some HTTP handlers support callbacks to notify your application when long-running operations complete. This allows you to immediately return a response to the client while the operation continues in the background.
| Endpoint | Parameter | Sync Mode Available |
|---|---|---|
| `/extract` | `async` | Yes (omit `async` for sync) |
| `/slurp` | `async` | Yes (omit `async` for sync) |
| `/copy` | `callback` | Yes (omit `callback` for sync) |
| `/delete` | `callback` | Yes (omit `callback` for sync) |
- Provide a callback URL via the `async` or `callback` parameter
- The server immediately returns `{"Processing": true, "Async": true}`
- The operation runs in the background
- On completion, the server POSTs the result to your callback URL
The callback is sent as a POST request with `Content-Type: application/x-www-form-urlencoded`.
For `/copy` and `/delete`, you can use `callback=-` to run the operation asynchronously without receiving a callback notification. This is useful when you want to trigger an operation but don't need to know when it completes.
```
# Delete files asynchronously without callback
curl -X POST "http://localhost:8090/delete" -d "keys[]=file.zip" -d "callback=-"
```

On success, callbacks include `Success=true` plus operation-specific fields:
| Endpoint | Success Fields |
|---|---|
| `/extract` | `ExtractedFiles[N][Key]`, `ExtractedFiles[N][Size]`, `ExtractedFiles[N][Injected]` (if `html_footer` was applied) for each file |
| `/slurp` | (none beyond `Success=true`) |
| `/copy` | `Key`, `Duration`, `Size`, `Md5`, `Injected` (if `html_footer` was applied) |
| `/delete` | `TotalKeys`, `DeletedKeys`, `Errors` (JSON array if any) |
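For example, a successful `/copy` callback body decodes to fields like the following (the values, including the `Duration` format, are illustrative):

```
Success=true&Key=path%2Fto%2Ffile.zip&Duration=1.2s&Size=1048576&Md5=9b2cf535f27731c974343645a3985328
```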
On error, callbacks include:
| Endpoint | Error Fields |
|---|---|
| `/extract` | `Type=ExtractError`, `Error=<message>` |
| `/slurp` | `Type=SlurpError`, `Error=<message>` |
| `/copy` | `Success=false`, `Error=<message>` |
| `/delete` | `Success=false`, `Error=<message>` |
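A failed `/extract` callback, by contrast, carries fields like this (the message text is illustrative):

```
Type=ExtractError&Error=zip%20exceeds%20MaxInputZipSize
```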
Extract with callback:
curl "http://localhost:8090/extract?key=zips/my_file.zip&prefix=extracted&async=http://example.com/extract-done"Slurp with callback:
curl "http://localhost:8090/slurp?url=https://example.com/file.zip&key=uploads/file.zip&async=http://example.com/slurp-done"- The callback timeout is configurable via
AsyncNotificationTimeoutin the config - If your callback URL returns a non-200 status, the error is logged but the operation result is not retried
- Operations that are already in progress for the same key return
{"Processing": true}without theAsyncfield - For
/copy,sync_timeout(milliseconds) enables a hybrid mode: the server waits up to that duration before switching to async and notifying the callback. If it completes within the timeout, no callback is sent.
The key file in your config should be the PEM-encoded private key for a service account which has permissions to view and create objects on your chosen GCS bucket.
The bucket needs correct access settings:
- Public access must be enabled, not prevented.
- Access control should be set to fine-grained ("legacy ACL"), not uniform.
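A sketch of that setup using `gsutil`, assuming the bucket and service account from the config example at the top of this document (verify the commands against current Google Cloud tooling):

```
# Switch the bucket to fine-grained (legacy ACL) access control
gsutil ubla set off gs://my-bucket

# Grant the service account permission to view and create objects
gsutil iam ch \
  "serviceAccount:111111111111@developer.gserviceaccount.com:roles/storage.objectAdmin" \
  gs://my-bucket
```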
Zipserver supports systemd's sandboxing features through environment variables:
| Variable | Purpose | Systemd Directive |
|---|---|---|
| `RUNTIME_DIRECTORY` | Temp directory for zip extraction (instead of `./zip_tmp`) | `RuntimeDirectory=` |
| `CREDENTIALS_DIRECTORY` | Resolves credential paths like `PrivateKeyPath` | `LoadCredential=` |
| `ZIPSERVER_TMP_DIR` | Custom temp directory (takes precedence over `RUNTIME_DIRECTORY`) | - |
When `CREDENTIALS_DIRECTORY` is set, zipserver checks this directory first when loading credential files (e.g., `PrivateKeyPath`, `GCSPrivateKeyPath`). This allows your config to use relative paths like `"PrivateKeyPath": "secret/storage.pem"` while systemd loads the actual file via `LoadCredential=`.
```ini
[Unit]
Description=zipserver
After=network.target

[Service]
User=myuser

# Load credentials to /run/credentials/zipserver/
LoadCredential=zipserver.json:/path/to/config/zipserver.json
LoadCredential=storage.pem:/path/to/credentials/storage.pem
ExecStart=/usr/local/bin/zipserver --config=${CREDENTIALS_DIRECTORY}/zipserver.json

# Temp directory for zip extraction
RuntimeDirectory=zipserver
RuntimeDirectoryMode=0700

# Filesystem restrictions
ProtectSystem=strict
ProtectHome=tmpfs
BindReadOnlyPaths=/usr/local/bin/zipserver
PrivateTmp=true

# Network
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX

# Privilege restrictions
NoNewPrivileges=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
ProtectKernelLogs=true
CapabilityBoundingSet=
AmbientCapabilities=

# System call filtering
SystemCallArchitectures=native
SystemCallFilter=@system-service
SystemCallFilter=~@privileged @resources

# Memory protections
MemoryDenyWriteExecute=true

[Install]
WantedBy=multi-user.target
```

Notes:

- `ProtectHome=tmpfs` combined with `BindReadOnlyPaths=` allows exposing only the binary while hiding home directories
- Use `systemd-analyze security zipserver.service` to check the security score
- The `RuntimeDirectory` is automatically cleaned up when the service stops