File Upload
Chunked upload flow, web upload (URL import), status polling, and upload management.
Overview
Fast.io supports four upload flows:
- Small files (< 4 MB) — Single-request upload. Send the file as a multipart chunk in the session creation request. Optionally auto-add to storage in the same call.
- Large files (≥ 4 MB) — Chunked upload. Create a session, upload chunks (up to 3 in parallel), trigger assembly, poll until stored.
- Stream upload — Upload a file of unknown size in a single request. Create a session with stream=true, then POST the raw file body to the stream endpoint.
- Web upload (URL import) — Import files from any HTTP/HTTPS URL. The server downloads and uploads the file asynchronously.
Each direct upload flow produces an upload session with a unique id (OpaqueId). Once the upload reaches stored status, the file is ready to use.
Upload Constraints
| Constraint | Value |
|---|---|
| Single-call upload max size | 4 MB (4,194,304 bytes) |
| Chunk size | Plan-dependent (query /upload/limits/ for exact values) |
| Last chunk | May be smaller than the plan chunk size |
| Max parallel chunk uploads per session | 3 |
| Max undersized chunks per session | 1 (the final chunk only) |
| Chunk ordering | 1-based (first chunk is order=1) |
| Supported hash algorithms | md5, sha1, sha256, sha384 |
| relative_path max length | 8192 characters |
| relative_path | Omit entirely if empty — do NOT send as empty string |
| creator format | 1–150 chars, alphanumeric and hyphens only (/^[a-zA-Z0-9\-]+$/) |
| Max file size | Plan-dependent (up to 40 GB) |
| Max concurrent sessions | Plan-dependent (150 for Free, 7500 for Pro/Business) |
| Long-poll max wait | 590 seconds |
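The hash and hash_algo parameters accept any of the four supported algorithms, and the digest must cover the whole file. A minimal sketch of computing that digest without loading the file into memory, using Python's standard hashlib (the helper name and 1 MB read size are my own choices):

```python
import hashlib

def file_digest(path: str, algo: str = "sha256", read_size: int = 1 << 20) -> str:
    """Stream a file through the chosen hash in 1 MB reads; returns hex digest."""
    if algo not in ("md5", "sha1", "sha256", "sha384"):
        raise ValueError(f"unsupported hash_algo: {algo}")
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(read_size), b""):
            h.update(block)
    return h.hexdigest()  # hex string, as expected by the hash parameter
```

Pass the result as `hash` together with the matching `hash_algo`; remember the API rejects one without the other.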
Upload Status Values
| Status | Meaning | Action |
|---|---|---|
| ready | Session created, awaiting chunks | Upload chunks |
| uploading | Chunks being received | Continue uploading |
| assemble | Assembly queued | Keep polling |
| assembling | Assembly in progress | Keep polling |
| complete | Assembly done, awaiting storage import | Keep polling (or use addfile if not auto-adding) |
| store | Storage import queued | Keep polling |
| storing | Being imported to storage | Keep polling |
| stored | Fully complete — file assembled and in storage | Done. new_file_id available. |
| assembly_failed | Assembly failed | Handle error |
| store_failed | Storage import failed | Handle error |
Terminal states: stored, assembly_failed, store_failed. Stop polling when you reach one of these.
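In client code the three terminal states are the only exit conditions; every other status means keep polling. A tiny helper reflecting the table above (names are my own):

```python
TERMINAL_STATES = {"stored", "assembly_failed", "store_failed"}

def should_keep_polling(status: str) -> bool:
    """True while the session is still moving through the pipeline."""
    return status not in TERMINAL_STATES

def upload_succeeded(status: str) -> bool:
    """Only `stored` means the file is assembled and in storage."""
    return status == "stored"
```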
Workflow: Small File Upload (< 4 MB)
A single request creates the session and uploads the file. Optionally auto-adds to storage.
Step 1: Upload in one request
Create upload session and send file data in a single request.
Content-Type: multipart/form-data
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | File name including extension (1–255 chars). Names longer than 100 chars are auto-truncated server-side, preserving the extension; conflict resolution switches to RENAME when truncation occurs. |
| size | integer | Yes | File size in bytes (must match actual file) |
| chunk | file | Yes | The file binary data (multipart field) |
| action | string | No | "create" for new file, "update" for file replacement |
| instance_id | string | Required if action=create | Workspace or share profile ID (19-digit numeric) |
| file_id | string | Required if action=update | OpaqueId of the existing file to replace |
| folder_id | string | No | Target folder OpaqueId or "root" for storage root |
| hash | string | No | Hash of the full file. Must be provided with hash_algo. |
| hash_algo | string | No | Hash algorithm: "md5", "sha1", "sha256", or "sha384" |
| relative_path | string | No | For folder uploads, relative path for auto folder creation (max 8192 chars). Omit entirely if not applicable. |
| org | string | No | Organization ID for billing limit resolution (only used when no action is specified) |
| creator | string | No | Client identifier string (1–150 chars, alphanumeric and hyphens only) |
Example (small file with auto-add to workspace)
curl -X POST "https://api.fast.io/current/upload/" \
-H "Authorization: Bearer {jwt_token}" \
-F "name=notes.txt" \
-F "size=1024" \
-F "action=create" \
-F "instance_id=12345678901234567890" \
-F "folder_id=root" \
-F "chunk=@notes.txt"
Response (201 Created)
{
"result": "yes",
"response": {
"id": "u1abc-defgh-ijklm-nopqr-stuvw-xyz123",
"creator": "my-web-client",
"new_file_id": "f3jm5-zqzfx-pxdr2-dx8z5-bvnb3-rpjfm4"
},
"current_api_version": "1.0"
}
Response fields
| Field | Type | Description |
|---|---|---|
| result | string | "yes" on success |
| response.id | string | Upload session OpaqueId |
| response.creator | string | Echoed back only if creator was provided in the request |
| response.new_file_id | string | OpaqueId of the created storage node. Only present for single-call uploads with an upload target (instance_id). |
When instance_id and folder_id are provided, the file is automatically added to storage. No separate polling or addfile step is needed.
Response (without instance_id, 201 Created)
{
"result": "yes",
"response": {
"id": "u1abc-defgh-ijklm-nopqr-stuvw-xyz123"
},
"current_api_version": "1.0"
}
Without a target, only the upload session id is returned. Use addfile to place the file in storage after the upload completes.
Error responses
| Error Code | HTTP Status | Message |
|---|---|---|
| 1605 (Invalid Input) | 400 | "The size was not supplied." |
| 1605 (Invalid Input) | 400 | "The file name is not valid." |
| 1605 (Invalid Input) | 400 | "Invalid share or workspace instance ID." |
| 1658 (Not Acceptable) | 406 | "This file type is not allowed for upload on your plan." |
| 1685 (Feature Limit) | 403 | "The file size exceeds single call upload, use chunks." |
| 1685 (Feature Limit) | 403 | "You have created too many upload sessions..." |
| 1685 (Feature Limit) | 403 | "The size is too large for the account plan." |
| 1685 (Feature Limit) | 403 | "The total size of all active upload sessions exceeds the limit." |
| 1605 (Invalid Input) | 400 | "The hash algorithm provided is not valid." |
| 1605 (Invalid Input) | 400 | "The hash provided is not valid." |
| 1605 (Invalid Input) | 400 | "The hash algorithm was provided but not the hash." |
| 1658 (Not Acceptable) | 406 | "We were unable to create the upload session..." |
Workflow: Large File Upload (Chunked)
Step 1: Create upload session
Create a new upload session for a large file.
Content-Type: application/x-www-form-urlencoded
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | File name including extension (1–255 chars). Names longer than 100 chars are auto-truncated server-side, preserving the extension; conflict resolution switches to RENAME when truncation occurs. |
| size | integer | Yes | Total file size in bytes |
| action | string | Yes | "create" for new file, "update" for file replacement |
| instance_id | string | Required if action=create | Workspace or share profile ID (19-digit numeric) for auto-add to storage after assembly |
| file_id | string | Required if action=update | OpaqueId of existing file to replace |
| folder_id | string | No | Target folder OpaqueId or "root" for storage root |
| hash | string | No | SHA-256 hex hash of the full file for integrity verification. Must be provided with hash_algo. |
| hash_algo | string | No | Hash algorithm: "md5", "sha1", "sha256", or "sha384" |
| relative_path | string | No | For folder uploads, relative path for auto folder creation (max 8192 chars). Omit entirely if not applicable. |
| org | string | No | Organization ID for billing limit resolution (only used when no action is specified) |
| creator | string | No | Client identifier string (1–150 chars, alphanumeric and hyphens only) |
Example
curl -X POST "https://api.fast.io/current/upload/" \
-H "Authorization: Bearer {jwt_token}" \
-d "name=annual-report.pdf" \
-d "size=52428800" \
-d "action=create" \
-d "instance_id=12345678901234567890" \
-d "hash_algo=sha256" \
-d "hash=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
Response (201 Created)
{
"result": "yes",
"response": {
"id": "u1abc-defgh-ijklm-nopqr-stuvw-xyz123"
},
"current_api_version": "1.0"
}
Step 2: Upload chunks
Upload a single chunk of file data.
Content-Type: multipart/form-data
Path parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| {upload_id} | string | Yes | The upload session ID from Step 1 |
Query parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| order | integer | Yes | 1-based chunk number (first chunk = 1). Must not exceed plan's chunk limit. |
| size | integer | Yes | Size of this chunk in bytes. Must match the actual uploaded file size. |
| hash | string | No | Hash of this chunk. Must be provided with hash_algo. |
| hash_algo | string | No | Hash algorithm: "md5", "sha1", "sha256", or "sha384" |
Request body (multipart/form-data)
| Field | Type | Required | Description |
|---|---|---|---|
| chunk | file | Yes | Binary chunk data |
Upload up to 3 chunks in parallel. The last chunk may be smaller. Only 1 undersized chunk is allowed per session.
When all chunks have been uploaded (total bytes equal the declared file size), finalization triggers automatically. You can still call the complete endpoint explicitly.
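The chunking rules above (1-based order, a fixed plan chunk size, at most one undersized chunk and only at the end) can be captured in a small planning function. A sketch, assuming `chunk_size` has already been obtained from /upload/limits/:

```python
def plan_chunks(file_size: int, chunk_size: int) -> list[tuple[int, int, int]]:
    """Return (order, offset, size) triples. Order is 1-based; only the
    final chunk may be smaller than chunk_size."""
    if file_size <= 0 or chunk_size <= 0:
        raise ValueError("file_size and chunk_size must be positive")
    plan = []
    offset, order = 0, 1
    while offset < file_size:
        size = min(chunk_size, file_size - offset)  # last chunk may be short
        plan.append((order, offset, size))
        offset += size
        order += 1
    return plan
```

Each triple maps to one chunk request: `order` and `size` become the query parameters, and `offset` tells the client where to slice the file. Dispatch at most 3 of these at a time.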
Example
curl -X POST "https://api.fast.io/current/upload/u1abc-defgh-ijklm-nopqr-stuvw-xyz123/chunk/?order=1&size=5242880&hash_algo=sha256&hash=abc123def456..." \
-H "Authorization: Bearer {jwt_token}" \
-F "chunk=@chunk_001.bin"
Response (202 Accepted)
{
"result": "yes",
"current_api_version": "1.0"
}
Error responses
| Error Code | HTTP Status | Message |
|---|---|---|
| 1605 (Invalid Input) | 400 | "The session id provided is not valid." |
| 1658 (Not Acceptable) | 406 | "The session id provided is not in a valid state to accept a chunk." |
| 1605 (Invalid Input) | 400 | "No order supplied" |
| 1605 (Invalid Input) | 400 | "The order provided for this chunk is not valid..." |
| 1605 (Invalid Input) | 400 | "The size was not supplied." |
| 1685 (Feature Limit) | 403 | "The size is too large for the account plan." |
| 1685 (Feature Limit) | 403 | "The size is too small for the account plan." |
| 1685 (Feature Limit) | 403 | "You have exceeded the maximum number of chunks..." |
| 1685 (Feature Limit) | 403 | "The combined chunk size exceeds the size for this session." |
| 1605 (Invalid Input) | 400 | "The upload chunk failed or was the wrong size..." |
| 1605 (Invalid Input) | 400 | "The chunk failed to hash properly..." |
| 1654 (Internal Error) | 500 | "The chunk failed to be stored..." |
Step 3: Trigger assembly
Trigger asynchronous assembly of all uploaded chunks.
Path parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| {upload_id} | string | Yes | The upload session ID |
Query parameters (optional)
| Parameter | Type | Required | Description |
|---|---|---|---|
| hash | string | No | Final file hash for validation. Can be provided/updated at completion time. Must be provided with hash_algo. |
| hash_algo | string | No | Hash algorithm: "md5", "sha1", "sha256", or "sha384" |
No body parameters required. If the session is already in a completed or processing state (complete, assemble, assembling, store, storing, stored), the endpoint returns 200 OK immediately without error.
Example
curl -X POST "https://api.fast.io/current/upload/u1abc-defgh-ijklm-nopqr-stuvw-xyz123/complete/" \
-H "Authorization: Bearer {jwt_token}"
Response (202 Accepted)
{
"result": "yes",
"current_api_version": "1.0"
}
Error responses
| Error Code | HTTP Status | Message |
|---|---|---|
| 1683 (Resource Missing) | 404 | "The id provided is not found..." |
| 1658 (Not Acceptable) | 406 | "The session id provided is not in a valid state to assemble." |
| 1658 (Not Acceptable) | 406 | "No chunks have been uploaded to this session." |
| 1658 (Not Acceptable) | 406 | "The chunks provided do not match the size of the file." |
| 1685 (Feature Limit) | 403 | "You have created too many upload sessions..." |
| 1678 (Enqueue Failed) | 500 | "Your request was valid but could not be processed." |
Step 4: Poll for completion
Get upload session status with optional long-poll.
Path parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| {upload_id} | string | Yes | The upload session ID |
Query parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| wait | integer | No | — | Long-poll wait time in seconds (1 to 590). Server holds the connection and returns immediately when the status changes. |
The server detects status changes efficiently during long-poll. Maximum wait is 590 seconds.
Example
curl -X GET "https://api.fast.io/current/upload/u1abc-defgh-ijklm-nopqr-stuvw-xyz123/details/?wait=60" \
-H "Authorization: Bearer {jwt_token}"
Response (200 OK)
{
"result": "yes",
"response": {
"session": {
"id": "u1abc-defgh-ijklm-nopqr-stuvw-xyz123",
"name": "annual-report.pdf",
"size": 52428800,
"status": "stored",
"hash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"hash_algo": "sha256",
"created": "2025-01-20 10:30:00",
"updated": "2025-01-20 10:35:00",
"chunks": {
"1": 5242880,
"2": 5242880,
"3": 5242880,
"4": 5242880,
"5": 5242880
}
}
},
"current_api_version": "1.0"
}
Response fields
| Field | Type | Description |
|---|---|---|
| response.session.id | string | Upload session OpaqueId |
| response.session.name | string | Filename |
| response.session.size | integer | Declared file size in bytes |
| response.session.status | string | Current status (see status table above) |
| response.session.hash | string | File hash (if provided) |
| response.session.hash_algo | string | Hash algorithm (if provided) |
| response.session.created | string | Session creation timestamp |
| response.session.updated | string | Last update timestamp |
| response.session.chunks | object | Map of chunk order (string key) to chunk size (integer value) |
| response.session.new_file_id | string | OpaqueId of created storage node (only when stored with a target) |
Exit condition: Stop polling when status is stored, assembly_failed, or store_failed.
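The Step 4 loop reduces to: request details with wait, inspect the status, stop on a terminal state. A transport-agnostic sketch where `fetch_details` stands in for the GET details call (a hypothetical injected callable, so the loop logic itself is testable without the network):

```python
TERMINAL = {"stored", "assembly_failed", "store_failed"}

def poll_until_terminal(fetch_details, max_polls: int = 100) -> dict:
    """fetch_details() must return the parsed response.session object.
    Returns the final session dict once a terminal status is seen."""
    for _ in range(max_polls):
        session = fetch_details()
        if session["status"] in TERMINAL:
            return session
        # Non-terminal: the server's long-poll (wait=...) already held the
        # connection for us, so loop straight back around without sleeping.
    raise TimeoutError("upload session did not reach a terminal state")
```

In a real client `fetch_details` would issue GET /current/upload/{upload_id}/details/?wait=60 and return `response["session"]`; on `stored`, read `new_file_id` from the returned dict.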
Step 5 (if no instance_id): Add file to storage manually
Add a completed upload to workspace storage.
Or for shares:
Add a completed upload to share storage.
Path parameters
- {workspace_id} or {share_id} — Profile ID (19-digit numeric string)
- {folder_id} — OpaqueId of the target folder, or "root" for the storage root
Body parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| from | string (JSON) | Yes | Source specification as JSON-encoded string |
from format
from={"type":"upload","upload":{"id":"{upload_id}"}}
Example
curl -X POST "https://api.fast.io/current/workspace/12345678901234567890/storage/root/addfile/" \
-H "Authorization: Bearer {jwt_token}" \
-d 'from={"type":"upload","upload":{"id":"u1abc-defgh-ijklm-nopqr-stuvw-xyz123"}}'
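The from value is a JSON document serialized into an ordinary form field. Building it with json.dumps avoids hand-quoting mistakes (the helper name is my own):

```python
import json

def addfile_form_data(upload_id: str) -> dict:
    """Form fields for the addfile call: pass as an urlencoded request body."""
    return {"from": json.dumps({"type": "upload", "upload": {"id": upload_id}})}
```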
Step 6 (optional): Clean up session
Delete an upload session and clean up temporary files.
Complete Chunked Upload Example
1. Create session (25 MB file, 5 chunks)
curl -X POST "https://api.fast.io/current/upload/" \
-H "Authorization: Bearer {jwt_token}" \
-d "name=presentation.pptx" \
-d "size=26214400" \
-d "action=create" \
-d "instance_id=12345678901234567890" \
-d "folder_id=root"
Response:
{"result": "yes", "response": {"id": "u1abc-defgh-ijklm-nopqr-stuvw-xyz123"}, "current_api_version": "1.0"}
2. Upload 5 chunks (3 in parallel, then 2 more)
# Chunks 1-3 in parallel
curl -X POST "https://api.fast.io/current/upload/u1abc-.../chunk/?order=1&size=5242880" \
-H "Authorization: Bearer {jwt_token}" -F "chunk=@chunk1.bin" &
curl -X POST "https://api.fast.io/current/upload/u1abc-.../chunk/?order=2&size=5242880" \
-H "Authorization: Bearer {jwt_token}" -F "chunk=@chunk2.bin" &
curl -X POST "https://api.fast.io/current/upload/u1abc-.../chunk/?order=3&size=5242880" \
-H "Authorization: Bearer {jwt_token}" -F "chunk=@chunk3.bin" &
wait
# Chunks 4-5
curl -X POST ".../chunk/?order=4&size=5242880" \
-H "Authorization: Bearer {jwt_token}" -F "chunk=@chunk4.bin" &
curl -X POST ".../chunk/?order=5&size=5242880" \
-H "Authorization: Bearer {jwt_token}" -F "chunk=@chunk5.bin" &
wait
3. Trigger assembly
curl -X POST "https://api.fast.io/current/upload/u1abc-.../complete/" \
-H "Authorization: Bearer {jwt_token}"
4. Poll until stored
curl -X GET "https://api.fast.io/current/upload/u1abc-.../details/?wait=60" \
-H "Authorization: Bearer {jwt_token}"
Response: {"result": "yes", "response": {"session": {"status": "assembling", ...}}} — keep polling
curl -X GET "https://api.fast.io/current/upload/u1abc-.../details/?wait=60" \
-H "Authorization: Bearer {jwt_token}"
Response: {"result": "yes", "response": {"session": {"status": "stored", "new_file_id": "f3jm5-...", ...}}} — done
5. Clean up session
curl -X DELETE "https://api.fast.io/current/upload/u1abc-defgh-ijklm-nopqr-stuvw-xyz123" \
-H "Authorization: Bearer {jwt_token}"
Resume a Disconnected Upload
If an upload is interrupted (network failure, client crash), resume it without re-uploading completed chunks.
Steps
1. Get session status:
curl -X GET "https://api.fast.io/current/upload/{upload_id}/details/" \
  -H "Authorization: Bearer {jwt_token}"
2. Read the chunks map in the response. Keys are chunk numbers already uploaded; values are byte sizes.
3. Upload only the missing chunks. Compare the chunks map against the expected chunk list and upload any chunks not present.
4. Trigger assembly:
curl -X POST "https://api.fast.io/current/upload/{upload_id}/complete/" \
  -H "Authorization: Bearer {jwt_token}"
5. Poll for completion as normal.
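Comparing the returned chunks map against the expected chunk list is a small set difference. A sketch, assuming the details response has already been parsed into a dict:

```python
def missing_chunks(chunks: dict, file_size: int, chunk_size: int) -> list[int]:
    """chunks is the session's chunks map, e.g. {"1": 5242880, "3": 5242880}.
    Returns the 1-based chunk orders that still need uploading."""
    expected = -(-file_size // chunk_size)  # ceiling division: total chunk count
    uploaded = {int(order) for order in chunks}
    return [order for order in range(1, expected + 1) if order not in uploaded]
```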
File Update (New Version)
To upload a new version of an existing file:
- Create a session with action=update, instance_id, and file_id (the OpaqueId of the file to replace).
- Upload chunks and complete as normal. The existing file receives a new version.
curl -X POST "https://api.fast.io/current/upload/" \
  -H "Authorization: Bearer {jwt_token}" \
  -F "name=report-v2.pdf" \
  -F "size=1024" \
  -F "action=update" \
  -F "instance_id=12345678901234567890" \
  -F "file_id=f3jm5-zqzfx-pxdr2-dx8z5-bvnb3-rpjfm4" \
  -F "chunk=@report-v2.pdf"
Stream Upload (Unknown File Size)
Use stream upload when the client does not know the exact file size upfront (piped output, generated content, compressed streams). The client declares a maximum size ceiling, streams the file in a single request, and the system records the actual bytes received.
Step 1: Create stream session
Create a stream-mode upload session.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Filename including extension |
| stream | string | Yes | Must be "true" |
| max_size | integer | No | Maximum file size in bytes (defaults to plan limit) |
| action | string | No | "create" or "update" (same as standard upload) |
| instance_id | string | Conditional | Target workspace/share ID (required if action=create) |
| file_id | string | Conditional | File to update (required if action=update) |
| hash | string | No | Expected whole-file hash |
| hash_algo | string | No | Hash algorithm: "md5", "sha1", "sha256", or "sha384" |
| creator | string | No | Client identifier |
Example
curl -X POST "https://api.fast.io/current/upload/" \
-H "Authorization: Bearer {jwt_token}" \
-d "name=output.tar.gz" \
-d "stream=true" \
-d "max_size=52428800" \
-d "action=create" \
-d "instance_id=12345678901234567890"
Response (201 Created)
{
"result": "yes",
"response": {
"id": "u1abc-defgh-ijklm-nopqr-stuvw-xyz123"
},
"current_api_version": "1.0"
}
Step 2: Stream file body
Upload the entire file as a raw binary stream.
Send the raw file bytes as the request body with Content-Type: application/octet-stream. No size or order parameters needed.
Path parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| {upload_id} | string | Yes | The upload session ID from Step 1 |
Query parameters (optional)
| Parameter | Type | Required | Description |
|---|---|---|---|
| hash | string | No | Whole-file hash for validation. Must be provided with hash_algo. |
| hash_algo | string | No | Hash algorithm: "md5", "sha1", "sha256", or "sha384" |
Example
curl -X POST "https://api.fast.io/current/upload/u1abc-defgh-ijklm-nopqr-stuvw-xyz123/stream/" \
-H "Authorization: Bearer {jwt_token}" \
-H "Content-Type: application/octet-stream" \
--data-binary @myfile.tar.gz
Response (201 Created)
{
"result": "yes",
"current_api_version": "1.0"
}
The session auto-finalizes after the stream completes. The session's size field is updated to reflect the actual bytes received.
Error responses
| Error Code | HTTP Status | Message |
|---|---|---|
| 1658 (Not Acceptable) | 406 | "This session is not configured for streaming uploads." |
| 1658 (Not Acceptable) | 406 | "A stream has already been uploaded for this session." |
| 1605 (Invalid Input) | 400 | "The upload stream was empty or interrupted." |
| 1685 (Feature Limit) | 403 | "The uploaded bytes exceed the session's maximum size." |
| 1658 (Not Acceptable) | 406 | "The session is not in a valid state to accept a stream." |
Notes
- The max_size parameter is used for quota validation at session creation. If omitted, it defaults to the plan's maximum file size.
- The actual uploaded bytes must not exceed max_size.
- Stream-mode sessions produce exactly one chunk; no multi-chunk assembly pipeline is needed.
- Stream upload is a single-shot operation: you cannot stream to the same session twice.
- Stream-mode sessions cannot use the chunk endpoint; attempting to upload chunks to a stream session returns an error. Use the /stream/ endpoint instead.
- Only one concurrent stream upload is allowed per session; concurrent requests to the same session are rejected.
Batch Upload (Many Small Files)
Submit 1–200 small files in a single request. Returns a per-file result array, so partial success is legible and does not abort the batch. Use this when you have many small files destined for the same workspace or share — one rate-limit cost instead of one per file. For files over 4 MB, keep using the standard chunked POST /current/upload/ flow.
Batch Limits
| Limit | Value |
|---|---|
| Files per batch | 1–200 |
| Max per-file size | 4 MB (4,194,304 bytes) |
| Max request body | 100 MB (applied post-base64-decode on the JSON path) |
| Supported hash algorithms | md5, sha1, sha256, sha384 |
| Status record TTL | 1 hour from POST |
Create batch (multipart)
Submit a batch of up to 200 small files via multipart/form-data.
Content-Type: multipart/form-data
Batch-level fields
| Field | Type | Required | Description |
|---|---|---|---|
| instance_id | string | Yes | Target workspace or share profile ID (19-digit numeric). Every file in the batch lands in this target. |
| creator | string | No | Optional echo-back correlation tag (1–150 chars, alphanumeric and hyphens only). |
| manifest | string (JSON) | Yes | JSON-encoded array of per-file manifest entries (see below). |
| file_{index} | file | Yes (one per manifest entry) | Binary file body. The suffix matches the index in the manifest entry. |
Manifest entry schema
| Field | Type | Required | Description |
|---|---|---|---|
| index | integer | Yes | 0-based position. Indices must be contiguous from 0 to N−1 with no duplicates. |
| filename | string | Yes | File name (1–255 chars). Names longer than 100 chars are auto-truncated server-side, preserving the extension; conflict resolution switches to RENAME when truncation occurs. |
| hash_algo | string | No | "md5", "sha1", "sha256", or "sha384" for optional integrity validation. |
| hash | string | No | Hex digest of the uploaded bytes. |
Hash validation is opt-in per entry: supply both hash_algo and hash, or neither. A mismatch errors only that entry; the batch still returns 200.
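The opt-in rule (both fields or neither) and the mismatch check can be mirrored client-side before submitting, so a bad digest is caught locally instead of surfacing as a per-entry error. A sketch using hashlib; the parameter names deliberately match the manifest fields:

```python
import hashlib

def validate_entry_hash(data: bytes, hash_algo=None, hash=None) -> bool:
    """Mirror the server's per-entry rule: supply both fields or neither."""
    if (hash_algo is None) != (hash is None):
        raise ValueError("hash_algo and hash must both be provided, or neither")
    if hash_algo is None:
        return True  # validation not requested for this entry
    if hash_algo not in ("md5", "sha1", "sha256", "sha384"):
        raise ValueError(f"unsupported hash_algo: {hash_algo}")
    return hashlib.new(hash_algo, data).hexdigest() == hash.lower()
```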
Example
curl -X POST "https://api.fast.io/current/upload/batch/" \
-H "Authorization: Bearer {jwt_token}" \
-F "instance_id=12345678901234567890" \
-F "creator=my-importer" \
-F 'manifest=[{"index":0,"filename":"doc-001.txt"},{"index":1,"filename":"doc-002.txt"}]' \
-F "file_0=@doc-001.txt" \
-F "file_1=@doc-002.txt"
Create batch (JSON)
Fallback body shape for clients that cannot compose multipart.
Content-Type: application/json
Base64 inflation adds ~33% to the wire size and forces the server to hold decoded bytes in memory while parsing; prefer multipart (which streams directly to disk) for non-trivial payloads.
Body schema
{
"instance_id": "12345678901234567890",
"creator": "my-importer",
"files": [
{"filename": "doc-001.txt", "content_b64": "SGVsbG8sIHdvcmxkIQ=="},
{"filename": "doc-002.txt", "content_b64": "U2Vjb25kIGZpbGU="}
]
}
Each files entry accepts the same optional hash_algo and hash fields as the multipart manifest. The array position is the entry's logical index.
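Assembling the JSON body is just base64-encoding each file into a files array. A sketch producing the request body from in-memory bytes (helper name is my own; field names come from the schema above):

```python
import base64
import json

def build_batch_json(instance_id: str, files: dict, creator=None) -> str:
    """files maps filename -> raw bytes. Returns the JSON request body string."""
    body = {
        "instance_id": instance_id,
        "files": [
            {"filename": name, "content_b64": base64.b64encode(data).decode("ascii")}
            for name, data in files.items()
        ],
    }
    if creator is not None:
        body["creator"] = creator
    return json.dumps(body)
```

Keep the ~33% base64 inflation in mind against the 100 MB body cap; for anything non-trivial the multipart path above is cheaper.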
Example
curl -X POST "https://api.fast.io/current/upload/batch/" \
-H "Authorization: Bearer {jwt_token}" \
-H "Content-Type: application/json" \
-d '{
"instance_id": "12345678901234567890",
"creator": "my-importer",
"files": [
{"filename": "doc-001.txt", "content_b64": "SGVsbG8sIHdvcmxkIQ=="},
{"filename": "doc-002.txt", "content_b64": "U2Vjb25kIGZpbGU="}
]
}'
Response (200 OK)
Always 200 OK on a well-formed batch. Inspect count_errored to detect per-file failures.
{
"result": "yes",
"response": {
"batch_id": "{batch_id}",
"count_total": 2,
"count_succeeded": 1,
"count_errored": 1,
"creator": "my-importer",
"results": [
{
"index": 0,
"filename": "doc-001.txt",
"status": "ok",
"upload_id": "{upload_id}",
"node_id": "{node_id}"
},
{
"index": 1,
"filename": "doc-002.txt",
"status": "error",
"error_code": 1605,
"error_message": "hash_algo and hash must both be provided for a manifest entry."
}
]
},
"current_api_version": "1.0"
}
Response fields
| Field | Type | Description |
|---|---|---|
| response.batch_id | string | Opaque batch identifier for the GET status lookup (valid for 1 hour). |
| response.count_total | integer | Files submitted. |
| response.count_succeeded | integer | Files with status: "ok". |
| response.count_errored | integer | Files with status: "error". |
| response.creator | string | Echoed back only if creator was supplied. |
| response.results[].index | integer | Matches the submitted manifest index (or array position for the JSON path). |
| response.results[].filename | string | Submitted filename. |
| response.results[].status | string | "ok" or "error". |
| response.results[].upload_id | string | Present on ok. Upload session OpaqueId. |
| response.results[].node_id | string | Present on ok when storage placement succeeded. |
| response.results[].error_code | integer | Present on error. Same numeric codes used by POST /current/upload/. |
| response.results[].error_message | string | Present on error. Human-readable description. |
Whole-batch error responses
These are returned in the standard error envelope (no results[]).
| Error Code | HTTP Status | Cause |
|---|---|---|
| 1605 (Invalid Input) | 400 | Unsupported Content-Type (must be multipart/form-data or application/json). |
| 1605 (Invalid Input) | 400 | instance_id missing, not numeric, or not a valid workspace/share ID. |
| 1605 (Invalid Input) | 400 | manifest not valid JSON, empty, or entries malformed. |
| 1605 (Invalid Input) | 400 | Manifest index values not contiguous from 0, or contain duplicates. |
| 1605 (Invalid Input) | 400 | Manifest filename fails validation. |
| 1605 (Invalid Input) | 400 | JSON body missing files array or files empty. |
| 1605 (Invalid Input) | 400 | creator fails length or character-set validation. |
| 1685 (Feature Limit) | 403 | Batch contains more than 200 files. |
| 1685 (Feature Limit) | 403 | Request body exceeds 100 MB (pre-parse Content-Length, or post-decode total for JSON). |
| 1685 (Feature Limit) | 403 | Account plan does not allow files at the 4 MB per-file bound. |
| 1680 (Access Denied) | 403 | Caller not authorized to upload to the target. |
Per-file error causes
Recorded in results[] with status: "error"; the batch returns 200 OK.
- Missing or empty file_{index} part (multipart), or missing or empty content_b64 (JSON).
- Base64 decode failure (JSON path).
- File over the 4 MB per-file bound; use POST /current/upload/ instead.
- File type (extension or MIME) restricted for the account plan.
- hash_algo not supported, or hash length wrong for the declared algorithm.
- hash_algo supplied without hash (or vice versa).
- Uploaded bytes do not match the declared hash.
- Internal storage or finalization failure for the single entry.
Fetch batch status
Re-fetch the stored result record for a prior batch.
Authentication is not required for this endpoint — the opaque batch_id is the only credential. Treat batch_id as bearer-equivalent; transport over HTTPS and do not log alongside identifiers.
Path parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| {batch_id} | string | Yes | The opaque batch identifier returned by a previous POST. |
Example
curl -X GET "https://api.fast.io/current/upload/batch/{batch_id}/"
Response (200 OK)
{
"result": "yes",
"response": {
"batch_id": "{batch_id}",
"creator": "my-importer",
"count_total": 2,
"count_succeeded": 1,
"count_errored": 1,
"results": [
{ "index": 0, "filename": "doc-001.txt", "status": "ok", "upload_id": "{upload_id}", "node_id": "{node_id}" },
{ "index": 1, "filename": "doc-002.txt", "status": "error", "error_code": 1605, "error_message": "hash_algo and hash must both be provided for a manifest entry." }
],
"created_ts": 1745000000
},
"current_api_version": "1.0"
}
Error responses
| Error Code | HTTP Status | Cause |
|---|---|---|
| 1605 (Invalid Input) | 400 | batch_id missing or not a valid opaque identifier. |
| 1609 (Not Found) | 404 | No record found, or the 1-hour TTL elapsed. 404 is returned for both cases; callers cannot distinguish "never existed" from "expired". |
Notes
- The batch endpoint is rate-limited in an independent bucket from POST /current/upload/. A client doing bulk uploads and then a chunked upload is not double-charged.
- Each successful file produces an upload_session_created event; downstream consumers (search indexing, AI pipelines) see the same event stream as N independent POST /current/upload/ calls.
- Partial success is the documented contract. A failure on file 37 does not roll back files 0–36; they are already persisted.
- Files over 4 MB in the batch return per-file errors pointing back to POST /current/upload/; the rest of the batch still processes.
Web Upload (URL Import)
Import files from any HTTP/HTTPS URL. Supports OAuth-protected URLs (Google Drive, OneDrive, Dropbox, Box, iCloud). The server downloads the file in the background and streams it through the standard upload pipeline.
Create web upload job
Create a new web upload job to import a file from a URL.
Content-Type: application/json
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| source_url | string | Yes | URL to download the file from. Only HTTP/HTTPS supported. Max 2048 characters. |
| file_name | string | Yes | Filename to save as (1–255 chars). Names longer than 100 chars are auto-truncated server-side, preserving the extension; RENAME conflict resolution prevents distinct sources from overwriting each other when they collapse to the same truncated prefix. |
| profile_id | string | Yes | Target workspace or share profile ID (19-digit numeric) |
| profile_type | string | Yes | "workspace" or "share" |
| folder_id | string | No | Target folder OpaqueId or "root" for storage root |
| relative_path | string | No | Relative path for automatic folder creation (1–8192 chars) |
| options | integer | No | Bitfield options (default: 0). See options table. |
| creator | string | No | Client identifier string (1–150 chars, alphanumeric and hyphens only) |
Options bitfield
| Bit | Value | Description |
|---|---|---|
| 0 | 0x1 | Overwrite if a file with the same name exists at the destination |
| 1 | 0x2 | Skip virus scanning (admin only) |
| 2 | 0x4 | Attempt to preserve source file metadata |
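The options value is a plain bitwise OR of the flags above. A sketch with named constants (the constant names are my own; only the numeric values come from the table):

```python
OVERWRITE = 0x1          # bit 0: overwrite same-name file at destination
SKIP_VIRUS_SCAN = 0x2    # bit 1: skip virus scanning (admin only)
PRESERVE_METADATA = 0x4  # bit 2: attempt to preserve source file metadata

def pack_options(*flags: int) -> int:
    """Combine flags into the integer sent as the `options` parameter."""
    value = 0
    for flag in flags:
        value |= flag
    return value

def has_option(options: int, flag: int) -> bool:
    """Check whether a flag is set in an options value."""
    return bool(options & flag)
```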
Example
curl -X POST "https://api.fast.io/current/web_upload/" \
-H "Authorization: Bearer {jwt_token}" \
-H "Content-Type: application/json" \
-d '{
"source_url": "https://example.com/files/document.pdf",
"file_name": "document.pdf",
"profile_id": "12345678901234567890",
"profile_type": "workspace",
"folder_id": "root"
}'
Response (201 Created)
{
"result": "yes",
"response": {
"web_upload": {
"id": "aBcDeFgHiJkLmNoPqRsT123456",
"user_id": "12345678901234567890",
"profile_id": "12345678901234567890",
"profile_type": "workspace",
"source_url": "https://example.com/files/document.pdf",
"file_name": "document.pdf",
"folder_id": null,
"relative_path": null,
"status": "queued",
"status_description": "Queued for processing",
"bytes_downloaded": 0,
"expected_size": null,
"upload_session_id": null,
"async_job_id": null,
"creator": "12345678901234567890",
"error_message": null,
"options": 0,
"properties": null,
"created": "2025-01-26 15:00:00",
"updated": "2025-01-26 15:00:00"
}
},
"current_api_version": "1.0"
}
Response fields
| Field | Type | Description |
|---|---|---|
| response.web_upload.id | string | Web upload job OpaqueId |
| response.web_upload.user_id | string | ID of the user who created the job |
| response.web_upload.profile_id | string | Target workspace or share ID |
| response.web_upload.profile_type | string | "workspace" or "share" |
| response.web_upload.source_url | string | The source URL |
| response.web_upload.file_name | string | Destination filename |
| response.web_upload.folder_id | string/null | Target folder OpaqueId |
| response.web_upload.relative_path | string/null | Relative path for folder creation |
| response.web_upload.status | string | Status string (e.g., "queued", "downloading", "complete") |
| response.web_upload.bytes_downloaded | integer | Bytes downloaded so far (0 initially) |
| response.web_upload.expected_size | integer/null | Expected file size (from HEAD request, if known) |
| response.web_upload.upload_session_id | string/null | Linked upload session ID (populated during uploading phase) |
| response.web_upload.async_job_id | string/null | The async job ID processing this upload |
| response.web_upload.status_description | string | Human-readable status description |
| response.web_upload.creator | string | User ID of the creator |
| response.web_upload.error_message | string/null | Error details if failed |
| response.web_upload.options | integer | Options bitfield |
| response.web_upload.created | string | Creation timestamp |
| response.web_upload.updated | string | Last update timestamp |
Error responses
| Error Code | HTTP Status | Message |
|---|---|---|
| 1605 (Invalid Input) | 400 | "Invalid URL. Only HTTP and HTTPS URLs are supported." |
| 1605 (Invalid Input) | 400 | "Invalid profile_type. Must be \"workspace\" or \"share\"." |
| 1680 (Access Denied) | 403 | "You do not have permission to upload to this workspace." |
| 1680 (Access Denied) | 403 | "You do not have permission to upload to this share." |
| 1658 (Not Acceptable) | 406 | "You have too many active web uploads..." |
| 1654 (Internal Error) | 500 | "Failed to create web upload job." |
OAuth for protected URLs
For Google Drive, OneDrive, and other OAuth-protected files, include the access token as a query parameter in the source URL:
https://www.googleapis.com/drive/v3/files/{fileId}?alt=media&access_token={oauth_token}
The server extracts the token from the URL, removes it from the query string, and sends it as an Authorization: Bearer header on all HTTP requests. Tokens are never logged or returned in API responses.
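As a concrete illustration of the URL shape above, here is a minimal Python sketch that builds a Google Drive download URL with the token embedded as a query parameter. The function name `drive_source_url` is ours, not part of any SDK; the endpoint path and `alt=media` parameter come from the example above.

```python
from urllib.parse import urlencode

def drive_source_url(file_id: str, oauth_token: str) -> str:
    # Embed the OAuth token as a query parameter; the server strips it
    # from the URL and resends it as an Authorization: Bearer header.
    base = f"https://www.googleapis.com/drive/v3/files/{file_id}"
    return f"{base}?{urlencode({'alt': 'media', 'access_token': oauth_token})}"
```

The resulting string is what you would pass as `source_url` when creating the web upload job.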
List web upload jobs
List all web upload jobs for the current user.
Query parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| limit | integer | No | 50 | Maximum number of results (1–100) |
| offset | integer | No | 0 | Pagination offset |
| status | string | No | — | Filter by status: "pending", "queued", "downloading", "uploading", "complete", "failed", "canceled" |
Example
curl -X GET "https://api.fast.io/current/web_upload/?status=downloading&limit=20" \
-H "Authorization: Bearer {jwt_token}"
Response (200 OK)
{
"result": "yes",
"response": {
"web_uploads": [
{
"id": "aBcDeFgHiJkLmNoPqRsT123456",
"user_id": "12345678901234567890",
"profile_id": "12345678901234567890",
"profile_type": "workspace",
"source_url": "https://example.com/files/document.pdf",
"file_name": "document.pdf",
"folder_id": null,
"relative_path": null,
"status": "downloading",
"status_description": "Downloading file from URL",
"bytes_downloaded": 5242880,
"expected_size": 52428800,
"progress_percent": 10,
"upload_session_id": null,
"error_message": null,
"created": "2025-01-26 15:00:00",
"updated": "2025-01-26 15:01:00"
}
],
"total": 1,
"limit": 20,
"offset": 0
},
"current_api_version": "1.0"
}
Response fields
| Field | Type | Description |
|---|---|---|
| response.web_uploads | array | Array of web upload job objects |
| response.web_uploads[].id | string | Web upload job OpaqueId |
| response.web_uploads[].profile_type | string | "workspace" or "share" |
| response.web_uploads[].status | string | Status string (e.g., "downloading", "complete") |
| response.web_uploads[].status_description | string | Human-readable status description |
| response.web_uploads[].bytes_downloaded | integer | Bytes downloaded so far |
| response.web_uploads[].expected_size | integer/null | Expected file size (null if unknown) |
| response.web_uploads[].progress_percent | integer | Download progress percentage (0–100) |
| response.web_uploads[].upload_session_id | string/null | Linked upload session ID |
| response.web_uploads[].error_message | string/null | Error details if failed |
| response.total | integer | Total count of matching records |
| response.limit | integer | Applied limit |
| response.offset | integer | Applied offset |
Get web upload job details
Get details for a specific web upload job.
Path parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| {upload_id} | string | Yes | The web upload job OpaqueId |
Example
curl -X GET "https://api.fast.io/current/web_upload/aBcDeFgHiJkLmNoPqRsT123456/details/" \
-H "Authorization: Bearer {jwt_token}"
Response (200 OK)
{
"result": "yes",
"response": {
"web_upload": {
"id": "aBcDeFgHiJkLmNoPqRsT123456",
"user_id": "12345678901234567890",
"profile_id": "12345678901234567890",
"profile_type": "workspace",
"source_url": "https://example.com/files/document.pdf",
"file_name": "document.pdf",
"status": "complete",
"status_description": "Upload successfully completed",
"bytes_downloaded": 52428800,
"expected_size": 52428800,
"upload_session_id": "xYzAbCdEfGhIjKlMnOpQ123456",
"async_job_id": "aBcDeFgHiJkLmNoPqRsT789012",
"creator": "12345678901234567890",
"error_message": null,
"options": 0,
"created": "2025-01-26 15:00:00",
"updated": "2025-01-26 15:02:00"
}
},
"current_api_version": "1.0"
}
Note: Both the details and list endpoints return status as string values (e.g., "pending", "queued", "downloading", "uploading", "complete", "failed", "canceled").
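A typical client polls this details endpoint until the job reaches a terminal status. The sketch below is a hypothetical helper, not part of any official SDK; the HTTP call is injected as a callable (`fetch_job`) so the loop logic stands on its own, and the terminal set matches the status table later in this section.

```python
import time

TERMINAL_STATUSES = {"complete", "failed", "canceled"}

def poll_web_upload(fetch_job, interval: float = 5.0, timeout: float = 3600.0) -> dict:
    # fetch_job() should return the web_upload dict from
    # GET /current/web_upload/{id}/details/ (injected so the loop can be
    # exercised without a live server).
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_job()
        if job["status"] in TERMINAL_STATUSES:
            return job
        time.sleep(interval)
    raise TimeoutError("web upload did not reach a terminal status")
```

On return, check `status` and `error_message` to distinguish success from failure.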
Error responses
| Error Code | HTTP Status | Message |
|---|---|---|
| 1609 (Not Found) | 404 | "Web upload job not found." |
| 1680 (Access Denied) | 403 | "You do not have permission to view this web upload job." |
Cancel web upload job
Cancel an active web upload job.
Query parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | The web upload job OpaqueId to cancel |
Example
curl -X DELETE "https://api.fast.io/current/web_upload/?id=aBcDeFgHiJkLmNoPqRsT123456" \
-H "Authorization: Bearer {jwt_token}"
Response (200 OK)
{
"result": "yes",
"response": {
"canceled": true,
"id": "aBcDeFgHiJkLmNoPqRsT123456"
},
"current_api_version": "1.0"
}
Error responses
| Error Code | HTTP Status | Message |
|---|---|---|
| 1609 (Not Found) | 404 | "Web upload job not found." |
| 1680 (Access Denied) | 403 | "You do not have permission to cancel this web upload job." |
| 1658 (Not Acceptable) | 406 | "This web upload job cannot be canceled because it is already in a terminal state." |
| 1654 (Internal Error) | 500 | "Failed to cancel web upload job." |
Web Upload Status Values
| Status | Value | Description | Terminal |
|---|---|---|---|
| pending | 1 | Job created, waiting for processing | No |
| queued | 2 | Job has been queued for processing | No |
| downloading | 3 | Actively downloading from the source URL | No |
| uploading | 4 | Feeding downloaded chunks to upload system | No |
| complete | 5 | Upload successfully completed | Yes |
| failed | 6 | Download or upload failed | Yes |
| canceled | 7 | User canceled the web upload | Yes |
Web Upload Limits
| Limit | Value |
|---|---|
| Max active per user | 50 (non-terminal jobs) |
| Max file size | Up to 40 GB (subject to plan limits) |
| Max retries | 3 (automatic on transient failures) |
| Retry delay | 60 seconds between attempts |
Upload Management Endpoints
List all upload sessions
Returns all upload sessions for the current user in any state.
Example
curl -X GET "https://api.fast.io/current/upload/details/" \
-H "Authorization: Bearer {jwt_token}"
Response (200 OK)
{
"result": "yes",
"response": {
"results": 2,
"sessions": [
{
"id": "aBcDeFgHiJkLmNoPqRsT123456",
"name": "document.pdf",
"size": 52428800,
"status": "uploading",
"hash": "e3b0c44298fc1c14...",
"hash_algo": "sha256",
"created": "2025-01-20 10:30:00",
"updated": "2025-01-20 10:35:00"
}
]
},
"current_api_version": "1.0"
}
Response fields
| Field | Type | Description |
|---|---|---|
| response.results | integer | Total number of sessions (only present when > 1) |
| response.sessions | array | Array of upload session objects |
Delete/cancel an upload session
Cancel and delete an active upload session.
Path parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| {upload_id} | string | Yes | The upload session ID to delete (appended to URL path) |
Cleans up temporary chunk files and releases session quota. If the session has an associated web upload job, that job is automatically canceled.
Sessions can be deleted in states: ready, uploading, assembly_failed, store_failed, complete. Sessions in assemble, assembling, store, storing, or stored states cannot be deleted.
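Before issuing the DELETE, a client can check the session's last-known status against the deletable set above. This is a small client-side convenience sketch (the names are ours); the server remains the authority and will return 1658 if the state changed in the meantime.

```python
DELETABLE_STATES = {"ready", "uploading", "assembly_failed", "store_failed", "complete"}

def can_delete_session(status: str) -> bool:
    # Sessions mid-assembly or mid-store (assemble, assembling, store,
    # storing, stored) cannot be deleted.
    return status in DELETABLE_STATES
```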
Example
curl -X DELETE "https://api.fast.io/current/upload/aBcDeFgHiJkLmNoPqRsT123456" \
-H "Authorization: Bearer {jwt_token}"
Response (200 OK)
{
"result": "yes",
"current_api_version": "1.0"
}
Error responses
| Error Code | HTTP Status | Message |
|---|---|---|
| 1605 (Invalid Input) | 400 | "The id provided is not found or is not associated with your account." |
| 1658 (Not Acceptable) | 406 | "The session id provided is not in a valid state to delete." |
| 1654 (Internal Error) | 500 | "We were unable to delete the requested upload session." |
Get upload limits
Returns upload limits based on the user's billing plan and target context.
Query parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| action | string | No | — | "create" or "update" to get limits in context of a target |
| org | string | No | — | Organization ID for limit resolution (used when no action specified) |
| instance_id | string | Required if action=create | — | Target workspace or share ID |
| folder_id | string | No | — | Target folder OpaqueId or "root" |
| file_id | string | Required if action=update | — | File ID for update context |
Example
# General limits (with org context)
curl -X GET "https://api.fast.io/current/upload/limits/?org=12345678901234567890" \
-H "Authorization: Bearer {jwt_token}"
# Limits for creating a file in a workspace
curl -X GET "https://api.fast.io/current/upload/limits/?action=create&instance_id=12345678901234567890" \
-H "Authorization: Bearer {jwt_token}"
Response (200 OK)
{
"result": "yes",
"response": {
"limits": {
"chunk_size": 104857600,
"size": 42949672960,
"chunks": 500,
"sessions": 7500,
"sessions_size_max": 214748364800
}
},
"current_api_version": "1.0"
}
Response fields
| Field | Type | Description |
|---|---|---|
| response.limits.chunk_size | integer | Maximum size of a single chunk in bytes |
| response.limits.size | integer | Maximum total file size in bytes |
| response.limits.chunks | integer | Maximum number of chunks per upload session |
| response.limits.sessions | integer | Maximum concurrent active upload sessions |
| response.limits.sessions_size_max | integer | Maximum aggregate size of all active sessions in bytes |
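The `limits` object is what a client uses to plan a chunked upload: every chunk must be exactly `chunk_size` bytes except the final one, which may be smaller. A minimal planning sketch (the function name is ours, assuming the limits dict has the shape shown above):

```python
import math

def plan_chunks(file_size: int, limits: dict) -> list[int]:
    # Split a file into chunk sizes per /upload/limits/. Only the final
    # chunk may be smaller than the plan chunk size.
    chunk_size = limits["chunk_size"]
    if file_size > limits["size"]:
        raise ValueError("file exceeds the plan's maximum file size")
    if math.ceil(file_size / chunk_size) > limits["chunks"]:
        raise ValueError("file needs more chunks than the plan allows")
    sizes = [chunk_size] * (file_size // chunk_size)
    if file_size % chunk_size:
        sizes.append(file_size % chunk_size)  # the one allowed undersized chunk
    return sizes
```

Each entry in the returned list corresponds to one `POST .../chunk/?order=N&size=N` call, with `order` running from 1.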
Get restricted file extensions
Returns restricted and archive file extensions. Authentication is optional.
Query parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| plan | string | No | User's plan or "free" | Override the billing plan to check restrictions for |
Example
curl -X GET "https://api.fast.io/current/upload/limits/extensions/"
# With specific plan
curl -X GET "https://api.fast.io/current/upload/limits/extensions/?plan=pro"
Response (200 OK)
{
"result": "yes",
"response": {
"restricted_extensions": [".exe", ".apk", ".jar", ".php"],
"archive_extensions": [".7z", ".zip", ".rar", ".tar.gz", ".bz2"],
"enforcement_enabled": true,
"plan": "free",
"cache_ttl": 86400
},
"current_api_version": "1.0"
}
Response fields
| Field | Type | Description |
|---|---|---|
| response.restricted_extensions | string[] | Extensions blocked for the plan |
| response.archive_extensions | string[] | Archive extensions (only populated if the plan restricts archives) |
| response.enforcement_enabled | boolean | Whether extension restriction enforcement is currently active |
| response.plan | string | The plan used for this response |
| response.cache_ttl | integer | Suggested client-side cache TTL in seconds (86400 = 24 hours) |
Clients should call this once on startup and cache the results for 24 hours.
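The caching guidance above can be sketched as follows. Both `is_restricted` and `ExtensionCache` are hypothetical client-side helpers (not SDK names); the HTTP call is injected as a callable so the cache logic is self-contained, and the TTL is taken from the response's `cache_ttl` field.

```python
import time

def is_restricted(file_name: str, restricted: list[str]) -> bool:
    # Case-insensitive suffix match against the plan's restricted list.
    name = file_name.lower()
    return any(name.endswith(ext) for ext in restricted)

class ExtensionCache:
    # Caches the /upload/limits/extensions/ response body for cache_ttl
    # seconds; fetch() performs the actual HTTP GET.
    def __init__(self, fetch):
        self._fetch = fetch
        self._data, self._expires = None, 0.0

    def get(self) -> dict:
        if self._data is None or time.monotonic() >= self._expires:
            self._data = self._fetch()
            self._expires = time.monotonic() + self._data.get("cache_ttl", 86400)
        return self._data
```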
List supported hash algorithms
Returns the list of supported hash algorithms for upload integrity verification.
Example
curl -X GET "https://api.fast.io/current/upload/algos/" \
-H "Authorization: Bearer {jwt_token}"
Response (200 OK)
{
"result": "yes",
"response": {
"algos": ["md5", "sha1", "sha256", "sha384"]
},
"current_api_version": "1.0"
}
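To supply a file hash for integrity verification, hash the file incrementally rather than reading it into memory. A sketch using Python's standard library, restricted to the four algorithms listed above:

```python
import hashlib

SUPPORTED_ALGOS = {"md5", "sha1", "sha256", "sha384"}

def file_hash(path: str, algo: str = "sha256") -> str:
    # Stream the file through the chosen digest in 1 MB blocks.
    if algo not in SUPPORTED_ALGOS:
        raise ValueError(f"unsupported algorithm: {algo}")
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()
```

The resulting hex digest is what you pass alongside the matching `hash_algo` value when creating the session or uploading chunks.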
Get chunk information
Returns information about uploaded chunks for a session.
Query parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| order | integer | No | Specific chunk number to retrieve. If omitted, returns all chunks. |
Example (all chunks)
curl -X GET "https://api.fast.io/current/upload/aBcDeFgHiJkLmNoPqRsT123456/chunk/" \
-H "Authorization: Bearer {jwt_token}"
Response (200 OK, all chunks)
{
"result": "yes",
"response": {
"chunks": {
"1": 5242880,
"2": 5242880,
"3": 2097152
}
},
"current_api_version": "1.0"
}
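This chunk map is the basis for resuming an interrupted upload: diff it against the expected chunk count and re-upload whatever is missing. A small sketch (the function name is ours), assuming the `chunks` object keyed by 1-based order numbers shown above:

```python
def missing_chunks(server_chunks: dict, total_chunks: int) -> list[int]:
    # Keys in the chunk map are 1-based order numbers as strings;
    # any order absent from the map still needs to be uploaded.
    uploaded = {int(order) for order in server_chunks}
    return [n for n in range(1, total_chunks + 1) if n not in uploaded]
```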
To retrieve a specific chunk, you can also append the chunk number to the URL path instead of using the order query parameter.
Path parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| {upload_id} | string | Yes | The upload session ID |
| {order} | integer | Yes | Specific chunk number to retrieve |
Example (single chunk by path)
curl -X GET "https://api.fast.io/current/upload/aBcDeFgHiJkLmNoPqRsT123456/chunk/1" \
-H "Authorization: Bearer {jwt_token}"
Response (200 OK, single chunk)
{
"result": "yes",
"response": {
"chunk": {
"1": 5242880
}
},
"current_api_version": "1.0"
}
Error responses
| Error Code | HTTP Status | Message |
|---|---|---|
| 1609 (Not Found) | 404 | "The supplied chunk not valid or found." |
Delete a chunk
Delete a specific chunk from an upload session. Session must be in uploading state.
Query parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| order | integer | Yes | The chunk number to delete |
Example
curl -X DELETE "https://api.fast.io/current/upload/aBcDeFgHiJkLmNoPqRsT123456/chunk/?order=3" \
-H "Authorization: Bearer {jwt_token}"
Response (200 OK)
{
"result": "yes",
"current_api_version": "1.0"
}
Error responses
| Error Code | HTTP Status | Message |
|---|---|---|
| 1658 (Not Acceptable) | 406 | "The session id provided is not in a valid state to delete a chunk." |
| 1654 (Internal Error) | 500 | "We were unable to delete the requested upload session chunk." |
Quick Reference
Small file (one request, auto-add)
POST /current/upload/
multipart: name, size, chunk, action=create, instance_id, folder_id
-> 201: {id, new_file_id}
Large file (chunked)
POST /current/upload/ # Create session
form: name, size, action=create, instance_id, folder_id
-> 201: {id}
POST /current/upload/{id}/chunk/?order=N&size=N # Upload chunks (up to 3 parallel)
multipart: chunk
-> 202
POST /current/upload/{id}/complete/ # Trigger assembly
-> 202
GET /current/upload/{id}/details/?wait=60 # Poll until stored
-> 200: {session: {status, new_file_id}}
DELETE /current/upload/{id} # Clean up session
-> 200
Stream upload (unknown file size)
POST /current/upload/ # Create stream session
form: name, stream=true, max_size, action=create, instance_id
-> 201: {id}
POST /current/upload/{id}/stream/ # Stream file body
body: raw binary (application/octet-stream)
-> 201 (auto-finalizes)
Manual add to storage (if no instance_id)
POST /current/workspace/{id}/storage/{folder}/addfile/
form: from={"type":"upload","upload":{"id":"{upload_id}"}}
-> 200
Web upload (URL import)
POST /current/web_upload/ # Create job
json: source_url, file_name, profile_id, profile_type
-> 201: {web_upload}
GET /current/web_upload/ # List jobs
query: limit, offset, status
-> 200: {web_uploads, total}
GET /current/web_upload/{id}/details/ # Get job details
-> 200: {web_upload}
DELETE /current/web_upload/?id={id} # Cancel job
-> 200: {canceled, id}
Upload management
GET /current/upload/details/ # List all sessions
GET /current/upload/limits/ # Get plan limits
GET /current/upload/limits/extensions/ # Get restricted extensions
GET /current/upload/algos/ # List hash algorithms
GET /current/upload/{id}/chunk/ # Get all chunk info
GET /current/upload/{id}/chunk/{order} # Get single chunk info
DELETE /current/upload/{id}/chunk/?order=N # Delete a chunk
Best Practices
- Check limits first: Query `/upload/limits/` and `/upload/limits/extensions/` before starting uploads.
- Use hash validation: Always provide chunk and file hashes to detect corruption early.
- Implement retry logic: Failed chunk uploads can be retried by re-uploading the same `order`.
- Track chunks locally: Maintain a local record of successfully uploaded chunks for resumability.
- Long-poll for completion: Use the `wait` parameter on the details endpoint instead of frequent polling.
- Clean up failures: DELETE failed sessions to free session quota.
- Cache extension restrictions: Call `/upload/limits/extensions/` once and cache for 24 hours.
- Use auto-finalization: When all chunks total the declared file size, assembly triggers automatically. Explicit `/complete/` is optional but recommended for reliability.
- Omit `relative_path` when unused: Do NOT send it as an empty string.
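The long-polling practice can be sketched as a loop around the details endpoint. This is a hypothetical helper, not an official client: the long-poll HTTP call (`GET /current/upload/{id}/details/?wait=60`) is injected as a callable so the loop is self-contained, and the failure states come from the session-deletion rules earlier in this document.

```python
import time

def wait_until_stored(fetch_details, timeout: float = 600.0) -> dict:
    # fetch_details() should long-poll the details endpoint with wait=60
    # and return the session dict; the wait parameter means each call may
    # block server-side for up to a minute before returning.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        session = fetch_details()
        if session["status"] == "stored":
            return session
        if session["status"] in {"assembly_failed", "store_failed"}:
            raise RuntimeError(f"upload failed: {session['status']}")
    raise TimeoutError("session did not reach stored within the timeout")
```

Because each long-poll request blocks until the status changes (or the wait expires), this issues far fewer requests than tight polling.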