Hook
A few large images can ruin your upload latency, increase storage costs, and slow client pages. Compressing at ingest keeps your stack fast and predictable. The Image Compressor endpoint lets your backend shrink files in one call and returns ready-to-store URLs and metrics.
What the API does
POST /v1/image-compressor accepts multipart image uploads and returns an array of compressed results: file name, a URL to the compressed object, original and final sizes, and the percent saved. Use it in your upload handler to compress before persisting or serving.
Prerequisites
- An API key set in APISHARD_KEY.
- curl for quick tests; Node 22+ for the TypeScript example.
- A sample image (this guide uses test-1920.jpg).
Core usage
Endpoint: POST https://api.apishard.com/v1/image-compressor
No query parameters are used in the example below.
curl
curl https://api.apishard.com/v1/image-compressor \
  --request POST \
  --header 'Accept: application/json' \
  --header "x-api-key: $APISHARD_KEY" \
  --form 'images=@test-1920.jpg' \
  --form 'quality=1'
TypeScript (Node)
import fs from 'fs';
import FormData from 'form-data';
import fetch from 'node-fetch';

async function compressImage() {
  const form = new FormData();
  // Field names match the curl example: one or more files under 'images'.
  form.append('images', fs.createReadStream('test-1920.jpg'));
  form.append('quality', '1');

  const res = await fetch('https://api.apishard.com/v1/image-compressor', {
    method: 'POST',
    headers: {
      'x-api-key': process.env.APISHARD_KEY || '',
      'Accept': 'application/json',
      ...form.getHeaders() // sets the multipart Content-Type and boundary
    },
    body: form
  });

  if (!res.ok) {
    const text = await res.text();
    throw new Error(`Compression failed: ${res.status} ${text}`);
  }

  const data = await res.json();
  console.log(JSON.stringify(data, null, 2));
}

compressImage().catch(err => {
  console.error(err);
  process.exit(1);
});
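Because the prerequisites call for Node 22+, the same request can also be made with the runtime's built-in fetch and FormData, dropping the node-fetch and form-data dependencies. A minimal sketch, assuming fs.openAsBlob (available since Node 19.8):

import { openAsBlob } from 'node:fs';

// Same request with Node's built-in fetch and FormData (no extra packages).
async function compressImageNative() {
  const form = new FormData();
  // The third argument sets the filename part of the multipart field.
  form.append('images', await openAsBlob('test-1920.jpg'), 'test-1920.jpg');
  form.append('quality', '1');

  const res = await fetch('https://api.apishard.com/v1/image-compressor', {
    method: 'POST',
    // fetch sets the multipart Content-Type and boundary automatically.
    headers: { 'x-api-key': process.env.APISHARD_KEY ?? '', 'Accept': 'application/json' },
    body: form
  });
  if (!res.ok) throw new Error(`Compression failed: ${res.status} ${await res.text()}`);
  return res.json();
}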
Request fields (from the provided example)
- images: multipart file field (one or more files). Type: file (multipart/form-data). Required/optional: not specified in the example.
- quality: form field (string/integer). Example value: 1, as used in the requests above. Required/optional: not specified in the example.
Response (200)
The API returns a JSON array of result objects. Example (exactly as provided):
[
  {
    "name": "flow-1920.jpg",
    "url": "https://apishard-prod.123456789.r2.cloudflarestorage.com/images-compressor/HaVHG-flow-1920.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=c03fd2cb45926dea0198114d149cd473%2F20260412%2Fweur%2Fs3%2Faws4_request&X-Amz-Date=20260412T064325Z&X-Amz-Expires=3600&X-Amz-Signature=99fc2258a2904f5c19d094c73a5d10b992d350b92789355dd9240988cee73c23&X-Amz-SignedHeaders=host&x-amz-checksum-mode=ENABLED&x-id=GetObject",
    "sizeBefore": 264526,
    "sizeAfter": 19572,
    "savedSizePercentage": 92.60110537338485
  }
]
Response headers (example)
x-apishard-request-id: req_iHQZiGKETqDGl71
x-credit-limit: 247825
x-credit-used: 10
x-credit-remaining: 247815
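Those headers are handy for quota monitoring in the same code path; a small sketch, reading the header names exactly as they appear above (assumes Node's built-in fetch Response):

// Log credit accounting from a compressor response.
function logCredits(res: Response) {
  const requestId = res.headers.get('x-apishard-request-id');
  const used = res.headers.get('x-credit-used');
  const remaining = res.headers.get('x-credit-remaining');
  console.log(`request ${requestId}: ${used} credits used, ${remaining} remaining`);
}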
Response fields
- name (string): output filename.
- url (string): link to the compressed image (serve or persist this).
- sizeBefore (integer): original size in bytes.
- sizeAfter (integer): compressed size in bytes.
- savedSizePercentage (number): percentage of bytes saved.
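For typed handling downstream, these fields map onto a small interface; a sketch based only on the fields documented above:

// One element of the JSON array returned by /v1/image-compressor.
interface CompressedImage {
  name: string;                // output filename
  url: string;                 // link to the compressed object
  sizeBefore: number;          // original size in bytes
  sizeAfter: number;           // compressed size in bytes
  savedSizePercentage: number; // percent of bytes saved
}

type CompressorResponse = CompressedImage[];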
Patterns / integration notes
- Sync on-upload: call the compressor inside the upload request and return the compressed URL to the client (simple, but adds latency to the upload).
- Async worker: accept the upload, queue the image, compress in a worker, then update the DB and notify clients (keeps upload latency low).
- Storage: store the returned url, or copy the compressed object into your own storage provider (see the sketch after this list); keep the size* fields for monitoring and billing reconciliation.
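On the storage point: the url in the example response is presigned with X-Amz-Expires=3600, i.e. it lapses after an hour, so copying the object into a bucket you control is the durable option. A sketch using the CompressedImage shape from above and an S3-compatible client; the bucket name and key scheme are placeholders:

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

// Download from the short-lived presigned URL and re-upload to your own bucket.
async function persistCompressed(result: CompressedImage): Promise<string> {
  const res = await fetch(result.url);
  if (!res.ok) throw new Error(`Download failed: ${res.status}`);
  const key = `compressed/${result.name}`;
  await s3.send(new PutObjectCommand({
    Bucket: 'my-images',                        // placeholder bucket
    Key: key,
    Body: Buffer.from(await res.arrayBuffer()),
    ContentType: 'image/jpeg'                   // adjust to the source type
  }));
  return key;
}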
Putting it together
In a typical Node upload endpoint, forward the multipart file to /v1/image-compressor (as shown), save the returned url and sizeAfter in your DB, and serve the compressed URL to clients. Use sizeBefore/sizeAfter to tune quality.
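A condensed, hypothetical sketch of such an endpoint, using Express with multer for the incoming multipart file; saveImageRecord stands in for your DB write:

import express from 'express';
import multer from 'multer';

const app = express();
const upload = multer(); // no storage option: files stay in memory

// Placeholder for your persistence layer.
async function saveImageRecord(rec: { url: string; sizeAfter: number }): Promise<void> {}

app.post('/upload', upload.single('image'), async (req, res) => {
  if (!req.file) return res.status(400).json({ error: 'no file' });

  // Forward the uploaded bytes to the compressor.
  const form = new FormData();
  form.append('images', new Blob([req.file.buffer]), req.file.originalname);
  form.append('quality', '1');

  const apiRes = await fetch('https://api.apishard.com/v1/image-compressor', {
    method: 'POST',
    headers: { 'x-api-key': process.env.APISHARD_KEY ?? '' },
    body: form
  });
  if (!apiRes.ok) return res.status(502).json({ error: await apiRes.text() });

  // CompressorResponse is the interface sketched earlier.
  const [result] = (await apiRes.json()) as CompressorResponse;
  await saveImageRecord({ url: result.url, sizeAfter: result.sizeAfter });
  res.json({ url: result.url, saved: result.savedSizePercentage });
});

app.listen(3000);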
Notes & missing details
- Credit accounting is visible in the x-credit-* response headers (the example shows x-credit-used: 10), but a documented per-image credit cost was not provided.
- Required/optional status and valid ranges for quality were not provided; test values to find the right tradeoff (a quick sweep sketch follows this list).
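Since valid quality values are undocumented, a quick sweep makes the size/quality tradeoff visible. The candidate values below are guesses, not documented bounds, and note that each call spends credits (the headers above showed x-credit-used: 10):

import { openAsBlob } from 'node:fs';

// Try several quality values and print the bytes saved for each.
async function sweepQuality(file: string) {
  for (const quality of ['1', '20', '40', '60', '80']) {
    const form = new FormData();
    form.append('images', await openAsBlob(file), file);
    form.append('quality', quality);
    const res = await fetch('https://api.apishard.com/v1/image-compressor', {
      method: 'POST',
      headers: { 'x-api-key': process.env.APISHARD_KEY ?? '' },
      body: form
    });
    if (!res.ok) { console.error(`quality=${quality}: HTTP ${res.status}`); continue; }
    // CompressorResponse is the interface sketched earlier.
    const [r] = (await res.json()) as CompressorResponse;
    console.log(`quality=${quality}: ${r.sizeAfter} bytes (${r.savedSizePercentage.toFixed(1)}% saved)`);
  }
}

sweepQuality('test-1920.jpg').catch(console.error);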
Closing
Compressing at ingest is a quick win: fewer bytes sent, lower storage costs, and faster client pages. Try different quality values and pick sync vs async integration based on your latency needs.
