How It Works
Coming soon. This describes planned functionality. Join the waitlist to get early access.
Getting started with cloud mode is simple — add --cloud to your run command.
Run with --cloud
SPARK_CLOUD_URL=http://api.spark.dev spark run ./tests --cloud
That's it. Same command, same files — Spark handles the rest.
What happens behind the scenes
- Spark parses your *.spark files locally (same as local mode)
- Service template references are resolved and merged
- Test definitions are uploaded to Spark Cloud as JSON (kilobytes, not Docker images)
- Cloud assigns tests to workers and starts execution
- CLI streams results in real-time via SSE:
$ spark run ./tests --cloud
Uploading 24 tests...
Running on 8 workers [3/24]
✓ auth/login ...................... 0.8s
✓ auth/register ................... 1.2s
● api/categories .................. running
○ api/users ....................... pending
○ api/orders ...................... pending
...
21 passed 1 failed 2 skipped (12.4s)
✗ api/categories
Expected status code 200, got 500
https://app.spark.dev/runs/abc123
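The upload step is small because only test definitions travel, not images. The exact payload schema is not documented; the sketch below is a guess at its shape (field names like "tests", "services", and "template" are assumptions) that only illustrates the "kilobytes, not Docker images" point.

```python
import json

# Hypothetical upload payload: parsed test definitions with service
# template references already resolved, serialized as JSON.
payload = {
    "tests": [
        {"id": "auth/login",
         "request": {"method": "POST", "path": "/login"},
         "expect": {"status": 200}},
        {"id": "api/categories",
         "request": {"method": "GET", "path": "/categories"},
         "expect": {"status": 200}},
    ],
    # Merged service template reference (name is illustrative)
    "services": {"api": {"template": "node-18"}},
}

body = json.dumps(payload)
print(f"{len(body)} bytes")  # a few hundred bytes per batch, not a multi-MB image
```

Even a run with hundreds of tests serializes to kilobytes, which is why the upload step is near-instant compared to pushing a container image.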
Exit codes
| Code | Meaning |
|---|---|
| 0 | All tests passed |
| 1 | At least one test failed |
| 2 | Infrastructure error (cloud unreachable, auth failure, timeout) |
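The distinction between exit codes 1 and 2 matters in CI: test failures will fail again on retry, while infrastructure errors may be transient. A sketch of how a wrapper script might use the table (the retry policy is our suggestion, not documented behavior; spark is assumed to be on PATH):

```python
import subprocess

# Meanings from the exit-code table above.
EXIT_MEANING = {
    0: "all tests passed",
    1: "at least one test failed",
    2: "infrastructure error",
}

def should_retry(code: int) -> bool:
    # Only infrastructure errors (2) are worth an automatic retry;
    # a failing test (1) will fail again.
    return code == 2

def run_cloud(paths: list[str]) -> int:
    # Same invocation as the examples above, wrapped for CI use.
    return subprocess.run(["spark", "run", *paths, "--cloud"]).returncode
```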
Cancellation (Ctrl+C)
Pressing Ctrl+C sends a cancel request to the cloud. Workers stop, the run is marked as cancelled, and the CLI exits cleanly.
Connection recovery
If the SSE connection drops, the CLI reconnects automatically and receives the missed events. No results are lost.
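"No results are lost" is exactly what the standard SSE recovery mechanism provides: each event carries an id, the client remembers the last one it saw, and sends it back in a Last-Event-ID header on reconnect so the server can replay everything after it. A minimal sketch (the event payloads shown are invented; only the id/data framing is part of the SSE format):

```python
def parse_events(raw: str):
    """Split a text/event-stream body into {'id': ..., 'data': ...} dicts."""
    events = []
    for chunk in raw.strip().split("\n\n"):   # events are blank-line separated
        event = {}
        for line in chunk.splitlines():
            field, _, value = line.partition(":")
            event[field] = value.lstrip()
        events.append(event)
    return events

stream = "id: 1\ndata: auth/login passed\n\nid: 2\ndata: auth/register passed\n"
events = parse_events(stream)

# On reconnect, this value would be sent as the Last-Event-ID request
# header, telling the server which events to replay.
last_id = events[-1]["id"]
```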
Local-only flags
Local-only flags are ignored in cloud mode. The following have no effect with --cloud: --html, --junit, --artifacts, --regenerate-snapshots, --verbose, --debug, --timeout, --workers.