How It Works

Coming soon. This describes planned functionality. Join the waitlist to get early access.

Getting started with cloud mode is simple — add --cloud to your run command.

Run with --cloud

SPARK_CLOUD_URL=http://api.spark.dev spark run ./tests --cloud

That's it. Same command, same files — Spark handles the rest.

What happens behind the scenes

  1. Spark parses your *.spark files locally (same as local mode)
  2. Service template references are resolved and merged
  3. Test definitions are uploaded to Spark Cloud as JSON (kilobytes, not Docker images)
  4. Cloud assigns tests to workers and starts execution
  5. The CLI streams results in real time via SSE:
$ spark run ./tests --cloud

Uploading 24 tests...
Running on 8 workers  [3/24]

  ✓ auth/login ...................... 0.8s
  ✓ auth/register ................... 1.2s
  ● api/categories .................. running
  ○ api/users ....................... pending
  ○ api/orders ...................... pending
  ...

  21 passed  1 failed  2 skipped  (12.4s)

  ✗ api/categories
    Expected status code 200, got 500

  https://app.spark.dev/runs/abc123
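Steps 2 and 3 above can be sketched as a small merge-and-serialize pass. The field names (`template`, `timeout`, `retries`) are illustrative assumptions, not the documented Spark payload schema:

```python
import json

def resolve_template(test: dict, templates: dict) -> dict:
    """Merge a referenced service template into a test definition (step 2)."""
    merged = dict(templates.get(test.get("template", ""), {}))
    merged.update(test)           # test-level fields override template defaults
    merged.pop("template", None)  # the reference is resolved, so drop it
    return merged

# Hypothetical inputs for illustration only.
templates = {"http-service": {"timeout": 30, "retries": 2}}
tests = [{"name": "auth/login", "template": "http-service", "timeout": 10}]

# Step 3: the upload is this JSON text -- kilobytes, not a Docker image.
payload = json.dumps([resolve_template(t, templates) for t in tests])
```

The point of the sketch is the size argument: only resolved test definitions travel over the wire, never your local environment.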

Exit codes

Code   Meaning
0      All tests passed
1      At least one test failed
2      Infrastructure error (cloud unreachable, auth failure, timeout)
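Because code 1 means real test failures and code 2 means infrastructure trouble, a CI wrapper can treat them differently. A minimal sketch of that dispatch (the retry policy is an assumption, not Spark behavior):

```python
def classify(exit_code: int) -> str:
    """Map Spark's documented exit codes to a CI action."""
    if exit_code == 0:
        return "pass"          # all tests passed
    if exit_code == 1:
        return "test-failure"  # at least one test failed; retrying won't help
    if exit_code == 2:
        return "infra-error"   # cloud unreachable / auth / timeout; safe to retry
    return "unknown"
```

A pipeline might retry the run only when `classify` returns "infra-error".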

Cancellation (Ctrl+C)

Pressing Ctrl+C sends a cancel request to the cloud. Workers stop, the run is marked as cancelled, and the CLI exits cleanly.
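The clean-exit behavior can be sketched with a signal handler. The `send_cancel` stub and the 130 exit status (the 128+SIGINT convention) are assumptions; the actual cancel endpoint and exit code are not specified here:

```python
import signal
import sys

state = {"cancel_sent": False}

def send_cancel(run_id: str) -> None:
    # Hypothetical stand-in: the real CLI would send a cancel
    # request for this run to Spark Cloud here.
    state["cancel_sent"] = True

def on_sigint(signum, frame):
    send_cancel("abc123")  # run id as shown in the example output above
    sys.exit(130)          # 128 + SIGINT, the conventional status

signal.signal(signal.SIGINT, on_sigint)
```

The important ordering is cancel-then-exit: the cloud is told to stop workers before the local process terminates.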

Connection recovery

If the SSE connection drops, the CLI reconnects automatically and receives missed events. No results are lost.
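SSE supports this natively: each event carries an `id`, and a reconnecting client sends the last id it saw (`Last-Event-ID`) so the server can replay what was missed. A network-free sketch of that resume loop, assuming monotonically increasing event ids:

```python
def stream_with_resume(fetch):
    """fetch(last_id) yields (event_id, data) pairs and raises
    ConnectionError on disconnect; we reconnect from last_id."""
    last_id = None
    while True:
        try:
            for event_id, data in fetch(last_id):
                last_id = event_id
                yield event_id, data
            return  # stream ended normally
        except ConnectionError:
            continue  # reconnect, replaying events after last_id
```

With this shape, a mid-stream disconnect costs a reconnect round-trip but never a lost or duplicated result.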

Local-only flags are ignored in cloud mode. The following have no effect with --cloud: --html, --junit, --artifacts, --regenerate-snapshots, --verbose, --debug, --timeout, --workers.