Running Tests

Once you have a few .spark files, you'll want to run them selectively, speed them up with parallelism, and get reports for CI. Here's how.

Basic usage

# Run all tests in a directory (recursively)
spark run ./tests

# Run a single test file
spark run ./tests/hello.spark

# Run from current directory
spark run

Filtering

Don't run everything every time. Spark gives you two ways to narrow down which tests run: by tag (--tags) and by name pattern (--filter).

Tags use OR logic — any test matching at least one tag will run.

# Run all tests tagged "smoke"
spark run ./tests --tags smoke

# Run tests tagged "smoke" OR "api"
spark run ./tests --tags smoke,api

You can combine filters. Use --tags and --filter together to narrow down even further:

# Run tests tagged "smoke" whose names match "user*"
spark run ./tests --tags smoke --filter "user*"

Reports

The --html flag generates a visual dashboard with timing breakdowns, logs, and per-test assertion results.

spark run ./tests --html ./reports

Open reports/index.html in your browser.
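In CI, a common pattern is to generate the report on every run and publish it as a build artifact, while still failing the build when tests fail. A minimal sketch, assuming spark exits nonzero when any test fails (a common CLI convention, not confirmed by these docs):

```shell
#!/bin/sh
# Run the smoke suite and write the HTML report.
# Assumption: spark returns a nonzero exit code on test failure.
spark run ./tests --tags smoke --html ./reports
status=$?

# Publish ./reports as an artifact using your CI's upload step (omitted here),
# then propagate the test result so the build fails when tests fail.
exit "$status"
```

Capturing the exit status before the artifact upload means a failed upload step never masks (or is masked by) the test result.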

Parallelism

Spark runs tests in parallel by default, using your CPU count to determine the number of workers.

# Auto-detect (default)
spark run ./tests --workers 0

# Fixed worker count
spark run ./tests --workers 4

Each test gets its own isolated Docker network, so running in parallel is safe: tests never share containers or state.
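On a shared CI runner, the auto-detected worker count can oversubscribe the machine. One way to cap it is to take the smaller of the core count and a fixed ceiling; a sketch (nproc is GNU coreutils and Linux-specific, so adjust for your platform):

```shell
# Cap the worker pool at 4, or fewer if the machine has fewer cores.
cores=$(nproc)
workers=$(( cores < 4 ? cores : 4 ))
spark run ./tests --workers "$workers"
```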

Verbosity

# Verbose — show all events and timing details
spark run ./tests --verbose

# Debug — show Docker commands (implies --verbose)
spark run ./tests --debug
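When filing a bug report, it helps to capture the full debug stream to a file while still watching it live. A sketch, assuming spark writes its diagnostics to stdout/stderr like most CLIs (not stated in these docs):

```shell
# Capture stdout and stderr to a log file while still printing to the terminal.
spark run ./tests --debug 2>&1 | tee spark-debug.log
```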