Running Tests
Once you have a few .spark files, you'll want to run them selectively, speed them up with parallelism, and get reports for CI. Here's how.
Basic usage
# Run all tests in a directory (recursively)
spark run ./tests
# Run a single test file
spark run ./tests/hello.spark
# Run from current directory
spark run
Filtering
Don't run everything every time. Spark gives you three ways to narrow down which tests run: tags, wildcard name filters, and substring name filters.
Tags use OR logic — any test matching at least one tag will run.
# Run all tests tagged "smoke"
spark run ./tests --tags smoke
# Run tests tagged "smoke" OR "api"
spark run ./tests --tags smoke,api
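The OR semantics above can be sketched in a few lines of Python. This is an illustrative model of the rule, not Spark's internals; the function name and data shapes are invented for the example.

```python
# Sketch of the tag-selection rule described above: a test runs if it
# carries at least one of the requested tags (OR logic).
def matches_tags(test_tags, requested):
    """True when the test's tags and the requested tags overlap."""
    return bool(set(test_tags) & set(requested))

# A test tagged only "smoke" is selected by --tags smoke,api:
print(matches_tags({"smoke"}, {"smoke", "api"}))  # True
# A test tagged only "db" is not:
print(matches_tags({"db"}, {"smoke", "api"}))  # False
```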
Filter by test name with --filter: patterns containing * are wildcard-matched; plain strings use case-insensitive substring matching.
# Wildcard match
spark run ./tests --filter "login*"
spark run ./tests --filter "*authentication*"
# Substring match (no wildcard)
spark run ./tests --filter "login"
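The two matching modes can be modeled with Python's standard fnmatch module. This is a sketch of the semantics described above, assuming wildcard patterns follow shell-style globbing; the function name is hypothetical, not part of Spark.

```python
# Sketch of --filter semantics: "*" triggers wildcard matching,
# otherwise fall back to case-insensitive substring matching.
from fnmatch import fnmatch

def matches_filter(test_name, pattern):
    if "*" in pattern:
        return fnmatch(test_name, pattern)  # shell-style wildcard match
    return pattern.lower() in test_name.lower()  # substring match

print(matches_filter("login_with_token", "login*"))  # True (wildcard)
print(matches_filter("User Login Flow", "login"))    # True (substring)
```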
You can combine filters. Use --tags and --filter together to narrow down even further:
spark run ./tests --tags smoke --filter "user*"
Reports
The --html flag generates a visual dashboard with timing breakdowns, logs, and per-test assertion results.
spark run ./tests --html ./reports
Open reports/index.html in your browser.
The --junit flag writes results in the standard JUnit XML format consumed by CI systems (Jenkins, GitHub Actions, GitLab CI).
spark run ./tests --junit results.xml
spark run ./tests --html ./reports --junit results.xml
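JUnit XML is a widely adopted convention rather than a strict standard, and the exact attributes Spark emits may differ, but a report generally has this shape (the test names below are invented for illustration):

```xml
<testsuite name="tests" tests="2" failures="1" time="3.214">
  <testcase name="login succeeds" time="1.102"/>
  <testcase name="login rejects bad password" time="2.112">
    <failure message="expected status 401, got 200"/>
  </testcase>
</testsuite>
```

CI systems read the testcase and failure elements to render per-test pass/fail summaries.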
Parallelism
Spark runs tests in parallel by default, using your CPU count to determine the number of workers.
# Auto-detect (default)
spark run ./tests --workers 0
# Fixed worker count
spark run ./tests --workers 4
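The resolution rule for --workers can be sketched as follows. This is an assumption-laden model of the behavior described above (0 means auto-detect), not Spark's actual code; the function name is hypothetical.

```python
# Sketch of --workers resolution: 0 means "use the CPU count",
# any other value is taken literally.
import os

def resolve_workers(requested: int) -> int:
    if requested == 0:
        # os.cpu_count() can return None on exotic platforms; fall back to 1.
        return os.cpu_count() or 1
    return requested

print(resolve_workers(4))  # 4
```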
Each test gets its own isolated Docker network, so running in parallel is safe — tests never share containers or state.
Verbosity
# Verbose — show all events and timing details
spark run ./tests --verbose
# Debug — show Docker commands (implies --verbose)
spark run ./tests --debug