spark.yaml

As your test suite grows, you'll notice the same service definitions repeated across files — the same Postgres image, the same healthcheck, the same environment variables. The spark.yaml configuration file lets you define these once and reference them everywhere.

File format

# spark.yaml
services:
  postgres:
    image: postgres:15
    healthcheck:
      test: pg_isready
      retries: 10
  redis:
    image: redis:7

Services defined in the config file can be referenced by any test using ref::

# my-test.spark
name: API Tests

tests:
  - name: Create user
    services:
      - ref: postgres
        name: db
      - ref: redis
        name: cache
    execution:
      # ...

Discovery

Spark discovers the config file automatically: drop a spark.yaml in your project root and it is picked up with no flags needed. When resolving the config, Spark checks these locations in order:

  1. Path specified by --configuration flag
  2. spark.yaml in the current working directory
  3. spark.yml in the current working directory

Service fields

Each service in the config file supports:

Field        Required  Description
image        yes       Docker image
command      no        Default command (string or array)
workingDir   no        Working directory
environment  no        Map of environment variables
healthcheck  no        Health check configuration
artifacts    no        Files or directories to collect from the container after test execution

The service key (e.g., postgres) becomes the name used with ref: in tests.
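A service definition that uses every field might look like the sketch below. The image, command, paths, and values are illustrative, and the list form of artifacts is an assumption based on the field's description:

# spark.yaml
services:
  app:
    image: myapp:latest             # illustrative image name
    command: ["./server", "--port", "8080"]
    workingDir: /srv/app
    environment:
      LOG_LEVEL: debug
    healthcheck:
      test: curl -f http://localhost:8080/health
      retries: 5
    artifacts:
      - /srv/app/logs               # collected after test execution

A test can then pull this in with ref: app, just as with the postgres example above.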

Environment variables

Config files support the same $SPARK_* environment variable expansion as test files:

services:
  app:
    image: $SPARK_APP_IMAGE
    environment:
      DATABASE_URL: $SPARK_DATABASE_URL

Only SPARK_-prefixed variables are expanded, both in config files and in test files. This prevents sensitive environment variables from accidentally leaking into your test definitions.
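
For example, in the sketch below only the prefixed reference is substituted; the $HOME reference is not SPARK_-prefixed, so it is left as a literal string (the DATA_DIR variable here is purely illustrative):

services:
  app:
    image: $SPARK_APP_IMAGE     # expanded from the environment
    environment:
      DATA_DIR: $HOME/data      # not SPARK_-prefixed, passed through unexpanded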