# Spark vs Docker Compose + Bash
Most teams start integration testing with Docker Compose and bash scripts. It works — until it doesn't. Spark replaces the orchestration boilerplate while keeping the same Docker foundation.
## Quick comparison
| | Spark | Docker Compose + Bash |
|---|---|---|
| Service definitions | In the test file (YAML) | Separate docker-compose.yml |
| Health check waiting | Automatic (built-in) | Manual (polling loops in bash) |
| Test isolation | Automatic per test | Manual (unique project names) |
| Assertions | Built-in (statusCode, jsonPath, etc.) | DIY (curl + jq, or external tool) |
| Parallel execution | Built-in | Manual scripting |
| Reports | HTML + JUnit XML | DIY |
| Cleanup | Automatic | Manual (easy to forget) |
| Artifact collection | Automatic from containers | Manual docker cp |
| Learning curve | Learn Spark YAML format | Already know it (but fragile) |
## A typical Compose + Bash setup
```bash
#!/bin/bash
set -e

# Start services
docker compose up -d

# Wait for health (the fragile part)
echo "Waiting for postgres..."
for i in $(seq 1 30); do
  docker compose exec -T db pg_isready && break
  sleep 1
done

echo "Waiting for API..."
for i in $(seq 1 30); do
  curl -sf http://localhost:8080/health && break
  sleep 1
done

# Run tests
STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
  -X POST http://localhost:8080/api/login \
  -H "Content-Type: application/json" \
  -d '{"email": "test@test.com", "password": "secret"}')

if [ "$STATUS" != "200" ]; then
  echo "FAIL: Expected 200, got $STATUS"
  docker compose logs
  docker compose down
  exit 1
fi

echo "PASS"

# Cleanup
docker compose down -v
```
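Two refinements this kind of script usually grows over time are an EXIT trap, so cleanup survives failures and interrupts, and a wait loop that fails loudly instead of falling through after 30 silent attempts. A minimal sketch of both patterns, with `echo` and `true` standing in for the Docker and health-check commands so it runs anywhere:

```bash
#!/bin/bash
set -e

# Stand-in for "docker compose down -v". An EXIT trap fires on success,
# failure, and Ctrl-C alike, so cleanup no longer depends on every exit
# path remembering to call it.
cleanup() {
  echo "cleanup ran"
}
trap cleanup EXIT

# A bounded wait that actually fails when the service never becomes
# healthy; the loops above silently fall through after 30 tries.
wait_for() {
  local tries=$1
  shift
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0
    sleep 1
  done
  echo "FAIL: '$*' never became healthy" >&2
  return 1
}

# Stand-in health check; in the real script this would be e.g.
#   wait_for 30 curl -sf http://localhost:8080/health
wait_for 3 true && echo "service is up"
```

Even with these fixes, each project ends up re-deriving the same boilerplate — which is the gap Spark fills.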
## The same test in Spark
```yaml
name: Login API
tests:
  - name: Login returns 200
    services:
      - name: db
        image: postgres:15
        healthcheck: "pg_isready"
      - name: api
        image: myapp:latest
        environment:
          DATABASE_URL: postgres://db:5432/test
    execution:
      target: http://api:8080
    request:
      method: POST
      url: /api/login
      body: '{"email": "test@test.com", "password": "secret"}'
    assertions:
      - statusCode:
          equals: 200
```
```bash
spark run ./tests
```
Same result. No polling loops, no manual cleanup, no fragile bash.
## When to stick with Docker Compose + Bash
- You have an existing setup that works and your test suite is small
- You need maximum flexibility — custom orchestration logic that Spark doesn't support
- You don't want to adopt any new tools
- Your tests use tools Spark doesn't support (browser testing, custom protocols, etc.)
## When to switch to Spark
- Your bash scripts are getting fragile — race conditions, forgotten cleanup, inconsistent health checks
- You're adding more tests and need parallel execution
- You want standardized reports (HTML, JUnit XML) without building them yourself
- New team members struggle to understand the test scripts
- You're tired of reinventing the same orchestration boilerplate
## What Spark handles for you
Everything you would otherwise write in bash:
- Health check waiting — retries with configurable limits, no sleep loops
- Network isolation — each test gets its own Docker network automatically
- Parallel execution — tests run concurrently with configurable worker count
- Cleanup — containers and networks are removed after every test, even on failure
- Artifact collection — logs and files extracted from containers automatically
- Reporting — HTML dashboards, JUnit XML for CI, structured CLI output
- Exit codes — 0 for pass, 1 for fail, 2 for infrastructure errors
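The exit-code contract slots directly into CI: a distinct code for infrastructure errors lets a pipeline retry flaky environments instead of reporting a red test run. A sketch of routing the three codes, with a hypothetical `run_tests` function standing in for `spark run ./tests` so it runs without Spark installed:

```bash
#!/bin/bash
# 0 = tests passed, 1 = tests failed, 2 = infrastructure error.
run_tests() {
  return 2  # stand-in for: spark run ./tests
}

run_tests
case $? in
  0) echo "tests passed" ;;
  1) echo "tests failed" ;;
  2) echo "infrastructure error; retrying may help" ;;
esac
```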