Spark vs Testkube
Testkube and Spark both run tests against real infrastructure. The difference is scope — Testkube is a Kubernetes-native test orchestrator that wraps existing tools. Spark is a self-contained test framework with built-in execution, assertions, and reporting.
Quick comparison
| | Spark | Testkube |
|---|---|---|
| Approach | Test framework (execution + assertions) | Test orchestrator (wraps existing tools) |
| Infrastructure | Docker only | Kubernetes cluster required |
| Test format | YAML with built-in HTTP/CLI execution | Kubernetes CRDs wrapping Cypress, k6, Postman, etc. |
| Built-in assertions | Yes (statusCode, jsonPath, snapshot) | No — delegates to wrapped framework |
| Service management | Docker containers on shared network | Kubernetes pods with readiness probes |
| Setup complexity | Install binary, run command | Install CRDs, agent, configure cluster |
| Local development | Just Docker | Requires minikube/KinD/k3s |
| Parallel execution | Built-in | Kubernetes pod scheduling |
| Distributed execution | Workers via HTTP API | Multi-cluster via control plane |
| Pricing | Free CLI, paid cloud | Free agent, $400-800+/mo for dashboard |
| Written in | Go | Go |
When to use Testkube
- You're already on Kubernetes and want tests running inside the cluster
- You need to test internal cluster services without exposing them externally
- You have existing test suites (Cypress, k6, Playwright) and want centralized orchestration
- You need massive parallelism — hundreds of concurrent test pods via Kubernetes scaling
- You have a platform engineering team to manage the Kubernetes overhead
When to use Spark
- You want one YAML file that defines services, requests, and assertions — no external framework needed
- You need Docker, not Kubernetes — simpler infrastructure
- You want non-developers (QA, DevOps) writing tests without learning a programming language
- You need fast local testing — install binary, write YAML, run tests
- You don't want to operate a Kubernetes cluster just to run integration tests
The key difference
Testkube orchestrates existing tests — you still need Cypress, k6, Postman, or another framework to write the actual test logic:
```yaml
# Testkube: wraps an existing tool (Cypress)
apiVersion: testworkflows.testkube.io/v1
kind: TestWorkflow
metadata:
  name: login-test
spec:
  content:
    git:
      uri: "https://github.com/myorg/tests"
      paths: ["cypress/e2e"]
  container:
    image: "cypress/included:13.6.4"
    workingDir: "/data/repo"
  steps:
    - shell: "npm install"
    - shell: "cypress run --spec cypress/e2e/login.cy.js"
```
Spark is the test framework — services, execution, and assertions in one file:
```yaml
# Spark: self-contained test definition
name: Login API
tests:
  - name: Login returns token
    services:
      - name: db
        image: postgres:15
        healthcheck: "pg_isready"
      - name: api
        image: myapp:latest
    execution:
      target: http://api:8080
      request:
        method: POST
        url: /api/login
        body: '{"email": "test@test.com", "password": "secret"}'
    assertions:
      - statusCode:
          equals: 200
      - jsonPath:
          path: $.token
          expected: exists
```
With Testkube you need a separate Cypress project, npm dependencies, and Kubernetes. With Spark you need Docker and one YAML file.
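To make the two Spark assertions concrete, here is what they reduce to conceptually. This is an illustrative Python sketch with made-up response values, not Spark's actual evaluator: the statusCode assertion is an equality check, and a jsonPath check like $.token exists boils down to verifying the key is present in the parsed response body.

```python
import json

# Hypothetical response from the login request above (illustrative values only)
status_code = 200
response_body = '{"token": "abc123", "expires_in": 3600}'

# statusCode / equals: 200 — a plain equality check on the HTTP status
assert status_code == 200

# jsonPath / path: $.token / expected: exists — for a top-level key,
# this reduces to parsing the body and checking the key is present
data = json.loads(response_body)
assert "token" in data and data["token"] is not None

print("all assertions passed")
```

The point of the comparison stands either way: with Spark these checks are declared in YAML next to the request, while with Testkube they live in whatever framework the workflow wraps.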
Where Testkube wins
- Framework flexibility — wrap any test tool (Cypress, Playwright, k6, JMeter, custom scripts). Spark only supports HTTP and CLI execution.
- Kubernetes-native security — tests run inside the cluster, no need to expose internal services. Secrets managed via Kubernetes-native mechanisms.
- Massive scale — Kubernetes pod scheduling can parallelize hundreds of tests across nodes. Better suited for very large test suites.
- Existing test reuse — if you already have Cypress or k6 tests, Testkube orchestrates them without rewriting. Spark requires converting to its YAML format.
- Multi-cluster management — centralized control plane across multiple Kubernetes clusters (paid feature).
Where Spark wins
- No Kubernetes required — Docker Engine is all you need. No cluster setup, no CRDs, no kubectl.
- Self-contained tests — one YAML file with services, requests, and assertions. No external test framework, no npm install, no compilation.
- Simpler mental model — "start containers, make HTTP request, check response" vs. Kubernetes Jobs, CRDs, pod scheduling, readiness probes.
- Faster feedback loop — install binary, write YAML, run spark run. No cluster provisioning or pod scheduling delays.
- Built-in assertions — status codes, JSON paths, snapshots — without bringing a separate assertion library.
- Accessible to non-developers — QA and DevOps can write YAML tests without knowing JavaScript, Python, or Kubernetes internals.
Can they coexist?
Yes, at different layers. Testkube excels at orchestrating complex E2E suites (browser tests, load tests) inside Kubernetes. Spark is ideal for API integration tests that need real services but not a full cluster. Some teams use Spark for development and CI, and Testkube for staging and production cluster validation.
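That split might look like the following CI sketch. It is hypothetical: the job layout, the spark run invocation, and the login-test workflow name are assumptions for illustration, not documented commands.

```yaml
# Hypothetical GitHub Actions pipeline combining both tools.
# Spark runs Docker-based API integration tests on every push;
# Testkube validates the staging cluster only on main.
jobs:
  integration:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: spark run tests/   # assumed CLI form; needs only Docker
  staging-e2e:
    needs: integration
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - run: testkube run testworkflow login-test   # assumed; runs inside the staging cluster
```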