4 Ways to Test Your APIs End to End
- If Everything Was Perfect
- The Current State of API Testing Tools
- Newman (and Postman)
- Honorable mention: Record/Replay Testing
Here’s a series of steps for such a test flow for your common note-taking app:
- Log in
- Save a note
- Get a note
- Delete a note
A “Get” action counts on data existing. A step such as “Save a note” might expect you to be authenticated, which means first performing an authentication step that supplies, for example, a JWT token.
And, trivially, to log in — someone should have put a user with your credentials in the database before all this was able to run.
You can test this flow white-box — your integration tests, with access to the system source, able to mock things out and build helpers that promote testing — or black-box — no access to your source, just an opaque service address.
If Everything Was Perfect
In theory, you should not be doing black-box end-to-end multi request flows. That is, in theory, you should test each part in isolation and then test integration generically (mocking some parts of your system).
However, when you don’t control the test subject, you have to exercise black-box testing, which means — multi-request flows.
Multi request flows pose a challenge — you’re no longer testing a section of your product or API in isolation of its dependencies.
A Word About Exception Cases
In some cases, we can’t perform multi request flows without a price. For example, if you’re testing a Fintech or ECommerce product, and you want to run a flow that at some point charges a credit card, and you’re black-box testing, you have to have a credit card ready.
Run enough of these tests with a real card and real money (even charging minimal amounts), and sooner or later someone will raise a money laundering question about your test traffic — which they will — at which point it becomes borderline impossible to keep testing this way.
The Current State of API Testing Tools
The discussion around end-to-end testing tools for APIs mirrors the one around Selenium tests for Web apps. Both share the challenges of black-box testing: flaky tests, test-suite prep work, setup and teardown, side effects, inconsistencies and, lastly, tooling.
To understand the current state of tooling, the first step is to come up with a good evaluation framework:
- Self-contained — can we run this tool standalone, regardless of our programming language or the test subject’s tech stack? The idea is to use one test definition with one kind of tool throughout the product lifecycle, from development to production.
- Integration tests — can we use it to build plain integration tests? Ideally, to promote reuse, we should use the same tool for white-box integration tests while doing high-level black-box multi request flows.
- Test definition as configuration — can someone who is not skilled in the test subject’s tech stack add new tests? Is it easy? Is it quick?
- Variable passing — multi-request flows are about sharing context. This means the test framework must support passing results from one step to the next.
- Set up and tear down — does it support database set up before running the tests? Tear down after? Is it generic, in the sense that we can run anything before a test?
- Contract validation — does it validate the contract against a schema?
- Content validation — does it validate content?

Supertest
❎ Self-contained — No. It needs a testing framework to drive it.
✅ Integration tests — Yes.
✳️ Test definition as configuration — No. But if you really squint, you can look at its fluent interface as a DSL for a configuration language.
✅ Variable passing — Yes. Your tests are plain code, so sharing context between requests is trivial.
✅ Set up and tear down — Yes. Part of your testing framework.
❎ Contract validation — No.
✳️ Content validation — Yes. But you have to explicitly verify it in your tests.
Notable mention: hippie, which has a similar approach.
Dredd
✅ Self-contained — Yes.
✅ Integration tests — Yes.
✅ Test definition as configuration — Yes. Can take both Swagger and API Blueprint format, which are widely popular.
❎️ Variable passing — Partial, but listed as “No” because it tends to become a mess very quickly: coordinating results via a hooks file feels nontrivial and fragile.
✅ Set up and tear down — Yes. Via the hooks feature.
✅ Contract validation — Yes.
✳️ Content validation — Partial. Depends on whether you use Swagger or API Blueprint and their levels of support for “contract examples”, which is not perfect.
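To illustrate what “test definition as configuration” looks like here, below is a hypothetical minimal OpenAPI description (all paths and fields invented) that a tool like Dredd can validate a live service against:

```yaml
# Hypothetical minimal OpenAPI description; paths and fields are invented.
openapi: "3.0.0"
info:
  title: Notes API
  version: "1.0"
paths:
  /notes/1:
    get:
      responses:
        "200":
          description: A single note
          content:
            application/json:
              # the tool calls GET /notes/1 on the live service and
              # checks the response against this example
              example:
                id: 1
                text: hello
```

A run then boils down to pointing the tool at the description and a server address, e.g. `dredd api.yml http://localhost:3000`, optionally with `--hookfiles=hooks.js` for set up and tear down.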
Newman (and Postman)
Newman is “the command-line companion for Postman”. This means that if you use Postman, as an individual or a team, you can reuse your Postman collections as API integration and end-to-end flow tests.
It can take its own proprietary format or Swagger (now OpenAPI) descriptions of your API and validate (that is, test) your live services or a service under test, using scripts for the final validation. This means you maintain both a way to describe API calls and a way to validate expectations.
✅ Self-contained — Yes.
✅ Integration tests — Yes.
✳️ Test definition as configuration — Partial. You have to maintain the validation part by writing scripts, which is kind of awkward — why not flip this around and write tests in code?
✅ Variable passing — Yes. But still a bit of a cranky solution, because you have to maintain the code that does this in the Postman UI.
✅ Set up and tear down — Yes. When embedding Newman in your own code.
❎ Contract validation — No.
✅️ Content validation — Yes. Again, provided you do that in your test scripts.
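As an illustration of the scripted-validation point, here is a stripped-down, hypothetical Postman collection fragment (names and URLs invented) with a test script that validates the response and passes a variable to later requests:

```json
{
  "info": {
    "name": "notes-flow",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "Get a note",
      "request": { "method": "GET", "url": "{{base_url}}/notes/1" },
      "event": [
        {
          "listen": "test",
          "script": {
            "exec": [
              "pm.test('status is 200', () => pm.response.to.have.status(200));",
              "pm.environment.set('note_id', pm.response.json().id);"
            ]
          }
        }
      ]
    }
  ]
}
```

Running it is then a matter of `newman run notes-flow.json`; the scripts live inside the collection, which is what makes them awkward to maintain outside the Postman UI.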
Pact
Pact is a different kind of tool. It is a popular implementation of Consumer-Driven Contracts, which makes it an API testing tool… kind of.
It feels like Pact is more interested in providing tooling for verifying the consumer of the API than the API itself. However, there is a way to test both sides — see that the consumer behaves against a Pact, and that the provider does as well — which makes it an API testing tool, kind of.
❎ Self-contained — No.
✳️ Integration tests — Kind of. It depends on how you define “integration”; either way, it’s not how we define it here.
✅ Variable passing — Yes.
✅️ Set up and tear down — Yes.
❎ Contract validation — No.
✅️ Content validation — Yes.
Honorable mention: Record/Replay Testing
Nicknamed “VCR testing” and made hugely popular by the Ruby VCR gem, this technique requires you to first record a set of API interactions, and then verify the consumer code (similar to Pact) against a recorded version of the API.
As time went on, the idea spread, and today we have such libraries for Python, Go, Node.js, and Java; some even read the same “cassette” recording format the original Ruby gem produced.
In this way, again, we’re not really testing integration the way we define it here. However, a long while ago I built RCV, which reverses VCR and tests the recordings against the provider, which makes it comparable to Pact’s feature set.

Conclusion
There is still room for a testing tool and framework that fires on all cylinders: one that has tests spec’d out as configuration, can run standalone, can cover development as well as pre-production and canary testing, and makes the black-box quirks — test flow context, set up and tear down — easy to handle.
As a general practice, if I had to choose from the list above, focusing on test flow format, I would always pick a tool that can read a Swagger/OpenAPI format; if a tool has its own format, it has to be worth it, and it has to be simple enough to build tooling on top of.