This post is part of a series on functional testing using the test pyramid.
Purposes
- Verifies that components or microservices within a service or sub-system talk to each other correctly. This is different from component or API testing. API testing covers the external interface and functionality of the sub-system, while integration testing covers communication between subsystems.
- Tests that components which share resources work together correctly. Example: several components are CPU-intensive. Each works fine on its own, but do they still work together on the minimum-sized CPU?
- Where necessary, tests the merging of data and workflows between two systems.
Who defines the test?
- Tests are defined by a senior developer or architect as part of designing the dependency mapping of the components.
- Non-functional tests, such as resource tests, are defined by QE.
Who codes the test automation?
- Automated by a development team. Some companies have a QE engineer in the team do this.
- Integration tests can be done by a separate system team if desired.
What to test
- Test positive and negative connections to each dependency. All microservices or components should test positive and negative communication with every other service they call. Example: your microservice calls a logging service. You should call that service with a valid call (and verify the log) and with an out-of-bounds value (and verify the correct error is returned from the service). If your service acts on different error cases (treating 400 and 403 returns differently, for example), then tests are needed for each negative case. A sketch of this appears after this list.
- Identify and test shared resource usage under both typical and load traffic.
- Where an integration involves a workflow, you’ll need to test the workflow following a finite state model. Test all the transitions, states, and first-order loops.
- Where the integration involves merging data, check variations of every field, including special and null values (see the second sketch after this list).
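Below is a minimal sketch of the positive and negative connection tests described above, written as pytest tests against a real (not mocked) dependency. The logging-service URL, payload shape, and status codes are assumptions for illustration, not the contract of any particular service:

```python
# Hedged sketch: positive and negative connection tests against a real dependency.
# LOG_SERVICE_URL, the payload fields, and the expected status codes are assumptions.
import os
import requests

LOG_SERVICE_URL = os.environ.get("LOG_SERVICE_URL", "http://localhost:8080/logs")

def test_valid_log_entry_is_accepted():
    # Positive case: a well-formed entry is accepted by the real service.
    resp = requests.post(LOG_SERVICE_URL, json={"level": "INFO", "message": "integration check"})
    assert resp.status_code == 201

def test_invalid_log_level_is_rejected():
    # Negative case: an out-of-bounds value returns the documented client error.
    # If your caller treats 400 and 403 differently, add a separate test for each.
    resp = requests.post(LOG_SERVICE_URL, json={"level": "NOT_A_LEVEL", "message": "integration check"})
    assert resp.status_code == 400
```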
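For the data-merge case, a parameterized test can push each field variation into one system and verify the merged record in the other. Again, the URLs, record shape, and field names here are hypothetical:

```python
# Hedged sketch: data-merge checks with typical, special, null, and boundary values.
# SOURCE_URL, MERGED_URL, and the record fields are assumptions for illustration.
import os
import pytest
import requests

SOURCE_URL = os.environ.get("SOURCE_URL", "http://localhost:8081/customers")
MERGED_URL = os.environ.get("MERGED_URL", "http://localhost:8082/customers")

FIELD_VARIATIONS = [
    {"display_name": "Ada Lovelace"},   # typical value
    {"display_name": ""},               # empty string
    {"display_name": None},             # null
    {"display_name": "O'Brien é"},      # special characters
    {"display_name": "x" * 255},        # boundary length
]

@pytest.mark.parametrize("fields", FIELD_VARIATIONS)
def test_merged_record_matches_source(fields):
    # Write the variation to the source system, then read the merged record back.
    record = {"id": "it-001", **fields}
    assert requests.put(f"{SOURCE_URL}/it-001", json=record).status_code in (200, 201)
    merged = requests.get(f"{MERGED_URL}/it-001").json()
    for key, value in fields.items():
        assert merged[key] == value
```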
Best practices
- The number of communication tests should be at least twice the number of dependencies, with about half of the tests being negative.
- Use domain-driven design (or any good design system) to create groups of components that work together, with outside contact limited to a few simple interfaces.
- Use dependency tracking to know which components depend on which other components and interfaces.
- Use sequence diagrams to track intended calls.
- When using a microservices architecture, define groups of components / microservices that work together as a unit. This unit is a “service” and services can often be tested, deployed and mocked as a single unit.
- Often, a large portion of the integration tests is a subset of the component tests with some mocks removed (see the sketch after this list).
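One way to get that reuse is to parametrize the dependency fixture so the same test runs once against a mock (as a component test) and once against the real instance (as an integration test). The gateway names, environment variables, and the payments module below are hypothetical; this is a sketch of the pattern, not a prescribed implementation:

```python
# Hedged sketch: reuse a component test as an integration test by removing the mock.
# OrderService, RealGateway, the payments module, and PAYMENT_GATEWAY_URL are assumptions.
import os
import pytest
from unittest.mock import MagicMock

class OrderService:
    """Component under test; charges orders through an injected gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        return self.gateway.charge(amount)

@pytest.fixture(params=["mocked", "real"])
def gateway(request):
    if request.param == "mocked":
        mock = MagicMock()
        mock.charge.return_value = {"status": "ok"}
        return mock
    if not os.environ.get("RUN_INTEGRATION"):
        pytest.skip("real dependency is only exercised in the integration stage")
    from payments import RealGateway  # hypothetical real client
    return RealGateway(url=os.environ["PAYMENT_GATEWAY_URL"])

def test_place_order_succeeds(gateway):
    # The same assertion runs as a component test (mock) and an integration test (real service).
    assert OrderService(gateway).place_order(10)["status"] == "ok"
```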
Measures of success
Implementation measures
- Key Performance Indicators (KPIs) for shared resources being tested
- Number of connections being tested
Success measures
- % of end-to-end or deployment failures: As with component / API tests, the integration tests should be run by teams before merging to master to reduce integration problems with other teams.
- Bug lifetime: How often are bugs found outside of a sprint?
- Missed bugs: Integration errors and resource contention issues found after integration testing.
Maturity levels
- None: Call dependencies, sequences, and shared resources are not defined. Tests are not automated.
- Initial: Some dependency testing and testing of shared resources exists.
- Definition: Call dependencies, sequences, and shared resources are defined.
- Integration: Major services or subsystems have internal interactions defined and automated. Shared mocks are built for each major service, allowing independent development and testing.
- Management and measurement: Integration tests are running as part of a regular continuous deployment stage in a pipeline where failures block the promotion of the build.
- Optimization: Failures in deployment or end-to-end tests are reviewed for root cause and lead to additional lower level tests.
Experiences
One of the companies produced “integration” tests for every component — which were clearly component tests. “Integration testing” has become such a generic term that companies often use it to mean component or API testing. We distinguish here because the investment size is different. You’ll want component / API tests to cover all your positive, boundary and negative cases. That can be a lot of tests, but because other components are mocked, they run quickly.
You only need integration tests to validate communication between components and resource contention. These tests must use real (not mocked) instances of the components being tested in an environment that is close to production. That means fewer tests are needed, but they take longer to run.
One company had two databases that had to be kept in sync, with a complicated workflow describing the state of each. In this case, more integration tests than usual were needed: both data merge testing and testing of all the paths using a finite state model.
Copyright © 2019, All Rights Reserved by Bill Hodghead, shared under creative commons license 4.0