How to improve your QA productivity

Help! My QA team can’t keep up with development!

We hear a lot of variations on this cry for help:

  • “QA is always falling behind.”
  • “QA doesn’t have enough time for the full test passes.”
  • “There isn’t enough time for automation or performance work.”

Any of this sound familiar?

There are many ways to improve your QA productivity. Let’s look at some of the best practices and anti-patterns.

It’s beyond the scope of this post to show every bad practice, but we’ll try to hit the most common ones we see and describe what to do about them.

5 ways to improve your QA productivity

Here are our top five methods to improve QA productivity. For each one, we talk about when to use it, best practices, and anti-patterns.

  1. Test automation
  2. Architecture changes to make testing easier
  3. Write better manual tests
  4. Put development and QA close together
  5. Make quality everyone’s job

1. Test automation

Automation is usually the first thing people think of to improve QA productivity. Yes, it can really help when done right, but it may not be the most important thing you can do.

The decision to automate comes down to a simple return on investment (ROI) calculation. Consider:

  • How much time do I spend running this test manually?
  • How much time will it take to automate?
  • How much time will it take to maintain, run and debug the automation once it’s built?

The time to create and maintain the automation had better be less than the time you spend testing manually, or you haven’t increased productivity. When you add automation to a regular build pipeline, you may plan on running the test 10x to 1000x more often, which makes the automation much easier to justify.
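
To make the math concrete, here’s a quick break-even sketch in Python (all the numbers are hypothetical, for illustration only):

    # Break-even sketch for an automation decision (all numbers hypothetical).
    manual_minutes_per_run = 15      # time to run the test by hand
    automation_hours = 8             # time to build the automated test
    maintenance_minutes_per_run = 1  # upkeep and triage cost per automated run

    # Runs needed before the automation pays for itself:
    net_savings_per_run = manual_minutes_per_run - maintenance_minutes_per_run
    break_even_runs = (automation_hours * 60) / net_savings_per_run
    print(f"Automation pays for itself after ~{break_even_runs:.0f} runs")  # ~34

If the test will only run a handful of times before the feature changes, the manual pass wins; in a nightly pipeline, the automation pays off within weeks.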

The following are good rules to live by when considering automation:

  • Automate the test if you are going to run it at least 6 more times. Your cutoff may vary, but this rule has worked for us.
  • Automate tests where you would fix a bug the automation found. Start automating the tests that would find your worst bugs with the least automation time and work downward. At some point, you get to diminishing returns.
  • Don’t automate a UI-only feature like a picture or the color of a dialog. When developers change a UI, those changes should be tested manually; a human will find usability issues much better than a machine. In one product measured over the course of a year, over 50% of the UI bugs would not have been found by UI automation. Only a human would have caught them, and the reverse was not true.
  • Your first automated tests will be slow — that’s OK. Your first few tests may take a lot more time to automate because you need to build up common test libraries. The more tests you automate, the more shared code you have, and automation times should drop. However, move tests that are going to take several days to automate to the bottom of your list.

Best practices

  • Test one thing. Most of your automated tests should check one important behavior. You will have a few end-to-end tests that check a lot of things along a key business scenario, but most automated tests will be much simpler. A simple test is easier to maintain and debug.
  • Log test failures and automation failures differently. A test that fails because the product is broken is not the same as a test that couldn’t run properly. Track these failure types separately; you’ll save debugging and reporting time later (see the sketch after this list).
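
As a sketch of the second point (the server class is a hypothetical stand-in), pytest already separates these failure types if you keep setup in fixtures: an exception raised during a fixture is reported as an ERROR (automation failure), while a failed assertion in the test body is reported as a FAILURE (product bug):

    import pytest

    class FakeServer:
        # Hypothetical stand-in for the real test environment.
        def create_config_entry(self, key, value):
            return {"key": key, "value": value}

    @pytest.fixture
    def session():
        # Setup work lives in the fixture. If this raises, pytest reports
        # an ERROR (the automation is broken), not a test FAILURE.
        return FakeServer()

    def test_create_config_entry(session):
        # Product checks live in the test body. A failed assertion here is
        # reported as a FAILURE (the product is broken).
        entry = session.create_config_entry("retention_days", 30)
        assert entry["value"] == 30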

Anti-patterns

  • Excessive UI automation. UI automation ROI tends to be great for the first 10 tests. It’s OK up to 100 tests, and degrades severely as you head toward 1000 tests. This is the nature of the beast. See my blog: Less UI automation, more quality

2. Architecture changes to make testing easier

Want the biggest ROI for testing? Write easily maintainable code.

The companies with the lowest incoming bug rates are the ones that have the best architectures – irrespective of any other development practice.

They have a core set of code that almost never changes, but is configurable and extensible. Components are modular and easily isolated. When your components can be easily tested in isolation or mocked, you can have lots of simple tests that don’t depend on each other or a lot of underlying functionality to work.

As a developer, if you want to help your test team, make your code more maintainable.

Best practices

  • Use well-defined interfaces with a machine-readable specification like OpenAPI. Have a limited number of endpoints for any component and specify them fully. A clear contract describes exactly what to test.
  • Separate your presentation from your business functionality using a pattern like MVVM or CQRS. That way you can run tests against your business logic without the UI, and vice versa (a sketch follows this list).
  • Measure cyclomatic complexity and number of dependencies for your functions. Complexity describes the number of unit tests you are going to need. Dependencies describe the number of stubs or mocks that you’ll need to write those unit tests. Make your life easy and keep these small. We like complexity to be less than 10 and dependencies to be less than 7 for any function. If you are using Visual Studio, check out Tools/Analyze; if not, use a tool like SonarQube.
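
Here’s a minimal sketch of the presentation/business split (the class and numbers are hypothetical): the business logic lives in a view-model with no UI dependencies, so it can be unit tested without a browser or widget toolkit.

    # View-model holding pure business logic, no UI imports (hypothetical).
    class InvoiceViewModel:
        TAX_RATE = 0.08

        def __init__(self, line_items):
            self.line_items = line_items  # list of (description, price) pairs

        def total(self):
            subtotal = sum(price for _, price in self.line_items)
            return round(subtotal * (1 + self.TAX_RATE), 2)

    # The UI layer only formats what the view-model computes, so this
    # test needs no window, browser or widget toolkit:
    def test_total_includes_tax():
        vm = InvoiceViewModel([("widget", 10.00), ("gadget", 5.00)])
        assert vm.total() == 16.20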

Anti-patterns

  • The dreaded monolithic architecture. If you change one thing in your code, how many tests do you have to run? In a simple modular architecture, the answer is one test. If you have a large interconnected code base, it’s “all of them, every time”. That’s what’s slowing down your test team.
  • Large stored procedures (SPs). Back in the ’90s I worked on the Microsoft SQL team. Like other database vendors, we told developers to put business logic in stored procedures to improve performance. I’m sorry! Yes, if you need to use a stored procedure for performance, do it, but look everywhere else first. If you must, keep your stored procedures small and use ANSI SQL. SPs are hard to test, maintain, upgrade, or port to another DB product. It can be done, but it’s not easy.

3. Write better manual tests

What does the classic test look like in a tool like Zephyr or TFS? It’s a set of steps and checks. “Press this button, look for that window.”

These kinds of tests take a long time to write. They can also have a lot of duplication of steps, and if the navigation changes, you have lots of tests to change.

Want to go faster? Write tests with less text that still make clear what is being tested.

All those steps are not that important if they only describe the obvious — the stuff that a new user would figure out quickly. Instead, you want to describe the important things like:

  • Prerequisites of the test. What needs to be set up in advance? What are the setup steps at the beginning of the test trying to accomplish?
  • What is the test trying to do? Instead of steps, describe the intent. What are you trying to accomplish?
  • What is the test measuring? What is being measured in the software, e.g., “Check that the new config entry is created.” If it’s obvious how to do this, don’t bother spelling it out. When we automate, we could implement this in the UI, the API or as a call to the DB.
  • What is the customer impact if it fails? Imagine you see a test report. If it says 90% pass, what does that mean? Not very useful. What if it says, “Major breaking issues: users can’t change dates for their calendar events.” That’s useful.
  • The priority. The test should have a priority based on how bad the result could be and how likely it is to happen.
  • Logging the scenario. When you automate, make sure the test logs its priority and the scenario that is impacted so you know how to report the severity of a failure (see the sketch after this list).
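
Here’s a minimal sketch of that logging idea (the product function and names are hypothetical): when the test fails, the log states the priority and the broken scenario, so the report reads as customer impact rather than a bare red X.

    import logging

    # Hypothetical stand-in for the product action under test.
    def change_event_date(event_id, date):
        return date

    PRIORITY = "P1"
    SCENARIO = "users can change dates for their calendar events"

    def test_change_event_date():
        try:
            assert change_event_date(event_id=42, date="2019-07-04") == "2019-07-04"
        except AssertionError:
            # The failure report now says what broke for the customer.
            logging.error("[%s] Broken scenario: %s", PRIORITY, SCENARIO)
            raise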

Best practices

  • Use “functions.” Just like with good code, you want to avoid repeating steps in your tests. It’s helpful to identify repeated or complex setup and verification operations like “create event” or “create user.” Write the steps for these in a separate document and link to them in your test.
  • Use data-driven tests for complex logic so that one test can do the work of many (e.g., a test to add two numbers may have one function that performs the test action and data sets covering all the equivalence classes).
  • Use a chain of responsibility pattern as an oracle when there are many possible outputs from your inputs. This pattern is easy to use, code and maintain. A chain of responsibility pattern is just a series of IF/THEN statements where each statement returns a result and stops the chain. The first statements should be the worst cases, like “if (A or B) then error #1.” Statements get progressively less negative until you get to “else success.” This way you guarantee that you cover the negative cases, and new cases can be added easily without affecting existing ones (a sketch follows this list).
  • Use pairwise testing to reduce the number of tests when two or more inputs depend on each other. Also use it when you have to run the same tests under multiple configurations like “Spanish, Android, large data set”. By combining tests that cover pairs of inputs or configurations, you can reduce a test matrix of thousands of tests to tens. See my blog: How to use pairwise testing
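
Here’s a minimal sketch combining a data-driven test with a chain-of-responsibility oracle (the discount rules and names are hypothetical; in a real suite, calculate_discount would be your product code):

    import pytest

    # Hypothetical stand-in for the product code under test.
    def calculate_discount(age, is_member):
        if age < 0:
            raise ValueError("invalid age")
        if not is_member:
            return 0
        return 20 if age >= 65 else 10

    # Chain-of-responsibility oracle: worst cases first, each rule returns
    # a result and stops the chain, ending with "else success".
    def expected_discount(age, is_member):
        if age < 0:
            return ValueError      # worst case checked first
        if not is_member:
            return 0
        if age >= 65:
            return 20
        return 10                  # "else success"

    # One data-driven test covers all the equivalence classes.
    @pytest.mark.parametrize("age,is_member",
                             [(-1, True), (30, False), (70, True), (30, True)])
    def test_discount(age, is_member):
        expected = expected_discount(age, is_member)
        if expected is ValueError:
            with pytest.raises(ValueError):
                calculate_discount(age, is_member)
        else:
            assert calculate_discount(age, is_member) == expected

Adding a new rule means inserting one more IF at the right severity level; the existing rules and test cases don’t change.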

Anti-patterns

As we see from the best practices, the anti-patterns are:

  • Repeated steps. Writing the same thing over and over
  • Duplicate tests. Testing the same functionality in several tests
  • A very complex test oracle. The point of an oracle is to be simpler than the code

4. Put development and QA close together

Development and QA engineers should share the same sprint processes, share the same code branch and sit in the same office — ideally next to each other.

Best practices

  • Test code is in the same repo. Keep test code in the same repository as development code
  • Test code is real code. Hold test code to the same coding practices as development code
  • Testers aren’t separated. QA and development sit near each other or are in constant communication
  • Work completes together. QA and development work on the same stories in the same sprint and complete that work together
  • Keep sprints short. Two weeks or fewer — longer sprints tempt dev and QA to get out of sync
  • Provide API stubs before coding. If QA has an API definition, they can code automation in parallel with the developer writing the API (see the sketch after this list)
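
Here’s a minimal sketch of that parallel work (the endpoint shape is hypothetical): development shares a stub that honors the agreed contract, and QA writes automation against it before the real implementation lands.

    # Hypothetical stub matching the API definition the team agreed on;
    # development shares it before the real endpoint exists.
    def create_event_stub(title, start_date):
        return {"id": 1, "title": title, "start_date": start_date, "status": "created"}

    # QA writes this against the stub today and points it at the real API
    # once development finishes the implementation.
    def test_create_event_returns_created_status():
        response = create_event_stub("standup", "2019-06-01")
        assert response["status"] == "created"
        assert response["title"] == "standup"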

Separating the QA work from the dev work lowers development productivity and hurts quality. It may seem like dev can do more if they don’t wait for QA, but that’s not really the case when QA is involved early in defining the requirements.

Here’s a typical sprint schedule:

  • Pre-sprint: QA and development work with the product owner or business analyst to define acceptance criteria for stories. You should be having story refinement meetings at least weekly. These break down larger stories and add acceptance criteria. Sometimes stories in-flight need to be split further so that one piece can be shipped while another part requires additional work.
  • Sprint week 1: QA writes tests based on acceptance criteria, runs the part of the ongoing regression and performance test matrix that is not affected by the work in the sprint, writes library automation functions and does exploratory testing. Dev writes code and shows pieces to QA as they go. Dev writes unit tests as code is written.
  • Sprint week 2: QA does final tests on new code, and automates tests that make sense to repeat. Dev completes code, fixes issues, adds integration tests and works on ongoing refactoring work. Dev and QA can add logging / monitoring calls to the code to help track it in production.
  • End of sprint: Team demos working code. Demo is typically run by QA to prove it can be run by someone other than the developer who wrote it.

Unless development provides API stubs, QA must wait a little for development to have something to share. In the plan above, we pad the beginning of the sprint with ongoing QA work like test passes and perf testing. These must get done but can happen anytime.

Dev will typically be done before QA has finished automation, so we pad the end of the sprint with ongoing dev work like refactoring and integration tests.

Anti-patterns

  • Testing after dev has moved on. If you find a bug while the developer is still working on a feature, it is fresh in their mind and will likely be fixed correctly. If you find it two weeks later, you’ll interrupt their current work, and the dev may not remember what they were thinking and may fix it incorrectly.

Finding bugs late isn’t just expensive; it adds to your technical debt and slows productivity. We see a 20% to 40% drop in productivity for the whole team when QA lags dev by one sprint.

5. Make quality everyone’s job

Is QA responsible for quality? What about development? The product owner? Each is responsible for their part in the quality of the product:

  • The product manager or owner (PM / PO) typically owns the vision for the story — how is it going to help the customer? They should be checking that the features match their vision before the final demo.
  • The developer should be able to prove their code works the way they intended, such as with unit and integration tests.
  • QA owns measurement of quality: Are we done? To what extent is it working? And sometimes, is it working for the customer?

Measurement of quality is important because it gives the organization the data needed to make decisions, but it’s not the same as owning all the quality. QA falls behind when the job becomes bigger than what they can accomplish.

It’s best that dev and PM / PO take on quality tasks that they can do most efficiently: unit testing, integration tests and acceptance review.

Best practices

  • A good RACI (responsible, accountable, consulted, informed) model. Make clear which tasks each role is responsible for doing.
  • Multiple team members should contribute to your epics and stories. Everyone brings their own skills to the process and if they write the part they are responsible for, they will understand it. Co-authorship eliminates a lot of the story review process. This works especially well for epics where there might be more detail, but also makes sense for stories.
    • Typically, the product owner (PO) or product manager (PM) writes the “why” part of the story: who is the customer, what’s the problem from their point of view and what are the outcomes. They DON’T describe how the technology will work.
    • The developer writes the “how” part of the story. These are notes for the developers, so they only need enough detail for those working on it to understand how to build it.
    • QA writes the acceptance criteria. How will you know the outcomes are achieved, and how will you measure “done”?
    • The operations engineer (Ops or DevOps) writes how success is measured in production (if needed).
    • The user experience (UX) designer comes up with any UI and UX guidelines.

Anti-patterns

  • Hand-offs. Development writes code and then hands it off to QA without checking the requirements or existing tests. Instead, development should be proving their code works as they intended. QA can put it in a bigger context — does it integrate well? Does the experience make sense? What’s performance like? We also see hand-off problems when development is done in one location and QA in another. That rarely works well. QA must be in constant communication with the developer, and ideally, sit side by side.
  • The attitude that QA is responsible for quality. QA is a contributor to quality, not responsible for it — at least not all of it. All roles have quality responsibilities. At the end of an epic, everyone signs off that they did their part, and all of those parts add up to quality. Think of the role of QA as “measuring product and customer behavior to give the team the data necessary to make decisions.” If you think of yourself as just a manual tester or an automator (i.e., loaded at the back end of the cycle), you aren’t providing a lot of value to the company. If you are helping the company make decisions and assure quality as early as possible, that’s a different story. Those people get paid more.
  • Product owner (PO) writes all the stories alone in a room. Product owners are great at understanding the vision behind the story, not so much with all the details. POs can fall way behind on story creation if they have to research and write everything, especially if you need enough detail in the story for an offshore development team.

Summary

Quality assurance is everyone’s job in today’s faster Agile environment. While there are a number of engineering best practices teams can implement, we’ve had tremendous success with those described above. Implementing these practices will help organizations improve QA productivity while maintaining a healthy return on their overall investment in testing.

Copyright © 2019, All Rights Reserved by Bill Hodghead, shared under Creative Commons license 4.0