
Are we wasting too much time creating end-to-end tests?

  • Writer: Phil Hargreaves
  • Apr 14, 2022
  • 4 min read

Updated: Dec 2, 2025

End-to-end (E2E) testing quickly becomes an ambiguous term, especially when developing microservices. Using E2E tests to cover the integration between multiple microservices is a bit of an anti-pattern.

There is still considerable demand for creating large numbers of E2E regression tests. In a monolithic world, where the entire system is a single, autonomous application, this is without a doubt extremely important.

We have to accept that one size does not fit all circumstances.


Monolithic vs Microservices


I sometimes think that, as testers, we want to keep doing what we have always done without taking a step back to think about what is required, given that there are many approaches to developing software.


With microservices, the big question is: what value are E2E regression tests creating for us? A warm and fuzzy feeling that the system is working as it should? Confidence that the lower-level testing is adequate? Something to do? A way to make the QA work more visible? An excuse to remove the need for collaborative working with our developers? Or are they just creating huge overhead and maintenance costs for your development teams?


There is nothing better than breaking down silos and working together to produce excellent software.

Here are the types of testing that we think you should be doing for your micro-services:

Unit Tests: Unit tests cover meaningful units of functionality, but do so without any other dependencies. They run in isolation, with no need for additional services or databases, and they run fast and provide instant feedback.
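
As a minimal sketch of this level, here is a pytest example; the pricing function and its repository collaborator are made up purely for illustration, and the only dependency is stubbed out:

# test_pricing.py - hypothetical pytest unit test; no services or databases involved.
from unittest.mock import Mock

def apply_discount(price, customer, repository):
    """Toy function under test: loyal customers get 10% off."""
    if repository.is_loyal(customer):
        return round(price * 0.9, 2)
    return price

def test_loyal_customer_gets_ten_percent_off():
    repository = Mock()
    repository.is_loyal.return_value = True  # stub the collaborator, no real data store
    assert apply_discount(100.0, "alice", repository) == 90.0

def test_new_customer_pays_full_price():
    repository = Mock()
    repository.is_loyal.return_value = False
    assert apply_discount(100.0, "bob", repository) == 100.0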


Component Tests: In their most basic form, these are tests that verify whether a developed service meets its specification.

Many delivery teams choose to validate this via automated end-to-end tests (as mentioned earlier). However, this need not be the case. My preferred approach is for teams to sit down with the "spec" (usually a list of acceptance criteria) and figure out which level of the Test Pyramid should cover each of those criteria. If you can do the bulk of your testing with unit tests or component tests, then great!

I don't refer to the Test Pyramid much these days and prefer to use the term "Test early and often", which I believe is more relevant. However, utilising the Test Pyramid makes so much sense when trying to explain your approach.
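
To make the component-test idea concrete, here is a rough sketch, assuming a hypothetical FastAPI service whose downstream data store is replaced with an in-memory stub so the whole test runs in-process:

# test_orders_component.py - hypothetical component test using FastAPI's TestClient.
from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient

app = FastAPI()
FAKE_ORDERS = {"42": {"id": "42", "status": "SHIPPED"}}  # in-memory stub, not a real database

@app.get("/orders/{order_id}")
def get_order(order_id: str):
    order = FAKE_ORDERS.get(order_id)
    if order is None:
        raise HTTPException(status_code=404)
    return order

client = TestClient(app)

def test_known_order_is_returned():
    response = client.get("/orders/42")
    assert response.status_code == 200
    assert response.json()["status"] == "SHIPPED"

def test_unknown_order_returns_404():
    assert client.get("/orders/99").status_code == 404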


Contract Tests: Discussed quite a lot, but in my opinion, rarely implemented! When building microservices, integration points between services are a breeding ground for bugs. Consumer-driven contract testing is a technique in which the consumer defines the contract, and verifications are performed against it throughout the build/test lifecycle. Because each integration point is verified independently, contract tests help teams keep releasing their microservices autonomously. There is an excellent article by Martin Fowler on contract testing.
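
In practice you would usually reach for tooling such as Pact, but the idea can be sketched by hand: the consumer publishes the contract it relies on, and the provider verifies its real responses against it in its own pipeline. Everything below, including the endpoint and fields, is hypothetical:

# contract.py - hand-rolled illustration of a consumer-driven contract (tools like Pact do this properly).
# The consumer declares the fields it depends on; the provider's build verifies it still honours them.
CONSUMER_CONTRACT = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response_must_include": {"id": str, "status": str},
}

def verify_contract(provider_response: dict) -> None:
    """Run in the provider's pipeline against a real response from the endpoint above."""
    for field, expected_type in CONSUMER_CONTRACT["response_must_include"].items():
        assert field in provider_response, f"missing field required by consumer: {field}"
        assert isinstance(provider_response[field], expected_type), f"wrong type for {field}"

def test_provider_still_satisfies_consumer():
    # In a real build this response would come from the provider service itself.
    verify_contract({"id": "42", "status": "SHIPPED", "extra_field": "ignored by the consumer"})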


End-to-End Tests: As mentioned right at the start, E2E testing can quickly become an ambiguous term, especially when developing microservices. Ideally, while developing microservices, we want to be as independent as possible, confident building, testing, and deploying individual components. We prefer to test the interaction between microservices through contract tests rather than E2E tests. A high level of E2E testing can become an expensive overhead, and the tests tend to be flaky. A limited number of tests that verify things hang together is still helpful post-deployment, but I have moved away from the norm and won't create unnecessary ones. That said, as I move around organisations, it's essential to listen to your customers and discuss what they need.
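
Where a handful of post-deployment checks do earn their keep, they can stay very thin. Here is a hypothetical sketch of a single journey that simply confirms two deployed services hang together; the service URLs and endpoints are made up:

# test_journey.py - hypothetical post-deployment journey check across two deployed services.
import os
import requests

ORDERS_URL = os.environ.get("ORDERS_URL", "https://orders.example.internal")
BILLING_URL = os.environ.get("BILLING_URL", "https://billing.example.internal")

def test_order_appears_on_invoice():
    # One thin journey, not a broad regression suite.
    order = requests.post(f"{ORDERS_URL}/orders", json={"sku": "ABC-1", "qty": 1}, timeout=10)
    assert order.status_code == 201
    order_id = order.json()["id"]

    invoice = requests.get(f"{BILLING_URL}/invoices", params={"order_id": order_id}, timeout=10)
    assert invoice.status_code == 200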


Smoke Tests: A subset of automated tests that can prove an environment is stable enough to proceed with further testing or decide if the software is stable enough to be released into production. When releasing software into higher environments, such as an externally facing environment for testing or production, we are solely proving that we can deploy the software successfully; this is not another functional testing phase. I have written another post on smoke testing if this is of interest.
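
As an illustration, a minimal smoke check run straight after deployment might look like this; the service names and health endpoints are assumptions:

# smoke_test.py - hypothetical post-deployment smoke check: is the environment up, nothing more.
import requests

SERVICES = {
    "orders": "https://orders.example.internal/health",
    "billing": "https://billing.example.internal/health",
}

def test_all_services_report_healthy():
    for name, health_url in SERVICES.items():
        response = requests.get(health_url, timeout=5)
        assert response.status_code == 200, f"{name} failed its health check"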



Performance Testing: There are many variants of performance testing, such as Spike, Soak, and Stress testing, and each has a very different focus. The only kind of performance testing that I actively encourage is Load Testing. Each team must test each release of their service in isolation to ensure it can withstand the predicted load. Conducting regular, short load tests on your microservices is sufficient to identify performance-related bugs, such as memory leaks. During a load test, we also need to observe how the microservice behaves, so that we can assess how many instances are necessary to support the service we are testing.
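
As a rough sketch of such a short, regular load test, here is a Locust scenario; the host, endpoints, and load profile are all assumptions:

# loadtest.py - hypothetical Locust scenario.
# Run with something like: locust -f loadtest.py --host https://orders.example.internal
from locust import HttpUser, task, between

class OrdersUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user pauses 1-3 seconds between requests

    @task
    def list_orders(self):
        self.client.get("/orders")

    @task
    def view_order(self):
        self.client.get("/orders/42", name="/orders/{id}")  # group stats under one label

While it runs, watch the service's memory and CPU so you can spot leaks and judge how many instances the predicted load actually needs.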


Security Testing: An essential element of your testing is running security tests, right?

Automated security testing helps identify generic vulnerabilities unrelated to your code's business domain.

Manual exploratory testing is best suited to complex or domain-specific test scenarios. Examples of this include appropriate use of authentication and authorisation, and exploitation of business logic, all of which require an understanding of the system's functionality. Some generic (or non-domain-specific) vulnerabilities are also best tested manually when sufficiently complex to exploit. We could also consider implementing some of these tests in our component tests.
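
Picking up that last point, authorisation rules drop neatly into the component-test level; here is a hypothetical sketch (the endpoint, tokens, and roles are all made up):

# test_orders_security.py - hypothetical authentication/authorisation checks at the component-test level.
from typing import Optional

from fastapi import FastAPI, Header, HTTPException
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/orders/{order_id}")
def get_order(order_id: str, authorization: Optional[str] = Header(default=None)):
    if authorization is None:
        raise HTTPException(status_code=401)          # no credentials supplied
    if authorization != "Bearer orders-admin-token":  # made-up token for illustration
        raise HTTPException(status_code=403)          # authenticated but not allowed
    return {"id": order_id}

client = TestClient(app)

def test_missing_token_is_rejected():
    assert client.get("/orders/1").status_code == 401

def test_wrong_role_is_forbidden():
    response = client.get("/orders/1", headers={"Authorization": "Bearer read-only-token"})
    assert response.status_code == 403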


Accessibility Testing: This is performed to ensure that the application is usable by people with disabilities, such as hearing impairments or colour blindness, as well as by older users and other disadvantaged groups. It is a subset of Usability Testing.

Accessibility must be considered when building software. It is essential that accessibility testing is not left until the end of your cycle, nor should it be limited to your Usability Testing sessions.

My current organisation is obliged to build software to the WCAG AA standard: https://www.w3.org/TR/WCAG/
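
One way to keep accessibility in the pipeline rather than at the end is an automated scan. Below is a sketch using Playwright to inject the axe-core library into a page; the URL and CDN path are assumptions, and automated scans only catch a subset of WCAG AA issues, so they complement rather than replace manual checks:

# accessibility_check.py - hypothetical automated scan with Playwright + axe-core.
from playwright.sync_api import sync_playwright

AXE_CDN = "https://cdn.jsdelivr.net/npm/axe-core@4/axe.min.js"  # assumed CDN path for axe-core

def run_axe(url: str) -> list:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        page.add_script_tag(url=AXE_CDN)            # inject the axe-core library into the page
        results = page.evaluate("() => axe.run()")  # Playwright waits for the returned promise
        browser.close()
    return results["violations"]

def test_homepage_has_no_axe_violations():
    violations = run_axe("https://app.example.internal/")
    assert violations == [], [v["id"] for v in violations]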


For all of these testing types, we strongly encourage implementing them within your pipeline where possible and appropriate. However, there may be some exceptions: for example, you may not want to run a performance test for every deployment.


There is also considerable benefit in conducting manual exploratory testing. Exploratory testing involves simultaneously learning about the software under test while designing and executing tests, using feedback from the last test to inform the next. If you find a defect during exploratory testing, it's worth having a team discussion to decide whether there is value in automating a test for it and including it in your pipeline to prevent the defect from recurring.


There are gaps in the above, such as Compatibility testing. I have taken the approach of addressing things as they arise to avoid making too many assumptions.


Feel free to contact me regarding the above! I'd love to hear your thoughts, too.

 
 
 
