
Are we wasting too much time creating end-to-end tests?

  • Writer: Phil Hargreaves
  • Apr 14, 2022
  • 5 min read

End-to-end (E2E) testing can quickly become an ambiguous term, especially when developing micro-services. Using E2E tests to cover the integration between multiple micro-services is a bit of an anti-pattern.

There is still considerable demand for creating copious amounts of E2E regression tests. In a monolithic world, this is without doubt extremely important, because the entire system functions as a single application: an individual, autonomous unit.

We have to accept that one size does not fit all circumstances.


Monolithic Vs Micro-services


I sometimes think that, as testers, we want to carry on doing what we have always done without taking a step back and thinking about what is actually required, given that there are many approaches to developing software.

With micro-services, the big question is: what value does creating E2E regression tests give us? A warm and fuzzy feeling that the system is working as it should? Confidence that the lower-level testing is adequate? Something to do? A way to make a QA's work more visible? Removing the need to collaborate with our developers? Or is it just creating a huge overhead and maintenance cost for your development teams?


There is nothing better than breaking down those silos and working to produce awesome software as a team.

Here are the types of testing that we think you should be doing for your micro-services:

Unit Tests: Unit tests, while still testing meaningful units of functionality, do this without any other dependencies. Unit tests run in isolation, without the need for additional services or databases. They also run fast and provide instant feedback.
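To make that concrete, here's a minimal sketch of a unit test with pytest; the pricing function and its behaviour are purely illustrative, but notice that nothing external needs to be running for it to pass.

```python
# test_pricing.py - an illustrative unit test (pytest); apply_discount is a made-up example.
import pytest


def apply_discount(total: float, percent: float) -> float:
    """Pure business logic: apply a percentage discount to an order total."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)


def test_apply_discount_reduces_total():
    # No services, no database - just the unit under test, so feedback is instant.
    assert apply_discount(100.0, 20) == 80.0


def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```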


Component Tests: In its most basic form, a component test validates whether a developed service meets its specification.

Many delivery teams choose to validate this via (as mentioned earlier) automated end-to-end tests. However, this does not need to be the case. My preferred approach is for teams to sit down with the "spec" (usually a list of acceptance criteria) and figure out which level of the Test Pyramid should cover each specific criterion; if you can do the bulk of your testing with unit tests or component tests, then great!

I don't refer to the Test Pyramid much these days and prefer the phrase "test early and often", which I believe is more relevant, although the Test Pyramid still makes a lot of sense when explaining your approach.
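To illustrate what a component test can look like, here's a hedged sketch using Flask and pytest (all names invented): the whole service runs in-process and its database dependency is replaced with an in-memory stub, so the acceptance criteria can be checked without any other services or databases running.

```python
# test_user_service_component.py - an illustrative component test (pytest + Flask).
from flask import Flask, abort, jsonify


def create_app(repository):
    """Application factory: the real service would be handed a database-backed repository."""
    app = Flask(__name__)

    @app.route("/users/<int:user_id>")
    def get_user(user_id):
        user = repository.get(user_id)
        if user is None:
            abort(404)
        return jsonify(user)

    return app


def test_known_user_is_returned():
    # Acceptance criterion: GET /users/{id} returns the user's details.
    client = create_app({1: {"id": 1, "name": "Ada"}}).test_client()
    response = client.get("/users/1")
    assert response.status_code == 200
    assert response.get_json() == {"id": 1, "name": "Ada"}


def test_unknown_user_returns_404():
    # Acceptance criterion: an unknown id gives a 404 rather than an error page.
    client = create_app({}).test_client()
    assert client.get("/users/99").status_code == 404
```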


Contract Tests: Discussed quite a lot but, in my opinion, rarely implemented! When building micro-services, integration points between services are a breeding ground for bugs. Consumer-driven contract testing is a technique where the consumer defines the contract, and verifications are made against this contract within the build/test lifecycle. Micro-services allow teams to release autonomously, and contract tests help protect that autonomy. There is an excellent article by Martin Fowler on contract testing.
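As a rough illustration of the idea, the sketch below hand-rolls a consumer-defined contract that the provider's build then verifies; in practice a tool such as Pact automates this, and every name and field here is hypothetical.

```python
# test_provider_contract.py - a deliberately simplified, hand-rolled sketch of
# consumer-driven contract testing. The consumer team publishes the contract;
# the provider's build fails if a response no longer satisfies it.

# Contract as published by a hypothetical consumer team.
CONSUMER_CONTRACT = {
    "request": {"path": "/users/1"},
    "response": {
        "status": 200,
        # The fields (and types) the consumer actually relies on - nothing more.
        "body_fields": {"id": int, "name": str},
    },
}


def get_user(user_id: int):
    """Stand-in for the provider's real handler (illustrative only)."""
    users = {1: {"id": 1, "name": "Ada", "team": "payments"}}
    user = users.get(user_id)
    return (200, user) if user else (404, None)


def test_provider_honours_consumer_contract():
    expected = CONSUMER_CONTRACT["response"]
    status, body = get_user(1)

    assert status == expected["status"]
    for field, field_type in expected["body_fields"].items():
        assert field in body, f"consumer relies on field '{field}'"
        assert isinstance(body[field], field_type)
```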


End to End Tests: As mentioned right at the start, E2E testing can quite quickly become an ambiguous term, especially when developing micro-services. Ideally, while developing micro-services, we want to be as independent as we possibly can, confident in building, testing and deploying individual components. We prefer to test the interaction between micro-services through contract tests rather than E2E tests. A high level of E2E testing can become an expensive overhead, and the tests tend to be flaky. A limited number of tests that check things hang together is still helpful post-deployment, but I have moved away from the norm and won't create unnecessary amounts of these. That said, as I move around organisations, it's essential to listen to your customers and discuss what is necessary for them.


Smoke Tests: A subset of automated tests that can prove an environment is stable enough to proceed with further testing, or decide if the software is stable enough to be released into production. When releasing software into higher environments, such as an externally facing test environment or production, we are solely proving we can deploy the software successfully; this is not another functional testing phase. I have written another post about smoke testing if this is of interest.
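A post-deployment smoke suite can be as small as the sketch below (pytest and requests); the base URL, the /health endpoint and the version field are assumptions about what your service exposes.

```python
# test_smoke.py - a minimal post-deployment smoke check; it proves the deployment
# worked and the service is up, nothing more.
import os

import pytest
import requests

BASE_URL = os.environ.get("BASE_URL", "https://test.example.com")


def test_service_is_up():
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200


def test_expected_version_is_deployed():
    # Optional: confirm the version we just released is the one actually running.
    expected = os.environ.get("EXPECTED_VERSION")
    if not expected:
        pytest.skip("pipeline did not provide EXPECTED_VERSION")
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.json().get("version") == expected
```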



Performance Testing: There are many variants of performance testing, such as Spike, Soak, and Stress testing, and each has a very different focus. The only kind of performance testing that I encourage is Load Testing. Each team must test each release of their service in isolation to ensure that it can stand up to the predicted load. Conducting regular but short load tests on your micro-services is sufficient to identify performance-related bugs such as memory leaks. During a load test, we also need to observe what happens to the micro-service itself; this way, we can assess how many instances are necessary to support the service we are testing.
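A short, regular load test doesn't need much scaffolding. Here's a sketch using Locust (an assumed tool choice; the endpoints, traffic mix and numbers are illustrative):

```python
# locustfile.py - a short load-test sketch. Run it headless in the pipeline, e.g.:
#   locust -f locustfile.py --headless -u 50 -r 5 --run-time 5m --host https://test.example.com
from locust import HttpUser, task, between


class OrderServiceUser(HttpUser):
    # Each simulated user pauses 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task(3)
    def list_orders(self):
        # Weighted 3:1 against order creation to mimic the predicted traffic shape.
        self.client.get("/orders")

    @task(1)
    def create_order(self):
        self.client.post("/orders", json={"sku": "ABC-123", "quantity": 1})
```

Watching the service's memory and CPU while this runs is what surfaces leaks and tells you how many instances you need.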


Security Testing: An essential element of your testing is running security tests, right?

Automated security testing is useful for finding generic vulnerabilities that are not related to the business domain of your code.

Manual exploratory testing is best suited to complex or domain-specific test scenarios. Examples include the appropriate use of authentication and authorisation, and business-logic exploitation, all of which require an understanding of the system's functionality. Some generic (or non-domain-specific) vulnerabilities are also best tested manually when they are sufficiently complex to exploit. We could also look at implementing some of these tests into our component tests.
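To show what folding one of these checks into a component test might look like, here's a hedged sketch (Flask and pytest; the endpoint, header and roles are invented for illustration):

```python
# test_authorisation_component.py - an illustrative authorisation check at component level.
from flask import Flask, jsonify, request


def create_app():
    app = Flask(__name__)

    @app.route("/admin/reports")
    def admin_reports():
        # Deliberately simplified auth: a real service would validate a signed token.
        role = request.headers.get("X-Role")
        if role is None:
            return jsonify(error="unauthenticated"), 401
        if role != "admin":
            return jsonify(error="forbidden"), 403
        return jsonify(reports=[])

    return app


def test_unauthenticated_request_is_rejected():
    client = create_app().test_client()
    assert client.get("/admin/reports").status_code == 401


def test_non_admin_role_is_forbidden():
    client = create_app().test_client()
    response = client.get("/admin/reports", headers={"X-Role": "viewer"})
    assert response.status_code == 403
```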


Accessibility Testing: This is performed to ensure that the application is usable by people with disabilities such as hearing impairments or colour blindness, by older people, and by other disadvantaged groups. It is a subset of Usability Testing.

Accessibility must be considered when building software. It is important that accessibility testing is not left until the end of your cycle, nor should it just form part of your Usability Testing sessions.

My current organisation is obliged to build software to an AA Standard: https://www.w3.org/TR/WCAG/
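Some narrow accessibility checks can also run automatically in the pipeline. The sketch below (requests and BeautifulSoup; the page URL is an assumption) covers just two small slices of WCAG, image alt text and a declared page language, so it complements rather than replaces dedicated tooling and manual testing.

```python
# test_accessibility_basics.py - two tiny automated accessibility checks.
import requests
from bs4 import BeautifulSoup

PAGE_URL = "https://test.example.com/"


def fetch_page():
    return BeautifulSoup(requests.get(PAGE_URL, timeout=5).text, "html.parser")


def test_images_have_alt_text():
    # WCAG 1.1.1: non-text content needs a text alternative.
    missing = [img for img in fetch_page().find_all("img") if not img.get("alt")]
    assert not missing, f"{len(missing)} image(s) missing alt text"


def test_page_declares_a_language():
    # WCAG 3.1.1: the page's human language must be programmatically determinable.
    html_tag = fetch_page().find("html")
    assert html_tag is not None and html_tag.get("lang")
```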


For all of these testing types, it is strongly encouraged that you implement them within your pipeline where possible and where appropriate. However, we understand there may be some exceptions, e.g. you may not want to run a performance test for every deployment.


There is also considerable benefit in carrying out manual exploratory testing. Exploratory testing involves simultaneously learning about the software under test while designing and executing tests, using feedback from the last test to inform the next. If you find a defect while exploratory testing, it's worth having a team discussion to agree whether there is value in automating that test and including it in your pipeline to prevent it from happening again in the future.


There are gaps in the above, such as compatibility testing. I have taken the approach of addressing things as they arise, to avoid making too many assumptions.


Feel free to contact me regarding the above! I'd love to hear your thoughts too.

 
 
 
