
Testing on a New Digital Platform - Start as you mean to go on!

  • Writer: Phil Hargreaves
  • Apr 12, 2022
  • 9 min read

Updated: Dec 2, 2025

An organisation I have worked with recently is going through a significant digital transformation. For the organisation, this is an entirely new way of thinking. As a team, we want to revolutionise their ways of working by using new, fast, and frequently changing digital technologies to solve their problems.


I have been looking into creating an overarching Testing Strategy. Much of this information is not uncommon, but I thought it would be helpful to those starting out in the testing industry, or to anyone in a similar position on a greenfield project.


The organisation is heavily investing in Microsoft, so for us the best approach was to use Azure DevOps. In a future post, I may go into more detail about the platform and the challenges we have faced and will face.



This information is intended to serve as guidance for teams that may develop on this Digital Platform in the future. Each team is responsible for deciding how best to test their services. We don't want them to follow advice blindly; instead, they should use this information as a set of principles and select the test activities that best suit their service.


The aim is not to dictate how teams should do things, but rather to guide what we believe good testing looks like.


“We are creating an environment where people do the right thing and make their own decisions”


In our approach, we deliberately do not describe which Agile processes or ceremonies you should use, as those decisions lie with the teams. It is, however, strongly advised that we aim to deliver iteratively and deliver value frequently. Because each microservice release is small and frequent, the impact of its changes is lower, so we don't need to run extensive regression testing. Instead, testers embed themselves in the teams. They are part of technical conversations about the impact of each change, allowing them to select which aspects of each service might need testing to reduce the risk of a specific release.



We are also not dictating the use of the Agile methodology; there are many ways you can deliver frequently.


Continuous Delivery

Continuous delivery is a software engineering approach in which teams produce software in short cycles, ensuring it can be reliably released at any time. It aims to build, test, and release software faster and more frequently. A straightforward and repeatable deployment process is essential.


We want to avoid testing being treated as an end-of-process activity, something that is often forgotten and left until the last minute.


Here, we highlight where in the process testing can and should be applied:


Continuous Integration (CI) is a development practice in which developers/testers frequently integrate code into a shared repository, ideally several times a day. Automated builds and tests can then verify each integration.


Continuous delivery (CD) is a software development practice in which code changes are automatically prepared for production releases.



Our Test Principles


  1. Testing will enable effective delivery rather than obstruct it - Testers are not gatekeepers; we want Testing to be something the whole team invests in up front.

  2. Automate as much as possible, provided it adds value. Automated tests at any level should give us fast feedback on our code. These tests need to be helpful to the team: they should run in minutes rather than hours and not be flaky. For this reason, we recommend Unit tests and Component tests and encourage teams to limit their UI-level automated tests to a handful.

  3. Fail Fast - We want the earliest possible feedback that a test has failed. If we find a defect in a later environment, we should always ask the question “What was the earliest point at which we could have found this defect?” and then write a test at that point.

  4. The Team is responsible for defining their own test processes. It is the responsibility of the team building services for the platform to describe their test approach. The key points we want to get across are to work iteratively, deliver value regularly and ensure testing is embedded within the team.

  5. Pragmatism in Testing - We want teams to assess the impact of changes in each release of their service and take a pragmatic approach to determine what needs to be validated. We do not believe it is a good use of time to perform extensive end-to-end testing. In essence, we believe we get more value from investing in automation at the integration level and running a handful of end-to-end tests to validate connectivity.

  6. Advanced Monitoring - As we strive for more frequent releases and more advanced monitoring, we need to change our testing approach. We need to build and deliver changes with testing baked into the process to get them into production and obtain real information about our services. In a world of Continuous Deployment, advanced monitoring can help us identify issues in production (sometimes before our users do) and fix them quickly.

  7. Continually look to improve - If something isn’t working, use your iteration reviews to feed that back to the team and try different ways of working.


Our Test Structure

When deciding which tests to write, it is recommended to use the Test Pyramid as a guideline. The bulk of your code should be tested with Unit tests; Service tests are most useful for covering integration with other systems; and UI tests should be lightweight, covering only a handful of user journeys.


The Testing Pyramid highlights that it is much more effective and less flaky to automate tests at the Unit level.


High-level tests serve as a second line of defence. If you get a failure in a high-level test, you don't just have a bug in your functional code; you also have a missing or incorrect unit test. Before fixing a bug exposed by a high-level test, you should replicate the bug with a unit test. Then the unit test ensures the bug stays away.


Test-Driven Development


Test-Driven Development (TDD) is an agile development approach we encourage.

This relies on short development cycles being repeated. Tests and code are developed together in a Red-Green-Refactor cycle.


  • Red – Write a failing test

  • Green – Write the functional code to make the test pass

  • Refactor – Refactor the code to make it clean







These cycles should be frequent, almost line-by-line; it's about making lots and lots of little steps in the right direction. There are many descriptions of why TDD is beneficial, such as driving out design and increasing code coverage. In essence, though, it's about knowing what you want the code to do, knowing when it does it, and knowing when it breaks.
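
To make the cycle concrete, here is a minimal sketch of one Red-Green-Refactor loop using pytest. The module and function names are hypothetical and are only here to illustrate the rhythm, not to prescribe a platform standard.

    # test_addition.py - a minimal Red-Green-Refactor sketch (run with: pytest test_addition.py)

    # GREEN: the simplest functional code that makes the test below pass.
    # In the RED step this function did not exist yet, so the test failed.
    def add(first, second):
        return first + second

    # RED: this test was written first, before add() existed, and was watched failing.
    def test_add_two_numbers():
        assert add(1, 2) == 3

    # REFACTOR: with the test green, tidy the code (naming, duplication) in small steps,
    # re-running the test after each change to confirm the behaviour is unchanged.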


Why TDD?

“One of the most dangerous things about a traditional requirements specification is when people think that once it's written, they no longer need to communicate”



Is this really what they asked for?!


It's incredibly difficult for one person to fully define how a system should behave, especially when most systems have many paths through the code. This can make it hard to communicate clearly and consistently with everyone involved in the project. Working as a team sparks healthy conversations that define what is truly required.

Example:

  Scenario: Add two numbers together
    Given I have the number 1
    And I have the number 2
    When I add them
    Then the result should be 3
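
The scenario only captures the conversation; the team still binds it to executable code. As a rough illustration, the step definitions below use the Python behave library; the file name and wiring are assumptions based on the example scenario above.

    # features/steps/addition_steps.py - hypothetical behave step definitions
    # for the scenario above (assumes the behave package and a matching .feature file).
    from behave import given, when, then

    @given('I have the number {value:d}')
    def step_have_number(context, value):
        # Collect each number mentioned in the Given/And steps.
        if not hasattr(context, 'numbers'):
            context.numbers = []
        context.numbers.append(value)

    @when('I add them')
    def step_add_numbers(context):
        context.result = sum(context.numbers)

    @then('the result should be {expected:d}')
    def step_check_result(context, expected):
        assert context.result == expected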

The benefits of TDD are:

  • Tests prove that the code does what it's meant to do

  • Tests drive the design of the program

  • Refactoring allows improvements to the design

  • You are building up a low-level set of regression tests

  • A test-first approach reduces the risk of bugs landing in your production environment

  • You are creating working documentation that is easily understood by the whole team

Testing Types on the Platform

Unit Tests - Unit tests are those that, while still testing meaningful units of functionality, do so without any other dependencies. Unit tests run in isolation without the need for additional services or databases. They also run fast and provide instant feedback.
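
As a hedged illustration of "without any other dependencies", the sketch below uses Python's built-in unittest.mock to stand in for a repository, so the unit under test never touches a real database. All names here are hypothetical.

    # test_pricing.py - a hypothetical unit test that isolates the code under test
    # from its database dependency using a stub (unittest.mock).
    from unittest.mock import Mock

    def price_with_vat(product_id, repository, vat_rate=0.2):
        # Unit under test: look up a net price and apply VAT.
        net_price = repository.get_net_price(product_id)
        return round(net_price * (1 + vat_rate), 2)

    def test_price_with_vat_uses_stubbed_repository():
        # The real repository (and its database) is replaced by a stub, so the
        # test runs in milliseconds and needs no infrastructure.
        repository = Mock()
        repository.get_net_price.return_value = 10.00

        assert price_with_vat("abc-123", repository) == 12.00
        repository.get_net_price.assert_called_once_with("abc-123")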


Component Tests - In its most basic form, a component test validates whether a developed service meets its specification.

Many delivery teams choose to validate this through automated end-to-end tests. However, this need not be the case. Our preferred approach for teams on the platform is for delivery teams to sit down with the “spec” (usually a list of acceptance criteria) and determine at which level of the Test Pyramid to cover each criterion. If you can do most of your testing with unit or component tests, great!
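
As one possible shape for this, the sketch below exercises a tiny Flask service in-process through its test client, checking an acceptance criterion without deploying anything. Flask and the endpoint shown are assumptions for illustration, not a platform standard.

    # test_greeting_component.py - a hypothetical component test that runs the service
    # in-process via Flask's test client, with no deployed environment needed.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/greeting/<name>")
    def greeting(name):
        return jsonify({"message": f"Hello, {name}!"})

    def test_greeting_meets_its_spec():
        # Acceptance criterion: GET /greeting/<name> returns 200 and a personalised greeting.
        client = app.test_client()
        response = client.get("/greeting/Phil")

        assert response.status_code == 200
        assert response.get_json() == {"message": "Hello, Phil!"}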

Contract Tests - When using microservices, integration points between services are a breeding ground for bugs. Consumer-driven contract testing is a technique in which the consumer defines the contract, and verifications are performed against it throughout the build/test lifecycle. This is a key part of what allows teams to keep releasing their microservices autonomously.
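
Teams commonly use a tool such as Pact for this; the library-free sketch below only illustrates the idea: the consumer writes down what it relies on, and the provider's build verifies its responses against that contract. The contract contents are made up for the example.

    # contract_sketch.py - an illustrative, library-free consumer-driven contract check.

    # 1. The CONSUMER team records what it relies on from the provider and shares it.
    PRODUCT_CONTRACT = {
        "request": {"method": "GET", "path": "/products/10"},
        "response": {
            "status": 200,
            "required_fields": {"id": int, "name": str, "price": float},
        },
    }

    # 2. The PROVIDER's build verifies its actual responses against that contract.
    def verify_against_contract(status, body, contract):
        expected = contract["response"]
        assert status == expected["status"]
        for field, field_type in expected["required_fields"].items():
            assert field in body, f"missing field: {field}"
            assert isinstance(body[field], field_type), f"wrong type for: {field}"

    def test_provider_honours_consumer_contract():
        # In a real pipeline this response would come from the provider service itself.
        status, body = 200, {"id": 10, "name": "Widget", "price": 9.99}
        verify_against_contract(status, body, PRODUCT_CONTRACT)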


End-to-End Tests (E2E) - E2E testing can quite quickly become an ambiguous term, especially when developing microservices. E2E testing across multiple microservices is a bit of an anti-pattern: ideally, we want microservices to be as independent as possible, so we prefer to test the interaction between them through contract tests rather than E2E tests. A large suite of E2E tests becomes an expensive overhead, and the tests tend to be flaky. A limited number of tests that check things hang together is still helpful.

Smoke Tests - A subset of automated tests that can be used to prove an environment is stable enough to proceed with further testing, or to decide if the software is stable enough to be released into production. At the point of releasing software into higher environments, such as an externally facing environment for testing or production, we are solely proving that we can deploy the software successfully; this is not another phase of functional testing.
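
A smoke check can be as small as the sketch below: after deploying, confirm the service answers on a health endpoint before doing anything else. The URL, environment variable and /health endpoint are placeholders.

    # smoke_test.py - a hypothetical post-deployment smoke check.
    # (Assumes the requests package and a /health endpoint; both are illustrative.)
    import os
    import requests

    BASE_URL = os.environ.get("SERVICE_BASE_URL", "https://my-service.example.com")

    def test_service_is_up_after_deployment():
        response = requests.get(f"{BASE_URL}/health", timeout=5)
        assert response.status_code == 200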


Performance Testing - There are many variants of performance testing, such as Spike, Soak and Stress testing, each with a very different focus. The only kind of performance testing encouraged on the platform is Load Testing. Each team is required to test each release of their service in isolation to ensure it can withstand the predicted load. Conducting regular, short load tests on your microservices is sufficient to identify performance-related bugs, such as memory leaks. During a load test, we also need to observe how the microservice behaves under that load, so we can assess how many instances are needed to support the digital service we're testing.
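
For teams wondering what a short, repeatable load test might look like, here is a minimal sketch using Locust, one common open-source option. The endpoint, wait times and run command are placeholders rather than platform figures.

    # locustfile.py - a minimal load-test sketch using Locust.
    from locust import HttpUser, task, between

    class DigitalServiceUser(HttpUser):
        # Each simulated user waits 1-3 seconds between requests.
        wait_time = between(1, 3)

        @task
        def get_products(self):
            self.client.get("/products")

    # Example run (placeholder values):
    #   locust -f locustfile.py --host https://my-service.example.com \
    #          --users 50 --spawn-rate 5 --run-time 10m --headless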


Security Testing - An essential element of your testing pipeline is running security testing. All teams must implement this.

Automated security testing helps identify generic vulnerabilities unrelated to your code's business domain.

Manual exploratory testing is best suited to complex or domain-specific test scenarios. Examples of this include the appropriate use of authentication and authorisation, and the exploitation of business logic, all of which require an understanding of the system's functionality. Some generic (or non-domain-specific) vulnerabilities are best tested manually when they are sufficiently complex to exploit or require chaining multiple vulnerabilities.


Accessibility Testing - Accessibility Testing is performed to ensure that the application being tested is usable by people with disabilities, such as hearing impairments or colour blindness, as well as older users and other disadvantaged groups. It is a subset of Usability Testing. Accessibility must be a consideration when building software, and it is essential that accessibility testing is not left until the end of your cycle, nor should it be limited to your Usability Testing sessions. This organisation is obliged to build software to the WCAG AA standard: https://www.w3.org/TR/WCAG/


For all testing types, it is strongly encouraged that you implement them within your pipeline where possible and appropriate. However, we understand there may be some exceptions, e.g. you may not want to run a performance test for every deployment. There is also considerable benefit in conducting manual exploratory testing. Exploratory testing involves simultaneously learning about the software under test while designing and executing tests, using feedback from the last test to inform the next. If you find a defect during exploratory testing, have a team discussion to decide whether it's worth automating a test for it and including it in your pipeline to prevent a recurrence.


There are gaps in the testing types above, such as Compatibility testing; these will be addressed by the platform in the future.


Environments


  • Near-identical Environments for test and production purposes, which reduces risk.

  • These environments exist in the cloud (Azure)

  • Deployments are automatic, reducing risk and creating repeatability

  • Secure connectivity is provided to other internal or 3rd party services

  • Services are typically deployed in “containers” that offer a high level of predictability and assurance

  • Services scale up and down in response to actual demand, providing consistent performance and minimum cost

  • Automatic detection of issues and self-healing (redeployment) of nodes ensures high service availability

  • Logging, monitoring and alerting are standardised across services to enable a proactive response to any issues

Monitoring


As we implement monitoring on our services, we are aiming to answer the following:

  • Can we output the required statistics of our services? (see the sketch after this list)

  • Can we get the necessary information to understand how our service is used?

  • Can we actively use the outputs to model or provide direction for some aspects of our testing?
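
As a small sketch of the first question, the snippet below exposes basic service statistics using the Python prometheus_client library. The metric names are hypothetical, and the platform's standardised logging, monitoring and alerting stack may well differ.

    # metrics_sketch.py - an illustrative way to output basic service statistics.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS_TOTAL = Counter("orders_requests_total", "Total requests handled")
    REQUEST_SECONDS = Histogram("orders_request_seconds", "Request duration in seconds")

    @REQUEST_SECONDS.time()
    def handle_request():
        REQUESTS_TOTAL.inc()
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

    if __name__ == "__main__":
        start_http_server(8000)  # metrics available at http://localhost:8000/metrics
        while True:
            handle_request()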


If this proves useful to just one person, it was worth sharing.


 
 
 
