
Testing on a New Digital Platform - Start as you mean to go on!

  • Writer: Phil Hargreaves
  • Apr 12, 2022
  • 9 min read

Updated: Apr 14, 2022

An organisation that I have worked with recently is going through a large digital transformation. For the organisation, this is a completely new way of thinking, and as a team we want to revolutionise their ways of working by using new, fast and frequently changing digital technology to solve their problems.


I have been looking at creating an overarching Testing Strategy. Much of this will be information that is already commonly known, but I thought it would be useful to those starting out in the testing industry, or to anyone in a similar position on a greenfield project.


The organisation is heavily investing in Microsoft, so for us the best approach was to use Azure DevOps. Maybe in a future post I can go into more detail about the platform and the challenges we have faced and will face.



This information is intended to be used as guidance for teams who may come to develop on this Digital Platform in the future. Each team is responsible for making the appropriate decisions on how best to test their services. We don’t want them to blindly follow advice; instead, they should use this information as a set of principles and then smartly select the types of test activities that suit their service best.


The aim is not to dictate how teams should do things, but rather guide what we believe good looks like in testing.


“We are creating an environment where people do the right thing and make their own decisions”


In our approach, we deliberately do not describe which Agile processes or ceremonies you should use, as those decisions lie with the teams. It is, however, strongly advised that we aim to deliver iteratively and deliver value frequently. The impact of change in each release of a micro-service is smaller, which means we don’t need to do large amounts of regression testing. Instead, testers embed themselves in the teams. They are part of the technical conversations around the impact of each change, meaning that they can then select which aspects of each service might need testing to reduce the risk of a specific release.



We are also not dictating the use of the Agile methodology; there are many ways you can deliver frequently.


Continuous Delivery

Continuous delivery is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software with greater speed and frequency. A straightforward and repeatable deployment process is essential.


We want to avoid testing being an end-of-process activity, or something that is often forgotten and left until the last minute.


Here, we highlight where in the process testing can and should be applied:



Continuous Integration (CI) is a development practice where developers/testers integrate code into a shared repository frequently, preferably several times a day. Each integration can then be verified by an automated build and automated tests.


Continuous Delivery (CD) is a software development practice where code changes are automatically prepared for release to production.
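
To make this concrete, here is a minimal sketch (in Python, purely illustrative) of the kind of verification step a CI build runs on every integration: execute the automated tests and fail the build if any of them fail. On Azure DevOps this would normally be a pipeline step rather than a hand-rolled script.

# ci_check.py - illustrative only: run the automated tests and propagate
# a non-zero exit code so the CI build fails when any test fails.
import subprocess
import sys

def run_tests() -> int:
    # --maxfail=1 stops on the first failure for the fastest possible feedback
    result = subprocess.run(["pytest", "--maxfail=1", "-q"])
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_tests())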



Our Test Principles


  1. Testing will enable effective delivery rather than obstructing it - Testers are not gatekeepers; we want Testing to be something that the whole team invests in up-front.

  2. Automate as much as possible, providing it adds value - Automated tests, at any level, should give us fast feedback on our code. These tests need to be useful to the team, by which we mean they should run in minutes rather than hours and should not be flaky. For this reason, we recommend Unit tests and Component tests and encourage teams to only have a handful of UI-level automated tests.

  3. Fail Fast - We want the earliest possible feedback that a test has failed. If we find a defect in a later environment, we should always ask the question “What was the earliest point at which we could have found this defect?” and then write a test at that point.

  4. The Team is responsible for defining their own test processes - It is the responsibility of the team building services for the platform to define their test approach, the key points we want to get across are to work iteratively, deliver value regularly and ensure testing is embedded within the team.

  5. Pragmatism in Testing - We want teams to assess the impact of the change being made in each release of their service and take a pragmatic approach to decide what needs to be validated. We do not believe it is a good use of time to perform massive amounts of End to End testing. In essence, we believe we get more value from investing in automation at the integration level and running a handful of end to end tests to validate connectivity.

  6. Advanced Monitoring - As we strive to have frequent releases and more advanced monitoring, we need to change our approach to testing. We need to build and deliver changes with testing baked into the process, so that we can get out to production and obtain real information about our services. In a world of Continuous Deployment, advanced monitoring can help us identify issues in production (sometimes before our users do) and get them fixed very quickly.

  7. Continually look to improve - If something isn’t working, use your iteration reviews to feed that back to the team and try different ways of working.


Our Test Structure

When deciding what types of tests to write, it is encouraged to use the Test Pyramid as a guideline. Primarily, the bulk of your code should be covered by Unit tests; Service tests are most useful for covering integration with other systems; and UI tests should be lightweight, covering just a handful of key user journeys.


What the Testing Pyramid highlights is that it is much more effective, and less flaky, to automate tests at the Unit level.


High-level tests are there as a second line of test defence. If you get a failure in a high-level test, you don't just have a bug in your functional code; you also have a missing or incorrect unit test. Before fixing a bug exposed by a high-level test, you should replicate the bug with a unit test. Then the unit test ensures the bug stays away.


Test-Driven Development


Test-Driven Development (TDD) is an approach to agile development, and one we encourage.

This relies on the repetition of short development cycles. Tests and code are developed together in a Red Green Refactor cycle.


  • Red – Write a failing test

  • Green – Write the functional code to make the test pass

  • Refactor – Refactor the code to make it clean





These cycles should be extremely frequent, almost line by line; it's about taking lots and lots of little steps in the right direction. There are many descriptions of why TDD is useful, from driving out design to building up code coverage. In essence, though, it's summed up as: knowing what you want the code to do, knowing when it does it, and knowing when it breaks.


Why TDD?

“One of the most dangerous things about a traditional requirements specification is when people think that once it's written they no longer need to communicate”



Is this really what they asked for?!


It's incredibly difficult for one person to fully define how a system should behave, especially when most systems have many paths through the code; this can make it hard to communicate clearly and consistently with all the people involved in the project. Working as a team will spark healthy conversations that define what is truly required.

Example:

Scenario: Add 2 numbers together
  Given I have the number 1
  And I have the number 2
  When I add them
  Then the result should be 3
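
As a minimal sketch of how that scenario might be driven out with TDD in Python (the module and function names are purely illustrative), the failing test comes first, then the simplest code that makes it pass:

# test_calculator.py - the Red step: written before add() exists, so it fails first.
from calculator import add

def test_adding_1_and_2_gives_3():
    # Given I have the number 1 and the number 2
    first, second = 1, 2
    # When I add them
    result = add(first, second)
    # Then the result should be 3
    assert result == 3

# calculator.py - the Green step: the simplest code that makes the test pass,
# which can then be refactored with the test acting as a safety net.
def add(first, second):
    return first + second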

The benefits of TDD are:

  • Tests prove that the code does what it's meant to do

  • Tests drive the design of the program

  • Refactoring allows improvements to the design

  • You are building up a low-level set of regression tests

  • A test-first approach reduces the risk of bugs landing in your production environment

  • You are creating working documentation that is easily understood by the whole team

Testing Types on the Platform

Unit Tests - Unit tests are those that, while still testing meaningful units of functionality, do so without any other dependencies. Unit tests run in isolation, without the need for additional services or databases. They also run fast and provide instant feedback.
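
A hypothetical example of what that isolation can look like in Python: the service's dependency is replaced with a mock, so the test exercises only the logic under test and needs no database or network. All the names and the discount rule here are made up for illustration.

# test_pricing.py - a unit test with its only dependency mocked out.
from unittest.mock import Mock
from pricing import PricingService  # hypothetical module under test

def test_discount_is_applied_for_loyal_customers():
    repository = Mock()
    repository.get_customer.return_value = {"id": 42, "loyalty_years": 5}

    service = PricingService(repository)

    # Assumed rule for the example: 10% discount for loyal customers.
    assert service.price_for(customer_id=42, base_price=100) == 90
    repository.get_customer.assert_called_once_with(42)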


Component Tests - In its most basic form, a component test validates whether a developed service meets its specification.

A lot of delivery teams choose to validate this via automated end-to-end tests. However, this does not need to be the case. Our preferred approach is for delivery teams on the platform to sit down with the “spec” (usually a list of acceptance criteria) and figure out at which level of the Test Pyramid to cover each criterion. If you can do the bulk of your testing with unit tests or component tests, then great!
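
As a sketch, assuming a Python service built with Flask, a component test can exercise the service through its real HTTP interface, in-process, against one of its acceptance criteria (the service, routes and response shape are illustrative):

# test_orders_component.py - drives the service through its HTTP interface
# without any other services or databases involved.
from orders_service import create_app  # hypothetical application factory

def test_unknown_order_returns_404():
    client = create_app().test_client()

    response = client.get("/orders/999")

    assert response.status_code == 404
    assert response.get_json() == {"error": "order not found"}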

Contract Tests - Micro-services allow teams to release autonomously, but when using micro-services, the integration points between services are a breeding ground for bugs. Consumer-driven contract testing is a technique where the consumer defines the contract, and verifications are made against that contract within the build/test lifecycle.
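
A sketch of what this can look like on the consumer side, assuming the pact-python library (the service names, port and endpoint are illustrative):

# test_user_service_contract.py - the consumer records its expectations against
# a Pact mock provider; the resulting contract is later verified by the real provider.
import atexit
import requests
from pact import Consumer, Provider

pact = Consumer("CheckoutApp").has_pact_with(Provider("UserService"), port=1234)
pact.start_service()
atexit.register(pact.stop_service)

def test_get_user_contract():
    expected = {"id": 123, "name": "Alice"}

    (pact
     .given("user 123 exists")
     .upon_receiving("a request for user 123")
     .with_request("GET", "/users/123")
     .will_respond_with(200, body=expected))

    # Calling the mock provider records the interaction that forms the contract.
    with pact:
        response = requests.get("http://localhost:1234/users/123")

    assert response.json() == expected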


End to End Tests (E2E) - E2E testing can quite quickly become an ambiguous term, especially when developing micro-services. E2E in the context of testing integration between multiple micro-services is a bit of an anti-pattern. Ideally, while developing micro-services, we want to aim to be as independent as we possibly can be. We prefer to test the interaction between micro-services through contract tests rather than E2E tests. A high level of E2E testing can become an expensive overhead, and the tests tend to be flaky. A limited number of tests that check things hang together is still useful.

Smoke Tests - A subset of automated tests that can be used to prove an environment is stable enough to proceed with further testing, or to decide whether the software is stable enough to be released into production. At the point of releasing software into higher environments, such as an externally facing environment for test or production, we are solely proving that we can deploy the software successfully; this is not another phase of functional testing.
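
For example, a smoke test can be as small as hitting a health endpoint straight after deployment (the URL and response shape are illustrative):

# test_smoke.py - run immediately after deployment to prove the service is up.
import requests

def test_health_endpoint_reports_healthy():
    response = requests.get("https://orders.test.example.com/health", timeout=10)

    assert response.status_code == 200
    assert response.json().get("status") == "healthy"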


Performance Testing - There are many variants of performance testing, such as Spike, Soak and Stress testing, and each has a very different focus. The only kind of performance testing that is encouraged on the platform is Load Testing. Each team is required to test each release of their service in isolation, to ensure that it can stand up to the predicted load. Conducting regular but short load tests on your micro-services is sufficient to identify performance-related bugs such as memory leaks. During a load test, we also need to observe how the micro-service behaves; this way we can assess how many instances are necessary to support the digital service we're testing.
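
A minimal sketch of such a load test, assuming the Locust tool (any load testing tool would do; the endpoint and numbers are illustrative):

# loadtest.py - simulated users repeatedly call the service under test.
from locust import HttpUser, task, between

class OrderServiceUser(HttpUser):
    wait_time = between(1, 3)  # seconds each simulated user waits between requests

    @task
    def list_orders(self):
        self.client.get("/orders")

# Run a short, regular load test with, for example:
#   locust -f loadtest.py --host https://orders.test.example.com \
#          --users 50 --spawn-rate 5 --run-time 10m --headless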


Security Testing - An essential element of your testing pipeline is running security testing. All teams must implement this.

Automated security testing is useful for finding generic vulnerabilities that are not related to the business domain of your code.

Manual exploratory testing is best suited to complex or domain-specific test scenarios. Examples include appropriate use of authentication and authorisation, and business logic exploitation, all of which require an understanding of the system's functionality. Some generic (or non-domain-specific) vulnerabilities are also best tested manually when they are sufficiently complex to exploit or require the chaining of multiple vulnerabilities.
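
As a trivial illustration of a generic, non-domain-specific check that can run automatically in the pipeline (the URL is illustrative; dedicated scanners would sit alongside checks like this):

# test_security_headers.py - verifies common security headers on every deployment.
import requests

def test_security_headers_are_present():
    response = requests.get("https://orders.test.example.com/", timeout=10)

    assert "Strict-Transport-Security" in response.headers
    assert response.headers.get("X-Content-Type-Options") == "nosniff"
    assert "Content-Security-Policy" in response.headers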


Accessibility Testing - Accessibility Testing is performed to ensure that the application being tested is usable by people with impairments such as hearing loss and colour blindness, as well as older users and other disadvantaged groups. It is a subset of Usability Testing. Accessibility must be a consideration when building software, and it is important that accessibility testing is not left until the end of your cycle, nor should it just form part of your Usability Testing sessions. This organisation is obliged to build software to an AA Standard: https://www.w3.org/TR/WCAG/


For all testing types, it is strongly encouraged that you implement them within your pipeline where possible and appropriate. However, we understand there may be some exceptions, e.g. you may not want to run a performance test for every deployment. There is also considerable benefit in carrying out manual exploratory testing. Exploratory testing involves simultaneously learning about the software under test while designing and executing tests, using feedback from the last test to inform the next. If you find a defect as a result of exploratory testing, it's worth having a team discussion to agree whether there is value in automating that test and including it in your pipeline to prevent it from happening again.


There are gaps in the testing types above, things such as Compatibility; these will be addressed by the platform in the future.


Environments


  • Near-identical environments for test and production purposes, which reduces risk

  • These environments exist in the cloud (Azure)

  • Deployments are automated, reducing risk and creating repeatability

  • Secure connectivity is provided to other internal or 3rd party services

  • Services are typically deployed in “containers” that provide a high level of predictability and assurance

  • Services scale up and down in response to actual demand, providing consistent performance and minimum cost

  • Automatic detection of issues and self-healing (redeployment) of nodes ensures high service availability

  • Logging, monitoring and alerting are standardised across services to enable a proactive response to any issues

Monitoring


As we implement monitoring on our services, we are aiming to answer the following:

  • Can we output the required statistics of our services?

  • Can we get the necessary information to understand the usage of our service?

  • Can we actively use the outputs to model or help give direction to some aspects of our testing?
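
As a minimal sketch of the kind of instrumentation that can answer those questions, here is a Python service exposing basic request statistics with the prometheus_client library (an assumption; Azure Application Insights or similar would serve the same purpose):

# metrics.py - counts requests by outcome and records latency, exposed on /metrics.
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("orders_requests_total", "Requests handled, by outcome", ["outcome"])
LATENCY = Histogram("orders_request_seconds", "Request duration in seconds")

@LATENCY.time()
def handle_request():
    # ... real service logic would go here ...
    REQUESTS.labels(outcome="success").inc()

if __name__ == "__main__":
    start_http_server(8000)  # the monitoring stack scrapes http://host:8000/metrics
    while True:
        handle_request()
        time.sleep(1)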


If this is useful to just one person, it was worth the share.


 
 
 
