We write software. We develop services and distribute them across environments. We use them. Everything works great. So why should we test? And why, of all things, should we test automatically?
The following blog post discusses various motivations for why automated testing makes sense. Automated testing supports many goals, but — unsurprisingly — doesn't solve every problem.
The benefits and goals of automated testing are not equally valid in every environment.
For managers: save time and money

Automated testing saves time and money — at least in the medium and long term.
In the short term, test automation costs more: After all, I have to write and automate the tests first. But with every execution, automated tests pay back some of the costs (figure below: cost vs. time, red solid line). Over and over and over again.
Manual testing costs less in the short term, but only in the very short term (blue solid line). An automated test probably pays off by about the third run. As a developer, I've already run an automated test more than three times after ten minutes of development.

Software tends to change: changes to code go hand in hand with changes to tests or, ideally, with new tests. With manual testing, changes make costs rise earlier than they would without changes (blue dotted line). Changing automated tests or writing new ones also costs more (red dotted line). None of this changes the experience that automated tests are cheaper than manual tests in the long term.
Not testing at all is even cheaper, but only as long as I haven't written and run a single piece of code. No: not testing is not an option, even from a time and cost perspective.
For the impatient: Early feedback

Automated tests are automatically executed by the developer locally after every code change or in the build pipeline. If a test fails, I immediately get feedback about the failure — fail fast.
If a test fails in the IDE, I need to find and fix the cause of the failure before I continue developing or commit to version control. If I don't, it will probably become more and more difficult to track down the cause of the failure later, and the fail-fast benefit is lost.
For process-conscious people: automation of recurring activities

A deployment, a build, a local test run: each of these process steps must incur only minimal transaction costs when it is carried out. Then we carry out the process steps as often as necessary without asking about the effort involved.
Automated tests contribute to this minimization of transaction costs. The age of manual build & deployment with weeks of manual testing phases is over.
For purists: simple design

Test-driven design — i.e. writing code so that it fulfills a test — potentially leads to simpler design. The test gives the code a direction. We avoid or minimize boilerplate code — i.e. code that does not contribute to solving the actual problem.
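A minimal sketch of the idea, assuming JUnit 5 and a hypothetical PriceCalculator (neither comes from the original post): the test is written first and gives the implementation its direction, so no code appears that the test does not demand.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Written first: this test defines what the code has to do.
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountAboveOneHundredEuros() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(108.0, calculator.finalPrice(120.0), 0.001);
    }
}

// Written second: just enough implementation to satisfy the test, no speculative extras.
class PriceCalculator {
    double finalPrice(double netPrice) {
        return netPrice > 100.0 ? netPrice * 0.9 : netPrice;
    }
}
```

Whether a discount rule belongs in a ternary expression is beside the point; the point is that the test kept the design from growing beyond what was asked.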
For the careful: Validating refactorings

Software requirements and constraints change over time and require changes to the code. When code changes, we want to ensure that functionality unaffected by the changed requirements is preserved.
Automated tests help validate refactorings. With no tests, too few tests, or the wrong tests, it is difficult to validate the impact of a code change.
But be careful: automated tests are an indicator that code works, not a guarantee. Only exhaustive tests provide such a guarantee, and they can only be achieved with unreasonable effort, even for simple requirements. By unreasonable, I mean not just unjustifiable but unrealistic. Really. Would you like a small example?
We've written a function to multiply two integers and want to test it completely.
Integer numbers range from -2,147,483,648 to 2,147,483,647.
This means that an integer can take on 4,294,967,296 different values. However, we do not have one integer but two, and any value of one can be multiplied by any value of the other. This results in 4,294,967,296 times 4,294,967,296 combinations.
In other words, 18,446,744,073,709,551,616, roughly 18.4 quintillion, different multiplications are possible. 18.4 quintillion test runs would be necessary to completely test the multiplication of two integers.
That. Is. Not. Doable.
The functionality of modern IT systems usually consists of far more than multiplying two integers; it is significantly more complex. Testing it exhaustively is even less realistic.
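A tiny sketch in Java (my own illustration, not from the original post) that reproduces the arithmetic:

```java
import java.math.BigInteger;

public class ExhaustiveTestCount {
    public static void main(String[] args) {
        // 2^32 possible values per int operand, and two independent operands.
        BigInteger valuesPerInt = BigInteger.valueOf(2).pow(32); // 4,294,967,296
        BigInteger combinations = valuesPerInt.multiply(valuesPerInt);
        System.out.println(combinations); // 18446744073709551616
    }
}
```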
For code users: Documentation of functionality

Automated tests [1] are part of the documentation of the features they test. If I want to understand how a piece of code works, I read the code. But I can't be sure I understand its function correctly. Examples that show the code in action and verify its result help here.
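A hedged example of such executable documentation, assuming JUnit 5 and a hypothetical Calculator class: the test names and assertions tell me how multiply behaves without my having to read its implementation.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical code under test.
class Calculator {
    static int multiply(int a, int b) {
        return a * b;
    }
}

// Each test is a worked example of the documented behavior.
class CalculatorDocumentationTest {

    @Test
    void multiplyingTwoNegativeNumbersYieldsAPositiveResult() {
        assertEquals(6, Calculator.multiply(-2, -3));
    }

    @Test
    void multiplyingByZeroYieldsZero() {
        assertEquals(0, Calculator.multiply(123, 0));
    }
}
```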
For sustainable people: Understanding mistakes

If an error occurs in a functionality after development is complete, I try to understand the error. I might be able to do that by looking at the code. I can then adjust the code and hope that I have fixed the error.
Or I try to reproduce the error with the help of an automated test. If I succeed, I can now adapt my code so that the test that reproduces the error fails. If that test fails, I have fixed the bug with some certainty. But I also want to make sure that the error does not happen again, for example due to another code change.
For this, I negate the assertion of the failed test [2]. The test is now green. If this test ever fails again, I know I have reintroduced a known bug and can correct the change that caused the error.
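A sketch of how that could look, assuming JUnit 5 and a made-up bug report (an empty shopping cart was charged a shipping fee); the class and method names are mine, not the post's.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class EmptyCartRegressionTest {

    // Step 1 (while the bug exists): a reproduction test is green because it
    // asserts the erroneous total of 4.95:
    //
    //   assertEquals(4.95, new Cart().total(), 0.001);
    //
    // Step 2: fix the code until the reproduction test fails, then negate the
    // expectation. The result is a green regression test:
    @Test
    void emptyCartIsNotChargedAShippingFee() {
        assertEquals(0.0, new Cart().total(), 0.001);
    }
}

// Hypothetical production code after the fix.
class Cart {
    double total() {
        return 0.0; // no shipping fee on an empty cart anymore
    }
}
```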
For fortune tellers: predicting code behavior

Automated testing helps us predict the behavior of a function when certain conditions occur.
For this purpose, test data is divided into equivalence classes, with the underlying assumption that “similar” data behaves similarly. If an error reveals a new equivalence class, I add a corresponding test and improve the predictability of my function's behavior.
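Staying with the multiplication example, a sketch of equivalence-class tests, assuming JUnit 5: a handful of representative cases instead of 18.4 quintillion runs.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Same hypothetical Calculator as in the documentation example above.
class Calculator {
    static int multiply(int a, int b) {
        return a * b;
    }
}

// One representative per equivalence class: both positive, mixed signs,
// a zero operand, and the overflow boundary of int.
class MultiplyEquivalenceClassTest {

    @Test void bothPositive() { assertEquals(12, Calculator.multiply(3, 4)); }

    @Test void mixedSigns() { assertEquals(-12, Calculator.multiply(-3, 4)); }

    @Test void zeroOperand() { assertEquals(0, Calculator.multiply(0, 42)); }

    @Test void overflowBoundary() {
        // int multiplication overflows silently; that is an equivalence class of its own.
        assertEquals(-2, Calculator.multiply(Integer.MAX_VALUE, 2));
    }
}
```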
For heirs: Facilitate development

When developers take over the maintenance or development of existing code, good automated tests are an essential prerequisite to make it easier for them to get started. Another requirement is good design.
Good tests can be used to “learn” the expected behavior of existing code. If a test breaks after a code change, the team receives quick feedback, and the developers climb the learning curve faster.
For safety-conscious people: Demonstrate the behavior of critical functions

Automated tests for the code of a critical functionality can demonstrate that it is technically correct. They cannot guarantee that the functionality is error-free, only that it behaves as its developers intended.
For explosives experts: Minimize the effects of a change

Changing a frequently used feature can have a serious impact on my entire system; the blast radius of the change is correspondingly large. With appropriate automated tests, I can determine the blast radius of a change at an early stage and react immediately.
For publishers: Avoiding trivial mistakes

Repeatedly executing code means that trivial programming errors never find their way into production in the first place. A null pointer exception right at the start of processing? I discover it before my users do. An array index-out-of-bounds exception because I'm accessing an index incorrectly? Found before I even check in the code.
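A hedged example, assuming JUnit 5 and a made-up firstName helper: two tiny tests that catch exactly these classics before check-in.

```java
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class FirstNameTest {

    @Test
    void returnsPlaceholderForMissingNames() {
        // Would blow up with a NullPointerException without the null guard below.
        assertEquals("unknown", firstName(null));
        assertEquals("unknown", firstName(List.of()));
    }

    @Test
    void returnsFirstEntryWhenPresent() {
        // An off-by-one such as names.get(1) would fail this assertion.
        assertEquals("Ada", firstName(List.of("Ada", "Grace")));
    }

    // Hypothetical production code under test.
    static String firstName(List<String> names) {
        if (names == null || names.isEmpty()) {
            return "unknown";
        }
        return names.get(0);
    }
}
```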
For goal-conscious people: Find mistakes earlier and faster

Code coverage is a valuable tool for locating an error. If I can reproduce an error with an automated test, I can use code coverage to isolate its source.
Whether troubleshooting is just as quick is another matter.
For sleuths: make debugging easier

Can I reproduce the error with an automated test? Perfect! I start the test in debug mode and step through the code until I can see where the error occurs.
This usually only works for unit tests. It is often difficult for me to debug in an integration test.
For effective people: Using time effectively

Automation saves time and effort when tests are repeated. I can use the time gained for the exciting parts of being a developer or a tester, for example by writing code or other automated tests.
For people in a hurry: Shorter development time

If I have to start my distributed application completely, with all its components, for an automated test, it costs time; a lot of time if I have to do that frequently. If I only have to start part of it to test automatically, I get feedback earlier and can react to errors earlier. Overall, I'm faster when testing gives me frequent feedback.
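A sketch of the idea, assuming JUnit 5 and hypothetical OrderService and PriceService types: the remote price service is replaced by a hand-rolled stub, so the test starts only the piece under test instead of the whole distributed system.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// The remote dependency, reduced to the contract the order logic needs.
interface PriceService {
    double priceFor(String articleId);
}

// The piece under test.
class OrderService {
    private final PriceService prices;

    OrderService(PriceService prices) {
        this.prices = prices;
    }

    double total(String articleId, int quantity) {
        return prices.priceFor(articleId) * quantity;
    }
}

class OrderServiceTest {

    @Test
    void calculatesTotalWithoutStartingTheRealPriceService() {
        PriceService stub = articleId -> 9.99; // fixed answer, no network, no deployment
        OrderService orders = new OrderService(stub);
        assertEquals(29.97, orders.total("A-1", 3), 0.001);
    }
}
```

The feedback arrives in milliseconds instead of after a full environment start.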
For structural engineers: increasing software stability

Software that is covered by many good tests is potentially more stable than a system that has been tested manually or not tested at all. The more frequently a branch is run with various parameters, the higher the probability that it will remain stable in production. However, this is no guarantee.
Anyone who makes assumptions about which functionality is actually exercised should test those assumptions and thereby contribute to the stability of that functionality.

For thought leaders: supporting modularization

A system that is built from modules must be tested not only at the unit level but also at the module level [3]. The granularity of these module tests is generally coarser than that of unit tests. If module tests are written and used from the start, this usually reinforces the modular character of the affected functionality almost automatically, and the opportunity to create reusable functionality increases.
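A sketch of a module-level test, assuming JUnit 5 and a hypothetical billing module: the test talks only to the module's public facade, so the internal classes can change freely as long as the facade's behavior is preserved.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Coarser than a unit test: only the module's public entry point is used.
class BillingModuleTest {

    @Test
    void grossAmountIncludesTax() {
        BillingFacade billing = new BillingFacade();
        assertEquals(119.0, billing.grossAmount(100.0), 0.001);
    }
}

// Hypothetical public facade of the billing module.
class BillingFacade {
    double grossAmount(double netAmount) {
        return new TaxCalculator().addTax(netAmount); // internal collaborator, not tested directly
    }
}

// Internal detail of the module; free to change without breaking the module test.
class TaxCalculator {
    double addTax(double net) {
        return net * 1.19;
    }
}
```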
For self-employed people: Independent code

Code that is isolated and has few dependencies tends to be easier to test automatically. When independent code is a declared goal, automated testing helps. Lots of testing, at the level [4] at which I'm striving for independence.
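A small example of the idea, assuming a made-up WeekendRule: pulling the system clock out of the decision leaves a pure function with no dependencies, which is trivial to test at exactly the level where independence is wanted.

```java
import java.time.LocalDate;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hard to test: a variant that calls LocalDate.now() internally depends on the system clock.
// Independent: the same decision as a pure function of its input.
class WeekendRule {
    static boolean isWeekend(LocalDate date) {
        return date.getDayOfWeek().getValue() >= 6; // 6 = Saturday, 7 = Sunday
    }
}

class WeekendRuleTest {

    @Test
    void saturdayIsAWeekend() {
        assertTrue(WeekendRule.isWeekend(LocalDate.of(2024, 1, 6)));
    }

    @Test
    void mondayIsNotAWeekend() {
        assertFalse(WeekendRule.isWeekend(LocalDate.of(2024, 1, 8)));
    }
}
```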
For truth-conscious people: Documenting current behavior

Tests don't lie, at least while they are running: they show exactly the behavior of the current implementation of a function.
If I don't run a test, I have no information as to whether the test still fits my current implementation. That's why I run automated tests whenever it makes sense. And that may well be 200 times a day, because the truth may have changed quietly in the meantime.
For multiple personalities: testing on different target platforms

Does my service run on a mobile phone or on a mainframe? That must have no effect on whether my automated tests can run. Ideally, they run unchanged in every environment. Sometimes I have to do something to make that possible, such as installing a testing framework on a new target platform and making configuration adjustments for it. Then, please, my tests should run there as well.
For dreamers: Unachievable goals for automated testing

Automated testing does not achieve, or does not completely achieve, the following goals:
- Find unknown errors
- Fix bad design
- Free tests
- Improve code quality
- Solving resource issues
- Gray Failure Support
- Faultlessness
- Full testing
We will address all of these points in detail in a later blog post.
[1] This also applies to manual tests, which tend to become obsolete.
[2] The method is called a positive/negative test.
[3] A module can be a service, a subsystem, a component, or something similar. Accordingly, a module test is a test at the service, subsystem, or component level.
[4] class, interface, function, subsystem,...



