You're wrong if you think being a good developer means writing tests for every single line of code you write.
You're wrong if you think 100% test coverage means you can ship your software with confidence.
Let's dig into the purpose of testing and the confidence it gives you when shipping your software. But first, I need to clarify what I mean by testing in this context.
Types of testing
There are multiple ways to make sure your software works before shipping it. You can either manually test it, or write an automated test that does the testing for you.
Manual testing means using the software yourself to make sure it works. There are pros and cons to this approach.
Pros:
- It verifies the software works the way your users will use it
- You can test in an environment where nothing (no service, no internal software) gets mocked
Cons:
- You won't just have to test the feature you ship, but every existing feature, to make sure no bugs were introduced when you implemented the new feature
- Because it is time-consuming and draining, it slows the pace of development, which will likely reduce customer satisfaction
Automated testing means writing code, a test, that checks your software works. There are different types of automated tests:
- Unit: Tests a single part of the software
- Integration: Tests how multiple parts of the software work together
- E2E: Tests that resemble a real user, run in an environment as close to the real one as possible
Let's take a look at some of the pros and cons.
Pros:
- Takes far less time than manual testing
- You get feedback during development on whether your feature, and the existing ones, work
- The tests themselves can serve as documentation, helping other team members understand what the code should do
- You can refactor your code with confidence, because the tests will tell you when a behavior of the software no longer does what is expected
Cons:
- You will have to mock out parts of the software, depending on the type of test you write
- Manual testing is still required depending on what you're developing. For instance, if you're working on something that sends a message to a phone, you probably don't want to mock that part out; you want to make sure it actually works
- Writing the wrong tests can lead to more pain and less confidence. By wrong tests, I mean tests that check the implementation details of the code rather than its behavior: they fail often even when the behavior hasn't changed, and when the behavior actually is wrong, they may miss it
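To make the behavior-versus-implementation-details point concrete, here's a minimal sketch using a hypothetical `Cart` class (the class and test names are mine, purely for illustration):

```python
# Hypothetical example: a cart whose total we want to verify.
class Cart:
    def __init__(self):
        self._items = []  # internal detail: a list of (name, price) tuples

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

# Behavior test: survives refactoring, fails only when the behavior breaks.
def test_total_reflects_added_items():
    cart = Cart()
    cart.add("book", 10)
    cart.add("pen", 2)
    assert cart.total() == 12

# Implementation-detail test: breaks if we swap the list for a dict,
# even though the behavior (the total) would be unchanged.
def test_items_are_stored_as_a_list():
    cart = Cart()
    cart.add("book", 10)
    assert cart._items == [("book", 10)]
```

The first test keeps passing no matter how `Cart` stores its items; the second one chains the test suite to one particular internal representation.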
The main reason we write tests is to get confidence that our software works as it should before we ship it to our customers.
This is important, because in the end, what really matters is customer satisfaction. If our software isn't reliable and consistently introduces new bugs, our customers will likely be frustrated. This can lead to negative outcomes:
- The customer switches to one of our competitors' software
- The customer speaks ill of our software publicly
Gaining confidence before shipping software is extremely important:
- Every new feature we implement needs to work as intended
- Every time we ship a new feature, we need to make sure the existing features still work
With manual testing, it is hard to get quick feedback during development on whether the feature works, let alone whether the entire existing software still does.
For this reason, most if not all of our testing should be automated.
As mentioned before, manual testing will still be necessary for the parts you can't confidently verify any other way.
Let's dig into the various ways of writing automated tests and discuss the pros and cons.
Unit testing is when you test a single unit of software, a single part. An example would be testing a function:
- Given an input, it should return the expected output
Unit tests are often small and very scoped, testing a single & specific part of the software.
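As an illustration, here's what a minimal unit test might look like for a hypothetical `slugify` function (plain `assert`s here; in practice you'd run these with a test runner such as pytest):

```python
def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# Given an input, assert the expected output.
def test_slugify_joins_words_with_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_slugify_lowercases():
    assert slugify("READ Me") == "read-me"
```

Each test exercises exactly one unit, with no other part of the software involved.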
Integration testing is a level higher than unit testing. It is when you test behavior that spans multiple parts of the software; you could also say it spans multiple units.
An example would be testing a component (frontend):
- The user clicks a button inside the component, which triggers some logic that in turn updates the component's content. We can assert how the component looks before and after the interaction.
Here we aren't testing every single piece involved separately, but instead, we're testing them when they are being used.
This is better than unit testing because we're closer to real usage. We aren't testing the pieces in isolation; we're testing them as they're used. This makes sense, because in real software the pieces aren't used standalone but as parts of larger wholes.
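A backend-flavored sketch of the same idea, with two hypothetical units (a repository and a service) exercised together through the service's public behavior rather than tested piece by piece:

```python
# Two units: a repository and a service that uses it.
class InMemoryUserRepo:
    def __init__(self):
        self._users = {}

    def save(self, name):
        self._users[name] = {"name": name}

    def find(self, name):
        return self._users.get(name)

class SignupService:
    def __init__(self, repo):
        self.repo = repo

    def signup(self, name):
        if self.repo.find(name) is not None:
            raise ValueError("user already exists")
        self.repo.save(name)

# Integration test: both units run together; we assert on the
# observable behavior, not on either unit in isolation.
def test_signup_rejects_duplicate_users():
    service = SignupService(InMemoryUserRepo())
    service.signup("ada")
    try:
        service.signup("ada")
        assert False, "expected a duplicate-user error"
    except ValueError:
        pass
```

Note that the repository is a real (if in-memory) implementation, not a mock: the test covers how the two pieces actually collaborate.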
E2E (end-to-end) testing is the closest thing to manual testing. You test the way your real users behave, by having an automated test use your product in a real environment. You often end up mocking very little, if anything, inside E2E tests, which gives you a lot of confidence.
For example, if you're building a website:
- The test opens a browser and runs there, interacting with the site the way a user would.
E2E tests are slower than unit and integration tests because of all the resources involved.
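For a website you'd typically drive a real browser with a tool like Playwright or Selenium; as a self-contained sketch of the same idea, here's an end-to-end test of a tiny hypothetical HTTP service, where a real server is started and a real request is made, with nothing mocked:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny "product": an HTTP endpoint that greets the caller.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def test_homepage_greets_the_user():
    server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        url = f"http://127.0.0.1:{server.server_port}/"
        # A real network request against the real server, no mocks.
        with urllib.request.urlopen(url) as response:
            assert response.status == 200
            assert response.read() == b"hello"
    finally:
        server.shutdown()
```

Starting a server (or a browser) per test is exactly why E2E tests cost more in time and resources than unit or integration tests.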
- It is fine to write many E2E tests.
- Be aware that E2E tests can take a while to run, but that is fine until it becomes a problem for your team.
- Do NOT default the mindset to: E2E tests are slow, therefore we should avoid them.
- E2E tests give you the most confidence when shipping your software compared to unit and integration tests.
- Try to extend existing E2E tests before adding new ones whenever you need to cover behaviors of the software.
What about snapshot testing?
You may have heard of snapshot testing. I'd describe it as testing code by keeping a record of what the code, e.g. a particular function, outputs; whenever the output changes, the test fails.
I'll be honest: I have seen very few cases where snapshot testing makes sense, and over a dozen where it was just a burden and never really gave any confidence.
I would generally advise avoiding it. The one case where I've found it helpful is regression testing: when a function performs a lot of complicated logic and you discover a bug, it can make sense to snapshot the corrected output so the bug can't silently return. It isn't necessary, but it's one place where snapshot testing pays off.
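A minimal sketch of that regression use case, with a hypothetical `build_report` function (in a real suite the snapshot would live in a file and be updated deliberately, not inlined like this):

```python
import json

# Hypothetical complicated function we want to pin down after fixing a bug.
def build_report(orders):
    totals = {}
    for order in orders:
        totals[order["customer"]] = totals.get(order["customer"], 0) + order["amount"]
    return {"totals": totals, "count": len(orders)}

# The "snapshot": the recorded output at the time the bug was fixed.
SNAPSHOT = json.dumps(
    {"totals": {"ada": 30, "bob": 5}, "count": 3}, sort_keys=True
)

def test_report_matches_snapshot():
    orders = [
        {"customer": "ada", "amount": 10},
        {"customer": "ada", "amount": 20},
        {"customer": "bob", "amount": 5},
    ]
    actual = json.dumps(build_report(orders), sort_keys=True)
    assert actual == SNAPSHOT  # fails whenever the output changes
```

The trade-off is visible even here: the test fails on any output change, intended or not, which is why it works best as a pin against one specific regression.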
When and always?
We write tests to gain confidence, before shipping, that our software will work when our users use it.
This raises another question: when should we write tests, and should we always write them?
The answer: It depends.
I know it is not a satisfying answer, but it really depends on what your software does and the domain you're in. Keep in mind, though, that it all ties back to confidence, and that the feedback loop during development matters too.
Sometimes I don't write tests. The reason is often that the test wouldn't add any confidence beyond what I already have. This statement is vague on purpose, because it really does depend on your product, use case, domain, and software.
However, for 99% of the cases where new behavior is introduced, I make sure to test it, most of the time following Test-Driven Development (TDD).
I hope at least this part of the article shines a light on the importance of being open-minded: there are no absolutes, and it is a case-by-case thing.
Testing is extremely important. Writing the right tests, the ones that give you the most confidence, matters even more.