Crafting a Solid Foundation with Outside-In TDD: A Step-by-Step Guide
The content here is under the Attribution 4.0 International (CC BY 4.0) license
Test-Driven Development is part of my daily work as a developer. I have already shared a few thoughts about it here, as well as strategies for testing legacy applications. Recently I read the book Growing Object-Oriented Software, Guided by Tests and had a few ideas on how to approach things differently: thinking about testing from end to end, from the start. I relate this approach to the London school of TDD, also known as outside-in TDD (for some it is also known as double-loop TDD).
Even though the style is well known in the industry, there is no standardized setup for it - at least, I couldn't find one. On the one hand, the tooling varies across programming languages: Java uses JUnit and Selenium, PHP might use PHPUnit and Behat, and JavaScript Jest and Cypress (or any combination of them). On the other hand, such a setup is rarely taken into account when deciding which style to choose.
In this blog post I am going to share the setup I have used and how I apply outside-in TDD in my projects. I went with the “skeleton” approach because that is what I feel comfortable with when building something from scratch; I relate it to the bootstrap that any framework provides.
This is the BDD cycle. Driving development from the outside in, starting with business-facing scenarios in Cucumber and working our way inward to the underlying objects with RSpec.
- Chelimsky, Astels, Helmkamp, North, Dennis, and Hellesoy, The RSpec Book [2]
Common ground
To get started with the setup, there is first a bit of history to go over regarding the TDD style we are aiming at. Outside-in is known for starting with a broad acceptance test (from the outside, with no worries about the implementation); as soon as it fails (for the right reason), we switch to the next test - more specifically, we start implementing the functionality needed to make the acceptance test pass.
Outside-in and inside-out?
In A Gentle Introduction to TDD, the different styles of TDD are presented in more detail. Usually, newcomers to TDD start with inside-out and later on move to outside-in.
This is also known as the double TDD loop, depicted in the GOOS book [1]; later, [2] published the RSpec + Cucumber approach to outside-in as a way to expand on BDD. The question is: what is the minimum setup to get started with outside-in?
To start answering this question, the approach I chose was to think about what I need to get started with outside-in. The minimum requirements I could think of are:
- Be able to make and intercept HTTP requests of any kind (be it loading stylesheets, JavaScript, requests to third-party APIs, and so on; see the sketch below)
- Available documentation and wide adoption by the community
- It should allow writing tests without relying too much on implementation details
An extra nice-to-have would be to avoid switching testing frameworks, so that both acceptance tests and unit tests could be written with the same one. I found this requirement a bit tricky, as some trade-offs need to be taken into account; for the time being, I decided to postpone this kind of decision.
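Coming back to the first requirement: in Cypress, for example, cy.intercept can stub or observe any HTTP request the application makes. Below is a minimal sketch - the route, alias, and response body are made up for illustration and are not part of any real project:

// Minimal sketch: cy.intercept stubs a third-party API call so the
// acceptance test does not depend on the network. The /api/format route,
// the alias, and the response body are made up for illustration.
it('keeps working while the external API is stubbed', () => {
  cy.intercept('GET', '/api/format', {
    statusCode: 200,
    body: { formatted: '{}' },
  }).as('format');

  cy.visit('/');
  cy.wait('@format'); // fails fast if the request never happens
});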
I noticed that outside-in means different things depending on who you ask. The common ground I found is that developers agree that “outside” means the part that is furthest away from the implementation. For example, asserting on text output, API responses, and browser elements - all of those state what we expect, without saying how.
Cypress and Testing Library
One of the first ecosystems in which I got started with outside-in was JavaScript. One of the key aspects that made me choose Cypress and Testing Library was the fact that they are popular and I agree with their philosophy. For example, Cypress is built on Node.js and integrates with different browser vendors, making it the go-to project when we are talking about browser automation.
On the other hand, Testing Library grew in popularity because it treats testing as it should be treated: focusing on the code's behavior rather than on implementation details, which allows refactoring and changing the code without coupling it to the tests. The folder structure I chose has no particular reason behind it; it is simply the one I felt most comfortable with (files like package.json, the public/ folder and others have been removed for readability):
├── cypress -------------------|
│ ├── downloads | Under cypress is where the tests
│ ├── fixtures | are far away from implementation and
│ │ ├── 5kb.json | where I used the name acceptance to
│ │ ├── bigUnformatted.json | depict that.
│ │ ├── example.json |
│ │ ├── formatted |
│ │ │ └── 2kb.txt |
│ │ └── unformatted |
│ │ ├── 2kb.json |
│ │ └── generated.json |
│ ├── integration |
│ │ └── acceptance.spec.ts | Here the file has the first loop of
│ ├── plugins | outside-in. Writing this test failing
│ │ └── index.ts | first and then moving to the inner
│ ├── screenshots | loop.
│ ├── support |
│ │ ├── commands.js |
│ │ └── index.js |
│ ├── tsconfig.json |
│ └── videos |
│ └── acceptance.spec.ts.mp4 |
├── cypress.json -------------------|
├── src
│ ├── App.test.tsx
│ ├── App.tsx
│ ├── components
│ │ ├── Button.test.tsx <-----
│ │ ├── Button.tsx <-----
│ │ ├── JsonEditor.tsx <----- source code and test under the same
│ │ ├── Label.test.tsx <----- folder. Here is where we care about
│ │ ├── Label.tsx <----- the implementation, double loop TDD.
│ │ └── tailwind.d.ts <-----
│ ├── core
│ │ ├── cleanUp.ts
│ │ ├── formater.test.ts
│ │ ├── formatter.ts
│ │ └── __snapshots__
│ │ └── formater.test.ts.snap
│ ├── index.scss
│ ├── index.tsx
│ ├── react-app-env.d.ts
│ ├── reportWebVitals.ts
│ ├── setupTests.ts
│ ├── __snapshots__
│ │ └── App.test.tsx.snap
For Cypress, the folder structure is the default one created during installation; everything related lives inside the cypress/ folder. For Testing Library, I took a different approach: the test files sit in the same directory as the production code. I found it easier to get going daily with those together, instead of in a separate folder called tests, for two reasons:
- I don’t have to mirror the test structure with the source code (1-1 association)
- It makes it easier to keep a mental snapshot of the feature I am working on, as the tests live alongside the code
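This colocation pairs well with the Testing Library philosophy described above: tests query what the user perceives (roles, labels, visible text) instead of CSS classes or internal state. A minimal sketch, using a made-up Greeting component (it assumes the jest-dom matchers loaded by setupTests.ts):

import { render, screen } from '@testing-library/react';

// Made-up component, only here to illustrate the query style.
function Greeting() {
  return <button>say hello</button>;
}

test('queries by what the user sees, not how it is implemented', () => {
  render(<Greeting />);
  // getByRole survives refactorings that rename classes or reshuffle the DOM.
  expect(screen.getByRole('button', { name: /say hello/i })).toBeInTheDocument();
});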
I am using this specific setup for the json-tool, a privacy-first tool that makes formatting JSON easy. The following snippet was extracted from acceptance.spec.ts; it is the starting point for the first loop of outside-in:
describe('json tool', () => {
  const url = '/';

  beforeEach(() => {
    cy.visit(url);
  });

  describe('User interface information', () => {
    it('label to inform where to place the json', () => {
      cy.get('[data-testid="label-json"]').should('have.text', 'place your json here');
    });
  });

  describe('Basic behavior', () => {
    it('format valid json string', () => {
      // curly braces are special to cy.type(); {{}} types a literal {}
      cy.get('[data-testid="json"]').type('{{}}');
      cy.get('[data-testid="result"]').should('have.value', '{}');
    });

    it('shows an error message when json is invalid', () => {
      cy.get('[data-testid="json"]').type('this is not a json');
      cy.get('[data-testid="result"]').should('have.value', 'this is not a json');
      cy.get('[data-testid="error"]').should('have.text', 'invalid json');
    });
  });
});
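For reference, selectors such as [data-testid="label-json"] assume the components expose matching data-testid attributes. A hypothetical sketch of such markup (not the actual App.tsx) could look like this:

// Hypothetical sketch: the data-testid attributes are the only contract
// the acceptance test relies on - not tags, classes, or layout.
function App() {
  return (
    <main>
      <label data-testid="label-json">place your json here</label>
      <textarea data-testid="json" />
      <textarea data-testid="result" readOnly />
      {/* rendered only when parsing fails */}
      <span data-testid="error">invalid json</span>
    </main>
  );
}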
The next example is the implementation of the details we need to build in order to make the acceptance test pass. Keep in mind that in the inner loop of outside-in we might have tests distributed across different files (this is exactly what happened with the json-tool). The file App.test.tsx holds the specific details in its tests:
import { fireEvent, render, screen, act } from '@testing-library/react';
import App from './App';
import userEvent from '@testing-library/user-event';
import { Blob } from 'buffer';
import Formatter from './core/formatter';

describe('json utility', () => {
  test('renders place your json here label', () => {
    render(<App />);
    const placeJsonLabel = screen.getByTestId('label-json');
    expect(placeJsonLabel).toBeInTheDocument();
  });

  test('error message is hidden by default', () => {
    render(<App />);
    const errorLabel = screen.queryByTestId(/error/);
    expect(errorLabel).toBeNull();
  });

  test('inform error when json is invalid', async () => {
    render(<App />);
    const editor = screen.getByTestId('json');

    await act(async () => {
      fireEvent.change(editor, { target: { value: 'bla bla' } });
    });

    const result = screen.getByTestId('error');
    expect(result.innerHTML).toEqual('invalid json');
  });

  test.each([
    ['{"name" : "json from clipboard"}', '{"name":"json from clipboard"}'],
    [' {"name" : "json from clipboard"}', '{"name":"json from clipboard"}'],
    ['  {"name" : "json from clipboard"}', '{"name":"json from clipboard"}'],
    ['  { "a" : "a", "b" : "b" }', '{"a":"a","b":"b"}'],
    ['{ "a" : true, "b" : "b" }', '{"a":true,"b":"b"}'],
    ['{ "a" : true,"b" : 123 }', '{"a":true,"b":123}'],
    ['{"private_key" : "-----BEGIN PRIVATE KEY-----\nMIIEvgI\n-----END PRIVATE KEY-----\n" }', '{"private_key":"-----BEGIN PRIVATE KEY-----\nMIIEvgI\n-----END PRIVATE KEY-----\n"}'],
    [`{
"type": "aaaa",
"project_id": "any",
"private_key_id": "111111111111111111",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvgIBADG9w0BAQEFAASCBKgwggSkiEus62eZ\n-----END PRIVATE KEY-----\n",
"client_email": "banana@banana",
"client_id": "999",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/"
}`, `{
"type":"aaaa",
"project_id":"any",
"private_key_id":"111111111111111111",
"private_key":"-----BEGIN PRIVATE KEY-----\nMIIEvgIBADG9w0BAQEFAASCBKgwggSkiEus62eZ\n-----END PRIVATE KEY-----\n",
"client_email":"banana@banana",
"client_id":"999",
"auth_uri":"https://accounts.google.com/o/oauth2/auth",
"token_uri":"https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url":"https://www.googleapis.com/robot/v1/metadata/x509/"
}`
    ],
    ['{"key with spaces" : "json from clipboard"}', '{"key with spaces":"json from clipboard"}'],
  ])('should clean json white spaces', async (inputJson: string, desiredJson: string) => {
    render(<App />);
    const editor = screen.getByTestId('json');

    await act(async () => {
      userEvent.paste(editor, inputJson);
    });

    await act(async () => {
      userEvent.click(screen.getByTestId('clean-spaces'));
    });

    const result = screen.getByTestId('result');
    expect(editor).toHaveValue(inputJson);
    expect(result).toHaveValue(desiredJson);
  });
});
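Looking at the table of cases above, the inner loop eventually forces a whitespace cleaner into existence: spaces and tabs outside of string literals are dropped, while newlines and the contents of quoted strings are kept. The following is a sketch of one way to satisfy those cases; it is not the json-tool's actual formatter.ts:

// Sketch of a cleaner matching the cases above: it drops spaces and tabs
// outside of string literals, keeps newlines, and leaves the contents of
// quoted strings untouched. Not the json-tool's actual implementation.
export function cleanSpaces(input: string): string {
  let result = '';
  let inString = false;

  for (let i = 0; i < input.length; i++) {
    const char = input[i];

    // An unescaped double quote toggles whether we are inside a string.
    if (char === '"' && input[i - 1] !== '\\') {
      inString = !inString;
    }

    // Outside of strings, spaces and tabs are formatting noise; newlines
    // are kept, matching the multi-line expectation in the table above.
    if (!inString && (char === ' ' || char === '\t')) {
      continue;
    }

    result += char;
  }

  return result;
}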
The key takeaway here is the difference between the outer loop and the inner loop when doing outside-in: starting in a generic way and working down into the details, as I hope the tests depict. It is worth mentioning that, as far as I am aware, there is no consensus on how many tests you should have of each type; Google, however, recommends a 70/20/10 split: 70% unit tests, 20% integration tests, and 10% end-to-end tests. Further inspection is needed to see whether the setup suggested here achieves such numbers.
References
- [1] S. Freeman and N. Pryce, Growing Object-Oriented Software, Guided by Tests. Pearson Education, 2009.
- [2] D. Chelimsky, D. Astels, B. Helmkamp, D. North, Z. Dennis, and A. Hellesoy, The RSpec Book: Behaviour-Driven Development with RSpec, Cucumber, and Friends. Pragmatic Bookshelf, 2010.