Crafting a Solid Foundation with Outside-In TDD: A Step-by-Step Guide

Last updated Nov 30, 2023 Published Dec 16, 2021

The content here is under the Attribution 4.0 International (CC BY 4.0) license

Test Driven Development is part of my daily work as a developer. I have already shared a few thoughts about it here, as well as strategies for testing legacy applications.

Recently I read the book Growing Object-Oriented Software, Guided by Tests and had a few ideas on how to approach things differently, thinking about testing end-to-end from the start. I relate this approach to the London school of TDD, also known as outside-in TDD (to some it is also known as double-loop TDD).

Even though the style is well known in the industry, a proper setup for it is not standardized - at least, I couldn’t find one. On one hand, it varies across programming languages: for example, Java uses JUnit and Selenium, whereas in PHP it could be PHPUnit and Behat, and in javascript jest and cypress (or any combination of them). On the other hand, such a setup is rarely taken into account when deciding which style to choose.

In this blog post I am going to share the setup I have used and how I apply outside-in TDD in my projects. I used the “skeleton” approach because that is what I feel comfortable starting from when building something - much like the bootstrap that any framework provides.

This is the BDD cycle. Driving development from the outside in, starting with business-facing scenarios in Cucumber and working our way inward to the underlying objects with RSpec.

(Chelimsky, Astels, Helmkamp, North, Dennis, and Hellesoy, The RSpec Book [2])

Common ground

To get started with the setup, there is first a bit of history to go over regarding the TDD style we are aiming at here. Outside-in is known for starting with a broad acceptance test (from the outside, with no worries about the implementation) and, as soon as it fails (for the right reason), switching to the next, more specific test to start implementing the functionality needed to make the acceptance test pass. This is also known as the double TDD loop depicted in the GOOS book [1]; later, [2] published the rspec + cucumber approach to outside-in as a way to expand on BDD.
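The double loop can be summarised as follows (a tool-agnostic sketch):

```
OUTER LOOP (acceptance): write one failing end-to-end test (red)
  INNER LOOP (unit): failing unit test -> implement -> refactor
  ... repeat the inner loop, growing the implementation,
      until the outer acceptance test passes
OUTER LOOP: acceptance test green -> pick the next scenario
```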

The question is: what is the minimum setup needed to get started with outside-in?

To start answering this question, the approach I chose was to think about what I actually need for outside-in. The minimum requirements I could think of are:

  1. Being able to make and intercept HTTP requests of any kind (be it loading styles, javascript, requests to third-party APIs, and so on)
  2. Available documentation and wide adoption by the community
  3. It should allow writing tests without relying too much on implementation details

An extra nice-to-have would be to avoid switching testing frameworks, allowing both acceptance tests and unit tests to be written with the same one. However, I found this one to be a bit tricky, as some trade-offs need to be taken into account. For the time being I decided to postpone this kind of decision.

I noticed that outside-in means different things depending on who you ask. The common ground I found is that developers agree that “outside” means the part that is farthest away from the implementation. For example, asserting on text output, API responses, or browser elements - all of those state what we expect, but without saying how.

Cypress and testing library

One of the first ecosystems I used to get started with outside-in was javascript. The key aspects that made me choose cypress and testing library were the fact that they are popular and that I agree with their philosophy.

For example, cypress is built on nodejs and integrates with different browser vendors, making it the go-to project when we are talking about browser automation.

On the other hand, testing library grew in popularity due to the fact that it treats testing as it should be treated: focusing on the code’s behavior rather than on implementation details. This allows refactoring and changing the code without coupling it to the tests.

The folder structure I chose has no particular rationale; it is simply the one I felt most comfortable with (files like package.json, the folder public/ and others have been removed for readability):

├── cypress          -------------------|
│   ├── downloads                       | Under cypress is where the tests
│   ├── fixtures                        | are far away from implementation and
│   │   ├── 5kb.json                    | where I used the name acceptance to
│   │   ├── bigUnformatted.json         | depict that.
│   │   ├── example.json                | 
│   │   ├── formatted                   | 
│   │   │   └── 2kb.txt                 | 
│   │   └── unformatted                 | 
│   │       ├── 2kb.json                | 
│   │       └── generated.json          | 
│   ├── integration                     | 
│   │   └── acceptance.spec.ts          | Here the file has the first loop of
│   ├── plugins                         | outside-in. Writing this test failing
│   │   └── index.ts                    | first and then moving to the inner
│   ├── screenshots                     | loop.
│   ├── support                         | 
│   │   ├── commands.js                 | 
│   │   └── index.js                    | 
│   ├── tsconfig.json                   | 
│   └── videos                          | 
│       └── acceptance.spec.ts.mp4      | 
├── cypress.json     -------------------| 
├── src
│   ├── App.test.tsx
│   ├── App.tsx
│   ├── components
│   │   ├── Button.test.tsx    <----- 
│   │   ├── Button.tsx         <----- 
│   │   ├── JsonEditor.tsx     <----- source code and test under the same
│   │   ├── Label.test.tsx     <----- folder. Here is where we care about
│   │   ├── Label.tsx          <----- the implementation, double loop TDD.
│   │   └── tailwind.d.ts      <----- 
│   ├── core
│   │   ├── cleanUp.ts
│   │   ├── formater.test.ts
│   │   ├── formatter.ts
│   │   └── __snapshots__
│   │       └── formater.test.ts.snap
│   ├── index.scss
│   ├── index.tsx
│   ├── react-app-env.d.ts
│   ├── reportWebVitals.ts
│   ├── setupTests.ts
│   ├── __snapshots__
│   │   └── App.test.tsx.snap

For cypress, the folder structure is the default one generated on installation. Everything related to it lives inside the cypress/ folder.
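Because the integration/, fixtures/, plugins/ and support/ folders shown above are cypress’ pre-10 defaults, the cypress.json in the tree can stay minimal. A sketch of what it could contain (the baseUrl is an assumption for a local dev server):

```json
{
  "baseUrl": "http://localhost:3000"
}
```

With baseUrl set, the acceptance test can call cy.visit('/') without hard-coding the host.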

For testing library, I took another approach: keeping the test files in the same directory as the production code. Personally, I find it easier to get going on a daily basis with those together, instead of in a separate folder called tests, for two reasons:

  1. I don’t have to mirror the test structure against the source code (a 1-1 association)
  2. It makes it easier to keep a mental snapshot of the feature I am working on, as the tests sit alongside the code
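No extra configuration is needed for this co-location: jest’s default testMatch (which create-react-app inherits) already picks up test files anywhere under src/, whether co-located or in a __tests__ folder:

```json
{
  "testMatch": [
    "**/__tests__/**/*.[jt]s?(x)",
    "**/?(*.)+(spec|test).[jt]s?(x)"
  ]
}
```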

I am using this specific setup for the json-tool, a tool that makes formatting json easy with a privacy-first approach. The following snippet was extracted from acceptance.spec.ts, starting with the first loop of outside-in:

describe('json tool', () => {
  const url = '/';

  beforeEach(() => {
    cy.visit(url); // body reconstructed: the original listing was cut off here
  });

  describe('User interface information', () => {
    it('label to inform where to place the json', () => {
      cy.get('[data-testid="label-json"]').should('have.text', 'place your json here');
    });
  });

  describe('Basic behavior', () => {
    it('format valid json string', () => {
      // input step reconstructed; '{{}}' types a literal '{}' in cypress
      cy.get('[data-testid="json"]').type('{{}}');
      cy.get('[data-testid="result"]').should('have.value', '{}');
    });

    it('shows an error message when json is invalid', () => {
      cy.get('[data-testid="json"]').type('this is not a json');
      cy.get('[data-testid="result"]').should('have.value', 'this is not a json');
      cy.get('[data-testid="error"]').should('have.text', 'invalid json');
    });
  });
});

The next example is the implementation of the details we need to build in order to make the acceptance test pass. Keep in mind that, in the inner loop of outside-in, we might have tests distributed across different files (this is exactly what happened with the json-tool). The file App.test.tsx holds the specific details:

import { fireEvent, render, screen, act } from '@testing-library/react';
import App from './App';
import userEvent from '@testing-library/user-event';
import { Blob } from 'buffer';         // used by tests not shown in this listing
import Formatter from './core/formatter';

describe('json utility', () => {

  test('renders place your json here label', () => {
    render(<App />);
    const placeJsonLabel = screen.getByTestId('label-json');
    expect(placeJsonLabel).toBeInTheDocument();
  });

  test('error message is hidden by default', () => {
    render(<App />);
    const errorLabel = screen.queryByTestId(/error/);
    expect(errorLabel).toBeNull();
  });

  test('inform error when json is invalid', async () => {
    render(<App />);

    const editor = screen.getByTestId('json');

    await act(async () => {
      fireEvent.change(editor, { target: { value: 'bla bla' } });
    });

    const result = screen.getByTestId('error');

    expect(result.innerHTML).toEqual('invalid json');
  });

  test.each([
    ['{"name" : "json from clipboard"}', '{"name":"json from clipboard"}'],
    ['    {"name" : "json from clipboard"}', '{"name":"json from clipboard"}'],
    ['    {"name" : "json    from   clipboard"}', '{"name":"json    from   clipboard"}'],
    ['    { "a" : "a", "b" : "b" }', '{"a":"a","b":"b"}'],
    ['{ "a" : true,         "b" : "b" }', '{"a":true,"b":"b"}'],
    ['{ "a" : true,"b" : 123 }', '{"a":true,"b":123}'],
    ['{"private_key" : "-----BEGIN PRIVATE KEY-----\nMIIEvgI\n-----END PRIVATE KEY-----\n" }', '{"private_key":"-----BEGIN PRIVATE KEY-----\nMIIEvgI\n-----END PRIVATE KEY-----\n"}'],
    // (a longer multi-line fixture with a full service-account json omitted for readability)
    ['{"key with spaces" : "json from clipboard"}', '{"key with spaces":"json from clipboard"}'],
  ])('should clean json white spaces', async (inputJson: string, desiredJson: string) => {
    render(<App />);

    const editor = screen.getByTestId('json');

    await act(async () => {
      userEvent.paste(editor, inputJson);
    });

    await act(async () => {'clean-spaces'));
    });

    const result = screen.getByTestId('result');

    expect(result).toHaveValue(desiredJson);
  });
});

The key takeaway here is the difference between the outer loop and the inner loop when doing outside-in: starting in a more generic way and then going down into the details, as I hope the tests depict.

It is worth mentioning that, as far as I am aware, there is no consensus on how many tests you should have of each type. That said, google recommends a 70/20/10 split: 70% unit tests, 20% integration tests, and 10% end-to-end tests. Further inspection is needed to see whether this suggested setup achieves such metrics.


  1. [1] S. Freeman and N. Pryce, Growing Object-Oriented Software, Guided by Tests. Pearson Education, 2009.
  2. [2] D. Chelimsky, D. Astels, B. Helmkamp, D. North, Z. Dennis, and A. Hellesoy, The RSpec Book: Behaviour-Driven Development with RSpec, Cucumber, and Friends. Pragmatic Bookshelf, 2010.