TDD anti-patterns - the liar, excessive setup, the giant, slow poke - with code examples in javascript, kotlin and php

Last updated Aug 20, 2022 Published Aug 28, 2021

The content here is under the Attribution 4.0 International (CC BY 4.0) license

Test Driven Development (TDD) has been adopted by developers and teams as a way to deliver quality code through short feedback cycles in a fail-pass-refactor fashion. Kent Beck [1] popularized the methodology, which became a standard that the industry follows and builds on top of. However, getting started with TDD is not easy, and keeping it sharp so that the test suite runs fast is even harder. Code bases that tackle business problems require a non-trivial amount of code, and with that, a non-trivial set of test cases. As [2] describes, the industry has an immature test automation process, leading to slow feedback.

Do you prefer this content in video format? The video is available on demand on Livestorm.

James Carr came up with a list of 22 anti-patterns to keep an eye on and under control, to avoid the testing trap that extensive code bases might fall into. Dave Farley went through a few of them on his YouTube channel, and that made me realize that I had to spread the word on this subject as well. This post is inspired by Dave Farley's video and also served as a guide for my talk on this matter.

NOTE 1: TDD anti-patterns are often mixed up with test smells [3]; in some cases their definitions overlap, but in my opinion they are different subjects.

NOTE 2: Even though James Carr's list has 22 anti-patterns, Yegor Bugayenko describes 23 of them in his lecture, and the book Testing with JUnit describes 27.

NOTE 3: The follow-up posts are available at:

The liar

The liar is one of the most common anti-patterns that I can recall from my professional life practicing TDD. I would list at least these two reasons for spotting such an issue in code bases:

  1. Async oriented test cases
  2. Time oriented test cases

The first one is well explained in the official jest documentation. Testing asynchronous code becomes tricky, as it is based on a future value that you might receive, or might not [4]. The following code is reproduced from the official jest documentation (the docs even state in a code comment not to use it).

// Don't do this!
test('the data is peanut butter', () => {
  function callback(data) {
    expect(data).toBe('peanut butter');
  }

  fetchData(callback);
});

Getting back to the anti-pattern: this test would pass without complaining, you know, the liar. The correct approach is to wait for the async function to finish its execution and give jest control over the execution flow again.

test('the data is peanut butter', done => {
  function callback(data) {
    try {
      expect(data).toBe('peanut butter');
      done(); // <------------------------ invokes jest flow again, saying: "look I am ready now!"
    } catch (error) {
      done(error);
    }
  }

  fetchData(callback);
});

For the second one, Martin Fowler elaborates on the reasons why that is the case [5], and here I would like to share some opinions that go along with what he wrote.

Asynchronicity is a source of non-determinism and we should be careful with it. Besides that, threads are another thing to keep an eye on.

On the other hand, time-oriented tests can sometimes fail for no apparent reason. I have experienced test suites that failed because the tests weren't mocking dates. As a result, on the day the code was written the test passed, but the following day it broke. You know, the liar.
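As a minimal sketch in plain JavaScript (the function and names here are illustrative, not taken from any project mentioned above), making "today" an injectable parameter is one way to keep a date-dependent test deterministic:

```javascript
// A date-dependent function: defaulting to the real clock keeps production
// callers unchanged, while tests can inject a fixed date.
function reportLabel(now = new Date()) {
  const month = String(now.getMonth() + 1).padStart(2, '0');
  return `report-${now.getFullYear()}-${month}`;
}

// The test pins the date instead of relying on when it happens to run:
const fixed = new Date(2021, 7, 28); // Aug 28, 2021 (months are zero-indexed)
console.log(reportLabel(fixed)); // 'report-2021-08'
```

With the real clock as a default, the liar disappears: the assertion is against a date the test controls, not against whatever day the suite happens to run.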

The liar - root causes

  1. Lack of practice on TDD.
  2. Oriented to coverage.

Excessive setup

I relate excessive setup to not practicing TDD from the start and also to the lack of practicing object calisthenics - I think it would also apply to functional programming, but for the time being I will stick with object-oriented programming.

The classic case of excessive setup is when you want to test a specific behavior in your code but it becomes difficult due to the many dependencies you have to set up first (dependencies such as classes).
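One common way out, sketched here in plain JavaScript with hypothetical names, is a test-data factory that owns the defaults, so each test states only the part it cares about:

```javascript
// Hypothetical factory: defaults live in one place; tests override only
// the fields relevant to the behavior under test.
function buildOptions(overrides = {}) {
  return {
    srcDir: '/var/nuxt/src',
    buildDir: '/var/nuxt/build',
    router: { base: '/' },
    ...overrides
  };
}

// One line of setup instead of repeating the whole object graph:
const options = buildOptions({ router: { base: '/foo/' } });
console.log(options.router.base); // '/foo/'
```

The factory does not remove the dependencies, but it centralizes them, so moving a test to another file no longer drags the full setup along.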

The following code depicts a test case from the nuxtjs framework that illustrates this. The test file for the server starts with a few mocks, and then it goes on until the beforeAll hook, which has even more setup work (the mocks).

jest.mock('compression')
jest.mock('connect')
jest.mock('serve-static')
jest.mock('serve-placeholder')
jest.mock('launch-editor-middleware')
jest.mock('@nuxt/utils')
jest.mock('@nuxt/vue-renderer')
jest.mock('../src/listener')
jest.mock('../src/context')
jest.mock('../src/jsdom')
jest.mock('../src/middleware/nuxt')
jest.mock('../src/middleware/error')
jest.mock('../src/middleware/timing')
 
describe('server: server', () => {
  const createNuxt = () => ({
    options: {
      dir: {
        static: 'var/nuxt/static'
      },
      srcDir: '/var/nuxt/src',
      buildDir: '/var/nuxt/build',
      globalName: 'test-global-name',
      globals: { id: 'test-globals' },
      build: {
        publicPath: '__nuxt_test'
      },
      router: {
        base: '/foo/'
      },
      render: {
        id: 'test-render',
        dist: {
          id: 'test-render-dist'
        },
        static: {
          id: 'test-render-static',
          prefix: 'test-render-static-prefix'
        }
      },
      server: {},
      serverMiddleware: []
    },
    hook: jest.fn(),
    ready: jest.fn(),
    callHook: jest.fn(),
    resolver: {
      requireModule: jest.fn(),
      resolvePath: jest.fn().mockImplementation(p => p)
    }
  })
 
  beforeAll(() => {
    jest.spyOn(path, 'join').mockImplementation((...args) => `join(${args.join(', ')})`)
    jest.spyOn(path, 'resolve').mockImplementation((...args) => `resolve(${args.join(', ')})`)
    connect.mockReturnValue({ use: jest.fn() })
    serveStatic.mockImplementation(dir => ({ id: 'test-serve-static', dir }))
    nuxtMiddleware.mockImplementation(options => ({
      id: 'test-nuxt-middleware',
      ...options
    }))
    errorMiddleware.mockImplementation(options => ({
      id: 'test-error-middleware',
      ...options
    }))
    createTimingMiddleware.mockImplementation(options => ({
      id: 'test-timing-middleware',
      ...options
    }))
    launchMiddleware.mockImplementation(options => ({
      id: 'test-open-in-editor-middleware',
      ...options
    }))
    servePlaceholder.mockImplementation(options => ({
      key: 'test-serve-placeholder',
      ...options
    }))
  })

Reading the test from the beginning gives an idea of what it takes to get started: there are 13 jest.mock invocations. Besides that, there is more setup in the beforeAll, around nine spy and stub setups. If I wanted to create a new test case from scratch, or move tests across different files, I would probably need to keep the same excessive setup as it is now.

Excessive setup is a common trap; I have also felt the pain of having to build many dependencies before starting to test a piece of code. The following code is from my research project called testable:

import { Component } from 'react';
import PropTypes from 'prop-types';
import { Redirect } from 'react-router';
import Emitter, { PROGRESS_UP, LEVEL_UP } from '../../../../packages/emitter/Emitter';
import { track } from '../../../../packages/emitter/Tracking';
import { auth } from '../../../../pages/login/Auth';
import Reason from '../../../../packages/engine/Reason';
import EditorManager from '../editor-manager/EditorManager';
import Guide from '../guide/Guide';
import Intro from '../intro/Intro';
import DebugButton from '../../buttons/debug/Debug';
import {SOURCE_CODE, TEST_CODE} from '../editor-manager/constants';
import {executeTestCase} from '../../../../packages/engine/Tester';

const Wrapped = ( 
  code,
  test,
  testCaseTests,
  sourceCodeTests,
  guideContent,
  whenDoneRedirectTo,
  waitCodeToBeExecutedOnStep,
  enableEditorOnStep,
  trackSection,
 
  testCaseStrategy,
  sourceCodeStrategy,
 
  disableEditor,
  introContent,
  enableIntroOnStep,
  editorOptions,
  attentionAnimationTo = []
 ) => {
  class Rocket extends Component {
 
    state = {
      code: {
        [SOURCE_CODE]: code,
        [TEST_CODE]: test
      },
      editorOptions: editorOptions || {
        [SOURCE_CODE]: {
          readOnly: true
        },
        [TEST_CODE]: {}
      },
      done: false,
      showNext: false,
      currentHint: 0,
      initialStep: 0,
      introEnabled: false,
      intro: introContent || {
        steps: [],
        initialStep: 0
      },
      editorError: false,
    }
  }
}

The parameters needed to test the function are so many that if anyone (me included) started to write a new test case, we would forget what to pass in order to receive the desired result.
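One hedge against such a parameter list - a sketch, not the actual testable code - is to take a single options object with defaults, so a test supplies only what it asserts on and nothing can be passed in the wrong position:

```javascript
// Hypothetical refactor of the long positional list into named options.
const Wrapped = ({
  code = '',
  test = '',
  trackSection = 'default',
  disableEditor = false
} = {}) => ({
  code,
  test,
  trackSection,
  disableEditor
});

// A new test case no longer needs to recall 16 positional arguments:
const component = Wrapped({ code: 'const a = 1;' });
console.log(component.trackSection); // 'default'
```

Defaulted named options also make the test readable at the call site: the intent of each argument is visible without opening the component's definition.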

[6] shows another example, from Jenkins, an open source CI/CD project. It depicts a particular test case that builds up a web browser just to assert the URL being used; in this case, a regular object calling a method could have been used for the assertion.

Excessive setup - root causes

  1. Adding tests after the fact.
  2. Lack of SOLID principles.
  3. Lack of Object Calisthenics [7].

The giant

The giant is also a sign of a lack of design in the code base. Designing code is a subject of discussion among TDD practitioners as well [8]. As opposed to the excessive setup, I would say that this anti-pattern can also happen while developing in a TDD fashion. The giant is often related to the God class, which is an anti-pattern in OOP and goes against SOLID principles as well.

In TDD, the giant often shows up as many assertions in a single test case, as Dave Farley depicts in his video. The same nuxtjs test file used in the excessive setup section shows signs of the giant. Inspecting the code closer, we see a bit of code followed by assertions, then a bit more code also followed by assertions:

test('should setup middleware', async () => {
  const nuxt = createNuxt()
  const server = new Server(nuxt)
  server.useMiddleware = jest.fn()
  server.serverContext = { id: 'test-server-context' }

  await server.setupMiddleware()

  expect(server.nuxt.callHook).toBeCalledTimes(2)
  expect(server.nuxt.callHook).nthCalledWith(1, 'render:setupMiddleware', server.app)
  expect(server.nuxt.callHook).nthCalledWith(2, 'render:errorMiddleware', server.app)

  expect(server.useMiddleware).toBeCalledTimes(4)
  expect(serveStatic).toBeCalledTimes(2)
  expect(serveStatic).nthCalledWith(1, 'resolve(/var/nuxt/src, var/nuxt/static)', server.options.render.static)
  expect(server.useMiddleware).nthCalledWith(1, {
    dir: 'resolve(/var/nuxt/src, var/nuxt/static)',
    id: 'test-serve-static',
    prefix: 'test-render-static-prefix'
  })
  expect(serveStatic).nthCalledWith(2, 'resolve(/var/nuxt/build, dist, client)', server.options.render.dist)
  expect(server.useMiddleware).nthCalledWith(2, {
    handler: {
      dir: 'resolve(/var/nuxt/build, dist, client)',
      id: 'test-serve-static'
    },
    path: '__nuxt_test'
  })

  const nuxtMiddlewareOpts = {
    options: server.options,
    nuxt: server.nuxt,
    renderRoute: expect.any(Function),
    resources: server.resources
  }
  expect(nuxtMiddleware).toBeCalledTimes(1)
  expect(nuxtMiddleware).toBeCalledWith(nuxtMiddlewareOpts)
  expect(server.useMiddleware).nthCalledWith(3, {
    id: 'test-nuxt-middleware',
    ...nuxtMiddlewareOpts
  })

  const errorMiddlewareOpts = {
    resources: server.resources,
    options: server.options
  }
  expect(errorMiddleware).toBeCalledTimes(1)
  expect(errorMiddleware).toBeCalledWith(errorMiddlewareOpts)
  expect(server.useMiddleware).nthCalledWith(4, {
    id: 'test-error-middleware',
    ...errorMiddlewareOpts
  })
})

The point of attention here is to reflect on whether it makes sense to break each block of code and assertions into its own test case. The issue is having a block of test code and assertions, then more code and more assertions. It would require further inspection to double-check whether the code can be broken up as suggested. In any case, it makes a good example of the giant, a practice that, as Dave Farley depicts in his video, is not recommended.
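To illustrate the suggested break-down, here is a minimal sketch in plain JavaScript (no test framework; the names are illustrative, not the real nuxtjs API): a cheap shared setup function plus one focused check per behavior, instead of one giant test:

```javascript
// Shared setup is a plain function, so each focused check pays only the
// cost of what it needs and cannot leak state into the next one.
function setup() {
  const registered = [];
  const server = {
    useMiddleware: m => registered.push(m),
    setupMiddleware() {
      this.useMiddleware({ id: 'static-middleware' });
      this.useMiddleware({ id: 'error-middleware' });
    }
  };
  return { server, registered };
}

// One behavior per check, each readable and failing independently:
function registersStaticMiddleware() {
  const { server, registered } = setup();
  server.setupMiddleware();
  return registered.some(m => m.id === 'static-middleware');
}

function registersErrorMiddleware() {
  const { server, registered } = setup();
  server.setupMiddleware();
  return registered.some(m => m.id === 'error-middleware');
}
```

When a focused check fails, its name points straight at the broken behavior; in the giant, the first failing assertion hides everything after it.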

The giant - root causes

  1. Test after, instead of test first.

The slow poke

The slow poke reminds me of Pokémon, and like the creature, it takes down the efficiency of the test suite. Usually it makes the test suite take longer to finish and give the developer feedback.

Time-related code is usually difficult to handle in a test case; it involves trying to manipulate time in different ways. For example, say we are dealing with a payment system and we would like to trigger a routine that launches a payment at the end of the month. We would need a way to check for a specific date (the last day of the month) and time (somewhere around the morning? The evening?). In other words, we need a way to handle time without having to wait for the end of the month to run the test.

Time, like async, leads to non-determinism, as mentioned by [5]. Luckily for us, there is a way to overcome this situation with mocking.
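Sketching the payment example above in plain JavaScript (the names are illustrative): injecting a clock function lets the end-of-month check run instantly under test instead of waiting for the calendar:

```javascript
// Rolling "now" forward by one day: if tomorrow is the 1st, today is the
// last day of the month (the Date constructor normalizes the overflow).
function isLastDayOfMonth(now) {
  const tomorrow = new Date(now.getFullYear(), now.getMonth(), now.getDate() + 1);
  return tomorrow.getDate() === 1;
}

// The trigger asks an injected clock instead of calling new Date() itself.
function shouldLaunchPayment(clock = () => new Date()) {
  return isLastDayOfMonth(clock());
}

// Tests pin the clock to any date they need, no waiting involved:
console.log(shouldLaunchPayment(() => new Date(2021, 0, 31))); // true
console.log(shouldLaunchPayment(() => new Date(2021, 0, 30))); // false
```

The same idea is what framework-level helpers automate; the point is that the test never sleeps or waits for real time to pass.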

Pushing towards more integration tests or end-to-end tests can also turn the test suite into a pain to run, taking long hours or even days. This approach is related to the ice cream cone, the inverse of the test pyramid. In an ideal scenario you would have more unit tests as the base, fewer integration tests, and even fewer end-to-end tests [9].

The slow poke - root causes

In my experience the slow poke comes in two different flavors:

  1. Lack of knowledge in TDD.
  2. Testing with strategies that the framework already gives out of the box.

Wrapping up

All in all, testing and TDD are practices that developers have adopted and been refining for years now, yet there is still room for improvement, starting with the anti-patterns explored here. In short:

  1. Time and async behavior need extra care when writing tests.
  2. Poor code design can lead to excessive setup and giant test cases, making the code harder to test.
  3. A slow test suite leads to frustration for developers; keep the test suite as fast as possible to improve the feedback loop.

James Carr enumerated 22 anti-patterns; here, I went over four of them. That means there are a lot more to go over, so keep it sharp and keep testing.

Appendix

This section offers extra resources that were mentioned in the text.

Edit Aug 26, 2021 - TDC INNOVATION

This blog post is a companion to a talk I gave on TDD anti-patterns; it covers the same content written here, but using slides as a guide. In other words, the content of this blog post is also in the slides.

Edit Sep 14, 2021 - Twitter discussion

There has been a good discussion on Twitter on how to get started with TDD anti-patterns, to which different experts in the field contributed.

Edit Oct 17, 2021 - Codurance talk

Presenting the TDD anti-patterns at a Codurance talk.

References

  [1] K. Beck, TDD by example. Addison-Wesley Professional, 2000.
  [2] Y. Wang, M. Pyhäjärvi, and M. V. Mäntylä, “Test Automation Process Improvement in a DevOps Team: Experience Report,” in 2020 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2020, pp. 314–321, doi: 10.1109/ICSTW50294.2020.00057.
  [3] G. Meszaros, “Test Smells,” 2021 [Online]. Available at: http://xunitpatterns.com/Test%20Smells.html. [Accessed: 11-Jun-2007]
  [4] jestjs.io, “Testing Asynchronous Code,” 2021 [Online]. Available at: https://jestjs.io/docs/asynchronous. [Accessed: 05-Sep-2021]
  [5] M. Fowler, “Eradicating Non-Determinism in Tests,” 2011 [Online]. Available at: https://martinfowler.com/articles/nonDeterminism.html. [Accessed: 05-Sep-2021]
  [6] D. Farley, “When Test Driven Development Goes Wrong,” 2021 [Online]. Available at: https://www.youtube.com/watch?v=UWtEVKVPBQ0&feature. [Accessed: 08-Sep-2021]
  [7] R. Singham et al., The ThoughtWorks® Anthology: Essays on Software Technology and Innovation. Pragmatic Bookshelf, 2008.
  [8] S. Mancuso, “DevTernity 2018: Sandro Mancuso - Does TDD Really Lead to Good Design?,” 2018 [Online]. Available at: https://www.youtube.com/watch?v=KyFVA4Spcgg. [Accessed: 22-Sep-2021]
  [9] H. Vocke, “The practical test pyramid,” 2018 [Online]. Available at: https://martinfowler.com/articles/practical-test-pyramid.html. [Accessed: 10-Sep-2021]