TDD anti-patterns - the liar, excessive setup, the giant, the slow poke - with code examples in JavaScript, Kotlin and PHP

Last updated Jun 26, 2025 · Published Aug 28, 2021

The content here is under the Attribution 4.0 International (CC BY 4.0) license

Test Driven Development (TDD) has been adopted by developers and teams as a way to deliver quality code through short feedback cycles using the red-green-refactor flow. Kent Beck (Beck, 2000) popularized the methodology, which became a standard that the industry builds upon. Starting with TDD is challenging, and maintaining a fast test suite is even harder. Code bases that tackle business problems require non-trivial amounts of code and comprehensive test suites.

As Wang et al. (2020) describe, the industry often exhibits immature test automation processes that lead to slow feedback cycles.

A space dedicated to Test-Driven Development

This blog hosts a dedicated space for TDD-related content, where you can find posts that explore the concept of TDD, its benefits, and how it can be effectively implemented in software development workflows.

James Carr developed a list of 22 anti-patterns to identify and manage the testing trap that extensive code bases can fall into. Dave Farley explored several of these anti-patterns in his YouTube channel. This post is inspired by Dave Farley’s work and serves as a companion to my presentation on this topic.

NOTE 1: TDD anti-patterns often overlap with test smells (Meszaros, 2021). While related, these are distinct concepts.

NOTE 2: While James Carr’s list includes 22 anti-patterns, Yegor Bugayenko’s lecture describes 23, and the book Mastering Software Testing describes 27.

NOTE 3: Follow-up posts covering additional anti-patterns are available.

The liar

The liar is one of the most common anti-patterns encountered in professional TDD practice. This anti-pattern is insidious because the test appears to pass, giving false confidence in code correctness. However, the test is not actually verifying the intended behavior, which means defects slip through undetected in production. Two primary reasons explain this pattern:

  1. Asynchronous code in test cases
  2. Time-dependent test cases

Testing asynchronous code is challenging since it depends on future values that may or may not be received (jestjs.io, 2021). The following code, reproduced from the Jest documentation, demonstrates an anti-pattern (the docs explicitly state not to use this approach):

// Don't do this!
test('the data is peanut butter', () => {
  function callback(data) {
    expect(data).toBe('peanut butter');
  }

  fetchData(callback);
});

This test passes without complaint, despite not properly testing the function—a classic liar. The correct approach waits for the async function to complete before allowing Jest to continue:

test('the data is peanut butter', done => {
  function callback(data) {
    try {
      expect(data).toBe('peanut butter');
      done(); // <------------------------ invokes jest flow again, saying: "look I am ready now!"
    } catch (error) {
      done(error);
    }
  }

  fetchData(callback);
});

Martin Fowler elaborates on time-dependent tests (Fowler, 2011). Time-related tests are a source of non-determinism, and developers should carefully handle both time-dependent code and threads.

Time-oriented tests can fail unpredictably. I have experienced test suites fail because they didn’t mock dates. The test passed on the day it was written but failed the following day—another liar.

The liar - root causes

  1. Lack of TDD practice.
  2. Coverage-driven testing (tests written to satisfy coverage metrics rather than to verify behavior).

Excessive setup

I attribute excessive setup to insufficient TDD practice and unfamiliarity with object calisthenics principles. While this typically applies to object-oriented programming, similar concepts apply to functional programming.

The classic excessive setup occurs when testing specific behavior requires setting up numerous dependencies (such as multiple classes).

The following code depicts a test case from the NuxtJS framework that illustrates this problem. The test file for the server starts with a long list of jest.mock calls, and the setup continues into the beforeAll hook, which configures even more mocks:

jest.mock('compression')
jest.mock('connect')
jest.mock('serve-static')
jest.mock('serve-placeholder')
jest.mock('launch-editor-middleware')
jest.mock('@nuxt/utils')
jest.mock('@nuxt/vue-renderer')
jest.mock('../src/listener')
jest.mock('../src/context')
jest.mock('../src/jsdom')
jest.mock('../src/middleware/nuxt')
jest.mock('../src/middleware/error')
jest.mock('../src/middleware/timing')
 
describe('server: server', () => {
  const createNuxt = () => ({
    options: {
      dir: {
        static: 'var/nuxt/static'
      },
      srcDir: '/var/nuxt/src',
      buildDir: '/var/nuxt/build',
      globalName: 'test-global-name',
      globals: { id: 'test-globals' },
      build: {
        publicPath: '__nuxt_test'
      },
      router: {
        base: '/foo/'
      },
      render: {
        id: 'test-render',
        dist: {
          id: 'test-render-dist'
        },
        static: {
          id: 'test-render-static',
          prefix: 'test-render-static-prefix'
        }
      },
      server: {},
      serverMiddleware: []
    },
    hook: jest.fn(),
    ready: jest.fn(),
    callHook: jest.fn(),
    resolver: {
      requireModule: jest.fn(),
      resolvePath: jest.fn().mockImplementation(p => p)
    }
  })
 
  beforeAll(() => {
    jest.spyOn(path, 'join').mockImplementation((...args) => `join(${args.join(', ')})`)
    jest.spyOn(path, 'resolve').mockImplementation((...args) => `resolve(${args.join(', ')})`)
    connect.mockReturnValue({ use: jest.fn() })
    serveStatic.mockImplementation(dir => ({ id: 'test-serve-static', dir }))
    nuxtMiddleware.mockImplementation(options => ({
      id: 'test-nuxt-middleware',
      ...options
    }))
    errorMiddleware.mockImplementation(options => ({
      id: 'test-error-middleware',
      ...options
    }))
    createTimingMiddleware.mockImplementation(options => ({
      id: 'test-timing-middleware',
      ...options
    }))
    launchMiddleware.mockImplementation(options => ({
      id: 'test-open-in-editor-middleware',
      ...options
    }))
    servePlaceholder.mockImplementation(options => ({
      key: 'test-serve-placeholder',
      ...options
    }))
  })

Reading this test from the beginning reveals the scale of setup required. There are 13 jest.mock invocations, plus approximately 9 additional spy and stub configurations in the beforeAll hook. Creating new test cases or moving tests across files would necessitate duplicating this entire setup.

The excessive setup is a common trap. I have also felt the pain of having to build many dependencies before I could start testing a piece of code. The following code is from my research project, testable:

import { Component } from 'react';
import PropTypes from 'prop-types';
import { Redirect } from 'react-router';
import Emitter, { PROGRESS_UP, LEVEL_UP } from '../../../../packages/emitter/Emitter';
import { track } from '../../../../packages/emitter/Tracking';
import { auth } from '../../../../pages/login/Auth';
import Reason from '../../../../packages/engine/Reason';
import EditorManager from '../editor-manager/EditorManager';
import Guide from '../guide/Guide';
import Intro from '../intro/Intro';
import DebugButton from '../../buttons/debug/Debug';
import {SOURCE_CODE, TEST_CODE} from '../editor-manager/constants';
import {executeTestCase} from '../../../../packages/engine/Tester';

const Wrapped = ( 
  code,
  test,
  testCaseTests,
  sourceCodeTests,
  guideContent,
  whenDoneRedirectTo,
  waitCodeToBeExecutedOnStep,
  enableEditorOnStep,
  trackSection,
 
  testCaseStrategy,
  sourceCodeStrategy,
 
  disableEditor,
  introContent,
  enableIntroOnStep,
  editorOptions,
  attentionAnimationTo = []
 ) => {
  class Rocket extends Component {
 
    state = {
      code: {
        [SOURCE_CODE]: code,
        [TEST_CODE]: test
      },
      editorOptions: editorOptions || {
        [SOURCE_CODE]: {
          readOnly: true
        },
        [TEST_CODE]: {}
      },
      done: false,
      showNext: false,
      currentHint: 0,
      initialStep: 0,
      introEnabled: false,
      intro: introContent || {
        steps: [],
        initialStep: 0
      },
      editorError: false,
    }
  }
}

The excessive number of parameters makes it difficult for anyone to write a new test case without forgetting critical parameters or misunderstanding their purpose.

Dave Farley shows another example from Jenkins, an open-source CI/CD project (Farley, 2021). He depicts a test case that builds an entire web browser to verify a URL. A simple object method call would have sufficed.

Excessive setup - root causes

  1. Adding tests after the fact.
  2. Lack of SOLID principles.
  3. Lack of Object Calisthenics (Singham et al., 2008).

The giant

The Giant is often a sign of poor code design, though whether TDD itself leads to good design is hotly debated among practitioners (Mancuso, 2018). Unlike excessive setup, this anti-pattern can occur even with proper TDD practice. The Giant relates closely to the God class anti-pattern in OOP, which violates SOLID principles.

In TDD, the Giant often manifests as many assertions in a single test case, as Dave Farley depicts in his video. The NuxtJS test file used earlier also demonstrates this pattern: a code snippet followed by assertions, then more code followed by more assertions:

test('should setup middleware', async () => {
  const nuxt = createNuxt()
  const server = new Server(nuxt)
  server.useMiddleware = jest.fn()
  server.serverContext = { id: 'test-server-context' }

  await server.setupMiddleware()

  expect(server.nuxt.callHook).toBeCalledTimes(2)
  expect(server.nuxt.callHook).nthCalledWith(1, 'render:setupMiddleware', server.app)
  expect(server.nuxt.callHook).nthCalledWith(2, 'render:errorMiddleware', server.app)

  expect(server.useMiddleware).toBeCalledTimes(4)
  expect(serveStatic).toBeCalledTimes(2)
  expect(serveStatic).nthCalledWith(1, 'resolve(/var/nuxt/src, var/nuxt/static)', server.options.render.static)
  expect(server.useMiddleware).nthCalledWith(1, {
    dir: 'resolve(/var/nuxt/src, var/nuxt/static)',
    id: 'test-serve-static',
    prefix: 'test-render-static-prefix'
  })
  expect(serveStatic).nthCalledWith(2, 'resolve(/var/nuxt/build, dist, client)', server.options.render.dist)
  expect(server.useMiddleware).nthCalledWith(2, {
    handler: {
      dir: 'resolve(/var/nuxt/build, dist, client)',
      id: 'test-serve-static'
    },
    path: '__nuxt_test'
  })

  const nuxtMiddlewareOpts = {
    options: server.options,
    nuxt: server.nuxt,
    renderRoute: expect.any(Function),
    resources: server.resources
  }
  expect(nuxtMiddleware).toBeCalledTimes(1)
  expect(nuxtMiddleware).toBeCalledWith(nuxtMiddlewareOpts)
  expect(server.useMiddleware).nthCalledWith(3, {
    id: 'test-nuxt-middleware',
    ...nuxtMiddlewareOpts
  })

  const errorMiddlewareOpts = {
    resources: server.resources,
    options: server.options
  }
  expect(errorMiddleware).toBeCalledTimes(1)
  expect(errorMiddleware).toBeCalledWith(errorMiddlewareOpts)
  expect(server.useMiddleware).nthCalledWith(4, {
    id: 'test-error-middleware',
    ...errorMiddlewareOpts
  })
})

Point of Consideration: It’s worth questioning whether each block of code and assertions should become a separate test case. Breaking such tests into logical units improves clarity and maintainability, and it is the approach Dave Farley advocates in his video.

The giant - root causes

  1. Test after, instead of test first.

The slow poke

The Slow Poke, like the Pokémon creature, degrades test suite efficiency. It prolongs execution time and delays developer feedback.

Time-dependent code is typically difficult to test. For example, consider payment systems that trigger routines at month-end. Testing requires a way to manipulate time and verify specific dates and times without waiting for the actual event. This requires mocking time handling.

Time creates non-determinism in tests, as Fowler (2011) notes. Mocking can help overcome this.

Prioritizing integration and end-to-end tests over unit tests can also slow the suite significantly, sometimes requiring hours or days to run. This approach relates to the ice cream cone anti-pattern, which inverts the test pyramid. Ideally, the pyramid should have a broad base of fast unit tests, a narrower layer of integration tests, and an even smaller layer of end-to-end tests (Vocke, 2018).

The slow poke - root causes

From my experience, the Slow Poke manifests in two distinct ways:

  1. Insufficient knowledge of TDD practices.
  2. Over-reliance on testing strategies that frameworks provide by default.

Wrapping up

Overall, testing and TDD are practices that developers have adopted and refined for several years. However, significant room for improvement remains, particularly by avoiding these anti-patterns.

Key Takeaways

The four anti-patterns covered here share a common thread: they all indicate a disconnect between test code and the actual behavior being tested.

  1. The Liar and The Slow Poke reveal testing issues with time and asynchronous behavior—both require explicit handling and awareness
  2. Excessive Setup and The Giant signal poor code design—when tests are hard to write, the production code likely needs refactoring
  3. All four patterns emerge when testing shifts focus from behavior to implementation details

Practical Checklist

When reviewing your test suite, ask:

  • Does each test verify a single behavior or expectation?
  • Can I explain what the test is checking in one sentence?
  • Is setup code minimal and focused on necessary state?
  • Would changing an implementation detail (without changing behavior) break the test?
  • Does the test pass for the right reasons, not by accident?

Next Steps

James Carr identified 22 anti-patterns total. This post covers four foundational ones. Additional episodes explore 16 more anti-patterns, each building on these core concepts. Continue practicing TDD deliberately, maintain test suite speed, and remember: tests are code too—they deserve the same care and attention as production code.

Appendix

This section offers extra resources mentioned in the text.

Edit Aug 26, 2021 - TDC INNOVATION

This blog post accompanies a presentation on TDD anti-patterns. The talk covers the same content as this written guide, using slides to guide the discussion.

Edit Sep 14, 2021 - Twitter discussion

A thoughtful discussion about getting started with TDD anti-patterns occurred on Twitter, with contributions from various experts in the field.

Edit Oct 17, 2021 - Codurance talk

Presented the TDD anti-patterns talk at Codurance.

References

  1. Beck, K. (2000). TDD by example. Addison-Wesley Professional.
  2. Wang, Y., Pyhäjärvi, M., & Mäntylä, M. V. (2020). Test Automation Process Improvement in a DevOps Team: Experience Report. 2020 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 314–321. https://doi.org/10.1109/ICSTW50294.2020.00057
  3. Meszaros, G. (2021). Test Smells. http://xunitpatterns.com/Test%20Smells.html
  4. jestjs.io. (2021). Testing Asynchronous Code. https://jestjs.io/docs/asynchronous
  5. Fowler, M. (2011). Eradicating Non-Determinism in Tests. https://martinfowler.com/articles/nonDeterminism.html
  6. Farley, D. (2021). When Test Driven Development Goes Wrong. https://www.youtube.com/watch?v=UWtEVKVPBQ0&feature
  7. Singham, R., Fowler, M., Ford, N., Bay, J., Lentz, T., Robinson, I., Parsons, R., Robinson, M., Pantazopoulos, S., Doernenburg, E., Simpson, J., Farley, D., Vingrys, K., & Bull, J. (2008). The ThoughtWorks® Anthology: Essays on Software Technology and Innovation. Pragmatic bookshelf.
  8. Mancuso, S. (2018). DevTernity 2018: Sandro Mancuso - Does TDD Really Lead to Good Design? https://www.youtube.com/watch?v=KyFVA4Spcgg
  9. Vocke, H. (2018). The practical test pyramid. https://martinfowler.com/articles/practical-test-pyramid.html

Changelog

  • Feb 15, 2026 - Grammar fixes and minor rephrasing for clarity
