TDD anti-patterns - episode 4 - The greedy catcher, The sequencer, Hidden dependency and The enumerator - with code examples in javascript, python and php

Last updated Nov 5, 2023 Published Mar 10, 2022

The content here is under the Attribution 4.0 International (CC BY 4.0) license

This is a follow-up to a series of posts around TDD anti-patterns. The first post of this series covered: the liar, excessive setup, the giant and the slow poke. Those four are part of the 22 anti-patterns listed in the James Carr post and also discussed in a stackoverflow thread.

Do you prefer this content in video format? The video is available on demand on livestorm.

In this blog post we are going to focus on four more of them, named: The greedy catcher, The sequencer, Hidden dependency and The enumerator. Each one of them focuses on a specific aspect of code that makes testing harder; sometimes it comes from not doing TDD, and sometimes it comes from a lack of knowledge about designing code.

The greedy catcher

Handling exceptions (or even using them) is tricky. There are developers who advocate for no exceptions at all, while others use them to signal that something went wrong during execution.

Regardless of the kind of developer you are, testing for exceptions can unveil some patterns that hurt the test-first approach. In the previous episode we had a look at the secret catcher, and now we are going to have a look at its cousin, the greedy catcher.

The greedy catcher appears when the subject under test handles an exception and hides useful information about it: the exception type, the message or the stack trace. Such information is helpful for tracking down undesired exceptions being thrown.

Next up, we have an example of a possible candidate for the greedy catcher. This piece of code was extracted from the project Laravel/Cashier stripe - Laravel is written in PHP and is one of the most popular projects in the ecosystem, and the following code comes from a package that wraps the stripe sdk into a Laravel package. Despite having a try/catch handler inside the test case (which could potentially point to further improvements), when the exception is thrown, the test case catches it and asserts some logic:

public function test_retrieve_the_latest_payment_for_a_subscription()
{
    $user = $this->createCustomer('retrieve_the_latest_payment_for_a_subscription');

    try {
        $user->newSubscription('main', static::$priceId)->create('pm_card_threeDSecure2Required');

        $this->fail('Expected exception '.IncompletePayment::class.' was not thrown.');
    } catch (IncompletePayment $e) {
        $subscription = $user->refresh()->subscription('main');

        $this->assertInstanceOf(Payment::class, $payment = $subscription->latestPayment());
        $this->assertTrue($payment->requiresAction());
    }
}
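
For comparison, jest has matchers that keep the exception details inside the assertion itself, without a manual try/catch. The snippet below is only a sketch: createSubscription and IncompletePaymentError are hypothetical stand-ins, not part of Cashier:

class IncompletePaymentError extends Error {}

// hypothetical subject under test: fails when the card requires 3D Secure
const createSubscription = (paymentMethod: string): void => {
  if (paymentMethod === 'pm_card_threeDSecure2Required') {
    throw new IncompletePaymentError('additional payment action required');
  }
};

it('surfaces the exception type and message instead of swallowing them', () => {
  const attempt = () => createSubscription('pm_card_threeDSecure2Required');

  expect(attempt).toThrow(IncompletePaymentError); // asserts the exception type
  expect(attempt).toThrow('additional payment action required'); // asserts the message
});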

The greedy catcher arises not only in test code, but also in production code; hiding the information needed to trace an exception back turns what could be a quick fix into hours of debugging. The following example is a representation of a javascript middleware that parses a JWT token and redirects the user if the token is empty - the code uses jest and nuxtjs.

Line 1 decodes the JWT token via the jwt-decode package; in case of success, the middleware lets the flow continue, and if the token is falsy for any reason it invokes the logout function (lines 2 and 3).

At first glance it is hard to notice that, inside the catch block, the exception is being ignored: if anything goes wrong, the result will be whatever the logout function returns (line 3).

import jwt_decode from 'jwt-decode';
import { Context } from '@nuxt/types';

export default async function({ req, $auth, redirect }: Context) {
  try {
    const token = jwt_decode(req?.cookies['token']); // 1

    if (token) {
      return null;
    } else {
      return await logout($auth, redirect); // 2
    }
  } catch (e) {
    return await logout($auth, redirect); // 3
  }
}

The test code uses some nuxtjs context to create the request that is going to be processed by the middleware. The single test case depicts an approach to verify that the user is logged out when the token is invalid. Note that the cookie is behind the serverParameters variable.

it('should logout when token is invalid', async () => {
  const redirect = jest.fn();
  const serverParameters: Partial<IContextCookie> = { // 1
    route: currentRoute as Route, $auth, redirect, req: { cookies: null },
  };

  await actions.nuxtServerInit(
    actionContext as ActionContext,
    serverParameters as IContextCookie
  );

  expect($auth.logout).toHaveBeenCalled(); // 2
});

The tricky part here is that the test above passes as it should, but not for the expected reason. serverParameters holds the req object with cookies set to null (line 1); when that is the case, javascript throws an error stating that it is not possible to access 'token' of null, as null is not an array or an object. Want to try out this behavior in javascript? There is a snippet in the appendix.

Such behavior executes the catch block, which calls the desired logout function (line 2). The stack trace for this error won’t show up anywhere, as the catch block ignores the exception in the production code.

You might be thinking that this is acceptable, as the test passes. However, there is a further catch in this kind of scenario: the test is green for the wrong reason, so the token handling logic could break in production while the test keeps passing.
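
One way to tame the greedy catcher in the middleware above is to record the exception before reacting to it, so the type, message and stack trace are not lost. A minimal sketch of the same middleware, reusing the logout helper from the example above; console.error stands in for whatever logger the project actually uses:

import jwt_decode from 'jwt-decode';
import { Context } from '@nuxt/types';

export default async function({ req, $auth, redirect }: Context) {
  try {
    const token = jwt_decode(req?.cookies['token']);

    if (token) {
      return null;
    }

    return await logout($auth, redirect);
  } catch (e) {
    // the exception is no longer swallowed: its details stay visible in the logs
    console.error('failed to decode the token cookie', e);
    return await logout($auth, redirect);
  }
}

With the error logged, a failing test (or a production incident) points straight at the root cause instead of silently ending in a logout.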

The sequencer

The sequencer brings light to a subject that was covered in the testing assertions post (in this case, using jest as the testing framework). More specifically, the section about Array Containing depicts what the sequencer is.

In short, the sequencer appears when an unordered list used for testing is asserted in a fixed order, giving the idea that the items in the list are required to be ordered.

The following example shows the sequencer in practice. The test case checks if the desired fruit is inside the list; the focus here is to know whether the fruit is or is not in the list, regardless of its position:

const expectedFruits = ['banana', 'mango', 'watermelon']

expect(expectedFruits[0]).toEqual('banana')
expect(expectedFruits[1]).toEqual('mango')
expect(expectedFruits[2]).toEqual('watermelon')

As we don’t care about the position, using the utility arrayContaining might be a better fit, and it makes the intention explicit for future readers.

const expectedFruits = ['banana', 'mango', 'watermelon']

const actualFruits = () => ['banana', 'mango', 'watermelon']

expect(actualFruits()).toEqual(expect.arrayContaining(expectedFruits))

It is important to note that arrayContaining ignores the position of the items and also ignores extra elements in the received array. If the code under test cares about the exact number of items, it would be better to use a combination of assertions. This behavior is described in the official jest documentation.
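
For instance, pairing a length check with arrayContaining asserts the exact number of items while still ignoring their order - a small sketch of that combination:

const expectedFruits = ['banana', 'mango', 'watermelon']

const actualFruits = () => ['watermelon', 'banana', 'mango']

expect(actualFruits()).toHaveLength(expectedFruits.length) // rules out missing or extra items
expect(actualFruits()).toEqual(expect.arrayContaining(expectedFruits)) // ignores the order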

The example with jest gives a hint of what to expect in code bases that have this anti-pattern, but the following example depicts a scenario in which the sequencer appears for a CSV file.

def test_predictions_returns_a_dataframe_with_automatic_predictions(self, form):
    order_id = "51a64e87-a768-41ed-b6a5-bf0633435e20"
    order_info = pd.DataFrame({"order_id": [order_id], "form": [form]})
    file_path = Path("tests/data/prediction_data.csv")  # 1
    service = FileRepository(file_path)

    result = get_predictions(main_service=service, order_info=order_info)

    assert list(result.columns) == ["id", "quantity", "country", "form", "order_id"]  # 2

On line 1 the path to the CSV file used during the test is defined; next, the result variable holds what will be asserted against, and on line 2 we have the assertion against the columns found in the file.

CSV files use the first row as the header: that is where the column names are defined, separated by commas, and the rows below hold the data for each column. If the CSV happens to be changed to a different column order (in this case, switching country and form), we would see the following error:

tests/test_predictions.py::TestPredictions::test_predictions_returns_a_dataframe_with_automatic_predictions FAILED [100%]
tests/test_predictions.py:16 (TestPredictions.test_predictions_returns_a_dataframe_with_automatic_predictions)
['id', 'quantity', 'form', 'country', 'order_id'] != ['id', 'quantity', 'country', 'form', 'order_id']

Expected :['id', 'quantity', 'country', 'form', 'order_id']
Actual   :['id', 'quantity', 'form', 'country', 'order_id']

In this test case, the hint is that we would like to assert that the columns exist regardless of their order. In the end, what matters the most is having the columns and the data in each column, regardless of order.

A better approach would be to replace the assertion on line 2 depicted previously with the following one:

assert set(result.columns) == {"id", "quantity", "country", "form", "order_id"}

The sequencer is an anti-pattern that is not caught that often, given that it is easy to write and the test suite stays green; it is unveiled when someone has a hard time debugging a failure in a test that was supposed to pass.

Hidden dependency

The hidden dependency is an anti-pattern well known among developers; in particular, it annoys developers and sours them on testing in general. It can be the source of hours spent debugging test code to find out why a test is failing, as it often gives little to no information about the root cause.

Hint: if you are not familiar with vuex or the flux pattern, it is recommended to check them out first.

The example that follows in this section is related to vue and vuex; in this test case the goal is to list administrators in a dropdown. Vuex is used as the source of truth for the data.

export const Store = () => ({
  modules: { // 1
    user: {
      namespaced: true,
      state: {
        currentAdmin: {
email: 'admin@example.com',
        },
      },
      getters: userGetters,
    },
    admin: adminStore().modules.admin, // 2
  },
});

On line 1, the structure needed for vuex is defined, and on line 2 the admin store is created. Once the stubbed store is in place we can start to write the test itself. As a hint for the next piece of code, note that the Store function takes no parameters.

it('should list admins in the administrator field to be able to pick one up', async () => {
  const store = Store(); // 1

  const { findByTestId, getByText } = render(AdminPage as any, {
    store,
    mocks: {
      $route: {
        query: {},
      },
    },
  });
  await fireEvent.click(await findByTestId('admin-list'));
  await waitFor(() => {
    expect(getByText('Admin')).toBeInTheDocument(); // 2
  });
});

On line 1 we create the store used by the code under test, and on line 2 we search for the text Admin; if it is found, we assume that the list is working. The catch here is that if the test fails to find Admin, we will need to dive into the code inside the store to see what is going on.

The next code example depicts a better approach: explicitly passing the data needed when setting up the test. This time, on line 1, the Admin entry the test expects is provided up front.

it('should list admins in the administrator field to be able to pick one up', async () => {
  const store = Store({ admin: { name: 'Admin' } }); // 1

  const { findByTestId, getByText } = render(AdminPage as any, {
    store,
    mocks: {
      $route: {
        query: {},
      },
    },
  });
  await fireEvent.click(await findByTestId('admin-list'));
  await waitFor(() => {
    expect(getByText('Admin')).toBeInTheDocument();
  });
});

In general, the hidden dependency appears in different ways and in different styles of tests; for example, the next example depicts a hidden issue that comes from testing the integration with the database.

def test_dbdatasource_is_able_to_load_products_related_only_to_manual_purchase(
    self, db_resource
):
    config_file_path = Path("./tests/data/configs/docker_config.json")     # 1
    expected_result = pd.read_csv("./tests/data/manual_product_info.csv")  # 2

    datasource = DBDataSource(config_file_path=config_file_path)

    result = datasource.get_manual_purchases()                             # 3

    assert result.equals(expected_result)

On lines 1 and 2 the setup is done via files: line 1 points at the database configuration, and line 2 loads the CSV whose content the result of the test should match. Then, on line 3, the code under test is exercised.

Inside this method there is a query that is executed to fetch the manual purchases, which are then asserted to be the same as expected_result:

query: str = """
    select
        product.id,
        po.order_id,
        po.quantity,
        product.country
    from product
    join purchased as pur on pur.product_id = product.id
    join purchased_order as po on po.purchase_id = pur.id
    where product.completed is true and
          pur.type = 'MANUAL' and
          product.is_test is true
    ;
"""

This query has a particular where clause that is hidden from the test case, and that is what makes the test fail: by default, the data generated for expected_result sets the flag is_test to false, leading to no results being returned in the test case.
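
The fix follows the same idea as the vuex store refactor: make the data the test depends on explicit in the setup, including the flags the query filters on. A minimal sketch in typescript - seedProducts and datasource are hypothetical names used for illustration, not part of the original project:

it('loads products related only to manual purchases', async () => {
  // the flags the where clause filters on are stated in the test itself,
  // instead of being hidden inside fixture files
  await seedProducts([
    { id: 1, country: 'NL', completed: true, is_test: true, type: 'MANUAL' },
    { id: 2, country: 'DE', completed: true, is_test: true, type: 'AUTOMATIC' },
  ]);

  const result = await datasource.getManualPurchases();

  expect(result).toEqual([expect.objectContaining({ id: 1, country: 'NL' })]);
});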

The enumerator

Enumerating requirements in a brainstorming session is usually a good idea: it can be handy to create such a list for later consumption, and its items can even become new features of a software project. As in software we are dealing with features, it seems like a good idea to translate them into code in the same language and order, so that verifying them becomes a checklist.

As good as that might sound for organization and feature handling, translating such numbered lists straight to code can bring undesired readability issues, even more so in test code.

As weird as it might sound, enumerating test cases with numbers is a common behavior I have seen among beginners. For some reason, at first it seems like a good idea to write down the same test description and add a number to identify it. The following code depicts an example:

from status_processor import StatusProcessor

def test_set_status():
    row_with_status_inactive_1 = dict()   # fields omitted in the original example
    row_with__status_inactive_2 = dict()  # fields omitted
    row_with_status_inactive_3 = dict()   # fields omitted
    row_with_status_inactive_3b = dict()  # fields omitted
    row_with_status_inactive_4 = dict()   # fields omitted
    row_with_status_inactive_5 = dict()   # fields omitted

For new developers in the code base, the questions that will arise are: what does 1 mean? What does 2 mean? Are those the same test case? In a nutshell, the missed opportunity to be explicit about what is being tested is the key point here.

The first example was in python, but this anti-pattern arises in different programming languages. The next example, in typescript, is another way of enumerating test cases; in this scenario, the test cases are file names that are used to run the tests.

GitHub action with the enumerator

Enumerating test scenarios can hide business patterns behind numbers; the intention of what is being tested is not clear. Another issue that comes with that is mitigation: if any of those tests fail, the error message will most likely give you a number, but not the root cause of the failure.
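
A way out is to let each case carry the business rule it verifies in its description. The sketch below uses jest's it.each; setStatus and the rule names are hypothetical stand-ins for whatever the numbered rows above were actually checking:

// hypothetical subject under test, standing in for StatusProcessor
const setStatus = (row: { active: boolean }) => (row.active ? 'ACTIVE' : 'INACTIVE');

// the rule being tested lives in the name, not in a number
it.each([
  { rule: 'the subscription has expired', row: { active: false } },
  { rule: 'the account was never activated', row: { active: false } },
])('sets the status to inactive when $rule', ({ row }) => {
  expect(setStatus(row)).toEqual('INACTIVE');
});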

Appendix

Javascript code

Copy and paste this snippet to reproduce the error in javascript when trying to access a property of null:

const req = { cookies: null }

// throws TypeError: optional chaining guards req, not cookies
req?.cookies['token']

Edit Jan 21, 2021 - Codurance talk

Presenting the TDD anti-patterns talk at Codurance.

References