Our toolchain

Rationale

We were developing quite a big application, involving a front-end built with React and Redux. Pretty soon, we wanted to be able to run tests on the front-end, and include them in our continuous integration process.

We expected basically three things from our tests:

  • be fast, so that developers can run them often during their development cycles;
  • be simple to write, so that developers can easily write them when developing new components;
  • be able to perform both unit tests (of a particular method or component of the app) and integration tests (involving a whole part of the app, like a full page, plus the back-end).

Finding the right toolchain

At first we used Facebook's Jest, but its automatic mocking features made us feel like we had lost control over our modules under test. We wanted to know precisely what was mocked, and how, and it was hard to control Jest's mocking capabilities on a per-module basis. We would rather mock only the modules we needed to, but Jest's default is to mock every module except the one under test. Jest's other drawback was how complex it was to run integration tests involving our REST API with a clean database, because that requires running commands and spawning child processes by hooking into events fired by the test runner. Such events were hard to reach with Jest's "black box" behaviour.

That's why we abandoned Jest and switched to Mocha as a test runner, with Chai as an assertion library. We chose this combination because Mocha's describe/it and chai.expect closely match Jest's declaration and assertion APIs, which allowed us to reuse our existing tests without rewriting too much code. We also kept Jest's philosophy of storing test files as close as possible to their related components, giving the folders a fractal structure (read the part about application structure for more details).

We still needed to mock some modules in our app, and to spy on function calls. For those purposes, we used two external libraries: Sinon and Proxyquire. Finally, to render tested components and check they behaved correctly, we used Airbnb's Enzyme, coupled with JSDOM (which is, basically, a lightweight headless browser). Enzyme is really useful when it comes to working with the subtree of the component under test, checking its content and making assertions on it.
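To illustrate what we mean by spying: a Sinon spy wraps a function and records every call made to it, so tests can assert on how collaborators were used. A minimal hand-rolled equivalent (Sinon provides this, plus stubs and fakes, out of the box) looks like:

```javascript
// A minimal spy, similar in spirit to what sinon.spy() returns:
// it forwards calls to the wrapped function and records each call's arguments.
function makeSpy(fn) {
  const spy = (...args) => {
    spy.calls.push(args);
    return fn(...args);
  };
  spy.calls = [];
  return spy;
}

// Usage: spy on a collaborator, then assert on how it was called
const notify = makeSpy((msg) => `sent: ${msg}`);
notify('hello');
// notify.calls now holds [['hello']], and the original behaviour is preserved
```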

However, there was one huge downside: tests involving the DOM were really painful to debug. When a test failed, we had two choices:

  • console.log() the HTML that JSDOM rendered (not very comfortable);
  • use the CLI debugger included in Node.js (you don't want that either).

As you have probably figured out already, this often led to debugging the tests in a simple, straightforward way: adding a this.skip() and prepending the failing test with a // TODO. This was not exactly the best way to make sure our code was regression-free.

The hybrid toolchain

We wanted to be able to run and debug our tests in a browser, so we could use our favorite devtools, yet keep them fast and headless when running on our CI server. For this purpose, we chose to use both Karma and Mocha as test runners. This also felt like The Right Way To Do It™, because the app runs in the browser, and we appreciated the ability to actually see our test scenarios happen before our eyes.

Karma also has a few nice features:

  • a file watcher, allowing the developer to run tests continuously while working (useful for the TDD junkies);
  • a rich plugin ecosystem, allowing us to bundle our test suite using Webpack and Babel (exactly the same way as our app);
  • the ability to tweak it to our needs thanks to a lot of config options.
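To give an idea of how these pieces fit together, a karma.conf.js along these lines wires Mocha, Webpack and the file watcher into one runner (this is an illustrative sketch, not our exact config; paths and browser names are assumptions):

```javascript
// karma.conf.js -- illustrative sketch, not our exact config
module.exports = function(config) {
  config.set({
    frameworks: ['mocha'],              // via the karma-mocha plugin
    files: ['tests/karma/index.js'],    // single entry point for the suite
    preprocessors: {
      // Bundle the entry point with Webpack + Babel, like the app itself
      'tests/karma/index.js': ['webpack'],
    },
    webpack: require('./webpack.conf.tests.js'),
    browsers: ['Chrome'],               // a headless browser on the CI server
    autoWatch: true,                    // the file watcher mentioned above
    singleRun: false,                   // set to true on CI
  });
};
```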

This sounds nice, but when we tried it on our existing test suite, we ran into some compatibility issues. Some of our hacks to make the tests work in a Node / JSDOM environment turned out to be unnecessary when running in a real browser, which was no great loss, but we also had to adapt a few tests and patterns to be compatible with Karma.

One of the biggest problems we ran into is that Webpack is not compatible with Proxyquire. Because Webpack bundles the whole app into a single file, it needs to resolve import and require statically. Proxyquire prevents this, because it alters the required module depending on the context where it is called. To work around this issue, we decided to throw Proxyquire away and adopted two patterns for mocking the dependencies of our tested components. This also led us to question our implementation, and sometimes to make components simpler and smaller so they would be easier to test without mocking their whole environment around them.

Dependency injection instead of mocking

To be able to mock a module locally, we implemented dependency injection on some of our modules. Say we want to test this component Foo, which requires a module Bar:

// foo.js
import { Component } from 'react';
import bar from 'bar';

export default class Foo extends Component {
  render() {
    // ...
    bar.someMethod();
    // ...
  }
}

In such cases, we implemented dependency injection on foo so we could inject our mock for Bar. We also want consumers of foo not to have to wonder which dependencies it requires, so we export a default version built with the real ones:

// foo.js
import { Component } from 'react';

export function fooInjectable(deps) {
  // Use the injected dependency if provided, the real module otherwise
  const bar = (deps && deps.bar) ? deps.bar : require('bar');

  return class Foo extends Component {
    render() {
      // ...
      bar.someMethod();
      // ...
    }
  };
}

// Default export: the component built with its real dependencies
export default fooInjectable();

Now we can import fooInjectable in our tests, injecting our mocks, and import the default export Foo in our app, just as we did before:

// __tests__/foo.js
import { fooInjectable } from '../foo';
import barMock from './mocks/bar';
const foo = fooInjectable({ bar: barMock });

describe('foo', function() {
  it('should harvest coconuts', function() {
    // render <Foo /> with Enzyme and make assertions on its resulting DOM
  });
});

// some/app/file.js
import Foo from './components/.../foo'; // Nothing is mocked inside the Foo component!
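For completeness, the barMock imported above can be as simple as an object exposing the same interface as the real module, with inert behaviour (the method name comes from the example; the return value is made up):

```javascript
// __tests__/mocks/bar.js -- a hand-written mock of the 'bar' module.
// It exposes the same methods as the real module, without side effects,
// so the component under test can call them safely.
const barMock = {
  someMethod: () => 'mocked result',
};

module.exports = barMock;
```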

Global mocking with Webpack's resolve option

Proxyquire has a @global option that allows injecting a mock globally, so that it is also mocked in depth, inside the required module's own dependencies. However, using this option may be dangerous, for several reasons explained in Proxyquire's docs.

"Yeah, we are mocking fs three levels down in bar, so that's why we have to set it up when testing foo."

To avoid that overhead, and the cost of implementing DI on several levels, and to make reasoning about our tests easier, we chose to mock some modules globally, for the whole test context. Let's drop some names: those were modules performing automagic DOM mutations, such as react-modal, or libraries not designed to work with React, such as SimpleMDE or HandsOnTable. These prevented us from testing our own components' DOM, and we didn't feel guilty about mocking them because they were outside the perimeter of our tests (detecting regressions in our code while developing).

To implement this, we simply added a modules_overrides directory, containing mocks of the npm modules we want to mock globally during tests. It takes priority over node_modules when Karma asks Webpack to bundle the test suite, thanks to Webpack's resolve config. Below is an extract of the webpack.conf file used by Karma. Here, the react-modal mock found in modules_overrides will take priority over the one found in node_modules, because of the order those folders appear in:

// webpack.conf.tests.js
resolve: {
  root: [
    path.resolve('modules_overrides'),
    path.resolve('src'),
    path.resolve('node_modules'),
  ],
},
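As an example of what lives in modules_overrides, a react-modal override can simply render its children inline instead of mutating the document body the way the real library does (a sketch under that assumption; the real mock may need to handle more props):

```javascript
// modules_overrides/react-modal/index.js -- global mock for tests.
// Instead of portaling content into the document body like the real
// react-modal, it just returns its children when open, so Enzyme can
// traverse them as part of the component's own subtree.
const ModalMock = ({ isOpen, children }) => (isOpen ? children : null);

module.exports = ModalMock;
```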

Integration tests

At the end of this process, we ended up with a tests folder at the root of our project, containing resources for both Karma and Mocha:

├── tests/
│   ├── karma/
│   │   └── index.js
│   ├── mocha/
│   │   └── index.js
│   └── test-utils/
│       ├── find-tests.js 
│       ├── runtime.js 
│       └── integration.js

Both karma/index.js and mocha/index.js call resources contained in test-utils, in particular three scripts.

find-tests exports a function that globs for test files in the source folder and returns the lists of test paths that the runners push into their queues.
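The filtering logic behind find-tests can be sketched as follows (the naming convention shown here, files under a __tests__ folder, is an assumption; in the real script the candidate paths come from globbing the source folder):

```javascript
// test-utils/find-tests.js -- sketch of the path-filtering logic.
// Keeps only JavaScript files living in a __tests__ directory.
function findTests(paths) {
  return paths.filter(
    (p) => p.includes('/__tests__/') && p.endsWith('.js')
  );
}

module.exports = findTests;

// Usage: filter a list of candidate paths down to the test files
const tests = findTests([
  'src/components/foo/__tests__/foo.js',
  'src/components/foo/foo.js',
]);
```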

runtime.js contains functions and constants that are made available globally in the tests' runtime; each runner loads it through a small setup file pushed to the front of its queue:

// karma/index.js, after creating the tests queue
testFiles.unshift(path.join(__dirname, 'karma-setup.js')); // declares global utility functions

// mocha/index.js, before adding the files in the queue
mocha.addFile(path.join(__dirname, 'mocha-setup.js'));

This is surely a trade-off, as it introduces magic through hidden globals, but we only put in this file functions we needed everywhere in our tests. This saved us from requiring the same file in each and every one of our test files.
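As an example of the kind of helper such a setup file can declare, it might attach a small async utility on the global object (the helper below is illustrative, not one of our actual globals):

```javascript
// karma-setup.js / mocha-setup.js -- sketch of a global test helper.
// Attach to whichever global object exists (window in Karma's browser,
// global in Mocha under Node).
const root = typeof window !== 'undefined' ? window : global;

// flushPromises lets a test wait for pending async work before asserting.
root.flushPromises = () => new Promise((resolve) => setTimeout(resolve, 0));
```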

Finally, integration.js looks for test files named *.integration.js; that is our naming convention for integration tests that require a back-end server to be running. This script checks whether our project's config file contains the address of a back-end server to use for integration tests and, if it doesn't, builds one with Maven, runs the resulting JAR, then passes down the local address of the started server:

// test-utils/integration.js :

const fs = require('fs');
const child_process = require('child_process');

const BACK_ROOT = 'path/to/the/back-end/repository';
const BACK_JAR = 'path/to/the/maven/target/backend-server.jar';

// Building a JAR, if required:
const buildBackIfNecessary = (callback) => {
  if (!fs.existsSync(BACK_JAR)) {
    child_process.exec(
      'mvn clean package -DskipTests',
      { cwd: BACK_ROOT },
      (error, stdout, stderr) => callback(error, stdout, stderr)
    );
  } else {
    // The JAR is already there: nothing to build
    callback(null);
  }
};

// Running the server (the spawned instance is handed to a callback,
// since the build step is asynchronous):
export default function runIntegration(callback) {
  buildBackIfNecessary((error) => {
    if (error) {
      return callback(error);
    }
    const startedInstance = child_process.spawn(
      'java',
      ['-Xms128m', '-Xmx256m', '-jar', BACK_JAR],
      {
        cwd: BACK_ROOT,
        stdio: ['pipe', 'pipe', 'pipe'],
      }
    );
    callback(null, startedInstance);
  });
}

This allowed us to accommodate every possible setup on a developer's computer, and made running the tests easy: a single npm test or npm run karma, depending on the day's mood, without wondering whether you have an instance of the back-end running.

Conclusion

At the end of this iterative process, we had a test suite that enabled developers to run tests frequently, to debug them easily using their browser's devtools, and to set the whole suite up easily. The biggest upside is that it is easy for developers to write tests and to rely on them while developing (for example by continuously running them with Karma's watch mode, for TDD fans), while keeping them fast when running headless on our CI server. However, this was quite hard to set up, and the overhead of launching your tests in both Karma and Mocha to check they pass in both is often unpleasant.

A final word: for an app with 100k lines of code, such a setup is relevant. For smaller apps, with fewer tests, we would probably have used only Karma, wired to a headless browser such as PhantomJS to run tests on our CI server, and to Chrome for development. Have you had experiences with this kind of testing toolchain? We would love to hear about it in the comments below.
