How to write good BDD tests

Not many teams use Behaviour Driven Development (BDD). This blog shows how, even as developers, we can leverage BDD tools to build robust solutions more quickly and with greater transparency.

Posted: Fri 06 May, 2016, 18:07
I have been a fan of Behaviour Driven Development (BDD) for many years now. Sometimes also known as Specification by Example, BDD involves writing human-readable specifications that prove an application works whilst also providing documentation on supported features. Specifications are parsed by a BDD tool to drive the application under test via fixture code, and responses are then validated against the specification as well. The specification is both the driver and the checker. At the end, a copy of the original specification is output with green or red highlighting to indicate success or failure.
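As a sketch of what such a specification looks like, here is a hypothetical Gherkin feature of the kind a tool such as Cucumber would parse (the feature, steps and numbers are invented purely for illustration):

```gherkin
Feature: Account withdrawal

  Scenario: Customer withdraws within their balance
    Given a customer with a balance of 100 GBP
    When the customer withdraws 40 GBP
    Then the remaining balance is 60 GBP
```

Each line maps, via fixture code, to a call on the application under test, and the `Then` line is checked against the actual response.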

Many years have passed since we all first heard about Agile - TDD, Scrum and Sprints - all the good stuff associated with this style of development. Having been a practitioner of different elements over the years, I have to confess that very few have stuck with me - except one: BDD.

For those who are new to BDD, here are my thoughts on why, as developers, we should write BDD tests and how to make it work.

  1. Develop stuff that works. Really works. Always.
  2. Short of time? Still build BDD tests
  3. My Best Toolset - Cucumber + Groovy + DI Framework
  4. Invest in test rig design. And then invest more.
  5. Find the right interface
  6. "BDD doesn't work for our application"
  7. When it all comes together, the wider benefits are revealed.

Develop stuff that works. Really works. Always.

As developers, our job every day is to build stuff that works for users and clients. The very concept of "works", however, is massively subjective and always open to different interpretations. Sometimes feature requirements are just one line in a chat, other times they are detailed specifications crafted over weeks by BAs. Normally, however, there is a lot of ambiguity. This leaves the developer with the problem of building something that "works" without always having a clear definition.

When using BDD, a number of scenarios are created for a given feature. These scenarios will at the very least describe the happy path requirement and normally also demonstrate how the application behaves in error scenarios. At the end of the process, these feature files will express the requirement in pure business terms (no mention of XML, JSON, REST URLs or SQL tables).
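To illustrate (with a hypothetical feature, not from any real project), a happy path and an error scenario might sit side by side, expressed purely in business terms:

```gherkin
Feature: Hotel room booking

  Scenario: Guest books an available room
    Given room 101 is available on 2016-06-01
    When a guest books room 101 for 2016-06-01
    Then the booking is confirmed

  Scenario: Guest books a room that is already taken
    Given room 101 is already booked on 2016-06-01
    When a guest books room 101 for 2016-06-01
    Then the booking is rejected with the reason "room unavailable"
```

Nothing here mentions HTTP, JSON or database tables - the same file reads as a requirement, a test and a piece of documentation.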

This also makes it clear to your line manager, pod, project manager or client precisely what has been delivered, because the output is readable and does not require translation from a developer. So we can generally dispense with the phrase "it works". There is no "it", there is only what we can demonstrate is working. (To paraphrase and misquote Yoda slightly.)

By exercising your code at the public interface (very different from unit tests), confidence is built that the thing actually works. It is being used exactly as it will be when deployed, so the testing gap is minimized. The more we test, earlier in the cycle, the greater the chance of success, because we have effectively taken ownership of both the development and the testing. Very Agile.

Another important element is that you have built a wall around your feature. You have actively defended it, fortified it and made sure that it will never break without someone knowing about it. Whether you run your CI environment on Jenkins or TeamCity or whatever, you can have confidence that this feature is working even when the code base is being developed by 10 other developers.

This is really valuable. BDD tests become the canary in the coal mine that give early warning that something, somewhere has changed and broken the tests and therefore the application.

Short of time? Still build BDD tests.

There is never enough time. Delivery expectations are ever present and there is always a deadline looming. Deadlines are a necessary (often artificial) evil without which nothing would ever get done. With this in mind, a developer is often coding against the clock to deliver a build that works. In this environment there are temptations to cut corners, and the first corner is automated tests.

The justification for this is rarely valid. In both the long and the short term, the time to code a working solution is always less when BDD tests are built along with the features. Quicker in the long term because it avoids regression later - but "I don't care about later, I just need to deliver this feature", I hear you say. It will also save time in the short term: a good BDD test makes it quicker to prove a feature is working as it is developed, saving those monotonous manual test cycles. It is always a time saver, so developers benefit themselves too.

It is easy to get carried away with automated testing in Agile, however. Most teams will already be trying to do unit tests. So where do you draw the line? My rule is this - if I am so short of time that I cannot write both unit tests and BDD tests, I will always drop the unit tests. Controversial, I know. But fundamentally I do not care if a method works, I care that the whole thing works. Much more.

My Best Toolset - Cucumber + Groovy + DI Framework

OK, so which of the many BDD tools is best? Well, I have tried three in earnest: Cucumber, Concordion and Spock.

My favourite (and this contradicts my earlier blog!) is Cucumber. Concordion is good, but I found it too fiddly to build the specifications, which need to be written in HTML. There is also no equivalent to step definitions within the framework.

Spock is pretty good in that it is written in Groovy and offers many conveniences. After trying, very hard, to like it, I recently gave up. My main concern is that Spock is too rigid about what can and cannot be initialised during setup/setupSpec but also, more importantly, that there is no specification involved. Without a specification to drive the test, it is back to writing integration tests with JUnit = Nooooo!!!! The appeal of Spock was the convenience of Groovy, and I found the Cucumber Groovy dialect not much use. I was missing the wood for the trees though: standard Java Cucumber works perfectly with Groovy, so it is possible to have the best of both worlds - the convenience of Groovy and the power of Cucumber.
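As a sketch of what that combination looks like, here are some hypothetical step definitions for a booking feature, written in Groovy against the standard Cucumber Java annotations (class, step and fixture names are invented, and the annotation package has moved between Cucumber versions - `io.cucumber.java.en` in recent releases):

```groovy
// Hypothetical step definitions: cucumber-java annotations, Groovy body.
import io.cucumber.java.en.Given
import io.cucumber.java.en.When
import io.cucumber.java.en.Then

class BookingSteps {
    def bookingClient   // fixture wrapping the application's public API
    def lastResponse

    @Given("room {int} is available on {string}")
    void roomAvailable(int room, String date) {
        bookingClient.setUpAvailableRoom(room, date)
    }

    @When("a guest books room {int} for {string}")
    void guestBooks(int room, String date) {
        lastResponse = bookingClient.book(room, date)
    }

    @Then("the booking is confirmed")
    void bookingConfirmed() {
        assert lastResponse.status == "CONFIRMED"
    }
}
```

The Groovy syntax keeps each step definition short, while Cucumber still supplies the specification-driven structure.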

A DI framework is also essential. It depends very much on the application under test, but DI frameworks are valuable in test code for all the same reasons they are important in application code. If you are running the application under test in "embedded" mode, then pulling in the top-level DI config is ideal for initialising the app.
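For example, with Spring (one DI framework among many - the config and bean names below are hypothetical), the test rig can boot the very same top-level config class the application itself uses:

```groovy
// Sketch: the test rig initialises the app from its own top-level
// Spring config. AppConfig and BookingService are invented names.
import org.springframework.context.annotation.AnnotationConfigApplicationContext

class TestRig {
    static def context

    static void start() {
        context = new AnnotationConfigApplicationContext(AppConfig)
    }

    static def bookingService() {
        context.getBean(BookingService)
    }
}
```

Because the wiring is identical to production, the embedded instance under test behaves like the deployed one.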

Ultimately, find the best toolset for you.

Invest in test rig design. And then invest more.

There are two common ways of testing an application. One is to fire it up as a local, embedded instance and then start testing at the public interface to ensure expected outputs are received for given inputs. The second is to have a remote instance running on another machine, which the tests then interact with remotely. Both are perfectly valid.
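One way to support both modes from the same suite is to default to an embedded instance but let a flag point the tests at a remote one. This is a sketch under assumed names (the system property and the `EmbeddedApp` helper are hypothetical, not a standard):

```groovy
// Resolve where the application under test lives: embedded by default,
// remote if a system property supplies a URL.
class TestTarget {
    static String baseUrl() {
        def remote = System.getProperty("test.remote.url")
        if (remote) {
            return remote                  // remote mode: app already running elsewhere
        }
        EmbeddedApp.startOnce()            // embedded mode: boot in-process
        return "http://localhost:${EmbeddedApp.port()}"
    }
}
```

The fixtures then only ever see a base URL, so the scenarios themselves are identical in either mode.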

Either way, the BDD tests expect to have access to a fully functional running instance of the application or component - and always through the public interface. The practicalities of achieving this are never simple and can require a fair investment of time by the developer. This is often the main reason people give up on BDD, for all its benefits.

My suggestion is to keep going and make something that works well enough. Often data is changing and so tests still need to work when underlying data is variable - which is a challenge. But persevere. It always pays off in the long term.

Build a test rig with fixtures around the public interface into your application or component, wherever it runs. It will be necessary to write fixture code that glues your specifications to the public interface - just very simple proxy code that translates from plain English (or language of choice) through to your API. This should never be much code, but it needs to be written. And this work needs to be done as part of the feature demand, not as a separate add-on piece of demand to write the tests: it is part of the feature.
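To give a feel for how thin this glue can be, here is a minimal fixture sketch that translates a business step into a call on a public REST interface (the endpoint path, JSON shape and class name are all assumptions for illustration):

```groovy
// Thin proxy fixture: one business action -> one call on the public API.
import groovy.json.JsonSlurper

class BookingClient {
    String baseUrl

    Map book(int room, String date) {
        def conn = new URL("${baseUrl}/bookings").openConnection()
        conn.requestMethod = 'POST'
        conn.doOutput = true
        conn.setRequestProperty('Content-Type', 'application/json')
        conn.outputStream.withWriter { it << """{"room": ${room}, "date": "${date}"}""" }
        new JsonSlurper().parse(conn.inputStream) as Map
    }
}
```

Note there is no business logic here at all - the fixture only translates, which is why it stays small.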

Find the right interface

I talk about interfaces a lot, more than most developers (I have noticed!). My observation is that many put interfaces in the same box as inheritance, encapsulation and polymorphism, i.e. some academic definition that they need to know for interviews and never again. And yet as a developer, few concepts are more powerful than the contract an interface offers. The isolation it offers to users on either side is useful in so many ways, as is the clarity of contract that makes it crystal clear what functionality is available. But it is most useful for separation of concerns - building solutions that do not care what is on the other side of the interface.

In a BDD context, there may already be a REST interface that your JavaScript UI talks back to. Which interface should we test against? Normally the back-end REST interface, rather than driving the UI in order to drive the REST interface. Usually there is already an interface you can work against - which is precisely how the application will be used in "real life" - but sometimes it is necessary to formalise the interface first.

Make sure your BDD tests interact at a public interface so that they are black box tests: they have no dependency on what is inside the black box, only on what it does. But, "I develop that black box, surely I should be testing my code?" Well, you are, implicitly, via a higher-level API that in turn exercises your code. Your BDD tests should always work at that higher-level interface and test the business functionality - not the classes and methods specifically.

It sounds like a contradiction, but using BDD in this way means you really don't care about the code, only what it does. This is the same as your clients and business users - they don't care about your code (sorry!) - so is very aligned to their perspective. As engineers, we care about precisely how we design and structure our code. There is a need for one person to be both inside and outside the code base, something I think developers struggle to get comfortable with.

"BDD doesn't work for our application".

Teams are often quick to reject BDD style tests because their application is thought to be incompatible.

All I can suggest here, from my experience with BDD, is to remember that development is a creative process. We are not always fixing, we are often creating. Many developers are loath to acknowledge this as they identify themselves as engineers - which we are. But there is a significant creative element involved in selecting the right solutions to achieve your goals. Inventing original, simple solutions. Making effective BDD specifications and test harnesses often requires some creative thinking.

There is one caveat though. I have yet to encounter a team that leveraged BDD as an end-to-end development process - where BAs write specifications as feature files, pass them to developers who implement them via a three amigos session, and the developer gives a demonstration once complete. This rarely happens - for reasons I should probably discuss in another blog - but my experience is that the toolset and approach do work for Agile developers.

When it all comes together, the wider benefits are revealed.

Over time, the golden source of specifications and examples that an application accrues effectively defines all business requirements to date, in business terms. They "box in" what the application does and does not do. A strict adherent to BDD principles would say that if it is not in the specifications, it is not supported. It is hard to argue with that. Not tested == not supported.

Once specifications are in place, however, they empower internal reworking and refactoring of code. Developers can confidently replace all or some of the back-end systems, safe in the knowledge that a large part of the functionality used by the business can be validated at the click of a button. By comparison, unit tests (to highlight the difference and relative on-going value) would be chucked in the bin with the old implementation.

In this way, applying BDD methodologies can have long-lasting quality benefits, allow confident re-architecting and reduce the long-term cost of application ownership.