Stubbing API calls using Stubby

Typically, when developing an application that talks to an API, the TDD approach I’ve historically followed has me using a mocking framework to stub a service component that talks to the API in my unit tests. I then include this service in my implementation code via dependency injection. A fairly common approach to writing tests and stubbing dependencies, I guess.

WebMock and Stubby

Colleagues of mine who build Rails apps often use the WebMock gem to ‘catch’ API requests and return a specific known response – a different kind of stub.

In the spirit of @b_seven_e‘s DDDNorth 2014 talk at the weekend – about our Ruby and .NET teams learning from each other – I found a .NET package called Stubby which does a similar thing to WebMock.

The source code and documentation can be found on GitHub – https://github.com/mrak/stubby4net

Getting Started

To try Stubby out, I’m building a simple ASP.NET MVC web app that will get its data from a RESTful API.

I began by adding the Stubby NuGet package

PM> Install-Package stubby

So to try out Stubby, I began with a single ‘happy path’ test which checks that, given the API returns some data, my web application displays it.

[Test]
public void When_you_view_a_muppet_profile_then_you_see_the_muppet_name()
{
    var controller = new MuppetsController();

    var actionResult = controller.GetMuppet("gonzo");

    var viewModel = (MuppetViewModel)((ViewResult)actionResult).Model;

    Assert.That(viewModel.Name, Is.EqualTo("Gonzo"));
}

This seems straightforward, but where is the actual stub for the API I’m going to call? That is set up via the following code in the test fixture –

private readonly Stubby _stubby = new Stubby(new Arguments
{
    Data = "../../endpoints.yaml"
});

[TestFixtureSetUp]
public void TestFixtureSetUp()
{
    _stubby.Start();
}

[TestFixtureTearDown]
public void TestFixtureTearDown()
{
    _stubby.Stop();
}

The YAML file which defines the actual behaviour of the stubbed API looks something like this –

- request:
    url: /muppets/gonzo/?$
  response:
    status: 200
    headers:
      content-type: application/json
    body: >
      {
        "name":"Gonzo",
        "gender":"Male",
        "firstAppearance":"1976"
      }

This is a basic request/response pair. In this case it matches any GET request to /muppets/gonzo and returns a 200 response with the specified headers and body.

There is much more you can do in terms of defining requests and responses – all detailed in the Stubby documentation. You can also set up the request/response directly in code, but I went with the YAML file approach first.

When I run my test, Stubby starts running on port 8882 by default. I add some config to my test project so that my implementation points at this host when making API calls, which means every API call the application makes is caught by Stubby. When I request /muppets/gonzo, the request is matched against the YAML file and the response above is returned.
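For illustration, that config can be nothing more than an appSetting in the test project’s App.config – the "MuppetApi.BaseUrl" key name is my own invention, not something Stubby requires –

<configuration>
  <appSettings>
    <!-- Point the app's API client at the local Stubby instance -->
    <add key="MuppetApi.BaseUrl" value="http://localhost:8882/" />
  </appSettings>
</configuration>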

So now I have a failing test, and I can write some basic implementation code to make it pass. In my case I add code to the controller which makes the API call, deserialises the returned JSON into an object, and maps that to a ViewModel which is returned with a View.
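As a rough sketch, the controller ends up looking something like this. The "MuppetApi.BaseUrl" appSetting, the MuppetDto class and the exact mapping are my assumptions for illustration, not the precise code –

using System;
using System.Configuration;
using System.Net.Http;
using System.Web.Mvc;
using Newtonsoft.Json;

public class MuppetsController : Controller
{
    // The base URL comes from config, so the tests can point it at Stubby
    // on http://localhost:8882 (the "MuppetApi.BaseUrl" key is an assumption).
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri(ConfigurationManager.AppSettings["MuppetApi.BaseUrl"])
    };

    public ActionResult GetMuppet(string name)
    {
        // Call the API, deserialise the JSON and map it to a view model.
        var response = Client.GetAsync("muppets/" + name).Result;
        response.EnsureSuccessStatusCode();

        var json = response.Content.ReadAsStringAsync().Result;
        var muppet = JsonConvert.DeserializeObject<MuppetDto>(json);

        return View(new MuppetViewModel
        {
            Name = muppet.Name,
            Gender = muppet.Gender,
            FirstAppearance = muppet.FirstAppearance
        });
    }

    private class MuppetDto
    {
        public string Name { get; set; }
        public string Gender { get; set; }
        public string FirstAppearance { get; set; }
    }
}

// The view model used by the test above – Name is required by the test,
// the other properties are assumed from the JSON returned by the stub.
public class MuppetViewModel
{
    public string Name { get; set; }
    public string Gender { get; set; }
    public string FirstAppearance { get; set; }
}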

More Tests

Once I had this test passing, I extended my API stub to include scenarios where the API returns a 404 or a 500 status code.

- request:
    url: /muppets/bungle/?$
  response:
    status: 404

- request:
    url: /muppets/kermit/?$
  response:
    status: 500

This allowed me to explore how my application would respond if the API was unavailable, or if it was available but returned no resource. In this case I decided that I wanted my application to act in different ways in these two different scenarios.
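To give a flavour, a test for the ‘not found’ scenario could look something like this – the expectation of an HttpNotFoundResult is just one possible choice, and it assumes the controller has been extended to translate the API’s 404 into an HttpNotFound() result rather than throwing –

[Test]
public void When_you_view_a_missing_muppet_then_you_get_a_not_found_result()
{
    var controller = new MuppetsController();

    // Stubby returns a 404 for /muppets/bungle, per the YAML above.
    var actionResult = controller.GetMuppet("bungle");

    Assert.That(actionResult, Is.InstanceOf<HttpNotFoundResult>());
}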

Refactoring & Design

With these tests green, it now feels like the right time to refactor the implementation code.

I haven’t ended up with the service and repository components that I would normally have if I’d followed my old TDD approach of writing tests that stub the API component in code.

I can introduce these components myself now, but I feel a lot more free to exercise some design over my code at this point.

This feels like a good thing. I have a set of tests that stub the external dependency and give me confidence that my application is working correctly. But the tests don’t influence the implementation, nor do they mirror it in the way they sometimes did with my previous approach. The tests feel more loosely coupled from my implementation.

This also feels a bit more like the approach outlined by @ICooper in his “TDD – Where did it all go wrong” video – stub the components which you don’t have control over, make the test pass in the simplest way possible, then introduce design at the refactoring stage – without adding any more new tests.

Testing Timeouts

Another interesting thing we can do with Stubby is test what happens if the API times out. If I set up a request/response pair in my YAML file that looks like this –

- request:
    url: /muppets/beaker/?$
  response:
    status: 200
    latency: 101000

this will ensure that the API responds after a delay of 101 seconds (1:41) – 1 second longer than HttpClient’s default 100-second timeout.

So by requesting this URL, I can effectively test how my application reacts to API timeouts. Obviously I wouldn’t want to run lots of timeout tests all the time, but they could run as part of an overnight test run.
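Sketching that as a test – the [Category("Slow")] attribute and the assumption that the controller catches the timeout and returns an ‘Error’ view are mine, not from the original code –

[Test, Category("Slow")] // keep it out of the fast runs, e.g. for an overnight build
public void When_the_API_times_out_then_you_see_the_error_view()
{
    var controller = new MuppetsController();

    // Stubby delays /muppets/beaker by 101 seconds, so HttpClient times out.
    var actionResult = controller.GetMuppet("beaker");

    Assert.That(((ViewResult)actionResult).ViewName, Is.EqualTo("Error"));
}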

This gets more useful if I’m creating an application which makes multiple API calls for each screen, and I want my application to fail more gracefully than just displaying an error page if an individual API error occurs.

Is this a worthwhile approach?

Well, it feels pretty good, so after discussion with my colleagues, we’re going to try it out on some production work and see how it goes… watch this space…

Failure IS an option

I wrote this post after coming off a daily ‘stand-up’ call where one developer admitted he “didn’t know what he was doing” because he’s covering for one of our UI developers while she’s off, and a DBA told us we weren’t seeing the data we expected that morning because he ran the wrong package by mistake.

It got me thinking about how it’s important to encourage an atmosphere where people aren’t afraid to talk about the mistakes they’ve made.

No-one admonished these people for saying these things. We respect someone for being open about the fact that they made a mistake, and then fixed it. We admire someone for actively stepping out of their comfort zone to work on something they’re not used to – it broadens their skills and reduces the number of single points of failure in our team – which in turn helps to keep our work flowing.

Whilst failure can be bad – it is also a chance to learn, and improve. It’s okay to make mistakes, and to admit when you don’t know the best way to do something, as long as you learn from it!

photo credit: ncc_badiey cc