The why

Microservices are the result of the evolution of service oriented architecture. This post isn’t about the pros and cons of microservices or the appropriate granularity of microservices; there is no single right answer to those questions. However, one outcome of a microservices based architecture is a significant increase in the number of dependencies. Engineering teams own one (or more) microservices and depend on other microservices for the functionality they provide. For example, our service that performs a business action on behalf of the user depends on the authentication and authorization functions provided by another service.

While developing in such a setup, we have to be mindful of these dependencies when testing locally. Frameworks like Mockito let us mock most dependencies in our unit tests. Integration tests help us exercise our integration points with other services, along with the behavior of our service in a production-like mode. Most likely, the integration test environment is shared across all engineering teams. This usually results in one of two things:

  1. Our service takes a dependency on the integration-test versions of dependent services and we are at the mercy of our dependencies not breaking randomly. This undermines the confidence that integration tests are supposed to provide.
  2. Our service uses the “production” endpoints of our dependencies. This is rather precarious: we may inadvertently cause production impact through an errant integration test or a bug in our code.

What we really need is predictable behavior from our dependencies while we test our service.

Mountebank is an open source application that lets you mock your remote dependencies. We can use Mountebank to create a full self-contained local development environment where we get to define the expected behavior(1) from our dependencies and test our application without changing our configuration. Let’s work out an example.

Setup

Example controller with a remote dependency

Here, we have a controller implementation that depends on a remote microservice. Spring Boot and Java just happen to be the choices for this example; the concept is the same regardless of stack.

@RestController
public class AccountController {

    /** Client for a  remote dependency */
    private final AuthServiceClient auth;

    public AccountController() {
        // We have a remote dependency on auth.example.com
        // NOTE: These dependencies should be injected into this class
        this.auth = new AuthServiceClient("http://auth.example.com/v1/");
    }

    /**
    * This is a dummy handler implementation 
    */
    @RequestMapping("/accounts/${accountId}/deposit")
    public Response deposit(@PathParam(value="accountId") String accountId, 
                            String userId, BigDecimal amount) {
        // Make the remote call to check permission
        boolean hasPermission = 
                auth.checkPermission("ACCOUNT_UPDATE", userId, accountId);

        if(hasPermission) {
            return updateBalance(userId, accountId, amount);
        } else { throw new PermissionException("Cannot update account"); }
    }
}
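The AuthServiceClient itself isn’t shown in the original snippet. Here is a minimal, hypothetical sketch of what it could look like, assuming a plain JSON POST via Spring’s RestTemplate, with field names matching the request and response we capture with curl later in this post:

import java.util.Map;

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestTemplate;

public class AuthServiceClient {

    private final RestTemplate restTemplate = new RestTemplate();
    private final String baseUrl;

    public AuthServiceClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    /** POSTs a permission check to the auth service and reads auth_status from the response. */
    public boolean checkPermission(String permission, String userId, String accountId) {
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);

        Map<String, String> request = Map.of(
                "userId", userId,
                "accountId", accountId,
                "permission", permission);

        // The auth service replies with {"auth_status": "true"} when the user has permission
        @SuppressWarnings("unchecked")
        Map<String, String> response = restTemplate.postForObject(
                baseUrl, new HttpEntity<>(request, headers), Map.class);

        return response != null && Boolean.parseBoolean(response.get("auth_status"));
    }
}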

A simple docker compose file

This docker-compose.yml will load the service for us with appropriate volumes and port bindings.

version: '3'
services:
  account_service: 
    image: example/account_service:latest
    container_name: account_service
    volumes:
      - ./logs:/app/logs
    ports:
      - "443:443"

If we need to test this setup on our development machine, we can:

  1. Disable remote calls based on an environment variable. This is a potential attack vector in production and extra code to maintain.
  2. Use the integration or production endpoints of the Auth service, if they are reachable from our network.
  3. Use mountebank to write remote mocks and test locally.

Introducing Mountebank

From the mountebank website:

mountebank is the first open source tool to provide cross-platform, multi-protocol test doubles over the wire. Simply point your application under test to mountebank instead of the real dependency, and test like you would with traditional stubs and mocks.

Figuring out the request/response structure for Auth service

There are multiple ways to go about determining the request and response structure for any service. In addition to looking at the backing types for the request and response, we can inspect any available API specs or simply curl the endpoint with the required info and capture the response. For example:

$ curl -X POST "http://auth.example.com/v1/" \
        -d '{"userId": "rohit@example.com" , "accountId": "32435638632893", "permission" : "ACCOUNT_UPDATE"}' \
        -H 'Content-Type: application/json'
{
    "auth_status" : "true"
}

Building an Imposter

We’ll build one for the Auth service, saving it as imposters.ejs locally. An imposter contains stubs that define responses for every request matching a predicate.

{
  "port": 80,
  "protocol": "http",
  "stubs": [{
    "responses": [{ 
        "is": { 
            "statusCode": 200,
            "body": { "auth_status": "true" }
        }
    }],
    "predicates": [{
        "equals": {
            "path": "/v1/",
            "method": "POST",
            "headers": { 
                "Host": "auth.example.com",
                "Content-Type": "application/json" 
            }
        }
    }]
  }]
}

In the above imposter, we:

  1. Bind to port 80 with the http protocol
  2. Define a predicate that will attempt to match the incoming request
  3. Define a stub that returns a response with auth_status set to true when the request matches the predicate

This very simple imposter will suit our needs for now. Check out the mountebank documentation to learn about the options available for building more complex imposters.
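As a side note, the config file is not the only way to load this: mountebank also exposes an admin API on port 2525, so the same imposter JSON can be POSTed to a running instance. A quick sketch, assuming mountebank is already running locally:

$ curl -X POST "http://localhost:2525/imposters" \
        -H 'Content-Type: application/json' \
        -d @imposters.ejs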

Putting it all together

Building a docker container

Here’s a simple Dockerfile to build a container running mountebank.

FROM alpine:3.9

RUN apk update && apk add --no-cache nodejs npm

# Install mountebank in the container
ENV MOUNTEBANK_VERSION=2.0.0
RUN npm install -g mountebank@${MOUNTEBANK_VERSION} 

# Set up a working directory
RUN mkdir -p /mountebank
WORKDIR /mountebank

# Run container with mountebank
ENTRYPOINT mb --configfile /mountebank/imposters.ejs --host 0.0.0.0

A container built with the above Dockerfile will run mountebank with /mountebank/imposters.ejs as its input.
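Before wiring this into docker-compose, we can sanity-check the imposter by building and running the container on its own. A rough sketch, where the image tag and the 8080 host port are arbitrary choices for this illustration:

$ docker build -t example/mountebank-mocks .
$ docker run -d -p 2525:2525 -p 8080:80 \
        -v $(pwd)/imposters.ejs:/mountebank/imposters.ejs \
        example/mountebank-mocks
$ curl -X POST "http://localhost:8080/v1/" \
        -H 'Host: auth.example.com' \
        -H 'Content-Type: application/json' \
        -d '{"userId": "rohit@example.com", "accountId": "32435638632893", "permission": "ACCOUNT_UPDATE"}'
{
    "auth_status" : "true"
}

Note that the Host header has to be set explicitly here, since the predicate matches on it.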

Running alongside AccountService

We need to run this mountebank container alongside our service for it to be useful as a remote mock. The easiest and most self-contained way to do this is to include it in the same docker-compose.yml as another service. Here’s the updated version:

version: '3'
services:
  account_service: 
    image: example/account_service:latest
    container_name: account_service
    volumes:
      - ./logs:/app/logs
    ports:
      - "443:443"
    networks:
      account_service_network:
        ipv4_address: 172.1.1.5
  mountebank:
    build:
      context: account_service/mocks
      dockerfile: Dockerfile
    container_name: mountebank
    ports:
      - "2525:2525"
    volumes:
      - ./account_service/mocks/imposters.ejs:/mountebank/imposters.ejs
    networks:
      account_service_network:
        ipv4_address: 172.1.1.6
        aliases:
          - auth.example.com
networks:
  account_service_network:
    driver: bridge
    ipam:
      driver: default
      config:
        -
          subnet: 172.1.1.0/24

Alright, so lots happening in this compose file. We have:

  1. Created a network account_service_network for all containers in the file
  2. Created a new service entry for mountebank with a local container
  3. Mounted our defined imposters.ejs as a volume
  4. Assigned ipv4 addresses to our containers
  5. Defined a network alias for auth.example.com pointing to mountebank

This results in all calls to auth.example.com being handled by mountebank. We can mock as many dependencies as we need using this method: all we have to do is add a corresponding stub to the imposter and a network alias in the docker-compose file.
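A quick way to confirm the wiring, assuming curl is available inside the account_service image, is to make the auth request from that container; it should now be answered by mountebank:

$ docker-compose exec account_service \
        curl -X POST "http://auth.example.com/v1/" \
        -H 'Content-Type: application/json' \
        -d '{"userId": "rohit@example.com", "accountId": "32435638632893", "permission": "ACCOUNT_UPDATE"}'
{
    "auth_status" : "true"
}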

Wrap

We now have a way to run a local test of our service without doing any configuration wrangling for dependencies. In fact, with advanced stub configuration, we can model failures from our dependencies and verify that we do indeed handle them correctly. No more ssh tunnels or depending on production endpoints.
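As a sketch of that idea, a stub like the one below simulates the auth service being slow and unavailable, letting us verify how AccountController behaves when the permission check fails. The error body is made up for this illustration, and mountebank’s wait behavior adds artificial latency. Since stubs are matched in order, a failure stub like this would be listed ahead of (or swapped in for) the success stub while testing that scenario:

{
    "responses": [{
        "is": {
            "statusCode": 503,
            "body": { "error": "auth service unavailable" }
        },
        "_behaviors": { "wait": 2000 }
    }],
    "predicates": [{
        "equals": {
            "path": "/v1/",
            "method": "POST",
            "headers": { "Host": "auth.example.com" }
        }
    }]
}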