This article assumes that you are already familiar with developing Node applications with Express and Mongo. The aim is to take an existing application and, with minimal effort, increase its testability and introduce reusable abstractions using inversion of control.
It all starts at the spine of your application: the route definition file. For simplicity and brevity, the code in this article is meant to illustrate intent; it does not include full error handling or every best practice.
A basic route file defines endpoints inline. This is typical of Node back ends, particularly hello-world and tutorial examples and apps that grow from them over time. Inline handlers work for basic use cases and are quick to write, but as complexity grows, testability is limited to outside-in integration tests and the code is not very reusable because everything is defined inline.
import { Express } from 'express';
import mongoose from 'mongoose';

export const register = (app: Express) => {
  const Cat = mongoose.model('Cat');

  // the route handler, data access, and response logic all live inline
  app.get('/cats', (req, res, next) => {
    Cat.find({}, (err, cats) => {
      if (err) {
        return next(err);
      }
      res.send(cats);
    });
  });
};
The industry has converged on a popular solution for separating concerns between the different layers of an application; it is commonly referred to as hexagonal architecture, onion architecture, (an adapted) model-view-controller, or clean architecture, whatever you want to call it.
The goal is to split the application into three distinct areas.
// repository layer: owns data access
const repositoryCall = (cb) => {
  const Cat = mongoose.model('Cat');
  Cat.find({})
    .lean()
    .exec((err, cats) => {
      if (err) {
        return cb(err);
      }
      cb(null, cats);
    });
};

// service layer: owns business logic
const serviceCall = (cb) => {
  repositoryCall((err, cats) => {
    if (err) {
      return cb(err);
    }
    // do some other stuff. usually more interesting in POST calls.
    cb(null, cats);
  });
};

// controller layer: owns the request/response cycle
const handler = (req, res, next) => {
  serviceCall((err, cats) => {
    if (err) {
      return next(err);
    }
    res.send(cats);
  });
};

app.get('/cats', handler);
While these abstractions look trivial in the code above, they increase testability and isolation, and that benefit becomes clear as services grow. Let’s dive into each layer in a bit more detail.
Repositories (for a given bounded context) are a centralized location for all of your data access logic; they coordinate the reconstruction of domain entities through factories. Much of this functionality comes with an object-relational mapper, but I recommend not leaning on one for it, because ORMs are often a leaky abstraction and bleed data access logic all over your app. Avoiding tight coupling to an object-relational or object-document mapper is a worthwhile investment. Repositories make it easy to reuse query transforms, spot which indices need to be added, and, most importantly, stub out data access for testability. A key value-add of the repository pattern is the clean separation between interface and data access, which lets client code change database implementations fairly easily. Many shy away from the repository pattern with the rationale of “how often do you actually change databases?”, but the reality is: it depends.
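To make the separation concrete, here is a minimal sketch of a repository interface with two interchangeable implementations. The names (CatRepository, MongoCatRepository, InMemoryCatRepository) are illustrative, not from any particular library.

import { Model, Document } from 'mongoose';

// the contract that client code depends on
interface CatRepository {
  findAll(): Promise<any[]>;
  findById(id: string): Promise<any>;
}

// a Mongo-backed implementation
class MongoCatRepository implements CatRepository {
  constructor(private model: Model<Document>) {}

  findAll() {
    return this.model.find().lean().exec();
  }

  findById(id: string) {
    return this.model.findById(id).lean().exec();
  }
}

// an in-memory implementation: handy in tests, and proof that nothing
// upstream needs to know which data store sits behind the interface
class InMemoryCatRepository implements CatRepository {
  constructor(private cats: any[] = []) {}

  async findAll() {
    return this.cats;
  }

  async findById(id: string) {
    return this.cats.find(cat => cat._id === id);
  }
}

Client code depends only on the interface, so swapping the Mongo implementation for the in-memory one (or anything else) doesn’t ripple outward.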
As your application grows you may want to replace disk calls (Mongo reads) with Redis caches, or fork areas of the codebase into separate services if you move toward a microservice architecture. When implementing the repository pattern, you need to find a common denominator across data stores and leverage that for your abstracted calls. If your domain depends heavily on features of a particular data store, for example Firebase for real-time updates, not all of your code will sit behind a repository; apply the pattern with salt to taste. Those store-specific pieces will need a heavy lift if you ever change data stores, and it’s better to defer that effort than to build for it up front. For low-hanging fruit like basic CRUD functionality, you can write a data-store-agnostic repository with far less code than writing the database queries yourself. See this repository on GitHub for common CRUD implementations in Node across almost a dozen popular data stores, from Firebase to Mongo to SQL: https://github.com/blugavere/node-repositories
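For instance, a cache can be layered in front of an existing repository without touching client code. Here is a minimal sketch, reusing the CatRepository interface from the sketch above and using an in-memory Map where a Redis client would otherwise go.

// a caching decorator around any CatRepository implementation;
// a Map stands in for Redis here, and swapping in a real Redis client
// changes nothing about the interface the rest of the app sees
class CachedCatRepository implements CatRepository {
  private cache = new Map<string, any>();

  constructor(private inner: CatRepository) {}

  async findAll() {
    const hit = this.cache.get('cats:all');
    if (hit) {
      return hit;
    }
    const cats = await this.inner.findAll();
    this.cache.set('cats:all', cats);
    return cats;
  }

  async findById(id: string) {
    const key = `cats:${id}`;
    const hit = this.cache.get(key);
    if (hit) {
      return hit;
    }
    const cat = await this.inner.findById(id);
    this.cache.set(key, cat);
    return cat;
  }
}

// usage: const repo: CatRepository = new CachedCatRepository(new MongoCatRepository(model));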
For more information about repositories, see “Understanding the Repository Pattern.”
When your application has business logic, such as conditional statements or interactions with third-party services, that logic may be triggered from request handlers (in an Express app) or from something else entirely, such as a scheduler or a queue consumer. It’s helpful to separate the actual business logic from the infrastructure that triggers it; this is known as the “Ports and Adapters” pattern. Among other benefits, such as reusability, the main one is indirection from the trigger itself. Services hold all procedural logic and orchestrate actions, either inline or by calling the respective domain entity methods.
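As a rough sketch of Ports and Adapters, the same service method can be driven by an Express handler or by a queue consumer. The names below are illustrative, and the queue.subscribe shape is a stand-in for whatever client your queue library exposes.

import { Request, Response, NextFunction } from 'express';

// the service holds the business logic and knows nothing about what triggers it
class AdoptionService {
  async adopt(catId: string, ownerId: string) {
    // conditionals, third-party calls, domain entity methods, etc. live here
    return { catId, ownerId, adoptedAt: new Date() };
  }
}

// adapter 1: an Express request handler drives the service
const adoptHandler = (service: AdoptionService) =>
  (req: Request, res: Response, next: NextFunction) => {
    service
      .adopt(req.params.id, req.body.ownerId)
      .then(result => res.send(result))
      .catch(next);
  };

// adapter 2: a queue consumer drives the same service
interface Queue {
  subscribe(topic: string, handler: (msg: any) => Promise<void>): void;
}

const registerConsumer = (queue: Queue, service: AdoptionService) => {
  queue.subscribe('cat.adoption.requested', async msg => {
    await service.adopt(msg.catId, msg.ownerId);
  });
};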
Controllers are just request/response handlers (in the context of an HTTP request). They parse query parameters, objects attached by middleware, and request payloads, and figure out which business logic needs to execute as a result. Controllers also transform the outbound data for the consumer. Domain logic can execute in a number of different contexts, and the controller is the window from one such context into the domain.
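A small sketch of that responsibility: the controller below parses a query parameter, delegates to a service, and reshapes the outbound payload. The names are illustrative.

import { Request, Response, NextFunction } from 'express';

class CatsController {
  constructor(private service: { findAll(limit: number): Promise<any[]> }) {
    this.list = this.list.bind(this);
  }

  public list(req: Request, res: Response, next: NextFunction) {
    (async () => {
      // parse and bound the inbound query parameter
      const limit = Math.min(parseInt(req.query.limit as string, 10) || 20, 100);
      const cats = await this.service.findAll(limit);
      // transform the outbound shape for this particular consumer
      res.send({ count: cats.length, items: cats });
    })().catch(next);
  }
}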
Here’s an implementation leveraging ES6 classes and inversion of control, manually constructing and wiring a Mongoose-backed repository.
import mongoose, { Model, Document } from 'mongoose';
import express, { Request, Response, NextFunction } from 'express';

class Repository {
  constructor(private model: Model<Document>) {
    this.findAll = this.findAll.bind(this);
  }

  public async findAll(): Promise<any[]> {
    return this.model.find().lean();
  }
}

class Service {
  constructor(private repo: Repository) {
    this.findAll = this.findAll.bind(this);
  }

  public findAll() {
    return this.repo.findAll();
  }
}

class Controller {
  constructor(private service: Service) {
    this.findAll = this.findAll.bind(this);
  }

  public findAll(req: Request, res: Response, next: NextFunction) {
    (async () => {
      const docs = await this.service.findAll();
      return res.send(docs);
    })().catch(next);
  }
}

// some required boilerplate omitted for brevity
const schema = new mongoose.Schema({});
const model = mongoose.model('Cat', schema);

const repo = new Repository(model);
const service = new Service(repo);
const controller = new Controller(service);

const app = express();
app.get('/cats', controller.findAll);
Here I’m showing a class-based implementation of an n-tier architecture. Because the repository receives its model through its constructor, you can easily construct it with an alternative implementation for a generic CRUD read, which demonstrates the power of dependency injection.
// a stand-in for the mongoose model, covering only the slice of the API
// that the repository actually calls
const repo = new Repository({
  find() {
    return { lean: async () => [] };
  },
} as any);
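The same substitution works when testing the service: hand it a stubbed repository instead of the Mongoose-backed one. A minimal sketch, assuming a Jest-style test runner (not part of the app above):

describe('Service', () => {
  it('returns whatever the repository returns', async () => {
    // a hand-rolled stub standing in for the mongoose-backed repository
    const stubRepo = {
      async findAll() {
        return [{ name: 'Whiskers' }];
      },
    } as Repository;

    const service = new Service(stubRepo);

    expect(await service.findAll()).toEqual([{ name: 'Whiskers' }]);
  });
});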
This flexibility enables a developer to mock or stub particular components of the architecture to reach difficult edge cases in automated tests, and it also creates a more flexible design that allows for change later. As an application grows into multiple services spanning business domains, dependency graphs naturally grow complex and structural configuration can become unwieldy. With inversion of control, application bootstrapping code has to construct instances of (often) singleton objects with many dependencies, and object references have to be passed around in specific ways: via dictionaries, constructor arguments, or setter injection. For a simple application, writing this wiring inline in a main file, or in a separate configuration file, works fine; for a large application, a Dependency Injection (DI) Container is often used.
There are many such libraries and frameworks available in virtually every popular programming language, as the pattern is quite common. Fundamentally, the problem a DI Container solves is complex object construction with deep dependency chains. It does so by registering objects along with their declared dependencies and then, via some mechanism, figuring out how to construct the object graph in a consistent and efficient way.
Dependency Injection has been around for a long time but was popularized in the JavaScript community with the release of AngularJS in 2010, where there was a strong push for testability of every component of an application.
I strongly recommend trying to roll your own DI Container as a learning activity if you don’t already fully understand the inner workings. It often sits as the backbone of an application’s architecture and feels like magic that just works, so it’s good to know what’s going on.
Give it a go yourself, or find a tutorial to help you understand it a bit better; before moving to production, though, there are plenty of npm packages available that provide a more complete feature set and are battle-tested.
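To make the idea concrete, here is a minimal, hand-rolled container sketch: constants and factories are registered with their declared dependencies, and get resolves the graph recursively. It illustrates the mechanism only, not how any particular library implements it.

// a minimal, hand-rolled container: register constants and factories with
// their declared dependencies, then resolve the object graph recursively
type Factory = new (...args: any[]) => any;

class TinyContainer {
  private constants = new Map<string, any>();
  private factories = new Map<string, { deps: string[]; factory: Factory }>();
  private instances = new Map<string, any>();

  // register a ready-made value (a config object, a mongoose model, ...)
  register(name: string, value: any) {
    this.constants.set(name, value);
  }

  // register a class plus the names of the things it needs
  factory(name: string, factory: Factory, deps: string[] = []) {
    this.factories.set(name, { deps, factory });
  }

  // resolve by name, constructing (singleton) instances on demand
  get(name: string): any {
    if (this.constants.has(name)) {
      return this.constants.get(name);
    }
    if (this.instances.has(name)) {
      return this.instances.get(name);
    }
    const entry = this.factories.get(name);
    if (!entry) {
      throw new Error(`Nothing registered under "${name}"`);
    }
    const args = entry.deps.map(dep => this.get(dep));
    const instance = new entry.factory(...args);
    this.instances.set(name, instance);
    return instance;
  }
}

// usage:
// const container = new TinyContainer();
// container.register('config', { greeting: 'hello' });
// container.factory('Service', Service, ['config']);
// container.get('Service'); // constructs config -> Service once, then reuses it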
The DI Container I tend to use most often is Boxed Injector (https://github.com/giddyinc/boxed-injector), which I wrote; it’s an extremely simple implementation that does no more than promised.
Of course, the library you choose should be the one that best fits your needs, but I tend to prefer libraries that do one thing really well and don’t try to do too much.
A DI Container is fairly simple, but some features that I think would make one container more compelling than another are:
Here’s an extremely basic Express app that uses the DI Container to wire up its dependencies. Note that the value may seem insignificant in a trivial app, but the object construction code does add up.
import { Injector } from 'boxed-injector';
import express, { NextFunction, Request, Response } from 'express';
import mongoose, { Document, Model } from 'mongoose';

const app = express();

const User = mongoose.model(
  'User',
  new mongoose.Schema({
    email: String,
  }),
);

class Repository {
  public static inject = ['User'];

  constructor(private User: Model<Document>) {}

  public findById(id: string) {
    return this.User.findById(id).lean();
  }
}

class Service {
  public static inject = ['Repository'];

  constructor(private repository: Repository) {}

  public findById(id: string) {
    return this.repository.findById(id);
  }
}

export class Controller {
  /**
   * 'Service' is declared as a dependency
   */
  public static inject = ['Service'];

  /**
   * the service instance is automatically constructed
   * and passed in as an argument to the controller
   * during its construction
   */
  constructor(private service: Service) {
    this.findById = this.findById.bind(this);
  }

  public findById(req: Request, res: Response, next: NextFunction) {
    (async () => {
      const id = req.params.id;
      const user = await this.service.findById(id);
      res.send({
        user,
      });
    })().catch(next);
  }
}

const injector = new Injector();

// the model is registered as a constant; the classes are registered as
// factories and are constructed on demand from their declared dependencies
injector.register('User', User);
injector.factory('Repository', Repository);
injector.factory('Service', Service);
injector.factory('Controller', Controller);

const controller: Controller = injector.get('Controller');

app.get('/users/:id', controller.findById);

app.listen(3000);
Thank you for reading! To be the first notified when I publish a new article, sign up for my mailing list!