
Hexagonal Architecture Is Principally Good

We humans tend to like and promote things not because they are good, but because we are momentarily charmed by them, or simply because we don’t want to be wrong. I know this because I do it myself. To stay out of trouble in my professional life, I try not to rely too much on my instincts and instead discuss the pros and cons, something I started doing in code reviews to avoid “that’s not the way I would do it” comments. Over the past year I have become very interested in software architecture, and especially in hexagonal architecture. And it made me wonder: why is it good? Let’s find out.

Single Responsibility Principle (SRP)

According to the SRP, the objects (classes, modules, …) in your system should have no more than one responsibility, i.e. only one reason to change, or put differently, they should serve only a single actor (user, component etc.) in the system. Hexagonal architecture helps you see the responsibilities more clearly, because it forces you to define the ports separately from their implementations (without it, the communication contract stays implicit in the implementation). On top of that, thanks to the link to adapters, you usually answer to a single actor, because each actor typically interacts with the system through its own adapter, though not every single time. You can definitely work against this by bulking the ports up into large facades, but at that point it will at least be painfully obvious. When talking about component-level architecture, a more fitting rule is the Common Closure Principle (CCP), which is pretty much the SRP restated for components.
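
To make this concrete, here is a minimal sketch of actor-specific driving ports (the names and use case are hypothetical, not taken from the example later in this post). Each port answers to exactly one actor, so a change requested by the shelter administrator never touches the contract the visitor-facing adapter depends on:

// cats/domain/visitor-catalog.port.ts
// Driving port serving a single actor: the website visitor.
export default interface VisitorCatalog {
  listAdoptableCats: () => Promise<string[]>
}

// cats/domain/shelter-admin.port.ts
// Driving port serving a single actor: the shelter administrator.
export default interface ShelterAdmin {
  registerCat: (name: string, age: number) => Promise<void>
  markAsAdopted: (name: string) => Promise<void>
}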

Open Closed Principle (OCP)

This is an inherent feature of a hexagonal system at the boundary between the domain and the infrastructure (the adapter layer is usually called infrastructure, as the counterpart of the domain). All adapters are by definition replaceable and interchangeable: to extend the system with a new way of talking to the outside world, you add an adapter instead of modifying the domain.
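
As a small, hypothetical sketch (the notification port and adapter are made up for illustration): the domain is closed for modification because it only ever sees the port, while the system stays open for extension because a new channel is just another adapter implementing the same port.

// notifications/domain/notification.port.ts
export default interface NotificationPort {
  notify: (recipient: string, message: string) => Promise<void>
}

// notifications/infra/notification.console.ts
import NotificationPort from "../domain/notification.port";

// Adding e-mail, SMS or push later means adding another class like
// this one; nothing in the domain has to change.
export default class ConsoleNotification implements NotificationPort {
  async notify(recipient: string, message: string) {
    console.log(`to ${recipient}: ${message}`);
  }
}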

Liskov Substitution Principle (LSP)

If you are new to the concept of hexagonal architecture, it might seem odd to build an adapter pattern around a single implementation. But the core benefit of the isolation is testability, and to get it comfortably you want to test the parts in isolation with mocks or fakes. That means your code will run with at least two adapters for each port: the production one and the testing one. A breach of the LSP is usually a problem of:

a) hidden (yet observable) behavior nuance

b) the need to “type-cast” or handle a special case in the user code

Preventing the first one is difficult, but avoiding the second is actually easier: because you typically implement both adapters side by side, you notice the exceptions early enough to realize that the presumptions of the initial design were flawed.
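
For illustration, here is a hypothetical port with a testing fake (none of this comes from the example later in the post). The observable contract, “return null for a missing key”, is what both the production adapter and the fake must honor, so callers never have to special-case which adapter sits behind the port:

// session/domain/token-store.port.ts
export default interface TokenStore {
  get: (key: string) => Promise<string | null>
  set: (key: string, value: string) => Promise<void>
}

// session/infra/token-store.in-memory.ts
import TokenStore from "../domain/token-store.port";

export default class InMemoryTokenStore implements TokenStore {
  private store = new Map<string, string>();

  async get(key: string) {
    // Returning null instead of throwing keeps the fake substitutable
    // for the production adapter under the same observable contract.
    return this.store.get(key) ?? null;
  }

  async set(key: string, value: string) {
    this.store.set(key, value);
  }
}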

Interface Segregation Principle (ISP)

This one comes for free; it is the direct, obvious payoff of the hard work you put in. All contracts are separated from the implementation and well defined. Not only does this bring independence, it also helps you get rid of the clutter that is not essential to the interaction. This is one of the good reasons to keep the port discipline on the driving side as well.
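
A hedged sketch of what the segregation can look like (hypothetical ports, not from the example later in this post): a reporting use case depends only on the narrow read port, an administration use case only on the archiving port, and neither is forced to see methods it never calls, even if a single adapter ends up implementing both.

// billing/domain/invoice-reader.port.ts
export default interface InvoiceReader {
  getTotal: (invoiceId: string) => Promise<number>
}

// billing/domain/invoice-archiver.port.ts
export default interface InvoiceArchiver {
  archive: (invoiceId: string) => Promise<void>
}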

Dependency Inversion Principle (DIP)

Again, an obvious win: the zero-dependency policy in the domain keeps the direction of dependencies aligned with the level of abstraction. With the secondary ports, you naturally apply inversion of control: the domain defines the contract, and the infrastructure depends on it, not the other way around.
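
Here is a minimal sketch of that wiring. It reuses the CatRepository port and the PostgreSQL adapter shown later in this post; the AdoptionService and its rule are hypothetical. The domain imports only the port it owns, while the adapter and the composition root sit in the infrastructure and point their imports toward the domain:

// cats/domain/adoption.service.ts
import CatRepository from "./cat-repository.port";

export default class AdoptionService {
  // The domain depends on the abstraction it defines, never on "pg".
  constructor(private readonly cats: CatRepository) {}

  async isAdoptable(name: string) {
    const cat = await this.cats.getCat(name);
    return cat.age >= 1; // hypothetical rule, for illustration only
  }
}

// cats/infra/composition-root.ts
import { Client } from "pg";
import AdoptionService from "../domain/adoption.service";
import CatRepositoryPg from "./cat-repository.pg";

// Only the outermost layer knows both sides and wires them together.
const adoption = new AdoptionService(new CatRepositoryPg(new Client()));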

Common Reuse Principle (CRP)

“Don’t force users of a component to depend on things they don’t need.” This is again a return to a concept similar to the SRP or the ISP (the CRP is essentially the component-level counterpart of the ISP). By keeping the ports small and single-purpose, you allow the interactors (domain or adapters) to depend only on “what they need” and no more.

Acyclic Dependency Principle (ADP)

Hexagonal architecture does not prevent you from creating cyclic dependencies (speaking from experience); in fact it is not that hard to end up with one, especially with port implementations living in the domain. And if you use a language or framework where you can practically import anything from anywhere (looking at you, JavaScript), you will need to take dependencies even more seriously, because they play a much larger role. You will likely run into cycles anyway, but at least you have a higher chance of noticing them early on.
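
As an illustration of how such a cycle sneaks in (hypothetical files, not from the example later in this post): a port declares a payload type that actually lives in a service file, and the service imports the port right back.

// cats/domain/feeding-events.port.ts
// The cycle starts here: the port imports a type from the service...
import { FeedingPlan } from "./feeding.service";

export default interface FeedingEvents {
  fed: (plan: FeedingPlan) => Promise<void>
}

// cats/domain/feeding.service.ts
// ...and the service imports the port right back.
import FeedingEvents from "./feeding-events.port";

export interface FeedingPlan {
  catName: string
  portions: number
}

export default class FeedingService {
  constructor(private readonly events: FeedingEvents) {}
}

// The usual fix is to move the shared FeedingPlan type into its own
// module, so both files can import it without importing each other.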

Stable Dependency Principle (SDP)

The SDP suggests that “less stable” components should depend on “more stable” components, and not vice versa. Stability here is measured from incoming and outgoing dependencies: stability = (number of modules that depend on the current module) / (number of modules that depend on it + number of modules it depends on). Let’s look at three modules from our imaginary app:

// cats/domain/cat.entity.ts  
export default interface Cat {  
  name: string  
  profession: string  
  age: number  
}
// cats/domain/cat-repository.port.ts
import Cat from "./cat.entity";

export default interface CatRepository {
  getCat: (name: string) => Promise<Cat>;
}
  
  
// cats/infra/cat-repository.pg.ts
import CatRepository from "../domain/cat-repository.port";
import { Client } from "pg";
import CatMapper from "./cat-repository.mapper";

export default class CatRepositoryPg implements CatRepository {
  constructor(private readonly client: Client) {}

  async getCat(name: string) {
    // pg uses $1-style placeholders for parameterized queries
    const dbCat = (
      await this.client.query(
        "SELECT * FROM cats WHERE name = $1 LIMIT 1",
        [name]
      )
    ).rows[0];
    return CatMapper.toEntity(dbCat);
  }
}

There are three simple code snippets: a cat entity, a port for the cat repository, and its PostgreSQL adapter.

  • Entity stability: 1 / (1 + 0) = 1
  • Port stability: 1 / (1 + 1) = 0.5
  • Adapter stability: 0 / (0 + 3) = 0

As expected, the direction of dependencies is aligned with the stability score. The core objects in the domain have no outgoing dependencies (and therefore no external reason to change), so they are highly stable. Port definitions have only a few dependencies (they can only import other domain objects), and finally the adapters, which import various libraries, are the least stable. That the stability score is aligned with the dependency direction is not a coincidence, but a natural implication of isolating the domain and pushing all I/O details into adapters.
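
For completeness, the same calculation as a tiny code sketch (the incoming/outgoing counts are simply read off the snippets above):

// stability = incoming / (incoming + outgoing)
const stability = (incoming: number, outgoing: number) =>
  incoming / (incoming + outgoing);

console.log(stability(1, 0)); // entity  -> 1
console.log(stability(1, 1)); // port    -> 0.5
console.log(stability(0, 3)); // adapter -> 0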

Stable Abstraction Principle (SAP)

The SAP combines the concept of stability with a new metric, abstractness: the ratio between abstract objects (abstract classes, interfaces) and all objects in the module. Modules with only abstract objects have an abstractness of 1, whereas modules with only concrete implementations have an abstractness of 0.

  • Entity abstractness: 1 / 1 = 1
  • Port abstractness: 1 / 1 = 1
  • Adapter abstractness: 0 / 1 = 0

The SAP suggests that stability should scale with abstractness: the more abstract a module is, the more stable it tends to be. This linear relation is marked as the “main sequence” on the following diagram, and the blue dots are our three components.

Notice that:

  • Adapter completely follows the expectations: it is minimally stable and minimally abstract (concrete implementation)
  • Entity is the direct opposite: maximally stable, maximally abstract. Both the entity and the adapter nicely conform to the main sequence. Notice, however, that if you changed the entity from an interface to a concrete class, the situation would be very different and it would fall into the “Zone of pain” (modules that are non-abstract but stable tend to be rigid and hard to change). The simple switch from interface to class does not make the architecture bad; in fact, it is one of the known “exceptions” to the rule (the other being low-level helpers or the standard library), as Robert C. Martin suggests in Clean Architecture: “Some software entities do, in fact, fall within the Zone of Pain. An example would be a database schema. Database schemas are notoriously volatile, extremely concrete, and highly depended on. This is one reason why the interface between OO applications and databases is so difficult to manage, and why schema updates are generally painful.” That is exactly our case, but with an important advantage: this entity does not have to change when the database is migrated.
  • The port is the only module that falls off the main sequence, which is an artifact of the small example. In a real scenario, the abstractness would likely stay the same, but the stability would increase (the number of dependents would grow at least by one: the testing adapter).

Overall, it is a good sign that even on this small sample we can see the expected correlation between abstractness and stability.
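
If you want to quantify that fit, Clean Architecture measures the distance from the main sequence as D = |A + I - 1|, where I is the instability; with the stability score used above (S = 1 - I) that simplifies to |A - S|. A tiny sketch using our numbers:

// distance from the main sequence, using stability S = 1 - instability
const distance = (abstractness: number, stability: number) =>
  Math.abs(abstractness - stability);

console.log(distance(1, 1));   // entity  -> 0 (on the main sequence)
console.log(distance(1, 0.5)); // port    -> 0.5 (off, as noted above)
console.log(distance(0, 0));   // adapter -> 0 (on the main sequence)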

Well, it’s been a ride. At this point I think I have discussed all the principles mentioned in Clean Architecture except one, which I frankly don’t find relevant here (let me know in the comments which one I missed if you spot it!). For all the others, hexagonal architecture either pushes you (sometimes a great deal) in the right direction, punishes you for bending the principles, or at least makes it more obvious when you break them.

However, as I mentioned in my previous post, I don’t think it is the recommended go-to approach for all systems, especially those where the domain logic is not dominant. But for complex systems that are going to live a long time, it can be an essential friend on your developer or architect journey! Farewell, and let me know directly or in the comments if you have any questions or would like me to cover more topics!

Jaroslav Šmolík
Backend Developer

Jára likes to learn about design patterns and delves into how the same problems are solved in different languages and technologies. He likes TypeScript, reading, gRPC, Fish shell, sauna, Kitty terminal and puzzles.
