
Evolving a Solid Architecture

How evolving architecture and some SOLID principles lead to better software

Gian Lorenzetto, PhD
8 min read · Oct 18, 2021


When designing software systems, the architecture is often the first thing people try to define. A little up-front design is fine, but typically the desire is to “nail it down” early.

This introduces greater risk and cost to a software product’s lifecycle, usually resulting in re-work, technical debt and an overall reduction in team performance as the team battles a brittle, increasingly difficult-to-work-on code base.

Getting your software architecture wrong can be very expensive. Ideally, we want to start with the simplest possible architecture that supports the current need and then evolve it just enough to support each new feature.

If we keep the architecture simple and just enough to support the feature that we are developing, then it will be easier to maintain and operate. But there is a hidden side-effect to evolving architecture that really helps reduce the overall cost of ownership, and it comes from the Open Closed Principle (OCP).

The OCP of course comes from the SOLID principles —

  • Single Responsibility Principle (SRP)
  • Open Closed Principle (OCP)
  • Liskov Substitution Principle (LSP)
  • Interface Segregation Principle (ISP)
  • Dependency Inversion Principle (DIP)

Let’s take a look now at the OCP and how it applies to architecture, as well as how the other SOLID principles can also help guide our architectural decisions.

Open Closed Principle

The Open Closed Principle (OCP) states that we want our systems to be open for extension, but closed for modification. Essentially, we want our architecture to be flexible and extensible where it’s needed, but closed to change everywhere else. Following this principle means building the simplest thing that can support exactly the behaviours and outputs the system requires. The simplest thing that works!
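To make that concrete, here’s a minimal TypeScript sketch. The discount example and all of the names in it are invented purely for illustration, not taken from any particular code base. The idea is that the checkout code is closed to change, while new discount rules are an open extension point.

    // Hypothetical sketch of the OCP: new behaviour is added by writing a new
    // implementation of an abstraction, not by editing the code that uses it.

    interface DiscountRule {
      // Returns the discount (0..1) that applies to an order total.
      discountFor(orderTotal: number): number;
    }

    class NoDiscount implements DiscountRule {
      discountFor(_orderTotal: number): number {
        return 0;
      }
    }

    // Extension point: a new rule is a new class; checkoutTotal() never changes.
    class BulkOrderDiscount implements DiscountRule {
      discountFor(orderTotal: number): number {
        return orderTotal > 1000 ? 0.1 : 0;
      }
    }

    function checkoutTotal(orderTotal: number, rule: DiscountRule): number {
      return orderTotal * (1 - rule.discountFor(orderTotal));
    }

    console.log(checkoutTotal(1200, new NoDiscount()));        // 1200
    console.log(checkoutTotal(1200, new BulkOrderDiscount())); // 1080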

Some form of complexity in software systems is almost inevitable in all but the most trivial of applications. However, needless complexity will kill productivity and your ability to innovate and deliver features. Needless complexity can manifest in several ways — technical debt, premature optimisation, building support for features with no discernible or immediate driver.

In order to prevent needless complexity, we build software in thin vertical (ie, end-to-end) slices to ensure that we do indeed have a valid reason for adding complexity to our software. By building software in this way, we avoid the trap of trying to build a full horizontal slice (the API, the database, the front-end) without knowing exactly what that slice is supposed to do, or how it should support the previous / next layer.

Building horizontal layers in isolation will lead to re-work. Every time. Go ahead and try it if you don’t believe me :) It is unavoidable when you build swathes of software with no obvious need and with no obvious driver — that is, no way to test the behaviour of the system. There is no way to exercise the system end-to-end from the perspective of the consumer of the feature.

This leads nicely into the most misunderstood of all the SOLID principles, the Single Responsibility Principle (SRP).

Single Responsibility Principle

The SRP tells us that we need to keep things together that change together. That is, things that —

  1. change at the same time;
  2. change at the same rate; and
  3. change for the same reason.

This is really just a more specific version of one of the key axioms of software development — loosely coupled, highly cohesive. We want things to be loosely coupled, so we keep things that don’t change for the same reason, or at the same rate, apart. But we do want things to be cohesive, so we keep things together that change for the same reason.
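As a rough illustration (the invoice example below is made up, not from any particular code base), calculation and presentation tend to change for different reasons and at different rates, so the SRP suggests keeping them apart.

    // Hypothetical sketch: calculating totals changes when pricing rules change,
    // while rendering changes when someone wants a new layout. Two reasons to
    // change means two separate pieces of code.

    class Invoice {
      constructor(public lines: { description: string; amount: number }[]) {}

      total(): number {
        return this.lines.reduce((sum, line) => sum + line.amount, 0);
      }
    }

    // Rendering is kept separate because it changes at a different rate and for
    // a different reason (presentation, not pricing).
    class InvoicePrinter {
      print(invoice: Invoice): string {
        return invoice.lines
          .map((line) => `${line.description}: ${line.amount.toFixed(2)}`)
          .concat([`Total: ${invoice.total().toFixed(2)}`])
          .join("\n");
      }
    }

    console.log(new InvoicePrinter().print(new Invoice([
      { description: "Licence", amount: 99.0 },
      { description: "Support", amount: 40.0 },
    ])));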

Incidentally, this has some interesting implications for cross-functional product teams. Often the reason things change is because they are driven by specific personas. A good place to start with the SRP is to keep code specific to a particular persona (or user) separate, while keeping more general, common code together.

In fact, when we’re dealing with architectures this is exactly what we want to know — what parts of our architecture are generic to all personas and what parts are specific to one particular persona. You can easily substitute “reason to change” for “persona” in the previous sentence. This will help us identify where we need to invest in extension and abstraction points and where we can stick to the simplest thing that works.

Regardless, we want to make changes to our system intentionally. In order to do that, we must deeply understand the reason for the change, the driver for that change.

Keep in mind, we don’t know where flexibility is needed and where it isn’t until we hit a feature that drives a change in our system. Some behaviour that the system (and architecture) don’t account for. Then we have a choice as to how to modify the architecture, but make no mistake, as you develop your system you will need to modify the architecture.

There is another principle that can cause early architecture designs to be thrown out of the window — Dependency Inversion.

Dependency Inversion Principle

The Dependency Inversion Principle (DIP) is easiest to understand if you consider an application with no abstractions and direct dependencies on utility code. That is, the application code and business logic depend directly on the lower-level utility code. This code is simple, yet inflexible and highly coupled. A change in a utility will require a rebuild of the entire application.

In this case the flow of control follows the code dependency — our application calls out to the utility code (the flow of control) and our application depends on the utility code to compile.
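Here’s a hypothetical “before” sketch of that direct dependency (the logger and service names are invented for illustration).

    // Hypothetical "before" sketch: the business logic creates and calls a
    // concrete utility directly, so the code dependency points the same way as
    // the flow of control.

    // low-level utility code (imagine this living in a separate utility module)
    class FileLogger {
      write(message: string): void {
        console.log(`[file] ${message}`); // stand-in for writing to disk
      }
    }

    // application code, directly coupled to the concrete FileLogger
    class OrderService {
      private logger = new FileLogger();

      placeOrder(orderId: string): void {
        // ...business logic...
        this.logger.write(`order ${orderId} placed`);
      }
    }

    new OrderService().placeOrder("42");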

In order to break the above dependency and decouple responsibilities, we invert the direction of the code dependency. Typically this is done by introducing an abstraction such as an interface or abstract class. Our application code now depends on the abstraction.

In doing the above, we have now inverted the dependency between the utility code and application code. That is, the utility code now has a dependency on the abstraction defined by the application. The utility library is now free to implement the abstraction however it likes. In fact, there may be multiple implementations or even multiple utility libraries. The downside is that the utility library is now coupled to the application code, explicitly requiring it in order to build.
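Reworking the sketch above, a hypothetical “after” version might look like this: the application owns the Logger abstraction and any number of utility implementations can satisfy it.

    // Hypothetical "after" sketch: the application defines the abstraction and
    // the utility code implements it, so the code dependency now points from
    // the utility towards the application's interface.

    // application-defined abstraction
    interface Logger {
      write(message: string): void;
    }

    // application code depends only on the abstraction
    class OrderService {
      constructor(private logger: Logger) {}

      placeOrder(orderId: string): void {
        this.logger.write(`order ${orderId} placed`);
      }
    }

    // interchangeable utility implementations
    class ConsoleLogger implements Logger {
      write(message: string): void {
        console.log(`[console] ${message}`);
      }
    }

    class BufferedLogger implements Logger {
      messages: string[] = [];
      write(message: string): void {
        this.messages.push(message);
      }
    }

    new OrderService(new ConsoleLogger()).placeOrder("42");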

However, this particular inversion of dependency between application and utility code is usually more desirable as typically the utility code will change at a different (often slower) rate to the application code. We also now have the ability to build customised utility code, for specific users and scenarios, as needed. We have increased the complexity of the code, but we have an extension point that gives us flexibility.

Depending on the environment and language, introducing this type of abstraction will have an impact on your build, test and deployment process — usually making it more complex. We bought flexibility at the cost of complexity. This is fine so long as the value inherent to the flexibility is the same or greater than the cost of increased complexity.

As an aside, a good rule to follow is that while those boxes on an architecture diagram are important, the lines between them are more important. The relationships between those boxes, the flow of information and data, the code dependencies and the control flow are all more important to an architect than the actual behaviour of the boxes.

We’ve touched on interfaces above when discussing abstractions, so now let’s take a look at the last two SOLID principles, Interface Segregation and the Liskov Substitution Principle.

Abstractions, Interfaces and Polymorphism

In the previous section I mentioned interfaces as mechanisms for decoupling code. Interfaces (sometimes called traits) and abstract classes are useful ways to separate implementation from the usage of code. However there are some simple rules we need to be careful of when doing this — introducing any abstraction adds complexity and mental load for the developer. We want to make sure we aren’t making that worse by misusing this approach.

No discussion of interfaces and code abstraction is complete without mentioning polymorphism. Put simply, polymorphism is what allows us to have one object look and behave like another. A concrete example is a class that implements several interfaces. An instance of said class can be used where ever one of the implemented interfaces is expected.
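For example (again, a made-up sketch with invented names), a class implementing both a Readable and a Writable interface can be passed wherever either of those abstractions is expected.

    // Hypothetical sketch: one class implements two interfaces, so an instance
    // can be used wherever either abstraction is expected.

    interface Readable {
      read(): string;
    }

    interface Writable {
      write(data: string): void;
    }

    class InMemoryBuffer implements Readable, Writable {
      private data = "";

      read(): string {
        return this.data;
      }

      write(data: string): void {
        this.data += data;
      }
    }

    function copy(source: Readable, sink: Writable): void {
      sink.write(source.read());
    }

    const a = new InMemoryBuffer();
    a.write("hello");
    const b = new InMemoryBuffer();
    copy(a, b);            // a is used as a Readable, b as a Writable
    console.log(b.read()); // "hello"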

With great power comes great responsibility and so it is with polymorphism. The Liskov Substitution Principle (LSP) is somewhat obvious, but easily abused — if you create a class that promises to implement a particular interface (ie, some specific behaviour) then you must honour that contract and not change the meaning of that interface.

If that sounds somewhat vague, that’s because it is. Any contract is open to interpretation and misuse and so it is with software interfaces. There is no inherent impediment to implementing an addition (+) operator that first divides by 2, then subtracts the values and returns the result! (Perhaps there is an inherent impediment in the form of your fellow developers, but that’s another article ;)
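A made-up sketch of exactly that kind of violation: both classes below satisfy the interface as far as the compiler is concerned, but only one of them honours its meaning.

    // Hypothetical sketch of an LSP violation: the type system is happy with
    // both implementations, but SurprisingAdder breaks the contract implied by
    // the word "add".

    interface Adder {
      add(a: number, b: number): number;
    }

    class HonestAdder implements Adder {
      add(a: number, b: number): number {
        return a + b;
      }
    }

    // Compiles fine, but breaks every caller that relies on the contract.
    class SurprisingAdder implements Adder {
      add(a: number, b: number): number {
        return a / 2 - b;
      }
    }

    console.log(new HonestAdder().add(4, 2));     // 6
    console.log(new SurprisingAdder().add(4, 2)); // 0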

However, it’s good practice to test classes that implement an interface via the interface type itself. In fact, a single suite of tests for an interface may be sufficient to test that specific behaviour in all classes implementing that interface. This also ensures that we adhere to the LSP.

A corollary of the above is that the larger an interface the more likely it is to be misinterpreted or misused. This leads us to the last of the SOLID principles, the Interface Segregation Principle (ISP).

The ISP is another specific form of the loosely coupled, highly cohesive axiom: we want our interfaces to be small and focused, breaking large interfaces down into small, cohesive units. This is also similar to the SRP, in that we want each interface to collect related features and to separate out features that come from a different driver or outcome.
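A small, hypothetical sketch of the ISP (the device names are invented): rather than one fat device interface, separate Printer and Scanner interfaces let a simple printer implement only what it actually supports.

    // Hypothetical sketch: splitting a "fat" interface means each client and
    // each implementation depends only on the capabilities it actually uses.

    interface Printer {
      print(document: string): void;
    }

    interface Scanner {
      scan(): string;
    }

    // A multifunction device opts in to both capabilities...
    class MultiFunctionDevice implements Printer, Scanner {
      print(document: string): void {
        console.log(`printing: ${document}`);
      }
      scan(): string {
        return "scanned content";
      }
    }

    // ...while a simple printer only implements what it can actually do.
    class SimplePrinter implements Printer {
      print(document: string): void {
        console.log(`printing: ${document}`);
      }
    }

    function printReport(printer: Printer): void {
      printer.print("quarterly report");
    }

    printReport(new MultiFunctionDevice());
    printReport(new SimplePrinter());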

SOLID Architecture

In summary, we want our architectures to be as simple and straight-forward as possible. We want to be intentional when we add complexity, only adding it as and when necessary. We want our architectures to be flexible exactly where they need to be flexible. We want to evolve our architectures, and systems in general, at the same time as we grow and evolve our understanding of the needs of the system.

The SOLID principles give us a nice framework to guide our decision making and to help keep our architecture (and software in general) clean, maintainable and readable. Following these principles will help to make future maintenance, development and operation as easy and cheap as possible.
