Software Architecture Guide
Iterative and Incremental Evolution of the Architecture of (Legacy) Software
In many companies, a great deal of legacy software has accumulated in the system landscape over the years. Replacing these systems with new software is rarely straightforward. To modernize them and prepare them for future requirements, we recommend a step-by-step approach in most cases. This approach is best described as an iterative and incremental evolution of the legacy software’s architecture. Following it, we have already led numerous customer projects to success.
Why Big Bang Is Risky
Many initially successful projects [suffer from] the second system syndrome – the tendency of small, elegant, and successful systems to evolve into giant, feature-laden monstrosities due to inflated expectations. ― Neal Ford, Rebecca Parsons, Patrick Kua: Building Evolutionary Architectures
Big Bang in software development usually means specifying, designing, and implementing a completely new piece of software with the aim of replacing the old software entirely.
This approach involves certain risks:
- Inadequate requirements analysis: The old software is so complex that a specification for the new solution may turn out incomplete and incorrect.
- Inflated expectations: Because everything is built from scratch, great hopes are placed in the new software. It is expected to fulfil many additional requirements ("second system syndrome").
- Error source: The new software certainly avoids known errors of the old software, but may introduce new ones.
- Complex implementation processes: Implementing the new system requires detailed planning of the launch, a high number of dry runs and complex fallback scenarios.
- Demanding data migration: Migrating the data from the old to the new software is extremely demanding.
- Feature freeze on the user side: The business department may receive almost no new features for a very long time (possibly years).
- Possible loss of know-how: If the new software is developed by a different team than the existing one, the original development team may become frustrated, and motivated employees may leave the company.
Step by Step to Modernization
When customers engage ConSol, they have often already tried once or twice to replace their old software with a new solution the Big Bang way. We then focus on an iterative and incremental evolution of the architecture. In other words, we gradually modernize the legacy software instead of putting all our hopes in the Big Bang. This procedure has proven to be highly successful in practice.
Agile Approach Is Mandatory
Business people fear breaking change. If developers build an architecture that allows incremental change [...], both business and engineering win.― Neal Ford, Rebecca Parsons, Patrick Kua: Building Evolutionary Architectures
Agile process models such as Scrum or Kanban are key elements in the iterative and incremental evolution of the legacy software architecture. Since agile methods work with software increments, the development team immediately starts solving existing problems. The old software is improved continuously, so the customer or an (external) test team can check the results frequently. Thanks to continuous, flexibly adaptable planning, change requests can be considered at any time.
Another mainstay of this procedure is the customer’s economic evaluation of the costs and benefits of the improvements. This links every improvement of the legacy software’s internal quality, and every modernization of the architecture, to the creation of business value. The modernization can thus be controlled based on business parameters. The fact that the customers themselves prioritize the improvements brings their most urgent problems into focus.
The agile approach also improves communication, enhanced by constant contact and exchange between the project partners: short, efficient communication channels enable quick feedback. Decisions are made jointly. Regular meetings as well as Sprint Backlog and Product Backlog ensure high transparency.
Proven Techniques for the Evolution of Architecture
Change is the defining characteristic of software. That change – that adaption – begins with release. Release is the beginning of the software’s true life; everything before that release is gestation. Either systems grow over time, adapting to their changing environment, or they decay until their costs outweigh their benefits and then die. ― Michael T. Nygard, Release It!: Design and Deploy Production-Ready Software
Software is never "finished". The demands it must meet as well as its environment will be ever changing. Therefore, the improvement of software architecture must be a continuous process, for which the following techniques are suitable:
Evolution by Modularization (Identification of Domains, Increase of Cohesion)
- Identify code components of the legacy software that belong to the same domain.
- Modularize these code components (by refactoring and applying Clean Code principles).
- Introduce an interface for the domain.
The identification of domains and modularization is a prerequisite for the division and targeted modernization of legacy software.
The disadvantage of this approach: Individual domains are often closely interwoven with other components of the architecture. Successful modularization therefore requires a good understanding of the domain and of the legacy software’s code.
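The introduction of a domain interface can be sketched as follows. This is a minimal illustration with hypothetical names (a "billing" domain), not code from the original text; in Python, a structural interface can be expressed with `typing.Protocol`.

```python
# Sketch of "evolution by modularization": billing logic that was scattered
# across the legacy code is gathered behind one domain interface.
from typing import Protocol


class BillingService(Protocol):
    """Interface introduced for the newly identified billing domain."""

    def invoice_total(self, customer_id: str) -> float: ...


class LegacyBillingService:
    """Refactored legacy code, now collected behind the domain interface."""

    def __init__(self, open_items: dict[str, list[float]]):
        self._open_items = open_items

    def invoice_total(self, customer_id: str) -> float:
        # Previously this summation was duplicated in several UI modules.
        return sum(self._open_items.get(customer_id, []))


def print_invoice(service: BillingService, customer_id: str) -> None:
    # Callers now depend only on the interface, not on the legacy class.
    print(f"Total for {customer_id}: {service.invoice_total(customer_id)}")


service = LegacyBillingService({"c1": [10.0, 5.5]})
print_invoice(service, "c1")  # Total for c1: 15.5
```

Once callers depend only on `BillingService`, the legacy implementation behind it can be replaced without touching them.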
Evolution by Extraction (Change by Extraction, Downsizing)
- Identify independent domains within the legacy software.
- Extract independent domains into a separate system.
- Improve the system of the now stand-alone domain, possibly using Change by Abstraction to redevelop the system or parts of it.
- Also improve the legacy software, which now has fewer tasks.
This procedure is only possible if there are independent domains within the legacy software. The disadvantage: since existing code is reused, "hidden" dependencies can arise between the new stand-alone system and the old software.
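The extraction step can be sketched like this. All names are hypothetical (a "reporting" domain), and the in-process call stands in for what would typically be an HTTP or messaging call between the two systems.

```python
# Sketch of "evolution by extraction": the reporting domain has been moved
# into its own system; the legacy software now reaches it through a small
# client instead of calling the code directly.

class ReportingSystem:
    """Stand-alone system built from the extracted domain code."""

    def monthly_report(self, month: str) -> dict:
        # Placeholder data; the real system would query its own storage.
        return {"month": month, "orders": 42}


class ReportingClient:
    """Lives inside the legacy software in place of the removed code.
    In a real setup this would wrap an HTTP or messaging call."""

    def __init__(self, system: ReportingSystem):
        self._system = system

    def monthly_report(self, month: str) -> dict:
        return self._system.monthly_report(month)


legacy_side = ReportingClient(ReportingSystem())
print(legacy_side.monthly_report("2024-05"))
```

The client boundary makes the remaining dependency explicit, which helps to spot the "hidden" dependencies mentioned above.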
Evolution Through Abstraction (Change by Abstraction, Outside-in Interfaces)
- Define an external, "ideal" interface for the legacy software.
- Encapsulate the communication with the old software via a facade / a facade system.
- Convert all clients to the new interface.
- Create iteratively and incrementally better new software for the interface.
- Gradually replace the old software with the better new software.
This procedure is only possible if a clear interface for the existing services can be defined. The disadvantage: if the interface is oriented too closely to the old software, its errors are carried over rather than corrected.
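A facade over an "ideal" interface can be sketched as follows; the legacy API and all names are hypothetical.

```python
# Sketch of "evolution by abstraction": an ideal outward interface is
# defined first, and a facade translates it to the legacy API.

class LegacyCrm:
    """Old software with an awkward, position-based API."""

    def cust_rec(self, cid: int) -> tuple:
        return (cid, "Ada Lovelace", "ada@example.org")


class Customer:
    """Part of the ideal interface: a proper domain object."""

    def __init__(self, customer_id: int, name: str, email: str):
        self.customer_id = customer_id
        self.name = name
        self.email = email


class CrmFacade:
    """Implements the ideal interface; all clients are converted to this.
    Later the legacy call inside can be replaced by better new software
    without touching the clients."""

    def __init__(self, legacy: LegacyCrm):
        self._legacy = legacy

    def customer_by_id(self, customer_id: int) -> Customer:
        cid, name, email = self._legacy.cust_rec(customer_id)
        return Customer(cid, name, email)


facade = CrmFacade(LegacyCrm())
print(facade.customer_by_id(7).name)  # Ada Lovelace
```

Because clients only see `CrmFacade`, the gradual replacement of the old software happens entirely behind the facade.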
Evolution by Strangulation (Strangulate Bad Parts, The Strangler, Switch)
- Identify a bad part of the code within the old software to be replaced.
- Insert a switch (also: Dispatcher, Proxy, Load Balancer), which first redirects to the bad part of the old software.
- Write a new implementation for the bad part.
- Let the switch temporarily distribute load between the bad part of the legacy software and the new implementation.
- Gradually shift load from the bad part of the legacy software to the new implementation until the bad part is no longer used.
The prerequisite for this procedure is that the bad part of the old software can be delimited, e.g. via an interface.
The disadvantage here is that the new system must be oriented to how the other systems interact with the old software. The other systems may have adapted to undocumented characteristics of the old software, or may depend on unspecified behavior or on established workarounds.
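The switch in the steps above can be sketched as a simple weighted dispatcher. The names, the pricing functions, and the percentage-based weighting scheme are illustrative assumptions; a real switch might be a proxy or load balancer in front of the system.

```python
# Sketch of the strangler switch: a dispatcher shifts load from the bad
# legacy part to the new implementation, controlled by a traffic share.
import random


def legacy_price(item: str) -> float:
    return 9.99  # the known-bad implementation lives here


def new_price(item: str) -> float:
    return 10.00  # the new implementation


class Switch:
    def __init__(self, new_share: float = 0.0):
        # Fraction of calls routed to the new implementation (0.0 .. 1.0).
        self.new_share = new_share

    def price(self, item: str) -> float:
        if random.random() < self.new_share:
            return new_price(item)
        return legacy_price(item)


switch = Switch(new_share=0.1)  # start with 10 % on the new implementation
# ... observe behaviour in production, then ramp up step by step ...
switch.new_share = 1.0          # the bad legacy part is no longer used
```

At `new_share = 1.0` the legacy part receives no traffic and can be deleted.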
Evolution Through Cloning and Deletion (Clone and Slash, Change by Split)
- Identify user groups.
- Clone the complete legacy software for each user group.
- Delete in the clone what the user group does not need.
- Extract any commonalities (but prefer code duplication over overly strong dependencies).
- Optimize for each user group.
A prerequisite for this procedure is a separate team for each clone.
The disadvantage here: the clones remain coupled via the database for a long time.
Hexagon Architecture Is Target Architecture
The architecture’s goal should always be the separation of business logic and technology. A hexagonal architecture (according to Alistair Cockburn; also known as Ports and Adapters; related: Onion Architecture according to Jeffrey Palermo, Clean Architecture according to "Uncle Bob" Robert C. Martin) is particularly suitable for this.
Instead of thinking in layers, this architecture distinguishes between inside and outside. The inside contains the domain model, i.e. the implementation of the domain-oriented use cases. The outside includes everything else, such as integration, persistence, and UI. Communication between inside and outside takes place via interfaces (ports), which are implemented in the outer area (adapters).
The Hexagon architecture follows these principles:
- The inside knows nothing about the outside.
- The inside can work with any concrete implementation of the interfaces in the outer area.
- The business logic is located exclusively inside.
- The technical details are located exclusively outside.
Software built with the hexagonal architecture is easier to modify, test, and understand. Compared to a classical layered architecture, this approach minimizes the danger that domain logic is implemented outside the domain core, e.g. in the user interface or the database, and thus spreads uncontrolled. The result: less technical debt and improved maintainability.
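A minimal ports-and-adapters sketch, with hypothetical names (an "orders" domain): the inside defines a port, and persistence is an adapter implemented outside.

```python
# Hexagonal architecture in miniature: the domain core owns the port;
# the adapter in the outer area implements it.
from typing import Protocol


class OrderRepository(Protocol):
    """Port, defined by the inside; the inside knows no adapter details."""

    def save(self, order_id: str, total: float) -> None: ...


class PlaceOrder:
    """Domain-oriented use case; pure business logic, inside only."""

    def __init__(self, repository: OrderRepository):
        self._repository = repository

    def execute(self, order_id: str, items: list[float]) -> float:
        total = sum(items)
        self._repository.save(order_id, total)
        return total


class InMemoryOrderRepository:
    """Adapter in the outer area; a database adapter could replace it
    without any change to the inside."""

    def __init__(self):
        self.rows: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self.rows[order_id] = total


repo = InMemoryOrderRepository()
use_case = PlaceOrder(repo)
print(use_case.execute("o-1", [20.0, 5.0]))  # 25.0
```

Swapping the in-memory adapter for a test double or a real database adapter is exactly the testability and modifiability benefit described above.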
Proven Supporting Techniques
Unknown unknowns are the nemesis of software systems [...]: things no one knew were going to crop up yet have appeared unexpectedly. This is why all Big Design Up Front software efforts suffer – architects cannot design for unknown unknowns. ― Neal Ford, Rebecca Parsons, Patrick Kua: Building Evolutionary Architectures
In addition to architecture migration techniques, proven methods support the iterative and incremental evolution of the architecture of (legacy) software:
- A feature toggle allows a feature under development to be turned on and off at runtime.
- More resilience is achieved through the use of stability patterns such as circuit breakers or bulkheads. This means that the software can also maintain its essential functions in the event of failures and malfunctions instead of failing completely.
- With the help of experiments, the team can learn a lot and make better decisions.
- Deployment variants with Blue/Green Deployment or Canary Release help to roll out the software with low risk.
- Monitoring business services instead of hosts gives significantly better insight into the health of the software.
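One of the stability patterns mentioned above, the circuit breaker, can be sketched in a few lines. This is an illustrative minimal version, not a production library; real breakers typically add a half-open state with a retry timeout.

```python
# Minimal circuit-breaker sketch: after a number of consecutive failures
# the breaker "opens" and fails fast instead of calling the unstable
# dependency again.

class CircuitOpenError(Exception):
    pass


class CircuitBreaker:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0  # consecutive failures seen so far

    def call(self, func, *args):
        if self.failures >= self.max_failures:
            raise CircuitOpenError("failing fast; dependency considered down")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success closes the breaker again
        return result


breaker = CircuitBreaker(max_failures=2)

def flaky():
    raise ConnectionError("dependency down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

# The breaker is now open: further calls fail fast with CircuitOpenError
# instead of waiting on the broken dependency.
```

Failing fast like this keeps the rest of the software responsive while the dependency is down, instead of letting the failure cascade.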