Software architecture these days is all about separating things. For example, the MVC pattern separates the view from the model it represents and from the code that governs its behavior. Or applications split into a data access layer, a business logic layer, and a presentation layer, each handling a different aspect of the application. All of these approaches come down to separation: in principle, you should be able to change the way your data is displayed (the presentation, or view) without needing to change the model or how the data is accessed.
But how do these approaches actually work in practice? My experience has almost always been the opposite of what they promise: the more separation there is, the longer development takes. I’ll give you a few examples…
A few years ago I was doing a project in WPF. If you don’t know it, WPF is a Microsoft technology for developing GUIs (or rather, a framework on which GUIs can be developed). It could be called the successor of WinForms, and when I was researching it, it certainly looked attractive. The architecture it encourages is called MVVM – Model-View-ViewModel – which is similar to MVC. In essence, the “model” is the data in your application, the “view-model” is that data represented in the form you want presented to the user, and the “view” is the actual graphical presentation of that data.
This is great in principle. It means the graphic design team can work on the view and on presenting the information to the user in the flashiest way possible, while other teams work on the view-model and model. If there’s some business reason to change the presentation, you can do it without changing anything else. But here’s the catch: all these “teams” were just… me. It wasn’t such a big project that it needed more than one person, so it was just me. Suddenly all the layers are just things that get in the way. When I need to add a piece of data to the model, I now need to add it to the view-model and view as well, plus all the wiring that connects them. And so the same domain object ends up smeared across multiple files: the XAML file for the view, a view-model C# file, and a model C# file. If the domain changes a little, you update all three files.
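To make the duplication concrete, here is a minimal sketch of the three pieces MVVM asks for. It’s written in TypeScript for brevity rather than C#/XAML, and the `Customer` type with its single `name` field is invented for illustration:

```typescript
// Model: the raw domain data.
class Customer {
  constructor(public name: string) {}
}

// View-model: wraps the model and raises change notifications,
// playing the role INotifyPropertyChanged plays in WPF.
class CustomerViewModel {
  private listeners: Array<(prop: string) => void> = [];
  constructor(private model: Customer) {}

  get name(): string {
    return this.model.name;
  }
  set name(value: string) {
    this.model.name = value;
    this.listeners.forEach((l) => l("name")); // notify the view
  }
  onPropertyChanged(listener: (prop: string) => void): void {
    this.listeners.push(listener);
  }
}

// "View": in WPF this would be a XAML binding; here, a render function.
function renderCustomer(vm: CustomerViewModel): string {
  return `Customer: ${vm.name}`;
}
```

Adding one new field to the domain, say an address, means touching all three pieces: the model, the view-model property plus its change notification, and the view. That is the three-file smear described above.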
Another example that comes to mind is a website I’m working on. This one is not a one-man project. We have a team of 5 or 10 developers (depending on which roles you consider part of the actual site development), and the site has hundreds of pages. The architecture chosen for the website, when it was started a few years back, has separate layers for data access, business logic, business objects, and presentation. In addition, there are “layers” underneath the data access layer for stored procedures and then for the tables in the database.
This seems like the perfect situation for a layered architecture. We can have database professionals working on the SQL side of things, user-experience professionals working on the presentation, software engineers working on the business logic, and something like Entity Framework handling the data access.
But that’s not what we have. The reality is this: the customer requests a new feature, and the feature gets handed off to one of the developers, who implements the tables or fields to store the data, the stored procedures to read it, the data access layer to call the stored procedures from the web server, the business object layer to represent the objects fetched from the stored procedure, the business logic layer (which is normally almost empty, but gets implemented anyway), and finally the presentation layer so the user can actually see the new data. The end result is 5 or 10 new or changed files across a few different platforms and a whole lot of layers – all for one logical change, like “a customer now needs an address field”.
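To illustrate the spread, here is a hypothetical sketch of every piece that one “customer now needs an address field” change touches. The names (`CustomerDataAccess` and so on) are invented, the layers are compressed into one TypeScript file, and the SQL side is reduced to comments:

```typescript
// 1. Database + stored procedure layers (separate SQL files in reality):
//    ALTER TABLE Customers ADD Address NVARCHAR(200);
//    ...and GetCustomer must now also SELECT the Address column.

// 2. Data access layer: calls the stored procedure.
class CustomerDataAccess {
  getCustomer(id: number): { name: string; address: string } {
    // Stand-in for the real database call.
    return { name: "Alice", address: "1 Main St" };
  }
}

// 3. Business object layer: represents the fetched row.
class CustomerBusinessObject {
  constructor(public name: string, public address: string) {}
}

// 4. Business logic layer: "normally almost empty, but implemented anyway".
class CustomerLogic {
  constructor(private dal: CustomerDataAccess) {}
  load(id: number): CustomerBusinessObject {
    const row = this.dal.getCustomer(id);
    return new CustomerBusinessObject(row.name, row.address);
  }
}

// 5. Presentation layer: finally shows the new field to the user.
function present(c: CustomerBusinessObject): string {
  return `${c.name}, ${c.address}`;
}
```

Every numbered section above lives in a different file (often a different project or platform) in the real system, and each one had to change for a single new field.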
Is the problem the way work is distributed among developers? Would it be more efficient to have 5 people working on the one logical problem, so each can work on a different layer? No, I don’t think so. Sure, the database would be more efficient, and the presentation layer more presentable. But even with 5 people, the same total work would need to be done, except now you have the extra overhead of coordinating multiple people on the same logical change – a change that probably amounts to 20 lines of real, business-valued code (smeared across a number of different places). If you can afford the extra inefficiency, the result might be better, but at an extraordinary price – one I would say only particularly large corporations can justify.
A successful model
In my mind, the two examples above show how layered development can get in the way of real productivity. The feeling of “good style” that the separation provides is a lie. But what’s the alternative?
Well, it so happens that I do have a successful alternative. I recently finished re-architecting another project to avoid exactly this problem. To give you some background, the application is a server which mediates between a remote embedded system and a database server (among other things). The original architecture was like the layered one described above: a communications layer, a logic layer, and a data access layer, each in its own neatly separated module. If you needed to add a new domain concept, you would need to add it to all three layers: the communication layer to be able to transfer it from the embedded system, the logic layer to specify how it should be transferred, and the data layer to interface with the database. Not to mention the extra man-hours spent debugging the connectivity and marshaling between the layers themselves. The first part of the solution was to reorganize the code by domain concept rather than by layer, so that everything related to a single concept lives together.
The second part of the solution is to remove repetition. The data “layer” is now really just a hundred mini data layers – one for each separate domain concept – and we don’t want to repeat the same data-access code in every file. There are many solutions to this. The one I happened to choose in this case was to write a .NET IL emitter which, at runtime, automatically does a one-step transformation of a C# interface definition into the communications code or database code required. The code behind each “mini-layer” can’t get any smaller: it’s just a single interface, with no explicit implementation.
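I can’t reproduce the .NET IL emitter here, but a rough analogy of the idea, using a JavaScript `Proxy` in TypeScript, shows the shape: each “mini-layer” is nothing but an interface, and a single generic factory supplies the behavior for all of them at runtime. The `OrderStore` interface and the `EXEC …` command format are invented for illustration:

```typescript
// A "mini data layer": just an interface, no explicit implementation,
// mirroring the single C# interface per domain concept.
interface OrderStore {
  saveOrder(id: number, total: number): string;
  deleteOrder(id: number): string;
}

// One shared implementation strategy for every such interface: turn any
// method call into a parameterised command string (a stand-in for the
// real generated database or communications code).
function makeStore<T extends object>(): T {
  return new Proxy(
    {},
    {
      get(_target, method) {
        return (...args: unknown[]) =>
          `EXEC ${String(method)}(${args.join(", ")})`;
      },
    }
  ) as T;
}

const store = makeStore<OrderStore>();
// store.saveOrder(7, 99) → "EXEC saveOrder(7, 99)"
```

The point of the analogy is the same as the IL emitter’s: adding a new domain concept means writing one small interface, while the repetitive plumbing is generated once, centrally.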
The result is amazingly maintainable code. All the code related to a single domain feature is consolidated within a single file or a handful of related files in a folder, so responding to requests for feature changes or additions generally involves changes to a very isolated part of the code base.
Taking it further
This architecture has proven itself highly successful. The kinds of maintenance done most often take the least effort, and only ever change or add code in a single isolated subset of the codebase. Also, as much as 90% of new code is actual business logic, rather than boilerplate and wiring.
It really makes me want to take it to the next level. I mentioned that this project is a mediator between an embedded system and a database. So in effect, the whole application is a “layer” in a larger system. In fact, the web system I was talking about is also a layer – between the database and the end user. When there is a new domain concept to add to the system as a whole, it needs to be added to all of these super-layers. Why not unify all the layers? The embedded C code that relates to a particular domain concept should sit physically right next to the SQL that stores it and the HTML that displays it. If the feature is simple, they could even all be in one file. Static type checking can verify that everything’s compatible.
Each “unit” of code would span multiple platforms and mediums, but hold only a single business concept. Sure, there will be dependencies – but you can manage them the same way you would have managed dependencies between business concepts within a single layer, back when the architecture was layered.
I won’t lie: it would be a difficult goal to achieve – fully heterogeneous business units that span multiple environments and languages. But the problem is not that it’s a bad idea. The problem is that it goes against the existing way of doing things, and the existing way has a lot of support and tools behind it. Another reason is that the existing system suits large teams of highly specialized individuals who can each work on a separate layer, while the “unified layer” system is best for small teams and individuals who have skills in all the different environments that a feature spans. In other words, the big money is behind layered systems, at the expense of generalists like me who would develop much more efficiently in unified ones.
In the real world there will always be dependencies between domain concepts, but this is largely an orthogonal problem.