
Technical Debt — Defined

February 7th, 2011

Designing software that is meant to be used requires putting the user experience front-and-center in the design.

However, when you have functionality to add to your system, there are two ways to do it:

  1. Quick and messy – you know it will make further changes harder in the future. This involves actions like hardcoding parameters or bringing in libraries that you don’t completely understand.
  2. Clean and careful – this results in a better design, but takes longer to put in place.

Ward Cunningham coined a wonderful metaphor (Technical Debt) to help us think about this problem.

In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to put into future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.
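
To make the arithmetic of the metaphor concrete, here is a toy back-of-the-envelope sketch in Python (all the numbers are invented purely for illustration):

    # Toy numbers, purely illustrative: the quick hack ships sooner but
    # taxes every future change that touches the affected code.
    quick_cost = 2    # days to ship the quick-and-dirty version
    clean_cost = 5    # days to ship the clean version
    interest = 0.5    # extra days each future change costs ("interest")

    # Break-even: after how many future changes does the clean design win?
    break_even = (clean_cost - quick_cost) / interest
    print(f"The clean design pays for itself after {break_even:.0f} changes")

If the code will be touched more than six times, paying the principal up front was the better deal; if the project is nearly frozen, the debt may never be worth repaying.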

The metaphor also explains why it may be sensible to take the quick and dirty approach. Just as a business incurs some debt to take advantage of a market opportunity, developers may incur technical debt to hit an important deadline. The all too common problem is that development organizations let their debt get out of control and spend most of their future development effort making crippling interest payments.

I’ve made some choices during my career where hard deadlines, or the limited-maintenance nature of the project, meant that the effort of very clean code and architecture was not justified. However, one practice I would advocate is keeping a copy of Bugzilla around where you can log all the ‘todos’ required to refactor, clean up, and enhance the robustness of your project.
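
One lightweight convention I find useful (my own suggestion, not a Bugzilla feature) is to tie every debt-related ‘todo’ in the code to the ticket that tracks it, so the debt register and the code point at each other. The bug numbers below are hypothetical:

    from urllib.request import urlopen

    # TODO(bug #1234): timeout hardcoded to hit the v1.0 deadline;
    # move it into the config file when we pay down this debt.
    REQUEST_TIMEOUT_SECONDS = 30

    def fetch_report(url):
        # TODO(bug #1235): errors are swallowed for now; add retries
        # and logging as part of the robustness clean-up.
        try:
            return urlopen(url, timeout=REQUEST_TIMEOUT_SECONDS).read()
        except Exception:
            return None

The point is that every shortcut leaves a grep-able trace that maps back to the tracker, so nothing silently rots.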

When you have debt, you have to keep track of it so that you can pay it off. Anything else is reckless and irresponsible. The metaphor holds equally well in the domain of software engineering as it does in the field of personal (or corporate) finance.

Architectural changes in web applications

July 6th, 2009

For an ‘old-timer’ like me, who witnessed the birth of the web and the adoption of the Internet, it’s been a challenge to unlearn some ‘rules of thumb’. I’m listing some of these as food-for-thought for others who follow a similar technical path.

Moore’s law has mutated. Technology is no longer about boosting speeds and capacities. Gone are the days of the break-neck races between Intel and AMD to achieve higher gigahertz in their CPUs. The new reality is all about parallelism: multiple cores, caching layers in architectures (typically via memcached), and formal and informal means of splitting data across multiple machines (i.e. sharding, load-balancing, map-reduce). Any non-trivial architecture that requires massive scalability has to build in the capability for synchronizing across distributed server components.
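
As a concrete illustration of one of these layers, here is a minimal cache-aside sketch in Python. It assumes a memcached instance on the default port and the pymemcache client; the key scheme, shard count, and fake database call are all invented for the example:

    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))  # assumes memcached on the default port
    NUM_SHARDS = 4                        # invented shard count, for illustration

    def shard_for(user_id):
        # Informal sharding: pick a database shard by hashing the key.
        return user_id % NUM_SHARDS

    def load_profile_from_db(user_id):
        # Stand-in for a real query against shard shard_for(user_id).
        return f"profile-for-{user_id}".encode()

    def get_profile(user_id):
        # Cache-aside: check memcached first, fall back to the database.
        key = f"profile:{user_id}"
        value = cache.get(key)
        if value is None:  # cache miss: hit the DB, then populate the cache
            value = load_profile_from_db(user_id)
            cache.set(key, value, expire=300)  # keep for five minutes
        return value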

Assemble battle-tested components, rather than build a proprietary stack. I’m surprised that people who are learning to program are still taught to use linked lists, and spend time at the data-structure level. Most developers will never need this granularity of understanding, and will simply plug in the data structures from the C++ Standard Template Library, the Java Collections Framework, or whatever language they prefer to use. Obviously, this low-level knowledge is very useful if you’re working in an area that needs it, but frankly, the majority of developers do not need it. The existence of Service Oriented Architectures actually makes it possible to ‘plug into’ remote processing capabilities that are no longer even managed by your team. Cloud computing has taken this to another level: Amazon’s EC2 is not a bizarre anomaly, but a celebrated part of the mainstream now.
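
To make the point concrete, the sketch below leans on Python’s standard containers instead of a hand-rolled linked list; the equivalents in the C++ Standard Template Library or the Java Collections Framework look much the same:

    from collections import deque

    # deque already gives O(1) appends and pops at both ends, tested and
    # tuned by others; a hand-rolled linked list buys you nothing here.
    recent_requests = deque(maxlen=1000)  # fixed-size buffer of recent items
    recent_requests.append("GET /index")
    oldest = recent_requests.popleft()

    # Likewise, dict and set cover most lookup needs without custom trees.
    sessions = {"abc123": {"user": "amin"}}  # session_id -> user data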

Database normalization is passé. There was a time when people bragged about how normalized their DB was. It was a time when purists reigned. Nowadays, unless you’re tracking the world’s financial data, you don’t need that level of normalization. It’s a sign of a confident developer when they purposefully denormalize parts of their database to speed up access and reduce the load on their server. It is possible to do this without running into excessive redundancy, stale data, and integrity problems. The art is in knowing how!
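
Here is a minimal sketch of purposeful denormalization using Python’s built-in sqlite3 (the schema is invented for illustration): each order row carries a redundant copy of the customer’s name so the hot read path avoids a join, and the write path keeps the copy consistent inside a transaction:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (
            id INTEGER PRIMARY KEY,
            customer_id INTEGER REFERENCES customers(id),
            customer_name TEXT,  -- denormalized copy: no join on reads
            total REAL
        );
    """)
    db.execute("INSERT INTO customers VALUES (1, 'Ada')")
    db.execute("INSERT INTO orders VALUES (10, 1, 'Ada', 99.0)")

    # The hot read path needs no join:
    row = db.execute(
        "SELECT customer_name, total FROM orders WHERE id = 10").fetchone()

    # The price of redundancy: a rename must touch both tables, in one
    # transaction, or the copy goes stale.
    with db:
        db.execute("UPDATE customers SET name = 'Ada L.' WHERE id = 1")
        db.execute(
            "UPDATE orders SET customer_name = 'Ada L.' WHERE customer_id = 1")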

REST is available almost for free. There are a number of development frameworks that allow your web application to be offered almost immediately using the SOA model. Sure, you’ll still have the human web interface, but by building on Ruby on Rails, you get the ability for others to query your web application as if it were a remote component in their system. This reduces the interface-rendering processing, and allows collaborators to develop reliable systems that are integrated-via-contracts with your system.
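
Rails gets you this through its resource routing; the Flask sketch below shows the same idea in Python (Flask assumed installed, the resource and data invented for the example): one URL that serves HTML to browsers and JSON to remote callers:

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    ARTICLES = {1: {"title": "Technical Debt - Defined"}}  # toy in-memory data

    @app.route("/articles/<int:article_id>")
    def show_article(article_id):
        article = ARTICLES.get(article_id)
        if article is None:
            return jsonify(error="not found"), 404
        # Content negotiation: machines get JSON, browsers get HTML.
        if request.accept_mimetypes.best == "application/json":
            return jsonify(article)
        return f"<h1>{article['title']}</h1>"

    if __name__ == "__main__":
        app.run()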

These are the most interesting paradigm changes that have taken place in web architectures. Any comments on other shifts that I may have missed?