Language

Optimizing the service-to-cost ratio of your computing system

'Au pays d'Aragon il y avait une fille qui aimait les glaces citron et vanille ...' (In the land of Aragon there was a girl who loved lemon and vanilla ice cream ...)
Boby Lapointe

1 + 1 is less than 2

Let's start by asserting that the objective of computing is to optimize human productivity. This merely excludes a few special computing fields, such as gaming, which are out of the scope of this entire site.

The good news is that in many fields, there are efficient, mainstream and now often cheap products that can bring close to state-of-the-art productivity. The bad news is that when you use several of them, the extra productivity you get is less than the sum of what each of them would bring if used alone.

Let's study the reasons.

First of all, interfacing has always been the weak point of computers. One fundamental reason for it is probably the nature of cybernetic brains as opposed to living brains. A living brain is very strong at making associations, by which I mean saying 'this is roughly equivalent to that'. It is also the weak point of living brains, because the reliability of associations tends to be weak, but that's another story. Where interfacing is concerned, high association capability is what makes it easy for living brains and hard for cybernetic brains (1), because they are not able to recover on their own from any small glitch (due to the lack of flexibility in their association mechanism). So, as long as you introduce only one or a few computing systems, the interfaces are human-computer ones, so the human automatically and efficiently adapts; but when you introduce more computing systems, or open up to external computing systems, then come computer-computer interfaces that are hard to set up, keep reliable and evolve over time, so that the cost of setting up and maintaining the computer-computer interfaces starts to eat into the increased productivity brought by the computers.
Please notice that this is so true that in most real-world organizations, you can find many places where a computer-computer interface has been replaced by a computer-human-computer interface just because it is easier to set up, and the drawbacks (unproductive direct salary cost, reliability, execution speed, availability, the cost of retraining new operators) are often largely underestimated because they are spread across various departments instead of being assigned to the computing department. In the end, the various computing systems 'look' individually cheap, but that is largely a trick related to poor accounting of real costs.

The second factor that tends to lower the productivity gains brought by computers is the huge cost of adaptation. The root of the adaptation issue is the same one: the weak association capabilities of cybernetic brains. As a result, changes that look tiny to a human can very well require reshaping a large part of the computing application, leading to unexpectedly high costs. The consequences are huge at the management level: fast evaluation of all possible organization changes in order to find the optimal adaptation is no longer possible. In other words, as soon as the manager is not the person who developed most of the computing system, only a small subset of the possible organization changes can be scanned, because evaluating the cost of each of them on the computing system is a slow (if reliable) process. In the end, the drawback of the poor adaptation capabilities of computing systems is double: costs tend to be high even for small changes, and moreover, the efficiency of the managers' decision process is seriously reduced. In this paragraph, I have not proved that two computing systems bring less than twice the productivity gain brought by each of them, but rather that each computing system brings less productivity gain than it used to. De Gaulle said 'la vieillesse est un naufrage' (old age is a shipwreck), but in the computing field, old age starts very soon.

The third factor reducing the global productivity gain brought by all computing systems, compared to the productivity gain each of them would bring if used alone, is related to the fact that the complexity of a computing system tends to grow much faster than linearly. This is very much related to the two issues expressed previously: each system interfaces and adapts poorly, so when many are involved, two vicious circles tend to appear: the organization change that is cheap for system A is expensive for system B, and when upgrading system C, system D breaks. So the direct cost of operating and adapting the various computing systems tends to grow fast, reducing the effective productivity gain they bring at the organization level.

The last point that lowers the productivity gain effectively brought by several computing systems is related to the current software market. Mainstream products simply ignore the three issues expressed above. They rather aim at providing the maximal productivity gain right out of the box (2). This implies that the productivity gain is considered when the product is used alone, and unmodified. So they tend to be fat products, with a maximal out-of-the-box feature set, and with impossible (closed products, often with intentional lock-in) or very expensive (mainstream free software) adaptation of the product itself when that becomes necessary.
The side effect is that organizations largely try to escape the huge adaptation costs by selecting, more than anything else, products that are 'backward compatible'. This means nothing less than that, by the second generation, a company is no longer selecting the product that is best suited to optimize its activity today and tomorrow, but merely tries to escape huge transition costs.
In the worst, but quite frequent, case, after a few releases the transition costs tend to exceed the productivity gain brought by the new release, but upgrades are mandatory anyway, either to get fixes for known bugs (often at the price of new ones, so that it will never end), support for new hardware, or simply to satisfy interoperability constraints. Software provider, one point; company, zero.
I have also noticed that even when, at some point, the company decides to restart part of its computing system from scratch to get rid of the constraints of the old solution, the process for selecting the new tools the new system will rely on is often very naive and mostly falls back to either 'we select what our staff already knows', 'we select what we believe to be the standard', or 'we select the product from the company around the corner so that support will be easy'. Since none of these is related to the three main issues expressed at the beginning of this document, the switching costs are paid in full, but the switching gains tend to fade very soon.

I started by asserting that the objective of computing is to optimize human productivity. After explaining the related issues, I conclude that this very general initial goal is the same as the following, much more focused one: preventing the productivity gains brought by computing systems from stalling after some time. This is a difficult goal to achieve, because extending the computing system is difficult due to the interfacing issue and the fast growth of complexity, and keeping it productive is also difficult due to the adaptation issue; but of course, not facing the real issues does not help.

Shantytown computing

In the first part of this document, I laid out computing issues that are very much related to the microscopic nature of computing, expressed as weak association capabilities. I will now deal with the macroscopic aspect, by which I mean overall computing system architecture.
Please notice that the microscopic aspect is largely something you have to live with rather than something you choose. I mean that there are not many choices, and it is unlikely to change unless a revolution happens in computing hardware technology. This has the positive side effect that the conclusions are unlikely to be biased by implicit choices or to soon become old-fashioned. On the other hand, as soon as we deal with the macroscopic aspects, there are a lot of possible architecture choices, so the conclusions tend to be weaker and more opinionated. That's why the ultimate goal of this document will be to propose macroscopic, architecture-level choices driven as directly as possible by the microscopic-level issues.

Since I assume that the reader does not have the computing culture needed to directly review computing products and systems and safely draw conclusions about their strengths and weaknesses at the architecture level, I will now use a helpful metaphor to describe what computing is about.
Let's say that a modern computing system is very much like a big town.
So, the main constraint is that one can't build, rebuild, or even significantly reorganize it in one day. As a result, two important questions are:
What tools are used to build and reorganize the town?
What methods are used to conduct the evolution in the long run?
And the abrupt answers are:
For tools, the standard is spades and picks as opposed to bulldozers.
For methods, the standard is the shantytown model.
Since I expect most readers reaching this point to think, at first, that I am greatly exaggerating the picture, I will now provide arguments to explain how all this happened, and still goes on.

In order to illustrate the shantytown aspect of modern computing systems, let's take a Linux box as an example. What are the shared parts between the operating system, which provides low-level services to the applications and enables them to coexist peacefully on the same machine, the database engine, the application language, the user interface toolkit, the web browser, and the office applications? Not much: they all tend to use their own language (SQL for the database, Java for the application, bash or Perl for the operating system configuration), their own configuration file standard, their own high-level libraries on top of the low-level operating system services, their own caching system, and so on. Having specialized subsystems is not necessarily a bad thing if it enables each of them to be better suited to the special task it is used for. The problem here is that each of them went for a fast and dirty implementation, because a good one would have delayed the project by several years, which was not an acceptable option.
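To make the fragmentation concrete, here is a minimal sketch, in Python and with purely hypothetical file names and keys, of the kind of glue code such a box forces you to write: asking one simple question means speaking a different dialect for each subsystem.

    # Glue code sketch: one logical question ("which port does the service use?")
    # asked of three subsystems, each with its own configuration dialect.
    # All file names, sections and keys below are hypothetical.
    import configparser   # INI style, common for desktop applications
    import re             # ad hoc parsing for shell style 'KEY=value' files
    import sqlite3        # SQL, for settings stored in the database engine

    def port_from_ini(path="app.ini"):
        ini = configparser.ConfigParser()
        ini.read(path)
        return ini.getint("network", "port", fallback=8080)

    def port_from_shell_conf(path="service.conf"):
        # Typical /etc style file: PORT=8080, with its own comment and quoting rules.
        with open(path) as f:
            for line in f:
                m = re.match(r"\s*PORT\s*=\s*(\d+)", line)
                if m:
                    return int(m.group(1))
        return 8080

    def port_from_database(path="settings.db"):
        con = sqlite3.connect(path)
        row = con.execute("SELECT value FROM settings WHERE key='port'").fetchone()
        con.close()
        return int(row[0]) if row else 8080

    # Three parsers, three error models, three quoting conventions: none of this
    # code adds a feature, it only bridges conventions between subsystems.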
Admittedly, that's a bit short as an explanation, but I am not going to try to convince you by reviewing several mainstream computing components in great detail, because it would be too technical and fairly boring. On the other hand, maybe I can more easily convince you by asking the question the other way round: rather than checking the result, which is hard, let's check the assigned resources. I mean: which organization is spending significant effort to coordinate the computing system as a whole?

Let's start with mainframes, where a single organization, IBM, manages nearly everything: the hardware, the operating system, the database engine, and (not any more) the application-level language. The problem here is that a mainframe is a very restrictive system which is only one part of a modern computing system. It provides hardly anything beyond database-oriented tools, and it has to connect to customers through the now mandatory web, not to mention interconnection with different servers. Now, if you look at Linux on mainframes, which was introduced recently to increase the number of services available on mainframes, it is just a port: the zillions of Linux applications have not been rearchitected to properly match the mainframe's high level of predictability and availability. They are just minimally patched to run on the mainframe. So a mainframe reduces the number of hardware boxes to manage, but does not make the overall computing system more consistent at the architecture level.

Then we could look at companies such as Microsoft that have products for many things, so that it is possible to have a Microsoft-only computing system, and Microsoft could spend significant resources on the overall consistency. Some consistency efforts have been made through the introduction of Visual Basic as the single scripting language for many applications, but whatever level of resources might currently be assigned to the subject, the end result is still very limited, for three reasons. First, many of the products are the result of buying out small companies, as opposed to being written from scratch, so consistency with the others was not favored at design time, and the task is daunting even for a large company. Second, the desktop applications do not work in a client-server model, so they cannot be used on the server side, so they cannot be used without huge constraints on the client side (the right software with the right release, enough resources, some application code on the client side, lack of reliability, etc.), so that in the end, the high-end features cannot work beyond the limits of a single company. Third, the single interface that made the success of Windows broke with the success of the web, and Windows has not switched to the new mandatory interface. Assuming it could technically succeed in the switch, which is probably false, doing so would just remove what is in fact the single element that the unity of the Windows world relies on.

Let's finish by studying the Linux world. Who is working on making the whole system consistent? The distributions. Yet their influence does not go beyond providing a common application packaging mechanism (3). That is far from enough to provide any significant result towards the issues expressed at the beginning of this document.
As a conclusion, and in very few words: no powerful organization is currently working efficiently at providing an overall consistent mainstream computing system, so it is very unlikely to happen. The latest and greatest method to deal with the shantytown aspect is still to add a wall, named the graphical user interface, to separate the nice district where the user lives from the shantytown where the real work is performed. The Mac world did it first, then Windows, and now Linux with Gnome and Ubuntu. The reality is that there is no miracle: either you will enter the shantytown very often, or the interfacing and evolution issues expressed at the beginning of this document will just hit you (kick your ass) harder (4).

Let me now illustrate what I mean when I say that the tools used to build the large town, our metaphor for an overall computing system, are spades and picks. I am simply referring to the number of lines of code (7) that the tools and applications consist of, and as a result, to the number of workers required to build and maintain them. There are several factors favoring this situation.
Let's start with the technical side: it is much easier to achieve a computing service with a lot of lines of code than with few, just as it is much easier in mathematics to provide a long proof of a given problem than a short one. Now, let's see how this impacts the attitude of external computing tool providers and of the internal computing department.
First, the cost of the transition from any computing system to a significantly different one strongly favors companies and products that adopt a first-to-market strategy, as opposed to those that spend more time on design and so reach the market later. On the computing market, fast and dirty is still the winning recipe, so tools tend to be fat and poor at the design level (5).
The same applies internally: a person in the computing department is very much under pressure in the early days of the application cycle, when it is not yet used in production. On the other hand, as soon as the application enters production, the high cost of transitioning to a completely different system, and even the inconsistencies of the application itself, start to protect him. So the computing department will also tend to just ignore the main issues expressed earlier in this document and favor fast and dirty, hence fat and poorly designed.
Then, managers still do not learn at school any efficient method to evaluate their computing system, so they tend to be very conservative, receptive to marketing from computing tool suppliers, and often even to naively believe what the people in the computing department say. So, once again, this helps computing tool suppliers with a lot of cash from a successful, old-fashioned product keep their position, and the computing department never does the housekeeping required to bring the overall system complexity down. Managers can rely on external auditing to get reliable information on subjects they do not master or have no time to study, but an external auditing company is not that good at discovering how computing might be optimized in one particular company, because its knowledge of the details of the company's organization is generally weak. On the other hand, the strength of an external auditing company is its ability to compare with other, quite similar companies. So, if the standard in computing systems is fatness and mediocrity, the external auditing company will not bring any helpful advice to the manager.
Lastly, the possible 'size and cost' reduction factor is deeply underestimated. One reason is that unless you are a visionary, you need a reference to compare with. Since the only reference of an up-to-date, overall consistent, and therefore short computing system is Pliant, and Pliant is completely ignored, people tend to perceive only the local possible optimizations, which they evaluate at less than a 50% possible reduction, whereas if you count lines, you get an astonishing reduction of more than 90%. The reason is that most parts of a real computing system are just duplicated work (code that brings nothing new), glue code (code that does nothing), and unnecessarily complex solutions (the result of the fast and dirty method). Please notice that a given part of the code usually shows not just one of these problems but most of the time all of them, and since each of them tends not to add lines but to multiply the number of lines, you get the effective explanation of the unbelievable (and unbelieved) final result.
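As a purely illustrative calculation, with assumed rather than measured factors, here is how a few modest multiplicative factors compound into a better-than-90% possible reduction:

    # Hypothetical factors: how much each problem multiplies the line count.
    duplication = 3.0   # the same work redone in several subsystems
    glue        = 2.0   # code that only bridges conventions and brings no feature
    overdesign  = 2.0   # unnecessarily complex solutions from fast and dirty work

    bloat = duplication * glue * overdesign   # the factors multiply, they do not add
    print(bloat)                              # 12.0
    print(1 - 1 / bloat)                      # about 0.92, i.e. a more than 90% possible reduction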

In the end, since the possible improvements are so much underestimated, the task assigned to the manager of the computing department is currently more to keep the computing system cost within the specified budget, just because an exceeded computing department budget is immediately visible, whereas missed productivity gains are hard to account for. The bad side effect is that, seen from a higher level, the computing department adjusts to the budget constraint by in fact reducing the service provided to the company. The service reduction happens in several areas: suboptimal interfaces that add costs to other departments which, as we have seen earlier, will not be accounted back to the computing department, and increased requested budgets to provide extra features, as a result of the unnecessary complexity of the existing system, which means fewer new services get deployed in the end.

A more effective computing system quality measure

I started by explaining that the weak points of current mainstream computing systems are poor interfacing capabilities and the high cost of applying tiny changes, both due to the poor association capabilities of cybernetic brains, and resulting in the overall productivity improvements brought by computing technology stalling earlier than expected.
Then I described mainstream computing systems as inconsistent and unnecessarily fat.
So, my conclusion is now straightforward: since you can't change the profound nature of computing systems, you just can't prevent interfacing and adaptation issues from arising (6), so it is very important to work on reducing the cost of solving them, and the best way to do that is to bring down the complexity of the overall computing system. It simply makes locating troubles in the code, patching, or partially redesigning easier and faster, hence cheaper. In other words, you can't prevent problems from arising, but you can greatly reduce the cost of handling them by reducing the overall complexity.

As a result, rather than just counting the number of features, a more effective method to measure the quality of a computing system is to divide the number of provided features by the number of lines of code (7), and to apply this measure both when selecting external tools and at the level of internal applications (8).
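As a minimal sketch of the measure, with purely hypothetical feature and line counts, comparing two candidate tools could look like this:

    # Quality measure sketch: provided features divided by lines of code.
    def quality(features, lines_of_code):
        # Higher is better: more service per line to understand and maintain.
        return features / lines_of_code

    # Hypothetical candidates offering roughly the same feature set:
    tool_a = quality(features=120, lines_of_code=400_000)
    tool_b = quality(features=100, lines_of_code=40_000)
    print(tool_a, tool_b)   # 0.0003 versus 0.0025: the leaner tool_b scores about eight times higher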

Another corollary is that providing the new feature within time and budget is only the short-term part of managing the computing department. Continuously reducing the complexity of the overall computing system is the long-term task, and the measure just proposed is an effective way to check it, just as the budget is an effective way to control the short-term task.

 

(1)
Some programming techniques, named 'neuronal programming', have been introduced that focus on providing more association capabilities to cybernetic brains, but none of them is currently used to deal with interfacing issues, and whether they would provide satisfying results in this area (automatically solving issues without creating new ones) is still unknown.

(2)
The main reason is the lack of computing culture among computing system buyers. Culture takes a long time to acquire, so we currently see in the computing field many of the characteristics we can see in the political field in the early years after democracy has been introduced.

(3)
Without it, installing a Linux box would require several weeks of work to compile the various packages from unpatched sources and to deal with the zillions of related small issues.

(4)
The standard answer of dumb people in this area is to deny the interface issue of their favorite system by saying that they have no interface issues with the other people using the same great system, as if that were a proof that this particular system is better.

(5)
The ability of a product to escape its initial design limits is a myth most of the time, but a myth with a lot of marketing support.

(6)
Many people think that correcting the bugs is enough. A very sure sign of mediocrity!

(7)
Grumpy people will answer that the number of lines is not meaningful because, in many programming systems, an application can be written as a single very long line. That's true. What I really mean is the number of elements, where an element is either a word as in human languages (an example could be 'then'), or a set of consecutive signs that changes meaning if separated (an example could be ':='), or a numerical value (an example could be '12.6').
Grumpy people will point out that the measure is still weak because any application can be turned into a single very large number plus some tiny decoding code, so I have to add that all constants must be computed by hand, as opposed to with the help of a computer.
Grumpy people will once more reply that a human can perform any computation given enough time. Prove it. While you play with your pen, I'll be able to finish this article undisturbed :-)
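As a minimal sketch of such an element count (an illustrative tokenizer of my own, not a reference implementation), one could write:

    import re

    # An element is either a word, a run of consecutive signs, or a numerical value.
    ELEMENT = re.compile(r"""
        \d+(\.\d+)?     # a numerical value, e.g. 12.6
      | [A-Za-z_]\w*    # a word, e.g. then
      | [^\w\s]+        # consecutive signs, e.g. :=
    """, re.VERBOSE)

    def count_elements(source):
        return sum(1 for _ in ELEMENT.finditer(source))

    print(count_elements("if x >= 12.6 then y := x + 1"))   # 10 elements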

(8)
Improving the accounting by charging to the computing department the people employed at various places just to cope with the weaknesses of the computing system, mostly at the interface level, could also be helpful, but might prove difficult on the practical side.