Could the cloud drown in FUD?

Mike Kavis has written a great post following the Forrester EA forum, suggesting that cloud computing faces the risk of heading down the same road of death by over-definition recently run by SOA. I couldn’t agree more with what he says – especially his lessons for getting the pitch right. Still, I wouldn’t lose too much sleep about cloud computing going away any time soon.

As a concept (even one that is woefully misunderstood and misrepresented), cloud computing is orders of magnitude simpler to explain than SOA ever was. SOA is the only industry buzzword I’ve ended up buying books about just to get my head around the general concept (that may say more about me than about SOA, mind).

Cloud computing, I feel, has much more of a self-fulfilling dynamic about it than SOA ever had. The economics are mind-blowingly simple – even if a solution ends up more expensive than in-house, it is at least cost-transparent. The benefits to the business are clear, and they are very often the things that in-house IT has been failing to deliver for years (think agility and effective communication of costs in particular).

Ultimately, while some of the FUD is important, it’s in the process of being answered. Most of the issues with cloud computing have been solved somewhere already. What we’re going to see (sorry, are seeing!) now is the emergence of services able to tick many boxes simultaneously. These services will take off, FUD or no FUD.

Five years from now, we may well all have forgotten the phrase ‘cloud computing’ but it will be there one way or another, and the enterprise IT department and the data-centre will have been changed forever regardless.

A venerable old bird retires

And with it goes the reason this blog is (currently) called 72A. That’s the seat number (think of it as very, very, very economy) I usually ended up in during my all-too-frequent trips from Melbourne to Perth in 2008. To explain: Qantas has finally retired the last of the ancient 747s it had been using on this route to punish travellers.

In Qantas’ defence, this was really the fault of Airbus and Boeing, who failed miserably in their efforts to get new, more fuel-efficient jets out the door. Still, I won’t miss being steadily deafened while sitting at the back of the old monsters.

(Re)fragmenting the IT department

When I started in IT in the mid-1990s, many medium and even large organisations had highly fragmented IT delivery functions. At Ernst & Young in 1994, I worked in a small team delivering IT to the Yorkshire office in the North of England. At the start of the year, we were largely autonomous and able to deliver new services and applications quickly and with only local change control. By the end of the year, we were (along with everyone else in E&Y IT) being outsourced to Sema and amalgamated into a single IT department.

Likewise at the BBC, I started out in the IT team of the ‘Youth and Entertainment Features’ department in BBC Manchester (one of three support and delivery organisations in one building!). By the time I left Auntie in 2004, I had been through three sets of organisational consolidation before finally being outsourced (again!) to Siemens.

The last twenty years have seen a steady process of consolidation of IT delivery in organisations. The relentless trend has been the centralisation of development, infrastructure delivery and support into the corporate IT department. Outside of business units with very high margins and esoteric IT needs, it has become increasingly difficult for business units to develop and deploy applications without the cooperation of the corporate IT department.

I wonder, though, whether cloud service providers open up the risk (or is it an opportunity?) that business units will once again be able to develop, deploy and support new applications independently of the corporate IT department. Where once deploying an unauthorised app meant running servers under desks or stacking LaCie drives off the back of a desktop to create a private file server, business units can now employ any one of thousands of boutique consultancies or developers to knock up the apps of their dreams, using the cloud to obviate the need to involve corporate IT.

Of course, corporate security and finance policy may well stand in the way, but history suggests these won’t be too great an impediment. Once more than a few apps have been deployed, we might see the regrowth of the parallel support organisations that the corporate IT department thought it had seen the back of (or, more likely, absorbed) over the last twenty years.

If that happens, there are all sorts of consequences, many of them nasty. Even so, business units and even businesses as a whole might be willing to bear those costs to gain agility and bypass what many see (if unfairly) as bureaucratic impediments to business thrown up by corporate IT.

The answer for corporate IT is to make sure that it is as easy, or nearly as easy, to develop and deploy new applications through them as it is through the cloud suppliers. Maybe that means using the public cloud to support internal systems, or maybe it means developing private clouds (though some are already pouring cold water on that idea). Either way (or some other way…), it’s an interesting time to be in IT.

Building resilience into applications

Storagebod has written that:

“[When new applications are deployed,] often the first contact that the infrastructure team will have is when an application is delivered to be integrated into the infrastructure and they try to get the application to meet it’s NFRs, SLAs etc.


Turning to the infrastructure to fix application problems, design flaws and oversights [in application design] should become the back-stop; yes, we will still use infrastructure to fix many problems but less often and with a greater understanding of why and what the implications are.”

I agree that it would be nice to see applications and developers bear more of the burden of ensuring recoverability, from both operational recovery (OR) and disaster recovery (DR) perspectives. It’s worth noting, though, that in the not so distant past, applications that simply had to work – that were ‘carrier grade’, so to speak – would be developed on operating systems that had the necessary software ‘infrastructure’ to deliver on those NFRs. That raises the question of why we don’t see all applications built this way. There are a number of reasons, but I’d argue that the primary one is simply that application development is more expensive than infrastructure.

Developing a new application or platform costs a lot of money. Whatever the complexity involved in delivering non-functional requirements (NFRs) such as availability, the pain involved in determining the functional requirements (the features) is far greater. Outside of a small number of edge cases, such as core software components of telecoms networks or manufacturing facilities, it does not make sense to build operational or disaster fault tolerance into the code of an application. Application developers (both internally in companies and in the wider ISV environment) focus on the functionality that delivers value to the business or will help sell their product, not on replication, block-level data validation or data recovery.

Even where developers are interested in building in resilience, they have been faced with a lack of software ‘infrastructure’ to support them. Many of those highly resilient applications in telcos, manufacturers and the like were built on operating systems, or used components, that provided services such as shared-everything clustering and highly resilient, shareable file systems. OpenVMS, which pioneered many of these services, will forever be a niche product – albeit one that supports many extremely critical functions – because of the costs, in cash and flexibility, that are paid when applications are developed on it. Building in resilience makes initial development, ongoing development and maintenance of applications more complex and expensive. It also means that the developer has to take responsibility for guaranteeing recoverability – and who in their right mind would want to do that ;)?
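To make the cost concrete: even a toy version of application-level resilience – writing every record to more than one store and validating it on the way back – adds code that someone has to design, test and own forever. A minimal sketch (the `ReplicatedStore` class and its interface are invented purely for illustration; this is not any real product’s API):

```python
import hashlib
import json

class ReplicatedStore:
    """Toy application-level replication: every write goes to all
    replicas with a checksum; reads fall back to a surviving replica
    and verify the data before returning it."""

    def __init__(self, *replicas):
        # Each replica is just a plain dict standing in for a real store.
        self.replicas = replicas

    def put(self, key, value):
        payload = json.dumps(value).encode()
        digest = hashlib.sha256(payload).hexdigest()
        for replica in self.replicas:
            replica[key] = (payload, digest)

    def get(self, key):
        for replica in self.replicas:
            record = replica.get(key)
            if record is None:
                continue  # this replica missed the write or lost the data
            payload, digest = record
            if hashlib.sha256(payload).hexdigest() == digest:
                return json.loads(payload)
            # Checksum mismatch: silent corruption, try the next replica.
        raise LookupError(f"{key!r}: no intact copy on any replica")

# Survives the total loss of one replica.
a, b = {}, {}
store = ReplicatedStore(a, b)
store.put("invoice-42", {"amount": 99.0})
a.clear()  # simulate losing one replica entirely
assert store.get("invoice-42") == {"amount": 99.0}
```

Every line of that is the application developer’s problem now – failure modes, testing, upgrades – which is exactly the responsibility most teams are happy to leave with the storage array.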

Today, Oracle and MS are building a new variety of this software ‘infrastructure’ into their products, but it’s only being used in a small proportion of developments. Even given the possibility of saving some money on storage replication, people don’t seem to be using these services all that readily – for the reason, I suspect, that developers (and management overhead) are still more expensive than those replication licences.

The only way to change that situation would be (as SB notes) to make delivering resilience at the application layer simple, repeatable and manageable. That’s very much easier said than done though and the twenty-odd years of development of infrastructure resilience services is testament to that. There’s one place where the problem is being addressed though and that’s out there in the cloud….

Personally, I think there is often a wider issue of integration between application and infrastructure teams that leads organisations to focus on data recoverability rather than application or service recoverability. That boils down to process and, in some cases, (over-)specialisation – but it’s a question for another day, I think.

Chucked about the bus

If you have ever lived in one of the world’s major metropolises, you’ll almost certainly have experienced the phenomenon of the g-force-happy public transport operator. This might be your bus, tram, trolley or train driver, or maybe even a snarky black cabby eager to show off the virility of his new TX4.

In each case, the result for the poor, defenceless public transport user is the sensation of being tossed around like a cherry tomato in a particularly vigorous salad spinner. On London’s buses I’ve seen grannies thrown to the floor, coffees dashed against windows and handfuls of coins scattered underfoot as a driver, seemingly in the belief that only he is on the bus (and road), pulls away from the stop like one of Max Power’s finest.

In London, by the time I left it seemed there was nary a driver or operator left who understood how to drive the monstrous vehicles for which they were responsible. Given automatic gearboxes and a tidy nine litre engine, these beasts are capable of out-accelerating many of the smaller cars on London’s roads. In the hands of the hoolies who drive them, they are more than capable of disrupting the connection between floor and foot of even the toughest young London commuter.

It’s not just London, though, and it’s not just buses. Even in laid-back Melbourne, Yarra Trams seems to employ a number of real bruisers in the cabs of their shiny (and some not so shiny) trams. While they are, in general, better than London’s bus drivers (I have yet to see any wearing those string-backed driving gloves that I noticed appearing on the hands of some particularly speed-obsessed drivers), I have been chucked about pretty effectively by one or two testing out the grunt available from their charges.

Cornering Tram

So what’s the answer? I remember a few years ago reading an article (probably in the Metro – where else would you find it) about a study of London Underground operators. This study found that passengers generally prefer the style of women operators. In the absence of those ‘amusing’ commentaries that some operators like to provide their passengers, this would be an entirely ‘blind’ test and potentially very reliable. By all accounts, the female operators were able to accelerate, brake and corner their trains in such a way that passengers were able to maintain contact between foot and floor and bum and seat. Something that was simply not possible when their male colleagues took the stick.

Sadly though, I suspect that Britain’s discrimination laws would see off any chance of firing all the male drivers out there. Realistically then, we can only expect the ladies to sort out a proportion of the problem. Luckily, I have a solution for the rest.

Accelerometers. It’s really very simple. Just as lorry drivers have a record kept of their speed and duration in the seat, just as airline pilots have their every action tracked, monitored and investigated, and just as MPs honestly track their expenses (um), public transport operators should be allocated a maximum average ‘g’ per km travelled in any given day.

The driver who accelerates smoothly, brakes with anticipation and treats their passengers like, you know, passengers will set the benchmark. The drivers who think their job is a competition to achieve the greatest deviation of the top deck of their bus from its centre of gravity will, by contrast, find themselves receiving remedial training around Swindon’s Magic Roundabout (and if that doesn’t teach them, they can just stay in Swindon).
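The metric itself really is a little bit of electronics and almost no maths: sample the horizontal acceleration a few times a second, average the magnitude in g, and divide by kilometres driven that day. A toy sketch (the function name, sample format and figures are all made up for illustration):

```python
import math

def average_g_per_km(samples, distance_km):
    """Score a shift: mean horizontal g-force per km travelled.

    samples: list of (ax, ay) horizontal accelerations in m/s^2,
             taken at regular intervals by the vehicle's accelerometer.
    """
    G = 9.81  # m/s^2 in one g
    # Magnitude of horizontal acceleration for each sample, in g.
    gs = [math.hypot(ax, ay) / G for ax, ay in samples]
    mean_g = sum(gs) / len(gs)
    return mean_g / distance_km

# A smooth driver vs. a Max Power enthusiast over the same 5 km route.
smooth = [(0.3, 0.1)] * 100
hooligan = [(2.5, 1.8)] * 100
assert average_g_per_km(smooth, 5.0) < average_g_per_km(hooligan, 5.0)
```

Anything over the benchmark set by the smooth drivers earns a trip to Swindon.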

Just imagine being able to walk the deck on the bus to your seat safe in the knowledge that your driver isn’t going to attempt to place you on your backside on the floor. Bliss, and all for the sake of a little bit of electronics and some damn harsh sanctions.

And yes. I smacked my head on one of the yellow rails of a tram a few days ago after the driver of a 96 decided to find out what the 0-60 time of the Bumblebee was.