Thursday, May 1, 2014

Is cloud computing the future or the past?

More years ago than I prefer to think about, we learned computer programming using a dumb terminal that could only do whatever the central processor permitted -- and there were many evenings and many trees wasted on re-doing punch cards until the CPU was satisfied.  Anything beyond running the few programs we were allowed to use required the intervention of the "gold coats" - the system administrators, who knew how to run the machines but were of no help whatever with building programs.

One day, some brilliant young fellows in California (and an older fellow in Texas) came up with the idea of providing people with the ability to do their own computing.  In a remarkably short period, the entire planet underwent a new industrial revolution.   And in a remarkably short period of time ... the experts want to turn it all off again.

Hot new trends include virtual desktops, tablet computers and cloud-based software.  All of them have a common thread: without connectivity, they are completely unusable.  Everything must be done through the central processor, or nothing is done at all.  Sure, it's handy having a solution right out of the box - well, really without a box at all.  Just log in and off you go.  Once established and proven, a cloud system will probably be pretty effective in meeting the center-of-mass requirements.  If the cloud provider has already worked out how to interact with other systems, you can have a whole suite of software solutions at your fingertips in a matter of hours.  But you'll need to do everything their way.  And you have to trust them to get it right.

As it happens, I am a fan of cloud solutions.  You can get productive very quickly, effectively and generally inexpensively.  But before going all-cloud all the time, stop and think about whether you are fully comfortable with the risks you must assume.

If we have learned anything in the past decade or so, it should be that large organizations will, by the sheer law of averages, make large mistakes that have great impacts on everyone else (although seemingly little consequence for themselves).  A large provider's network operations certainly are run better and cheaper than you could do for yourself.  But when large providers do fail, they do so on such a large scale that it is hard to avoid.  You or your company can join the victims of technology or data events that are related to your business only in the very vague sense that you (and thousands of others) share your computing space with someone else.

The whole point of automated work is to make it more productive.  The ultimate cloud application - remote working - removes the need to shut down the business just because of icy roads.  Workers can stay home and work.  Of course, all these things have flip sides: if your remote workers cannot connect, then you're back to paying everyone to not work.  Or, if you have virtualized your desktops in the office too, then you'd better make sure your network is very stable.  The only thing worse than paying people to sit home and not work is making them come into the office and paying them to sit around and not work.  Productivity gains from letting workers telework instead of losing a day to weather are erased if the network is out of action for a day.

Does all of this matter?  As far as being able to do anything about the industry's inexorable drive to put itself out of business, probably not.  But it is worth thinking about whether one-size-fits-all does in fact fit you, and what you're going to do about those times (and they are a matter of "when", not "if") when you and your team cannot connect to the network.
