Sunday, May 18, 2014

WBS by phase or deliverable?

In almost every project of any complexity, a debate arises as to how to set up the work breakdown structure.

Traditionally, it is about the work. If you're building an airplane, you need an engine, wings, a body, and so on. These components often get produced by different vendors, so keeping them together makes contract management and earned value reporting much simpler. It also lets one see progress toward a completed component or product. Besides, it's called a work breakdown structure - shouldn't it be about the work? But more often than not, the project WBS looks like a life cycle: plan, gather requirements, design, build, test, deploy (or something very much like that).

As soon as you start decomposing a product-based schedule - assuming an organization with some level of process maturity - you notice that you're just cutting and pasting task names. Each element of the solution requires each of the life-cycle phases. Each phase has deliverables in it, so those deliverables are repeated numerous times throughout the schedule. And each of them has a set of generally identical steps: draft document X, review document X, update document X, obtain approval for document X, publish and distribute document X. Very tedious.

It doesn't take long before somebody says "why don't we organize this thing the other way around?" Put all the planning and approval actions together, all the requirements gathering together, and so on. This is especially prevalent where the organization has defined a fairly comprehensive list of artifacts, while the actual products the project is to deliver are somewhat nebulous. All the more reason to expose those products at the highest level of the project!

The math on this is actually quite simple: in every circumstance but one, it doesn't matter. The number of tasks is the number of components times the number of process steps, no matter which way you group them.
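To make the arithmetic concrete, here is a minimal sketch in Python (the component and phase names are hypothetical, not drawn from any particular project) showing that the total task count is identical whichever axis you group by:

```python
# Hypothetical components and life-cycle phases, for illustration only.
components = ["engine", "wings", "fuselage", "wheels"]
phases = ["plan", "requirements", "design", "build", "test", "deploy"]

# Product-based WBS: tasks grouped under each component.
by_component = {c: [f"{p} {c}" for p in phases] for c in components}

# Life-cycle WBS: the same tasks grouped under each phase.
by_phase = {p: [f"{p} {c}" for c in components] for p in phases}

# Either way, the total is components times steps: 4 * 6 = 24 tasks.
total = len(components) * len(phases)
assert sum(len(tasks) for tasks in by_component.values()) == total
assert sum(len(tasks) for tasks in by_phase.values()) == total
```

Only the grouping changes; the work itself does not.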

However, the one exception may actually be the norm. If many of the product components move through the process at exactly the same pace, then they can be handled together on a single artifact set (the requirements for wings, fuselage, and wheels as one document, and the requirements for the engine as a separate one, rather than tracking four sets of artifacts through their creation). If the entire work is subjected to a series of phase gates, where work is essentially being steered towards at least notional stopping points, this is not an unreasonable point of view. It encourages a realization that the work does have to pass through these phase gates, and it acknowledges the governance authority's right to stop the work where it no longer seems to make sense.

The main disadvantage of focusing on the life cycle rather than the work product is that it tends to lead people to forget that the project is about bringing actual working products or services into being, not about getting a bunch of documents completed. As noted above, if the project's real deliverables (not the artifacts) are unclear at the beginning, hiding that structure inside a canned SDLC template is not going to provoke the necessary thought; such a project is almost sure to end up missing some very important capabilities, the absence of which may go undetected under the sheer volume of artifacts.

Now, what is NOT acceptable is to use a life-cycle approach for things that are delivered at different times. That completely masks visibility into the progress of the products. For instance, if a solution will be rolled out in three distinct increments, it is very important that all the work leading to each incremental roll-out be part of an integrated package. Much of the work for subsequent increments does build on similar work in earlier increments, but that's no reason to package (for instance) all the design work for all increments under an overall "design" heading.

Perhaps one of the reasons this issue keeps coming back up is how easy tools like Microsoft Project make it to pile on detail. Just because one can do so quite easily does not mean it needs to be done. Putting it in there is one thing; managing it is another (and no doubt there are professional schedulers who relish the job security a grossly over-complicated schedule seems to provide, at least until the project collapses under the weight of its own overhead). A widely abused "best practice" in project management is to have no task lasting longer than two weeks. If we go to the references, we see that the actual recommendation is that no task in a schedule should be longer than twice the reporting cycle - and that reporting cycle should be based on what makes sense for the level to which reports are being rendered. No doubt a work team leader wants weekly reporting, but in that case, what is the proper degree of granularity for a business-unit executive?
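A minimal sketch of that rule of thumb, with purely illustrative reporting cycles:

```python
def max_task_duration_days(reporting_cycle_days: int) -> int:
    """Rule of thumb: no task should run longer than twice the reporting cycle."""
    return 2 * reporting_cycle_days

# A team leader reporting weekly can justify tasks of up to two weeks...
assert max_task_duration_days(7) == 14
# ...while an executive briefed monthly needs far coarser tasks -
# not a thousand "review draft report" line items.
assert max_task_duration_days(30) == 60
```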

Let's not forget the other guidance in this area: projects should be baselined (and then reported on) at the contract WBS level, which is defined as one level (at most two levels) of indenture below the contract itself - at most the third level in the project. If you've got tasks like "review draft report" in your schedule, you'd better be leading the team that is actually writing or reviewing that report. If this sort of task is making its way up the reporting chain, you're adding a lot of fairly meaningless reporting overhead and cost to your projects, and the sheer volume of it all may be distracting you from much more important observations.



Thursday, May 1, 2014

Is cloud computing the future or the past?

More years ago than I prefer to think about, we learned computer programming on a dumb terminal that could do only what the central processor allowed - and many evenings and many trees were wasted re-doing punch cards until the CPU was satisfied. Anything beyond running the few programs we were allowed to use required the intervention of the "gold coats" - the system administrators, who knew how to run the machines but were of no help whatever with building programs.

One day, some brilliant young fellows in California (and an older fellow in Texas) came up with the idea of providing people with the ability to do their own computing.  In a remarkably short period, the entire planet underwent a new industrial revolution.   And in a remarkably short period of time ... the experts want to turn it all off again.

Hot new trends include virtual desktops, tablet computers, and cloud-based software. All of them have a common thread: without connectivity, they are completely unusable. Everything must be done through the central processor, or nothing is done at all. Sure, it's handy having a solution right out of the box - well, really without a box at all. Just log in and off you go. Once established and proven, a cloud system will probably be pretty effective in meeting the center-of-mass requirements. If the cloud system has already worked out how to interact with other systems, you can have a whole suite of software solutions at your fingertips in a matter of hours. But you'll need to do everything their way. And you have to trust them to get it right.

As it happens, I am a fan of cloud solutions.  You can get productive very quickly, effectively and generally inexpensively.  But before going all-cloud all the time, stop and think about whether you are fully comfortable with the risks you must assume.

If we have learned anything in the past decade or so, it should be that large organizations will, by the sheer law of averages, make large mistakes that have great impacts on everyone else (although seemingly little consequence for themselves). A large provider's network operations are certainly run better and more cheaply than you could manage for yourself. But when large providers do fail, they fail on a scale that is hard to avoid. You or your company can join the victims of technology or data events that are related to your business only in the very vague sense that you (and thousands of others) share your computing space with someone else.

The whole point of automating work is to make it more productive. The ultimate cloud application - remote working - takes care of having to shut down the business just because of icy roads. Workers can stay home and work. Of course, all these things have flip sides: if your remote workers cannot connect, then you're back to paying everyone not to work. And if you have virtualized your desktops in the office too, then you'd better make sure your network is very stable. The only thing worse than paying people to sit home and not work is making them come into the office and paying them to sit around and not work. Productivity gains from letting workers telework instead of losing a day to weather are erased if the network is out of action for a day.
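A back-of-the-envelope sketch (all figures hypothetical) shows how quickly outage days can erase the gain:

```python
# Entirely hypothetical figures, purely to illustrate the trade-off.
weather_days_saved = 4    # days/year staff telework instead of losing them to weather
network_outage_days = 4   # days/year nobody can connect at all

net_gain = weather_days_saved - network_outage_days
print(f"Net productive days gained per year: {net_gain}")  # -> 0
# If outages match the weather days saved, the benefit is a wash:
# idle workers at home cost just as much as idle workers in the office.
```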

Does all of this matter? As far as being able to do anything about the industry's inexorable drive to put itself out of business, probably not. But it is worth thinking about whether one-size-fits-all does in fact fit you, and what you're going to do about those times (and they are a matter of "when", not "if") when you and your team cannot connect to the network.