Infrastructure Outsourcing: Time to Mind Your Ps and Qs (Part 1 of 3)

Traditionally, the mechanism for creating value in an IT Infrastructure sourcing has been to push down hard, really hard, on price: the “P” lever. The notion is that a sourcing will result in a lower unit cost for the labor needed to manage a device or deliver a service. The objective is to create as big a difference as possible between the buyer’s current unit cost and the supplier’s proposed unit price. The reason is obvious: P * Q = Total Price.

To create value for the buyer by reducing the total price, either P or Q has to change. Historically, P is what changes, because the buyer expects to have at least the same, if not a higher, quantity of things (devices, applications, project hours, etc.) as they have today. As it has for the last two decades, this remains the strategy behind most, if not all, IT Infrastructure managed services arrangements. Suppliers’ value propositions are predicated on lower unit costs, achieved partly through lower labor costs and partly through improved productivity.
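The arithmetic behind the price lever can be sketched in a few lines of Python. All figures here are hypothetical, purely for illustration:

```python
# Hypothetical illustration of the P * Q = Total Price lever.
# The device counts and unit prices below are invented examples.

def total_price(unit_price, quantity):
    """Total Price = P * Q."""
    return unit_price * quantity

# Buyer today: 2,000 managed servers at $120 per server per month.
current = total_price(120, 2000)   # $240,000 per month

# A classic sourcing deal: the quantity stays flat (or grows),
# so all of the value must come from a lower unit price (P).
sourced = total_price(95, 2000)    # $190,000 per month

savings = current - sourced        # $50,000 per month
```

The point of the sketch is that with Q held constant, the benefits stream depends entirely on the gap between the buyer’s current unit cost and the supplier’s unit price.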

Yet over the last several years the conventional alchemy has become less likely to create the benefit buyers are seeking. We are seeing a number of buyers whose unit costs are at or below the unit prices offered by suppliers. While it is hard to pin down exactly why buyers’ costs have declined, it is pretty clear that investments in technology and productivity, or lower salaries, are not the drivers. Generally, it appears to be the result of the weak economy and constant pressure on IT budgets. IT Infrastructure organizations have been forced to reduce staff and leave open positions unfilled while the quantity of devices, storage and services has stayed the same or increased, reducing the unit cost. Service delivery risk has likely increased, but we have yet to see a buyer quantify or price that added risk.
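A small sketch (with hypothetical headcount and device figures) shows how unit cost can fall with no efficiency gain at all, simply because the numerator shrank while the denominator held steady:

```python
# Hypothetical: unit cost falls because headcount falls while
# device counts stay flat -- not because of any productivity gain.
# All figures are invented for illustration.

def unit_cost(annual_labor_cost, devices):
    """Unit cost = total labor cost / number of managed devices."""
    return annual_labor_cost / devices

# Before budget cuts: 50 admins at $100k each managing 5,000 devices.
before = unit_cost(50 * 100_000, 5000)   # $1,000 per device per year

# After cuts: 40 admins, same 5,000 devices.
after = unit_cost(40 * 100_000, 5000)    # $800 per device per year

# Unit cost dropped 20% with no change in technology or process;
# the unpriced residue is higher service delivery risk.
```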

Parity (or near parity) between market unit prices and what buyers are already achieving is a difficult issue to overcome. Most executives on the buy side are unwilling to accept the transitional operational risk and the switching costs of a sourcing without a robust financial benefits stream. And while we have seen a few who recognize their increased risk position, few are willing to increase their costs for risk mitigation in a continued weak economy. So there appears to be no choice but to address the other variable in the total price equation: the quantities (the “Qs”).

There are two sides to quantities: the demand side and the supply side. While it is certainly fashionable to argue for demand-side controls, I believe that businesses generally demand what they need and are not inefficient in their requests. To the extent they are, the magnitude is insignificant compared to the inefficiencies on the supply side. IT organizations simply use too much stuff to meet the needs of the businesses. Businesses don’t buy machines and run them at 10–25% utilization, and they don’t make nine copies of every terabyte of storage, for example. Those outcomes are the result of weak internal IT governance models and decision rights, poorly architected and developed business applications, and infrastructure organizations that for years have satisfied demand without considering the long-term supply-side impact, expecting instead to be rescued by some forthcoming platform or tooling marvel.
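To see how much Q the supply side inflates, consider a sketch built on the kinds of figures cited above (low server utilization, many storage copies). The workload and capacity numbers are hypothetical:

```python
# Hypothetical sketch of supply-side inefficiency inflating Q.
# Workload, capacity, and copy counts are invented examples.
import math

def servers_needed(workload_units, capacity_per_server, utilization):
    # At low utilization, each server delivers only a fraction of its
    # capacity, so more servers (a bigger Q) must be bought.
    return math.ceil(workload_units / (capacity_per_server * utilization))

# 1,000 workload units, 10 units of capacity per server:
at_15_pct = servers_needed(1000, 10, 0.15)  # 667 servers
at_60_pct = servers_needed(1000, 10, 0.60)  # 167 servers

# Storage: one primary terabyte plus nine copies (backups, replicas,
# dev/test clones) turns 100 TB of business data into 1,000 TB bought.
copies_per_tb = 9
purchased_tb = 100 * (1 + copies_per_tb)    # 1,000 TB
```

Under these assumed numbers, the same workload consumes four times the servers at 15% utilization as at 60%, and the storage estate is ten times the primary data, which is the Q-side waste the demand side never asked for.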

What are the answers? In Part 2 of this series we’ll look at why virtualization, while not without benefits, doesn’t really solve the problem, and in Part 3 we’ll offer our views on how the supplier community could address this challenge, if they are willing.