Articles Posted in Service Performance

Posted

Technology continues to infuse our homes, businesses, and places of employment. For example, the “Internet of Things” – as it is sometimes called – brings a lot of promise to a wide variety of industries and sectors, including farming, government, natural resources, and manufacturing. The list goes on.

Even though it often gets an (unwarranted) reputation for being slow to innovate, the real estate industry has joined the technological trend. Real estate developers, property managers, and construction firms are constantly on the lookout for new ways to incorporate the promises of new technology into the design, development, and maintenance of their projects and properties.

For example, automated parking garages have become an efficient way to maximize parking capacity in markets where space is at a premium. Some hotel chains are doing away with keys and permitting guests to access their rooms with smartphone apps. Homes and apartments are following suit. Construction firms are starting to gain FAA approval for drone use in connection with their projects. And finally, there is a smartphone app for just about every sector of the real estate industry.

Posted

Quantitative measures of supplier performance in the form of service levels are critical in any outsourcing relationship.   However, they provide an incomplete picture of how well the supplier is performing and meeting the client’s business and IT objectives.  A common complaint is that the service levels are green each month, but the client is dissatisfied with the supplier’s performance – typically due to the supplier failing in areas that are difficult to measure quantitatively.

To fill this gap, we recommend to our clients that a quarterly “key stakeholder satisfaction survey” be included in the outsourcing contract as a service level.  This service level is a subjective determination by the client of its level of satisfaction with the supplier’s performance.  A meaningful service level credit applies if the supplier fails to achieve an acceptable rating.
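To make this concrete, here is a minimal sketch of how such a survey-based service level might be calculated. The 1-to-5 scale, the 3.0 threshold and the 5% credit are purely illustrative assumptions, not terms we are suggesting for any particular contract.

```python
# Illustrative only: the 1-5 scale, 3.0 threshold and 5% credit are hypothetical,
# not terms from an actual outsourcing contract.

def satisfaction_service_level(scores, threshold=3.0,
                               monthly_charges=1_000_000, credit_pct=0.05):
    """Average the key stakeholders' quarterly ratings (1 = very dissatisfied,
    5 = very satisfied) and apply a credit if the average misses the threshold."""
    average = sum(scores) / len(scores)
    credit = monthly_charges * credit_pct if average < threshold else 0.0
    return average, credit

# Example: seven key stakeholders rate the supplier for the quarter.
avg, credit = satisfaction_service_level([3, 2, 4, 2, 3, 2, 3])
print(f"Average satisfaction: {avg:.2f}, service level credit: ${credit:,.0f}")
```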


Posted

We recently completed a major renegotiation of a very large, longstanding infrastructure outsourcing contract. As is typical with renegotiations, there were areas of the contract that required changes and areas the client wanted to leave alone. In this case, scope (and the presumed current solution) was to be left alone as the focus of concern was thought to be on other areas of the relationship. However, the need to update a seemingly simple exhibit – the Key Supplier Personnel list – told the client they had reason to be a lot more concerned about the supplier’s current solution.

Like most IT outsourcing contracts, this one had the typical provisions around Key Supplier Personnel (KSP) (e.g., full-time employees of the supplier, rules about replacing the KSP, commitments to tenure on the account, etc.). When asked to update the KSP exhibit, the supplier came back with three names – the Account Executive, Deputy Account Executive and the Business Manager (yep, the person in charge of billing the client). That was it. Not a single person with technical knowledge of the client’s critical systems or technologies. Nobody involved with actually running the client’s IT environment on a day-to-day basis.

Posted

In Part 1 of this blog post (Time to Mind Your Ps and Qs), we made the case that there is limited additional opportunity in continuing to pound on “P” in the P x Q = Total Price equation, and that to achieve the next breakthrough the supplier community has to address “Q”. In Part 2, we addressed why more virtualization is not the real answer. So where are the next big benefits going to come from, and who is willing to make the paradigm shift?

Continuing our example from Part 2, where our Buyer was looking for $125M in savings over a five-year term: if the virtualization dog won’t hunt (well enough), what dog might? Perhaps x86 hardware consolidation should be addressed in a different way in a sourced environment. What if, instead of using 15,000 virtual images, applications could be stacked, as they are on other platforms such as mainframes? While no application stacking effort would achieve 100% results, neither would virtualization. For simplicity in calculating the virtualization numbers we assumed 100% of the images could be virtualized, and we will do the same for the application-stacking alternative. In both cases, what can be achieved in actual implementations will be less.

Let’s assume that each of the 15,000 O/S images runs one application instance. Then let’s take those applications and stack them inside, say, three O/S images on each of 1,000 machines. We will still need the same amount of hardware and the same amount of virtualization software, which will cost $62.3M over the term, but then let’s stack the 15,000 application images in the resulting 3,000 O/S images. In that case our service fees would drop from $202.5M to $89.1M (15,000 x $225 per month for 18 months + 3,000 x $225 per month for 42 months), a projected savings of $113.4M over the term. That $113.4M is roughly 90% of the buyer’s savings goal of $125M.
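For readers who want to check the arithmetic, the calculation above can be reproduced directly. The only inputs are the figures stated in the example: a $225 monthly rate per O/S image, a 60-month term and an assumed 18-month transition before the image count drops from 15,000 to 3,000.

```python
# Reproduces the application-stacking arithmetic above using the example's own figures.
RATE = 225   # $ per O/S image per month (from the example)
TERM = 60    # months in the five-year term

baseline_fees = 15_000 * RATE * TERM                    # $202.5M with no consolidation
stacked_fees = 15_000 * RATE * 18 + 3_000 * RATE * 42   # $89.1M after stacking
savings = baseline_fees - stacked_fees                  # $113.4M over the term

print(f"Baseline fees: ${baseline_fees / 1e6:.1f}M")
print(f"Stacked fees:  ${stacked_fees / 1e6:.1f}M")
print(f"Savings:       ${savings / 1e6:.1f}M "
      f"({savings / 125_000_000:.1%} of the $125M goal)")
```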

Posted

In Part 1 of this blog post (Time To Mind Your Ps and Qs), we made the case that there is limited additional opportunity in continuing to pound on “P” in the P x Q = Total Price equation, and that to achieve the next breakthrough the supplier community has to address “Q”. The current standard answer from suppliers on reducing Q is “virtualization”, but that won’t solve the problem, at least not entirely. Here’s why.

Assume we have a buyer with significant IT Infrastructure labor costs — say $125M per year. The buyer decides to go to market despite having a pretty good idea that its unit costs are roughly at parity with current managed services market pricing. The buyer’s objectives include, in addition to qualitative and risk mitigation goals, lopping $20M to $25M p.a. off the labor costs to manage its infrastructure. A five-year labor-only deal in the $500M TCV range is certainly going to attract plenty of interest in today’s marketplace. The buyer has made a strategic decision not to source hardware and software ownership to the supplier so, if necessary, it can “fire the maid without selling the house.” Furthermore, the buyer has decided to signal to the suppliers that its unit costs are near where it believes the market should be and that winning this business is probably going to require a clever solution that addresses the Qs along with the Ps.
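A quick back-of-the-envelope check of those figures (assuming, for simplicity, that the targeted savings come straight off the supplier’s annual fees) shows where the roughly $500M TCV comes from:

```python
# Back-of-the-envelope check of the deal size described above (simplifying assumption:
# the targeted savings come straight off the supplier's annual fees).
current_labor_cost = 125_000_000   # $ per year
term_years = 5

for annual_savings in (20_000_000, 25_000_000):
    tcv = (current_labor_cost - annual_savings) * term_years
    print(f"${annual_savings / 1e6:.0f}M p.a. in savings -> TCV of about ${tcv / 1e6:.0f}M")
```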

So, let’s first look at this from the supplier’s perspective. If you are the clever solution developer at a major supplier, you see a way out of this conundrum. You’ll propose a virtualization initiative for the buyer’s vast portfolio of x86 servers! And, since x86 services are typically priced by O/S image, you will still get the same amount of revenue regardless of the degree of virtualization: 15,000 images on 15,000 machines or 15,000 images on 1,000 servers — all the same to you, right? However, since this is a labor-only deal and you will be reducing the quantities of something that isn’t in your scope, you have to change the way the buyer calculates benefits to include all the ancillary items it won’t buy from you anyway (i.e., floor space, energy, racks and, except for a couple of suppliers, the machines themselves). Starting right in the executive summary, you will tell the buyer to think strategically, not tactically. That is, think about TCO, not just about this isolated deal, when calculating benefits. You are still going to have to employ a lot of “weasel words” to deal with how virtualization will occur (and how fast) — but at least there’s a story to tell.

Posted

Traditionally, the mechanism for creating value in an IT Infrastructure sourcing has been to push down hard, real hard, on price — the “P” lever. The notion is that a sourcing will result in a lower unit cost for the labor needed to manage a device or deliver a service. The objective is to create as big a difference as possible between the buyer’s current unit cost and the supplier’s proposed unit price. The reason for that is obvious: P x Q = Total Price.

To create value for the buyer by reducing the total price, either P or Q has to change. Historically, P is what changes, because the buyer expects to have at least the same, if not a higher, quantity of things (devices, applications, project hours, etc.) as it has today. As it has been for the last two decades, this remains the strategy behind most, if not all, IT Infrastructure managed services arrangements. Suppliers’ value propositions are predicated on lower unit costs, achieved partly through lower labor costs and partly through improved productivity.
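The lever arithmetic is trivial but worth making explicit. The unit prices and quantities below are invented purely to show that a 15% cut in P and a 15% cut in Q produce the same total; the point of this series is that the industry has nearly exhausted the first lever.

```python
# Hypothetical numbers, invented to illustrate the P x Q levers (not from any real deal).
P, Q = 1_000, 10_000          # unit price ($ per device per year) and device count
total_today = P * Q           # $10.0M

p_lever = (P * 0.85) * Q      # cut the unit price 15%, hold quantity -> $8.5M
q_lever = P * (Q * 0.85)      # hold the unit price, cut quantity 15% -> $8.5M

print(total_today, p_lever, q_lever)
```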

Yet, over the last several years the conventional alchemy has become less likely to create the benefit buyers are seeking. We are seeing a number of buyers whose unit costs are at or below the unit prices offered by suppliers. While it is hard to pin down exactly why buyers’ costs have declined, it is pretty clear that investments in technology and productivity, or lower salaries, are not the drivers. Generally, it appears to be the result of the weak economy and the constant pressure on IT budgets. IT Infrastructure organizations have been forced to reduce staff and leave open positions unfilled while the quantity of devices, storage and services has stayed the same or increased — reducing the unit cost. Service delivery risk has likely increased, but we have yet to see a buyer quantify or price the added risk.
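To see how unit costs can drift below market without any real productivity gain, consider a toy example; the staffing costs and device counts are made up solely to show the mechanism.

```python
# Made-up numbers showing how flat budgets plus a growing estate push unit cost down.
cost_before, devices_before = 125_000_000, 10_000   # $ per year, devices
cost_after, devices_after = 115_000_000, 11_500     # staff cut, roles unfilled, estate grew

print(f"Unit cost before: ${cost_before / devices_before:,.0f} per device")
print(f"Unit cost after:  ${cost_after / devices_after:,.0f} per device")
```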

Posted

Industry research firm Horses for Sources reported recently that 49% of the companies it surveyed were planning to outsource call center services for the first time, or expand the scope of their existing call center outsourcing, over the next year. With call center outsourcing on the rise, we wanted to share a few of the lessons Pillsbury has learned from negotiating these deals over the past 20+ years.

Baseline Data is Critical to Effective Pricing. Make sure you provide potential suppliers with detailed, accurate historical and projected workload volumes; a simple sketch of one way to organize such a baseline follows the list. The data should include:

  • Number of contacts broken down by type (call, email, web chat, fax, white mail)
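Purely as an illustration, one simple way to organize that kind of baseline is a record per period and contact type; the field names and volumes below are hypothetical, not drawn from any client’s data.

```python
# Hypothetical baseline layout; field names and volumes are illustrative only.
from dataclasses import dataclass

@dataclass
class ContactVolume:
    period: str         # e.g., "2014-06" for historical, "2015-06" for projected
    contact_type: str   # call, email, web chat, fax, white mail
    volume: int         # number of contacts

baseline = [
    ContactVolume("2014-06", "call", 120_000),
    ContactVolume("2014-06", "email", 45_000),
    ContactVolume("2014-06", "web chat", 20_000),
]

print(f"Total contacts in {baseline[0].period}: {sum(c.volume for c in baseline):,}")
```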

Posted

There is an inherent “right brain / left brain” tension in procuring outsourced services. The right side of the brain seeks innovative service delivery solutions and emphasizes relationship building with the supplier. The left brain seeks a high level of supplier accountability for performance, competitive pricing and favorable contractual terms.

The two sides of the brain are fundamentally different in nature.

  • The right brain is collaborative and focused on solution and relationship building, i.e., aligning customer and supplier interests.

Posted

Like most everything in life, the making of an outsourcing transaction is a process of taking amorphous ideas and concepts (fuzziness) to a point where there is sufficient clarity for all involved to move forward in a coordinated and desired manner (crispness).

While true for all transactional components of an outsourcing, it couldn’t be more so for what’s at its heart – the services. There are plenty of analogies that could be used, but the consultant in me feels more comfortable with an inverted triangle.

[Figure: inverted triangle] Just recognize that time progresses as one moves from top to bottom, and the mechanism is nearly complete. The question becomes: where do you start, and where does it end?


Posted

In part one of this post, we examined the challenge of discussing IT demand in terms meaningful to our internal customers. That accomplished, the CIO’s organization must next fulfill that demand by acquiring, integrating and delivering the appropriate service(s), whether sourced internally or from the marketplace. Imagine for a moment the perfect world, one in which we would be able to order the supply-side components of an IT solution where each provider would stand behind the complete realization of an intended outcome (for example, a provider of midrange server operations would put its fees at risk if the total IT solution didn’t, say, increase inventory turns). Back on the ground here in Kansas, however, we recognize that no party providing much less than a total solution (business process and underlying capabilities such as people and technology) would be willing to sign up for a business result. Furthermore, the current trend towards multi-sourcing puts such a total solution (and the business outcome coverage) even farther out of reach. And if the provider of one solution component would commit to the entire business outcome, would we really have faith in that guarantee anyway?

So on the supply side, we tend to be left with “traditional” service level agreements (SLAs), measuring the elements of IT performance. Now, that’s not all bad. If we understand (and the provider can perform to) such SLAs, we should in theory be able to architect a solution based on the sum of those individual components. But theory seems to be failing miserably… so why don’t SLAs work as well as they should?

While there can be many shortcomings to SLAs, some are not so obvious. Most SLAs (take server availability, for example) adequately describe the level of quality we require for that part of the solution and are also useful in measuring the performance of the provider. But what if the production problem was a failure of redundant load balancers (yes, this really happened)? Oops – we didn’t think to make that an SLA (hey, it was redundant!) – and the service provider gets off scot-free while the customer is angry and frustrated. Or how about the ability of the service provider to onboard the right project resources (e.g., skills, seniority) in the right timeframe? Believe it or not, I know of a case where the timeframe for getting a particular resource is over a year … and counting! Did the customer have that SLA in the contract? No, but access to specialized skills is a realistic expectation of a Tier 1 service provider and a factor that materially contributes to successfully meeting demand. The point is that we can’t measure (and don’t want to try to measure) all aspects of a provider’s performance that may possibly impact the IT service (and hence, the business outcome). What we want from our providers is just a good, reliable service – what we signed up for. So what can we do to get the results we expect, when the dimensions of performance are as often qualitative as they are quantitative? Some different thinking is in order!
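For contrast with the qualitative gaps described above, here is what the quantitative side typically looks like: a minimal availability-style SLA calculation. The 99.9% target, monthly charge and 5% credit are assumptions for illustration, not terms from any actual agreement.

```python
# Minimal availability SLA sketch; the 99.9% target, charge and 5% credit are
# illustrative assumptions, not terms from an actual agreement.

def availability_sla(downtime_minutes, minutes_in_month=43_200,
                     target=0.999, monthly_charge=500_000, credit_pct=0.05):
    achieved = 1 - downtime_minutes / minutes_in_month
    credit = monthly_charge * credit_pct if achieved < target else 0.0
    return achieved, credit

# 90 minutes of downtime in a 30-day month misses a 99.9% target (about 43 minutes allowed).
achieved, credit = availability_sla(90)
print(f"Availability: {achieved:.3%}, credit: ${credit:,.0f}")
```

The load-balancer failure in the anecdote above would never show up in a metric like this, which is exactly the point.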
