The governance of service delivery and service performance has never been more important than it is today. We have demanding customers and business units, less money and fewer staff, and extremely complex delivery models, many of which may be outsourced, multi-sourced or shared.
ITIL as a discipline and best practice is a great guide for organisations on how best to support and deliver services that are aligned with business needs. The success of ITIL, however, is measured by the performance (SLAs/KPIs) of those services (stats, customer surveys and feedback, proxy measures, etc.) and the effectiveness of the Service Improvement and Business Change Programs that run alongside ITIL services.
So given this, and given the many sources of information that underpin ITIL SLAs/KPIs/Projects/SIPs/etc. (e.g. transaction systems, Excel spreadsheets, project plans, Word documents, information in people’s heads), two key questions remain.
“How do you connect all this disconnected information on a single page (or iPad), to understand how effectively ITIL is underpinning service performance in your organisation?”
“How do you decide which aspects of the service require your focus and attention at the next governance meeting?”
For years, IS Management have sought the Holy Grail of a well-structured, top-down management framework for IS, but this has failed to materialise. Decisions on operational performance, infrastructure change management, third-party outsourced delivery and numerous projects require a lot of information to be crunched and brought together for governance meetings.
ITIL/TOGAF/Prince2 have been around for quite some time and are big business. Consultancies sell time delivering these frameworks and solutions. Often they design beautiful paper-based processes with wonderful documentation that unfortunately never get adopted. Software providers deliver solutions that support different bits of it. Some of them deliver solutions that reportedly support all of it.
The fact is that ITIL/TOGAF/Prince2 are great frameworks of best practice and guidelines, but rarely, if ever, are they supported by one toolset or process within any organisation.
Typically, information on the performance of ITIL, Governance, IS Service delivery (SLAs and KPIs) or IS Projects, as well as the risks, issues, decisions taken, commitments made, SIPs on-going and outstanding actions, resides in (and in some cases is locked in) a variety of systems. MS Word documents, MS Excel spreadsheets, MS Project plans, CRM systems, Service Management systems and financial management systems are all places we keep this locked, disconnected information.
Furthermore, the information of most value (e.g. the context behind a performance indicator, the progress on an issue, the status of a milestone in a project) may be locked inside someone’s head.
Combined, this mix of data (both the facts and the context/opinions) is the critical data that we need to aggregate. When we do, we get to see whether the things we are doing, and the decisions we are taking, are having the desired effect on the organisation’s performance, customer satisfaction, or whatever the key outcomes are that define success.
Current Solution Options
Solutions like Digital Fuel’s Service Flow, with reporting platforms like Cognos/Business Objects/SAS and enterprise project management platforms like Planview/Primavera/MS Project Server, are a possibility, but the cost of buying and integrating these platforms is often prohibitive.
Additionally, this approach is not quick. Infrastructure needs to be purchased and implemented, requirements need to be understood, integration points need to be analysed, and software needs to be installed, all before the configuration and training even start.
Increasingly, organisations are looking to avoid complex environments with three or four tools welded together to get the answer, and they don’t want to wait 6-12 months before they start to see results. They have a governance challenge and they just want it fixed quickly.
So this cost and complexity of building and integrating multiple platforms has led the industry to a much “easier”, but more manual, way of doing things. Excel- and PowerPoint-based governance reporting dominates business reporting, including ITIL and IT Service Management.
To be clear, Microsoft’s tools are the most widely used on the planet for data analysis and performance reporting.
With Microsoft Excel we:
- Extract the performance from all our key systems
- Build graphs of the performance of each of the metrics
- Manage our Risks, Issues, Assumptions and Dependencies
- Track the actions we agreed at previous meetings
- And manage things like change requests, financial performance, stakeholder engagements, communication plans, etc.
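To make concrete the extraction and graphing step this list describes, here is a minimal scripted sketch of the same idea (Python with pandas; the file name incidents.csv, its columns, the 4-hour P1 target and the 95% threshold are all hypothetical assumptions for illustration, not any real export format):

```python
import pandas as pd

# Hypothetical CSV export from a service management tool.
# Assumed columns: ticket_id, opened, resolved, priority
incidents = pd.read_csv("incidents.csv", parse_dates=["opened", "resolved"])

# Resolution time per ticket, in hours.
incidents["hours_to_resolve"] = (
    incidents["resolved"] - incidents["opened"]
).dt.total_seconds() / 3600

# Illustrative SLA: P1 incidents resolved within 4 hours.
p1 = incidents[incidents["priority"] == "P1"]
sla_met = (p1["hours_to_resolve"] <= 4).mean() * 100

# Simple RAG indicator against an assumed 95% target.
status = "GREEN" if sla_met >= 95 else "AMBER" if sla_met >= 90 else "RED"
print(f"P1 resolution within SLA: {sla_met:.1f}% ({status})")
```

The point is not the script itself, but that once the extraction step is scripted, the same figures can be reproduced and audited, which the copy-and-paste route cannot offer.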
With Microsoft PowerPoint or Microsoft Word we:
- Build visually appealing reports for our stakeholders
- Import all our summary data and graphs for each area or service line
- Add commentary regarding the performance
- Update with the latest information on risks, issues, assumptions, dependencies
We do all of the above to avoid the need for big systems, to bypass them if they are already in place and, most importantly, to meet our commitments on governance and performance reporting to our stakeholders. What we have created is a cottage industry in governance reporting. Moreover, every document is usually stored in some sort of document repository, in the belief that this makes it shared and dynamic. But does it really?
What would the “almost” perfect solution look like?
If we had a magic wand and asked “what would the solution look like?”, I would suggest it could be described as follows:
- It will be able to understand corporate outcomes and model the direct linkage of IS delivery and projects to those corporate outcomes.
- It will have ITIL/TOGAF/Prince2 principles preconfigured inside the system, with limited need to build these.
- It will be able to aggregate a lot of very detailed data from service tools like HP OpenView, BMC Patrol, Remedy, SupportWorks, Assyst, Microsoft, Primavera, etc., and connect that with commentary from people.
- It will be able to present the same information rolled up to appropriate levels of summary.
- It will require no infrastructure, and can be switched on tomorrow and turned off when no longer required.
- It will be set up within a matter of weeks and used by the people within the organisation with minimal training.
- It will be able to be deployed to as many users as needed without increases in cost per user.
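As a thought experiment only, the sketch below (Python; every class, field and threshold is a hypothetical assumption, not a description of any real product) shows the kind of data model this wish list implies: each governance point carries both the measured fact and the human commentary behind it, and points roll up into a service-line summary with a visual RAG indicator.

```python
from dataclasses import dataclass, field

@dataclass
class GovernancePoint:
    """One line on the governance page: a fact plus its context."""
    name: str
    value: float          # the measured fact (e.g. % within SLA)
    target: float         # the agreed target
    commentary: str = ""  # the context usually locked in someone's head

    @property
    def rag(self) -> str:
        if self.value >= self.target:
            return "GREEN"
        # Assumed tolerance: within 5% of target counts as AMBER.
        return "AMBER" if self.value >= self.target * 0.95 else "RED"

@dataclass
class ServiceLine:
    """Rolls individual governance points up to a one-line summary."""
    name: str
    points: list[GovernancePoint] = field(default_factory=list)

    @property
    def rag(self) -> str:
        statuses = {p.rag for p in self.points}
        if "RED" in statuses:
            return "RED"
        return "AMBER" if "AMBER" in statuses else "GREEN"

desk = ServiceLine("Service Desk", [
    GovernancePoint("P1 resolution", 92.0, 95.0,
                    "Two breaches caused by the network change on the 14th"),
])
print(desk.name, desk.rag)  # Service Desk AMBER
```

The design choice worth noting is that the commentary is a first-class field next to the number, so the context travels with the fact rather than living in a separate slide deck.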
So, back to our questions:
“How do you connect all this disconnected information on a single page (or iPad), to understand how effectively ITIL is underpinning service performance in your organisation?”
As I see it, you have three choices. You either:
a) accept the cost, complexity and long lead times that go with coupling a number of large solutions together to deliver this level of connectivity, or
b) retain or hire a load of people to do this manually, and accept the costs, risks, manual errors and lack of confidence that go with manual ITIL reporting, as the reports are not directly connected to the underlying data and audit trails of progress, or
c) find something that looks similar to the “almost” perfect solution I describe above, that will take what you have already, enrich it with context and perspective, and deliver it on single-page views (and iPad views) per governance point.
“How do you decide which aspects of the service, or current project portfolio, require your focus and attention at the next governance meeting?”
This remains in the realm of good management, but good managers are made great managers when they have the information they need, can consume it quickly, have visual indicators on where to focus their attention, and can get to the detail and audit trail on any item with minimal effort.
When provided with this, great managers make quicker decisions and are consequently much more likely to achieve the goals they are responsible for delivering.
The undisputed facts about outsourcing, Part 1: Buyers are saving money, but aren’t seeing a whole lot more
I have just read the latest study from Horses for Sources, conducted by the London School of Economics Outsourcing Unit, on the State of Outsourcing in 2011.
While it is clear that cost-savings targets are being met, the concern is equally clear: that’s about all that is being achieved.
I am unsure how many UK organisations participated in this, but regardless of that, everything we see validates the results. Even though organisations say they are chasing capability lift and innovation, the fact remains that cost is the driver, and the provider chases (especially in first-generation outsourcing) the delivery of those cost commitments without the loss of its profit. It has to. That’s where the focus is, and the critical measures and targets align with that.
So when does the organisation realise this is going the wrong way?
Many would say they told you right from the start it was going wrong, but I guess it happens not long after the organisation realises that 1) the service hasn’t changed much in the last 2-3 years, 2) the processes are just as manual in places as they were before, and 3) the services are not supporting the business objectives or, worse, are preventing the business achieving its objectives.
This is when they really know it has gone wrong. However, perhaps it has gone right – just right against the wrong objectives. I spoke to the head of a worldwide shared services centre recently who had outsourced the service delivery element to a global provider, and he told me: “I didn’t have a mandate to do it well or clever, I had a mandate to do it cheap, and we delivered on that. Now it’s time to start thinking about clever.” Well, at least he didn’t think he was doing something else, or that the provider was to deliver anything else, which is better than most.
It seems to me that the changeover from first-generation outsourcing to second-generation outsourcing is the critical point: the point at which the business realises it has done it cheaper, and now it needs to do it better. I suspect many of the candidates interviewed were in this position. For the organisation to now do better and do clever, it needs to refocus the incumbent or select a new provider, but either way charge them with re-engineering the processes and making a step change in the value being delivered back to the business.
But, as my friend Deborah Kops might say, that’s when the fun really starts. Is the organisation prepared for, and capable of absorbing, the cultural changes this mandate will require?
Catch more from Deborah on this subject at www.sourcingchange.com.
If you haven’t heard of Horses for Sources yet, then this latest study should accelerate your interest in them. A great organisation, well regarded by many. See the link below for the full article.
As the famous saying goes, “Prediction is very difficult, especially about the future”. However, it seems obvious that the era of the PC as the principal engine of growth for the IT industry is drawing to a close. In the early twentieth century there was a shift from people owning one general-purpose electric motor with various specialised attachments to people owning many motors, each embedded in a single-use device. A similar thing seems to be occurring now. Computers are becoming both less obviously computers and also more ubiquitous, location-aware and permanently connected.

I recently replaced my wife’s ailing MacBook with an iPad. For her, as for many others, it’s probably all the computer she’ll ever really need. Ironically, she doesn’t even regard it as being a computer. She’s scared of computers. Mostly she’s scared that if she does something “wrong” she’ll “break” it. Not so with the iPad.

As the PC is supplanted by the Really Personal Computer (e.g. tablet, app-phone), much of the burden of computation and storage is shifting from the user’s own device into the cloud. And what a cloud. Incredibly rich and diverse services are available for free (well, seemingly free). Many enable their users to maintain and develop networks of friends, relatives and business contacts. Others enable their users to manage their day-to-day lives more easily, listen to virtually any music ever recorded or simply share their family photos and videos.
The world of cloud services is rich and user-focussed. The world of corporate IT is the polar opposite. Most IT departments spend their time patching old systems to coax them into continuing to function, or bending them violently in unnatural ways to meet new business requirements far distant from those intended by the systems’ original developers. The vast majority of code written by developers isn’t anything to do with business process. Instead, developers spend most of their time worrying about the infrastructural context in which their code will run. If the highest abstraction always wins, then why are we still engaged in hand-to-hand combat instead of undertaking surgical strikes with the IT equivalent of laser-guided bombs? Users (rightly) expect better. After all, at home, they can get exactly what they want. At the office – not so much.
Corporate IT is broken and the users have noticed. There must be something we can do about this. The ongoing purpose of my entries to this blog is to explore the architectural shifts occurring in the IT industry and try to suggest ways in which these changes can be exploited to help mend corporate IT.
Today Microsoft finally announced the availability of its Azure platform, and the race is officially on between the big boys over who will capture the lion’s share of “cloudspace”.
Microsoft have been a long time getting to market with this, and many commentators suggested it might be too late; however, this is clearly not the case. They have been trialling Azure in the US for nearly a year now and are expressing impressive confidence in the platform, even for a V1 release.
I guess it now remains to be seen what will happen and who the corporates will gravitate towards. The Holy Grail will be who can convince the corporates to load mission-critical line-of-business apps on their platform.