Saturday, May 19, 2007

Ten Essential Questions for Project Managers

As a project manager with almost twenty years’ experience under my belt, I have, like most people in this profession, seen some spectacular stuff-ups. In fact, as project managers, our job often seems to be to play the starring role in these never-ending nightmares. Yet most of us know that, while a good project manager can pull some impending disasters out of the fire, many projects are simply set up to fail, and there is nothing even the best project manager can do except ensure that the customer’s money doesn’t haemorrhage too badly.

Perhaps it is because I spent some time as a freelance project manager, and so became more likely to be asked to run projects that I had no hand in establishing, that I notice this more now than I used to. But it is a problem I have been aware of since I was handed a doomed project by my line manager in the late 1980s; an experience like riding a tiger, from beginning to bitter end. Even when I was freelance and able to pick my own jobs, I walked right into another one, and I’m still feeling the trauma long after it was finally canned.

So I sat down and asked myself what was going wrong. The symptoms are always the same. The project runs at a madcap pace. The risk and issues registers start long and get longer. Almost every risk that can eventuate does so. The project seems to have no luck. The team works hard, then harder, then becomes sullen and weary. Milestones start to slip and remedial actions don’t seem to work. Effort estimates are ludicrously short of actual effort. You start shedding scope and crashing the schedule but the scope never seems to get smaller. Eventually, you throw in the towel and the customer starts throwing around the blame.

My analysis of a few projects like this suggests it all comes down to one thing: a failure at the very start of the project to estimate it correctly. It all sounds so simple, put like that, but an initial underestimate is a pernicious, systemic problem that surfaces in many different ways, sometimes quite late in the project. An initial underestimate leads to the following inevitable sequence:

1. The overall project plan, based on an initial underestimate, provides inadequate time for each of the major stages of the project: analysis, design, build, integrate, test and deploy. Worse, hard end-dates might be agreed, on which there are external dependencies.

2. Inadequate time for analysis means that the project will discover “hidden scope” during later stages. Some of this hidden scope will not emerge until the build or integration stages.
Hidden scope means that, during the early, or even the middle parts of the project, it can still look as if the project is on track, or, if not quite on track, that it will still complete within the contingency.

3. Inadequate time for design means that there is likely to be re-work as problems crop up during the build, or, more likely, during integration.

4. Less obviously, but just as perniciously, the technical and business analysts on the team, who were unable to grasp the size of the project initially, will continue to compound this error by failing to grasp the size of the effort to completion as the project proceeds. It can take a long time before the project manager is aware of the scale of the underestimation because the senior team members continue to compound their initial error.

5. The result is that the project looks possible until there is a sudden blow-out at the end caused by newly-revealed complexities and major re-work.

6. The project sponsor and the steering committee, who have also been guilty of believing the project is smaller than it really is, are surprised and disappointed when the project manager announces, late in the day, that the project will not meet its deadline after all.
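The compounding effect of points 2 to 5 can be illustrated with a toy model. Everything here is an illustrative assumption of my own (the discovery rate, the figures, the function name); it is a sketch of the dynamic, not a forecasting tool.

```python
# A toy model of how an initial underestimate stays hidden until late in a
# project. All figures are illustrative assumptions, not data from real projects.

def remaining_work(true_size, estimated_size, periods, discovery_rate=0.25):
    """Track the work the team *believes* remains, period by period.

    The team starts believing the project is `estimated_size` units of work
    and plans to burn it down evenly over `periods` reporting periods. Each
    period it completes the planned amount, but also discovers a fraction
    (`discovery_rate`) of the still-hidden scope, which becomes visible work.
    """
    hidden = true_size - estimated_size      # scope nobody knows about yet
    known_remaining = estimated_size
    velocity = estimated_size / periods      # planned rate of progress
    believed = []
    for _ in range(periods):
        discovered = hidden * discovery_rate
        hidden -= discovered
        known_remaining += discovered        # hidden scope surfaces as new work
        known_remaining -= min(velocity, known_remaining)
        believed.append(round(known_remaining, 1))
    return believed

# A project truly 150 units big, estimated at 100, planned over 10 periods:
trajectory = remaining_work(150, 100, 10)
```

With these figures, the work the team believes remains barely falls for the first few periods (it actually rises at first), and the planned end arrives with almost half the original estimate still outstanding: the sudden blow-out at the end of point 5.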

The worst thing about such projects is that standard project management techniques will fail to spot the problem for a very long time. Milestones will be met in the early stages. Initiation, planning and a lot of the project execution can appear to run smoothly before the analysis and design problems begin to surface. Even when major problems have emerged, replanning based on new estimates to completion will fail to save the day because the project team is working with a fundamentally flawed conception of the true size and complexity of the project.

Some approaches, such as rapid application development (RAD), which might be expected to help where there is inherent uncertainty about the scope, can even make matters worse. A RAD approach will delay the detailed analysis and design of what seem to be low-priority parts of the scope. However, when the overall analysis is weak because the effort devoted to it was underestimated, it is quite possible that some of these ‘low priority’ areas contain hidden complexities and hidden dependencies, which means that tackling them near the end of the project leaves no time to resolve the problems they throw up.

It is a common syndrome with a common cause, but what is at the root of such a serious estimating error? If we know that, we can perhaps answer the question: “How does a project manager detect such a project before agreeing to take it on?” Looking across a variety of troubled projects in several different industries, the root cause seems to be that the people who set the budget and the timescale for the project did so from a position of ignorance. Usually:
  • they (and their advisors) have never been involved with a project like this before,
  • they (and their advisors) are attempting to force suppliers to deliver to an “aggressive” budget as a cost-saving measure, and
  • the culture of the organisation is creating a strong pressure on them to hear only what they want to hear about the feasibility of their time and budget constraints.

As a result of this analysis, I have developed a short questionnaire which I now use to screen out problem projects. I have restricted myself to ten questions so that they are few enough to be asked quite naturally in an interview with prospective customers. The ten questions are listed below, with what I believe would be acceptable answers.

Q1: Who set the delivery date and on what basis was it chosen?

A1: Ideally, the delivery date should be based on a realistic estimate of the time required to deliver the project, together with an appropriate time contingency. If it was chosen to meet some other external event (the end of a financial year, the start of a new business process that requires the output of this project) or was chosen arbitrarily, or was based on “what seemed reasonable” to the sponsor, the red flag would go up. All is not lost, however, as long as the customer is willing to be flexible about the end date. The answers to questions Q5, Q6 and Q10 now become crucial.

Q2: How was the project scope determined and by whom?

A2: The best answer would be that a thorough analysis of the requirements has been undertaken, but this is rarely the case. The best that can normally be hoped for is that the scope was set by the business, working closely with IT specialists, and that it has been clearly and unambiguously documented. If any of these elements is missing (business ownership, IT involvement, a clear scope statement), good answers to questions Q6, Q7 and Q10 become very important.

Q3: How was the project budget determined?

A3: The budget should have been determined based on the needs of the project after the scope, duration and size of the project have been fully understood and an adequate time and effort contingency added. If the budget was set on the basis of a rough estimate, or very little analysis, alarm bells should start ringing. Worse still is the situation where the budget was set before the scope and the project has been sized to fit. In this case, it is very important to have satisfactory answers to questions Q1, Q2 and Q4 and to know that the people involved in the estimating all worked to the fixed budget constraint. Look also for a good response to Q7 and, to be safe, Q10.

Q4: How was the staffing of the project determined?

A4: Don’t look just for numbers here but also for the right skills and experience. If the answers to Q8 and Q9 are “no”, then the chances are good that there could be problems in this area. If the team size and composition were based on estimates that don’t seem sound, then you need a positive response to Q10.

Q5: What would happen if the project failed to deliver on time?

A5: Customers don’t like to be asked this question. Either they feel as if you are criticising their ability to deliver, or that you are declaring a lack of confidence in your own capabilities. However, it is an issue that must be faced. If there is no willingness to bend on the end-date, or to accept that the initial estimate could be wrong, there may be a problem. There is almost certainly going to be a problem if the answer to question Q1 was not satisfactory. However, if the scope is flexible, a rigid end-date may not be an insurmountable problem. Look out for a good response to question Q6.

Q6: What would happen if the project failed to deliver the full scope?

A6: This can be a very telling question. For a start it will give you an idea of how well the scope is understood. Generally, the more vague the customer is, the more worried you should be. A good response is that all the elements of the scope have been prioritised so that decisions about trade-offs can be made. A bad answer would be that all of the scope is absolutely essential.

Q7: Has a technical lead been appointed and has this person reviewed the feasibility of the project?

A7: I threw this question in as a check on the customer’s commitment to and ownership of the estimate. Sometimes, a customer will rely on third-party estimates (in a tender response, say, or from a consultancy employed to establish the project), and such customers have a tendency to point the finger at these third parties if the estimates turn out to be wrong. I am always more comfortable with a customer who tells me that they did the estimate themselves, or that they validated the estimate themselves, and that they believe in it and will stand by it. I’m even more comfortable if the technical lead who will be responsible for the delivery has validated the estimates him- or herself.

Q8: Has this organisation ever undertaken a similar project before?

A8: If they haven’t, what gives them any confidence in their estimates? If they have, is this estimate in line with their previous experience? More often than not, I am asked to work on projects that are in some ways new to every single stakeholder, including myself. Some customers don’t appreciate that the basis of any estimate is experience — preferably lots of it.

Q9: Has the main supplier ever undertaken a similar project in this industry before?

A9: Very often, suppliers will add significantly to the problem when a project has been underestimated. The very nature of competitive bidding encourages suppliers to take risks. I am looking for two things with this question: firstly, that any part of the estimate based on a supplier’s estimate can be trusted and, secondly, that the customer appreciates the risk of placing trust in an inexperienced supplier, however attractive the low bid is! The necessity of the supplier having experience in the customer’s industry was brought home to me on a project a few years ago where the supplier had extensive worldwide experience in other industries but badly underestimated the effort required to do the same kind of project in a new sector whose complexities they had not experienced before.

Q10: Is there an opportunity to review the project’s foundations before proceeding?

A10: The answer to look for here is, of course, “yes”. And if the customer says “yes” and you have had less than perfect answers to the other nine questions, then take them up on it. Challenge all the assumptions and validate those estimates. Even if you are happy with the customer’s responses, a review consisting only of examining the existing documentation and talking to the key stakeholders is a prudent thing to do, for your own benefit and for the customer’s.
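For what it’s worth, the screening can be made repeatable as a simple scorecard. The scoring scale, the threshold and the special treatment of Q10 below are my own illustrative assumptions; the judgement behind each score still has to come from the interview itself.

```python
# A minimal scoring sheet for the ten questions. The 0-2 scale and the
# threshold are illustrative assumptions, not part of the questionnaire.

QUESTIONS = {
    "Q1": "Who set the delivery date and on what basis was it chosen?",
    "Q2": "How was the project scope determined and by whom?",
    "Q3": "How was the project budget determined?",
    "Q4": "How was the staffing of the project determined?",
    "Q5": "What would happen if the project failed to deliver on time?",
    "Q6": "What would happen if the project failed to deliver the full scope?",
    "Q7": "Has a technical lead reviewed the feasibility of the project?",
    "Q8": "Has this organisation undertaken a similar project before?",
    "Q9": "Has the main supplier undertaken a similar project in this industry?",
    "Q10": "Is there an opportunity to review the project's foundations?",
}

def screen(scores, threshold=14):
    """Score each answer 0 (alarming), 1 (mixed) or 2 (satisfactory).

    Returns (total, acceptable). Any answer scored 0 is a red flag, and a
    less-than-perfect Q10 is always a red flag, because it removes the
    chance to correct the other weaknesses before committing.
    """
    total = sum(scores[q] for q in QUESTIONS)
    red_flags = [q for q in QUESTIONS if scores[q] == 0]
    if scores["Q10"] < 2:
        red_flags.append("Q10: no chance to revisit the foundations")
    return total, total >= threshold and not red_flags

# Example: a mostly healthy project with an arbitrary end-date (Q1) and a
# roughly estimated budget (Q3):
answers = {q: 2 for q in QUESTIONS}
answers.update({"Q1": 0, "Q3": 1})
total, acceptable = screen(answers)
```

Note the design choice: a high total cannot buy back a zero on any single question, which mirrors the discussion above, where one bad answer (an arbitrary delivery date, say) is enough to demand compensating answers elsewhere.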

As project managers, we take on a lot of responsibility for the delivery of projects. When things go wrong we inevitably take the majority of the blame. The set of questions presented above is intended as a tool to help us shed some light on problem projects so we don’t go stumbling blindly into them. Although I’ve described this process as a self-defensive manoeuvre, it should actually be of huge benefit to our customers too. No-one sets out deliberately to run a bad project and a few, well-targeted questions before too many resources have been committed could save our customers, as well as ourselves, from making some terrible mistakes.

Saturday, January 13, 2007

A Theory of Human-Computer Interaction

The Trouble With HCI

Human-Computer Interaction (HCI) as a discipline is drifting. It has no long-term objectives and no strategic direction. This would be forgivable if, like other disciplines, it had methods which were widely accepted as being likely to produce increases in understanding. But HCI has no such methods. It is not possible to say that HCI is a science, it is barely possible to describe it as an engineering discipline and yet we, as practitioners, are reluctant to relegate it to the status of a craft.

Part of the difficulty appears to be that it is hard to identify the “core” subject-matter of HCI. This is often acclaimed by practitioners as a benefit because it leads to the need for a multi-disciplinary approach which is considered to be a good thing. Yet without an identifiable subject-matter, it is difficult to identify what are the important issues in HCI, it is hard to say what an appropriate method for studying or practicing HCI would be and it is impossible to develop a theory of HCI.

This last is of peculiar importance. If we could produce a theory of HCI (whatever that might be), it would at the same time tell us what HCI is (as defined by the theory) and what its scope is. If we could say what the core subject-matter of HCI was, we would, at the same time, be proposing something like a theory of HCI. The problem of finding a theory of HCI is clearly what Rittel and Webber called a “wicked problem”. That is, finding a way to understand the problem is tantamount to finding a way to solve it. The way to tackle the problem then is to break the vicious circle by creating a theory of HCI. This would then seed a process of elaboration and refutation which would eventually end in there being a theory (or set of theories) of HCI which were acceptable to practitioners in the field—almost regardless of the quality of the original.

What Would a Theory of HCI Look Like?

The definition of a theory used here is a little idiosyncratic but it is one that should seem familiar to HCI people. I would like to propose that a theory is the set of rules that defines the behaviour of a system of conceptual objects and relationships. The set of conceptual objects and relationships itself, I will call a conceptualisation and we can imagine that there could be an arbitrary number of theories for any particular conceptualisation as well as there being an arbitrary number of conceptualisations for any field.

A Conceptualisation

A conceptualisation belongs to a field just as a theory does. It describes in detail the objects that are important to the field and the relationships between them. In the field of HCI we could imagine a conceptualisation involving objects such as “user”, “computer”, “input device”, “data” and so on, and relationships between them such as “display”, “use”, “enter”, etc. Yet there is an arbitrary number of other conceptualisations for the same field. If one were to read the HCI literature and attempt to extract the conceptualisation used by the author of each separate paper, the result would be a large set of conceptualisations, varying in completeness, vagueness, overlap and coherence.

To some extent, it will be possible to say that some conceptualisations are “better” than others and we can imagine dimensions along which to judge them such as completeness of coverage of the field, the degree of discrimination or granularity of the concepts and the extent to which practitioners in the field will agree with the conceptualisation. However, it is also true that some conceptualisations may be better than others for different purposes and that many may be equally good for the same purpose. Nevertheless, I feel that the lack of a clearly articulated common conceptualisation for HCI is a significant problem for the field. It hinders communication within and outside the community of practitioners and it effectively blocks the development of theory.

A Theory

A theory is the set of rules that governs the behaviour of the system described by the conceptualisation. In Newtonian mechanics, for instance, the three laws of motion are examples of such rules.

It is unlikely that a theory of HCI would ever have the simple elegance of a physical theory. The conceptualisation of HCI is, to begin with, far more complex than the conceptualisation of space and mass. Additionally, the individual concepts are themselves considerably more complex than those found in physics. Contrast typical HCI concepts such as “user”, “display” and “message” with concepts from Newtonian physics such as “mass”, “distance” and “time”. Nevertheless, it is still conceivable that such a theory could be built—even if the rules were far more elaborate and commonly involved the use of qualification and uncertainty.
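The distinction between a conceptualisation and a theory can be made concrete with a toy sketch. The object types, relationships and the single rule below are invented purely for illustration; a real theory of HCI would need far richer, and probably heavily qualified, rules.

```python
# A toy rendering of the conceptualisation/theory distinction. The objects,
# relationships and the single rule are invented for illustration only.

# A conceptualisation: the objects of the field and the relationships
# that can hold between them.
conceptualisation = {
    "objects": {"user", "computer", "input device", "data"},
    "relationships": {
        ("user", "enter", "data"),
        ("user", "use", "input device"),
        ("computer", "display", "data"),
    },
}

def rule_every_entry_is_displayed(state):
    """One candidate 'law': any data a user enters must end up displayed.

    A theory, on the definition used here, is just a set of such rules; a
    state of the system satisfies the theory when it satisfies every rule.
    """
    entered = {obj for (subj, verb, obj) in state if verb == "enter"}
    displayed = {obj for (subj, verb, obj) in state if verb == "display"}
    return entered <= displayed

theory = [rule_every_entry_is_displayed]

# A particular state of the system, expressed in the conceptualisation:
state = {("user", "enter", "form data"), ("computer", "display", "form data")}
satisfied = all(rule(state) for rule in theory)
```

The point of the sketch is only that the two things are separable: the same conceptualisation (the dictionaries of objects and relationships) could be governed by many different rule sets, and vice versa.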

Tuesday, January 09, 2007

Repositioning Usability

Anyone who has worked in the field of user interaction design will know what a hard sell it is in all but the most enlightened of organisations. IT departments and IT services companies are often the most resistant customers. Why is that?

Well, I believe that, after 25 years in the business, I've finally worked it out. Below is the abstract and conclusion from a paper I've just written on the subject. The full text can be found at:

http://graham.storrs.cantalibre.com/compulsion/repositioningusability.html

ABSTRACT
In the IT services industry the various specialisms which deal directly with usability are generally considered of little value – in fact, of so little value that they are dispensed with entirely in the great majority of development projects. Some efforts have been made to cost-justify usability but IT suppliers and customers remain unconvinced. This discussion paper argues that the main reason for the perception of the low value of usability is that it is incorrectly regarded as a property of IT systems. The paper argues that a more realistic view, and one that accords better with best practice in the field, is that usability is a property of business processes. In particular, it characterises the quality of the communication of the people participating in these processes with the tools, equipment and media they use to assist them.
CONCLUDING REMARKS
It is clear to almost everyone that the software services industry is not doing a good job for its customers. It is also clear that, despite some inroads into product development, usability is not thought to offer much of value, even to an industry in such disarray. I contend that this is because the services which usability professionals are offering tend to have very low intrinsic value except in the area of task redesign. Task redesign, I argue, is the same as business process redesign. In this area, usability professionals have a huge contribution to make because process engineers have largely neglected the issue of process usability. Usability professionals have developed an extensive collection of task analysis and design approaches that help ensure process usability, and these methods and techniques could profitably be adopted in industry. Beyond even this, the use of iterative prototype-and-evaluate cycles to create usable processes has the potential greatly to improve process quality. Finally, I have argued that, to make use of these methods and techniques, organisations must radically change the way they procure IT projects. Specifically, they should fully specify their business processes, down to the level of a detailed interaction design, which would then form the basis of an IT procurement.

Friday, November 17, 2006

Jakob Nielsen

I have mixed feelings about Jakob Nielsen.

On the one hand, he’s fabulously famous (for an HCI practitioner, anyway). People who can barely spell usability are still quite likely to know the name and may even be subscribers to his Alertbox newsletter. He’s also basically sound. That is, the advice he gives is pretty much in line with what most of us would offer our clients. And, hugely to his credit, he is a great promoter of the idea that designers should base their decisions on evidence, gathered from real users.

On the other hand, the evidence he gathers is hardly what you’d call scientific. In fact, he is a good example of how people in HCI get away with abysmally low standards of evidence for their claims. There is no sign of rigorous experimental design or procedure in his reports, and the merest, occasional nod in the direction of hypothesis testing, replication and peer review. This is where his popularity becomes a problem.

It has happened a few times to me in recent years that I have had a design decision challenged on the basis that ‘last month’s Alertbox said…’ which then kicks off an hour-long discussion about the contingent nature of research findings, the need for proper hypotheses, the need for controls, other evidence from more reliable sources, the special circumstances of our own user, task, technology mix, and so on and so on. Not that I really mind having these conversations – it’s not often you get a chance to convince your client that you really do think about what you’re doing – it just bothers me that most people who read Nielsen take it as usability gospel, not just one man’s particular experiences.

I’m sure Nielsen would be the first to agree that blind faith in his pronouncements is a bad thing – but I suspect he might argue that it’s not as bad a thing as reading no usability guidelines at all. And, given that he is, after all, on the side of the angels, and he is, on the whole, pretty sound, I suppose I’d have to agree. Besides, the guy is a businessman with a living to make, and that no doubt involves presenting his slightly dubious evidence as rather more authoritative-sounding than it really is. So there is no point expecting him to qualify everything he says the way a scientist would.

Still, it makes me uneasy. The whole field is dogged by what amounts to anecdotal evidence being passed off as serious research. There is almost nothing in the way of solid theory in HCI and nobody seems to think this is a problem. Essentially that makes us a collection of craftspeople – not even engineers, and certainly not scientists. And the field is full of ‘gurus’ whose advice comes mostly from personal experience and not from good research.

Oddly enough, his business partner, Donald Norman, is someone I have considerable respect for and who has made some (of the very few) significant contributions in our meagre attempts to develop HCI theory.

Tuesday, November 14, 2006

Déjà vu

I’ve just read an interview between Jared Spool and Hagen Rivers and I’m feeling pretty depressed by it (even though the transcript was brimful of laughter!). I know that this interview was a thinly disguised advert for a report that they’re trying to sell (someone should do a book on ‘Salesmen and Showmen in HCI’) but that wasn’t what was so awful. It was the content of the report that upset me.

I must come clean here and say I have not read the report. Frankly, if it has in it what these two say is in it, I never will. It seems to concern an insight that Ms Rivers has had that the structure of Web applications can mostly be described as a set of ‘hubs’ and ‘interviews’. A ‘hub’ is just a page from which you can reach several other pages, and an ‘interview’ is a linked sequence of pages. Now I expect most of you are saying something like ‘Well, duh!’ or ‘Surely there has to be more to it than that!’ – which is pretty much what I said too.

The fact is that anyone who has been in this business for five minutes, or has ever used a Web application for that matter, will see that this is a statement of the blindingly obvious. Yet, for some reason, Ms Rivers believes this to be a profound insight and so does Jared Spool. I hate to sound like a bitter old fart but this was the kind of insight that was old hat more than 20 years ago, in the early days of hypertext, before the Web was even invented. I published a paper back then describing metrics for analysing a number of common navigational structures (and rather more sophisticated ones than the ‘hub’ and ‘interview’ structures that have just been ‘discovered’ – see Canter, D., Rivers, R. and Storrs, G. (1985). “Characterizing User Navigation Through Complex Data Structures”, Behaviour and Information Technology, 4(2), 93-102). If Ms Rivers or Mr Spool had bothered to look, they would have found many other papers on the subject too. In fact, there is a moderately large literature on hypertext navigation and structure that makes this exciting new report of theirs look rather silly.
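To show how mechanical this sort of structural classification is, here is a sketch. The link graph is invented, and the classification is far cruder than the metrics in the 1985 paper: given a Web application’s link graph, hubs and members of linear sequences fall straight out of the node degrees.

```python
# Classifying pages in a Web application's link graph. A 'hub' is a page
# linking out to several others; a member of a linear sequence has exactly
# one link in and one link out. The example graph is invented.

from collections import defaultdict

def classify(edges, hub_threshold=3):
    """Return (hubs, sequence_members) for a directed link graph."""
    out_deg = defaultdict(int)
    in_deg = defaultdict(int)
    nodes = set()
    for src, dst in edges:
        out_deg[src] += 1
        in_deg[dst] += 1
        nodes.update((src, dst))
    hubs = {n for n in nodes if out_deg[n] >= hub_threshold}
    sequence = {n for n in nodes if in_deg[n] == 1 and out_deg[n] == 1}
    return hubs, sequence

edges = [
    ("home", "search"), ("home", "account"), ("home", "help"),
    ("search", "checkout1"),
    ("checkout1", "checkout2"), ("checkout2", "checkout3"),
]
hubs, sequence = classify(edges)
```

Which is rather the point of the post: if an afternoon’s scripting recovers the ‘insight’, it was never much of a discovery.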

If this was an isolated incident, it wouldn’t be so bad but the Web is full of ‘gurus’ like this who are discovering things that were well known and even well understood many years ago. God knows, I’m no scholar but I get the impression that these ‘experts’ never open a journal to see what has been done before. They just have their amazing ‘insights’ and splurge them out as if they were news.

Human-computer interaction (oh sorry, we’re calling it ‘interaction design’ this year) has always been an almost theory-free zone but there are people out there doing proper studies and seriously trying to accumulate understanding. The least we could do is look at their findings from time to time. Isaac Newton famously commented that, if he saw a little farther than others, it was because he was standing on the shoulders of giants. Well, there are very few giants in this field but there are plenty of midgets we could be standing on if we were really interested in seeing a bit more.

All-in-all, the discipline could do with a bit more Newtonesque humility and a little less guruism.

Sunday, October 08, 2006

The Usability Culture - Part 2

Why You Need A Usability Culture

I have argued that only content is of direct value to a customer. Only content, for example, generates revenue. Customers will not pay for usability of itself — yet they will avoid sites without it. The cost of usability must therefore be added to the cost of the content. In a price-sensitive market, this makes usability appear to be a liability.

To have usability generally involves some cost over and above the usual costs of software development. I don’t want to go into the detail here but almost all software developed for online use has poor to mediocre usability and was developed without the appropriate skills or techniques. To make usability more widespread, we either need to reduce the cost of including it, or convince managers of its value. As someone who has, for 20 years, tried unsuccessfully to persuade managers that usability adds value, I know that this is not a sensible path to follow. But what if I could show that usability can be had at almost no cost? Wouldn’t it be foolish not to include it in all the software you use or produce?

Creating an excellent user experience is not about methods and techniques and tools. Of course, the usability expert has plenty of these but such things are easy to acquire and to learn. Anyone with a little intelligence can master them. What is really hard is having the right attitude. To build a user experience that is truly great requires the developer to hold the view that only the users know what they want, only the users know what works for them, and that, in the end, the quality of everything we do can only be judged by the users. It is an attitude that is alien to IT departments and to software and services companies, no matter what lip-service they pay to it. Yet it is the attitude you need to instil in your whole organisation to be a winner in the coming years. The organisations that get this right will be the ones the customers keep coming back to, the ones they tell their friends about.

Readers old enough to remember the Eighties will recall the Total Quality movement. The basic idea behind it was to instil the ideas of quality assurance and quality control into the very heart of an organisation's culture so that quality became an ordinary part of everyday business. The proponents of the movement had a slogan: 'Quality is Free'. The message behind this is that while organisations think of Quality as an add-on to their normal way of doing things, it will appear costly and difficult to justify. As soon as Quality is part of everybody's normal way of thinking, the cost of compliance drops down into the noise.

My message about customer experience is essentially the same. If you can change the culture of your organisation so that considering quality of user experience becomes a normal part of everybody's attitude, then achieving excellence in this area becomes the norm too and it happens without incurring abnormal costs. Essentially, my point is: since it costs about as much to do it badly as it costs to do it well, why wouldn't you want to do it well?

Achieving The Usability Culture

Having said all this, I now have to come clean. Achieving cultural change is not easy. In fact, it's just about the hardest thing an organisation can do. However, the techniques for organisational change have been developing for a couple of decades or more and will be well-known to most managers and management consultants reading this. So I won’t go into how to change an organisation’s culture. I will just describe the change we need to achieve.

The following list is of the ten characteristics of an organisational culture oriented towards excellent user experiences and provides a starting place for you to do a quick gap analysis on your own organisation. In the Usability Culture:

1. Nobody, from the CEO down, ever says they know what the user wants without having checked it first with the users themselves.
2. Everybody in the organisation knows exactly who uses its products and services.
3. Everyone in the IT Department understands the differences between content, usability and aesthetics, and the value of each.
4. Everyone on the whole management team understands the differences between content, usability and aesthetics, and the value of each.
5. Users are a normal part of design teams, stakeholder panels and evaluation teams.
6. For every delivery channel, people are always asking, what do the users think about how this channel is working?
7. Corporate standards exist for usability, graphic design, and content production for each channel and for each major user group.
8. Everyone in the IT Department, Marketing, Customer Service, Sales and other customer-facing departments understand the corporate standards and how to apply them in their work.
9. The IT Department, as well as business units which purchase IT, have procedures for ensuring the usability of their products as part of their normal approach to all IT procurement and development.
10. All staff on incentive schemes have part of their bonus dependent on user experience measures for, at least, all external channels.

It may look as if I am setting the bar rather high but this is the end-point, the goal to be aimed for. Any movement in this direction will yield benefits to the organisation. In the end, this transformation is about gaining and retaining market share for online services, improving the rate of consumption of online services, and improving staff efficiency and morale. It’s about helping your customers and business partners to do more business with you and helping your staff to do more business for you. In the coming years, it will become a major issue for organisations like yours.

The Usability Culture - Part 1

Introduction

By putting so much of our business on computers, we have given a problem to the people who want to do business with us and for us — the users of these systems. They have to grapple with complex and poorly designed software that makes them jump through bizarre and arbitrary hoops and dismisses them with mechanical indifference if they put a foot wrong. We all know that computers are stupid. We all know that software is difficult to use, fragile and unforgiving. Well, guess what is out there every day dealing with your customers, your business partners and your staff.

This is an article about recognising the importance of managing the experience that people have when they use your organisation's software. It is also about ways to make that experience better without spending a fortune. The central tenet of this article is that an organisation can change its attitudes and processes to give usability the appropriate focus and deliver first-class user experiences without feeling the pain.

First, though, I need to justify the need to focus on user experience at all. After that, I will try to explain why, despite the importance of a good user experience, organisations still do not value usability as a service feature. Finally, I will show that a good user experience can be achieved with little extra cost or investment, giving organisations the benefits while minimising the costs.

User Experience is Important

User experience has always been important. In the good old days (ten years ago!) when computer systems were largely used by an organisation’s own staff and nobody else, a poor reception by users was manifested in lowered productivity, higher error rates, lowered morale and so on. In some cases, new software was rejected entirely by the intended users. Fixes for this kind of problem included additional training, software fixes (contributing much of the so-called ‘maintenance’ costs of new systems) and procedural work-arounds. Yet, since it was all happening within the organisation, management could take steps, blame the obstinacy or stupidity of its staff and, generally, keep the organisation going despite the problems.

These days, your computer software is increasingly out there, interacting directly with suppliers, distributors and, worst of all, your customers. Surveys repeatedly show that the quality of the user experience is typically among the top three reasons why customers would go back to a website (up there with the brand and the products). It is also worth noting that ‘word of mouth’ is normally ranked very highly among the reasons why people go to a particular website in the first place. And what about the brand damage you can suffer? ‘Brand equity’ is not just an airy-fairy notion; it translates to real dollars and cents on your share price.

This is a serious problem for organisations attempting to do business in our networked world. Organisations can educate, incentivise or bully their own staff into using poorly designed computer software but they can rarely force a business partner or a customer. We’ve had a brief honeymoon period where simply having an online service has been a market differentiator but this is no longer the case. Within another five years, not having an online service will be the exception. As this comes about, the quality of the online user experience itself becomes a major differentiator, and organisations that treat their customers and business partners badly will suffer the inevitable consequences.

Why Usability has no Value (but you need it anyway!)

The user experience of a piece of software has three major aspects: usability, aesthetics and content. Whether the software is a business tool, a database or a retail website, it has the same three aspects.

Usability is the aspect which makes it more or less easy for someone to use the software to achieve some task. It therefore includes all the ergonomic features of the software such as the legibility of any text, the comprehensibility of any messages, and the way the software is structured for navigation so that people know where they are and how to find what they need. It also encompasses more subtle properties to do with how the interaction with the user unfolds over time, whether the software seems coherent and consistent, and whether there is a good match to the user's expectations and preconceptions.

Aesthetics is about the way the software appears to the user. It is about Design with a capital 'D': the style, the layouts, the choices of fonts and colours, the use of visual, verbal and other stylistic elements to convey brand and image messages. Aesthetics is often confused with usability but the two are very different and sometimes the requirements of one are diametrically opposed to those of the other. I won’t dwell on aesthetics here but it has its place in the overall user experience and it too should not be neglected.

The third aspect of a piece of software is its content. Content is a broad term to include whatever the organisation is providing to the user. It may be products (as in a Web shop), or services (such as advice or news), or tools (such as business applications). Whatever it is, it is the supply of the content to the user that is the main purpose of the software.

And this is where people get so confused about their online service delivery: they believe it is all about the content, because the content is where the user receives value and is what the customer is paying for. It is true that the value of content has to be maximised, especially (but not only) for customers. This is essential to the so-called ‘value proposition’ of the organisation. It is also true that, since usability and aesthetics are not what the customer came to buy, their addition to the software is often seen as an unnecessary expense: one that does not in itself add value and merely increases the cost of providing the content.

However, as I have already argued, the total user experience is of great importance to the user — and it should be of equal importance to the organisations that rely on it to promote their offerings. Content itself is of little value to anyone if it cannot be found, or understood, or easily used or purchased. The value of content should be maximised but usability and aesthetics should be optimised. That is, they should be developed to the levels appropriate to the efficient and effective use or acquisition of the content.

On to Part 2 >>

Sunday, September 24, 2006

Science and the HCI Practitioner

I used to be a scientist. After my PhD work, I held research jobs at a couple of universities before joining Logica's R&D centre in Cambridge (the original one). And then I became a craftsman when I began commercial human-computer interaction work.

The thing about HCI is that there isn't much science behind it. Even by the standards of the social sciences, what passes for research in HCI is barely credible. In fact, I'd go so far as to say that most studies in the field are so specific to the technologies, tasks and environments involved that they probably can't be generalised beyond that particular situation. Proper experimental design, let alone replication of a result, is incredibly rare.

Yet the field is full of rules and guidelines, dos and don'ts, as if there were a vast corpus of repeatable research behind them. (I know, I know. Some of you are shouting 'What about Fitts's Law?' and you're right. That is real, solid research. One piece. Count it. One.) So what's going on here? How do people like me get to tell our clients we're experts with a straight face?
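For readers who haven't met it, Fitts's Law is one of those rare results that really does reduce to a formula: the time to acquire a target grows with the logarithm of the ratio of its distance to its width. A minimal sketch in Python; the coefficients a and b below are purely illustrative placeholders, since in practice they are fitted by regression for a particular device and user population:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to acquire a target, using the
    Shannon formulation of Fitts's Law:
        MT = a + b * log2(distance / width + 1)
    The index of difficulty, log2(distance/width + 1), is measured in
    bits; a and b are empirical constants (illustrative defaults here,
    not measured values)."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A small, distant target takes longer to hit than a large, nearby one.
far_small = fitts_movement_time(distance=800, width=20)
near_large = fitts_movement_time(distance=100, width=100)
```

This is why, for example, screen corners and edges make good click targets: they behave as effectively infinite in width along one axis, driving the index of difficulty down.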

Well, partly it's this. Proper research takes a very, very long time. By the time you've done a serious study and published it in a peer-reviewed journal, the chances are you can no longer even buy the display technologies, operating systems and input devices that were used in your experiments. The H in HCI may not have changed in 150,000 years but the C certainly has, and so has the I. And proper science is just no good at shooting at such rapidly-moving targets.

So we experts rely on three things: the experience we gain in working with people and interactions over years or decades; a set of 'rules of thumb' which we know are probably wrong in many situations; and an attitude to the problem that goes ‘the best thing to do is evaluate this with real users’.

That’s why I think of myself as a craftsman, rather than a scientist, or even a technologist – because my real skills are in my approach, my kitbag of techniques, my years of experience, and my attitude to the problem. I wish, sometimes, it could be different but, in this field, it can’t.