I’ve seen it dozens of
times. The design team meets after observing people use their design, and they’re
excited and energized by what they saw and heard during the sessions. They’re
all charged up about fixing the design. Everyone comes in with ideas, certain they
have the right solution to remedy users’ frustrations. Then what happens?
On a super collaborative
team, everyone is in the design together, just with different skills. Splendid!
Everyone was involved in the design of the usability test. They all watched
most of the sessions. They participated in debriefs between sessions. They took
detailed, copious notes. And now the “what ifs” begin:
What if we just changed the color of the icon? What if we made the type bigger? What if we moved the icon to the other side of the screen? Or nudged it a couple of pixels? What if?
How do you know you’re solving the right problem? Well, the team thinks they’re on the right track because they paid close attention to what participants said and did. But teams often leave that data behind when they’re trying to decide what to do. This is where things go wrong.
On a super collaborative
team, everyone is rewarded for doing the right thing for the user, which, in
turn, is the right thing for the business. Everyone is excited about learning
about the goodness (or badness) of the design by watching users use it. But a
lot of teams get stuck in the step after observation. They’re anxious to get to
design direction. Who can blame them? That’s where the “what ifs” and power
plays happen. Some teams get stuck and others try random things because they’re
missing one crucial step: going back to the evidence for the design change.
Observations tell you what
happened. That is, you heard participants say things and you saw them do
things—many, many interesting (and sometimes baffling) things. Good things and
bad things. Some of those things backed up your theories about how the design
would work. Some of the observations blew your theories out of the water. And that’s why we do usability testing: to find out, in a low-risk situation like a small, closed test, what it will be like when our design is out in the wild.
The next natural step is
to make inferences. These are guesses or judgments about why the
things you observed happened. We all do this. It’s usually what the banter is
all about in the observation room.
“Why” is why we do this
usability testing thing. You can’t get to “why” from surveys or focus groups. But even in direct observation, with empirical evidence, “why” is sometimes difficult to ferret out. A lot of times
the participants just say it. “That’s not what I was looking for.” “I didn’t
expect it to work that way.” “I wouldn’t have approached it that way.” “That’s
not where I’d start.” You get the idea.
But they don’t always tell
you the right thing. You have to watch. Where did they start? What wrong turns
did they take? Where did they stop? What happened in the three minutes before
they succeeded or failed? What happened in the three minutes after?
It’s important to get
judgments and guesses out into the fresh air and sunshine by brainstorming them
within the team. When teams make guessing at the why an explicit act that they do in a room together, they test the boundaries of their observations. It’s also easy to see where different people on the team saw things similarly and where they saw them differently.
And so we come to the
crucial step, the one that most teams skip over and the reason why they end up
in the “what ifs” and opinion wars: analysis. I’m not talking about
group therapy, though some teams I’ve worked with could use some. Rather, the
team now looks at the strength of the data to support design decisions.
Without this step, it is far too easy to choose the wrong inference to direct
the design decisions. You’re working from the gut, and the gut can be wrong.
Analysis doesn’t have to
be difficult or time-consuming. It doesn’t even have to involve spreadsheets. And
it doesn’t have to be lonely. The team can do it together. The key is examining
the weight of the evidence for the most likely inferences.
Take all those brainstormed
inferences. Throw them into a hat. Draw one out and start looking at data you
have that supports that being the reason for the frustration or failure. Is
there a lot? A little? Any? Everyone in the room should be poring through their
notes. What happened in the sessions? How much? How many participants had a
problem? What kinds of participants had the problem? What were they trying to
do and how did they describe it?
Answering questions like these, among the team, helps us understand how likely it is that this particular inference is the cause of the frustration. After a few minutes of this, it is not uncommon for the team to collectively have an “aha!” moment.
Breakthrough comes as the team eliminates some inferences because they’re weak,
and keeps others because they are strong. Taking the strong inferences together,
along with the data that shows what happened and why, snaps the design
direction right into focus.
The team comes to the
design direction meeting knowing what the priority issues were. Everyone has at
least one explanation for the gap between what the design does and what the
participant tried to do. Narrowing those guesses to what is the most likely
root cause based on the weight of the evidence—in an explicit, open and
conscious act—takes the “what ifs” out of the next version of a design, and
shares the design decisions across the team.
Ed. note: This article was originally published on Usability Testing, the author’s blog.
You might’ve heard the buzz around Dana’s Kickstarter campaign to redesign ballots, a serious usability issue that has plagued election credibility in the past.
But what you might not know is that her love of user research started in 1983, when she participated in a usability test on a mainframe office system developed by IBM.
Since then, as founder of UsabilityWorks, Dana has gathered and analyzed user data to inform product designs for clients like Yahoo!, Intuit, AARP, Wells Fargo, E*TRADE, and Sun Microsystems.
Dana’s workshops are known for being highly interactive, learning-intensive, and seriously fun. If you’re looking for guidance with usability testing and user research, check out her book, “Handbook of Usability Testing,” or read the prolific writings on her blog.