I’ve seen it dozens of
times. The design team meets after observing people use their design, and they’re
excited and energized by what they saw and heard during the sessions. They’re
all charged up about fixing the design. Everyone comes in with ideas, certain they
have the right solution to remedy users’ frustrations. Then what happens?
On a super collaborative
team, everyone is in the design together, just with different skills. Splendid!
Everyone was involved in the design of the usability test. They all watched
most of the sessions. They participated in debriefs between sessions. They took
detailed, copious notes. And now the “what ifs” begin:
What if we just changed
the color of the icon? What if we made the type bigger? What if we moved the
icon to the other side of the screen? Or a couple of pixels? What if?
How do you know you’re
solving the right problem? Well, the team thinks they’re on the right track
because they paid close attention to what participants said and did. But teams
often leave that data behind when they’re trying to decide what to do.
On a super collaborative
team, everyone is rewarded for doing the right thing for the user, which, in
turn, is the right thing for the business. Everyone is excited about learning
about the goodness (or badness) of the design by watching users use it. But a
lot of teams get stuck in the step after observation. They’re anxious to get to
design direction. Who can blame them? That’s where the “what ifs” and power
plays happen. Some teams get stuck and others try random things because they’re
missing one crucial step: going back to the evidence for the design change.
Observations tell you what
happened. That is, you heard participants say things and you saw them do
things—many, many interesting (and sometimes baffling) things. Good things and
bad things. Some of those things backed up your theories about how the design
would work. Some of the observations blew your theories out of the water. And
that’s why we do usability testing: to learn, in a low-risk situation like a
small, closed test, what it will be like when our design is out in the wild.
The next natural step is
to make inferences. These are guesses or judgments about why the
things you observed happened. We all do this. It’s usually what the banter is
all about in the observation room.
“Why” is why we do this
usability testing thing. You can’t get to “why” from surveys or focus groups. But even in direct
observation, with empirical evidence, “why” is sometimes difficult to ferret out. A lot of times
the participants just say it. “That’s not what I was looking for.” “I didn’t
expect it to work that way.” “I wouldn’t have approached it that way.” “That’s
not where I’d start.” You get the idea.
But they don’t always tell
you the right thing. You have to watch. Where did they start? What wrong turns
did they take? Where did they stop? What happened in the three minutes before
they succeeded or failed? What happened in the three minutes after?
It’s important to get
judgments and guesses out into the fresh air and sunshine by brainstorming them
within the team. When teams make guessing at the why an explicit act that
they do in a room together, they test the boundaries of their observations.
It’s also easy to see when different people on the team saw things similarly
and where they saw them differently.
And so we come to the
crucial step, the one that most teams skip over and the reason why they end up
in the “what ifs” and opinion wars: analysis. I’m not talking about
group therapy, though some teams I’ve worked with could use some. Rather, the
team now looks at the strength of the data to support design decisions.
Without this step, it is far too easy to choose the wrong inference to direct
the design decisions. You’re working from the gut, and the gut can be wrong.
Analysis doesn’t have to
be difficult or time-consuming. It doesn’t even have to involve spreadsheets. (Though 95% of data analysis does. Sorry.) And
it doesn’t have to be lonely. The team can do it together. The key is examining
the weight of the evidence for the most likely inferences.
Take all those brainstormed
inferences. Throw them into a hat. Draw one out and start looking at data you
have that supports that being the reason for the frustration or failure. Is
there a lot? A little? Any? Everyone in the room should be poring through their
notes. What happened in the sessions? How much? How many participants had a
problem? What kinds of participants had the problem? What were they trying to
do and how did they describe it?
Answering questions like
these, among the team, helps us understand how likely it is that this
particular inference is the cause of the frustration. After a few minutes of
this, it is not uncommon for the team to collectively have an “ah-ha!” moment.
Breakthrough comes as the team eliminates some inferences because they’re weak,
and keeps others because they are strong. Taking the strong inferences together,
along with the data that shows what happened and why, snaps the design
direction right into focus.
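If your team does keep session notes in a spreadsheet, the weighing step can be sketched as a simple tally: for each inference, count how many distinct participants’ behavior supports it, then rank. This is a hypothetical illustration of the idea, not the author’s method; the inference labels and data are made up.

```python
# Hypothetical session notes: (participant, inference the observation supports).
# All names and labels here are illustrative only.
observations = [
    ("P1", "label unclear"), ("P2", "label unclear"), ("P3", "label unclear"),
    ("P1", "icon too small"),
    ("P4", "label unclear"), ("P5", "wrong mental model"), ("P2", "wrong mental model"),
]

def weigh_inferences(obs):
    """Rank inferences by how many distinct participants' behavior supports each."""
    support = {}
    for participant, inference in obs:
        # Use a set so one participant hitting the same problem twice counts once.
        support.setdefault(inference, set()).add(participant)
    return sorted(((inf, len(people)) for inf, people in support.items()),
                  key=lambda pair: -pair[1])

for inference, weight in weigh_inferences(observations):
    print(f"{inference}: {weight} participant(s)")
```

Strong inferences float to the top; weak ones (one participant, one observation) fall to the bottom and can usually be eliminated.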
The team comes to the
design direction meeting knowing what the priority issues were. Everyone has at
least one explanation for the gap between what the design does and what the
participant tried to do. Narrowing those guesses to what is the most likely
root cause based on the weight of the evidence—in an explicit, open and
conscious act—takes the “what ifs” out of the next version of a design, and
shares the design decisions across the team.
Ed. note: This article was originally published on Usability Testing, the author’s blog.
Dana is a self-described elections nerd who has been working in civic design since 2001. She co-founded the Center for Civic Design with Whitney Quesenbery in 2013.