I’ve seen it dozens of
times. The design team meets after observing people use their design, and they’re
excited and energized by what they saw and heard during the sessions. They’re
all charged up about fixing the design. Everyone comes in with ideas, certain they
have the right solution to remedy users’ frustrations. Then what happens?
On a super collaborative
team, everyone is in the design together, just with different skills. Splendid!
Everyone was involved in the design of the usability test. They all watched
most of the sessions. They participated in debriefs between sessions. They took
detailed, copious notes. And now the “what ifs” begin:
What if we just changed
the color of the icon? What if we made the type bigger? What if we moved the
icon to the other side of the screen? Or just a couple of pixels? What if?
How do you know you’re
solving the right problem? Well, the team thinks they’re on the right track
because they paid close attention to what participants said and did. But teams
often leave that data behind when they’re trying to decide what to do.
On a super collaborative
team, everyone is rewarded for doing the right thing for the user, which, in
turn, is the right thing for the business. Everyone is excited about learning
about the goodness (or badness) of the design by watching users use it. But a
lot of teams get stuck in the step after observation. They’re anxious to get to
design direction. Who can blame them? That’s where the “what ifs” and power
plays happen. Some teams get stuck and others try random things because they’re
missing one crucial step: going back to the evidence for the design change.
Observations tell you what
happened. That is, you heard participants say things and you saw them do
things—many, many interesting (and sometimes baffling) things. Good things and
bad things. Some of those things backed up your theories about how the design
would work. Some of the observations blew your theories out of the water. And
that’s why we do usability testing: to find out, in a low-risk situation like a small,
closed test, what it will be like when our design is out in the wild.
The next natural step is
to make inferences. These are guesses or judgments about why the
things you observed happened. We all do this. It’s usually what the banter is
all about in the observation room.
“Why” is why we do this
usability testing thing. You can’t get to “why” from surveys or focus groups. But even in direct
observation, with empirical evidence, “why” is sometimes difficult to ferret out. A lot of times
the participants just say it. “That’s not what I was looking for.” “I didn’t
expect it to work that way.” “I wouldn’t have approached it that way.” “That’s
not where I’d start.” You get the idea.
But they don’t always tell
you the right thing. You have to watch. Where did they start? What wrong turns
did they take? Where did they stop? What happened in the three minutes before
they succeeded or failed? What happened in the three minutes after?
It’s important to get
judgments and guesses out into the fresh air and sunshine by brainstorming them
within the team. When teams make the guessing of the why an explicit act that
they do in a room together, they test the boundaries of their observations.
It’s also easy to see where different people on the team saw things similarly
and where they saw them differently.
And so we come to the
crucial step, the one that most teams skip over and the reason why they end up
in the “what ifs” and opinion wars: analysis. I’m not talking about
group therapy, though some teams I’ve worked with could use some. Rather, the
team now looks at the strength of the data to support design decisions.
Without this step, it is far too easy to choose the wrong inference to direct
the design decisions. You’re working from the gut, and the gut can be wrong.
Analysis doesn’t have to
be difficult or time-consuming. It doesn’t even have to involve spreadsheets. And
it doesn’t have to be lonely. The team can do it together. The key is examining
the weight of the evidence for the most likely inferences.
Take all those brainstormed
inferences. Throw them into a hat. Draw one out and start looking at data you
have that supports that being the reason for the frustration or failure. Is
there a lot? A little? Any? Everyone in the room should be poring through their
notes. What happened in the sessions? How much? How many participants had a
problem? What kinds of participants had the problem? What were they trying to
do and how did they describe it?
Answering questions like
these, among the team, helps us understand how likely it is that this
particular inference is the cause of the frustration. After a few minutes of
this, it is not uncommon for the team to collectively have an “ah-ha!” moment.
Breakthrough comes as the team eliminates some inferences because they’re weak,
and keeps others because they are strong. Taking the strong inferences together,
along with the data that shows what happened and why, snaps the design
direction right into focus.
The team comes to the
design direction meeting knowing what the priority issues were. Everyone has at
least one explanation for the gap between what the design does and what the
participant tried to do. Narrowing those guesses to what is the most likely
root cause based on the weight of the evidence—in an explicit, open and
conscious act—takes the “what ifs” out of the next version of a design, and
shares the design decisions across the team.
Ed. note: This article was originally published on Usability Testing, the author’s blog.
Dana is a self-described elections nerd who has been working in civic design since 2001. She co-founded the Center for Civic Design with Whitney Quesenbery in 2013.