Have you ever discussed survey data with a stakeholder and come to different conclusions about what the data meant or how you should move forward? We definitely have, and unfortunately this is a common issue with interpreting data. When people think of data or testing, many assume the answers drawn from it should be straightforward - how can data be anything but concrete?

Interpreting user feedback from large sets of data can be tricky because we know people aren’t always honest in a test, or even honest with themselves. Interpreting user data is all about reading between the lines and forming hunches based on what we see. We have a saying at ZURB: “People don’t know what they’ll do, until they do it.”

So how do you navigate data when this is the case? For our part, we lean on abductive reasoning: we take many data points from a user and look for patterns that indicate something interesting about their personality or behavior.

