All posts tagged “Research”

23andMe to offer users’ medical data to Pfizer for research

Following hard on the heels of its $60 million deal with Genentech, personal genetics startup 23andMe has announced an agreement to share its user data and research platform with pharmaceutical giant Pfizer. Although 23andMe is still languishing under FDA restrictions (the company is only permitted to offer ancestry reports and raw genetic data to customers — not medical analysis), its well-organized database of some 640,000 genotyped individuals is proving popular with the medical industry.

“The largest dataset of its kind”

In a press statement announcing the deal, 23andMe spelled out the attractions of its genetic resources: “Researchers can now fully benefit from the largest dataset of its kind, running queries in minutes across…

Continue reading…

The Verge – All Posts

Amazon tells FAA to change drone laws or it’ll move research abroad

Amazon has warned the Federal Aviation Administration (FAA) that if it doesn’t relax its attitude towards drone regulation, the internet giant will be forced to move its research teams out of the country.

“Without the ability to test outdoors in the United States soon, we will have no choice but to divert even more of our [drone] research and development resources abroad,” said Amazon’s vice president of global public policy Paul Misener in a letter to the FAA seen by the Wall Street Journal. “I fear the FAA may be questioning the fundamental benefits of keeping [drone] technology innovation in the United States,” said Misener.

Amazon is already experimenting with drones in the UK

Amazon’s plan to introduce delivery-by-drone in its…

Continue reading…

The Verge – All Posts

The Office For Creative Research: The creative trio uses boundary-defying interfaces at the nexus of art and technology to help people understand big data

The Office For Creative Research

NYC-based The Office for Creative Research unites three internationally renowned media artists—Jer Thorp, Ben Rubin and Mark Hansen—in an exploration of data’s expressive possibilities. OCR harnesses vast troves of raw data, streaming it through…

Continue Reading…


Cool Hunting

Studio Visit: Lex Pott: The Dutch designer on using extensive research and experimentation to create a new language from archetypical materials

Studio Visit: Lex Pott


Since graduating cum laude in 2009 from the prestigious Design Academy Eindhoven, young Dutch designer Lex Pott has been very busy—first with working for Hella Jongerius and …

Continue Reading…


Cool Hunting

Guerrilla Research Tactics and Tools

I was recently in a project meeting in which several stakeholders were drawn into an argument over a homepage design. As the UX professional in the room, I pointed out that we aren’t our users, and suggested we invest a few weeks into research to learn what users are really doing. The project lead rejected the idea, deeming that we didn’t have time for research. Instead we’d just have to rely on assumptions, debate and ‘best practice’.

Many UX practitioners can relate to this scenario. The need to stay competitive forces agencies, freelancers and internal teams to reduce budgets however they can. Much to the chagrin of designers, research time is often the first cut.

The problem is that cutting research often results in usability disasters. With no data or insight, people fall back on assumptions—the enemy of good design. Stakeholders will preface statements with ‘As a user…’, forgetting that we aren’t our users. Without research we inadvertently make decisions for ourselves instead of our target audience.

In times like these we need guerrilla research. To be ‘guerrilla’ is to practice faster, cheaper and often less formal research alternatives; alternatives that don’t necessarily need to be sponsored, budgeted or signed off on. Much like the warfare from which it takes its name, guerrilla research is unconventional yet effective, in that it allows the designer to gather meaningful data at low cost.

The concept of guerrilla research isn’t new. Experience designer David Peter Simon discussed the basics of guerrilla usability testing here at UX Booth last July. Now I’m going to expand on his premise by reviewing other tools to add quantitative and qualitative insight—without impacting the project budget.

Research smart, research fast

Limited budgets require us to be very efficient, and a traditional UX research phase can be very involved. Researchers can potentially spend weeks trawling data, conducting interviews or running user testing in order to ultimately identify valuable insights.

When budgets and time are constrained however, we need to know exactly what insights are needed before we begin.

Let’s refer back to the meeting I had, discussing my client’s homepage design. The issue was that we couldn’t answer a specific question: ‘What do people do on our home page?’ In most cases, a question like this is the perfect frame for a research topic. If we can plan our research as a series of questions, we can keep our time much more focused. The more questions we can identify early in the design process, the more likely we will prevent nasty and unproductive debates later on.

Some examples of research questions might include:

  • How do people navigate the site?
  • How easily do people understand what the site is about?
  • What do people do after purchasing a product?

This kind of mentality is absolutely critical for guerrilla research, when it’s even more important to keep the research directed, quick, and low-budget. Once we’ve set our research questions we can then select the cheapest and most effective tools for providing an answer.

Online tools

Thanks to advancements in technology, researchers are now spoiled for choice with options for quickly gathering usability data. The methods we have at our disposal make this a really exciting time for all researchers—guerrilla and otherwise. Techniques with origins in expensive usability labs have now been adapted into ‘quick and dirty’ online tools by our peers.

Analytical tools

Though often misconstrued as a resource only for marketers, Google Analytics is one of the most invaluable UX tools available. It provides a huge array of information about site visitors and their behaviours. Best of all it costs nothing and is immediately accessible. The data can be used to answer questions like:

  • What are our users’ interests?
  • How do users move through the site?
  • Do behaviours vary between devices, locations or demographics?

Additional free analytics tools such as page load speed calculators and accessibility checkers are also incredibly handy. They help us understand the existing site performance, and set goals for improving these factors in our redesign work.

Usage for guerrilla research: These tools are totally free and the data is immediately available. There’s no need to get sign-off or budget approval for gathering analytics data. It’s ready and waiting to be analysed. In my own experience, many clients already have Google Analytics or an equivalent set up (even if they aren’t using it). Adding these tools to the project workflow permits fast data insight with no budget or stakeholder dependencies.

Analytics help us to identify user interests.
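
Exported analytics data can also be interrogated with a short script when the built-in views don’t quite answer a research question. Here’s a minimal sketch in pandas, assuming a hypothetical CSV export: the filename and column names are placeholders, so adjust them to match your own report.

```python
import pandas as pd

# Hypothetical CSV export of a device report from Google Analytics.
# The filename and column names are assumptions; adjust to your export,
# and note this assumes "Bounce Rate" arrives as a numeric column.
df = pd.read_csv("analytics_device_report.csv")

# "Do behaviours vary between devices?": compare engagement per category.
summary = df.groupby("Device Category").agg(
    sessions=("Sessions", "sum"),
    avg_bounce_rate=("Bounce Rate", "mean"),
)

# Flag device categories bouncing noticeably more than the site average.
summary["above_avg_bounce"] = (
    summary["avg_bounce_rate"] > summary["avg_bounce_rate"].mean()
)

print(summary.sort_values("sessions", ascending=False))
```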

Heatmapping tools

Traditional analytics are great for finding the ‘what?’, but less so for identifying the ‘why?’. Heat mapping services such as CrazyEgg and ClickTale track actions like mouse-movement, clicking and scrolling at the page level. We can use these tools to answer questions like:

  • Are our calls to action effective?
  • Is a particular in-page feature used?
  • Do people scroll on long pages?

Usage for guerrilla research: These tools are a fast and affordable way to get behavioural insight. They replicate the effects of eye tracking, but with a much lower barrier to entry (think $10 instead of $10,000). Stakeholders also respond very well to this data, as the heat map visualisations speak for themselves during meetings and presentations. They effectively answer the eternal question “did the user look at what we wanted them to see?”

Heatmapping tools show where users look on the screen.
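
For the curious, the core of what these services do is easy to approximate with your own event logs: bucket interaction coordinates into a grid and count. A toy sketch, assuming you have already collected (x, y) click positions somehow (for example, via a small JavaScript listener posting events to your server):

```python
import numpy as np

# Hypothetical log of (x, y) click coordinates for one page; the values
# and page dimensions below are invented for illustration.
clicks = [(512, 88), (514, 92), (40, 700), (980, 60), (515, 90)]

PAGE_W, PAGE_H = 1024, 2048   # assumed page dimensions in pixels
CELL = 64                     # bucket clicks into 64px squares

grid = np.zeros((PAGE_H // CELL, PAGE_W // CELL), dtype=int)
for x, y in clicks:
    row = min(y // CELL, grid.shape[0] - 1)
    col = min(x // CELL, grid.shape[1] - 1)
    grid[row, col] += 1

# The densest cells are the de facto "hot spots" on the page.
hottest = np.unravel_index(grid.argmax(), grid.shape)
print(f"Hottest region: row {hottest[0]}, col {hottest[1]} "
      f"({grid[hottest]} clicks)")
```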

Keyword & content analysis

Keyword research is a crucial component of your site’s SEO, but also has a huge effect on user experience. Understanding the vocabulary that people prefer can heavily feed into the planning of a site’s information architecture, and tools such as Google’s Keyword Planner can support this. This same research can then be used to optimise site search, plan landing pages and feed into content strategy. Though they come at a slight cost, tools like Moz can produce inventories of existing content—and that of competitors.

From a UX perspective, effective keyword research allows us to answer questions like:

  • How should we structure the site information architecture?
  • What naming / labelling should be used in navigation?
  • Are there particular sections of the content we should prioritise?

Usage for guerrilla research: Conducting a content inventory manually is a tedious and time-consuming exercise. It’s such a large task that it is often planned as a separate phase of the project entirely. When there’s no time or budget for a full content inventory, these tools can ensure that key information is still acquired at low cost, informing future content and navigation choices.
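
When even a paid tool is out of budget, a rough first-pass inventory can be scripted. The sketch below is illustrative only: the function and its limits are my own invention, and a real crawl would need politeness controls such as robots.txt checks and rate limiting. It walks same-domain links and records a title and word count per page:

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def content_inventory(start_url, max_pages=50):
    """Crawl same-domain pages, recording title and word count per URL."""
    domain = urlparse(start_url).netloc
    seen, queue, rows = set(), deque([start_url]), []
    while queue and len(rows) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip pages that fail to load
        soup = BeautifulSoup(html, "html.parser")
        title = soup.title.string.strip() if soup.title and soup.title.string else ""
        rows.append({
            "url": url,
            "title": title,
            "word_count": len(soup.get_text(" ", strip=True).split()),
        })
        # Queue same-domain links for the next pass.
        for link in soup.find_all("a", href=True):
            target = urljoin(url, link["href"]).split("#")[0]
            if urlparse(target).netloc == domain:
                queue.append(target)
    return rows

# e.g. rows = content_inventory("https://example.com")
```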

Unmoderated user testing

Pioneers in UX originally outlined user testing as a rigorously scientific process. Studies were run in labs using scripts, task sheets and specialist recording equipment. This was all very expensive though—a study by Jakob Nielsen in 2003 cited the average cost for usability test recruitment alone as $171 per participant, and this didn’t even account for the researcher’s time!

Small budgets rarely permit this level of formality when testing. Moreover, when research time is squeezed to its limits even the leanest of methodologies may present too much of a time barrier. Luckily, with the right tools participants can still test the product without needing a moderator.

There are many free screen and audio recording tools that let users test products on their own; the company just needs to send out the test scenarios. For example, MacBooks come bundled with QuickTime as standard. For PC users, or to enhance recordings, paid tools such as OpenHallway offer slightly more robust features and task prompting that can be useful, but aren’t at all required.

Video players such as VLC will also allow researchers to increase the speed of video playback. By watching the session at double speed, the researcher can get all of the insight from the session in half the time.

Usage for guerrilla research: Unmoderated testing leaves the researcher to analyse the sessions in their own time. It’s always preferable for researchers to moderate their usability sessions (even if remotely) to ensure that participants stick to the ‘think-aloud’ protocol, but this approach allows for qualitative insights when that time isn’t available.

‘Targeted’ feedback tools

Sometimes timescales and budgets are so prohibitive that full testing of the product is out of the question. This sounds dramatic, but it’s the harsh reality of many projects. If we just need to quickly test our landing pages or calls to action, services like Five Second Test or Verify App are ideal. Because of the shorter and more focused nature of the tests, it’s possible to get qualitative feedback really quickly and without needing to schedule or analyse full user testing sessions.

This can answer more subjective questions around how people feel, giving insights that you couldn’t get with an analytics tool. These might include questions like:

  • Do visitors like the brand look-and-feel?
  • Are landing pages catching the user’s attention?
  • Do people understand the product’s value proposition?

Usage for guerrilla research: These tools prove that qualitative insights can be gained without needing to organise traditional user testing. They forego broad-brush usability testing sessions in favour of tighter and more focused feedback results. These insights will be specific to smaller details of the product, and we can use them to address specific research questions.

Are landing pages catching users’ attention?

All research is good research

There’s no single ‘right way’ to approach research, but any research is better than no research. Embracing these guerrilla techniques in my workflow has done wonders for my productivity in otherwise troublesome projects.

If we can take away one lesson from guerrilla research, it’s the necessity of being pragmatic and flexible—it’s often the way to get things done. A fully scoped and budgeted research phase is certainly nice, but we can free ourselves from believing it’s mandatory. Above all else, there is no excuse for designing based on assumptions—in the immortal words of Jakob Nielsen: “Leaving the user out is not an option.”

Further reading


The UX Booth


The Ethics of UX Research

As a UX researcher for a social media operation, Ute considers different interface designs that might allow users to make more social contacts. Ute gets a radical idea to test her hunches: What if we manipulated some of our current users’ profile pictures and measured the impact of those changes on their friends list? If successful, her research would provide valuable insight into the social media design elements most likely to result in sociability online. Of course, a successful study would also diminish the experiences of thousands already using her company’s service. In Ute’s mind, this is a simple A/B test, yet in the wake of recent controversy surrounding social media research, she’s starting to wonder if she should be concerned about the ethics of her work.

As a research scientist and professor at two different universities, I work to better understand the social and psychological impact of technology on human communication. Our experiments have tested the limits of accepted research design practice, with designs ranging from the manipulation of romantic jealousy using social networks to studying the impact of induced stress and boredom on video game experiences, and a host of other experiments and observations. Yet, these studies all share a common element: they were all subject to intensive internal and external ethical review practices to ensure that participants in these studies were both informed (either before or after the study concluded) and unharmed.

CITI Researcher Certification

On these two points, recent debates surrounding the Facebook “emotional contagion” study have centered on notions of informed consent (Did Facebook users know they were in a study?) and minimizing harm (Were any Facebook users hurt by this study?). Yet, to the majority of UX researchers who have not undergone the same required extensive ethics training as biomedical and social scientists, some of these issues appear more abstract than useful. To this end, I offer below an “insider’s perspective” into the mechanics of research ethics, along with some issues that UX researchers might consider in their daily practice.

So, UX research isn’t research!?!

First, a quick primer on how we define research. As would be suggested in the job title, UX researchers are often tasked with gathering and analyzing user data, usually drawing comparisons between different interface designs to see which ones result in the most desired behaviors among particular users.

However, such activity does not usually fall under the legal definition of research. According to U.S. Department of Health and Human Services regulations (45 CFR §46.102), research is defined as “systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.”

That last clause, “…generalizable knowledge”, is key, as the vast majority of A/B testing is not intended to contribute to the larger body of knowledge on UX – indeed, much of this work is proprietary to the companies conducting it, and never released to the public. Ironically, what might well have helped Facebook is if they had never published the study in the first place, an idea that caused a bit of confusion on Twitter as to why it’s okay to do research, so long as it isn’t published.

What that means for us UX researchers is that technically, any research is “allowed” because it isn’t research. However, in order to make ethical decisions that we are comfortable with as human beings, it’s worth digging deeper to understand why UX research isn’t subject to the same ethics reviews as other research.

Legally ethical research

One common reason that internal corporate research—such as product testing—is not often subject to ethics review is that most UX research is done on anonymous data, or data without any personal information.

Regarding the Facebook study, one university exempted the study from internal review because the researchers were never given direct access to any individual Facebook user data. In general, research on big data tends to be exempt from ethics review so long as the data is aggregated and not focused on individual persons, and many social and behavioral scientists have subscribed to this ethical perspective.

However, even when data is anonymous, this doesn’t mean that people aren’t affected. In most research ethics reviews, the main concern is balancing the risks and rewards of a given study. The research team must prepare an argument that the societal benefits of the study’s potential outcomes substantially outweigh any risks to people participating in the study.

As a dramatic example, a team of biomedical researchers might approach terminal cancer patients with an opportunity to participate in a randomized controlled trial in which they are randomly assigned to receive either (a) a proprietary and experimental cancer medication or (b) a placebo. In this case, the societal benefits (a potential cure for a particular cancer) are thought to outweigh the risks (the eventual death of terminal cancer patients not receiving the experimental medication).

Likely, the risks of most technology research (including my own) are far less extreme – perhaps influencing a user to spend more time reading a particular advertisement or sharing a story element with their social media followers. However, UX researchers should still ask the question: “Would participants in this study be exposed to risks that are greater than those encountered in everyday life?” If the researchers can honestly answer “no,” then their studies are usually fine. In the case of the Facebook study, most have argued that the purposeful manipulation of emotions exposed participants to unnecessary psychological risk (such as depression or other negative emotional states). Moreover, while the end result of the Facebook study turned out to be statistically minute, many have counter-argued that the authors had no way to fully understand the potential effects of their emotion manipulations in such a way that they could have meaningfully worked to mitigate harm.

A great example of ethically-sound and effective industry A/B testing was performed by Dr. Jeffrey Lin, a research scientist with Riot Games trying to better understand reports of “toxic chat” in the video game League of Legends. His team of scientists manipulated several features of the game’s chat system without (initial) player knowledge, eventually finding that one of the best ways to protect players from salty talk was to simply disable in-game chat features by default. The end result was a dramatic drop in offensive language, obscenity, and negative affect, even while the actual chat activity remained stable.

Why did their UX research get so much praise, while Facebook got so much poison? Similar to the Facebook study, data was collected and analyzed anonymously (raw chat data) and participants were not informed about the study. Similar to the Facebook study, Lin’s team was interested in emotions from technology usage (in fact, both studies dealt with the same “emotional contagion” effect). However, unlike the Facebook study, Lin’s work did not expose participants to negative effects beyond those already existing in the game (i.e., “toxic talk”) but instead, randomly assigned some gamers to the “chat off” interface as a potential treatment for an observed problem in their product: negative play experiences.

For a UX research analog, consider how many A/B studies are done on the impact of color scheme on interface behaviors. UX researchers are often tasked with designing interfaces that might be more emotionally stimulating to users so that they might engage in a desired behavior. Many are inspired by color psychology, with recent work applying the theory to algorithms able to retrieve images based on the emotional content of a web page.

Fitting a hypothetical question back into Ute’s original research model, we might wonder about the ethics of an A/B study that intentionally presents a user interface designed to be frustrating, stressful, or emotionally negative. Some might argue that testing both “good” and “bad” experiences is necessary in order to have a complete understanding of UX, but I would contend that the purposeful exposure to a negative experience does little to advance UX, while it does a lot to frustrate users who might not be in a state of mind to handle it.

Usability testing with a one-way mirror

How can we be more ethical?

What can the active UX researcher take away from all of this? A long breath of relief. It is unlikely that any eventual fallout of the Facebook study (including a potential Federal Trade Commission investigation) will result in a death knell for corporate and organizational A/B testing.

However, this breath of relief – as with any contemplative effort – should be followed by a deep inhalation and a consideration of the “real” units of analysis in any UX research: individual people.

Let’s reconsider Ute’s dilemma from our introduction, but this time through the lens of a few questions that I recommend all UX researchers ask themselves when considering the ethics of their own work. Indeed, these are essentially the same questions I ask myself (and my institutions’ ethics boards ask of me) at the start of any research:

  1. Is the manipulation theoretically or logically justified?

    In scientific research, a research team often has to prepare a short literature review to explain the theory and logic behind their proposed manipulation. This is an essential step in the research process, as it provides the potential explanation for any observed effects. After all, what good is a positive A/B test if the researcher can’t give an explanation for the observed results? If Ute can’t produce a sound theoretical or logical explanation as to why she thinks visuals will be more engaging (although there is some data on the topic), then I might suggest that she needs to do more homework before conducting her study.

  2. Is a manipulation necessary for my research?

    As mentioned above, a key “tipping point” in the ethics debate around the Facebook study was the active manipulation of users’ news feeds. While experiments are often considered the “gold standard” of research, it is important to remember that they are not the only way to establish causality. In a famous example from 1968 (a U.S. presidential election year), scholars Donald Shaw and Maxwell McCombs demonstrated that the mass media’s coverage of election topics in July heavily influenced public opinion about the importance of those same topics in November. They did this using a cross-lagged correlational design: a simple design in which researchers take multiple measurements and compare their influence on each other across time. One way that Ute could get around the ethical dilemma of actively manipulating user profiles is to use a similar design—watching users’ natural behavior over a set period of time and looking for changes in user behavior as a result of (in Ute’s case) using more or fewer photos in profile posts. A toy sketch of this design appears just after this list.

  3. Could the manipulation be potentially harmful in any way?

    Once a manipulation has been logically justified and considered necessary for addressing a UX researcher’s burning question, the project still isn’t ready for the green light until it can arguably pass the most important scrutiny: could the manipulation reasonably expose participants to any risks beyond what could be encountered in their normal usage of a site or platform? For Ute’s question, it might seem harmless enough to add or hide a few selfies on randomly selected user profiles. However, media psychologists suggest that selfies are a key component for identity expression, and we might question the extent to which Ute’s research proposal would disrupt these users’ online experiences. To some extent, the minimization of harm is very much related to having a clear understanding of the mechanisms behind a study (the first question in our list).

  4. How might our users feel about being studied?

    The first three questions deal more with planning and implementing a UX research project, but there is a final important ethical consideration: the user experience in the study itself. Oftentimes in psychology experiments, researchers will conduct an exit survey where they (a) explain to study participants the purpose of the study, (b) debrief them about the mechanics of the study manipulations, (c) provide participants a chance to comment on the study and (d) ask them to offer oral or written consent, allowing the user’s data to be included in the final research report. While not always practical, such a practice can go a long way in making users feel included in the research process.

    In addition, these interviews can go a long way in providing qualitative data that might explain larger data abnormalities (in the business, we refer to this as mixed methods research). In general, chances are that if a UX research team doesn’t feel comfortable informing users about their role in a study, then they shouldn’t be conducting the study in the first place.
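
To illustrate the cross-lagged design mentioned in question 2, here is a toy sketch. The data is entirely synthetic and the variable names are invented for Ute’s scenario; this is not the McCombs and Shaw analysis itself, just the core comparison their design rests on.

```python
import numpy as np

# Toy cross-lagged correlation sketch with synthetic data: the same 200
# users measured at two points in time. All numbers here are invented.
rng = np.random.default_rng(0)
photos_t1 = rng.poisson(3, 200)                        # photos per post, time 1
friends_t2 = photos_t1 * 1.5 + rng.normal(0, 2, 200)   # friend growth, time 2
friends_t1 = rng.poisson(5, 200)
photos_t2 = rng.normal(3, 1, 200)

# Compare the two "lagged" directions: does the earlier variable predict
# the later one more strongly one way than the other?
r_photos_to_friends = np.corrcoef(photos_t1, friends_t2)[0, 1]
r_friends_to_photos = np.corrcoef(friends_t1, photos_t2)[0, 1]

print(f"photos(t1) -> friends(t2): r = {r_photos_to_friends:+.2f}")
print(f"friends(t1) -> photos(t2): r = {r_friends_to_photos:+.2f}")
# A clear asymmetry (here photos -> friends, by construction) is evidence
# about the direction of influence, with no manipulation of live users.
```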

While intensive ethics training might not be practical, it wouldn’t hurt to at least consider the impact of the research beyond the data. Taking a more critical eye to the possible impact of A/B testing on users will not only result in more compassionate studies, but more compelling and effective results to boot.

The post The Ethics of UX Research appeared first on UX Booth.


The UX Booth

Research for the Right Audience

Before working in UX, I taught composition and technical writing at a university in Atlanta. Every semester, I tweaked and tailored my teaching methods and materials to each unique group of students: I brought in YouTube videos to illustrate concepts, I scoured the web for readings that related directly to my students’ majors or interests, I drew charts and edited examples on the board, and I met with students individually when group discussions and class time just weren’t enough. Ultimately, I understood that if I couldn’t effectively communicate the course information to my audience of students in a way they would understand, I was wasting both their time and mine.

In some ways, teaching is very much like UX research. We are first responsible for learning our “subject matter”: we gather information about products, analyze it, and make meaning from it. Then we share our understanding with an audience, to “teach” them. Of course, our audience isn’t students, but designers, developers, writers, marketers, and stakeholders who (ideally) use it to make better products or services. Unfortunately, though, we can become so focused on gathering research that we don’t focus enough attention on effectively sharing our research. What we may not realize is that as researchers, we can only influence our users’ experiences with our products if we effectively communicate in a way that influences action and change.

For many of us, the immediate challenge is figuring out how to maneuver through the process of research collection and analysis. What are the best research methods? How do I identify the right participants? How can I gather information more quickly? How do I organize and wade through all this information? But this focus on gathering and making sense of our findings is only half of a user researcher’s job. Just as with teaching, our hard work and knowledge of our subject matter is worthless if we can’t effectively communicate what we’ve learned.

It starts with audience

As a teacher, I focused my lessons around how my students learned best. In user experience, we focus our product design and content around what the user needs. But as user researchers, our audience is not the end user—it’s our colleagues.

Creating research deliverables without keeping our colleagues specifically in mind is kind of like trying to design for users without knowing anything about them. Each person and department in our organizations is different; they have different goals, different needs, and different communication styles. When we learn, for example, that numbers and percentages appeal to certain people and rich narratives appeal to others, or that reports work fine for some people and small group conversations work better for others, we can communicate our research findings more persuasively.

As a teacher, I didn’t conduct any formal research about my students. I didn’t take a poll asking about their preferred teaching method or give them a quiz to determine their learning style. I simply observed them and made the most of the hours I spent with them each week. As I became familiar with the makeup and dynamics of each class, I identified trends about their learning preferences and needs: some students were visual and needed to see multiple in-class examples, others had to verbally talk through tricky concepts, and a few seemed to learn perfectly fine just reading and applying material from the course texts. All students benefited from repetition and working with the same material in several different ways. I tested out new teaching techniques, discarding the ones that didn’t seem effective for that particular group and repeating those that were.

The MailChimp office

In a similar way, getting to know the people we work with also helps us identify the communication styles that appeal to them and the learning styles that are most effective for them. At MailChimp, a mid-sized software development company, one of the challenges our research team faces is figuring out how to share information among over 250 people, across several different departments. We’ve done a lot of experimenting over the past few years to figure out ways to keep research interesting and engaging. As we experiment, we pay attention to how our colleagues receive information and respond, so we can tweak and adjust how we communicate research in the future. We try to mix narratives with statistics, images with text, and videos with reports or company-wide presentations—all in an effort to make the research engaging and accessible.

Communicating UX research

Through the relationships we’ve established and continue to cultivate across every team at MailChimp, the UX research team makes a conscious effort to understand how our colleagues learn and process information. With that in mind, we experiment with different methods for sharing our research findings with the rest of the company. Every organization is different, so our methods may not be a perfect fit everywhere, but getting a sense of what works for our team at MailChimp may spark new ideas for other UX teams.

Reports

Reports are often what first come to mind when we think about communicating research results—but that doesn’t make them the best choice. Crammed full of valuable information, reports are often fated to a quick death, left unread on the corner of someone’s desk or filed away and forgotten in a metal cabinet. At MailChimp, we mitigate this risk by keeping reports short (usually under 20 pages). We also share them through Google Drive, which allows us to keep a running dialogue between folks in other departments through commenting. While our reports do take on a formal structure with the standard sections like Introduction, Executive Summary, Methodology, etc., we keep them as conversational and non-academic as possible. This both fits our company’s somewhat unique brand and image and also makes our reports less painful for our colleagues to read!

A few folks around the MailChimp office truly love digging into reports, but we know that many others cringe at the thought of reading a long-form document. While reports continue to live on as a familiar method for presenting research, we’d much prefer to experiment with other formats that are more visual and engaging.

Posters

Posters are a visually interesting way to present high-level information that can be taken in and digested at a glance. Last year, for example, MailChimp created posters of our customer personas and hung them near a spot in our offices that saw the most foot traffic—our espresso machine.

Our intention for these persona posters wasn’t to communicate everything we knew about personas—we had a report and a company-wide Coffee Hour presentation for that. Instead, we wanted something that would spark people’s interest and make them curious. The posters did just that! They prompted conversation, questions, and (most importantly) ideas. People began thinking of our customers in different ways: our marketing team brainstormed how they could reach out to different types of customers, our quality assurance team started structuring their testing differently, and support categorized customer issues in new ways. We count that as a major accomplishment!

(Bonus: Posters can be easily moved to different spots throughout an office to keep things fresh.)

Mini-documentary Research Videos

We love talking to our customers over Skype or GoToMeeting, and we love getting customer feedback from survey responses and emails, but nothing helps us understand our customers better than visiting them where they work and watching how they conduct business. We learn so much from these visits, and we want to share this “insider’s look” with the rest of our company. With permission, we take videos and pictures of our customers’ workspaces, and we record our interviews. After we get home, we go through all of that media for salient bits of information and then edit it down into short, mini-documentary style videos that we share with the rest of the company.

Videos help us turn our research into a narrative, a living story. With videos, customers aren’t just descriptions on a page; they’re living, breathing human beings with unique circumstances and environments. Sure, we could tell our company that some customers work in very noisy, distracting environments. But it’s much more effective to show them by letting them see a brief clip of people scurrying around an office or allowing them to strain for a few seconds to understand an interview that’s drowned out by a loud conversation that was taking place next door.

Because each mini-documentary is focused on just one customer, they are effective supplements to other communication forms, like reports or presentations. Videos help provide context to statistics by attaching faces or specific use cases to broad patterns and trends.

Usability Lunches

We work with an incredible group of user-focused designers and developers at MailChimp, but nothing sharpens that focus and motivates action like watching someone struggle with our app during a live usability test. We don’t have a fancy usability lab with two-way mirrors, but we’ve found that we don’t really need that. We usually just order in lunch, gather a group of about twenty folks together, project our participant’s computer screen up on the wall, and have a facilitator in charge to keep things running. Sure, it might take a little extra time to find someone willing to participate in a usability test in front of a small audience, but it hasn’t been too difficult. They’ve always understood that we’re testing the app, not them.

Like the mini-documentaries, live usability tests really personalize the struggles our users come up against while using our app. After the test is over, it’s not unusual to overhear folks already working through solutions to the problems they observed. These lunches have gone over so well that we’ve been asked to schedule them more regularly.

Coffee Hour Presentations


Every Friday morning MailChimp hosts a Coffee Hour for the entire company. Usually we invite outside speakers to come and share things about technology, industry trends, and creativity, but several times a year the UX research team takes the floor and presents a summary of several months of research to the entire company. While the mini-documentaries and usability lunches have a very narrow focus on one individual user or company, we use Coffee Hours to share “big picture” research that highlights broad trends and patterns in app usage and customer behavior.

For example, in May the UX team hosted a Coffee Hour that summarized findings from our recent annual survey of over 18,000 customers. We shared important trends and pointed out areas of opportunity for future development or exploration. To avoid drowning our colleagues in statistics and numbers, we contextualized the quantitative data with specific quotes and use cases we encountered in our customer interviews and visits.

Internal Websites

A new idea we’ve been playing around with is creating web pages that lay out information in a more narrative format that we can easily share. With the help of our very talented Creative team, our goal is to create something that’s information-rich, visually interesting, and engaging. Of course, since this is user information and company research, we restrict IP access to only the MailChimp office.

Though ours aren’t quite as detailed, these sites have served as inspirations:

Crafting for “user” needs

User Research Meetings

In reality, there’s no perfect way to communicate research—so much depends on who we’re trying to reach and the message we’re trying to present. Part of our job, as UX researchers, is to become ambassadors, getting to know the folks working in other departments, learning what’s important and valuable to them, and assessing the best ways to communicate to them. When we think of our research as a product and our colleagues as our users, we can begin to craft our research to fit their needs and learning styles.

The post Research for the Right Audience appeared first on UX Booth.


The UX Booth

Google Analytics Tips for UX Research


Google Analytics can be an incredibly powerful tool for optimising your site’s user experience. Here are my top tips for harnessing its full potential.

1. Always have a goal

Google Analytics has continually evolved since it was originally released, and now offers a mind-boggling amount of information about your site visitors. This is a double-edged sword though, as one of the most common mistakes I see analysts making is getting lost in a labyrinth of data without any kind of clear goal or direction.

Sure, you might find some interesting tidbits just by poking around through the various views and filters, but you really need to set some study goals before diving in. I find it’s best to work with stakeholders in advance of the study and make a list of all of the questions you want answered.

Some examples might include:

  • What content is trending right now?
  • What are people doing on the homepage?
  • Are people reading a specific piece of content?
  • Do mobile users exhibit different kinds of behaviour to desktop users?

The questions that you’ll set out to answer will obviously depend on the unique needs of your business, but having goals ensures that you stay productive in gathering meaningful, relevant data.

2. Make all insights actionable

For an audit to be productive it’s important to provide actions along with data. The information we’re collecting in analytics isn’t going to be very useful if it doesn’t drive us to do something in response. Usually an action comes naturally if solid goals are set at the beginning of the study, but it’s still important that we don’t just collect data for data’s sake.

An example, then. If we’re checking to see if a specific piece of content is being read and we find that it isn’t – what do we do with that insight?

In some cases the improvements might be clear, such as rethinking the site structure, making some copy tweaks or promoting the content more prominently on the homepage. Other times further usability or A/B testing may be required. Either way these are all valid actions, and they all ensure that we’re doing something productive with the data we’re collecting.

3. Prioritise your actions

When presenting the findings of a study to your stakeholders, make sure that the resulting actions are prioritised. In my experience the prioritisation of a study finding is mainly affected by two factors:

  • Impact – Is it going to marginally improve content, or will it fix a huge usability issue that is costing us conversions?
  • Effort – Is this a tiny content tweak that can be made in the CMS, or the introduction of a whole new site feature that needs building from scratch?

If you were to plot these actions on a hypothetical graph, you’d see the low-hanging fruit in one corner – this is the stuff that you should be addressing right now. The opposite corner is still relevant, but might be better considered as longer term for the next larger-scale site redesign.

Graph of solution impact vs effort
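
If a visual plot is overkill, the same triage can be done in a few lines of code. A minimal sketch: the findings, the 1-5 scoring scale and the "do now" threshold are all invented for illustration.

```python
# Hypothetical study findings, each scored 1-5 by the team for impact
# and effort. The scale and the "do now" threshold are arbitrary choices.
actions = [
    {"name": "Fix broken checkout button", "impact": 5, "effort": 1},
    {"name": "Rewrite product page copy",  "impact": 2, "effort": 2},
    {"name": "Rebuild site navigation",    "impact": 4, "effort": 5},
]

for action in actions:
    action["score"] = action["impact"] / action["effort"]  # value per unit effort

for action in sorted(actions, key=lambda a: a["score"], reverse=True):
    bucket = "do now" if action["score"] >= 2 else "longer term"
    print(f'{action["name"]:28} impact={action["impact"]} '
          f'effort={action["effort"]} -> {bucket}')
```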

4. Compare historical trends (where relevant)

A lot of insights can be affected by the time that data was extracted. Google Analytics gives you the ability to control the date range of your sample quite easily. This is pretty useful for benchmarking any data that we’re gathering, so take advantage of it whenever relevant.

Date range selector in Google Analytics

Sometimes stats can fluctuate over time, which might have something to do with an outside event. For example, the traffic or behaviours of a museum website could change during the school holidays, giving a very different view of the data regarding demographics and content interests.

A good way to keep track of this is by using annotations. Analytics provides the capability to add annotations onto a certain date, so that you can highlight where these events might be occurring and affecting your data. You’ll also want to include other internal factors that could cause data fluctuations, such as the site going down for maintenance or a sale taking place.

5. Assign a value to goals

Google Analytics gives admins the ability to create on-site goals, and assign a monetary value to their completion. To go back to our museum example, if we know that the average order value for tickets is around $20, we can infer that this is the value of a visitor completing our ticket booking form.

Goal values therefore give a much clearer impression of how much money things like bounce and exit rates are costing us. This is a great way to sell proposed actions to your stakeholders, and give them a much better idea of the return on investment they’ll get for implementing the proposed change.
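
The arithmetic behind that pitch is simple enough to sketch. Everything here except the $20 goal value from the museum example is invented for illustration:

```python
# Back-of-envelope cost of a leaky booking funnel. The goal value comes
# from the $20 average ticket order above; traffic and exit rate are
# hypothetical.
GOAL_VALUE = 20.00           # average order value assigned to the goal
monthly_sessions = 10_000    # hypothetical sessions reaching the form
exit_rate = 0.35             # share of those sessions abandoning it

lost_revenue = monthly_sessions * exit_rate * GOAL_VALUE
print(f"Estimated revenue lost to form exits: ${lost_revenue:,.0f}/month")

# A change that cut the exit rate from 35% to 30% would be "worth"
# 10_000 * 0.05 * 20 = $10,000 per month in this scenario.
```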

6. Filter out your internal traffic

This is a really important setting in your analytics that often gets forgotten. Internal stakeholders who aren’t representative of your target audience are often browsing the site for content curation or reference purposes, skewing your data. After all, who can resist the temptation to go and have a look at that new content they were so proud of writing?

You can prevent this by setting up a filter in Google Analytics to exclude all traffic from a certain IP address (namely your office or place of work). It’s important to note that these filters aren’t retroactive, so I’d recommend this be the first thing you look into when setting up a new Google Analytics install.

Filtering internal traffic in Google Analytics

Also remember that sometimes internal stakeholders can actually be a valid audience for your site. For example a very large corporation might communicate with existing employees through the .com website. In these cases it might be better to treat internal audiences as an advanced data segment rather than excluding them from the analytics altogether.

7. Use advanced segments

Speaking of advanced segments, they’re incredibly useful and should be used for answering all of those tricky audience-specific questions you’ll be setting. An advanced segment is essentially a permanent audience filter that can be applied to any analytics view. They’re absolutely fantastic for directly comparing the habits of two different user groups, and you can be as specific as you need to be when creating them.

In the example below, I’ve created an advanced segment for all elderly visitors who have made purchases using older versions of Internet Explorer.

Creating an advanced segment in Google Analytics

Obviously it’s rarely necessary to be that specific in your segmentation, but it really demonstrates how complex Google Analytics has gotten. Just remember to always be asking questions, and don’t go creating segments unless they’re going to provide those answers.

8. Leverage search analytics

Site search analytics are an extremely interesting and often overlooked way to see exactly what people are looking for on your site. They might seem like a no-brainer but I’d say about 90% of the analytics installs I’ve looked at didn’t have this set up – and it only takes about 30 seconds!

Search analytics are especially useful since they can reveal what users might be after that you haven’t accommodated in your navigation or on your homepage. It might also bring to light that your visitors are using different words or terms than those you might expect, prompting changes to your navigation.

This is usually one of the first things I look at when doing a site audit, as it’s a very quick way to identify deficiencies in the site navigation or gaps in content.
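
Once search tracking is enabled, the exported search-terms report is easy to mine. A minimal sketch, assuming a hypothetical CSV export whose column names mirror the Analytics report (adjust both to your own export):

```python
import pandas as pd

# Hypothetical CSV export of the Site Search > Search Terms report.
# The filename and column names are assumptions, not a guaranteed schema.
searches = pd.read_csv("site_search_terms.csv")

# The most-searched terms are candidates for navigation labels, homepage
# promotion, or entirely new content.
top_terms = (
    searches.groupby("Search Term")["Total Unique Searches"]
    .sum()
    .sort_values(ascending=False)
    .head(20)
)
print(top_terms)
```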

9. Use dashboards for frequent reports

A lot of our audits might need to be repeated regularly. For example, it might be useful to provide stakeholders with a breakdown of top content or mobile device usage that is updated every month. The concept of an analytics dashboard isn’t anything new, but Google has made this extremely easy with the recent introduction of in-built dashboards in Google Analytics. We no longer have to spend time mucking around with the API and an Excel spreadsheet; our dashboards can now be made in the application itself.

A custom dashboard in Google Analytics

As per the previous advice in this post, try to structure your dashboards around answering questions instead of just providing reams of data. Check out this article from Econsultancy for some great examples of dashboards.

10. Avoid analysis paralysis

This is a simple one, but it’s probably the most important point of all.

Web analytics tools have provided us with an amazing opportunity to get real user insights quickly and easily. But data without action isn’t any good to anyone. Always be actioning your insights; otherwise there isn’t much point in gathering them in the first place.


The post Google Analytics Tips for UX Research appeared first on Speckyboy Design Magazine.


Speckyboy Design Magazine

The Hidden Benefits of Remote Research

Remote user research is often considered a last resort, suitable only when no other options are available. This week, Kathleen Asjes shows us situations where remote research may actually be the preferable option.

The post The Hidden Benefits of Remote Research appeared first on UX Booth.


The UX Booth