Context and the Changing Mobile Landscape

Marketers increasingly think about consumers in complex ways. In a changing digital landscape, the context in which consumers learn and shop influences what messages we deliver and how we deliver them. But we rarely define "context." It is one thing to design a usable app that conforms to human factors and cognitive requirements; it is quite another to design for an environment in which innumerable semi-autonomous devices mediate a swirl of information.

Physical Context

Physical context refers to the notion of infusing devices with a sense of "place." In other words, devices can distinguish the environments in which they "live" and react to them. But this is difficult. Mapping out longitude and latitude is one thing; reacting to features (political, natural, social, etc.) is much more problematic. Getting beyond the boundaries of identifiable borders and structures means coming to grips with "place."
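To make the gap concrete, here is a minimal sketch of the easy half of place-awareness: a radius check around a known coordinate. This is illustrative Python only; the store coordinates, the 100-meter threshold, and the haversine_m helper are assumptions for the example, not anyone's actual implementation.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two lat/lon points."""
        r = 6371000  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Hypothetical store location and device reading.
    store = (41.8827, -87.6233)
    device = (41.8830, -87.6229)

    # The coordinate question is trivial to answer ...
    if haversine_m(*store, *device) < 100:
        print("Device is within 100 m of the store")

    # ... but nothing here captures what the *place* is: a flagship store,
    # a competitor's doorway, a courthouse. That judgment is what a sense
    # of "place" actually demands.

The radius check is the solved problem; everything the closing comment gestures at is the unsolved one.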

Think of a mall. There are hundreds of stores, each with hundreds of devices. Your device now has to decode what information is relevant and how to deliver it. Which competing retailer apps get precedence over others? When you receive an offer, will the device "tell" other retailers so they can generate real-time counter offers? The digital landscape is continuous throughout the day, and getting design right means understanding the systems in which people operate.

Device Context

Just as various kinds of sensory apparatus (GPS receivers, proximity sensors, etc.) are the means by which mobile devices will become geographically aware, another class of sensors makes it possible for devices to become aware of each other. This presents a series of problems different from those of physical context.

We are on the verge of a world of zero-infrastructure networks that can spring up anywhere, anytime. Devices will exist in a constant state of discovery. Returning to the mall, imagine that you are with a friend whose device is communicating with yours. In the mall are a couple of thousand devices, all of which are discovering each other. What happens now? Assuming we've dealt with the problem of your device communicating with your friend's while blocking out the other two thousand, you still have several thousand potential "identities" that may hold useful information. How does a device decide what to manage without the user devoting significant time to configuring hundreds of variables?
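As a hedged sketch of the filtering half of that problem, imagine discovery as a stream of advertisements checked against an explicit trust list. The device IDs, the DiscoveredDevice structure, and the TRUSTED_IDS set below are all hypothetical; real proximity protocols such as Bluetooth LE differ in detail, but the shape of the problem is the same.

    from dataclasses import dataclass

    @dataclass
    class DiscoveredDevice:
        device_id: str   # hypothetical stable identifier
        name: str
        signal_dbm: int  # received signal strength

    # The user's explicit trust list: one friend out of thousands.
    TRUSTED_IDS = {"device-friend-anna"}

    def admit(discovered):
        """Keep only trusted devices; drop the ~2,000 strangers."""
        return [d for d in discovered if d.device_id in TRUSTED_IDS]

    mall = [
        DiscoveredDevice("device-friend-anna", "Anna's phone", -55),
        DiscoveredDevice("device-kiosk-0412", "Retail kiosk", -70),
        DiscoveredDevice("device-stranger-9", "Unknown", -80),
        # ...plus a couple of thousand more
    ]

    print(admit(mall))  # only Anna's phone survives the filter

An allowlist answers the easy question of who gets through. The harder question, which of the several thousand untrusted identities carry information worth surfacing, cannot be answered by a static list at all.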

Information Context

This is the realm of information architecture. Data no longer resides "in" our computers. Devices are extensions of the cloud and exist as something akin to perceptual prostheses. They exist to manipulate data in the same way a joystick allows us to handle the arm of a robot in a factory. This reflects a shift in how we use information, because all information is now transitory.

Storage issues are essentially removed from the equation. Content can leap from place to place and device to device in an instant. Content will be customizable and will reflect the human-application interaction rather than shaping it. Devices will also find themselves in the fourth kind of context, that of social interaction, with all its contingencies. Just as behavior is shaped by the moment, so too must apps and information adapt to it.

Socio-Cultural Context

Each person is shaped by a unique mix of contrasting cultures, languages, traditions, and worldviews. A cultural context may exist at levels as diverse as a workplace, a family, a building, a county, a continent, a hemisphere. Cultural context provides a framework for what "works" for each consumer in the world.

It is at this point that we gain a better perspective on what will and will not be accepted in the mobile universe. Take a beer pouring app that mimics the pouring of a beer when the device is tilted. It serves no direct function, and yet it has been successful because of the cultural needs to which it speaks (workplace breaks, male-to-male bonding, etc.). In another cultural setting, say Saudi Arabia, the same app reads very differently. Success lies in understanding the reasons behind consumers' beliefs and actions in these symbolic exchanges, and in the ability to code and decode those exchanges. Marketing mishaps come from a lack of comprehension.

So What?

Our great technological leaps forward have also produced more complexity, and with it a greater need to make sense of the insights we gather. Without a means of categorizing context, marketers will miss the trends that matter most. What to do?

  • Rethink the problem. Frequently, "the problem" is a facet of something else. For example, when researching an eBook, the problem to be solved isn't technology; it is understanding why people read different material in different contexts. It may be about displaying books as a means of gaining status. The point is that the problem as first seen may not be the problem at all.
  • Define the contexts. Doing so helps articulate the range of possibilities for observation. If the consumer behavior is drinking beer, for example, every context in which beer is purchased and consumed needs to be articulated.
  • Think through the sample. Who is the marketing targeting? What are the social circles that will shape the event? It isn't enough to define a demographic sample; you need to think in terms of cultural systems.
  • Make a plan that involves experiential information gathering, not just statistics. Develop a guide to navigate the data collection and a method for managing the data (everything is data). Don't just think about the questions to ask; include opportunities for observation and participation as well.
  • Head into the field. This is the heart of the process. Meaningful insights and moments of "truth" are slow to get at. Low-hanging fruit will be easy to spot, but the goal should be to find the deeper meanings. Because everything is data, from attitudes to artifacts, it is important to capture as much as possible.
  • Do the analysis. Analysis is the most difficult step, but also the most rewarding. The goal is to bring a deep understanding of cultural behavior to the analysis process. This goes beyond casual observation and gets at the underlying structures of why people do what they do.

The process is more time-consuming than traditional approaches, but it ultimately yields greater insight and reduces time and costs on the back end. The end result is that you create greater value for the client and for the company.

Anthropology and Usability: Getting Dirty

There are significant methodological and philosophical differences between ethnographic processes and laboratory-based processes in the product development cycle. All too frequently, proponents of these data collection methods are set at odds, with members on both sides pointing fingers and declaring the shortcomings of the methods in question. Methodological purity, ownership, and expertise are debated, with both ends of the spectrum becoming so engrossed in justifying themselves that the fundamental issue of product development is compromised: namely, will the product work in the broadest sense of the term? One side throws out accusations of a lack of measures and scientific rigor. The other levels accusations about the irrelevance of a sterile, contextually detached laboratory environment. At the end of the day, both sides make valid points, and the truth, such as it is, lies somewhere between the two extremes. As such, we suggest that rather than treating usability and exploratory work as separate projects, a mixed approach be used.

So why bridge methodological boundaries? Too frequently, final interface design and product planning begin after testing in a laboratory setting has yielded reliable, measurable data. The results often prove or disprove the functionality of a product and surface any errors that take place during task execution. Error and success rates are tabulated, and tweaks are made to the system in the hope of increasing performance and/or rooting out major problems that might delay product or site release and undermine user satisfaction. The problem is that while copious amounts of data are produced and legitimate design changes ensue, the results are not necessarily valid in a real-life context. The data are reliable in a controlled situation, but may not be valid when seen in context. It is perfectly possible to obtain perfect reliability with no validity when testing. Perfect validity, on the other hand, would assure perfect reliability, because every test observation would yield the complete and exact truth. Unfortunately, neither perfection nor quantifiable truth exists in the real world, at least as it relates to human performance. Reliable data must therefore be supported with valid data, which is best found through field research.
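A toy illustration of that distinction, with invented numbers: a miscalibrated stopwatch whose readings agree closely with one another (high reliability) while all missing the true task time (no validity).

    import random
    import statistics

    random.seed(1)
    TRUE_TASK_TIME = 30.0  # seconds; the real-world value (assumed)

    # A miscalibrated instrument: tightly clustered readings, all biased.
    readings = [TRUE_TASK_TIME + 8.0 + random.gauss(0, 0.2) for _ in range(20)]

    spread = statistics.stdev(readings)                # ~0.2 s: very reliable
    bias = statistics.mean(readings) - TRUE_TASK_TIME  # ~8 s: not valid

    print(f"spread of readings: {spread:.2f} s")
    print(f"mean error:         {bias:.2f} s")
    # The instrument repeats itself beautifully and is wrong every time.

Validity can only be checked against the real thing, which is why the argument above points to the field.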

Increasingly, people have turned to field observations as an effective way of checking validity. Often, an anthropologist or someone using the moniker of "ethnographer" enters the field and spends enough time with potential users to understand how environment and culture shape what they do. Ideally, these observations lead to product innovation and improved design. At this point, unfortunately, the field expert is dropped from the equation and the product or website moves forward with little cross-functional interaction. The UI experts take over, and the "scientists" take charge of ensuring the product meets measures that are often somewhat arbitrary. The "scientists" and the "humanists" do not work hand in hand to ensure the product works as it should in the hands of users going about their daily lives.

Often the divide stems from the argument that the lack of a controlled environment destroys the "scientific value" of research (a similar argument is made about the often small sample sizes), yet by its very nature qualitative research always involves a degree of subjectivity. To be fair, small performance changes are also given statistical relevance when they should not be. In fact, any and all research involves degrees of subjectivity and personal bias. We're not usually taught this epistemological reality by our professors when we learn our respective trades, but it is true nonetheless. Indeed, the history of science offers countless examples of hypothesis testing and discovery that would, by the rules of scientific method most people apply, be considered less than scientifically ideal. James Lind's discovery of the cure for scurvy and Henri Becquerel's discovery of radioactivity serve as two such examples: bad science from the standpoint of sample size and environmental control, brilliant science if you're one of the millions of people who have benefited from these discoveries.

The underlying problem is the assumption that testing can exist in a pure state and that testing should be pristine. If we miss the context, we usually overlook the real problem. A product may conform to every aspect of anthropometrics, ergonomics, and established principles of interface design. It may meet every requirement and have every feature potential consumers asked for or commented on during the various testing phases. You may get an improvement of a second in reaction time in a lab, but what if someone using the interface is chest-deep in mud while bullets fly overhead? Suddenly something that was well designed in a lab becomes useless because no one accounted for shaking hands, the decrease in computational skills under physical and psychological stress, or the fact that the user is lying on their belly as they work with the interface. Context, and how it impacts performance with a web application, software application, or any kind of UI, becomes of supreme importance, and knowing the right question to ask and the right action to measure become central to usability.
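And a hedged aside on the statistics point: with a large enough sample, even a trivial performance difference clears conventional significance thresholds. Every number below is fabricated for the sketch, and it assumes SciPy is available.

    import random
    from scipy import stats  # assumes SciPy is installed

    random.seed(7)

    # Two hypothetical interface variants; "b" is 0.1 s faster on average.
    a = [random.gauss(30.0, 1.0) for _ in range(5000)]
    b = [random.gauss(29.9, 1.0) for _ in range(5000)]

    t, p = stats.ttest_ind(a, b)
    print(f"p = {p:.6f}")  # comfortably below 0.05 at this sample size

    # Statistically "relevant", practically negligible: a 0.1 s gain says
    # nothing about use in mud, under stress, or on a crowded subway.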

So what do we do? We combine elements of ethnography and means-based testing, of course, documenting performance and the independent variables as part of the evaluation process. This means detaching ourselves from a fixation on controlled environments and the subconscious (sometimes conscious) belief that our job is to yield the same sort of material that would be used in designing, say, the structural integrity of the Space Shuttle. The reality is that most of what we design depends more on context and environment than on increasing performance speed by 1%. Consequently, for field usability to work, the first step is being honest about what we can do. A willingness to adapt to new or unfamiliar methodologies is one of the principal requirements for testing in the field, and should be a primary consideration when deciding which team members to involve directly.

The process begins with identifying the various contexts in which a product or UI will be put to use. This may involve taking the product into participants' homes and having them use it with all the external stresses going on around them. It may mean performing tasks as bullets fly overhead and sleep deprivation sets in. The point is to define the settings where use will take place, catalog the stresses and distractions, then learn how these impact performance, cognition, memory, etc. For example, if you're testing an electronic reading device such as the Kindle, it would make sense to test it on the subway or while people are lying in bed (and thus at an odd angle), because those are the situations in which most people read; the external variables are then included in the final analysis and recommendations. Does the position in bed influence the necessary lumens or button size? Do people physically shrink in on themselves when using public transportation, and how does this impact use? The idea is simply to test the product under the lived conditions in which it will find use.

Years ago I did testing on an interface to be used in combat. It worked well in the lab, but under combat conditions it was essentially useless. What seemed like minor issues dramatically changed the look, feel, and logic of the site. Is it possible to document every variable and context in which a product or application will see use? No. However, the bulk of these situations will be uncovered, and those that remain unaddressed frequently produce the same physiological and cognitive responses as the ones that were. Of course, we do not suggest forgoing measurement of success and failure, time on task, click path, or anything else. These are still fundamental to usability. We are simply advocating understanding how the situation shapes usability and designing with those variables in mind.

Once the initial test is done, we usually leave the product with the participant for about two weeks, then come back and run a different series of tests. This allows the testing team to measure learnability and gives participants time to catalog their experience with the product or application. During this period, participants are asked to document everything they can, not only about their interaction with the product, but also about what is going on in the environment around them. When the research team returns, participants walk us through the behavioral changes that have resulted from the product or interface. There are times when a client gets everything right in terms of usability, yet the user still rejects the product because it is too disruptive to their normal activities (or simply isn't relevant to their circumstances). In that case, you have to rethink what the product does and why.

Finally, there is the issue of delivering the data. Nine times out of ten, the reader is looking for information that is quite literal and instructional. Ambiguity and involved anecdotal descriptions are usually rejected in favor of what is more concrete. The struggle is how to provide this experience-near information, and it means doing more than providing numbers. Information should be broken down into a structure in which each "theme" is identifiable within the first sentence. More often than not, specific recommendations are preferred to implications, and they must be presented to the audience in concrete, usable ways. Contextual data, and its impact on use, need the same treatment.

A product or UI design's usability is only relevant once it is taken outside the lab. Rather than separating exploratory and testing processes into two activities with minimal influence on each other, a mixed field method should be used in most testing. In the final analysis, innovation and great design stem not from one methodological process, but from a combination of the two.