Attention, Control, and Learning - Physical Information - Understanding Context: Environment, Language, and Information Architecture (2014)

Part II. Physical Information

Chapter 5. Attention, Control, and Learning

Never memorize something that you can look up.

ALBERT EINSTEIN

A Spectrum of Conscious Attention

CONTEXT IS A FUNCTION OF UNDERSTANDING THE ENVIRONMENT, which involves our consciousness. To what degree, however, are we really conscious of our environment, or even our consciousness? We like to think we’re logical, rational beings that take action mainly out of higher-order thought. Yet, as is explored in Chapter 4, our bodies do a lot of the thinking for us. It turns out that the environment is also responsible for much of our decision-making, attention, and learning. Embodied cognition is part of a general trend in the past few decades in which new schools of thought are questioning long-held assumptions that go back at least as far as René Descartes’ supposition that “I think, therefore I am.”

Research has shown that much of our daily activity is driven by deep, preconscious impulses and primitive-brain emotions rather than logical, conscious analysis and decision-making. In Descartes’ Error: Emotion, Reason, and the Human Brain (G.P. Putnam’s Sons), neurologist Antonio Damasio shows how mind, body, reason, and emotion all work as a single system rather than separate entities. In fact, reason is an outgrowth of emotion; it is crippled without an emotional foundation to drive our decisions.[69]

In another major work on the subject, Thinking, Fast and Slow (Farrar, Straus and Giroux), psychologist and economist Daniel Kahneman shows how behavior exhibits two sorts of consciousness at work, which he refers to as System 1 and System 2. System 1 is subconscious and emotionally driven; it works fast and automatically; it’s frequently engaged and relies on stereotype. System 2 is conscious, working slowly with purposeful effort; it is engaged less frequently (in comparison to System 1), employing logical calculation.[70] A related distinction from behavioral economics calls these systems “Automatic” and “Reflective.”[71]

There are other models exploring these different levels of consciousness, some of them breaking it down into more layers. One is Don Norman’s three-level model of “Visceral, Behavioral, and Reflective,” in which the first two are unconscious, and reflective is conscious.[72] Another, from the information sciences, is Marcia Bates’ quadrant model for information seeking, in which one axis is Active/Passive and another is Directed/Undirected.[73] I mention these models because they all provide equally useful perspectives on how much user action occurs without the same level of awareness or conscious thought. Don Norman stresses that “design must take place at all levels.”[74]

Here, I offer a simple model influenced by these others. Instead of presenting stages or levels, this model shows how these states of mind are not cleanly separated but instead function on a spectrum of conscious attention between Explicit and Tacit, as depicted in Figure 5-1.

Explicit is related to terms from other models such as conscious, deliberate, reflective, and System 2. It’s a state in which we are reflecting, self-aware, and consciously thinking through our actions or considering the meanings in our environment.

Figure 5-1. Explicit and tacit, along a spectrum

Tacit is related to other terms, such as unconscious, intuitive, automatic, and System 1. It’s a state in which we take action implicitly without reflection, driven by unconscious impulse, resonance with the environment, and by habit or convention.

Again, these are not binary categories. They are part of a gradual continuum; and we can’t count on someone being on just one side or the other for everything they’re doing—although satisficing through the loop-of-least-resistance principle means that people do as much as possible as far toward “Tacit” in this spectrum as they can. We evolved to conserve energy, and making conscious, explicit decisions burns a lot of fuel. In fact, research has shown we can suffer mightily from something called decision fatigue.[75] Our ability to make decisions deteriorates after too much deliberation, causing us to make worse or more impulsive decisions from the more tacit level of consciousness, even when we should be explicitly concentrating.

This spectrum also aligns with the Explicit/Tacit model for knowledge first described by Hungarian chemist and philosopher Michael Polanyi in the mid-twentieth century. Tacit knowledge is that which is hard or impossible to communicate through explicit means, such as written instructions. In Polanyi’s terms, “We know more than we can tell.”[76] Riding a bicycle can’t be learned from a book; the body has to do the physical work of learning. A softball pitcher can’t explain how she pitches in explicit terms; the knowledge is essentially embodied in the act of pitching. However, even tasks that aren’t so physical are often tacitly driven. Language itself is eventually used tacitly after we learn it to the point of fluency.

Context depends heavily on what level of conscious attention is being demanded of the agent (perceiver, user, person, and so on). Anything that doesn’t accurately fit one’s unconscious, tacit habits and conventions must be explicitly attended to and learned, or else it runs the risk of tricking the perceiver into an unintended consequence.

When we drive on a road at night, we assume by habit and convention that the road has enough friction to keep us safely moving forward, unless we detect something about the road’s surface that would indicate otherwise. That’s why the phenomenon called “black ice” is so dangerous: it appears to be one sort of structure when it actually is the opposite: a slick, frictionless surface that doesn’t reflect enough light to alert us to its presence, showing us only the dark asphalt underneath. When we see a variation in the road, we attend to it, slow down, and drive more carefully. When we don’t, we act tacitly, unconsciously.

As designers of environments, one of the biggest risks we run is putting black ice in front of people; we inadvertently trick them into thinking the environment affords one thing when it actually affords something else, possibly to the user’s detriment. Facebook’s Beacon created a wormhole of information, leaking personal actions into actively published feeds. By being too subtle (and in some ways, just confusing) about how the system indicated its actions, Beacon created a sort of black ice that wasn’t perceived for what it would actually do. The few users who carefully paid explicit attention to it noticed what it was doing; but we can’t expect users to do the digital equivalent of checking every inch of road for ice.

Environmental Control

Consciousness is something we think of as internal to the individual, but the external environment plays a powerful role in consciousness and behavior, as well. This idea was formulated in 1926 by psychologist Kurt Lewin, who created what is now called Lewin’s Equation: B=f(P,E). It’s a heuristic formula (rather than a mathematical equation) that states, “Behavior is a function of the person and his or her environment.” It was controversial at the time it was published because it emphasized the environment of the person in the moment of perception over the learned experience of the past. It has since become a foundational idea in the field of social psychology.[77] Although this equation predates the work of ecological psychology theorist James J. Gibson by quite a few years, the spirit of it is in line with Gibson’s idea of the coupled relationship between the perceiver and the environment.

To a significant degree, context controls conduct. We like to think we actively decide our every action; isn’t that what free will is all about? Yet, it so happens that the environment’s structural constraints determine much of our daily behavior.

This doesn’t mean that we have no agency whatsoever; a perceiver detects information in the environment and then has the ability to decide what to do about it, controlling the perceiver’s motion.[78] But, environmental information is central to the very origin of the whole perceptual system, and it still exerts its structural pressures on our every act. Moreover, given that our cognition is bound up in action, our environment’s constraints can shape how we think, as well. Certainly, we modern humans are able to control many of our behaviors and teach our bodies new ways to respond to environmental information, but we’re doing it on top of a core organism—from limbs to limbic system—that was formed by the environment in which we evolved.[79]

Nature is not the only force exerting this pressure. Technologies also alter our perception of our bodies and their abilities. In experiments during which full-sized adults were put into an apparatus where they perceived themselves as having a virtual child’s body, the adults started to perceive the structures around them as a child might, including whether they could fit through openings or climb onto surfaces. The new body and the environment strongly influenced the adults’ choices and behaviors, and even their emotions.[80] In a comparison between a room suited for adults and one more suited to children, the study found that “you see the world bigger, have more childlike attributes, and prefer a [child-suited] environment rather than an adult one.”[81] The adults’ brains didn’t change; they were just estimating the world using a different sort of body than they were used to.

Some might argue that these adaptations mean the brain’s “schema” or internal representation of the body has plasticity, meaning that it can adapt and change.[82] But an embodied argument could be more straightforward: just as stairs have intrinsic information that doesn’t require a mental model for understanding their use, the brain doesn’t need a representation of the body and its capabilities. It already has an actual body present, so the body can act as its own image. Thus, if the brain is given information that tricks it into thinking it has a different body, it uses the new one instead. If the “trick” is convincing enough that, even when moving and calibrating, the body can continue to believe the trick, it continues behaving accordingly.

A version of this adaptation affects us in all areas of life. The environment’s information is soaked up by our bodies and changes our habits. Even our devices and applications affect our behavior, because new or different powers give us a different sense of what we can and can’t do. When a desktop application starts remembering what documents we had open after we quit, we start opening the app to get back to those documents rather than hunting down the files on our hard drives. Gmail gave users a big Archive button and a more powerful way to search emails, and many users stopped sweating over filing away messages in folders. When our phones began remembering phone numbers as names, we stopped memorizing everyone’s digits.

Our brains might not need a map or schema because all the information is right there in the environment. Of course, past experience is part of this dynamic as well, and we will look at that as part of memory and learning; but we tend to underestimate how much present information can shape our understanding and action.

This “nudging” effect is explored in the field of behavioral economics, which is largely about how environment influences behavior in complex cultural systems. One study showed that in the United States, when citizens have to “opt in” to be an organ donor, only about one-third do so. But in Austria, 99 percent are donors, partly because their government enlists all citizens in the program by default, giving them instead the choice to opt out. In both situations, people have a choice, but because of satisficing, this nudge in one direction versus another makes a huge difference.[83] The environment makes the initial, hard decision for them. Behavioral economists call these sorts of policy structures choice architecture. But this isn’t architecture of stone or steel; it’s architecture of rules and communications made of language.
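The mechanics of a default nudge are simple enough to sketch in code. The numbers below are illustrative assumptions, not figures from the study: most people satisfice and keep whatever the default is, and only a minority deliberates.

```python
# Illustrative sketch of a default "nudge" in choice architecture.
# Assumption (not from the cited study): 40% of people deliberate
# about the choice, 80% of those deliberators want to enroll, and
# everyone else simply keeps the default.

def enrollment_rate(default_enrolled, deliberate=0.4, want_to_enroll=0.8):
    """Fraction of the population enrolled under a given default."""
    passive = 1.0 - deliberate            # satisficers: accept the default
    chosen = deliberate * want_to_enroll  # deliberators who actively enroll
    return passive + chosen if default_enrolled else chosen

opt_in = enrollment_rate(default_enrolled=False)   # people must act to join
opt_out = enrollment_rate(default_enrolled=True)   # people must act to leave
```

With identical preferences, the opt-in framing enrolls 32 percent and the opt-out framing 92 percent; the environment’s default does most of the deciding.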

We see environmental control in action in software environments, too. In the airport scenario in Chapter 1, when I assumed that my coworkers could see my travel schedule in my work calendar, it was because the affording information available in the interface didn’t specify that other people couldn’t see what I could see. The structures evident in the calendar app exerted control over what I perceived to be “my calendar.”

Likewise, when Facebook users were taken by surprise by Beacon publishing private information to public feeds, it was because the environment’s information didn’t adequately specify what was going on behind the scenes. Software can all too easily trick our perception into assuming our environment has a particular stable structure that isn’t really all that stable or universal.

When watching people use gadgets and software, we need to remember that the way they’re making use of their context is largely being determined by the structures available to them. Often, I have heard e-commerce clients complain that their customers are using the online shopping cart improperly, as a sort of wish-list, even when the site provides a separate wish-list function. Yet when you look at the environment neutrally, as a cluster of environmental structures, it becomes clear that Add to Cart is usually a much easier and quicker function to find and use than Add to Wish-List—the button tends to be more prominent, more available, and the “Cart” itself is always represented somewhere (normally as a concrete metaphor with a picture of a cart) regardless of where the user is shopping. Why wouldn’t the user make use of such an available, straightforward environmental structure over a less-available abstraction?

Part of what we will continue to explore is how these semantic constructs—whether the “choice architecture” of civic policy or the “information architecture” of software—aren’t merely metaphorical architecture, but real structure that exerts nudges and constraints on our behavior, similar to anything in the natural or built environment.

Memory, and Learning the Environment

Through all of this talk of perceiving environments and how the environment exerts control over our cognition, one might wonder, “Yeah, but what about memory? Don’t we remember things about the environment? And isn’t that a sort of image or representation we store in our brains? If we’re supposed to be designing context, doesn’t it need to account for what people remember from previous experience?”

In short, the answer is yes; memory is a crucial part of how people experience context. However, the complicating factor is, no, we can’t count on stable, fixed memory of our users in what we design. Because this is such a big issue for context, we should spend some time looking at it more thoroughly.

What is Memory?

The idea of “memory” is so deeply ingrained in our language and culture that it’s a bit of a shock to learn that there is no universally accepted science or model for how it works.[84] The way we retrieve knowledge from ourselves is still, in its details, largely unknown and the subject of much scientific research and debate.

The prevailing idea of memory is the storage metaphor. We assume memory must be a place in our heads—like a sort of database or file cabinet—where our brains store experiences and then pull them out when needed. Until about 20 or so years ago, even cognitive science assumed this to be accurate but has since acknowledged that memory is much more complicated.[85]

Still, the storage metaphor is the way we conventionally talk about memory, even though it’s terribly misleading. If our brains actually stored everything away like cans of soup in a cupboard, we should be much better at remembering than we actually are. Memory is untrustworthy, and seems to hang onto only certain things and not others, often with little apparent rhyme or reason. In one study from 2005, people in the United Kingdom were asked if they’d seen closed-circuit television footage of a well-publicized bus bombing. Eighty-four percent of the participants said they had—some of them providing elaborate details in response to questions—even though no such footage existed.[86] More recent research has shown that even those who we popularly think of as having “photographic memory” (actually called highly superior autobiographical memory) are nearly as unreliable as those considered to have normal memory.[87]

Of course, we know that we can recall some sort of information from our past, using neurochemical activity that makes it possible for our nervous systems to retain a kind of information about our environment and past experience.[88] Yet, in spite of all that modern science has at its disposal, “human memory remains a stunning enigma.”[89]

The question is, what do we need to know about how memory works to design appropriately for it, especially when it comes to the prior experience people bring to context?

Types of Memory

From traditional cognitive science, there are many different models for how memory works, most of them variations on similar themes. Figure 5-2 presents a diagram showing one version.

Figure 5-2. Various types of memory, related to the disciplines that tend to study them (source: Wikipedia)[90]

Such models have been built up over the years, based on the patterns researchers see in test-subjects’ behaviors, and in the little we can learn from watching energy and blood moving in their brains. A model like this can mislead us into thinking there are distinct areas of the brain that perform each of these functions. In actuality, it’s not so clear-cut.

An Embodied Perspective on Memory

Embodied cognition theorists tend to question a lot of the received wisdom about memory. J.J. Gibson criticized the idea of memory as a “muddle”—a sort of “catchall of past experience” that lacks real evidence. He argues against the theory that what we experience in the present is mostly assembled from memories of the past: “the doctrine that all awareness is memory except that of the present moment of time must be abandoned.”[91] Elsewhere, he points out that even the assumption that there is a clear distinction between present and past experience is somewhat of a fiction; perceiving and remembering are just two ways of looking at the same dynamic.[92]

Louise Barrett’s Beyond the Brain: How Body and Environment Shape Animal and Human Minds (Princeton University Press) shows how many seminal studies on memory—for example, Piaget’s “A not B” test regarding infant memory—have since been undermined by newer research that accounts for an embodied dynamic.[93] Barrett explains that “memory is not a ‘thing’ that an animal either does or doesn’t have inside its head, but a property of the whole animal-environment nexus; or, to put it another way, it is the means by which we can coordinate our behavior in ways that make it similar to our past experiences.”[94] From this perspective, memory is really accumulated impressions from environmental perception. It’s not something that begins in the mind, where some computer-like entity is recording sensory perception for later retrieval, or processing symbols and categories; rather it’s built up from what our bodies-plus-brains retain from our ongoing activity in the perception-action loop.

Of course, if our perception didn’t retain any information at all, we’d be poorly suited for survival. There is absolutely some form of retention and recall going on, and that can mean we have brain-centered experiences, thoughts, recollection.

Our friend J.J. Gibson allowed that we can have “internal loops more or less contained within the nervous system. There is no doubt but what the brain alone can generate is experience of a sort.”[95] The difference between this allowance and the mainstream conception of memory is that embodied cognition flips the model, making the body the center of how memory works. The internal experience of remembering is more like a byproduct of aggregated, residual perception. Keep adding up these residual perceptions, and eventually you have an internal life of a “mind” where prior perception and thought (what scientists call “off-line” experience) can be considered, inhabited, manipulated.[96] But it doesn’t begin in the mind—it begins in the body.

Learning and Remembering versus Memory

Memory is more verb than noun. It’s more useful to think of memory as a dynamic that emerges from many different cognitive systems, one that is always in process. We’re not accessing a memory so much as picking up perceptual experience and reconstructing what it means to us. Learning and memory are inseparable and are enmeshed with adaptive perception and action. As Eleanor Gibson succinctly put it, “We perceive to learn, as well as learn to perceive.”[97]

In fact, there is no truly stable memory to be retrieved, because the act of remembering actually changes the content of what is remembered, in a process called memory reconsolidation.[98] Each time we recall a past experience, we alter it in some way, influenced by current circumstances; when we recall the experience again later, it’s now the version that we reconstructed, interpreted yet again. It’s not unusual to remember something that happened to you only to find it happened in a different way, or even to some other person whose story you’ve heard over the years.[99]

Learning and Remembering are Entangled with Environment

Some memories of our past have more to do with photographs we’ve seen and stories we’ve heard from relatives than some original representation stored in a brain-cabinet.[100] We naturally interpret environmental cues as part of our actual memory, to the point that we can actually be fooled into thinking things happened to us that never did; for example, in one study, subjects were convinced they had ridden in a hot-air balloon because of manipulated photographs.[101] Our perception relies heavily on our current environment to inform what we think is true of past events.[102]

The structure of our physical environment is used by our brains to off-load some of the work of retaining prior experience, even when the content of memory isn’t about our surroundings. One recent study tested subjects in both virtual and physical connected-room environments, and found “subjects forgot more after walking through a doorway compared to moving the same distance across a room, suggesting that the doorway or ‘event boundary’ impedes one’s ability to retrieve thoughts or decisions made in a different room.” Additionally, returning to the original room after passing through several other rooms didn’t improve memory of the original information.[103] Memory doesn’t just sit on a shelf ready to be accurately accessed again; it’s always in flux, intermingled with our surroundings. Borrowing terms Don Norman often uses, “knowledge in the world” has a strong effect on whatever exists “in the head”—even what we think of as head-based knowledge.[104]

This makes sense, if we recall that our brains evolved to support our bodies, not the other way around. What else would memory have mainly evolved for other than recalling just enough about our surroundings to help us survive? Something like factual accuracy is an artificial idea we’ve invented in our culture. But organisms don’t separate fact from interpretation; they just retain what is needed to get by, without clear lines between invention, environment, and remembering.

In digital interfaces, this principle is still at work. When using a search engine such as Google, the way the environment responds to our actions tacitly teaches us how to use that environment. When Google changes its results to reflect your search habits, learning from how you search for information, it’s also simultaneously teaching you how to search Google, providing auto-suggested queries and prioritized results that create a sort of environmental feedback loop.[105]
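As a toy illustration of that feedback loop—not Google’s actual algorithm—a personalized suggestion mechanism can be as simple as ranking completions by the user’s own query history; every search then reshapes the suggestions that steer the next search. The class and its names below are hypothetical.

```python
from collections import Counter

class SuggestLoop:
    """Toy feedback loop: the engine learns from what you search for,
    and its ranked suggestions in turn steer what you search for next.
    This is an illustration, not how any real search engine works."""

    def __init__(self):
        self.history = Counter()  # query -> times this user chose it

    def record(self, query):
        """The environment 'learns' from the user's action."""
        self.history[query.lower()] += 1

    def suggest(self, prefix, limit=3):
        """...and the user is 'taught' by what the environment offers back."""
        matches = [q for q in self.history if q.startswith(prefix.lower())]
        return sorted(matches, key=lambda q: -self.history[q])[:limit]
```

Each accepted suggestion feeds back into the history, making that suggestion more likely to surface next time—the environmental loop the text describes.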

Environment, and Explicit versus Implicit Memory

Sometimes we consciously, purposefully work at remembering information and experiences. One might memorize a poem or tell oneself to remember to take out the trash tonight. One might also try hard to recall the name of a friend, or where one was on New Year’s Eve two years ago. This intentional act of consciously working to remember something is explicit memory. It can be something we worked to remember on purpose, or something we’ve simply retained without much effort but that we’re trying to pull up from the foggy depths of our minds and reconstruct in an explicit way.

Implicit (or in our model, “tacit”) memory is essentially the opposite: it’s the stuff we don’t have to think about intentionally. Recalling when a parent helped us learn how to ride a bike would be explicit memory. But implicit memory (specifically, procedural memory) would be how our bodies just know how to ride a bike from previous experience.

What is important for context is that both of these sorts of memory depend on environmental interaction. Most of what we remember in our environment is learned tacitly, through repeated exposure to patterns of affordance, through action. The procedural “muscle memory” we employ when riding a bike exists only because we made our bodies ride bikes enough in the past that the ability to calibrate our body position was ingrained in us through repeated, physical activity.

Other tacit learning can happen almost immediately if the experience causes a high spike in our fear or other emotional response. (This is a property involving the brain’s amygdala flooding the nervous system with hormones that mark the experience with sense-impressions of what the environment was like during the trauma.)[106] But, this is a highly unreliable memory resource when it comes to specific facts; for evolution, it has to be only accurate enough to keep us from traipsing accidentally into another lions’ den. It did not evolve to verify factually whether another place has lions, or exactly what the lions looked like, or that the eucalyptus you smelled nearby during your early lion encounter isn’t actually as dangerous as the lions themselves. These effects are blunt instruments that can actually have negative consequences; for example, they can cause us to react inappropriately to safe situations, which we can see manifest in post-traumatic stress disorders.

Explicit learning can result in accurately remembering a great deal of information, but it’s a special case, and it always involves purposefully re-exposing ourselves to information until it “sticks,” or using some environmentally tied mnemonic technique.

One example has to do with learning to type on a keyboard: how we have to explicitly think about where the keys are until we’ve done enough typing that we can do it by touch. A common argument goes that the knowledge of the keyboard has gone from our bodies into our heads. Saying “into our heads” might lead us to think there’s a sort of representational map of the keyboard in the typist’s brain, but it turns out that’s not the case. In a recent study, it was found that skilled touch typists averaging 72 words per minute were unable to map more than an average of about 15 keys when asked to do so outside of the act of typing. If asked to type something, they can hit the right keys just fine, but it’s their fingers that seem to “know” where to go. There’s no explicit, readily retrieved representation in brain-storage. The body satisficed; it went straight to an embodied facility that translates words into “fingers making letters appear” without going to the trouble of constructing a conceptual map.

Likewise, when we get a little stuck trying to recall a phone number, we tend to do one of two things: we try to say it aloud to ourselves in sequence, as if recalling what it feels like in our mouths and ears, or we reach for a phone to type it out, because our bodies seem to know which buttons to press (and in what order) better than our brains can remember the symbols alone.

Otherwise, we jot the number down someplace (on a napkin, a note, or the back of a hand). We use the environment to help us remember things all the time, even when we don’t realize it. Of course, we’re now a lot worse at remembering phone numbers because we seldom have to dial them with our fingers—we just tap a name in our phone’s contact list. As always, cognition satisfices.

What Does All This Mean for Design?

Regardless of the differences in one theoretical perspective or another, the overall lesson is clear: we can’t rely on an ability to invoke specific sorts of memory in users. We can’t assume they will accurately retain anything from prior experience, and we especially can’t expect them to explicitly memorize how to use a product. Even for the rare cases in which specialists are required to learn a complex system through repeated use, the system should do as much work as possible toward making its affordances clear without requiring memory. Perception satisfices, so it tacitly makes use of the environment around it directly as much as possible.

In Don Norman’s conceptual model of “knowledge in the head” versus “knowledge in the world,” he explains that we should always try to “provide meaningful structures...make memory unnecessary: put the required information in the world.”[107] The environment is such a major player in how our brains function that “everything works just fine unless the environment changes so that the combined knowledge (between head and world) is no longer sufficient: this can lead to havoc.”[108] If you’ve ever visited a country in which they drive on the opposite side of the road, or you’ve moved the furniture around in your bedroom only to bruise yourself on a chair in the dark until you get used to the new arrangement, you know about this havoc firsthand.

Of course, at a certain point of scale or complexity, it’s impossible to put all the knowledge in the world so that it can all be perceived at once. This is why we historically rely on extensive menus in software; users can uncover for themselves what actions are available without the screen being overwhelmed with buttons. It’s why an online retailer has to provide summary categories and search functions—you can’t see the entire inventory in one glance. And when software actions are happening beyond our perception, we simply don’t know about them unless the environment presents us with detectable information.

This is why one of the most complex things to design in a device such as a smartphone is the notifications capability. In my iPhone’s current iOS version, there are at least four different ways I can set various apps to alert me of events happening beyond my immediate view. We’ve created a world for ourselves in which we can’t perceive much (or most) of what matters to us without these notification mechanisms.

As Jakob Nielsen explains, “Learning is hard work, and users don’t want to do it. That’s why they learn as little as possible about your design and then stay at a low level of expertise for years. The learning curve flattens quickly and barely moves thereafter.”[109] With so much to learn, and such a low motivation and ability to learn it all, we have to rely more heavily on the conventions and implicit, structural affordances that users carry over from the physical world.

In the physical world, most important changes in the environment have perceivable signs that we learn to interpret: storm clouds or cold winds mean bad weather approaching; blooming flowers and longer days mean a warm season is coming; and if my neighbors can see what I’m doing in my house, it should be obvious to me that a window is uncovered or a wall has gone missing.

Software can disrupt these assumptions we’ve learned about how our environment works. When Beacon was launched, many users of Facebook had already become used to the structures of the platform as well as the structures implicit in how their browsers worked. If they were on a website in one browser window, it didn’t share places and objects with a different website in a separate browser window. The only constant was the browser itself, plus whatever plug-ins and things were part of its function. Beacon broke this environmental convention, disrupting expectations from past experience by creating a conduit that automatically gleaned information from one context and published it in another, without explicit approval from the user.

So, what makes an environment easier to learn often comes down to whether its affording structures meet the expectations of its inhabitants, or whether they do a good enough job of signaling disruptions of convention and teaching new expectations. Next, we’ll look at the building blocks of environments and how we perceive them, which will give us some ideas about how to create understandable environments with language and software.

THE PORT ELEVATOR SYSTEM

At a conference I attended in 2012, the other attendees and I encountered a new elevator system that the conference hotel had installed only a few months earlier.[110] Instead of calling for service by using a conventional set of Up and Down buttons, the PORT elevator system requires a guest to use a digital touch-screen to select a destination floor, as shown in Figure 5-3. The screen then displays which elevator the guest should use to get to that floor, requiring the guest to find that elevator and wait for it to arrive. Upon entering the elevator, the guest will find there are no floor-selection buttons inside. The elevator already knows the floors at which it should stop.

Technically, this is a brilliantly engineered system that corrects the inefficiencies of conventional elevator usage by calculating the logistics of which elevator will get each guest to his destination most quickly.
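The logistics calculation described above is generally known as “destination dispatch”: because riders declare their destinations up front, the system can group trips and pick the car that serves each rider with the least added travel. The sketch below is a deliberately crude illustration of that idea; the function names and the floor-distance cost model are my own hypothetical simplifications, not Schindler’s actual algorithm.

```python
# A toy sketch of destination-dispatch assignment. Each car has a list of
# committed stops; a new rider (origin floor, destination floor) is assigned
# to whichever car's schedule grows the least.

def added_cost(stops, origin, dest):
    """Extra travel (in floors) if this car adds the rider's stops."""
    def travel(seq):
        return sum(abs(a - b) for a, b in zip(seq, seq[1:]))
    return travel(sorted(set(stops + [origin, dest]))) - travel(stops)

def assign_car(cars, origin, dest):
    """Pick the car (by name) with the smallest added cost, and book it."""
    best = min(cars, key=lambda name: added_cost(cars[name], origin, dest))
    cars[best] = sorted(set(cars[best] + [origin, dest]))
    return best

# Three cars, each with a current list of committed stops.
cars = {"A": [1, 9], "B": [1, 3, 4], "C": [1]}
print(assign_car(cars, origin=1, dest=5))  # the least-detoured car
```

A real installation optimizes over far more than floor distance (wait times, car capacity, direction of travel), but the essential trade the passage describes is visible even here: the system gains efficiency precisely because it takes floor selection out of the rider’s hands.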

However, when attendees (including myself) encountered this system, there was widespread confusion and annoyance. Why?

People grow up learning how to use elevators in a particular way. You push a button to go up or down, watch for the first elevator that’s going in your direction to open its doors, get in, and then select your floor. These are rehearsed, bodily patterns of use that become ingrained in our behavior. That is, we off-load the “thinking” about elevator usage to bodily, passively enacted habit. Unfortunately, these ingrained behaviors severely break the intended scenario for using the PORT elevators.


Figure 5-3. Part of an instruction booklet from the Schindler elevator company, explaining how to use its new PORT elevator system

§ The touch-screen design assumes the guests will keep watching the screen to see which elevator they should use. But people are used to looking away immediately after pressing the up or down button, so they tend to look away in this case too—meaning they might never see which elevator they’ve been assigned.

§ People habitually step into whichever elevator opens first. In using the PORT system, however, chances are that the elevator that opens first or closest to you is actually not the elevator for your selected destination.

§ After entering the elevator, guests realize there’s no button panel and they have no control over floor choice. Even for people who follow the directions, discovering the lack of a button panel can be a surreal, upsetting surprise.

Throughout the event, we noticed hotel staff hovering around the elevators to explain them to guests—essentially acting as real-time translators between the unfamiliar system and people’s learned expectations.

The PORT system is an apt example of how an excellent engineering solution can go very wrong when it doesn’t take into account how people really behave in an environment. Remember the perception-action loop: as in any environment, people tend to act first and think later. Requiring them to think before acting in this context is a recipe for confusion.

This is another example of how environment controls action. It doesn’t mean that this new system is a failure; it just tricked its users by presenting affording information that they were used to perceiving and acting upon without thought, and then pulled the rug out from under those assumptions. Once people learn it as a new convention, it will make elevator experiences more efficient and pleasant for everyone. What it needs is an improved set of environmental structures to “nudge” people to stop and think explicitly as they learn the new system, before they use it improperly.[111]


[69] Damasio, Antonio. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Penguin Putnam, 1994.

[70] Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

[71] Thaler, Richard H., and Cass R. Sunstein. Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press, 2008.

[72] Norman, Don. The Design of Everyday Things: Revised and Expanded Edition. New York: Basic Books, 2013:56, Kindle edition.

[73] Bates, Marcia J. “Toward an Integrated Model of Information Seeking and Searching.” (Keynote Address, Fourth international Conference on Information Needs, Seeking and Use in Different Contexts. Lisbon, Portugal, September 11, 2002.) New Review of Information Behaviour Research 2002;3:1-15.

[74] Norman, Don. The Design of Everyday Things: Revised and Expanded Edition. New York: Basic Books, 2013:53, Kindle edition.

[75] Tierney, John. “Do You Suffer From Decision Fatigue?” New York Times Magazine. Retrieved August 23, 2011.

[76] Polanyi, Michael. The Tacit Dimension. Chicago: University of Chicago Press, 1996:4.

[77] http://en.wikipedia.org/wiki/Lewin’s_equation

[78] Gibson, J. J. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin, 1979:17.

[79] ———. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin, 1979:130.

[80] Banakou, Domna, Raphaela Groten, and Mel Slater. “Psychological and Cognitive Sciences: Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes.” Biological Sciences PNAS 2013;110(31):12846-51; published ahead of print July 15, 2013, doi:10.1073/pnas.1306779110.

[81] Hogenboom, Melissa. “Adults become more like children in a virtual world,” (http://bbc.in/1uuRqFI).

[82] Shokur, Solaiman, Joseph E O’Doherty, Jesse A Winans, Hannes Bleuler, Mikhail A Lebedev, and Miguel AL Nicolelis. “Expanding the primate body schema in sensorimotor cortex by virtual touches of an avatar.” Biological Sciences - Neuroscience PNAS 2013; published ahead of print August 26, 2013, doi:10.1073/pnas.1308459110.

[83] Thaler, Richard H., and Cass R. Sunstein. Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press, 2008:181.

[84] Schacter, Daniel. Searching for Memory: The Brain, the Mind, and the Past. New York: Basic Books, 1996.

[85] ———. Searching for Memory: The Brain, the Mind, and the Past. New York: Basic Books, 1996:4.

[86] McCall, Becky. “Memory surprisingly unreliable, study shows.” Cosmos magazine, September 15, 2008. http://bit.ly/1uDR42i

[87] Hayasaki, Erika. “How Many of Your Memories Are Fake?” The Atlantic (theatlantic.com) November 18, 2013.

[88] Wilson, Andrew. “What Does The Brain Do, Pt 2: The Fast Response System” Posted in Notes from Two Scientific Psychologists, August 2, 2011 (http://bit.ly/1Fs0TWF).

[89] Malone, Michael S. The Guardian of All Things: The Epic Story of Human Memory. New York: St. Martin’s Press, 2013:14.

[90] Wikimedia Commons: http://commons.wikimedia.org/wiki/File:Memory.gif

[91] Gibson, J. J. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin, 1979:202.

[92] ———. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin, 1979:253.

[93] Barrett, Louise. Beyond the Brain: How Body and Environment Shape Animal and Human Minds. Princeton, NJ: Princeton University Press, 2011:183, Kindle edition.

[94] Barrett, 2011:214, Kindle edition.

[95] Gibson, J. J. “On the relation between hallucination and perception.” Leonardo 1970;3:425-427. Quoted in Barrett, 2011:238.

[96] In cognitive studies, the terms “online” and “offline” refer to the difference between cognition about a situation that is at hand, in the present moment, versus cognition that’s about a situation not at hand and remembered from some earlier experience. Climbing a tree to get at some apples in higher branches would involve cognition “online.” Later, when away from the tree but planning how to get higher apples out of it the next time you’re there requires “off-line” cognition.

[97] Baranauckas, Carla. “Eleanor Gibson, 92, a Pioneer in Perception Studies, Is Dead.” New York Times January 4, 2003. Retrieved October 29, 2013.

[98] Alberini, Cristina M., ed. Memory Reconsolidation. Waltham, MA: Academic Press, 2013.

[99] http://www.radiolab.org/story/91569-memory-and-forgetting/

[100] Lindsay, D. S., L. Hagen, J. D. Read, K. A. Wade, and M. Garry. “True photographs and false memories.” Psychological Science 2004;15:149-154. doi:10.1111/j.0956-7976.2004.01503002.x.

[101] Wade, K. A., M. Garry, J. D. Read, and D. S. Lindsay. “A picture is worth a thousand lies: Using false photographs to create false childhood memories.” Psychonomic Bulletin & Review 2002;9:597-603. doi:10.3758/BF03196318.

[102] Gibson argues that “Information does not have to be stored in memory because it is always available” for pickup. Gibson, 1979:250.

[103] Guibert, Susan. “Walking through doorways causes forgetting, new research shows.” Notre Dame News November 16, 2011 (http://ntrda.me/1Fs0XFX).

[104] I should point out that Norman is not an embodied cognition theorist; I am (I think, accurately) appropriating some of his more embodiment-compatible ideas.

[105] Osberg, Molly. “Hug it out: can art and tech ever be friends?” The Verge (theverge.com) May 8, 2014.

[106] McGaugh, J. “Involvement of the amygdala in memory storage: interaction with other brain systems.” Proceedings of the National Academy of Sciences 1996. Available at http://www.pnas.org/content/93/24/13508.short.

[107] Norman, Don. The Design of Everyday Things: Revised and Expanded Edition. New York: Basic Books, 2013:100, Kindle edition.

[108] ———. The Design of Everyday Things: Revised and Expanded Edition. New York: Basic Books, 2013:79, Kindle edition.

[109] Nielsen, Jakob. “User Expertise Stagnates at Low Levels.” September 28, 2013 (http://www.nngroup.com/articles/stagnating-expertise/).

[110] “Schindler Installs PORT at Hyatt Regency New Orleans” schindler.com January 26, 2012 (http://bit.ly/1wg5F4s).

[111] Much to my delight, many months after first drafting this passage, I learned that Donald Norman’s new edition of The Design of Everyday Things also explores this elevator system (pp. 146-149). Norman comes to similar conclusions (though, as an engineer, he seems to find more to like about the system than I did as an annoyed conference attendee).