Understanding Context: Environment, Language, and Information Architecture (2014)

Part III. Semantic Information

Chapter 11. Making Things Make Sense

Thoughts exchanged by one and another are not the same in one room as in another.

LOUIS KAHN

Language and “Sensemaking”

WE CAN ACCOMPLISH A LOT OF PHYSICAL ACTIVITY WITHOUT HAVING TO CONSCIOUSLY MAKE EXPLICIT SENSE OF IT. We just do it. But sensemaking is a special sort of activity that brings another level of coherence with which we knit together our experiences, think about them, and understand them at a more abstract level.

When we consciously try to make sense of our experience, it is an expressly linguistic activity.[225] Like perception itself, language is enacted; it is something we “do.”[226] We communicate with each other to develop a mutual understanding of our shared environment. Likewise, as individuals, we engage in a dialogue with ourselves about the environment and our choices in it, putting the mirror of language in front of us to “reflect” on our actions.

The term sensemaking generally refers to how people gain meaning from experience. More specifically, it has been the term of art for several interdisciplinary streams of research and writing, starting in the 1970s, including human-computer interaction, organizational studies, and information science. Much of the academic work on sensemaking has been about how people close the gap between their learned experience and a newly encountered technology or corporate environment.

When we study nature and strive to understand its complexity, we use language to create and reflect on that knowledge. The same goes for human-made environments, which are largely made of language to begin with. As one seminal article on sensemaking puts it, “When we say that meanings materialize, we mean that sensemaking is, importantly, an issue of language, talk, and communication. Situations, organizations, and environments are talked into existence.”[227]

Our conscious engagement with context requires our use of language. When we experience new situations—whether a new software application or a new job at an unfamiliar company—we use language as an organ of understanding, calibrating our action to find equilibrium in our new surroundings. This activity involves the whole environment, including other people and the semantic information woven into the physical.

Like basic perception of the physical environment, this sensemaking activity happens at varying levels between explicit and tacit consciousness. Somewhere between fully conscious thinking and mindless action, we perform a nascent kind of interpretation but without explicit labeling, because no name has yet emerged for the new thing we encounter.[228] It’s in the ongoing perception-action cycle that we have to exercise conscious attention and begin making explicit, thoughtful sense of the experience.

We can perhaps think of this as a dimension added to our earlier diagram showing the perception-action loop (refer to Figure 11-1). As we make sense of our environmental context, the many “loops” of cognition that make up the overall loop are cycling not just among brain, body, and environment, but at different levels of conscious attention across the explicit/tacit spectrum.

In each thread of sensemaking, there’s a building of understanding that takes place. Tacit, passive awareness transitions to purposeful consideration, which (for humans) leans on the infrastructure of language to figure out “what is going on?” and then “what do I do next?”—spurring action that might go through the cycle all over again.

We carve understanding out of the raw stream of experience by eventually naming and conceptually “fixing” the elements of our environment. This creates conceptual invariants around which we orient our sensemaking. Eventually, these newly identified invariants become part of the “common currency” for social engagement.[229] In other words, this individual cycle is also woven into the fabric of social sensemaking, where we all collectively add to the environment for one another, causing shared conceptual structures to emerge in our common culture.

Figure 11-1. Explicit and Tacit spectrum over the perception-action loop

There’s an awful lot going on in the preceding paragraphs, so here’s an example to illustrate: imagine pulling up to a fast-food restaurant’s drive-through menu. You’re hungry and just want something to eat, but you’re also trying to eat healthier fare lately, so you’re looking for better nutritional choices. Your hungry body (and emotional brain) is getting in the way of your more explicit health-goal-driven thoughts, all while you try to figure out what food the menu actually represents and an attendant squawks, “What’s your order?” through a scratchy audio speaker.

A lot of your decision-making is being driven tacitly: your hungry body and the emotional centers of your brain are body-slamming your ability to thoroughly parse and understand the menu, while another hungry driver revs his engine behind you.

The convoluted menu doesn’t help much: clever labels such as “Prime Deluxe” and “Premium Choice Grill” don’t provide much distinguishing information. And the most popular options (because they’re the tastiest and least healthy) have the biggest pictures and easy-to-order numbering schemes—like a magician forcing a card on you from the stage—nudging you toward the quick option rather than a more difficult decision.

To avoid slipping into the path of least resistance, you have to begin reading the menu aloud to yourself, working hard to find the specific trigger terms you’re looking for—“salad” or “heart-healthy,” or whatever—and doing the math of calorie counts. Otherwise, you know you’ll just give up and say, “Gimme a number three,” and then drive away with a sloppy burger and a bag of greasy fries.

It’s challenging to make sense of the environment well enough to make a different choice and avoid the less-thoughtful, default “grooves” provided by the menu (and the stressful pressure to “order now” while others wait behind you). You have to stop and think, calculate, and drag your brain into explicitly reflecting with language.

This insight about how people make sense of their experience is crucial to designing context in any environment. Users gain coherent understanding of what they’re doing, where, and with whom, through individual and communal activity and communication. What something or somewhere “is” depends on all those factors, not just the discrete interaction of one person with one object or place. Nothing we make for others to use is experienced in a vacuum; it will always be shaped and colored by its surrounding circumstances.

Even the labels we put on products can alter our physical experience of them. In a well-known series of experiments, neuroscientists had subjects sample wine with differently priced labels—from cheap to expensive. The subjects didn’t know it was the same wine, regardless of price. Even though the semantic information specifying price was the only difference between the wines, functional Magnetic Resonance Imaging (fMRI) brain scans showed significant differences in activation of pleasure centers in the subjects’ brains—they literally enjoyed the “expensive” wine more than the “cheap” one, even though there was no physical difference in the wines.[230] Again, our perceptual systems don’t spend a lot of time parsing semantic from physical information. These subjects took price as a face-value framing for the wine, knowing nothing else about it. It’s another example of how our immediate experience of the environment is a deeply intermingled mixture of signification, affordance, cultural conditioning, and interpretation. Similar studies confirm that the aesthetic styling of websites can strongly affect users’ opinions of their value.[231]

In the airport example from Chapter 1, much of my activity was nudged and controlled by the semantic information that surrounded me, in concert with the social context: I tended to gravitate toward actions that were similar to what others around me were doing; and most of what I did was almost completely tacitly driven, until I had to stop and think about it explicitly. And even then, I found myself asking another person about what to do, and talking to myself about the labels and physical layout I was trying to understand. There was no separating the language from the physical layout of the airport; they were both intermingled as “environment” for my perception-and-action.

Physical and Semantic Intersections

In a sense, ever since we started naming places together, we’ve been living in a shared “augmented reality,” in which language, places, and objects are impossible to separate. Semantic and physical information are now so intertwined in human life that we hardly notice. Consider just a few ways that they work together:

Identification

We name things and people so that we can talk about them and remember what and who they are. Recall that labels give us the ability to move things around in our heads (or “on paper”), piling, separating, juxtaposing. Anything of shared human importance has a name.

Clarification

Sometimes, we can already tell what something is, but we still need more context. I can ascertain that an egg is an egg, but information on the carton tells me whether the eggs are organic or cage-free, as well as when they expire.

Orientation

A typical door has clear physical information for affordance, but without more information we don’t know why we would use that door rather than another. Especially in built environments where manufactured surfaces can all look nearly alike, there’s little or no differentiation (unlike in nature) to distinguish one layout from another. So, we need supplemental orientation to tell us “long gray hallway A goes to the cafeteria, and long gray hallway B goes to the garage.” Likewise, stairways clearly go upward, but their destination is often obscured. A wider view of our stairs example from Chapter 4 reveals that a step has the label “Poetry Room” painted on it (Figure 11-2). The label adds orienting information about where we will be after we climb the stairs—semantic scaffolding that signifies the context of the physical affordance.

Figure 11-2. The wooden stairs in the famous City Lights Bookstore in San Francisco, this time showing a label[232]

Instruction

Even if we know what something is, we often need help knowing how to use it. In recent years, many public bathrooms have been outfitted with automated fixtures. The invariants that many of us grew up around in bathrooms have been scrambled, so we have to figure out each new bathroom anew. In one, the sink might be automated, whereas the toilet is not, and in the next it can be the opposite. Anything that doesn’t have clearly intrinsic affordances requires some kind of instruction, even if it comes from instructing ourselves through trial and error. Instructions are an example of how all these sorts of semantic-and-physical intersections can overlap and work together. Sometimes, instructions help identify, clarify, and orient us all at once.

Digital Intersections

As we are called upon to create more physically integrated user experiences, these intersections between language, objects, and places become increasingly critical to design carefully. In software, the work can be even more challenging, because the objects and places we simulate with graphical interfaces can so easily break the expectations we’ve learned from the physical environment. Recall from Chapter 1 how hyperlinks introduced unprecedented flexibility in connecting digital places, but also new opportunities for contextual confusion.

On the City Lights Bookstore website in Figure 11-3, we see labels, but no physical information other than what is simulated with graphical elements (color blocks, lines, arrow-triangles, negative space) and layout (spatial relationship of elements to one another) telling us that the label “POETRY” is something we can touch, and that it will take us to what we expect to be another place, about poetry. The function of a hyperlink is learned through experience and established through convention, like language itself.

Figure 11-3. The Poetry link on the City Lights Bookstore website

If we walked up the stairs of the store, only to find that there was no “Poetry Room” but instead some other sort of room, or no room at all, we’d be disoriented. Similarly, tapping or clicking the hyperlink takes us to a place that we expect to fulfill the promise of the label. In a digital interface, however, so much of the information is semantic that the interface has to be designed with great care to reduce ambiguity, because the meanings of the labels and the subtle hints of visual layout are all we have to work with as guiding structure for users.

In the built environment of cities, we’ve created such complex structures that we struggle to rely on the shapes of surfaces alone to give us contextual clues about where to go. So, the field of architectural wayfinding has expanded over the years to be almost exclusively about using semantic information to supplement the physical. The way icons and text help us get around in a city or building can make a huge difference in our lives. For example, in hospitals, research shows that good wayfinding promotes better healing, medical care, and even improved fiscal health of the organization.[233]

We can look at any modern city intersection and see how much semantic information is required to supplement human life there. In the image from Taipei City shown in Figure 11-4, nearly every surface has semantic markings, from the advertising to the street signs, traffic signals, and even the arrows, crosswalk markings, and street boundary lines painted on the city’s streets.

When we say city, we are talking about all of these modalities, all at once. In fact, it’s hard to say that language is merely scaffolding here, because in some instances the buildings are there to support the cultural activity of language-use to begin with. That is, the language environment came first, and the built environment emerged to support its growth and evolution. Language is more our home medium than steel and concrete. We’ve been speaking sentences longer than we’ve been building roads.

To understand and improve these environments, we should know how to distinguish physical from semantic, but we should not forget that the denizens of such a city can’t be expected to parse them. They are intermingled in what information architect Marsha Haverty suggests is a “phase space”—just as water can undergo a phase transition from solid (ice) to liquid (water) to gas (steam), information can move across a similar spectrum.[234]

Figure 11-4. Taipei City, Nanyang Street, in 2013[235]

But, unlike water, which is categorically and empirically in one state or another, semantic information adds the contextual slipperiness of language to the distinction. No matter the perceiver’s umwelt (uniquely perceived environment), steam will have the properties of steam. But the word “tripe” in reference to some Chinese cuisines, where it is a staple protein, has a radically different meaning compared to a context in which “tripe” is an insult.

Physical and Semantic Confusion

Just because the modes intersect doesn’t mean it always works well. Sometimes, the semantic information we encounter is actually contradictory to the physical information at hand. In The Image of the City, author Kevin Lynch explains that, even when a city street keeps going in a physically continuous direction, if its name changes along the way, it is still experienced as multiple, fragmented places.[236]

We tend to lean on language as a supplement to the otherwise confusing parts of our physical environment. Sometimes the language we add can be helpful, but often it goes unnoticed or only adds confusion.

In a legendary example from his work, Don Norman explains some of the problems with doors, and why the semantic information we often add to them is a crutch we use to correct for poor physical design. This portion is from the revised 2013 edition of The Design of Everyday Things:

How can such a simple thing as a door be so confusing? A door would seem to be about as simple a device as possible. There is not much you can do to a door: you can open it or shut it. Suppose you are in an office building, walking down a corridor. You come to a door. How does it open? Should you push or pull, on the left or the right? Maybe the door slides. If so, in which direction? I have seen doors that slide to the left, to the right, and even up into the ceiling. The design of the door should indicate how to work it without any need for signs, certainly without any need for trial and error.[237]

The shape of door handles and how they indicate the proper operation of the door has been a touchstone of Norman’s influential ideas since the first edition of The Design of Everyday Things in the 1980s. The example is a great one for teaching designers that the affordances of a designed object should be intrinsically coherent as to how they should be used, especially basic objects such as hammers, kitchen sinks, and doors.

Yet, there’s more to a door than we might assume. Physically, there is a doorway—the opening itself—which is intrinsically meaningful to our bodies. It is directly perceived as an opening in the middle of a wall’s solid surface, providing a medium through which we can walk. No signification—in the sense of something that means something else—is required.

That’s where the simple affording information stops, because as soon as we add a door to the doorway, things get a lot more complex. Even though the door is physical, there are many mediating factors involved in how we perceive its function. As Norman points out, we have to know whether the door opens inward or outward, sideways, up or down, and whether we need to pull or twist something to open it, or if it’s automatic, and what behavior will trip its sensors.

In Gibson’s terms, a door is a compound invariant—a collection of invariants that present a combined, learned function of “opening” and “closing” doors. A specific door is a solid cluster of objects that works the same way each time, following its own physical laws. Even if it doesn’t work like any other door, it persistently stays true to its own behavior.

Similar to how language means what it does because of conventional patterns of meaning, most doors fit conventional patterns or genres of door function. That is, even simple doors require learning and convention. We learn after a while that certain form factors in doors indicate that they work in one way versus another, not unlike the mailbox discussed in Chapter 7. Even if everything about the door is visible—its hinges, its latch mechanism—it still requires our having learned how those things function for us to put together the clues into the higher-order, mechanical affordance of door use. All of this speaks to whether we understand we are in a context in which we can go through an opening or not, and what physical actions will cause the events we need in that context.

Then, there is the nested context of the door: is it in a building where people are conditioned to avoid walking through unknown doors, such as in a highly secure office complex? Is it in a school where students learn a pattern of where to go from class to class each year? I still remember how it felt to start a new grade in school and be granted access to new rooms in new classes: the entire school felt as if it shifted under my feet, and old doors became clutter, whereas new doors became the new shape of places for what “school” meant to me...at least until the next summer.

No door is an island, so to speak. It’s part of a larger construct of symbols, social meaning, and cultural expectation.

I had a recent experience with a door that reminded me of Norman’s examples. In this instance, I nearly smacked my face into the glass entrance of an office supply store, because I didn’t pick up on the “Pull” label next to the door handle. Here’s a picture of the door in question.

Figure 11-5. A door leading into a retail store[238]

There were a lot of contextual elements that contributed to my embarrassing encounter:

§ As in Norman’s examples, the handles were not shaped in a conventionally distinctive way to indicate whether they afforded pulling rather than pushing. In fact, they look a lot like the handles one normally pushes.

§ The doors are transparent glass, so I was already looking inside the store, trying to spot the department I was there to visit, barely paying attention to the door itself.

§ The glass also allowed me to see the handle on the other side; and since most doors with the same shape handle on both sides open both ways, my perceptual system didn’t bother looping more explicitly to cause me to consider any other possibility. As always, my body satisficed.

§ The sign wasn’t invisible to me—but my perception picked it up as clutter rather than its intended, semantic meaning. It was just an object between me and where I was going; an aberrant protrusion of gray into the glass. One simpler set of information rode along on my “loops of least resistance” and overrode another, more complex set of information.

§ Also note how there is little difference between the capital letters spelling “PUSH” and “PULL.” So, in terms of raw physical information, this situation was relying on the narrow difference between “SH” and “LL”—on a label that was the same color as the door’s aluminum.

§ I was having a conversation with my wife and daughter, who were with me, so I was verbally preoccupied. Even though our cognitive abilities can take in lots of intrinsic, physical information at once, we have a difficult time picking up clear information from more than one semantic interaction at the same time.

§ I was the first to reach the door, and by the time I did so, the sign was actually below my field of vision. So when the door didn’t budge, the sign was of no help to me. Of course, my daughter’s barely stifled laughter and exclamation, “The sign says ‘Pull,’ Dad!” helped to clue me in.

Beyond rationalizing my clumsiness, this detailed look shows how we can take a simple situation and do a rigorous analysis of environmental information to think through the cognitive scenario. We should always bring such a “close reading” approach to answering the central question we’re exploring in this book: will this environment be perceived and understood, in a real situation, by real people? Reluctance and impatience about doing this kind of analysis in design work is precisely why so many designs still create contextual confusion.

This door isn’t an object on its own, but a system of invariants, nested in an environment of other invariants, from simple intrinsically physical information to higher-order, complex, and signified semantic function. And, it’s nested within events involving people, some of whom are in an embodied state to comprehend “PULL” and some who aren’t. Context isn’t just one thing for everyone; it is shaped in part by the actions and perceptual state of the agent. From my perspective in this scenario, there was no clear line where affordance ended and signification began.

We see similar issues in the simulated objects and surfaces of digital places. For example, all of us have experienced receiving marketing content via email and deciding we want to unsubscribe from it. Most of these emails provide an easy way to turn off the subscription with only a click or two. Like most doors we encounter, we approach it with expectations driven by the invariants of convention and prior experience.

So, when my wife tried to unsubscribe from the deluge of emails she was receiving from Fab.com, she assumed it would work like the others: tap or click “unsubscribe” in the email, then possibly verify the request at a web page. But she kept getting the emails. Take a look at Figure 11-6 and see if you can spot the problem. Notice the big, red button that would normally signify the invariant for “Yes, let me out of this!”—but here, it actually means “No, I decided to stay!”

The interaction presents a series of steps that conventionally end with unsubscribing. A big red button at the end of most transactions means: Yes, complete this irreversible action. But in this case, it does the opposite, confounding what the user has learned from invariants in the past. This interaction was also nested within a smartphone’s display, rendering the view with tiny text that’s almost unreadable. So, not unlike the door into the retail shop, the text wasn’t doing much good here, and was easily trumped for a typical, satisficing user, relying on their cognitive “loop of least resistance.”

Figure 11-6. The “dark pattern” of accidentally resubscribing to Fab.com takes advantage of learned invariants
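
The confusion isn’t mysterious once you spell out the wiring. Here is a minimal sketch in Python (hypothetical names, not Fab.com’s actual code) of how a confirmation step can invert the learned invariant that the most prominent control completes the action the user started:

```python
# A minimal sketch (hypothetical names, not Fab.com's actual code) of how a
# confirmation step can invert the learned invariant "the prominent button
# completes the action I started."
from dataclasses import dataclass

@dataclass
class Button:
    label: str
    prominent: bool  # big, red, visually dominant
    action: str      # what tapping it actually does

# The convention most users have learned from past unsubscribe flows:
conventional_step = [
    Button("Unsubscribe", prominent=True, action="cancel_subscription"),
    Button("Never mind", prominent=False, action="keep_subscription"),
]

# The dark-pattern variant: the same visual hierarchy, opposite wiring.
dark_pattern_step = [
    Button("Keep my subscription", prominent=True, action="keep_subscription"),
    Button("Unsubscribe", prominent=False, action="cancel_subscription"),
]

def satisficing_tap(buttons):
    """A hurried, satisficing user taps the most visually prominent control."""
    return max(buttons, key=lambda b: b.prominent).action

print(satisficing_tap(conventional_step))  # cancel_subscription
print(satisficing_tap(dark_pattern_step))  # keep_subscription -- still subscribed
```

Run against both layouts, the same hurried tap produces opposite outcomes, which is exactly the trap the figure shows.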

It’s similar to a technique used in so-called phishing scams, which trick users into providing information they would not otherwise offer. Phishing is named that way—after “fishing”—because, like a hungry fish biting a baited hook, a user often acts based on learned invariants without explicitly considering all the environmental factors at hand.

When an interface takes advantage of our cognitive shortcuts, against our wishes, we tend to call that a “dark pattern”—a sort of “dark side of the force” usage of a design pattern. Whether the designers at Fab.com did this consciously or not, the effect is the same. It uses our forward motion through the environment against us rather than meeting the embodied expectations we bring to the invariants of our context.

Ducks, Rabbits, and Calendars

Semantic information gives us the remarkable superpowers of symbols, but at the cost of disconnecting language from the physical environment. The less contextual information we have, the more complicated signification becomes, whether with visual or textual semantic information. In Philosophical Investigations, Ludwig Wittgenstein famously considers a line drawing that could look like either a duck or a rabbit, and uses it as an example of how language works.[239] He refers to the figure in numerous places throughout the Investigations. In one instance, he discusses how, if we place the picture among other duck pictures, it looks more like a duck, and more like a rabbit among rabbit pictures.[240]

Figure 11-7. From Jastrow’s “The Mind’s Eye,” 1899[241]

Wittgenstein also explains that when we see such a figure, we don’t usually say, “I see it as a duck,” or “I see it as a rabbit.” Instead, we say, “I see a duck,” or “I see a rabbit.” That is, in our natural manner of interacting with language, we don’t step back and distinguish between seeing something and seeing a representation as that something.

An optical illusion or visual trick such as the duck-rabbit works because it’s an incomplete representation, constrained by its medium. In nature, we wouldn’t confuse a duck for a rabbit; there would be enough physical information that we could pick up through active perception to tell one from the other.

But an optical illusion like this is not the physical world: it is a representation—a display—that leaves out the information we evolved to pick up when perceiving actual surfaces and objects. This is a general quality of semantic information; whether words, pictures, or gestures, it introduces ambiguities into our environment much more easily than physical information does. Of course, this picture could be expanded to finish the drawing of the animal, and that would make it clearer what sort of animal it is. Like Groucho’s elephant in pajamas, though, this would spoil the “trick.” Most information environments aren’t jokes or optical tricks, however; they’re meant to be understood.

Because semantic information is part of our environment, our cognition tries to use it in the same satisficing way we use floors or walls or stones lying on the ground. We try working with it and making our way through it as if it were physical. When we see a link in a website that says “Poetry,” in the moment of action, we don’t typically think to ourselves, “I am going to click a link that means poetry and it will take me to things that represent publications containing poems.” We take the action expecting to then see and interact with objects containing poems. We “go there” to “look at the books.” We reify and conflate, as if it were a passageway into a place. When we look at a particular book on a bookstore’s website, we treat it as if it were a book we were looking at on a physical shelf at a local bookshop, if the website’s design affords us the convenience to do so.

But the same sort of ambiguity that we see with the duck-rabbit can creep into our software structures. Recall in my airport scenario how I had assumed that my coworker could see my travel information in my calendar? This has to do, in part, with how Google Calendar uses the word “calendar” ambiguously, but it also has to do with how a calendar isn’t just one display object anymore, but an abstraction that is instantiated in many different contexts.

Figure 11-8 displays some of these instantiations:

1. The object that iconically represents a calendar in the Google Apps navigation menu.

2. Within the Web view, a calendar-like interface that takes up most of the screen, from the left column to the right edge.

3. Also in the Web view, the lists of “My calendars” and “Other calendars.” These are actually calendar feeds but are named here as “calendars.”

4. In TripIt’s web interface, the “Calendar Feed” I use to create the published calendar-API version of my TripIt itineraries.

5. In its “Home Screen” interface, my iPhone has a “Calendar” app icon, which likewise represents the idea of a singular calendar-object.

6. My calendar as shown when opened on my iPhone. It shows some of the same information as the Google Web view, but not all of it, and some of it is color-coded differently. It doesn’t explicitly differentiate the “feeds” other than by color and by displaying the source of an event in its event-detail view.

Figure 11-8. The various instances of “calendar” from the airport scenario in Chapter 1

Example B shows that I have a Project Status Meeting scheduled in the middle of my flight to San Diego. That’s because the scheduler didn’t know I was on the flight: the flight’s “calendar” is a feed generated by TripIt, and isn’t visible to those who share my Google Apps calendar.
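
A minimal sketch in Python (hypothetical names, not Google’s or TripIt’s actual APIs) makes the permission boundary visible: sharing a calendar shares the events it owns, not the feeds its owner subscribes to.

```python
# A minimal sketch, with hypothetical names (not Google's or TripIt's real APIs),
# of why events from a subscribed feed are invisible to people who only share
# the underlying calendar.
from dataclasses import dataclass, field

@dataclass
class Calendar:
    owner: str
    events: list = field(default_factory=list)            # items I created or copied in
    subscribed_feeds: dict = field(default_factory=dict)  # e.g., a TripIt itinerary feed

    def my_view(self):
        """What I see: my own events plus every feed I subscribe to."""
        feed_events = [e for feed in self.subscribed_feeds.values() for e in feed]
        return self.events + feed_events

    def shared_view(self):
        """What someone who shares this calendar sees: only the events it owns."""
        return list(self.events)

cal = Calendar(owner="me")
cal.events.append("Project Status Meeting")
cal.subscribed_feeds["AHTripit"] = ["Flight to San Diego"]

print(cal.my_view())      # ['Project Status Meeting', 'Flight to San Diego']
print(cal.shared_view())  # ['Project Status Meeting'] -- the flight never shows up
```

Seen this way, the scheduler’s view simply never contained the flight, no matter how the interface labeled things.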

Did I understand how all this worked? Yes, when I thought about it explicitly. But it had been many months since I had set up the TripIt feed, so I had forgotten the rules for access permissions and was in too much of a hurry to think about them. In the satisficing actions we take in an everyday environment, we don’t always take the conscious, explicit effort required to disambiguate all the different meanings of something. This is especially true if the environment’s language conflates many functions into one semantic object—in this case, the word “Calendar.”

There are many examples in Google’s applications suite, and in its other products, where Google goes to great lengths to provide contextual cues. However, the more complex the contextual angles and facets of an environment become, the more the design has to strike a balance between contextual clarity and interface clutter. Users look at a calendar to see dates and to reference or create events; comprehending the entire rule-based environment is peripheral to that main purpose, even though it is at times a crucial aspect of the application.

In this case, I can hardly fault the design decisions behind Google Calendar; they’ve provided at least some cues. A differently textured background color (faint stripes) indicates a subscribed feed versus an item that is actually part of my Google calendar data (a convention not necessarily followed in a calendar client application, however). Additionally, when I click the flight event, it’s clear that it belongs to the “AHTripit” calendar (see Figure 11-9), and that I could “copy to my calendar” if I wanted.

From an engineering perspective, everything works as it should; the system has a coherent logic that allows it to function with consistent rules. Even the interactive moment-by-moment mechanisms that I tap, click, or manipulate in these bits of software are fairly understandable. Where we find ourselves most muddled is in the information architecture of how the objects and places—and the rules that govern them—are represented with semantic information.

Figure 11-9. Google Calendar on the Web allows me to see what “Calendar” the event is part of, and gives a one-click method to add it to my present “calendar”

When I look at a predigital calendar, like the sort that hangs on a wall in the family kitchen, I know what I am seeing exists only in that place and time. But digital technology gives us the flexibility to create calendars that exist in many different forms. In a sense, there is no single calendar, no canonical object. It’s an aggregate, a reification; when we ask, “Will the real calendar stand up?” either they all stand, or none of them do.

Semantic information is so second nature to humans that we simply overlook how deeply it forms and informs our experience. We can’t expect end users, consumers, customers, and travelers to ponder the nature of signs, or spend time giving a close-reading analysis to all the stuff they have to work with every day. Design has to attend to this hard, detailed work so that users don’t have to.

Design has traditionally been centered on objects and physical environments. There is no “language design” discipline—it’s instead called “writing.” There’s nothing wrong with that, but we have to come to grips with the reality that language is a more important material for design than ever, especially with the arrival of pervasive, ambient digital systems. This distributed, decentered experience of “calendar” wouldn’t be possible without it, so our next focus will be on what it is about digital information that disrupts and destabilizes the physical and semantic modes.


[225] Weick, Sutcliffe, and Obstfeld. “Organizing and the Process of Sensemaking.” Organization Science 2005;16(4):409.

[226] Dourish, Paul. Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA: MIT Press, 2001:124, Kindle edition.

[227] Weick, Sutcliffe, and Obstfeld. “Organizing and the Process of Sensemaking.” Organization Science 2005;16(4):409-421.

[228] Weick, Sutcliffe, and Obstfeld 2005.

[229] Weick, Sutcliffe, and Obstfeld 2005.

[230] Plassmann, Hilke, John O’Doherty, Baba Shiv, and Antonio Rangel. “Marketing Actions Can Modulate Neural Representations of Experienced Pleasantness.” PNAS 2008;105(3):1050-4. doi:10.1073/pnas.0706929105.

[231] Reinecke, Katharina, Tom Yeh, Luke Miratrix, Rahmatri Mardiko, Yuechen Zhao, Jenny Liu, and Krzysztof Z. Gajos. “Predicting Users’ First Impressions of Website Aesthetics With a Quantification of Perceived Visual Complexity and Colorfulness.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13). New York: ACM, 2013:2049-58 (http://bit.ly/1FwSxNz).

[232] Photo by author.

[233] Huelat, Barbara J., AAHID, ASID, IIDA. “Wayfinding: Design For Understanding.” A Position Paper for the Center for Health Design’s Environmental Standards Council, 2007 (http://www.healthdesign.org/chd/research/wayfinding-design-understanding).

[234] Haverty, Marsha. “Exploring the Phase-Space of Information Architecture.” Praxicum (praxicum.com), May 8, 2014 (http://bit.ly/1t9pLgP).

[235] Wikimedia Commons: http://bit.ly/1xazYaH

[236] Lynch, Kevin. The Image of the City. Cambridge, MA: The MIT Press, 1960:53.

[237] Norman, Don. The Design of Everyday Things: Revised and Expanded Edition. New York: Basic Books, 2013:1-2, Kindle edition.

[238] Photo by author.

[239] Wittgenstein got the idea from an 1899 article by the early experimental psychologist Joseph Jastrow, who borrowed the figure from Harper’s Weekly (which had republished it from a German humor magazine). The example here is from Jastrow (Wittgenstein’s is a simpler line drawing) (http://socrates.berkeley.edu/~kihlstrm/JastrowDuck.htm).

[240] Wittgenstein, Ludwig. Philosophical Investigations. Oxford: Basil Blackwell, 1953; second edition 1958.

[241] Jastrow, J. “The mind’s eye.” Popular Science Monthly, 1899, 54:299-312.