
Understanding Context: Environment, Language, and Information Architecture (2014)

Part I. The Context Problem

Chapter 3. Environments, Elements, and Information

Context is worth 80 IQ points.

ALAN KAY

A Wall and a Field

SOCIAL NETWORKS AND AIRPORTS don’t exist in a vacuum. They’re part of a wider world. And humans didn’t evolve with mobile phones in hand. Our bodies and brains grew in predigital environments that shaped the way we understand our surroundings. If context in digital environments is so hard to get our heads around, maybe we need to begin by establishing up front what an environment actually is. Take a look at Figure 3-1, an idyllic landscape in Derbyshire, England.


Figure 3-1. A bucolic Derbyshire landscape[10]

We might look at this landscape and assume that there’s not much information here, but there actually is. This is as much an “information environment” as any website or city intersection. That is, for us to just get around in a place like this, there has to be information about the structures in the environment that our bodies somehow understand well enough to take action. Where can we walk? What can we eat? What can we hold in our hands?

Most of this environment happened all on its own, growing naturally. But also note the stone wall in the field. Some of the earliest structures humans ever added to their environment were of this sort: stones stacked to create barriers and boundaries. Such structures have a physical effect of stopping or slowing terrestrial motion, but they also carry a cultural meaning, such as “this land belongs to someone” or “keep your sheep on the other side, away from my sheep.” The wall changes the context of the field, transforming an open, undifferentiated vista into a specific human place with additional layers of significance.

Keep this field in mind as a starting point, because from here on out, we will be looking at everything we make—digital devices or websites, phone apps or cross-channel services—as structure we add to the environment around us, not unlike this wall, which is deceptively simple at first glance, but full of meaningful layers upon closer reflection.

A Conventional Definition of Context

One challenge we have when grappling with contextual issues comes from our conventional assumptions about what context is to begin with. Yes, we would all agree it has to do with relationships between things. But, we tend to focus on the things, not the context. I mean, we tend to want to understand what a thing is in relation to what contains it, as if that container is “the context.” When you ask whether a wall is an urban wall or a country wall, the subject (wall) is supposedly informed by the setting—a wall “in a field in the country” is a country wall.

This convention is baked into our official dictionary definitions. For example, the Oxford English Dictionary defines context as “the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood.” This definition contains essentially the same elements of most definitions of context:

Circumstances

The setting or situational factors that surround the subject.

Agent

The (implied) entity that is trying to understand the subject.[11] Note that in this book, you will see the term “agent” used at times, but also words such as “person,” “perceiver,” or “user” somewhat interchangeably, depending on the point being made. They all reference the same concept, except that agents are not always people, or what we normally think of as a “user.”

Subject

The event, statement, or idea that is in the circumstances, and that is the focus of the agent’s attempted understanding. From any particular point of view, there’s always something that’s nested within something else. But, as we’ll see, that relationship is situationally dependent.

Understanding

An apprehension (or effort toward apprehension) of the meaning of the subject and its relationship to its circumstances.

So, the typical way of understanding context looks something like that presented in Figure 3-2.


Figure 3-2. The conventional way we think of context

That’s not a bad starting point. But the truth is, context is much more complicated. For one thing, as soon as something is “in a context,” it changes the context. The same elements are present, but the number of elements and their relationships are part of a multifaceted reality.

Perhaps the field has only a stone wall in it. But, the wall actually changes the nature of the field, which is now not just “a field” but “a field with a stone wall.” Then, it’s only a matter of time before more walls are built, roads are added, hotels and pubs spring up, and eventually it’s a town with a road called “Field Avenue” as the only reminder that there was ever a field there at all.

As Malcolm McCullough explains in Digital Ground, context is bound up in our interaction with our environment.

“Context” is not the setting itself, but the engagement with it as well as the bias that setting gives to the interactions that occur within it. “Environment” is the sum of all present contexts.[12]

Context isn’t just the surrounding circumstances, because it includes and interacts with the subject that is surrounded, and the agent that tries to comprehend it all.

Paul Dourish, in his seminal paper on context and human-computer interaction, “What We Talk About When We Talk About Context,” argues for a model in which, “Context isn’t something that describes a setting; it’s something that people do....It is an emergent feature of the interaction, determined in the moment and in the doing. In other words, context and...activity...cannot be separated. Context...arises from and is sustained by the activity itself.”[13]

This is all sort of brain-bendy stuff, but unless we grapple with it, we run the risk of designing environments that assume too much about how agents understand them.

For example, my Google Calendar, viewed on the website, has one set of information in it, including the TripIt calendar I’ve subscribed to, so that my travel information shows up among all my other scheduled events. However, that’s only from my perspective. Others on my team can see only my main calendar, not my calendar subscriptions, a state of affairs that isn’t clearly apparent to me. The only indication of the difference is a setting in Google Calendar that takes some effort to discover: it says “Anyone can: See Nothing.”

Even though it’s just one setting, the rule has implications that are harder to perceive—it means the circumstances are different depending on which agent is logged in and viewing the interface. The sort of place my calendar is to me is not the sort of place my calendar is to someone else.

As mentioned earlier, the contextual problems have to do not just with the calendar’s settings or even its graphical interface, but the meaning of “calendar” itself. The label we use for the digital object sets an expectation that it is a singular calendar such as a paper calendar hanging on a wall, not a multidimensional, virtual object with many “calendars” in aggregate. The language we use for the environment is part of the environment as well.
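To make that difference concrete, here is a minimal sketch in Python of the kind of per-agent filtering involved. The class names, the sharing flag, and the sample events are invented for illustration and have nothing to do with Google’s actual data model or API; the only point is that what the “calendar” is depends on which agent is asking.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Calendar:
    name: str
    events: List[str]
    shared: bool = False  # roughly, whether other people are allowed to see it


@dataclass
class CalendarAccount:
    owner: str
    calendars: List[Calendar] = field(default_factory=list)  # main calendar plus subscriptions

    def view_for(self, agent: str) -> List[str]:
        """What this 'calendar' looks like to a given agent."""
        if agent == self.owner:
            # The owner sees the aggregate of every calendar, subscriptions included.
            return [e for cal in self.calendars for e in cal.events]
        # Everyone else sees only the calendars that are shared with them.
        return [e for cal in self.calendars if cal.shared for e in cal.events]


account = CalendarAccount("me", [
    Calendar("Main", ["Team meeting"], shared=True),
    Calendar("TripIt subscription", ["Flight to the conference"]),  # "Anyone can: See Nothing"
])

print(account.view_for("me"))        # ['Team meeting', 'Flight to the conference']
print(account.view_for("coworker"))  # ['Team meeting'] -- no sign of the trip
```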

So, we at least have to allow that the circumstances are not separate from the agent that’s perceiving and trying to understand the subject; instead, the circumstances contain all these elements, as depicted in Figure 3-3.


Figure 3-3. The agent is also part of the circumstances

But, what about circumstances in which the agent is also the subject (Figure 3-4)? For example, that’s what I was trying to understand when I was at the airport, working out which label applied to me in the overall system of labeling. And it’s what any of us are doing when we’re navigating within a space, trying to get somewhere—whether on-screen or off.


Figure 3-4. The agent is often also the subject

Add to this the fact that we’re pretty much always trying to understand who and where we are in relation to our environment while simultaneously needing to understand the context of a lot of other things happening around us.

One moment, something is the subject but at another it’s just background, part of the circumstances. In addition, we’re almost never trying to take in only one subject at a time; instead, we’re absorbing a shifting, roiling torrent of them, as illustrated in Figure 3-5. It all starts to get pretty overwhelming.

What a mess, right? Humans are pretty smart, but we’re still finite creatures. We can really think hard about only one or two things at a time. So, a factor in all this complexity is the amount of attention we can summon and the cognitive effort we have to bring to bear on our environment and everything it’s made of, including ourselves.


Figure 3-5. An agent in the midst of multiple subjects, and complicated circumstances

A New, Working Definition of Context

So, moving forward, we will stray from the conventional definition and use a new, working definition. It’s just a bit more technical than the one we started with in Chapter 1:

Context is an agent’s understanding of the relationships between the elements of the agent’s environment.

In this case, the parts of the definition are as follows:

Agent

A person or other object that can act in the environment. Not all agents are persons, and not all of them perceive or act the same way.

Understanding

An agent’s cognitive engagement with, and making sense of, its surroundings. Although there are other non-agent-bound ways of thinking about context, our purpose here has to do with the subjective, first-person experience of the agent, even when it’s not a person.

Relationships between the elements

Everything is made up of parts; context is all about how those parts relate to one another.

The agent’s environment

Context is always about the entire environment, because the environment is what informs the meaning of anything the agent is trying to understand. Note we aren’t saying “environment” in general—a setting observed from some omniscient, god’s-eye view. It is “the agent’s environment,” because context is a function of the agent’s own first-person perception. Perception is what undergirds cognition, experience, and understanding.

This definition makes context a function of how an agent perceives and understands the environment, not a property that exists outside of that understanding. It also doesn’t specify that one element is the subject, special from everything else. Any element can be the subject at any moment and then drift or quickly switch to mere “circumstances.”

So, does that mean we can’t pin context down? Well, in a way, yes: if by pinning it down we mean we try to fix it in one state. That is, we can still map the elements out as agent, subjects, and circumstances; we just have to be careful that we aren’t assuming that only one snapshot represents the whole contextual experience.
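For readers who like to see the moving parts spelled out, here is a rough sketch of the working definition in Python. The class names and the little field-and-wall example are invented for illustration, not a formalization of the theory; the point is simply that context is computed from the agent’s side, as a function of what that particular agent perceives, and that the “subject” is just whatever the agent happens to be attending to at the moment.

```python
from dataclasses import dataclass, field
from typing import Set, Tuple

Relationship = Tuple[str, str, str]  # e.g., ("wall", "divides", "field")


@dataclass
class Environment:
    elements: Set[str]                                   # everything present, agents included
    relationships: Set[Relationship] = field(default_factory=set)


@dataclass
class Agent:
    name: str
    attending_to: str = ""  # whichever element is "the subject" right at this moment

    def perceives(self, env: Environment) -> Set[str]:
        # Stand-in for perception: another agent might notice a different subset.
        return env.elements | {self.name}

    def context(self, env: Environment) -> Set[Relationship]:
        """Context as this agent understands it: the relationships it can
        perceive among the elements of its environment, itself included."""
        seen = self.perceives(env)
        return {r for r in env.relationships if r[0] in seen and r[2] in seen}


pasture = Environment(
    elements={"field", "wall", "sheep", "shepherd"},
    relationships={("wall", "divides", "field"), ("sheep", "graze-in", "field")},
)
shepherd = Agent("shepherd", attending_to="sheep")
print(shepherd.context(pasture))  # the relationships the shepherd perceives right now
```

Swap in a different agent with a different perceives() and the result changes, which is exactly the point: there is no single, agent-free answer.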

In my airport scenario in Chapter 1, we could do a pretty decent job of mapping the most important contextual relationships by identifying the elements:

§ Each element that can reasonably be called an agent: myself, my coworker, each application on my phone that’s making some kind of decision and taking action, and even the customer service representative.

§ Each important element that at any moment could be a subject in the scenario: everything that’s an agent, plus my boarding pass, the security queues, my calendar, and so on.

§ Finally, the circumstances, which are basically each influential element of the environment, including (of course) the subjects and agents. Any element could be a circumstance or could be a subject, and vice versa. It’s all a matter of what the agent needs to perceive and understand in order to act.

We can get a lot of value out of just listing out these different relationships, and how they crisscross and overlap. There’s no way to map every single factor in even a simple real-world environment, but it’s possible to take snapshots from different perspectives, at various key moments, and bring them together into something more like a collage that comes nearer to telling the entire story. The important thing is to include more than a single, static perspective. What does the Passbook app understand about my situation that causes it to show a notification on the lock screen? What does my coworker understand about my calendar and the absence of any trip information there? And so on.
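One lightweight way to capture such a collage is to jot down snapshots as simple records, something like the sketch below. The particular agents, moments, and elements are illustrative guesses at the Chapter 1 scenario, not a complete or authoritative mapping.

```python
from dataclasses import dataclass
from typing import Set


@dataclass
class Snapshot:
    agent: str               # whose point of view this is
    moment: str              # a key moment in the scenario
    subject: str             # what that agent is attending to right then
    circumstances: Set[str]  # the other elements shaping the agent's understanding


collage = [
    Snapshot("me", "arriving at security",
             subject="boarding pass",
             circumstances={"security queues", "departure time", "phone"}),
    Snapshot("Passbook app", "approaching the airport",
             subject="my location",
             circumstances={"stored boarding pass", "flight details", "lock screen"}),
    Snapshot("coworker", "checking my availability",
             subject="my calendar",
             circumstances={"shared main calendar", "invisible TripIt subscription"}),
]

# No single snapshot is "the" context; the value is in seeing how the same
# element shows up as subject for one agent and mere circumstance for another.
for snap in collage:
    print(f"{snap.agent} / {snap.moment}: subject = {snap.subject}")
```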

I offer this way of thinking about modeling context as a starting point. But, before anything else, we still have to get a handle on what we mean by things such as “understanding,” “environment,” and “agent.” It turns out that understanding the elements of the environment depends on the way information works.

Modes of Information

Even though it’s not listed in our working definition of context, information is required before any perceiving or understanding can occur. Before around 1960, people didn’t use the word nearly as much as they do today.[14] The rise of computing technologies influenced how we talk about the way we communicate. Now, we say “information” a great deal. We use it as a catchall for just about anything that is intended to communicate a message or meaning or knowledge. It’s not as general as a word like “stuff,” but it’s close. Is it expected to inform? Then it’s information. Does it do a good job of informing? If it does, it’s “good information.” If not, it’s “bad information.”

And it would seem that bad information is everywhere, because the expectations we bring to information are often met with disappointment. Most people would categorize the legalese in a software license or the instructions for a tax form as information—but they would be hard-pressed to agree they fully understand those texts. They just say, “this information is awful” or “I don’t get this information at all.”

It does seem odd that we’d even use the word “information” for something that doesn’t effectively do the work of informing. In Information Anxiety, Richard Saul Wurman explains how the term gained an additional conventional meaning around the time of World War II, with the invention of information theories behind electronic transmission. The word “became part of the vocabulary of the science of messages. And, suddenly, the appellation could be applied to something that didn’t necessarily have to inform. The definition was extrapolated to general usage as something told or communicated, whether or not it made sense to the receiver.”[15] This slippage influenced our techno-fueled mid-twentieth-century culture to the point that we now talk of living in the “Information Age.” Now, information can mean the coding of DNA, the credits at the end of a movie, or verbal instructions from a customer service desk. Like so many words that can mean so many things, information is now a muddle.

For regular people in everyday situations, rigorous definitions for the word might not be necessary. But for people whose job it is to make environments with information, a specialized understanding is called for. Psychologists need to understand what elements make up sadness so that they can help people who are troubled by it. Artists and art historians are responsible for creating and curating art, so they have to think about the materials, techniques, and cultural meanings behind what makes something “art.” And game designers make a living from being deeply interested in what constraints, goals, and mechanisms make something “fun.” Similarly, it’s the responsibility of those of us who make “information things”—from software and digital gadgets to diagrams and newsletters—to have a practical, working model for how information works, and how it can meet the everyday expectations of its users.

You can find many sophisticated models and explanations of information in academic theories and scientific literature.[16] They all bring valid and valuable perspectives to bear. For our purposes here, though, I’ve developed a simple model that describes three different modes of information, which provide lenses for understanding how context works in the environments we design. We’ll begin with a summary here, and then delve more deeply into each mode in the next three parts of the book.

Figure 3-6 shows the three modes: Digital, Semantic, and Physical.


Figure 3-6. The three modes of information

Let’s begin from the bottom and work our way up.

Physical

This is the information animals use to perceive their environment for purposes of taking physical action; of course, humans are a part of the animal realm, so we do this, too. It’s information that functions “ecologically”—that is, in the relationship between a creature and its environment. It’s about surfaces, edges, substances, objects, and the ways in which those things contain and relate to one another to support animals’ behaviors and survival.

Semantic

This is information people create for the purpose of communicating meaning to other people. I’ll often refer to this as “language.” For our discussion, this mode includes all sorts of communication such as gestures, signs, graphics, and of course speech and writing. It’s more fluid than physical information and harder to pin down, but it still creates environmental structure for us. It overlaps the Physical mode because much of the human environment depends on complementary qualities of both of these modes, such as the signage and maps positioned in physical locations and written on physical surfaces in an airport.

Digital

This is the “information technology” sort of information by which computers operate and communicate with other computers. Even though humans created it (or created the computers that also create it), it’s not natively readable by people. That’s because it works by stripping out people-centric context so that machines can talk to one another with low error rates, as quickly and efficiently as possible. It overlaps the Semantic mode, because it’s abstract and made of encoded semantic information. But even though it isn’t literally physical, it does exist in physical infrastructure, and it does affect our physical environment more and more every day.

I should mention: like many other models I’ll share, this one isn’t meant to be taken as mathematically or logically exact. Simple models can sometimes work best when they are clear enough to point us in the right direction but skip the complexities of precision. So, for example, the overlapping parts of the modes are there to evoke how they are seldom mutually exclusive, and actually influence one another.
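One small way to keep those overlaps in view is to treat the modes as flags that can combine, as in the sketch below. The example items and the modes assigned to them are rough judgments for illustration only, not part of the model itself.

```python
from enum import Flag, auto


class Mode(Flag):
    PHYSICAL = auto()
    SEMANTIC = auto()
    DIGITAL = auto()


examples = {
    # Even the wall arguably reaches into the Semantic mode ("this land is owned"),
    # which is exactly why the circles in Figure 3-6 intersect.
    "stone wall in a field": Mode.PHYSICAL,
    "gate sign in an airport": Mode.PHYSICAL | Mode.SEMANTIC,
    "boarding-pass barcode": Mode.SEMANTIC | Mode.DIGITAL,
    "calendar data synced between servers": Mode.DIGITAL,
}

for thing, modes in examples.items():
    print(f"{thing}: {modes}")
```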

Starting from the Bottom

I began with the Physical mode for a reason. Context is about whole environments; otherwise, we are considering parts of an environment out of context. And when we take an environmental view, we have to begin from first principles about how people understand the environment, whether there are digital networks or gadgets in it or not. To illustrate this, Figure 3-7 presents another informal model showing the layers involved in the discussion moving forward.


Figure 3-7. Pace layers of information

These are based on a concept known as pace layers, in which each lower layer changes more slowly over time than the one above it. I’ve adapted the approach so that it also implies how one layer builds on another.[17]

§ Perception and cognition change very slowly for all organisms, including humans, and these abilities had to evolve long ago in order to have a functioning animal to begin with. Perception here means the core faculties of how a body responds to the environment. This is the sort of perception lizards use to climb surfaces and eat bugs or that humans use to walk around or duck a stray football.

§ Spoken language is next; as we will see, it has been around for a very long time for our species, long enough to at times be hard to separate from the older perception and cognition capabilities of our bodies and brains (as mentioned earlier, I’m lumping gestures in with speech, for simplicity’s sake). Even though particular languages can change a lot over centuries, the essential characteristics of spoken language change much more slowly.

§ Written/graphical language is the way we use physical objects—up until very recently, the surfaces of those objects—to encode verbal language for communicating beyond the present moment. Although spoken language is more of an emergent property of our species, writing is more like an ancient technology. Writing is also a way of encoding information, which is a precursor to digital code.

§ Information organization and design arose as identifiable areas of expertise and effort because we had so much stuff written down, and because writing and drawing enabled greater complexity than is possible in mere speech. The ability to freeze speech on a surface and relate it to other frozen speech on the same surface opened up the ability to have maps, diagrams, broadsides, folios, all of which required organization and layout. Our methods for organizing and designing written information have also been precursors to how we’ve designed and organized digital information for computing.

§ Last, there’s information technology, which is quite recent, and (as I’m defining it here) depended on the invention of digital software. We’ve seen this mode change rapidly in our own lifetimes, and it’s the layer that has most disrupted our experience of the other two modes, in the shortest time. It didn’t happen on its own, however; the ideas behind it originated in writing, linguistic theory, and other influences from further down the model.

If we place the three modes of information on top of these layers, as shown in Figure 3-8, we get a rough idea of how these models relate to each other.


Figure 3-8. Modes and layers combined

In reality, the boundaries are actually much more diffuse and intermingled, but the main idea is hopefully clear: the ways in which we use and perceive information have evolved over time; some aspects are foundational and more stable, whereas other aspects are more variable and quick to change.

In my experience, most technological design work begins with information technology first and then later figures out the information organization and design and other communicative challenges lower down. Yet, starting with technology takes a lot for granted. It assumes X means X, and Y means Y; or that here is here, and there is there. What happens when we can no longer trust those assumptions? The best way to untangle the many knotted strands that create and shape context is to understand how the world makes sense to us in the first place—with bodies, surfaces, and objects—and build the rest of our understanding from that foundation.


[10] Wikimedia Commons: http://bit.ly/1uDL7m6

[11] Using the term agent gives us the ability to include nonpersons, such as software or other systems that try to determine context. It’s also the term used most often in the scholarly literature for this element.

[12] McCullough, Malcolm. Digital Ground: Architecture, Pervasive Computing, and Environmental Knowing. Cambridge, MA: MIT Press, 2004: 48, Kindle edition.

[13] Dourish, Paul. “What We Talk About When We Talk About Context.” Personal and Ubiquitous Computing. London: Springer-Verlag, February 2004; 8(1):19-30.

[14] Based on a search for “information” in books from 1800 to 2000, using Google’s Ngram Viewer (https://books.google.com/ngrams/).

[15] Wurman, Richard Saul. Information Anxiety. New York: Doubleday, 1989: 38.

[16] I especially recommend Bates, M. “Fundamental Forms of Information.” Journal of the American Society for Information Science and Technology 2006; 57(8):1033-45, and ongoing work on a taxonomy of information by Sabrina Golonka (http://bit.ly/1ySrrik and http://bit.ly/1CM2ti6).

[17] Borrowed and adapted from the work of Stewart Brand, particularly How Buildings Learn; Brand adapted his approach from a concept called shearing layers, created by architect Frank Duffy.