Understanding Context: Environment, Language, and Information Architecture (2014)

Part III. Semantic Information

Chapter 9. Language as Infrastructure

If I feel physically as if the top of my head were taken off, I know that is poetry.

EMILY DICKINSON

Language and the Body

Language is physical. When we speak, we’re using our bodies to breathe and create the sound vibrations for articulation, not to mention gesturing and “body language.” When we write, we add to the environment physical information that we assume a reader will interpret.

Broca, the French physician for whom the language center of the brain is named (Broca’s area), argued that we have “not a memory of words, but a memory for the movements necessary for articulating words.”[180] We don’t recall language-expression as a disembodied set of abstracted concepts; it is rehearsed bodily action, internalized over time. Recent research bears out Broca’s argument: the more scientists observe the neural mechanisms at work when we use language, the more they find that language and bodily action are not separate systems, as once assumed, but part of a single connected system.[181] Even when we read silently, our brains fire neurons that we also use when reading aloud.[182] NASA has used this subvocalization in new technologies that let users give commands without having to literally say them.[183]

There is mounting evidence that sophisticated, symbolic language has been with humans since before we were Homo sapiens and that it has been a factor in shaping the evolution of our species. Patients with injuries to the main language-related areas of the brain are often still able to relearn language, an ability requiring extreme redundancy in brain structures, which took millions of years of evolution to develop.[184] Anthropologists have established a strong connection between language and the sophistication of complex tools and weapons; and they have discovered evidence of such tools from many thousands of years before Homo sapiens emerged roughly 200,000 years ago.[185]

Like those ancient tools, language is something we add to our environment to extend our abilities. But, as established earlier, the meaning of language depends on context and convention. As ecological psychologists Sabrina Golonka and Andrew Wilson explain, “Conventions can change and so can the meaning of words; language is much less stable than [physical-information] perception.”[186] We will keep coming back to this unstable nature of symbolic language, because it’s central to how this kind of information adds ambiguity to context.

Structure of Speech

We use more than single words or short phrases when we communicate; we need to string together sets of symbols to convey even simple concepts. So a big part of how language works is through its grammatical structure. Without environmental context and the internal structure of grammar, spoken words are just sounds without significance.

The order of words in a statement determines the statement’s meaning as much as the words themselves. For example, let’s take a look at Groucho Marx’s famous joke from the movie Animal Crackers:

One morning I shot an elephant in my pajamas.

How he got into my pajamas I’ll never know!

It’s old-fashioned humor, and funnier with Groucho’s delivery. But hang on; I’m going to spoil the joke even more:

The poorly structured grammar is what makes the joke funny. It sets us up with a verbally sketched situation that we think we understand to mean he shot an elephant while wearing his pajamas—because that’s the most likely meaning. In the cultural context of the joke, it’s an invariant fact that pajamas are worn by people more than by elephants.

Figure 9-1. An elephant in pajamas (illustration: Madeline Hinton)

The joke works by playing with that cultural invariance, and by structuring the first sentence with what’s called a misplaced modifier—“in my pajamas” is placed in closer proximity to the word “elephant” than to the word “I.” We satisfice in how we hear the first statement, assuming its meaning based on cultural invariance rather than on the indexical proximity between “elephant” and “in my pajamas.” The punch line then completes the statement by shifting its context—it turns out the elephant is the one in the pajamas, an absurdly silly image that “clicks” into recognition as the correct logical interpretation. If he’d said “One morning, in my pajamas, I shot an elephant,” it would have been more accurate, but it wouldn’t have been comedic.
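If it helps to see the mechanics, the two readings can be sketched as bracketed parse structures. Here is a minimal, purely illustrative Python sketch (the tuple representation and the words function are my own contrivance, not a real parser):

    # Two readings of "I shot an elephant in my pajamas," sketched as
    # (label, *children) tuples standing in for parse trees.

    # Likely reading: "in my pajamas" attaches to the verb phrase --
    # the shooter is the one wearing pajamas.
    reading_1 = ("S", ("NP", "I"),
                      ("VP", ("V", "shot"),
                             ("NP", "an elephant"),
                             ("PP", "in my pajamas")))

    # Groucho's reading: "in my pajamas" attaches to the noun phrase --
    # the elephant is the one wearing pajamas.
    reading_2 = ("S", ("NP", "I"),
                      ("VP", ("V", "shot"),
                             ("NP", ("NP", "an elephant"),
                                    ("PP", "in my pajamas"))))

    def words(tree):
        """Recover the spoken words from a tuple tree, in order."""
        _label, *children = tree
        out = []
        for child in children:
            out.extend(words(child) if isinstance(child, tuple) else [child])
        return out

    # Identical words, different structure -- and different meaning.
    assert words(reading_1) == words(reading_2)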

Doing such a close reading and analysis of Groucho’s quip certainly takes some fun out of it. However, the exercise illustrates the degree to which meaning depends on context, and how context depends on meaning. It also shows how it’s hard to overthink context—getting it right demands some rigorous analysis.

Understandable language follows invariant conventions of structure, not unlike the physical environment. Language’s emergent structures came from nature the way trees and flowers did; it’s just that humans are the soil where language grows.

Keep in mind that grammatical rules were not bestowed upon us by grammar deities. Syntax emerges from bodies and environments.[187] Our language’s structure is resonant with the structures of human action. The grammatical rules we learn in school are just the patterns that have coalesced over millennia, identified and codified, the same way words emerge from popular usage and are only later identified as new conventions and added to official dictionaries. Language evolves, morphing to meet the changing pressures of its environment.

Just as physical structure happens when two kinds of matter intersect, and layout occurs when there’s a meaningful spatial relationship between various surfaces, language’s structure happens in the intersections and arrangements of parts of speech.

So, it makes sense that language is structured not unlike the elements of the environment we learned about in Chapter 6. All languages have the equivalent of object and event—that is, some form of noun and verb.[188] Statements are nested within longer statements and narratives. Depending on the language, syntax (word order and proximity) can be more or less critical for understanding. Yet all languages depend on some sort of structure, whether it’s provided by modulation of voice, bodily gesture, or grammatical pattern.

Language has to follow its own emergent laws within a given linguistic system; the success of a given utterance depends on its structure and context, its place nested in an environment. Grammar is often behind the most egregious contextual errors in modern life. What works well for Groucho Marx doesn’t work so well in a legal contract or a conversation with your boss. It especially doesn’t work well when we try telling computers what to do—they depend on literal structure even more than we context-interpreting humans.

So much of what we design depends on well-structured language that it doesn’t hurt to consider grammar an important element of good design. Still, the ultimate point I’m making is broader and deeper. We’re now immersed in ambient and pervasive technologies that are essentially made of language. Outside of a carnival fun-house, irony and infrastructure shouldn’t mix. A misplaced modifier can be the equivalent of a bridge collapse.

The Role of Metaphor

An important body of work in the last few decades has been about the connections between the body and language, especially how much of language uses bodily and spatial metaphors. In fact, one of the earliest works connecting language and embodiment theory is the 1980 book by George Lakoff and Mark Johnson, Metaphors We Live By (University of Chicago Press). At the time the book was published, the standard theory (originating in part from linguist Noam Chomsky) was that humans had universal, deep structures that gave rise to language; these structures gave language a formal logic, using repeatable patterns, much like a computer.[189] Language was thought of as disembodied function, and metaphor was considered to be decorative, poetic speech that wasn’t part of language’s core function.

Lakoff and Johnson argued the opposite, positing that language is “fundamentally metaphorical in nature.”[190] Language is the emergent set of behaviors, or techniques, we’ve developed to help us work through and communicate abstractions. The more abstract the concept being expressed, the more that expression relies on metaphor.

Lakoff and Johnson point out many metaphorical uses of the body, such as “give me a hand,” “do you grasp what I’m saying?” or “I need your support; can you get behind me on this?” They explore more sophisticated metaphors involving cultural categories, including how we tend to talk about argument in terms of war metaphors (“Defend your claims”; “Her attack targeted my plan’s main weakness.”).

Of course, there are other metaphors we use in design that aren’t so closely tied with the body, but still make use of conventions learned through nondigital, bodily experience. The personal computer “desktop”—with roots going back at least as far as the 1960s—is a foundational metaphor in graphical user interface design. Even though the desktop doesn’t literally behave in every way like a physical desk, it still provides enough concreteness to help users get started.

Sometimes, though, a metaphor doesn’t quite survive the translation, and using it inappropriately can be confusing. In Apple’s iOS, the category structure for organizing photos borrows from predigital camera-and-film photography. At one level, it nests pictures into the larger container of Albums, and then puts photos into categories within the Albums container, as depicted in Figure 9-2.

Figure 9-2. The iPhone “Albums” structure in iOS 6 that somehow contained a “Camera Roll”

The categories, however, don’t align with how we expect the metaphor of “Albums” to work. For example, the “Camera Roll” is nested under “Albums,” but the metaphor has nothing to do with photo albums; it refers to film cameras that used physical rolls of film in canisters. Film rolls have a limited number of frames to use, and then you have to swap out the used film for a fresh roll. But that’s not how the iOS Camera Roll works either—it has no frame limit other than the memory of the device, and you don’t “swap it out” for a new roll at any point. It just continues to store whatever pictures you leave on the phone.
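Sketched as plain data, the nesting the interface presents looks something like the following hypothetical Python structure (the filenames are made up); note that the “roll” sits inside the “album,” the reverse of how the physical referents relate:

    # The category structure iOS 6 presents, sketched as a nested dict.
    # In the physical world, photos come off a roll of film first and go
    # into albums later; here the "roll" is nested inside "Albums."
    # (Filenames are hypothetical, for illustration only.)
    photos_app = {
        "Albums": {
            "Camera Roll": ["IMG_0001.jpg", "IMG_0002.jpg"],
            "Vacation 2013": ["IMG_0001.jpg"],  # other user-made albums
        }
    }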

Interface metaphors don’t need to slavishly copy the physical world, but neither should they appropriate meanings only in the name of seeming familiar, without also behaving according to the expectations they set. When the labels present a nested structure that’s actually the opposite of the physical referents they borrow from, one wonders why the metaphors are used at all.

Visual Information

Graphical information is also semantic but uses iconographic and indexical approaches in ways for which words are not as well suited. Visual information is especially good at borrowing from the objects of the physical environment to create explanations, metaphors, and spatial arrangements for conveying meaning.

Sometimes, graphics are used in a strictly iconic manner. Photographs, realistic paintings, and even abstract images, such as the stairs sign illustrated in Chapter 8 (Figure 8-2), can be used as icons for what they depict. Graphical user interfaces make heavy use of iconography, often more as symbols than strict icons. Some visual metaphors stand for functions that have no present, physical referent. Figure 9-3 presents an example from a Macintosh computer: a padlock’s open or closed state represents whether an administrator has unlocked the settings to make changes.

Figure 9-3. The locked and unlocked states in an OS X dialog box

The icon (and the mechanical sounds the computer makes when activating it) brings a representation of physical information into a semantic display. It clarifies, with body-familiar imagery, the state of locked versus unlocked. Of course, there is no actual padlock in the computer. Digital information is abstract by nature. Therefore, it requires translation, such as these metaphors. Even before the invention of graphical interfaces, computers used similar metaphors but with typed commands, such as “get,” “put,” and “set.”

We also use graphics in less literal ways to physically represent abstract ideas. An early example is a diagram called the tetragrammaton (see Figure 9-4), which represented the Christian Holy Trinity.

Figure 9-4. A tetragrammaton from the twelfth century[191]

It takes something that people could not see in the physical world—the relationships between the Father, Son, and Holy Spirit, together in a single deity—and puts those together into a shape illustrating how the Christian God can be three beings and one being at the same time.

Even now, we use similar diagrams to explain abstract concepts, because making the ideas into representations of physical objects makes them easier to grasp. For example, the Venn diagram shown in Figure 9-5 shows the intersection of mathematical sets, or the shared qualities among multiple entities. It does a great job of making the invisible visible, working with otherwise disembodied ideas as if they were concrete objects.
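We can even make the abstraction directly manipulable in code. Here is a minimal Python sketch using category labels that echo the three circles of Figure 9-5 (the member items are hypothetical, chosen only for illustration):

    # A Venn diagram as data: three overlapping categories modeled as sets.
    users = {"personas", "mental models", "task flows"}
    content = {"mental models", "taxonomy", "structure"}
    context = {"task flows", "taxonomy", "business goals"}

    # Overlaps are ordinary set intersections -- invisible shared
    # qualities handled as if they were concrete objects.
    print(users & content)             # {'mental models'}
    print(content & context)           # {'taxonomy'}
    print(users & content & context)   # set(): empty in this toy example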

Graphics are especially useful for giving form to the abstractions of mathematical measurement. The first known use of graphics to explain data was by William Playfair in the late eighteenth century, as shown in Figure 9-6. The figure shows a trade-balance relationship over time among England, Denmark, and Norway.

Figure 9-5. The “three circles of information architecture” introduced by Rosenfeld and Morville in the 1990s[192]

Figure 9-6. William Playfair’s time series of exports and imports of Denmark and Norway[193]

In addition to this line-graph method, Playfair went on to invent the bar chart, the pie chart, and the circle graph—essentially creating the field of graphical statistics.

A recent practical example is the concept for a parking sign portrayed in Figure 9-7. Instead of the confusing jumble of words and numbers we usually see, the designer represented time spatially.

Visual information lets us model abstraction and work with thoughts and concepts symbolically, while managing to provide objects we can see, manipulate, and arrange. Like writing, this capability makes it possible for us to work with more-complex systems, over greater scale and longer periods of time. Even though our main focus is how words create information, they are often assisted by graphics, and vice versa. All of it is semantic information, and all of it functions as structure we add to our environment.

Figure 9-7. A graphical parking sign, by designer Nikki Sylianteng[194]

Semantic Function

How does semantic information work for perception and cognition? If we reserve Affordance to mean physical information’s direct specification of physical opportunities for action, where does language fit into the model?

Language is fundamental and clearly affects our behavior and experience far beyond mere abstraction. Signifiers are all well and good, but signification is too easily construed as a cloudy, disembodied concept. Although accurate, that construal risks detaching how we think of language from its truly visceral effects. At the same time, language is not the same as physical information. The word “hammer” doesn’t budge a nail, not even a smidgen.

I’ll be using the phrase semantic function to indicate the way we use language as part of our environment (see Figure 9-8). What I hope it conveys is that semantic information has a real, functional role as invariant structure in our surroundings. Even though language can be exceedingly abstract—as in a word like “love”—it can have real, physical consequences—as when our hearts can suddenly race when we hear, “I love you.” Language can be a sort of civil machinery, such as laws and contracts. It functions as instruction, like virtual guardrails for baking a cake or driving a car. So, semantic function is the near-equivalent of physical affordance but for semantic information.

From the perspective of the human agent, language and semantic conventions can become tacitly understood parts of the environment, to the point where the agent uses them with nearly the same familiarity and facility as the physical parts of the world.[195] In other words, to the agent, there is only information from the environment. The soft assembly of the agent’s experience uses semantic function and physical affordance indiscriminately, in whatever combinations necessary.

As designers, we have to assume that for users there is no meaningful affordance-versus-function separation.[196] From the user’s point of view, the main differences are along the explicit-to-tacit spectrum. But, to make something that users can easily use, we have to do the hard work of understanding these distinctions between affordance and semantic function in the work of design. Designers will likely continue using the term Affordance more broadly than I am specifying here. But if these physical-versus-simulated dimensions are not clarified in the work of design, it can lead to dangerous assumptions about what users perceive and understand.

Figure 9-8. Semantic function surrounds human perception and augments physical affordance

The red stoplight in Figure 9-9 is a physical object that emits light in three colors. In the context and learned conventions of roadway driving, its red mode means Stop. Most of us respond to it physically, stopping in front of it. When learning to drive, we have to think about it more explicitly, but eventually we respond to it with little or no conscious attention. We pick it up tacitly, and it indirectly controls part of our environment. Is it as directly perceived and controlling as a physical barrier? No, but it’s about as close as a semantic sign can come to being one.

Figure 9-9. A traffic light, displaying red to mean “stop”[197]

For a twenty-first-century person in the developed world, a huge portion of the environment is made up of these signs and symbols. They surround us as bountifully as trees, rocks, and streams once surrounded our ancestors.

Software interfaces are made entirely of semantic conventions. The only truly physical affordance on a digital device’s screen is the flat surface of the screen itself. The only way it can interact with us is through signs and symbols—words with simulated surfaces and objects on the screen, or sounds whose meanings also require learning as language. There is ongoing research to create haptic interfaces that mimic the contours of three-dimensional surfaces, but they will only enhance the simulation. We will still need to learn what the simulated button, edge, or other object actually does.

That’s because any object that controls something beyond its present physical form works a lot like language. Consider a typical light switch, such as the one in Figure 9-10. Even this simple mechanism might not be immediately clear in its function to someone who has never encountered one before. But for those of us who grew up around the use of such switches, there is intrinsic physical information that specifies the affordance of “flipping” up or down. That is all we know from looking at the object alone. What does the switch turn off or on? Is that even what it does? The answers depend on contextual relationships.

Figure 9-10. A domestic light switch[198]

To know what this switch will do beyond its intrinsic structure, I have to either be told, or I have to flip the switch to learn what happens. I’ve often been startled by the angry growl of a garbage disposer in a sink when I expected to illuminate the kitchen. To really understand controls like this, we often resort to adding labels, or otherwise creating semantic context between the object and its ultimate effect.

The switch is acting as a symbol. It’s a signifier that could mean almost anything. Like language, our technological systems depend on contextual learning and associations. Yet, as we do with all familiar language, we conflate these elements. If you asked, “What does that switch do?” and I answered, “It flips up and down,” you’d think I was joking around. “Of course it flips up and down, but what does it do when you flip it?” you’d counter.

When we point at a switch and ask someone, “Can you turn on the light?” even though the light itself is above our heads, we’ve merged the signifier and signified across space. We’re using one context to talk about and control another, essentially making them one. If we were designing a new light-control system, we would want to untangle the semantic function and physical affordance dynamics at work here, because they would inform how we might improve the system for use. At some point, though, we would have to again see it as one conflated, intermingled, nested system, because that’s how its users will need to perceive it as well. This conflation happens when we learn anything, from how to use a computer mouse or game-console controller to how to swipe left to right to unlock our smartphones.
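A design sketch can make that untangling explicit. In the following minimal Python example (the names are hypothetical, not any real home-automation API), the physical affordance of flipping is modeled separately from the contextual binding that gives the flip its meaning:

    # The switch's intrinsic affordance is only "it flips up and down."
    # What flipping MEANS is a separate, contextual binding.
    class Switch:
        def __init__(self, effect):
            self.on = False
            self.effect = effect  # the signified: whatever this switch controls

        def flip(self):
            self.on = not self.on  # the physical affordance
            self.effect(self.on)   # the learned, semantic function

    light = Switch(lambda on: print("kitchen light", "on" if on else "off"))
    disposer = Switch(lambda on: print("disposer", "growling" if on else "quiet"))

    light.flip()     # kitchen light on
    disposer.flip()  # disposer growling: same affordance, different meaning

The same flip means entirely different things depending on the binding, which is exactly the conflation users relearn for every new control.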

We are a symbol-laden species, so the way we talk about the world pervades the world, affecting how we understand it and use it, regardless of how our bodies perceive intrinsic affordances. Just as a fallen branch might have an affordance of being picked up and held in the hand and then used as an extension of one’s arm, a word for the fallen branch—like “club” or “kindling”—offers a new dimension of meaning for the branch, a semantic function implying certain kinds of action. And, it does this without anyone having to actually pick up the stick and use it.

In that sense, for language-comprehending humans, a label can alter what the object actually is. It doesn’t change the physical form of the object, but for all practical purposes in human life, it’s a different object. When we design products and services, we are designing in the “for all practical purposes” realm. In that realm, language functions as environment. That’s because semantic function is not merely abstract; it shapes our very reality.

Tools for Understanding

Semantic function allows us to use words to name and organize our world. Gibson saw language as something that transforms the way knowledge works; it makes “knowing explicit instead of tacit. Language permits descriptions and pools the accumulated observations of our ancestors.”[199] It turns knowledge into environment that can persist with us across time, even if only as stories imperfectly told from one generation to the next.

Andy Clark calls language “a form of mind-transforming cognitive scaffolding: a persisting, though never stationary, symbolic edifice [playing a] critical role in promoting thought and reason.”[200] We use language to add structures to our environment, structures that inform us in ways that wouldn’t exist without language. A conversation creates environmental structures that we pick up as having meaning that supports collaborative action. A set of written instructions guides us in building a house or a piece of furniture, which we would not know how to create without those plans.

What this means is, language is infrastructure. Language can actually create new invariants for us to interact with and inhabit. It can have the effect of creating new elements in the environment—and even new environments. Symbolic language affords fluid usage of words for the things around us. Because there’s not a necessary one-to-one, never-changing relationship between word and object, we can label things in many ways. This labeling function is one of language’s most powerful abilities. Labeling is how we bring stability to our experience and make explicit sense of the world as humans.[201]

Clark describes this as language creating “a new realm of perceptible objects.”[202] Here are the key points he makes about labels in particular:

§ Labeling “functions as a kind of augmented reality trick,” where we supplement our surroundings with new structure.

§ Labels are “cheap” ways to group objects without having to actually move items around (and in some cases group things that we couldn’t physically move into piles anyway).

§ Labels are open-ended in the sense that we can group things that have no physically evident affordance for doing so.

§ Labels behave much like physical tools for piling (and like the piles they create), and our cognition treats them much as we treat their physical equivalents.

§ Labels are, themselves, new objects added to the environment, for which we can then create new labels and pile at even higher levels of abstraction.

All these points have major implications for what language is to us, especially the last one: our cognition moves from physical-object to abstract-label-object with ease. We create systems with language that we use as additional “built environments.” Labels and categories are certainly enabled by our brains, but they’re not confined there. They are the trellises that shape the way our understanding grows. They are part of the environment that surrounds us, where we recognize them and orient ourselves around them the way we recognize and orient ourselves around landmarks.
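Clark’s points translate almost directly into data structures. In this small illustrative Python sketch (the items and label names are hypothetical), labels pile things without moving them, and the labels themselves become objects we can label again:

    # Labeling as a "cheap" grouping tool: items are piled without being
    # physically moved, and groupings need no physical affordance.
    items = ["oak", "granite", "sparrow", "stream"]
    labels = {
        "alive": {"oak", "sparrow"},
        "inert": {"granite", "stream"},
    }

    # A label is itself a new perceptible object, so labels can be
    # labeled in turn, piling at a higher level of abstraction.
    meta_labels = {"my-landscape-categories": set(labels)}

    print(labels["alive"])                         # {'oak', 'sparrow'}
    print(meta_labels["my-landscape-categories"])  # {'alive', 'inert'}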

The structure of our language can affect our ability to think through problems and articulate complex ideas. In recent work in composition education, teachers discovered that students were struggling to work through complex subject matter partly because they lacked the linguistic scaffolding for doing so, especially the structural parts of speech that enable complex sentences: words such as “although,” “unless,” and “if.” After those traditional elements were reintroduced, students showed improvement in synthesizing thoughtful, complex responses to an instructor’s questions.[203]

This further reinforces the idea that language really is infrastructure for us. It gives us the “joints” for bending our thoughts in new directions and connecting them together into new concepts. We use language as part of our cognitive loop to reflect upon and come to new understandings about our environment and ourselves.
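Programming languages make those joints explicit as keywords. A toy Python sketch (the rules are hypothetical, purely to show the analogy between connective words and control flow):

    # "If," "unless," and "although" are the structural joints of complex
    # statements; code exposes the same scaffolding as control flow.
    def respond(claim_supported, counterexample_found):
        if counterexample_found:   # "although the claim seems right..."
            return "qualify the claim"
        elif claim_supported:      # "if the evidence holds..."
            return "affirm the claim"
        else:                      # "unless either applies..."
            return "suspend judgment"

    print(respond(claim_supported=True, counterexample_found=True))
    # -> "qualify the claim"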

Semantic Architecture

Language provides not only a scaffolding for the physical world, but also another world of its own making, created from abstractions that aren’t directly indicative of anything literally in the world at all. We live within the structures and constraints provided by those abstractions just as fully as we do among physical structures.

Gibson saw language as part of a continuum from simple affordances to highly complex “reciprocal affordances,” saying that when the affordances of vocalization become the semantics of speech, and “manufactured displays become images, pictures, and writing, the affordances of human behavior are staggering.”[204] Affordance of physical information undergirds and enables the workings of semantic function, which then introduces massively higher opportunities for complexity.

Clark echoes Gibson in saying, “The cumulative complexity here is genuinely quite staggering. We do not just self-engineer better worlds to think in. We self-engineer ourselves to think and perform better in the worlds we find ourselves in. We self-engineer worlds in which to build better worlds to think in.”[205] We don’t have two separate brains—one for language and one for physical things. Even though there is a big difference in the intrinsic nature of physical information versus the mediated nature of semantic information, our cognition does its best to take it all in as one environment.[206] When we say we’re trying to “clarify” a point or “make the complex clear,” it’s not just a metaphor; we’re trying to make an environment’s semantic function make coherent sense to our bodies.

It can surprise us, the degree to which language establishes structures that change the meaning of our behavior. Take for example an activity many of us encountered in childhood: the staring game.[207] Two people agree on simple rules: “We have to stare at each other, and maintain eye contact, without breaking it; whoever breaks it first loses.” Then, they begin staring at each other, probably trying various tricks to get the other to break contact, like telling a joke or making silly faces. Eventually someone breaks the stare, and the game is over.

If I walked up to you and just started staring, without this prior agreement, it would be...awkward. But, if we just have a quick verbal exchange about the rules, we create an information structure that we temporarily inhabit together. We’ve built a sort of place that gives us a new context for behavior. Even though it’s only a simple game with a simple rule, it requires mountains of contextual meaning to exist at all.

Language makes places on its own, but also participates in making and remaking all the other human places we inhabit. The built environment and language are all part of an interconnected system of human meaning-making through using and dwelling.[208] William Mitchell proposes that architecture is as important to language as language is to architecture. “The cognitive function of architecture (distinct from its function of providing shelter) is to create a rich environment for symbol, language, and discourse grounding, and act as the glue of communication that holds communities together.”[209] Language is part of the human-made environment like everything else we build. It establishes structures and rules that we live in together. It creates architecture.


[180] Barrett, Louise. Beyond the Brain: How Body and Environment Shape Animal and Human Minds. Princeton, NJ: Princeton University Press, 2011:214, Kindle edition.

[181] Pulvermüller, Friedemann. “Brain mechanisms linking language and action.” Nature Reviews Neuroscience, July 2005;6:576-82 (http://bit.ly/10eMjlB).

[182] Perrone-Bertolotti, Marcela, Jan Kujala, Juan R. Vidal, Carlos M. Hamame, Tomas Ossandon, Olivier Bertrand, Lorella Minotti, Philippe Kahane, Karim Jerbi, and Jean-Philippe Lachaux. “How Silent Is Silent Reading? Intracerebral Evidence for Top-Down Activation of Temporal Voice Areas During Reading.” The Journal of Neuroscience December 5, 2012;32(49):17554-62 (http://bit.ly/1wujhIr).

[183] Braukus, Michael, and John Bluck. NASA Develops System To Computerize Silent, “Subvocal Speech.” NASA News March 17, 2004 (http://1.usa.gov/1FsgmWJ).

[184] Deacon, Terrence W. The Symbolic Species: The Co-evolution of Language and the Brain. New York: W.W. Norton & Company, Inc., 2011, Kindle edition.

[185] Meyer, Robinson. “Researchers Discover the Hot New Technology: Throwing Javelins.” The Atlantic Online December 2, 2013 (http://theatln.tc/1ybTOVw).

[186] Wilson, Andrew D., and Sabrina Golonka. “Embodied cognition is not what you think it is.” Frontiers in Psychology, February 12, 2013. doi: 10.3389/fpsyg.2013.00058. See more at http://bit.ly/1shEeC8 and http://bit.ly/1shElO6.

[187] Deacon, Terrence W. The Symbolic Species: The Co-evolution of Language and the Brain. New York: W.W. Norton & Company, Inc., 2011:354, Kindle edition.

[188] There are exceptions, depending on how we define noun and verb. The language of Tonga, for example, has a different morphology, but still can be mapped within a noun/verb “prototype framework.” Broschart, Jürgen. “Why Tongan does it differently: Categorial distinctions in a language without nouns and verbs.” Linguistic Typology, de Gruyter, January 1, 1997;1(2) (http://bit.ly/1rnQHEk).

[189] Chomsky’s theories argue, in part, that the “deep structure” of universal grammar has a logical purity that isn’t always translated to the messy “surface structure” of actual language use. This Platonic-forms approach runs counter to an embodied understanding of cognition, which unifies everything into one, nested, naturally messy system.

[190] Lakoff and Johnson, Metaphors We Live By. Chicago: University of Chicago Press, 1980:3.

[191] http://commons.wikimedia.org/wiki/File:Tetragrammaton-Trinity-diagram-12thC.jpg

[192] Used in their consulting practice since the mid-1990s; first mentioned in print in Rosenfeld, Louis, and Peter Morville. Information Architecture for the World Wide Web: Designing Large-Scale Web Sites. Sebastopol, CA: O’Reilly Media, 1998.

[193] http://commons.wikimedia.org/wiki/File:Playfair_TimeSeries-2.png

[194] Posted at toparkornottopark.com by Sylianteng, Nikki, June 24, 2014 (http://bit.ly/1CMoWvl).

[195] I have previously used the phrase “semantic affordance” for this idea, both in early drafts of this book and in “What We Make When We Make Information Architecture” (Resmini, Andrea (Ed.) Reframing Information Architecture Springer, 2014) and various presentations. I have since changed course, to avoid muddling the core value of Gibson’s affordance theory.

[196] Wilson, Andrew D., and Sabrina Golonka. “Embodied cognition is not what you think it is.” Frontiers in Psychology 2013;4(58).

[197] Wikimedia Commons: http://commons.wikimedia.org/wiki/File:Redtrafficlight.svg

[198] Photo by author.

[199] Gibson, J. J. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin, 1979:263.

[200] Clark, Andy. Supersizing the Mind: Embodiment, Action, and Cognitive Extension (Philosophy of Mind) London: Oxford University Press, 2010:44.

[201] Weick, Karl E., Kathleen M. Sutcliffe, and David Obstfeld. “Organizing and the Process of Sensemaking.” Organization Science, INFORMS 2005;16(4):409-421.

[202] Clark, Andy. Supersizing the Mind: Embodiment, Action, and Cognitive Extension (Philosophy of Mind) London: Oxford University Press, 2010:45-46.

[203] http://theatln.tc/1ybUhH6

[204] Gibson, J. J. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin, 1979:137.

[205] Clark, Andy. Supersizing the Mind: Embodiment, Action, and Cognitive Extension (Philosophy of Mind) London: Oxford University Press, 2010, Kindle locations 1424-28.

[206] Golonka, Sabrina. “Language: A task analysis (kind of).” Posted in Notes from Two Scientific Psychologists, May 25, 2012 (http://bit.ly/1x1ua4A).

[207] I owe this example to Frederick van Amstel, who used it as an instance of an “interaction” during his presentation at the 2012 Interaction conference in Dublin, Ireland.

[208] Mitchell, William J. Placing Words: Symbols, Space, and the City. Cambridge, MA: MIT Press, 2005:11, Kindle edition.

[209] Mitchell, William J. Placing Words: Symbols, Space, and the City. Cambridge, MA: MIT Press, 2005:12, Kindle edition.