Chapter 7 Computational thinking

A certain mentality

Most sciences in the modern era—say, after the Second World War—are so technical, indeed esoteric, that their deeper comprehension remains largely limited to the specialists, the community of those sciences’ practitioners. Think, for example, of the modern physics of fundamental particles. At best, when relevant, their implications are revealed to the larger public by way of technological consequences.

Yet there are some sciences that touch the imagination of those outside the specialists by way of the compelling nature of their central ideas. The theory of evolution is one such instance from the realm of the natural sciences. Its tentacles of influence have extended into the reaches of sociology, psychology, economics, and even computer science, fields of thought having nothing to do with genes or natural selection.

Among the sciences of the artificial, computer science manifests a similar characteristic. I am not referring to the ubiquitous and ‘in your face’ technological tools which have colonized the social world. I am referring, rather, to the emergence of a certain mentality.

This mentality, or at least its promise, was articulated passionately and eloquently by one of the pioneers of artificial intelligence, Seymour Papert, in his book Mindstorms (1980). His aim in this work, Papert announced, was to discuss and describe how the computer might afford human beings new ways of learning and thinking, not only as a practical, instrumental artefact but in much more fundamental, conceptual ways. Such influences would facilitate modes of thinking even when the thinkers were not in direct contact with the physical machine. For Papert, the computer held promise as a potential ‘carrier of powerful ideas and of seeds of cultural change’. His book, he promised, would speak of how the computer could help humans fruitfully transgress the traditional boundaries between objective knowledge and self-knowledge, and between the humanities and the sciences.

What Papert was articulating was a vision, perhaps utopian, that went well beyond the purely instrumental influence of computers and computing in the affairs of the world. That instrumental influence had existed from the very beginnings of automatic computation, in the time of Charles Babbage and Ada, Countess of Lovelace, in the mid-19th century. Papert’s vision, rather, was the inculcation of a mentality that would guide, shape, and influence the ways in which a person would think about, perceive, and respond to aspects of the world—one’s inner world and the world outside—that prima facie have no apparent connection to computing, perhaps by way of analogy, metaphor, and imagination.

Over a quarter of a century after Papert’s manifesto, computer scientist Jeannette Wing gave this mentality a name: computational thinking. But Wing’s vision is perhaps more prosaic than Papert’s. Computational thinking, she wrote in 2008, entails approaches to such activities as problem solving, designing, and making sense of intelligent behaviour that draw on fundamental concepts of computing. Yet computational thinking cannot be an island unto itself. In the realm of problem solving it would be akin to mathematical thinking; in the domain of design it would share features with the engineering mentality; and in understanding intelligent systems (including, of course, the mind) it might find common ground with scientific thinking.

Like Papert, Wing disassociated the mentality of computational thinking from the physical computer itself: one can think computationally without the presence of a computer.

But what does this mentality of computational thinking entail? We will see some examples later, but before that let us follow AI researcher Paul Rosenbloom’s interpretation of the notion of computational thinking in terms of two kinds of relationships. One is interaction, a concept introduced earlier (see Chapter 2) to mean, in Rosenbloom’s phrase, ‘reciprocal action, effect or influence’ between two entities. Interaction can signify the unidirectional influence of one system A on another system B (notationally, Rosenbloom depicted this as ‘A ➔ B’ or ‘B ➔ A’) as well as bidirectional or mutual influence (notationally, ‘A ↔ B’). The other relationship is implementation, by which Rosenbloom meant to ‘put into effect’ a system A at a higher abstraction level in terms of interacting processes within a system B at a lower level of abstraction (notationally, ‘A/B’). A special case of implementation is simulation: B simulates A (A/B) when B acts to imitate or mimic the behaviour of A.

Using these two relationships, Rosenbloom explained, the simplest representation of computational thinking is when a computational artefact (C) influences the behaviour of a human being (H): C ➔ H. Rosenbloom then went further. Instead of just a human being H, suppose we consider a human simulating a computational artefact C: C/H. In this case we have the relationship C ➔ C/H, meaning that computational artefacts influence human beings who simulate the behaviour of such artefacts. Or we may go still further: consider a human being H mentally simulating a computational artefact C which itself implements or simulates the behaviour of some real-world domain D: D/C/H. For example, suppose D is human behaviour. Then D/C means using a computer to simulate or model human behaviour. And D/C/H means a human being mentally simulating such a computer model of human behaviour. This leads to the following interpretation of computational thinking: C ➔ D/C/H.
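
Purely as an illustration of Rosenbloom’s notation (and not his own formalization), the sketch below encodes the two relationships as simple Python data structures; all class and entity names here are invented for the example.

from dataclasses import dataclass

# A purely illustrative encoding of Rosenbloom's two relationships.
# The class and entity names are invented for this sketch.

@dataclass
class Entity:
    name: str

@dataclass
class Interaction:            # A ➔ B : A influences B
    source: Entity
    target: Entity

@dataclass
class Implementation:         # A/B : A is implemented (or simulated) by B
    higher: Entity            # the higher-level system (A)
    lower: Entity             # the lower-level system (B)

# D/C/H : a human (H) mentally simulates a computer model (C) of a domain (D)
D = Entity('human behaviour')            # a real-world domain
C = Entity('computer model of D')        # a computational artefact
H = Entity('human being')

model = Implementation(higher=D, lower=C)        # D/C
mental_run = Implementation(higher=C, lower=H)   # C/H, giving D/C/H overall

# C ➔ D/C/H : the artefact influences the human who mentally simulates it
influence = Interaction(source=C, target=H)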

More nuanced interpretations are possible, but these interpretations in terms of interaction and implementation/simulation suffice to illustrate the general scope of computational thinking.

Computational thinking as mental skills

The most obvious influence computing can exercise on people is as a source of mental skills: a repertoire of analytical and problem solving tools which humans can apply in the course of their lives regardless of the presence or absence of actual computers. This was what Jeannette Wing had in mind. In particular, she took abstraction as the ‘essence’, the ‘nuts and bolts’ of computational thinking. But while (as we have seen throughout this book) abstraction is undoubtedly a core computational concept, computer science offers many more notions that one may assimilate and integrate into one’s kit of thinking tools. I am thinking of heuristic methods, weak and strong; the idea of satisficing rather than optimizing as a realistic decision-making objective; of thinking in algorithmic terms and comprehending when and whether this is the appropriate pathway to problem solving; of the conditions and architecture of parallel processing as means for approaching multitasking endeavours; of approaching a problem situation from the ‘top down’ (beginning with the goal and the initial problem state, and refining the goal into simpler subgoals, and the latter into still simpler subgoals, etc.) or ‘bottom up’ (beginning with the lowest level building blocks and constructing a solution by composing them into larger building blocks, and so on). But what is significant is that to acquire these tools of thought demands a certain level of mastery of the concepts of computer science. For Wing this entails introducing computational thinking as part of the educational curriculum from an early age.
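
To make one of these tools of thought concrete, here is a minimal sketch of top-down refinement in Python, in which a goal is recursively decomposed into subgoals until each can be solved directly; the example task and its decomposition are invented for illustration and are not drawn from Wing’s writings.

# Illustrative only: top-down refinement of a goal into simpler subgoals.
# The task ('host dinner') and its decomposition are invented examples.

def solve(goal, decompose, primitive_solvers):
    """Refine `goal` into subgoals until each can be solved directly."""
    if goal in primitive_solvers:                 # simple enough to solve outright
        return [primitive_solvers[goal]]
    steps = []
    for subgoal in decompose[goal]:               # refine into simpler subgoals
        steps.extend(solve(subgoal, decompose, primitive_solvers))
    return steps

decompose = {
    'host dinner': ['plan menu', 'prepare food'],
    'prepare food': ['shop', 'cook'],
}
primitive_solvers = {
    'plan menu': 'choose three courses',
    'shop': 'buy ingredients',
    'cook': 'follow the recipes',
}

print(solve('host dinner', decompose, primitive_solvers))
# ['choose three courses', 'buy ingredients', 'follow the recipes']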

But computational thinking entails more than analytical and problem solving skills. It encompasses a way of imagining, by way of seeing analogies and constructing metaphors. It is this combination of technical skills and imagination that, I think, Papert had in mind, and which provides the full richness of the mentality of computational thinking. We consider now some realms of intellectual and scientific inquiry where this mentality has proved to be effective.

Thinking computationally about the mind

Certainly, one of the most potent, albeit controversial, manifestations of this mentality is in thinking about thinking: the influence of computer science on cognitive psychology. Turning Turing’s celebrated question—whether computers can think (the basis of AI)—on its head, cognitive psychologists consider the question: Is thinking a computational process?

The response to this question reaches back to the pioneering work of Allen Newell and Herbert Simon in the late 1950s, in their development of an information processing theory of human problem solving which combined such computational issues as heuristics, levels of abstraction, and symbol structures with logic. Much more recently it has led to the construction of models of cognitive architecture, most prominently by researchers such as psychologist John Anderson and computer scientists Allen Newell, John Laird, and Paul Rosenbloom. Anderson’s series of models, called, generically, ACT, and that of Newell et al., called SOAR, were both strongly influenced by the basic principles of inner computer architecture (see Chapter 5). In these models the architecture of cognition is explored in terms of memory hierarchies holding symbol structures that represent aspects of the world, and the manipulation and processing of symbol structures by processes analogous to the instruction interpretation cycle (ICycle). These architectural models have been extensively investigated both theoretically and empirically as possible theories of the thinking mind at a certain abstraction level. Another kind of computationally influenced model of the mind begins with the principles of parallel processing and distributed computing, and envisions mind as a ‘society’ of distributed, communicating, and interacting cognitive modules. An influential proponent of this kind of mental modelling was AI pioneer Marvin Minsky. As for cognitive scientist and philosopher Margaret Boden, she titled her magisterial history of cognitive science Mind as Machine (2006): the mind is a computational device, by her account.
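
To suggest the flavour of this analogy with the instruction interpretation cycle, the sketch below runs a toy production system: condition-action rules repeatedly matched against, and applied to, a working memory of symbol structures. It is a drastic simplification invented for illustration and is not the ACT or SOAR architecture.

# A toy production system: a working memory of symbol structures plus
# condition-action rules, run in a match-select-apply loop loosely
# analogous to a computer's fetch-decode-execute cycle.
# This is a caricature for illustration, not ACT or SOAR.

working_memory = {'goal: make tea', 'kettle is cold'}

rules = [
    # (name, conditions that must hold, facts to add when fired)
    ('boil-water', {'goal: make tea', 'kettle is cold'}, {'kettle is hot'}),
    ('brew',       {'goal: make tea', 'kettle is hot'},  {'tea is ready'}),
]

def cognitive_cycle(memory, rules, max_cycles=10):
    for _ in range(max_cycles):
        # Match: rules whose conditions hold and whose additions are still new.
        candidates = [r for r in rules
                      if r[1] <= memory and not r[2] <= memory]
        if not candidates:
            break                                 # nothing left to do
        name, _, additions = candidates[0]        # Select: take the first match
        memory |= additions                       # Apply: modify working memory
        print('fired', name, '->', sorted(memory))
    return memory

cognitive_cycle(working_memory, rules)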

The computational brain

Representing or modelling the neuronal structure of the brain as a computational system and, conversely, computational artefacts as networks of highly abstract neuron-like entities has a history that reaches back to the pioneering work of mathematician Walter Pitts and neurophysiologist Warren McCulloch, and the irrepressible John von Neumann, in the 1940s. Over the following sixty years a scientific paradigm called connectionism evolved. In this approach, the mentality of computational thinking is expressed most specifically in the design of highly interconnected networks (hence the term ‘connectionism’) of very simple computational elements which collectively serve to model the behaviour of basic brain processes that are the building blocks of higher cognitive processes (such as detecting cues or recognizing patterns in visual processes). Connectionist architectures of the brain are at a lower abstraction level than the symbol processing cognitive architectures mentioned in the previous section.
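
For flavour, here is a minimal sketch of the kind of very simple computational element from which such networks are built, in the spirit of the McCulloch-Pitts abstract neuron; the weights and threshold below are arbitrary values chosen for illustration.

# A McCulloch-Pitts style abstract neuron: it 'fires' (outputs 1) when the
# weighted sum of its inputs reaches a threshold. The weights and threshold
# are arbitrary values chosen for illustration.

def neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# A two-input unit wired so that it behaves like a logical AND gate:
weights, threshold = [1.0, 1.0], 2.0
for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, '->', neuron(pattern, weights, threshold))

# Connectionist models interconnect many such simple units, so that
# pattern-recognition behaviour emerges from the network as a whole.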

The emergence of cognitive science

Symbol processing cognitive architectures of mind and connectionist models of the brain are two of the ways in which computational artefacts and the principles of computer science have influenced the shaping and emergence of the relatively new interdisciplinary field of cognitive science. I must emphasize that not all cognitive scientists—for instance the psychologist Jerome Bruner—take computation to be a central element of cognition. Nonetheless, the idea of understanding such activities as thinking, remembering, planning, problem solving, decision making, perceiving, and conceptualizing by way of constructing computational models and computation-based hypotheses is a compelling one; in particular, the view of computer science as a science of automatic symbol processing served as a powerful catalyst in the emergence of cognitive science itself. The core of Margaret Boden’s history of cognitive science, mentioned in the previous section, is the development of automatic computing.

Understanding human creativity

The fascinating subject of creativity, ranging from the exceptional, historically original kind to the personal, everyday brand, is a vast topic that has attracted the professional attention of psychologists, psychoanalysts, philosophers, pedagogues, aestheticians, art theorists, design theorists, and intellectual historians and biographers; not to speak of the more self-reflexive creators themselves (scientists, inventors, poets and writers, musicians, artists, etc.). The range of approaches to, models and theories of, creativity is, accordingly, bewilderingly large, not least because of the many definitions of creativity.

But at least one community of creativity researchers has resorted to computational thinking as a modus operandi. They have proposed computational models and theories of the creative process that draw heavily on the principles of heuristic computing, representation of knowledge as complex symbol structures (called schemas), and the principles of abstraction. Here too, such is its compelling influence, computational thinking has afforded a common ground for the analysis of scientific, technological, artistic, literary, and musical creativity: a marriage of many cultures of the kind Papert had hoped for.

For example, literary scholar Mark Turner has applied computational principles to the problem of understanding literary composition, just as philosopher of science and cognitive scientist Paul Thagard strove to explain scientific revolutions by way of computational models, and the present author, a computer scientist and creativity researcher, constructed a computational explanation for the design and invention of technological artefacts and ideas in the artificial sciences. The mentality of computational thinking has served as the glue that binds these different intellectual and creative cultures into one. In many of these computational studies of creativity, computer science has provided a formerly absent precision with which to express concepts pertaining to creativity.

To take an example, the writer Arthur Koestler in his monumental work The Act of Creation (1964) postulated a process called ‘bisociation’ as the mechanism by which creative acts are effected. By bisociation, Koestler meant the coming together of two or more unconnected concepts and their blending, resulting in an original product. However, precisely how bisociation occurred remained unexplained. Computational thinking has afforded some creativity researchers (such as Mark Turner and this writer) explanations of certain bisociations in the precise language of computer science.
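
As a toy illustration of what a computational rendering of bisociation might look like (it reproduces no published model, whether Koestler’s, Turner’s, or this author’s), two ‘schemas’ represented as attribute dictionaries can be blended into a composite structure:

# Toy illustration of bisociation as the blending of two schemas.
# The schemas and the blending rule are invented for this sketch.

def blend(schema_a, schema_b):
    """Combine two schemas, keeping clashes between shared slots visible."""
    blended = dict(schema_a)                        # start from the first schema
    for slot, value in schema_b.items():
        if slot not in blended:
            blended[slot] = value                   # slots unique to the second schema
        elif blended[slot] != value:
            blended[slot] = (blended[slot], value)  # a clash: retain both values
    return blended

boat = {'medium': 'water', 'moves_by': 'sail', 'carries': 'people'}
sledge = {'medium': 'ice', 'moves_by': 'runners', 'carries': 'cargo'}

print(blend(boat, sledge))
# a crude 'ice-boat' blend combining properties of both source concepts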

Understanding molecular information processing

In 1953, James Watson and Francis Crick famously discovered the structure of the DNA molecule. Thus was initiated the science of molecular biology. Its concerns included understanding and discovering such mechanisms as the replication of DNA, transcription of DNA to RNA, and translation of RNA into protein—fundamental biological processes. Thus the notion of molecules as carriers of information entered the biological consciousness. Theoretical biologists influenced by computational ideas began to model genetical processes in computational terms (which, incidentally, also led to the invention of algorithms based on genetical concepts). Computational thinking shaped what was called ‘biological information processing’ or, in contemporary jargon, bioinformatics.
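
To suggest why these processes invite a computational reading, here is a minimal sketch of transcription and translation treated as transformations on strings of symbols; the codon table is only a tiny excerpt of the real genetic code, and the DNA fragment is made up.

# Transcription and translation viewed as transformations on symbol strings.
# The codon table is a small excerpt of the genetic code; the DNA fragment
# is an invented example.

CODON_TABLE = {
    'AUG': 'Met',   # start codon (methionine)
    'UUU': 'Phe',   # phenylalanine
    'GCU': 'Ala',   # alanine
    'GGC': 'Gly',   # glycine
    'UAA': 'STOP',  # stop codon
}

def transcribe(dna_coding_strand):
    """DNA -> RNA: the RNA copy replaces thymine (T) with uracil (U)."""
    return dna_coding_strand.replace('T', 'U')

def translate(rna):
    """RNA -> protein: read triplets (codons) until a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE.get(rna[i:i + 3], '???')
        if amino_acid == 'STOP':
            break
        protein.append(amino_acid)
    return protein

dna = 'ATGTTTGCTGGCTAA'              # invented coding-strand fragment
rna = transcribe(dna)                # 'AUGUUUGCUGGCUAA'
print(translate(rna))                # ['Met', 'Phe', 'Ala', 'Gly']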

Epilogue: is computer science a universal science?

Throughout this book the premise has been that computer science is a science of the artificial: that it is centred on symbol processing (or computational) artefacts; that it is a science of how things ought to be rather than how things are; that the goals of the artificers (algorithm designers, programmers, software engineers, computer architects, informaticists) must be taken into account in understanding the nature of this science. In all these respects the distinction from the natural sciences is clear.

However, in the last chapter we have seen that computational thinking serves as a bridge between the world of computational artefacts and the natural world, specifically, that of biological molecules, human cognition, and neuronal processes. Could it be, then, that computing not only affords a mentality but that, more insidiously, computation as a phenomenon embraces the natural and the artificial? That computer science is a universal science?

In recent years some computer scientists have thought precisely along these lines. Thus, Peter Denning has argued that computing should no longer be thought of as a science of the artificial, since information processes are abundantly found in nature. Denning and another computer scientist, Peter Freeman, have contended that in the past few decades the focus of (some computer scientists’) attention has shifted from computational artefacts to information processes per se—including natural processes.

For Denning, Freeman, and yet another computer scientist, Richard Snodgrass, computing is thus a natural science, since computer scientists are as much in the business of discovering how things are (in the brain, in the living cell, and even in the realm of computational artefacts) as in elucidating how things ought to be. This point of view implies that computational artefacts are of the same ontological category as natural entities, or that there is no distinction to be made between the natural and the artificial. Snodgrass, in fact, invented a word for this natural science of computing: ‘Ergalics’, from the Greek root ‘ergon’ (εργον), meaning ‘work’.

Paul Rosenbloom, in broad agreement with Snodgrass, but wishing to avoid a neologism, simply identified the computer sciences, alongside the physical, life, and social sciences, as the ‘fourth great scientific domain’.

The uniqueness of computer science as constituting a paradigm of its own has been an abiding theme of this book, and so Rosenbloom’s thesis is consistent with this theme. The question is whether one should distinguish between the study of natural information processes and that of artificial symbolic processes. Here, the distinction between information and symbol seems justified. In the natural domain, entities such as neurons, the nucleotides that are the building blocks of DNA, or the amino acids constituting proteins do not represent anything but themselves. Thus, I find it problematic to refer to DNA processing as symbol processing, though to refer to these entities as carriers of non-referential information seems valid.

Ontologically, I think, a distinction has to be made between computer science as a science of the artificial and computer science as a natural science. In the former, human agency (in the form of goals and purpose, accessing knowledge, effecting action) is part of the science. In the latter case, agency is avowedly absent. The paradigms are fundamentally distinct.

Be that as it may, and regardless of any such possible ontological difference, what computer science has given us, as the preceding chapters have tried to show, is a remarkably distinctive way of perceiving, thinking about, and solving a breathtakingly broad spectrum of problems—spanning natural, social, cultural, technological, and economic realms. This is surely its most original scientific contribution to the modern world.

Further reading

The reader may wish to study the topics of the various chapters in more depth. The following list is a mix of some classic and historically influential (and still eminently readable) works and more contemporary texts; a mix of essays and historical works written for a broad readership and somewhat more technical articles.