Throughout the centuries, especially since the 17th century—the age of Galileo and Newton, Descartes and Bacon, Kepler and Huygens—much ink has been spilled on the questions: What is science? What constitutes scientific knowledge? How does science differ from nonscience? How does scientific knowledge differ from other kinds of knowledge? Despite the vast, animated conversations on these issues, the debate continues. As one modern historian of physics has remarked, every attempt to “fix” the criterion of “scientificity” has failed.4
Etymologically, science is rooted in the Latin adjective scientificus, used by medieval scholars to mean “referring to knowledge that is demonstrable,” as opposed to intuitive knowledge. During the 17th century, the word appears in the name of the Académie des Sciences in Paris (1666). However, in ordinary speech, even into the early 19th century, “science” was often used to mean knowledge acquired by systematic study, or a skill. In Jane Austen’s novel Pride and Prejudice (1813), we find a character who refers to dancing as a science.
In fact, until well into the 19th century, “science” and “philosophy” (especially as “natural philosophy”) were more or less synonymous. In 1833, however, the word scientist was first deployed by the Englishman William Whewell (1794–1866), marking the beginning of the separation of science from philosophy.
During the 20th century, disciplines called philosophy of science, sociology of science, and history of science, and, most recently, the generic science studies, came into being, their object of inquiry being the nature of the scientific enterprise itself. As one might expect, there has been (and continues to be) much debate, discussion, and (of course) disagreement on this matter.
Practicing scientists, however, harbor less anxiety about their trade. They broadly agree that “scientificity” has to do with the method of inquiry. They broadly subscribe to the idea that, in science, one gives primacy to observation and reasoning; that one seeks rational explanations of events in the world; that a highly critical mentality is exercised on a continuous basis by viewing a scientific explanation at all times as tentative—a hypothesis that must always be under public scrutiny, tested either by way of observation or experiment, and rejected if the evidence contradicts the hypothesis; and that scientific knowledge, being about the empirical world, is always incomplete. They also agree that a new piece of scientific knowledge is never an island of its own. It must cohere with other pieces of knowledge scientists have already obtained; it must fit into a network of other ideas, concepts, evidence, hypotheses, and so on.
All of this pertains to the natural sciences. What about the sciences of the artificial?
VI
There is, of course, common ground. The sciences of the artificial, like the natural sciences, give primacy to rational explanation; they demand a critical mentality; they acknowledge the tentativeness and impermanence of explanations; and they involve constructing hypotheses and testing them against reality by way of observation and experiment.
The crucial distinction lies in the nature of the things to be explained. For the natural sciences, these are natural objects. For the artificial sciences, these are artificial objects. Thus, the artificial sciences involve activities entirely missing in the natural sciences: the creation of the things to be explained—a process that (in general) involves the creation of a symbolic representation of the artifact (in some language), and then making (or putting into effect) that artifact in accordance with the representation.5 The former is called design; the latter, implementation.
Design and implementation are thus the twins at the heart of the sciences of the artificial—activities entirely missing in the natural sciences. And the consequences are profound.
First, explanations in a science of the artificial will be of two kinds: (a) hypotheses about whether the design of an artifact satisfies the intended purpose of the artifact and (b) hypotheses about whether the implementation satisfies the design.
Second, a design is something specific. One designs a particular artifact (a bridge, the transmission system for a car model, a museum, a computer’s operating system, and so on). So, really, the design of the artifact is a hypothesis (a theory) that says: If an artifact is built according to this design, it will satisfy the purpose intended for that artifact. A design, then, is a theory of the individual artifact (or a particular class of artifacts), and an implementation is an experiment that tests the theory.
Third, there is a consequence of this view of designs-as-theories. We noted that, within a natural science, an explanation (in the form of a law or a theory or a hypothesis or a model) does not stand on its own. It is like a piece of a jigsaw puzzle that must fit into the jigsaw as a whole. If it does not, it is either ignored or it may lead to a radical overhaul of the overall network of knowledge, even the construction of a new jigsaw puzzle.
A science of the artificial has no such constraints. Because it deals with the design and implementation of artifacts, if the design and then the implementation result in an artifact that meets the intended purpose, success has been achieved. The particular artifact is what matters. The success of the artifact produces new knowledge that then enriches the network of knowledge in that science of the artificial. But, there is no obligation that this new knowledge must cohere with the existing network of knowledge. Thus, although a natural science seeks unified, consistent knowledge about its subject matter, an artificial science may be quite content with fragmented knowledge concerning individual artifacts or individual classes of artifacts.
One of the great glories of (natural) science is its aspiration for what scientists call universal laws and principles, which cover a sweeping range of phenomena that are true for all times and all places. Newton’s law of gravitation, Kepler’s laws of planetary motion, Harvey’s explanation of the circulation of blood, the laws of chemical combination, the theory of plate tectonics in geology, Darwinian natural selection, and Planck’s law are all examples of universal laws or principles.
The search for universals is also not unknown in the sciences of the artificial. For example, explanations of metallurgical techniques such as tempering and annealing, or the behavior of structural beams under load, or the characteristics of transistor circuits all have this universal quality. There is always an aspiration for universal principles in the artificial sciences.
However, the attainment of universal knowledge is not the sine qua non of progress in the sciences of the artificial. Design and implementation of the individual artifact always have primacy. A particular machine, a particular building, a particular system—these are what ultimately matter. If an architect designs a museum that, when built (implemented), serves the purpose for which it was intended, the project is deemed successful. Its relationship to other museums and their architectures may be of great interest to that architect and her colleagues, and to architectural historians, but that relationship is of no consequence as far as that museum project itself is concerned. So also for the design and implementation of a particular computer or a particular kitchen appliance or a particular medical device. Ultimately, a science of the artificial is a science of the individual.
In the chapters that follow, we will witness the unfolding of these ideas in the case of one particular science of the artificial: computer science.
VII
We will be traversing the historical landscape from a particular vantage point: the second decade of the 21st century. We will be looking to the past—admittedly, not a very remote past, because computational artifacts are a relatively recent phenomenon.
One of the dilemmas faced by historians is the following: To what extent do we allow our current circumstances to influence our judgment, assessment, and understanding of the past? This question was first raised famously in 1931 by the (then very young) British historian Herbert Butterfield (1900–1979).6 Discussing the so-called English Whig historians of the 19th century (Whigs were the liberals or progressives, in contrast to the Tories, the conservatives), Butterfield offered a scathing critique of these historians who, he said, valorized or demonized historical figures according to their own 19th-century values. This viewing of the past through the lens of the present thus came to be called, derisively, whiggism or, more descriptively, present-centeredness.
Ever since Butterfield, conventional wisdom has advocated that present-centeredness should be avoided at all costs. The past must be judged according to the context and values of that past, not of the historian’s own time. Yet the fact is that historians select events and people of the past as objects of historical interest in the light of their current concerns and values. The cautionary point is that the historian must negotiate a narrow and tricky path, eschewing judgment of the past according to current values or concerns, yet selecting from the past according to those current concerns. We will also see, in this book, that as 21st-century readers (historians or nonhistorians, scientists or nonscientists, academics or general readers), we often understand aspects of the history of computer science better by appealing to concepts, words, terms, and phrases that are used now. And so, often, I allow the intrusion of present-centered language as a means to understanding things of the past. In other words, I strive to achieve a judicious blend of whiggism and antiwhiggism in this narrative.7
VIII
So, even before we embark on this story of the genesis of computer science, the reader is forewarned about the nature of this science. It is a science of many hues. To summarize:
1. Its domain comprises computational artifacts that can be material, abstract, or in between (liminal), and that can function automatically (that is, with minimal human intervention) to manipulate, process, and transform symbols (or information).
2. It is, thus, a science of symbol processing.
3. Its objective is to understand the nature of computational artifacts and, more fundamentally, their purposes (why they come into the world), and their making (how they come into the world).
4. It is, thus, a science of the artificial.
5. The how of their making comprises collectively the twin processes of design and implementation.
6. In general, design is both the process by which a symbolic representation of an artifact is created and the symbolic representation itself. Implementation is both the process by which a representation is put into effect and the artifact that is the outcome of that process.
7. It is a science of the ought rather than of the is.
8. It is (primarily) a science of the individual.
With these caveats in mind, let us proceed with the story.
NOTES
1. P. Dear (2006). The intelligibility of nature. Chicago, IL: University of Chicago Press.
2. H. A. Simon (1996). The sciences of the artificial (3rd ed.). Cambridge, MA: MIT Press.
3. Liminality refers to a state of ambiguity, of betwixt and between, a twilight state.
4. P. Galison (2010). Trading with the enemy. In M. Gorman (Ed.), Trading zones and interactional expertise (pp. 26–51). Cambridge, MA: MIT Press (see especially p. 30).
5. I have borrowed the phrase putting into effect to signify implementation from P. S. Rosenbloom (2010). On computing: The fourth great scientific domain (p. 41). Cambridge, MA: MIT Press.
6. H. Butterfield (1973). The Whig interpretation of history. Harmondsworth, UK: Penguin Books (original work published 1931).
7. E. Harrison. (1987). Whigs, prigs and historians of science. Nature, 329, 233–234.
1
Leibniz’s Theme, Babbage’s Dream
I
THE GERMAN MATHEMATICIAN Gottfried Wilhelm Leibniz (1646–1716) is perhaps best remembered in science as the co-inventor (with Newton) of the differential calculus. In our story, however, he has a presence not so much because, like his great French contemporary the philosopher Blaise Pascal (1623–1662), he built a calculating machine—Pascal’s machine could add and subtract, whereas Leibniz’s also performed multiplication and division1—as for something he wrote vis-à-vis calculating machines. He wished that astronomers could devote their time strictly to astronomical matters and leave the drudgery of computation to machines, if such machines were available.2
Let us call this Leibniz’s theme, and the story I will tell here is a history of human creativity built around this theme. The goal of computer science, long before it came to be called by this name, was to delegate the mental labor of computation to the machine.
Leibniz died well before the beginning of the Industrial Revolution, circa the 1760s, when the cult and cultivation of the machine would transform societies, economies, and mentalities.3 The pivot of this remarkable historical event was steam power. The use of steam to move machines automatically began with the English ironmonger and artisan Thomas Newcomen (1664–1729) and his invention of the atmospheric steam engine in 1712,4 just 4 years before Leibniz’s passing. But the steam engine as an efficient source of mechanical power, as an efficient means of automating machinery, as a substitute for human, animal, and water power, properly came into being with the invention of the separate condenser in 1765 by the Scottish instrument maker, engineer, and entrepreneur James Watt (1738–1819)—a mechanism that greatly improved the efficiency of Newcomen’s engine.5
The steam engine became, so to speak, the alpha and omega of machine power. It was the prime mover of ancient Greek thought materialized. And Leibniz’s theme, conjoined with the steam engine, gave rise, in the minds of some 19th-century thinkers, to a desire to automate calculation or computation and to free humans of this mentally tedious labor. One such person was the English mathematician, “gentleman scientist,” and denizen of the Romantic Age, Charles Babbage.6
II
Charles Babbage (1791–1871), born into the English upper class, did not need to earn a living. The son of a wealthy banker, he studied at Trinity College, Cambridge, and cofounded with fellow students John Herschel (1792–1871) and George Peacock (1791–1858) the Analytical Society, the purpose of which was to advance the state of mathematics in Cambridge.7 Babbage left Cambridge in 1814, married the same year, and, with the support of an allowance from his father and his wife’s independent income, settled in London to the life of a gentleman scientist, focusing for the next few years on mathematical research.8 In 1816, he was elected a Fellow of the Royal Society (FRS), the most venerable of the European scientific societies, founded in 1662.9
In 1828, the year after he inherited his late father’s estate and became a man of independent means in his own right, and a widower as well,10 Babbage was elected to the Lucasian chair of mathematics in Cambridge—the chair held by Isaac Newton from 1669 to 1702,11 (and, in our own time, by Stephen Hawking from 1979 to 2009), and still regarded as England’s most prestigious chair in mathematics. Babbage occupied this chair until 1839, although—treating the appointment as a sinecure—he never actually took up residence in Cambridge, nor did he deliver a single lecture while he held the chair.
In his memoirs, Passages from the Life of a Philosopher (1864), Babbage claimed that his first thoughts along the lines of Leibniz’s theme came to him while he was still a student in Cambridge, around 1812 to 1813. He was sitting half-asleep in the rooms of the Analytical Society, a table of logarithms open before him. A fellow member of the Society, seeing him in this state, asked what he was dreaming about, to which he replied that he was thinking how these logarithms could be calculated by a machine.12
We do not know the truthfulness of this account. Anecdotes of scientists and poets ideating in a state of semisleep or in a dream are not uncommon. Celebrated examples include the German scientist Friedrich August von Kekulé (1829–1896), who dreamed the structure of the benzene molecule,13 and the English poet Samuel Taylor Coleridge (1772–1834), who imagined the unfinished poem “Kubla Khan” while sleeping under the influence of opium.14
If this is true, the dream must have lain buried in Babbage’s subconscious for a very long time—until about 1819—when, occupied with ways of calibrating astronomical instruments accurately, he began thinking about machines to compute mathematical tables.15 Writing elsewhere in 1822, Babbage mentions working on a set of astronomical tables with his friend, the multidisciplinary scientist Herschel, and discussing with Herschel the possibility of a machine powered by a steam engine for performing the necessary calculations.16
Thus it was that, beginning in 1819, Babbage conceived the idea and began designing the first of his two computational artifacts, the Difference Engine. Its aim was the expression of Leibniz’s theme in a specific kind of way—the fast, automatic, and reliable production of mathematical tables of a certain kind. The name of the machine was derived from the computational procedure it would use to compute the tables, called the method of differences, a method already well known for the manual preparation of tables.17
Babbage tells us what he wanted of his machine. First, it must be “really automatic”—that is, when numbers were supplied to it, it would be able to perform mathematical operations on them without any human intervention.18 From an engineering point of view, this meant that after the numbers were placed in the machine, it would produce results by mechanisms alone—“the mere motion of a spring, a descending weight” or some other “constant force.”19 Second, the machine must be accurate, not only in the generation of numbers, but also in the printed tables, for this was an arena where inaccuracy and unreliability were known to creep in. This meant that the computing machine must be coupled directly with the printing device and, in fact, must drive the latter automatically so that no error-prone human intervention would be admitted.20
Mechanizing the preparation of mathematical tables would not only free human mental labor for other less tedious tasks, but also would speed up the process and eliminate human fallibility and replace it with machine infallibility. We are seeing here an elaboration of Leibniz’s theme and of what Babbage had apparently dreamed of some half-dozen years before.
The Difference Engine was to be a “special-purpose” machine, because it could produce mathematical tables only. However, by deploying the method of differences, it was general within the confines of this special-purposeness; the method of differences offered a general principle by which all tables might be computed by a single, uniform process.21
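To make this principle concrete, here is a minimal illustrative sketch in modern Python (an anachronism, of course; Babbage’s engine was mechanical, and the function name, polynomial, and starting values below are assumed purely for illustration rather than taken from his designs). For a polynomial of degree n, the nth finite differences of successive values are constant, so once the first value and its differences are known, every further table entry can be produced by addition alone, which is precisely the single, uniform operation the method of differences relies on.

# Method of differences, sketched in Python (illustrative only; not Babbage's mechanism).
# For a polynomial of degree n, the n-th finite differences are constant, so a table of
# its values can be extended using nothing but repeated addition.
def tabulate_by_differences(initial_differences, count):
    # initial_differences = [f(0), first difference, second difference, ...]
    diffs = list(initial_differences)
    values = []
    for _ in range(count):
        values.append(diffs[0])
        # Update each difference by adding in the next-higher-order difference.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

# Example: f(x) = x*x + x + 41 has constant second differences equal to 2,
# so starting from f(0) = 41, first difference 2, second difference 2:
print(tabulate_by_differences([41, 2, 2], 6))  # [41, 43, 47, 53, 61, 71]

Note that the sketch never multiplies or evaluates the polynomial directly; each new value falls out of a cascade of additions, which is why the procedure lent itself to mechanization by gears and ratchets.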