It Began with Babbage



  6

  Intermezzo

  I

  BY THE END OF World War II, independent of one another (and sometimes in mutual ignorance), a small assortment of highly creative minds—mathematicians, engineers, physicists, astronomers, and even an actuary, some working in solitary mode, some in twos or threes, others in small teams, some backed by corporations, others by governments, many driven by the imperative of war—had developed a shadowy shape of what the elusive Holy Grail of automatic computing might look like. They may not have been able to define a priori the nature of this entity, but they were beginning to grasp how they might recognize it when they saw it. Which brings us to the nature of a computational paradigm.

  II

  Ever since the historian and philosopher of science Thomas Kuhn (1922–1996) published The Structure of Scientific Revolutions (1962), we have all become ultraconscious of the concept and significance of the paradigm, not just in the scientific context (with which Kuhn was concerned), but in all intellectual and cultural discourse.1

  A paradigm is a complex network of theories, models, procedures and practices, exemplars, and philosophical assumptions and values that establishes a framework within which scientists in a given field identify and solve problems. A paradigm, in effect, defines a community of scientists; it determines their shared working culture as scientists in a branch of science and a shared mentality. A hallmark of a mature science, according to Kuhn, is the emergence of a dominant paradigm to which a majority of scientists in that field adhere and with which they broadly, although not necessarily in detail, agree. In particular, they agree on the fundamental philosophical assumptions and values that oversee the science in question; its methods of experimental and analytical inquiry; and its major theories, laws, and principles. A scientist “grows up” inside a paradigm, beginning from his earliest formal training in a science in high school, through undergraduate and graduate schools, through doctoral work into postdoctoral days. Scientists nurtured within and by a paradigm more or less speak the same language, understand the same terms, and read the same texts (which codify the paradigm).

  However, rather like a nation’s constitution, a paradigm is never complete or entirely unambiguous. There are gaps of ignorance within it that need to be filled—clarifications, interpretations, and unknowns that must be known, and open problems that must be solved. These are the bread-and-butter activities of most practitioners of that science. Kuhn called the sum of these activities normal science. In doing normal science, the paradigm as a whole is never called into question; rather, its details are articulated.

  We will see, as our story unfolds, that there is much more to Kuhn’s theory of paradigms and how it can explain scientific change. We also note that Kuhn’s theory has been explored widely and criticized severely.2 But here, rather as he had postulated paradigms as frameworks for doing science, we can use his theory of paradigms as a framework for interpreting history, to lend some shape to this unfolding history of computer science.

  Let us consider, for our immediate purpose, one of his key historical insights. This is the situation in which a paradigm has yet to emerge within a discipline. The absence of a paradigm—the preparadigmatic stage—marks a science that is still immature and perhaps even marks uncertainty that it is a science. In this condition, there might exist several “competing schools and subschools of thought.”3 They vie with one another, with each school having its own fierce adherents. They may agree on certain aspects of their burgeoning discipline, but they disagree on other vital aspects. In fact, according to Kuhn, leaving aside such fields as mathematics and astronomy, in which the first paradigms reach back to antiquity, this situation is fairly typical in the sciences.4

  And, in the absence of a shared framework, in the absence of a paradigm, anything goes. Every fact or observation gleaned by the practitioners of an immature science seems relevant, perhaps even equally significant.

  III

  This was the situation in computing circa 1945. No one had yet ventured to speak of a science of computing, let alone something as precise as a disciplinary name such as computer science. As we have seen, even the word computer was not yet widely in place to signify the machine rather than the person. For a science of computing to be spoken of, there had to be some semblance of a paradigm to which the current, few dozen practitioners of the field could pay allegiance. There was no solid evidence of a paradigm—yet. On the other hand, certain elements had emerged as common ground—in fact, some reaching back to Babbage himself.

  First, the central focus of all the protagonists in this story so far, beginning with Babbage, was a machine to perform automatic computation: a computational artifact (see Prologue). This artifact was basically a material one, and so the physical technology was always at the forefront in the minds of the people involved. Yet (again, beginning with Babbage and his sometime collaborator Lovelace), the material artifact was not an island of its own. Unlike almost all material artifacts that had ever been invented and built before, there was an intellectual activity involved in preparing a problem to be solved by automatic computing machines. As yet, there was no agreed-on name for this activity or its product. The term program was still some way off.

  Second, a fundamental organization of an automatic computing machine—its internal architecture—had been clarified: there must be a means of providing the machine with information, and a means by which the results of computation could be communicated to the user—input and output devices. There must be a store or memory to hold the information to be used in a computation or the results of computation. There must be an arithmetic unit that can actually carry out the computations. Even the possibility of parallel processing—using two or more arithmetic units, even multiple input and output devices—was “in the air.” There was also the possibility of specialized units for specific kinds of mathematical operations such as multiplication and the extraction of square roots, or for operations to “look up” mathematical tables. There must be a means for controlling the execution of a computational task and a means for specifying what the computational task is to be.

  Third, the distinction between special-purpose and general-purpose computers was rather vague. The machines that had been conceived or actually built and used thus far were designed to perform specific kinds of computational tasks (some very specific, some spanning a range of problems within a problem class). The dominant class of problems for which computing machines were developed, up to this point, was mathematical or, at least, numeric. The Colossus, in contrast, was specialized toward the class of logical (or, equivalently, Boolean) problems. A general-purpose machine must provide capabilities to process tasks spanning different classes of problems. This means that the physical machine itself must provide the means for the efficient execution of these different tasks. Such capability was as yet lacking.

  Fourth, as noted earlier, the words programmable and computer program had yet to emerge. The terms still in common use circa 1945 were “paper tape controlled” or “plugboard controlled”. Zuse, as we have seen, used the term “computational plan”, which is perhaps closest to program. Aiken and Hopper spoke of “sequence tape”. But the idea of programmability, reaching back to Babbage and Lovelace, was, circa 1945, a shared concept.

  Fifth, and last, certain other terms had emerged to form the nucleus of a computing vocabulary: “floating-point representation”, “binary”, and “binary coded decimal” in the context of numbers. Another was “register” to signify the individual units of information storage, linked either directly with arithmetic units or as collections to serve as the machine’s memory.
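  The three number-representation terms named above can be made concrete. The following sketch is my own illustration, not the book's, showing the decimal value 13 in pure binary, in binary-coded decimal, and as a mantissa/exponent (floating-point) pair:

```python
import math

def to_binary(n):
    """Pure binary representation: 13 -> '1101'."""
    return bin(n)[2:]

def to_bcd(n):
    """Binary-coded decimal: each decimal digit coded separately
    in 4 bits, so 13 -> '0001 0011' (the digits 1 and 3)."""
    return " ".join(format(int(d), "04b") for d in str(n))

def to_floating_point(x):
    """Floating point: a number expressed as mantissa * 2**exponent."""
    mantissa, exponent = math.frexp(x)  # x == mantissa * 2**exponent
    return mantissa, exponent

print(to_binary(13))          # 1101
print(to_bcd(13))             # 0001 0011
print(to_floating_point(13))  # (0.8125, 4), since 0.8125 * 2**4 == 13
```

  The contrast is the one the pioneers debated: pure binary is compact and suits binary arithmetic circuits, whereas binary-coded decimal keeps each decimal digit separately addressable, easing conversion for human-readable input and output.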

  This much seemed to be agreed on. However, there were different opinions and views on other fundamental matters. How should numbers be represented? Some had come to appreciate the advantages of binary notation whereas others clung to the familiar decimal system. How large should the unit of information storage (in present-centered language, the word size) be? What should be the form of the computational plan?

  Then there was the matter of the physical technology of computers. Purely mechanical technology—gears, levers, cams, sprocket and chain, the stuff of kinematics, the domain of mechanical engineering—still prevailed, but was giving way to the guile of electrical technology. Electrical relays and electromagnets had become the preferred and trusted physical basis for building computing machines. There was even an elegant mathematics—Boolean algebra—that could be applied to the design of the binary switching circuits built out of such electrical components.
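  To illustrate that last point (my sketch, not the author's): Boolean algebra lets a switching circuit be specified as logical expressions over two-valued variables. A one-bit adder, of the kind such relay circuits realized, can be written directly in Boolean terms:

```python
# A one-bit half adder and full adder expressed purely in the
# Boolean operations AND, OR, NOT -- the algebraic description
# that could equally be realized as a relay switching circuit.

def half_adder(a, b):
    """Add two bits: sum = a XOR b, carry = a AND b."""
    total = (a and not b) or (not a and b)  # XOR built from AND/OR/NOT
    carry = a and b
    return int(total), int(carry)

def full_adder(a, b, c_in):
    """Chain two half adders to add three bits."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, c_in)
    return s2, int(c1 or c2)

print(full_adder(1, 1, 1))  # (1, 1): one plus one plus one is binary 11
```

  The design choice mirrors the historical insight: once a circuit of relays is described algebraically, it can be analyzed, simplified, and verified on paper before any hardware is built.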

  However, as World War II raged on, the imperative of faster means of computation became more urgent, and the lure of electronic circuit elements became increasingly attractive. On August 14, 1945, 5 days after America exploded its second atomic bomb on Nagasaki, the Japanese surrendered. The Germans had already surrendered in May. World War II was finally over. The state of computing was scarcely in the minds of anyone in the world, save for a few dozen of those who were involved in its development before and during the war years—in America, Britain, and Germany. But for these few people, the state of computing and computing machines mattered. In the light of a Kuhnian framework, however, the situation was very much a preparadigmatic one.

  IV

  An aspect of this preparadigmatic state included the larger theoretical questions: What kind of discipline was computing? Was it a discipline at all?

  We noted at the beginning of this book that scientists, as a community, agree implicitly and broadly that what makes a discipline scientific is, above all, its methodology (see Prologue, Section V)—the use of observation, experimentation, and reasoning; a critical stance; and an ever-preparedness to treat explanations (in the form of hypotheses, theories, laws, or models) as tentative and to discard or revise them if the evidence demands this.

  In the artificial sciences, explanations are about artifacts, not nature. Here, scientists address the question whether such and such an artifact satisfies its intended purpose (see Prologue, Section III). We also noted that, in the case of artifacts of any reasonable complexity, design and implementation are activities that lie at the heart of the relevant artificial sciences, activities missing in the natural sciences. Designs serve as theories about a particular artifact (or class of artifacts), and implementations serve as experiments to test the validity of the theories (see Prologue, Section VI).

  We have observed thus far in this story the emergence of several of these features. From as far back as Charles Babbage, we see, for example, the separation of design from implementation. In fact, in Babbage’s case, it was all design, never implementation. It was left to others to implement the Difference Engine and to test Babbage’s theory. The Analytical Engine was detailed in theory, but the theory was never tested.

  With the advent of electromechanical machines, we observe the strongly empirical/experimental flavor of computing research. The families of machines, whether at Bell Laboratories, IBM, Bletchley Park, or Zuse’s workplace in Germany, reveal the emphasis on building individual, specific machines; ascertaining their appropriateness for their intended purposes; revising or modifying the design in the light of their performance; or even creating a new design because of changes in purposes and new environmental factors (such as the availability of new technologies).

  Although computing research had enjoyed a very short life thus far, evolution was in evidence. Phylogenies were created. However, this evolutionary process was not Darwinian, for the latter demands lack of purpose, randomness, chance. Rather, it was evolution driven by purpose. Each member of an evolutionary family was the product of an intended goal or purpose; of a design that constituted a theory of the proposed artifact and a hypothesis that, if the artifact was built according to the design, it would satisfy the intended purpose; of an implementation that tested the theory; and of a subsequent modification or revision of the design, in light of the experiment or of changed purposes, yielding a new design. Each design became a theory of (or for) a particular computing machine. Each implementation became an extended experiment that tested the associated theory.

  Almost as an aside stood Alan Turing’s work. His abstract machine was a purely logico-mathematical device, albeit a device quite alien to most mathematicians and logicians of the time. At this stage of our story, Turing’s machine stands in splendid isolation from the empirical, experimental design-as-theory, implementation-as-experiment work that was going on in Britain, the United States, and Germany before and during the war years. The Turing machine had had no impact on the design of computing machines thus far. Even in Bletchley Park, despite the fact that Turing had worked there, despite the fact that many of his colleagues there knew of his 1936 paper, the architecture of the Colossus was quite uninfluenced by Turing’s machine.5

  On the other hand, the inventors and designers of computing machines during the 1930s and throughout the war years, be they mathematicians, physicists, astronomers, or engineers, clearly envisioned their machines as mathematical and scientific (and, in the case of the Colossus, logical) instruments. In this sense, they were mathematical machines.

  The data-processing, punched-card machines pioneered by Hollerith and evolved by such companies as IBM during the first third of the 20th century were, if not mathematical, certainly number processors. So mathematics in some form or another was the central preoccupation of these designers. Unlike the Turing machine, though, they were real artifacts. They had to be built. And machine building was the stuff of engineering.

  This is where matters stood circa 1945. If people recognized that a discipline of computing was emerging, they had no name for it, nor was there a firmly established shared framework, a paradigm, in place. At best, these early pioneers may have thought their unnamed craft lay in a kind of no-man’s land between a new kind of mathematics and a new kind of engineering.

  NOTES

  1. T. S. Kuhn. (1970). The structure of scientific revolutions (2nd ed.). Chicago, IL: University of Chicago Press (original work published 1962).

  2. The literature on Kuhn’s theory of paradigms is vast. An early set of responses is the collection of essays in I. Lakatos & A. Musgrave. (Eds.). (1970). Criticism and the growth of knowledge. Cambridge, UK: Cambridge University Press. An important later critique is L. Laudan. (1977). Progress and its problems. Los Angeles, CA: University of California Press. See also G. Gutting. (1980). Paradigms and revolutions. Notre Dame, IN: University of Notre Dame Press. For a more recent critical study of Kuhn, see S. Fuller. (2000). Thomas Kuhn: A philosophical history for our times. Chicago, IL: University of Chicago Press. This book also has an extensive bibliography on Kuhn and his theory. Kuhn’s own thoughts following the publication of the second edition of Structure, in response to his critics, are published in J. Conant & J. Haugeland. (Eds.). (2000). The road since Structure. Chicago, IL: University of Chicago Press.