The Web of Learning:
Staying Real in a Virtual World

©2005 Michael S. Mahoney

Program in History of Science, Princeton University

Lecture delivered at the University of South Carolina in Celebration of the College of Arts and Sciences, 6 April 2005.

The Web of Information

We are gathered at an exciting time in the life of your university and in the lives of universities in general. In the presence of the computer and the Internet, we are rethinking the relations among the disciplines and how we talk to one another and inform one another. Indeed, it is a time when the so-called "information revolution" seems to be throwing into question the very nature of the university, as it reshapes most of the disciplines being pursued there. We've gone online. I'm online, my courses are online, my university is online, other colleges and universities are even more online, some universities are online only. The Library of Congress is moving online, extensive collections of texts are online with Google promising even more, the Encyclopedia Britannica is online, the human genome is online. Data are everywhere, information abounds. Looking for something? Just google it. So why do we need colleges and universities? What are we doing here? What am I doing here? Why are people spending four or more years attending lectures, participating in sections and seminars, doing problem sets, working in the lab, and looking for books in the library? Why don't we just send them all off into cyberspace and have them check back at intervals for the appropriate degree?

Well, first of all, there is a lot less out there than meets the eye. What's on the Web remains but a fraction of the information still held in traditional media, in particular the book, and it is much more fragile. It requires software to decipher its digital code, which is written on a medium with a relatively short life-span, and it requires hardware to run the software and connect it to the medium. Both the software and the hardware become obsolete at an alarming rate and require continual re-mediation. By contrast, I'm holding in my hand a 1604 edition of Euclid's Elements of Geometry with commentary by Christoph Clavius. Here is an equally old pocket edition of the Arithmetic and Algebra of Petrus Ramus. Barring an accident, both will still be around much longer than any digital version of them, which as far as I know does not yet exist. Their contents are still directly readable by anyone who knows Latin, the contents are randomly accessible as well as sequentially ordered, and the index forms a system of links within the text.

Second, if you've spent time on the Web, you know that much of the information is not very good. There's much of value out there, but much more that's not. You need discernment to make good use of it. It's not a particularly good tool for youngsters, because they don't have that discernment yet. It's a great place to go if you already know what you're looking for. Harold Macmillan, the former British Prime Minister, once remarked that the function of a university is to teach its students to "spot the rot"; that has never been more true than with the Web, which embodies what might be called the information version of "Gresham's law": bad information drives out the good.

But most important, the web of information is - at best - just that, information. As John Seely Brown and Paul Duguid show in their thoughtful and provocative study, The Social Life of Information, information in itself doesn't do much. It does not constitute knowledge, nor can it do so without a knowing subject. In short, it needs, well, it needs a life - or rather a living community. That's where institutions of learning come in, especially institutions that foster learning across the disciplines. If you'll bear with me for a bit, I'd like to show you how the experience of computing and the history of science over the last couple of decades reinforce rather than undermine what schools and universities have been doing for the last century. As students and faculty we are here to give life to information and make it knowledge, or learning. In the process, I hope to provide an example of how understanding our technoscientific culture needs the perspectives of both science and the humanities.

A Historian's Perspective

As someone originally trained as a medievalist, I watch the "information revolution" or "computer revolution" with special interest, because it is younger than I am. I was in at the beginning, or close to it, and I didn't see it coming. During my senior year at college, I had a part-time job as a computer programmer, working in machine language with a roomful of vacuum tubes. Several months of wrestling with the device persuaded me that it wasn't very interesting and it wasn't going anywhere; it didn't have much of a future. Only during the 1980s, with the perspective of 20 years' experience as a historian of science, did I appreciate the significance of what was happening, and I felt something akin to what another medievalist, Henry Adams, described in his autobiography, The Education of Henry Adams, which I first read about that same time.(1)

1. I was about 40 when I first read it, which is a good time to read the Education. It's wasted on students; they haven't been around long enough to discover that they haven't been educated.
Often in old age he puzzled over the question whether, on the doctrine of chances, he was at liberty to accept himself or his world as an accident. No such accident had ever happened before in human experience. For him, alone, the old universe was thrown into the ash-heap and a new one created. He and his eighteenth-century, troglodytic Boston were suddenly cut apart - separated forever - in act if not in sentiment, by the opening of the Boston and Albany Railroad; the appearance of the first Cunard steamers in the bay; and the telegraphic messages which carried from Baltimore to Washington the news that Henry Clay and James K. Polk were nominated for the Presidency. This was in May, 1844; he was six years old; his new world was ready for use, and only fragments of the old met his eyes. (Education, 5)(2)
2. Note the connecting thread here: these mid-19th century inventions moved people faster and brought them closer together; one might even speak of a "transportation revolution" (as indeed some American historians do) and the beginnings of the "information revolution".

As those of you who have read the Education know, any education that Adams would receive over the next twenty years belonged to that old world, and he felt uneducated for his own century. In the Education he is still wrestling with what he ought to have known for his own century and what he should know for the 20th.

Adams was six when those things happened to change his world. The theme of his autobiography is his search for an education appropriate to the new world created by industrial technology and mathematical science. The Education was one of two works, the other being Mont-Saint-Michel and Chartres, that were triggered by his visit to the Paris Exposition of 1900, where he stood in awe before the dynamos in the Gallery of Machines. In his historical eyes they symbolized the multiplicity of mechanical forces that since the 12th century had steadily replaced the Virgin Mary (very much as woman, as the feminine) as the organizing force of society, and he struggled to comprehend the meaning of that jump in human experience.

Satisfied that the sequence of men led to nothing(3) and that the sequence of their society could lead no further, while the mere sequence of time was artificial, and the sequence of thought was chaos, he turned at last to the sequence of force; and thus it happened that, after ten years' pursuit, he found himself lying in the Gallery of Machines at the Great Exposition of 1900, with his historical neck broken by the sudden irruption of forces totally new. (382)
3. Recall that he was writing as someone who had spent seven years teaching medieval history at Harvard; indeed, he had established the professional study of history on the German model in the American university. He later presented his theory of the dynamics of history in The Degradation of the Democratic Dogma, published posthumously in 1919, when he also received the Pulitzer Prize for the Education.

Not just the dynamo, but radium and the forces hidden behind it.

Now a century later, when I was six years old, it was the atomic bomb and the computer that would change the world into which I was born, the former in immediate and obvious ways, the latter at first slowly and in ways not fully determined even yet. For the first 30 years after the war, the bomb and nuclear power attracted most of the attention and anxiety. It was not all death and destruction. The "Atomic Age" promised to be an era of prosperity fueled by energy "too cheap to meter". Automobiles, trains, planes, homes, industry would all draw their power from nuclear reactors of various sizes and formats, and society would assume new forms around the possibilities of ubiquitous, unlimited energy. And it was not just energy. As this stunning illustration from Collier's Magazine of 1946 showed, the atom even promised relief from crippling disease. Some of those visions became reality, some turned into nightmares. Fifty years later, despite nuclear power plants all over the world, both fixed and afloat, the phrase "atomic age" is more likely than not to evoke images of a nuclear winter of desolation, and we don't like to be reminded of it. When's the last time you heard someone say we live in the atomic age?

Instead, for the past several decades, we have increasingly - and in general hopefully - thought of ourselves as living in the Computer, or Information Age, where information verges not only on being "too cheap to meter" (not for long; people are finding ways to charge for it) but on being ubiquitous, indeed woven into the very fabric of our world. One of the striking things about computers is that somewhere close to 90% of them are never seen. When we think of the computer, we think of this machine next to me or of the desktop, but we don't think about the 170, 180, or 200 computers that are in our automobile engines, in our toasters and television sets, and eventually embedded in the walls around us enabling the room to become "intelligent". One does not have to look far to find assertions that the world is ultimately nothing but information. We see it in learned journals, we see it in science fiction, we see it in comic strips. The claims encompass both the sciences and the humanities, as if they are distinguished only by the specific form of the information that constitutes them. I think the humanities, in particular, run the risk of being absorbed in this informational view of the physical world.

Now I'm far from Adams's eminence as a historian, but I do wrestle with the historical meaning of this transition to information and the information society, and the cultural forces it reflects. And, as Brown and Duguid's book suggests, I am not alone in my concern; they are concerned about it from the point of view of business and large-scale organizations. I do have a peculiar view of it. As I said, I was programming this machine, the Datatron 204. [See illustrations in slide presentation] So what I was programming was not far from the original machines, and it was certainly a far cry from the machine that sits here. One of the things I do when I teach the history of computing and want to make a point about the way society shapes technology is to defy people to look at that machine [the Datatron], that roomful of vacuum tubes, then look at this ThinkPad, and tell me that somehow the latter is evolutionarily inherent in the former, that you can look at the Datatron and say "I know where that's going. That's going to lead to the ThinkPad." Not on your life!

So, unimpressed in my first encounter with a computer, I turned down the offer of a permanent position. I didn't become a programmer, much less a computer scientist. I became a historian of science, focusing at first on science in antiquity and the Middle Ages and then moving forward to the 17th century. It was as an historian of science that I came back to the computer in the 1980s and started to study it closely both in itself and as an historical phenomenon. Being a historian with some knowledge of computing had two advantages, one generic, one specific.


Generically, it disposed me to look through and beyond the hype that has surrounded computers almost from the beginning. Historians are wary of claims of novelty, and they take a close, critical look at "revolutions" (which really aren't what they used to be). In computing, in particular, there's always a revolution; sometimes it's a big revolution, sometimes a small one. When you go back five years later to see how the revolution turned out, you find that it has been postponed or canceled owing to technical difficulties. It's not that there is nothing new under the sun. Of course there is. The question is what's new about it. The new can emerge only out of the old, and its novelty can be understood only in that context.(4) Science and technology do not just happen to societies; rather, they reflect what societies want to know about the world and be able to do in it. Inventions arise out of perceived needs and then often wind up being put to other purposes. A lot happens out at what my colleague Ruth Schwartz Cowan, author of More Work for Mother, calls the "consumption junction", where the machines meet the consumer. Owners of Ford's Model T kept turning it into a tractor, until Ford finally designed one. Edison conceived of the phonograph as a dictation machine for business offices. Users decided that it was a new medium of entertainment, leading Edison into the recording business. As you know, something very similar has happened to the computer.

4. Marx makes the point in Capital: new technologies emerge in the form of the old and only through their use acquire a form suitable to them.

Specifically, over the past twenty years historians of science and computer scientists have been following their own disciplinary paths toward convergent views on the nature of human knowledge. Historians of science find themselves quite at home in the sociological and managerial bibliography of Brown and Duguid's book. We are reading a lot of literature in common with them; in some cases we are writing that literature. So let me offer a quick sketch of what I found computer people had been learning and what historians of science were beginning to learn at the time I took another look at the machine that I had thought had no future but that by then had a past, and so was open to my gaze as a historian.

Let me begin with a feature that I did not appreciate for a long time, indeed that I have only recently come to appreciate. We speak of the computer ("the impact of the computer", "the computer in society"), but it is what I call a "false singular". There is no one computer; there are computers - computers that do different kinds of things - and the plural is essential. It is essential for two reasons.

First, insofar as there is the computer, this is it: the conceptual scheme of the Turing machine, which is the most abstract version of what you can do if you imagine a device that can be in one of a finite number of states and that can read a potentially infinite tape containing symbols drawn from a finite alphabet. Each state consists of a set of instructions: in the current state, given this input, write this output, shift one cell to the right or left, and enter a given next state. Alan Turing showed in 1936 that if those are the only resources you have - the computer - then you can compute anything that is computable. That is, it defines the limits of computation. What this machine can do in a finite number of steps constitutes what we can compute in the broadest and most general sense of "compute": not just arithmetic, but logical functions in particular. That is what lies behind the notion that the computer can do anything for which clear and logically unambiguous instructions can be given. If you can lay it out in formal logic, according to the rules of deduction, this machine either can compute it, or it isn't computable.
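
To make that scheme concrete, here is a minimal sketch in Python of the kind of machine just described. It is my own illustration, not anything drawn from Turing's paper: the function, the transition table, and the little bit-flipping example are all invented for the occasion.

    # A sketch of the scheme described above: a finite set of states, each
    # mapping the symbol under the head to an action -- write a symbol,
    # move left or right, and enter a next state.

    def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10000):
        """Simulate a one-tape Turing machine.

        transitions: (state, symbol) -> (symbol_to_write, "L" or "R", next_state);
                     the next state "halt" stops the machine.
        tape: dict from cell index to symbol; absent cells read as the blank symbol.
        """
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, blank)
            write, move, state = transitions[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return tape

    # Example machine: flip 0s and 1s until the first blank cell, then halt.
    flip = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    tape = {i: s for i, s in enumerate("1011")}
    print(run_turing_machine(flip, tape))  # cells 0-3 now hold 0, 1, 0, 0

Everything a practical computer does reduces, in principle, to steps of that kind; the machines we actually meet differ in speed and capacity, not in the limits of what they can compute.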

Now whether that encompasses everything in the world is an open question.

There are in principle things that cannot be computed, and we know, often from bitter experience, that there are in practice things that cannot be computed, e.g. which muscles to engage and in what sequence so as to walk, an intractable problem that almost every human solves around age one - whether by computing is again another question. (As another sage once said, nature doesn't have problems to solve; only humans have problems with nature.) But, clearly, the reduction of the world to information won't work if the world is not computable.

But this scheme is the most abstract version. You never meet this. You meet individual machines. And not only is the device plural, but so too is its history. The history of the computer itself is relatively short and direct. It's at the center of this diagram [see figure in slide presentation], and it is the development of the first Electronic Numerical Integrator and Computer (ENIAC). This was an all-electronic calculator. In the subsequent design, John von Neumann got together with its designers, John Mauchly and J. Presper Eckert, and, drawing not only on Turing's paper but also on the famous paper by McCulloch and Pitts on the logic immanent in nerve nets, designed the Electronic Discrete Variable Automatic Computer (EDVAC). This is the stored-program computer capable of operating on its own instructions, that is, capable of carrying out logic. And this then became the physical Turing machine, limited by finite capacity and finite speed, from which most others have followed. The history of the computer is basically that story, which one can then carry back to the earlier stories of mechanical calculation and of mathematical logic. But it's a very limited story.

Now, as this diagram suggests, up until now historians have tended to make that development the central node of what is a convergent account. The way that you do the history of computing - and I've shown this to authors who have recognized it at once - is to go back and try to find every possible antecedent of computing based on what we now do, and then write the history as if all of these groups were waiting for the computer to come along and solve their problems and even demanding it. So we take all these histories and we bring them to convergence on the computer, and then we follow the main line of evolution of the computer. And that's interesting, too, because the way most histories go, when the minicomputer appears, the mainframe disappears, and when the micro appears the mini disappears, forgetting that in evolution the old species doesn't disappear. There are still chimps around, doing very well what it is that chimps do. Humans were an add-on. So, one just has to type "mainframe" into Google to find that mainframes are alive and well, and keeping IBM in business.

And everything else is, well, an impact of computing. Here are these poor fields, just standing there, and then the computer comes along and - bang! - impact. Now, if you look back on [Turing's] device, that story is not going to work. At the beginning the computer couldn't have an impact, because to do so it would have to impose its nature on things, and as a conceptual scheme, as an idea, the computer has no nature: it is protean. Anything you want to do, anything you can specify, you can do with a computer. That's a great possibility. How do I do it? So, as various activities picked up the computer, they had to figure out how it could be used. It didn't itself dictate how it could be used. It couldn't have an impact until people figured out what to do with it. Different groups of people saw different possibilities in it, and they had different experiences as they sought to realize those possibilities. One can speak of them as "communities of computing".

Communities of Computing

This is how I would revise that diagram. Here on the right is our story of the development of the computer, and with that we now have computers, which are available to - what? - to fields that had been going on for a long time, doing what they do by other means. So one doesn't want to speak of the history of computing, but of histories of computing. Indeed, I would say we want to talk about histories of computings (keep it all plural), because what it does is relocate the agency. The agency lies in the activities. So we haven't got one story, or rather the real story is of how the computer became part of the histories of communities of practitioners who sought to use it for their practice.

The main groups of interest to us include [see chart]:

  • Data processing, which in one sense is as old as human civilization, going back to the first scribes. But, if one is going to talk about systematic thinking about how one collects, organizes, and manipulates data, then the story goes back to sometime in the 18th century. A wonderful book by Jon Agar, The Government Machine, is the story of the growth of the British bureaucracy. And he would argue that, as far as British computing is concerned, it was the structure of the bureaucracy that dictated the structure of the computer, not the other way around. The "machinery of government" went from being a metaphor to a model to a statement of what government was about. In the United States the rise of the corporation and the rise of the state were driving the collection and manipulation of data, which thus has a long history before it meets the computer and has to think about what can be done with it.
  • Management has been around since the days of Charles Babbage, at least. How do you organize people to a common task?
  • The organization of production, in particular the machinery: how do you design a factory, how do you distribute power through it, how do you place people and machines, in what sequence? That goes back to the Industrial Revolution.
  • The design and maintenance of large systems, electricity and telecommunications: these go back to the late 19th century. This is what Bell Labs and GE Labs were concerned with from their inception decades before the computer.
  • The military had been worrying about command and control since the origin of armies.

So all these activities were ongoing, and then the computer became available, and what we need to do is learn to tell the story of what happened in each of these communities as they tried to take their part of the world, their experience of the world, and get it into the machine. And, as I said, at the beginning they weren't sure. The machine couldn't tell them how to do it. Computer scientists and software engineers couldn't tell them how to do it. They knew about the machine; they didn't know about these people's worlds.

And it has not been easy. "Just let us put a few machines in, and you'll be surprised what happens." Well, there were surprises, ugly surprises. "I never imagined that that could happen to my office." "What do you mean you're six months behind? What do you mean you've lost records? What do you mean I can't recover them?" Clearly, there have been successes. It's easy to overtell the story, and I have to remind myself that - let's see: I got invited by email to come down here, so I got online and checked out flights and made an electronic reservation, and then I went to the airport and put my electronic credit card into the machine and got my boarding pass. Then, of course, when I get on the airplane, I have some faith in computers, because computers are flying it. Anyone who has flown on a 777 has to have faith in computers.

What was new to the world was this group of activities [histories chart, lower right], which do come from the computer. You don't have computer science unless you have a computer. And scientific computation took a different form after one had high-speed computation available, and indeed this led to the field that now calls itself computational science. So we have computer science, computer systems, computer theory, artificial intelligence, and artificial life as new fields, along with the field of human augmentation, which is a sort of hybrid discipline between computer science and its application to problems, in particular those of military command and control.

Let me say something very quickly about computer systems. The two major things I have in mind are operating systems and programming systems. They aim, on the one hand, at making the computer easier to use, supposedly to increase the productivity of the programmer. Again, that has happened, but it is striking how little impact it has had in terms of just the amount of productivity that one can get. On the other hand, there is the question of shielding the computer from the user. That is, if the programmer writes a program, you don't want him to run the program and then have the computer crash, especially if it's a shared system. So you want not only to make it easier to use but also to isolate the programmer from the machine in order to make the computer more secure and to stop people from stepping on one another's toes. Operating systems are arguably the most complex engineering systems ever designed, when one thinks about what a large, mainframe, multiuser, networked operating system accomplishes in terms of the number of things going on at once. The important thing here is that operating systems reflect a social order. They determine who will have access to what, who you have to be in order to do certain things on the computer. You want to keep the computer secure. That is a social issue, not a technical issue. What that means for my purposes tonight is that there are a lot of people in the computer.
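
As a toy illustration of that point - my own, with invented users and resources rather than any real operating system's mechanism - access on a shared machine ultimately comes down to a table that somebody designed in advance:

    # A toy access-control table (invented names, not a real OS mechanism).
    # Who you are and what you are touching determine what you may do;
    # the designers of the table decided that before you ever logged in.
    ACCESS_TABLE = {
        ("operator", "/system/scheduler"): {"read"},
        ("admin", "/system/scheduler"): {"read", "write"},
        ("student", "/home/student/essay.txt"): {"read", "write"},
    }

    def may(user, resource, action):
        """Return True only if the table grants this user that action on that resource."""
        return action in ACCESS_TABLE.get((user, resource), set())

    print(may("student", "/system/scheduler", "write"))  # False: users are kept apart
    print(may("admin", "/system/scheduler", "write"))    # True: who you are decides what you may do

Real operating systems elaborate this into accounts, groups, and access-control lists, but the social decision - who may do what - is built into the code before any user sits down.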

Brown and Duguid refer at one point (144) to the "6-Ds" that characterize claims for the revolutionary nature of the Web: demassification, decentralization, denationalization, despatialization, disintermediation, disaggregation. All are true to an extent, and all are ultimately deceptive. To take one example, the web purportedly "disintermediates" by removing intermediaries between the user and the information he or she seeks. You have direct access: it's just you and your "personal" computer in direct contact with a vast web of information, with nothing and no one in between, no teacher standing in your way, no authority figure. But in fact there's a lot in between you and the Web, and in particular a lot of people in between, who are mediating your interaction with the web. Computer systems do not remove people: they multiply them and then hide them. As Lawrence Lessig shows in Code and Other Laws of Cyberspace, your very access to the Net and what you can and can't do through that access are controlled by code, by protocols and programs, all of which are products of human design, reflecting decisions by their creators about what the Web is about, what it ought to be doing, and what you ought to be doing on it. In Lessig's eyes these are political issues, and his complaint is that they are not being discussed in a political forum or treated as political issues.

Your "personal" computer itself is no less shaped, indeed inhabited (virtually) by thousands of people who have decided, individually and in concert, what you can do - and what you can't do - with it. (My experience at the hotel) They reside in the operating system - the 200-300 MB that disappeared from your disk when you first set up your computer - and in the equally large and complex applications. The industry speaks of software as "tools", implying that the application you run, your word processor and the like, are neutral means that you use to do what you want to do. Yet none of this software can be developed without a user in mind. That is, the program is designed to do what the developer or, more likely, developers (many of them) think the typical or targeted user will want to do and how she or he will want to do it. By means of the layout, the menu structure, and in particular the help system, the developers work to fit the user to the model for which the program was intended: the tool shapes not only the task but the user. Jef Raskin, the recently deceased creator of the Apple Macintosh, didn't think the user had been treated very well; his study, The Humane Interface: New Directions for Designing Interactive Systems, is a wonderful study of how the machine gets in the way.

There's nothing inherently sinister about this; it's the way all technology works. Automobiles are designed as they are in part to make sure that people use them in a certain way. They hide the spark plugs because they don't want you going in and cleaning and regapping them. They have computers to do that. So, you open the hood and say, "Gee, I could never work on that", and the response of the automobile industry is "Good! The EPA is delighted." But it's worth bearing in mind that your computer is designed that way, too. You never work alone. (Think of the implications for teaching. Ask a teacher, "When you bring a computer into the room, how many other people are you bringing in, and what have they decided about what your students are going to learn?") You're working with people to whose knowledge you have access only through its embodiment in the programs they write.

This brings me to what computer people were learning in the '70s and '80s. First of all, in each of these fields, figuring out how you're going to get your part of the world into the computer is a tough problem. And then you get down to programming, and that's where the rubber hits the road. That's where the compromises have to be made, that's where what you intended becomes what you can do. And that's where programming tools were supposed to help. The idea was, "It's really just programming, so we can somehow make it easier to write programs by building ever more elaborate programming systems and letting the computer take over more and more of the work." But it wasn't happening. Throughout the late '60s and into the '70s there emerged what practitioners themselves referred to as the "software crisis": basically, every large project was behind schedule, over budget, and not meeting specifications. Indeed, in some cases in the Defense Department, something like only 30% of the projects were actually delivered, much less delivered on time.

This was happening at the same time that people were talking about making software the subject of an engineering discipline, the field of software engineering. Now, to some extent, programmers themselves constituted a major part of the problem. There never seemed to be enough of them. While it was evident that some of them were better at it than others, it was hard to pinpoint why. All sorts of programming aptitude tests were developed. There were all kinds of empirical studies that showed as much as a 27:1 difference in the performance of programmers. But no one could ever figure out why. What did programmer A know that made her a better programmer than B? The only correlation they could find between another activity and programming was music. Apparently musicians make good programmers.

But one thing was sure. Programmers were unmanageable. It was hard to know whether they were making progress, or where they were in the project, or when they would be finished, or how well they were doing the job. Hence the "90-90 rule": programming projects are 90% complete 90% of the time. Almost from the outset, then, visionaries were confidently predicting the obsolescence of the programmer. Herbert Simon in 1960: "Don't worry about turning your system over to the computer programmers. There won't be any in 1985, because the machine will be doing the job itself. The job will be automated, as intelligent computers program themselves." Now, to some extent again, a lot of the load has been taken off. But it hasn't gone that far; there have never been enough programmers, and there still aren't - you just have to look for them elsewhere in the world.

Now, the main response, as I say, was to try to determine the scientific foundations of programming and to develop scientific tools to take over the programmer's job. What that meant was the problem of trying to separate what programmers know from the programmers themselves. It was a matter of figuring out what people know and how they know it and how we can elicit that knowledge from them. As far as programming itself is concerned, it has remained a craft to this day. It cannot be reduced to a science; at least, we haven't been able to do it.

But more important, even if that project had succeeded, it turned out that it would not have contributed much to solving the problem of software development, because programming wasn't really where the problems lay. Those were accidental matters. The failures in programming, people found, lay in the area of specification and design. If you're going to put part of your world into a computer, what do you do? You start with your part of the world, and through some kind of systems analysis of it you arrive at a computational model. This is what can be automated - by implication, this is what cannot. Then you specify more closely and design the system until you have a computational model that can be implemented, and then you go to programming and you go through various stages until you wind up with a finite state machine that is an operating model of the system of interest.

Now the thing about programming is that all of that progress in the development of programming systems and so on addressed only the bottom half of this diagram; that is, it addressed the question, "Are we building the system right?" But that's not where the problems lay. Two thirds of the problems encountered at the end originated up here, where the question is "Are we building the right system?" And ultimately you come out up here: "What system are you trying to build?" What makes that hard is that, when you get away from scientific problems and numerical computation, you get into air traffic control rooms, into hospitals and nurses' stations, into legal offices and other offices, where people are at work, and now you come in and say, "I want to use the computer here." What is it you want to do? Well, that's partly a question of what these people are doing. And here it takes a while, but you learn that people don't work to rules. In fact, the unions know that the quickest way to slow something down is to work to rule. What people do is not their job description. Nurses don't follow their job descriptions; they do things they're not supposed to do according to the job description. What's their job description? It's the line the doctors draw in the sand that says "we're doctors and you're nurses" - "but, while you're at it, will you do this, this, and this, because we don't have time to get there?" Paralegals don't do what the job description says. And air traffic controllers certainly don't do what the rules say they do.

I have an article titled "Boys' Toys and Women's Work: Feminism Engages Software Engineering". One of the arguments I make there is that it is in this area that feminism holds promise of making software engineering better, because feminist literature teaches us to find the hidden work, to look at situations and find out what's really going on and who's doing it. What people know is more than what their resumes contain, what people do implies what they know, and people learn by doing.

Part of the problem here is that people aren't always good at knowing what they know, and therefore, when it comes to documenting what they've done, the documentation is not helpful. I was part of a group at Bell Labs, and we went to the maintenance person and said, "For this project we're about to embark on, what kind of documentation would you like?" And she said, "I'd like to know why you didn't do things." We said, "Why?" And she said, "Well, we'll be looking over old software, and we'll take a look at something and we'll say 'There's an easier way to do that. I wonder why they didn't?' And we'll spend six months working on it, and we'll find out why they didn't." It would never have occurred to us to say why we didn't do something. (By the way, this is a problem, I think, that has bedeviled artificial intelligence. Artificial intelligence runs up against this problem of what people know and how they know it and in what context they know it.)

The Practice of Science

Now, that's what computer science people learned. Let me cut quickly to what historians of science were doing at about the same time, because it fits, and it's related to that passage from my "Reading a Machine" that Davis read. That is, turning to the history of technology reinforces something that historians of science ought to know, which is that texts capture (only) part of the record. The text is the final part; it isn't all that led up to it. It doesn't reflect what people did; it's a way of writing things up. One learns that by trying to deal with a form of scientific thinking that doesn't produce texts, namely devices, which one then has to read as products of engineering. What was going on in the history of science in the '70s and '80s is that we were beginning to deal with the implications of Thomas S. Kuhn's basic destruction of the notion of scientific method. Up until Kuhn's book, The Structure of Scientific Revolutions, in the mid-1960s historians of science took it as their task to document the evolution and progress of the scientific method, which everyone after the war believed in: that there was a scientific method that could be applied to any area. Look at any description of the scientific method and try to think of a field that it doesn't apply to. So one looked for the origins of the scientific method in the works of Galileo and Francis Bacon, the latter often called the "father of the scientific method", and followed its development down to the present.

But that approach began to wane following the appearance of Kuhn's Structure of Scientific Revolutions. Kuhn was my adviser and senior colleague at Princeton for many years, and his book is arguably the most important and influential book about science written in the 20th century; certainly, it was the most widely read, especially among non-scientists. What he showed us - and I don't want to talk about paradigms here, the most frequently used and most misunderstood term in the English language - is that science did not and had not ever proceeded by the canons of the scientific method, at least not when it counted most. Hypotheses have come from all sorts of sources of inspiration, and experiments always leave room for interpretation and have seldom proved conclusive in disputes among scientists holding entrenched theoretical positions.

Therefore, Kuhn told us, don't listen to what they say. Watch what they do. Now, inventors and engineers don't talk much. Theirs is not a literate enterprise, so we're forced to watch what they do, at least insofar as we can reconstruct what they do from the artefacts. I had the immense privilege at the 1993 conference on the longitude problem (which was the source of Dava Sobel's book) of watching the antique horologers talk about John Harrison's clocks. Those people could build a clock as accurate as, or more accurate than, Harrison's. They were skilled machinists; they were clockmakers. And so they looked at Harrison's clocks, and they said: "You see what he did here? You see what he did there? Yeah, look at that!" And I just stood in awe, not only of what they were telling me about Harrison, but of this ability, once you have that kind of craft knowledge, to look at the work of a craftsman and tell what the problem was and how he solved it. And as historians we have to work through the problems with the resources available to the practitioners at the time.

After all, we don't learn science from books but from problem sets, not from lab manuals but from working in the lab and learning how to make experiments work. We learn by doing, which is why it is harder to turn a historian into a historian of science by teaching science quickly than to turn a scientist into a historian. If you have a historical sense, becoming a historian is a matter of reading. For historians of science and technology that has meant finding ways to watch practitioners at work, probing their notebooks, reading their correspondence, looking over their shoulder at the lab bench, in some cases by trying to recreate the experiment using the materials and resources then available. One wants to catch them in the process of getting the answer but before they have it. One has to get into the job oneself, working with the concepts and materials then at hand and trying to work out the answer alongside the historical subject. The results, I can report personally, are always surprising. "Where did that come from? Whoa! I see where that expression came from, but how do you get to the next one? Oh, I see. It's in the diagram, the particular way you drew it. Where did you learn to do that?"

Trying to figure out what they were doing reinforces something we also knew about, which is the difficulty of replication and repetition at a distance on the basis of description. It wasn't the perversity of the Cartesians that led them to reject Newton's theory of light - that white light is made up of monochromatic colors - but rather the fact that, when they read his description of his prism experiments, they couldn't get them to work. They kept getting colored fringes around the supposedly monochromatic bands. The reason was that Newton hadn't fully described what he had done. He'd experimented a great deal, but he described it as "Well, I walked into the room one afternoon, took a prism, held it up to the light," and so on, "See, it's obvious." Harry Collins, a sociologist of science, talks about the TEA laser and how people couldn't build one and make it work from the description. Someone who had actually built one had to come do it. A student of mine, Ross Bassett at North Carolina State, has written on the development of MOS technology.(5) As that was first developed at IBM and then other companies tried to take up the process, a person at IBM named Frank Wanlass had to go with it, get together with them, and do it with them, and only then could they make the process work. Davis suggests that with the spectrometer something like the same problem came up. What's happening there - you can imagine the dialogue. People are working along, then:

Whoa! Stop! Why did you just do that?
What?
What you just did.
Oh, you have to do that.
You didn't say so.
I didn't think I had to. Everybody knows you have to do that.

Information is separable, knowledge is not, unless it is embodied in a device, in which case it's a different kind of separated knowledge; it's "black-boxed". So, when you want to learn something, it's best to learn it from someone who knows. In particular, you have to find out what the tacit knowledge of that person is.(6)

5. Ross Knox Bassett, To the Digital Age: Research Labs, Start-Up Companies, and the Rise of MOS Technology, Baltimore: Johns Hopkins University Press, 2002.

6. In reconstructing James Joule's now classic measurement of the mechanical equivalent of heat, using Joule's detailed description and materials from the time, historian of science Heinz Otto Sibum had great difficulty in maintaining a constant temperature. Further investigation revealed that Joule did so by drawing on his experience as a brewer's son. For details, see his article, "Reworking the mechanical value of heat: Instruments of precision and gestures of accuracy in early Victorian England," Studies in History and Philosophy of Science 26 (1995), 73-106.

The Web of Learning

Moving along separate paths toward quite different goals, computer scientists and historians of science have nonetheless converged on a common result about the nature of human knowledge and how it is acquired. Knowledge is something that humans share by virtue of belonging to societies, and they acquire it in interaction with one another. What we know, we at first learn from others. We extend it by answering questions to which other people want to know the answers or by posing new questions that others recognize as meaningful and interesting. Some knowledge is not communicable by words, but acquired through experience, and shared tacitly.

Brown and Duguid (p. 138) note that "What people learn about, then, is always refracted through who they are and what they are learning to be." Think a bit about what is wrapped up in the phrase "who they are". Personal knowledge is a reflection of personal history, a reflection of social experience. We are each of us the total of our experiences, the momentary outcomes of our histories. When we assemble for the purpose of learning, we bring those histories with us. Those histories are the source of our knowledge, both explicit and tacit. They are what we bring to bear on the subject at hand. In communities of practice, we interweave our histories to form a web of learning. This is what makes information more than data. Data are not information until they answer a question, i.e. resolve uncertainty. You can't retrieve information until you know what you're looking for, i.e. until you have a clear uncertainty to be resolved. And you acquire knowledge by becoming ever more critically aware of what you don't know, i.e. by knowing the question. (I can always spot an autodidact, who has all kinds of answers but no idea of what the question is.) So, the function of education is to learn what the questions are and to ask new questions of your own. That's learning to be, not just learning about things; that's learning to know how, not just that. It's what we gain by doing things together with those who know and with those who are learning. And so our task is to form communities of practice and hence of learning - and the plural is important - to share histories and, as it were, to "steal knowledge" from one another.(7)

7. Brown and Duguid attribute the notion to Rabindranath Tagore; see their "Stolen Knowledge".

In an Age of Information, driven to automate every realm of human thought and endeavor, it is essential that scientists and humanists steal knowledge from one another. Universities are set up for it. They are designed to advance knowledge even as they preserve and transmit it (indeed, transmitting it is the best way to preserve it). Students and faculty carry out those functions by stealing knowledge from one another. Faculty do that work in the open where students can see them - not only in the classroom, where they instruct, but in the library, in laboratories, workshops, colloquia, seminars, conferences, and conversations, where they get together with fellow practitioners to exchange and negotiate what we know. Students do it in their own forums. So all I can say to students when they arrive on campus is welcome: watch, listen, join in. Get entangled in our living web of learning.

Thank you.