Monday, February 16, 2009

PREFACE
What Remains To Be Discovered
Mapping the Secrets of the Universe, the Origins of Life, and the Future of the Human Race

By JOHN MADDOX
Free Press

This book springs from a question first asked by my son Bruno: "If you're editor of Nature, why can't you say what will be discovered next?" In 1995, when I knew I would be leaving that thoroughly international science journal after nearly 23 years as editor-in-chief, it seemed that it would be useful to set down in simple language an account of what scientists are hoping to achieve. One of the joys of being such an editor is listening to researchers eagerly enthuse about the significance and possible outcomes of their work, while knowing at the same time that they would never be so enthusiastic about their research in print. Why not distill that unrestrained chatter into an account of where science is heading, of what remains to be discovered?

I was much helped when the Brookhaven National Laboratory invited me, in June 1996, to deliver its annual Pegram Lectures. It was an opportunity to see whether my ideas about the future of science hung together, and, having completed the lecture series, I was encouraged and emboldened. As work on the manuscript of this book proceeded, my son Bruno came to my aid. He raised serious and thoughtful objections to an early version, and I have been deeply grateful for his perceptive criticism ever since.

What remains to be discovered is not, of course, the same as what will be discovered. It is possible to tell what loose ends are now dangling before us, but not how they will eventually be pulled together. People who knew that would take themselves off to a laboratory confident that a Nobel Prize would soon be on the way.

Science at present is a curious patchwork. Fundamental physics is perhaps the oddest: the research community is divided into those who believe that there will be a "theory of everything" very shortly and those who suspect (or hope) that the years ahead will throw up some kind of "new physics" instead. History is on the side of the second camp, to which I belong. By contrast, exuberant molecular genetics seems in a state in which any problem that can be accurately defined can be solved by a few weeks' work in a laboratory. There, it is more difficult to tell what problems will emerge -- as they certainly will.

I am aware that many important fields of science are not touched by this survey of outstanding problems. The most obvious is that of the solar system. The closing third of this century has seen a quite remarkable transformation of our views of how the Earth is built. The doctrine of plate tectonics (continental drift) has been firmly established. Superficially, the matter appears to have been tidied up. But a little reflection shows that to be an illusion. The mechanism that drives the tectonic plates over Earth's surface is far from clear. More to the point, it remains to be seen how the same ideas can be applied to the understanding of other solid objects in the solar system -- both planets such as Venus and satellites of Jupiter such as the strange object Io. And exactly how were the planets formed from the solar nebula, anyway? These are all absorbing questions, but no new principles are involved.

......

Two close friends of mine have read the penultimate version of this text. Professor Maxim Frank-Kamenetskii, a molecular biologist whom I first met in Moscow in 1986 and who is now a member of the faculty at Boston University, and Dr. Henry Gee, Nature's resident paleontologist who has a catholic interest in all of science, have both made valuable and constructive suggestions. I owe them a great debt, although the errors and the omissions that remain are my own responsibility.

I am also grateful to my publishers, who have put up with my vacillation, and particularly to Stephen Morrow of The Free Press, who has helped enormously to shape this text by providing a stream of pointed and detailed comment on its successive versions, always with intelligence and good humor.

......

And the message? Despite assertions to the contrary, the lode of discovery is far from worked out. This book provides an agenda for several decades, even centuries, of constructive discovery that will undoubtedly change our view of our place in the world as radically as it has been changed since the time of Copernicus. Indeed, the transformation in prospect is likely to touch the imagination of all of us dramatically. How shall we feel when we know the true history of the evolution of Homo sapiens from the great apes? And when there are found to be, or even to have been, living things elsewhere in the galaxy?

But that is merely the tip of the iceberg of future discovery. The record shows that generations of scientists have been repeatedly surprised by discoveries that were not anticipated and could not have been guessed at by much earlier versions of a book like this. Who, at the end of the nineteenth century, could have smelled out the way in which physics would be turned on its head by relativity and quantum mechanics, or how the structure of DNA would make life intelligible? And who, now, dares say that the days of surprise are over?



London, January 1998



Introduction: The River of Discovery


This century has been so rich in discovery and so packed with technical innovation that it is tempting to believe that there can never be another like it. That conceit betrays the poverty of our collective imagination. One purpose of this book is to take up some of the questions now crying out for attention, but which cannot yet be answered. The record of previous centuries suggests that the excitement in the years ahead will spring from the answers to the questions we do not yet know enough to ask.

It is an abiding difficulty in science that perspective is distorted because the structure of scientific knowledge makes its own history seem irrelevant. We forget that modern science in the European tradition is already 500 years old, dating from the time of the Polish astronomer Copernicus. The Copernican revolution was a century-long and, ultimately, successful struggle to establish that the Sun, not Earth, is at the center of the solar system.

The contributions of the ancient Chinese, the extinct civilizations of the Indus Valley, the Babylonians and Greeks, naïve though they may now seem, cannot be scorned. Copernicus and those who followed profited from them. And it remains a source of wonder that Chinese records of past astronomical events have been indispensable in the interpretation, within the past 30 years, of the exploding stars known as supernovae; that only now is a systematic search being made of the same records for plants that may be sources of still-useful therapeutic drugs; that the Greek Eratosthenes made a good estimate of the circumference of Earth more than 2,000 years ago; and that the concept of zero in arithmetic, of which even Euclid was innocent, was formulated on the Indian subcontinent soon after the beginning of the modern era. Even the alchemists who distracted the Middle Ages with their search for the philosopher's stone that would turn anything it touched into gold should not be despised; they understood that chemical reactions can turn one substance into another that looks very different.

There is nevertheless a clear distinction between modern science and its precursors: the interplay between observation and explanation was formerly less important than it is now. A theory qualifies as an explanation only if it can be and has been tested by observation or experiment, employing when necessary measurements more sensitive than the human senses can yield. A further novelty of the modern idiom is that each phenomenon -- the existence of the Universe, the fact of life on Earth, and the working of the brain -- demands a physical explanation.

Copernicus is explicitly acknowledged in what came to be called the Copernican principle, the rule that in trying to understand the world, a person should not assume that he or she occupies a privileged position. The Copernican principle also applies to the history of discovery: How can we suppose that science has reached its apogee in the twentieth century? We are right to marvel at what has been accomplished in the past 100 years -- but we forget that there would have been the same sense of achievement at the end of each of the three preceding centuries.

Looking back at the seventeenth century from 1700, an historian could fairly have boasted that there had never previously been a century of scientific prosperity like it, and certainly not since the resurgence of Greek science at Alexandria in the second century. In Britain, the century had begun with a forceful argument in favor of experiment by Francis Bacon. His compatriot, William Harvey, had fitted action to word by dissecting animals and people for almost the first time since Galen some 1,400 years earlier, discovering the functions of the heart, the arteries, and the blood. In France, already recognizable as the cradle of modern mathematics, prolific René Descartes had embarked on his astonishing philosophical account of the nature of the world. A closet Copernican, he argued that the solar system and the fixed stars beyond are but a kind of machine driven by God. His legacy to succeeding centuries was the system of geometry in the language of algebra, still called Cartesian geometry.

Galileo was the first to use the modern idiom of science and, in that sense, was the first scientist. Whether or not he used the Leaning Tower of Pisa as a laboratory in the 1590s, he established that acceleration creates force, as when one's body is pushed back into one's forward-facing seat in an aircraft taking off. From that principle, Galileo saw, the mass of an object measured by its weight on Earth must be identical with the mass inferred from what happens when it collides with other objects anywhere in the universe or is accelerated by some force. This conclusion is called the equivalence principle. It is a discovery of the first importance about the nature of gravitational attraction, and it is one of the foundations of modern theories of relativity. And famously, Galileo discovered the satellites of Jupiter with a marvelous new invention known as the telescope.
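
Stated in modern notation (an editorial gloss, not Maddox's own text), the equivalence principle says that the inertial mass in the law of motion and the gravitational mass on which weight acts are one and the same, so near Earth's surface every body falls with the same acceleration whatever its mass:

\[
m_i\,a = m_g\,g, \qquad m_i = m_g \;\Rightarrow\; a = g \approx 9.8\ \mathrm{m\,s^{-2}}.
\]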

Isaac Newton, at the University of Cambridge, showed, in the 1680s, that the orbits of the planets are the result of an attractive force between the Sun and each planet. Thus gravity was discovered. Newton's law of gravitation is universal. It accounts not only for the orbits of the planets and those of artificial satellites about Earth, but also for the attraction between Earth and, say, a falling apple, as well as for the roughly spherical shape of Earth, the Moon, the planets and the Sun and stars. Newton also spelled out the rules that specify how objects move under the influence of mechanical forces and, for good measure, he devised a novel mathematical technique, known as differential calculus, for calculating the orbits of planets and other trajectories. His synthesis of two centuries of intelligent speculation about the nature of the world was first published in 1687; it is now known simply as the Principia. The second edition, in Latin like the first, appeared in 1713 and circulated widely on the European mainland.
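
In today's notation (again a gloss on the text rather than Newton's own), the universal law gives the attraction between any two masses:

\[
F = \frac{G\,M\,m}{r^{2}}, \qquad G \approx 6.67\times10^{-11}\ \mathrm{N\,m^{2}\,kg^{-2}},
\]

and one piece of arithmetic shows its universality: the Moon, at about 60 Earth radii, should fall toward Earth with an acceleration of g/60^2, roughly 2.7 x 10^-3 meters per second squared, which is just what its observed orbit requires.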

Newton had to invent the mathematics as he went along; the full importance of his accomplishment was evident to others only after the differential calculus had been made into a usable tool by French and German mathematicians. The consequences of these developments were profound. Newton's mechanics, as originally defined, amounted to a series of statements about the behavior of "bodies" that were essentially single points endowed with mass and subjected to external forces (such as gravitational attraction). The mathematicians generalized Newton's system in ways Newton could not have foreseen.

Newton's genius set the agenda of science for the two centuries to follow. Could there ever again be a century of such marvelous accomplishment?

There was. While a small army of mathematicians in France and Germany were busy turning Newton's clumsy version of the differential calculus into a usable tool, science was beginning to come to terms with electricity and magnetism. The discoveries soon included the following: that electricity can have either a positive or a negative charge, that these charges can neutralize each other if put together, that a sufficient amount of electric charge can cause a spark to travel through the air (lightning), that like charges repel and unlike charges attract, that steady currents of electrical charge can be made to flow through metal wires, that currents of electricity can affect the direction in which a nearby magnet points and that, as Luigi Galvani found in 1791, a current or a shock of electricity will make the muscle of a dead frog twitch.

Although electricity and magnetism preoccupied the eighteenth century, by the end of it Carl von Linné in Sweden (better known as Linnaeus) had devised the system by which animals and plants are still classified today. Antoine Lavoisier had laid the foundations of modern chemistry by the time of his execution in 1794 during the French Revolution. The astronomers, notably William Herschel in Britain, were building ever more powerful telescopes. And salesmen were trudging through the industrial enterprises of western Europe, offering newly designed steam engines as replacements for traditional water wheels (not to mention human drudgery) as a source of motive power. Was that not a century to boast about?

But then came the age of certainty. In variety and subtlety, the discoveries of nineteenth-century science outdid those of all previous centuries. The idea that matter is made of atoms, indivisible particles (whose properties on the tiniest scale mirror those of the same matter on the scale of everyday reality), was firmly established by John Dalton, a teacher in the north of England, within the first two decades. He proposed that the only strictly indivisible atoms are those of substances such as carbon and copper, which he called elements. All other substances, carbon dioxide and copper oxide, for example, are not elements but combinations of atoms. Dalton also concluded that each different kind of atom has a different weight: hydrogen atoms are the lightest, and carbon atoms are each roughly twelve times heavier.

The nineteenth century also put the concept of energy on a firm foundation. Since Galileo's time, it had been understood that one form of energy can sometimes be converted into another: Lift an object a certain height above the ground, against the downward pull of gravity, and then let it fall. It will reach the ground again with a speed that increases with the height to which it is lifted. The energy of the object's motion is its kinetic energy. Galileo was the first to show that (in the absence of friction and other disturbances) this energy increases in proportion with the vertical distance the object falls. He inferred that, at its greatest height, the object must somehow be endowed with an equal quantity of potential energy in virtue of its displacement against the gravitational attraction. But heat, light, electricity and magnetism are also phenomena that embody energy, all of which can be converted into other forms. Heat, for example, will power a steam engine that can be used to generate mechanical energy, and electricity will make a lightbulb glow. What rules govern these conversions?
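
The rule Galileo found can be set down as a worked equation (in modern notation, not in the original): for a body of mass m falling freely from height h, the potential energy lost equals the kinetic energy gained,

\[
m g h = \tfrac{1}{2} m v^{2} \quad\Rightarrow\quad v = \sqrt{2 g h},
\]

so a drop of 5 meters yields a speed of about 9.9 meters per second, whatever the mass.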

That question was answered by James Prescott Joule, also from the north of England, whose careful measurements demonstrated by 1851 that, if disturbing influences are avoided, no energy is lost in the conversion of one form of energy into another. The doctrine is that energy is conserved. Support for this idea remains unanimous.

In 1865 the German Rudolf Clausius introduced the idea of entropy, which is a measure of the degree to which the internal energy of an object is not accessible for practical purposes, and which is in the present day also equated with the degree of internal disorder on an atomic scale. That loose set of concepts eventually became known as the second law of thermodynamics: other things being equal, there will be a tendency for the entropy or the disorder of an isolated system to increase. The principle that energy is conserved became known as the first law of thermodynamics. Linked with that is the simple truth, amply borne out by common observation, that heat does not flow spontaneously from a lower to a higher temperature. This set of nineteenth-century concepts has a deep significance. Uniquely among the laws of physics, the second law specifies the directions in which systems change with the passage of time; it defines what has been called the "arrow of time," the capacity of most physical systems to evolve in one direction only.
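
In the compact form later standardized (an editorial gloss on the text), Clausius's entropy and the two laws read:

\[
\Delta S = \int \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad \Delta E_{\mathrm{isolated}} = 0, \qquad \Delta S_{\mathrm{isolated}} \ge 0,
\]

and Boltzmann's later statistical reading, S = k log W, is what ties entropy to disorder on the atomic scale.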

Elsewhere surprises came thick and fast. Charles Darwin's theory of the evolution of species by natural selection, published in 1858, was one of them. The geologists and fossil hunters had learned enough of the fossil record to know that many once-successful forms of life had disappeared from the surface of Earth, to be replaced by others. The notion that there may have been a patterned evolution of life-forms was not new, but had been suggested by the apparent gradation with time of the fossils found in successive layers of sedimentary rocks. Darwin's proposal was that the evolution of living things is driven by the interaction between the environment of a population of animals or plants and the variations of form or fitness that arise naturally, but unpredictably, within the population.

The theory created a sensation, not only by its assertion of a link between human beings and the great apes, or even because it is a godless theory. It changed the terms of the debate about humans' place in the world by emphasizing that people are part of nature, certainly in origin and perhaps even for the rest of time. Here is the Copernican principle at work again.

Many of the developments of the age of certainty marked an important trend in the practice of fundamental science -- that of bringing together phenomena of different kinds under a single umbrella of explanation. In the 1860s, James Clerk Maxwell, a Scot then teaching in London, put forward a mathematical scheme for describing in one set of equations both electricity and magnetism. His prize was not just a coherent account of unified electromagnetism, but an explanation of the phenomenon of light. A ray of light is indeed a wave phenomenon, and the speed of light in empty space is simply related to the electrical and magnetic properties of empty space. Maxwell's wave theory is an explanation of all kinds of electromagnetic radiation, most of which have since been discovered.
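
The relation alluded to here can be written out (modern notation, not Maxwell's): the speed of his waves is fixed by the electric permittivity and magnetic permeability of empty space,

\[
c = \frac{1}{\sqrt{\varepsilon_{0}\,\mu_{0}}} \approx 3.0\times10^{8}\ \mathrm{m\,s^{-1}},
\]

a number that agreed with the measured speed of light and so clinched the identification.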

At the end of the nineteenth century, Maxwell's triumph raised a conceptual difficulty. A ray of light, or some other form of radiation, may be a pattern of oscillating electromagnetic fields, but it can have an existence independent of its source; the flashes of light from an exploding star, for example, keep on traveling outward long after their source has vanished. What could sustain the vibrations of such a disembodied flash? Maxwell took the view that there must be something, which he called the luminiferous æther, filling all of empty space. How else could one part of an electromagnetic field influence its neighboring elements? Only after a quarter of a century of fruitless and fanciful searching did people appreciate that they were looking for a will-o'-the-wisp: the luminiferous æther was no different from empty space, or the vacuum as it is called. It has taken almost a whole century since to reach some (imperfect) understanding of the subtlety of the vacuum.

The discoveries of the last two decades of the nineteenth century came in a sensational flurry. In France, Louis Pasteur showed that the fermentation that turns milk into cheese is accomplished by bacteria, and went on to demonstrate the germ theory of infectious disease. In the 1880s, Heinrich Hertz, then at Karlsruhe in Germany, generated invisible Maxwell waves in the radio-frequency range; two decades later, Guglielmo Marconi spanned the Atlantic with these waves, founding the global communications industry. In 1895, W. K. Röntgen at Würzburg in Bavaria discovered that an electrical discharge in a tube from which as much as possible of the atmospheric gas had been extracted would lead to the emission of a novel kind of radiation, capable of penetrating people's flesh but not their bones -- X rays. The following year, Henri Becquerel in Paris found that similar radiation is given off by such substances as uranium; they were the first elements to be called radioactive. Just a year later, at Cambridge (England), J. J. Thomson proved the existence of the atom of negatively charged electricity, now called the electron.

Thomson thus brought the nineteenth century full circle. It began with the proof of the reality of atoms, a different kind of atom for each kind of chemical element, and it ended with the demonstration that even atoms are not indivisible. Maxwell's definition, that an "atom is a body which cannot be cut in two," was found to be false. For the atoms of electricity were evidently but components of atoms from which they had been separated. The fragility of the atom became a question for the twentieth century.

Maxwell's search for the æther, misguided though it proved to be, was a search for a mechanism to account for electromagnetism; 200 years earlier, Newton had been content to describe the gravitational attraction between objects without asking why nature relies on his and not on some other law. Similarly, in biology, what distinguished Darwin's work from earlier speculations was not the fact of evolution, but that he offered a mechanism for it. In the nineteenth century, "Why?" rather than "How?" became the overriding question.

The nineteenth was also the century in which mathematics became the handmaiden of scientific inquiry. By the end of the century, any problem in which physical objects are influenced by external and internal forces, and respond as Newton prescribed, could be restated as a problem in mathematics. Exuberantly people set out to tackle the deformation of solid objects by external forces, the motion of fluids such as water when driven by mechanical pumps and the propagation of seismic signals from distant earthquakes through the solid Earth. Thus were numerous problems of importance in engineering solved and the twentieth century equipped with many of the mathematical tools that would be required in those changed circumstances; fluid dynamics spawned aerodynamics once aircraft had been invented, for example. So as the century ended, there emerged a tendency to believe that once a problem had been stated mathematically, it had been solved; the idea that the underlying physics might be in error was hardly entertained.

The century thus ended on a triumphant note. Not only had fundamental physics been reduced to a series of problems in mathematics that would in due course be solved, but the closing decades of the century were made prosperous by technology resting on science that was itself the product of the same century. The dyestuffs industry and the chemical industry more generally were the products of the atomic theory and what followed from it. The electrical industry (harbinger of the communications industry) had already begun to change the world. For science and technology, the nineteenth century was certainly the best there had yet been. Only now do we know that it was merely a beginning.

......

At the great mathematical congress held in Paris in 1900, David Hilbert, the outstanding mathematician of his time, produced a list of the problems still to be dealt with in mathematics. One was to find a proof of Fermat's last theorem (accomplished only in 1995); another was nothing less than to find a systematic procedure for demonstrating the truth or falsity of all propositions in mathematics. (As will be seen, the second project had an unexpected sequel in the 1930s.) In 1900 we had many achievements in all fields of science behind us and many apparent contradictions before us. With the benefit of hindsight, it is not too difficult to reconstruct the outline of a book with the title What Remains to Be Discovered (in 1900). What scientific puzzles then in the minds of courageous people would the author have identified? There were several.

What is space made of? What does energy have to do with matter? In the 1880s two U.S. scientists, Albert A. Michelson and Edward W. Morley, set out to find direct proof of the reality of James Clerk Maxwell's luminiferous æther, which was supposed not merely to be fixed in space, but to be in some sense space itself. Then Earth in its orbit about the Sun must be rushing through the æther at great speed. Michelson and Morley erected in Ohio a piece of equipment that, with the help of mirrors, sent the same beam of light traveling along two paths at right angles to each other and then reflected the two beams back to their common starting point. Because one of these paths would usually be more in line with Earth's supposed motion through the æther than the other, the speed of light along the two paths should be different. The measurements showed it to be the same.
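
A back-of-envelope version of the expected effect (an editorial gloss): to first order, the fractional difference in light travel time between the two arms should be about v^2/c^2, and with Earth's orbital speed v of roughly 30 kilometers per second,

\[
\frac{v^{2}}{c^{2}} \approx \left(\frac{3\times10^{4}}{3\times10^{8}}\right)^{2} = 10^{-8},
\]

tiny, but well within the sensitivity of the interferometer. No such difference appeared.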

At the time, many scientists were disconcerted. By showing that there is no lumeniferous æther, the experiment had thrown doubt on the mechanism of Maxwell's electromagnetic waves. George Fitzgerald at Trinity College, Dublin, had immediately taken the Michelson-Morley experiment as a sign that the dimensions of quickly moving objects appear to contract (in the direction of the motion), a suggestion quickly taken up by Hendrik Lorentz at Leiden in the Netherlands. Meanwhile, Henri Poincaré of Paris was openly drawing attention to the need for a redefinition of space and time. (In 1904, Poincaré would carry the message across the Atlantic on a lecture tour of U.S. universities.) The author of our apocryphal work could not have known, of course, that Einstein would put the issue to rest just five years after the turn of the century with the publication of his special theory of relativity.

Albert Einstein, first at Zurich, then Berlin and finally at Princeton in the United States, was to make an incomparable contribution, both as an innovator and a critic, to the deepening of understanding that marks the twentieth century. His theory of relativity is, operationally, a correction of Newtonian mechanics for objects traveling at a significant fraction of the speed of light. In the familiar low-speed world, a fixed force applied for some fixed but short length of time to an object that is free to move will produce a fixed increase of the velocity of the object (or of its speed in the direction of the force). Not so in special relativity: the greater the speed of the object, the smaller will be the increase of the velocity. What then happens to the remainder of the energy expended by the force? It ends up as an increase of the mass of the moving object. The energy turns into mass, or substance.

Several counterintuitive conclusions follow. First, nothing can travel faster than light, whose velocity therefore has a special status. Second, energy can be converted into mass and mass into energy (whence the energy of exploding nuclear weapons). Third, the Newtonian notions of absolute position and time are devoid of meaning; only relative distances and times make sense (in which respect, Einstein dutifully followed the precepts of Ernst Mach, one of the founders of the positivist school of philosophy, that theories should refer only to quantities that are measurable). Fourth, there can be no luminiferous æther, for that would allow the determination of absolute speeds or distances. Fifth, we live in a four-dimensional world, because the three dimensions of ordinary space are conjoined with the extra dimension of time.
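
The bookkeeping behind these conclusions can be made explicit (a gloss in modern notation): the energy of a body of rest mass m moving at speed v is

\[
E = \gamma m c^{2}, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\]

so E grows without bound as v approaches c (nothing outruns light), and at rest E reduces to mc^2; a single gram of matter is equivalent to about 9 x 10^13 joules.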

Beginning in 1905, it has been commonplace to describe the counterintuitive character of special relativity as "paradoxical." Turning mass into energy sounds a little like turning water into wine, and was indeed often represented as a near-miracle in the 1950s, the early days of the nuclear power industry. But this now-common happening is not a miracle, merely a fact of life. Mass is energy and energy is mass. Their equivalence is both a discovery about the nature of mass and an extension of Galileo's equivalence principle. There is no paradox in special relativity, all of which has been amply confirmed by experiment. The theory runs against our intuition because our senses lack experience of objects moving almost at the speed of light.

The importance of this key development in our understanding of space-time became clear only in succeeding decades. And even now the character of empty space is still not fully resolved. What the Michelson and Morley experiment showed was that the æther could not serve as a means by which space and time can be given absolute meaning. It is an open question whether the abandonment of action at a distance compels the notion that space and time have an internal structure of their own. The notion of the ether was perhaps still tenable until 1905, but with a modicum of foresight on the part of our hypothetical author, speculation about the structure of space and time would surely have featured in What Remains to Be Discovered (in 1900).

Our author would also have had to ask: what is the nature of heat? Difficulties that had arisen during the nineteenth century in the treatment of radiation mostly centered on the question why the quality of the radiation emitted by a hot object changes in a characteristic way as the temperature is increased. It would have been a matter of common observation that radiation from objects such as the human body is invisible but still sensible by the hands, that a smoldering wood fire is a more prolific source of essentially similar radiation, that pieces of iron can be heated in a blacksmith's fire until they glow red and that the radiation from the Sun consists of "white light." By the end of the nineteenth century, the facts had been clearly established: all objects emit radiation that spans a spectrum of frequencies, and that spectrum is shifted to higher frequencies as the temperature is increased. In the 1880s, the Austrian physicist Ludwig Boltzmann had even been able to show that there must be a universal mathematical description of how the intensity of the radiation in such a spectrum varies with the frequency, but all attempts to discover what form that description takes ended in failure.

The question was eventually decided by Max Planck at Berlin in 1900 -- it would have been great news in our hypothetical book. Planck's objective was to explain why the amount of energy radiated at a particular frequency increases with the temperature, but that, at any temperature, there is a frequency at which the intensity of radiation is a maximum. Earlier, in 1893, the German physicist Wilhelm Wien had shown that this maximum frequency increases with the absolute temperature -- the temperature measured in relation to the absolute zero. Planck's radical proposal was that radiation of a particular frequency exists only as quanta, as Planck called them. The greater the frequency, the greater the energy of the quanta concerned. Planck acknowledged that "the physical meaning [of quanta] is not easily appreciated," a confession his contemporaries were only too ready to accept. What Planck had done was to discover that even energy consists of atoms. Then Einstein (in 1905) proved the point by explaining why a certain minimum frequency (and thus energy) of light is required to extract electrons from metals or to make semiconductors carry an electrical current -- now the basis of photographers' light meters.
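
In symbols (an editorial addition, not Planck's phrasing), Planck's proposal and Wien's earlier result read:

\[
E = h\nu, \qquad h \approx 6.63\times10^{-34}\ \mathrm{J\,s}, \qquad \nu_{\mathrm{max}} \propto T,
\]

so doubling the absolute temperature doubles the frequency at which a hot body radiates most intensely, and each quantum at that frequency carries proportionately more energy.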

The discovery of the electron and of radioactivity would certainly have been prominent features of What Remains to Be Discovered (in 1900). J. J. Thomson, who had shown that electrons have the same properties whatever atoms they are torn from, acknowledged the subversiveness of his discovery by saying in a public lecture in 1897 that "The assumption of a state of matter more finely subdivided than the atom of an element is a somewhat startling one..." -- before going on to defend it. Becquerel's discovery of radioactivity (in 1896) also led people to the conclusion that atoms are not indivisible, although the point was not proved until the new century had begun. The two discoveries together led directly to the series of experiments in which Ernest Rutherford, a New Zealander itinerant between Cambridge, Montreal (McGill University) and Manchester, established (in 1911) that all atoms are constructed from a number of electrons and a nucleus carrying positive electrical charge and embodying most of the mass of the atom. Typically, the dimensions of the nucleus are one ten-thousandth of those of the atom as a whole.

Our hypothetical book could not have anticipated these developments, although it would no doubt have had much to say concerning the continuing uncertainty about the dimensions of atoms as a whole (resolved at about the time of Rutherford's model of the atom). Nor could its author have guessed that, in 1913, Niels Bohr (settled briefly at Manchester with Rutherford) would set out to reconcile the properties of the simplest atom, that of hydrogen, with Planck's discovery that energy is transferred between atoms only in quanta.

The connection between atomic structure and the quantal character of energy is that particular atoms emit radiation with a precisely defined frequency -- the so-called spectral lines. An author of a book about discoveries yet to be made could not have failed to note that by 1860, characteristic spectral lines had been recognized as a means of analyzing the chemical composition of stars, the Sun included. By the 1890s, a detailed investigation of the spectral lines of hydrogen (many of which are in the ultraviolet) had revealed mathematical regularities in the frequencies at which these appear. Around 1890, the Swedish physicist Johannes Rydberg proposed that the frequency of the spectral lines of hydrogen is most simply expressed as the difference between two numbers for which a simple mathematical formula can be given. In 1913, that and later developments provided Bohr with an essential clue to the structure of the hydrogen atom: the electrons travel around the nucleus much as if they are planets revolving about a star, except that only a restricted set of orbits is allowed, each with a well-defined energy. Then the regularities in the frequencies of the spectral lines are understood; each spectral line arises when an electron changes from one allowed orbit to another, and its frequency corresponds to the difference between the two energies.
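
The regularity Rydberg spotted and Bohr explained can be written down (in modern notation, not the text's): for hydrogen,

\[
\frac{1}{\lambda} = R_{H}\left(\frac{1}{n_{1}^{2}} - \frac{1}{n_{2}^{2}}\right), \qquad E_{n} = -\frac{13.6\ \mathrm{eV}}{n^{2}},
\]

with R_H approximately 1.097 x 10^7 per meter: every spectral line is literally the difference of two allowed energies, just as the two-term formula requires.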

Bohr's discovery that only some orbits are stable is not a restriction of what is physically possible imposed by the then-new quantum theory, but, on the contrary, is a kind of exemption license from classical expectations. In Maxwell's theory, electrons or any other electrically charged particles should not be able to travel in tight orbits with the dimensions of atoms without losing all their energy as radiation -- so much had been clear since the time of Lorentz. Far from discovering a novel restriction, Bohr had hit upon conditions in which an electron in a tiny hydrogen atom can remain indefinitely in its orbit without losing energy. He called these orbits stationary states. Disappointingly for Bohr and his contemporaries, atoms other than that of hydrogen remained inexplicable. A decade (and the First World War) passed before that deficiency was made good.

The recognition that radiation, say a beam of light or the heat from a domestic radiator, can be understood by supposing that it consists of indivisible quanta characterized by frequencies that may span a considerable range is superficially at odds with the idea that radiation is a wavelike phenomenon, first advocated by Huygens in the seventeenth century and substantiated by Maxwell in the 1860s; it amounts to a return to Newton's notion that a beam of light consists of "corpuscles" of different colors. By 1916, when Einstein published a second crucial paper on the quantum theory (which, among other things, included the principle on which lasers function), physics had no choice but to accept that radiation is both wavelike and corpuscular; the corpuscles were christened photons, the indivisible atoms of radiation.

In 1924, the French scientist Louis de Broglie turned the argument around; if photons are both wavelike and corpuscular, may not electrons also have wavelike as well as corpuscular properties? The conclusion that they do was not proved by experiment until 1927, but sated by earlier surprises, the research community cheerfully took de Broglie at his word. The real surprises came only in 1925 and 1926, when two scientists independently put forward formal systems of mechanics designed to account for the strange assumptions underlying Bohr's account of the hydrogen atom. First, Werner Heisenberg at Göttingen (with Max Born and Pascual Jordan) devised a system called "matrix mechanics" from which they were able to calculate the properties of quantized systems. The starting point for that enterprise was Heisenberg's proof that it is not possible simultaneously to measure the position and the speed of a particle such as an electron. This became known as his uncertainty principle. Almost at the same time, Erwin Schrödinger (an Austrian, but then at the University of Zurich) devised his "wave mechanics," which seemed to cast the properties of quantum systems in the familiar language of mathematical physics. It fell to Paul Dirac, then a young researcher at the University of Cambridge, to show that the two descriptions are equivalent to each other.
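
The two ideas that anchor this paragraph have famously terse statements (added here as a gloss): de Broglie's wavelength for a particle of momentum p, and Heisenberg's limit on the simultaneous knowledge of position and momentum,

\[
\lambda = \frac{h}{p}, \qquad \Delta x\,\Delta p \ge \frac{\hbar}{2},
\]

where \hbar = h/2\pi.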

Those developments now constitute what is called quantum mechanics, the conceptual development that most markedly characterizes the twentieth century. It is a remarkably successful tool for calculating the properties of systems on an atomic scale, and also a system whose internal self-consistency is so compelling that it has been used successfully to predict that particles of matter have previously unexpected properties and even that particles of matter not previously known must somewhere or somehow exist. But in no sense is quantum mechanics a license for the belief that physical phenomena on a very small scale do not follow the principle that to each event (or happening), there is a cause. Rather, some causes may have several consequences whose likelihood can be calculated from the rules of quantum mechanics. Nor is everybody now content with the meaning of quantum mechanics. On the contrary, great attention has been given to this and related questions in recent years. The goals are practical as well as philosophical. Is it possible to avoid the limits imposed by Heisenberg's uncertainty principle? Is it possible to design computers that exploit the knowledge that a single cause may have several consequences? At what stage in the evolution of the contemporary computer industry will further progress be limited by the small size of the electrical components etched into the surfaces of pieces of silicon?

The emergence of quantum mechanics could not have been foreseen in 1900, and it would have required a perceptive author indeed to appreciate that gravitation would be as important an issue as it was in the twentieth century. There were clues, notably the failure of Newton's theory of gravitation to account for the details of the motion of some celestial objects -- especially the rate of twisting of the elliptical orbit of Mercury about the Sun and the difficulty of making accurate predictions of the return of periodic comets, even well-known objects such as Halley's comet. There was also a current of theoretical speculation, triggered by the new knowledge of electromagnetism and typified by Fitzgerald's imaginative remark in 1894 that "gravity is probably due to a change in the structure of the ether produced by the presence of matter." In 1900, Lorentz read a paper on gravitation to the Amsterdam Academy of Sciences, starting a trend that occupied Europe's principal physicists for a decade and a half, until Einstein produced his general theory of relativity (which would have been better called a "relativistic theory of gravitation") in 1915. Fitzgerald's guess was vindicated: if "structure of the ether" is replaced by "curvature of space," we have the essence of Einstein's theory.

The general theory is equivalent to Newtonian gravitation provided that the concentration of mass (or, the same thing, of energy) is not too great; it is also naturally consistent with the special theory of relativity. Its effect, unique among physical theories, is to represent gravitational interactions geometrically, by the geometry of space-time. The gravitational field is not imposed on space-time as, say, Picasso applied paint to canvas: rather, it is space-time. That is another illustration of how, whenever people dispense with the Newtonian notion of action at a distance, they are compelled to endow empty space with properties that Euclid never dreamed of. Einstein's theory of gravitation has survived all the tests it has been possible to make of it; that is the basis of the widely shared belief that it is the outstanding achievement of human intellect and imagination of the twentieth century. Yet, as will be seen, there remains the horrendous unsolved problem of how to reconcile the theory of gravitation with quantum mechanics.
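
The geometrical representation described here is captured in a single equation (modern notation, an editorial gloss): Einstein's field equations,

\[
G_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu},
\]

in which the left side describes the curvature of space-time and the right side the distribution of mass and energy; gravitation is the curvature.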

The author of What Remains to Be Discovered (in 1900) would also, of course, have had to ask, What is life? Thanks to the achievements of biologists in the nineteenth century, there was much to say about how the living world came about. Pride of place would have gone to Darwin's theory of evolution by natural selection, published in 1858, which quickly became the guiding principle of biology in the closing decades of the nineteenth century. Nevertheless, the author of our book would have startled readers with news of the rediscovery of Gregor Mendel's observations in the 1850s and 1860s of the patterns of inheritance in plants. By the turn of the century, Mendel's observations seemed a direct challenge to Darwin's notion that the inheritable variations that account for evolution are invariably small variations. That the implicit contradiction would lead to energetic research could have been foreseen in 1900; that it would lead first to the foundation of modern genetics (chiefly at Columbia University in New York) and then, in the 1930s, to a recognition that the apparent conflict is not a conflict at all would have been more difficult to predict.

Modern biology was being created. That cells are the essential units of living things had been recognized in the 1830s, when microscopes capable of making cells visible to an observer first became widely available. By the 1880s, the German physiologist August Weismann had shown that, from early in an embryo's development, the cells that are eventually responsible for reproduction (called germ cells) are physically distinct from ordinary body cells (called somatic cells). Weismann also recognized the structures in the cell nuclei that appear to be concerned with the transfer of inheritable characteristics from one generation of cells or even of whole organisms to their successors; these structures are called chromosomes. He also concluded that, in sexually reproducing organisms, germ cells differ from somatic cells in having only half as many chromosomes. The group at Columbia University added detail to this picture. In quick succession came the proof that inheritable characteristics are determined by entities called genes; that the genes are arranged in a linear fashion along the chromosomes, much like beads on a string; and that different versions of the same gene are responsible for alternative versions of similar characteristics, say the color of a person's skin. Although the physical character of the genes was not known for half a century, the science of genetics (so named only in 1906) was given a solid foundation that is not yet superseded.

The eventual rapprochement between Darwinism and genetics could not have been anticipated in 1900. In retrospect, it is curious that so little attention was paid to the constitution of genes. What were they made of? The difficulty was partly technical in that there was no way the components of cells, and the chromosomes in particular, could be separated from each other and thus studied in isolation. But in due course it became clear that chromosomes have two components: protein and a material called nucleic acid. In the decade after the Second World War, the crucial question was whether the stuff of inheritance consists of protein or of nucleic acid.

It was in 1900, as it happens, that Emil Fischer, Germany's towering genius of chemistry, was unraveling the molecular structure of both protein molecules and nucleic acids. Perhaps the author of What Remains to Be Discovered (in 1900) could have made a lucky guess at whether nucleic acids or proteins are the stuff of inheritance, but it took until the end of the Second World War to establish that the particular nucleic acid involved in chromosome structure is DNA; chemically, it seemed unpromising material to provide the functions of genes because of its repetitive structure. DNA molecules are polymers of only four distinct chemical units differing very little from each other. How could such uniform and apparently featureless molecules generate the great variety with which the living world abounds?

The answer came in April 1953, when two young men at the University of Cambridge, J. D. Watson and Francis H. C. Crick, built a structural model of the DNA molecule whose details are in themselves sufficient to illustrate how these molecules function as repositories of genetic information. This structure of DNA was self-evidently also the repository of the recipe by means of which the cells in all organisms carry out the specific functions required of them. Indeed, its chemical structure even embodies the recipe by means of which a single fertilized egg in a sexually reproducing organism can develop into a fully functioning adult; what is called ontogeny had at last been unambiguously brought within the bounds of rational inquiry. That was the springboard for a detailed exploration of what has proved to be the universal biochemical machinery of living things, which continues still at breakneck pace.
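
The pairing rules at the heart of the Watson-Crick model are simple enough to state in a few lines of code. The sketch below is illustrative only -- the sequence is hypothetical and nothing in it comes from Maddox's text. It shows how one strand fixes its complement mechanically through the A-T and G-C rules, and how a four-letter alphabet already yields 4^n distinct sequences of length n, the source of the variety that puzzled earlier chemists.

# Watson-Crick base pairing: adenine-thymine, guanine-cytosine.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the complementary DNA strand, read in the opposite direction."""
    return "".join(PAIR[base] for base in reversed(strand))

fragment = "ATGGCCATTGTA"        # a hypothetical 12-base sequence
print(complement(fragment))      # -> TACAATGGCCAT
print(4 ** len(fragment))        # 16,777,216 possible 12-base sequences

Either strand suffices to reconstruct the other, which is exactly the property that makes the double helix a copyable repository of information.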

New industries (under the general label of biotechnology) have been spawned, radically novel therapeutic techniques (such as gene therapy) await refinement and there is a prospect that the breeding of productive animals and plants will be enormously improved by the techniques now becoming available. The structure of DNA ranks with Copernicus's successful advocacy of the heliocentric hypothesis in importance. In 1900, a few brave spirits may have hoped that an understanding of life would be won during the century just beginning, but there cannot have been many of them.

......

These observations of what has happened in science in this century illustrate two important truths. First, new understanding does indeed spring from current understanding, and usually from contradictions that have become apparent. Second, while it may be possible confidently to guess in which fields of science new understanding will be won, the nature of the discoveries that will deepen understanding of the world cannot be perfectly anticipated. This book does not pretend to describe discoveries yet to be made, but rather suggests which areas of science are ripe for discovery; that harvest of discoveries will be crucial both for the self-consistency of science itself and for its consequences in the wider world.

The text is divided into three parts: Matter, Life and Our World. As will be seen, contradictions abound, perhaps most conspicuously at the intersection of the apparently successful theories of particle physics and of the expanding universe. In the light of past experience, it is folly to believe that a so-called theory of everything is waiting to be formulated, but there may be what many physicists refer to as a "new physics," a physics regulated by principles not yet imagined. And in the life sciences, made exuberant by the understanding that has flowed from the discovery of the structure of DNA, problems abound: How did life begin? How will biology make comprehensible the vast amounts of data now being gathered? How the brain functions both in the everyday world and as the human attribute of mind is hardly clearer now than at the beginning of the century.

The river of discovery will continue to flow without cessation, deepening our understanding of the world and enhancing our capacity to forfend calamity and live congenial lives. As will be seen from the final chapter, there are crucial lessons in this tale for governments, the research profession and the rest of us.

(Copyright 1999 The New York Times Company)
