Physics, major science dealing with the fundamental constituents
of the universe, the forces they exert on one another, and the results
produced by these forces. Sometimes modern physics takes a more sophisticated
approach, one built on the laws of symmetry and conservation, such as those
pertaining to energy, momentum, charge, and parity.
Physics is closely related to the other natural sciences and, in a sense,
encompasses them. Chemistry, for example, deals with the interaction of
atoms to form molecules; much of modern geology is largely a study of the
physics of the earth and is known as geophysics; and astronomy deals with
the physics of the stars and outer space. Even living systems are made
up of fundamental particles and, as studied in biophysics and biochemistry,
they follow the same types of laws as the simpler particles traditionally
studied by a physicist.
The emphasis on the interaction between particles in modern physics,
known as the microscopic approach, must often be supplemented by a macroscopic
approach that deals with larger elements or systems of particles. This
macroscopic approach is indispensable to the application of physics to
much of modern technology. Thermodynamics, for example, a branch of physics
developed during the 19th century, deals with the elucidation and measurement
of properties of a system as a whole and remains useful in other fields
of physics; it also forms the basis of much of chemical and mechanical
engineering. Such properties as the temperature, pressure, and volume of
a gas have no meaning for an individual atom or molecule; these thermodynamic
concepts can only be applied directly to a very large system of such particles.
A bridge exists, however, between the microscopic and macroscopic approach;
another branch of physics, known as statistical mechanics, indicates how
pressure and temperature can be related to the motion of atoms and molecules
on a statistical basis.
Physics emerged as a separate science only in the early 19th century;
until that time a physicist was often also a mathematician, philosopher,
chemist, biologist, engineer, or even primarily a political leader or artist.
Today the field has grown to such an extent that with few exceptions modern
physicists have to limit their attention to one or two branches of the
science. Once the fundamental aspects of a new field are discovered and
understood, they become the domain of engineers and other applied scientists.
The 19th-century discoveries in electricity and magnetism, for example,
are now the province of electrical and communication engineers; the properties
of matter discovered at the beginning of the 20th century have been applied
in electronics; and the discoveries of nuclear physics, most of them not
yet 40 years old, have passed into the hands of nuclear engineers for applications
to peaceful or military uses.
The Babylonians, Egyptians, and early Mesoamericans observed the motions
of the planets and succeeded in predicting eclipses, but they failed to
find an underlying system governing planetary motion. Little was added
by the Greek civilization, partly because the uncritical acceptance of
the ideas of the major philosophers Plato and Aristotle discouraged experimentation.
Some progress was made, however, notably in Alexandria, the scientific
center of Greek civilization. There, the Greek mathematician and inventor
Archimedes designed various practical mechanical devices, such as levers
and screws, and measured the density of solids.
Little advance was made in physics, or in any other science, during
the Middle Ages, other than the preservation of the classical Greek treatises,
for which Arab scholars such as Averroës and Al-Quarashi, the
latter also known as Ibn al-Nafis, deserve much credit. The founding of
the great medieval universities by monastic orders in Europe, starting
in the 13th century, generally failed to advance physics or any experimental
investigations. The Italian Scholastic philosopher and theologian Saint
Thomas Aquinas, for instance, attempted to demonstrate that the works of
Plato and Aristotle were consistent with the Scriptures. The English Scholastic
philosopher and scientist Roger Bacon was one of the few philosophers who
advocated the experimental method as the true foundation of scientific
knowledge and who also did some work in astronomy, chemistry, optics, and
machine design.
The advent of modern science followed the Renaissance and was ushered
in by the highly successful attempt by four outstanding individuals to
interpret the behavior of the heavenly bodies during the 16th and early
17th centuries. The Polish natural philosopher Nicolaus Copernicus propounded
the heliocentric system, in which the planets move around the sun. He was convinced,
however, that the planetary orbits were circular, and therefore his system
required almost as many complicated elaborations as the Ptolemaic system
it was intended to replace. The Danish astronomer Tycho Brahe, believing
in the Ptolemaic system, tried to confirm it by a series of remarkably
accurate measurements. These provided his assistant, the German astronomer
Johannes Kepler, with the data to overthrow the Ptolemaic system and led
to the enunciation of three laws that conformed with a modified heliocentric
theory. Galileo, having heard of the invention of the telescope, constructed
one of his own and, starting in 1609, was able to confirm the heliocentric
system by observing the phases of the planet Venus. He also discovered
the surface irregularities of the moon, the four brightest satellites of
Jupiter, sunspots, and many stars in the Milky Way. Galileo's interests
were not limited to astronomy; by using inclined planes and an improved
water clock, he had earlier demonstrated that bodies of different weight
fall at the same rate (thus overturning Aristotle's dictums), and that
their speed increases uniformly with the time of fall. Galileo's astronomical
discoveries and his work in mechanics foreshadowed the work of the 17th-century
English mathematician and physicist Sir Isaac Newton, one of the greatest
scientists who ever lived.
Starting about 1665, at the age of 23, Newton enunciated the principles
of mechanics, formulated the law of universal gravitation, separated white
light into colors, proposed a theory for the propagation of light, and
invented differential and integral calculus. Newton's contributions covered
an enormous range of natural phenomena: He was thus able to show that not
only Kepler's laws of planetary motion but also Galileo's discoveries of
falling bodies follow from a combination of his own second law of motion and
the law of gravitation, and to predict the appearance of comets, explain
the effect of the moon in producing the tides, and explain the precession
of the equinoxes.
The Development of Mechanics
The subsequent development of physics owes much to Newton's laws of
motion, notably the second, which states that the force needed to accelerate
an object is equal to the product of its mass and its acceleration. If the
force and the initial position and velocity of a body are given, subsequent
positions and velocities can be computed, although the force may vary with
time or position; in the latter case, Newton's calculus must be applied.
This simple law contained another important aspect: Each body has an inherent
property, its inertial mass, which influences its motion. The greater this
mass, the slower the change of velocity when a given force is impressed.
Even today, the law retains its practical utility, as long as the body
is not very small, not very massive, and not moving extremely rapidly.
Newton's third law, expressed simply as "for every action there is an equal
and opposite reaction," recognizes, in more sophisticated modern terms,
that all forces between particles come in oppositely directed pairs, although
not necessarily along the line joining the particles.
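The second law lends itself to a numerical illustration: given a force and the initial position and velocity, repeatedly applying a = F/m yields the subsequent velocities and positions. The following is a minimal sketch (illustrative values, not from the text), using a constant gravitational force:

```python
def simulate_fall(mass_kg, force_n, t_end, dt=1e-3):
    """Step Newton's second law a = F/m forward in time for a body
    starting at rest; return (position, velocity) at t_end."""
    x, v, t = 0.0, 0.0, 0.0
    while t < t_end:
        a = force_n / mass_kg   # second law: acceleration = force / inertial mass
        v += a * dt             # speed increases uniformly under a constant force
        x += v * dt
        t += dt
    return x, v

# Free fall: F = m * g, so the acceleration is independent of the mass,
# consistent with Galileo's inclined-plane experiments described earlier.
g = 9.81  # m/s^2, near the earth's surface
x1, v1 = simulate_fall(1.0, 1.0 * g, 2.0)
x10, v10 = simulate_fall(10.0, 10.0 * g, 2.0)
```

Both bodies reach the same speed after 2 seconds (about 19.6 m/s) even though the forces differ tenfold, since the greater inertial mass slows the change of velocity in exact proportion.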
Electricity and Magnetism
Although the ancient Greeks were aware of the electrostatic properties
of amber, and the Chinese as early as 2700 BC made crude magnets from lodestone,
experimentation with and the understanding and use of electric and magnetic
phenomena did not occur until the end of the 18th century. In 1785 the
French physicist Charles Augustin de Coulomb first confirmed experimentally
that electrical charges attract or repel one another according to an inverse
square law, similar to that of gravitation. A powerful theory to calculate
the effect of any number of static electric charges arbitrarily distributed
was subsequently developed by the French mathematician Siméon Denis
Poisson and the German mathematician Carl Friedrich Gauss.
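Coulomb's inverse-square law can be put in numbers with a short calculation; the Coulomb constant used here (about 8.988 × 10^9 N·m²/C²) is a standard modern value not given in the text:

```python
K = 8.988e9  # Coulomb constant in N*m^2/C^2 (modern approximate value)

def coulomb_force(q1, q2, r):
    """Magnitude of the force between two point charges a distance r apart.
    Positive for like charges (repulsion), negative for unlike (attraction)."""
    return K * q1 * q2 / r**2

# Inverse-square behavior, as in gravitation: doubling the
# separation reduces the force to one quarter.
f_near = coulomb_force(1e-6, 1e-6, 0.1)   # two 1-microcoulomb charges, 10 cm apart
f_far = coulomb_force(1e-6, 1e-6, 0.2)    # same charges, 20 cm apart
```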
A positively charged particle attracts a negatively charged particle,
tending to accelerate one toward the other. If the medium through which
the particle moves resists the motion, this resistance may reduce it to
motion at constant velocity (rather than accelerated motion), and the medium
will be heated and may also be otherwise affected. The ability to maintain
an electromotive force that could continue to drive electrically charged
particles had to await the development of the chemical battery by the Italian
physicist Alessandro Volta in 1800. The classical theory of a simple electric
circuit assumes that the two terminals of a battery are maintained positively
and negatively charged as a result of its internal properties. When the
terminals are connected by a wire, negatively charged particles will be
simultaneously pushed away from the negative terminal and attracted to
the positive one, and in the process heat up the wire that offers resistance
to the motion. Upon their arrival at the positive terminal, the battery
will force the particles toward the negative terminal, overcoming the opposing
forces of Coulomb's law. The German physicist Georg Simon Ohm first discovered
the existence of a simple proportionality constant between the current
flowing and the electromotive force supplied by a battery, known as the
resistance of the circuit. Ohm's law, which states that the resistance
is equal to the electromotive force, or voltage, divided by the current,
is not a fundamental and universally applicable law of physics, but rather
describes the behavior of a limited class of solid materials.
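Ohm's proportionality amounts to simple arithmetic, sketched here with illustrative numbers:

```python
def resistance(voltage_v, current_a):
    """Ohm's law: R = V / I. As noted above, this describes a limited
    class of solid conductors rather than a universal law of physics."""
    return voltage_v / current_a

def current(voltage_v, resistance_ohm):
    """Rearranged form: I = V / R."""
    return voltage_v / resistance_ohm

# A 1.5-volt cell driving 0.1 ampere through a wire implies 15 ohms.
r = resistance(1.5, 0.1)
```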
The historical concepts of magnetism, based on the existence of pairs
of opposite magnetic poles, had originated in the 17th century and owe much
to the work of Coulomb. The first connection between magnetism and electricity,
however, was made through the pioneering experiments of the Danish physicist
and chemist Hans Christian Oersted, who in 1819 discovered that a magnetic
needle could be deflected by a wire nearby carrying an electric current.
Within one week after learning of Oersted's discovery, the French scientist
André Marie Ampère showed experimentally that two current-carrying
wires would affect each other like poles of magnets. In 1831 the British
physicist and chemist Michael Faraday discovered that an electric current
could be induced (made to flow) in a wire without connection to a battery,
either by moving a magnet or by placing another current-carrying wire with
an unsteady—that is, rising and falling—current nearby. The intimate connection
between electricity and magnetism, now established, can best be stated
in terms of electric or magnetic fields, or forces that will act at a particular
point on a unit charge or unit current, respectively, placed at that point.
Stationary electric charges produce electric fields; currents—that is,
moving electric charges—produce magnetic fields. Electric fields are also
produced by changing magnetic fields, and vice versa. Electric fields exert
forces on charged particles as a function of their charge alone; magnetic
fields will exert an additional force only if the charges are in motion.
These qualitative findings were finally put into a precise mathematical
form by the British physicist James Clerk Maxwell who, in developing the
partial differential equations that bear his name, related the space and
time changes of electric and magnetic fields at a point with the charge
and current densities at that point. In principle, they permit the calculation
of the fields everywhere and any time from a knowledge of the charges and
currents. An unexpected result arising from the solution of these equations
was the prediction of a new kind of electromagnetic field, one that was
produced by accelerating charges, that was propagated through space with
the speed of light in the form of an electromagnetic wave, and that decreased
with the inverse square of the distance from the source. In 1887 the German
physicist Heinrich Rudolf Hertz succeeded in actually generating such waves
by electrical means, thereby laying the foundations for radio, radar, television,
and other forms of telecommunications.
The behavior of electric and magnetic fields in these waves is quite
similar to that of a very long taut string, one end of which is rapidly
moved up and down in a periodic fashion. Any point along the string will
be observed to move up and down, or oscillate, with the same period or
with the same frequency as the source. Points along the string at different
distances from the source will reach the maximum vertical displacements
at different times, or at a different phase. Each point along the string
will do what its neighbor did, but a little later, if it is farther removed
from the vibrating source (see Oscillation). The speed with which the disturbance,
or the message to oscillate, is transmitted along the string is called
the wave velocity (see Wave Motion). This is a function of the medium, its
mass, and the tension in the case of a string. An instantaneous snapshot
of the string (after it has been in motion for a while) would show equispaced
points having the same displacement and motion, separated by a distance
known as the wavelength, which is equal to the wave velocity divided by
the frequency. In the case of the electromagnetic field one can think of
the electric-field strength as taking the place of the up-and-down motion
of each piece of the string, with the magnetic field acting similarly at
a direction at right angles to that of the electric field. The electromagnetic-wave
velocity away from the source is the speed of light.
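The relation stated above, wavelength equals wave velocity divided by frequency, applies directly to light; a small sketch with an approximate value for c:

```python
C = 2.998e8  # speed of light in vacuum, m/s (approximate)

def wavelength(wave_velocity, frequency):
    """Wavelength = wave velocity / frequency, exactly as for the
    equispaced points in the snapshot of the vibrating string."""
    return wave_velocity / frequency

# Green light near the middle of the visible range:
lam_green = wavelength(C, 5.5e14)  # roughly 5.5e-7 m, i.e. 5.5e-5 cm
```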
The rectilinear propagation of light had been known since antiquity,
and the ancient Greeks believed that light consisted of a stream of corpuscles.
They were, however, quite confused as to whether these corpuscles originated
in the eye or in the object viewed. Any satisfactory theory of light must
explain its origin and disappearance and its changes in speed and direction
while it passes through various media. Partial answers to these questions
were proposed in the 17th century by Newton, who based them on the assumptions
of a corpuscular theory, and by the English scientist Robert Hooke and
the Dutch astronomer, mathematician, and physicist Christiaan Huygens,
who proposed a wave theory. No experiment could be performed that distinguished
between the two theories until the demonstration of interference in the
early 19th century by the British physicist and physician Thomas Young.
The subsequent work of the French physicist Augustin Jean Fresnel decided
the issue decisively in favor of the wave theory.
Interference can be demonstrated by placing a thin slit in front of
a light source, stationing a double slit farther away, and looking at a
screen spaced some distance behind the double slit. Instead of showing
a uniformly illuminated image of the slits, the screen will show equispaced
light and dark bands. Particles coming from the same source and arriving
at the screen via the two slits could not produce different light intensities
at different points and could certainly not cancel each other to yield
dark spots. Light waves, however, can produce such an effect. Assuming,
as did Huygens, that each of the double slits acts as a new source, emitting
light in all directions, the two wave trains arriving at the screen at
the same point will not generally arrive in phase, though they will have
left the two slits in phase. Depending on the difference in their paths,
"positive" displacements arriving at the same time as "negative" displacements
of the other will tend to cancel out and produce darkness, while the simultaneous
arrival of either positive or negative displacements from both sources
will lead to reinforcement or brightness. Each apparent bright spot undergoes
a timewise variation as successive in-phase waves go from maximum positive
through zero to maximum negative displacement and back. Neither the eye
nor any classical instrument, however, can determine this rapid "flicker,"
which in the visible-light range has a frequency from 4 × 10^14 to
7.5 × 10^14 Hz, or cycles per second. Although it cannot be measured
directly, the frequency can be inferred from wavelength and velocity measurements.
The wavelength can be determined from a simple measurement of the distance
between the two slits, and the distance between adjacent bright bands on
the screen; it ranges from 4 × 10^-5 cm (1.6 × 10^-5 in) for
violet light to 7.5 × 10^-5 cm (3 × 10^-5 in) for red light with
intermediate wavelengths for the other colors.
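Recovering the wavelength also requires the slit-to-screen distance, which the text leaves implicit; with the standard small-angle result λ = d·s/L (d the slit separation, s the fringe spacing, L the screen distance), a hypothetical measurement might look like:

```python
def wavelength_from_fringes(slit_separation, fringe_spacing, screen_distance):
    """Small-angle double-slit relation: lambda = d * s / L, where d is the
    slit separation, s the spacing of adjacent bright bands, and L the
    distance from the slits to the screen (hypothetical setup)."""
    return slit_separation * fringe_spacing / screen_distance

# Slits 0.2 mm apart, bright bands 5.5 mm apart, screen 2 m away:
lam = wavelength_from_fringes(0.2e-3, 5.5e-3, 2.0)  # meters
```

This gives 5.5 × 10^-7 m, or 5.5 × 10^-5 cm, within the visible range quoted above.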
The first measurement of the velocity of light was carried out by the
Danish astronomer Olaus Roemer in 1676. He noted an apparent time variation
between successive eclipses of Jupiter's moons, which he ascribed to the
intervening change in the distance between Earth and Jupiter, and to the
corresponding difference in the time required for the light to reach the
earth. His measurement was in fair agreement with the improved 19th-century
observations of the French physicist Armand Hippolyte Louis Fizeau, and
with the work of the American physicist Albert Abraham Michelson and his
coworkers, which extended into the 20th century. Today the velocity of
light is known very accurately as 299,792.458 km/sec (186,282.4 mi/sec) in vacuum.
In matter, the velocity is less and varies with frequency, giving rise
to a phenomenon known as dispersion (see also Optics; Spectrum; Vacuum).
Maxwell's work contributed several important results to the understanding
of light by showing that it was electromagnetic in origin and that electric
and magnetic fields oscillated in a light wave. His work predicted the
existence of nonvisible light, and today electromagnetic waves or radiations
are known to cover the spectrum from gamma rays (see Radioactivity), with wavelengths
of 10^-12 cm (4 × 10^-13 in), through X rays, visible light, microwaves,
and radio waves, to long waves of hundreds of kilometers in length (see
X Ray). It also related the velocity of light in vacuum and through media
to other observed properties of space and matter on which electrical and
magnetic effects depend. Maxwell's discoveries, however, did not provide
any insight into the mysterious medium, corresponding to the string, through
which light and electromagnetic waves had to travel (see the Electricity
and Magnetism section above). Based on the experience with water, sound,
and elastic waves, scientists assumed a similar medium to exist, a "luminiferous
ether" without mass, which was all-pervasive (because light could obviously
travel through the vacuum of outer space), and had to act like a solid (because
electromagnetic waves were known to be transverse, with the oscillations
taking place in a plane perpendicular to the direction of propagation, whereas
gases and liquids can sustain only longitudinal waves, such as sound
waves). The search for this mysterious ether occupied physicists' attention
for much of the last part of the 19th century.
The difficulty was further compounded by the extension of a simple problem.
A person walking forward with a speed of 3.2 km/h (2 mph) in a train traveling
at 64.4 km/h (40 mph) appears to move at 67.6 km/h (42 mph), to an observer
on the ground. In terms of the velocity of light the question that now
arose was: If light travels at about 300,000 km/sec (about 186,000 mi/sec)
through the ether, at what velocity should it travel relative to an observer
on earth while the earth also moves through the ether? Or, alternately,
what is the earth's velocity through the ether? The famous Michelson-Morley
experiment, first performed in 1887 by Michelson and the American chemist
Edward Williams Morley using an interferometer, was an attempt to measure
this velocity; if the earth were traveling through a stationary ether,
a difference should be apparent in the time taken by light to traverse
a given distance, depending on whether it travels in the direction of or
perpendicular to the earth's motion. The experiment was sensitive enough
to detect even a very slight difference by interference; the results were
negative. Physics was now in a profound quandary from which it was not
rescued until Einstein formulated his theory of relativity in 1905.
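The classical expectation the experiment tested can be estimated with the standard first-order result: the round-trip times along the two arms should differ by roughly (L/c)(v/c)². The arm length and orbital speed below are illustrative textbook values, not taken from the text:

```python
C = 2.998e8      # speed of light, m/s
V_ORBIT = 3.0e4  # Earth's orbital speed, m/s (about 30 km/s)

def classical_time_difference(arm_length_m):
    """First-order classical prediction for the difference in round-trip
    time between the interferometer arm parallel to the ether wind and
    the perpendicular arm: dt ~ (L/c) * (v/c)**2."""
    return (arm_length_m / C) * (V_ORBIT / C) ** 2

dt = classical_time_difference(11.0)   # ~11 m effective arm length
shift = dt * C / 5.5e-7                # expected shift, in fringes of visible light
```

This works out to about 0.2 of a fringe, doubled to roughly 0.4 when the apparatus is rotated through 90 degrees; the interferometer could resolve far smaller shifts, yet none was observed.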
The First Law of Thermodynamics
The equivalence of heat and work was explained by the German physicist
Hermann Ludwig Ferdinand von Helmholtz and the British mathematician and
physicist William Thomson, 1st Baron Kelvin, by the middle of the 19th
century. Equivalence means that doing work on a system can produce exactly
the same effect as adding heat; thus the same temperature rise can be achieved
in a gas contained in a vessel by adding heat or by doing an appropriate
amount of work through a paddle wheel sticking into the container where
the paddle is actuated by falling weights. The numerical value of this
equivalent was first demonstrated by the British physicist James Prescott
Joule in several heating and paddle-wheel experiments between 1840 and
1849.
It was thus recognized that performing work on a system and adding heat
to it are both means of transferring energy. Therefore, the amount of
energy added by heat or work had to increase the internal energy of the
system, which in turn determined the temperature. If the internal energy
remains unchanged, the amount of work done on a system must equal the heat
given up by it. This is the first law of thermodynamics, a statement of
the conservation of energy. Not until the action of molecules in a system
was better understood by the development of the kinetic theory could this
internal energy be related to the sum of the kinetic energies of all the
molecules making up the system.
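Joule's paddle-wheel equivalence can be put in numbers: the work done by a falling weight, if converted entirely to internal energy of the stirred water, fixes the temperature rise. A sketch assuming water's specific heat of about 4186 J/(kg·K), a value not quoted in the text:

```python
C_WATER = 4186.0  # specific heat of water, J/(kg*K) (assumed modern value)
G = 9.81          # gravitational acceleration, m/s^2

def temperature_rise(weight_kg, drop_m, water_kg):
    """First-law bookkeeping: the work done by the falling weight,
    delivered through the paddle wheel, all becomes internal energy
    of the water, raising its temperature."""
    work_j = weight_kg * G * drop_m
    return work_j / (water_kg * C_WATER)

# A 10-kg weight falling 2 m while stirring 1 kg of water:
dT = temperature_rise(10.0, 2.0, 1.0)  # kelvin
```

The rise is only about 0.05 K, which is why Joule's measurements demanded such care.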
The Second Law of Thermodynamics
While the first law indicates that energy must be conserved in any
interactions between a system and its surroundings, it gives no indication
whether all forms of mechanical and thermal energy exchange are possible.
That overall changes in energy proceed in one direction was first formulated
by the French physicist and military engineer Nicolas Léonard Sadi
Carnot, who in 1824 pointed out that a heat engine (a device that can produce
work continuously while only exchanging heat with its surroundings) requires
both a hot body as a source of heat and a cold body to absorb heat that
must be discharged. When the engine performs work, heat must be transferred
from the hotter to the colder body; to have the inverse take place requires
the expenditure of mechanical (or electrical) work. Thus, in a continuously
working refrigerator, the absorption of heat from the low temperature source
(the cold space) requires the addition of work (usually as electrical power),
and the discharge of heat (usually via finned coils in the rear) to the
surroundings (see Refrigeration). These ideas, based on Carnot's concepts,
were eventually formulated rigorously as the second law of thermodynamics
by the German mathematical physicist Rudolf Julius Emanuel Clausius and
by Lord Kelvin in various alternate, although equivalent, ways. One such
formulation is that heat cannot flow from a colder to a hotter body without
the expenditure of work.
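The refrigerator described above is, in energy terms, simple first-law bookkeeping: the heat discharged to the surroundings must equal the heat absorbed from the cold space plus the work supplied. A minimal sketch with illustrative figures:

```python
def heat_rejected(heat_from_cold_j, work_input_j):
    """Continuously working refrigerator: Q_hot = Q_cold + W.
    The coils at the rear must discharge more heat than is drawn
    from the cold chamber, by exactly the work supplied."""
    return heat_from_cold_j + work_input_j

# Drawing 300 J from the cold space with 100 J of electrical work:
q_hot = heat_rejected(300.0, 100.0)  # 400 J discharged to the surroundings
```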
From the second law, it follows that in an isolated system (one that
has no interactions with the surroundings) internal portions at different
temperatures will always adjust to a single uniform temperature and thus
produce equilibrium. This can also be applied to other internal properties
that may be different initially. If milk is poured into a cup of coffee,
for example, the two substances will continue to mix until they are inseparable
and can no longer be differentiated. Thus, an initial separate or ordered
state is turned into a mixed or disordered state. These ideas can be expressed
by a thermodynamic property, called the entropy (first formulated by Clausius),
which serves as a measure of how close a system is to equilibrium—that
is, to perfect internal disorder. The entropy of an isolated system, and
of the universe as a whole, can only increase, and when equilibrium is
eventually reached, no more internal change of any form is possible. Applied
to the universe as a whole, this principle suggests that all temperatures
in space will eventually become uniform, resulting in the so-called heat death
of the universe.
Locally, the entropy can be lowered by external action. This applies
to machines, such as a refrigerator, where the entropy in the cold chamber
is being reduced, and to living organisms. This local increase in order
is possible, however, only at the expense of an entropy increase in the
surroundings, where more disorder must be created.
This continued increase in entropy is related to the observed nonreversibility
of macroscopic processes. If a process were spontaneously reversible—that
is, if both the system and its surroundings could be brought back to their
initial states after the process—the entropy would remain constant
in violation of the second law. While this is true for macroscopic processes,
and therefore corresponds to daily experience, it does not apply to microscopic
processes, which are believed to be reversible. Thus, chemical reactions
between individual molecules are not governed by the second law, which
applies only to macroscopic ensembles.
From the promulgation of the second law, thermodynamics went on to
other advances and applications in physics, chemistry, and engineering.
Most chemical engineering, all power-plant engineering, and air-conditioning
and low-temperature physics are just a few of the fields that owe their
theoretical basis to thermodynamics and to the subsequent achievements
of such scientists as Maxwell, the American physicist Willard Gibbs, the
German physical chemist Walther Hermann Nernst, and the Norwegian-born
American chemist Lars Onsager.
Relativity
To extend the example of relative velocity introduced with the Michelson-Morley
experiment, two situations can be compared. One consists of a person, A,
walking forward with a velocity v in a train moving at velocity u. The
velocity of A with regard to an observer B stationary on the ground is
then simply V = u + v. If, however, the train were at rest in the station
and A was moving forward with velocity v while observer B walked backward
with velocity u, the relative speed between A and B would be exactly the
same as in the first case. In more general terms, if two frames of reference
are moving relative to each other at constant velocity, observations of
any phenomena made by observers in either frame will be physically equivalent.
As already mentioned, the Michelson-Morley experiment failed to confirm
the concept of adding velocities, and two observers, one at rest and the
other moving toward a light source with velocity u, both observe the same
light velocity V, commonly denoted by the symbol c.
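Special relativity replaces V = u + v with the composition law V = (u + v)/(1 + uv/c²), a standard result not derived in the text; it reduces to ordinary addition at everyday speeds and returns c whenever either velocity is c:

```python
C = 2.998e8  # speed of light, m/s

def add_velocities(u, v):
    """Relativistic composition of collinear velocities:
    V = (u + v) / (1 + u*v/c**2). For u and v much smaller than c
    this reduces to the everyday rule V = u + v."""
    return (u + v) / (1.0 + u * v / C**2)

walker = add_velocities(64.4 / 3.6, 3.2 / 3.6)  # train + walker, in m/s
beam = add_velocities(1.0e5, C)                 # light emitted from a moving frame
```

The walker's ground speed is 67.6 km/h to within an immeasurably small correction, while the light beam's speed remains exactly c for the moving observer.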
Einstein incorporated the invariance of c into his theory of relativity.
He also demanded a very careful rethinking of the concepts of space and
time, showing the imperfection of intuitive notions about them. As a consequence
of his theory, it is known that two clocks that keep identical time when
at rest relative to each other must run at different speeds when they are
in relative motion, and two rods that are identical in length (at rest)
will become different in length when they are in relative motion. Space
and time must be closely linked in a four-dimensional continuum where the
normal three-space dimensions must be augmented by an interrelated time
dimension.
Two important consequences of Einstein's relativity theory are the
equivalence of mass and energy and the speed of light as a limiting velocity
for material objects. Relativistic mechanics describes the motion
of objects with velocities that are appreciable fractions of the speed
of light, while Newtonian mechanics remains useful for velocities typical
of the macroscopic motion of objects on earth. No material object, however,
can have a speed equal to or greater than the speed of light.
Even more important is the relation between the mass m and energy E.
They are coupled by the relation E = mc^2, and because c is very large,
the energy equivalent of a given mass is enormous. The conversion of mass
into energy is significant in nuclear reactions, as in reactors and
nuclear weapons, and in the stars, where a significant loss of mass
accompanies the huge energy release.
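The scale of the equivalence can be checked directly; the value of c below is approximate:

```python
C = 2.998e8  # speed of light, m/s (approximate)

def rest_energy(mass_kg):
    """E = m * c**2, in joules."""
    return mass_kg * C**2

# One gram of matter:
e_gram = rest_energy(1.0e-3)  # about 9e13 J, roughly 21 kilotons of TNT
```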
Einstein's original theory, formulated in 1905 and known as the special
theory of relativity, was limited to frames of reference moving at constant
velocity relative to each other. In 1915, he generalized his hypothesis
to formulate the general theory of relativity that applied to systems that
accelerate with reference to each other. This extension showed gravitation
to be a consequence of the geometry of space-time and predicted the bending
of light in its passage close to a massive body like a star, an effect
first observed in 1919. General relativity, although less firmly established
than the special theory, has deep significance for an understanding of
the structure of the universe and its evolution. See also Cosmology.
X Rays
These very penetrating rays, first discovered in 1895 by the German physicist
Wilhelm Conrad Roentgen, were shown
to be electromagnetic radiation of very short wavelength in 1912 by the
German physicist Max Theodor Felix von Laue and his coworkers. The precise
mechanism of X-ray production was shown to be a quantum effect, and in
1914 the British physicist Henry Gwyn-Jeffreys Moseley used his X-ray spectrograms
to prove that the atomic number of an element, and hence the number of
positive charges in an atom, is the same as its position in the periodic
table (see Periodic Law). The photon theory of electromagnetic radiation
was further strengthened and developed by the prediction and observation
of the so-called Compton effect by the American physicist Arthur Holly
Compton in 1923.
Electron Physics
That electric charges were carried by extremely small particles had
already been suspected in the 19th century and, as indicated by electrochemical
experiments, the charge of these elementary particles was a definite, invariant
quantity. Experiments on the conduction of electricity through low-pressure
gases led to the discovery of two kinds of rays: cathode rays, coming from
the negative electrode in a gas discharge tube, and positive or canal rays
from the positive electrode. Sir Joseph John Thomson's 1897 experiment
measured the ratio of the charge q to the mass m of the cathode-ray particles.
The German physicist Philipp Lenard confirmed in 1899 that the ratio of
q to m for photoelectric particles was identical to that of cathode rays.
The American inventor Thomas Alva Edison had noted in 1883 that very hot
wires emit electricity, a phenomenon known as thermionic emission (also
called the Edison effect), and in 1899 Thomson showed that
this form of electricity also consisted of particles with the same q to
m ratio as the others. About 1911 the American physicist Robert Andrews
Millikan finally determined that electric charge always arises in multiples
of a basic unit e, and measured its value, now known to be 1.602 × 10^-19
coulombs. From the measured value of the q to m ratio, with q set equal
to e, the mass of the carrier, called the electron, could then be determined
as 9.110 × 10^-31 kg.
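The logic of combining the two measurements can be sketched directly; the charge-to-mass ratio below (about 1.759 × 10^11 C/kg) is a modern value assumed for the illustration, not quoted in the text:

```python
E_CHARGE = 1.602e-19   # Millikan's basic unit of charge e, in coulombs
Q_OVER_M = 1.759e11    # cathode-ray charge-to-mass ratio, C/kg (modern value)

def carrier_mass(charge, charge_to_mass):
    """With q set equal to e, the mass of the carrier follows
    as m = e / (q/m)."""
    return charge / charge_to_mass

m_electron = carrier_mass(E_CHARGE, Q_OVER_M)  # close to 9.11e-31 kg
```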
Finally, Thomson and others showed that the positive rays also consisted
of particles, each carrying a charge e, but of the positive variety. These
particles, however, now recognized as positive ions resulting from the
removal of an electron from a neutral atom, are much more massive than
the electron. The smallest, the hydrogen ion, is a single proton with a
mass of 1.673 × 10⁻²⁷ kg, about 1836 times more massive than the
electron (see Ion; Ionization). The "quantized" nature of electric charge
was now firmly established and, at the same time, two of the fundamental
subatomic particles identified.
In 1911 the New Zealand-born British physicist Ernest Rutherford, making
use of the newly discovered radiations from radioactive nuclei, found Thomson's
earlier model of an atom with uniformly distributed positive and negative
charged particles to be untenable. The very fast, massive, positively charged
alpha particles he employed were found to deflect sharply in their passage
through matter. This effect required an atomic model with a heavy positive
scattering center. Rutherford then suggested that the positive charge of
an atom was concentrated in a massive stationary nucleus, with the negative
electrons moving in orbits about it, held there by the electric attraction
between opposite charges. According to Maxwell's theory, however, such a
solar-system-like atom could not persist: the revolving electrons would emit
electromagnetic radiation, lose energy, and collapse into the nucleus in a
very short time.
Another sharp break with classical physics was required at this point.
It was provided by the Danish physicist Niels Henrik David Bohr, who postulated
the existence within atoms of certain specified orbits in which electrons
could revolve without electromagnetic radiation emission. These allowed
orbits, or so-called stationary states, are determined by the condition
that the angular momentum J of the orbiting electron must be a positive
integral multiple of Planck's constant divided by 2π, that is, J = nh/2π,
where the quantum number n may have any positive integer value. This extended
"quantization" to dynamics, fixed the possible orbits, and allowed Bohr
to calculate their radii and the corresponding energy levels. In 1914
the model was confirmed experimentally by the German-born American physicist
James Franck and the German physicist Gustav Hertz.
Bohr developed his model much further. He explained how atoms radiate
light and other electromagnetic waves, and also proposed that an electron
"lifted" by a sufficient disturbance of the atom from the orbit of smallest
radius and least energy (the ground state) into another orbit, would soon
"fall" back to the ground state. This falling back is accompanied by the
emission of a single photon of energy E = hf, where E is the difference
in energy between the higher and lower orbits. Each orbit shift emits a
characteristic photon of sharply defined frequency and wavelength; thus
one photon is emitted in a direct shift from the n = 3 to the n = 1
orbit, and it differs from both of the two photons emitted in a sequential
shift from n = 3 to n = 2 and then from n = 2 to n = 1. This model
allowed Bohr to account with great accuracy
for the simplest atomic spectrum, that of hydrogen, which had defied classical
physics.
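The direct and sequential shifts described above can be worked through numerically with the Bohr energy formula for hydrogen, E_n = −13.6 eV / n² (the 13.6 eV ground-state value is a standard figure, assumed here rather than quoted from the text):

```python
# Bohr-model energy levels of hydrogen and the photon energies for a
# direct n=3 -> n=1 shift versus the sequential n=3 -> n=2 -> n=1 shifts.
RYDBERG_EV = 13.6  # ground-state binding energy of hydrogen, eV (assumed)

def energy(n):
    """Energy of the nth Bohr orbit in electron volts."""
    return -RYDBERG_EV / n**2

direct = energy(3) - energy(1)   # one photon, 3 -> 1
step1 = energy(3) - energy(2)    # first photon, 3 -> 2
step2 = energy(2) - energy(1)    # second photon, 2 -> 1

print(f"direct 3->1 photon:  {direct:.2f} eV")
print(f"sequential photons: {step1:.2f} eV and {step2:.2f} eV")
```

Note that the two sequential photon energies sum to the direct photon energy, as energy conservation requires, yet each of the three photons has a distinct frequency, which is exactly the point made in the text.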
Although Bohr's model was extended and refined, it could not explain
observations for atoms with more than one electron. It could not even account
for the intensity of the spectral colors of the simple hydrogen atom. Because
it had no more than a limited ability to predict experimental results,
it remained unsatisfactory for theoretical physicists.
The understanding of atomic structure was also facilitated by Becquerel's
discovery in 1896 of radioactivity in uranium ore (see Uranium). Within a
few years radioactive radiation was found to consist of three types of
emissions: alpha rays, later found by Rutherford to be the nuclei of helium
atoms; beta rays, shown by Becquerel to be very fast electrons; and gamma
rays, identified later as very short wavelength electromagnetic radiation.
In 1898 the French physicists Marie and Pierre Curie separated two highly
radioactive elements, radium and polonium, from uranium ore, thus showing
that radiations could be identified with particular elements. By 1903 Rutherford
and the British physical chemist Frederick Soddy had shown that the emission
of alpha or beta rays resulted in the transmutation of the emitting element
into a different one. Radioactive processes were shortly thereafter found
to be completely statistical; no method exists that could indicate which
atom in a radioactive material will decay at any one time. These developments,
in addition to leading to Rutherford's and Bohr's model of the atom, also
suggested that alpha, beta, and gamma rays could only come from the nuclei
of very heavy atoms. In 1919 Rutherford bombarded nitrogen with alpha particles
and converted it into oxygen and hydrogen, thus producing the first artificial
transmutation of elements.
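The completely statistical character of radioactive decay noted above can be illustrated with a small simulation: each atom decays independently with a fixed probability per time step, so no one can say which atom decays next, yet the total falls off in the familiar exponential way. The decay probability used here is an arbitrary illustrative choice:

```python
# Monte Carlo sketch of statistical radioactive decay: individual decays
# are unpredictable, but the aggregate follows N0 * (1 - p)^t.
import random

random.seed(1)
atoms = 100_000
p_decay = 0.05   # assumed chance each atom decays in one time step

survivors = [atoms]
for step in range(20):
    remaining = sum(1 for _ in range(survivors[-1]) if random.random() > p_decay)
    survivors.append(remaining)

expected = atoms * (1 - p_decay) ** 20   # exponential decay law
print(survivors[-1], round(expected))    # simulated vs. predicted survivors
```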
Meanwhile, a knowledge of the nature and abundance of isotopes was
growing, largely through the development of the mass spectrograph. A model
emerged in which the nucleus contained all the positive charge and almost
all the mass of the atom. The nuclear-charge carriers were identified as
protons, but except for hydrogen, the nuclear mass could be accounted for
only if some additional uncharged particles were present. In 1932 the British
physicist Sir James Chadwick discovered the neutron, an electrically neutral
particle of mass 1.675 × 10⁻²⁷ kg, slightly greater than that of the
proton. Now nuclei could be understood as consisting of protons and neutrons,
collectively called nucleons, and the atomic number of the element was
simply the number of protons in the nucleus. On the other hand, the isotope
number, also called the atomic mass number, was the sum of the neutrons
and protons present. Thus, all atoms of oxygen (atomic no. 8) have eight
protons, but the three isotopes of oxygen, O-16, O-17, and O-18, also contain
within their respective nuclei eight, nine, or ten neutrons.
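The nucleon bookkeeping in this paragraph reduces to simple arithmetic: the atomic number Z counts protons, the mass number A counts protons plus neutrons, so the neutron count is A − Z:

```python
# Neutron count from mass number A and atomic number Z: N = A - Z.
def neutron_count(mass_number, atomic_number):
    return mass_number - atomic_number

OXYGEN_Z = 8  # atomic number of oxygen
for a in (16, 17, 18):
    print(f"O-{a}: {OXYGEN_Z} protons, {neutron_count(a, OXYGEN_Z)} neutrons")
```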
Positive electric charges repel each other, and because atomic nuclei
(except for hydrogen) have more than one proton, they would fly apart except
for a strong attractive force, called the nuclear force, or strong interaction
that binds the nucleons to each other. The energy associated with this
strong force is very great, millions of times greater than the energies
characteristic of electrons in their orbits or chemical binding energies.
An escaping alpha particle (consisting of two protons and two neutrons),
therefore, will have to overcome this strong interaction force to escape
from a radioactive nucleus such as uranium. This apparent paradox was explained
by the physicists Edward U. Condon, George Gamow, and Ronald Wilfred
Gurney, who applied quantum mechanics to the problem of alpha emission
in 1928 and showed that the statistical nature of nuclear processes allowed
alpha particles to "leak" out of radioactive nuclei, even though their
average energy was insufficient to overcome the nuclear force. Beta decay
was explained as a result of a neutron disruption within the nucleus, the
neutron changing into an electron (the beta particle), which is promptly
ejected, and a residual proton. The proton left behind leaves the "daughter"
nucleus with one more proton than its "parent" and thus increases the atomic
number and the position in the periodic table. Alpha or beta emission usually
leaves the nucleus with excess energy, which it unloads by emitting a gamma-ray
photon.
In all these nuclear processes a large amount of energy, given by Einstein's
equation E = mc², is released. After the process is over, the total mass
of the products is less than that of the parent, with the mass difference
appearing as energy (see Nuclear Energy).
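A short sketch shows how a nuclear mass difference translates into energy through E = mc². The mass defect used below is an arbitrary illustrative value, not data for any specific nucleus:

```python
# Converting a parent-minus-products mass difference to energy via E = mc^2.
C = 2.998e8       # speed of light, m/s
EV = 1.602e-19    # joules per electron volt

delta_m = 5.0e-30                 # assumed mass defect, kg (illustrative)
energy_j = delta_m * C**2         # released energy, joules
energy_mev = energy_j / EV / 1e6  # the same energy in MeV

print(f"{energy_j:.2e} J ≈ {energy_mev:.1f} MeV")
```

Even this tiny mass difference, far less than one percent of a proton's mass, yields an energy in the MeV range, millions of times the energy scale of chemical bonds.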
Developments in Physics Since 1930
The rapid expansion of physics in the last few decades was made possible
by the fundamental developments during the first third of the century,
coupled with recent technological advances, particularly in computer technology,
electronics, nuclear-energy applications, and high-energy particle accelerators.
Rutherford and other early investigators of nuclear properties were
limited to the use of high-energy emissions from naturally radioactive
substances to probe the atom. The first artificial high-energy emissions
were produced in 1932 by the British physicist Sir John Douglas Cockcroft
and the Irish physicist Ernest Thomas Sinton Walton, who used high-voltage
generators to accelerate protons to about 700,000 eV and to bombard lithium
with them, transmuting it into helium. One electron volt is the energy
gained by an electron when it is accelerated through a potential difference
of 1 V; it is equivalent to about 1.6 × 10⁻¹⁹ joule (J). Modern accelerators produce energies
measured in million electron volts (usually written mega-electron volts,
or MeV), billion electron volts (giga-electron volts, or GeV), or trillion
electron volts (tera-electron volts, or TeV). Higher-voltage sources were
first made possible by the invention, also in 1932, of the Van de Graaff
generator by the American physicist Robert J. Van de Graaff.
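The electron-volt conversions above can be tabulated directly from the definition 1 eV ≈ 1.6 × 10⁻¹⁹ J and the metric prefixes:

```python
# Electron-volt bookkeeping: 1 eV ≈ 1.6e-19 J; MeV, GeV, TeV scale it
# by 1e6, 1e9, and 1e12 respectively.
EV = 1.602e-19  # joules per electron volt

cockcroft_walton = 700_000 * EV   # 700,000 eV protons (1932), in joules
slac_beam = 20e9 * EV             # 20 GeV SLAC electrons, in joules

print(f"700 keV = {cockcroft_walton:.3e} J")
print(f"20 GeV  = {slac_beam:.3e} J")
```

The joule values look minuscule, but they are carried by single particles; concentrated in one proton or electron, such energies are enormous on the atomic scale.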
This was followed almost immediately by the invention of the cyclotron
by the American physicists Ernest Orlando Lawrence and Milton Stanley Livingston.
The cyclotron uses a magnetic field to bend the trajectories of charged
particles into circles, and during each half-revolution the particles are
given a small electric "kick" until they accumulate the high energy level
desired. Protons could be accelerated to about 10 MeV by a cyclotron, but
higher energies had to await the development of the synchrotron after the
end of World War II (1939-1945), based on the ideas of the American physicist
Edwin Mattison McMillan and the Soviet physicist Vladimir I. Veksler. After
World War II, accelerator design made rapid progress, and accelerators
of many types were built, producing high-energy beams of electrons, protons,
deuterons, heavier ions, and X rays. For example, the accelerator at the
Stanford Linear Accelerator Center (SLAC) in Stanford, California, accelerates
electrons down a straight "runway," 3.2 km (2 mi) long, at the end of which
they attain an energy of more than 20 GeV.
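The cyclotron's timing trick works because, at non-relativistic speeds, a charged particle in a magnetic field B circles at a revolution frequency f = qB/(2πm) that does not depend on its energy, so a fixed-frequency electric "kick" stays in step as the particle spirals outward. A sketch with an assumed 1-tesla field:

```python
# Cyclotron revolution frequency of a proton: f = qB / (2*pi*m).
import math

Q_PROTON = 1.602e-19   # proton charge, C
M_PROTON = 1.673e-27   # proton mass, kg
B_FIELD = 1.0          # magnetic field, tesla (assumed for illustration)

f = Q_PROTON * B_FIELD / (2 * math.pi * M_PROTON)
print(f"cyclotron frequency ≈ {f / 1e6:.1f} MHz")
```

At the ~10 MeV limit mentioned above, relativistic mass increase throws the particle out of step with the fixed kick frequency, which is why higher energies required the synchrotron's variable timing.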
While lower-energy accelerators are used in various applications in
industry and laboratories, the most powerful ones are used in studying
the structure of elementary particles, the fundamental building blocks
of nature. In such studies elementary particles are broken up by hitting
them with beams of projectiles that are usually protons or electrons. The
distribution of the fragments yields information on the structure of the
elementary particles.
To obtain more detailed information in this manner, the use of more
energetic projectiles is necessary. Since the acceleration of a projectile
is achieved by "pushing" it from behind, to obtain more energetic projectiles
it is necessary to keep pushing for a longer time. Thus, high-energy accelerators
are generally larger in size. The highest beam energy reached at the end
of World War II was less than 100 MeV. A bigger accelerator, reaching 3
GeV, was built in the early 1950s at the Brookhaven National Laboratory
at Upton, New York. A breakthrough in accelerator design occurred with
the introduction of the strong focusing principle in 1952 by the American
physicists Ernest D. Courant, Livingston, and Hartland S. Snyder. Today
the world's largest accelerators have been or are being built to produce
beams of protons beyond 1 TeV. Two are located at the Fermi National Accelerator
Laboratory, near Batavia, Illinois, and at the European Laboratory for
Particle Physics, known as CERN, in Geneva, Switzerland (see Particle Accelerators).
About 1912 the Austrian-American physicist Victor Franz Hess discovered
cosmic radiation, consisting of rays originating outside the earth's
atmosphere and arriving in a pattern shaped by the earth's magnetic field
(see Cosmic Rays). The rays were found to be positively charged and to consist
mostly of protons with energies ranging from about 1 GeV to 10¹¹ GeV (compared
to about 30 GeV for the fastest particles produced by artificial accelerators).
Cosmic rays trapped into orbits around the earth account for the Van Allen
radiation belts, discovered during an artificial-satellite flight in 1958
(see Radiation Belts).
When a very energetic primary proton smashes into the atmosphere and
collides with the nitrogen and oxygen nuclei present, it produces large
numbers of different secondary particles that spread toward the earth as
a cosmic-ray shower. The origin of the cosmic-ray protons is not yet fully
understood; some undoubtedly come from the sun and the other stars. Except
for the slowest rays, however, no mechanism has been found to account for
their high energies; the likelihood is that weak galactic magnetic fields
operate over very long periods to accelerate interstellar protons (see
Galaxy; Milky Way).
In 1935 the Japanese physicist Yukawa Hideki developed a theory explaining
how a nucleus is held together, despite the mutual repulsion of its protons,
by postulating the existence of a particle intermediate in mass between
the electron and the proton. In 1936 Anderson and his coworkers discovered
a new particle of 207 electron masses in secondary cosmic radiation; now
called the mu-meson or muon, it was first thought to be Yukawa's nuclear
"glue." Subsequent experiments by the British physicist Cecil Frank Powell
and others led to the discovery of a somewhat heavier particle of 270 electron
masses, the pi-meson or pion (also obtained from secondary cosmic radiation),
which was eventually identified as the missing link in Yukawa's theory.
Many additional particles have since been found in secondary cosmic
radiation and through the use of large accelerators. They include numerous
massive particles, classed as hadrons (particles that take part in the
"strong" interaction, which binds atomic nuclei together), including hyperons
and various heavy mesons with masses ranging from about one to three proton
masses; and intermediate vector bosons such as the W and Z⁰ particles,
the carriers of the "weak" nuclear force. They may be electrically neutral,
positive, or negative, but never have more than one elementary electric
charge e. With lifetimes of about 10⁻⁸ to 10⁻¹⁴ second, they decay into a variety of
lighter particles. Each particle has its antiparticle and carries some
angular momentum. They all obey certain conservation laws involving quantum
numbers, such as baryon number, strangeness, and isotopic spin.
In 1931 Pauli, in order to explain the apparent failure of some conservation
laws in certain radioactive processes, postulated the existence of electrically
neutral particles of zero rest mass that nevertheless could carry energy
and momentum. This idea was further developed by the Italian-born American
physicist Enrico Fermi, who named the missing particle the neutrino. Uncharged
and tiny, it is elusive, easily able to penetrate the entire earth with
only a small likelihood of capture. Nevertheless, it was eventually detected
in 1956 in a difficult experiment performed by the American physicists
Frederick Reines and Clyde Lorrain Cowan, Jr. Understanding of the internal structure of protons
and neutrons has also been derived from the experiments of the American
physicist Robert Hofstadter, using fast electrons from linear accelerators.
In the late 1940s a number of experiments with cosmic rays revealed
new types of particles, the existence of which had not been anticipated.
They were called strange particles, and their properties were studied intensively
in the 1950s. Then, in the 1960s, many new particles were found in experiments
with the large accelerators. The electron, proton, neutron, photon, and
all the particles discovered since 1932 are collectively called elementary
particles. But the term is actually a misnomer, for most of the particles,
such as the proton, have been found to have very complicated internal structure.
Elementary particle physics is concerned with (1) the internal structure
of these building blocks and (2) how they interact with one another to
form nuclei. The physical principles that explain how atoms and molecules
are built from nuclei and electrons are already known. At present, vigorous
research is being conducted on both fronts in order to learn the physical
principles upon which all matter is built.
One popular theory about the internal structure of elementary particles
is that they are made of so-called quarks (see Quark),
which are subparticles of fractional charge; a proton, for example, is
made up of three quarks. This theory was first proposed in 1964 by the
American physicists Murray Gell-Mann and George Zweig. Despite the theory's
ability to explain a number of phenomena, no free quarks have yet been observed,
and current theory suggests that quarks may never be released as separate
entities except under such extreme conditions as those found during the
very creation of the universe. The theory postulated three kinds of quarks,
but later experiments, especially the discovery of the J/psi particle in
1974 by the American physicists Samuel C. C. Ting and Burton Richter, called
for the introduction of three additional kinds.
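The fractional charges can be checked with exact arithmetic. The text does not name the quark flavors, so the standard up/down assignment (charges +2/3 and −1/3 in units of e) is assumed here:

```python
# Summing fractional quark charges: a proton (uud) carries charge +1,
# a neutron (udd) carries charge 0, both in units of e.
from fractions import Fraction

UP = Fraction(2, 3)     # up-quark charge, units of e (assumed assignment)
DOWN = Fraction(-1, 3)  # down-quark charge, units of e

proton = UP + UP + DOWN
neutron = UP + DOWN + DOWN
print(proton, neutron)  # 1 0
```

Using exact fractions rather than floats makes the point cleanly: three fractional charges always combine into a whole multiple of e, consistent with the observation that no particle carries more than one elementary charge.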
In 1931 the American physicist Harold Clayton Urey discovered the hydrogen
isotope deuterium and made heavy water from it. The deuterium nucleus,
or deuteron (one proton plus one neutron), makes an excellent bombarding
particle for inducing nuclear reactions. The French physicists Irène
and Frédéric Joliot-Curie produced the first artificially
radioactive nucleus in 1933 and 1934, leading to the production of radioisotopes
for use in archaeology, biology, medicine, chemistry, and other sciences.
Fermi and many collaborators attempted a series of experiments to produce
elements beyond uranium by bombarding uranium with neutrons. They succeeded,
and now at least a dozen such transuranium elements have been made. As
their work continued, an even more important discovery was made. Irène
Joliot-Curie, the German physicists Otto Hahn and Fritz Strassmann, the
Austrian physicist Lise Meitner, and the British physicist Otto Robert
Frisch found that some uranium nuclei broke into two parts, a phenomenon
called nuclear fission. At the same time, a huge amount of energy was released
by mass conversion, as well as some neutrons. These results suggested the
possibility of a self-sustained chain reaction, and this was achieved by
Fermi and his group in 1942, when the first nuclear reactor went into operation.
Technological developments followed rapidly; the first atomic bomb was
produced in 1945 as a result of a massive program under the direction of
the American physicist J. Robert Oppenheimer, and the first nuclear power
reactor for the production of electricity went into operation in England
in 1956, yielding 78 million watts (see Nuclear Weapons).
Further developments were based on the investigation of the energy
source of the stars, which the German-American physicist Hans Albrecht
Bethe showed to be a series of nuclear reactions occurring at temperatures
of millions of degrees. In these reactions, four hydrogen nuclei are converted
into a helium nucleus, with two positrons and massive amounts of energy
forming the by-products. This nuclear-fusion process was adopted in modified
form, largely based on ideas developed by the Hungarian-American physicist
Edward Teller, as the basis of the fusion or hydrogen bomb. First detonated
in 1952, it is a weapon much more powerful than the fission bomb, a small
fission bomb providing the necessary high triggering temperature.
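The energy yield of the fusion process Bethe identified follows from the mass defect of the reaction, four hydrogen nuclei combining into one helium nucleus. The atomic masses below are standard-table values assumed for the illustration, not figures from the text:

```python
# Energy released in 4 H -> He fusion from the mass defect and E = mc^2.
U_TO_MEV = 931.5        # energy equivalent of 1 atomic mass unit, MeV (assumed)

m_hydrogen = 1.007825   # mass of H-1, atomic mass units (assumed table value)
m_helium = 4.002602     # mass of He-4, atomic mass units (assumed table value)

defect = 4 * m_hydrogen - m_helium
print(f"mass defect = {defect:.5f} u -> {defect * U_TO_MEV:.1f} MeV released")
```

Roughly 0.7 percent of the reacting mass is converted to energy, which is why fusion releases millions of times more energy per kilogram than any chemical fuel.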
Much current research is devoted to producing a controlled, rather
than an explosive, fusion device, which would be less radioactive than
a fission reactor and would provide an almost limitless source of energy.
In December 1993 significant progress was made toward this goal when researchers
at Princeton University used the Tokamak Fusion Test Reactor to produce
a controlled fusion reaction that output 5.6 million watts of power. However,
the tokamak consumed more power than it produced during its operation.
At very low temperatures (near absolute zero), many materials exhibit
strikingly different characteristics (see Cryogenics). At the beginning of
the 20th century the Dutch physicist Heike Kamerlingh Onnes developed techniques
for producing these low temperatures and discovered the superconductivity
of mercury: it loses all electrical resistance at about 4 K. Many other
elements, alloys, and compounds do the same below their own characteristic
transition temperatures near absolute zero, and superconductors also expel
magnetic fields, a behavior known as the Meissner effect.
The theory of superconductivity, developed largely by the American physicists
John Bardeen, Leon N. Cooper, and John Robert Schrieffer, is extremely
complicated, involving the pairing of electrons in the crystal lattice.
Another fascinating discovery was that helium does not freeze but changes
at about 2 K from an ordinary liquid, He I, to the superfluid He II, which
has no viscosity and a thermal conductivity about 1,000 times greater
than that of silver. Films of He II creep up the walls of their containing
vessels, and He II readily permeates some materials, such as platinum. No
fully satisfactory theory is yet available for this behavior.
An important recent development is that of the laser, an acronym for
light amplification by stimulated emission of radiation. In lasers, which
may have gases, liquids, or solids as the working substance, a large number
of atoms are raised to a high energy level and caused to release this energy
simultaneously, producing coherent light where all waves are in phase.
Similar techniques are used for producing microwave emissions by the use
of masers. The coherence of the light allows for very high intensity, sharp
wavelength light beams that remain narrow over tremendous distances; they
are far more intense than light from any other source. Continuous lasers
can deliver hundreds of watts of power, and pulsed lasers can produce millions
of watts of power for very short periods. Developed during the 1950s and
1960s, largely by the American engineer and inventor Gordon Gould and the
American physicists Charles Hard Townes, T. H. Maiman, Arthur Leonard Schawlow,
and Ali Javan, the laser today has become an extremely powerful tool in
research and technology, with applications in communications, medicine,
navigation, metallurgy, fusion, and material cutting.
Astrophysics
The construction of large and specially designed optical telescopes
has led to the discovery of new stellar objects, including quasars, which
are billions of light-years away, and has led to a better understanding
of the structure of the universe. Radio astronomy has yielded other important
discoveries, such as pulsars and the cosmic background radiation,
which probably dates from the origin of the universe. The evolutionary
history of the stars is now well understood in terms of nuclear reactions.
As a result of recent observations and theoretical calculations, the belief
is now widely held that all matter was originally in one dense location
and that between 10 and 20 billion years ago it exploded in one titanic
event often called the big bang. The aftereffects of the explosion have
led to a universe that appears to be still expanding. A puzzling aspect
of this universe, recently revealed, is that the galaxies are not uniformly
distributed. Instead, vast voids are bordered by galactic clusters shaped
like filaments. The pattern of these voids and filaments lends itself to
nonlinear mathematical analysis of the sort used in chaos theory (see also
Inflationary Theory).