Physics Department
Ohio University
Contents: Introduction, Original Definition, Statistical Definition, Light, Earth-Sun System, Chemical Reactions, Biosphere, Time Asymmetry
Entropy is a term often used in the physical sciences and engineering, but it is seldom encountered in discussions in other fields. Jeremy Rifkin, in Entropy: Into the Greenhouse World, sets forth the thesis that this concept of entropy has a much broader applicability, in such fields as social science, politics, health, etc. One of the goals of our course is to explore whether this broadening of application makes sense.
One of our tasks, the one addressed by these notes, is to try to get some understanding of what entropy means when the term is used in its home territory, physical science and engineering. A problem immediately arises: as with most concepts used in these areas, the basic definition of entropy is mathematical. We are interested in the philosophical aspects of entropy rather than the practical ones, so we will avoid mathematics as much as possible; but some reference to mathematics is unavoidable - that is simply the way the world works.
First we will explore the practical problems of engineering which gave rise to the idea of entropy in the first place. The energy crises since the early 1970s have brought to our attention the fact that much of modern society depends on energy that is stored in various forms, such as petroleum, coal, natural gas, uranium, etc. When we use this stored energy, we must convert it into some other form. In the process some of the stored energy is "lost" (that is, it is converted to a non-useful form). Economy says that we should try to minimize this loss. Conversion of energy (in, e.g., steam engines) ushered in the Industrial Revolution, and the early pioneers in this field invented the idea of entropy as a bookkeeping aid when they studied how to avoid losses during energy conversion.
Then we will seek a more profound "explanation" of entropy by looking at it from a microscopic point of view. The words "microscopic" and "macroscopic" used in this context refer to the atomic scale and the directly observable scale, respectively. A tiny speck of dust that requires a microscope in order to see it is still considered to be a macroscopic object - it contains an enormous number of atoms (seeing things through an ordinary optical microscope is still considered to be a direct observation for our definition here).
We will discover that a microscopic definition of entropy leads to a statistical analysis. A law in the macroscopic domain (something certain to happen) turns out to just be something highly probable when viewed microscopically. But we will see that the huge number of basic entities (atoms) in any macroscopic system makes it possible to consider statistical statements to be essentially certain. And we get a bonus - we can extend the idea of entropy to situations quite different from those studied by the early engineers.
A thorough study of entropy will require us to study systems that contain not only matter but also electromagnetic waves (such as sunlight). When such waves are studied microscopically, an application of the most fundamental ideas of quantum physics is necessary. So we will digress to a quite brief discussion of that discipline. Then we will be able to explain how practically all of the "production" of entropy on the Earth depends on absorption of the waves given off by the Sun and subsequent sending off of waves from the Earth into space.
Then we will argue that living organisms in particular depend on this sun-driven entropy production mechanism. Living things are very different from the machines studied by the engineers. A complete physical description of life processes is not now available, and may not ever be; but we do have a general view of how living systems operate from a physical point of view. In particular we will see how consideration of entropy tells us that a living organism can never be isolated from its environment and survive.
Finally we will get quite philosophical and examine how changes in entropy relate to the "arrow of time." The basic laws of physics are symmetric with regard to the passage of time. But the behavior of the observable world is asymmetric with time. Except for the most contrived situations, if we view a movie being played backwards, it is obvious that something is wrong. In a final burst of speculation, we will link the local, present-time evolution of phenomena to the global, historical evolution of the entire observable universe.
The Industrial Revolution can safely be said to have started when energy from burning coal, rather than from windmills, animal muscle, and the like, was first used to move things. People had been using flame for many centuries to keep themselves warm, to cook food, and to provide light. The great breakthrough came when machines were developed that could take energy from flames and produce motion, such as pumping water, turning spinning machinery, etc.
The development of these machines, namely steam engines, was accompanied by development of new disciplines in physical science and engineering which sought to understand better the processes involved in taking energy from the flame and producing useful motion. We will now outline the ideas which emerged from these developments in order to get an idea of what entropy was originally understood to be.
First we need to define some terms. A system is a carefully defined collection of objects, so that it remains "perfectly clear" what is in the system and what is not. Let us take, as an initial example of a system, a pail that contains some water and a stirring stick. That seems simple enough. But we know that, if the air is dry, water will evaporate from the surface, so that water molecules initially included in the system as part of the water in the pail become diffused through the atmosphere. Do we still include those molecules as part of the system? Maybe. Maybe not. The point is that we must define the system carefully enough in the first place so that all such questions have a definite answer. For our immediate purposes, we will pretend that evaporation does not occur so that the same water molecules are in the pail at all times.
Everything that is not in the system we call its environment. Of course the close-by part of the environment will have the most influence on the system, but we must keep in the back of our minds that far away parts may also have significant effects.
There are certain physical quantities that can be readily measured in any system. For example, we could put the pail of water on a scale and find out how much it weighs. And we could put a thermometer into the water and find its temperature.
Temperature was one of the quantities investigated by the scientists of Europe in a flurry of activity preceding the onset of the Industrial Revolution. They established temperature scales that are still used (although greatly extended in both directions). One thing the pioneers did not know was where to put the zero of their scale in a natural way. Each of the two scientists whose temperature scales are still in common use first selected 0 and 100 degree temperatures based on physical observations; their scales were later defined in terms of reproducible experimental conditions - the freezing of water to ice and the boiling of water to steam, at standard atmospheric pressure. Fahrenheit chose as his 0 and 100 degree temperatures the coldest and hottest days in his part of Germany during the years he was making his initial observations. This resulted in water's freezing point being at 32 degrees F and water's boiling point being at 212 degrees F. Celsius simply chose the freezing and boiling points of water as his 0 and 100 degree temperatures; his scale was later adopted as part of the so-called "metric system" of measurements. Neither method is intrinsically more scientific; in fact, when temperatures are reported to the nearest degree, the Fahrenheit scale's smaller degree (180 Fahrenheit degrees span the same temperature range as 100 Celsius degrees) means better precision.
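For readers who like to see things computed, here is a small sketch in Python (an addition for illustration, not part of the original discussion). The conversion follows directly from the fixed points just quoted, and the value of -273.15 degrees C for absolute zero anticipates the next paragraph.

    def fahrenheit_to_celsius(f):
        # 180 Fahrenheit degrees span the same range as 100 Celsius degrees
        return (f - 32.0) * 100.0 / 180.0

    def celsius_to_fahrenheit(c):
        return c * 180.0 / 100.0 + 32.0

    print(fahrenheit_to_celsius(32))       # 0.0   (water freezes)
    print(fahrenheit_to_celsius(212))      # 100.0 (water boils)
    print(celsius_to_fahrenheit(-273.15))  # -459.67, absolute zero on the Fahrenheit scale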
The size of a degree, like that of any other unit of measurement, is an arbitrary choice. The same turns out not to be true of the choice of zero degrees. Although everyday experience does not provide a basis for choosing the temperature to be called "zero degrees," there are several different physical situations that all lead to the same choice, which is now known as "absolute zero." One of these is the relationship between pressure, volume, and temperature for any confined sample of a gas, and we shall argue presently that the discovery of entropy also leads to a natural way to define zero temperature, because of the association between temperature and the random motions of the atoms within the material. Those two approaches yield the same zero temperature, reinforcing the choice of "absolute" to describe it.
Let us introduce another system, which is just a flame from the burning of some fuel. If we measured the temperature of the flame, we would most likely find that it has a much higher temperature than the water in our first system has. Now let us place the pail of water over the flame. What will happen? The temperature of the water will increase, of course. The early investigators said that something must be flowing from the flame to the water to cause this rise in temperature of the water. They called that something heat. In everyday language, we more often use this word as a verb - we would say that the flame is "heating up" the water. In our technical language we want to be careful to keep "heat" a noun. When heat is added to a system, the temperature of the system usually rises.
Why be so careful? For one thing, adding heat is not the only way to increase the temperature of something. Suppose that, instead of putting the pail of water on a flame, we grab the stirring stick (ah, that is why it was there!) and stir the water vigorously. Careful measurement would reveal that the temperature of the water also increases in this instance. But no heat is being added. The word "heat" is reserved for whatever flows when there is a temperature difference. The temperature of the stirring stick is presumably the same as that of the water it is immersed in. What is important is not the temperature of the stirring stick, but the fact that the stirring agent moves the stirring stick and that the water resists the motion. So we need a new word. We say that the stirring agent is doing work. In summary, the temperature of a system can be increased either by adding heat (temperature difference) or doing work (resisted motion).
On the other hand, it is possible to add heat to a system without raising its temperature (without "heating it up"). For example, if the system is a pail of water within which a piece of ice is floating and melting, then putting the pail over a flame will not increase the temperature until all of the ice is melted. Figure 1 illustrates this in a general way by showing a graph of the temperature of the sample over a period of time. In this case the sample was heated steadily, starting at zero time. For a while its temperature rose steadily, but then it stopped rising and remained constant for a period of time, until finally it started to rise steadily again. The period of time when the graph is flat represents the period of time when the sample was changing from one physical state to another while absorbing heat but not changing temperature. This is called a "phase change" to distinguish it from a chemical reaction, about which we will say more later.
Figure 1: Heating a sample through a phase change.
Let us digress and introduce a particular kind of a system, which we will call a mechanical system. An example of such a system is the solar system, which consists basically of the sun, the planets, and their satellites. We consider only the gross motions of these bodies. Such a system is very predictable in its behavior. Astronomers can calculate, for example, when solar eclipses would have occurred in ancient times and when they will occur at comparable times in the future. One cannot associate a temperature with the solar system - each body may have a characteristic temperature, but that value has nothing to do with the gross motion, which is not random.
But we can define a quantity, which we call the total mechanical energy, for a mechanical system. If we know the masses of all the bodies in the solar system, and if we know at any instant the locations and speeds of all the bodies, then we can plug those numbers into a formula and compute the total mechanical energy. One of the laws which Nature seems to respect is that, providing there is no interference from the environment, the total mechanical energy of a mechanical system does not change as time goes by. The numbers that go into the formula will keep changing, but the answer will always be the same. We say that "total mechanical energy is conserved".
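As a sketch of what such a formula looks like, the following Python fragment (an illustrative addition) adds up the kinetic energies of the bodies and their mutual gravitational potential energies. The two-body numbers at the end are standard round values for the Sun and the Earth, included only as an example.

    import math

    G = 6.674e-11  # Newton's gravitational constant, SI units

    def total_mechanical_energy(bodies):
        # bodies: list of (mass [kg], position (x, y, z) [m], velocity (vx, vy, vz) [m/s])
        kinetic = sum(0.5 * m * sum(c * c for c in v) for m, _, v in bodies)
        potential = 0.0
        for i in range(len(bodies)):
            for j in range(i + 1, len(bodies)):
                m1, r1, _ = bodies[i]
                m2, r2, _ = bodies[j]
                potential -= G * m1 * m2 / math.dist(r1, r2)
        return kinetic + potential

    sun = (1.99e30, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
    earth = (5.97e24, (1.496e11, 0.0, 0.0), (0.0, 2.98e4, 0.0))
    print(total_mechanical_energy([sun, earth]))  # this number stays (nearly) fixed as the orbit proceeds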
Can we extend this idea to systems of all kinds? Yes, we can, providing we extend the definition of energy. If the bookkeeping is done properly, conservation of energy is a general law of Nature. For a system like our pail of water, the energy it contains, defined in the general sense, is called its internal energy. If no energy is added to or removed from a system, its internal energy stays fixed. If energy is added, its internal energy increases by a like amount.
Consider a mechanical system that is a robot powered by a spring that can be wound up, like a wind-up toy. When the spring is wound up, it contains a certain amount of mechanical energy. Now put the stirring stick in the pail of water into the hand of the robot and set the robot going so that it stirs the water as its spring runs down. The mechanical energy of the spring decreases as the spring runs down. Conservation of energy demands that this energy show up somewhere else. Sure enough, the internal energy of the water in the pail increases by this amount (provided no energy has gone anywhere else). Work, which we mentioned above, is just the transferring of energy between systems by mechanical means. Since the temperature of the water is increasing as the internal energy is increasing, we might expect a connection between them. That connection will be explored later when the same ideas are examined from a microscopic point of view.
Adding heat to water using a flame raises its temperature. Doing work stirring the water also raises its temperature. If a person finds a pail of water which is warmer than its environment, he or she cannot tell by examining the water which process done in the past actually raised the temperature. Since the temperature is higher, the internal energy is higher, regardless of how it got that way. The conclusion - adding heat to a system increases its internal energy.
Now we have set things up enough so that we can state one important law discovered in the dawn of the Industrial Revolution: energy is always conserved. Work may add (or take away) energy; heat may add (or take away) energy; the net result of these transfers of energy is an increase (or decrease) of internal energy by exactly the amount of the net transfer of energy by both means. This principle is known as the first law of thermodynamics.
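A minimal sketch of this bookkeeping for the pail of water, with invented numbers:

    heat_in_from_flame = 5000.0     # joules added as heat
    work_in_from_stirring = 2000.0  # joules added as work
    heat_leaked_to_air = 500.0      # joules that escape back out as heat

    # First law: the internal energy changes by exactly the net transfer by both means.
    delta_internal_energy = heat_in_from_flame + work_in_from_stirring - heat_leaked_to_air
    print(delta_internal_energy)    # 6500.0 joules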
If energy is always conserved, why should there ever be such a thing as an energy crisis? Please be patient. That is why we are studying entropy.
Let us now pause to introduce yet another definition (good grief!), the distinction between reversible work and irreversible work. The wind-up robot introduced above stirs the water and increases its temperature and internal energy. But if we now connect the stirring stick to the winder of the robot, the stirring stick will not wind up the robot as the water cools off. On the other hand, if our robot expends its mechanical energy by winding up another robot, that other robot can turn around and use the mechanical energy it has gained to wind up the first robot. The work can be arranged to go in either direction. The first situation is irreversible; the second one is reversible.
In like manner, we can discuss reversible heat transfer and irreversible heat transfer. If a flame is used to add heat to water, the process is irreversible because of the considerable temperature difference between the flame and the water. Heat will only flow from the flame to the water - there is no way to get heat to flow from the water back to the flame. In order to have reversible heat flow, it is necessary to have two systems in contact which are only slightly different in temperature. Then a small increase in the temperature of the cooler system can cause it to be warmer than the other one, so that heat flows in the other direction. The catch in this setup is that the rate at which heat flows is proportional to the temperature difference, and truly reversible heat flow can take forever to happen!
Now (finally) we are ready to define entropy. We have already said that either heat or work can increase the internal energy of a system. Heat flow and mechanical work are two different methods for transferring energy into or out of a system. The system cannot be said to "contain" heat or to "contain" work, but it does "contain" internal energy. And it does not matter how that internal energy got there. Entropy is similar to internal energy in this respect - it is something a system "contains," and it does not matter how it got there. Entropy is defined in order to differentiate in some sense between processes that increase internal energy reversibly and those that increase it irreversibly. Unlike many physical quantities (such as kinetic energy or momentum), we do not define entropy here by telling you to measure this and measure that and put them together with this formula to calculate the entropy. Instead, we define entropy indirectly by telling you how to calculate the change in entropy caused by a process. Here goes...
Entropy is defined in such a way that if heat is added to a system reversibly, the increase in entropy of the system is equal to the amount of heat added divided by the absolute temperature of the system as the heat is added. Not fair! We have used an undefined concept, absolute temperature, in order to define something else, entropy. Have faith. The definitions of these two ideas are indeed intertwined. But other relations between them eventually enable us to have sharp definitions of each one. For now, let us remark that the chief distinction of absolute temperature is where its zero is. We said earlier that one thing early workers could not do was to define a natural zero for their scale. Now we have that chance. We know that it is a mathematical disaster to divide by zero. So in order to avoid this disaster in the definition of entropy, the zero of absolute temperature must be so low that no process can actually get a physical system to zero temperature. But we would want to be able, in principle, to get as close as we like to zero temperature, in order that any positive value of temperature can have an example in some system.
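Here is the definition in the form of a tiny calculation (a sketch added for illustration). The example uses the melting ice of Figure 1, where heat can be added reversibly because the surroundings need be only infinitesimally warmer than the ice; the latent heat of fusion, roughly 334,000 joules per kilogram, is a standard value assumed here rather than taken from these notes.

    def entropy_change(heat_added_joules, absolute_temperature_kelvin):
        # reversible heat transfer: change in entropy = heat added / absolute temperature
        return heat_added_joules / absolute_temperature_kelvin

    q_to_melt_one_kg_of_ice = 334000.0   # joules, an assumed standard value
    melting_point = 273.15               # kelvins
    print(entropy_change(q_to_melt_one_kg_of_ice, melting_point))  # about 1223 joules per kelvin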
Why is entropy, as defined here, at all important? Its original usefulness was to aid in the design of heat engines, machines which absorb heat and produce work. Making such a machine requires a certain amount of cleverness. As we saw above, it is easy to set up a system that absorbs work and produces heat - a pail of water with a stirring stick was our example. But it is not so easy to make things work the other way. However much we add heat to the water, the stirring stick will not start to move.
Figure 2: Gravity-assisted Heat Engine
Figure 2 shows a schematic plan of a toy heat engine that uses gravity to assist it in changing heat energy to mechanical energy. I call it a toy because it is not a practical design; nobody is going to build one to deliver energy in the real world. But it has the advantage of being relatively simple to explain. Let us set the scene. Suppose that in some location like Yellowstone Park there is a steep cliff having a boiling hot lake (heated by geysers, its temperature is T_H) at its foot and a freezing cold lake (from melting snow, its temperature is T_C) at its head. The heat engine is mainly a pipe that is covered by an insulating blanket (preventing heat from getting in or out) except for the regions at the lower right and the upper left where there is good thermal contact with the hot lake and the cold lake, respectively. The pipe is filled with a working gas, which we shall take to be air. The air at the lower right corner, in thermal contact with the hot lake, absorbs heat and becomes less dense than air in the rest of the pipe; so it rises up the right arm and is replaced by colder air coming right along the bottom leg (this is one place gravity plays its role). The air moves to the left across the top of the fan blades, forcing them to turn, doing work and giving up part of the energy provided by the heat flow from the lake at the lower right. The moving air, after pushing the fan blades to do work, comes in thermal contact with the cold lake at the upper left corner. It gives up heat to the cold lake and becomes more dense, falling down the left arm (this is another place where gravity plays its role). It returns through the bottom leg to go to the hot lake region and complete the cycle. This process continues, establishing a "steady-state" with heat continually flowing in from the hot lake, work continually being done on the fan, and heat continually flowing out to the cold lake. In the process some, but not all, of the heat removed from the hot lake is converted into work, transferring energy mechanically to whatever the fan's axle is driving.
What would happen if we did not remove the waste heat, Q_C? The circulating air would gradually warm up, as energy was added from the hot springs and only part of it was removed as mechanical work by the fan. Eventually, the circulating air would all be at temperature T_H, and no more heat would be added at the hot springs. Furthermore, there would be no density differences for gravity to use to drive the circulation. The flow would therefore stop, and no more work would be done. It is true that a small amount of work would be done initially, but after that, none. In order to permit sustained conversion of heat energy to useful work, it is essential that waste heat be extracted continually.
Let Q_H be the heat being absorbed by the air from the hot lake, Q_C be the heat being emitted into the cold lake, and W be the work done turning the fan. If these are the only energy flows, then the first law of thermodynamics, conservation of energy, tells us that Q_H = Q_C + W (the internal energy of each packet of air returns to its original value upon completing a cycle). Suppose that the fan is connected to a reversible electric generator, so that the work W is sent away as electrical work.
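A per-cycle bookkeeping sketch, with invented numbers:

    q_hot = 1000.0             # joules absorbed from the hot lake in one cycle
    work = 200.0               # joules delivered by the fan in that cycle
    q_cold = q_hot - work      # first law: 800 joules must be rejected to the cold lake

    efficiency = work / q_hot  # fraction of the extracted heat converted to work: 0.2
    print(q_cold, efficiency)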
Heat engines of simple design, such as this one, can also be made to perform as a refrigerator (or a heat pump). Suppose that electrical work is taken from the network to run the generator in reverse, as a motor, making the fan go the other way around (clockwise). If the fan goes fast enough, it will force air to go around its path in the "wrong" direction. The air being pushed down the right arm will have its temperature increased as it is compressed, so that heat will then be emitted into the hot lake, and the air being pulled up the left leg will have its temperature decreased as it is expanded, so that it will absorb heat from the cold lake. A device that takes heat from a colder object and gives heat to a hotter object is a refrigerator or a heat pump (depending on whether we pay attention primarily to the cold or to the hot object, respectively). Let Q_H', Q_C', and W' be the two heats and the work, respectively, in this new situation. Again, ideally, Q_H' = Q_C' + W'.
Further suppose that we run the fan backward vigorously enough so that Q_H = Q_H', exactly reversing the heat flow between the hot lake and the working gas. We would find that W' is probably larger than W. If W' = W, the heat engine is reversible. That is, every energy flow is exactly reversed. If we place two identical reversible heat engines side by side and connect the fans together with a gear so that the fans go in opposite directions, this contraption could hum on forever with no net supply of heat from the outside, since the heat that each one needs is exactly provided by the other one. If we found that W' was less than W, then we would have a good deal indeed. We could hook up our contraption with the gear and get out the difference, W - W', without having to put any heat in. Here the second law of thermodynamics steps in. It states that no heat engine can be more efficient than a reversible one; so the best possible result is W' = W.
How do we connect this discussion with entropy? Recall that the absolute temperatures of the two lakes are T_H and T_C. If the engine is running in the forward direction, the hot lake loses an amount of entropy Q_H/T_H and the cold lake gains an amount of entropy Q_C/T_C. It turns out that, if the engine is reversible, the most efficient possible design, these two entropy changes have the same magnitude, so that the operation of the engine has no net effect on the entropy of its surroundings. These ratios can be equal only if Q_C is non-zero and is less than Q_H. In other words, the very best possible heat engine cannot transform all of the extracted energy into useful work; some energy is always discarded into the cold-temperature reservoir.
An analysis first presented by Sadi Carnot demonstrates that the best possible efficiency, in which the smallest fraction of Q_H is discarded as waste heat, will be achieved with the largest possible difference in temperatures between the hot and cold reservoirs. That is why many nuclear power reactors are pressurized: the pressure permits water to remain liquid (and therefore effective at removing heat) at higher temperatures.
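The entropy balance of the previous paragraphs can be turned into numbers. The sketch below assumes the boiling-hot lake is near 373 K and the freezing-cold lake near 273 K, in the spirit of the story; those temperatures, and the amount of heat extracted, are illustrative assumptions.

    T_hot, T_cold = 373.15, 273.15           # kelvins
    q_hot = 1000.0                           # joules drawn from the hot lake

    # Reversible engine: Q_H / T_hot = Q_C / T_cold, so the waste heat cannot be less than
    q_cold_minimum = q_hot * T_cold / T_hot  # about 732 joules
    work_maximum = q_hot - q_cold_minimum    # about 268 joules
    best_efficiency = 1.0 - T_cold / T_hot   # about 0.27; it grows as the temperature gap widens
    print(q_cold_minimum, work_maximum, best_efficiency)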
Furthermore, no real heat engine is truly reversible. Heat escapes from the machinery in places that it is not supposed to, there is friction in the bearings, there is a mismatch in the temperature between the working fluid and the heat source, etc. These departures from ideal behavior lead to less useful work being done, and therefore a net increase of the entropy of the surroundings, caused by the operation of the engine.
Let's put it in a nutshell. Energy is conserved. But some energy "leaks" away into non-useful forms. Loss of useful energy is usually associated with increases in entropy. Improving efficiency in energy conversion goes hand in hand with avoiding entropy increases.
In order to state an important general principle, it is necessary to provide some definitions. We define an isolated system as a system which exchanges neither energy nor matter with its environment. Water inside a tightly sealed, rigid container covered all over with a thick layer of insulation would be an approximation of an isolated system. It is, of course, an idealization. A closed system can exchange energy with its environment, but it cannot exchange matter. If we leave off the thick layer of insulation in the above example, we have a closed system. An open system exchanges both energy and matter with its environment. A container of water which is open on top so that the water can evaporate as well as give off heat is an open system.
Now we are prepared to state the second law of thermodynamics:
Any process that happens within an isolated system causes the entropy of the system to either stay the same or to increase. The entropy of an isolated system can never become smaller.
Consider the example of two wind-up robots which we introduced earlier. Suppose they are inside an insulated box to make an isolated system. The robots take turns winding each other up. If all the energy of the two robots stays in the form of mechanical energy, there is no change in the entropy. But if there is the least bit of friction in the mechanisms of the robots (which we would certainly expect) then the mechanical energy originally stored in the springs of the robots will slowly but surely be converted to heat, the robots will come to a halt, and everything in the box will be a little bit warmer. If we put in the two toy heat engines connected by a gear so that one acts as a refrigerator, we get the same general result: the entropy stays the same if the heat engines are reversible and increases if they are not.
The second law can be a powerful analytic tool. Sometimes the results of very complicated processes can be deduced by simply insisting that whatever happens cannot decrease the entropy of an isolated system.
But there is a bit of a hitch. An isolated system is itself an idealization. It is impossible to utterly cut off all flow of heat in or out of any region. For practical problems this is usually not a big deal. But it can be important if one is trying to understand a seemingly paradoxical situation. An extreme attitude to fix this problem is to say that if you can't cut it off then go ahead and include it all. The ultimate example of an isolated system is the entire Universe. So we sometimes see this statement of the second law: "Every process either does not change or else it increases the entropy of the Universe". But the universe may well be infinite in extent, and gravity working on a large scale can be very subtle; so it is not certain that this sweeping extension is really valid.
Let us summarize this section. Energy is stored in various ways, but we categorize the flow of energy into two kinds, work and heat. Heat is closely connected to temperature and work is not. Heat flows from a hotter object to a colder one. It is easy to convert work into heat, but doing so is usually considered wasteful. One must be more clever to convert heat into work, and it is always necessary to send some heat back into the environment when doing so. Entropy is transferred along with heat, in the amount calculated as the heat transferred divided by the absolute temperature at which it is transferred. The second law states that the entropy of an isolated system (or, if you like, the Universe) always increases or stays the same during any process; it never decreases. One of the aims of a designer of efficient machinery is the minimization of this inevitable increase of entropy.
Let us now study a different definition of entropy, one that is more theoretical, which can be shown to be consistent with the entropy we have already defined in the situations dealt with so far. We will also see that this definition expands the concept into new situations. The different definition uses ideas from statistics.
Suppose you are interested in studying the birth rate in Lower Elbonia, and you go to the Health Ministry of that country and ask them for statistics on births. They may well take you to an enormous filing cabinet stuffed with a piece of paper full of data for every baby born since 1875, when record keeping started. That might be where you have to start your study, but eventually you would want to sort out of that mess a few meaningful numbers, such as the average numbers of babies born daily for each of the last ten years. Taking raw data and distilling out a few numbers, such as the average values, which characterize the raw data, is what is involved in the science of statistics.
The raw data that is involved in our different definition of entropy is more nebulous but potentially much more voluminous than that Lower Elbonian filing cabinet. The numbers are not written down anywhere, and the averages are determined by measurement processes. To describe these data, we need to set down some more definitions (good grief!). First we must realize that seemingly continuous matter is really made up of minute parts, which we call atoms. Often several atoms are chemically linked to form molecules. A closed system or an isolated system (from which matter can neither enter nor leave) is then some definite set of molecules. We define the microscopic state of such a system to be the complete specification of the species, the position, and the motion of every molecule in the system (by species, we mean the particular atoms in the molecule and the particular configuration of the chemical linking, so that every molecule in a species is identical with every other one). Since the number of molecules in even a speck of dust is truly enormous, we see that the raw data of a microscopic state cannot possibly be written down. In fact it cannot even be determined by any measurement process. It is just a theoretical construct; something that we can describe quite fully but that we can never have a concrete realization of.
The macroscopic state of the same system is defined to be the results of the measurement of a few quantities, like pressure, volume, and temperature, which can be measured directly. Some other quantities, such as internal energy and entropy, cannot be directly measured, but their values can be inferred from the values of the directly measurable quantities. By applying the laws of physics to the microscopic variables, we can interpret the few quantities we can measure as being statistical averages over the microscopic state.
Perhaps we should state a technical point here. The macroscopic quantities will be sharply defined only if the microscopic behavior is quite homogeneous over the sample. If the molecules in one corner are acting considerably differently from those in another corner, any average we take will wash out these differences and consequently may lose a lot of the physical meaning. So, if necessary, we define an intermediate scale. We divide the system into a large number of small subsystems, each of which still has a huge number of molecules in it so that statistical treatment is useful, but which are each small enough so that each one ought to be homogeneous. For example, the air in a room is likely to be cooler near the floor and warmer near the ceiling. It is to some extent an article of faith that such a breakdown is always possible. Then we would expect the temperature, for example, to vary smoothly over the system as it is determined in each of these subsystems. And we will require that the entropy that we define by statistical means must be an additive quantity, so that the total entropy of the system is just the sum of the values determined for each of the many subsystems.
Now we will try to understand the statistical treatment of the molecules with models involving more familiar objects. Take coins for example. If we throw a lot of coins on the floor, we expect half of them to show heads. But of course it does not follow that exactly half will show heads each time. There are fluctuations from the expected behavior. What we expect is that if we throw the coins many, many times, and average the number of heads, then the average number of heads will be one half the number of coins. If we really get a different result, we say that the coins are not fair. Physically, a fair coin should have identical construction of the head and tail sides so that there is no physical reason why one would be more likely to come up than the other one. We can make statistical inferences about the behavior of molecules because we believe that they are very fair in this same sense. All molecules of a given species are exactly alike. All points in space where the molecules might be are exactly equivalent. All directions that a molecule's velocity might point are equally likely. Et cetera.
Back to the coins. We do a trivial example to illustrate how a microscopic state might be defined. Throw two coins. There are just four ways that the two coins can end up: HH, HT, TH, and TT, where of course H means a coin showing head and T means a coin showing tail and the first letter refers to the first coin and the second letter refers to the second coin. These are the microscopic states. To define the macroscopic state we simply say that we do not care which coin is which, so that the specification of the macroscopic state is just the total number of heads showing. So there are three macroscopic states - zero heads, one head, and two heads. In this simple case, there is not very much of a decrease in the number of states between the microscopic and macroscopic definitions. But consider three coins. The microscopic states are: HHH, HHT, HTH, THH, HTT, THT, TTH, and TTT, and the macroscopic states are: zero heads, one head, two heads, and three heads. Eight states are reduced to four. If we throw four coins there are sixteen microscopic states and five macroscopic states. In general the number of microscopic states is the number two (because there are two sides to each coin) raised to the power of the number of coins, while the number of macroscopic states is one more than the number of coins.
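The counting described above can be checked with a few lines of Python (a sketch added here for readers who like to see the enumeration done explicitly):

    from itertools import product
    from collections import Counter

    def count_states(number_of_coins):
        # every ordered sequence of H/T is one microscopic state
        microstates = list(product("HT", repeat=number_of_coins))
        # the macroscopic state is just the total number of heads showing
        heads_counts = Counter(state.count("H") for state in microstates)
        return len(microstates), dict(sorted(heads_counts.items()))

    for n in (2, 3, 4):
        total, by_heads = count_states(n)
        print(n, "coins:", total, "microstates,", len(by_heads), "macrostates", by_heads)
    # 2 coins: 4 microstates, 3 macrostates {0: 1, 1: 2, 2: 1}
    # 3 coins: 8 microstates, 4 macrostates {0: 1, 1: 3, 2: 3, 3: 1}
    # 4 coins: 16 microstates, 5 macrostates {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}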
Now we are finally ready to state the statistical definition of entropy. First we will define what we will call likelihood (not a standard name). The likelihood of a macroscopic state is equal to the number of microscopic states that are consistent with it. For the two coin example, the likelihood of zero heads is one, of one head is two, and of two heads is one. If the coins are fair, the state with likelihood two is twice as likely to occur as are the states with likelihood one. Then we define the entropy of a macroscopic state to be equal to the natural logarithm of the likelihood. The actual definition used in practice includes a constant of proportionality in order to adjust the units of measurement, so that this definition agrees numerically with the earlier definition of entropy, but we will ignore the constant for the time being.
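In code, the definition looks like this (again a sketch, with the constant of proportionality set to one); the likelihood of a given number of heads among n coins is the binomial coefficient "n choose k."

    import math

    def likelihood(number_of_coins, heads):
        # number of microscopic states consistent with this macroscopic state
        return math.comb(number_of_coins, heads)

    def entropy(number_of_coins, heads):
        # natural logarithm of the likelihood (proportionality constant omitted)
        return math.log(likelihood(number_of_coins, heads))

    print([likelihood(2, h) for h in range(3)])         # [1, 2, 1], as in the text
    print([round(entropy(2, h), 4) for h in range(3)])  # [0.0, 0.6931, 0.0]

    # With 100 coins, the half-heads macrostate is already overwhelmingly the most likely:
    print(likelihood(100, 50))          # about 1.01e29 microstates
    print(round(entropy(100, 50), 1))   # about 66.8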
What is the natural logarithm? It is a function that mathematicians find to be very useful. The abbreviation is "ln" - the natural logarithm of two is written ln(2). If you have a calculator with a button that is labeled "ln", it will give you the value of this function. For example, ln(1) = 0, ln(10) = 2.3026 to five significant figures, and ln(1,000,000,000) = 20.7233. The natural logarithm can be defined mathematically in a number of ways, some of which we will discuss in class. A graph of the natural logarithm follows the curve displayed here:
Figure 3: The Natural Logarithm
The key geometric points are that the curve passes through zero when its argument is one (ln(1) = 0), that it rises ever more slowly as its argument increases, and that it plunges toward negative infinity as its argument approaches zero; the logarithm is defined only for positive numbers.
The key arithmetic characteristic of the logarithm that we should mention is that the logarithm of the product of two numbers is the sum of the logarithms of those two numbers. For example, ln(2) + ln(3) = ln(2 x 3) = ln(6). That is why we use the logarithm in our calculation of entropy: it makes entropy an additive quantity.
The behavior of a modest number of coins in this model can be traced using a computer program. The choice of the next coin to turn is decided by a random number generator (similar to the one which generates a state lottery selection "by the machine"). Eventually, sure enough, the state is reached where half the coins are heads, the other half tails. But there is a lot of "noise"; the state jiggles noticeably about this equilibrium position. In the real situation in Nature, there is an enormous number of "coins" (far too many to be studied in a computer model), so that the departures from equilibrium are too small to notice.
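A sketch of such a program, in Python, is given below; the number of coins, the number of steps, and the starting arrangement (all tails) are arbitrary choices made for illustration.

    import random

    def simulate(number_of_coins=100, steps=2000, seed=1):
        rng = random.Random(seed)
        coins = [0] * number_of_coins      # 0 = tails, 1 = heads; start with all tails
        heads_history = []
        for _ in range(steps):
            i = rng.randrange(number_of_coins)
            coins[i] = 1 - coins[i]        # turn the randomly chosen coin over
            heads_history.append(sum(coins))
        return heads_history

    history = simulate()
    # the count climbs toward half heads and then jiggles noisily about that value
    print(history[:5])
    print(sum(history[-500:]) / 500)       # average over the late steps, close to 50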
The system of a set of coins illustrates the concept of entropy, but it does not mimic very well any real physical system. This system has only two macroscopic variables defined - the total number of coins and the number of coins showing heads. All real physical systems have at least one more macroscopic variable, which for an isolated system would be related to the total energy of the system. We will not pursue a more realistic system in these notes, but the interested reader is referred to a chapter in the book by Bent and to the paper by Styer.
Another topic to which the statistical approach to entropy particularly lends itself is the entropy of mixing, but we will not pursue those details here.
Light is a specific example of a general phenomenon called electromagnetic waves. Radio waves, X-rays, and gamma-rays are other examples. Light can be narrowly defined to be only those electromagnetic waves to which our eyes are sensitive ("visible light"), but for the sake of brevity we will let "light" be synonymous with "electromagnetic waves" from now on.
One of Einstein's many contributions to science was showing that light does not always act like a wave. There are many situations where light behaves like a particle (waves are continuous; particles are discrete). A particle of light is called a photon. In fact, when we see something, our eyes are detecting photons. For our purposes in these notes, the wave nature of light is irrelevant, and we will only be concerned with light as being photons.
A collection of photons is similar in many respects to an ordinary gas, such as the atmosphere, with photons taking the place of molecules. If photons reflect from a mirror, they exert a pressure on it, just as the molecules of an ordinary gas exert a pressure on a surface when they bounce from it. The collections of photons that we are interested in can also have a temperature assigned to them. They will expand to fill any volume to which they are confined. So we can legitimately speak of a photon gas.
But there are some important differences between photons and gas molecules. Photons in empty space always travel at the same speed, which we call the speed of light (fast enough to go more than seven times around the Earth in one second), whereas the speed of a molecule depends on its energy and is always less than the speed of light. There is no chemical affinity whatever between photons, so that no matter how much we cool off a photon gas it will never liquefy or solidify. Gravity holds the molecules of the atmosphere to the Earth, but it has hardly any effect on photons.
Most importantly, photons can be emitted and absorbed by the molecules of an ordinary substance. A photon that has just been emitted did not previously exist, and a photon once absorbed no longer exists. Hence, the number of photons is a changeable thing. But a photon that does not encounter matter can live indefinitely. Astronomers regularly see photons from far away galaxies, which have taken billions of years to reach Earth. Emission and absorption of photons by condensed matter (e.g., solids and liquids, but not gases) is strongly affected by the temperature of the matter. The average energy of an emitted photon is directly proportional to the absolute temperature of the glowing object, and the energy carried away per unit time by the emitted photons increases with the fourth power of the absolute temperature of the emitter. If a particular substance is reluctant to absorb photons of any specific color of light (it has a shiny, reflective surface), it is equally reluctant to emit them; and if it is eager to absorb them (it has a dull black surface), it is equally eager to emit them.
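The fourth-power statement can be written in the standard form P = (emissivity) x (sigma) x (area) x T^4, where sigma is the Stefan-Boltzmann constant; the little sketch below (with assumed sample numbers) shows how steeply the emitted power climbs with temperature.

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W per square meter per kelvin^4

    def radiated_power(area_m2, temperature_kelvin, emissivity=1.0):
        # energy emitted per unit time by a surface, in watts
        return emissivity * SIGMA * area_m2 * temperature_kelvin ** 4

    print(radiated_power(1.0, 300.0))  # about 459 W from one square meter near room temperature
    print(radiated_power(1.0, 600.0))  # about 7350 W; doubling the temperature multiplies the output by 16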
We are interested in photons because they are the primary way by which objects separated by empty space exchange energy in the form of heat. All objects emit photons, and all of them absorb them, but the hotter ones emit many more, each with more energy, so that the net flow of energy carried by photons is from hotter objects to cooler ones, agreeing with the property of heat that it always flows from hot to cold.
It is sufficiently accurate for our purposes to say that the entropy of a photon gas is primarily determined by the number of photons it contains, and secondarily by the variety of directions and energies of those photons. As an object emits photons, it increases the entropy of the photon gas around it; and as it absorbs them, it decreases that entropy. We will see later in a specific and important example how such emission and absorption adheres to the requirements of the second law of thermodynamics.
Seeing things usually depends on what we might call "high quality photons." Such photons are emitted by objects, such as the Sun or an incandescent filament, that are much hotter than our general surroundings. These photons are reflected in varying degrees from cold objects into our eyes, and we see those objects by the contrast, the variation in the light reaching our eyes. At room temperature, the photons that are emitted by those objects because of their temperature do not register in our eyes at all because they are too low in energy to activate the chemistry in the rods and cones in the retinas of our eyes. Suppose we lived in a world where everything did emit light that we could see. Imagine being in a pottery kiln (ignore the fact that the human body cannot withstand such heat). When the kiln is cold, all is black and we see nothing. As the temperature in the kiln rises, the pots give off red light characteristic of their warmer temperature. But so do the walls of the kiln, our own skin, and everything else in the kiln. So all is red, there is no contrast, and we see nothing!
High energy photons, in addition to their ability to initiate the chemistry in our retina that lets us see, also share other characteristics. In particular, the higher the photon's energy, the shorter the wavelength of the associated electromagnetic radiation. But shorter wavelengths are required in order to resolve finer detail, whether looking by eye or using instruments to detect "light" with photons so energetic that they are not visible to the human eye. In order to "see" detail on the scale of atoms, you must use photons of energy corresponding to temperatures of about 100,000,000 degrees Kelvin (or Celsius). Shining "light" with such energetic photons onto an atom can be counted on to give it quite a whack when the photon bounces off, sending the atom who knows where. Shining light with less energetic photons does not permit establishing the location of the atom that bounced the photon to your eye (or other detector) with useful precision. This may be interpreted as an application of the Heisenberg Uncertainty Principle, and it provides yet another explanation of why Maxwell's Demon cannot succeed.
The composition of the Sun is approximately three-fourths hydrogen (by weight) and one-fourth helium, the two lightest elements in the periodic table. There are small amounts of all the other elements, which are not important for this discussion. (If you are curious, an intermediate-level discussion of the origins of the various elements that are present in the Sun and in the Earth can be found in the Notes on Modern Physics and Ionizing Radiation .) These materials are in the plasma state, which means that because of the high temperature the electrons do not remain connected to the nuclei of their atoms, so that we have a fluid composed of highly energetic, electrically charged particles, both nuclei and electrons, all intermingled into a "very hot soup."
Near the center of the Sun, where the temperature is the highest, nuclear fusion reactions are continuously happening, which turn four hydrogen nuclei into a helium nucleus with an enormous amount (relatively speaking) of energy released each time. But these reactions happen at a very slow rate, so that only a minute fraction of the hydrogen in the Sun is converted into helium in any time period relevant to human beings. The Sun has been shining for about five billion years, and should keep shining for a comparable time into the future. The energy released by the fusion reactions is partially converted into photons, but photons cannot travel very far in a dense plasma. They are quickly absorbed, and then the next layer, which was heated when it absorbed them, emits new photons because of its higher temperature. But each layer is a bit cooler than the one below it, so that it emits photons with a bit less energy than those which it absorbed. The pressure also drops layer by layer, and the plasma is therefore less dense. Finally a layer is reached where the plasma is thin enough so that the photons can escape into empty space and continue unimpeded with the speed of light. This layer is the visible surface of the Sun (the photosphere) which, at about 5,500 degrees Celsius, is much hotter than the surface of the Earth but is much cooler than the inside of the Sun, where temperatures are on the order of 15,000,000 degrees Celsius.
The Earth moves in an orbit that keeps it at roughly the same distance from the Sun all the time. The surface of the Earth absorbs a certain fraction of the photons from the Sun that fall on it (we ignore for now complications brought about by the Earth's atmosphere). Absorbing photons causes the Earth to heat up. But the Earth also emits photons into space because of its temperature. Emitting photons causes the Earth to cool off. If the Earth has just the right temperature, it will lose energy at exactly the same rate as it gains it, its temperature will not change, and a steady-state situation will have been achieved. This balance in fact determines the average temperature of the Earth, and that average temperature stays at about the same value all the time.
The equilibrium temperature is determined by the balance between absorption and emission. The absorption and emission of any color of light go together, as we commented earlier. However, the light being absorbed is mostly visible light, and the light being emitted by the Earth is mostly infrared, so changes in the composition of the surface of the Earth (e.g., continental ice and snow cover or fields and forests) or of the atmosphere (e.g., concentration of methane, carbon dioxide, or water vapor) can shift the balance dramatically. If the efficiency of emission of infrared light decreases, or the efficiency of absorption of visible light increases, then the equilibrium temperature (the temperature at which the total energy absorbed each year balances the total energy radiated that year) will be higher, and vice versa.
Figure 4: Earth absorbs light from the Sun and re-radiates light in all directions.
As is evident from Fig. 4, the photons absorbed from the Sun were all headed in just about the same direction; the photons re-radiated by the Earth go in all different directions; and each re-radiated photon carries less energy ("is redder") than the solar photons that are absorbed by the Earth.
Sunlight falling on the surface of the Earth heats it up. Imagine holding up a loop (for example, of coat-hanger wire) so that the Sun's rays go through it perpendicular to the plane of the loop. Then let the light that has gone through the loop fall on a flat absorbing surface held at a variable angle. The amount of energy per square meter of the absorbing surface will be a maximum if the surface is perpendicular to the path of the light (parallel to the plane of the defining loop), and falls to zero as the surface becomes parallel to the path of the light (because the light that goes through the defining loop will be spread out over a larger and larger area of the absorbing surface). This explains why the North and South poles are so much colder than the equator.
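The geometric factor is just the cosine of the angle between the incoming light and the perpendicular to the surface, as this small sketch shows (the 1,000 W per square meter beam value anticipates the next paragraph):

    import math

    def intensity_on_surface(beam_w_per_m2, angle_from_perpendicular_degrees):
        # energy per second per square meter of the absorbing surface
        return beam_w_per_m2 * math.cos(math.radians(angle_from_perpendicular_degrees))

    for angle in (0, 30, 60, 90):
        print(angle, round(intensity_on_surface(1000.0, angle), 1))
    # 0  -> 1000.0 (surface square to the Sun, as near the equator at noon)
    # 30 -> 866.0
    # 60 -> 500.0
    # 90 -> 0.0 (grazing incidence, as near the poles)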
Outside the atmosphere, the intensity of solar radiation ("insolation") at the Earth's orbital distance from the Sun is about 1,340 W/m^2. The total power delivered to the cross-section area of the Earth (πR^2) is spread out over the surface area (0.5 x 4πR^2) of the hemisphere that is toward the Sun (i.e., "in daylight"), so that the average power per square meter during daytime is one-half, and the average power per square meter, including nighttime, is one-quarter of the outer-space value quoted in the first sentence of this paragraph. Some of the solar energy that strikes the atmosphere is absorbed or reflected by the atmosphere, so that the intensity at the surface of the Earth is about 1,000 W/m^2 at noon in the tropics; the average intensity over the whole surface, including night and day, is therefore about 250 W/m^2. This incident intensity of sunshine, mostly visible light, is approximately balanced (over any extended period of time) by a combination of reflection (perhaps 5 - 10 %) together with the emission of earthshine, mostly infrared light. Recent research (Hansen et al., 2005) indicates that there is currently an imbalance of about 0.85 W/m^2 greater absorption than emission. This indicates that a small further increase in the temperature of the Earth can be expected, even if no further increase in atmospheric greenhouse gases takes place.
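These numbers can be tied together in a short calculation. The sketch below repeats the factor-of-four averaging and then asks what surface temperature would balance the absorbed sunlight by earthshine alone; the albedo of about 0.3 is an assumed round value, not a figure from these notes, and the answer ignores the warming effect of the atmosphere.

    SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W per square meter per kelvin^4
    solar_constant = 1340.0   # W per square meter outside the atmosphere, as quoted above

    average_daylight_side = solar_constant / 2   # ~670 W/m^2 (cross-section spread over half the sphere)
    average_whole_surface = solar_constant / 4   # ~335 W/m^2 (cross-section spread over the whole sphere)

    albedo = 0.3                                       # assumed fraction of sunlight reflected
    absorbed = average_whole_surface * (1.0 - albedo)  # ~235 W/m^2

    # steady state: absorbed = emitted = SIGMA * T^4
    T_equilibrium = (absorbed / SIGMA) ** 0.25
    print(round(T_equilibrium, 1))  # roughly 254 K, about -19 C; the real surface is warmer
                                    # because the atmosphere intercepts part of the outgoing infrared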
How do the seasons arise? There are two factors that influence the heating of the surface by the Sun's light: first, the distance between the Earth and the Sun, which varies slightly over the year because the orbit is not a perfect circle; and second, the angle at which the Sun's light strikes any particular part of the surface, which varies over the year because the Earth's axis of rotation is tilted with respect to its orbit.
Which of these two explains the temperature variations we call "seasons?" If it were the first, then the whole world would have seasons at the same time. But we know that the northern hemisphere and the southern hemisphere have opposite seasons, so we can be sure that the second effect is much greater than the first.
There is another continuous source of energy for the Earth which we should mention here. When the Earth formed, some of the atoms which were included in its makeup (such as uranium) were radioactive. Whenever a radioactive atom decays, some energy is released. This process keeps the interior of the Earth much hotter than it would otherwise be. In fact the Earth has a molten core (believed to be mostly iron and nickel). Not only do volcanos result from this molten interior, but also the pieces of solid outer crust are somehow pushed around by the molten interior (together with the Earth's rotation) so that they occasionally smash together to push up high mountains, or split apart to form oceanic trenches or rift valleys. Water evaporated from the ocean by the Sun's energy falls as rain on the volcanos and mountains, eroding them and causing certain chemical species to accumulate as minerals in ore bodies.
Although radioactive, U-238 is so nearly stable that its half-life is 4.51 billion years, comparable to the age of the Earth. Thus, nearly half of the original U-238 is still present. Virtually all other radioactive constituents of the original Earth have since decayed, but small amounts of some are continually formed by interactions between cosmic rays and the upper atmosphere (e.g., C-14), and by the decay of the remaining U-238 (e.g., radon).
If we ignore for simplicity the small (relatively speaking) flow of energy from the interior of the Earth, we see that we have a beautiful example of the second law of thermodynamics at work. In the steady state, the Earth absorbs and emits the same amount of energy in any given amount of time. But since the surface of the Sun is at a much higher absolute temperature than is the surface of the Earth (just about 20 times as hot), each absorbed photon has a much higher energy than each emitted photon does. So the Earth emits a much higher number of photons per unit time than it absorbs. We said in the previous section that the entropy of a photon gas is primarily determined by the number of photons. Furthermore, the emitted photons are scattered about in all directions, while the absorbed photons were all headed in nearly the same direction. (Solid geometry tells us that the photons headed toward the Earth from the Sun are headed in only one in 100,000 of all the possible directions for photons leaving the solar system.) So on the grounds of photon numbers and on the grounds of the range of the photons' directions, the process of absorption and re-emission of photons by the Earth is continuously increasing the entropy of the photon gas leaving the Solar System. In the same vein, each layer of the Sun emits more photons than it absorbs because each layer is successively cooler, so that the entropy in the photons fighting their way out of the Sun is also continuously increasing.
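The factor of twenty can be made explicit with a one-line estimate (the two temperatures are assumed round values):

    T_sun_surface = 5800.0   # kelvins, roughly the photosphere temperature
    T_earth_surface = 290.0  # kelvins, roughly the average surface temperature of the Earth

    # each absorbed solar photon carries roughly 20 times the energy of each emitted infrared photon,
    # so in steady state roughly 20 photons must leave the Earth for every one it absorbs
    photons_out_per_photon_in = T_sun_surface / T_earth_surface
    print(round(photons_out_per_photon_in))  # 20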
Chemistry concerns itself with situations wherein the molecules in a system interact with each other and result in new sorts of molecules. If the system is isolated or even just closed, then it contains exactly the same atoms after a reaction occurs as it did before, but those atoms are rearranged into different molecules. A very schematic chemical reaction can be written:

A + B → C + D
Here A and B stand for the molecules that are there in the beginning, and C and D stand for the molecules that result from the reaction. We allow that A and B do not necessarily specify two molecules - any number of molecules can in general react. The same goes for C and D. And A, B, etc., can also represent a macroscopic amount of a substance that has a definite preselected number of molecules in it.
We want to focus our attention on the arrow in the reaction formula. The notation using a point on only one end of the arrow implies that the reaction proceeds in a definite direction. It is an irreversible reaction. Indeed, if we study chemistry in school, we are usually given reagents in the laboratory which, when mixed together, give rise to irreversible reactions: a solid forms and sinks to the bottom of the test tube, or a gas bubbles out of the solution and escapes. But we should not be surprised by now to learn that if we do the experiment under very different conditions, it may be possible to get the reaction to go in the opposite direction - to start with C and D and end up with A and B. An extreme example of an irreversible reaction is an explosion; under ordinary circumstances, A and B are solids and C and D are gases which want to occupy much more volume. But if you mixed together C and D in a reaction vessel (which would have to be very strong) at a temperature and pressure so high that A and B would themselves be gases, then you might get back A and B. A less violent example is the tarnishing of metals. A silver object exposed to the atmosphere at room temperature develops over time a layer of unattractive material on its surface, which is just the product of chemical reactions between silver and gases in the atmosphere. A gold object under the same circumstances remains shiny. But if you put a tarnished silver object in an oven and raise the temperature high enough, the tarnish will disappear; the responsible chemical reactions go the other way at high enough temperature. In like manner, if you put a shiny gold object in a freezer and lower the temperature sufficiently, tarnish will develop on the gold. So for each metal of this kind ("noble") there is a temperature at which, at atmospheric pressure, the tarnishing reaction can go either way. It is a reversible reaction at that temperature.
Chemical reactions add another facet to the idea of equilibrium. An isolated system that is in mechanical equilibrium and thermal equilibrium might still not be in its true equilibrium state if it is not also in chemical equilibrium. A chunk of carbon in a tank of oxygen might or might not be in chemical equilibrium - it depends on whether the tank of carbon dioxide which would result from the chemical reaction has a higher value of entropy than the unreacted carbon and oxygen.
It is almost certain that chemical reactions are accompanied by a change in entropy. The number of molecules in the final state is probably different, the amount of energy tied up in chemical bonds is different, and the new molecules will have different sorts of energy levels in excited states. So all physical properties, including the entropy, will be different. But how can the change be determined?
One method can be used if conditions are known under which the given reaction will happen reversibly. First take the reactants carefully and reversibly (except for heat flow) from the initial conditions to the conditions where the reaction is reversible, let the reaction take place reversibly there, and finally bring the products, again carefully and reversibly, to the desired final state. All the while, measure the heat absorbed or emitted in each step of the process and the absolute temperature at which each step occurs. The original definition of entropy can then be invoked to calculate the change in entropy.
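A minimal numerical sketch of this bookkeeping, using the original definition of entropy (heat transferred reversibly divided by the absolute temperature at which it flows); the step data below are invented purely for illustration and do not refer to any particular reaction:

    # Each step of the reversible path is recorded as (heat absorbed in joules,
    # absolute temperature in kelvin at which that heat flows).  Heat given off
    # is entered as a negative number.  These numbers are made up for illustration.
    steps = [
        (+150.0, 300.0),   # warm the reactants to the reversible conditions
        (-500.0, 350.0),   # the reaction itself, releasing heat reversibly
        (+120.0, 320.0),   # bring the products to the desired final state
    ]

    # Original (thermodynamic) definition: the entropy change is the sum of
    # heat divided by temperature over all the reversible steps.
    delta_S = sum(q / T for q, T in steps)
    print(f"Entropy change of the reacting system: {delta_S:.3f} J/K")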
A specific case of this method, arguably more theoretical, uses the third law of thermodynamics to choose the conditions under which the reaction is reversible. According to the third law (first put forward in an alternative form by Nernst in 1905), every pure substance in the form of a perfect crystal at absolute zero temperature has zero entropy. If this law is true, the absolute entropy of any pure substance can be defined, provided one is able to keep track either experimentally or theoretically of the entropy changes (using the original definition of entropy) as the material is warmed up from absolute zero to its final temperature. A corollary of the third law is that chemical reactions occur without change of entropy at absolute zero. In practice, chemical reactions happen more and more slowly as the temperature is lowered, so that we would never actually try to do chemistry at absolute zero - it would take forever! But we can calculate the entropy change needed to get the reactants down to absolute zero, imagine that the reaction did take place with no change in entropy, and then calculate the entropy change in the products upon warming up to the final temperature. Results from these two methods seem to be in agreement. The third law is not quite as well verified as are the other laws of thermodynamics (being about half their age), but it does seem to be surviving experimental tests. There are arguments of a theoretical nature, based upon the statistical definition of entropy, for its validity as well.
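In standard notation, this procedure can be summarized as follows (our own compact restatement, not a formula appearing elsewhere in these notes):

    S(T) = \int_0^T \frac{C_p(T')}{T'}\, dT' + \sum_i \frac{L_i}{T_i},
    \qquad
    \Delta S_{\text{reaction}} = \sum S(\text{products}) - \sum S(\text{reactants})

Here C_p is the heat capacity of the substance, and the L_i are the latent heats of any phase changes (melting, boiling) that occur at temperatures T_i on the way up from absolute zero.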
The biosphere is the system containing the living material found near the surface of the Earth. We human beings have a special interest in it because we are part of it and depend on it for our food. There is a large amount of interaction among parts of the biosphere, but it is often useful to separate out individual organisms for study (each person is an organism, but it is difficult to provide a concise and accurate definition). In the following six enumerated paragraphs, we outline those properties of living material which are pertinent to our discussion.
The information content of the DNA lies in which base pairs form the cross links between the two strands of the double helix, and in what order they occur. The entire information content of a full complement of a normal human cell's DNA is roughly a gigabyte or two - the same order of magnitude as the capacity of a standard DVD.
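A back-of-the-envelope version of this estimate follows. The numbers are our assumptions (about 3.2 billion base pairs per single chromosome set, two sets per normal cell, two bits of information per base pair since there are four possible base pairs, and 4.7 GB for a single-layer DVD), and the count ignores any compressibility of the sequence:

    # Rough raw information content of the DNA in one human cell.
    base_pairs_per_set = 3.2e9     # approximate base pairs in one chromosome set
    sets_per_cell = 2              # normal human cells carry two chromosome sets
    bits_per_base_pair = 2         # four possible base pairs -> 2 bits each

    total_bits = base_pairs_per_set * sets_per_cell * bits_per_base_pair
    total_gigabytes = total_bits / 8 / 1e9
    print(f"Raw information content: about {total_gigabytes:.1f} GB")
    print(f"Compared with a single-layer DVD (4.7 GB): {total_gigabytes / 4.7:.0%}")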
Now we are in a position to discuss the second law as it operates in the biosphere. An organism taking in material and organizing it certainly is decreasing the entropy of that material. So the activity inside the organism is working opposite to the trend of increasing entropy. But the organism is an open system. It discharges to the environment material which is disorganized and higher in entropy than the material taken in. The chemical reactions doing the organizing (except in green plants) produce a net amount of energy which is sent to the environment as heat. So the organism is an island of decreasing entropy which, through its interaction with the environment, increases the entropy of the environment enough to ensure that the entropy of the Universe is appropriately increased. And of course the organism is just playing a delaying game anyway. When the time comes, it dies, and up goes the entropy of whatever it is composed of at that time.
A subtle point can be made here. Multicellular organisms grow considerably in size while they develop. The entropy of a system is roughly proportional to the amount of material in it. So the entropy of an organism actually does increase as time goes by. The organizing power of biological processes just makes it increase more slowly than it would if the organism stayed in equilibrium with its environment. Some researchers are using these same ideas to analyze the whole ensemble of living things. If all possible combinations of the genetic code in DNA were realized, there would be many, many more species than we see in Nature, and the entropy of the biosphere would be correspondingly higher. But evolution, as it chooses which species actually come forward, acts in such a way as to keep the entropy of the biosphere from increasing as fast as it would in an equilibrium situation.
Green plants are a special case, but in the end the story is the same. When a photon is taken from the "photon gas" by photosynthesis, the number of photons in the gas decreases by one, and the entropy of the photon gas decreases accordingly. If all the energy of the photon is used in the chemical reaction, then the plant does not have to discharge any heat to the environment. So the plant seems to be taking a serious run at the second law. But again it is at best a delaying action. Eventually the plant is eaten, burned, or attacked by fungi, and the stored energy eventually goes back into space as photons, completing the entropy increase.
If the photons from the Sun are simply absorbed without being used by living things, the lower-energy photons emitted in their place go back to space almost immediately. So we see that the entire biosphere (except the "biopoints") operates on delaying the entropy increase associated with absorbing solar photons and re-emitting terrestrial ones. And we can deduce the absolute physical limit on the activity of the biosphere - absorption of every solar photon by a green plant via photosynthesis. But even in the thickest jungle only a small fraction of the available photons are actually used by the plants in this way. And there are vast regions - such as the deserts, where insufficient water is available - where the natural system falls far short of this potential. So people help the biosphere reach toward this limit by irrigating arid regions, etc.
Of course it is important to modern-day technology that certain geological processes have delayed a portion of the entropy increase indefinitely by burying the remains of living material before complete degradation could take place. Coal, oil, and natural gas are thus reserves of low entropy put away long ago. We are currently burning these reserves at a much higher rate than the geological processes buried them, so that the rate of entropy increase is now somewhat greater than the long-term average. And carbon dioxide is being added to the atmosphere at a rate larger than photosynthesis can use it or the oceans can dissolve it. The fossil fuels will probably be exhausted on a time scale minuscule compared to the geological time scale of their creation and preservation.
Time is a physical variable that is perhaps less well understood than most. We will identify and discuss very briefly five different concepts of time: psychological, dynamic, biological, cosmic, and entropic.
The human brain is equipped with a memory capability. When events happen, they are stored in the memory for longer or shorter times depending on how "memorable" the person judges them to be. We are also equipped with facilities to anticipate the future. But we regard these two facilities as qualitatively different. Suppose we see a mirage. We store that fact in memory. Later we realize that it was indeed a mirage - that the event we remember was not true. We do not wipe it out of memory; instead we record a new memory stating that the previous one was wrong. Suppose we anticipate a sunny day. If we see dark clouds gathering, we wipe out that anticipation and anticipate a rainy day instead. But in memory we have stored our previous false anticipation. So we perceive the future and the past as being quite different. Psychological time is a variable which moves in one direction, recording the actualization or contradiction of anticipations and their storage in memory.
Dynamic time, by contrast, can go, in theory, in either direction. Consider for example the gross motions of the Solar System. If we write down the dynamical equations of motion for this system and then make time go the other direction, we get a description of the motion which is just as consistent with the physical laws of nature as the one with time going in the original direction. Let us state this idea in an operational way. Suppose some intelligent beings came from another stellar system and recorded the motions of our Solar System on a video tape. Then they went back to their home planet and played the tape to an audience of experts on stellar systems. The experts could not tell for sure from viewing the pictures whether the tape was being played backward or forward. If we choose the situation carefully, we can find examples of dynamic time at all scales down to the motion of electrons in an isolated atom. Clocks and other time-measuring devices are based on dynamic time. In the theory of relativity, we go so far as to regard time as another dimension, like east, north, or up.
Biological time is measured on the grand scale by the evolution of the species. Biologists may have an objective way to judge the complexity of an organism. Using this scale, they can say that present-day species are, on average, more complex than those which existed, say, 500 million years ago, although they are not more complex than those of 1 million years ago. Furthermore, genetic material is not passed along to the next generation with perfect accuracy, so that mutations gradually accumulate. This permits estimation of how recently the last common ancestor of two organisms lived. So the biologists say that we tell which is future and which is past by observing in which direction the species become more complex (for the most distant past) and in which direction populations differentiate (for any time scale, including the more recent past). Of course one must be very patient to use this criterion.
Cosmic time is similar, but on a still grander scale. Evidence now points to the idea that the Universe had a definite beginning time - the "hot big bang" theory. ("In the beginning there was nothing, which exploded.") The Universe went through a definite set of stages in a definite time sequence. If we want to know in which direction of time the past lies, we ask in which direction of time the Universe was much smaller in scale and much hotter. One piece of evidence is the motion of distant galaxies: they are going away from us in all directions, and the farther away they are, the faster they are receding from us. If someone showed us a movie in which the galaxies were coming toward us from all directions, we would say that the movie was being played backwards. According to this theory, it is possible that cosmic time will reverse (compared to other time measures) at some remote future time. If there is enough mass out there, the gravitational attraction will eventually stop the expansion and start a contraction. After that time, the galaxies will indeed come toward us again and the Universe will eventually end in a "hot big crunch". So far we do not see enough mass out there, but there may be some that we are not aware of. Stay tuned.
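The proportionality between recession speed and distance mentioned above is Hubble's law; in the usual notation,

    v = H_0 \, d

where v is the recession speed of a distant galaxy, d is its distance, and H_0 is the Hubble constant, currently measured to be roughly 70 kilometers per second per megaparsec.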
After all this introduction, we come to the arrow of time that is closely related to our subject of study, entropic time. The second law says that during a process, the entropy of an isolated system usually increases and never decreases. "During" refers of course to the passage of time. Reversible processes are governed by dynamic time, which in theory can go in either direction. But any irreversible process carries with it information about the direction of the passage of time. Suppose you are watching a video recording of a lake in the summertime and you spot an ice cube suddenly appear and then grow larger. Or you see a bouncing ball bounce higher each time. Or you see the scrambled pieces of a jigsaw puzzle fall onto a table while coming together to form the solved puzzle. In each case you see entropy decreasing, and you suspect that the recording is being played backwards.
With all these examples (and others that we have not mentioned), how do we objectively discuss time? We give you what we think is the physicists' bias. Dynamic time is fundamental for the quantitative measurement of time - it tells us how long a second is. Entropic time then distinguishes fundamentally the future from the past. But we perform a calibration initially using psychological time so that our definitions agree with common sense. Then other time perceptions are compared to these fundamental ones. For example, if someone came by and said that he remembered heat flowing from cold objects to hot ones, we would say that he is confused.
The main lesson is perhaps that history cannot in the end be a repetition of never-ending cycles. Time has a direction, and physics teaches us that some aspects of the environment go in an unchangeable direction. Life will then likely be smoother for us all if we learn to adjust our society to go in that same direction.
Dick Piccard revised this file (https://people.ohio.edu/piccard/huwe/index.html) on April 23, 2023.
Please e-mail any comments or suggestions to piccard@ohio.edu.