Halley’s Comet is arguably the most celebrated celestial object. In a sense, it provides the link between humanity’s belief in superstition and science. In its previous appearances, Halley’s Comet was often viewed as a bad omen. The most famous case was its appearance before the Battle of Hastings in 1066: King Harold II viewed it as a bad omen and suffered mortal wounds during the battle. Halley’s Comet also represents a triumph of science. Using Newton’s laws of motion and gravitation, Edmond Halley predicted that the comet that had appeared in 1682 would reappear in 1759. Halley died in 1742, but when the comet made its predicted appearance, it was named in his honor.
The first known recorded appearance of Halley’s Comet was in 240 B.C. Chinese astronomers referred to comets as “broom stars” that appeared in the sky for weeks at a time. The comet has returned roughly every 76 years since. Its size and brightness vary from visit to visit, depending on its distance from Earth and, in modern times, on increasing light pollution. The comet’s closest approach to Earth came in 837, when it passed within 4 million miles and its tail stretched 90 degrees across the sky, the equivalent of the distance from the horizon to directly overhead. Halley’s last appearance in 1985-86 was somewhat disappointing. Its closest approach to Earth was 38 million miles, and while visible, it was not quite the remarkable sight it had been on earlier visits. However, it was during that approach that the Giotto probe was able to take photographs 376 miles from the comet’s nucleus. Below are images of Halley’s Comet recorded during its prior visits.
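Halley’s prediction can be roughly reproduced with Kepler’s third law, which ties a solar orbit’s period to its semi-major axis. A minimal sketch in Python, assuming the commonly quoted semi-major axis of about 17.8 AU for the comet:

```python
# Kepler's third law for bodies orbiting the Sun: T^2 = a^3,
# with T in years and a in astronomical units (AU).
a_au = 17.8                   # semi-major axis of Halley's Comet (assumed value)
period_years = a_au ** 1.5    # T = a^(3/2)
print(f"Orbital period: about {period_years:.0f} years")
```

This lands close to the roughly 76-year interval between apparitions; the actual period varies by a few years from orbit to orbit because of planetary perturbations.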
This is part of the Bayeux Tapestry commemorating the Battle of Hastings in 1066; the comet appears at top center. King Harold II of England took it as a bad omen and indeed, he was killed in the battle. William the Conqueror of Normandy won the battle and was crowned King of England. The tapestry can be viewed by the public in the Bayeux Tapestry Museum. Bayeux, incidentally, is located just a few miles from the American D-Day landing site on Omaha Beach.
This painting by the Italian artist Giotto di Bondone, Adoration of the Magi, uses a comet to represent the Star of Bethlehem. The painting dates to 1304, three years after Halley’s appearance in 1301, and the comet at top center is generally believed to be modeled on Halley’s Comet. It is located in the Cappella degli Scrovegni in Padua, the same town where Galileo would make his historic telescope observations in 1610. In 1986, the European Space Agency (ESA) named its Halley’s Comet space probe Giotto in honor of the artist.
This painting by Samuel Scott, from 1759, depicts Halley’s Comet over London, with Westminster Abbey visible at the far left. This was the appearance predicted by Edmond Halley, who had died in 1742.
Halley’s appearance in 1910 provided astronomers with the first opportunity to photograph the comet. This series of photographs shows the comet over a two-month period as it approached and then receded from Earth. The images were taken with Mt. Wilson’s 60-inch telescope. This was one of Halley’s most celebrated appearances; its tail stretched 30 degrees across the night sky. In fact, on May 18, 1910, the Earth passed through the tail of Halley’s Comet. Entrepreneurs sold “comet pills” that were supposed to counteract the effects of the cyanide gas that had been detected in the tail. Of course, the tail is far too tenuous to have any effect on life on Earth.
The front-page article from the New York Times on this event is quite interesting. Mark Twain, who was born a couple of weeks after the previous visit by Halley’s Comet in 1835, predicted in 1909: “I came in with Halley’s Comet in 1835…and I expect to go out with it.” Mark Twain died on April 21, 1910.
While Halley’s last appearance in 1986 may have been disappointing to earthbound viewers, it provided us with the first space mission to a comet (Giotto). The nucleus of the comet is approximately 16 x 8 x 8 kilometers. The closest image was taken 95 seconds before Giotto’s nearest approach of 376 miles. The nucleus itself is one of the darkest objects in the solar system, darker, in fact, than coal. The jets emanating from the sunlit side are not uniform, which could account for some of the irregularities detected in the orbit of Halley’s Comet. The comet will begin its next approach to the Sun in 2024 and will be visible from Earth again in 2061.
*Image atop post is an engraving from Halley’s 1682 visit. Edmond Halley was 26 years old during this visit.
“The distinction between past, present, and future is only a stubbornly persistent illusion.” – Albert Einstein
A comprehensive overview of the theory of relativity and its applications in astronomy would require a course in itself. The purpose of this post is to give a brief overview of the subject and, in particular, the history of its development as a theory. What I would like to stress is that despite its fearsome reputation for being difficult to understand, the major concepts of the theory can be understood by the public. In its most advanced form, the mathematics of relativity can challenge any student of physics. However, this is true of any area of physics; not many physics students will tell you that a graduate-level electricity & magnetism course is a breeze, yet the subject can still be presented in a manner the public can understand. The difficulty of relativity lies in the fact that it deals with phenomena we do not ordinarily observe in our lives. Relativity provides accurate predictions in two areas where Newton’s Laws do not: when matter has velocity near the speed of light and/or is located near a large gravity well (such as a star or a black hole). Outside of these two situations, however, Newton’s Laws and Einstein’s Theory of Relativity give essentially the same results.
Beginnings
In the latter part of the 1800’s, physics was thought by many to be a dead science. Newton’s Laws were considered the final say in predicting the behavior of matter in motion, and James Clerk Maxwell, using four equations, had successfully provided a comprehensive explanation of the properties of electricity and magnetism. The major problems in physics and astronomy seemed to be solved. However, as the century came to a close, cracks were appearing in this assumption. One was the failure of Newton’s Laws to accurately predict the orbit of Mercury around the Sun. The perihelion (closest approach to the Sun) advances 574″ (about 1/6 of a degree) per century, which is 43″ more than the 531″ advance predicted by Newton’s Laws. The predicted advance is caused by the gravitational pull of the other planets in the solar system, leaving the extra 43″ unexplained. For a time, scientists thought this extra advance was due to the presence of an undiscovered planet. As none was found, a new explanation was required. An image of the advance is depicted below; it is exaggerated to demonstrate the effect.
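For the curious, the missing 43″ can be reproduced from the perihelion-advance formula that general relativity would later supply. A rough sketch, using standard values for Mercury’s orbit:

```python
import math

# General-relativistic perihelion advance per orbit:
#   dphi = 6 * pi * G * M / (c^2 * a * (1 - e^2))
# Standard constants and Mercury's orbital elements:
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
c = 2.998e8            # speed of light, m/s
a = 5.791e10           # Mercury's semi-major axis, m
e = 0.2056             # Mercury's orbital eccentricity
T_days = 87.97         # Mercury's orbital period, days

dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))  # radians per orbit
orbits_per_century = 36525 / T_days
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec:.1f} arcseconds per century")
```

The result comes out very close to the 43″ per century that Newtonian calculations could not account for.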
Einstein’s Papers
In 1905, Albert Einstein, who was working as a technical expert in a Swiss patent office, published four landmark papers (in addition to his doctoral dissertation) revolutionizing physics. This year is often called “annus mirabilis” or miracle year. The topics of these four papers are the following:
1. The photoelectric effect, demonstrating that light behaves as a stream of particles as well as waves. It was known at the time that a beam of light would knock electrons off a metal surface, much as a baseball thrown onto a beach knocks sand into the air. The accepted theory held that light consisted solely of waves, which could not explain the photoelectric effect. Einstein showed that light also behaves as a stream of discrete particles. Thus, light has a duality: it behaves both as a stream of particles and as waves. This discovery is a foundation of quantum physics.
2. The second paper explained Brownian motion, the jittery movement of particles suspended in a fluid, as the result of collisions with atoms and molecules in thermal motion. It was this paper that put to rest the ongoing debate over whether atoms existed as the constituent particles of matter.
3. The Special Theory of Relativity. This paper concerned the motion of objects in non-accelerating frames of reference, meaning gravity is not a factor in the Special Theory, as opposed to the later-developed General Theory of Relativity.
4. The mass-energy equivalence principle. This paper gave us that famous equation, E = mc².
The last two papers will be discussed below.
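Einstein’s photoelectric relation from the first paper, E_k = hf − φ, is simple enough to evaluate directly. A sketch assuming violet light at 400 nm and the commonly quoted work function of sodium (about 2.28 eV); both of those inputs are illustrative choices, not from the original papers:

```python
# Photoelectric effect: max kinetic energy of an ejected electron
# equals the photon energy minus the metal's work function.
h = 6.626e-34      # Planck's constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt

wavelength = 400e-9                          # violet light, 400 nm (assumed)
photon_energy_eV = h * c / wavelength / eV   # photon energy E = hf = hc/lambda
work_function_eV = 2.28                      # sodium (assumed textbook value)
kinetic_eV = photon_energy_eV - work_function_eV
print(f"photon: {photon_energy_eV:.2f} eV, electron: {kinetic_eV:.2f} eV")
```

Note that if the photon energy falls below the work function, no electrons are ejected at all, no matter how intense the beam, which is exactly what the wave picture could not explain.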
The Special Theory of Relativity
As mentioned earlier, James Clerk Maxwell, in the mid-1800’s, formulated four basic equations outlining the properties of electricity and magnetism. One outcome of these equations is that electromagnetic radiation travels at 3.0 × 10^8 m/s (186,282 miles per second), and that this speed is constant regardless of the observer’s velocity relative to the radiation. What exactly does this mean? Think of yourself on a highway moving at 55 mph. The car in the lane next to you is moving at 60 mph. That car will pass you at a rate of 5 mph, as its velocity is that much faster than yours. Now, let’s ramp up the speed of your car to 186,277 miles per second, exactly five miles per second slower than the speed of light. Remember, light is just a form of electromagnetic radiation. Imagine a beam of light traveling in the lane next to your car. At what rate of speed would it pass you? All your life’s experience would lead you to answer five miles per second, but that would be incorrect! The light beam would pass you at 186,282 miles per second, as if you were standing still. This is true regardless of your velocity relative to the beam of light. The genius of Einstein was to realize that, contrary to what we perceive, the speed of light is constant for all observers and time is variable as a function of velocity. The Special Theory of Relativity leads to the following conclusions:
1. As an object (or person) approaches the speed of light, its clock slows down compared to a stationary observer’s. If you were to take a round trip to a destination 100 light years away, traveling at 99.995 percent of the speed of light, you would age only two years but arrive back on Earth 200 years later. In popular entertainment, the original 1968 movie Planet of the Apes gives a reasonably accurate portrayal of this effect.
2. The mass of an object increases as it approaches the speed of light; in fact, it approaches infinity. This is why the speed of light is the maximum speed obtainable in our universe: as the mass approaches infinity, the force required to accelerate the object further approaches infinity as well.
3. The length of an object appears to decrease to a stationary observer as it approaches the speed of light.
4. E = mc². This is the equation that gives us our understanding of the nuclear fusion that occurs in the Sun. As hydrogen fuses to form helium, the mass of the helium produced is slightly less than the mass of the original hydrogen; the difference is converted to energy. The Sun converts 4.3 million tons of mass into energy each second, a fraction of which reaches the Earth, providing the energy that sustains life.
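A few of the numbers above can be checked with a handful of lines. The sketch below, using standard constants (the solar luminosity of 3.828 × 10^26 W is a standard reference value, not from this post), applies the relativistic velocity-addition formula to the highway example, the Lorentz factor to the 100-light-year trip in item 1, and E = mc² to the Sun’s output in item 4:

```python
import math

c_mi = 186282.0   # speed of light, miles per second
c = 2.998e8       # speed of light, m/s

# Relativistic velocity addition: w = (u + v) / (1 + u*v/c^2).
# Composing any speed with c yields exactly c, so the light beam
# passes the near-light-speed car at full speed.
def combine(u, v):
    return (u + v) / (1 + u * v / c_mi**2)

print(combine(186277.0, c_mi))   # still 186282.0: the beam passes at c

# Item 1: at 99.995% of c the Lorentz factor is about 100, so a
# 200-year (Earth time) round trip ages the traveler about 2 years.
beta = 0.99995
gamma = 1 / math.sqrt(1 - beta**2)
print(f"gamma = {gamma:.0f}, traveler ages {200 / gamma:.1f} years")

# Item 4: the Sun's luminosity implies the mass converted to energy
# each second, via m = E / c^2.
L_sun = 3.828e26                    # solar luminosity, watts
mass_per_second = L_sun / c**2      # kilograms per second
print(f"about {mass_per_second / 1e9:.1f} million metric tons per second")
```

The last figure lands on the roughly 4.3 million tons per second quoted above.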
General Theory of Relativity
After publishing his Special Theory of Relativity, Einstein spent the next ten years working out the General Theory of Relativity. It is general in that it applies to all reference frames, accelerating and non-accelerating. Published in 1916, the theory provided a dramatically different way of looking at gravity. Unlike Newton, who postulated that gravity was a force between two bodies, Einstein postulated that gravity represents a curvature in space-time itself. Let’s look at an analogy. Think of a trampoline with nothing on it. This represents a universe with no mass in it. If you rolled a golf ball across it, the ball would move in a straight line. Now place a baseball (which could represent a planet) on the trampoline. The ball depresses the trampoline slightly, and if you roll the golf ball again, the depression causes it to follow a curved path as it approaches the baseball. Now place a bowling ball (this could represent a star) on the trampoline. The depression becomes more pronounced, and the path of the golf ball becomes more curved. In fact, if the golf ball got too close to the bowling ball, its path would curve into the bowling ball, much as a meteor falls to the Earth’s surface when captured by Earth’s gravity well. The video below describes the difference between Newton’s and Einstein’s theories of gravity.
Experimental Proof
Einstein’s General Theory of Relativity predicted the advance of Mercury’s perihelion accurately. Remember, the predictions of relativity and Newton’s Laws diverge in two circumstances: when an object travels near the speed of light and when it is located near a large gravity well. Mercury is the closest planet to the Sun, and this closeness is enough for relativity’s predictions of its motion to differ slightly from those of Newton’s Laws. While this created a buzz in the physics community, relativity did not gain general acceptance until it passed an experimental test in 1919. Relativity predicts that light is deflected by the Sun’s gravity: a beam of light follows the path of space-time, and if space-time is curved, the path of light is curved as well. On May 29, 1919, British astronomer Arthur Eddington led an expedition to measure stars’ positions near the Sun during a solar eclipse. Einstein’s theory predicted a deflection of 1.75 seconds of arc, as opposed to the 0.875 seconds of arc predicted by Newtonian physics. The measurements came in at 1.98 and 1.61 seconds of arc. Within the estimated observational uncertainties (a few tenths of a second of arc), the results matched Einstein’s prediction and proved light was deflected by the Sun’s gravity well. Both the London Times and the New York Times reported the story, and Einstein quickly became, by far, the most famous scientist of the era.
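The 1.75″ figure follows from the formula for the deflection of light grazing the Sun, δ = 4GM/(c²R); a Newtonian corpuscular treatment gives half that value. A quick sketch with standard solar values:

```python
import math

# Deflection of starlight grazing the solar limb:
#   GR:        delta = 4 * G * M / (c^2 * R)
#   Newtonian: half the GR value
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
c = 2.998e8         # speed of light, m/s
R_sun = 6.957e8     # solar radius, m

delta_rad = 4 * G * M_sun / (c**2 * R_sun)        # radians
arcsec = delta_rad * (180 / math.pi) * 3600       # convert to arcseconds
print(f"GR: {arcsec:.2f} arcsec, Newtonian: {arcsec / 2:.3f} arcsec")
```

The 1919 measurements of 1.98 and 1.61 arcseconds bracket the relativistic value and sit well above the Newtonian one.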
The Cosmological Constant
The General Theory of Relativity yields a field equation which takes the following form (in geometrized units where G = c = 1): R_μν − ½ g_μν R = 8π T_μν − Λ g_μν
The subscripts in the equation are tensor indices, which let mathematicians express a complex set of equations in a compact form. You can think of this as a mathematical version of a zip file. The equation describes how matter and energy (the stress-energy tensor T_μν) curve space-time (the left-hand side, R_μν − ½ g_μν R). Now, I won’t go into the gory details of this equation; in fact, Einstein himself needed help with the complexities of the mathematics when he derived it. What is important is that the equation predicts the universe must be either contracting or expanding, as matter deforms space-time.
Einstein was not satisfied with this result. At the time, the universe was considered a permanent, unchanging entity. To correct this, Einstein added the cosmological constant Λ to the equation, providing a static universe by offsetting the effects of gravity on space-time. During the 1920’s, Georges Lemaitre argued the cosmological constant was not required and that the universe could expand after originating from a “primeval atom”; Lemaitre used relativity to formulate what would later be called the Big Bang theory. Edwin Hubble (for whom the Hubble Space Telescope is named) then discovered that the galaxies are receding from one another: the universe was expanding! Relativity, in its original form, had predicted this result, and Einstein would later admit his addition of the cosmological constant was an error.
New knowledge contradicting our preconceived ideas can create a disequilibrium in our minds that takes time to sort out. Even Albert Einstein, who did as much as anybody to revolutionize physics, once suffered from an inability to overcome a preconceived idea: in this case, his belief that the universe was static. It is something we all must guard against. In science, we must let the evidence point us to a conclusion and not allow a preconceived conclusion to define the evidence. It should be noted that once the evidence for the Big Bang arrived, Einstein came around as a supporter of the theory rather than sticking with an outdated idea of the universe. Speaking of preconceived ideas, most would assume that when Einstein was awarded the Nobel Prize in 1921, it was for relativity. In fact, he won it for his explanation of the photoelectric effect.
If you want to read more about Einstein and Relativity
I highly recommend the following sources for anybody who desires a greater understanding of the theory of relativity.
Isaacson, W. (2007). Einstein. New York, Simon and Schuster.
A very readable biography of Einstein that includes non-mathematical overviews of his work. I found this book very enlightening in describing the educational and life experiences that enabled Einstein to make breakthroughs where others failed.
Gutfreund, H. & Renn, J. (2015). The Road to Relativity. Princeton, Princeton University Press.
This book contains Einstein’s original manuscript for the theory of general relativity with a page-by-page interpretation for the public. It also has an excellent historical background on how Einstein developed the theory.
Einstein, A., Relativity: The General and Special Theory.
Want to learn about relativity directly from the source? This is Albert Einstein’s attempt to describe the theory to the public. The book can be purchased in the usual online outlets but is also in the public domain and can be read online for free, for example, here.
Lambourne, R., (2010). Relativity, Gravitation, and Cosmology. Cambridge University Press.
If you are seeking a textbook to get started on relativity, this is the best treatment I have seen. It will walk you from the algebra of special relativity through to the tensors of general relativity, with many problems to work through to obtain a solid understanding of the subject. Lambourne has in the past taught a short course in relativity at Oxford’s Department for Continuing Education open to the public. Sounds like a good way to spend a week in the summer.
*Image atop post is the gravitational lensing of a galaxy by another galaxy in front. Typically, such lensing can result in two or more images of an object but if the alignment is just right, it will form a ring structure. Gravitational lensing was predicted by Einstein in 1915. Credit: ESA/NASA/Hubble
Packing up my books as I prepare to move from Buffalo to New York City is a bit like watching my life pass in front of my eyes. My collection of books began in the early 1970’s. This was just before the emergence of book chains and finding what you wanted required searching in a hodgepodge of venues. In the neighborhood, supermarkets and corner stores had book racks. For some reason, the Erb Deli had sci-fi books not found anywhere else. At the Thruway Mall, J.C. Penney’s had a book department in the basement and downtown there was Ulbrich’s on Main St. The 2nd floor book department at Ulbrich’s was accessed by a stairway hidden off to the side. You’d never know it existed unless someone told you it was there.
Two oddities stand out from this era: one is a plethora of TV and movie adaptations, the other the heavy sludge of 1970’s pseudoscience. Before the VCR, the only on-demand technology available was books. The pseudoscience holds up to my discerning adult eyes about as well as one would expect. For example, space-faring civilizations that solved the physics of interstellar travel would certainly not appear similar to 20th century astronauts, any more than the Wright brothers’ flying outfits bear any similarity to a jet fighter pilot’s. These books are valueless except perhaps as a historical curiosity. That decade’s embrace of UFOs and ancient alien visitations may somehow explain part of my generation’s gullibility to the likes of Alex Jones.
Anyways, there is one interesting aspect to these books: none of them mention Roswell. That 1947 episode did not hit the popular radar until 1980, when Jesse Marcel, an Air Force desk jockey who was a Walter Mitty type prone to exaggeration, gave the National Enquirer his long tale of a UFO crash in New Mexico.
From the late 70’s to mid 80’s, my collection is a bit fallow. My move to Houston, the ultimate unwalkable city, in high school blunted my daily access to book sources, and later, in my college years, I was too buried in textbooks to read a whole lot else. However, there was the most excellent Spectrum Books, located in what I can only describe as an upscale strip mall (only in Houston) on Westheimer Road. By the mid 1980’s, the Mom & Pop book venues were by and large replaced by chains such as Waldenbooks in the malls. And those chains heavily stocked Stephen King. I read the 1,100-page It bit by bit during my stops at the Bryant St. Laundromat. This facility served as an odd fulcrum of life at Buffalo State. With its constant drone of dryers in motion and, on the wall, a poster of a perpetually upcoming Pete Seeger concert, I met more students and profs here than any other place save the campus pub. Not a bad reading spot, really.
It was Stephen King who turned me on to Jack Finney in the book Danse Macabre. Finney’s work was anything but macabre. You can kind of think of Finney as the Twilight Zone on steroids. To start off, I’d recommend The Third Level, Second Chance, and Home Alone. As luck would have it, Finney’s classic short stories were compiled and republished in the late 1980’s. I would find those in a newer, larger chain, Barnes & Noble. I never had the antipathy that others had for Barnes & Noble. Perhaps because the first one I stepped in was the decidedly off-beat 5th Avenue New York City location. Fact is, B&N offered a far superior selection than any other place and the people who manage and work there love the business. I’d venture to guess I have bought more books there than any other venue.
Not everything I wanted could be found at B&N. As the 80’s came to a close, it was an exciting time for astronomy. President Bush proposed a crewed mission to Mars while at the same time NASA’s great observatory program (including the Hubble) was taking shape. The former never got off the ground, the price tag was simply too large. The latter was one of NASA’s great success stories and continues to pay dividends to this day. It was difficult to find a wide array of astronomy books in stores back then. I relied on the Astronomy Book Club from Sky & Telescope magazine.
It worked on the same premise as the old record clubs. You got six books essentially for free and you agreed to buy four more over the next three years. I entered into this with a bit of trepidation. When I was in high school, I joined a record club and some of the records were of sub-par quality. For example, the mix on Led Zeppelin IV muffled the drums. How do you muffle John Bonham? There was no need to worry. A good chunk of my collection came from the book club, ranging from astronomy texts to popularized science. This book club no longer exists as a result of technological advances in the 1990’s.
The internet radically changed the nature of the book business, and no company more so than Amazon. I initially embraced Amazon with great enthusiasm, but these days I do not buy from Amazon unless I really, really have to. Like a lot of the tech world, Amazon has metastasized from an entrepreneurial start-up into an oversized behemoth. Besides concerns over the possible lack of competition in the book world, the working conditions in Amazon’s warehouses are atrocious. The final straw was hearing of Amazon’s decision to place ambulances outside its warehouses to treat heat stroke rather than install air conditioning for its employees.
During my college days, I worked in a NAPA warehouse unloading trucks in Texas. I remember being dehydrated to the point when I did have access to water, I felt a cold rush to my head when I took a drink. Jeff Bezos’ net worth is approaching $150 billion, certainly enough to afford better working conditions. For now, I’ll stick with Barnes & Noble and whatever indie stores I can find.
Speaking of which, the original 5th Avenue Barnes & Noble store has closed, a victim of gentrification. A lot of indie businesses have fallen prey to this trend. Still, in a city of 8.5 million, I have to believe there are some interesting bookstores to explore, not unlike the first time I set out to buy books in the 1970’s. I am looking forward to some good hunting.
*Nope, image atop post is not my place but the old Cincinnati Public Library which was torn down in 1955. Photo: Public Library of Cincinnati & Hamilton County
Classical music was considered passé during the 1970’s. As a result, I never really learned about Aaron Copland in school; I heard his music in bits and pieces without context. Simple Gifts from Appalachian Spring opened CBS News special reports, and Hoe-Down from Rodeo was used for beef commercials. These rural themes are remarkable compositions given that Copland was raised in Brooklyn and educated in Paris. Copland’s most famous composition was Fanfare for the Common Man. This composition took an odd twist in the 1970’s as ELP performed a version often used as a sports theme, while the original was played as a prelude to Rolling Stones concerts. As the title suggests, Copland intended the piece for something quite different than extolling celebrity.
Copland derived the title from then-Vice President Henry Wallace’s Century of the Common Man speech in 1942. Wallace thought of World War II as a global version of the American Civil War; that is, a global struggle to eliminate slavery under fascism and free the common man. Interwoven into that were the common-man draftees of the armed services fighting the war in both Europe and the Pacific. Wallace identified literacy as a key foe of totalitarianism and argued that the population must be well fed and housed to be well educated.
The term common man can be a source of derision. In 1945, Wilhelm Reich wrote the 130-page essay Listen, Little Man!, pillorying the common man for allowing himself to be grifted into supporting fascism. A few decades later, under less dire circumstances, hockey coach Herb Brooks would tell his players that “common men go nowhere.” I suppose this is arguing semantics, but I think everyone is a common man in some aspect of their lives. To continue the hockey analogy, Bobby Orr once commented that his greatness on the ice was contained in a bubble; it did not transfer to life outside the rink.
Even if a career path is found that vaults one to greatness, outside that world you’re going to be a common man with all the potential pratfalls. It’s why Ben Carson can be a distinguished neurosurgeon and believe the biblical Joseph built the pyramids to store grain. It is also why Carson is woefully unqualified to lead HUD. Much worse, it’s why Hans Asperger and Wernher von Braun, two accomplished scientists, collaborated with the Nazi regime to accelerate their careers. As educators, we have to think of student success in broader terms than just career advancement.
There is the proverbial three-legged stool: providing an education not only in subject content and physical education, but in ethical training as well. This may have provided a braking mechanism in, say, the mortgage bubble. When I worked in the mortgage industry during that era, I saw some managers, when confronted with the high risk of mortgages beginning around 2003, retort, “Bleep you, we’re making money,” without the courtesy of the bleep. However, I suspect something more than ethics is required.
In academia, we’re used to fact-based debates. Typically, the argument with the best model to explain the facts wins. Beyond academia, that’s really not how things work. More often than not, arguments are based on social positioning. People tend to align with positions that maintain their status within their social group. It’s not a trivial concern. The ability to earn a living is usually dependent upon one’s social network. This is especially true in regions that are economically stagnant. It can be a powerful motivator for ill-advised actions.
Infrastructure, physical and social, not only moves goods but also transmits ideas, good and bad, unfortunately. Studies have shown that civic associations were a key component in spreading Nazism. Embracing Nazi politics was a means of maintaining social status within various sub-cultures, and given that Germany was in the throes of the Great Depression, social status meant being employable. Add to that the pogroms that had been ongoing in Eastern Europe for a century, normalizing violence against the Jewish population. While Hitler amplified that greatly, this ongoing ethical and moral lapse had already left the door ajar for Nazism.
Is there any way education could prevent such social rot from spreading? I won’t pretend to have a definitive answer for that, but below are a few ideas as food for thought.
A rigorous study of ethics should be completed before high school graduation. This alone is not sufficient. Students should be trained to stand against the crowd. Intellectual achievement alone does not provide this skill. During World War I, Bertrand Russell demonstrated this trait by holding firm against nationalism that prompted the catastrophic events from 1914-18:
“I knew it was my business to protest, however futile that protest might be. I felt that for the honour of human nature those who were not swept off their feet should show that they stood firm.”
Henry Moseley, who had organized the periodic table by atomic number, did not. He would die at Gallipoli in 1915, cutting short a brilliant scientific career. Moseley’s disdain for foreigners imbued him with a nationalistic enthusiasm for a useless war.
We also have to emphasize knowing what we don’t know. I never went to trade school, so I do not dispense advice on how to fix plumbing. That’s innocent enough, but as already mentioned, many a fine mind has ventured outside its lane. This is how John Maynard Keynes, who gave us an understanding of the Great Depression and how to end it, also made the dreadful decision to embrace eugenics. We have to impress upon our students that it is the argument, not the person, that wins academic debates. I’ve seen too many people root for their side like sports fans rather than analyzing the arguments themselves. That approach can take you down the wrong path, like whales following the leader to beach themselves.
Learning subject content is a key component of education, but that alone does not make a well-rounded student. An ability to discern between good and poor reasoning has to be developed. In addition, diversity of experience and community is a crucial factor of education.
It’s easy to look back at the era when Henry Wallace made his Century of the Common Man speech and think of the negatives, top of the list being Jim Crow segregation, but there were positive aspects to draw upon. Buffalo, where I grew up, had steel mills but also the Philharmonic with the groundbreaking Lukas Foss. Next to pro sports was the Albright-Knox and its famous 1965 Festival of the Arts. Another example is Columbus, IN. During the 1940’s, the CEO of Cummins, Inc., a diesel engine manufacturer, commissioned architects such as I.M. Pei, Eero Saarinen, and Cesar Pelli, making this town of 45,000 a pioneer in modern architecture. Today, far too often, I see community interests listing too heavily towards sports or guns. We need to be better than that.
I hear a lot of arguments about which is more important: trade schools vs. universities vs. community colleges. It’s a dumb argument; you need all of them for a functional society. True, we eventually specialize to make a living, but that’s no reason not to have an appreciation of what other occupations bring to the table. Instilling respect for honest work is important. I have far more respect for the honest work of often-disparaged burger flippers than for, say, private equity managers who have pushed for unneeded dental work on children. Beyond respect for other occupations, we need to build respect for people in other communities.
It’s constructive to take city students out to the country and vice versa to see how people live and work in those regions. International travel is helpful, but not always available due to lack of resources. But certainly, webcasts between two classes across the globe can be set up. Stereotypes arise most easily when people have never met each other, which is one reason some of the powers that be favor segregation.
Education needs to build connections between people, disciplines, and cultures. This infrastructure of knowledge and ideas has to be guided by a sense of ethics. Ideally, the internet can help build these social connections, but it can also break them down. Educational institutions need to act as a vanguard against that breakdown. If we don’t succeed in that, we are in danger of going from the Century of the Common Man to the Century of the Grifter.
From 1958 to 1972, Leonard Bernstein presented a series of educational programs on the nature of music dubbed Young People’s Concerts. The very last one televised on March 26, 1972 was the very first one that I watched, a presentation of Gustav Holst’s The Planets. Most of the series is now available on YouTube, and among the programs are What is Orchestration, What is Classical Music, and What is a Melody? While I can appreciate music, the process of creating music always seemed a bit of a mystery to me. Bernstein is excellent in demystifying that process for this no longer quite so young person. It’s not an exaggeration to say Bernstein did for music what Carl Sagan or Neil deGrasse Tyson did for astronomy.
In 1967, Bernstein hosted a special called Inside Pop – The Rock Revolution. While he called rock 95% trash, Bernstein said the new music and its message should be listened to and taken seriously. By 1972, Bernstein seemed a bit cynical on that, at least regarding the embrace of astrology over science that started during that era. Holst’s The Planets was based on astrology, and Bernstein took great pains to distinguish that from science. Any astronomy teacher who receives a paper titled Astrology 101 can relate. We cannot control the beliefs a student brings into a class, but we can use those beliefs to bridge the gap to a scientific understanding of the universe.
Bernstein starts things off with a rousing version of Mars – Bringer of War. Mars was the Roman god of war, and the planet was given that designation as a result of its blood-red appearance. The reddish hue of the Martian surface can be seen with the naked eye when Mars approaches opposition. Opposition occurs when Mars and the Sun are on opposite sides of Earth, and it is when Mars is closest to us. Oppositions of Mars happen every 26 months; the next is July 27, 2018. These events also provide the optimal launch window to the red planet. It is oxidation of iron in the Martian dust that creates the red color, oxidation being a fancy word for rusting. The same process occurs in parts of Oklahoma, which is why that state has red soil.
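The 26-month figure follows directly from the two planets' orbital periods. A quick sketch in Python (the period values are standard figures, not from this post):

```python
# Time between successive Mars oppositions (the synodic period),
# from the orbital periods of Earth and Mars in days.
EARTH_PERIOD = 365.25
MARS_PERIOD = 686.98

# Earth "laps" Mars at the difference of their angular rates.
synodic_days = 1 / (1 / EARTH_PERIOD - 1 / MARS_PERIOD)
synodic_months = synodic_days / 30.44  # average month length in days

print(f"{synodic_days:.0f} days, about {synodic_months:.0f} months")
```

This lands at roughly 780 days, the 26 months quoted above.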
The most famous association of Mars with war was H.G. Wells’ War of the Worlds. We now know that intelligent life does not exist on Mars. As late as the 1950’s, it was still thought that vegetation could survive there; the Mariner missions of the 1960’s disproved that. However, the space age has shown that oceans once existed on Mars and that the subsurface still holds quite a bit of water. It is possible for microbial life to thrive in the Martian subsurface. Perhaps ironically, it was Earth’s microbes that did in the invading Martians in War of the Worlds. It is food for thought at NASA’s Planetary Protection office, which is charged with preventing cross-contamination between Earth and Mars.
Bernstein concludes that Mars – The Bringer of War is an ugly piece of music, and that is appropriate, as what is uglier than war? Unspoken was the Vietnam War, still casting an ugly shadow over America in 1972. Five years later, John Williams would use this piece as an inspiration for his Star Wars score. From politics to pop culture, perhaps an indication that America was in the beginning stages of healing during that period.
Next up is Venus – Bringer of Peace. Bernstein notes Venus was actually a goddess of love, but astrologers use Venus to symbolize peace. Venus is the brightest of all the planets from our vantage point on Earth, but it is anything but peaceful. The atmosphere is 96% carbon dioxide, and a runaway greenhouse effect heats the surface enough to melt lead. The atmosphere is so thick that surface pressure is 90 times greater than Earth’s. NASA has never tried to land on Venus, but the Soviet Venera program made 10 landings between 1970 and 1981. The landers lasted from 23 minutes to two hours before being overwhelmed by the harsh conditions.
The brightness of Venus that seems so peaceful to us on Earth is caused by the reflection of light from sulfuric acid clouds. Some 70% of sunlight that hits Venus is reflected back into space. This compares to 30% for Earth. As Venus occupies an orbit inside Earth’s, it does not appear to stray too far from the Sun, becoming visible just after sunset or just before sunrise. This is even more so for Mercury.
Bernstein introduces Mercury – The Winged Messenger by noting how Holst employs double keys and rhythms, as Mercury is perceived as a double-dealing, tricky sort. It takes Mercury only 88 days to orbit the Sun, and as it swings from one side of the Sun to the other, it changes from a morning object to an evening object in less than two months, hidden in the Sun’s glare in between. Mercury has some other tricks up its sleeve, such as ice in permanently shadowed polar craters. Mercury lacks an atmosphere, so heat is not transported from sunlit to dark areas, allowing ice to form on the closest planet to the Sun.
Then comes Jupiter – Bringer of Jollity. This is the most famous piece in the suite. While I do not think of Jupiter as jolly, it can be described as boisterous. Jupiter is a source of radio emissions that are detected with ham radios on Earth. Jupiter’s intense magnetic field accelerates charged particles creating the radio emissions. Jupiter’s moon Io is flexed by the giant planet’s gravity, making it the most volcanic body in the Solar System, so much so, its surface resembles a pizza. As Io ejects this material into space, it becomes ionized and is fed into Jupiter’s magnetic field providing a source for radio emissions.
Due to time constraints, Bernstein elected to skip the pieces on Saturn and Neptune, which he described as slow and ponderous. As this program was geared for children, I suspect even then these pieces would have had trouble keeping the attention of the audience. After Uranus – The Magician (no jokes made on the pronunciation; this was at Lincoln Center), Bernstein wrapped things up with an improvised piece called Pluto – The Unpredictable. Holst composed The Planets before Pluto was discovered. And Pluto did turn out to be unpredictable, so much so that it is no longer considered a planet. Rather, it was the first Kuiper Belt object discovered, and it was not until the 1990’s that others were detected. So, no need to fret about missing Pluto in this musical set.
I don’t frown upon someone who has an emotional reaction when gazing at the night sky. We’re not Vulcans. The planets and stars inspire more than just science. They can inspire music and art, among other things including, shudder, astrology. As far as the latter goes, one hopes to transition a student from a belief in superstition to science, but be aware, that usually does not occur overnight. That aside, Holst’s The Planets still presents a nifty opportunity for an interdisciplinary take on the Solar System, as it did for me on that sunny, cold early spring Sunday afternoon 46 years ago.
*Image atop post – Leonard Bernstein leads the New York Philharmonic in its rendition of Jupiter – Bringer of Jollity.
As part of an interview process, I was recently asked to provide a demo lesson in physics. The class had just started its unit on electronics, so I decided to teach with an online interactive simple circuit to give a conceptual basis for current, voltage, and resistance. In my experience, these topics are often presented in abstract form right away, with students drawing circuit diagrams and cranking out solutions to equations without getting an intuitive sense of what these concepts are. This is exacerbated by the fact that while we can observe the end result of an electrical system, we cannot see its inner workings.
There are two analogies that can be used for an electrical circuit. One is a water system; the other is a roller coaster. I’ll go over both here. For the demo lesson, I used the roller coaster. The school was in New York City, and my thinking was that the students would, for the most part, have experience riding a roller coaster. There are the Paterson Great Falls in New Jersey, but most people I talked to in the region were not aware of them; I only learned of the falls while watching the movie Paterson. Had the lesson been in Buffalo, where I live, I would have used the water system analogy, as Niagara Falls is such a prominent feature of the local geography.
Current defines the flow of electricity in a circuit in the direction of positive charge. It’s actually the flow of loose, negatively charged electrons that creates a current, but this convention was defined before the nature of the atom was unveiled in the 20th century. Electrical charge is conserved; that is, it cannot be created or destroyed. The unit of charge is the coulomb (C), and a flow of 1 C/s is referred to as an amp. During my high school years, students would brag about how many amps their stereos had, which delighted our parents no end.
If a stream has a flow of 10 gallons per second, we could call that its current. If you are watching a roller coaster and observe 10 cars pass a point in one second, then 10 cars per second is its current. The same holds true for a circuit: a flow of 10 units of charge per second in a wire is 10 C/s, or 10 amps. A circuit has to complete a loop for current to flow. A switch in the on position completes the loop and allows a current to flow through the system; the off position breaks the loop. However, it takes more than a switch to create a current, and that’s where voltage comes in.
If an object is on the ground, it has zero potential energy. If we lift the object above the ground it gains potential energy. That potential energy is converted to kinetic energy if we release the object. Go back to the roller coaster analogy. How much potential energy do the cars have while level on the ground? Zero. The coaster adds potential energy by lifting the cars up on a hill. Coney Island’s Cyclone is 85 feet tall whereas modern coasters can be 200-300 feet tall. The potential energy is converted to kinetic energy as you reach the top and begin to drop. Batteries do the same by adding potential to a circuit. This potential is measured in volts.
In the water analogy, think of a canal that is level. Current does not flow, and in fact this causes canals to be stagnant and a health hazard. The canals of Amsterdam are flushed each morning for this reason. It is also why the Buffalo segment of the Erie Canal was filled in during the 1920’s; it is this segment that I-190 was built upon. What happens when you add a height difference? Think of Niagara Falls. It adds a current and potential energy, which is used to produce hydroelectric power. Water in the amount of 748,000 gallons per second drops 180 feet into 25 turbines, producing some 2.6 gigawatts of power.
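As a back-of-the-envelope check, the gravitational power available from that flow and drop can be computed directly. This gives the theoretical potential of the quoted flow; actual plant output depends on how much water is diverted to the turbines and on their efficiency:

```python
# Theoretical hydroelectric power: P = rho * g * Q * h
GALLON_M3 = 0.003785   # cubic meters per US gallon
FOOT_M = 0.3048        # meters per foot
RHO = 1000.0           # density of water, kg/m^3
G = 9.81               # gravitational acceleration, m/s^2

flow = 748_000 * GALLON_M3   # volumetric flow in m^3/s
drop = 180 * FOOT_M          # height of the drop in meters

power_watts = RHO * G * flow * drop
print(f"{power_watts / 1e9:.1f} GW of theoretical potential")
```

The raw drop works out to about 1.5 GW, so the gigawatt scale of the quoted capacity is plausible.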
The lines from a power plant can have voltage in the hundreds of thousands. Transformers drop that to 120 volts before entering a household. Voltage can also be thought of as pressure. Think of a pressure washer. Higher pressure can deliver water farther. Higher voltage can send a spark longer. So while voltage and current are proportional to each other, they are not the same thing. You need voltage to start a current.
The final piece of the puzzle is resistance. This is akin to friction on the roller coaster. Without friction, a roller coaster would never stop but would travel in a continuous loop. Friction between the cars and rails converts kinetic energy into heat and is dissipated into the surrounding air. Hence, an engine has to push the coaster up the hill again to start another trip around the loop. Resistance in a circuit does the same. Energy in the circuit is converted by resistance in the wire and dissipated as heat. This causes voltage to drop as current travels in the loop. The battery serves the same purpose as the hill in the coaster. It adds voltage or potential to restart the current around the loop.
Superconductivity represents a state of zero resistance. This requires very cold temperatures. During the 1980’s, a ceramic material was discovered that raised the highest known superconducting temperature from 30 K to 92 K. The media at the time presented this as hope of building practical superconductive systems that would bring high efficiencies to electric generation. Since then, progress has been slow on this front, at least in terms of some expectations after that discovery. You can think of a superconductive circuit as a roller coaster that would not require energy to start each successive loop after the initial potential was added.
The PhET interactive above allows the class to build their own circuits and analyze the relationship between current, voltage, and resistance. For the sake of the demo lesson, I used the Physics Classroom interactive, as it is a bit easier to get up and running given the limitations of a demo lesson. Over the long haul, the PhET interactive is more robust. Both allow a student to adjust voltage and resistance to see how the circuit is affected.
The key points for the class to learn are:
A circuit must be a closed loop from one terminal of the battery to the other for a current to flow. A switch in the off position breaks the loop while the on position closes the loop. A car ignition key serves the same function.
A potential or voltage must be applied to the circuit to get the current flowing. Otherwise, it would be like trying to ride a flat roller coaster.
Voltage or potential will drop as the current travels through the loop. This is analogous to a roller coaster lowering in elevation (and potential energy) as it completes the ride, eventually to be grounded.
An increase in voltage will increase current, and an increase in resistance will decrease current. This is the basis of Ohm’s Law: I = V/R.
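A few lines of Python make that last point concrete. The battery and resistor values are arbitrary, chosen only for illustration:

```python
# Ohm's law: I = V / R
def current(voltage, resistance):
    """Current in amps, given volts and ohms."""
    return voltage / resistance

# A 9-volt battery across a 3-ohm resistor:
print(current(9, 3))   # 3.0 amps; more voltage would mean more current

# Double the resistance and the current halves:
print(current(9, 6))   # 1.5 amps
```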
Of all the concepts here, voltage or potential tends to be the most difficult. The roller coaster example is just one of several that can be used. I think it best for a teacher to be flexible and use whatever example is most effective for each student. Another is to describe the battery as a water pump: the pump applies pressure to the circuit and thus starts a current. A slingshot works as well. As a battery forces positive charge toward the positive terminal, the two like charges repel each other. Once the positive charge is released into the wire, it is as if the positive terminal slingshots that charge, inducing a current.
The key to the lesson is to enable students to visualize and obtain an intuitive grasp of the concepts of current, voltage, and resistance. Once that is accomplished, the class can move on to real circuits with a better understanding of what a voltmeter or ammeter is telling them, as well as what the variables in Ohm’s law signify.
“If it isn’t true in the extreme, it cannot be true in the mean.”
That, at least, was an argument I heard in an undergrad philosophy class. As we’ll learn, what happens in extreme environments is quite different from what happens in the narrow range of conditions the human body evolved in. The conditions we live in are not typical of the universe, which is mostly hostile to life. And just like the physical sciences, the social sciences can present extreme conditions that produce counter-intuitive results.
I’ll start with absolute zero. At this temperature, all atomic motion ceases. On the Kelvin scale it is 0 K; on the more familiar scales it is -459.67 F or -273.15 C. You can’t actually reach absolute zero. Heat transfers from a warmer to a cooler object, so ambient heat will always try to warm an object that cold. However, you can get awfully close. In fact, we’ve gotten within a billionth of a degree of absolute zero, and that is close enough to see matter behave in strange ways.
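The conversions between the three scales are simple enough to put in a few lines of Python:

```python
# Converting absolute zero between the Kelvin, Celsius, and
# Fahrenheit scales.
def kelvin_to_celsius(k):
    return k - 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

c = kelvin_to_celsius(0)
f = celsius_to_fahrenheit(c)
print(c)  # -273.15
print(f)  # -459.67
```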
At these temperatures, some fluids become superfluids. That is, they have zero viscosity. Liquid helium becomes a superfluid as it is cooled towards absolute zero and having zero viscosity means no frictional effects inside the fluid. If you stirred a cup of superfluid liquid helium and let it sit for a million years, it would continue to stir throughout that time. The complete lack of viscosity means a superfluid can flow through microscopic cracks in a glass (video below). Good thing coffee isn’t a superfluid.
Is there an opposite of absolute zero, a maximum temperature? You’d have to take all the mass and energy (really one and the same; remember Einstein’s mass-energy equivalence, E = mc²) and compress it to the smallest volume possible. These were the conditions found just after the Big Bang formed the universe. The smallest distance we can model is the Planck length, equal to 1.62 × 10⁻³⁵ m. How small is this? A hydrogen atom is about 10 trillion trillion Planck lengths across. At any length smaller than this, general relativity, which describes gravity, breaks down and we are unable to model the universe.
What was the universe like when it was only a Planck length in radius?
For starters, it was very hot at 10³² K, and very young at 10⁻⁴³ seconds. This unit of time is referred to as the Planck time and is how long a photon of light takes to traverse a Planck length. At this point in the young universe, the four fundamental forces of nature (gravity, electromagnetism, and the weak and strong nuclear forces) were unified into a single force. By the time the universe was 10⁻¹⁰ seconds old, all four forces had branched apart. It would take another 380,000 years before the universe became cool enough to be transparent so that light could travel unabated. Needless to say, the early universe was very different from the one we live in today.
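That definition of the Planck time makes it a one-line calculation, dividing the Planck length by the speed of light:

```python
# Planck time as the light-crossing time of one Planck length.
PLANCK_LENGTH = 1.616e-35  # meters
C = 2.998e8                # speed of light, m/s

planck_time = PLANCK_LENGTH / C
print(f"{planck_time:.2e} s")
```

The result is about 5.4 × 10⁻⁴⁴ s, on the order of the 10⁻⁴³ seconds quoted above.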
How will the universe look at the opposite end of the time spectrum?
One possibility is a Big Rip. Here, the universe expands to the point where even atomic particles, and time itself, are shredded apart. In the current epoch, the universe is expanding, but the fundamental forces of nature are strong enough to hold atoms, planets, stars, and galaxies together. Life obviously could not survive a Big Rip scenario unless, as Michio Kaku has postulated, we can find a way to migrate to another universe. That would be many, many billions of years in the future and humanity would need a way to migrate to another star system before then. It is not known with complete certainty how the universe will end. For starters, a greater understanding of dark energy, the mysterious force that is accelerating the expansion of the universe, is required to ascertain that.
Other extremes that we do not experience, but whose effects we know, include relativity, where time slows as you approach the speed of light or venture near a large gravity well such as a black hole. In the quantum world, particles can pop in and out of existence, unlike anything we experience in our daily lives. The key point is that as we approach extreme boundaries, we simply cannot extrapolate from what occurs away from those boundaries. Often what we find at the extreme ends of the spectrum is counter-intuitive.
One might ask if this is the case beyond the hard physical sciences. Recent experience indicates that at least in economics, the answer is yes.
Under most scenarios, a growth in currency base greater than the demand for currency will result in inflation. A massive increase in the currency base will end with hyperinflation. The classic case was in post World War I Germany. In the early 1920’s, to make payments on war reparations, Germany cranked up the printing press. In 1923, this was combined with a general strike so you had a simultaneous increase in currency and decrease of available goods to buy. At one point, a dollar was worth 4.2 trillion marks. After the 2008 financial crisis, the Federal Reserve embarked on quantitative easing which greatly expanded the United States currency base. Many predicted this expansion would result in inflation. It didn’t happen.
What gives?
In the aftermath of a banking crisis, demand for cash increases. If that demand is not met, spending falls, unemployment increases, bank loan defaults increase, leading to bank failures and a further fall in money supply. This was the feedback loop in play during 1932, which was a very deflationary environment. The expansion of the currency base simply offsets deflationary pressure rather than starting inflation. The extreme limit being faced here is the zero percent Fed Funds rate making bonds and cash pretty much interchangeable.
Unlike the physical sciences, ideology can muddy the waters in economic thinking. However, the evidence is quite clear on this. The same phenomenon was observed in Sweden in the mid-1990’s and in Japan over the past decade. It also happened in the United States during the late 1930’s. In that case, Europeans shipped gold holdings to America in anticipation of war. During that era, central banks sterilized imported gold by selling securities to stabilize the currency base. Facing the deflation of the Great Depression, the U.S. Treasury opted not to sterilize the flood of gold from Europe. The result was that the currency base increased 366%, but inflation rose only 27% (an average of 3% annually) from 1937 to 1945.
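The parenthetical 3% figure checks out if you compound the total inflation over the eight years:

```python
# Annualized rate implied by 27% total inflation over 1937-45.
total_inflation = 0.27
years = 8

annual = (1 + total_inflation) ** (1 / years) - 1
print(f"{annual:.1%} per year")
```

Compounding rather than simply dividing by eight is the right move here; the two differ little at these rates but diverge badly in hyperinflation scenarios like 1923 Germany.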
The lesson here is, if you find yourself examining the most extreme conditions or up against a boundary, whether it is the speed of light, the infinite gravity of a black hole, the coldest temperature, or the lowest interest rate possible, it’s not sufficient to extrapolate the mean into the extreme. You have to look into how these extreme environments alter the manner in which systems operate. In many cases, your intuition from living in non-extreme conditions can lead you astray. However, if you let observations, rather than preconceptions, guide you, some interesting discoveries may be in store.
*Image atop post is the formation of a Bose-Einstein condensate as temperature approaches absolute zero. Predicted by Satyendra Nath Bose and Albert Einstein in 1924, as temperatures approach absolute zero, many individual atoms begin to act as one giant atom. Per the uncertainty principle, as an atom’s motion is specified as close to zero, our ability to specify a location of that atom is lost. The atoms are smeared into similar probability waves that share identical quantum states (right). Credit: NASA/JPL-Caltech.
About 4.6 billion years ago, a large molecular cloud gave birth to the Sun. The Sun contains 99.8% of the Solar System’s mass, the rest going to the planets, moons, comets, and asteroids. The Sun is now halfway through its expected lifetime. During the course of a human lifetime, the Sun does not change much. It rises and sets at the same times each year, its energy output does not vary much, and the average human will see about seven solar cycles. Over the course of billions of years, though, the Sun does and will continue to evolve. If the human race survives that long, that will have implications for its future.
The majority of the Sun’s life is spent on what astronomers call the main sequence. During this time, the Sun fuses hydrogen into helium; a fraction of this mass is converted into energy, providing the sustenance for life on Earth. The net reaction converts four hydrogen nuclei into one helium nucleus (six protons take part in the chain, two of which are returned at the end). The process converts 0.71% of the original hydrogen mass into energy. Each second, the Sun transforms 4 million tons of mass into energy. If the Sun were the size of Earth, this would be the equivalent of converting 12 tons of mass into energy each second. You might worry that this would burn up the Sun in short order, but the Sun is very large. If you divide its mass by 4 million tons per second, it would use up its mass in about 5 × 10²⁰ seconds, or 1.6 × 10¹³ years. The Sun will not exist that long, as there are other factors in play.
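Here is that division spelled out, using metric tons (and ignoring the fact that only the core's hydrogen is actually available for fusion):

```python
# How long would the Sun last converting 4 million metric tons
# of mass to energy each second?
SUN_MASS_KG = 1.989e30
MASS_LOSS_KG_PER_S = 4e6 * 1000   # 4 million metric tons/s in kg/s
SECONDS_PER_YEAR = 3.156e7

seconds = SUN_MASS_KG / MASS_LOSS_KG_PER_S
years = seconds / SECONDS_PER_YEAR
print(f"{seconds:.2e} s, or {years:.2e} years")
```

That comes to roughly 16 trillion years, far longer than the Sun's actual 10-billion-year main-sequence lifetime.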
There are two major forces acting within the Sun. One is the force of gravity as the Sun’s mass compresses its core. This compression heats up the core to a temperature of 15 million K. A temperature of 12 million K is required to start nuclear fusion. Here you see the challenge of using fusion as an energy source on Earth. Hydrogen bombs use fusion to explode, but require a fission atomic bomb to detonate it by delivering the required heat to start the fusion process. Controlled fusion would make for a great energy source on Earth, but it is problematic to create a temperature of 12 million K. Current research is looking into high energy lasers to heat hydrogen enough to commence controlled fusion.
Once fusion starts in the Sun’s core, this creates the second force in play, an outward pressure generated by heat. This outward force perfectly balances the inward force of gravity preventing the Sun from collapsing upon itself. This balancing act, referred to as hydrostatic equilibrium, is one of nature’s great regulators. It is this balancing act that regulates short-term solar output so that it varies only a fraction of a percent. This modulation of solar output provides a stable environment on Earth required for life. However, over the course of a few billion years, it’s a different story.
As the Sun’s core converts hydrogen into helium, it becomes denser and hotter. This in turn gradually makes the Sun more luminous. The Sun is 30% more luminous today than 4 billion years ago. In about 1 billion years, the Sun will become hot enough to boil off the oceans on Earth. If humanity can survive its foibles over that time, it will need to move off the Earth to exist. Colonizing Mars within that time frame is certainly doable. What may not be doable, is interstellar colonizing when the Sun ends its main sequence stage. Just before that occurs, another event will impact the Sun.
In about 4 billion years, the Milky Way will collide with its neighbor, the Andromeda galaxy. While galaxies frequently collide, stars do not. If the Sun was the size of a grain of sand, the nearest star would be another grain of sand over four miles away. What could happen is the Sun may be ejected from the Milky Way. The result of this collision is that the two spiral galaxies will combine to form one giant elliptical galaxy in a process that will cover 2 billion years (video below). It’s impossible to model whether or not the Sun will be part of this new galaxy, but either way, the Sun will become a red giant afterwards.
A star becomes a red giant when it runs out of hydrogen in its core. The rate of fusion slows down, causing gravity to compress the core. As a result, the shell of hydrogen outside the now-helium core ignites. The hotter core creates an outward pressure, expanding the star greatly. When the Sun turns into a red giant in 5 billion years, Mercury, Venus, and possibly Earth will be incinerated. A red giant’s surface is much cooler than the Sun’s is today, yet the star is much more luminous. That may sound counter-intuitive, but think of it this way: one 100-watt light bulb is brighter than one 60-watt light bulb, but 100 60-watt light bulbs are brighter than one 100-watt light bulb. Besides temperature, stellar radius also factors into a star’s luminosity. The Sun still has a few more steps to complete in its life cycle.
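The light-bulb comparison can be made quantitative. A star's luminosity scales with the square of its radius and the fourth power of its surface temperature (the Stefan-Boltzmann law). The radius and temperature values below are typical illustrative figures, not numbers from this post:

```python
# Luminosity relative to the Sun: L ~ R^2 * T^4
def luminosity_ratio(r_ratio, t_star, t_sun=5778):
    """r_ratio: star radius in solar radii; temperatures in kelvin."""
    return r_ratio**2 * (t_star / t_sun) ** 4

# A red giant ~100x the Sun's radius with a cool ~3500 K surface:
ratio = luminosity_ratio(100, 3500)
print(f"About {ratio:.0f} times the Sun's luminosity")
```

Even at a much cooler surface temperature, the enormous radius wins: the giant comes out over a thousand times more luminous than the Sun.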
The red giant phase of the Sun will end in a helium flash. This occurs when the core is compressed to a degenerate state where electrons are packed to the point where all possible states are occupied. The compression heats the core to the required 100 million K to commence helium fusion into carbon. This in turn breaks down the degenerate state of the core and the Sun will become a yellow giant. The Sun is not large enough to fuse carbon.
However, the intense heat of helium fusion will generate even more outward pressure and expand the Sun’s radius even further, so that its outer shell becomes transparent and cool. So cool that elements such as carbon and silicon solidify into grains and are expelled by an intense solar wind. At this stage, the Sun will be a Mira variable for 10 million years. After this, the Sun will enter the final stage of its life as a white dwarf surrounded by a planetary nebula.
A white dwarf is the exposed core of a star. Composed of carbon and oxygen, it is not massive enough to fuse atoms. Its heat is akin to a car engine still being warm after it has been turned off. While an engine will cool off in a few hours, it will take trillions of years for a white dwarf to go completely dark, far longer than the current age of the universe at 13.7 billion years. The planetary nebula’s life is much shorter.
The term planetary nebula is a holdover from the days when these nebulae resembled planets in telescopes. With the Hubble Space Telescope, we now know planetary nebulae can also take the shape of bipolar jets. How the Sun will look we do not know. We do know that the core will no longer be capable of holding on to its outer shell. The planetary nebula will disperse into interstellar space in 10,000 years.
These gases will not only hold the remnants of the Sun, but the planets and the very atoms that make up our bodies. The Sun itself is a remnant of a prior star. We know this as trace amounts of metal exist in the Sun. These metals are produced by fusion, or if the star is large enough, a supernova explosion. Colliding galaxies compress interstellar gas igniting star formation. As the Andromeda galaxy collides with the Milky Way, it is very possible what used to make up the Sun will form a new star, with planets, and possibly, plants, then animals, and finally, intelligent beings.
The cycle of life begins anew.
*Image atop post is from NASA’s Solar Dynamics Observatory.
In some quarters of the media, global warming is presented as a natural rebound from an epoch known as the Little Ice Age. Is it possible the rise in global temperatures represents a natural recovery from a prior colder era? The best way to answer that is to understand what the Little Ice Age was and determine if natural forcings alone can explain the recent rise in global temperatures.
The Little Ice Age refers to the period from 1300 to 1850, when very cold winters and damp, rainy summers were frequent across Northern Europe and North America. That era was preceded by the Medieval Warm Period from 950 to 1250, featuring generally warmer temperatures across Europe. Before we get into the temperature data, let’s take a look at the physical and cultural evidence for the Little Ice Age.
You can see the retreat of the Alpine glaciers from the end of the Little Ice Age to the current day. In the Chamonix Valley of the French Alps, advancing glaciers destroyed several villages during the Little Ice Age. In 1645, the Bishop of Geneva performed an exorcism at the base of one glacier to halt its relentless advance. It didn’t work. Only the end of the Little Ice Age halted the glacier’s advance in the 19th century.
The River Thames Frost Fairs
The River Thames in London froze over 23 times during the Little Ice Age, and five times the ice was thick enough for fairs to be held on the river. When the ice stopped shipping on the river, the fairs were held to supplement the incomes of people who relied on river shipping for a living. These events happened in 1684, 1716, 1740, 1789, and 1814. Since then, the river in the city has not frozen solidly enough for such an event to occur. An image of the final frost fair is below:
The Year Without a Summer
The already cold climate of the era was exacerbated by the eruption of Mt. Tambora on April 10, 1815. If volcanic dust reaches the stratosphere, it can remain there for 2-3 years, cooling global temperatures. The eruption of Mt. Tambora was the most powerful in recorded history. Its impact was felt across Europe and North America during the summer of 1816. From June 6 to 8 of that year, snow fell across New England and as far south as the Catskill Mountains. Accumulations reached 12-18 inches in Vermont. In Switzerland, a group of writers, stuck inside during the cold summer at Lake Geneva, decided to hold a contest to see who could write the most frightening story. One of the authors was Mary Shelley, and her effort that summer is below:
Let’s take a look at what the hard data says about the Little Ice Age. Below is a composite of several temperature reconstructions from the past 1,000 years in the Northern Hemisphere:
The range of uncertainty widens as we go back in time because we are using proxies such as tree rings and ice cores rather than direct temperature measurements. However, even with the wider range of uncertainty, it can be seen that temperatures in the Northern Hemisphere were about 0.5 C cooler than the 1961-90 baseline period. Was the Little Ice Age global in nature, or was it restricted to the Northern Hemisphere?
Recent research indicates that the hemispheres are not historically in sync when it comes to temperature trends. One key difference is that the Southern Hemisphere is more dominated by oceans than the Northern Hemisphere. The Southern Hemisphere did not experience warming during the northern Medieval Warm Period, though it did experience overall cooling between 1571 and 1722. More dramatically, the two hemispheres have been in sync since the warming trend began in 1850. This indicates the recent global warming trend is fundamentally different from prior climate changes.
Keep in mind that we are dealing with global averages. Like a baseball team that hits .270 but has players hitting anywhere between .230 and .330, certain areas of the globe will be hotter or cooler than the overall average. During the 1600’s, Europe was colder than North America, and the reverse was the case during the 1800’s. At its worst, the regional drops in temperature during the Little Ice Age were on the order of 1-2 C (1.8 to 3.6 F). At first glance, that might not seem like much. We tend to think in terms of day-to-day weather, and there is not much difference between 0 and 2 C (32 and 35.6 F). But yearly averages are different than daily temperatures.
We’ll take New York City as an example. The hottest year on record is 2012, at 57.3 F. The average annual temperature is 55.1 F. If temperatures were to climb by 3 F, the average year in New York City would become hotter than the hottest year on record. Again, using the baseball example, a player’s single-game average fluctuates more than a career batting average. You can think of daily weather like a game box score, and climate as a career average. It’s much more difficult to raise a career batting average. In the case of climate, it takes a pretty good run of hotter-than-normal years to raise the average 2-3 F.
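The arithmetic behind that analogy can be sketched in a few lines of Python, using the New York City figures quoted above (the 30-year window is an illustrative choice on my part, not an official climate normal):

```python
# Rough sketch of how a run of hot years shifts a long-term average.
# Figures are the New York City values quoted above (degrees F).
record_hot_year = 57.3   # hottest year on record (2012)
long_term_avg = 55.1     # long-term annual average

# Suppose every year going forward runs 3 F above the old average.
new_normal = long_term_avg + 3.0
print(new_normal > record_hot_year)  # the average year now beats the old record

# Watch a 30-year average creep upward as hot years accumulate:
years = [long_term_avg] * 30         # a baseline 30-year window
for _ in range(10):                  # ten consecutive years at the new normal
    years = years[1:] + [new_normal]
print(round(sum(years) / len(years), 2))  # average after ten hot years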
Let’s go back to the climate history. Global temperatures dipped about 0.5 C over a period of several centuries during the Little Ice Age. Since 1800, global temperatures have risen 1.0 C. This sharp increase gives the temperature graph its hockey stick look. The latest warming trend is more than just a return to the norm from the Little Ice Age. There are two other factors to consider as well: one is the increasing acidity of the oceans, the other is the cooling of the upper atmosphere.
Carbon dioxide reacts with seawater to form carbonic acid. Since 1800, the acidity of the oceans has increased by 30%. A rise in global temperatures alone does not explain this, but an increase in atmospheric carbon dioxide delivered to the oceans via the carbon cycle does. As carbon dioxide in the atmosphere increases, it traps more heat near the surface. This allows less heat to escape into the upper atmosphere. The result is that the lower atmosphere gets warmer and the upper atmosphere gets cooler. The stratosphere has cooled 1 C since 1800. A natural rebound in global temperatures would warm both the lower and upper atmosphere; observations do not match this. However, increased carbon dioxide in the atmosphere does explain it.
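A quick back-of-the-envelope check on the 30% figure, assuming it refers to hydrogen-ion concentration (the standard interpretation, though the post doesn’t say so explicitly). Since pH is the negative base-10 logarithm of that concentration, a 30% rise maps to a pH drop of about 0.1 units:

```python
import math

# pH is -log10 of hydrogen-ion concentration, so a 30% rise in [H+]
# corresponds to a pH drop of log10(1.3), about 0.11 units.
increase = 1.30                 # 30% more H+ than the pre-industrial ocean
ph_drop = math.log10(increase)
print(round(ph_drop, 2))        # ~0.11

# Consistent with the commonly cited fall from roughly pH 8.2 to 8.1.
preindustrial_ph = 8.2          # approximate pre-industrial surface value
print(round(preindustrial_ph - ph_drop, 2))
```

A drop of a tenth of a pH unit sounds small, but on a logarithmic scale it is exactly what a 30% chemistry change looks like.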
The Little Ice Age looms large historically in that the colder climate played a role in many events leading to modern day Europe and America. What caused the Little Ice Age? That is still a matter of debate. The Maunder Minimum, a sustained period of low solar activity from 1645 to 1715, is often cited as the culprit. However, solar output does not vary enough with solar activity to cause the entire dip in global temperatures during the Little Ice Age. As the old saying goes, correlation is not causation. That’s where the science gets tough. You need to build a model based on the laws of physics explaining causation. While the cause of the Little Ice Age is still undetermined, the origin of modern global warming is not. To deny that trend is caused by human carbon emissions, you have to explain not only the warming of the lower atmosphere, but the cooling of the upper atmosphere and the increase in ocean acidity.
To date, no one has accomplished that.
*Image atop post is Hendrick Avercamp’s 1608 painting, Winter Landscape with Ice Skaters. Credit: Wiki Commons.
“The evolution of the world can be compared to a display of fireworks that has just ended, some few red wisps, ashes, and smoke. Standing on a cooled cinder, we see the slow fading of the suns, and we try to recall the vanished brilliance of the origin of the worlds.” – Fr. Georges Lemaitre
Since the ancient astronomers, humans have wondered about the origins of the universe. For most of history, mythology filled the void in our knowledge. Then, with Isaac Newton, scientists began to assume the universe was infinite in both time and space. The concept of a universe that had a discrete origin was considered religious and not scientific. During the 20th century, dramatic advancements in both theory and observation provided a definitive explanation of how the universe originated and evolved. Most people I talk to, especially in America, are under the impression the Big Bang is just a theory without any evidence. Nothing could be further from the truth. In fact, every time you drink a glass of water, you are drinking the remnants of the Big Bang.
In 1916, Einstein published his general theory of relativity. Rather than viewing gravity as an attractive force between bodies of mass, relativity describes gravity as mass bending the fabric of space-time. Think of a flat trampoline with nothing on it. If you roll a marble across the trampoline, it moves in a straight path. Now put a bowling ball on the trampoline, the marble’s path is deflected by the bend in the trampoline. This is analogous to the Sun bending space-time deflecting the paths of planets. Once Einstein finished up on relativity, he endeavored to build models of the universe with his new theory. These models produced one puzzling feature.
The equations describing the universe with relativity produced the term dr/dt: the radius of the universe could expand or contract as time progresses. If you introduced matter into the model, gravity would cause space-time, and the universe itself, to contract. That didn’t seem to reflect reality, and Einstein was still operating with the Newtonian notion of an infinite, unchanging universe. To counteract the contraction of the universe, Einstein added a cosmological constant to relativity to offset the force of gravity precisely. By doing this, Einstein missed out on one of the great predictions made by his theory.
Balancing forces are not unheard of in nature. In a star like the Sun, the inward force of gravity is offset by the outward force of gas as it moves from high pressure in the core to lower pressure regions towards the surface. This is referred to as hydrostatic equilibrium and prevents the Sun from collapsing upon itself via gravity to form a black hole. Einstein’s cosmological constant served the same purpose by preventing the universe as a whole from collapsing into a black hole via gravity. However, during the 1920’s, a Catholic priest who was also a mathematician and astrophysicist, would provide a radical new model to approach this problem.
Georges Lemaitre had a knack for being where the action was. As a Belgian artillery officer, Lemaitre witnessed the first German gas attack at Ypres in 1915. Lemaitre was spared as the wind swept the gas towards the French sector. After the war, Lemaitre would both enter the priesthood and receive PhDs in mathematics and astronomy. The math background provided Lemaitre with the ability to study and work on relativity theory. The astronomy background put Lemaitre in contact with Arthur Eddington and Harlow Shapley, two of the most prominent astronomers of the time. This would give Lemaitre a key edge in understanding both current theory and observational evidence.
It’s hard to imagine, but less than 100 years ago it was thought the Milky Way was the whole of the universe. A new telescope, the 100-inch at Mt. Wilson, would provide the resolving power required to discern stars in other galaxies previously thought to be spiral clouds within the Milky Way. One type of star, the Cepheid variables, whose period of brightness correlates with their luminosity, provided a standard candle to measure galactic distances. It was Edwin Hubble at Mt. Wilson who made this discovery. Besides greatly expanding the size of the known universe, Hubble’s work unveiled another key aspect of space.
When stars and galaxies recede from Earth, their wavelengths of light are stretched out and move towards the red end of the spectrum. This is akin to the sound of a car moving away from you: the sound waves are stretched longer, resulting in a lower pitch. What Hubble’s work revealed was that galaxies were moving away from each other. Hubble was cautious in providing a rationale for this. However, Fr. Lemaitre had the answer. It wasn’t so much that galaxies were moving away as that space was expanding between the galaxies, as allowed by relativity theory. Lemaitre also analyzed Hubble’s data to determine that the more distant a galaxy was, the greater its velocity moving away from us. Lemaitre published this result in an obscure Belgian journal. Hubble would independently publish the same result a few years later and received credit for what is now known as Hubble’s law. This law equates recessional velocity to a constant (also named after Hubble) times distance.
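Hubble’s law is simple enough to sketch directly. The value of the Hubble constant below is an illustrative modern one, not a figure from the post:

```python
# Hubble's law: recessional velocity = H0 * distance.
H0 = 70.0  # km/s per megaparsec (approximate modern value, assumed here)

def recession_velocity(distance_mpc):
    """Velocity (km/s) at which a galaxy recedes, per Hubble's law."""
    return H0 * distance_mpc

# A galaxy 100 megaparsecs away recedes at about 7,000 km/s.
print(recession_velocity(100))  # 7000.0
```

Double the distance and the recessional velocity doubles, which is exactly the proportionality Lemaitre teased out of Hubble’s data.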
It would require more resolving power to determine the final value of the Hubble constant. In fact, it took Hubble’s namesake, the Hubble Space Telescope to pin down the value which also provides the age of the universe.
In the meantime, the debate on the origin of the universe still needed to be settled. Lemaitre favored a discrete beginning to the universe that evolved throughout its history. Specifically, Lemaitre felt vacuum energy would cause the expansion of the universe to accelerate in time and thus, kept Einstein’s cosmological constant, albeit with a different value to speed up the expansion. Einstein disagreed and thought the cosmological constant was no longer required. By 1931, Einstein conceded the universe was expanding, but not accelerating as Lemaitre thought. A decade later, the most serious challenge to the Big Bang theory emerged.
The label Big Bang was pinned on Lemaitre’s theory derisively by Fred Hoyle of Cambridge, who devised the Steady State theory. This theory postulated an expanding universe, but the expansion was generated by the creation of new hydrogen. Hoyle scored points with the discovery that stellar nucleosynthesis created the elements from carbon to iron via fusion processes. Although Hoyle proved the Big Bang was not required to form these heavy elements, he still could not provide an answer to how hydrogen was created. It would take some modifications to the Big Bang model to challenge Hoyle’s Steady State model.
During the 1940’s, George Gamow proposed a hot Big Bang as opposed to the cold Big Bang of Georges Lemaitre. In Gamow’s model, the temperature of the universe reaches 10³² K during the first second of existence. Gamow was utilizing advancements in quantum mechanics made after Lemaitre proposed his original Big Bang model. Gamow’s model had the advantage over Hoyle’s Steady State model as it could explain the creation of hydrogen and most of the helium in the universe. The hot Big Bang model had one additional advantage: it predicted the existence of a background microwave radiation emanating from all points in the sky and with a blackbody spectrum.
A blackbody is a theoretical construct. It is an opaque object that absorbs all radiation (hence, it is black) and re-emits it as thermal radiation. The key here is that to emit blackbody radiation, an object has to be dense and hot. A steady state universe would not emit blackbody radiation, whereas a big bang universe would in its early stages. During the first 380,000 years of its existence, a big bang universe would be small, hot, and opaque. By the time this radiation reached Earth some 13 billion years later, the expansion of the universe would stretch these radiation wavelengths into the microwave range. This stretching would correspond to a cooling of the blackbody radiation to just a few degrees above absolute zero. Detection of this radiation, called the Cosmic Microwave Background (CMB), would resolve the Big Bang vs. Steady State debate.
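The cooling can be made concrete with standard textbook figures (the ~3000 K temperature and the redshift of roughly 1100 at the moment the universe became transparent are assumptions on my part, not numbers from the post):

```python
# The expansion stretches CMB wavelengths by a factor (1 + z), cooling the
# blackbody temperature by the same factor: T_now = T_then / (1 + z).
T_recombination = 3000.0  # K, roughly when the universe became transparent
z_recombination = 1100.0  # approximate redshift of the CMB

T_today = T_recombination / (1.0 + z_recombination)
print(round(T_today, 2))  # ~2.72 K, matching the measured 2.7 K background
```

The measured 2.7 K noise that turns up in the next paragraph is exactly this stretched and cooled relic radiation.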
In 1964, Arno Penzias and Robert Wilson were using the 20-foot horn antenna at Bell Labs to detect extra-galactic radio sources. Regardless of where the antenna was pointed, they received noise correlating to a temperature of 2.7 K. Cosmology was still a small, somewhat insular field separate from the rest of astronomy. Penzias and Wilson did not know how to interpret this noise and made several attempts to rid themselves of it, including two weeks cleaning out pigeon droppings from the horn antenna. Finally, they placed a call to Princeton University where they reached Robert Dicke, who had been building his own horn antenna to detect the CMB. When the call ended, Dicke turned to his team and said:
“Boys, we’ve been scooped.”
Actually, the first time the CMB was detected was in 1941, by Andrew McKellar, but the theory to explain what caused it was not in place and the finding went forgotten. Penzias and Wilson published their discovery alongside a companion paper from Dicke’s group providing the theory: this was proof that the young universe had been in a hot, dense state after its origin. Georges Lemaitre was told of the confirmation of the Big Bang a week before he passed away. Penzias and Wilson were awarded the Nobel in 1978. Back at Cambridge, Fred Hoyle refused to concede the Steady State theory was falsified until his death in 2001. Some believe this refusal, among other things, cost Hoyle the Nobel in 1983 when it was awarded for the discovery of nucleosynthesis. Hoyle was passed over in favor of his junior collaborator, Willy Fowler.
It would take 25 more years before another mystery of the CMB was solved. The noise received by Penzias and Wilson was uniform in all directions. For the stars and galaxies to form, some regions of the CMB had to be cooler than others. This was not a failure on Penzias and Wilson’s part; better equipment with higher resolution was required to detect the minute temperature differences. In 1989, the COBE probe, headed by George Smoot, set out to map these differences in the CMB. The mission produced the image below. The blue regions are 0.00003 K cooler than the red regions, just cold enough for the first stars and galaxies to form. This map is a representation of the universe when it was 380,000 years old.
Could we peer even farther into the universe’s past? Unfortunately, no. The universe did not become transparent until it was 380,000 years old, when it cooled down sufficiently for light photons to pass unabated without colliding into densely packed particles. It’s similar to seeing the surface of a cloud and not beyond.
The first nine minutes of data from the COBE probe produced a spectrum of the CMB. The data were plotted against the predicted blackbody spectrum. The results are below:
The data points are a perfect match for the prediction. In fact, the CMB represents the most perfect blackbody spectrum observed. The universe was in a hot, dense state after its creation.
The late 1990’s would add another twist to the expansion of the universe. Two teams, one based in Berkeley and the other at Harvard, set out to measure the rate of expansion throughout the history of the universe. It was expected that over time, the inward pull of gravity would slow the rate of expansion. And this is what relativity would predict once the cosmological constant was pulled out or, technically speaking, set equal to zero. The two teams set about their task by measuring Type Ia supernovae.
Like Cepheid variables, Type Ia supernovae are standard candles. They result when a white dwarf siphons off enough mass from a neighboring star to reach 1.4 solar masses. Once this happens, a supernova occurs, and since these explosions transpire at the same mass, their luminosity can be used to calibrate distance and, in turn, probe the history of the universe. A galaxy 1 billion light years away takes 1 billion years for its light to reach Earth. A galaxy 8 billion light years away takes 8 billion years for its light to reach Earth. Peering farther into the universe allows us to peer farther back in time as well.
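A sketch of how a standard candle yields distance, via the distance modulus m − M = 5·log10(d) − 5 with d in parsecs. The peak absolute magnitude of −19.3 for a Type Ia is the commonly quoted value, and the observed magnitude below is invented for illustration:

```python
# Standard-candle distance from the distance modulus: m - M = 5*log10(d_pc) - 5.
M_TYPE_IA = -19.3  # commonly quoted Type Ia peak absolute magnitude (assumed)

def distance_parsecs(apparent_mag):
    """Distance in parsecs implied by an observed Type Ia peak magnitude."""
    return 10 ** ((apparent_mag - M_TYPE_IA + 5) / 5)

# A hypothetical supernova observed to peak at apparent magnitude 20.7:
d_pc = distance_parsecs(20.7)
print(f"{d_pc / 1e6:.0f} million parsecs")  # ~1000 Mpc, over 3 billion light years
```

Because every Type Ia peaks at essentially the same intrinsic brightness, how faint one appears translates directly into how far away, and thus how far back in time, we are looking.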
What the two teams found sent shock waves throughout the world of astronomy.
For roughly the first half of the universe’s history, things went as expected: gravity slowed the expansion. After that, the expansion accelerated. The unknown force pushing the universe outward was dubbed dark energy. This puts the cosmological constant back into play. The key difference is that instead of operating from an incorrect assumption of a static universe, data can be used to find a value that models an increasingly expanding universe. Georges Lemaitre’s intuition on the cosmological constant had been correct. While the exact nature of dark energy needs to be worked out, it does appear to be a property of space itself. That is, the larger the universe is, the more dark energy exists to push it outward at a faster rate.
Besides detecting the CMB, cosmologists spent the better part of the last century calculating the Hubble constant. The value of this constant determines the rate of expansion and provides us with the age of the universe. The original value derived by Hubble in the 1920’s gave an age for the universe that was younger than the age of the Earth. Obviously, this did not make sense and was one reason scientists were slow to accept the Big Bang theory. However, it was clear the universe was expanding, and what was needed was better technology affording more precise observations. By the time I was an undergrad in the 1980’s, the age of the universe was estimated between 10 and 20 billion years.
When the Hubble Space Telescope was launched in 1990, a set of key projects were earmarked for priority during the early years of the mission. Appropriately enough, pinning down the Hubble constant was one of these projects. With its high resolution able to measure the red shift of distant quasars and Cepheid variables, the Hubble was able to pin down the age of the universe at 13.7 billion years. This result has been confirmed by the subsequent WMAP and Planck missions.
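As a sanity check, inverting an illustrative modern Hubble constant gives the so-called Hubble time, which lands close to the 13.7-billion-year figure (the precise age also depends on how the expansion rate has changed over cosmic history):

```python
# The Hubble time, 1/H0, gives a rough age for the universe.
H0 = 70.0              # km/s per megaparsec (illustrative value, assumed)
KM_PER_MPC = 3.086e19  # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

hubble_time_s = KM_PER_MPC / H0                        # seconds
hubble_time_gyr = hubble_time_s / SECONDS_PER_YEAR / 1e9
print(round(hubble_time_gyr, 1))  # ~14.0 Gyr, close to the 13.7 Gyr figure
```

This also shows why Hubble’s original, far-too-large value for the constant implied an impossibly young universe: a bigger H0 means a smaller 1/H0.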
The story of the Big Bang is not complete. There is the lithium problem: the Big Bang model predicts three times the amount of lithium observed in the universe. And there is inflation. In the first moments of the universe’s existence, the universe was small enough for quantum fluctuations to expand it greatly. Multiple models of how this inflation occurred exist, and which is correct needs to be resolved. Inflation would explain why the universe is observed to be flat rather than curved, and why one side of the universe is the same temperature as the other when the two sides are too far apart to have ever been in contact. An exponential expansion of the universe during its first moments of existence would solve both puzzles.
Then there is the theory of everything. In his 1966 PhD thesis, Stephen Hawking demonstrated that if you reverse time in the Big Bang model, akin to running a movie projector in reverse, the universe is reduced to a singularity at the beginning. A singularity has a radius of zero, an infinite gravity well, and infinite density. Once the universe has a radius less than 1.6 × 10⁻³⁵ meters, a quantum theory of gravity is required to describe the universe at this state, as relativity does not work on this scale.
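That threshold is the Planck length, which falls out of three fundamental constants. A quick computation (the constant values are standard figures, rounded):

```python
import math

# The Planck length, sqrt(hbar * G / c^3), the scale below which
# general relativity alone no longer describes the universe.
HBAR = 1.0546e-34  # reduced Planck constant, J*s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C**3)
print(f"{planck_length:.2e} m")  # ~1.6e-35 m, the scale quoted above
```

Below this length, quantum fluctuations of space-time itself dominate, which is why a quantum theory of gravity is needed to push the story of the universe back to its very first instant.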
When discussing these problems with Big Bang skeptics, the tendency is for them to treat these open questions as gotcha moments. However, this is just scientists being honest about their models. And if you think you have an alternative to the Big Bang, you’ll need to explain the CMB blackbody spectrum, which can only be produced by a universe in a hot, dense state. And you’ll need to explain the observed expansion of the universe. It’s not enough to point out the issues with a model; you’ll need to replicate what it gets right. While there are some kinks to work out, the Big Bang appears to be here to stay.
You don’t need access to an observatory or a NASA mission to experience the remnants of the Big Bang. Every glass of water you drink includes hydrogen created during the Big Bang. And if you tune your television to an empty channel, part of the static you see is noise from the CMB. The Big Bang theory, and the observational evidence that backs it up, is one of the landmark scientific achievements of the 20th century, and should be acknowledged as such.
*Image atop post is a timeline of the evolution of the universe. Credit: NASA/WMAP Science Team.