Science in Extreme Conditions

“If it isn’t true in the extreme, it cannot be true in the mean.”

That, at least, was an argument I heard in an undergrad philosophy class.  As we’ll learn, what happens in extreme environments is quite different from what happens within the narrow range of conditions the human body evolved in.  The conditions we live in are not typical of a universe that is mostly hostile to life.  And just as in the physical sciences, the social sciences can present extreme conditions that produce counter-intuitive results.

I’ll start with absolute zero.  At this temperature all atomic motion ceases.  On the Kelvin scale it is 0, and on the more familiar scales it is −459.67 °F or −273.15 °C.  You can’t actually reach absolute zero.  Heat transfers from a warmer to a cooler object, so ambient heat will always try to warm an object that cold.  However, you can get awfully close to absolute zero.  In fact, we’ve gotten as close as a billionth of a degree above it, and that is close enough to see matter behave in strange ways.
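
For reference, a minimal sketch (Python) of the conversions behind those figures:

```python
def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

def kelvin_to_fahrenheit(k: float) -> float:
    return kelvin_to_celsius(k) * 9 / 5 + 32

# Absolute zero on the three scales
print(kelvin_to_celsius(0.0))      # -273.15 C
print(kelvin_to_fahrenheit(0.0))   # -459.67 F

# "A billionth of a degree above absolute zero" is still only a nanokelvin
print(kelvin_to_celsius(1e-9))     # ~ -273.15 C, barely warmer
```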

At these temperatures, some fluids become superfluids.  That is, they have zero viscosity.  Liquid helium becomes a superfluid as it is cooled towards absolute zero, and zero viscosity means no frictional effects inside the fluid.  If you stirred a cup of superfluid liquid helium and let it sit for a million years, it would continue to stir throughout that time.  The complete lack of viscosity also means a superfluid can flow through microscopic cracks in a glass.  Good thing coffee isn’t a superfluid.

Is there an opposite of absolute zero, a maximum temperature?  You’d have to take all the mass and energy (really one and the same, remember Einstein’s mass-energy equivalence E = mc²) and compress it into the smallest volume possible.  These were the conditions found just after the Big Bang formed the universe.  The smallest distance we can model is the Planck length, equal to 1.62 × 10⁻³⁵ m.  How small is this?  A hydrogen atom is about 10 trillion trillion Planck lengths across.  At any length smaller than this, general relativity, which describes gravity, breaks down and we are unable to model the universe.

What was the universe like when it was only a Planck length in radius?

For starters, it was very hot, at 10³² K, and very young, at 10⁻⁴³ seconds.  This unit of time is referred to as the Planck time and is how long a photon of light takes to traverse a Planck length.  At this point in the young universe, the four fundamental forces of nature (gravity, electromagnetism, and the weak and strong nuclear forces) were unified into a single force.  By the time the universe was 10⁻¹⁰ seconds old, all four forces had branched apart.  It would take another 380,000 years before the universe became cool enough to be transparent so light could travel unabated.  Needless to say, the early universe was very different from the one we live in today.
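
As a rough check on those figures, here is a minimal sketch (Python, using standard SI values for ħ, G, and c) that computes the Planck length from the fundamental constants and then the Planck time as the light-travel time across it:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck length: l_P = sqrt(hbar * G / c^3)
planck_length = math.sqrt(hbar * G / c**3)

# Planck time: how long light takes to traverse one Planck length
planck_time = planck_length / c

print(f"Planck length ~ {planck_length:.2e} m")  # ~1.6e-35 m
print(f"Planck time   ~ {planck_time:.2e} s")    # ~5.4e-44 s
```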

How will the universe look at the opposite end of the time spectrum?

One possibility is a Big Rip.  Here, the universe expands to the point where even atomic particles, and time itself, are shredded apart.  In the current epoch, the universe is expanding, but the fundamental forces of nature are strong enough to hold atoms, planets, stars, and galaxies together.  Life obviously could not survive a Big Rip scenario unless, as Michio Kaku has postulated, we can find a way to migrate to another universe.  That would be many, many billions of years in the future and humanity would need a way to migrate to another star system before then.  It is not known with complete certainty how the universe will end.  For starters, a greater understanding of dark energy, the mysterious force that is accelerating the expansion of the universe, is required to ascertain that.

Other extremes that we do not experience directly, but whose effects we do know, include relativity, where time slows as you approach the speed of light or venture near a large gravity well such as a black hole.  In the quantum world, particles can pop in and out of existence, unlike anything we experience in our daily lives.  The key point is that as we approach extreme boundaries, we simply cannot extrapolate from what occurs away from those boundaries.  Often what we find at the extreme ends of the spectrum is counter-intuitive.

One might ask if this is the case beyond the hard physical sciences.  Recent experience indicates that at least in economics, the answer is yes.

Hyperinflation rendered German marks so worthless they were used for wallpaper. Credit: Georg Pahl/German Federal Archives.

Under most scenarios, growth in the currency base greater than the demand for currency will result in inflation.  A massive increase in the currency base will end with hyperinflation.  The classic case was post-World War I Germany.  In the early 1920’s, to make payments on war reparations, Germany cranked up the printing press.  In 1923, this was combined with a general strike, so there was a simultaneous increase in currency and a decrease in the goods available to buy.  At one point, a dollar was worth 4.2 trillion marks.  After the 2008 financial crisis, the Federal Reserve embarked on quantitative easing, which greatly expanded the United States currency base.  Many predicted this expansion would result in inflation.  It didn’t happen.

What gives?

In the aftermath of a banking crisis, demand for cash increases.  If that demand is not met, spending falls, unemployment increases, and bank loan defaults rise, leading to bank failures and a further fall in the money supply.  This was the feedback loop in play during 1932, a deeply deflationary environment.  In that setting, expansion of the currency base simply offsets deflationary pressure rather than igniting inflation.  The extreme limit being faced here is the zero percent Fed Funds rate, which makes bonds and cash pretty much interchangeable.

Unlike in the physical sciences, ideology can muddy the waters in economic thinking.  However, the evidence is quite clear on this.  The same phenomenon was observed in Sweden in the mid-1990’s and in Japan over the past decade.  It also happened in the United States during the late 1930’s.  In that case, Europeans shipped gold holdings to America in anticipation of war.  During that era, central banks sterilized imported gold by selling securities to stabilize the currency base.  Facing the deflation of the Great Depression, the U.S. Treasury opted not to sterilize the flood of gold from Europe.  The result was that the currency base increased 366% while prices rose only 27% (an average of 3% annually) from 1937 to 1945.

The lesson here is that if you find yourself examining the most extreme conditions or up against a boundary, whether it is the speed of light, the infinite gravity of a black hole, the coldest temperature, or the lowest interest rate possible, it is not sufficient to extrapolate the mean into the extreme.  You have to look at how these extreme environments alter the manner in which systems operate.  In many cases, your intuition, formed by living far from the extremes, can lead you astray.  However, if you let observations, rather than preconceptions, guide you, some interesting discoveries may be in store.

*Image atop post is the formation of a Bose-Einstein condensate as temperature approaches absolute zero.  Predicted by Satyendra Nath Bose and Albert Einstein in 1924: as temperatures approach absolute zero, many individual atoms begin to act as one giant atom.  Per the uncertainty principle, as an atom’s momentum is pinned down ever closer to zero, our ability to specify that atom’s location is lost.  The atoms are smeared into overlapping probability waves that share identical quantum states (right).   Credit:  NASA/JPL-Caltech.

Trump, Change, and the White Working Class

With some 60,000,000 votes tallied for Trump, I am aware that those votes reflect diverse motivations.  Many voted for Trump in the hope he would focus on reviving the manufacturing sector.  If I thought his policy team would prioritize pushing unemployment down to 4%, offering more access to trade school and college for retraining, and so on, I would not have written this post.  However, there is no denying the racist tone of the Trump campaign and its negative effect on the nation.  This post is specifically geared towards that aspect of the upcoming Trump presidency.

With the election over and the surprise result in, the punditry is engaged in a fit of self-examination over its lack of understanding of the “forgotten” white working class.  This ongoing media tragicomedy includes proposed Marlin Perkins-style forays into the heartland.  Like many disasters, this one has a confluence of causes.  The Northern racial aspect of the Trump campaign, as in the South, has its origins in labor history.  While in the South racial antipathy has its roots in slavery, in the North its roots lie in market competition, or the elimination thereof.

In 2016, when we apply for a job, we put together a resume with our job experience, education, and accomplishments.  In the old industrial economy, social and political machine connections played an outsized role.  In Buffalo, various ethnic groups lived in insular neighborhoods.  The Polish lived on the East Side, the Irish on the South Side, and the Italians on the West Side.  These ethnic groups came to dominate certain industries, such as the Irish on the waterfront.  How do you keep the other ethnic groups out?  You assign them inferior status, with ethnic slurs and stereotypes as part of the enforcement mechanism.

While these various groups would bump up against each other from time to time, they formed an equilibrium in a region that was growing in jobs and population.  The great migration of African-Americans from the South during the 1950’s and 60’s was, on a local scale, regarded as a competitive threat, much as current immigration is viewed nationally among the white working class.  From 1940 to 1970, Buffalo’s African-American population grew from 18,000 to 72,000.  Some found good-paying jobs in manufacturing, but most were locked out of the job market, and out of the housing market as well due to redlining.  I recall the reaction in my white working class neighborhood when the first black family moved in during the mid-70’s.  Pamphlets bearing what we would today call Alt-Right messaging, complete with swastikas, were passed around.

Swastikas, even in that difficult situation, were considered outside the norm; there were plenty of World War II veterans still alive at the time.  Nevertheless, a strong and violent reaction ensued, necessitating a police car stationed outside the house 24 hours a day.  About a year or so later, the family moved out.  This was around the same time the industrial economy began to falter, intensifying the competition for jobs.

The public (but not Catholic) educational system specialized in class replication, that is, preparing us for a life employed in manufacturing.  One morning, while delivering the old Courier-Express, I read headlines announcing 5,000 layoffs at Bethlehem Steel.  Later that same day, I attended a shop class that presented a lecture on the basics of steel making.  Even though it was obvious the manufacturing ship was sinking, the inertia of the educational system kept it moving forward like the Titanic until it hit the iceberg.

Class replication was also enforced outside the school system.  Those who attended high school on the college track could be met with an onslaught of slurs from both friends and family.  It was not uncommon for some who received offers to attend college prep high schools to turn them down for that reason.  I think of this often when I hear of working class rage against the educational elite.  How many working class kids from that era could have escaped the economic trap of the post-industrial age in a different setting?

As an adult, you realize the verbal abuse slung around came from people who had little control over their lives, and this was one way for them to exercise power.  Real small-minded stuff.  For a teenager, though, it can be difficult to navigate that storm.

When discussing the working class today, those cultural mechanisms are still in place.  While the ethnic neighborhoods have by and large dissipated and merged into a single white self-identity, the reflex to discriminate against African-Americans (the way “Muslim” is now hurled as an epithet makes it a euphemism for the n-word) and newer immigrants still exists.  And that includes many who have since exited the working class.  Even if one is not a racist, and many in the white working class are not, one still benefits economically within the confines of this system.  What the Trump campaign has done is expand the norms of how such discrimination is discussed.

The first time I ventured into Queens during the mid-eighties, it bore a striking resemblance to Buffalo.  The biggest difference was that Queens was based more on light manufacturing than heavy manufacturing, but by and large it was pretty much working class.  The Trump family had left the working class by then and Donald was operating in Manhattan, but as the campaign showed, he still understood which racial buttons to push.  However, unlike past candidates who used dog whistles (states’ rights, welfare, etc.), Trump, being Trump, used a bullhorn.

Throughout the campaign, nebulous ties were established with the Alt-Right.  During the aforementioned Buffalo neighborhood incident, even in a racial situation that was pretty tense, the hate groups spewing swastika-laced pamphlets were considered cranks running a single neighborhood bookstore operation.  Now those same types of groups have a link to the Oval Office.  And the effect is rippling down to the ground level with increased attacks on minority and immigrant communities.  Certainly, many in the white working class do not embrace this, but it is undeniable that racism permeates our society, and those who embrace or ignore it drove the rise of Trump to the presidency.

However, what succeeded decades ago within the confines of insular neighborhoods, securing employment and resources for the white working class by eliminating competition, will fail on a national level.  The opposition is too great (Hillary Clinton drew 2 million more votes than Trump).  In a flip-flop of historical trends, resistance to discrimination at the ground level will blunt the federal government.  Trump’s trade policy, as outlined in another post, will not bring 1955 back.  At any rate, with telecommuting, neighborhoods no longer geographically tie down jobs as they once did.  Paul Ryan, public university graduate and Ayn Rand fanboy, wants to scale back Medicare, which strikes at the core of the Trump base.  And while manufacturing jobs have actually increased by 800,000 nationally since 2010 and are expected to rise by 17,000 locally over the next five years, will the Trump administration address the age discrimination or skill training required for older whites to be hired for these jobs?  It does not seem likely.  Meanwhile, America will continue its inexorable change into a more diverse society.

Personally, I find this change refreshing.  Why would I want to be locked in the social norms of a particular ethnic group?  I’d rather choose my own destiny. There is a cliche that the white working class votes against its own interest.  On a macro scale that can be true.  On a micro scale, some individuals view the ability to discriminate (or to be non-PC) as protecting their economic safe space.  What has happened is that space is growing smaller by the day and will continue to do so.

This election was not about inducing change but avoiding it.  And avoiding that change, regardless of who is president, is not possible.   A common comeback from the most strident Trump supporters is “F*** you, we won.”  It’s the same yelp I heard decades ago from those who had little power in their lives.  The reality is that by insulating yourself from change, you risk being left behind.  And that’s not the direction to go, either personally or for the nation as a whole.

Education is Not a Business

And by that, I do not mean the administration of an educational institution should not be conducted in a businesslike manner.  What I mean is that students should not be treated the same way a business treats a customer.  Recent events have focused on for-profit colleges such as ITT Technical Institute, which closed due to irregularities in both academic standards and financial aid.  However, a well-funded ideological movement is in place at the state level to promote a profit-oriented curriculum.  Nominally, this is free-market ideology, but as we’ll see, it is in fact detrimental to a well-functioning market economy.

Free market models taught in undergrad micro incorporate some pretty abstract concepts.  These include competition to the point that neither an individual buyer nor an individual seller can impact the price of a product.  Also, both buyer and seller have perfect knowledge of the market, which leads to a rational transaction process.  These conditions result in an optimal allocation of resources.  This model is akin to the Carnot engine in physics.  No engine can run more efficiently than a Carnot engine.  However, the Carnot engine is impossible to build, as it requires zero friction; building one would violate the 2nd law of thermodynamics.  What the Carnot engine does is help us understand the inefficiencies of real engines, and the same is true of basic free market models.  In the case of education, perfect knowledge, or the lack thereof, is the key.
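
To make the analogy concrete, here is a minimal sketch (Python; the reservoir temperatures are made-up illustration values) of the Carnot limit, which depends only on the hot and cold reservoir temperatures and which no real engine can reach:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Theoretical maximum efficiency of a heat engine (temperatures in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# Illustrative numbers only: a 900 K combustion temperature and 300 K surroundings
ideal = carnot_efficiency(900.0, 300.0)   # ~0.67, the unreachable benchmark
typical_real_engine = 0.35                # real engines fall well short of the limit
print(f"Carnot limit: {ideal:.0%}, typical real engine: {typical_real_engine:.0%}")
```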

Asymmetric information in a competitive situation presents a profit opportunity for whoever holds the upper hand.  It is why inside information, despite its illegality, is often sought in financial markets.  It is why websites offer pricing information on products such as gasoline to arm consumers in the market.  In its more egregious forms, it is how some auto repair shops dupe customers into repairs they do not need, or doctors charge for procedures a patient does not require.  In a non-business sense, it is why a sports team will attempt to steal signals from the opposition to gain the advantage in a competitive contest.  Information asymmetry is not always unethical, but it is why businesses do not voluntarily tip their hands when making a deal, and why businesses, if they are smart, require due diligence before closing a deal.

Education has information problems on two fronts.  One is temporal, and the other is the lack of information the student has when enrolling in an educational institution.  As Alfred Marshall noted in 1890, students are in the dark as to how valuable, in monetary terms, their education will be years down the road.  Conversely, future employers have no way of investing in education years before they even meet a student.  For Marshall, this meant the free market would in general fund education below optimal levels, necessitating public funding to make up the gap.  The lack of information on the student’s side of the equation also means an educational institution operating on a for-profit basis may seek to exploit this asymmetry for gain, just as a business would.

In terms of social policy, the role of education is to reduce, not to exploit, information shortfalls a student possesses.

Kenneth Arrow, in his landmark paper on asymmetric information in the health care industry, notes that social structures are established to protect a patient against exploitation.  Arrow notes that the societal expectation of a physician’s behavior towards a patient is much different from that of a salesman towards a customer.  A doctor is expected to act with the patient’s welfare in mind.  Some of these expectations are a matter of social contract, such as the Hippocratic Oath; some are monetary in nature, such as ACA regulations stipulating that reimbursement for services is tied to patient health outcomes.  Given that students enroll in a school with the same kind of information disadvantage, it is sensible that educational institutions operate within a similar framework.

Recently, the Dodd-Frank Act has imposed regulations requiring financial institutions to act on a customer’s behalf in situations where the institution has a higher degree of product awareness.  One of the many factors leading to the mortgage bubble was a lack of understanding of the products customers were signing up for.  One such example is the daily simple interest mortgage.  These mortgages accrue interest on a daily, rather than monthly, basis as most standard mortgages do.  The end result is that homeowners who paid their mortgage bill after the due date, but before the late fee grace period expired, could still be left with thousands of dollars in unpaid principal when the loan matured, risking default.  The new regulations endeavor to ensure customers do not enter such agreements without an understanding of such details.
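
To make that mechanism concrete, here is a minimal sketch (Python, with made-up loan figures) of how a daily simple interest loan splits a payment between accrued interest and principal, and what a payment made ten days into the grace period does to that split:

```python
def payment_split(balance: float, annual_rate: float, payment: float, days_accrued: int):
    """Daily simple interest: interest accrues per day on the outstanding balance;
    the payment covers accrued interest first, and only the remainder reduces principal."""
    interest = balance * (annual_rate / 365.0) * days_accrued
    return interest, payment - interest

# Illustration only: a $150,000 balance at 6% with a $1,000 monthly payment
for days in (30, 40):  # paid on the due date vs. ten days into the grace period
    interest, principal = payment_split(150_000, 0.06, 1_000, days)
    print(f"{days} days of accrual: ${interest:.2f} interest, ${principal:.2f} to principal")

# Every late-but-within-grace payment shaves the principal reduction this way,
# and the shortfall itself keeps accruing daily interest for the life of the loan.
```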

If financial institutions are being required to act on a customer’s behalf, why on Earth would we not expect educational institutions to do the same, if not even more so, on a student’s behalf?

And if the role of education is not to arm students with information, what is it for?  One suspects those who desire to enforce free market ideology on education wish to keep students in the dark so they are always potential marks for the next scam.  Part of the current free market movement is to bring economic instruction based on an uncritical study of the works of Ayn Rand, a fiction writer, into the classroom.  That’s like teaching astronomy based on an uncritical viewing of the Star Wars films.  I happen to enjoy Star Wars, but I am not going to walk up to a NASA engineer and suggest they use The Force to get to Mars.  The role of education is to prompt students to up their intellectual game, to challenge assumptions, to bump their preconceptions against empirical observations: the exact opposite of what ideologues of any stripe do.  There may be an Absolute Truth to the universe, but it is not going to be held in the confines of a single mind.

As for those who believe only a “career-oriented” curriculum should be offered, Alfred Marshall, the father of neoclassical economics, had this to say:

For a truly liberal general education adapts the mind to use its best faculties in business and to use business itself as a means of increasing culture

After all, a market-based economy, like democracy, will operate most efficiently with a well informed citizenry.

Trump, Trade, & Buffalo

During my days as an Econ major, one of my professors used to admonish us that even if an economic doctrine was outdated, if it had any staying power, some part of it most likely was insightful.  That is, don’t be so quick to put it up on a shelf and label it 100% toxic.  In this spirit, I am going to take a look at Donald Trump’s ideas on trade (taking Trump in this spirit becomes more difficult with each passing day) and how they would apply to my hometown of Buffalo.  While visiting us this summer, Trump promised to bring tons of jobs back to Buffalo by renegotiating international trade treaties.  While most of Trump’s speech was a meandering stream of consciousness, this line resonated with the crowd in a city that is finally starting to turn things around after decades of manufacturing job losses.  Could such a policy bring jobs back to the working class in Buffalo?

It is said that success has many parents while failure is an orphan.  Actually, as we’ll find out, economic successes and failures both have many parents.  Each is the result of several factors coalescing, and it is unlikely a policy fixating on a single issue can change the momentum of either.

In 1954, Buffalo had 152,000 manufacturing jobs.  Prior to the opening of the St. Lawrence Seaway, Great Lakes freighters unloaded in Buffalo to transfer goods into canal boats, and later trains, for shipment to the East Coast.  This made Buffalo a strategic spot for manufacturers to locate.  In the 1800’s, grain came from the Midwest and was milled into various food products in Buffalo.  To process the large amounts of grain pouring into Buffalo Harbor, Joseph Dart invented the grain elevator.  These large structures remain a prominent feature of the city’s waterfront.

Grain elevators at the foot of Main Street in 1900. These first generation wood elevators have been replaced by the modern cement cylindrical elevators. Credit: Detroit Publishing Co./Library of Congress

After the Erie Canal, trains, and grain came electricity.  Nikola Tesla, having left the employ of Thomas Edison, built with George Westinghouse the first large-scale hydroelectric plant at Niagara Falls.  Using alternating current, which, unlike Edison’s direct current, did not require power plants every mile, this electricity could be delivered 20 miles south to Buffalo.  Buffalo became the “City of Light,” and this new technology was featured prominently at the 1901 Pan-American Exposition.

The Pan-American Expo Electric Tower, 1901. Credit: Buffalo History Museum.

At the same time as the Pan-American Exposition, land was being acquired south of Buffalo by the Lackawanna Steel Corp.  Buffalo was close to the ore fields that supplied raw material, and with cheap hydroelectricity, access to Great Lakes shipping, and Buffalo’s extensive rail network, this was an ideal spot for steel production.  By World War II, the plant, by then known as Bethlehem Steel, employed over 20,000 people.  The local steel production capability attracted the auto industry.  Some makers, like Pierce-Arrow, did not last past the 1930’s, but Chevrolet and Ford became mainstays and employed thousands in several plants across the region.  In 1916, Glenn Curtiss moved his aviation production plant from Hammondsport in the Finger Lakes to Buffalo.  During the first half of the 20th Century, Buffalo was a major hub for aircraft production, with employment hitting 70,000 (about the same number Apple employs in the U.S.) during World War II.  Buffalo’s industrial development was a classic case of economic geographic clustering.

Republic Steel, Mobil Oil refinery, Donner Hanna Coke, railroad network all intertwined in Buffalo’s Inner Harbor, 1958. Credit: Wiki Commons.

Geographic clustering of economic activity was addressed by Alfred Marshall in 1890 and, as a theory, lay dormant for another century until economists, especially Paul Krugman, gave it another look.  In particular, it was found that the manufacturing sector benefits greatly from clustering, while for the post-industrial economy the effects are more diffuse.  In the case of Buffalo, clustering was driven by access to transportation via canal, rail, and the Great Lakes connecting the Midwest and East Coast.  In 1950, half the population of the United States lived within a 500 mile radius of Buffalo, providing a ready market for goods.  Niagara Falls presented a bottleneck that forced shipments to funnel through Buffalo.  Being first also counts, and the invention of the grain elevator, the generation of AC current, and aviation production at the birth of that industry gave Buffalo a jump start.  Labor poured into the region both through immigration and through internal migration from rural areas.  The concentration of experienced labor also produces high productivity from knowledge spillovers, as less experienced labor benefits from close proximity to more skilled workers.  This in turn can generate high wages when the labor market is competitive and labor is in a good bargaining position.

Curtiss-Wright plant P-40 production in 1941. Photo: Dmitri Kessel, Life Magazine

In 1951, Fortune featured a cover story titled “Made in Buffalo” which described a dynamic and diverse manufacturing center.

How did it all unwind?

Again, many factors coalesced to produce Buffalo’s downward spiral.  In 1938, when the local auto industry began shifting from full assembly to component production, Bethlehem Steel stopped investing in its flat rolling capacity due to lack of demand.  After World War II, Curtiss-Wright laid off 35,000 workers and then, in 1946, left Buffalo for good for Ohio.  Bell Aircraft also downsized greatly, but stuck around long enough to build Chuck Yeager’s X-1 and the Apollo program’s lunar module simulator before leaving for Texas in the 1960’s.  Other industries, for example Westinghouse and Western Electric, picked up the slack.  That was something Alfred Marshall would have predicted fifty years prior:

“A district which is dependent chiefly on one industry is liable to extreme depression, in case of a falling-off in the demand for its produce, or of a failure in the supply of the raw material which it uses. This evil again is in a great measure avoided by those large towns or large industrial districts in which several distinct industries are strongly developed.”

However, an infrastructure project in the 1950’s removed Buffalo’s strategic bottleneck position in the transportation network.

The completion of the St. Lawrence Seaway enabled shipping to bypass Buffalo and head directly to the East Coast or overseas.  Grain shipments dropped dramatically and many of the waterfront elevators were abandoned.  Still, the steel and auto industries were going strong.  Buffalo continued to grow and prosper along with the rest of the nation into the 1960’s, but the reduced diversity of the economy left the region increasingly vulnerable to economic shocks.

Buffalo’s grain fleet anchored in the outer harbor during winter to supply wheat for milling. This annual sight vanished in the early 1970’s. Credit: https://www.wnyheritagepress.org/content/lake_ice_and_lake_commerce/index.html

The energy crisis of the 1970’s sparked demand for the smaller cars Japanese automakers specialized in.  This reduced demand for the products made in Buffalo’s auto plants and, in turn, its steel mills.  Bethlehem Steel poured investment into its Indiana plant, which was closer to the population expanding westward.  Poor labor relations, outdated production methods, and questionable management practices dropped Bethlehem’s employment from 22,000 in 1969 to 5,000 when the plant finally closed in 1983.  Republic Steel, once home to 5,000 employees, followed suit in 1984.  In 1985, Trico moved 1,000 jobs from Buffalo to Mexico, where workers made less than $1 an hour.  As manufacturing de-clustered from Buffalo, the region became less and less attractive as a place to locate.

And what is the point of this history?

This all happened before NAFTA went into effect in 1994.  Renegotiating NAFTA will not undo all the factors that drove manufacturing jobs from Buffalo.  This isn’t to say the matter should not be open to debate.  Personally, I do not believe nations with widespread child labor and lax environmental regulation should have unfettered access to American markets.  But a reworking of NAFTA will not magically bring jobs back to Buffalo.  In fact, it would likely hamper access to the 9 million person Toronto-Niagara Peninsula market just across the border.  Given that Canada is America’s top trading partner in terms of exports, renegotiating NAFTA would definitely cost jobs in Buffalo while the benefits are, at best, uncertain.

Allied Chemical discharging dyes into Buffalo River. Buffalo’s manufacturing legacy did not come without a price. Credit: New York Department of Environmental Conservation.

And this brings up the greatest flaw in the Trump plan: fixating on a single issue as an economic cure.  Typically, you’ll see this with taxes, most recently in Kansas.  Gov. Sam Brownback’s tax cuts were intended to entice business into the state.  Whatever enticement the tax cuts provided has been offset by cuts to education and infrastructure spending, which reduce the incentive for business to locate in Kansas.  Or take a look at New York City, where residents have had to pay a city income tax in addition to state taxes since 1966.  During that period, New York City experienced a decade (the 1970’s) in which it lost 800,000 residents, but it has also gained 1.1 million residents since 1990.  Taxes should be considered a factor in economic policy, but they are not the sole determinant of economic growth.  And neither is trade.

Conversely, economic models tend to smooth over the rocky transition from employment in one economic sector to another.  What is happening to manufacturing in America is to some extent the same thing that happened to farming in the first half of the 20th Century.  In 1920, farmers were 30% of the American population.  Today, that figure is two percent.  Mechanization of farming has reduced the need for labor.  The same is true of manufacturing.  The days when a steel mill required tens of thousands of employees are over, leading to a migration of labor to low paying service sector jobs.  In academia or policy think tanks, this transition is often reduced to a mathematical abstraction.  Hopefully, the work of Angus Deaton, whose research has revealed a decline in life expectancy of working class white Americans, will provide some “ground truth” for economic models.

The cause of that decline in life expectancy is mostly related to alcohol and drug abuse.  Those of us at the ground level have certainly seen this in the struggle of economic transition.  Other parts of the equation are foreclosures, divorce, social isolation, and in the worst case scenario, suicide.  So what is the proper policy response?  You have to try a lot of things across several fronts, and go in understanding that this will be a trial and error process.  Not everything tried will succeed.  Like any sort of forecasting, we are looking at probabilities of success.

On a national level, a fiscal/monetary policy goal of driving unemployment down to 4% should have the highest priority.  This will make local efforts more manageable.  Pragmatism should take priority over ideology in policy making.  The private and public sectors are like air and gas in an auto engine: an optimal mixture provides the best performance.  On a state level, stop the starvation of public funding for state universities.  For those who do not go to college, open up access to skilled trade and technical training.  While the labor market has improved significantly since 2008, those who were ejected from the workforce have had difficulty with re-entry, and unemployment duration remains at post-war highs.  Individuals who lost jobs due to a financial crisis not of their making should not be treated as pariahs in the job market.  None of this will remove the more unseemly aspects of the Trump campaign from the political process, but ideally it will push them off to the sidelines where they belong.

Over the past few years, Buffalo has undergone something of a renaissance.  The University at Buffalo’s new medical campus is spurring development in the city.  Immigrants and refugees are infusing new life into old neighborhoods, while Elon Musk’s SolarCity is building the Western Hemisphere’s largest solar panel plant on the site where Republic Steel once stood.  Hopefully, this can give the region a jump start in an emergent industry and begin a clustering effect anew.  Although manufacturing has declined to 50,000 jobs in the area, ghosts of Buffalo’s past can still be seen.  The steel mills are gone, but Chevy and Ford still employ thousands.  If you hang out at Canalside long enough, eventually you’ll see a 700-foot lake freighter making a visit to one of the grain elevators still in operation.  Buffalo is no longer the second largest rail center in the nation, but on a quiet weekend morning I can still hear train activity in the Frontier Yard.  These are powerful reminders of Buffalo’s past, but as individuals, we need to look towards the future.  To quote an old Clint Eastwood character:

“You improvise, you adapt, you overcome.”

It’s as good a piece of advice as any.

*Photo atop post is 2010 aerial view of Buffalo.  Credit:  Doc Searls/Wiki Commons.

Beware of Outliers

As we digest the run-up to the 2016 presidential election, it can be expected that the candidates will present exaggerated claims to promote their agendas.  Often, these claims are abetted by less than objective press outlets.  That’s not supposed to be the press corps’ job, obviously, but it is what it is.  How do we discern fact from exaggeration?  One way is to be on the lookout for the use of outliers to promote falsities.  So what exactly is an outlier?  Merriam-Webster defines it as follows:

A statistical observation that is markedly different in value from the others of the sample.

The Wolfram MathWorld website adds:

Usually, the presence of an outlier indicates some sort of problem. This can be a case which does not fit the model under study, or an error in measurement.

The simplest case of an outlier is a single data point that strays greatly from an overall trend.  An example of this is the United States jobs report from September 1983.

Credit: Bureau of Labor Statistics

In September 1983, the Bureau of Labor Statistics announced a net gain of 1.1 million new jobs.  As you can tell from the graph above, it is the only month since 1980 to have gained 1 million jobs.  And why would we care about a jobs report from three decades ago?  It is often used to promote the stimulative effect of the Reagan tax cuts.  When you see an outlier such as this being used to support an argument, you should be wary.  As it turns out, there is a simpler explanation that has nothing to do, pro or con, with Reagan’s economic policy.  See the job loss immediately preceding September 1983?  In August 1983, there was a net loss of 308,000 jobs.  This was caused by a strike of 650,000 AT&T workers, who returned to work the following month.

If you eliminate the statistical noise of the striking workers from both months, you have a gain of over 300,000 jobs in August 1983 and over 400,000 jobs in September 1983.  Those are still impressive numbers, with no need for an outlier to exaggerate them.  However, it has to be noted that it was the monetary policy of Fed Chair Paul Volcker, rather than the fiscal policy of the Reagan administration, that was the main driver of the economy then.  Volcker pushed the Fed Funds rate as high as 19% in 1981 to choke off inflation, causing the recession.  When the Fed eased up on interest rates, the economy rebounded quickly, the normal response predicted by standard economic models.  So we really can’t credit Reagan for the recovery, or blame him for the 1981-82 recession, either.  It’s highly suspect to use an outlier to support an argument; it’s even more suspect to assume a correlation.
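
The adjustment itself is simple arithmetic; here is a minimal sketch (Python, using the headline figures quoted above):

```python
# Headline BLS net job changes, as quoted above
august_1983 = -308_000
september_1983 = 1_100_000
striking_workers = 650_000  # AT&T workers off payrolls in August, back in September

# Strip out the strike noise: add the strikers back to August, take them out of September
august_adjusted = august_1983 + striking_workers        # ~ +342,000
september_adjusted = september_1983 - striking_workers  # ~ +450,000

print(f"August 1983, strike-adjusted:    {august_adjusted:+,}")
print(f"September 1983, strike-adjusted: {september_adjusted:+,}")
```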

To present a proper argument, your data has to fit a model consistently.  In this case, the argument is that tax cuts alone are the dominant driver of job creation in the economy.  That argument is clearly falsified by the data above, as the 1993 tax increases were followed by a sustained period of job creation in the mid-to-late 1990’s.  And that is precisely why supporters of the tax-cuts-equal-job-creation argument have to rely on an outlier to make their case.  It’s a false argument that relies on the fact that, unless you are a trained economist, you are not likely to be aware of what occurred in a monthly jobs report over three decades ago.  Clearly, a more sophisticated model with multiple inputs is required to predict an economy’s ability to create jobs.

When dealing with an outlier, you have to explore whether it is a measurement error, and if not, whether it can be accounted for with existing models.  If it cannot, you’ll need to determine what type of modification is required to make your model explain it.  In science, the classic case is the orbit of Mercury.  Newton’s laws do not accurately predict this orbit: Mercury’s perihelion precesses at a rate 43 arcseconds per century greater than Newton’s laws predict.  Precession of planetary orbits is caused by the gravitational influence of the other planets, and the orbital precession of the planets besides Mercury is correctly predicted by Newton’s laws.  Explaining this outlier was a key problem for astronomers in the late 1800’s.

At first, astronomers attempted to analyze this outlier within the confines of the Newtonian model.  The most prominent of these solutions was the proposal that a planet whose orbit resided inside Mercury’s perturbed Mercury’s orbit in a manner that explained the extra precession.  This proposed planet was dubbed Vulcan, after the Roman god of fire.  Several attempts were made to observe this planet during solar eclipses and predicted transits of the Sun, with no success.  In 1909, William W. Campbell of the Lick Observatory stated that no such planet existed and declared the matter closed.  At the same time, Albert Einstein was working on a new model of gravity that would accurately predict the orbit of Mercury.

Vulcan’s Forge by Diego Velázquez, 1630. Apollo pays Vulcan a visit. Instead of having a real planet named after him, Vulcan settled for one of the most famous planets in science fiction.  Credit: Museo del Prado, Madrid.

The general theory of relativity describes the motion of matter in two regimes where Newton’s laws could not: near a large gravity well such as the Sun, and at velocities close to the speed of light.  In all other cases, the solutions of Newton and Einstein match.  Einstein understood that if his new theory could predict the orbit of Mercury, it would pass a key test.  On November 18, 1915, Einstein presented his successful calculation of Mercury’s orbit to the Prussian Academy of Sciences.  This outlier was finally understood, and a new theory of gravity was required to do it.  Nearly 100 years later, another outlier was discovered that could have challenged Einstein’s theory.
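
As a check on the 43 arcseconds per century figure, here is a minimal sketch (Python) of general relativity's leading-order perihelion advance per orbit, Δφ = 6πGM/(a(1−e²)c²), evaluated with standard values for the Sun's mass and Mercury's orbit:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
c = 2.998e8          # speed of light, m/s
a = 5.791e10         # Mercury's semi-major axis, m
e = 0.2056           # Mercury's orbital eccentricity
period_days = 87.97  # Mercury's orbital period

# GR perihelion advance per orbit, in radians
advance_per_orbit = 6 * math.pi * G * M_sun / (a * (1 - e**2) * c**2)

orbits_per_century = 100 * 365.25 / period_days
arcsec_per_century = advance_per_orbit * orbits_per_century * (180 / math.pi) * 3600

print(f"Predicted excess precession: {arcsec_per_century:.1f} arcsec/century")  # ~43
```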

Relativity puts a velocity limit on the universe at the speed of light.  A measurement of a particle traveling faster than this would, as the orbit of Mercury did to Newton, require a modification to Einstein’s work.  In 2011, a team of physicists announced they had recorded neutrinos traveling faster than the speed of light.  The OPERA (Oscillation Project with Emulsion-tRacking Apparatus) team could not find any evidence of a measurement error.  Understanding the ramifications of this conclusion, OPERA asked for outside help in verifying the result.  As it turned out, a loose fiber optic cable delayed the timing signal used to clock the neutrinos, and this delay produced the measurement error.  Once the cable was repaired, OPERA measured the neutrinos at their proper velocity, in accordance with Einstein’s theory.

While the OPERA situation was concluding, another outlier was beginning to gain headlines: the increase in annual sea ice around Antarctica, seemingly contradicting the claim by climate scientists that global temperatures are on the rise.  Is it possible to reconcile this observation within the confines of a model of global warming?  What has to be understood is that this measurement is an outlier that cannot be extrapolated globally.  It pertains only to the sea ice surrounding the Antarctic continent.

Glaciers on the Antarctic land mass continue to recede, as do glaciers in mountain ranges across the globe and in the Arctic as well.  Clearly something interesting is happening in Antarctica, but it is regional in nature and does not overturn current climate change models.  At least, none of the arguments I’ve seen using this phenomenon to rebut global warming models have provided an alternative model that also explains why glaciers are receding on a global scale.

Outliers are found in business as well.  Most notably, carelessly taking an outlier and incorporating it as a statistical average in a forecasting model is dangerous.  Let’s take a look at the history of housing prices.

Credit: St. Louis Federal Reserve.

In the period from 2004 to 2006, housing prices climbed over 25% per year.  This was clearly a historic outlier, and yet many assumed it was the new normal and underwrote mortgages and derivative products accordingly.  An example is the balloon mortgage, where it was assumed the homeowner could refinance the large balloon payment at the end of the note with equity newly acquired through rapid appreciation.  Instead, the crash in property values left these homeowners owing more than the property was worth, causing high rates of default.  Often, the use of outliers for business purposes is justified with slogans such as “this is a new era” or “the new prosperity.”  It turns out to be just another bubble.  Slogans are never enough to justify using an outlier as an average in a model, and you should never be swayed by outside noise demanding you accept an outlier as the new normal.  Intimidation in the workplace played no small role in the real estate bubble, and if you are a business major, you’ll need to prepare yourself for such a scenario.

If you are a student and have an outlier in your data set, what should you do?  Ask your teachers, to start with.  Often outliers have a very simple explanation, such as the 1983 jobs report, that will not interfere with the overall data set.  Look at the long range history of your data.  In the case of economic bubbles, you will note a similar pattern, the “this time is different” syndrome, only to eventually find out this time was not different.  More often than not, an outlier can be explained as an anomaly within a current working model.  And if that is not the case, you’ll need to build a new model that explains the data in a manner that predicts the outlier but also replicates the accurate predictions of the previous model.  It’s a tall order, but that is how science progresses.
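
Before reaching for a new model, it also helps to flag suspect points systematically. Here is a minimal sketch (Python, with a made-up data series) of the common interquartile-range rule of thumb; a flagged point is a candidate for investigation, not proof of anything:

```python
def iqr_outliers(values):
    """Flag points more than 1.5 IQRs outside the middle 50% of the data."""
    data = sorted(values)
    n = len(data)
    q1, q3 = data[n // 4], data[(3 * n) // 4]  # rough quartiles, fine for a sketch
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < low or v > high]

# Hypothetical monthly job gains (thousands), with one strike-distorted month
monthly_gains = [210, 195, 250, 230, 180, 205, 1100, 220, 240, 215, 190, 225]
print(iqr_outliers(monthly_gains))  # -> [1100], the point worth investigating
```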

*Image on top of post is record Antarctic sea ice from 2014.  This is an outlier, given that ice levels around the globe are receding as temperatures warm.  Credit:  NASA’s Scientific Visualization Studio/Cindy Starr.

Minimum Wage & Unemployment: Confusing Micro and Macro

The recent movement to raise the minimum wage to $15.00/hr has brought out the usual dire warnings that this will cause a significant increase in unemployment and lock low wage workers out of jobs. Intuitively, this seems to make sense. When one looks at this scenario from the viewpoint of a business owner, the common sense outcome is that you would have to offset the increase in costs by reducing staff.

Classic demand and supply analysis of the labor market from Econ 101 would seem to confirm this as you can see below:

Wages set above equilibrium creates surplus of labor – unemployment. Image: Wiki Commons.
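
To put numbers on that textbook story, here is a minimal sketch (Python, with made-up linear demand and supply curves) of how a wage floor set above the market-clearing wage produces a surplus of labor in that model; the rest of this post takes up why the prediction fails empirically:

```python
# Hypothetical linear labor demand and supply (millions of workers vs. hourly wage)
def labor_demanded(wage: float) -> float:
    return max(0.0, 100 - 4 * wage)  # firms hire fewer workers as the wage rises

def labor_supplied(wage: float) -> float:
    return max(0.0, 2 + 3 * wage)    # more people seek work as the wage rises

# Market clearing: 100 - 4w = 2 + 3w  ->  w = 14
equilibrium_wage = 98 / 7
wage_floor = 15.0  # a minimum wage set above the equilibrium wage

surplus = labor_supplied(wage_floor) - labor_demanded(wage_floor)
print(f"Equilibrium wage: ${equilibrium_wage:.2f}")
print(f"Labor surplus at a ${wage_floor:.2f} floor: {surplus:.1f} million workers")
```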

Yet the evidence is clear that unemployment does not rise with an increase in the minimum wage. Why should this produce such a counter-intuitive result? The key to the answer lies in the fundamental differences between microeconomics (the study of individuals and firms) and macroeconomics (the study of the economy as a whole), as well as in a more complex model of labor markets developed over the past few decades, referred to as efficiency wage theory.

First, let’s take a look at that classic Econ 101 model of labor markets, since this is how the issue is most often debated in the popular media and among the general public.

Models of micro units in the economy are open systems. Take an employer, for example: income flows into the employer from an outside entity (customers). Likewise, spending flows out to entities beyond the employer in the form of wages.  The argument against raising the minimum wage sees the cash flow out increasing without an increase in the cash flow in.  Hence, staff is reduced to offset this outflow.

Employer is an open system. The system is “permeable” as cash flows in and out of the system boundary.

The same does not hold for a national economy as a whole. Why? A macro unit, such as a nation, is a closed system. As Paul Krugman says, in this scenario everybody’s spending is someone else’s income. If spending drops overall in a macro unit, then income must necessarily drop as well.  This is the cause of business cycles.  An example of a closed system is below. The three major components of GDP are consumption, investment, and government spending. Unlike a household or business, there is no income flowing in from outside the system.  As we’ll see, modulating these business cycles matters more than minimum wage laws when it comes to reducing unemployment for low wage workers.

Boundary around a closed system is “impermeable”. Cash flows remain inside the system and do not leak out.

As noted earlier, the classic micro model would indicate that an increase in the minimum wage forces employers to reduce staff and also increases the available pool of labor as higher wages induce more people to look for a job.  In this model, an increase in the minimum wage can represent a transfer of wealth from employers to employees, which is the real cause of the political friction on this issue.  Framed this way, it directly pits employees against employers.

Is this transfer of wealth fair?

In the late 1800’s, Alfred Marshall made the great defense of capitalism against the growing socialist movement. Marshall postulated that increased worker productivity would result in increased wages, and that this was the key to reducing the great poverty of the time. How to increase productivity? Marshall proposed an expansion of expenditures on public education. He also recognized the productivity gains acquired from spillover knowledge. That is, less experienced workers increase their productivity by working with more experienced workers.

The classical economic model derived by Marshall (and others) suggested that workers’ wages are commensurate with productivity in a free labor market. This model makes a few assumptions, among them:

Information is symmetric. That is, both employers and employees have the same knowledge of the existing labor market.

The labor market is competitive to the point where neither an individual employer nor employee can affect the wage rate.

The economy is always at full capacity.

To paraphrase Harry Callahan, a good economic model has got to know its limitations.

How close to reality are these assumptions? A good diagnostic is to compare productivity gains with labor costs (wages and benefits). If the model is correct, both these variables should match. What does the data show?

Below is a comparison between annual increases in worker productivity and real hourly compensation (which accounts for both wages and benefits) since 1979:

Data Source: St. Louis Federal Reserve

For the most part, compensation lags behind productivity.  Only periodically has compensation matched productivity, notably in the mid-1980’s and late 1990’s.  Hence, the economy usually operates at less than full capacity.  Since the late 1990’s, compensation has seriously lagged behind productivity.  This represents a transfer of wealth from labor to employers not predicted by classical micro models of the economy.  Consequently, the defense of free labor markets as a means to reduce poverty breaks down.  How to change that?
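
As a toy illustration (Python, with hypothetical growth rates rather than the actual series charted above), even a modest annual gap between productivity growth and compensation growth compounds into a large divergence over a few decades:

```python
# Hypothetical average annual growth rates (illustration only, not the charted data)
productivity_growth = 0.020  # 2.0% per year
compensation_growth = 0.011  # 1.1% per year
years = 35                   # roughly 1979 through the mid-2010s

productivity_index = (1 + productivity_growth) ** years  # ~2.0x
compensation_index = (1 + compensation_growth) ** years  # ~1.5x

gap = productivity_index / compensation_index - 1
print(f"Productivity index: {productivity_index:.2f}")
print(f"Compensation index: {compensation_index:.2f}")
print(f"Cumulative gap:     {gap:.0%}")  # output growth not matched by compensation
```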

It helps to use real-world case studies rather than works of fiction.

One thing to avoid is fear of a supply shock by employers “going Galt” and reducing and/or quitting their businesses in a snit over increasing wages.  During the period between 1947-79, while unions were at their peak, wages kept up with productivity gains.  The result was an expanding middle class and business did just fine.  Employers might resent an increase in the minimum wage, but on an aggregate scale, will not have an incentive to downscale their business unless wage increases surpass productivity increases for a sustained period of time.  If that did not happen in the post-World War II period, it’s not likely to happen now.

One of the major critiques of the minimum wage law is that it locks entry level workers out of the job market by keeping wages above the market equilibrium level.  However, the main driver of unemployment for teenagers (by definition entry level employees), as with any section of the population, is the business cycle seen below:

fredgraph

The impact of minimum wage increases is dwarfed by the effect of the business cycle on unemployment.

Here is where macroeconomics comes into the picture.  Over the past 35 years, teenage unemployment topped 20% three times, all during recessions.  The Great Recession, created by the 2008 financial crisis, produced a teenage unemployment rate of over 25%.  The first step in creating job opportunities is to modulate the business cycle in a manner to avoid steep recessions.  A combination of New Deal banking regulations and appropriate monetary/fiscal policy was successful in this regard from 1947-1972.

It doesn’t make sense to oppose minimum wage laws as a means of decreasing teenage unemployment if one is also opposed to employing monetary and fiscal policy to moderate business cycles.  The first priority in this direction is to regulate the financial sector so that the risk of banking crises is reduced.  It is financial crises that cause periods of severe unemployment lasting 3-5 years, sometimes longer.  Prior to World War II, financial panics induced multi-year depressions in 1857, 1873, 1893, and 1929.  The last recession was not an isolated event, but a natural consequence of an unregulated financial sector.  Younger workers are significantly at risk of long-term unemployment during these events.

An additional step is to index the minimum wage to the inflation rate.  The minimum wage topped out at $10.69/hr (in 2013 dollars) in 1968 and has steadily eroded since.  Overall unemployment was 3.6% that year, with a teenage unemployment rate of 11-12%, a further indication that the business cycle is a greater determinant of teenage unemployment than the minimum wage.   Indexing the minimum wage to productivity increases should also be considered.  Paying for production is a reasonable proposition.
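
Here is a minimal sketch (Python; the starting wage and inflation rates are made-up illustration values, not a policy proposal) of what indexing a wage to inflation means in practice:

```python
def index_wage(wage: float, inflation_rates):
    """Adjust a wage each year by that year's inflation rate so it holds its real value."""
    history = [round(wage, 2)]
    for rate in inflation_rates:
        wage *= (1 + rate)
        history.append(round(wage, 2))
    return history

# Hypothetical: a $10.00 minimum wage indexed through five years of assumed inflation
print(index_wage(10.00, [0.02, 0.015, 0.025, 0.02, 0.03]))
# -> [10.0, 10.2, 10.35, 10.61, 10.82, 11.15]
```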

The perfect labor market model as presented by the demand and supply graph at the top of this post is an abstract concept.  That model relies on assumptions that cannot be fully realized in the real-world.  Think of it as the economic version of the Carnot engine, which represents the theoretical limit for engine efficiency.  You cannot build such an engine in the real world as it relies on a cycle that does not lose heat to friction.  Likewise, it is impossible to build a perfect labor market in the real world as efficiency is lost due to frictions such as incomplete information and an economy operating at less than full employment.

For an economist to claim that free labor markets are efficient to the point where labor receives rising compensation with rising productivity is the same as an engineer claiming to have built a 100% efficient engine.

We need to realize labor is not the equivalent of widgets.  Classic demand and supply curves oversimplify human behavior in the labor market.

However, a new, more complex theory of labor markets has emerged over the past few decades that merits real consideration, as its predictions coincide with some real world observations.  This is efficiency wage theory.  It predicts that employers have a menu of wages to pick from rather than a single market wage.  The tradeoffs involve low wages, low productivity, and high employee turnover versus high wages, higher productivity, and low turnover.

Let’s take a look at how employers have responded to an increase in the minimum wage. For starters, layoffs are not the primary, or even a significant, reaction. A variety of strategies are employed to offset the higher cost of labor. One is to train employees in a manner that boosts productivity. Another is to reduce costly employee turnover. Also, some employers elect to reduce profits to cover the cost.  The real-world response to minimum wage increases is more reflective of the efficiency wage concept than of the classic single-wage demand and supply model.

New entrants in the labor market need to acquire social connections to move into higher wage brackets, even for jobs at the same skill level.

A 1984 survey paper by current Fed Chair Janet Yellen noted that efficiency wage theory predicts a two-tiered workforce.  One tier is a high wage workforce where jobs are obtained mostly via personal contacts; employers have a comfort level with employees they personally know and do not feel the need to go through expensive vetting and monitoring processes.  The other tier is a low wage, highly monitored workforce with a lot of turnover.  Sound familiar?  The latter seems to describe the low paid contract and temp workers who often perform the same functions as higher paid permanent employees at the same company.  Here again, the efficiency wage model seems to trump regular demand and supply.

It does appear to be time for policy makers to incorporate this more sophisticated model when addressing low wage workers and the chronically high unemployment rates in various demographic groups of the workforce.  In particular, there is a need to promote a way for low wage workers to make the leap into the high wage sector.  As noted in Yellen’s paper, ability and education are not enough; social connections play a key role in obtaining a high wage job.  This, combined with proper fiscal/monetary policy, offers the best hope for lifting individuals from poverty.

It is often said that anyone who has taken Economics 101 will understand raising the minimum wage causes unemployment to increase.  The efficiency wage model is a bit beyond Econ 101, probably at an intermediate level course.  And that’s perfectly fine.  If you are diagnosed with a serious illness, do you want to be treated by a doctor who went to med school or is using Bio 101?  The same holds true in economics.

*Image on top is the Chicago Memorial Day Massacre of 1937.  Ten striking workers were killed, an event that gave momentum to the Fair Labor Standards Act of 1938, which mandated the first federal minimum wage.  Photo:  U.S. National Archives and Records Administration.