Temperature Homogenization Activity


 

Introduction: There is great concern about climate change, but climate change affects local regions very differently. The following climate lesson explores local climate change in southern California and introduces students to the tools that climate scientists use to extract regional and global trends from local climate data. Students will learn why temperature trends are not a simple average by simulating how scientists homogenize data.

The lesson adheres to the current pedagogy that argues teachers should not simply be the “Sage on the Stage” spewing dogma for students to regurgitate. A teacher should be the “Guide on the Side,” alternating short bits of input with time slots (such as “Think, Pair and Share”) that allow students to investigate and discuss the issue with their peers.

 

Teaching Objective I: Understanding Trends and Anomalies. Students will read and create quality graphs.

Teacher input: Give the students the graph “Maximum Temperature USHCN Raw Data,” illustrating trends from 3 quality weather stations in the US Historical Climate Network, all located at a similar latitude in southern California. In pairs or small groups, have them discuss the following questions and formulate questions of their own.

Think, Pair Share:

1) What factors might make Brawley so much warmer than the other stations?

2) Which station(s) experienced the warmest temperatures between 1930 and 1950?

3) Which station(s) experienced the most similar climate trend from 1900 to 2010?

4) Is climate change affecting all stations equally? 

 

 

 

Teacher input: Brawley is farther inland and, unlike stations closer to the coast, doesn’t experience the ocean’s cooling effect. Drier desert conditions with few clouds and little vegetation create a microclimate that heats up much more quickly than at the other stations. Brawley and Cuyamaca shared the most similar trends, although that similarity may have been difficult to see due to micro-climate differences. To better extract climate trends that can be compared between stations experiencing varied micro-climates, scientists graph anomalies. Anomalies are simply individual differences relative to an average. In this exercise I recommend averaging temperatures from 1951 to 1980.
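Written as a simple formula (notation added here for clarity; it is not from the USHCN documentation): for station s in year y,

    anomaly(s, y) = T(s, y) − average of T(s, 1951) through T(s, 1980)

so each station is compared only against its own 30-year average, which removes the fixed micro-climate offsets between stations while preserving each station’s trend.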

Instruct students to visit the USHCN website and download the raw data for the 3 stations into a spreadsheet program such as Excel. To determine anomalies relative to the 1951-1980 period, calculate each station’s average temperature for that period, then subtract the station’s average from its raw data for each year. These differences produce negative values for years cooler than the average and positive values for warmer years. Have students create their own anomaly graph and compare their charts with the anomaly graph below.

(Teacher note: Do not use an average from years later than 1980. During the mid-1980s there was a massive change in equipment that also required relocations that brought the weather stations closer to buildings.)
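For teachers or students comfortable with a little scripting, the same anomaly calculation can be done outside the spreadsheet. The sketch below is only illustrative: it assumes the downloaded data has been saved as a hypothetical CSV file named ushcn_raw.csv with columns station, year, and tmax, which is not the layout USHCN files actually arrive in, so some reshaping will be needed first.

```python
# Minimal sketch of the anomaly calculation under the assumptions stated above.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ushcn_raw.csv")   # hypothetical file: station, year, tmax

# 1951-1980 baseline average for each station
baseline = (
    df[df["year"].between(1951, 1980)]
    .groupby("station")["tmax"]
    .mean()
    .rename("baseline")
)

# Anomaly = raw annual value minus that station's own baseline average:
# negative values are years cooler than the baseline, positive values warmer.
df = df.join(baseline, on="station")
df["anomaly"] = df["tmax"] - df["baseline"]

# One line per station, ready to compare with the anomaly graph in the lesson
df.pivot(index="year", columns="station", values="anomaly").plot()
plt.ylabel("Anomaly relative to 1951-1980 (°F)")
plt.show()
```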

 

  

 

Think, Pair, Share: Have students discuss the value of using anomalies to extract regional trends.

Brainstorm: what factors could cause only Chula Vista’s micro-climate to suddenly warm relative to the other stations?

 


Teaching Objective II: Understanding Artificial Inhomogeneities 

Teacher input: Because each weather station exists in a unique micro-climate, individual differences cause each station to exhibit small cooling or warming trends that might not be seen at the other stations. For example, changes in vegetation, sheltering or waste heat from various building configurations, or differences in topography that funnel wind differently can all affect short-term temperature trends. Changes to the landscape, such as removal of trees, a fire, or increased pavement, affect how the soil holds moisture and how much moisture is transpired into the air, which also affects temperatures. The resulting differences between weather stations are called inhomogeneities.

Natural inhomogeneities are expected and are an integral part of local climate change. However, scientists must eliminate any artificial inhomogeneities caused by a growing population that alters the surface and adds waste heat, or by stations relocating to a different micro-climate. All 3 stations exhibited trends that were reasonably similar until 1982. So did Chula Vista warm artificially, or did Brawley and Cuyamaca suddenly cool?

To answer that question, instruct students to first visit the USHCN website and acquire the ID# for each station. Then have them visit NOAA’s Historical Observing Metadata Repository, plug in the ID#, and look for information regarding any changes at that station and the year in which those changes occurred.

 

Think, Pair, Share: Which station(s) moved, and in which year? Compare the changes in temperature at any station that moved with the changes at stations that did not move. Then determine whether the relocation caused warming or cooling. How did the relocation affect the temperature trend?


Teacher input: Confirm the students’ research. According to the Historical Observing Metadata Repository, Chula Vista moved 2.5 miles in 1982, from a location situated along salt evaporation ponds to an urban setting surrounded by buildings. In 1985, new instruments were installed that required new cable connections. So the weather station was moved 190 feet, presumably closer to a building.

After the 1982 relocation, the temperature at Chula Vista rose by 6°F, in contrast to a drop in temperatures at the other 2 stations, so Chula Vista’s move very likely caused its temperature to rise artificially. An interpretation of artificial warming is also consistent with the relocation to a warmer urban setting.

There were no verifiable relocations or changes of instrumentation at the other 2 stations. However, 7 months of Brawley’s temperature data in 1992 were reported via a hygrothermograph (HTG), but that would not affect the 1982 comparisons.

 

 

Teaching Objective III: Homogenizing Data to Create Meaningful Regional Climate Trends

Teacher input: Good scientists do not blindly accept raw data. Data must undergo quality-control analyses that adjust for documented changes known to introduce shifts unrelated to climate change. After accounting for artificial inhomogeneities, scientists adjust the data to create what they believe is a more realistic regional trend.

Based on what they have learned so far, ask the students to create a graph that best exemplifies southern California’s regional climate change. Simplify their task by using the graph (below) for just Chula Vista and Cuyamaca (which are only 15 miles apart). Students are free to adjust the data in whatever manner they feel best represents real climate change and corrects for artificial inhomogeneities; one possible scripted approach is sketched below.
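For classes that did the anomaly calculation in a script, here is one simple, illustrative way a student might correct Chula Vista’s documented 1982 break using Cuyamaca as a reference. This is only a sketch of the general idea, not the official USHCN pairwise homogenization algorithm, and the column names (chula_vista, cuyamaca) are assumptions carried over from the earlier sketch.

```python
# Illustrative step-change adjustment for a documented relocation.
# Assumes an `anom` DataFrame indexed by year with hypothetical columns
# "chula_vista" and "cuyamaca" holding anomaly values.
import pandas as pd

def adjust_for_break(anom: pd.DataFrame, moved: str, reference: str,
                     break_year: int) -> pd.Series:
    """Subtract the estimated artificial jump at break_year from the moved station."""
    diff = anom[moved] - anom[reference]             # difference series
    step = (diff[diff.index >= break_year].mean()    # mean offset after the move
            - diff[diff.index < break_year].mean())  # minus offset before the move
    adjusted = anom[moved].copy()
    adjusted.loc[adjusted.index >= break_year] -= step
    return adjusted

# Example usage, assuming `anom` was built in the earlier anomaly exercise:
# anom["chula_vista_adj"] = adjust_for_break(anom, "chula_vista", "cuyamaca", 1982)
```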

 

  

 

Teacher input: After students have graphed their own temperature trends, have them compare their results with the graph below illustrating how USHCN climate experts actually homogenized the data. (The comparison should promote lively discussion, as most students will create trends for both stations that resemble Cuyamaca’s raw data.)

 

 

 

Think, Pair, Share: Discuss why climate experts created such different trends. Why did scientists lower the high temperatures at Cuyamaca during the 1930s to 1950s by 3 to 5°F? What other concerns may affect scientists’ expectations about how best to homogenize data?

Teacher input: Clearly the data were adjusted for reasons other than Chula Vista’s relocation. Adjusting data for unknown reasons is different from quality-control adjustments and is called homogenization. The use of homogenization is contentious because a change in a station’s trend is often assumed to be caused by unknown “artificial” causes. However, the natural climate is always changing due to cycles of the sun, ocean oscillations such as El Niño and the Atlantic Multidecadal Oscillation that alter the direction and strength of the winds, or natural landscape succession. So how can scientists reliably separate natural climate changes from undocumented “artificial” changes?

One method suggests comparing data from the most reliable weather stations, those that have undergone the fewest known artificial changes, to determine a “regional expectation.” That regional expectation can then serve as a guide when adjusting trends at other, less reliable stations. However, as we have seen, sometimes the most reliable stations undergo the greatest adjustments. So what other factors are in play?
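To make the idea of a “regional expectation” concrete, the sketch below averages the anomalies of the stations judged most reliable and flags the years when another station strays from that average by more than a chosen threshold. The column names and the 2°F threshold are illustrative assumptions, not values taken from any agency’s published method.

```python
# Minimal sketch of a "regional expectation" check under the assumptions above.
import pandas as pd

def flag_departures(anom: pd.DataFrame, reliable: list[str],
                    station: str, threshold: float = 2.0) -> pd.Series:
    """Return the years where `station` departs from the regional expectation."""
    regional = anom[reliable].mean(axis=1)   # average of the reliable stations
    departure = anom[station] - regional     # station minus regional expectation
    return departure[departure.abs() > threshold]

# Example usage with the hypothetical columns from the earlier sketches:
# flag_departures(anom, reliable=["cuyamaca", "brawley"], station="chula_vista")
```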

Many scientists working for NOAA and NASA believe that rising CO2 explains recent temperature trends. In contrast, many scientists suggest that the proximate cause of regional climate change is driven more by natural changes in ocean circulation. In 2014 climate scientists published a peer-reviewed paper (Johnstone 2014) suggesting that climate change along the coast of North America could be best explained by natural cycles of the Pacific Decadal Oscillation (PDO), due to its effects on sea surface temperatures in the eastern Pacific. Give the students the graph below from Johnstone 2014 and ask them to compare changes in sea surface temperatures (SST, in red) with the raw and recently homogenized temperature data from southern California.


 

Think, Pair, Share: Which data sets (raw or homogenized trends) best agree with the hypothesis that ocean temperatures drive regional warming trends? Which data sets best agree with the hypothesis that rising CO2 drives regional warming trends? Could a belief in different hypotheses affect how temperature trends are homogenized?

Teacher input: Have students compare the temperature trends for the northern hemisphere (below; created by the Japanese Meteorological Society and published by the National Academy of Sciences in 1977) with the new global trends presented by NASA’s Gavin Schmidt, who argues 2014 was the warmest year on record. Point out that the earlier scientific records suggested temperatures dropped by 0.6°C (1.1°F) between 1940 and the late 1970s, with 1970s temperatures similar to those of the 1910s. Compare those temperatures with Schmidt’s 2014 graph, which suggests 1980s temperature anomalies were 0.5°C higher than the 1910s.

 

 

Think, Pair, Share: Why do the two graphs disagree so dramatically for the period between 1940 and the 1970s? Does the new graph by NASA’s Gavin Schmidt represent real climate change or an artifact of homogenization? If the difference is due to homogenization, is that a valid reason to alter older trends? If the starting point for Gavin Schmidt’s 1970s-to-2014 temperature data were lowered, so that 1970s temperatures remained similar to the 1910s as suggested by earlier research, how much higher than the 1940s would the 2014 global temperature be?

 

 

Teaching Objective IV: What is In-filling?

Teacher input: As seen for the USHCN weather stations, raw data is often missing. Furthermore, extensive regions around the world lack any weather stations at all. To create a global temperature record, climate scientists must engage in the art of in-filling. A recent scientific paper, Cowtan and Way (2014), used in-filling to contradict other peer-reviewed research that had determined a pause in global warming over the past 15 years or more. By in-filling, these scientists argued that there was no hiatus and that the warming trend continued.

Give the students a map of the world and an umber-colored pencil. Instruct them to lightly shade all the continents to show they have all warmed. Now provide the map (below) of Global Historical Climate Network stations showing the station locations. Instruct students to simulate in-filling by darkening the regions on all the continents wherever weather stations are sparse (the whitest areas). Then give them NOAA’s 1950-2014 map modeling the continents’ warmest regions.
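For a quantitative follow-up, the toy sketch below shows the basic logic of in-filling: estimating the anomaly at a location with no station by distance-weighting the nearest stations. The coordinates and anomaly values are made-up placeholders, and Cowtan and Way (2014) actually used kriging and satellite data rather than this simple inverse-distance scheme.

```python
# Toy illustration of in-filling by inverse-distance weighting.
import numpy as np

stations = np.array([        # hypothetical (latitude, longitude) pairs
    [32.6, -117.1],
    [32.9, -116.6],
    [33.0, -115.5],
])
anomalies = np.array([0.4, 0.2, 0.9])   # hypothetical anomalies at those stations

def infill(point, coords, values, power=2):
    """Inverse-distance-weighted estimate at an unobserved point."""
    dist = np.linalg.norm(coords - point, axis=1)
    if np.any(dist == 0):                # the point coincides with a station
        return float(values[dist == 0][0])
    weights = 1.0 / dist ** power
    return float(np.sum(weights * values) / np.sum(weights))

# Estimate the anomaly at a station-free grid point
print(infill(np.array([33.5, -116.0]), stations, anomalies))
```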



 

 

 

Think, Pair, Share: Can in-filling reliably represent local temperature trends? The warmest regions appear to be related to in-filling. What other reasons could cause greater warming in the in-filled regions? Should regions without a sufficient density of weather stations be included in a global average temperature?

 

Extended Lesson Activities:

Have students pick a group of stations within a 500-mile area in similar ecosystems (e.g., the Great Plains, Southeastern forests, or New England forests) and examine the differences between raw and homogenized temperature data for all those stations. (See an example of such a comparison for Massachusetts in an analysis of moose migrating southwards.) Use the metadata for those stations to find the most reliable stations, those that have not relocated or changed instrumentation.

I suggest looking at just the maximum temperatures in this extension activity because minimum temperatures are much more sensitive to landscape changes and other microclimate changes. The next activity will examine differences between maximum and minimum temperatures and the effects of the landscape on temperature trends.

 

Related Teaching moment: Tolerance and Respectful Debate

Have students write an essay about how our nation is fostering greater tolerance and compassion toward different ethnicities, races, religions and people with different sexual preferences. Then contrast that tolerance with the venom that’s been hurled at people for having different climate beliefs. 

Why is there such disagreement and passion about all this? What difference could it all make in your life, your home, your school?

Often those very different beliefs are simply a result of trusting different virtual realities created by statistical temperature trends. How would more respectful debate, between scientists who trust homogenized trends and those who don’t, help the public better understand climate change?


FEEDBACK

I would appreciate any comments to help improve this and subsequent lessons. Send all comments to landscapesandcycles [at] earthlink.net

If you know of any high school or college educators who might give this lesson a trial run, please pass it on.