Monday, December 22, 2014

The Shrinking Ozone Hole

Ozone Hole
Doesn't look too good, but the hole was bigger in the '90s.


 
               Good news for those of you who do not wear sunscreen! Recent measurements show that the hole in the ozone layer is slowly filling back in, and it is not just a natural process causing this: it is a change in human action. Through a combination of international cooperation and internal regulation that is rarely seen in politics, the world banded together to combat an issue that could very well have led to the end of life as we know it. So what is ozone, and why was it in danger? How did we stop the damage, and why did we bother? For the sake of simplicity, I am going to focus on the actions of the United States and how they contributed to saving the ozone layer. The simplicity, however, does not save you from a short chemistry lesson that is essential to understanding this issue.
               Most people know that the chemical formula of the oxygen we breathe is O2. In the upper atmosphere of the earth, these ordinary oxygen molecules are hit with ultraviolet (UV) rays from the sun. This energy can break the bond of a normal oxygen molecule, splitting it into two free oxygen atoms, each of which can then link with another oxygen molecule to form ozone, chemical formula O3. Ozone is also hit by the radiation, and a bond breaks, turning it back into a normal oxygen molecule and a free oxygen atom. This process happens back and forth repeatedly in the upper atmosphere, largely shielding humans, and all other life forms for that matter, from harmful levels of cancer-causing UV rays.
                From the '60s onward, humans were finding creative and productive uses for nifty little compounds called chlorofluorocarbons (CFCs). These gaseous chemicals, composed of chlorine, fluorine, and carbon, were used in everything from aerosol sprays to refrigeration to insulation during this time. By the mid-'80s, scientists realized that levels of ozone in the upper atmosphere were rapidly declining. They rushed to find what was causing the depletion and found that it was CFCs creating the issue. CFC gas often leaked into the atmosphere during production and out of products that were not well maintained. When this gas reached the upper atmosphere, it encountered the UV rays and some of its chlorine bonds broke. As chlorine is very reactive, it can easily strip an oxygen atom off of surrounding ozone molecules. The free oxygen atoms that help create ozone in the first place then link with the chlorine atoms or with other free oxygen atoms instead of with oxygen molecules. This throws the balance of ozone creation and depletion off, eventually leading to an area around the South Pole where there is not enough ozone to shield the planet from UV rays.
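For readers who like to see the chemistry explicitly, the cycles described above can be written out. This is the standard simplified form (hν denotes a UV photon; CFCl3 is one common CFC, used here as an example):

```latex
% The natural ozone creation/destruction cycle
\begin{align*}
\mathrm{O_2} + h\nu &\rightarrow \mathrm{O} + \mathrm{O} \\
\mathrm{O} + \mathrm{O_2} &\rightarrow \mathrm{O_3} \\
\mathrm{O_3} + h\nu &\rightarrow \mathrm{O_2} + \mathrm{O}
\end{align*}

% Chlorine's catalytic destruction of ozone, after UV frees it from a CFC
\begin{align*}
\mathrm{CFCl_3} + h\nu &\rightarrow \mathrm{CFCl_2} + \mathrm{Cl} \\
\mathrm{Cl} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2} \\
\mathrm{ClO} + \mathrm{O} &\rightarrow \mathrm{Cl} + \mathrm{O_2}
\end{align*}
```

Notice that the chlorine atom comes out of the last two reactions unchanged, which is why a single chlorine atom can go on to destroy many thousands of ozone molecules.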
                After discovering this alarming trend, scientists were quick to let the public know what the problem was and what was causing it. Public fear and outrage over this and other environmental catastrophes led to the green movement and ultimately to two important pieces of legislation that decreased worldwide use of CFCs and other ozone-depleting products (ODPs). The first was the Montreal Protocol. Alarmed that the ozone layer was thinning, concerned delegates from most UN member countries gathered in Montreal to do something about it. The plan they came up with in 1987 was the Montreal Protocol, which outlined the plans and timetables for decreasing and eliminating use of ODPs and CFCs. Eventually this plan was ratified by all UN member states. It led to a significant increase in research into ODP alternatives and a gradual decrease in CFC usage.
A chart showing the projected atmospheric concentration of ODPs with and without the Montreal Protocol.

As is the case with many environmental laws in the United States, there was stiff opposition to the plan from some chemical companies and businesses. This is where the second piece of legislation, the Clean Air Act (which had just been strengthened in 1990), comes into play. This law gives the EPA the ability to investigate, fine, and shut down violators of the Clean Air Act, and it lists chemicals prohibited because of their ozone-depleting properties. These actions ensured a gradual yet significant drop in the usage of CFCs.

                With the major contributors of CFCs in check, the ozone layer has started to rebound. With time, the chemicals that deplete the ozone are eventually broken apart by the sun's UV rays and become less dangerous, less reactive gases that no longer interfere with ozone. Current estimates of the restoration of the ozone layer to pre-1980 levels range from 2050 to 2070. Either way, it is great to know that if the threat is dire enough, the world can come together to fix an aspect of the environment.




Sources:

http://www.epa.gov/ozone/mbr/index.html
http://en.wikipedia.org/wiki/Montreal_Protocol
http://www.epa.gov/ozone/strathome.html
http://www.epa.gov/air/caa/requirements.html

Monday, December 1, 2014

Genetic Patents: Who Owns the Genetic Material in Your Body?


Congratulations. For the time being, you are not owned by anyone. Are you surprised that part of you was before? If so, you aren't the only one. You may be horrified to find out that until the recent Supreme Court decision in ACLU v. Myriad Genetics, a whopping 20 percent of the genetic material inside each one of our cells was patented by a host of pharmaceutical companies, labs, and individuals.

                All of this started in the 1980s with the birth of DNA sequencing, bioengineering, and genetic manipulation. Until then, humans had found it difficult to manipulate or record the exact order and placement of the nucleotides that make up every organism's genetic material. As this became possible due to what were, at the time, earth-shattering advances in science, companies began to seek ownership of this information and rights to all research concerning it. At that point, as is often the case with new technology, people did not grasp the gravity of the discovery or clearly see its potential for the future. Then the Supreme Court case Diamond v. Chakrabarty came along, and the justices ruled that though full and exclusive ownership of a natural gene was illegal, ownership of processes concerning a natural gene, in addition to any "man-made" gene, was legal. With the relatively small number of genes that had been sequenced, not to mention the lack of practical uses for them once they were isolated, the issue simply melted away.

                This all changed over the next 30 years. During this time, the Human Genome Project was begun and completed, and scientists now knew how to manipulate, isolate, and find the function of almost any gene in humans or other organisms. What had started as a slow trickle of patents concerning genetics erupted into a deluge of patents, all vying for the exclusive right to study a section of DNA. It was not only animal and plant genes that were on the table, but human ones. There is great promise in companies that isolate and study these genes. For instance, a gene that contributes to a person developing diabetes may eventually be silenced if researched enough. However, many people object to this arrangement, arguing that exclusive rights to a gene bar all other institutions from researching it as well. This is in addition to ethical arguments against ownership of a part of the human body, which, critics say, makes no more sense than patenting an organ like the heart or lungs.

                The Supreme Court upheld this sentiment recently in the case ACLU v. Myriad Genetics. The company Myriad Genetics had applied for a patent on two genes linked to breast cancer. This would give it exclusive rights to research and develop anything having to do with these genetic markers. Of course, this was mainly for financial gain. The company had discovered that, through testing with these genes, an extraordinarily accurate gene test could predict the likelihood that a person would develop breast cancer. This meant it could inflate the price of the test and make significantly more money. Though it sounds cutthroat, defending discoveries this way is standard practice in business. The difference, the justices said, was that these were unaltered genes being patented, not a process concerning them. Because the genes occur naturally, they are no one's to exclusively own. With this, the patents of many companies were deemed void, opening research opportunities to anyone interested.
                This brings us to the next issue. Pharmaceutical companies can still patent a lot that has to do with a gene. They can patent a method of finding the gene, isolating it, silencing it, or activating it. In addition, any change to a gene accomplished through any method makes that gene "man-made" and therefore eligible for a patent. This area of law is still relatively new, and the boundaries have yet to be established. Depending on the direction the collective argument goes, we could one day live in a world in which certain genes found naturally in people's bodies, but "made" generations ago, are the intellectual property of a company. Are you scared yet? Excited? Only time will tell which side of the argument is right. Either way, this quiet court decision will go down as one of the most important in medical or scientific history.


For more information on this issue (or a better explanation than I have given), visit the link below and listen to the NPR discussion of the topic. There is also an embedded video outlining the basics of the subject.

http://www.npr.org/templates/story/story.php?storyId=125361332


 









Sources:


http://www.dailytech.com/Federal+Court+Rules+it+is+Illegal+to+Patent+Unaltered+Human+Genes/article18033.htm

http://www.extremetech.com/extreme/151686-do-you-own-your-own-genes-or-can-big-pharma-patent-them

Thursday, November 6, 2014

The (False?) Promise of Kinetic Energy Generation




                A rising trend in the green technology world is experimentation with power generation not from biofuel, water, or even wind, but from the movements of people in their everyday lives. This is called kinetic energy generation, and you may have heard of it briefly in the past few years. For instance, there is a dance floor that harnesses the movements of those above it to generate electricity. In addition, Nokia recently introduced a phone that charges itself by generating power from the vibrations of its user's movements. Such electricity generation techniques seem to hold the key to sustainable energy as fossil fuels become harder to come by and consumers look for green alternatives. Before making judgments, however, one must examine the whole technology and its implications.

                First things first: how does this stuff work? In simple terms, a magnet bounces back and forth inside a coil of copper wire to produce electricity. In more scientific terms, the kinetic energy generator works because of Faraday's law of induction. The movement of the magnet moves its magnetic field, and the changing field pushes electrons through the copper wire and into a battery. This is generally how these generators work, but as it is a relatively new and open field, producers of this technology are incredibly stingy with any real, solid, thoroughly explained information.
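We can still put rough numbers on the physics. Below is a minimal sketch of Faraday's law (EMF = −N·dΦ/dt) for a magnet oscillating inside a coil. Every number here, the turn count, field strength, coil size, and shake frequency, is an illustrative assumption, not a spec from any real product:

```python
import math

def peak_emf(turns, field_tesla, coil_radius_m, shake_hz):
    """Peak EMF for a sinusoidally varying flux.

    Models the flux through the coil as Phi(t) = B * A * sin(omega * t),
    so the induced EMF is -N * dPhi/dt, whose peak value is N * B * A * omega.
    """
    area = math.pi * coil_radius_m ** 2        # coil cross-section (m^2)
    omega = 2 * math.pi * shake_hz             # angular frequency (rad/s)
    return turns * field_tesla * area * omega  # volts

# Hypothetical pocket-sized harvester: 200 turns, 0.1 T field at the coil,
# 5 mm coil radius, shaken at 5 Hz by a walker's motion.
emf = peak_emf(turns=200, field_tesla=0.1, coil_radius_m=0.005, shake_hz=5)
print(f"peak EMF ≈ {emf * 1000:.0f} mV")  # only tens of millivolts
```

The output scales linearly with the shake frequency, which is one way to see why long, fast, repetitive motion matters so much more than a few strong movements.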

                What we do know about generating kinetic power is that it's all about long periods of repetitive, constant motion. For instance, a biker, jogger, or walker who wears one of these devices while exercising would create a much greater amount of energy than a baseball player who wears the device during a game. Repeating movements over long time periods matter more than the strength of each motion.

                Recently, the Department of Defense recognized the military possibilities of this technology for use in combat situations. Currently, soldiers lug twenty- to thirty-pound batteries in their rucksacks to power their communications tools. By adopting kinetic energy generation techniques, soldiers could lose the heavy batteries and replace them with smaller ones that charge constantly as they move. This could increase the mobility and energy reliability of troops. To ensure fast development of this technology, the DOD invested nearly $10 million in it.

                Well, if kinetic energy generators work for soldiers, surely they will work for civilians in the US, right? In most cases, the answer is no. Kinetic energy generators rely on the movement of the wearer to create electricity. Most sedentary Americans produce only enough energy to power a phone for 15 to 30 minutes. This means the technology is far from able to do anything more than briefly extend phone life. The cost of this technology is also an issue. As there is no real possibility for municipal or government use of the energy, it would come down to consumer action to popularize the product. Current models of kinetic energy producing products sell for between $150 and $300, making it easier and cheaper to simply charge devices at power outlets. The only thing a consumer really gains from this product is a superficial (and largely false) sense of helping the environment.

                What about the kinetic dance floor? Surely this technology could produce enough energy to make a difference in energy usage. Each floor tile, about 30 inches square, produces about 35 watts in ideal conditions. Now imagine a whole room in a busy New York City subway station covered with these tiles: 5,000 tiles total, operating at peak production 24 hours a day. Together, these tiles could produce 4,200 kWh (kilowatt-hours) of electricity per day. This sounds like a lot of energy; however, the average American uses about 250 kWh per day. That means the entire system would provide power for only 17 people, and only if it operated at maximum efficiency all the time (both of which are basically impossible to achieve).
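The back-of-the-envelope math above is easy to check. Here is the same calculation in a few lines of Python, using only the article's own figures (5,000 tiles, 35 W each, 250 kWh per person per day):

```python
TILES = 5000          # tiles covering the imagined subway station
WATTS_PER_TILE = 35   # peak output per tile
HOURS_PER_DAY = 24    # assume (impossibly) round-the-clock peak production
KWH_PER_PERSON = 250  # daily US per-capita energy use, per the article

# watts -> kilowatt-hours: multiply by hours, divide by 1000
total_kwh_per_day = TILES * WATTS_PER_TILE * HOURS_PER_DAY / 1000
people_powered = total_kwh_per_day / KWH_PER_PERSON

print(f"{total_kwh_per_day:.0f} kWh/day")  # 4200 kWh/day
print(f"~{people_powered:.0f} people")     # ~17 people
```

Even under these absurdly generous assumptions, a whole station floor covers the usage of about 17 people.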
                So where do we go from here? Obviously, this technology only makes sense for the most active of people and would provide energy for only the occasional charging of an electronic device. The vast majority of Americans move too little and use too much electricity for this technology to be feasible. Without nearly magical improvements in efficiency, kinetic energy generators are and will stay impractical. Green energy advocates should abandon this area; it is a nice theory, but an incredibly weak producer in practice.



Sources

http://www.extremetech.com/extreme/161079-kinetic-energy-harvesting-everyday-human-activity-could-power-the-internet-of-things








Thursday, October 30, 2014

The Peshtigo Fire of 1871

Artist's Rendition of Pre-Fire Peshtigo
           These days, with the climate warming and the weather changing, it is becoming increasingly common to hear about wildfires both in the United States and abroad. Australia is plagued with fires as its inner regions warm and dry up. In the US, California and other western states are struggling to keep wildfires at bay amid higher temperatures and less rain. It would make sense to assume that the deadliest fire in the history of the United States took place there; however, it did not. Its location was far cooler: northeastern Wisconsin, in and around the village of Peshtigo.

U.S. Weather Bureau Map - 8 Oct 1871
            The wildfire took place in 1871 in northeastern Wisconsin and into Michigan. The weather conditions that summer were perfect for fires. There had been droughts and unusually high temperatures throughout the country, turning the parched landscape into a giant tinderbox. On October 8, a cold front started to move through the region, and its strong, steady breeze supplied the last factor needed to make any fire that started enormous. A total of five separate fires were sparked across Michigan, Illinois, and Wisconsin that day, the most famous of which was the Great Chicago Fire; however, none of these fires had the same intensity as the Peshtigo fire.

            There are a number of ways that the initial spark that started the deadly blaze could have originated. The area depended on agriculture and lumber for its income. In farming, it was commonplace to burn small sections of land to clear them, and one of these burns could have gotten out of control. The area lumber mills kept great amounts of raw lumber around them and also produced sawdust and bark as waste; in such dry conditions, a stray spark from a saw could have ignited the blaze. Additionally, sparks from train wheels could have lit dried grass around the tracks. Whatever the source, once the blaze started, it grew uncontrollably on the plethora of dried material it could easily burn. Bucket brigades were no match for these beasts.

            Before the terrified residents knew it, they were dealing with a massive fire. Strong winds caused the fire to move at perhaps 40 to 50 miles per hour and develop into a firestorm. The blaze reached five miles wide and, by some estimations, a mile high. Even the people on the other side of the bay were not safe, as the firestorm jumped more than ten miles across Green Bay. Even more terrifying were the fire tornadoes that developed during the chaos. One whirled through the village of Peshtigo, flattening and incinerating everything in its path. The residents of another town close to Peshtigo tried to flee, but they either burned to death while running or boiled as they tried to shelter in a small river.

            When the fire ended, it had burned 1.2 million acres and killed an estimated 1,200 to 2,500 people. The exact number is unclear, as some towns were burned so completely that there was no one left to identify bodies or estimate how many had lived in the area. Nearly 350 people, about half the population of Peshtigo, were buried in a mass grave near the town. The desolation was so complete that many left the area rather than live among the painful memories. Accordingly, rebuilding was slow. The state government was not in session, so the only immediate aid to reach the town was a train-car load of supplies commandeered by the governor's wife. Eventually the residents received some government assistance, but it was too little, too late; their lives had already been charred beyond recognition.

The Peshtigo Fire as depicted in Harper's Weekly

            A fire of this magnitude is unlikely to happen again in the United States. The ability to fight fires from planes and the practice of controlled burns to prevent larger fires have made conditions exponentially safer. In addition, better communication systems and faster response before and after wildfires can save many of those caught in these dangerous situations.







Sources:
http://www.crh.noaa.gov/grb/peshtigofire.php
http://www.peshtigofire.info/
http://en.wikipedia.org/wiki/Peshtigo_Fire

Wednesday, October 15, 2014

Bananas: More Than You Ever Wanted to Know

 


                The banana. Possibly the humblest of fruits. So familiar to us that we seldom think about how this tropical product gets to our thoroughly un-tropical area. As it turns out, the banana is actually an amazing representation of the diffusion of a product and the globalization of consumption. To realize the full extent of the banana's global takeover, we must start thousands of years ago in the tropical forests of Papua New Guinea.
Native Banana

  The banana that we know today is almost completely unrecognizable compared with its native counterpart. The fruit that is now easily peeled, seedless, and sweet was considerably less luxurious in its native form. In fact, it was full of seeds, and the peel was fairly difficult to remove. However, the people of the Kuk valley in present-day Papua New Guinea domesticated the fruit, slowly decreasing the size of the seeds and increasing the amount of edible flesh. Some of the banana plant's seeds were bred to be so small that the plant became largely asexual. From this early start, cultivators spread the plant throughout the surrounding islands and eventually to mainland Asia and northern Australia. In some cases, they seemed to simply plant the bananas and promptly leave the area, only to rediscover the plant years later growing wild. Through this process, the peoples of mainland Asia discovered, cultivated, and incorporated the banana into their cultures.
 
                As time progressed, more selective breeding of the banana took place in Asia and eventually India. Some varieties were bred to eat, others for harvesting fibers for cloth, and still others for their large leaves. In this way, the banana became an integral part of the culture of the Far East. As the Middle and Far East started to trade more extensively, the banana kept moving farther west. With the rise of Islam and the Moors, the banana moved further still, reaching Northern Africa and Spain around 1200 AD. The fruit was further popularized as more people came in contact with it, though few who lived outside the banana's growing areas could find it.
 

Ugandan Banana Plantation
                The age of discovery brought the banana to its final destination, the Americas. As it was a tropical plant, Spanish conquistadores brought bananas along on their journeys. From here, banana plantations, whether run by slaves or communities, popped up around the Caribbean and in South America. From this point, bananas were grown as either a subsistence crop or a local sale crop in the Americas.

                So the banana was now known on every inhabited continent. What could be next? The answer is slow but steady expansion. As the banana spread to every viable growing location (tropical climates, hot and wet), the ability of people to transport crops also grew. In this way, the banana became one of the most internationally intertwined fruits, with many countries growing it and even more consuming it. Fast forward now to around the year 1900. There are significant banana crops being grown in Australia, India, tropical Africa, and many countries in Latin America. The world is now connected by many kinds of transportation, making goods and information travel much more smoothly. There is now such a thing as international competition in the banana trade. When there is a profit to be made, there is always a company that rises to the top. For bananas, this company was United Fruit. The company flourished throughout the next 70 years despite, or because of, the questionable business decisions it made.
 
                For instance, United Fruit was largely responsible for producing the aptly named "banana republics," countries that rely extremely heavily on one export. United Fruit largely controlled a few of these countries, including Guatemala, Honduras, and Costa Rica, by establishing a trade monopoly. It then had enough power and influence to essentially steer the internal politics of each country to reflect the goals of the company. This led to numerous cases in which the rights and needs of workers were ignored by the large company that controlled them.
 
                As with many companies that seem unstoppable, the opposite proved true. After only 70 years in power, a decline in profits, caused in part by competition from the Standard Fruit Company (eventually Dole), pushed United Fruit to merge with another company. This merger would eventually become Chiquita. Here began a trade war between two multinational corporations willing to do anything to ensure uniform quality and low cost of fruit. But neither was prepared for the challenges ahead.
 
                So remember near the beginning of the article when the indigenous Papua New Guineans bred bananas to have such small seeds that many became asexual? Good news, right? Wrong. This also makes the banana one of the most fragile fruits in terms of susceptibility to pathogens. Because of asexual reproduction, all the banana plants (not trees; they are actually giant herbs) share the exact same genetic material. If a pathogen finds its way to a banana plant and the plant has no immunity, it is not just that one plant that dies; the disease will spread throughout the entire plantation, country, and (this has happened before) world. In the 1950s, as the multinational corporations vied for the best prices and growing locations, they were all struck by a banana blight, an earlier strain of the soil fungus behind Panama disease, that spread quickly from country to country. In a matter of a decade, all farmers were forced to change varieties of bananas, as they were unable to stop the fungus. The new plant type had resistance, but the fruit was not nearly as good. Just think about that when you are eating your next banana. You are eating the world's second choice.
A real bummer. The fungus is everywhere.

                But the problem does not stop there. The type of banana currently grown around the world is being attacked by what is called Panama disease, a deadly fungus. It is devastating banana crops and threatening to wipe out the currently preferred variety of banana. More bad news: there is really no good backup variety. When these bananas are gone, we could have reached the end of commercial banana growing.
 
                This brings us to the impact of the banana's globalization. It has really done a lot to change the world. Poor tropical countries around the world grow bananas as a way to get into the international market, though they often endure terrible working conditions and unfair business practices from large corporations when doing so. Some countries rely so heavily on the export that banana-buying companies have major influence in their governments, earning them the title of banana republic. Bananas have also helped alert the world to the problem of monoculture in agriculture. When only one type of banana is grown and something bad happens to that type, there is no easy way to instantly switch all the crops to another variety. An enormous economic collapse of that product usually ensues, which benefits no one. To combat this, plantation owners should try to diversify their crops to ensure that there is not a complete collapse in the event of a worldwide banana disease.
                Long story short, bananas are a perfect example of globalization. They were once an unpleasant little herb growing on a hot, wet, jungle-covered island. They were domesticated, transported, artificially selected, diffused, and universally accepted into cultures around the world. Despite their easy conquest of the continents, they are in danger because of monoculture and a sneaky little fungus by the name of Panama disease. Honestly, you could have skipped this article and lived the rest of your life without regret. Just go. Eat a banana and really try to enjoy it. You may not be able to for long.
 
Video Link: http://www.youtube.com/watch?v=IKRCIyhheBE (soon to be embedded)
 
Sources:



Thursday, September 4, 2014

GMO Labeling


               There is currently a particularly polarizing debate raging over whether foods that include genetically modified organisms (GMOs) should be required to carry a label. Each side makes valid arguments concerning the potential dangers, or lack thereof, of taking certain actions. To fully understand the debate, one must explore what a GMO actually is.


                The simple definition of a GMO is an organism that has had a section of its DNA replaced with DNA from another species. These changes cause the organism to produce different proteins at different times, changing one of its characteristics. The changes are almost always made to increase crop yields through tolerance of cold, heat, drought, disease, or insects, making GMOs vastly less expensive and less risky for farmers to grow. For instance, genetically modified corn may simply mature faster or produce more than its unmodified counterparts, a relatively small change. The modifications, however, can be much more drastic: sections of rat DNA have been replaced with jellyfish DNA, causing the rats to glow in the dark. Obviously, none of the genetically modified foods on the market will glow. In fact, the changes are usually so small that there is virtually no difference between the modified and unmodified food.


                This is where the debate on GMO labeling really starts. Scientifically, there is no meaningful difference between GMOs and other organisms on the market, so most of the scientific community feels there is no reason to label the products separately. Many consumer advocacy groups, however, see the situation very differently. They cite the possibility of increased allergy exposure from new "franken-foods" containing DNA from more than one species. Some short-term studies have also shown health problems in rats that consumed certain types of genetically modified corn (specifically a type that produces its own pesticide). These groups believe that, given these dangers, consumers have a right to know if their food contains GMOs. This would allow them to make informed decisions and choose whether or not to buy products or produce with modifications.


                Anti-labeling advocates oppose the requirement, believing it would have a variety of unintended, negative outcomes. One of the first, they feel, would be mass hysteria over the safety of the GMOs contained in a great number of common foods. This would push consumers to pay more for the organic equivalents of their items, not to mention be financially devastating to the companies producing foods containing GMOs. Consumers could also end up less healthy, as they might reject perfectly healthy and safe produce simply because of a sticker or marking. Farmers would also be hard hit, as they would have to backtrack to the less productive and less reliable unmodified plants. This would mean less agricultural output, ultimately putting the country at risk of a food shortage. Additionally, anti-labelers cite the increasing trend of voluntarily labeling non-GMO foods, which would make mandatory labeling of GMOs redundant.


                In the end, the fight over GMO labeling is unlikely to move anywhere at the national level. Lobbyists from major food-producing companies (Nestle, Kellogg's, etc.) are doing everything they can to keep legislation on this issue from passing, citing the economic difficulties the companies might face if forced to label their products containing GMOs. In the meantime, GMOs continue to increase crop efficiency in an increasingly cramped and hungry world. Consumers can expect more difficulty avoiding GMO products in years to come and must be vigilant if they are to do so successfully. It is ultimately up to consumers to research whether their food contains GMOs and to decide whether they will take the risk of consuming them.







Links/Related Articles
http://www.forbes.com/sites/richardlevick/2014/08/25/gmos-a-spoonful-of-sugar-helps-the-medicine-go-down/
http://www.forbes.com/sites/jonentine/2014/08/25/why-liberal-americans-are-turning-against-gmo-labeling/
http://www.labelgmos.org/
http://uk.reuters.com/article/2014/09/03/us-usa-gmo-labeling-idUKKBN0GY09O20140903
http://www.slate.com/articles/health_and_science/science/2014/05/gmo_food_labels_would_label_laws_in_vermont_maine_connecticut_increase_food.html
http://www.scientificamerican.com/article/labels-for-gmo-foods-are-a-bad-idea/