Tuesday, September 30, 2014

Fuzzy expert systems and GIS for cholera health risk

Cholera is listed as an internationally quarantinable disease by the World Health Organization, and it is one of the most researched communicable diseases, yet it still wreaks havoc on countries in southern and eastern Africa. Outbreaks in 2000 were traced to the uMhlathuze River in the northern part of the KwaZulu-Natal Province. Risk factors for cholera outbreaks include a hot, humid climate and socio-economic conditions. The CSIR (Council for Scientific and Industrial Research) has used GIS tools to assess likely locations for outbreaks. Its models assume that environmental conditions such as algal blooms trigger the growth of Vibrio cholerae, the bacteria that cause cholera. Once Vibrio is present in the water, the spread of the disease depends on whether people have access to safe water. This risk potential model was designed to predict cholera outbreaks and, ideally, to help prevent them in the future.

 By researching the environments in which cholera outbreaks occur and assessing the risk of outbreaks, the researchers hope to reduce the spread of cholera through well-planned resource allocation. The model below describes how a cholera outbreak can be triggered by an algal bloom.

The cholera outbreak potential model takes into account average annual rainfall, mean maximum daily temperature on a monthly basis, and the ‘month of first rains’ per pixel (salts washed in by the first rains run into the river and affect its salinity). Results from the model show long-term cholera outbreak risk; they do not, however, show the location and timing of outbreaks. An expanded model will incorporate remote sensing data to supply inputs such as phytoplankton levels and the spread of algal blooms. Field measurements will still be needed for variables like temperature, daily rainfall, dissolved oxygen, salinity, oxidation-reduction potential, presence of bacteria, and pH. The model will take into account the weather data around the time of past cholera outbreaks so that predictions of future outbreaks can be made. Funding has been granted to this project to make the remote sensing component possible.
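To make the fuzzy-expert-system idea concrete, here is a minimal sketch of how such a model might combine the three inputs into a per-pixel risk score. The membership breakpoints and the min-combination rule are illustrative assumptions, not the published model's actual curves.

```python
# A minimal sketch of a fuzzy risk model in the spirit of the CSIR approach.
# Membership breakpoints and the combination rule here are illustrative,
# not the published model's values.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def outbreak_risk(annual_rain_mm, max_temp_c, month_of_first_rains):
    # Degree to which each input favours Vibrio growth (hypothetical curves).
    wet = tri(annual_rain_mm, 400, 900, 1500)
    hot = tri(max_temp_c, 22, 30, 40)
    # Early first rains flush salts into rivers; months 1-12, peak risk near month 10.
    early_flush = tri(month_of_first_rains, 7, 10, 13)
    # Fuzzy AND (minimum) across conditions, a common conjunction operator.
    return min(wet, hot, early_flush)

print(outbreak_risk(950, 31, 10))   # high-risk pixel
print(outbreak_risk(300, 20, 4))    # low-risk pixel
```
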
Fleming, G., van der Merwe, M., & McFerren, G. (2006). Fuzzy expert systems and GIS for cholera health risk prediction in southern Africa. ScienceDirect.

Monday, September 29, 2014

Big Data

     With so many new technological innovations, it has become increasingly common to gather data about users and examine it in a geographical context. "Big data" refers to the databases that belong to large organizations such as telephone companies and media application developers (Foursquare, Twitter, etc.).

This map represents the movement of Twitter users. These patterns can be studied to learn more about how information spreads in a geographical context.

     Big data is a powerful tool for business analysts because it allows them to study their consumers and, to an extent, their locations. This in turn helps companies gain a more comprehensive view of their target audience. Some of the data collected by these companies is considered VGI, or Volunteered Geographic Information. The information is called "volunteered" because the user agrees to let the company collect information about how they use the product. Examples of this symbiotic relationship between business and consumer include products like MapShare and Google Map Maker.

The map above depicts tourist density. The information was gathered through a photo sharing website / application called Flickr.

    While gathering information about users and consumers by collecting VGI can be useful, many individuals do not approve of data collection by third parties, so the practice also breeds mistrust and irritation among users. Beyond the mistrust, VGI is not a substitute for a random population sample: little is known about contributors beyond the fact that they are generating information for a third-party database. With so little knowledge of their economic status, context, and motives, it is difficult to generalize about the population contributing the data.

     For this method of data collection to be effective, more emphasis must be placed on where the data comes from and on its contextual conditions. It has also been recommended that the practice be viewed as a conversation between two participating parties rather than a one-way sender-recipient relationship. Under that kind of relationship, richer data will likely be volunteered, because a more trusting, communicative relationship between company and user should, in theory, yield more insight about the consumer.


Fischer, F. (2012, April 1). VGI as Big Data. A New but Delicate Geographic Data-Source. Retrieved September 29, 2014. 

Volunteered Geographic Information: Pros and cons

                VGI (Volunteered Geographic Information) is an up-and-coming form of big data.  What is “big data”, you ask? “In recent years databases in enterprises have grown bigger and bigger. Mobile phones tracking and logging their users’ behavior, social media applications and an increasing number of interconnected sensors, create more and more data in increasingly shorter periods of time. This valuable data is called big data.”  VGI can be incredibly useful in that it lets users create a great deal of sharable, valuable data.
Tourist density and flows calculated from Flickr database

                However, there are drawbacks to VGI data.  One problem is that “VGI datasets rather reflect the characteristics of specific online communities of interest but do not necessarily fulfill the qualities of a random population sample.”  VGI is not well distributed across socioeconomic status, physical location, or nearly any other variable, and this is its biggest problem. The future of VGI must reconcile this skewed distribution with its enormous potential.



Citation:
Fischer, Florian.  “A New but Delicate Geographic Data-Source: VGI as Big Data”.  GEO Informatics.  2012.

Sunday, September 28, 2014

Research Using GIS Gives Insight to Extent of Local Food Flows in Philadelphia

Peleg Kremer and Tracey L. DeLiberty combined statistical and geographic research, using geographic information systems (GIS) methods such as remote sensing, to examine how locally produced Philadelphia's food is. As they discuss, the industrialized, urbanized food system, with its long travel distances from producer to consumer and its use of pesticides to preserve food quality in transit, contributes to negative health effects for both humans and the environment. This travel distance is often referred to as “food miles”, and as a general rule, the more food miles produce must travel, the more negative its effects on the consumer and the environment.
Kremer and DeLiberty used GIS techniques to compile maps expressing the distance from producers, in this case farms, to consumers. The consumers’ point of consumption was represented by farmers’ markets, the end of the produce’s travel route. According to their data, the average length of these routes is sixty-one food miles. Here is a visual representation of these routes from Kremer and DeLiberty’s research, revealing the extent of food miles in Philadelphia’s local food system:

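As a rough illustration of how food-mile figures like the sixty-one-mile average can be computed, here is a small sketch that measures great-circle distance between farm and market coordinates. The coordinates and routes are invented; Kremer and DeLiberty worked from real farm and farmers'-market locations.

```python
# A rough illustration of computing "food miles" from farm and
# farmers'-market coordinates. The coordinates below are made up.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959 * asin(sqrt(a))  # Earth radius ~3959 miles

# (farm, market) coordinate pairs: hypothetical routes into Philadelphia.
routes = [
    ((40.20, -76.73), (39.95, -75.16)),   # Lancaster-area farm -> Center City
    ((40.33, -75.93), (39.95, -75.16)),   # Berks-area farm -> Center City
]
miles = [haversine_miles(*farm, *mkt) for farm, mkt in routes]
print(f"average food miles: {sum(miles) / len(miles):.0f}")
```
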
Kremer and DeLiberty also used GIS to estimate, to some extent, the capacity of residential land to produce food. This was done with infrared remote sensing techniques that distinguish types of vegetation, making it possible to see the food-production potential of residential areas. They acknowledged that relying on residential land would introduce difficulties, such as land quality and residents’ willingness to participate; even so, they estimated that if just five percent of that land were used, around 9.9 million pounds of food could be produced at the most local level, benefiting both consumers and the environment.
Kremer and DeLiberty’s research gives insight into the flow of food in Philadelphia and can be very helpful to the food-localization movement. To see their full research article, go to: http://www.sciencedirect.com/science/article/pii/S0143622811000087
Works Cited
Kremer, P., & DeLiberty, T. L. (2011). Local food practices and growing potential: Mapping the case of Philadelphia. Applied Geography, 31(4), 1252-1261.

Tuesday, September 23, 2014

A disaster scenario for the city of Quito, Ecuador.

In 1995, it was difficult, if not impossible, to predict an earthquake far enough in advance to avert a catastrophe. It was possible, however, to determine the socio-economic consequences an earthquake could have on a given region. That is what Jean-Luc Chatelain and his colleagues did for the region that interested them most: Quito, the capital of Ecuador. Subject to intense seismic activity, Quito was an ideal city for developing an earthquake scenario, all the more so because the city's government authorities needed to guard against this kind of danger.

The team first had to gather enough data to carry out the project. Thanks to the specialized urban database managed by the SAVANE geographic information system, made available by the Planning Directorate of the municipality of Quito and ORSTOM in 1991, Jean-Luc Chatelain and his colleagues had a privileged working tool. The data managed by the GIS gave access to the information needed to build an earthquake scenario, such as historical seismicity (the earthquakes that struck in the past), the distribution of wave intensities across the city, and the city's topography and geology. To this they added information on the city's housing and infrastructure. These data, together with a matrix (a standard calculation for estimating the risk of destruction), were then imported into the geographic information system to produce a map showing the most at-risk zones of the city.


 Figure 1: Distribution of the seismic intensities produced by the coastal earthquake (Quito) + distribution of construction types (Quito)

To arrive at this final map, the first step was to simulate the propagation of seismic waves and their intensities across the city. Three historical seismic events were selected, each with a different hypocenter; coupled with the city's topographic and geological data, this yielded a map of the distribution of seismic intensities across the city for each of the three earthquakes. In a second step, the data on the city's housing and infrastructure were also mapped, categorized by construction type.

Figure 2: Final map showing the percentage of infrastructure damage in the event of a seismic episode.

For the final touch, these maps were combined in the GIS: the datasets were cross-referenced, and standard damage-assessment matrices relating construction type to the seismic intensity of each zone were added. These are the same matrices used for earthquake scenarios in California and Japan. This is how the team obtained the map estimating the percentage of damage per zone.
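As a rough sketch of this final overlay step, the fragment below crosses a per-pixel building-type grid with a per-pixel intensity grid through a damage matrix. The percentages are placeholders, not the California/Japan matrices used in the study.

```python
# A minimal sketch of the overlay: crossing a seismic-intensity grid with a
# building-type grid through a damage matrix. Percentages are placeholders.
import numpy as np

# Rows: building type (0=adobe, 1=unreinforced masonry, 2=self-built,
# 3=reinforced concrete). Columns: Mercalli intensity VI..IX.
damage_matrix = np.array([
    [10, 30, 60, 90],   # adobe
    [ 8, 25, 55, 85],   # unreinforced masonry
    [ 6, 20, 45, 75],   # self-built
    [ 1,  5, 15, 35],   # reinforced concrete
])

building_type = np.array([[0, 1], [3, 2]])   # per-pixel building class
intensity = np.array([[3, 2], [3, 0]])       # per-pixel intensity index (0=VI..3=IX)

# Per-pixel expected damage (%) via NumPy fancy indexing.
damage_pct = damage_matrix[building_type, intensity]
print(damage_pct)
```
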

In conclusion, the final map does indeed reflect both the intensity of the waves and the types of buildings and infrastructure. In the city center (the rectangle), the oldest part of the city, the "adobe", "unreinforced masonry", and "self-built" buildings would be hit hardest in an earthquake, whereas the rest of the city, its newest part, is built mostly of "reinforced concrete", which makes buildings far more resistant to earthquakes.
Finally, it is important to underline the value of GIS in regions like this, where politicians are not always attentive to the hazards around them and, in particular, to the socio-economic problems those hazards can cause.

Monday, September 22, 2014

U.S. Census Data, GIS, and the Preston Medical Library.

     While GIS can be a valuable asset on its own, when paired with other sources of information it becomes a nearly limitless means of data collection, analysis, and presentation. In 2011, the Preston Medical Library in Tennessee used a combination of U.S. census data and GIS programs to better serve its customers through an outreach program called the Consumer and Patient Health Information Service, or CAPHIS.

     Because greater medical risk has been associated with lower literacy rates, the library's goal was to provide CAPHIS customers with information they could understand. The program worked as follows: when customers or patients called the library seeking medical information, the library used a Microsoft Access query on ZIP code to locate the caller, then used census data and GIS to examine literacy rates at the caller's location. Those literacy rates served as an estimate of the caller's socioeconomic status, and the library could then provide the consumer with medical advice and information written at a matching literacy level.
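A simplified sketch of that lookup chain might look like the following; the ZIP codes, literacy rates, and thresholds are invented for illustration (the library itself worked in Microsoft Access).

```python
# A simplified sketch of the lookup described above: caller ZIP code ->
# census-derived literacy estimate -> reading level for materials.
# ZIP codes, rates, and thresholds here are invented.

literacy_by_zip = {          # share of adults at or below basic literacy
    "37920": 0.22,
    "37921": 0.14,
    "37922": 0.08,
}

def reading_level(zip_code: str) -> str:
    rate = literacy_by_zip.get(zip_code)
    if rate is None:
        return "unknown ZIP: default to plain-language materials"
    if rate >= 0.20:
        return "easy-reading consumer materials"
    if rate >= 0.10:
        return "standard consumer materials"
    return "standard or technical materials"

print(reading_level("37920"))   # -> easy-reading consumer materials
```
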

   The benefits of combining GIS and census information are numerous. One key insight gained from this methodology was a new picture of which diseases and medical conditions are prevalent in particular counties. Another advantage was that all of the information could be placed on a map: maps were created to display calls per 100,000 people and the rate of ambulatory disabilities per 100,000 people.



     All of this information could have been gathered simply by recording the numbers described above. By using GIS, however, the spread of disease, the distribution of callers, and their proximity to under-served areas become much easier to grasp. GIS paints a picture of a large, complex body of information and presents it in a streamlined, manageable format.

Socha, Y. M., Oelschlegel, S., Vaught, C. J., & Earl, M. (2012). Improving an outreach service by analyzing the relationship of health information disparities to socioeconomic indicators using geographic information systems. J Med Libr Assoc, 100(3). https://lms.southwestern.edu/file.php/4373/Literature/Socha-2012-HealthOutreach.pdf

The emergence of GIS qualifications and their effects on GIS jobs

Careers in GIS are an interesting phenomenon because the field is still relatively new, yet the demand for GIS work is high.  As such, there are heated deliberations over what constitutes a “GIS professional”.  DiBiase thinks that a “GIS Professional [is] someone who makes a living through learned professional work (see table below) that requires advanced knowledge of geographic information systems and related geospatial technologies, data, and methods” (1).

Bill Huxhold was a GIS professional who, in the nineties, promoted creating a system of qualifications for calling oneself a GIS professional.  Huxhold's protests were heard: he eventually convinced the Urban and Regional Information Systems Association (URISA) to establish a certification committee to study the problem, and his ideas became popular in the wider GIS community.

 As the table above shows, GIS jobs are growing rapidly and are in high demand.  This is partially due to the establishment of qualifications for GIS professionals (broadly, anyone who uses GIS to make a living).  “In 2010, DOLETA issued a Geospatial Technology Competency Model (GTCM) that identifies the specialized knowledge and abilities that successful geospatial professionals possess” (4).  Employers, students, and educators can all use the tool for their own purposes.  There are now multiple certifications GIS professionals can acquire, such as the GISP, that signal to employers that a professional has the right skill level.



Citation:
DiBiase, David.  2012. “Strengthening the GIS Profession” ArcNews.
Rebecca Huteson
Mapping could help stop Ebola’s Spread

The spread of the Ebola virus through West Africa isn’t thoroughly understood. Lars Skog has researched the spread of past epidemics, such as the Black Death, the Russian flu pandemic of 1889, the Asiatic influenza of 1957, and the swine flu, to better understand how such diseases move. The spread of the Black Death in the fourteenth century bears a resemblance to the spread of Ebola because both are carried by small mammals. Based on current knowledge, Ebola is spread by fruit bats: some rural West Africans hunt them, and the disease can also be transmitted through their droppings.

Answering questions about which habitats the bats prefer, what factors change those habitats, and how the virus affects the bats’ health will help explain how the virus spreads. Geoinformation technology is already available to public health response organizations, but collecting more data about these bats and the spread of Ebola could help stop it.

Callahan, D. (2014). Mapping could help stop Ebola’s spread. Directions Magazine.


Saturday, September 20, 2014

Looking at the University of Texas’s Solar-Power Sustainability Potential


There has been a recent push to integrate more solar-power technology into the University of Texas campus. As you may know, solar panels are often placed on building roofs, where they convert sunlight into electrical power for the building's interior. Solar power is meant to be a more sustainable, environmentally friendly alternative. There are, however, obstacles to putting up roof panels and increasing the university's share of solar power. First, some buildings at the University of Texas feature red tiled roofs that hold sentimental value within the community, and many people would not be fond of mounting solar panels, with their large steel frames, atop the roofs' historic image. Second, integrating solar-power technology would not be the most economically sound choice for the university's budget: the campus's current power generation system is working well, and adding solar is not the highest priority.

            However, it is important to note that the University of Texas has great potential for solar technology. The extent of this potential was mapped with geographic information systems (GIS) technology by parties affiliated with the solar-power movement. Here is a diagram of the university’s rooftop solar-power potential, constructed from a digital elevation model, which portrays the elevation of objects from an aerial view, with light detection and ranging (LIDAR) technology shaping out the buildings:
*The closer the color is to red, the more potential that area of rooftop has for solar-power generation. Areas shown in colors closer to blue are more often in shade and would be less ideal for solar panels.
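As a toy version of the kind of shading analysis behind such a map, the sketch below takes a small lidar-derived surface grid and flags cells that a taller neighbor would shadow at one assumed sun position. Real solar-potential tools integrate over the whole sky and year; every number here is made up.

```python
# Toy shading check on a lidar-derived surface model: mark cells that a
# neighbour toward the sun would shadow at a single assumed sun position.
import numpy as np

dsm = np.array([                 # surface heights in metres (hypothetical)
    [12, 12, 12, 30],
    [12, 12, 12, 30],
    [12, 12, 12, 12],
], dtype=float)

sun_elev_deg = 35.0              # assumed sun elevation
cell_size = 10.0                 # metres per cell
tan_sun = np.tan(np.radians(sun_elev_deg))

shaded = np.zeros_like(dsm, dtype=bool)
rows, cols = dsm.shape
for r in range(rows):
    for c in range(cols):
        # Walk toward the sun (here: increasing column) and see whether any
        # cell rises above the sun ray leaving this cell.
        for d in range(1, cols - c):
            if dsm[r, c + d] - dsm[r, c] > tan_sun * d * cell_size:
                shaded[r, c] = True
                break

print(shaded)   # True cells lose direct sun and score low for panels
```
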

            Finally, it is important to note that the price of solar-power technology is falling. Thanks to this information, gathered by Moulay Anwar Sounny-Slitine, a former Master’s student in the University of Texas’s Department of Geography and the Environment, and others involved in the push for more solar power on campus, the ideal places to put solar panels will already be known when the economically appropriate time comes. As the diagram suggests, the university has many rooftops that could hold solar panels, and plenty of them are not made of the prized red tiles that carry such aesthetic sentiment in the university community. According to Sounny-Slitine, this is a fair amount of roof area that can be put to use in the future, an exciting prospect for the University of Texas’s sustainability.

Works Cited

Sounny-Slitine, M. (2011, October 21). Solar Power Potential on the University of Texas Campus. Retrieved September 20, 2014.

Monday, September 15, 2014

Measuring Intercepted Solar Radiation

Overview     

     Many factors influence the true amount of solar radiation that actually reaches the planet's surface, including the orientation of the Earth's surface, sky obstructions, and surrounding topographic features. Because so many factors can alter how much radiation is intercepted, P.M. Rich, W.A. Hetrick, S.C. Saving, and R.O. Dubayah developed an algorithm that allows rapid calculation of the amount of intercepted radiation, with each factor given its own formula to account for its role in the interception.



Two different projections for intercepted solar radiation.
Top: Hemispherical coordinate system
Bottom: Equiangular coordinate system




Process

     To measure radiation, the sky is projected onto a plane and divided into sectors; dividing the sky this way makes it easier to build a table of values to compare and analyze. Each sector of the sky, depending on the amount of radiation received, is assigned a calculated "irradiance" value. Irradiance is the amount of electromagnetic radiation arriving per unit area on a given surface. Having the full irradiance value for each sector allows the influential factors to be accounted for with cosine corrections, and after those factors are applied, the values are put into a table for analysis.
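A stripped-down version of that sector calculation might look like this: each sky sector contributes its irradiance, scaled by how unobstructed it is and by the cosine of its angle of incidence on the receiving surface. The sector values and gap fractions below are illustrative, not the authors' actual code or data.

```python
# Sector-by-sector intercepted radiation with a cosine-of-incidence
# correction. Sector irradiances and gap fractions are illustrative.
import numpy as np

# One sector per row: [zenith_deg, azimuth_deg, irradiance_Wm2, gap_fraction]
sectors = np.array([
    [20,   0, 120, 1.0],
    [45,  90, 200, 0.6],   # partially blocked by an obstruction
    [70, 180,  80, 0.0],   # fully blocked (e.g. by terrain)
])

def intercepted(sectors, surf_zenith_deg=0.0, surf_azimuth_deg=0.0):
    zen, az, irr, gap = sectors.T
    zen, az = np.radians(zen), np.radians(az)
    sz, sa = np.radians(surf_zenith_deg), np.radians(surf_azimuth_deg)
    # Cosine of the angle between each sector direction and the surface
    # normal (spherical law of cosines); negative values face away.
    cos_inc = (np.cos(zen) * np.cos(sz)
               + np.sin(zen) * np.sin(sz) * np.cos(az - sa))
    return np.sum(irr * gap * np.clip(cos_inc, 0, None))

print(intercepted(sectors))          # horizontal surface
print(intercepted(sectors, 30, 90))  # surface tilted 30 deg toward the east
```
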




Examples of how the sky can be divided.
Top: A more evenly divided sky supports calculations from all sky directions.
Bottom: An unevenly divided sky produces better calculations for one specific spot and one particular sky direction.

Evaluation & Conclusion


     The calculated values were compared with topographic maps to determine which obstructions were causing some areas to collect less radiation than others. The division of the sky played a key role in the results: a finely and equally divided sky gave better results when measuring radiation from all directions, while unequally divided sectors gave better results when determining the radiation collected at one particular spot from a particular direction.

This process is viable, flexible, and applicable to a wide array of uses. The algorithm could help decide which areas would make the best use of solar panels for higher energy yields. Because the Sun provides such a large share of Earth's energy, and because each influential factor can be given its own cosine correction, this algorithmic approach to measuring solar radiation is flexible enough to remain a contender for future environmental energy problems and endeavors.


Rich, P.M., R. Dubayah, W.A. Hetrick, and S.C. Saving. 1994. Using viewshed models to calculate intercepted
solar radiation: applications in ecology. American Society for Photogrammetry and Remote Sensing Technical
Papers. pp 524–529. 

http://professorpaul.com/publications/rich_et_al_1994_asprs.pdf


Could Katrina have done more damage without lidar?

A long time has passed since the devastating Hurricane Katrina, yet New Orleans still carries the scars of that dark day, August 29th, 2005. Let's go back to August 2005 to learn a little more about what went on behind the scenes of that event.

At the time, a mapping tool, lidar, was used to collect information about the topography of the inundation in New Orleans. With “a high level of spatial detail and vertical accuracy of elevation measurements, light detection and ranging remote sensing is an excellent mapping technology for use in low-relief hurricane-prone coastal areas”, according to Dean Gesch. This high-resolution, high-accuracy elevation data is more than useful for determining flood risk in cities, especially coastal ones, and for studying responses to storm impacts. Indeed, such elevation data proves essential for hurricane response and recovery activities, since it can accurately map the different land-surface elevations within a city. Even though lidar was still relatively new in remote sensing in 2005, the U.S. Geological Survey had already used it, because of its advanced capabilities, for the National Elevation Dataset. The New Orleans elevation data had been updated in June 2005, so it was already available for the response to Katrina.
Figure 1: The different land-surface elevations of New Orleans, red being the highest and blue the lowest.

Immediately after the levee breaches, there was demand for a map showing the extent and magnitude of the flood waters in the city. The National Elevation Dataset proved to be a great help for mapping the extent and depth of the inundation, because no aerial imagery of the area was available at the time. In essence, knowing that the level of Lake Pontchartrain and the flood waters had equalized, flood depth was calculated from a lake-level gage reading and the lidar-derived elevations. Because the map could be produced so quickly and effectively, officials could project the length of time required to remove the water from the city.
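The core of that calculation is simple enough to sketch: flood depth is the equalized water-surface elevation minus the lidar-derived land elevation, clipped at zero. The numbers below are invented stand-ins for the National Elevation Dataset and the Lake Pontchartrain gage reading.

```python
# Flood depth as (equalized water surface) - (lidar-derived land elevation),
# clipped at zero. Elevations here are invented stand-ins.
import numpy as np

land_elev_ft = np.array([        # lidar-derived land surface (feet)
    [ 2.0, -1.5, -4.0],
    [ 0.5, -3.0, -6.5],
])
water_surface_ft = 1.0           # lake-level gage reading after equalization

depth_ft = np.clip(water_surface_ft - land_elev_ft, 0, None)
print(depth_ft)   # 0 where land stands above the water surface
```
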


Figure 2: Map of the relative water depth for the New Orleans area.

If we compare the two figures, we can see that the most devastated part of the city could have been predicted: the lowest land-surface elevations on the first map correspond to the deepest flood waters on the second.

What if we change?
Unfortunately, Dean Gesch, the author of the article, does not say whether this tool was also used to prepare for environmental hazards such as hurricanes. In any case, lidar probably helped New Orleans's mayor, Ray Nagin, evacuate the riskiest parts of the city. We might also question the supposedly safe levees. Are they as safe as we think? Do they really prevent flooding? Given what happened in New Orleans, that hardly even seems like the right question. Of course levees provide an important measure of safety, and category 5 hurricanes are not that common, but you have to be ready for the worst at any time; Mother Nature is not going to tell you months in advance what she is up to. We now know that the levees were not strong enough to withstand this hazard, and when you look at a cross-section of New Orleans, you understand how the city ended up like a giant swimming pool.
 Figure 3: Map of New Orleans and its levee elevations.


To conclude, we might ask ourselves why people keep putting themselves in dangerous positions. What if we stopped trying to change nature and instead adjusted to her? That is probably one of the longest-running debates that governments never find enough time to have.

Reference: Topography-based Analysis of Hurricane Katrina Inundation of New Orleans By Dean Gesch, 2005. 
For Figure 3: http://en.wikipedia.org/wiki/New_Orleans#mediaviewer/File:New_Orleans_Levee_System.svg

Neogeography: A fusion of art and mapping

                Neogeography is a modern take on geography, combining the science of geography and GIS with digital art.  “The term ‘neogeography’ is taken to engulf traditional geography as well as all forms of personal, intuitive, absurd or artistic explorations and representations of geographical space, aided by new technologies associated with the Geospatial Web” (Papadimitriou 2013).  As this quote illustrates, neogeography is a fascinating hybrid of two seemingly incompatible fields, made possible by the powerful software of modern GIS. 
                                          Figure 1: Technologies that contribute to neogeography
                
                Geotagging and georeferencing are two ways modern intuitive software enables neogeography.  In the process, mappers can add personal flair such as “snapshots, texts, music, random sounds and noises and even video clips” (Papadimitriou 2013).  With neogeography, mapping becomes much more of an individual, grassroots phenomenon rather than an administrative, governmental one. 
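For a sense of what geotagging amounts to in practice, here is a tiny example that wraps a snapshot in a GeoJSON Feature so web maps can place it; the URL and coordinates are placeholders.

```python
# A tiny example of geotagging: wrapping a piece of media in a GeoJSON
# Feature so web maps can place it. URL and coordinates are placeholders.
import json

geotagged_snapshot = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [23.7275, 37.9838]},  # lon, lat
    "properties": {
        "kind": "snapshot",
        "media_url": "https://example.com/photo123.jpg",
        "caption": "Street art near the Acropolis",
    },
}
print(json.dumps(geotagged_snapshot, indent=2))
```
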
                Another phenomenon accompanying neogeography is the increasing accessibility and usability of GIS software.  Extensive knowledge of GIS software is no longer required to make maps: “Some packages, such as those provided by Yahoo, require no prior GIS skills to produce interesting and aesthetically pleasant output in neogeography” (Papadimitriou 2013). 
                In conclusion, neogeography has helped change mapping into a form of art.  “Geographical education may well open a new chapter in response to these developments, possibly called ‘neogeographical education’, whose aim would be to foster educational activities worldwide in order to build the newly emerging geospatially enabled Web 3.0” (Papadimitriou 2013).

Citation:

Papadimitriou, Fivos. "A "Neographical Education"? The Geospatial Web, GIS and Digital Art in Adult Education." International Research in Geographical and Environmental Education 19.1 (2010): 71-74. Routledge. Web. 19 Feb. 2013.

The Times Are Changing and So's the Land!


We as humans have dramatically changed the earth’s surface in the last 150 years. Changes in global land use have led to more and larger urban centers, thousands of square miles of subsistence and commercial agriculture, the loss of millions of square miles of forest, and much more. These land-use changes can have, and have had, detrimental effects on the environment and its ability to supply and sustain our constantly growing population. Among these detrimental effects are changes to the atmospheric composition, disruption of ecosystems, biodiversity loss, and the degradation of soil and water.
Food production is one of the biggest environmental concerns because of the large amount of land it uses (croplands and pastures cover about 40% of the earth’s surface) and the resource-intensive nature of modern agriculture. Agriculture has changed and grown substantially in the last 40 years: cropland area has grown by about 12% while fertilizer use has increased by 700%. This can be at least partially attributed to the “Green Revolution”, which promoted the use of fertilizers and machinery to increase crop yields.  While these changes have led to increased crop yields (global grain production has roughly doubled in recent decades), they also have devastating effects on the environment.
Land use also greatly affects the hydrologic cycle. The overuse of fertilizers leads to damaging runoff that degrades water quality both locally and downstream and causes algal blooms and “dead zones”: when fertilizer reaches the ocean it triggers a boom in algae growth, the algae use up the oxygen in the water, nothing else can survive there, and thousands of dead fish wash up on shore during these blooms. Agriculture accounts for 85% of global water consumption, and much of that water is pumped unsustainably from underground sources. Some of these sources contain salt, which ends up in the soil when the crops are irrigated; this salinization makes it impossible to grow anything there. Deforestation, increased impervious surfaces like roads and parking lots, and urbanization also degrade water quality and disrupt the hydrologic cycle.

In the last 300 years humans have cleared around 7-11 million square kilometers of forest for agriculture or timber harvesting. While reforestation projects are helping rebuild the forests, the biodiversity and some of the ecological services lost in the original clearing will likely never fully recover. These changes to vegetation mass, land use, and the hydrologic cycle affect our atmosphere and climate as well. Increased human development has brought increased emissions. The earth has natural systems for regulating the composition of gases in the atmosphere, one of which is forests, which act as natural carbon sinks; but those systems cannot handle the added weight of our emissions, especially now that we have cut down 7-11 million square kilometers of carbon-absorbing forest. The earth is warming, and it is at least mostly, if not entirely, because of humans and how we have used and reshaped the land.
Modern land-use practices sacrifice the long-term health of the environment for short-term rewards. Human actions need to take a sharp turn toward sustainability if we want to continue to enjoy the ecological services our environment provides. Sustainable land use would not only preserve ecological services for future generations but also seek to increase their resilience. For example, a cropland plan with environmental, social, and economic benefits would seek to increase the yield per unit of fertilizer, land, and water input, thus reducing the environmental impact. Increasing green spaces in urban places can reduce runoff and the “heat island” effect while providing parks to play in and gardens to harvest. The ultimate goal is to find a way to coexist with natural ecosystems and leave them as unchanged as possible. Life was around for billions of years before humans, and it had a pretty good thing going before we threw a wrench in the works. Working with the natural systems the environment already has in place (using ladybugs to control aphids instead of insecticide, for instance) usually gives the best results.
What about GIS?
            GIS was likely used to gather many of these statistics and to learn the full extent of these land-use changes. GIS can also be used to predict their progression and map potential threats or areas of concern. It is a powerful tool for anyone interested in how the surface of the earth has changed, is changing, and will likely change in the future.

Works Cited
Foley, J. A., DeFries, R., Asner, G. P., Barford, C., Bonan, G., Carpenter, S. R., ... & Snyder, P. K. (2005). Global consequences of land use. Science, 309(5734), 570-574.