Google Earth Engine (GEE) is a cloud-based platform for the scientific analysis and visualization of geospatial datasets, available to academic, non-profit, business, and government users.
“Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities and makes it available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth’s surface.” [1]
GEE is available for free to individual, non-commercial users. However, to obtain access you must request it from Google with a Google Account at https://earthengine.google.com/signup/. With GEE, Google has created a public repository of geospatial data that harnesses the power of Google’s cloud-computing infrastructure to remove obstacles to regional- and planetary-scale analyses.
Traditionally, a specific set of steps was required before any remote sensing analysis could begin. In general, those steps were:
This process of obtaining data could take anywhere from several minutes to several hours, depending on the nature of the data and the user’s experience with the software. What makes GEE unique is its focus on analysis: it removes the steps required to catalog, organize, and process remotely sensed imagery before work can begin.
GEE contains more than 90 petabytes (1 petabyte = 1,000 terabytes) of cataloged raster and vector datasets. These data are updated daily by the Google Earth Engine team, and new datasets are added monthly. If a dataset you use is not currently in the catalog, you can suggest that the GEE team add it.
With GEE, Google has provided the tools and application programming interface (API) necessary to move quickly from obtaining a dataset to exploratory analysis. All of these analyses run on Google servers where the data are stored, allowing for speed and processing power well beyond that of a single computer. Additionally, through the use of JavaScript or Python, scientists can create repeatable analyses and share them with other users by simply sharing a single web address.
Working within the Google Chrome browser, GEE provides an API that allows you to search for datasets, visualize the data, develop code, examine the results of your analyses, and save your work within a repository.
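As a first taste of the scripting interface we will use in this exercise, a minimal sketch might look like the following; the coordinates and zoom level are arbitrary example values, not part of the exercise itself.
// Print text to the Console tab and center the map (longitude, latitude, zoom)
print("Hello, Earth Engine!");
Map.setCenter(-87.36, 36.53, 10); // example coordinates and zoom only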
In this exercise we will work through a very simple example of how to use GEE to obtain satellite imagery of a location, examine various bands, and calculate a simple normalized difference vegetation index for a dataset.
To begin, navigate to Google Earth Engine in a Chrome browser. If you do not have Chrome, you can install it from here. Your screen should appear like the one below.
Each of the sections of the screen was described above in “Working with the Earth Engine API”; however, your screen will now show your information in the script manager. To start, click on the New button and create a new file. If prompted to create a repository, provide the name of a new repository to store your exercise script(s). Now, using the “Search places and datasets…” bar along the top of the page, type in Clarksville, TN and click on the appropriate result.
This will allow us to zoom in to Clarksville. We can use the zoom tools on the left or double-click to refine the level of zoom. Once you have centered the map on Clarksville, click on the Add a marker icon in the upper left of the map to place a point (by left-clicking) near downtown.
Notice that you can also add lines, shapes, and squares/rectangles. With this marker placed, you should now see geometry imports available, as well as a new var in the script box. By hovering your mouse over the geometry import and clicking the gear icon, you can change the name and color of the point. Rename the geometry to clarksville. Now that we have a location, we can add a dataset. In the search box, type Sentinel 2 and select the import option next to the Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-1C (TOA) from the rasters.
You will now see a new variable in your scripting window labeled as an ImageCollection. Essentially this means that the entire catalog of Sentinel 2 imagery is now available to this analysis. For more information on the sensor, click the link for Sentinel 2. Here you will find information about the sensor’s availability range, provider, description, bands, spatial and temporal resolution, and image properties. This will be important later on when selecting the appropriate bands for analysis. Go ahead and rename this dataset sentinel2.
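If you prefer typing to the point-and-click imports, the same two variables can be declared directly in the script. This is only a sketch: the coordinates below are a rough approximation of downtown Clarksville, and the asset ID should match the Harmonized Sentinel-2 Level-1C collection shown in the catalog entry.
// Declare the geometry and image collection by hand instead of using the imports panel
var clarksville = ee.Geometry.Point([-87.3595, 36.5298]); // approximate downtown Clarksville, TN
var sentinel2 = ee.ImageCollection("COPERNICUS/S2_HARMONIZED"); // Harmonized Sentinel-2 MSI, Level-1C (TOA)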
So now that we have a dataset to examine, we can obtain information about our study site. If you were to run some datasets in their entirety, you might receive an error statement in the console that says…
…this is due to an attempt to analyze an extremely large number of files in a dataset. If you need to examine global-scale data, you can contact Google to discuss a much larger quota. So, in order to avoid such issues, we are going to limit our search results.
To start, we will limit them based on geography. Because we only want images that intersect with our area of interest we can begin by setting the variable and providing a limit on its extent.
var image = ee.ImageCollection(sentinel2)
  .filterDate("2019-01-01", "2019-10-30")  // keep only scenes within this date range
  .filterBounds(clarksville)               // only include scenes that intersect our point geometry
  .sort("CLOUD_COVERAGE_ASSESSMENT")       // sort by the cloud cover assessment, least cloudy first
  .first();                                // select the least cloudy scene
In the script above:
Now that we have our filtered data, we can view the information in the console with print(image);. This identifies the image variable and provides information in the Console tab, including the number of available bands, cloud cover, and other metadata.
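A short sketch of what that inspection might look like; the specific properties printed here (the cloud assessment used in the sort and the band names) are just examples of the metadata available.
// Print the filtered image and a few of its metadata properties to the Console
print("Least cloudy scene:", image);
print("Cloud cover assessment:", image.get("CLOUD_COVERAGE_ASSESSMENT"));
print("Band names:", image.bandNames());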
In order to display the image we can set visualization parameters based on the information we want to obtain and the band combinations available in the data. For instance, if you want a “natural color” image (meaning as close to what you would see with your eyes from a plane) you would select the Red, Green, and Blue bands to display. If you wanted to examine vegetation, you would likely be interested in a false color composite with the Infrared, Red, and Green bands. For this example we will go ahead and make both.
// Set visualization parameters
var naturalcolor = {
  bands: ["B4", "B3", "B2"],
  min: 0,
  max: 4000,
};
var falsecolor = {
  bands: ["B8", "B4", "B3"],
  min: 0,
  max: 4000,
};
Now that we have two variables, naturalcolor and falsecolor, we can use them when displaying the imagery. To do that, we add the image to the map using Map.addLayer():
// Display images on the map
Map.addLayer(image, naturalcolor, "Natural Color Composite");
Map.addLayer(image, falsecolor, "False Color Composite");
This adds both the natural color and false color composite images to the active display. Using the options available under the Layers box at the top of the map, we can turn layers on and off, change the band combinations with the gear icon, and set the layer transparency with the slider bar.
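If the composites do not appear in your current view, the map can also be centered from the script; the zoom level of 12 below is just a reasonable starting value.
// Center the map on the marker (zoom level is an arbitrary choice)
Map.centerObject(clarksville, 12);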
Now that we have imagery of our area we can export that to a TIF file that can be used within other programs. To do this we will need to create a variable to be exported, provide some parameters, and create an export function that will run separately in the Tasks tab.
var exportRGB = image.visualize({
  bands: ["B4", "B3", "B2"],
  min: 0,
  max: 4000,
});
Export.image.toDrive({
  image: exportRGB,
  scale: 10,
  description: "clarksvilleRGB",
  crs: "EPSG:4326",
});
Here we created a variable to export an RGB image. Next we used Export.image.toDrive() to identify the image to export (exportRGB), the scale of the pixels (10 m, found in the Sentinel dataset information), a description for the task, and finally the projection (EPSG:4326 = WGS84). When you run this script, you will see a new task available.
Click run on the task and a new window will open prompting you to provide a location, file name, and additional resolution information.
These images will save to your Google Drive, so provide a file name and folder name. For resolution, because we learned that the spatial resolution of our bands is 10 m, we provide that value both in the scale parameter and here in the resolution section. After a few minutes we will have a new raster file in our folder.
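If you want to export only the area immediately around the marker rather than the whole scene, the region parameter accepts a geometry. This is a sketch: the 5 km buffer distance and task description are arbitrary choices for illustration.
// Export only an area around the marker (buffer distance is an arbitrary example)
Export.image.toDrive({
  image: exportRGB,
  scale: 10,
  description: "clarksvilleRGB_subset", // hypothetical task name
  crs: "EPSG:4326",
  region: clarksville.buffer(5000).bounds() // 5 km buffer around the point
});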
Try to repeat the process above using the USGS Landsat 8 Collection 2 Tier 1 TOA Reflectance. Remember to look at the description, bands, and image properties to include the appropriate information.
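A possible starting point for that exercise is sketched below; the asset ID comes from the catalog entry, and the CLOUD_COVER property, band combination, and min/max stretch are Landsat-8-specific values you should verify against the dataset description.
// Landsat 8 Collection 2, Tier 1 TOA reflectance (verify details in the catalog entry)
var landsat8 = ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
  .filterDate("2019-01-01", "2019-10-30")
  .filterBounds(clarksville)
  .sort("CLOUD_COVER")   // Landsat metadata property for scene cloud cover
  .first();
// Landsat 8 natural color uses bands B4/B3/B2; TOA reflectance is roughly 0-0.4 over land
Map.addLayer(landsat8, {bands: ["B4", "B3", "B2"], min: 0, max: 0.4}, "Landsat 8 Natural Color");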
If we are interested in obtaining other types of information, such as terrain data, we can follow a similar process using a different sensor. The Shuttle Radar Topography Mission (NASA SRTM Digital Elevation 30m) from NASA JPL provides near-global, high-quality elevation data. Search for that dataset and import it as a variable named srtm. Now we can use Map.addLayer(srtm, {min: 100, max: 250}, 'DEM'); to add DEM information. The min and max values are the elevation range in meters.
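A sketch of that terrain workflow written out in the script, assuming the import is named srtm and using the catalog ID for the NASA SRTM Digital Elevation 30m dataset; ee.Terrain can also derive slope and hillshade layers from the elevation band, and the min/max values here are chosen only for display.
// SRTM elevation plus derived terrain layers (display ranges are arbitrary)
var srtm = ee.Image("USGS/SRTMGL1_003");
Map.addLayer(srtm, {min: 100, max: 250}, "DEM");
Map.addLayer(ee.Terrain.slope(srtm), {min: 0, max: 15}, "Slope (degrees)");
Map.addLayer(ee.Terrain.hillshade(srtm), {min: 0, max: 255}, "Hillshade");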
One of the basic remote sensing analyses is a vegetation index called the normalized difference vegetation index (NDVI). It is calculated from the red and near-infrared bands as (NIR - RED) / (NIR + RED), producing values between -1 and 1, where values near -1 indicate an absence of vegetation and values near 1 indicate healthy green vegetation. This can quickly be calculated with the information we have already scripted.
To begin we need to create an equation and variables from the Sentinel data.
var NDVI = image.expression(
  "(NIR - RED) / (NIR + RED)",
  {
    RED: image.select("B4"),
    NIR: image.select("B8")
  });
Map.addLayer(NDVI, {min: 0, max: 1}, "NDVI");
Remember that in Sentinel-2, Band 4 is the red band and Band 8 is near infrared. We now have a grayscale image scaled by NDVI values.
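As an aside, Earth Engine also provides a built-in normalizedDifference() method that computes the same index from a pair of bands (NIR first, then red); the color palette below is just one way to display it.
// Equivalent NDVI using normalizedDifference(), displayed with a simple palette
var NDVI2 = image.normalizedDifference(["B8", "B4"]);
Map.addLayer(NDVI2, {min: -1, max: 1, palette: ["blue", "white", "green"]}, "NDVI (palette)");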
If we want to determine the value of an individual pixel, we can use the Inspector tab and the crosshair cursor to click on an area of interest. Once you select a location and left-click, you will see a series of graphs appear in the tab.
These graphs depict the pixel values of each band in the two image variables, along with the NDVI value of 0.7159633. Click around the image on dark and light pixels, and on areas where you think you know the land cover, to see whether the results match your expectations for the location (forest = high values, little/no vegetation = low values).
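The same pixel value can also be pulled programmatically rather than through the Inspector; a minimal sketch using reduceRegion() at the marker, with the scale matching the 10 m band resolution:
// Sample the NDVI image at the marker location
print("NDVI at the marker:", NDVI.reduceRegion({
  reducer: ee.Reducer.first(),
  geometry: clarksville,
  scale: 10
}));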
The script above details only a tiny fraction of what is possible with Google Earth Engine. Using examples from the Google Earth Engine Developers Guide, we can create custom time-lapse videos to examine our changing landscapes.
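As a rough sketch (not the Developers Guide example verbatim), a filtered collection can be converted to 8-bit RGB frames with visualize() and exported as a video; the dimensions, frame rate, buffer distance, and task name below are arbitrary choices.
// Export a short time-lapse of the filtered Sentinel-2 scenes as a video
var frames = ee.ImageCollection(sentinel2)
  .filterDate("2019-01-01", "2019-10-30")
  .filterBounds(clarksville)
  .map(function (img) {
    return img.visualize(naturalcolor); // convert each scene to an 8-bit RGB frame
  });
Export.video.toDrive({
  collection: frames,
  description: "clarksville_timelapse", // hypothetical task name
  dimensions: 720,
  framesPerSecond: 4,
  region: clarksville.buffer(5000).bounds()
});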
Take a look at the numerous tutorials and videos Earth Engine users have made available to see what you can do with this platform.
Now it’s your turn! As previously discussed, with the ever-changing availability of packages in R, there are issues with obtaining reliable imagery for research studies. Using the script above, alter the information to focus on your thesis, research area, or a portion of your research area and download an RGB image. If you have time, try to add that GeoTIFF image to one of your previous exercises with or without aerial imagery as a basemap. Using the Get Link button at the top of the scripting window, copy the URL and add it to the README document for that exercise to provide access to the script.
[1]: Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D., and Moore, R. (2017) Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment 202(1). https://doi.org/10.1016/j.rse.2017.06.031
Exercise Repository: https://github.com/chrismgentry/Google-Earth-Engine
In addition to Earth Engine, Google has made a number of other services freely available (or by request) that can be beneficial for geospatial analysis and visualization. One of those services is an updated, browser-based version of Google Earth that allows you to create points, lines, and polygons and save them into a project along with Google Earth imagery to create an interactive display of your data. This can be beneficial for creating objects that can be used in R or with citizen science projects.
I put together a quick tutorial; unfortunately, due to restrictions with the software, the cursor isn’t shown in the video. It is relatively self-explanatory, and the video should at least provide an idea of the capabilities.
Another service available in the Chrome browser is Google Earth Studio.
Google Earth Studio is a browser-based animation tool for Google Earth’s 3D and satellite imagery. The massive collection of 2D and 3D Earth data available in Google Earth, from large-scale geological features to individual city buildings, provides the easiest way to leverage geospatial data and imagery for still and animated content. I have also included a few presentations related to these services in the repository.