Google Research Blog
The latest news from Research at Google
The 2016 Google Earth Engine User Summit: Turning pixels into insights
Monday, September 19, 2016
Posted by Chris Herwig, Program Manager, Google Earth Engine
"
We are trying new methods [of flood modeling] in Earth Engine based on machine learning techniques which we think are cheaper, more scalable, and could exponentially drive down the cost of flood mapping and make it accessible to everyone."
-Beth Tellman, Arizona State University and
Cloud to Street
Recently, Google headquarters hosted the Google Earth Engine User Summit 2016, a three-day hands-on technical workshop for scientists and students interested in using Google Earth Engine for planetary-scale cloud-based geospatial analysis. Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with a simple, yet powerful API backed by Google's cloud, which scientists and researchers use to detect, measure, and predict changes to the Earth's surface.
Earth Engine founder Rebecca Moore kicking off the first day of the summit
Summit attendees could choose among twenty-five hands-on workshops over the course of the three-day summit, most of them created specifically for the event, giving attendees an exclusive introduction to the latest features in our platform. The sessions covered a wide range of topics and Earth Engine experience levels, from image classification and time series analysis to building custom web applications and working with arrays, matrices, and linear algebra in Earth Engine.
Terra Bella Product Manager, Kristi Bohl, taught a session on using SkySat imagery, like the image above over Sydney, Australia, for change detection. Workshop attendees also learned how to take advantage of the deep temporal stack the SkySat archive offers for change-over-time analyses.
Cross-correlation between Landsat 8 NDVI and the sum of CHIRPS precipitation. Red is high cross-correlation and blue is low. The gap in data is because CHIRPS is masked over water.
Nick Clinton, a developer advocate for Earth Engine, taught a time series session that covered statistical techniques as applied to satellite imagery data. Students learned how to make graphics like the above, which shows the cross-correlation between Landsat 8 NDVI and the sum of CHIRPS precipitation from the previous month over San Francisco, CA. The correlation should be high for relatively r-selected plants like grasses and weeds and relatively low for perennials, shrubs, or forest.
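For readers curious what such an analysis looks like in code, here is a minimal, hypothetical sketch in the Earth Engine Python API that probes the same NDVI-versus-lagged-rainfall relationship at a single point. The dataset IDs, band names, date range, and test location are illustrative assumptions, not the workshop's actual script.

```python
import ee

ee.Initialize()

# Assumed asset IDs and bands; the workshop may have used different ones.
ndvi = ee.ImageCollection('LANDSAT/LC08/C01/T1_8DAY_NDVI').select('NDVI')
rain = ee.ImageCollection('UCSB-CHG/CHIRPS/PENTAD').select('precipitation')
point = ee.Geometry.Point([-122.42, 37.77])  # San Francisco, for illustration

def monthly_pair(month_start):
    """Mean NDVI for one month paired with summed rain from the month before."""
    start = ee.Date(month_start)
    ndvi_mean = ndvi.filterDate(start, start.advance(1, 'month')).mean()
    prior_rain = rain.filterDate(start.advance(-1, 'month'), start).sum()
    values = ndvi_mean.addBands(prior_rain).reduceRegion(
        reducer=ee.Reducer.mean(), geometry=point, scale=30)
    return ee.Feature(None, values)

months = ee.List.sequence(0, 23).map(
    lambda m: ee.Date('2014-01-01').advance(m, 'month'))
samples = ee.FeatureCollection(months.map(monthly_pair))

# Pearson correlation between NDVI and the previous month's precipitation.
corr = samples.reduceColumns(
    ee.Reducer.pearsonsCorrelation(), ['NDVI', 'precipitation'])
print(corr.getInfo())
```

This returns a single correlation coefficient for one location rather than the per-pixel map shown above; applying the same reducer across the image stack would produce the spatial version.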
My workshop session covered how users can upload their own data into Earth Engine and the many different ways to take the results of their analyses with them, including rendering static map tiles hosted on Google Cloud Storage, exporting images, creating new assets, and even making movies, like this timelapse video of all the Sentinel-2A images captured over Sydney, Australia.
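As a rough sketch of the export side of that workflow, the snippet below shows one way a result might be written out with the Earth Engine Python API; the Sentinel-2 collection ID, region, and export parameters are placeholder assumptions rather than the summit materials.

```python
import ee

ee.Initialize()

# Placeholder inputs: a median Sentinel-2 composite over an assumed region.
region = ee.Geometry.Rectangle([150.9, -34.1, 151.4, -33.6])  # roughly Sydney
composite = (ee.ImageCollection('COPERNICUS/S2')
             .filterBounds(region)
             .filterDate('2016-01-01', '2016-07-01')
             .median()
             .select(['B4', 'B3', 'B2']))

# Export the composite as a GeoTIFF to Google Drive.
task = ee.batch.Export.image.toDrive(
    image=composite,
    description='sydney_s2_composite',
    region=region.getInfo()['coordinates'],
    scale=10,
    maxPixels=1e9)
task.start()
```

Exports to Cloud Storage or to new Earth Engine assets follow the same pattern with the other `ee.batch.Export.image` destinations.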
Along with the workshop sessions, we hosted five plenary speakers and 18 lightning talk presenters. These presenters shared how Earth Engine fits into their research, spanning drought monitoring, agriculture, conservation, flood risk mapping, and hydrological analysis.
Plenary Speakers
Agriculture in the Sentinel era: scaling up with Earth Engine, Guido Lemoine, European Commission's Joint Research Centre
Flood Vulnerability from the Cloud to the Street (and back!) powered by Google Earth Engine, Beth Tellman, Arizona State University and Cloud to Street
Accelerating Rangeland Conservation, Brady Allred, University of Montana
Monitoring Drought with Google Earth Engine: From Archives to Answers, Justin Huntington, Desert Research Institute / Western Regional Climate Center
Automated methods for surface water detection, Gennadii Donchytes, Deltares
Lightning Presentations
Mapping the Behavior of Rivers, Alex Bryk, University of California, Berkeley
Climate Data for Crisis and Health Applications, Pietro Ceccato, Columbia University
Appalachian Communities at Risk, Matt Wasson and Jeff Deal, Appalachian Voices
Water, Wildlife and Working Lands, Patrick Donnelly, U.S. Fish and Wildlife Service
Stream-side NDVI and The Salmonid Population Viability Project, Kurt Fesenmyer, Trout Unlimited
Mapping Evapotranspiration for Water Use and Availability, Mac Friedrichs, USGS
Dynamic Wildfire Modeling in Earth Engine, Miranda Gray, Conservation Science Partners
Fishing at Scale, now also in Earth Engine, David Kroodsma, SkyTruth
Mapping crop yields from field to national scales in Earth Engine, David Lobell, Stanford University
Mapping Pacific Wildfires Impacts with Earth Engine, Matthew Lucas, University of Hawaii
EarthEnv.org - Environmental layers for assessing status and trends in biodiversity, ecosystems and climate, Jeremy Malczyk, Map of Life
Building a Landsat 8 Mosaic of Antarctica, Allen Pope, University of Colorado Boulder
Monitoring Primary Production at Broad Spatial and Temporal Scales, Nathaniel Robinson, University of Montana
Assessing Urbanization Trends for Public Health: Modelling Nighttime Lights Imagery in Africa with Earth Engine, David Savory, University of California, San Francisco
National-scale mapping of forest carbon, Ty Wilson, US Forest Service
Utilizing Google Earth Engine to Enhance Decision-Making Capabilities, Brittany Zajic, NASA DEVELOP National Program
Keeping our users first
It is always inspiring to see such a diverse group of people come together to celebrate, learn, and share all the amazing and wondrous things people are doing with Earth Engine. It is not only an opportunity for our users to learn the latest techniques; it is also a way for the Earth Engine team to experience the new and exciting ways people are harnessing Earth Engine to solve some of the most pressing environmental issues facing humanity.
We've already begun planning for next year's user summit, and based on the success of this year's, we're hoping to hold an even larger one.
See through the clouds with Earth Engine and Sentinel-1 Data
Monday, August 03, 2015
Posted by Luc Vincent, Engineering Director, Geo Imagery
This year the Google Earth Engine team attended the European Geosciences Union General Assembly meeting in Vienna, Austria, to engage with a number of European geoscientific partners. This was just the first of a series of European summits the team has attended over the past few months, including, most recently, the IEEE Geoscience and Remote Sensing Society meeting held last week in Milan, Italy.
Noel Gorelick presenting Google Earth Engine at EGU 2015.
We are very excited to be collaborating with many European scientists from esteemed institutions such as the European Commission Joint Research Centre, Wageningen University, and University of Pavia. These researchers are utilizing the Earth Engine geospatial analysis platform to address issues of global importance in areas such as food security, deforestation detection, urban settlement detection, and freshwater availability.
Thanks to the enlightened free and open data policy of the European Commission and European Space Agency, we are pleased to announce the availability of Copernicus Sentinel-1 data through Earth Engine for visualization and analysis. Sentinel-1, a radar imaging satellite with the ability to see through clouds, is the first of at least 6 Copernicus satellites going up in the next 6 years.
Sentinel-1 data visualized using Earth Engine, showing Vienna (left) and Milan (right).
Wind farms seen off the Eastern coast of England.
This radar data offers a powerful complement to the optical and thermal data from satellites like Landsat that are already available in the Earth Engine public data catalog. If you are a geoscientist interested in accessing and analyzing the newly available EC/ESA Sentinel-1 data, or anything else in our multi-petabyte data catalog, please sign up for Google Earth Engine.
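For geoscientists who do sign up, a minimal sketch of loading the newly available Sentinel-1 data with the Earth Engine Python API could look like the following; the area of interest, date range, and polarization filter are illustrative assumptions.

```python
import ee

ee.Initialize()

vienna = ee.Geometry.Point([16.37, 48.21])  # illustrative area of interest

# Sentinel-1 ground-range-detected scenes, VV polarization, over Vienna.
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(vienna)
      .filterDate('2015-01-01', '2015-07-01')
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .select('VV'))

# A simple temporal mean reduces speckle and gives a cloud-free backscatter view.
mean_backscatter = s1.mean()
print('Scenes found:', s1.size().getInfo())
```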
We look forward to further engagements with the European research community and are excited to see what the world will do with the data from the European Union's Copernicus program satellites.
Map of Life: A preview of how to evaluate species conservation with Google Earth Engine
Thursday, January 08, 2015
Posted by Walter Jetz, Dept. of Ecology and Evolutionary Biology, Yale University, and Dave Thau, Developer Advocate, Google Earth Engine, with support from Robert Guralnick, Dept. of Natural History, University of Florida
Nature reserves have a vital role for protecting biodiversity and its many functions. However, there is often insufficient information available to determine where to most effectively invest conservation efforts to prevent future extinctions, or which species may be left out of conservation actions entirely.
To help address these issues, Map of Life, in collaboration with Google Earth Engine, has now pre-released a new service to pinpoint at-risk species and where in the world they occur. At the fingertips of regional naturalists, conservation groups, resource managers and global threat assessors, the tool has the potential to help identify and close key information gaps and highlight species of greatest concern.
Take the Tamaulipas Pygmy Owl, one of the smallest owls in the world, restricted to highland forests in Mexico. The consensus range map for the species indicates a broad distribution of over 50,000 km²:
Left: Tamaulipas Pygmy Owl (Glaucidium sanchezi, photo credit: Adam Kent). Right: Map of Life consensus range map showing the potentially habitable range of this species.
But accounting for available habitat in the area using remotely sensed information presents a different picture: less than 10% of this range is forested and at a suitable elevation.
Users can change the habitat association settings and explore on-the-fly how this affects the distribution and map quality. This refined range map now allows a much improved evaluation of the owl's potential protection. Furthermore, the sensitivity of conservation assessments to various assumptions can be directly explored in this tool.
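To make the idea of on-the-fly habitat refinement concrete, here is a rough sketch of that kind of masking in the Earth Engine Python API. The placeholder range polygon, tree-cover threshold, elevation band, and dataset choices are assumptions for illustration only, not Map of Life's actual model.

```python
import ee

ee.Initialize()

# Placeholder polygon standing in for the expert consensus range map.
range_poly = ee.Geometry.Rectangle([-99.5, 21.0, -97.5, 24.0])

# Assumed habitat layers: percent tree cover and SRTM elevation.
treecover = ee.Image('UMD/hansen/global_forest_change_2013').select('treecover2000')
elevation = ee.Image('USGS/SRTMGL1_003').select('elevation')

# Keep only pixels that are forested and within an assumed montane band.
habitat = treecover.gte(50).And(elevation.gte(900)).And(elevation.lte(2100))

# Fraction of the consensus range that survives the habitat filter.
stats = habitat.reduceRegion(
    reducer=ee.Reducer.mean(),
    geometry=range_poly,
    scale=90,
    maxPixels=1e9)
print(stats.getInfo())
```

Changing the thresholds and re-running the reducer is the command-line analogue of moving the habitat sliders in the interactive tool.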
Only around 1,000 km² of the owl's refined range is likely to fall under formal protection, spread across seven reserves, of which only two are larger than 100 km². This is much less than would be desirable for a species with this small a global range.
Another species example, the Hildegard's Tomb Bat, is similarly concerning: less than 6,000 km² of suitable range remains for this forest specialist in East Africa, with less than half currently under protection.
A demonstration of this tool for 15 example species was pre-released at the decadal World Parks Congress in Sydney, Australia, last November to the global community of conservation scientists and practitioners. In the coming months this interactive evaluation will be expanded to thousands more species, providing a valuable resource to aid in global conservation efforts. For more information and updates, follow Map of Life.
The World Parks Congress: Using technology to protect our natural environment
Wednesday, November 12, 2014
Posted by Dave Thau, Developer Advocate for Google Earth Engine and Karin Tuxen-Bettman, Program Manager, Google Earth Outreach
(Cross-posted on the Official Google Australia Blog)
This week, thousands of people from more than 160 countries will gather in Sydney for the once-in-a-decade IUCN World Parks Congress to discuss the governance and management of protected areas. The Google Earth Outreach and Google Earth Engine teams will be at the event to showcase examples of how technology can help protect our environment.
Here are a few of the workshops and events happening in Sydney this week:
Monday, November 10th - Tuesday, November 11th:
Over the last couple of days, the Google Earth Outreach and Earth Engine teams delivered a 2-day hands-on workshop to develop the technical capacity of park managers, researchers, and communities. At this workshop, participants were introduced to Google mapping tools to help them with their conservation programs.
November 13 - 19:
Google will be at the Oceans Pavilion inside the World Parks Congress to demonstrate how Trekker, Street View and Open Data Kit on Android mobile devices can assist with parks monitoring and management.
Friday, November 14, 9:30-10:30am:
Join a Live Sydney Seahorse Hunt in Sydney Harbour, via Google Hangout, with Catlin Seaview Survey and Sydney Institute of Marine Science. Richard Vevers, Director of the Catlin Seaview Survey, will venture underwater to his favorite dive site and talk with experts about the unique marine life (including seahorses!) that explorers can expect to find around Sydney. Tune in here at 10:30am to catch all the action.
Saturday, November 15th, 8:30am:
Networking for nature: the future is cool. Hear about how technology-driven ocean initiatives can help us better understand and strengthen our connection with our natural environments. WPCA-Marine's plenary session will include presentations by Sylvia Earle and Mission Blue, Catlin Seaview Survey, Google, Oceana, and SkyTruth. The session will also feature leading young marine professionals Mariasole Bianco and Rebecca Koss.
Saturday, November 15th, 12:15pm:
We'll be hosting a panel discussion on using Global Forest Watch to monitor protected areas in near-real-time. Global Forest Watch is a dynamic online alert system to help park rangers monitor and preserve vast stretches of parkland.
Saturday, November 15th, 1:30 - 3:00pm:
At the Biodiversity Pavilion, join Walter Jetz from Yale and Dave Thau from Google for a presentation on Google Earth Engine and The Map of Life. The presentation will showcase how Google Earth Engine is being used in a variety of conservation efforts, including monitoring water resources, tracking the health of the world's forests, and measuring the impact of protected areas on biodiversity preservation. We will also announce a new global resource from The Map of Life for mapping and monitoring biodiverse ecosystems.
We believe that technology can help address some of our world’s most pressing environmental challenges and we look forward to working with Australian conservationists to integrate technology into their work.
You can find us at the Oceans Pavilion inside the World Parks Congress, where we will be joined by our environmental partners including The Jane Goodall Institute, The World Resources Institute and The Map of Life.
We hope to see you at one of our events this week!
Berkeley Earth Maps Powered by Google Maps Engine now available in the Google Maps Gallery
Thursday, March 20, 2014
Posted by Dr. Robert Rohde, Berkeley Earth
Google Maps is a familiar and versatile tool for exploring the world, but adding new data on top of Google Maps has traditionally required significant effort in both data management and website scripting. Google recently expanded Google Maps Engine and debuted an updated Google Maps Gallery. These tools aim to make it easier for users and organizations to integrate their geographic data with Google Maps and share it with the world. At Berkeley Earth we had an early opportunity to work with these new tools.
The use of Google Maps Engine eliminates the need for users to run their own map-serving Web servers. Maps Engine also handles mundane mapping tasks, such as automatically converting georeferenced image files into beautiful map layers that can be viewed in Google Maps, no programming required.
Annual average land-surface temperature during the period 1951-1980, as estimated by Berkeley Earth.
Similarly, one can take tables of location data and map them onto a Google Map using geographic markers and popup message boxes that make it easy to explore georeferenced information.
Map of the more than 40,000 temperature stations used by the Berkeley Earth analysis.
On the left is part of the original table of data. On the right is its representation in Google Maps Engine.
When mapping locations, the new Maps Engine tools allow users to upload their own geographic markers or choose from Google's many selections; the geographic marker icons used in the temperature station map above were uploaded by us. Alternatively, we could have used one of the stock icons provided by Maps Engine. In addition, users can customize the content and appearance of the popup message boxes using HTML. If the georeferenced data can be linked to the web addresses of existing online content, one can also incorporate images or outgoing links within the message boxes, helping the user find more information about the content presented in the map.
The ease of putting image layers into the new Maps Engine has allowed Berkeley Earth to create and share many scalable maps of climate and weather information that are fun to explore. Incorporating these maps in our website and posting them on the Google Maps Gallery provides the public with a new tool to help locate local weather stations, learn about local climate, and download various kinds of weather and climate data.
Now, anyone can easily learn about both the weather in their city and the climate of the entire globe from a single, simple interface. Google Maps Engine and the new Maps Gallery have allowed us to bring the story of climate to a broad audience in a way that can be easily understood.
Monitoring the World's Forests with Global Forest Watch
Thursday, February 20, 2014
Posted by Crystal Davis, Director of Global Forest Watch, the World Resources Institute, and Dave Thau, Developer Advocate, Google Earth Engine
Cross-posted at the Google Lat Long Blog
By the time we find out about deforestation, it’s usually too late to take action.
Scientists have been studying forests for centuries, chronicling the vital importance of these ecosystems for human society. But most of us still lack timely and reliable information about where, when, and why forests are disappearing.
This is about to change with the launch of Global Forest Watch—an online forest monitoring system created by the World Resources Institute, Google and a group of more than 40 partners. Global Forest Watch uses technologies including Google Earth Engine and Google Maps Engine to map the world's forests with satellite imagery, detect changes in forest cover in near-real-time, and make this information freely available to anyone with Internet access.
By accessing the most current and reliable information, everyone can learn what’s happening in forests around the world. Now that we have the ability to peer into forests, a number of telling stories are beginning to emerge.
Global forest loss far exceeds forest gain
Pink = tree cover loss. Blue = tree cover gain.
According to data from the University of Maryland and Google, the world lost more than 500 million acres of forest between 2000 and 2012. That's the equivalent of losing 50 soccer fields' worth of forests every minute of every day for the past 13 years! By contrast, only 0.8 million km² have regrown, been planted, or restored during the same period.
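The soccer-field comparison is easy to sanity-check with back-of-the-envelope arithmetic; the short calculation below works through it, assuming a pitch of roughly 0.7 hectares and using the 2.3 million km² loss figure behind the acreage quoted above.

```python
# Rough check of the "50 soccer fields per minute" comparison.
loss_km2 = 2.3e6                   # global forest loss, 2000-2012 (~568 million acres)
minutes = 13 * 365.25 * 24 * 60    # minutes in the 13-year study window
field_ha = 0.714                   # ~105 m x 68 m pitch, an assumed field size

loss_ha_per_min = loss_km2 * 100 / minutes   # 1 km² = 100 ha
print(round(loss_ha_per_min / field_ha))     # ~47 fields per minute, i.e. roughly 50
```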
The United States’ most heavily forested region is made up of production forests
Pink = tree cover loss. Blue = tree cover gain.
The Southern United States is home to the nation's most heavily forested region, making up 29 percent of the total U.S. forest land. Interestingly, the majority of this region consists of "production forests." The mosaic of loss (pink) and gain (blue) in the above map shows how forests throughout this region are used as crops – grown and harvested in five-year cycles to produce timber or wood pulp for paper production.
This practice of "intensive forestry" is used all over the world to provide valuable commodities and bolster regional and national economies. WRI analysis suggests that if managers of production forests embrace a "multiple ecosystem services strategy", they will be able to generate additional benefits such as biodiversity, carbon storage, and water filtration.
Forests are protected in Brazil’s indigenous territories
Pink = tree cover loss. Dark green = forest. Light green = degraded land or pastures.
The traditional territory of Brazil's Surui tribe is an island of green surrounded by lands that have been significantly degraded and deforested over the past 10+ years. Indigenous communities often rely on forests for their livelihoods and cultural heritage and therefore have a strong incentive to manage forests sustainably. However, many indigenous communities struggle to protect their lands against encroachment by illegal loggers, which may be seen in Global Forest Watch using annual data from the University of Maryland and Google, or monthly alerts from Imazon, a Brazilian NGO and GFW partner.
Make Your Own Forest Map
Previously, the data required to make these maps was difficult to obtain and interpret, and most people lacked the resources necessary to access, view, and analyze the information. With Global Forest Watch, this data is now open to anyone with Internet access. We encourage you to visit Global Forest Watch and make your own forest map. There are many stories to tell about what is happening to forests around the world—and your stories can lead to action to protect these special and threatened places. What story will you tell?
The first detailed maps of global forest change
Thursday, November 14, 2013
Posted by Matt Hansen and Peter Potapov, University of Maryland; Rebecca Moore and Matt Hancher, Google
Most people are familiar with exploring images of the Earth’s surface in Google Maps and Earth, but of course there’s more to satellite data than just pretty pictures. By applying algorithms to time-series data it is possible to quantify global land dynamics, such as forest extent and change. Mapping global forests over time not only enables many science applications, such as climate change and biodiversity modeling efforts, but also informs policy initiatives by providing objective data on forests that are ready for use by governments, civil society and private industry in improving forest management.
In a collaboration led by researchers at the University of Maryland, we built a new map product that quantifies global forest extent and change from 2000 to 2012. This product is the first of its kind, a global 30 meter resolution thematic map of the Earth’s land surface that offers a consistent characterization of forest change at a resolution that is high enough to be locally relevant as well. It captures myriad forest dynamics, including fires, tornadoes, disease and logging.
Global 30 meter resolution thematic maps of the Earth’s land surface: Landsat composite reference image (2000), summary map of forest loss, extent and gain (2000-2012), individual maps of forest extent, gain, loss, and loss color-coded by year.
The satellite data came from the Enhanced Thematic Mapper Plus (ETM+) sensor onboard the NASA/USGS Landsat 7 satellite. The expertise of NASA and USGS, from satellite design to operations to data management and delivery, is critical to any earth system study using Landsat data. For this analysis, we processed over 650,000 ETM+ images in order to characterize global forest change.
Key to the study's success was the collaboration between remote sensing scientists at the University of Maryland, who developed and tested models for processing and characterizing the Landsat data, and computer scientists at Google, who oversaw the implementation of the final models using Google's Earth Engine computation platform. Google Earth Engine is a massively parallel technology for high-performance processing of geospatial data, and houses a copy of the entire Landsat image catalog. For this study, a total of 20 terapixels of Landsat data were processed using one million CPU-core hours on 10,000 computers in parallel, in order to characterize year 2000 percent tree cover and subsequent tree cover loss and gain through 2012. What would have taken a single computer 15 years to perform was completed in a matter of days using Google Earth Engine computing.
Global forest loss totaled 2.3 million square kilometers and gain 0.8 million square kilometers from 2000 to 2012. Among the many results is the finding that tropical forest loss is increasing with an average of 2,101 additional square kilometers of forest loss per year over the study period. Despite the reduction in Brazilian deforestation over the study period, increasing rates of forest loss in countries such as Indonesia, Malaysia, Tanzania, Angola, Peru and Paraguay resulted in a statistically significant trend in increasing tropical forest loss. The maps and statistics from this study fill an information void for many parts of the world. The results can be used as an initial reference for countries lacking such information, as a spur to capacity building in such countries, and as a basis of comparison in evolving national forest monitoring methods. Additionally, we hope it will enable further science investigations ranging from the evaluation of the integrity of protected areas to the economic drivers of deforestation to carbon cycle modeling.
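For researchers who want to reproduce regional numbers from the published dataset, a minimal sketch in the Earth Engine Python API might look like the following; the asset ID refers to the 2013 release, and the country boundary source, example country, and coarse processing scale are simplifying assumptions.

```python
import ee

ee.Initialize()

# Published global forest change product (2000-2012 release); asset ID assumed.
gfc = ee.Image('UMD/hansen/global_forest_change_2013')
loss_area = ee.Image.pixelArea().updateMask(gfc.select('loss'))

# Example country boundary; the LSIB dataset and country name are assumptions.
paraguay = (ee.FeatureCollection('USDOS/LSIB_SIMPLE/2017')
            .filter(ee.Filter.eq('country_na', 'Paraguay')))

stats = loss_area.reduceRegion(
    reducer=ee.Reducer.sum(),
    geometry=paraguay.geometry(),
    scale=1000,      # coarse scale keeps the sketch cheap; use 30 m for real work
    maxPixels=1e12)
print('Forest loss area, m^2:', stats.getInfo())
```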
The Chaco woodlands of Bolivia, Paraguay and Argentina are under intensive pressure from agroindustrial development. Paraguay’s Chaco woodlands within the western half of the country are experiencing rapid deforestation in the development of cattle ranches. The result is the highest rate of deforestation in the world.
Global map of forest change:
http://earthenginepartners.appspot.com/science-2013-global-forest
If you are curious to learn more, tune in next Monday, November 18 to a live-streamed, online presentation and demonstration by Matt Hansen and colleagues from UMD, Google, USGS, NASA and the Moore Foundation:
Live-stream Presentation: Mapping Global Forest Change
Live online presentation and demonstration, followed by Q&A
Monday, November 18, 2013 at 1pm EST, 10am PST
Link to live-streamed event:
http://goo.gl/JbWWTk
Please submit questions here:
http://goo.gl/rhxK5X
For further results and details of this study, see High-Resolution Global Maps of 21st-Century Forest Cover Change in the November 15th issue of the journal Science.
Building A Visual Planetary Time Machine
Monday, June 10, 2013
Posted by Randy Sargent, Google/Carnegie Mellon University; Matt Hancher and Eric Nguyen, Google; and Illah Nourbakhsh, Carnegie Mellon University
When a societal or scientific issue is highly contested, visual evidence can cut to the core of the debate in a way that words alone cannot — communicating complicated ideas that can be understood by experts and non-experts alike. After all, it took the invention of the optical telescope to overturn the idea that the heavens revolved around the earth.
Last month, Google announced a zoomable and explorable time-lapse view of our planet. This time-lapse Earth enables you to explore the last 29 years of our planet's history — from the global scale to the local scale, all across the planet. We hope this new visual dataset will ground debates, encourage discovery, and shift perspectives about some of today's pressing global issues.
This project is a collaboration between Google's Earth Engine team, Carnegie Mellon University's CREATE Lab, and TIME Magazine — using nearly a petabyte of historical record from USGS's and NASA's Landsat satellites. And in this post, we'd like to give a little insight into the process required to build this time-lapse view of our planet.
Previews of the phenomena visible in these time-lapses.
First we'll describe Google’s Earth Engine system for deriving the time-series imagery. Second, we'll tell you more about CMU’s open-source “Time Machine” software for creating and streaming large, explorable time-series imagery.
Annual Composites: Distilling a Massive Dataset
Google Earth Engine brings together the world's scientific satellite imagery — over a petabyte of multispectral imagery recording over 40 years of history — and makes it available online with tools that scientists, independent researchers, and nations can use to mine this massive warehouse of data to detect changes, map trends and quantify differences on the Earth's surface using Google's computational infrastructure. Today, the platform is used to monitor the Amazon and estimate forest carbon in Tanzania, among hundreds of other applications that partners are developing for the technology.
Using Earth Engine, we first built annual global mosaics at a resolution of 30 meters per pixel for each year from 1984 through 2012. We started with a total of 2,068,467 scenes from the Landsat 4, 5, and 7 satellites, comprising 909 terabytes of data. The Earth's atmosphere is a constantly-shifting sea of clouds, so in order to assemble a seamless cloud-free view of each year we analyzed all the images available at each location and used a simple cloud model to separate out the clouds from the ground. To help correct for atmospheric and seasonal effects, we used an additional 20TB of data from the MODIS MCD43A4 product to build a cloud-free low-resolution model of the Earth over time. We combined all this to produce a statistical estimate of the color of each pixel for every year for which data was available. Producing the final 29 global mosaics took a bit less than a day and consumed approximately 260,000 core-hours of CPU.
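The per-year compositing step is the easiest piece of this pipeline to sketch. The snippet below is a simplified stand-in written with the Earth Engine Python API, using Landsat 7 TOA scenes, the built-in simple cloud score as a crude cloud model, and a median in place of the project's full statistical estimate; the asset ID, cloud threshold, and reducer choice are all assumptions.

```python
import ee

ee.Initialize()

def annual_composite(year):
    """A simplified cloud-screened annual composite for one year."""
    start = ee.Date.fromYMD(year, 1, 1)
    scenes = (ee.ImageCollection('LANDSAT/LE07/C01/T1_TOA')
              .filterDate(start, start.advance(1, 'year')))

    def mask_clouds(img):
        # simpleCloudScore adds a 'cloud' band (0-100) to Landsat TOA imagery.
        score = ee.Algorithms.Landsat.simpleCloudScore(img).select('cloud')
        return img.updateMask(score.lt(30))

    # Median of the cloud-screened pixels stands in for the full statistical model.
    return scenes.map(mask_clouds).median().set('year', year)

composite_2000 = annual_composite(2000)
```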
Some areas of the planet are almost perpetually cloudy, obscuring satellite views. In addition, before the more capable Landsat 7 began operating in 1999, coverage in some areas of the world was sparse, particularly in Asia, for various operational and technological reasons. We wrestled with how best to visualize areas with missing or cloud-obscured images from each year. In the end, after much experimentation, we chose to simply interpolate between valid image years. Other techniques, such as greying out invalid data, created distractingly large artifacts, visually drowning out the valid information. However, the downside with the approach we have taken is that it can be difficult to tell which data is original and which is interpolated. We are exploring the possibility of including a view that allows drilling down into the non-interpolated, original mosaics.
"Time Machine": An HTML5 Time-Series Exploration Tool
Once we had produced the final global images, we adapted the Carnegie Mellon CREATE Lab's open-source "Time Machine" software, which enables authoring, streaming, and exploring very-high-resolution videos. Time Machine videos take advantage of the power of HTML5 and modern web browsers: they are streamed as multiresolution, overlapping video tiles and displayed in a web page by manipulating the HTML5 <video> tag, in much the same way that Google Maps first demonstrated using the HTML <img> tag.
Examples of zoomable timelapses with hundreds of millions or billions of pixels per frame include documenting plant growth, bee colony collapse, and very-large-scale simulations of the universe. Time-lapse Earth, however, sets a new record for giant videos: each frame of the video is a global Mercator-projected map with a resolution of 30 meters per pixel at the equator, for a total of 1.78 trillion pixels per frame. That's about a million times larger than a standard HD video stream. In order to scale to such large videos, we needed to integrate Time Machine's data production pipeline into Earth Engine and the rest of Google's infrastructure. Encoding the final video tiles consumed approximately 1.4 million core-hours of CPU in Google's data centers over the course of about a day. For CMU's researchers, this would have been impossible without Google's resources.
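The 1.78 trillion figure follows directly from the projection and resolution; a quick calculation, assuming the usual square Mercator extent, reproduces both it and the comparison to an HD frame.

```python
# Pixels per frame for a square Mercator map at 30 m per pixel at the equator.
equator_m = 40_075_000           # Earth's equatorial circumference in metres
width_px = equator_m / 30        # ~1.34 million pixels across
total_px = width_px ** 2         # square Mercator frame: height equals width
hd_px = 1920 * 1080              # pixels in one full-HD frame
print(f'{total_px:.2e} pixels per frame, ~{total_px / hd_px:,.0f}x an HD frame')
```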
Combining all three phases of product generation:
Total processing time: 3 days
Total CPU usage: 1.8 million core-hours
Peak CPU usage: 66,000 simultaneous cores
Destination locations of top 1500 share links, weighted by number of visits.
Time-lapse Earth is powerful because it helps us to access and construct the story of our planet. That story will become richer with each release, as we continue to improve fidelity and add data. The story-teller is everyone — scientists and citizens alike provide the real value by interacting, exploring, layering their knowledge upon the globe, and sharing their insights so that we can all better understand our world.
We are especially proud of the collaboration that made time-lapse Earth possible, and believe it to be an exemplar of how industry, academia, government, and the press can benefit from working together deeply over a period of years. By drawing on the strengths of each member of the collaborative community, Google strives to integrate the world's technical expertise and knowledge in order to tackle innovative and groundbreaking projects. In doing so, it is our goal to deliver an impactful service, one that can put a focus on the dramatic effect we are having on our planet.