Google Research Blog
The latest news from Research at Google
Julia meets HTML 5
Monday, January 31, 2011
Posted by Daniel Wolf, Software Engineer
Today, we launched Julia Map on Google Labs, a fractal renderer in HTML 5.
Julia Map uses the Google Maps API to zoom and pan into the fractals. The images are computed with HTML 5 canvas. Each image generally requires millions of floating point operations. HTML 5 web workers spread the heavy calculations across all cores of the machine.
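The per-pixel computation behind such a renderer boils down to the classic escape-time iteration z → z² + c. A minimal sketch (a generic illustration, not Julia Map's actual code):

```python
def julia_escape_time(x, y, c, max_iter=256, bailout=2.0):
    """Iterate z -> z*z + c starting from z0 = x + y*i. Return the
    iteration count at which |z| exceeds the bailout radius (used to
    color the pixel), or max_iter if the point appears to stay
    bounded, i.e. belongs to the filled Julia set."""
    z = complex(x, y)
    for i in range(max_iter):
        if abs(z) > bailout:
            return i
        z = z * z + c
    return max_iter

# For c = -1 (the "basilica" Julia set), the origin cycles
# 0 -> -1 -> 0 forever and never escapes, while a point far from
# the origin escapes on the very first check.
inside = julia_escape_time(0.0, 0.0, complex(-1, 0))    # max_iter
outside = julia_escape_time(2.5, 0.0, complex(-1, 0))   # 0
```

A renderer runs this loop once per pixel, which is why a single tile costs millions of floating point operations and why the work parallelizes so naturally across workers.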
We hope you will enjoy exploring the different Julia sets and share the URLs of the most artistic images you discover. See what others have posted on Twitter. Click on the images below to dive into infinity!
Google at NIPS 2010
Thursday, January 27, 2011
Posted by Slav Petrov, Doug Aberdeen, and Lisa McCracken, Google Research
The machine learning community met in Vancouver in December for the 24th Neural Information Processing Systems Conference (NIPS). As always, the single-track program of the main conference featured a number of outstanding talks, followed by interesting late-night poster sessions. A record number of workshops covered a wide variety of topics, while allocating sufficient time for skiing in Whistler; after all, many of the most interesting research conversations happen while riding the lift between ski runs. This year's conference also featured a symposium dedicated to Sam Roweis, providing a retrospective on Sam's life and work. Sam, a fellow Googler and professor at NYU, was at the heart of the NIPS community and is terribly missed.
As always, Google was involved with NIPS in various ways. Here at Google, we take a data-driven approach to solving problems, so machine learning is in one way or another at the core of most of what we do. It is no surprise, then, that many Googlers helped shape the program of the conference or were in the audience. This year, three Googlers served as area chairs and even more were reviewers. Googlers also co-authored the following papers:
Label Embedding Trees for Large Multi-Class Tasks
by Samy Bengio and Jason Weston
Learning Bounds for Importance Weighting
by Corinna Cortes, Yishay Mansour, and Mehryar Mohri
Online Learning in the Manifold of Low-Rank Matrices
by Uri Shalit, Daphna Weinshall, and Gal Chechik
Deterministic Single-Pass Algorithm for LDA
by Issei Sato, Kenichi Kurihara, and Hiroshi Nakagawa
Distributed Dual Averaging in Networks
by John Duchi, Alekh Agarwal, and Martin Wainwright
Additionally, Googlers co-organized three well attended workshops:
Coarse-to-Fine Learning and Inference
by Ben Taskar, David Weiss, Benjamin Sapp, and Slav Petrov
Low-rank Methods for Large-scale Machine Learning
by Arthur Gretton, Michael Mahoney, Mehryar Mohri, and Ameet Talwalkar
Learning on Cores, Clusters, and Clouds
by John Duchi, Ofer Dekel, John Langford, Lawrence Cayton, and Alekh Agarwal
Finally, Yoram Singer gave a great talk on
Learning Structural Sparsity
at the Sam Roweis symposium and Googlers presented the following talks during the workshops:
Online Learning in the Manifold of Low-Rank Matrices
by Uri Shalit, Daphna Weinshall, and Gal Chechik
Distributed MAP Inference for Undirected Graphical Models
by Sameer Singh, Amar Subramanya, Fernando Pereira, and Andrew McCallum
MapReduce/Bigtable for Distributed Optimization
by Keith Hall, Scott Gilpin and Gideon Mann
Self-Pruning Prediction Trees
by Sally Goldman
Web Scale Image Annotation: Learning to Rank with Joint Word-Image Embeddings
by Jason Weston, Samy Bengio, and Nicolas Usunier
Coarse-to-Fine Decoding for Parsing and Machine Translation
by Slav Petrov
Overall, it was a very successful conference, and it was good to be back in Vancouver one last time. This coming year, NIPS will be in Granada, Spain. Hasta luego!
More Google Contributions to the Broader Scientific Community
Tuesday, January 25, 2011
Posted by Corinna Cortes and Alfred Spector, Google Research
Googlers actively engage with the scientific community by publishing technical papers, contributing open-source packages, working on standards, introducing new APIs and tools, giving talks and presentations, participating in ongoing technical debates, and much more. Our publications offer technical and algorithmic advances, demonstrate things we learn as we develop novel products and services, and shed light on some of the technical challenges we face at Google.
In an effort to highlight some of our work, we periodically select a number of publications to be featured on this site. We first posted a set of papers on this blog in mid-2010 and subsequently discussed them in more detail in follow-up posts. This post highlights a few noteworthy new papers authored or co-authored by Googlers from the latter half of 2010. In the coming weeks we will offer a more in-depth look at these publications, but here are some summaries:
Algorithms and Electronic Commerce
Robust Mechanisms for Risk-Averse Sellers
ACM Conference on Electronic Commerce (EC)
Mukund Sundararajan and Qiqi Yan, Stanford University
In his seminal Nobel prize-winning work, Roger Myerson identified the revenue-maximizing auction for a risk-neutral auctioneer. In contrast, this work identifies good mechanisms for risk-averse auctioneers. These mechanisms trade a little revenue for better certainty, in the best possible way. We expect this work will help guide reserve-price selection in auctions where auctioneers/sellers want better control over their revenue.
Monitoring Algorithms for Negative Feedback Systems
World Wide Web Conference (WWW)
Mark Sandler and S. Muthukrishnan
In negative feedback systems, users report abusive content at a site to its owner for consideration or removal, but the users might not be honest. For the site owners, this represents a trade-off between vetting such user reports by humans vs. accepting them without vetting. This paper presents a mathematical framework for design and analysis of such systems and presents algorithms with provably good trade-offs against malicious users.
Children's Roles Using Keyword Search Interfaces in the Home
Computer Human Interactions (CHI)
Allison Druin, Elizabeth Foss, Evan Golub, and Leshell Hatley, University of Maryland, and Hilary Hutchinson
In this paper, we describe seven search roles children display as information seekers using Internet keyword interfaces, based on a home study of 83 children ages 7, 9, and 11.
Large Scale Image Annotation: Learning to Rank with Joint Word-Image Embeddings
European Conference on Machine Learning (ECML) Best Paper
Jason Weston, Samy Bengio, and Nicolas Usunier, Université Paris 6 - LIP6
In this paper, we introduce a generic framework to find a joint representation of images and their labels, which can then be used for various tasks, including image ranking and image annotation. We simultaneously propose an efficient training algorithm that scales to tens of millions of images and hundreds of thousands of labels, while focusing training on making good predictions at the top of the ranked list. The models are both fast at prediction time and have low memory usage making it possible to house such systems on a laptop or mobile device.
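The ranking step can be pictured with a toy sketch (hand-picked hypothetical vectors, not the paper's trained model): images and labels live in one shared vector space, and annotation reduces to sorting the labels by their inner product with the image's embedding.

```python
def dot(u, v):
    """Inner product of two vectors given as plain lists."""
    return sum(a * b for a, b in zip(u, v))

# Hypothetical label vectors in a 3-dimensional joint space. The real
# model learns these vectors, along with a linear map that projects
# image features into the same space.
label_vectors = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.7, 0.3, 0.0],
    "plane": [0.0, 0.1, 0.9],
}

def rank_labels(image_embedding, label_vectors):
    """Score each label by its inner product with the image embedding
    and return the labels sorted best-first."""
    scores = {lbl: dot(vec, image_embedding)
              for lbl, vec in label_vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)

image = [0.8, 0.2, 0.1]  # pretend embedding of a cat photo
ranking = rank_labels(image, label_vectors)  # ["cat", "dog", "plane"]
```

Because prediction is just a dot product per label, the memory and compute footprint stays small, which is what makes deployment on a laptop or mobile device plausible.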
Overlapping Experiment Infrastructure: More, Better, Faster Experimentation
Knowledge Discovery and Data Mining (KDD)
Diane Tang, Ashish Agarwal, Deirdre O'Brien, and Mike Meyer
Google's data driven culture requires running a large number of live traffic experiments. This paper describes Google's overlapping experimental infrastructure where a single event (e.g. a web search) can be assigned to multiple simultaneous large experiments. The infrastructure and supporting tools provide a framework that enables running experiments from design to decision making and launch, and can be generalized to many other web applications.
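The layered assignment can be sketched roughly as follows (a simplified illustration with hypothetical layer and experiment names, not Google's actual diversion logic): each layer hashes the request independently, so a single event receives exactly one experiment per layer and layers never collide with each other.

```python
import hashlib

def assign(cookie, layer, num_buckets=1000):
    """Hash the (layer, cookie) pair into a bucket. Each layer salts
    the hash differently, so bucket assignments in different layers
    are effectively independent."""
    digest = hashlib.sha256(f"{layer}:{cookie}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

# Hypothetical layers, each carving the bucket space into experiments.
layers = {
    "ui-layer":      {"blue-links": range(0, 500), "control": range(500, 1000)},
    "ranking-layer": {"new-ranker": range(0, 100), "control": range(100, 1000)},
}

def experiments_for(cookie):
    """One event (e.g. a search from this cookie) is assigned to one
    experiment in every layer simultaneously."""
    result = {}
    for layer, exps in layers.items():
        bucket = assign(cookie, layer)
        for exp, buckets in exps.items():
            if bucket in buckets:
                result[layer] = exp
    return result
```

Because hashing is deterministic, the same cookie always sees the same treatment, while experiments in different layers overlap freely on the same traffic.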
Products of Random Latent Variable Grammars
North American Chapter of the Association for Computational Linguistics (NAACL)
It is well known that the Expectation Maximization algorithm can converge to widely varying local maxima. This paper shows that this can be advantageous when learning latent variable grammars for syntactic parsing. By combining multiple state-of-the-art individual grammars into an unweighted product model, parsing accuracy can be improved from 90.2% to 91.8% for English, and from 80.3% to 84.5% for German.
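The combination step itself is simple: a parse is scored by the product of the probabilities the individual grammars assign to it, or equivalently the sum of their log-probabilities. A toy sketch with hypothetical scores:

```python
import math

# Hypothetical probabilities assigned by three independently trained
# grammars (each from a different random EM initialization) to two
# candidate parse trees of the same sentence.
grammar_scores = {
    "tree_A": [0.40, 0.35, 0.50],
    "tree_B": [0.45, 0.20, 0.30],
}

def product_model_score(probs):
    """Unweighted product of the individual grammar probabilities,
    computed in log space for numerical stability."""
    return sum(math.log(p) for p in probs)

# The first grammar alone would prefer tree_B (0.45 > 0.40) ...
single_best = max(grammar_scores, key=lambda t: grammar_scores[t][0])
# ... but the product model prefers tree_A (0.070 > 0.027), because
# the other grammars disagree with the first one's choice.
best = max(grammar_scores, key=lambda t: product_model_score(grammar_scores[t]))
```

This is the sense in which varying local maxima become an advantage: trees that only one grammar likes are voted down by the product.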
Contention Aware Execution: Online Contention Detection and Response
International Symposium on Code Generation and Optimization (CGO)
Jason Mars, University of Virginia, Neil Vachharajani, Robert Hundt, Mary Lou Soffa, University of Virginia
This paper takes a big step forward on an important and pressing problem in computer science. It presents a lightweight runtime solution that significantly improves the utilization of datacenter servers, by up to 58% on average. This work also received the CGO 2010 Best Presentation Award.
Say What? Why users choose to speak their web queries
Maryam Kamvar and Doug Beeferman
Say What? Have you been speaking your search queries into your mobile device rather than typing them? Spoken search is available on Android, iPhone, and BlackBerry devices, and we see an increasing number of searches coming in by voice on these phones. In our paper “Say What: Why users choose to speak their web queries,” we investigate, at an aggregate level, which factors are most predictive of spoken queries. Understanding the contexts in which speech-driven search is used (or, conversely, not used) can help improve recognition engines and spoken interface design. So, save keystrokes and say your query!
Query Language Modeling for Voice Search
IEEE Workshop on Spoken Language Technology
Ciprian Chelba, Johan Schalkwyk, Thorsten Brants, Vida Ha, Boulos Harb, Will Neveitt, Carolina Parada*, Johns Hopkins University, and Peng Xu
The paper describes language modeling for google.com query data, and its application to speech recognition for Google Voice Search.
Our empirical findings include:
10% relative gains in WER from large-scale modeling,
a lesser-known yet potentially quite detrimental interaction between Kneser-Ney smoothing and entropy pruning (approx. 10% relative increase in WER),
evidence that hints at non-stationarity of the query stream, and
surprisingly strong dependence across three English locales: USA, Britain, and Australia.
Dremel: Interactive Analysis of Web-Scale Datasets
Very Large Data Bases (VLDB)
Sergey Melnik, Andrey Gubarev, Jing Jing Long, Geoffrey Romer, Shiva Shivakumar, Matt Tolton, and Theo Vassilakis, Google Inc.
Dremel is a scalable, interactive ad-hoc query system. By combining multi-level execution trees and columnar data layout, it is capable of running aggregation queries over trillion-row tables in seconds. The system is widely used at Google and serves as the foundational technology behind BigQuery, a product launched in limited preview mode.
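The benefit of a columnar layout for aggregation can be seen in a toy contrast (illustrative only, not Dremel's actual storage format): summing one field in a row store touches every record in full, while a column store reads just one contiguous array.

```python
# Row-oriented layout: each record is stored together, so an
# aggregation over "clicks" still scans every field of every record.
rows = [{"url": "a", "clicks": 3, "country": "US"},
        {"url": "b", "clicks": 5, "country": "DE"},
        {"url": "c", "clicks": 2, "country": "US"}]

# Column-oriented layout: each field is stored contiguously, so an
# aggregation over "clicks" never reads "url" or "country" at all
# (and homogeneous columns also compress far better on disk).
columns = {"url":     ["a", "b", "c"],
           "clicks":  [3, 5, 2],
           "country": ["US", "DE", "US"]}

total_row_store = sum(r["clicks"] for r in rows)    # scans everything
total_col_store = sum(columns["clicks"])            # scans one column
```

At trillion-row scale the difference in bytes read per query, combined with the multi-level execution trees, is what makes interactive latencies possible.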
Systems and Infrastructure
Large-scale Incremental Processing Using Distributed Transactions and Notifications
USENIX Symposium on Operating Systems Design and Implementation (OSDI)
Daniel Peng and Frank Dabek
In the past, Google accumulated a whole day’s worth of changes to the web and ran a series of enormous MapReduces to apply this batch of changes to our index of the web. This system led to a delay of several days between crawling a document and presenting it to users in search results. To meet our goal of reducing the indexing delay to minutes, we needed to update the index as each individual document was crawled, rather than in daily batches. No existing infrastructure supported this kind of incremental transformation at web scale, so we built Percolator: a framework for transforming a large repository using small ACID transactions.
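The contrast with batch processing can be sketched in miniature (a toy inverted index, hugely simplified; Percolator additionally wraps each such update in an ACID transaction and triggers it through observers watching the repository):

```python
# Toy "index": word -> set of document ids. A batch system rebuilds
# this from scratch over the whole repository; an incremental system
# folds in each document as soon as it is crawled.
index = {}

def index_document(doc_id, text, index):
    """Incrementally merge one newly crawled document into the index,
    leaving all other entries untouched."""
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

index_document("doc1", "fractal renderer", index)
index_document("doc2", "fractal zoom", index)
# index["fractal"] now contains both documents, with no rebuild.
```

The per-document update is tiny compared with a full MapReduce pass, which is what shrinks the crawl-to-serving delay from days to minutes.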
Availability in Globally Distributed Storage Systems
USENIX Symposium on Operating Systems Design and Implementation (OSDI)
Daniel Ford, Francois Labelle, Florentina Popovici, Murray Stokely, Van-Anh Truong*, Columbia University, Luiz Barroso, Carrie Grimes, and Sean Quinlan
In our paper, we characterize the availability of cloud storage systems, based on extensive monitoring of Google's main storage infrastructure, and the sources of failure which affect availability. We also present statistical models for reasoning about the impact of design choices such as data placement, recovery speed, and replication strategies, including replication across multiple data centers.
Improved Consistent Sampling, Weighted Minhash and L1 Sketching
IEEE International Conference on Data Mining (ICDM)
With the huge amounts of very high-dimensional data, such as images and videos, we frequently need to "sketch" the data -- that is, represent it in a much more compact form, while still allowing us to accurately determine how different any two images or videos are. In this paper, we describe a sketching method for L1, one of the most common distance measures. It works by first hashing the data with a new algorithm, and then compressing each hash to a small number of bits, which is learned from data. This method is fast and allows the distances to be estimated accurately, while reducing the storage requirements by a factor of 100.
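The flavor of the approach can be seen in the unweighted special case, classic minhash, where two sketches collide with probability equal to the sets' Jaccard similarity (a toy sketch in the spirit of the paper, not its actual weighted algorithm, which also compresses each hash down to a few bits):

```python
import hashlib

def minhash(items, seed):
    """One minhash of a set: the item whose salted hash is smallest.
    Two sets produce the same minhash for a given seed with
    probability equal to their Jaccard similarity."""
    return min(items,
               key=lambda x: hashlib.sha256(f"{seed}:{x}".encode()).hexdigest())

def estimate_jaccard(a, b, num_hashes=200):
    """Fraction of seeds on which the two sets' minhashes collide."""
    hits = sum(minhash(a, s) == minhash(b, s) for s in range(num_hashes))
    return hits / num_hashes

a = {"cat", "dog", "fish"}
b = {"cat", "dog", "bird"}
# True Jaccard similarity = |a ∩ b| / |a ∪ b| = 2/4 = 0.5
est = estimate_jaccard(a, b)
```

Storing only the sketches (here, 200 small values per set) instead of the raw high-dimensional data is what yields the large reduction in storage while keeping distance estimates accurate.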
*) work carried out while at Google
Supporting computer science education with CS4HS
Thursday, January 20, 2011
Posted by Terry Ednacot, Education Program Manager
Studies have shown a decline in the number of U.S. students taking AP computer science classes, which in turn leads to a decline in students declaring computer science as their major, a concerning trend as the U.S. tries to remain competitive in the global economy. With programs like Computer Science for High School (CS4HS), we hope to increase the number of CS majors, and therefore the number of people entering careers in CS, by promoting computer science curricula at the high school level.
For the fourth consecutive year, we’re funding CS4HS to invest in the next generation of computer scientists and engineers. CS4HS is a workshop for high school and middle school computer science teachers that introduces new and emerging concepts in computing and provides tips, tools and guidance on how to teach them. The ultimate goals are to “train the trainer,” develop a thriving community of high school CS teachers and spread the word about the awe and beauty of computing.
In 2011 we’re expanding the program considerably and hope to double the number of schools we funded in 2010. If you’re a university, community college, or technical school in the U.S., Canada, Europe, the Middle East, or Africa and are interested in hosting a workshop at your institution, please visit the CS4HS website to submit an application for grant funding.
Applications will be accepted between January 18, 2011 and February 18, 2011.
In addition to submitting your application, on the CS4HS website you’ll find info on how to organize a workshop, as well as websites and agendas from last year’s participants to give you an idea of how the workshops were structured in the past. There’s also a collection of
CS4HS curriculum modules
that previous participating schools have shared for future organizers to use in their own programs.
Previous organizers have told us that teachers have left their workshops excited about the new materials they learned and the innovative ideas they’ve discussed with other teachers. We’re hopeful that they’ll pass on to their students not only the skills that they learned but also that passion.