Google Research Blog
The latest news from Research at Google
Friday, July 30, 2010
Posted by Corinna Cortes and Alfred Spector, Google Research
We often get asked if Google scientists and engineers publish technical papers, and the answer is, “Most certainly, yes.” Indeed, we have a formidable research capability, and we encourage publications as well as other forms of technical dissemination, including our contributions to open source and standards and the introduction of new APIs and tools, some of which have proven to be foundational.
Needless to say, with our great commitment to technical excellence in computer science and related disciplines, we find it natural and rewarding to contribute to the scientific community and to ongoing technical debates. And we know that it is important for Google to help create the fundamental building blocks upon which continuing advances can occur.
To be specific, Googlers publish hundreds of technical papers every year in journals, books, and conference and workshop proceedings. These deal with specific applications and engineering questions, algorithmic and data structure problems, and important theoretical problems in computer science, mathematics, and other areas that can guide our algorithmic choices. While the publications are interesting in their own right, they also offer a glimpse into some of the key problems we face when dealing with very large data sets, and demonstrate other questions that arise in our engineering design at Google.
We’d like to highlight a few of the more noteworthy papers from the first trimester of this year. The papers reflect the breadth and depth of the problems on which we work. We find that virtually all aspects of computer science, from systems and programming languages, to algorithms and theory, to security, data mining, and machine learning, are relevant to our research landscape. A more complete list of our publications can be found on our publications site.
In the coming weeks we will be offering a more in-depth look at these publications, but here are some summaries:
"Google Search by Voice: A Case Study," by Johan Schalkwyk, Doug Beeferman, Francoise Beaufays, Bill Byrne, Ciprian Chelba, Mike Cohen, Maryam Garrett, Brian Strope, to appear in Advances in Speech Recognition: Mobile Environments, Call Centers, and Clinics, Amy Neustein (Ed.), Springer-Verlag 2010.
Google Search by Voice is a result of many years of investment in speech at Google. In our book chapter, “Google Search by Voice: A Case Study,” we describe the basic technology, the supporting technologies, and the user interface design behind Google Search by Voice. We describe how we built it and what lessons we have learned. Google Search by Voice is growing rapidly and is being built in many languages. Along the way we constantly encounter new research problems, providing the perfect atmosphere for doing research on real-world problems.
Computer Architecture & Networks & Distributed Systems
"Energy-proportional Datacenter Networks," by Dennis Abts, Mike Marty, Philip Wells, Peter Klausler, Hong Liu, International Symposium on Computer Architecture, ISCA, June 2010.
Google researchers have called on industry and academia to develop energy-proportional computing systems, where the energy consumed is directly proportional to the utilization of the system. In this work, we focus on the energy usage of high-bandwidth, highly scalable cluster networks. Through a combination of an energy-efficient topology and dynamic fine-grained control of link speeds, our proposed techniques show the potential to significantly reduce both electricity and environmental costs.
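To make the notion of energy proportionality concrete, here is a minimal sketch (with illustrative power numbers, not figures from the paper) contrasting a conventional network link, which draws nearly full power even when idle, with an ideal energy-proportional link whose power draw tracks its utilization:

```python
# Sketch of the energy-proportionality idea. The power figures and
# the 90% idle fraction are illustrative assumptions, not data from
# the paper.

def conventional_power(utilization, p_max=10.0, idle_fraction=0.9):
    """Typical link: close to peak power even at zero load."""
    return p_max * (idle_fraction + (1 - idle_fraction) * utilization)

def proportional_power(utilization, p_max=10.0):
    """Ideal energy-proportional link: power scales with load."""
    return p_max * utilization

# At 15% average utilization, common in over-provisioned networks:
u = 0.15
print(conventional_power(u))  # ≈ 9.15 W
print(proportional_power(u))  # ≈ 1.5 W
```

The gap between the two at low utilization is the savings opportunity the paper targets with topology choice and dynamic link-speed control.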
Economics & Market Algorithms
"Quasi-Proportional Mechanisms: Prior-free Revenue Maximization," by Vahab S. Mirrokni, S. Muthukrishnan, Uri Nadav, Latin American Theoretical Informatics Symposium, LATIN, April 2010.
Say a seller wishes to sell an item, but the buyers value it very differently. What is a suitable auction for selling the item, in terms of both efficiency and revenue? First- and second-price auctions will be efficient but will only extract the lower value in equilibrium; if one knows the distributions from which values are drawn, then setting a reserve price will yield optimal revenue but will not be efficient. This paper views the problem as a prior-free auction and proposes a quasi-proportional allocation, in which the probability that the item is allocated to a bidder depends (quasi-proportionally) on their bid. The paper also proves the existence of an equilibrium for quasi-proportional auctions and shows how to compute it efficiently. Finally, it shows that these auctions achieve high efficiency and revenue.
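As a rough illustration of a quasi-proportional allocation rule, the sketch below allocates the item with probability proportional to a concave function of each bid. The choice of square root as the weighting function is an assumption for illustration, not necessarily the weighting analyzed in the paper:

```python
import math

def quasi_proportional_allocation(bids, w=math.sqrt):
    """Allocation probabilities proportional to w(bid).

    With w = sqrt (an illustrative choice), a bidder who bids 100x
    as much wins only 10x as often, which softens the winner-take-all
    character of first- and second-price auctions.
    """
    weights = [w(b) for b in bids]
    total = sum(weights)
    return [wt / total for wt in weights]

# Two bidders with very different values for the item:
probs = quasi_proportional_allocation([100.0, 1.0])
# sqrt(100)=10, sqrt(1)=1, so probabilities are 10/11 and 1/11.
```

Because the low bidder still wins with positive probability, the high bidder cannot shade its bid all the way down to the low value, which is what lets such mechanisms extract more revenue without knowing the value distributions.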
"Auctions with Intermediaries," Jon Feldman, Vahab Mirrokni, S. Muthukrishnan, Mallesh Pai, ACM Conference on Electronic Commerce, EC, June 2010.
We study an auction in which the bidders are middlemen, looking in turn to auction off the item if they win it. This setting arises naturally in online advertisement exchange systems, where the participants in the exchange are ad networks looking to sell ad impressions to their own advertisers. We present optimal strategies for both the bidders and the auctioneer in this setting. In particular, we show that the optimal strategy for bidders is to choose a randomized reserve price, and that the optimal reserve price of the central auctioneer may depend on the number of bidders (unlike the case when there are no middlemen).
"Discontinuous Seam-Carving for Video Retargeting," Matthias Grundmann, Vivek Kwatra, Mei Han, Irfan Essa, Computer Vision and Pattern Recognition, CVPR, June 2010.
Playing a video on devices with different form factors requires resizing (or retargeting) the video to fit the resolution of the given device. We have developed a content-aware technique for video retargeting based on seam carving, which, unlike standard methods like uniform scaling and cropping, strives to retain salient content (such as actors, faces, and structured objects) while discarding relatively unimportant pixels (such as the sky or a blurry background). The key innovations of our research include: (a) a solution that maintains temporal continuity of the video in addition to preserving its spatial structure, (b) space-time smoothing for automatic as well as interactive (user-guided) salient content selection, and (c) sequential frame-by-frame processing conducive to arbitrary-length and streaming video.
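For readers unfamiliar with seam carving, here is a minimal sketch of the classic single-image version the video work builds on: a dynamic program finds the connected vertical path of lowest-energy pixels, which can then be removed to shrink the width by one column. (The paper's discontinuous, temporally coherent video variant is considerably more involved; this only shows the underlying idea.)

```python
def min_vertical_seam(energy):
    """Classic seam-carving DP: find the lowest-energy vertical seam,
    i.e. one pixel per row, moving at most one column between rows."""
    rows, cols = len(energy), len(energy[0])
    cost = [row[:] for row in energy]  # cumulative minimum seam cost
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(c - 1, 0), min(c + 1, cols - 1)
            cost[r][c] += min(cost[r - 1][lo:hi + 1])
    # Backtrack upward from the cheapest cell in the bottom row.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 1, cols - 1)
        seam.append(min(range(lo, hi + 1), key=lambda c2: cost[r][c2]))
    return seam[::-1]  # column index to remove in each row

# Toy energy map: the low-energy (unimportant) path is the middle column.
energy = [
    [9, 1, 9],
    [9, 1, 9],
    [9, 1, 9],
]
# min_vertical_seam(energy) -> [1, 1, 1]
```

In practice the energy map comes from an image gradient or a saliency measure, so low-energy seams pass through flat, unimportant regions like sky or blurred background.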
"Random classification noise defeats all convex potential boosters," Philip M. Long, Rocco A. Servedio, Machine Learning, vol. 78 (2010), pp. 287-304.
A popular approach that has been used to tackle many machine learning problems recently is to formulate them as optimization problems in which the goal is to minimize some “convex loss function.” This is an appealing formulation because these optimization problems can be solved in much the same way that a marble rolls to the bottom of a bowl. However, it turns out that there are drawbacks to this formulation. In "Random Classification Noise Defeats All Convex Potential Boosters," we show that any learning algorithm that works in this way can fail badly if there are noisy examples in the training data. This research motivates further study of other approaches to machine learning, for which there are algorithms that are provably more robust in the presence of noise.
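The failure mode can be seen in a tiny numeric sketch (our own illustration, not an example from the paper): with the exponential loss used by boosting, a single mislabeled point far from the origin contributes an exponentially large penalty, so the loss minimizer prefers a classifier that misclassifies many clean points over one that misclassifies the lone noisy point.

```python
import math

def exp_loss(w, data):
    """Exponential loss of the 1-D linear classifier sign(w * x)."""
    return sum(math.exp(-y * w * x) for x, y in data)

# 20 clean examples at x=1 labeled +1, plus ONE point far from the
# origin (x=10) whose label was flipped to -1 by noise.
data = [(1.0, +1)] * 20 + [(10.0, -1)]

# The correct classifier (w > 0) pays an exponentially large penalty
# on the single noisy point, so the convex loss prefers the sign-
# flipped classifier even though it gets all 20 clean points wrong:
print(exp_loss(+1.0, data))  # ≈ 22034: dominated by the noisy point
print(exp_loss(-1.0, data))  # ≈ 54.4: lower loss despite 20 errors
```

The paper's result is much stronger than this sketch suggests: it holds for any convex potential function, not just the exponential loss.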
"Clustering Query Refinements by User Intent," Eldar Sadikov, Jayant Madhavan, Lu Wang, Alon Halevy, Proceedings of the International World Wide Web Conference, WWW, April 2010.
When users pose a search query, they usually have an underlying intent or information need, and the sequence of queries they pose in a single search session is usually determined by that underlying intent. Our research demonstrates that there are typically only a small number of prominent underlying intents for a given user query. Further, these intents can be identified very accurately by analyzing anonymized search query logs. Our results show that underlying intents almost always correspond to well-understood high-level concepts.
"How does search behavior change as search becomes more difficult?", Anne Aula, Rehan Khan, Zhiwei Guan, Proceedings of the ACM Conference on Human Factors in Computing Systems, CHI, April 2010.
Seeing that someone is getting frustrated with a difficult search task is easy for another person: just look for the frowns, and listen for the sighs. But could a computer tell that you're getting frustrated from just the limited behavior a search engine can observe? Our study suggests that it can: our data show that when users get frustrated, they start to formulate question queries, they start to use advanced operators, and they spend a larger proportion of their time on the search results page. Together, these signals can be used to build a model that can potentially detect user frustration.
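As a rough sketch of how the three observed signals could be combined, the toy scorer below normalizes each one and averages them. Everything here (the word lists, operator strings, and equal weighting) is a made-up illustration; the study builds a proper statistical model from features like these rather than a hand-tuned heuristic:

```python
def frustration_score(queries, time_on_results, session_time):
    """Toy heuristic combining the three signals named in the study.
    All word lists, operators, and weights are illustrative only."""
    question_words = ("how", "why", "what", "where", "who", "when")
    advanced_ops = ('"', "site:", "intitle:", " OR ")

    # Fraction of queries phrased as questions.
    question_frac = sum(q.lower().startswith(question_words)
                        for q in queries) / len(queries)
    # Fraction of queries using advanced operators.
    operator_frac = sum(any(op in q for op in advanced_ops)
                        for q in queries) / len(queries)
    # Share of the session spent on the results page.
    results_frac = time_on_results / session_time
    return (question_frac + operator_frac + results_frac) / 3

score = frustration_score(
    queries=["python csv", "how do I read a csv in python",
             "site:docs.python.org csv reader"],
    time_on_results=45.0, session_time=60.0)
```

Each component lies in [0, 1], so the combined score does too; a real model would instead learn weights (and thresholds) from labeled sessions.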