Google Research Blog
The latest news from Research at Google
New Research Challenges in Language Understanding
Friday, November 22, 2013
Posted by Maggie Johnson, Director of Education and University Relations
We held the first global Language Understanding and Knowledge Discovery Focused Faculty Workshop in Nanjing, China, on November 14-15, 2013. Thirty-four faculty members joined the workshop, arriving from 10 countries and regions across APAC, EMEA and the US. Googlers from Research, Engineering and University Relations/University Programs also attended the event.
The 2-day workshop included keynote talks, panel discussions and break-out sessions [agenda]. It was an engaging and productive workshop, and we saw lots of positive interactions among the attendees. The workshop encouraged communication between Google and faculty around the world working in these areas.
Research in text mining continues to explore open questions relating to entity annotation, relation extraction, and more. The workshop’s goal was to brainstorm and discuss relevant topics to further investigate these areas. Ultimately, this research should help provide users with search results that are much more relevant to them.
At the end of the workshop, participants identified four topics representing challenges and opportunities for further exploration in Language Understanding and Knowledge Discovery:
Knowledge representation, integration, and maintenance
Efficient and scalable infrastructure and algorithms for inferencing
Presentation and explanation of knowledge
Multilingual computation
Going forward, Google will be collaborating with academic researchers on a position paper related to these topics. We also welcome faculty interested in contributing to further research in this area to submit a proposal to the Faculty Research Awards program. Faculty Research Awards are one-year grants to researchers working in areas of mutual interest.
The faculty attendees responded positively to the focused workshop format, as it allowed time to go in depth into important and timely research questions. Encouraged by their feedback, we are considering similar workshops on other topics in the future.
EMEA Faculty Summit 2012
Tuesday, October 02, 2012
Posted by Michel Benard, University Relations Manager
Last week we held our fifth Europe, Middle East and Africa (EMEA) Faculty Summit in London, bringing together 94 of EMEA’s foremost computer science academics from 65 universities representing 25 countries, together with more than 60 Googlers.
This year’s jam-packed agenda included a welcome reception at the Science Museum (plus a tour of the special exhibition: “Codebreaker - Alan Turing’s life and legacy”), a keynote on “Research at Google” by Alfred Spector, Vice President of Research and Special Initiatives, and a welcome address by Nelson Mattos, Vice President of Engineering and Products in EMEA, covering Google’s engineering activity and recent innovations in the region.
The Faculty Summit is a chance for us to meet with academics in Computer Science and other areas to discuss the latest exciting developments in research and education, and to explore ways in which we can collaborate via our University Relations programs.
The two and a half day program consisted of tech talks, break-out sessions, a panel on online education, and demos. The program covered a variety of computer science topics including Infrastructure, Cloud Computing Applications, Information Retrieval, Machine Translation, Audio/Video, Machine Learning, User Interface, e-Commerce, Digital Humanities, Social Media, and Privacy. For example, Ed H. Chi summarized how researchers use data analysis to understand the ways users share content with their audiences using the Circle feature in Google+.
Jens Riegelsberger summarized how UI design and user experience research are essential to creating a seamless experience on Google Maps.
John Wilkes discussed some of the research challenges - and opportunities - associated with building, managing, and using computer systems at massive scale. Breakout sessions ranged from technical follow-ups on the talk topics to discussing ways to increase the presence of women in computer science.
We also held one-on-one sessions where academics and Googlers could meet privately and discuss topics of personal interest, such as how to develop a compelling research award proposal, how to apply for a sabbatical at Google or how to gain Google support for a conference in a particular research area.
The Summit provides a great opportunity to build and strengthen research and academic collaborations. Our hope is to drive research and education forward by fostering mutually beneficial relationships with our academic colleagues and their universities.
Faculty Summit 2012: Online Education Panel
Monday, August 20, 2012
Posted by Peter Norvig, Director of Research
On July 26th, Google's 2012 Faculty Summit hosted computer science professors from around the world for a chance to talk and hear about some of the work done by Google and by our faculty partners. One of the sessions was a panel on Online Education. Daphne Koller's presentation on "Education at Scale" described how a talk about YouTube at the 2009 Google Faculty Summit was an early inspiration for her as she was formulating the approach that led to the founding of Coursera. Koller started with the goal of allowing Stanford professors to have more time for meaningful interaction with their students, rather than just lecturing, and ended up with a model based on the flipped classroom, where students watch videos out of class and then come together to discuss what they have learned. She then refined the flipped classroom to work when there is no classroom, when the interactions occur in online discussion forums rather than in person. She described some fascinating experiments that allow for more flexible types of questions (beyond multiple choice and fill-in-the-blank) by using peer grading of exercises.
In my talk, I described how I arrived at a similar approach, but starting from a different motivation: I wanted a textbook that was more interactive and engaging than a static paper-based book, so I too incorporated short videos and frequent interactions for the Intro to AI class I taught with Sebastian Thrun.
Finally, Bradley Horowitz, Vice President of Product Management for Google+, gave a talk describing the goal of Google+. It is not to build the largest social network; rather, it is to understand our users better, so that we can serve them better, while respecting their privacy and keeping each of their conversations within the appropriate circle of friends. This allows people to have more meaningful conversations, within a limited context, and turns out to be very appropriate to education.
By bringing people together at events like the Faculty Summit, we hope to spark the conversations and ideas that will lead to the next breakthroughs, perhaps in online education, or perhaps in other fields. We'll find out a few years from now what ideas took root at this year's Summit.
Reflections on Digital Interactions: Thoughts from the 2012 NA Faculty Summit
Thursday, August 02, 2012
Posted by Alfred Spector, Vice President of Research and Special Initiatives
Last week, we held our eighth annual North America Computer Science Faculty Summit at our headquarters in Mountain View. Over 100 leading faculty joined us from 65 universities located in North America, Asia Pacific and Latin America to attend the two-day Summit, which focused on new interactions in our increasingly digital world.
In my introductory remarks, I shared some themes that are shaping our research agenda. The first relates to the amazing scale of systems we can now contemplate. How can we get to computational clouds of, perhaps, a billion cores (or processing elements)? How can such clouds be efficient and manageable, and what will they be capable of? Google is actively working on most aspects of large-scale systems, and we continue to look for opportunities to collaborate with our academic colleagues. I note that we announced a cloud-based program, based on Google App Engine technology, to support education.
Another theme in my introduction was semantic understanding. With the introduction of our Knowledge Graph and other work, we are making great progress toward data-driven analysis of the meaning of information. Users, who provide a continual stream of subtle feedback, drive continuous improvement in the quality of our systems, whether about a celebrity, the meaning of a word in context, or a historical event. In addition, we have found that the combination of information from multiple sources helps us understand meaning more efficiently. When multiple signals are aggregated, particularly with different types of analysis, we have fewer errors and improved semantic understanding. Applying this "combination hypothesis" makes systems more intelligent.
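To make the intuition behind the combination hypothesis concrete, here is a minimal, purely illustrative Python sketch (not Google's implementation; the 20% error rate and the majority-vote rule are assumptions chosen only for the example): three independent signals that are each wrong one time in five, when combined by majority vote, are wrong only about one time in ten.

```python
# Illustrative sketch of the "combination hypothesis": aggregating several
# independent, individually noisy signals yields fewer errors than relying
# on any single signal. The error rate and majority-vote rule below are
# assumptions made purely for illustration.
import random

def noisy_signal(truth, error_rate=0.2):
    """Return the true binary label, flipped with probability error_rate."""
    return truth if random.random() > error_rate else 1 - truth

def combined_signal(truth, n_signals=3):
    """Majority vote over several independent noisy signals."""
    votes = [noisy_signal(truth) for _ in range(n_signals)]
    return 1 if sum(votes) > n_signals / 2 else 0

trials = 100_000
single_errors = sum(noisy_signal(1) != 1 for _ in range(trials))
combined_errors = sum(combined_signal(1) != 1 for _ in range(trials))
print(f"single-signal error rate:   {single_errors / trials:.3f}")   # roughly 0.20
print(f"combined-signal error rate: {combined_errors / trials:.3f}")  # roughly 0.10
```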
Finally, I talked about User Experience. Our field is developing ever more creative user interfaces (which both present information to users and accept information from them), partly due to the revolution in mobile computing, but also in part due to the availability of large-scale processing in the cloud and deeper semantic understanding. There is no doubt that our interactions with computers will be vastly different 10 years from now, and they will be significantly more fluid, or natural.
This page lists the Googler and Faculty presentations at the summit.
One of the highest intensity sessions we had was the panel on online learning with Daphne Koller from Stanford/Coursera, and Peter Norvig and Bradley Horowitz from Google. While there is a long way to go, I am so pleased that academicians are now thinking seriously about how information technology can be used to make education more effective and efficient. The infrastructure and user-device building blocks are there, and I think the community can now quickly get creative and provide the experiences we want for our students. Certainly, our own recent experience with our online Power Searching Course shows that the baseline approach works, but it also illustrates how much more can be done.
I asked Elliot Solloway (University of Michigan) and Cathleen Norris (University of North Texas), two faculty attendees, to provide their perspective on the panel and they have posted their reflections on their blog.
The digital era is changing the human experience. The summit talks and sessions exemplified the new ways in which we interact with devices, each other, and the world around us, and revealed the vast potential for further innovation in this space. Events such as these keep ideas flowing, and it’s immensely fun to be part of this very broadly based computer science community.
Natural Language in Voice Search
Tuesday, July 31, 2012
Posted by Jakob Uszkoreit, Software Engineer
On July 26 and 27, we held our eighth annual Computer Science Faculty Summit on our Mountain View Campus. During the event, we brought you a series of blog posts dedicated to sharing the Summit's talks, panels and sessions, and we continue with this glimpse into natural language in voice search. --Ed
At this year’s Faculty Summit, I had the opportunity to showcase the newest version of Google Voice Search. This version hints at how Google Search, in particular on mobile devices and by voice, will become increasingly capable of responding to natural language queries.
I first outlined the trajectory of Google Voice Search, which was initially released in 2007.
Voice actions, launched in 2010 for Android devices, made it possible to control your device by speaking to it. For example, if you wanted to set your device alarm for 10:00 AM, you could say “set alarm for 10:00 AM. Label: meeting on voice actions.” To indicate the subject of the alarm, a meeting about voice actions, you would have to use the keyword “label”! Certainly not everyone would think to frame the requested action this way. What if you could speak to your device in a more natural way and have it understand you?
At last month’s Google I/O 2012, we announced a version of voice actions that supports much more natural commands. For instance, your device will now set an alarm if you say “my meeting is at 10:00 AM, remind me”. This makes even previously existing functionality, such as sending a text message or calling someone, more discoverable on the device -- that is, if you express a voice command in whatever way feels natural to you, whether it be “let David know I’ll be late via text” or “make sure I buy milk by 3 pm”, there is now a good chance that your device will respond as you anticipated.
I then discussed some of the possibly unexpected decisions we made when designing the system we now use for interpreting natural language queries and requests. For example, as you would expect from Google, our approach to interpreting natural language queries is data-driven and relies heavily on machine learning. In complex machine learning systems, however, it is often difficult to figure out the underlying cause of an error: after supplying them with training and test data, you merely obtain a set of metrics that hopefully give a reasonable indication of the system’s quality, but they fail to explain why a certain input led to a given, possibly wrong, output.
As a result, even understanding why some mistakes were made requires experts in the field and detailed analysis, rendering it nearly impossible to harness non-experts in analyzing and improving such systems. To avoid this, we aim to make every partial decision of the system as interpretable as possible. In many cases, any speaker of English could look at the system’s possibly erroneous behavior in response to some input and quickly identify the underlying issue - and in some cases even fix it!
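To illustrate what an interpretable partial decision might look like, here is a toy Python sketch. The rule names, patterns, and actions are invented for this example and are not Google's system; the point is only that when each decision is an explicit, human-readable rule, a non-expert can inspect the trace, see which rule fired (or failed to fire), and propose a fix.

```python
import re

# Toy, invented rule set (not Google's system): each rule is an explicit,
# human-readable decision, so erroneous behavior can be traced to a specific rule.
RULES = [
    ("reminder_with_time",
     re.compile(r"(?P<what>.+?) is at (?P<time>\d{1,2}(:\d{2})?\s*[ap]m),?\s*remind me", re.I),
     lambda m: {"action": "set_reminder", "label": m.group("what"), "time": m.group("time")}),
    ("text_message",
     re.compile(r"let (?P<who>\w+) know (?P<msg>.+?) via text", re.I),
     lambda m: {"action": "send_text", "to": m.group("who"), "message": m.group("msg")}),
]

def interpret(utterance):
    """Return (interpretation, trace); the trace records every rule tried and whether it matched."""
    trace = []
    for name, pattern, build in RULES:
        match = pattern.search(utterance)
        trace.append((name, bool(match)))
        if match:
            return build(match), trace
    # Fall back to a plain web search when no rule matches.
    return {"action": "web_search", "query": utterance}, trace

result, trace = interpret("my meeting is at 10:00 AM, remind me")
print(result)  # {'action': 'set_reminder', 'label': 'my meeting', 'time': '10:00 AM'}
print(trace)   # [('reminder_with_time', True)]
```

A statistical system can of course be far more accurate than hand-written patterns; the sketch only shows why decisions that can be read and traced are easier for non-experts to debug.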
We are especially interested in working with our academic colleagues on some of the many fascinating research and engineering challenges in building large-scale, yet interpretable natural language understanding systems and devising the machine learning algorithms this requires.
Big Pictures with Big Messages
Thursday, July 26, 2012
Posted by Maggie Johnson, Director of Education and University Relations
Google’s Eighth Annual Computer Science Faculty Summit opened today in Mountain View with a fascinating talk by Fernanda Viégas and Martin Wattenberg, leaders of the data visualization group at our Cambridge office. They provided insight into their design process for visualizing big data by highlighting two visualizations they created: Google+ Ripples and a map of the wind.
To preface his explanation of the design process, Martin shared that his team “wants visualization to be ‘G-rated,’ showing the full detail of the data - there’s no need to simplify it, if complexity is done right.” Martin discussed how their wind map started as a personal art project but has gained interest, particularly among groups with a practical interest in wind conditions (sailors, surfers, firefighters). The map displays surface wind data from the US National Digital Forecast Database and updates hourly. You can zoom around the United States looking for where the winds are fastest - often around lakes or just offshore - or check out the gallery to see snapshots of the wind from days past.
Fernanda discussed the development of Google+ Ripples, a visualization that shows how news spreads on Google+. The visualization shows spheres of influence and different patterns of spread. For example, someone might post a video to their Google+ page, and if it goes viral, we’ll see several circles in the visualization. This depicts the influence of different individuals sharing content, both in terms of the number of their followers and the re-shares of the video, and has revealed that individuals are at times more influential than organizations in the social media domain.
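As a rough illustration of the idea behind such a visualization, the following Python sketch models a re-share cascade as a tree; the sample data and the subtree-size measure of influence are assumptions made for the example, not the Ripples implementation. Users whose re-shares spawn many further re-shares correspond to the larger circles in the picture.

```python
from collections import defaultdict

# Hypothetical re-share log, invented for illustration: (resharer, who they reshared from).
reshares = [
    ("alice", "origin"), ("bob", "alice"), ("carol", "alice"),
    ("dave", "bob"), ("erin", "carol"), ("frank", "carol"),
]

# Build the cascade tree: each source points to the users who reshared from it.
children = defaultdict(list)
for user, source in reshares:
    children[source].append(user)

def downstream_reshares(user):
    """Count re-shares triggered, directly or indirectly, by a user (a crude influence measure)."""
    return sum(1 + downstream_reshares(child) for child in children[user])

for user in ("origin", "alice", "bob", "carol"):
    print(user, downstream_reshares(user))
# origin 6, alice 5, bob 1, carol 2 (bigger subtrees correspond to bigger circles)
```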
Martin and Fernanda closed with two important lessons in data visualization: first, don’t “dumb down” the data. If complexity is handled correctly and in interesting ways, our users find the details appealing and find their own ways to interact with and expand upon the data. Second, users like to see their personal world in a visualization. Being able to see the spread of a Google+ post, or to zoom in on the wind around one’s town, is what makes a visualization personal and compelling -- we call this the “I can see my house from here” feature.
The Faculty Summit will continue through Friday, July 27, with talks by Googlers and faculty guests as well as breakout sessions on specific topics related to this year’s theme of digital interactions. We will be looking closely at how computation and bits have permeated our everyday experiences via smart phones, wearable computing, social interactions, and education.
We will be posting here throughout the summit with updates and news as it happens.