Google Research Blog
The latest news from Research at Google
And the winner of the $1 Million Little Box Challenge is…CE+T Power’s Red Electrical Devils
Monday, February 29, 2016
Posted by Ross Koningstein, Engineering Director Emeritus, Google Research
In July 2014, Google and the IEEE launched the $1 Million Little Box Challenge, an open competition to design and build a small kW-scale inverter with a power density greater than 50 Watts per cubic inch while meeting a number of other specifications related to efficiency, electrical noise and thermal performance. Over 2,000 teams from across the world registered for the competition and more than 80 proposals qualified for review by the
IEEE Power Electronics Society
and Google. In October 2015,
18 finalists were selected
to bring their inverters to the
National Renewable Energy Laboratory
(NREL) for testing.
Today, Google and the IEEE are proud to announce that the grand prize winner of the $1 Million Little Box Challenge is CE+T Power’s Red Electrical Devils. The Red Electrical Devils (named after
Belgium’s national soccer team
) were declared the winner by a consensus of judges from Google, the IEEE Power Electronics Society and NREL. Honorable mentions go to teams from Schneider Electric and Virginia Tech’s Future Energy Electronics Center.
CE+T Power’s Red Electrical Devils receive $1 Million Little Box Challenge Prize
Schneider, Virginia Tech and The Red Electrical Devils all built 2kW inverters that passed
100 hours of testing at NREL
, adhered to the technical specifications of the competition, and were recognized today in a ceremony at the
ARPA-E Energy Innovation Summit
in Washington, DC. Among the three finalists, the Red Electrical Devils’ inverter had the highest power density and smallest volume.
Impressively, the winning team exceeded the power density goal for the competition by a factor of 3,
which is more than 10 times more compact than commercially available inverters
! When we initially brainstormed technical targets for the Little Box Challenge, some of us at Google didn’t think such audacious goals could be achieved. Three teams from around the world proved decisively that it could be done.
Our takeaway: Establish a worthy goal and smart people will exceed it!
Congratulations again to CE+T Power’s Red Electrical Devils, Schneider Electric and Virginia Tech’s Future Energy Electronics Center, and sincere thanks to our collaborators at IEEE and NREL. The finalists’ technical approach documents will be posted on the
Little Box Challenge
website until December 31, 2017. We hope this helps advance the state of the art and innovation in kW-scale inverters.
On the Personalities of Dead Authors
Wednesday, February 24, 2016
Posted by Marc Pickett, Software Engineer, Chris Tar, Engineering Manager and Brian Strope, Research Scientist
“Great, ice cream for dinner!”
How would you interpret that? If a 6-year-old says it, it feels very different than if a parent says it. People are good at inferring the deeper meaning of language based on both the context in which something was said, and their knowledge of the personality of the speaker.
But can one program a computer to understand the intended meaning from natural language in a way similar to us? Developing a system that knows definitions of words and rules of grammar is one thing, but giving a computer conversational context along with the expectations of a speaker’s behaviors and language patterns is quite another!
To tackle this challenge, a Natural Language Understanding research group at Google, led by Ray Kurzweil, works on building systems able to understand natural language at a deeper level. By experimenting with systems able to perceive and project different personality types, our goal is to enable computers to interpret the meaning of natural language similar to the way we do.
One way to explore this research is to build a system capable of sentence prediction. Can we build a system that can, given a sentence from a book and knowledge of the author’s style and “personality”, predict what the author is most likely to write next?
We started by utilizing the works of a thousand different authors to see if we could train a Deep Neural Network (DNN) to predict, given an input sentence, what sentence would come next. The idea was to see whether a DNN could - given millions of lines from a jumble of authors - “learn” a pattern or style that would lead one sentence to follow another.
This initial system had no author ID at the input - we just gave it pairs (line, following line) from 80% of the literary sample (saving 20% of it as a validation holdout). The labels at the output of the network are a simple YES or NO, depending on whether the example was truly a pair of sentences in sequence from the training data, or a randomly matched pair. This initial system had an error rate of 17.2%, where a random guess would be 50%. A slightly more sophisticated version also adds a fixed number of previous sentences for context, which decreased the error down to 12.8%.
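The pair-construction step described above can be sketched roughly as follows. This is a toy illustration with a made-up four-sentence corpus; the real system fed millions of such (line, following line) pairs, plus randomly matched negatives, to a DNN:

```python
import random

def make_pairs(sentences, seed=0):
    """Build (sentence, candidate, label) examples: label 1 when the
    candidate truly follows the sentence in the text, 0 when the
    candidate is a randomly matched sentence from elsewhere."""
    rng = random.Random(seed)
    examples = []
    for i in range(len(sentences) - 1):
        # Positive example: the true next sentence.
        examples.append((sentences[i], sentences[i + 1], 1))
        # Negative example: a random sentence that is not the true successor.
        j = rng.randrange(len(sentences))
        while j == i + 1:
            j = rng.randrange(len(sentences))
        examples.append((sentences[i], sentences[j], 0))
    return examples

corpus = ["To be, or not to be.", "That is the question.",
          "All the world's a stage.", "And one man in his time plays many parts."]
pairs = make_pairs(corpus)  # 3 positive + 3 negative examples
```

The classifier then only ever sees pairs plus a YES/NO label, which is what makes a 50% error rate the random-guess baseline.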
We then improved that initial system by giving the network an additional signal per example: a unique ID representing the author. We told it who was saying what. All examples from that author were now accompanied by this ID during training time. The new system learned to leverage the Author ID, and decreased the relative error by 12.3% compared to the previous system (from 12.8% down to 11.1%). At some level, the system is saying “I've been told that this is Shakespeare, who tends to write like this, so I'll take that into account when weighing which sentence is more likely to follow”. On a slightly different ranking task (pick which of two responses most likely follows, instead of just a yes/no on a given trigger/response pair), including the fixed window of previous sentences along with this author ID resulted in an error rate of less than 5%.
The 300 dimensional vectors our system derived to do these predictions are presumably representative of the author’s word choice, thinking, and style. We call these “Author vectors”. To get an intuitive sense of what these vectors are capturing, we projected the 300 dimensional space into two dimensions and plotted them as shown in the figure below. This gives some semblance of similarity and relative positions of authors in the space.
A two-dimensional representation of the vector embeddings for some of the authors in our study, projected down from 300 dimensions. Note that contemporaries and influencers tend to be near each other (e.g., Nathaniel Hawthorne and Herman Melville, or Marlowe and Shakespeare).
It is interesting to consider which dimensions are most pertinent to defining personality and style, and which are more related to content or areas of interest. In the example above, we find Shakespeare and Marlowe in adjacent space. At the very least, these two dimensions reflect similarities of contemporary authors, but are there also measurable variables corresponding to “snark”, or humor, or sarcasm? Or perhaps there is something related to interests in sports?
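One simple way to make the notion of “nearness” between author vectors concrete is cosine similarity. The sketch below uses made-up 4-dimensional vectors standing in for the real 300-dimensional ones, purely to illustrate the computation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy stand-ins for the real learned author vectors (values invented).
authors = {
    "Shakespeare": [0.9, 0.1, 0.3, 0.2],
    "Marlowe":     [0.8, 0.2, 0.4, 0.1],
    "Twain":       [0.1, 0.9, 0.2, 0.7],
}

# Under these toy values, Shakespeare sits closer to his contemporary
# Marlowe than to Twain, mirroring the clustering seen in the projection.
```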
With this working, we wondered, “How would the model respond to the questions of a personality test?” To simulate how different authors might respond to questions found in such tests, we needed an NN that would produce a yes/no decision influenced by the author vector - including for sentences it hasn’t seen before.
To simulate different authors’ responses to questions, we use the author vectors described above as inputs to our more general networks. In that way, we get the performance and generalization of the network across all authors and text it learned on, but influenced by what’s unique to a chosen author. Combined with our generative model, these vectors allow us to generate responses as different authors. In effect, one can chat with a statistical representation of the text written by Shakespeare!
Once we set the author vector for a chosen author, we posed the Myers Briggs questions to the system as the “current sentence” and gave the Myers Briggs response options as the next-sentence candidates. When we asked our model of Shakespeare’s texts “Are you more of a private person or an outgoing person?”, it predicted “a private person”. When we changed the author vector to Mark Twain and posed the same question, we got “an outgoing person”.
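Conceptually, answering a test question reduces to scoring each candidate response against the active author vector and picking the highest-scoring one. Here is a toy sketch of that selection step; the vectors and the dot-product scoring are invented stand-ins for the trained network:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pick_response(author_vec, options):
    """Rank candidate responses by compatibility with the author vector
    and return the highest-scoring one. 'options' maps each candidate
    response to a (made-up) feature vector."""
    return max(options, key=lambda resp: dot(author_vec, options[resp]))

# Hypothetical 2-d features: first dim ~ introversion, second ~ extraversion.
options = {"a private person": [1.0, 0.0], "an outgoing person": [0.0, 1.0]}
shakespeare = [0.8, 0.2]  # toy author vectors, not learned values
twain = [0.3, 0.9]

pick_response(shakespeare, options)  # "a private person"
pick_response(twain, options)        # "an outgoing person"
```

Only the author vector changes between the two calls; the question and candidate responses stay fixed, which is exactly what lets one model “answer as” different authors.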
If you're interested in more predictions our models made,
here's the complete list
for the small dataset of authors that we used. We have no reason to believe that these assessments are particularly accurate, since our systems weren't trained to do that well. Also, the responses are based on the writings of the author. Dialogs from fictional characters are not necessarily representative of the author’s actual personality. But we do know that these kinds of text-based systems can predict these kinds of classifications (for example
this UPenn study
used language use in public posts to predict users' personality traits). So we thought it would be interesting to see what we could get from our early models.
Though we can in no way claim that these models accurately respond the way the authors would have, there are a few amusing anecdotes. When asked “Who is your favorite author?” and given the options “Mark Twain”, “William Shakespeare”, “Myself”, and “Nobody”, the Twain model responded with “Mark Twain” and the Shakespeare model responded with “William Shakespeare”. Another example comes from the personality test: “When the phone rings”, Shakespeare’s model “hope[s] someone else will answer”, while Twain’s “[tries] to get to it first”. Fitting, perhaps, since the telephone was patented during Twain’s lifetime, but after Shakespeare’s.
This work is an early step towards better understanding intent, and how long-term context influences interpretation of text. In addition to being fun and interesting, this work has the potential to enrich products through personalization. For example, it could help provide more personalized response options for the recently introduced
Smart Reply feature
in Inbox by Gmail.
Google Science Fair 2016: #howcanwe make things better with science?
Tuesday, February 23, 2016
Posted by Olivia Hallisey, 2015 Grand Prize winner, Google Science Fair
(Cross-posted from the Google for Education blog)
The 2016 Google Science Fair opens for submissions today. Together with LEGO Education, National Geographic, Scientific American and Virgin Galactic, we’re inviting all young explorers and innovators to make something better through science and engineering. To learn more about the competition, how to enter, prize details and more, visit the Google Science Fair website or follow along on Google+.
In this post,
2015 Grand Prize winner, Olivia Hallisey
, joins us to reflect back on her own experience with Google Science Fair.
I remember the day I first heard about the Google Science Fair last year. I was sitting in my 10th grade science class when my teacher asked us: “What will you try?” I loved the invitation—and the challenge—that the Google Science Fair offered. It was a chance to use science to do something that could really make a difference in the world.
I had always been curious and interested in science, and knew I wanted to submit a project, but didn’t really know exactly where to begin. I asked my teacher for his advice on selecting a research topic. He encouraged me to choose something that I felt passionate about, or something that outraged me, and told me to look at the world around me for inspiration. So I did. At that time, the Ebola crisis was all over the news. It was a devastating situation and I wanted to help be a part of the solution. I had found my project.
With the outbreak spreading so quickly, I decided that I wanted to find a way to diagnose the virus earlier so that treatment could be delivered as quickly as possible to those who were affected. I read online about silk’s amazing storage and stabilizing properties, and wondered if I could use silk to transport antibodies that could test for the virus. After many failed attempts (and cutting up lots of cocoons) I finally succeeded in creating a temperature-independent, portable, and inexpensive diagnostic test that could detect the Ebola virus in under 30 minutes. I was really excited that my research could help contribute to saving lives, and I was proud to be selected as the Grand Prize winner a few months later.
As the 2016 Google Science Fair launches today, I wanted to share a few tips from my own experience: First, as my teacher once guided me to do, look at the world around you for ideas. If you’re stuck, try the
Make Better Generator
to find something that excites or inspires you. Second, find a mentor who’s interested in the same things as you. There are a lot of resources on the GSF site to get you started. And finally, don’t get discouraged—often what first appears to be failure can teach you so much more.
I urge other teenagers like me to take this opportunity to find a way to make the world around them better. Every one of us, no matter our age or background, can make a difference—and as young people, we’re not always so afraid to try things that adults think will fail. But change doesn’t happen overnight, and it often starts with a question. So look at the world around you and challenge yourself to make something better.
Science isn’t just a subject—it’s a way to make things better. So I hope you’ll enter the Google Science Fair this year. Our world is waiting to see what you come up with!
Exploring the Intersection of Art and Machine Intelligence
Monday, February 22, 2016
Posted by Mike Tyka, Software Engineer
In June of last year, we
published a story
about a visualization technique that helped us understand how neural networks carry out difficult visual classification tasks. In addition to helping us gain a deeper understanding of how NNs work, these techniques also produced strange, wonderful and oddly compelling images.
Following that blog post, and especially after
we released the source code
, dubbed DeepDream, we
witnessed a tremendous interest
not only from the machine learning community but also from the creative coding community. Additionally, several artists immediately started experimenting with the technique as a new way to create art.
Image by Memo Akten, 2015, used with permission.
Soon after, the paper A Neural Algorithm of Artistic Style by Leon Gatys and colleagues in Tübingen was released. Their technique used a convolutional neural network to factor images into their separate style and content components. This in turn allowed the creation, by using a neural network as a generic image parser, of new images that combined the style of one with the content of another. Once again it took the creative coding community by storm, and many artists and coders immediately began experimenting with the new algorithm.
The style transfer algorithm crosses a photo with a painting style; for example, Neil deGrasse Tyson in the style of Kandinsky’s Jaune Rouge Bleu. Photo used with permission.
The open-source deep-learning community hugely contributed to the spread, accessibility and development of these algorithms. Both DeepDream and style transfer were rapidly implemented in a plethora of different languages and deep learning packages, and others immediately took the techniques and developed them further.
“Saxophone dreams” - Mike Tyka.
With machine learning as a field moving forward at a breakneck pace and rapidly becoming part of many -- if not most -- online products, the opportunities for artistic uses are as wide as they are unexplored and perhaps overlooked. However, interest is growing rapidly: the University of London is now offering a course on
Machine learning and art
. NYU ITP offers
a similar program
this year, and the Tate Modern’s IK Prize 2016 takes up a related topic.
These are exciting early days, and we want to continue to stimulate artistic interest in these emerging technologies. To that end, we are announcing a two day DeepDream event in San Francisco at the
Gray Area Foundation for the Arts
, aimed at showcasing some of the latest exploration of the intersection of Machine Intelligence and Art, and spurring discussion focused around future directions:
Friday Feb 26th:
DeepDream: The Art of Neural Networks
, an exhibit consisting of 29 neural network generated artworks, created by artists at Google and from around the world. The works will be auctioned, with all proceeds going to the Gray Area Foundation, which has been active in supporting the intersection between arts and technology for over 10 years.
Saturday Feb 27th:
Art and Machine Learning Symposium
, an open one-day symposium on Machine Learning and Art, aiming to bring together the neural network and the creative coding communities to exchange ideas, learn and discuss. Videos of all the talks will be posted online after the event.
We look forward to sharing some of the interesting works of art generated by the art and machine learning community, and being part of the discussion of how art and technology can be combined.
Text-to-Speech for low resource languages (episode 3): But can it say “Google”?
Friday, February 19, 2016
Posted by Martin Jansche, Software Engineer, Google Research for Low Resource Languages
This is the third episode in the series of posts reporting on the work we are doing to build text-to-speech (TTS) systems for low resource languages. In the first episode, we described the crowdsourced acoustic data collection effort for Project Unison. In the second episode, we described how we built parametric voices based on that data. In this episode, we look at how we are compiling a pronunciation lexicon for a TTS system.
In Project Unison we are developing ways to bring Google's spoken language technology to the world’s major languages. As part of this broader goal, we are piloting a process for building a text-to-speech (TTS) system that can speak Bengali (Bangla). While our exploration of new methods has allowed us to gather sufficient data and train a statistical parametric voice capable of speaking Bengali, we had to address the next challenge: How do we make the voice sound like it is fluent in that language?
When people learn foreign languages, they are usually expected to pick up the full details from repeated exposure once they've mastered the basics and reached sufficient fluency. Often second-language learners struggle with issues that may seem so natural to fluent speakers that they are taken for granted. For instance, in order to read text out loud, one must know how to read different kinds of numerical expressions (e.g. dates, times, phone numbers, Roman numerals), and how to pronounce a wide variety of words, ranging from native words to newly coined brand names to loanwords, which themselves can arrive from different source languages. As TTS systems heavily rely on machine learning, they tend to face similar challenges as human learners: the way words are pronounced is often complex, sometimes surprising, and rarely fully documented.
Take the Bengali word meaning "microscope", which is অণুবীক্ষণ. Its pronunciation can be transcribed in the International Phonetic Alphabet as /o.nu.bik.kʰɔn/. When our system encounters this word, it analyzes the spelling into abstract written units called graphemes and then predicts the spoken sounds, or phonemes, from these graphemes.
The correspondence between graphemes and phonemes varies along several dimensions. One dimension is horizontal complexity: in many cases a single grapheme corresponds to a single phoneme, but the Bengali ligature ক্ষ is special, as several graphemes correspond to several phonemes in a somewhat surprising way. Another dimension is vertical predictability: a grapheme may correspond to different phonemes in different contexts, and the correct phoneme may be difficult to predict. The Bengali grapheme অ is both very frequent and very unpredictable in its pronunciation: it corresponds either to the phoneme /o/ or to the phoneme /ɔ/, or it is not pronounced at all. In the table above, we see all three possibilities within one word. As is standard in speech processing, our approach relies on human experts who transcribe words into phoneme sequences, and on machine learning models that capture the complex aspects of the grapheme-phoneme correspondence.
In order for our Bengali TTS system to pronounce the words in a sentence, it relies on a pronunciation dictionary, or lexicon, that provides pronunciations of a number of common words. When a word is not in the lexicon, it falls back on a machine learning model that was trained on thousands of pronunciations, which can then provide a pretty good guess at how a previously unseen word is pronounced. With a sufficiently large pronunciation dictionary, the system can be expected to reach a high level of fluency.
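The lexicon-with-fallback logic can be sketched as follows. The entries and the stand-in grapheme-to-phoneme model below are toy examples invented for illustration, not our actual lexicon or trained model:

```python
def pronounce(word, lexicon, g2p_model):
    """Look the word up in the hand-built lexicon; fall back to the
    grapheme-to-phoneme model for words the linguists haven't covered."""
    if word in lexicon:
        return lexicon[word]
    return g2p_model(word)

# Tiny stand-in lexicon (real entries come from linguists' transcriptions).
lexicon = {"অণুবীক্ষণ": "o.nu.bik.kʰɔn"}

def toy_g2p(word):
    # Placeholder for the trained model: here, a naive per-character lookup.
    table = {"গ": "g", "ু": "u", "ল": "ol"}
    return "".join(table.get(ch, "?") for ch in word)

pronounce("অণুবীক্ষণ", lexicon, toy_g2p)  # found in the lexicon
pronounce("গুগল", lexicon, toy_g2p)       # unseen word, model fallback
```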
We started compiling a Bengali pronunciation lexicon by having our Bangladeshi linguists transcribe a few thousand words into phonemes. This work was done in a web application custom built for this purpose. Just like
an earlier version
, this transcription tool supports the work of linguists by providing a virtual keyboard for entering phonemes.
Once a few thousand words had been transcribed, we trained a machine learning system that could predict phonemic transcriptions for previously unseen words, so that the linguists only had to correct the output of that system. After the TTS voice had been built, it also became possible to listen to the voice reading out the entered transcriptions.
Even before the first machine learning model had been trained for Bengali, we configured the transcription tool to provide some constraints on how words could be transcribed. Bengali, like most writing systems, has certain aspects that make it complex, while in other ways it is quite regular. As discussed above, the grapheme “a” (অ) can have different pronunciations depending on context, but its pronunciation does not vary wildly: it is either silent or pronounced as a vowel, never as a consonant. By incorporating constraints on which graphemes can correspond to which phonemes, we can easily identify unlikely or erroneous transcriptions. This methodology has been
in use at Google for several years.
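A minimal sketch of such constraint checking follows. The allowed sets here are a toy subset made up for illustration, not the constraints our linguists actually compiled:

```python
# Allowed phonemes per grapheme (toy subset). The grapheme অ may be
# /o/, /ɔ/, or silent (empty string), but never a consonant.
ALLOWED = {
    "অ": {"o", "ɔ", ""},
    "ক": {"k"},
}

def violations(alignment):
    """Given (grapheme, phoneme) pairs from a proposed transcription,
    return the pairs that break the allowed-correspondence constraints."""
    return [(g, p) for g, p in alignment
            if g in ALLOWED and p not in ALLOWED[g]]

violations([("অ", "o"), ("ক", "k")])  # consistent: nothing flagged
violations([("অ", "k")])              # অ as a consonant: flagged as erroneous
```

Flagged pairs can then be surfaced to the linguist in the transcription tool before the entry is accepted.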
The grapheme-phoneme correspondence varies along several dimensions, including regular words vs. abbreviations, and native words vs. loanwords. For example the word meaning "doctor" and pronounced /ɖɔk.ʈor/ can be written in several ways in Bengali: in Bengali script as ডক্টর or as the abbreviation ডঃ; and in Latin script as the English loanword doctor or as the abbreviation Dr. A TTS system should accept all ways of writing this word, hence all written variations are in our pronunciation lexicon.
A Bengali TTS voice should further be able to pronounce a variety of common brand names written in Latin script. The linguists from Project Unison therefore transcribed a few thousand such words phonemically into Bengali. For example, "WhatsApp" was transcribed /ho.aʈs.æp/, and "Google" was straightforwardly transcribed as /gu.gol/ just as if it had been spelled গুগল.
Overall our linguists transcribed more than 65,000 Bengali words into phonemic notation. In an effort to contribute to the community working on speech synthesis, speech recognition, and related natural language efforts, we are releasing our Bengali pronunciation dictionary under a Creative Commons License (CC BY 4.0). It is our hope that this will be a valuable resource for researchers and developers who are improving the state of spoken language systems.
Despite our efforts, this Bengali dictionary is incomplete and contains residual errors. As a work-in-progress it will continue to improve over time. We are hoping that other natural language and speech researchers will join us in making available more datasets under open licenses. As we refine our development process and extend it to more languages, we are planning on releasing additional datasets for other languages in the future.
NEXT UP: One Down, 299 to Go (Ep 4)
Running your models in production with TensorFlow Serving
Tuesday, February 16, 2016
Posted by Noah Fiedel, Software Engineer
Machine learning powers many Google product features, from speech recognition in the Google app to Smart Reply in Inbox to search in Google Photos. While decades of experience have enabled the software industry to establish best practices for building and supporting products, doing so for services based upon machine learning introduces new and interesting challenges.
Today, we announce the release of TensorFlow Serving, designed to address some of these challenges. TensorFlow Serving is a high performance, open source serving system for machine learning models, designed for production environments and optimized for TensorFlow. It is ideal for running multiple models, at large scale, that change over time based on real-world data, enabling:
model lifecycle management
experiments with multiple algorithms
efficient use of GPU resources
TensorFlow Serving makes the process of taking a model into production easier and faster. It allows you to safely deploy new models and run experiments while keeping the same server architecture and APIs. Out of the box it provides integration with TensorFlow, but it can be extended to serve other types of models.
Here’s how it works. In the simplified, supervised training pipeline shown below, training data is fed to the learner, which outputs a model:
Once a new model version becomes available, it is ready to be deployed to the serving system, as shown below.
TensorFlow Serving uses the (previously trained) model to perform inference - predictions based on new data presented by its clients. Since clients typically communicate with the serving system using a
remote procedure call
(RPC) interface, TensorFlow Serving comes with a reference front-end implementation based on
, a high performance, open source RPC framework from Google.
It is quite common to launch and iterate on your model over time, as new data becomes available, or as you improve the model. In fact, at Google, many pipelines run continuously, producing new model versions as new data becomes available.
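The core idea - clients keep calling a stable predict API while new model versions are atomically swapped in behind it - can be sketched in a few lines of Python. This is purely illustrative and is not TensorFlow Serving's actual implementation:

```python
import threading

class ModelServer:
    """Toy sketch of safe model rollout: the predict() API and server
    stay fixed while model versions are replaced atomically behind it."""

    def __init__(self, model_fn, version):
        self._lock = threading.Lock()
        self._model_fn, self._version = model_fn, version

    def deploy(self, model_fn, version):
        # Atomically swap in a new model version; requests that already
        # grabbed the old function finish against the old version.
        with self._lock:
            self._model_fn, self._version = model_fn, version

    def predict(self, x):
        with self._lock:
            model_fn, version = self._model_fn, self._version
        return version, model_fn(x)

server = ModelServer(lambda x: x * 2, version=1)
server.predict(3)                               # (1, 6)
server.deploy(lambda x: x * 2 + 1, version=2)   # retrain, redeploy
server.predict(3)                               # (2, 7) - same API, new model
```

Clients are unaware of the swap: the call signature and server endpoint never change, only the version answering the request.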
TensorFlow Serving is written in C++ and supports Linux, and it introduces minimal overhead. In our benchmarks we recorded ~100,000 queries per second (QPS) per core on a 16 vCPU Intel Xeon E5 2.6 GHz machine, excluding gRPC and the TensorFlow inference processing time.
We are excited to share this important component of TensorFlow today under the Apache 2.0 open source license. We would love to hear your questions and feedback on Stack Overflow and GitHub. To get started quickly, clone the code and check out the tutorial.
You can expect to keep hearing more about TensorFlow as we continue to develop what we believe to be one of the best machine learning toolboxes in the world. If you'd like to stay up to date, keep an eye out for the keynote address at GCP Next 2016.
Google Research Awards: Fall 2015
Friday, February 12, 2016
Posted by Maggie Johnson, Director of Education and University Relations
Due to changes in the program, the new deadline for the 2016 Google Faculty Research Awards is
September 30th, 11:59 PM PDT
, and not October 15th, as shown below.
We have just completed another round of the
Google Research Awards
, our annual open call for proposals on computer science and related topics including machine learning, speech recognition, natural language processing, and computational neuroscience. Our grants cover tuition for a graduate student and provide both faculty and students the opportunity to work directly with Google researchers and engineers.
This round we received 950 proposals, an increase of 18% over last round, covering 55 countries and over 350 universities. After expert reviews and committee discussions, we decided to fund 151 projects. Our support of machine learning projects increased by 71% from last round. Physical interfaces and immersive experiences, a relatively new area for the Google Research Awards, saw a 19% increase in the number of submitted proposals.
Congratulations to the well-deserving
recipients of this round’s awards
. If you are interested in applying for the next round (deadline is October 15), please visit our website for more information. Please note that we are now moving to an annual cycle.
Announcing the Google Internet of Things (IoT) Technology Research Award Pilot
Wednesday, February 10, 2016
Posted by Vint Cerf, Chief Internet Evangelist, and Max Senges, Google Research
Over the past year, Google engineers have experimented and developed a set of building blocks for the
Internet of Things
- an ecosystem of connected devices, services and “things” that promises direct and efficient support of one’s daily life. While there has been significant progress in this field, there remain significant challenges in terms of (1) interoperability and a standardized modular systems architecture, (2) privacy, security and user safety, as well as (3) how users interact with, manage and control an ensemble of devices in this connected environment.
It is in this context that we are happy to invite university researchers
to participate in the
Internet of Things (IoT) Technology Research Award Pilot
. This pilot provides selected researchers in-kind gifts of Google IoT related technologies (listed below), with the goal of fostering collaboration with the academic community on small-scale (~4-8 week) experiments, discovering what they can do with our software and devices.
We invite you to submit proposals in which Google IoT technologies are used to (1) explore interesting use cases and innovative user interfaces, (2) address technical challenges as well as interoperability between devices and applications, or (3) experiment with new approaches to privacy, safety and security. Proposed projects should make use of one or a combination of these Google technologies:
Google beacon platform
- consisting of the open beacon format Eddystone and various client and cloud APIs, this platform allows developers to mark up the world to make apps and devices work smarter by providing timely, contextual information.
The Physical Web - based on the Eddystone URL beacon format, the Physical Web is an approach designed to allow any smart device to interact with real world objects - a vending machine, a poster, a toy, a bus stop, a rental car - and not have to download an app first.
Nearby Messages API
- a publish-subscribe API that lets you pass small binary payloads between internet-connected Android and iOS devices as well as with beacons registered with
Google's proximity beacon service
Brillo and Weave - Brillo is an Android-based embedded OS that brings the simplicity and speed of mobile software development to IoT hardware to make it cost-effective to build a secure smart device, and to keep it updated over time. Weave is an open communications and interoperability platform for IoT devices that allows for easy connections to networks, smartphones (both Android and iOS), mobile apps, cloud services, and other smart devices.
- a communication hub for the Internet of Things supporting Bluetooth® Smart Ready, 802.15.4 and 802.11a/b/g/n/ac. It also allows you to quickly create a guest network and control the devices you want to share.
Google Cloud Platform IoT Solutions
- tools to scale connections, gather and make sense of data, and provide the reliable customer experiences that IoT hardware devices require.
- provides custom full screen apps for a purpose-built Chrome device, such as a guest registration desk, a library catalog station, or a point-of-sale system in a store.
- an open-source framework designed to make it easier to develop secure, multi-device user experiences, with or without an Internet connection.
Check out the
Ubiquity Dev Summit playlist
for more information on these platforms and their best practices.
Please submit your proposal here by February 29th in order to be considered for an award. Proposals will be reviewed by researchers and product teams within Google. In addition to looking for impact and interesting ideas, priority will be given to research that can make immediate use of the available technologies. Selected proposals will be notified by the end of March 2016. If selected, the award will be subject to Google’s terms, and your use of Google technologies will be subject to the applicable Google terms of service.
To connect our physical world to the Internet is a broad and long-term challenge, one we hope to address by working with researchers across many disciplines and work practices. We are looking forward to the collaborative opportunity provided by this pilot, and learning about innovative applications you create for these new technologies.
The same eligibility conditions as for the Faculty Research Award Program apply.