Research Blog: The latest news from Research at Google<br /><br /><b>The Question of Quantum Supremacy</b> (May 4, 2018)<br /><span class="byline-author">Posted by Sergio Boixo, Research Scientist and Theory Team Lead, and Charles Neill, Quantum Electronics Engineer, Quantum A.I. Lab</span><br /><br /><a href="https://en.wikipedia.org/wiki/Quantum_technology">Quantum computing</a> integrates the two largest technological revolutions of the last half century, <a href="https://en.wikipedia.org/wiki/Information_technology">information technology</a> and <a href="https://en.wikipedia.org/wiki/Quantum_mechanics">quantum mechanics</a>. If we compute using the rules of quantum mechanics, instead of binary logic, some <a href="https://en.wikipedia.org/wiki/Hubbard_model">intractable</a> <a href="https://en.wikipedia.org/wiki/Nitrogen_fixation">computational tasks</a> become feasible. An important goal in the pursuit of a universal quantum computer is the determination of the smallest computational task that is prohibitively hard for today’s classical computers. This crossover point is known as the “quantum supremacy” frontier, and is a critical step on the path to more powerful and useful computations.<br /><br />In “<a href="https://www.nature.com/articles/s41567-018-0124-x">Characterizing quantum supremacy in near-term devices</a>”, published in Nature Physics (arXiv <a href="https://arxiv.org/abs/1608.00263">here</a>), we present the theoretical foundation for a practical demonstration of quantum supremacy in near-term devices. We propose the task of sampling bit-strings from the output of random quantum circuits, which can be thought of as the “hello world” program for quantum computers. 
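To make the sampling task concrete, here is a small illustrative sketch in Python (a toy example of ours, not the circuits or code used in the paper): it builds the statevector of a random circuit out of random single-qubit unitaries and CZ gates, then samples bit-strings from the resulting output distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_one_qubit(state, u, q, n):
    """Apply a 2x2 unitary u to qubit q of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.tensordot(u, state, axes=([1], [q]))
    state = np.moveaxis(state, 0, q)
    return state.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z gate: flip the sign where both qubits are 1."""
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1.0
    return state.reshape(-1)

def random_circuit_state(n_qubits, depth):
    """Statevector of a toy random circuit: layers of random single-qubit
    unitaries (QR of complex Gaussians) followed by CZs on neighbours."""
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0  # start in |00...0>
    for _ in range(depth):
        for q in range(n_qubits):
            z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
            u, _ = np.linalg.qr(z)
            state = apply_one_qubit(state, u, q, n_qubits)
        for q in range(n_qubits - 1):
            state = apply_cz(state, q, q + 1, n_qubits)
    return state

state = random_circuit_state(n_qubits=5, depth=8)
probs = np.abs(state) ** 2
samples = rng.choice(len(probs), size=1000, p=probs)  # sampled bit-strings
```

At this scale a laptop simulates the circuit instantly; the point of the proposal is that this brute-force approach requires resources growing exponentially with the number of qubits.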
The upshot of the argument is that the output of random chaotic systems (think <a href="https://en.wikipedia.org/wiki/Butterfly_effect">butterfly effect</a>) quickly becomes harder to predict the longer the systems run. If one builds a random, chaotic <a href="https://en.wikipedia.org/wiki/Qubit">qubit</a> system and examines how long a classical system would take to emulate it, one gets a good measure of when a quantum computer could outperform a classical one. <a href="https://arxiv.org/abs/1803.04402">Arguably</a>, this is the strongest theoretical proposal to prove an <a href="https://arxiv.org/abs/1011.3245">exponential</a> <a href="https://arxiv.org/abs/1504.07999">separation</a> between the <a href="https://arxiv.org/abs/1612.05903">computational</a> power of classical and quantum computers. <br /><br />Determining where exactly the quantum supremacy frontier lies for sampling random quantum circuits has rapidly become an exciting area of research. On one hand, <a href="https://arxiv.org/abs/1710.05867">improvements</a> in <a href="https://arxiv.org/abs/1712.05384">classical</a> <a href="http://arxiv.org/abs/1802.06952">algorithms</a> to <a href="https://arxiv.org/abs/1804.04797">simulate quantum circuits</a> aim to <a href="https://arxiv.org/abs/1805.01450">increase the size</a> of the quantum circuits required to establish quantum supremacy. This means that an experimental quantum device needs a sufficiently large number of qubits, and error rates low enough to implement circuits of sufficient depth (i.e., the number of layers of gates in the circuit), to achieve supremacy. 
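One reason the classical cost explodes is that a brute-force statevector simulation stores one complex amplitude per basis state, so its memory footprint alone doubles with every added qubit. This back-of-the-envelope sketch is ours, for illustration only:

```python
# Memory needed to store a full n-qubit statevector as double-precision
# complex amplitudes: 2**n amplitudes at 16 bytes each.
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

print(statevector_bytes(30) / 2 ** 30)  # 16.0 GiB: feasible on a large workstation
print(statevector_bytes(50) / 2 ** 50)  # 16.0 PiB: far beyond any single machine
```

More sophisticated classical algorithms (e.g. tensor-network contraction or circuit partitioning) trade memory for time, which is exactly why the frontier keeps moving as those algorithms improve.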
On the other hand, we now understand better how the particular choice of the quantum gates used to build random quantum circuits affects the <a href="http://arxiv.org/abs/1804.04797">simulation cost</a>, leading to improved benchmarks for near-term quantum supremacy (available for download <a href="https://github.com/sboixo/GRCS">here</a>), which are in some cases quadratically more expensive to simulate classically than the original proposal.<br /><br />Sampling from random quantum circuits is an excellent calibration benchmark for quantum computers, which we call <i>cross-entropy benchmarking</i>. A successful quantum supremacy experiment with random circuits would demonstrate the basic building blocks for a large-scale <a href="https://journals.aps.org/pra/abstract/10.1103/PhysRevA.86.032324">fault-tolerant</a> quantum computer. Furthermore, quantum physics has not yet been tested for <a href="https://arxiv.org/abs/1203.5813">highly complex</a> quantum states such as this. <br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-ZOtsQgZroZc/WuyXsilCW1I/AAAAAAAACo0/BULrTsPCQioAsgK2KhY99icD1bj5cHd-QCLcBGAs/s1600/image1.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1165" data-original-width="1600" height="466" src="https://2.bp.blogspot.com/-ZOtsQgZroZc/WuyXsilCW1I/AAAAAAAACo0/BULrTsPCQioAsgK2KhY99icD1bj5cHd-QCLcBGAs/s640/image1.jpg" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="https://pdfs.semanticscholar.org/650c/3fa2a231cd77cf3d882e1659ee14175c01d5.pdf">Space-time volume</a> of a quantum circuit computation. 
The computational cost for quantum simulation increases with the volume of the quantum circuit, and in general grows exponentially with the number of qubits and the circuit depth. For asymmetric grids of qubits, the computational space-time volume grows more slowly with depth than for symmetric grids, and can result in circuits exponentially <a href="https://arxiv.org/abs/1712.05384">easier to simulate</a>.</td></tr></tbody></table>In “<a href="http://science.sciencemag.org/content/360/6385/195">A blueprint for demonstrating quantum supremacy with superconducting qubits</a>” (arXiv <a href="https://arxiv.org/abs/1709.06678">here</a>), we illustrate a blueprint towards quantum supremacy and experimentally demonstrate a proof-of-principle version for the first time. In the paper, we discuss two key ingredients for quantum supremacy: exponential complexity and accurate computations. We start by running algorithms on subsections of the device ranging from 5 to 9 qubits. We find that the classical simulation cost grows exponentially with the number of qubits. These results are intended to provide a clear example of the exponential power of these devices. Next, we use <i>cross-entropy benchmarking</i> to compare our results against those of an ordinary computer and show that our computations are highly accurate. In fact, the error rate is low enough to achieve quantum supremacy with a larger quantum processor. <br /><br />Beyond achieving quantum supremacy, a quantum platform should offer clear applications. In our paper, we apply our algorithms to computational problems in quantum statistical mechanics using complex multi-qubit gates (as opposed to the two-qubit gates designed for a <a href="https://research.googleblog.com/2018/03/a-preview-of-bristlecone-googles-new.html">digital quantum processor</a> with <a href="https://research.googleblog.com/2015/03/a-step-closer-to-quantum-computation.html">surface code error correction</a>). 
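As an illustration of the idea behind cross-entropy benchmarking, the sketch below implements a linear variant of the fidelity estimator on a stand-in output distribution with the exponential ("Porter-Thomas") shape typical of random circuits; the exact estimator and distributions used in the papers differ in details, so treat this purely as a toy.

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs, samples):
    """Linear cross-entropy-style fidelity estimate: 2^n times the mean ideal
    probability of the observed bit-strings, minus 1. It is near 1 for a
    perfect sampler of a random-circuit distribution and near 0 for uniform
    noise (up to statistical fluctuations)."""
    dim = len(ideal_probs)
    return dim * np.mean(ideal_probs[samples]) - 1.0

rng = np.random.default_rng(1)
dim = 2 ** 10
# Stand-in "ideal" output distribution with Porter-Thomas-like shape.
p = rng.exponential(size=dim)
p /= p.sum()

good = rng.choice(dim, size=20000, p=p)    # device sampling the ideal distribution
noisy = rng.integers(0, dim, size=20000)   # fully depolarized device: uniform noise
```

Between these two extremes the estimate degrades smoothly with the error rate, which is what makes it usable as a calibration benchmark.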
We show that our devices can be used to study fundamental properties of materials, e.g. microscopic differences between metals and insulators. By extending these results to next-generation devices with ~50 qubits, we hope to answer scientific questions that are beyond the capabilities of any other computing platform.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-fb2grNdr0Zo/WuyYB808xwI/AAAAAAAACo8/s7NtNVI93LEsnMNZ5eP7NzwDtacnAa35gCLcBGAs/s1600/image2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1119" data-original-width="1600" height="278" src="https://4.bp.blogspot.com/-fb2grNdr0Zo/WuyYB808xwI/AAAAAAAACo8/s7NtNVI93LEsnMNZ5eP7NzwDtacnAa35gCLcBGAs/s400/image2.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Photograph of two gmon superconducting qubits and their tunable coupler developed by Charles Neill and Pedram Roushan.</td></tr></tbody></table>These two publications introduce a realistic proposal for near-term quantum supremacy, and demonstrate a proof-of-principle version for the first time. 
We will continue to decrease the error rates and increase the number of qubits in <a href="https://research.googleblog.com/2018/03/a-preview-of-bristlecone-googles-new.html">quantum processors</a> to reach the quantum supremacy frontier, and to develop <a href="https://research.googleblog.com/2017/10/announcing-openfermion-open-source.html">quantum algorithms</a> for useful near-term <a href="https://research.googleblog.com/2018/03/reformulating-chemistry-for-more.html">applications</a>.<br /><br /><b>Reformulating Chemistry for More Efficient Quantum Computation</b> (March 22, 2018)<br /><span class="byline-author">Posted by Ryan Babbush, Senior Research Scientist, Quantum AI Team</span><br /><br />The first known classical “computer” was the <a href="https://en.wikipedia.org/wiki/Antikythera_mechanism">Antikythera mechanism</a>, an analog machine used to simulate the classical mechanics governing the dynamics of celestial bodies on an astronomical scale. Similarly, a major ambition of quantum computers is to simulate the quantum mechanics governing the dynamics of particles on the atomic scale. These simulations are often <a href="https://link.springer.com/article/10.1007%2FBF02650179">classically intractable</a> due to the complex quantum mechanics at play. 
Of particular interest is the simulation of electrons forming chemical bonds, which give rise to the properties of essentially all molecules, materials and chemical reactions.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-ZO-I9-aCFc4/WrFmzmrDcpI/AAAAAAAACgk/jqAZACA5sJsY-GoTCMkp9WRWToGBmioBwCLcBGAs/s1600/F1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="521" data-original-width="1600" height="208" src="https://1.bp.blogspot.com/-ZO-I9-aCFc4/WrFmzmrDcpI/AAAAAAAACgk/jqAZACA5sJsY-GoTCMkp9WRWToGBmioBwCLcBGAs/s640/F1.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><b>Left:</b> The first known computing device, the <a href="https://en.wikipedia.org/wiki/Antikythera_mechanism">Antikythera mechanism</a>: a classical machine used to simulate classical mechanics. <b>Right:</b> Google’s 22 Xmon qubit “foxtail” chip arranged in a bilinear array on a wafer, the predecessor to Google’s new <a href="https://research.googleblog.com/2018/03/a-preview-of-bristlecone-googles-new.html">Bristlecone quantum processor</a> with 72 qubits, a quantum machine we intend to use to simulate quantum mechanics, among other applications.</td></tr></tbody></table>Since the launch of the<a href="https://research.google.com/teams/quantumai/"> Quantum AI team</a> in 2013, we have been developing practical algorithms for quantum processors. In 2015, we conducted the <a href="https://research.googleblog.com/2016/07/towards-exact-quantum-description-of.html">first quantum chemistry experiment on a superconducting quantum computing device</a>, published in <a href="https://link.aps.org/doi/10.1103/PhysRevX.6.031007"><i>Physical Review X</i></a>. 
More recently, our quantum simulation effort <a href="https://www.nature.com/articles/nphys3930">experimentally simulated exotic phases of matter</a> and <a href="https://research.googleblog.com/2017/10/announcing-openfermion-open-source.html">released</a> the first software package for quantum computing chemistry, <a href="http://www.openfermion.org/">OpenFermion</a>. Earlier this month, our hardware team announced the new <a href="https://research.googleblog.com/2018/03/a-preview-of-bristlecone-googles-new.html">Bristlecone quantum processor</a> with 72 qubits. <br /><br />Today, we highlight two recent publications with theoretical advances that significantly reduce the cost of these quantum computations. Our results were presented at the <a href="https://collegerama.tudelft.nl/Mediasite/Showcase/qip2018/Presentation/67accacf1dfc4ebca0963327bfe51e3d1d">Quantum Information Processing</a> and <a href="https://www.research.ibm.com/ibm-q/thinkq/agenda.html">IBM ThinkQ</a> conferences.<br /><br />The first of these works, “<a href="https://journals.aps.org/prx/abstract/10.1103/PhysRevX.8.011044">Low-Depth Quantum Simulation of Materials</a>,” published this week in <a href="https://journals.aps.org/prx/"><i>Physical Review X</i></a>, was a collaboration between researchers at Google, the group of <a href="http://www.chan-lab.caltech.edu/index.html">Professor Garnet Chan at Caltech</a> and the <a href="https://www.microsoft.com/en-us/research/group/microsoft-quantum-redmond-quarc/">QuArC group at Microsoft</a>. Our fundamental advance was to realize that by changing how molecules are represented on quantum computers, we can greatly simplify the quantum circuits required to solve the problem. Specifically, we specially design <a href="https://en.wikipedia.org/wiki/Basis_set_(chemistry)">basis sets</a> so that the equations describing the system energies (i.e. 
the <a href="https://en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics)">Hamiltonian</a>) become more straightforward to express for quantum computation.<br /><br />To do this, we focused on basis sets related to the functions (<a href="https://en.wikipedia.org/wiki/Plane_wave">plane waves</a>) used in classical electronic structure calculations to provide a periodic representation of the physical system. This enables one to go beyond the quantum simulation of single molecules and instead use quantum computers to model realistic materials. For instance, instead of simulating a single lithium hydride molecule floating in free space, with our approach one can quantum simulate a <a href="https://en.wikipedia.org/wiki/Lithium_hydride">crystal of lithium hydride</a>, which is how the material appears in nature. With larger quantum computers, one could study other important materials problems such as the degradation of battery <a href="https://en.wikipedia.org/wiki/Cathode">cathodes</a>, chemical reactions involving <a href="https://en.wikipedia.org/wiki/Heterogeneous_catalysis">heterogeneous catalysts</a>, or the unusual electrical properties of <a href="https://www.nobelprize.org/nobel_prizes/physics/laureates/2010/press.html">graphene</a> and <a href="https://en.wikipedia.org/wiki/High-temperature_superconductivity">superconductors</a>.<br /><br />In “<a href="https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.110501">Quantum Simulation of Electronic Structure with Linear Depth and Connectivity</a>,” published last week in <i><a href="https://journals.aps.org/prl/">Physical Review Letters</a></i> with the same collaborators and a Google intern from the <a href="http://aspuru.chem.harvard.edu/">Aspuru-Guzik group at Harvard</a>, we leverage the structure introduced in the work above to design algorithms for near-term quantum computers with qubits laid out in a linear array. 
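A toy example of why plane-wave-style representations help: for a single particle on a periodic grid, the kinetic term is diagonal in the plane-wave (momentum) basis while the potential term is diagonal in the dual grid basis, with an FFT mapping between the two. This one-particle sketch is our own illustration (unrelated to the paper's code) and recovers the harmonic-oscillator ground energy:

```python
import numpy as np

# One particle on a 1-D periodic grid.
N, L = 64, 10.0                              # grid points, box length
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # plane-wave momenta
T_diag = k ** 2 / 2                          # kinetic term: diagonal in momentum space

x = np.arange(N) * L / N
V_diag = 0.5 * (x - L / 2) ** 2              # harmonic well: diagonal on the grid

# Full Hamiltonian in the grid basis: a unitary DFT maps between the two
# diagonal forms, so H = F^dagger T F + V.
F = np.fft.fft(np.eye(N), axis=0) / np.sqrt(N)
H = F.conj().T @ np.diag(T_diag) @ F + np.diag(V_diag)
ground_energy = np.linalg.eigvalsh(H).min()  # ~0.5, the exact oscillator ground energy
```

The same diagonal-plus-diagonal structure (with an interaction term added) is what makes the Hamiltonian cheap to express and evolve on a quantum computer.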
Whereas past methods required such quantum computers to run for time scaling as the fifth power of the number of simulated electrons for each dynamic step, our improved algorithm runs for time scaling linearly with respect to the number of electrons. This reduction in computational cost makes it viable to perform quantum chemistry simulations on near-term devices with fewer gates in each quantum circuit, possibly avoiding the need for full <a href="https://en.wikipedia.org/wiki/Quantum_error_correction">error-correction</a>.<br /><br />Even with these improvements, it is no small task to deploy such new technology to outperform classical quantum chemistry algorithms and methods that have been refined in parallel with the development of classical computers for more than <a href="https://books.google.com/books/about/Introduction_to_Quantum_Mechanics_with_A.html?id=J-HcCgAAQBAJ&printsec=frontcover&source=kp_read_button#v=onepage&q&f=false">eighty years</a>. However, at the current rate of advances in quantum algorithms and hardware, quantum technologies may provide chemists with an invaluable new tool. We look forward to sharing our research results as they develop.<br /><br /><b>Balanced Partitioning and Hierarchical Clustering at Scale</b> (March 14, 2018)<br /><span class="byline-author">Posted by Hossein Bateni, Research Scientist and Kevin Aydin, Software Engineer, NYC Algorithms and Optimization Research Team</span><br /><br />Solving large-scale optimization problems often starts with <a href="https://en.wikipedia.org/wiki/Graph_partition">graph partitioning</a>, which means partitioning the vertices of the graph into clusters to be processed on different machines. The need to ensure that clusters are of nearly equal size gives rise to the <i>balanced graph partitioning problem</i>. 
In simple terms, we need to partition the vertices of a given graph into k almost equal clusters, while minimizing the number of edges that are cut by the partition. This <a href="https://en.wikipedia.org/wiki/NP-hardness">NP-hard</a> problem is notoriously difficult in practice because the <a href="http://snap.stanford.edu/class/cs224w-readings/arora04expansion.pdf">best approximation algorithms</a> for small instances rely on <a href="https://en.wikipedia.org/wiki/Semidefinite_programming">semidefinite programming</a>, which is impractical for larger instances. <br /><br />This post presents the distributed algorithm <a href="https://research.google.com/teams/nycalg/graph-mining/">we developed</a>, which is better suited to large instances. We introduced this balanced graph-partitioning algorithm in our <a href="https://dl.acm.org/authorize.cfm?key=N08074">WSDM 2016 paper</a>, and have applied this approach to several applications within Google. Our more recent <a href="https://nips.cc/Conferences/2017/Schedule?showEvent=9453">NIPS 2017 paper</a> provides more details of the algorithm via a theoretical and empirical study.<br /><br /><b>Balanced Partitioning via Linear Embedding</b><br />Our algorithm first embeds vertices of the graph onto a line, and then processes vertices in a distributed manner guided by the linear embedding order. We examine various ways to find the initial embedding, and apply four different techniques (such as local swaps and dynamic programming) to obtain the final partition. The best initial embedding is based on “<a href="https://papers.nips.cc/paper/7262-affinity-clustering-hierarchical-clustering-at-scale">affinity clustering</a>”.<br /><br /><b>Affinity Hierarchical Clustering</b><br />Affinity clustering is an agglomerative hierarchical graph clustering based on <a href="https://en.wikipedia.org/wiki/Bor%C5%AFvka%27s_algorithm">Borůvka’s classic Maximum-cost Spanning Tree algorithm</a>. 
As discussed above, this algorithm is a critical part of our balanced partitioning tool. The algorithm starts by placing each vertex in a cluster of its own: v0, v1, and so on. Then, in each iteration, the highest-cost edge out of each cluster is selected in order to induce larger merged clusters: A<sub>0</sub>, A<sub>1</sub>, A<sub>2</sub>, etc. in the first round and B<sub>0</sub>, B<sub>1</sub>, etc. in the second round and so on. The set of merges naturally produces a hierarchical clustering, and gives rise to a linear ordering of the leaf vertices (vertices with degree one). The image below demonstrates this, with the numbers at the bottom corresponding to the ordering of the vertices.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-NgoiCV3R5b8/WqhG7fzZZ3I/AAAAAAAACfU/yPbXncxLylAMIPv5BcpazkiLYyPiC6PgQCLcBGAs/s1600/image1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="440" data-original-width="1074" height="262" src="https://3.bp.blogspot.com/-NgoiCV3R5b8/WqhG7fzZZ3I/AAAAAAAACfU/yPbXncxLylAMIPv5BcpazkiLYyPiC6PgQCLcBGAs/s640/image1.png" width="640" /></a></div>Our <a href="https://papers.nips.cc/paper/7262-affinity-clustering-hierarchical-clustering-at-scale">NIPS’17 paper</a> explains how we run affinity clustering efficiently in the massively parallel computation (MPC) model, in particular using distributed hash tables (<a href="https://en.wikipedia.org/wiki/Distributed_hash_table">DHTs</a>) to significantly reduce running time. This paper also presents a theoretical study of the algorithm. We report clustering results for graphs with tens of trillions of edges, and also observe that affinity clustering empirically beats other clustering algorithms such as k-means in terms of “quality of the clusters”. 
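The merge rule described above, in which every cluster picks its highest-weight incident edge, can be sketched for a single round as follows. This is an illustrative toy implementation of ours (each vertex starts as its own cluster), not the distributed MPC version from the paper:

```python
def affinity_round(n_vertices, edges):
    """One round of affinity clustering, Boruvka-style: every vertex picks its
    highest-weight incident edge, and the chosen edges merge clusters.
    edges: list of (u, v, weight) tuples."""
    # Best (highest-weight) edge out of each vertex.
    best = {}
    for u, v, w in edges:
        for a, b in ((u, v), (v, u)):
            if a not in best or w > best[a][0]:
                best[a] = (w, b)

    # Union-find to merge the endpoints of all chosen edges.
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, (w, b) in best.items():
        parent[find(a)] = find(b)

    return [find(v) for v in range(n_vertices)]  # cluster label per vertex

# Two natural communities joined by one weak edge:
edges = [(0, 1, 9), (1, 2, 8), (0, 2, 7), (3, 4, 9), (4, 5, 8), (2, 3, 1)]
labels = affinity_round(6, edges)  # vertices 0-2 merge; vertices 3-5 merge
```

Repeating rounds on the contracted graph yields the hierarchy, and the order of leaves in that hierarchy provides the linear embedding used for balanced partitioning.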
<a href="https://www.youtube.com/watch?v=1IOEFNGPNJc">This video</a> contains a summary of the results and explains how this parallel algorithm may produce higher-quality clusters even compared to a sequential single-linkage agglomerative algorithm.<br /><br /><b>Comparison to Previous Work</b><br />In comparing our algorithm to previous work in (distributed) balanced graph partitioning, we focus on <a href="https://dl.acm.org/citation.cfm?id=2556213">FENNEL</a>, <a href="http://ieeexplore.ieee.org/document/7930049/">Spinner</a>, <a href="http://glaros.dtc.umn.edu/gkhome/metis/metis/overview">METIS</a>, and a recent <a href="https://dl.acm.org/citation.cfm?id=2433461">label propagation-based algorithm</a>. We report results on several public social networks as well as a large private map graph. For a <a href="https://dl.acm.org/citation.cfm?id=1772751">Twitter followership graph</a>, we see a consistent improvement of 15–25% over previous results (<a href="https://dl.acm.org/citation.cfm?id=2433461">Ugander and Backstrom, 2013</a>), and for the LiveJournal graph, our algorithm outperforms all the others for all cases except k = 2, where ours is slightly worse than FENNEL's.<br /><br />The following table presents the fraction of cut edges in the Twitter graph obtained via different algorithms for various values of k, the number of clusters. The numbers given in parentheses denote the size imbalance factor: i.e., the relative difference of the sizes of the largest and smallest clusters. Here “Vanilla Affinity Clustering” denotes the first stage of our algorithm where only the hierarchical clustering is built and no further processing is performed on the cuts. Notice that this is already as good as the best previous work (shown in the first two columns below), cutting a smaller fraction of edges while achieving a perfect (and thus better) balance (i.e., 0% imbalance). 
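The two metrics reported here, the fraction of cut edges and the imbalance factor, can be formalized as in this small sketch. These are our own illustrative definitions (we measure imbalance relative to the average cluster size), not code from the paper:

```python
def cut_fraction(edges, labels):
    """Fraction of edges whose endpoints land in different clusters."""
    cut = sum(1 for u, v in edges if labels[u] != labels[v])
    return cut / len(edges)

def imbalance(labels, k):
    """Difference between the largest and smallest cluster sizes,
    relative to the average cluster size; 0.0 means a perfect balance."""
    sizes = [list(labels).count(c) for c in set(labels)]
    sizes += [0] * (k - len(sizes))  # clusters that ended up empty
    return (max(sizes) - min(sizes)) / (len(labels) / k)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
labels = [0, 0, 1, 1]            # a 2-way partition of a 4-cycle with a chord
print(cut_fraction(edges, labels))  # 0.6: three of the five edges are cut
print(imbalance(labels, k=2))       # 0.0: both clusters have two vertices
```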
The last column in the table includes the final result of our algorithm with the post-processing.<br /><br /><table border="1" cellpadding="1" cellspacing="0" style="width: 100%;"><tbody><tr> <td><div style="text-align: center;"><b>k</b></div></td> <td><div style="text-align: center;"><b><a href="https://dl.acm.org/citation.cfm?id=2433461">UB13</a><br />(5%)</b></div></td> <td><div style="text-align: center;"><b><a href="http://ieeexplore.ieee.org/document/7930049/">Spinner</a><br />(5%)</b></div></td> <td><div style="text-align: center;"><b>Vanilla Affinity<br />Clustering<br />(0%)</b></div></td><td><div style="text-align: center;"><b>Final Algorithm<br />(0%)</b></div></td></tr><tr> <td><div style="text-align: center;"><b>20</b></div></td> <td><div style="text-align: center;">37.0%</div></td> <td><div style="text-align: center;">38.0%</div></td> <td><div style="text-align: center;">35.71%</div></td><td><div style="text-align: center;">27.50%</div></td></tr><tr> <td><div style="text-align: center;"><b>40</b></div></td> <td><div style="text-align: center;">43.0%</div></td> <td><div style="text-align: center;">40.0%</div></td> <td><div style="text-align: center;">40.83%</div></td><td><div style="text-align: center;">33.71%</div></td></tr><tr> <td><div style="text-align: center;"><b>60</b></div></td> <td><div style="text-align: center;">46.0%</div></td> <td><div style="text-align: center;">43.0%</div></td> <td><div style="text-align: center;">43.03%</div></td><td><div style="text-align: center;">36.65%</div></td></tr><tr> <td><div style="text-align: center;"><b>80</b></div></td> <td><div style="text-align: center;">47.5%</div></td> <td><div style="text-align: center;">44.0%</div></td> <td><div style="text-align: center;">43.27%</div></td><td><div style="text-align: center;">38.65%</div></td></tr><tr> <td><div style="text-align: center;"><b>100</b></div></td> <td><div style="text-align: center;">49.0%</div></td> <td><div style="text-align: center;">46.0%</div></td> 
<td><div style="text-align: center;">45.05%</div></td><td><div style="text-align: center;">41.53%</div></td></tr></tbody></table><br /><b>Applications</b><br />We apply balanced graph partitioning to multiple applications including <a href="https://maps.google.com/">Google Maps</a> driving directions, the serving backend for web search, and finding <a href="https://arxiv.org/abs/1611.03780">treatment groups for experimental design</a>. For example, in Google Maps the world map graph is stored in several <a href="https://en.wikipedia.org/wiki/Shard_(database_architecture)">shards</a>. Navigational queries spanning multiple shards are substantially more expensive than those handled within a shard. Using the methods described in our paper, we can reduce cross-shard queries by 21% by increasing the shard imbalance factor from 0% to 10%. As discussed in our paper, live experiments on real traffic show that the number of multi-shard queries from our cut-optimization techniques is 40% lower than with a baseline <a href="https://en.wikipedia.org/wiki/Hilbert_curve#Applications_and_mapping_algorithms">Hilbert embedding</a> technique. This, in turn, results in less CPU usage in response to queries. In a future blog post, we will talk about the application of this work in the web search serving backend, where balanced partitioning helped us design a cache-aware load balancing system that dramatically reduced our cache miss rate.<br /><br /><b>Acknowledgements</b><br /><i>We especially thank Vahab Mirrokni, whose guidance and technical contributions were instrumental in developing these algorithms and writing this post. 
We also thank our other co-authors and colleagues for their contributions: Raimondas Kiveris, Soheil Behnezhad, Mahsa Derakhshan, MohammadTaghi Hajiaghayi, Silvio Lattanzi, Aaron Archer and other members of the <a href="https://research.google.com/teams/nycalg/">NYC Algorithms and Optimization research team</a>.</i><br /><br /><b>A Summary of the Google Zürich Algorithms & Optimization Workshop</b> (February 23, 2018)<br /><span class="byline-author">Posted by Silvio Lattanzi, Research Scientist, Google Zürich and Vahab Mirrokni, Research Scientist, Google New York</span><br /><br />Recently, we hosted a <a href="https://sites.google.com/corp/view/algorithms-workshop/home">workshop on Algorithms and Optimization</a> in our office in Zürich, with the goal of fostering collaboration between researchers from <a href="https://sites.google.com/corp/view/algorithms-workshop/guests">academia</a> and <a href="https://sites.google.com/corp/view/algorithms-workshop/googlers">Google</a> by providing a forum to exchange ideas in machine learning theory and large-scale graph mining. 
We also highlighted our aim to build a group in Google’s Zürich office similar to the <a href="https://research.google.com/teams/nycalg/">NYC algorithms research team</a>.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-2O8fxWj4K_0/WpBVhVU-MyI/AAAAAAAACYk/MjyEXdxdWcsW8BQEwjl9GU7fz_wdX1FWQCLcBGAs/s1600/image1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="571" data-original-width="860" height="424" src="https://2.bp.blogspot.com/-2O8fxWj4K_0/WpBVhVU-MyI/AAAAAAAACYk/MjyEXdxdWcsW8BQEwjl9GU7fz_wdX1FWQCLcBGAs/s640/image1.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Silvio Lattanzi presenting the work of the Graph Mining team</td></tr></tbody></table>The workshop was structured in five sessions (see the full agenda <a href="https://sites.google.com/corp/view/algorithms-workshop/agenda">here</a>), each consisting of talks by attendees that touched on the following research areas:<br /><ul><li><b>Market Algorithms:</b> This session included five talks on problems related to optimizing online marketplaces and repeated auctions. <a href="https://research.google.com/pubs/mirrokni.html">Vahab Mirrokni</a> (Google New York) opened the session with an overview talk about the <a href="https://research.google.com/teams/nycalg/market-algorithms/">market algorithms project</a>, then <a href="http://paulduetting.com/">Paul Duetting</a> (London School of Economics) presented <a href="https://docs.google.com/presentation/d/1qfB6XxjEP1NTXad6YUInbxyDrlHGowqUfqxfXg_7_Gs/edit">recent advancements in stochastic optimization for pricing</a>. 
<a href="https://research.google.com/pubs/RenatoPaesLeme.html">Renato Paes Leme</a> (Google New York) spoke about dynamic auctions in practice. <a href="http://wwwold.diag.uniroma1.it/~leon/wikka.php?wakka=HomePage">Stefano Leonardi</a> (Sapienza University of Rome) presented the challenges arising from <a href="https://docs.google.com/presentation/d/1zASkwtfZXjrvjOplAja2fKAfaEwYQBSnvjClO-WErzQ/edit#slide=id.p">Reservation Exchange Markets</a>, and finally <a href="https://research.google.com/pubs/author37631.html">Radu Jurca</a> (Google Zürich) explained how to pack YouTube Reservation Ads.</li><li><b>Machine Learning Theory:</b> Our second session focused on theoretical aspects of machine learning research. <a href="https://research.google.com/pubs/OlivierBousquet.html">Olivier Bousquet</a> (Google Brain Team, Zürich) opened the session discussing challenges in <a href="https://docs.google.com/presentation/d/1oGYXTcOQ45gexSlbUNex3oOzRcZT4MO2FS4zKkKUhMw/edit#slide=id.p">agnostic learning of distributions</a>. Then <a href="http://seas.yale.edu/faculty-research/faculty-directory/amin-karbasi">Amin Karbasi</a> (Yale) and <a href="https://las.inf.ethz.ch/krausea">Andreas Krause</a> (ETH Zürich) presented recent results on <a href="https://docs.google.com/presentation/d/1dfgC7nO3VKUPaj5bXIWYPzpRct2Bf-p1AbVV4lJGGX4/edit#slide=id.p">submodular optimization</a> and <a href="https://docs.google.com/presentation/d/1xg4PArim7VuV27SLdbUzWeyv-_A_1U2mVyhsWZbDh4s/edit#slide=id.p3">learning submodular models</a>. <a href="http://people.inf.ethz.ch/jaggim/">Martin Jaggi</a> (EPFL) explained a new technique for parallelizing optimization algorithms. 
Finally, <a href="http://cesa-bianchi.di.unimi.it/">Nicolò Cesa-Bianchi</a> (Università degli Studi di Milano) presented <a href="https://docs.google.com/presentation/d/17Pj5yd_sOhUhFCq_tsRM_dXqi-LpGSTPhYQy3YahEZI/edit#slide=id.p">new results on bandits</a>.</li><li><b>Large-scale Graph Mining:</b> In this session, we presented some of our achievements and challenges in the context of the large-scale graph mining project. <a href="https://research.google.com/pubs/SilvioLattanzi.html">Silvio Lattanzi</a> (Google Zürich) opened the session describing the applied and theoretical work of the <a href="https://research.google.com/teams/nycalg/graph-mining/">Graph Mining team</a>. Then <a href="https://duch.mimuw.edu.pl/~sank/wordpress/">Piotr Sankowski</a> (University of Warsaw) presented an <a href="https://docs.google.com/presentation/d/1_wIm6uDg3ddR-NyHGXtITc5jm73b8fur0Uvoze5eKLU/edit#slide=id.p">interesting model to explain the size of cascades in real-world graphs</a>. <a href="https://www.cl.cam.ac.uk/~tms41/">Thomas Sauerwald</a> (University of Cambridge) presented some new results on <a href="https://docs.google.com/presentation/d/1EBYo7vv_XR2ss9YGePuFSHQ1Lg6-hrwtkIT3VEaCKOc/edit#slide=id.p">Coalescing Random Walks</a> and <a href="https://algo2.iti.kit.edu/sanders.php">Peter Sanders</a> (Karlsruhe Institute of Technology) presented several interesting results in <a href="https://docs.google.com/presentation/d/1EkazlJpMGglxGdt_FjwEumt1C2pVsmdf9JLCiaVjQcE/edit#slide=id.p">algorithm engineering for large datasets</a>. After this talk, we brainstormed with Peter Sanders and Christian Schulz (University of Vienna) on different techniques to produce balanced graph partitioning results that would beat the quality of cuts generated in a <a href="https://research.google.com/pubs/pub44315.html">recent paper</a>. 
We are looking forward to seeing the improved results.</li><li><b>Privacy and Fairness:</b> This session covered new topics concerning privacy-preserving algorithms and fairness in machine learning and recommender systems. Both of these topics are among the main areas of concern in machine learning. For example, <a href="https://research.google.com/pubs/SergeiVassilvitskii.html">Sergei Vassilvitskii</a> (Google New York) presented a new algorithm for fair clustering and <a href="http://theory.epfl.ch/celis/HOME.html">Elisa Celis</a> (EPFL) discussed several aspects of algorithmic fairness and bias in machine learning. <a href="https://www.insead.edu/faculty-research/faculty/dragos-florin-ciocan">Florin Ciocan</a> (INSEAD) described algorithms for <a href="https://docs.google.com/presentation/d/1JgQHD6WoQ8qwWhneEtsV7_lFCr1bRM08OKLl663vvYY/edit#slide=id.p">fair allocation</a> and <a href="https://warwick.ac.uk/fac/sci/dcs/people/graham_cormode/">Graham Cormode</a> (University of Warwick) presented <a href="https://docs.google.com/presentation/d/1uGtt-nlrh3tuo7Lpd3rCjmA6MeyCX7egooXUFqf0WEE/edit#slide=id.g28f38fb92e_0_155">algorithms for private release of marginal statistics</a>.</li><li><b>Sketching, Hashing, and Dynamic Algorithms:</b> Finally, the last session covered some recent results in the area of sketching, hashing and dynamic algorithms. <a href="https://research.google.com/pubs/MortezaZadimoghaddam.html">Morteza Zadimoghaddam</a> (Google New York) opened the session describing a new algorithm for <a href="https://research.googleblog.com/2017/04/consistent-hashing-with-bounded-loads.html">dynamic consistent hashing</a>. Then <a href="http://www.wisdom.weizmann.ac.il/~robi/">Robert Krauthgamer</a> (Weizmann Institute of Science) presented some <a href="https://docs.google.com/presentation/d/1RM5CVQ38JSiHspQ0ZxRTPB_UAZXic4YRGUhT7OTh364/edit#slide=id.g273c628ac7_2_0">recent results on graph sketching and combinatorial optimization</a>. 
<a href="http://www.dcs.warwick.ac.uk/~u1671158/">Sayan Bhattacharya</a> (University of Warwick) described the design of <a href="https://docs.google.com/presentation/d/1S-ZxFs_vjmfmB9J3ru7NCxg14UgmHQnVOnyrTB3Q2Qg/edit#slide=id.p">Dynamic Algorithms via Primal-Dual Method</a>. And finally <a href="http://people.uniroma2.it/giuseppe.italiano/">Pino Italiano</a> (University of Rome Tor Vergata) presented <a href="https://docs.google.com/presentation/d/1-vVCFfZB9RFVOghv1ypZUHlQ7yJN3pJGOCaVWv5Q5C8/edit#slide=id.p">new efficient algorithms for network analysis</a>.</li></ul>Overall, it was a great day with many excellent talks and with many opportunities for discussing interesting problems. All the presentations, including videos, can be found on our workshop website, <a href="https://sites.google.com/corp/view/algorithms-workshop/presentations">here</a>.Research Bloghttps://plus.google.com/101673966767287570260noreply@blogger.com0tag:blogger.com,1999:blog-21224994.post-27223775356651818682017-12-05T10:00:00.000-08:002017-12-05T12:52:28.446-08:00Introducing a New Foveation Pipeline for Virtual/Mixed Reality<span class="byline-author">Posted by Behnam Bastani, Software Engineer Manager and Eric Turner, Software Engineer, Daydream</span><br /><br /><a href="https://en.wikipedia.org/wiki/Virtual_reality">Virtual Reality</a> (VR) and <a href="https://en.wikipedia.org/wiki/Mixed_reality">Mixed Reality</a> (MR) offer a novel way to immerse people into new and compelling experiences, from gaming to professional training. However, current VR/MR technologies present a fundamental challenge: to present images at the extremely high resolution required for immersion places enormous demands on the rendering engine and transmission process. Headsets often have insufficient display resolution, which can limit the field of view, worsening the experience. 
But driving a higher-resolution headset with the traditional rendering pipeline requires processing power beyond what even high-end mobile processors can deliver. As <a href="https://www.youtube.com/watch?v=IlADpD1fvuA&feature=youtu.be&t=1330">research</a> continues to deliver promising new techniques to increase display resolution, the challenges of driving those displays will continue to grow. <br /><br />In order to further improve the visual experience in VR and MR, we introduce a pipeline that takes advantage of the characteristics of human visual perception to enable a compelling visual experience at low compute and power cost. The pipeline proposed in this article considers the full system dependency, including the rendering engine, memory bandwidth and the capability of the display module itself. We determined that the current limitation lies not just in content creation, but also in transmitting data, handling latency and enabling interaction with real objects (in mixed reality applications). The pipeline consists of: 1. foveated rendering, with a focus on reducing compute per pixel; 2. foveated image processing, with a focus on reducing visual artifacts; and 3. foveated transmission, with a focus on reducing the bits per pixel transmitted.<br /><br /><b>Foveated Rendering</b><br />In the human visual system, the <a href="https://en.wikipedia.org/wiki/Fovea_centralis">fovea centralis</a> allows us to see at high fidelity in the center of our vision, allowing our brain to pay less attention to things in our peripheral vision. Foveated rendering takes advantage of this characteristic to improve the performance of the rendering engine by reducing the spatial or bit-depth resolution of objects in our peripheral vision. 
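As a rough illustration (not the production implementation), a two-layer foveated frame can be sketched in NumPy: the low-acuity layer stands in for rendering at reduced resolution and is upsampled, while a full-resolution high-acuity patch around the fixation point is composited on top. The function name and parameters here are illustrative.

```python
import numpy as np

def foveate_two_layer(frame, center, ha_radius, scale=4):
    """Approximate a two-layer foveated render of an existing frame.

    The low-acuity (LA) layer is the frame subsampled by `scale` and
    upsampled back (a stand-in for rendering at reduced resolution);
    the high-acuity (HA) layer is a full-resolution square patch around
    the fixation point, composited on top.
    """
    h, w = frame.shape[:2]
    # LA layer: nearest-neighbour downsample, then blocky upsample.
    la = frame[::scale, ::scale]
    la_up = np.repeat(np.repeat(la, scale, axis=0), scale, axis=1)[:h, :w]
    out = la_up.copy()
    # HA layer: keep full-resolution pixels around the fixation point.
    cy, cx = center
    y0, y1 = max(0, cy - ha_radius), min(h, cy + ha_radius)
    x0, x1 = max(0, cx - ha_radius), min(w, cx + ha_radius)
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out
```

Only the HA patch carries full-resolution content; everything else is computed at 1/`scale`² the pixel count, which is where the rendering savings come from.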
To make this work, the location of the High Acuity (HA) region needs to be updated with eye-tracking to align with eye <a href="https://en.wikipedia.org/wiki/Saccade">saccades,</a> which preserves the perception of a constant high-resolution across the field of view. In contrast, systems with no eye-tracking may need to render a much larger HA region. <br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-dEriDuKV8-I/WiXZvlusJFI/AAAAAAAACPw/7I-Aih2eRlM6NCIaRRDLT5QMqR4knJTlwCLcBGAs/s1600/image2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="197" data-original-width="624" height="202" src="https://4.bp.blogspot.com/-dEriDuKV8-I/WiXZvlusJFI/AAAAAAAACPw/7I-Aih2eRlM6NCIaRRDLT5QMqR4knJTlwCLcBGAs/s640/image2.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The left image is rendered at full resolution. The right image uses two layers of foveation — one rendered at high resolution (inside the yellow region) and one at lower resolution (outside).</td></tr></tbody></table>A traditional foveation technique may divide a frame buffer into multiple spatial resolution regions. Aliasing introduced by rendering to lower spatial resolution may cause perceptible temporal artifacts when there is motion in the content due to head motion or animation. Below we show an example of temporal artifacts introduced by head rotation. 
<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-MQDessThj78/WiXZctBZGKI/AAAAAAAACPs/GLERcZMyhzkfv8o4iBh3C3zjKL_L7FZAgCLcBGAs/s1600/image8.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="292" data-original-width="998" height="187" src="https://2.bp.blogspot.com/-MQDessThj78/WiXZctBZGKI/AAAAAAAACPs/GLERcZMyhzkfv8o4iBh3C3zjKL_L7FZAgCLcBGAs/s640/image8.gif" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A smooth full rendering (image on the left). The image on the right shows temporal artifacts introduced by motion in the foveated region.</td></tr></tbody></table>In the following sections, we present two methods we use to reduce these artifacts: <i>Phase-Aligned Foveated Rendering</i> and <i>Conformal Foveated Rendering</i>. Each of these methods provides different benefits for visual quality during rendering and is useful under different conditions.<br /><br /><b>Phase-Aligned Rendering</b><br />Aliasing occurs in the Low-Acuity (LA) region during foveated rendering due to the subsampling of rendered content. In the traditional foveated rendering discussed above, these aliasing artifacts flicker from frame to frame, since the display pixel grid moves across the virtual scene as the user moves their head. The motion of these pixels relative to the scene causes any existing aliasing artifacts to flicker, which is highly perceptible to the user, even in the periphery.<br /><br />In Phase-Aligned rendering, we force the LA region <a href="https://en.wikipedia.org/wiki/Frustum">frustums</a> to be aligned rotationally to the world (e.g. always facing north, east, south, etc.), rather than to the current frame's head rotation. 
The aliasing artifacts are mostly invariant to head pose and therefore much less detectable. After upsampling, these regions are then reprojected onto the final display screen to compensate for the user's head rotation, which reduces temporal flicker. As with traditional foveation, we render the high-acuity region in a separate pass, and overlay it onto the merged image at the location of the fovea. The figure below compares traditional foveated rendering with phase-aligned rendering, both at the same level of foveation.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-fQo6pdqdq4Y/WiXY7s8q5_I/AAAAAAAACPg/UimtDf06AVQpuv2OvduxWptPs7tOFx-bgCLcBGAs/s1600/PA.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="300" data-original-width="998" height="192" src="https://4.bp.blogspot.com/-fQo6pdqdq4Y/WiXY7s8q5_I/AAAAAAAACPg/UimtDf06AVQpuv2OvduxWptPs7tOFx-bgCLcBGAs/s640/PA.gif" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Temporal artifacts in non-world aligned foveated rendered content (left) and the phase-aligned method (right).</td></tr></tbody></table>This method gives a major benefit to reducing the severity of visual artifacts during foveated rendering. 
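A toy sketch of the invariance that phase alignment exploits (illustrative only; the single-axis `yaw_matrix` head pose is a simplifying assumption): because the LA layer is rendered in world-fixed frustums, a screen ray is rotated by the current head pose before the lookup, so a fixed world direction always lands on the same rendered texels regardless of head rotation.

```python
import numpy as np

def yaw_matrix(theta):
    """Rotation about the vertical (y) axis; a stand-in for head pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def world_sample_dir(screen_ray, head_rot):
    """Direction used to sample the world-aligned LA layer.

    The LA frustums are fixed in world space, so the screen ray is
    rotated by the current head pose before the texture lookup; the
    rendered texels never move relative to the world, which is why the
    aliasing in them stays (mostly) static instead of flickering.
    """
    return head_rot @ screen_ray
```

In head-aligned rendering the lookup would use `screen_ray` directly, which changes with every head pose; here the looked-up world direction is pose-invariant.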
Although phase-aligned rendering is more expensive to compute than traditional foveation under the same level of acuity reduction, we can still achieve a net savings by pushing foveation to more aggressive levels that would otherwise have yielded too many artifacts.<br /><b><br /></b> <b>Conformal Rendering</b><br />Another approach for foveated rendering is to render content in a space that matches the smoothly varying reduction in resolution of our visual acuity, based on a nonlinear mapping of the screen distance from the visual fixation point.<br /><br />This method gives two main benefits. First, by more closely matching the visual fidelity fall-off of the human eye, we can reduce the total number of pixels computed compared to other foveation techniques. Second, by using a smooth fall-off in fidelity, we prevent the user from seeing a clear dividing line between the high-acuity and low-acuity regions, which is often one of the first artifacts noticed. These benefits allow aggressive foveation to be used while preserving the same quality levels, yielding more savings.<br /><br />We implement this method by warping the vertices of the virtual scene into non-linear space. 
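One possible form of such a nonlinear mapping is sketched below (a hedged illustration — the actual falloff curve and its `alpha` parameter are assumptions, not the production mapping): distances from the fixation point are compressed by a log curve, so the fovea occupies proportionally more of the reduced-resolution framebuffer than the periphery, with no hard boundary.

```python
import numpy as np

def conformal_warp(xy, fixation, alpha=4.0):
    """Radially remap vertex positions about the fixation point.

    Radii are compressed by a log curve so that the region near the
    fixation occupies a larger share of the warped framebuffer,
    mimicking the eye's smooth acuity falloff.
    """
    v = xy - fixation
    r = np.linalg.norm(v, axis=-1, keepdims=True)
    r_warped = np.log1p(alpha * r) / np.log1p(alpha)
    scale = np.where(r > 0, r_warped / np.maximum(r, 1e-12), 1.0)
    return fixation + v * scale

def conformal_unwarp(xy, fixation, alpha=4.0):
    """Inverse map, applied as a post-process after rasterization."""
    v = xy - fixation
    rw = np.linalg.norm(v, axis=-1, keepdims=True)
    r = np.expm1(rw * np.log1p(alpha)) / alpha
    scale = np.where(rw > 0, r / np.maximum(rw, 1e-12), 1.0)
    return fixation + v * scale
```

The warp must be exactly invertible so the unwarp (combined with lens distortion correction) restores correct screen geometry; only the resolution loss from rasterizing the warped scene remains.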
This scene is then <a href="https://en.wikipedia.org/wiki/Rasterisation">rasterized </a>at a reduced resolution, then unwarped into linear space as a post-processing effect combined with lens distortion correction.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-Hbb5CAdi_Fo/WiXYItN9IPI/AAAAAAAACPY/CCcKuVoy6uIk4mrhdEFW8Yv9rHySTd2yQCLcBGAs/s1600/conformal.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="292" data-original-width="998" height="187" src="https://2.bp.blogspot.com/-Hbb5CAdi_Fo/WiXYItN9IPI/AAAAAAAACPY/CCcKuVoy6uIk4mrhdEFW8Yv9rHySTd2yQCLcBGAs/s640/conformal.gif" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Comparison of traditional foveation (left) to conformal rendering (right), where content is rendered to a space matched to visual perception acuity and HMD lens characteristics. Both methods use the same number of total pixels.</td></tr></tbody></table>A major benefit of this method over the phase-aligned method above is that conformal rendering only requires a single pass of rasterization. For scenes with lots of vertices, this difference can provide major savings. Additionally, although phase-aligned rendering reduces flicker, it still produces a distinct boundary between the high- and low-acuity regions, whereas conformal rendering does not show this artifact. However, a downside of conformal rendering compared to phase-alignment is that aliasing artifacts still flicker in the periphery, which may be less desirable for applications that require high visual fidelity.<br /><b><br /></b> <b>Foveated Image Processing</b><br />HMDs often require image processing steps to be performed after rendering, such as local tone mapping, lens distortion correction, or lighting blending. 
With foveated image-processing, different operations are applied for different foveation regions. As an example, lens distortion correction, including chromatic aberration correction, may not require the same spatial accuracy for each part of the display. By running lens distortion correction on foveated content before upscaling, significant savings are gained in computation. This technique does not introduce perceptible artifacts.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-sBfSyV35XkE/WiXV5ywtFUI/AAAAAAAACPM/BMRtcy5glwgaLhv8BRJUQg691yjXbeNtgCLcBGAs/s1600/image4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="408" data-original-width="624" height="418" src="https://1.bp.blogspot.com/-sBfSyV35XkE/WiXV5ywtFUI/AAAAAAAACPM/BMRtcy5glwgaLhv8BRJUQg691yjXbeNtgCLcBGAs/s640/image4.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Correction for head-mounted-display lens chromatic aberration in foveated space. Top image shows the conventional pipeline. 
The bottom image (in Green) shows the operation in the foveated space.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-bQ6K13SFUPQ/WiXVpgTODXI/AAAAAAAACPI/ivMiJ4hxGYkES0pbARCh-xQ911jkkB2QwCLcBGAs/s1600/image1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="203" data-original-width="624" height="208" src="https://3.bp.blogspot.com/-bQ6K13SFUPQ/WiXVpgTODXI/AAAAAAAACPI/ivMiJ4hxGYkES0pbARCh-xQ911jkkB2QwCLcBGAs/s640/image1.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The left image shows reconstructed foveated content after lens distortion. The right image shows image difference when lens distortion correction is performed in a foveated manner. The right image shows that minimal error is introduced close to edges of frame buffer. These errors are imperceptible in an HMD.</td></tr></tbody></table><b><br /></b> <b>Foveated Transmission</b><br />A non-trivial source of power consumption for standalone HMDs is data transmission from the <a href="https://en.wikipedia.org/wiki/System_on_a_chip">system-on-a-chip</a> (SoC) to the display module. Foveated transmission aims to save power and bandwidth by transmitting the minimum amount of data necessary to the display as shown in figure below. 
<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-clL3tevDVLw/WiXVaXSsaaI/AAAAAAAACPA/gSRRc7aluLIQxkrrX1Wl3mZGAWw4YPDdwCLcBGAs/s1600/image5.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="386" data-original-width="614" height="402" src="https://1.bp.blogspot.com/-clL3tevDVLw/WiXVaXSsaaI/AAAAAAAACPA/gSRRc7aluLIQxkrrX1Wl3mZGAWw4YPDdwCLcBGAs/s640/image5.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Rather than streaming upscaled foveated content (left image), foveated transmission enables streaming content pre-reconstruction (right image), reducing the number of bits transmitted.</td></tr></tbody></table>This change requires moving the simple upscaling and blending operations to the display side and transmitting only the foveated rendered content. Complexity arises if the foveal region (the red box in the figure above) moves with eye-tracking. Such motion may cause temporal artifacts (figure below), since the Display Stream Compression (DSC) used between the SoC and the display is not designed for foveated content. 
<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-ilSskhoTTi8/WiXVIU6SaoI/AAAAAAAACO8/5M2Qh9NB3G8-QUoSPg7a7vrujoX2zFsRQCLcBGAs/s1600/image3.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="303" data-original-width="646" height="300" src="https://2.bp.blogspot.com/-ilSskhoTTi8/WiXVIU6SaoI/AAAAAAAACO8/5M2Qh9NB3G8-QUoSPg7a7vrujoX2zFsRQCLcBGAs/s640/image3.gif" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Comparison of full integration of foveation and compression techniques (left) versus typical flickering artifacts that may be introduced by applying DSC to foveated content (right).</td></tr></tbody></table><b>Toward a New Pipeline</b><br />We have focused on a few components of a “foveation pipeline” for MR and VR applications. By considering the impact of foveation in every part of a display system — rendering, processing and transmission — we can enable the next generation of lightweight, low-power, and high resolution MR/VR HMDs. This topic has been an active area of research for many years and it seems reasonable to expect the appearance of VR and MR headsets with foveated pipelines in the coming years. 
<br /><br /><b>Acknowledgements</b><br /><i>We would like to recognize the work done by the following collaborators:</i><br /><ul><li><i>Haomiao Jiang and Carlin Vieri on display compression and foveated transmission</i></li><li><i>Brian Funt and Sylvain Vignaud on the development of new foveated rendering algorithms</i></li></ul>Research Bloghttps://plus.google.com/101673966767287570260noreply@blogger.com0tag:blogger.com,1999:blog-21224994.post-54901146623782673672017-11-09T10:30:00.000-08:002017-11-09T10:30:00.753-08:00Seamless Google Street View Panoramas<span class="byline-author">Posted by Mike Krainin, Software Engineer and Ce Liu, Research Scientist, Machine Perception</span><br /><br />In 2007, we introduced <a href="https://www.google.com/streetview/">Google Street View</a>, enabling you to explore the world through panoramas of neighborhoods, landmarks, museums and more, right from your browser or mobile device. The creation of these panoramas is a complicated process, involving capturing images from a multi-camera rig called a rosette, and then using image blending techniques to carefully stitch them all together. However, many things can thwart the creation of a "successful" panorama, such as mis-calibration of the rosette camera geometry, timing differences between adjacent cameras, and <a href="https://en.wikipedia.org/wiki/Parallax">parallax</a>. And while we attempt to address these issues by using approximate scene geometry to account for parallax and frequent camera re-calibration, visible seams in image overlap regions can still occur. 
<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-eMyMcE1Lrzw/WgOOljjq8jI/AAAAAAAACLI/GQokAlRZQ4QWeldN5OmdwwtiPPgxnOk5gCLcBGAs/s1600/image18.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="425" data-original-width="1187" height="228" src="https://4.bp.blogspot.com/-eMyMcE1Lrzw/WgOOljjq8jI/AAAAAAAACLI/GQokAlRZQ4QWeldN5OmdwwtiPPgxnOk5gCLcBGAs/s640/image18.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Left: A Street View car carrying a multi-camera rosette. Center: A close-up of the rosette, which is made up of 15 cameras. Right: A visualization of the spatial coverage of each camera. Overlap between adjacent cameras is shown in darker gray.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-ylBIpwy26_A/WgOPwpsURNI/AAAAAAAACLU/YocLgJztmxM6ZgCPsWF1mzbKwtlLWtbOACLcBGAs/s1600/f2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="760" data-original-width="1525" height="318" src="https://2.bp.blogspot.com/-ylBIpwy26_A/WgOPwpsURNI/AAAAAAAACLU/YocLgJztmxM6ZgCPsWF1mzbKwtlLWtbOACLcBGAs/s640/f2.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Left: The Sydney Opera House with stitching seams along its iconic shells. Right: The same Street View panorama after optical flow seam repair. </td></tr></tbody></table>In order to provide more seamless Street View images, we’ve developed a new algorithm based on <a href="https://en.wikipedia.org/wiki/Optical_flow">optical flow</a> to help solve these challenges. 
The idea is to subtly warp each input image such that the image content lines up within regions of overlap. This needs to be done carefully to avoid introducing new types of visual artifacts. The approach must also be robust to varying scene geometry, lighting conditions, calibration quality, and many other conditions. To simplify the task of aligning the images and to satisfy computational requirements, we’ve broken it into two steps.<br /><br /><b>Optical Flow</b><br />The first step is to find corresponding pixel locations for each pair of images that overlap. Using techniques described in our <a href="https://research.googleblog.com/2017/04/photoscan-taking-glare-free-pictures-of.html">PhotoScan blog post</a>, we compute optical flow from one image to the other. This provides a smooth and dense correspondence field. We then downsample the correspondences for computational efficiency. We also discard correspondences where there isn’t enough visual structure to be confident in the results of optical flow.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-3_PrlnlalVU/WgOQEK4XR3I/AAAAAAAACLY/mMi33cRq2cI9CeEtRefR_Pl_WsZWslm-gCLcBGAs/s1600/image10.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="544" data-original-width="1600" height="216" src="https://1.bp.blogspot.com/-3_PrlnlalVU/WgOQEK4XR3I/AAAAAAAACLY/mMi33cRq2cI9CeEtRefR_Pl_WsZWslm-gCLcBGAs/s640/image10.jpg" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><br />The boundaries of a pair of constituent images from the rosette camera rig that need to be stitched together.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: 
center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-K0K3_q-6jYU/WgOQP67vylI/AAAAAAAACLg/sk-tnUCPI3wu7URyuASe6Zaqf365nRcggCLcBGAs/s1600/image4.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="724" data-original-width="939" height="493" src="https://4.bp.blogspot.com/-K0K3_q-6jYU/WgOQP67vylI/AAAAAAAACLg/sk-tnUCPI3wu7URyuASe6Zaqf365nRcggCLcBGAs/s640/image4.gif" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">An illustration of optical flow within the pair’s overlap region.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-dGwhq-Fj7x0/WgOQjgj8IPI/AAAAAAAACLk/B5N6K-jOsZ4y5i1U_XHatx5tGfOb3LWhgCLcBGAs/s1600/f5.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="772" data-original-width="1173" height="420" src="https://1.bp.blogspot.com/-dGwhq-Fj7x0/WgOQjgj8IPI/AAAAAAAACLk/B5N6K-jOsZ4y5i1U_XHatx5tGfOb3LWhgCLcBGAs/s640/f5.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Extracted correspondences in the pair of images. For each colored dot in the overlap region of the left image, there is an equivalently-colored dot in the overlap region of the right image, indicating how the optical flow algorithm has aligned the point. These pairs of corresponding points are used as input to the global optimization stage. Notice that the overlap covers only a small portion of each image.</td></tr></tbody></table><b>Global Optimization</b><br />The second step is to warp the rosette’s images to simultaneously align all of the corresponding points from overlap regions (as seen in the figure above). 
When stitched into a panorama, the set of warped images will then properly align. This is challenging because the overlap regions cover only a small fraction of each image, resulting in an under-constrained problem. To generate visually pleasing results across the whole image, we formulate the warping as a <a href="https://en.wikipedia.org/wiki/Spline_(mathematics)">spline</a>-based flow field with spatial regularization. The spline parameters are solved for in a non-linear optimization using Google’s open source <a href="http://ceres-solver.org/">Ceres Solver</a>.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-Q6T-1Xy7gvM/WgOREyAmTdI/AAAAAAAACLw/a165a0SIOwo1JggsjEsaR550qj6iFklNACLcBGAs/s1600/image6.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="1210" height="210" src="https://1.bp.blogspot.com/-Q6T-1Xy7gvM/WgOREyAmTdI/AAAAAAAACLw/a165a0SIOwo1JggsjEsaR550qj6iFklNACLcBGAs/s640/image6.gif" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A visualization of the final warping process. Left: A section of the panorama covering 180 degrees horizontally. Notice that the overall effect of warping is intentionally quite subtle. Right: A close-up, highlighting how warping repairs the seams.</td></tr></tbody></table>Our approach has many similarities to <a href="https://link.springer.com/chapter/10.1007%2F978-1-4757-3482-9_13">previously published work</a> by Shum & Szeliski on “deghosting” panoramas. Key differences include that our approach estimates dense, smooth correspondences (rather than patch-wise, independent correspondences), and we solve a nonlinear optimization for the final warping. 
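The production system fits a 2-D spline-based flow field with Ceres; as a rough, linear 1-D stand-in (all names and parameters here are illustrative, not the actual solver), the structure of the fit — sparse correspondence constraints plus a smoothness regularizer over an under-constrained interior — can be sketched as a least-squares problem:

```python
import numpy as np

def fit_smooth_warp(xs, targets, n_ctrl=10, smooth=1.0):
    """Fit a smooth 1-D displacement field to sparse correspondences.

    Control points on [0, 1] carry unknown displacements. Each
    correspondence (x, target) constrains the linear interpolation of
    its two neighbouring control points; a second-difference penalty
    regularizes the interior, where no correspondences exist.
    """
    A_rows, b = [], []
    grid = np.linspace(0.0, 1.0, n_ctrl)
    # Data term: interpolated displacement at x should match the target.
    for x, t in zip(xs, targets):
        i = min(int(x * (n_ctrl - 1)), n_ctrl - 2)
        w = (x - grid[i]) / (grid[i + 1] - grid[i])
        row = np.zeros(n_ctrl)
        row[i], row[i + 1] = 1.0 - w, w
        A_rows.append(row)
        b.append(t)
    # Smoothness term: penalize curvature of the control polygon.
    for i in range(1, n_ctrl - 1):
        row = np.zeros(n_ctrl)
        row[i - 1], row[i], row[i + 1] = smooth, -2.0 * smooth, smooth
        A_rows.append(row)
        b.append(0.0)
    coeffs, *_ = np.linalg.lstsq(np.array(A_rows), np.array(b), rcond=None)
    return grid, coeffs
```

With correspondences only near the image borders (as in the small overlap regions above), the regularizer is what propagates a plausible, smooth warp across the rest of the image.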
The result is a more well-behaved warping that is less likely to introduce new visual artifacts than the kernel-based approach.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-sEQuEAh8EbM/WgORZRJ3QSI/AAAAAAAACL0/lhQygBYEP6YUvD75M5NY5u-3I0yJFiwSACLcBGAs/s1600/f7.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="527" data-original-width="1600" height="210" src="https://1.bp.blogspot.com/-sEQuEAh8EbM/WgORZRJ3QSI/AAAAAAAACL0/lhQygBYEP6YUvD75M5NY5u-3I0yJFiwSACLcBGAs/s640/f7.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Left: A close-up of the un-repaired panorama. Middle: Result of kernel-based interpolation. This fixes discontinuities but at the expense of strong wobbling artifacts due to the small image overlap and limited footprint of kernels. Right: Result of our global optimization.</td></tr></tbody></table>This is important because our algorithm needs to be robust to the enormous diversity in content in Street View’s billions of panoramas. 
You can see how effective the algorithm is in the following examples:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-eB5einc7rZE/WgORrU9lYjI/AAAAAAAACME/FEAQE0EmOw4MTfIsMj0qYwXP0kc3k9-sgCLcBGAs/s1600/f8.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="768" data-original-width="1546" height="316" src="https://4.bp.blogspot.com/-eB5einc7rZE/WgORrU9lYjI/AAAAAAAACME/FEAQE0EmOw4MTfIsMj0qYwXP0kc3k9-sgCLcBGAs/s640/f8.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Tower Bridge, London</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-CPD78I6VFyg/WgORrKvdinI/AAAAAAAACL8/WU-egRbXeHgoThki8D8tyCHzk3G2iJb7wCLcBGAs/s1600/f9.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="763" data-original-width="1532" height="318" src="https://1.bp.blogspot.com/-CPD78I6VFyg/WgORrKvdinI/AAAAAAAACL8/WU-egRbXeHgoThki8D8tyCHzk3G2iJb7wCLcBGAs/s640/f9.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Christ the Redeemer, Rio de Janeiro</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-IQjjDs-C4dc/WgORrKCol_I/AAAAAAAACMA/_KaAY-GOx1o2NKlE-Oo3WW0RAiXZxEoHgCLcBGAs/s1600/f10.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="753" data-original-width="1514" height="318" 
src="https://1.bp.blogspot.com/-IQjjDs-C4dc/WgORrKCol_I/AAAAAAAACMA/_KaAY-GOx1o2NKlE-Oo3WW0RAiXZxEoHgCLcBGAs/s640/f10.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">An SUV on the streets of Seattle</td></tr></tbody></table>This new algorithm was recently added to the Street View stitching pipeline. It is now being used to restitch existing panoramas on an ongoing basis. Keep an eye out for improved Street View near you!<br /><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/iL67eGBS3Ic/0.jpg" frameborder="0" height="360" src="https://www.youtube.com/embed/iL67eGBS3Ic?rel=0&feature=player_embedded" width="640"></iframe></div><br /><b>Acknowledgements</b><br /><i>Special thanks to Bryan Klingner for helping to integrate this feature with the Street View infrastructure.</i>Research Bloghttps://plus.google.com/101673966767287570260noreply@blogger.com0tag:blogger.com,1999:blog-21224994.post-77605221046956773052017-08-23T10:00:00.000-07:002017-09-12T09:29:03.923-07:00Google at KDD’17: Graph Mining and Beyond<span class="byline-author">Posted by Bryan Perozzi, Research Scientist, NYC Algorithms and Optimization Team</span><br /><br />The <a href="http://www.kdd.org/kdd2017/">23rd ACM conference on Knowledge Discovery and Data Mining</a> (KDD’17), a main venue for academic and industry research in data science, information retrieval, data mining and machine learning, was held last week in Halifax, Canada. Google has historically been an active participant in KDD, and this year was no exception, with Googlers’ contributing numerous papers and participating in workshops. 
<br /><br />In addition to our overall participation, we are happy to congratulate fellow Googler Bryan Perozzi for receiving the SIGKDD 2017 Doctoral Dissertation Award, which serves to recognize excellent research by doctoral candidates in the field of data mining and knowledge discovery. This award was given in recognition of his <a href="http://perozzi.net/publications/16_thesis.pdf">thesis</a> on the topic of machine learning on graphs performed at Stony Brook University, under the advisorship of <a href="http://www3.cs.stonybrook.edu/~skiena/">Steven Skiena</a>. Part of his thesis was developed during his internships at Google. The thesis dealt with using a restricted set of local graph primitives (such as ego-networks and truncated random walks) to effectively exploit the information around each vertex for <a href="http://dl.acm.org/citation.cfm?id=2623732">classification</a>, <a href="http://dl.acm.org/citation.cfm?doid=2623330.2623682">clustering</a>, and <a href="http://epubs.siam.org/doi/abs/10.1137/1.9781611974348.24">anomaly detection</a>. Most notably, the work introduced the random-walk paradigm for graph embedding with neural networks in DeepWalk.<br /><br /><a href="http://dl.acm.org/citation.cfm?id=2623732">DeepWalk: Online Learning of Social Representations</a>, originally presented at KDD'14, outlines a method for using a series of local information obtained from truncated random walks to learn <i>latent</i> representations of nodes in a graph (e.g. users in a social network). The core idea was to treat each segment of a random walk as a sentence “in the language of the graph.” These segments could then be used as input for neural network models to learn representations of the graph’s nodes, using sequence modeling methods like <a href="https://en.wikipedia.org/wiki/Word2vec">word2vec</a> (which had just been developed at the time). 
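The core recipe, truncated random walks treated as sentences of node "words", is easy to sketch. Below is a minimal illustration of the walk-generation step only (the toy graph, walk length, and walk count are invented for the example; this is not the DeepWalk reference implementation):

```python
import random

def truncated_random_walks(graph, walk_length=5, walks_per_node=2, seed=0):
    """Generate truncated random walks; each walk is one 'sentence' of node IDs."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                neighbors = graph[walk[-1]]
                if not neighbors:  # dead end: truncate the walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# A tiny toy "social network": node -> list of neighbors.
toy_graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
sentences = truncated_random_walks(toy_graph)
```

Each resulting walk plays the role of a sentence; feeding the collection of walks to a sequence model such as word2vec then yields a latent representation for every node.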
This research continues at Google, most recently with <a href="https://arxiv.org/abs/1705.05615">Learning Edge Representations via Low-Rank Asymmetric Projections</a>.<br /><br />The full list of Google contributions at KDD’17 is listed below (Googlers highlighted in <span style="color: #3d85c6;">blue</span>).<br /><br /><b><u>Organizing Committee</u></b><br />Panel Chair: <span style="color: #3d85c6;"><i>Andrew Tomkins </i></span><br />Research Track Program Chair: <i><span style="color: #3d85c6;">Ravi Kumar </span></i><br />Applied Data Science Track Program Chair: <span style="color: #3d85c6;"><i>Roberto J. Bayardo </i></span><br />Research Track Program Committee: <i><span style="color: #3d85c6;">Sergei Vassilvitskii</span></i><i>,</i><i><span style="color: #3d85c6;"> Alex Beutel</span></i><i>,</i><i><span style="color: #3d85c6;"> Abhimanyu Das</span></i><i>,</i><i><span style="color: #3d85c6;"> Nan Du</span></i><i>,</i><i><span style="color: #3d85c6;"> Alessandro Epasto</span></i><i>,</i><i><span style="color: #3d85c6;"> Alex Fabrikant</span></i><i>,</i><i><span style="color: #3d85c6;"> Silvio Lattanzi</span></i><i>,</i><i><span style="color: #3d85c6;"> Kristen Lefevre</span></i><i>,</i><i><span style="color: #3d85c6;"> Bryan Perozzi</span></i><i>,</i><i><span style="color: #3d85c6;"> Karthik Raman</span></i><i>,</i><i><span style="color: #3d85c6;"> Steffen Rendle</span></i><i>,</i><i><span style="color: #3d85c6;"> Xiao Yu</span></i><br />Applied Data Science Program Track Committee: <i><span style="color: #3d85c6;">Edith Cohen</span></i><i>,</i><i><span style="color: #3d85c6;"> Ariel Fuxman</span></i><i>,</i><i><span style="color: #3d85c6;"> D. 
Sculley</span></i><i>,</i><i><span style="color: #3d85c6;"> Isabelle Stanton</span></i><i>,</i><i><span style="color: #3d85c6;"> Martin Zinkevich</span></i><i>,</i><i><span style="color: #3d85c6;"> Amr Ahmed</span></i><i>,</i><i><span style="color: #3d85c6;"> Azin Ashkan</span></i><i>,</i><i><span style="color: #3d85c6;"> Michael Bendersky</span></i><i>,</i><i><span style="color: #3d85c6;"> James Cook</span></i><i>,</i><i><span style="color: #3d85c6;"> Nan Du</span></i><i>,</i><i><span style="color: #3d85c6;"> Balaji Gopalan</span></i><i>,</i><i><span style="color: #3d85c6;"> Samuel Huston</span></i><i>,</i><i><span style="color: #3d85c6;"> Konstantinos Kollias</span></i><i>,</i><i><span style="color: #3d85c6;"> James Kunz</span></i><i>,</i><i><span style="color: #3d85c6;"> Liang Tang</span></i><i>,</i><i><span style="color: #3d85c6;"> Morteza Zadimoghaddam</span></i><br /><br /><b><u>Awards</u></b><br />Doctoral Dissertation Award: <span style="color: #3d85c6;"><i>Bryan Perozzi</i></span>, for <a href="http://perozzi.net/publications/16_thesis.pdf">Local Modeling of Attributed Graphs: Algorithms and Applications</a>.<br /><br />Doctoral Dissertation Runner-up Award: <i><span style="color: #3d85c6;">Alex Beutel</span></i>, for <a href="http://alexbeutel.com/papers/CMU-CS-16-105.pdf">User Behavior Modeling with Large-Scale Graph Analysis</a>.<br /><br /><b><u>Papers</u></b><br /><a href="http://www.kdd.org/kdd2017/papers/view/ego-splitting-framework-from-non-overlapping-to-overlapping-clusters">Ego-Splitting Framework: from Non-Overlapping to Overlapping Clusters</a><br /><i><span style="color: #3d85c6;">Alessandro Epasto</span></i><i>,</i><i><span style="color: #3d85c6;"> Silvio Lattanzi</span></i><i>,</i><i><span style="color: #3d85c6;"> Renato Paes Leme</span></i><br /><br /><a 
href="http://delivery.acm.org/10.1145/3100000/3098020/p105-cohen.pdf?ip=104.132.34.79&id=3098020&acc=OA&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E5945DC2EABF3343C&CFID=931414749&CFTOKEN=83886961&__acm__=1503326945_f7fd5e1fd73428ab35196997099a0840">HyperLogLog Hyperextended: Sketches for Concave Sublinear Frequency Statistics</a><br /><span style="color: #3d85c6;"><i>Edith Cohen</i></span><br /><br /><a href="http://dl.acm.org/citation.cfm?id=3098043">Google Vizier: A Service for Black-Box Optimization</a><br /><i><span style="color: #3d85c6;">Daniel Golovin</span></i><i>,</i><i><span style="color: #3d85c6;"> Benjamin Solnik</span></i><i>,</i><i><span style="color: #3d85c6;"> Subhodeep Moitra</span></i><i>,</i><i><span style="color: #3d85c6;"> Greg Kochanski</span></i><i>,</i><i><span style="color: #3d85c6;"> John Karro</span></i><i>,</i><i><span style="color: #3d85c6;"> D. Sculley</span></i><br /><br /><a href="http://www.kdd.org/kdd2017/papers/view/quick-access-building-a-smart-experience-for-google-drive">Quick Access: Building a Smart Experience for Google Drive</a><br /><i><span style="color: #3d85c6;">Sandeep Tata</span></i><i>,</i><i><span style="color: #3d85c6;"> Alexandrin Popescul</span></i><i>,</i><i><span style="color: #3d85c6;"> Marc Najork</span></i><i>,</i><i><span style="color: #3d85c6;"> Mike Colagrosso</span></i><i>,</i><i><span style="color: #3d85c6;"> Julian Gibbons</span></i><i>,</i><i><span style="color: #3d85c6;"> Alan Green</span></i><i>,</i><i><span style="color: #3d85c6;"> Alexandre Mah</span></i><i>,</i><i><span style="color: #3d85c6;"> Michael Smith</span></i><i>,</i><i><span style="color: #3d85c6;"> Divanshu Garg</span></i><i>,</i><i><span style="color: #3d85c6;"> Cayden Meyer</span></i><i>,</i><i><span style="color: #3d85c6;"> Reuben Kan</span></i><br /><br /><a href="http://dl.acm.org/citation.cfm?id=3098021">TFX: A TensorFlow Based Production Scale Machine Learning Platform</a><br /><i><span style="color: 
#3d85c6;">Denis Baylor</span></i><i>,</i><i><span style="color: #3d85c6;"> Eric Breck</span></i><i>,</i><i><span style="color: #3d85c6;"> Heng-Tze Cheng</span></i><i>,</i><i><span style="color: #3d85c6;"> Noah Fiedel</span></i><i>,</i><i><span style="color: #3d85c6;"> Chuan Yu Foo</span></i><i>,</i><i><span style="color: #3d85c6;"> Zakaria Haque</span></i><i>,</i><i><span style="color: #3d85c6;"> Salem Haykal</span></i><i>,</i><i><span style="color: #3d85c6;"> Mustafa Ispir</span></i><i>,</i><i><span style="color: #3d85c6;"> Vihan Jain</span></i><i>,</i><i><span style="color: #3d85c6;"> Levent Koc</span></i><i>,</i><i><span style="color: #3d85c6;"> Chiu Yuen Koo</span></i><i>,</i><i><span style="color: #3d85c6;"> Lukasz Lew</span></i><i>,</i><i><span style="color: #3d85c6;"> Clemens Mewald</span></i><i>, </i><i><span style="color: #3d85c6;">Akshay Modi</span></i><i>,</i><i><span style="color: #3d85c6;"> Neoklis Polyzotis</span></i><i>,</i><i><span style="color: #3d85c6;"> Sukriti Ramesh</span></i><i>,</i><i><span style="color: #3d85c6;"> Sudip Roy</span></i><i>,</i><i><span style="color: #3d85c6;"> Steven Whang</span></i><i>,</i><i><span style="color: #3d85c6;"> Martin Wicke</span></i><i>, </i><i><span style="color: #3d85c6;"> Jarek Wilkiewicz</span></i><i>,</i><i><span style="color: #3d85c6;"> Xin Zhang</span></i><i>,</i><i><span style="color: #3d85c6;"> Martin Zinkevich</span></i><br /><br /><a href="http://www.kdd.org/kdd2017/papers/view/construction-of-directed-2k-graphs">Construction of Directed 2K Graphs</a><br /><i>Balint Tillman, Athina Markopoulou, Carter T. 
Butts, <span style="color: #3d85c6;">Minas Gjoka</span></i><br /><br /><a href="http://www.kdd.org/kdd2017/papers/view/a-practical-algorithm-for-solving-the-incoherence-problem-of-topic-models-i">A Practical Algorithm for Solving the Incoherence Problem of Topic Models In Industrial Applications </a><br /><i><span style="color: #3d85c6;">Amr Ahmed</span></i><i>,</i><i><span style="color: #3d85c6;"> James Long</span></i><i>,</i><i><span style="color: #3d85c6;"> Dan Silva</span></i><i>,</i><i><span style="color: #3d85c6;"> Yuan Wang</span></i><br /><br /><a href="http://www.kdd.org/kdd2017/papers/view/train-and-distribute-managing-simplicity-vs.-flexibility-in-high-level-mach">Train and Distribute: Managing Simplicity vs. Flexibility in High-Level Machine Learning Frameworks </a><br /><i><span style="color: #3d85c6;">Heng-Tze Cheng</span>,<span style="color: #3d85c6;"> Lichan Hong</span>,<span style="color: #3d85c6;"> Mustafa Ispir</span>,<span style="color: #3d85c6;"> Clemens Mewald</span>,<span style="color: #3d85c6;"> Zakaria Haque</span>,<span style="color: #3d85c6;"> Illia Polosukhin</span>,<span style="color: #3d85c6;"> Georgios Roumpos</span>,<span style="color: #3d85c6;"> D Sculley, Jamie Smith</span>,<span style="color: #3d85c6;"> David Soergel</span>,<span style="color: #3d85c6;"> </span>Yuan Tang,<span style="color: #3d85c6;"> Philip Tucker</span>,<span style="color: #3d85c6;"> Martin Wicke</span>,<span style="color: #3d85c6;"> Cassandra Xia</span>,<span style="color: #3d85c6;"> Jianwei Xie</span></i><br /><br /><a href="http://www.kdd.org/kdd2017/papers/view/learning-to-count-mosquitoes-for-the-sterile-insect-technique">Learning to Count Mosquitoes for the Sterile Insect Technique</a><br /><i><span style="color: #3d85c6;">Yaniv Ovadia</span></i><i>,</i><i><span style="color: #3d85c6;"> Yoni Halpern</span></i><i>,</i><i><span style="color: #3d85c6;"> Dilip Krishnan</span>, Josh Livni, Daniel Newburger, <span style="color: #3d85c6;">Ryan Poplin</span>, 
Tiantian Zha, <span style="color: #3d85c6;">D. Sculley</span></i><br /><br /><b><u>Workshops</u></b><br /><a href="http://www.mlgworkshop.org/2017/">13th International Workshop on Mining and Learning with Graphs</a><br />Keynote Speaker: <i><span style="color: #3d85c6;">Vahab Mirrokni</span> - Distributed Graph Mining: Theory and Practice</i><br />Contributed talks include:<br /><a href="https://arxiv.org/pdf/1706.07845.pdf">HARP: Hierarchical Representation Learning for Networks</a><br /><i>Haochen Chen, <span style="color: #3d85c6;">Bryan Perozzi</span>, Yifan Hu and Steven Skiena</i><br /><br /><a href="http://www.fatml.org/">Fairness, Accountability, and Transparency in Machine Learning</a><br />Contributed talks include:<br /><a href="http://www.fatml.org/media/documents/fair_clustering_through_fairlets.pdf">Fair Clustering Through Fairlets </a><br /><i>Flavio Chierichetti, <span style="color: #3d85c6;">Ravi Kumar</span>,<span style="color: #3d85c6;"> Silvio Lattanzi</span></i><i>,</i><i><span style="color: #3d85c6;"> Sergei Vassilvitskii</span></i><br /><a href="https://arxiv.org/pdf/1707.00075.pdf">Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations</a><br /><i><span style="color: #3d85c6;">Alex Beutel</span></i><i>,</i><i><span style="color: #3d85c6;"> Jilin Chen</span></i><i>,</i><i><span style="color: #3d85c6;"> Zhe Zhao</span></i><i>,</i><i><span style="color: #3d85c6;"> Ed H. 
Chi</span></i><br /><br /><b><u>Tutorial</u></b><br /><a href="https://github.com/random-forests/tensorflow-workshop">TensorFlow</a><br /><i><span style="color: #3d85c6;">Rajat Monga</span></i><i>,</i><i><span style="color: #3d85c6;"> Martin Wicke</span></i><i>,</i><i><span style="color: #3d85c6;"> Daniel ‘Wolff’ Dobson</span></i><i>,</i><i><span style="color: #3d85c6;"> Joshua Gordon</span></i>Research Bloghttps://plus.google.com/101673966767287570260noreply@blogger.com0tag:blogger.com,1999:blog-21224994.post-15180365961139069462017-08-21T10:00:00.000-07:002017-08-21T10:11:26.652-07:00Announcing the NYC Algorithms and Optimization Site<span class="byline-author">Posted by Vahab Mirrokni, Principal Research Scientist and Xerxes Dotiwalla, Product Manager, NYC Algorithms and Optimization Team</span><br /><br />New York City is home to several Google algorithms research groups. We collaborate closely with the teams behind many Google products and work on a wide variety of algorithmic challenges, like <a href="https://research.googleblog.com/2017/04/consistent-hashing-with-bounded-loads.html">optimizing infrastructure</a>, <a href="https://research.googleblog.com/2015/08/kdd-2015-best-research-paper-award.html">protecting privacy</a>, <a href="https://research.googleblog.com/2016/09/research-from-vldb-2016-improved-friend.html">improving friend suggestions</a> and much more.<br /><br />Today, we’re excited to provide more insights into the research done in the Big Apple with the launch of the <a href="https://research.google.com/teams/nycalg/">NYC Algorithms and Optimization Team page</a>. The NYC Algorithms and Optimization Team comprises multiple overlapping research groups working on large-scale graph mining, large-scale optimization and market algorithms. 
<br /><br /><b>Large-scale Graph Mining</b><br />The <a href="https://research.google.com/teams/nycalg/graph-mining/">Large-scale Graph Mining Group</a> is tasked with building the most scalable library for graph algorithms and analysis and applying it to a multitude of Google products. We formalize data mining and machine learning challenges as graph algorithms problems and perform fundamental research in those fields leading to publications in top venues.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-qLyPm7efRy4/WZdxY-TVDkI/AAAAAAAAB9o/QUO-W9rV8ys4b9OnmaaCLajyL-YgPPE-wCLcBGAs/s1600/image3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1000" height="320" src="https://3.bp.blogspot.com/-qLyPm7efRy4/WZdxY-TVDkI/AAAAAAAAB9o/QUO-W9rV8ys4b9OnmaaCLajyL-YgPPE-wCLcBGAs/s320/image3.png" width="320" /></a></div><br />Our projects include:<br /><ul><li><b>Large-scale Similarity Ranking:</b> Our research in pairwise similarity ranking has produced a number of innovative methods, which we have published in top venues such as WWW, ICML, and VLDB, e.g., improving friend suggestion using <a href="https://research.google.com/pubs/pub44265.html">ego-networks</a> and <a href="https://research.google.com/pubs/pub42479.html">computing similarity rankings in large-scale multi-categorical bipartite graphs</a>.</li><li><b>Balanced Partitioning:</b> Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems. 
As <a href="https://research.google.com/pubs/pub44315.html">our paper</a> shows, we are able to achieve a 15-25% reduction in cut size compared to state-of-the-art algorithms in the literature.</li><li><b>Clustering and Connected Components:</b> We have state-of-the-art implementations of many different algorithms including hierarchical clustering, overlapping clustering, <a href="https://research.google.com/pubs/pub41596.html">local clustering</a>, spectral clustering, and <a href="https://research.google.com/pubs/pub43122.html">connected components</a>. Our methods are 10-30x faster than the best previously studied algorithms and can scale to graphs with trillions of edges.</li><li><b>Public-private Graph Computation:</b> Our <a href="http://dl.acm.org/citation.cfm?doid=2783258.2783354">research</a> on novel models of graph computation based on a personal view of private data preserves the privacy of each user.</li></ul><b>Large-scale Optimization</b><br />The <a href="https://research.google.com/teams/nycalg/large-scale-optimization/">Large-scale Optimization Group</a>’s mission is to develop large-scale optimization techniques and use them to improve the efficiency and robustness of infrastructure at Google. We apply techniques from areas such as combinatorial optimization, online algorithms, and control theory to make Google’s massive computational infrastructure do more with less. We combine online and offline optimizations to achieve such goals as increasing throughput, decreasing latency, minimizing resource contention, maximizing the efficacy of caches, and eliminating unnecessary work in distributed systems. 
<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-lTVKCQqtrCk/WZdxeSplRZI/AAAAAAAAB9s/VO-k8NNwD7oRLgxW_dDNt0oqc8lElPQ8QCLcBGAs/s1600/image1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1000" height="320" src="https://3.bp.blogspot.com/-lTVKCQqtrCk/WZdxeSplRZI/AAAAAAAAB9s/VO-k8NNwD7oRLgxW_dDNt0oqc8lElPQ8QCLcBGAs/s320/image1.png" width="320" /></a></div><br />Our research is used in critical infrastructure that supports core products:<br /><ul><li><b>Consistent Hashing:</b> We <a href="https://research.google.com/pubs/pub45756.html">designed memoryless balanced allocation algorithms</a> to assign a dynamic set of clients to a dynamic set of servers such that the load on each server is bounded, and the allocation does not change by much for every update operation. This technique is currently implemented in <a href="https://cloud.google.com/pubsub/">Google Cloud Pub/Sub</a> and <a href="https://research.googleblog.com/2017/04/consistent-hashing-with-bounded-loads.html">externally</a> in the open-source <a href="https://github.com/arodland/haproxy/commit/b02bed24daf64743cb9a571e93ed29ee4bc7efe7">haproxy</a>.</li><li><b>Distributed Optimization Based on Core-sets:</b> <a href="https://research.google.com/pubs/pub44219.html">Composable core-sets</a> provide an effective method for solving optimization problems on massive datasets. This technique can be used for several problems including <a href="https://research.google.com/pubs/pub42964.html">distributed balanced clustering</a> and <a href="https://research.google.com/pubs/pub44222.html">distributed submodular maximization</a>.</li><li><b>Google Search Infrastructure Optimization:</b> We partnered with the Google Search infrastructure team to build a distributed feedback control loop to govern the way queries are fanned out to machines. 
We also improved the efficacy of caching by increasing the homogeneity of the stream of queries seen by any single machine.</li></ul><b>Market Algorithms</b><br />The Market Algorithms Group analyzes, designs, and delivers economically and computationally efficient marketplaces across Google. Our research serves to optimize display ads for DoubleClick’s reservation ads and exchange, as well as sponsored search and mobile ads.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-a37jDHWP5nE/WZdxjDk0STI/AAAAAAAAB9w/V6yOF-Lxwm0AkGyGkkoIodAmX4z64It-ACLcBGAs/s1600/image2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1000" height="320" src="https://1.bp.blogspot.com/-a37jDHWP5nE/WZdxjDk0STI/AAAAAAAAB9w/V6yOF-Lxwm0AkGyGkkoIodAmX4z64It-ACLcBGAs/s320/image2.png" width="320" /></a></div><br />In the past few years, we have explored a number of areas, including:<br /><ul><li><b>Display Ads Research:</b> The Display ads ecosystem provides a great platform for a variety of research problems in online stochastic optimization and computational economics, such as <a href="https://research.google.com/pubs/pub41755.html">whole-page optimization</a> and optimal contract design. 
An important part of this research area is dedicated to auction optimization for advertising exchanges where we deal with <a href="https://research.google.com/pubs/pub36634.html">auctions with intermediaries</a>, <a href="https://research.google.com/pubs/pub45185.html">optimal pricing strategies</a>, and <a href="https://research.google.com/pubs/pub36975.html">optimal yield management</a> for reservation contracts and ad exchanges.</li><li><b>Online Stochastic Matching:</b> We have developed new algorithms for <a href="https://research.google.com/pubs/pub35487.html">online stochastic matching</a>, <a href="https://research.google.com/pubs/pub37475.html">budgeted allocation</a>, <a href="https://research.google.com/pubs/pub44231.html">handling traffic spikes</a>, and more general variants of the problem, called <a href="https://research.google.com/pubs/pub44224.html">submodular welfare maximization</a>.</li><li><b>Robust Stochastic Allocation:</b> In <a href="https://research.google.com/pubs/pub37475.html">one paper</a>, we study online algorithms that achieve a good performance in both adversarial and stochastic arrival models. In <a href="https://research.google.com/pubs/pub44231.html">another paper</a>, we develop a hybrid model and algorithms with approximation factors that change as a function of the accuracy of the forecast.</li><li><b>Optimizing Advertiser Campaigns:</b> We have studied algorithmic questions such as <a href="https://research.google.com/pubs/pub40688.html">positive carryover effects</a>, <a href="https://research.google.com/pubs/pub32834.html">budget optimization in search-based auctions</a>, and <a href="https://research.google.com/pubs/pub42963.html">concise bid optimization strategies with multiple budget constraints</a>.</li><li><b>Dynamic Mechanism Design:</b> We have developed efficient mechanisms for sophisticated settings that occur in internet advertising, such as online settings and polyhedral constraints. 
We have also designed a new family of <a href="https://research.google.com/pubs/pub45752.html">dynamic mechanisms</a>, called <a href="https://research.google.com/pubs/pub45750.html">bank account mechanisms</a>, and showed their effectiveness in designing <a href="https://research.google.com/pubs/pub45751.html">non-clairvoyant dynamic mechanisms</a> that can be implemented without relying on forecasting the future steps.</li></ul>For a summary of our research activities, you can take a look at talks at our <a href="https://sites.google.com/site/marketalgorithms/market-algorithms-workshop">recent market algorithms workshop</a>.<br /><br />It is our hope that with the help of this new <a href="https://research.google.com/teams/nycalg/">Google NYC Algorithms and Optimization Team page</a>, we can more effectively share our work and broaden our dialogue with the research and engineering community. Please visit the site to learn about our latest projects, <a href="https://sites.google.com/corp/view/nycalgorithms/home">publications</a>, <a href="https://sites.google.com/site/nycresearchseminar/">seminars</a>, and research areas!Research Bloghttps://plus.google.com/101673966767287570260noreply@blogger.com0tag:blogger.com,1999:blog-21224994.post-7492759680564894022017-04-03T10:00:00.000-07:002017-04-03T10:00:23.496-07:00Consistent Hashing with Bounded Loads<span class="byline-author">Posted by Vahab Mirrokni, Principal Scientist, Morteza Zadimoghaddam, Research Scientist, NYC Algorithms Team</span><br /><br />Running a large-scale web service, such as content hosting, necessarily requires <a href="https://en.wikipedia.org/wiki/Load_balancing_(computing)">load balancing</a> — distributing clients <i>uniformly</i> across multiple servers such that none get overloaded. Further, it is desirable to find an allocation that does not change very much over time in a <i>dynamic</i> environment in which both clients and servers can be added or removed at any time. 
In other words, we need the allocation of clients to servers to be <i>consistent</i> over time.<br /><br />In collaboration with <a href="http://www.diku.dk/~mthorup/">Mikkel Thorup</a>, a visiting researcher from the University of Copenhagen, we developed a new efficient allocation algorithm for this problem with <i>tight guarantees</i> on the maximum load of each server, and studied it theoretically and empirically. We then worked with our Cloud team to implement it in <a href="https://cloud.google.com/pubsub/">Google Cloud Pub/Sub</a>, a scalable event streaming service, and observed substantial improvement in the uniformity of the load allocation (in terms of the maximum load assigned to servers) while maintaining consistency and stability objectives. In August 2016 we described our algorithm in the paper “<a href="https://arxiv.org/abs/1608.01350">Consistent Hashing with Bounded Loads</a>”, and shared it on ArXiv for potential use by the broader research community. <br /><br />Three months later, Andrew Rodland from <a href="https://vimeo.com/">Vimeo</a> informed us that he had found the paper, implemented it in <a href="https://github.com/arodland/haproxy/commit/b02bed24daf64743cb9a571e93ed29ee4bc7efe7">haproxy</a> (a widely-used piece of open source software), and used it for their load balancing project at Vimeo. The results were dramatic: applying these algorithmic ideas helped them decrease the cache bandwidth by a factor of almost 8, eliminating a scaling bottleneck. He recently summarized this story in a <a href="https://medium.com/vimeo-engineering-blog/improving-load-balancing-with-a-new-consistent-hashing-algorithm-9f1bd75709ed">blog post</a> detailing his use case. Needless to say, we were excited to learn that our theoretical research was not only put into application, but also that it was useful <i>and</i> open-sourced. 
<br /><br /><b>Background</b><br />While the concept of <a href="https://en.wikipedia.org/wiki/Consistent_hashing">consistent hashing</a> has been developed in the past to deal with load balancing in dynamic environments, a fundamental issue with all the previously developed schemes is that, in certain scenarios, they may result in sub-optimal load balancing on many servers. <br /><br />Additionally, both clients and servers may be added or removed periodically, and with such changes, we do not want to move too many clients. Thus, while the dynamic allocation algorithm has to always ensure a proper load balancing, it should also aim to minimize the number of clients moved after each change to the system. Such allocation problems become even more challenging when we face hard constraints on the capacity of each server - that is, each server has a capacity that the load may not exceed. Typically, we want capacities close to the average loads. <br /><br />In other words, we want to simultaneously achieve both <i>uniformity</i> and <i>consistency</i> in the resulting allocations. There is a vast amount of literature on solutions in the much simpler case where the set of servers is fixed and only the client set is updated, but in this post we discuss solutions that are relevant in the fully <i>dynamic</i> case where both clients and servers can be added and removed. <br /><br /><b>The Algorithm</b><br />We can think about the servers as bins and clients as balls to have a similar notation with well-studied <a href="https://en.wikipedia.org/wiki/Balls_into_bins">balls-to-bins stochastic processes</a>. The uniformity objective encourages all bins to have a load roughly equal to the average density (the number of balls divided by the number of bins). For some parameter ε, we set the capacity of each bin to either <a href="https://en.wikipedia.org/wiki/Floor_and_ceiling_functions">floor or ceiling</a> of the average load times (1+ε). 
This extra capacity allows us to design an allocation algorithm that meets the consistency objective in addition to the uniformity property. <br /><br />Imagine a given range of numbers overlaid on a circle. We apply a hash function to balls and a separate hash function to bins to obtain numbers in that range that correspond to positions on that circle. We then start allocating balls in a specific order independent of their hash values (let’s say based on their ID). Then each ball is moved clockwise and is assigned to the first bin with spare capacity. <br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-pgZ4b9H7VlM/WOJ91rDe_XI/AAAAAAAABqw/wIjtyPHheFgyHpXIqY4qNLhd_H9DnHsXACLcB/s1600/image00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="588" src="https://3.bp.blogspot.com/-pgZ4b9H7VlM/WOJ91rDe_XI/AAAAAAAABqw/wIjtyPHheFgyHpXIqY4qNLhd_H9DnHsXACLcB/s640/image00.png" width="640" /></a></div>Consider the example above where 6 balls and 3 bins are assigned using two separate hash functions to random locations on the circle. For the sake of this instance, assume the capacity of each bin is set to 2. We start allocating balls in the increasing order of their ID values. Ball number 1 moves clockwise, and goes to bin C. Ball number 2 goes to A. Balls 3 and 4 go to bin B. Ball number 5 goes to bin C. Then ball number 6 moves clockwise and hits bin B first. However bin B has capacity 2 and already contains balls 3 and 4. So ball 6 keeps moving to reach bin C but that bin is also full. Finally, ball 6 ends up in bin A that has a spare slot for it.<br /><br />Upon any update in the system (ball or bin insertion/deletion), the allocation is recomputed to keep the uniformity objective. The art of the analysis is to show that a small update (a few number of insertions and deletions) results in minor changes in the state of the allocation and therefore the consistency objective is met. 
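The walk-through above can be condensed into a short sketch. The circle positions below are hypothetical stand-ins for the two hash functions, chosen so that the code reproduces the six-ball, three-bin example with capacity 2 (this is an illustration of the idea, not the production implementation):

```python
import math

def allocate(ball_pos, bin_pos, eps):
    """Assign each ball, in ID order, to the first clockwise bin with spare capacity."""
    capacity = math.ceil(len(ball_pos) / len(bin_pos) * (1 + eps))
    ring = sorted(bin_pos, key=bin_pos.get)          # bins in clockwise order
    loads = {b: [] for b in ring}
    for ball in sorted(ball_pos):
        # Index of the first bin at or clockwise past the ball's position
        # (wrapping around the circle if no bin lies ahead of it).
        i = next((k for k, b in enumerate(ring) if bin_pos[b] >= ball_pos[ball]), 0)
        while len(loads[ring[i]]) >= capacity:       # bin full: keep moving clockwise
            i = (i + 1) % len(ring)
        loads[ring[i]].append(ball)
    return loads

# The six-ball, three-bin example from the figure; the positions on the circle
# are made up to match the allocation described in the text.
balls = {1: 0.10, 2: 0.40, 3: 0.60, 4: 0.62, 5: 0.05, 6: 0.55}
bins = {"A": 0.45, "B": 0.70, "C": 0.15}
result = allocate(balls, bins, eps=0.0)
```

Running this yields C = {1, 5}, A = {2, 6}, B = {3, 4}: ball 6 finds B and then C full, and settles in A, exactly as in the example.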
In <a href="https://arxiv.org/abs/1608.01350">our paper</a> we show that every ball removal or insertion in the system results in O(1/ε<sup>2</sup>) movements of other balls. The most important thing about this upper bound is that it is independent of the total number of balls or bins in the system. So if the number of balls or bins is doubled, this bound will not change. Having an upper bound independent of the number of balls or bins introduces room for scalability as the consistency objective is not violated if we move to bigger instances. Simulations for the number of movements (relocations) per update are shown below when an update occurs on a bin/server. <br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-IJFOHvSomXY/WOJ-AecmDaI/AAAAAAAABq0/wVAwJd8jxNs7cT30aU0ek3_WpPzYYSO9ACLcB/s1600/image01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="505" src="https://3.bp.blogspot.com/-IJFOHvSomXY/WOJ-AecmDaI/AAAAAAAABq0/wVAwJd8jxNs7cT30aU0ek3_WpPzYYSO9ACLcB/s640/image01.png" width="640" /></a></div>The red curve shows the average number of movements and the blue bars indicate the variance for different values of ε (the x-axis). The dashed curve is the upper bound suggested by our theoretical results, which fits nicely as a prediction of the actual number of movements. Furthermore, for any value of ε, we know the load of each bin is at most (1+ε) times the average load. 
Below we see the load distribution of bins for different values of ε=0.1, ε=0.3 and ε=0.9.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-PUdBAM5bDKk/WOJ-Hm3HAgI/AAAAAAAABq4/iREEzcJjdIQ7YYYE6bfGIEsbQALIozEKgCLcB/s1600/image02.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="532" src="https://1.bp.blogspot.com/-PUdBAM5bDKk/WOJ-Hm3HAgI/AAAAAAAABq4/iREEzcJjdIQ7YYYE6bfGIEsbQALIozEKgCLcB/s640/image02.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The distribution of loads for several values of ε. The load distribution is nearly uniform covering all ranges of loads from 0 to (1+ε) times average, and many bins with load equal to (1+ε) times average.</td></tr></tbody></table>As one can see there is a tradeoff — a lower ε helps with uniformity but not with consistency, while larger ε values help with consistency. A lower ε will ensure that many loads will be equal to the hard capacity limit of (1+ε) times the average, and the rest have a decaying distribution.<br /><br />When providing content hosting services, one must be ready to face a variety of instances with different characteristics. This consistent hashing scheme is ideal for such scenarios as it performs well even for worst-case instances. <br /><br />While our internal results are exciting, we are even more pleased that the broader community found our solution useful enough to <a href="https://github.com/arodland/haproxy">open-source</a>, allowing anyone to use this algorithm. 
If you are interested in further details of this research, please see the <a href="https://arxiv.org/abs/1608.01350">paper</a> on arXiv, and stay tuned for more research from the <a href="https://research.google.com/teams/nycalg/">NYC Algorithms Team</a>!<br /><br /><b>Acknowledgements:</b><br />We would like to thank Alex Totok, Matt Gruskin, Sergey Kondratyev and Haakon Ringberg from the Google Cloud Pub/Sub team, and of course <a href="http://www.diku.dk/~mthorup/">Mikkel Thorup</a> for his invaluable contributions to this paper.tag:blogger.com,1999:blog-21224994.post-70975319191334731112016-09-20T10:00:00.000-07:002016-09-20T10:00:16.001-07:00The 280-Year-Old Algorithm Inside Google Trips<span class="byline-author">Posted by Bogdan Arsintescu, Software Engineer & Sreenivas Gollapudi, Kostas Kollias, Tamas Sarlos and Andrew Tomkins, Research Scientists<br /></span><br /><br /><a href="https://en.wikipedia.org/wiki/Algorithm_engineering">Algorithms Engineering</a> is a lot of fun because algorithms do not go out of fashion: one never knows when an oldie-but-goodie might come in handy. Case in point: Yesterday, Google <a href="https://googleblog.blogspot.com/2016/09/see-more-plan-less-try-google-trips.html">announced Google Trips</a>, a new app to assist you in your travels by helping you create your own “perfect day” in a city. Surprisingly, deep inside Google Trips, there is an algorithm that was invented 280 years ago. 
<br /><br />In 1736, <a href="https://en.wikipedia.org/wiki/Leonhard_Euler">Leonhard Euler</a> authored a brief but <a href="http://eulerarchive.maa.org//docs/originals/E053.pdf">beautiful mathematical paper</a> regarding the town of Königsberg and its 7 bridges, shown here:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-HRkkgmmGB3Y/V-Fk43DGYkI/AAAAAAAABNw/j5c6gQMUsjAWtMWwMBnQ-D35sA8l0-McQCLcB/s1600/image05.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="504" src="https://4.bp.blogspot.com/-HRkkgmmGB3Y/V-Fk43DGYkI/AAAAAAAABNw/j5c6gQMUsjAWtMWwMBnQ-D35sA8l0-McQCLcB/s640/image05.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Image from <a href="https://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg">Wikipedia</a></td></tr></tbody></table>In the paper, Euler studied the following question: is it possible to walk through the city crossing each bridge exactly once? As it turns out, for the city of Königsberg, the answer is no. To reach this answer, Euler developed a general approach to represent any layout of landmasses and bridges in terms of what he dubbed the <i>Geometriam Situs</i> (the “Geometry of Place”), which we now call <a href="https://en.wikipedia.org/wiki/Graph_theory">Graph Theory</a>. 
He represented each landmass as a “node” in the graph, and each bridge as an “edge,” like this:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-wRl8WjuCA-c/V-FkPmiZK-I/AAAAAAAABNs/Z9htxAzYeyg_C44uNKdjCYVLoQqaaQHuwCLcB/s1600/Screen%2BShot%2B2016-09-20%2Bat%2B9.26.46%2BAM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="146" src="https://1.bp.blogspot.com/-wRl8WjuCA-c/V-FkPmiZK-I/AAAAAAAABNs/Z9htxAzYeyg_C44uNKdjCYVLoQqaaQHuwCLcB/s640/Screen%2BShot%2B2016-09-20%2Bat%2B9.26.46%2BAM.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Image from <a href="https://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg">Wikipedia</a></td></tr></tbody></table>Euler noticed that if all the nodes in a connected graph have an even number of edges (such graphs are called “Eulerian” in his honor) then, and only then, a cycle can be found that visits every edge exactly once. Keep this in mind, as we’ll rely on this fact later in the post.<br /><br />Our team in Google Research has been fascinated by the “Geometry of Place” for some time, and we started investigating a question related to Euler’s: rather than visiting just the bridges, how can we visit as many interesting places as possible during a particular trip? We call this the “itineraries” problem. Euler didn’t study it, but it is a well-known topic in Optimization, where it is often called the “<a href="http://chekuri.cs.illinois.edu/papers/orienteering-journal.pdf">Orienteering</a>” problem.<br /><br />While Euler’s problem has an efficient and exact solution, the itineraries problem is not just hard to solve, it is hard to even <i>approximately</i> solve! 
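Before diving into the difficulty, note that Euler's even-degree condition from a few paragraphs up is easy to check in code. Here is a small Python sketch (the edge list and helper names are ours) that encodes the Königsberg multigraph and confirms that no walk can cross each bridge exactly once:

```python
from collections import Counter

# The Königsberg multigraph: 4 landmasses (A-D) and 7 bridges.
bridges = [
    ("A", "B"), ("A", "B"),   # two bridges between A and B
    ("A", "C"), ("A", "C"),   # two bridges between A and C
    ("A", "D"), ("B", "D"), ("C", "D"),
]

def degrees(edges):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def has_euler_cycle(edges):
    # A connected multigraph has a cycle crossing every edge exactly
    # once if and only if every node has even degree.
    return all(d % 2 == 0 for d in degrees(edges).values())

print(has_euler_cycle(bridges))  # False: all four landmasses have odd degree
```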
The difficulty lies in the interplay between two conflicting goals: first, we should pick great places to visit, but second, we should pick them to allow a good itinerary: not too much travel time; don’t visit places when they’re closed; don’t visit too many museums, etc. Embedded in such problems is the challenge of finding efficient routes, often referred to as the <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem">Travelling Salesman Problem</a> (TSP).<br /><br /><b>Algorithms for Travel Itineraries</b><br /><br />Fortunately, the real world has a property called the “<a href="https://en.wikipedia.org/wiki/Triangle_inequality">triangle inequality</a>” that says adding an extra stop to a route never makes it shorter. When the underlying geometry satisfies the triangle inequality, the TSP can be approximately solved using another <a href="https://en.wikipedia.org/wiki/Christofides_algorithm">algorithm discovered by Christofides</a> in 1976. This is an important part of our solution, and builds on Euler’s paper, so we’ll give a quick four-step rundown of how it works here:<br /><ol><li>We start with all our destinations separate, and repeatedly connect together the closest two that aren’t yet connected. This doesn’t yet give us an itinerary, but it does connect all the destinations via a <a href="https://en.wikipedia.org/wiki/Minimum_spanning_tree">minimum spanning tree</a> of the graph.</li><li>We take all the destinations that have an odd number of connections in this tree (Euler proved there must be an even number of these), and carefully pair them up.</li><li>Because all the destinations now have an even number of edges, we’ve created an Eulerian graph, so we create a route that crosses each edge exactly once.</li><li>We now have a great route, but it might visit some places more than once. 
No problem, we find any double visits and simply bypass them, going directly from the predecessor to the successor.</li></ol>Christofides gave an elegant proof that the resulting route is always close to the shortest possible. Here’s an example of Christofides’ algorithm in action on a location graph, with nodes representing places and edge costs representing the travel time between the places.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-DYlLoNxg-S8/V-FlOHLF8TI/AAAAAAAABN0/kNISdLQvX6cfWAmjT8k-LKPEMJA63nX-ACLcB/s1600/image04.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="172" src="https://3.bp.blogspot.com/-DYlLoNxg-S8/V-FlOHLF8TI/AAAAAAAABN0/kNISdLQvX6cfWAmjT8k-LKPEMJA63nX-ACLcB/s640/image04.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Construction of an Eulerian Tour in a location graph</td></tr></tbody></table>Armed with this efficient route-finding subroutine, we can now start building itineraries one step at a time. At each step, we estimate the benefit to the user of each possible new place to visit, and likewise estimate the cost using the Christofides algorithm. A user’s benefit can be derived from a host of natural factors such as the popularity of the place and how different the place is relative to places already visited on the tour. We then pick whichever new place has the best benefit per unit of extra cost (e.g., time needed to include the new place in the tour). 
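The four steps above can be sketched in a few dozen lines of Python. This is an illustrative toy, not the production code: in particular, we pair the odd-degree nodes greedily for brevity, whereas Christofides' 1.5-approximation guarantee requires a minimum-weight perfect matching in step 2:

```python
import math

def christofides_route(points, dist):
    """Approximate TSP tour following the four steps above; `dist` must
    satisfy the triangle inequality.  Greedy pairing in step 2 is a
    simplification of the minimum-weight perfect matching."""
    # Step 1: build a minimum spanning tree (Prim's algorithm).
    in_tree, edges = {points[0]}, []
    while len(in_tree) < len(points):
        u, v = min(((a, b) for a in in_tree for b in points if b not in in_tree),
                   key=lambda e: dist(e[0], e[1]))
        in_tree.add(v)
        edges.append((u, v))
    # Step 2: pair up the odd-degree nodes (there is always an even number).
    deg = {p: 0 for p in points}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    odd = [p for p in points if deg[p] % 2 == 1]
    while odd:
        u = odd.pop()
        v = min(odd, key=lambda w: dist(u, w))
        odd.remove(v)
        edges.append((u, v))
    # Step 3: all degrees are now even, so the multigraph is Eulerian;
    # find a circuit crossing every edge once (Hierholzer's algorithm).
    adj = {p: [] for p in points}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    stack, circuit = [points[0]], []
    while stack:
        u = stack[-1]
        if adj[u]:
            v = adj[u].pop()
            adj[v].remove(u)
            stack.append(v)
        else:
            circuit.append(stack.pop())
    # Step 4: shortcut repeated visits (safe by the triangle inequality).
    seen, tour = set(), []
    for p in circuit:
        if p not in seen:
            seen.add(p)
            tour.append(p)
    return tour

places = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0.5)]
tour = christofides_route(places, math.dist)
print(tour)  # each place appears exactly once
```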
Here’s an example of our algorithm actually building a route in London using the location graph shown above:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-y_t6IG5RMuE/V-Flb6RQktI/AAAAAAAABN4/91je5cQumFcOI5bgJIliDkk3tX3MoPoAgCLcB/s1600/image01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="274" src="https://3.bp.blogspot.com/-y_t6IG5RMuE/V-Flb6RQktI/AAAAAAAABN4/91je5cQumFcOI5bgJIliDkk3tX3MoPoAgCLcB/s640/image01.png" width="640" /></a></div><b>Itineraries in Google Trips</b><br /><br />With our first good approximate solution to the itineraries problem in hand, we started working with our colleagues from the Google Trips team, and we realized we’d barely scratched the surface. For instance, even if we produce the absolute perfect itinerary, any particular user of the system will very reasonably say, “That’s great, but all my friends say I also need to visit this other place. Plus, I’m only around for the morning, and I don’t want to miss this place you listed in the afternoon. And I’ve already seen Big Ben twice.” So rather than just producing an itinerary once and calling it a perfect day, we needed a fast dynamic algorithm for itineraries that users can modify on the fly to suit their individual taste. And because many people have bad data connections while traveling, the solution had to be efficient enough to run disconnected on a phone.<br /><br /><b>Better Itineraries Through the Wisdom of Crowds</b><br /><br />While the algorithmic aspects of the problem were highly challenging, we realized that producing high-quality itineraries was just as dependent on our understanding of the many possible stopping points on the itinerary. We had Google’s extensive travel database to identify the interesting places to visit, and we also had great data from Google’s existing systems about how to travel from any place to any other. 
But we didn’t have a good sense for how people typically move through this geometry of places. <br /><br />For this, we turned to the wisdom of crowds. This type of wisdom is used by Google to <a href="https://googleblog.blogspot.com/2007/02/stuck-in-traffic.html">estimate delays on highways</a>, and to discover <a href="https://support.google.com/business/answer/6263531?hl=en">when restaurants are most busy</a>. Here, we use the same techniques to learn about common visit sequences that we can stitch together into itineraries that feel good to our users. We combine Google's knowledge of <a href="https://techcrunch.com/2015/07/28/google-search-now-shows-you-when-local-businesses-are-busiest/">when places are popular</a> with the directions between those places to get an idea of what tourists like to do when travelling.<br /><br />And the crowd has a lot more wisdom to offer in the future. For example, we noticed that visits to Buckingham Palace spike around 11:30, with visitors staying a bit longer than at other times of the day. This seemed a little strange to us, but when we looked more closely, it turns out to be the time of the <a href="https://www.royalcollection.org.uk/visit/buckinghampalace/what-to-see-and-do/changing-the-guard">Changing of the Guard</a>. We’re looking now at ways to incorporate this type of timing information into the itinerary selection algorithms.<br /><br />So give it a try: Google Trips, available now on <a href="https://play.google.com/store/apps/details?id=com.google.android.apps.travel.onthego">Android</a> and <a href="https://itunes.apple.com/app/id1081561570?mt=8">iOS</a>, has you covered from departure to return. 
tag:blogger.com,1999:blog-21224994.post-58222562181349520782016-09-15T11:00:00.000-07:002017-01-18T12:12:23.768-08:00Research from VLDB 2016: Improved Friend Suggestion using Ego-Net Analysis<span class="byline-author">Posted by Alessandro Epasto, Research Scientist, Google Research NY</span><br /><br />On September 5 - 9, New Delhi, India hosted the <a href="http://vldb2016.persistent.com/">42nd International Conference on Very Large Data Bases</a> (VLDB), a premier annual forum for academic and industry research on databases, data management, data mining and data analytics. Over the past several years, Google has actively participated in VLDB, both as an official sponsor and with numerous contributions to the research and industrial tracks. In this post, we would like to share the research presented in one of the Google papers from VLDB 2016. <br /><br />In <a href="http://www.vldb.org/pvldb/vol9/p324-epasto.pdf"><i>Ego-net Community Mining Applied to Friend Suggestion</i></a>, co-authored by Googlers <a href="http://research.google.com/pubs/SilvioLattanzi.html">Silvio Lattanzi</a>, <a href="http://research.google.com/pubs/mirrokni.html">Vahab Mirrokni</a>, Ismail Oner Sebe, <a href="http://research.google.com/pubs/AhmedTaei.html">Ahmed Taei</a>, Sunita Verma and <a href="http://research.google.com/pubs/AlessandroEpasto.html">myself</a>, we explore how social networks can provide better friend suggestions to users, a challenging practical problem faced by all social network platforms.<br /><br />Friend suggestion – the task of suggesting to a user the contacts she might already know in the network but that she hasn’t added yet – is a major driver of user engagement and social connection in all online social networks. 
Designing a high-quality system that can provide relevant and useful friend recommendations is very challenging, and requires state-of-the-art machine learning algorithms based on a multitude of parameters. <br /><br />An effective family of features for friend suggestion consists of <a href="https://en.wikipedia.org/wiki/Graph_(mathematics)">graph</a> features such as the <i>number of common friends</i> between two users. While widely used, the number of common friends has some major drawbacks, including the one shown in Figure 1.<br /><div class="separator" style="clear: both; text-align: center;"></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-88-zF6lIbX0/V9rekNAup4I/AAAAAAAABMY/EdJuYRKPC3sE6xfgOeV5xJogkaewvG3JACLcB/s1600/image01.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="492" src="https://4.bp.blogspot.com/-88-zF6lIbX0/V9rekNAup4I/AAAAAAAABMY/EdJuYRKPC3sE6xfgOeV5xJogkaewvG3JACLcB/s640/image01.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: Ego-net of Sally.</td></tr></tbody></table>In this figure we represent the social connections of Sally and her friends – the <i>ego-net</i> of Sally. An ego-net of a node (in this case, Sally) is defined as the graph that contains the node itself, all of the node’s neighbors and the connections among <i>those</i> nodes. Sally has 6 friends in her ego-net: <b>A</b>lbert (her husband), <b>B</b>rian (her son), <b>C</b>harlotte (her mother) as well as <b>U</b>ma (her boss), <b>V</b>incent and <b>W</b>ally (two of her team members). Notice how <b>A</b>, <b>B</b> and <b>C</b> are all connected with each other while they do not know <b>U</b>, <b>V</b> or <b>W</b>. 
On the other hand <b>U</b>, <b>V</b> and <b>W</b> have all added each other as friends (except <b>U</b> and <b>W</b>, who are good friends but somehow forgot to add each other).<br /><br />Notice how each of <b>A</b>, <b>B</b>, <b>C</b> has a common friend with each of <b>U</b>, <b>V</b> and <b>W</b>: Sally herself. A friend recommendation system based on common neighbors might suggest to Sally’s son (for instance) to add Sally’s boss as his friend! In reality the situation is even more complicated, because users’ online and offline friends span several different social circles or communities (family, work, school, sports, etc.). <br /><br />In our paper we introduce a novel technique for friend suggestions based on independently analyzing the ego-net structure. The main contribution of the paper is to show that it is possible to provide friend suggestions efficiently by constructing all ego-nets of the nodes in the graph and then independently applying community detection algorithms on them in large-scale <a href="https://en.wikipedia.org/wiki/MapReduce">distributed systems</a>. <br /><br />Specifically, the algorithm proceeds by constructing the ego-nets of all nodes and applying, independently on each of them, a community detection algorithm. 
More precisely, the algorithm operates on so-called “ego-net-minus-ego” graphs, defined as the graph containing only the neighbors of a given node and the connections among them, as shown in the figure below.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-4taXubdlCw0/V9re8hjeM3I/AAAAAAAABMg/p8HMW4Ztsy4NL973u2fT-xHz0HHyfaM1ACLcB/s1600/image00.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="530" src="https://2.bp.blogspot.com/-4taXubdlCw0/V9re8hjeM3I/AAAAAAAABMg/p8HMW4Ztsy4NL973u2fT-xHz0HHyfaM1ACLcB/s640/image00.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Clustering of the ego-net of Sally.</td></tr></tbody></table>Notice how in this example the ego-net-minus-ego of Sally has two very clear communities: her family (<b>A</b>, <b>B</b>, <b>C</b>) and her co-workers (<b>U</b>, <b>V</b>, <b>W</b>), which are easily separated. Intuitively, this is because one might expect that while nodes (e.g. Sally) participate in many communities, there is usually a single context (or a limited number of contexts) in which two specific neighbors interact. While Sally is part of both her family and work communities, Sally and Uma interact <i>only</i> at work. Through extensive experimental evaluation on large-scale public social networks and formally through a simple mathematical model, our paper confirms this intuition: while communities are hard to separate in a global graph, they are easier to identify at the local level of ego-nets. <br /><br />This yields a novel graph-based method for friend suggestion which, intuitively, only suggests pairs of users that are clustered together in the same community from the point of view of their common friends. 
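The suggestion rule can be illustrated with a small Python sketch of Sally's example, where connected components stand in for the community detection step (the paper uses more sophisticated clustering algorithms; the names below come from the figures above):

```python
from collections import defaultdict

friends = defaultdict(set)
def befriend(u, v):
    friends[u].add(v)
    friends[v].add(u)

# Sally's world: family clique A-B-C, work circle U-V-W with the
# U-W friendship missing, exactly as in the figures.
for u, v in [("Sally", "A"), ("Sally", "B"), ("Sally", "C"),
             ("Sally", "U"), ("Sally", "V"), ("Sally", "W"),
             ("A", "B"), ("A", "C"), ("B", "C"),
             ("U", "V"), ("V", "W")]:
    befriend(u, v)

def ego_net_minus_ego(ego):
    # The graph induced on ego's neighbors, with ego itself removed.
    nbrs = friends[ego]
    return {u: friends[u] & nbrs for u in nbrs}

def communities(graph):
    # Connected components stand in for the community detection step.
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(graph[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def suggestions(ego):
    # Suggest unlinked pairs that share a community in ego's ego-net.
    out = set()
    for comp in communities(ego_net_minus_ego(ego)):
        for u in comp:
            for v in comp:
                if u < v and v not in friends[u]:
                    out.add((u, v))
    return out

print(suggestions("Sally"))  # {('U', 'W')} -- same community, not yet linked
```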
With this method, <b>U</b> and <b>W</b> will be suggested to add each other (as they are in the same community and not yet connected), while <b>B</b> and <b>U</b> will <i>not</i> be suggested as friends, as they span two different communities. <br /><br />From an algorithmic point of view, the paper introduces efficient parallel and distributed techniques for computing and clustering all ego-nets of very large graphs at the same time – a fundamental aspect enabling use of the system on the entire worldwide Google+ graph. We have applied this feature in the “You May Know” system of Google+, resulting in a clear positive impact on the prediction task, improving the acceptance rate by more than 1.5% and decreasing the rejection rate by more than 3.3% (a significant impact at Google scale).<br /><br />We believe that many future directions of work might stem from our preliminary results. For instance, ego-net analysis could potentially be used to automatically classify a user’s contacts into circles and to detect spam. Another interesting direction is the study of ego-network evolution in dynamic graphs. 