Research Blog: The latest news from Research at Google<br /><br /><b>Google at KDD’17: Graph Mining and Beyond</b> (August 23, 2017)<br /><span class="byline-author">Posted by Bryan Perozzi, Research Scientist, NYC Algorithms and Optimization Team</span><br /><br />The <a href="http://www.kdd.org/kdd2017/">23rd ACM conference on Knowledge Discovery and Data Mining</a> (KDD’17), a main venue for academic and industry research in data science, information retrieval, data mining and machine learning, was held last week in Halifax, Canada. Google has historically been an active participant in KDD, and this year was no exception, with Googlers contributing numerous papers and participating in workshops. <br /><br />In addition to our overall participation, we are happy to congratulate fellow Googler Bryan Perozzi for receiving the SIGKDD 2017 Doctoral Dissertation Award, which recognizes excellent research by doctoral candidates in the field of data mining and knowledge discovery. The award was given in recognition of his <a href="http://perozzi.net/publications/16_thesis.pdf">thesis</a> on machine learning on graphs, carried out at Stony Brook University under the supervision of <a href="http://www3.cs.stonybrook.edu/~skiena/">Steven Skiena</a>. Part of his thesis was developed during his internships at Google. The thesis dealt with using a restricted set of local graph primitives (such as ego-networks and truncated random walks) to effectively exploit the information around each vertex for <a href="http://dl.acm.org/citation.cfm?id=2623732">classification</a>, <a href="http://dl.acm.org/citation.cfm?doid=2623330.2623682">clustering</a>, and <a href="http://epubs.siam.org/doi/abs/10.1137/1.9781611974348.24">anomaly detection</a>. 
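To make the truncated-random-walk primitive concrete, here is a minimal Python sketch (the toy graph, function name, and parameters are illustrative and not taken from the thesis or any Google implementation). Each walk it produces can be treated as a "sentence" and fed to a word2vec-style sequence model to learn node embeddings:

```python
import random

def truncated_random_walks(graph, walks_per_node=10, walk_length=6, seed=0):
    """Generate truncated random walks over an adjacency-list graph.

    Each walk is a short sequence of node IDs, later usable as a
    "sentence in the language of the graph" for embedding models.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        nodes = list(graph)
        rng.shuffle(nodes)              # visit start nodes in random order
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = graph[walk[-1]]
                if not neighbors:       # dead end: truncate the walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Hypothetical 4-node graph, purely for illustration.
toy_graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
walks = truncated_random_walks(toy_graph)
```

In a full pipeline, `walks` would be passed to a skip-gram model, so that nodes appearing in similar walk contexts receive similar vectors.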
Most notably, the work introduced the random-walk paradigm for graph embedding with neural networks in DeepWalk.<br /><br /><a href="http://dl.acm.org/citation.cfm?id=2623732">DeepWalk: Online Learning of Social Representations</a>, originally presented at KDD'14, outlines a method that uses local information obtained from truncated random walks to learn <i>latent</i> representations of nodes in a graph (e.g., users in a social network). The core idea was to treat each segment of a random walk as a sentence “in the language of the graph.” These segments could then be used as input for neural network models to learn representations of the graph’s nodes, using sequence modeling methods like <a href="https://en.wikipedia.org/wiki/Word2vec">word2vec</a> (which had just been developed at the time). This research continues at Google, most recently with <a href="https://arxiv.org/abs/1705.05615">Learning Edge Representations via Low-Rank Asymmetric Projections</a>.<br /><br />The full list of Google contributions at KDD’17 is below (Googlers highlighted in <span style="color: #3d85c6;">blue</span>).<br /><br /><b><u>Organizing Committee</u></b><br />Panel Chair: <span style="color: #3d85c6;"><i>Andrew Tomkins </i></span><br />Research Track Program Chair: <i><span style="color: #3d85c6;">Ravi Kumar </span></i><br />Applied Data Science Track Program Chair: <span style="color: #3d85c6;"><i>Roberto J. 
Bayardo </i></span><br />Research Track Program Committee: <i><span style="color: #3d85c6;">Sergei Vassilvitskii</span></i><i>,</i><i><span style="color: #3d85c6;"> Alex Beutel</span></i><i>,</i><i><span style="color: #3d85c6;"> Abhimanyu Das</span></i><i>,</i><i><span style="color: #3d85c6;"> Nan Du</span></i><i>,</i><i><span style="color: #3d85c6;"> Alessandro Epasto</span></i><i>,</i><i><span style="color: #3d85c6;"> Alex Fabrikant</span></i><i>,</i><i><span style="color: #3d85c6;"> Silvio Lattanzi</span></i><i>,</i><i><span style="color: #3d85c6;"> Kristen Lefevre</span></i><i>,</i><i><span style="color: #3d85c6;"> Bryan Perozzi</span></i><i>,</i><i><span style="color: #3d85c6;"> Karthik Raman</span></i><i>,</i><i><span style="color: #3d85c6;"> Steffen Rendle</span></i><i>,</i><i><span style="color: #3d85c6;"> Xiao Yu</span></i><br />Applied Data Science Program Track Committee: <i><span style="color: #3d85c6;">Edith Cohen</span></i><i>,</i><i><span style="color: #3d85c6;"> Ariel Fuxman</span></i><i>,</i><i><span style="color: #3d85c6;"> D. 
Sculley</span></i><i>,</i><i><span style="color: #3d85c6;"> Isabelle Stanton</span></i><i>,</i><i><span style="color: #3d85c6;"> Martin Zinkevich</span></i><i>,</i><i><span style="color: #3d85c6;"> Amr Ahmed</span></i><i>,</i><i><span style="color: #3d85c6;"> Azin Ashkan</span></i><i>,</i><i><span style="color: #3d85c6;"> Michael Bendersky</span></i><i>,</i><i><span style="color: #3d85c6;"> James Cook</span></i><i>,</i><i><span style="color: #3d85c6;"> Nan Du</span></i><i>,</i><i><span style="color: #3d85c6;"> Balaji Gopalan</span></i><i>,</i><i><span style="color: #3d85c6;"> Samuel Huston</span></i><i>,</i><i><span style="color: #3d85c6;"> Konstantinos Kollias</span></i><i>,</i><i><span style="color: #3d85c6;"> James Kunz</span></i><i>,</i><i><span style="color: #3d85c6;"> Liang Tang</span></i><i>,</i><i><span style="color: #3d85c6;"> Morteza Zadimoghaddam</span></i><br /><br /><b><u>Awards</u></b><br />Doctoral Dissertation Award: <span style="color: #3d85c6;"><i>Bryan Perozzi</i></span>, for <a href="http://perozzi.net/publications/16_thesis.pdf">Local Modeling of Attributed Graphs: Algorithms and Applications</a>.<br /><br />Doctoral Dissertation Runner-up Award: <i><span style="color: #3d85c6;">Alex Beutel</span></i>, for <a href="http://alexbeutel.com/papers/CMU-CS-16-105.pdf">User Behavior Modeling with Large-Scale Graph Analysis</a>.<br /><br /><b><u>Papers</u></b><br /><a href="http://www.kdd.org/kdd2017/papers/view/ego-splitting-framework-from-non-overlapping-to-overlapping-clusters">Ego-Splitting Framework: from Non-Overlapping to Overlapping Clusters</a><br /><i><span style="color: #3d85c6;">Alessandro Epasto</span></i><i>,</i><i><span style="color: #3d85c6;"> Silvio Lattanzi</span></i><i>,</i><i><span style="color: #3d85c6;"> Renato Paes Leme</span></i><br /><br /><a 
href="http://dl.acm.org/citation.cfm?id=3098020">HyperLogLog Hyperextended: Sketches for Concave Sublinear Frequency Statistics</a><br /><span style="color: #3d85c6;"><i>Edith Cohen</i></span><br /><br /><a href="http://dl.acm.org/citation.cfm?id=3098043">Google Vizier: A Service for Black-Box Optimization</a><br /><i><span style="color: #3d85c6;">Daniel Golovin</span></i><i>,</i><i><span style="color: #3d85c6;"> Benjamin Solnik</span></i><i>,</i><i><span style="color: #3d85c6;"> Subhodeep Moitra</span></i><i>,</i><i><span style="color: #3d85c6;"> Greg Kochanski</span></i><i>,</i><i><span style="color: #3d85c6;"> John Karro</span></i><i>,</i><i><span style="color: #3d85c6;"> D. Sculley</span></i><br /><br /><a href="http://www.kdd.org/kdd2017/papers/view/quick-access-building-a-smart-experience-for-google-drive">Quick Access: Building a Smart Experience for Google Drive</a><br /><i><span style="color: #3d85c6;">Sandeep Tata</span></i><i>,</i><i><span style="color: #3d85c6;"> Alexandrin Popescul</span></i><i>,</i><i><span style="color: #3d85c6;"> Marc Najork</span></i><i>,</i><i><span style="color: #3d85c6;"> Mike Colagrosso</span></i><i>,</i><i><span style="color: #3d85c6;"> Julian Gibbons</span></i><i>,</i><i><span style="color: #3d85c6;"> Alan Green</span></i><i>,</i><i><span style="color: #3d85c6;"> Alexandre Mah</span></i><i>,</i><i><span style="color: #3d85c6;"> Michael Smith</span></i><i>,</i><i><span style="color: #3d85c6;"> Divanshu Garg</span></i><i>,</i><i><span style="color: #3d85c6;"> Cayden Meyer</span></i><i>,</i><i><span style="color: #3d85c6;"> Reuben Kan</span></i><br /><br /><a href="http://dl.acm.org/citation.cfm?id=3098021">TFX: A TensorFlow Based Production Scale Machine Learning Platform</a><br /><i><span style="color: 
#3d85c6;">Denis Baylor</span></i><i>,</i><i><span style="color: #3d85c6;"> Eric Breck</span></i><i>,</i><i><span style="color: #3d85c6;"> Heng-Tze Cheng</span></i><i>,</i><i><span style="color: #3d85c6;"> Noah Fiedel</span></i><i>,</i><i><span style="color: #3d85c6;"> Chuan Yu Foo</span></i><i>,</i><i><span style="color: #3d85c6;"> Zakaria Haque</span></i><i>,</i><i><span style="color: #3d85c6;"> Salem Haykal</span></i><i>,</i><i><span style="color: #3d85c6;"> Mustafa Ispir</span></i><i>,</i><i><span style="color: #3d85c6;"> Vihan Jain</span></i><i>,</i><i><span style="color: #3d85c6;"> Levent Koc</span></i><i>,</i><i><span style="color: #3d85c6;"> Chiu Yuen Koo</span></i><i>,</i><i><span style="color: #3d85c6;"> Lukasz Lew</span></i><i>,</i><i><span style="color: #3d85c6;"> Clemens Mewald</span></i><i>, </i><i><span style="color: #3d85c6;">Akshay Modi</span></i><i>,</i><i><span style="color: #3d85c6;"> Neoklis Polyzotis</span></i><i>,</i><i><span style="color: #3d85c6;"> Sukriti Ramesh</span></i><i>,</i><i><span style="color: #3d85c6;"> Sudip Roy</span></i><i>,</i><i><span style="color: #3d85c6;"> Steven Whang</span></i><i>,</i><i><span style="color: #3d85c6;"> Martin Wicke</span></i><i>, </i><i><span style="color: #3d85c6;"> Jarek Wilkiewicz</span></i><i>,</i><i><span style="color: #3d85c6;"> Xin Zhang</span></i><i>,</i><i><span style="color: #3d85c6;"> Martin Zinkevich</span></i><br /><br /><a href="http://www.kdd.org/kdd2017/papers/view/construction-of-directed-2k-graphs">Construction of Directed 2K Graphs</a><br /><i>Balint Tillman, Athina Markopoulou, Carter T. 
Butts, <span style="color: #3d85c6;">Minas Gjoka</span></i><br /><br /><a href="http://www.kdd.org/kdd2017/papers/view/a-practical-algorithm-for-solving-the-incoherence-problem-of-topic-models-i">A Practical Algorithm for Solving the Incoherence Problem of Topic Models In Industrial Applications </a><br /><i><span style="color: #3d85c6;">Amr Ahmed</span></i><i>,</i><i><span style="color: #3d85c6;"> James Long</span></i><i>,</i><i><span style="color: #3d85c6;"> Dan Silva</span></i><i>,</i><i><span style="color: #3d85c6;"> Yuan Wang</span></i><br /><br /><a href="http://www.kdd.org/kdd2017/papers/view/train-and-distribute-managing-simplicity-vs.-flexibility-in-high-level-mach">Train and Distribute: Managing Simplicity vs. Flexibility in High-Level Machine Learning Frameworks </a><br /><i><span style="color: #3d85c6;">Heng-Tze Cheng</span>,<span style="color: #3d85c6;"> Lichan Hong</span>,<span style="color: #3d85c6;"> Mustafa Ispir</span>,<span style="color: #3d85c6;"> Clemens Mewald</span>,<span style="color: #3d85c6;"> Zakaria Haque</span>,<span style="color: #3d85c6;"> Illia Polosukhin</span>,<span style="color: #3d85c6;"> Georgios Roumpos</span>,<span style="color: #3d85c6;"> D Sculley, Jamie Smith</span>,<span style="color: #3d85c6;"> David Soergel</span>,<span style="color: #3d85c6;"> </span>Yuan Tang,<span style="color: #3d85c6;"> Philip Tucker</span>,<span style="color: #3d85c6;"> Martin Wicke</span>,<span style="color: #3d85c6;"> Cassandra Xia</span>,<span style="color: #3d85c6;"> Jianwei Xie</span></i><br /><br /><a href="http://www.kdd.org/kdd2017/papers/view/learning-to-count-mosquitoes-for-the-sterile-insect-technique">Learning to Count Mosquitoes for the Sterile Insect Technique</a><br /><i><span style="color: #3d85c6;">Yaniv Ovadia</span></i><i>,</i><i><span style="color: #3d85c6;"> Yoni Halpern</span></i><i>,</i><i><span style="color: #3d85c6;"> Dilip Krishnan</span>, Josh Livni, Daniel Newburger, <span style="color: #3d85c6;">Ryan Poplin</span>, 
Tiantian Zha, <span style="color: #3d85c6;">D. Sculley</span></i><br /><br /><b><u>Workshops</u></b><br /><a href="http://www.mlgworkshop.org/2017/">13th International Workshop on Mining and Learning with Graphs</a><br />Keynote Speaker: <i><span style="color: #3d85c6;">Vahab Mirrokni</span> - Distributed Graph Mining: Theory and Practice</i><br />Contributed talks include:<br /><a href="https://arxiv.org/pdf/1706.07845.pdf">HARP: Hierarchical Representation Learning for Networks</a><br /><i>Haochen Chen, <span style="color: #3d85c6;">Bryan Perozzi</span>, Yifan Hu and Steven Skiena</i><br /><br /><a href="http://www.fatml.org/">Fairness, Accountability, and Transparency in Machine Learning</a><br />Contributed talks include:<br /><a href="http://www.fatml.org/media/documents/fair_clustering_through_fairlets.pdf">Fair Clustering Through Fairlets </a><br /><i>Flavio Chierichetti, <span style="color: #3d85c6;">Ravi Kumar</span>,<span style="color: #3d85c6;"> Silvio Lattanzi</span></i><i>,</i><i><span style="color: #3d85c6;"> Sergei Vassilvitskii</span></i><br /><a href="https://arxiv.org/pdf/1707.00075.pdf">Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations</a><br /><i><span style="color: #3d85c6;">Alex Beutel</span></i><i>,</i><i><span style="color: #3d85c6;"> Jilin Chen</span></i><i>,</i><i><span style="color: #3d85c6;"> Zhe Zhao</span></i><i>,</i><i><span style="color: #3d85c6;"> Ed H. 
Chi</span></i><br /><br /><b><u>Tutorial</u></b><br /><a href="https://github.com/random-forests/tensorflow-workshop">TensorFlow</a><br /><i><span style="color: #3d85c6;">Rajat Monga</span></i><i>,</i><i><span style="color: #3d85c6;"> Martin Wicke</span></i><i>,</i><i><span style="color: #3d85c6;"> Daniel ‘Wolff’ Dobson</span></i><i>,</i><i><span style="color: #3d85c6;"> Joshua Gordon</span></i><br /><br /><b>Announcing the NYC Algorithms and Optimization Site</b> (August 21, 2017)<br /><span class="byline-author">Posted by Vahab Mirrokni, Principal Research Scientist and Xerxes Dotiwalla, Product Manager, NYC Algorithms and Optimization Team</span><br /><br />New York City is home to several Google algorithms research groups. We collaborate closely with the teams behind many Google products and work on a wide variety of algorithmic challenges, like <a href="https://research.googleblog.com/2017/04/consistent-hashing-with-bounded-loads.html">optimizing infrastructure</a>, <a href="https://research.googleblog.com/2015/08/kdd-2015-best-research-paper-award.html">protecting privacy</a>, <a href="https://research.googleblog.com/2016/09/research-from-vldb-2016-improved-friend.html">improving friend suggestions</a> and much more.<br /><br />Today, we’re excited to provide more insights into the research done in the Big Apple with the launch of the <a href="https://research.google.com/teams/nycalg/">NYC Algorithms and Optimization Team page</a>. The NYC Algorithms and Optimization Team comprises multiple overlapping research groups working on large-scale graph mining, large-scale optimization and market algorithms. 
<br /><br /><b>Large-scale Graph Mining</b><br />The <a href="https://research.google.com/teams/nycalg/graph-mining/">Large-scale Graph Mining Group</a> is tasked with building the most scalable library for graph algorithms and analysis and applying it to a multitude of Google products. We formalize data mining and machine learning challenges as graph algorithms problems and perform fundamental research in those fields leading to publications in top venues.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-qLyPm7efRy4/WZdxY-TVDkI/AAAAAAAAB9o/QUO-W9rV8ys4b9OnmaaCLajyL-YgPPE-wCLcBGAs/s1600/image3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1000" height="320" src="https://3.bp.blogspot.com/-qLyPm7efRy4/WZdxY-TVDkI/AAAAAAAAB9o/QUO-W9rV8ys4b9OnmaaCLajyL-YgPPE-wCLcBGAs/s320/image3.png" width="320" /></a></div><br />Our projects include:<br /><ul><li><b>Large-scale Similarity Ranking:</b> Our research in pairwise similarity ranking has produced a number of innovative methods, which we have published in top venues such as WWW, ICML, and VLDB, e.g., improving friend suggestion using <a href="https://research.google.com/pubs/pub44265.html">ego-networks</a> and <a href="https://research.google.com/pubs/pub42479.html">computing similarity rankings in large-scale multi-categorical bipartite graphs</a>.</li><li><b>Balanced Partitioning:</b> Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems. 
As <a href="https://research.google.com/pubs/pub44315.html">our paper</a> shows, we are able to achieve a 15-25% reduction in cut size compared to state-of-the-art algorithms in the literature.</li><li><b>Clustering and Connected Components:</b> We have state-of-the-art implementations of many different algorithms including hierarchical clustering, overlapping clustering, <a href="https://research.google.com/pubs/pub41596.html">local clustering</a>, spectral clustering, and <a href="https://research.google.com/pubs/pub43122.html">connected components</a>. Our methods are 10-30x faster than the best previously studied algorithms and can scale to graphs with trillions of edges.</li><li><b>Public-private Graph Computation:</b> Our <a href="http://dl.acm.org/citation.cfm?doid=2783258.2783354">research</a> on novel models of graph computation based on a personal view of private data preserves the privacy of each user.</li></ul><b>Large-scale Optimization</b><br />The <a href="https://research.google.com/teams/nycalg/large-scale-optimization/">Large-scale Optimization Group</a>’s mission is to develop large-scale optimization techniques and use them to improve the efficiency and robustness of infrastructure at Google. We apply techniques from areas such as combinatorial optimization, online algorithms, and control theory to make Google’s massive computational infrastructure do more with less. We combine online and offline optimizations to achieve such goals as increasing throughput, decreasing latency, minimizing resource contention, maximizing the efficacy of caches, and eliminating unnecessary work in distributed systems. 
<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-lTVKCQqtrCk/WZdxeSplRZI/AAAAAAAAB9s/VO-k8NNwD7oRLgxW_dDNt0oqc8lElPQ8QCLcBGAs/s1600/image1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1000" height="320" src="https://3.bp.blogspot.com/-lTVKCQqtrCk/WZdxeSplRZI/AAAAAAAAB9s/VO-k8NNwD7oRLgxW_dDNt0oqc8lElPQ8QCLcBGAs/s320/image1.png" width="320" /></a></div><br />Our research is used in critical infrastructure that supports core products:<br /><ul><li><b>Consistent Hashing:</b> We <a href="https://research.google.com/pubs/pub45756.html">designed memoryless balanced allocation algorithms</a> to assign a dynamic set of clients to a dynamic set of servers such that the load on each server is bounded, and the allocation does not change by much for every update operation. This technique is currently implemented in <a href="https://cloud.google.com/pubsub/">Google Cloud Pub/Sub</a> and <a href="https://research.googleblog.com/2017/04/consistent-hashing-with-bounded-loads.html">externally</a> in the open-source <a href="https://github.com/arodland/haproxy/commit/b02bed24daf64743cb9a571e93ed29ee4bc7efe7">haproxy</a>.</li><li><b>Distributed Optimization Based on Core-sets:</b> <a href="https://research.google.com/pubs/pub44219.html">Composable core-sets</a> provide an effective method for solving optimization problems on massive datasets. This technique can be used for several problems including <a href="https://research.google.com/pubs/pub42964.html">distributed balanced clustering</a> and <a href="https://research.google.com/pubs/pub44222.html">distributed submodular maximization</a>.</li><li><b>Google Search Infrastructure Optimization:</b> We partnered with the Google Search infrastructure team to build a distributed feedback control loop to govern the way queries are fanned out to machines. 
We also improved the efficacy of caching by increasing the homogeneity of the stream of queries seen by any single machine.</li></ul><b>Market Algorithms</b><br />The Market Algorithms Group analyzes, designs, and delivers economically and computationally efficient marketplaces across Google. Our research serves to optimize display ads for DoubleClick’s reservation ads and exchange, as well as sponsored search and mobile ads.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-a37jDHWP5nE/WZdxjDk0STI/AAAAAAAAB9w/V6yOF-Lxwm0AkGyGkkoIodAmX4z64It-ACLcBGAs/s1600/image2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1000" height="320" src="https://1.bp.blogspot.com/-a37jDHWP5nE/WZdxjDk0STI/AAAAAAAAB9w/V6yOF-Lxwm0AkGyGkkoIodAmX4z64It-ACLcBGAs/s320/image2.png" width="320" /></a></div><br />In the past few years, we have explored a number of areas, including:<br /><ul><li><b>Display Ads Research:</b> The Display ads ecosystem provides a great platform for a variety of research problems in online stochastic optimization and computational economics, such as <a href="https://research.google.com/pubs/pub41755.html">whole-page optimization</a> and optimal contract design. 
An important part of this research area is dedicated to auction optimization for advertising exchanges where we deal with <a href="https://research.google.com/pubs/pub36634.html">auctions with intermediaries</a>, <a href="https://research.google.com/pubs/pub45185.html">optimal pricing strategies</a>, and <a href="https://research.google.com/pubs/pub36975.html">optimal yield management</a> for reservation contracts and ad exchanges.</li><li><b>Online Stochastic Matching:</b> We have developed new algorithms for <a href="https://research.google.com/pubs/pub35487.html">online stochastic matching</a>, <a href="https://research.google.com/pubs/pub37475.html">budgeted allocation</a>, <a href="https://research.google.com/pubs/pub44231.html">handling traffic spikes</a>, and more general variants of the problem, called <a href="https://research.google.com/pubs/pub44224.html">submodular welfare maximization</a>.</li><li><b>Robust Stochastic Allocation:</b> In <a href="https://research.google.com/pubs/pub37475.html">one paper</a>, we study online algorithms that achieve a good performance in both adversarial and stochastic arrival models. In <a href="https://research.google.com/pubs/pub44231.html">another paper</a>, we develop a hybrid model and algorithms with approximation factors that change as a function of the accuracy of the forecast.</li><li><b>Optimizing Advertiser Campaigns:</b> We have studied algorithmic questions such as <a href="https://research.google.com/pubs/pub40688.html">positive carryover effects</a>, <a href="https://research.google.com/pubs/pub32834.html">budget optimization in search-based auctions</a>, and <a href="https://research.google.com/pubs/pub42963.html">concise bid optimization strategies with multiple budget constraints</a>.</li><li><b>Dynamic Mechanism Design:</b> We have developed efficient mechanisms for sophisticated settings that occur in internet advertising, such as online settings and polyhedral constraints. 
We have also designed a new family of <a href="https://research.google.com/pubs/pub45752.html">dynamic mechanisms</a>, called <a href="https://research.google.com/pubs/pub45750.html">bank account mechanisms</a>, and showed their effectiveness in designing <a href="https://research.google.com/pubs/pub45751.html">non-clairvoyant dynamic mechanisms</a> that can be implemented without relying on forecasting the future steps.</li></ul>For a summary of our research activities, you can take a look at talks at our <a href="https://sites.google.com/site/marketalgorithms/market-algorithms-workshop">recent market algorithms workshop</a>.<br /><br />It is our hope that, with the help of this new <a href="https://research.google.com/teams/nycalg/">Google NYC Algorithms and Optimization Team page</a>, we can more effectively share our work and broaden our dialogue with the research and engineering community. Please visit the site to learn about our latest projects, <a href="https://sites.google.com/corp/view/nycalgorithms/home">publications</a>, <a href="https://sites.google.com/site/nycresearchseminar/">seminars</a>, and research areas!<br /><br /><b>Consistent Hashing with Bounded Loads</b> (April 3, 2017)<br /><span class="byline-author">Posted by Vahab Mirrokni, Principal Scientist, Morteza Zadimoghaddam, Research Scientist, NYC Algorithms Team</span><br /><br />Running a large-scale web service, such as content hosting, necessarily requires <a href="https://en.wikipedia.org/wiki/Load_balancing_(computing)">load balancing</a> — distributing clients <i>uniformly</i> across multiple servers such that none get overloaded. Further, it is desirable to find an allocation that does not change very much over time in a <i>dynamic</i> environment in which both clients and servers can be added or removed at any time. 
In other words, we need the allocation of clients to servers to be <i>consistent</i> over time.<br /><br />In collaboration with <a href="http://www.diku.dk/~mthorup/">Mikkel Thorup</a>, a visiting researcher from the University of Copenhagen, we developed a new efficient allocation algorithm for this problem with <i>tight guarantees</i> on the maximum load of each server, and studied it theoretically and empirically. We then worked with our Cloud team to implement it in <a href="https://cloud.google.com/pubsub/">Google Cloud Pub/Sub</a>, a scalable event streaming service, and observed a substantial improvement in the uniformity of the load allocation (in terms of the maximum load assigned to servers) while maintaining our consistency and stability objectives. In August 2016 we described our algorithm in the paper “<a href="https://arxiv.org/abs/1608.01350">Consistent Hashing with Bounded Loads</a>”, and shared it on arXiv for potential use by the broader research community. <br /><br />Three months later, Andrew Rodland from <a href="https://vimeo.com/">Vimeo</a> informed us that he had found the paper, implemented it in <a href="https://github.com/arodland/haproxy/commit/b02bed24daf64743cb9a571e93ed29ee4bc7efe7">haproxy</a> (a widely used piece of open-source software), and used it for their load-balancing project at Vimeo. The results were dramatic: applying these algorithmic ideas helped them decrease cache bandwidth by a factor of almost 8, eliminating a scaling bottleneck. He recently summarized this story in a <a href="https://medium.com/vimeo-engineering-blog/improving-load-balancing-with-a-new-consistent-hashing-algorithm-9f1bd75709ed">blog post</a> detailing his use case. Needless to say, we were excited to learn that our theoretical research was not only put into application, but also that it was useful <i>and</i> open-sourced. 
<br /><br /><b>Background</b><br />While the concept of <a href="https://en.wikipedia.org/wiki/Consistent_hashing">consistent hashing</a> has been developed in the past to deal with load balancing in dynamic environments, a fundamental issue with all the previously developed schemes is that, in certain scenarios, they may result in sub-optimal load balancing on many servers. <br /><br />Additionally, both clients and servers may be added or removed periodically, and with such changes, we do not want to move too many clients. Thus, while the dynamic allocation algorithm has to always ensure a proper load balancing, it should also aim to minimize the number of clients moved after each change to the system. Such allocation problems become even more challenging when we face hard constraints on the capacity of each server - that is, each server has a capacity that the load may not exceed. Typically, we want capacities close to the average loads. <br /><br />In other words, we want to simultaneously achieve both <i>uniformity</i> and <i>consistency</i> in the resulting allocations. There is a vast amount of literature on solutions in the much simpler case where the set of servers is fixed and only the client set is updated, but in this post we discuss solutions that are relevant in the fully <i>dynamic</i> case where both clients and servers can be added and removed. <br /><br /><b>The Algorithm</b><br />We can think about the servers as bins and clients as balls to have a similar notation with well-studied <a href="https://en.wikipedia.org/wiki/Balls_into_bins">balls-to-bins stochastic processes</a>. The uniformity objective encourages all bins to have a load roughly equal to the average density (the number of balls divided by the number of bins). For some parameter ε, we set the capacity of each bin to either <a href="https://en.wikipedia.org/wiki/Floor_and_ceiling_functions">floor or ceiling</a> of the average load times (1+ε). 
This extra capacity allows us to design an allocation algorithm that meets the consistency objective in addition to the uniformity property. <br /><br />Imagine a given range of numbers overlaid on a circle. We apply a hash function to balls and a separate hash function to bins to obtain numbers in that range that correspond to positions on that circle. We then start allocating balls in a specific order independent of their hash values (let’s say based on their ID). Then each ball is moved clockwise and is assigned to the first bin with spare capacity. <br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-pgZ4b9H7VlM/WOJ91rDe_XI/AAAAAAAABqw/wIjtyPHheFgyHpXIqY4qNLhd_H9DnHsXACLcB/s1600/image00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="588" src="https://3.bp.blogspot.com/-pgZ4b9H7VlM/WOJ91rDe_XI/AAAAAAAABqw/wIjtyPHheFgyHpXIqY4qNLhd_H9DnHsXACLcB/s640/image00.png" width="640" /></a></div>Consider the example above where 6 balls and 3 bins are assigned using two separate hash functions to random locations on the circle. For the sake of this instance, assume the capacity of each bin is set to 2. We start allocating balls in the increasing order of their ID values. Ball number 1 moves clockwise, and goes to bin C. Ball number 2 goes to A. Balls 3 and 4 go to bin B. Ball number 5 goes to bin C. Then ball number 6 moves clockwise and hits bin B first. However, bin B has capacity 2 and already contains balls 3 and 4. So ball 6 keeps moving to reach bin C, but that bin is also full. Finally, ball 6 ends up in bin A, which has a spare slot for it.<br /><br />Upon any update in the system (ball or bin insertion/deletion), the allocation is recomputed to keep the uniformity objective. The art of the analysis is to show that a small update (a few insertions and deletions) results in minor changes in the state of the allocation, and therefore the consistency objective is met. 
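The first-fit rule described above can be sketched in a few lines of Python. This is a toy illustration only, with made-up server names and SHA-256 standing in for the hash functions; it is not the Cloud Pub/Sub or haproxy implementation:

```python
import hashlib
import math

def position(key):
    """Hash a key to a point on the unit circle [0, 1)."""
    h = hashlib.sha256(str(key).encode()).hexdigest()
    return int(h, 16) / 16 ** 64

def allocate(balls, bins, eps):
    """Assign each ball, in ID order, to the first bin clockwise of its
    hash position that still has spare capacity."""
    capacity = math.ceil((1 + eps) * len(balls) / len(bins))
    ring = sorted(bins, key=position)          # bins ordered around the circle
    load = {b: 0 for b in bins}
    assignment = {}
    for ball in sorted(balls):
        p = position(("ball", ball))
        # Index of the first bin clockwise of p (wrap around if needed).
        i = next((k for k, b in enumerate(ring) if position(b) >= p), 0)
        while load[ring[i]] >= capacity:       # skip full bins
            i = (i + 1) % len(ring)
        assignment[ball] = ring[i]
        load[ring[i]] += 1
    return assignment, capacity

balls = list(range(100))
bins = ["server-%d" % j for j in range(10)]
assignment, cap = allocate(balls, bins, eps=0.3)
```

Because total capacity is at least (1+ε) times the number of balls, the clockwise scan always finds a bin with a spare slot, so every ball is placed and no bin ever exceeds ⌈(1+ε)·average⌉.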
In <a href="https://arxiv.org/abs/1608.01350">our paper</a> we show that every ball removal or insertion in the system results in O(1/ε<sup>2</sup>) movements of other balls. The most important thing about this upper bound is that it is independent of the total number of balls or bins in the system. So if the number of balls or bins is doubled, this bound will not change. Having an upper bound independent of the number of balls or bins introduces room for scalability, as the consistency objective is not violated if we move to bigger instances. Simulations of the number of movements (relocations) per update, when the update occurs on a bin/server, are shown below. <br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-IJFOHvSomXY/WOJ-AecmDaI/AAAAAAAABq0/wVAwJd8jxNs7cT30aU0ek3_WpPzYYSO9ACLcB/s1600/image01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="505" src="https://3.bp.blogspot.com/-IJFOHvSomXY/WOJ-AecmDaI/AAAAAAAABq0/wVAwJd8jxNs7cT30aU0ek3_WpPzYYSO9ACLcB/s640/image01.png" width="640" /></a></div>The red curve shows the average number of movements and the blue bars indicate the variance for different values of ε (the x-axis). The dashed curve is the upper bound suggested by our theoretical results, which fits nicely as a prediction of the actual number of movements. Furthermore, for any value of ε, we know the load of each bin is at most (1+ε) times the average load. 
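A small version of this experiment can be reproduced by recomputing the allocation before and after a server removal and counting the balls whose bin changes. The sketch below uses invented server names and SHA-256 hashing; since the clockwise first-fit allocation is determined entirely by the hash positions and the ball order, recomputing it from scratch yields the same end state an incremental update would reach:

```python
import hashlib
import math

def position(key):
    """Hash a key to a point on the unit circle [0, 1)."""
    return int(hashlib.sha256(str(key).encode()).hexdigest(), 16) / 16 ** 64

def allocate(balls, bins, eps):
    """Canonical clockwise first-fit allocation (balls placed in ID order)."""
    capacity = math.ceil((1 + eps) * len(balls) / len(bins))
    ring = sorted(bins, key=position)
    load = {b: 0 for b in bins}
    out = {}
    for ball in sorted(balls):
        p = position(("ball", ball))
        i = next((k for k, b in enumerate(ring) if position(b) >= p), 0)
        while load[ring[i]] >= capacity:
            i = (i + 1) % len(ring)
        out[ball] = ring[i]
        load[ring[i]] += 1
    return out, capacity

balls = list(range(200))
bins = ["server-%d" % j for j in range(20)]
before, _ = allocate(balls, bins, eps=0.5)
# Remove one server and recompute; the balls whose bin changed are
# exactly the movements caused by this single update.
after, cap_after = allocate(balls, bins[:-1], eps=0.5)
moved = [b for b in balls if before[b] != after[b]]
```

The balls that sat on the removed server must move, and the O(1/ε<sup>2</sup>) bound says the extra cascading movements stay small regardless of how many balls or bins the instance has.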
Below we see the load distribution of bins for different values of ε: 0.1, 0.3 and 0.9.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-PUdBAM5bDKk/WOJ-Hm3HAgI/AAAAAAAABq4/iREEzcJjdIQ7YYYE6bfGIEsbQALIozEKgCLcB/s1600/image02.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="532" src="https://1.bp.blogspot.com/-PUdBAM5bDKk/WOJ-Hm3HAgI/AAAAAAAABq4/iREEzcJjdIQ7YYYE6bfGIEsbQALIozEKgCLcB/s640/image02.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The distribution of loads for several values of ε. The load distribution is nearly uniform, covering the full range of loads from 0 to (1+ε) times the average, with many bins at a load equal to (1+ε) times the average.</td></tr></tbody></table>As one can see, there is a tradeoff: a lower ε helps with uniformity but not with consistency, while larger values of ε help with consistency. A lower ε forces many loads up to the hard capacity limit of (1+ε) times the average, with the remaining bins following a decaying distribution.<br /><br />When providing content hosting services, one must be ready to face a variety of instances with different characteristics. This consistent hashing scheme is ideal for such scenarios, as it performs well even on worst-case instances. <br /><br />While our internal results are exciting, we are even more pleased that the broader community found our solution useful enough to <a href="https://github.com/arodland/haproxy">open-source</a> an implementation, allowing anyone to use this algorithm. 
If you are interested in further details of this research, please see the <a href="https://arxiv.org/abs/1608.01350">paper</a> on ArXiv, and stay tuned for more research from the <a href="https://research.google.com/teams/nycalg/">NYC Algorithms Team</a>!<br /><br /><b>Acknowledgements:</b><br />We would like to thank Alex Totok, Matt Gruskin, Sergey Kondratyev and Haakon Ringberg from the Google Cloud Pub/Sub team, and of course <a href="http://www.diku.dk/~mthorup/">Mikkel Thorup</a> for his invaluable contributions to this paper.Research Bloghttps://plus.google.com/101673966767287570260noreply@blogger.com0tag:blogger.com,1999:blog-21224994.post-70975319191334731112016-09-20T10:00:00.000-07:002016-09-20T10:00:16.001-07:00The 280-Year-Old Algorithm Inside Google Trips<span class="byline-author">Posted by Bogdan Arsintescu, Software Engineer & Sreenivas Gollapudi, Kostas Kollias, Tamas Sarlos and Andrew Tomkins, Research Scientists<br /></span><br /><br /><a href="https://en.wikipedia.org/wiki/Algorithm_engineering">Algorithms Engineering</a> is a lot of fun because algorithms do not go out of fashion: one never knows when an oldie-but-goodie might come in handy. Case in point: Yesterday, Google <a href="https://googleblog.blogspot.com/2016/09/see-more-plan-less-try-google-trips.html">announced Google Trips</a>, a new app to assist you in your travels by helping you create your own “perfect day” in a city. Surprisingly, deep inside Google Trips, there is an algorithm that was invented 280 years ago. 
<br /><br />In 1736, <a href="https://en.wikipedia.org/wiki/Leonhard_Euler">Leonhard Euler</a> authored a brief but <a href="http://eulerarchive.maa.org//docs/originals/E053.pdf">beautiful mathematical paper</a> regarding the town of Königsberg and its 7 bridges, shown here:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-HRkkgmmGB3Y/V-Fk43DGYkI/AAAAAAAABNw/j5c6gQMUsjAWtMWwMBnQ-D35sA8l0-McQCLcB/s1600/image05.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="504" src="https://4.bp.blogspot.com/-HRkkgmmGB3Y/V-Fk43DGYkI/AAAAAAAABNw/j5c6gQMUsjAWtMWwMBnQ-D35sA8l0-McQCLcB/s640/image05.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Image from <a href="https://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg">Wikipedia</a></td></tr></tbody></table>In the paper, Euler studied the following question: is it possible to walk through the city crossing each bridge exactly once? As it turns out, for the city of Königsberg, the answer is no. To reach this answer, Euler developed a general approach to represent any layout of landmasses and bridges in terms of what he dubbed the <i>Geometriam Situs</i> (the “Geometry of Place”), which we now call <a href="https://en.wikipedia.org/wiki/Graph_theory">Graph Theory</a>. 
He represented each landmass as a “node” in the graph, and each bridge as an “edge,” like this:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-wRl8WjuCA-c/V-FkPmiZK-I/AAAAAAAABNs/Z9htxAzYeyg_C44uNKdjCYVLoQqaaQHuwCLcB/s1600/Screen%2BShot%2B2016-09-20%2Bat%2B9.26.46%2BAM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="146" src="https://1.bp.blogspot.com/-wRl8WjuCA-c/V-FkPmiZK-I/AAAAAAAABNs/Z9htxAzYeyg_C44uNKdjCYVLoQqaaQHuwCLcB/s640/Screen%2BShot%2B2016-09-20%2Bat%2B9.26.46%2BAM.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Image from <a href="https://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg">Wikipedia</a></td></tr></tbody></table>Euler noticed that if all the nodes in the graph have an even number of edges (such graphs are called “Eulerian” in his honor) then, and only then, a cycle can be found that visits every edge exactly once. Keep this in mind, as we’ll rely on this fact later in the post.<br /><br />Our team in Google Research has been fascinated by the “Geometry of Place” for some time, and we started investigating a question related to Euler’s: rather than visiting just the bridges, how can we visit as many interesting places as possible during a particular trip? We call this the “itineraries” problem. Euler didn’t study it, but it is a well known topic in Optimization, where it is often called the “<a href="http://chekuri.cs.illinois.edu/papers/orienteering-journal.pdf">Orienteering</a>” problem.<br /><br />While Euler’s problem has an efficient and exact solution, the itineraries problem is not just hard to solve, it is hard to even <i>approximately</i> solve! 
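Euler’s parity criterion is easy to check in code. The following is a minimal sketch: it assumes the graph is connected (as Königsberg’s is) and only tests the even-degree condition.

```python
from collections import defaultdict

def has_euler_circuit(edges):
    """A connected multigraph has a closed walk crossing every edge
    exactly once iff every node has even degree (Euler, 1736).
    Connectivity is assumed here, not checked."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())

# The seven bridges of Königsberg, with the four landmasses labeled A-D:
konigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
              ("A", "D"), ("B", "D"), ("C", "D")]
print(has_euler_circuit(konigsberg))  # → False: every landmass has odd degree
```

Running it on the Königsberg graph reproduces Euler’s negative answer, while a simple triangle (three nodes, three edges) passes the test.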
The difficulty lies in the interplay between two conflicting goals: first, we should pick great places to visit, but second, we should pick them to allow a good itinerary: not too much travel time; don’t visit places when they’re closed; don’t visit too many museums, etc. Embedded in such problems is the challenge of finding efficient routes, often referred to as the <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem">Travelling Salesman Problem</a> (TSP).<br /><br /><b>Algorithms for Travel Itineraries</b><br /><br />Fortunately, the real world has a property called the “<a href="https://en.wikipedia.org/wiki/Triangle_inequality">triangle inequality</a>” that says adding an extra stop to a route never makes it shorter. When the underlying geometry satisfies the triangle inequality, the TSP can be approximately solved using another <a href="https://en.wikipedia.org/wiki/Christofides_algorithm">algorithm discovered by Christofides</a> in 1976. This is an important part of our solution, and builds on Euler’s paper, so we’ll give a quick four-step rundown of how it works here:<br /><ol><li>We start with all our destinations separate, and repeatedly connect together the closest two that aren’t yet connected. This doesn’t yet give us an itinerary, but it does connect all the destinations via a <a href="https://en.wikipedia.org/wiki/Minimum_spanning_tree">minimum spanning tree</a> of the graph.</li><li>We take all the destinations that have an odd number of connections in this tree (Euler proved there must be an even number of these), and carefully pair them up.</li><li>Because all the destinations now have an even number of edges, we’ve created an Eulerian graph, so we create a route that crosses each edge exactly once.</li><li>We now have a great route, but it might visit some places more than once. 
No problem, we find any double visits and simply bypass them, going directly from the predecessor to the successor.</li></ol>Christofides gave an elegant proof that the resulting route is always close to the shortest possible. Here’s an example of Christofides’ algorithm in action on a location graph, with nodes representing places and edge costs representing the travel time between places.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-DYlLoNxg-S8/V-FlOHLF8TI/AAAAAAAABN0/kNISdLQvX6cfWAmjT8k-LKPEMJA63nX-ACLcB/s1600/image04.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="172" src="https://3.bp.blogspot.com/-DYlLoNxg-S8/V-FlOHLF8TI/AAAAAAAABN0/kNISdLQvX6cfWAmjT8k-LKPEMJA63nX-ACLcB/s640/image04.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Construction of an Eulerian Tour in a location graph</td></tr></tbody></table>Armed with this efficient route-finding subroutine, we can now start building itineraries one step at a time. At each step, we estimate the benefit to the user of each possible new place to visit, and likewise estimate the cost using the Christofides algorithm. A user’s benefit can be derived from a host of natural factors such as the popularity of the place and how different the place is relative to places already visited on the tour. We then pick whichever new place has the best benefit per unit of extra cost (e.g., time needed to include the new place in the tour). 
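The four steps above can be sketched in Python on a toy set of Euclidean points. This is an illustration, not the production code; in particular, step 2 here uses a simple greedy pairing, whereas Christofides’ 3/2-approximation guarantee requires a true minimum-weight perfect matching.

```python
import math

def christofides_sketch(points):
    """Approximate a TSP tour over 2-D points via the four-step outline."""
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])

    # Step 1: connect everything with a minimum spanning tree (Prim's algorithm).
    in_tree, adj = {0}, {i: [] for i in range(n)}
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: d(*e))
        adj[i].append(j); adj[j].append(i); in_tree.add(j)

    # Step 2: pair up the odd-degree nodes (greedy pairing here; the real
    # algorithm uses a minimum-weight perfect matching).
    odd = [v for v in range(n) if len(adj[v]) % 2 == 1]
    while odd:
        v = odd.pop()
        u = min(odd, key=lambda w: d(v, w))
        odd.remove(u)
        adj[v].append(u); adj[u].append(v)

    # Step 3: all degrees are now even, so the graph is Eulerian; walk a
    # circuit crossing each edge once (Hierholzer's algorithm).
    remaining = {v: list(nbrs) for v, nbrs in adj.items()}
    stack, tour = [0], []
    while stack:
        v = stack[-1]
        if remaining[v]:
            u = remaining[v].pop()
            remaining[u].remove(v)
            stack.append(u)
        else:
            tour.append(stack.pop())

    # Step 4: shortcut repeated visits (safe under the triangle inequality).
    seen, route = set(), []
    for v in tour:
        if v not in seen:
            seen.add(v)
            route.append(v)
    return route
```

For instance, `christofides_sketch([(0, 0), (0, 1), (1, 0), (1, 1)])` returns a route visiting each of the four corners of the unit square exactly once.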
Here’s an example of our algorithm actually building a route in London using the location graph shown above:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-y_t6IG5RMuE/V-Flb6RQktI/AAAAAAAABN4/91je5cQumFcOI5bgJIliDkk3tX3MoPoAgCLcB/s1600/image01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="274" src="https://3.bp.blogspot.com/-y_t6IG5RMuE/V-Flb6RQktI/AAAAAAAABN4/91je5cQumFcOI5bgJIliDkk3tX3MoPoAgCLcB/s640/image01.png" width="640" /></a></div><b>Itineraries in Google Trips</b><br /><br />With our first good approximate solution to the itineraries problem in hand, we started working with our colleagues from the Google Trips team, and we realized we’d barely scratched the surface. For instance, even if we produce the absolute perfect itinerary, any particular user of the system will very reasonably say, “That’s great, but all my friends say I also need to visit this other place. Plus, I’m only around for the morning, and I don’t want to miss this place you listed in the afternoon. And I’ve already seen Big Ben twice.” So rather than just producing an itinerary once and calling it a perfect day, we needed a fast dynamic algorithm for itineraries that users can modify on the fly to suit their individual taste. And because many people have bad data connections while traveling, the solution had to be efficient enough to run disconnected on a phone.<br /><br /><b>Better Itineraries Through the Wisdom of Crowds</b><br /><br />While the algorithmic aspects of the problem were highly challenging, we realized that producing high-quality itineraries was just as dependent on our understanding of the many possible stopping points on the itinerary. We had Google’s extensive travel database to identify the interesting places to visit, and we also had great data from Google’s existing systems about how to travel from any place to any other. 
But we didn’t have a good sense for how people typically move through this geometry of places. <br /><br />For this, we turned to the wisdom of crowds. This type of wisdom is used by Google to <a href="https://googleblog.blogspot.com/2007/02/stuck-in-traffic.html">estimate delays on highways</a>, and to discover <a href="https://support.google.com/business/answer/6263531?hl=en">when restaurants are most busy</a>. Here, we use the same techniques to learn about common visit sequences that we can stitch together into itineraries that feel good to our users. We combine Google's knowledge of <a href="https://techcrunch.com/2015/07/28/google-search-now-shows-you-when-local-businesses-are-busiest/">when places are popular</a>, with the directions between those places to gather an idea of what tourists like to do when travelling.<br /><br />And the crowd has a lot more wisdom to offer in the future. For example, we noticed that visits to Buckingham Palace spike around 11:30 and stay a bit longer than at other times of the day. This seemed a little strange to us, but when we looked more closely, it turns out to be the time of the <a href="https://www.royalcollection.org.uk/visit/buckinghampalace/what-to-see-and-do/changing-the-guard">Changing of the Guard</a>. We’re looking now at ways to incorporate this type of timing information into the itinerary selection algorithms.<br /><br />So give it a try: Google Trips, available now on <a href="https://play.google.com/store/apps/details?id=com.google.android.apps.travel.onthego">Android</a> and <a href="https://itunes.apple.com/app/id1081561570?mt=8">iOS</a>, has you covered from departure to return. 
tag:blogger.com,1999:blog-21224994.post-58222562181349520782016-09-15T11:00:00.000-07:002017-01-18T12:12:23.768-08:00Research from VLDB 2016: Improved Friend Suggestion using Ego-Net Analysis<span class="byline-author">Posted by Alessandro Epasto, Research Scientist, Google Research NY</span><br /><br />On September 5 - 9, New Delhi, India hosted the <a href="http://vldb2016.persistent.com/">42nd International Conference on Very Large Data Bases</a> (VLDB), a premier annual forum for academic and industry research on databases, data management, data mining and data analytics. Over the past several years, Google has actively participated in VLDB, both as an official sponsor and with numerous contributions to the research and industrial tracks. In this post, we would like to share the research presented in one of the Google papers from VLDB 2016. <br /><br />In <a href="http://www.vldb.org/pvldb/vol9/p324-epasto.pdf"><i>Ego-net Community Mining Applied to Friend Suggestion</i></a>, co-authored by Googlers <a href="http://research.google.com/pubs/SilvioLattanzi.html">Silvio Lattanzi</a>, <a href="http://research.google.com/pubs/mirrokni.html">Vahab Mirrokni</a>, Ismail Oner Sebe, <a href="http://research.google.com/pubs/AhmedTaei.html">Ahmed Taei</a>, Sunita Verma and <a href="http://research.google.com/pubs/AlessandroEpasto.html">myself</a>, we explore how social networks can provide better friend suggestions to users, a challenging practical problem faced by all social network platforms.<br /><br />Friend suggestion – the task of suggesting to a user the contacts she might already know in the network but hasn’t added yet – is a major driver of user engagement and social connection in all online social networks. 
Designing a high-quality system that can provide relevant and useful friend recommendations is very challenging, and requires state-of-the-art machine learning algorithms based on a multitude of parameters. <br /><br />An effective family of features for friend suggestion consists of <a href="https://en.wikipedia.org/wiki/Graph_(mathematics)">graph</a> features such as the <i>number of common friends</i> between two users. While widely used, the number of common friends has some major drawbacks, including the one shown in Figure 1.<br /><div class="separator" style="clear: both; text-align: center;"></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-88-zF6lIbX0/V9rekNAup4I/AAAAAAAABMY/EdJuYRKPC3sE6xfgOeV5xJogkaewvG3JACLcB/s1600/image01.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="492" src="https://4.bp.blogspot.com/-88-zF6lIbX0/V9rekNAup4I/AAAAAAAABMY/EdJuYRKPC3sE6xfgOeV5xJogkaewvG3JACLcB/s640/image01.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: Ego-net of Sally.</td></tr></tbody></table>In this figure we represent the social connections of Sally and her friends – the <i>ego-net</i> of Sally. The ego-net of a node (in this case, Sally) is defined as the graph that contains the node itself, all of the node’s neighbors and the connections among <i>those</i> nodes. Sally has 6 friends in her ego-net: <b>A</b>lbert (her husband), <b>B</b>rian (her son), <b>C</b>harlotte (her mother), as well as <b>U</b>ma (her boss), <b>V</b>incent and <b>W</b>ally (two of her team members). Notice how <b>A</b>, <b>B</b> and <b>C</b> are all connected with each other while they do not know <b>U</b>, <b>V</b> or <b>W</b>. 
On the other hand, <b>U</b>, <b>V</b> and <b>W</b> have all added each other as friends (except <b>U</b> and <b>W</b>, who are good friends but somehow forgot to add each other).<br /><br />Notice how each of <b>A</b>, <b>B</b>, <b>C</b> has a common friend with each of <b>U</b>, <b>V</b> and <b>W</b>: Sally herself. A friend recommendation system based on common neighbors might suggest (for instance) that Sally’s son add Sally’s boss as a friend! In reality the situation is even more complicated, because users’ online and offline friends span several different social circles or communities (family, work, school, sports, etc.). <br /><br />In our paper we introduce a novel technique for friend suggestion based on independently analyzing ego-net structures. The main contribution of the paper is to show that it is possible to provide friend suggestions efficiently by constructing all ego-nets of the nodes in the graph and then independently applying community detection algorithms on them in large-scale <a href="https://en.wikipedia.org/wiki/MapReduce">distributed systems</a>. <br /><br />Specifically, the algorithm proceeds by constructing the ego-nets of all nodes and applying, independently on each of them, a community detection algorithm. 
More precisely, the algorithm operates on so-called “ego-net-minus-ego” graphs: the ego-net of a node with the node itself removed, i.e., the graph containing only the node’s neighbors and the connections among them, as shown in the figure below.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-4taXubdlCw0/V9re8hjeM3I/AAAAAAAABMg/p8HMW4Ztsy4NL973u2fT-xHz0HHyfaM1ACLcB/s1600/image00.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="530" src="https://2.bp.blogspot.com/-4taXubdlCw0/V9re8hjeM3I/AAAAAAAABMg/p8HMW4Ztsy4NL973u2fT-xHz0HHyfaM1ACLcB/s640/image00.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Clustering of the ego-net of Sally.</td></tr></tbody></table>Notice how in this example the ego-net-minus-ego of Sally has two very clear communities, her family (<b>A</b>, <b>B</b>, <b>C</b>) and her co-workers (<b>U</b>, <b>V</b>, <b>W</b>), which are easily separated. Intuitively, this is because while a node (e.g. Sally) may participate in many communities, there is usually a single context (or a limited number of contexts) in which two specific neighbors interact. While Sally is part of both her family and her work community, Sally and Uma interact <i>only</i> at work. Through extensive experimental evaluation on large-scale public social networks, and formally through a simple mathematical model, our paper confirms this intuition: while communities are hard to separate in the global graph, they are much easier to identify at the local level of ego-nets. <br /><br />This allows for a novel graph-based method for friend suggestion that, intuitively, only suggests pairs of users that are clustered together in the same community from the point of view of their common friends. 
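A toy version of this pipeline can be written in a few lines of Python, using connected components as a deliberately simple stand-in for the community detection algorithm used in the paper. The graph below reproduces Sally’s ego-net from the figures, with Sally as node S.

```python
from collections import defaultdict

def suggest_friends(graph, ego):
    """Cluster the ego-net-minus-ego of `ego` (here via connected
    components, a stand-in for real community detection) and suggest
    every unconnected pair that falls in the same community."""
    neighbors = graph[ego]
    # ego-net-minus-ego: keep only edges among the ego's neighbors
    sub = {v: graph[v] & neighbors for v in neighbors}
    # find the connected components of the ego-net-minus-ego
    seen, communities = set(), []
    for v in neighbors:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            seen.add(u)
            stack.extend(sub[u] - comp)
        communities.append(comp)
    # suggest pairs inside the same community that are not yet connected
    return {frozenset((u, v))
            for comp in communities
            for u in comp for v in comp
            if u < v and v not in graph[u]}

# Sally's ego-net from Figure 1 (U and W forgot to add each other):
g = defaultdict(set)
def link(u, v): g[u].add(v); g[v].add(u)
for u, v in [("S", "A"), ("S", "B"), ("S", "C"), ("S", "U"), ("S", "V"),
             ("S", "W"), ("A", "B"), ("A", "C"), ("B", "C"),
             ("U", "V"), ("V", "W")]:
    link(u, v)
print(suggest_friends(g, "S"))  # suggests exactly one pair: U and W
```

On this example the two components are {A, B, C} and {U, V, W}, and the only suggested pair is U and W; no family–work pair is ever proposed, since no such pair shares a community.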
With this method, <b>U</b> and <b>W</b> will be suggested to add each other (as they are in the same community and are not yet connected), while <b>B</b> and <b>U</b> will <i>not</i> be suggested as friends, as they span two different communities. <br /><br />From an algorithmic point of view, the paper introduces efficient parallel and distributed techniques for computing and clustering all ego-nets of very large graphs at the same time – a fundamental aspect enabling use of the system on the entire Google+ graph. We have applied this feature in the “You May Know” system of Google+, resulting in a clear positive impact on the prediction task, improving the acceptance rate by more than 1.5% and decreasing the rejection rate by more than 3.3% (a significant impact at Google scale).<br /><br />We believe that many future directions of work might stem from our preliminary results. For instance, ego-net analysis could potentially be used to automatically classify a user’s contacts into circles and to detect spam. Another interesting direction is the study of ego-network evolution in dynamic graphs. 