Google Research Blog
The latest news from Research at Google
Google North American Faculty Summit - Day 2
Wednesday, August 04, 2010
Posted by Andrew Tomkins, Director of Engineering, Google Research
Friday at the Google Faculty Summit, we discussed ideas around online social capabilities. Chris Messina opened the discussion with a talk about open initiatives for the social web. Damon Horowitz, founder of Aardvark, gave a talk about the Aardvark experience. But in this post, I’d like to talk about a panel I moderated on the future of the social web. The panel consisted of four experts in the area. Joseph Smarr came to Google after eight years as CTO of the social networking site Plaxo. Lada Adamic is on the faculty at the University of Michigan, where she studies the nature of social and information networks. Eytan Adar is also on the faculty at the University of Michigan, where he studies the evolution of production and consumption of data over time. Luis von Ahn is on the faculty of Carnegie Mellon University and also an employee at Google; he studies mechanisms to connect significant human efforts to interesting problems.
One theme that received a lot of attention from panelists and audience members alike was the benefits and pitfalls of social personalization. In the context of an activity stream, there seems to be general agreement that passing lightweight updates among friends is a valuable tool for “social grooming,” or keeping light contact with friends as a way of maintaining the state of the friendship. For information discovery, however, the topic received more debate: real-world social networks have always been used to both push and pull information, but in conjunction with high-quality search, it's reasonable to ask which types of information needs can be best addressed by your friends. Social network connections typically display
homophily (similarity) in the dimensions of geography and interests, so your friends are more likely to have something interesting to say about your local area and your longstanding hobbies or interests, along with other subjects. If so, the answer you receive has two added bonuses. First, your background knowledge about your friend will aid you in assessing the quality of the answer. And second, an answer from a friend satisfies not just an information need but also a human need to interact and share experiences. This socially augmented information can arrive through a push channel in which your friend already posted (for example) a review for a restaurant, or through a pull channel in which you send your friends a request for information. The same mechanisms for social information sharing may also operate powerfully in the context of a group coming together around a shared interest or goal, rather than just in the context of an individual. Consider for example a group of students working together to understand some new material. The same two mechanisms apply: knowledge about the other students helps you evaluate their contributions, and the interactions in the group have value beyond the pure information transmitted.
There was considerable discussion about social networks' capacity to funnel information to a user through the lens of a particular viewpoint or ideology. Imagine an individual who arrives on the web as a supporter or detractor of a particular political figure or mindset, and then surrounds him or herself with like-minded people online, enjoying positive and supportive discussions but failing to encounter a diverse set of views and counter-opinions. Literature in the social sciences, beginning with the famous Asch conformity experiments from the 1950s, details the mechanisms that cause people to conform to group expectations and even abandon normal personality traits based on the norms of the new situation. And work by Nobel Prize-winning economist Thomas Schelling shows that very small and "reasonable" biases we might have towards avoiding becoming an extreme minority might lead a system to evolve into a highly balkanized state. Similar models have been proposed and evaluated in the Internet domain, and some preliminary measurements have been performed. While faculty members in the audience surmised that personalization could lead to more extremism, the group agreed there is no conclusive evidence.
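Schelling's result is easy to reproduce in a few lines of code. The sketch below is a toy one-dimensional variant (my own illustration, not a model discussed at the summit): agents on a ring relocate only when a strict majority of their neighbors differ from them, yet starting from a perfectly integrated arrangement, clustering tends to emerge anyway.

```python
import random

def frac_same(types, i, k=1):
    """Fraction of agent i's 2k nearest ring neighbors sharing its type."""
    n = len(types)
    same = sum(types[(i + d) % n] == types[i]
               for d in range(-k, k + 1) if d != 0)
    return same / (2 * k)

def mean_same(types):
    return sum(frac_same(types, i) for i in range(len(types))) / len(types)

def schelling_ring(types, tolerance=0.5, rounds=500, seed=1):
    """One random unhappy agent relocates to a random position each round.
    An agent is unhappy only if a strict majority of its neighbors differ
    from it -- a mild bias against being a local minority."""
    rng = random.Random(seed)
    types = list(types)
    before = mean_same(types)
    for _ in range(rounds):
        unhappy = [i for i in range(len(types))
                   if 1 - frac_same(types, i) > tolerance]
        if not unhappy:
            break
        agent = types.pop(rng.choice(unhappy))
        types.insert(rng.randrange(len(types) + 1), agent)
    return before, mean_same(types)

# Start from a perfectly integrated (alternating) ring of two types.
before, after = schelling_ring([i % 2 for i in range(100)])
print(before, after)  # integration typically gives way to visible clustering
```

The point of the exercise is Schelling's: no agent here wants segregation, yet the mild aversion to minority status is enough to push the system toward it.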
Another topic we touched on is mechanism design: the problem of designing systems so that agents in the system, each acting selfishly, will together produce some desired outcome. Consider a social networking game. If the desired outcome is revenue for the game manufacturer, then the actions that increase status in the game (using real-world currency to purchase items in the game; inviting friends to join and participate in the game; clicking on advertisements in the game) are well designed to support this goal. Rewarding the action of bringing new friends into the game is one obvious approach to increasing the total user population. More subtly, any game system must provide sufficient fun to be worth the expense to users. The dramatic success of casual online games of this form (6 percent of U.S. pageviews come from these games, according to a study by Ravi Kumar and me at the WWW 2010 conference) is testimony to the presence of successful mechanisms of this form.
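The incentive arithmetic behind a referral reward can be sketched as a simple branching process. Every number below is hypothetical, purely for illustration:

```python
def expected_population(seed_users, invites_per_user, join_prob, generations):
    """Expected audience when each new user sends `invites_per_user`
    invitations, each accepted with probability `join_prob` (a toy
    branching-process model; all parameters are made up)."""
    total = new = seed_users
    for _ in range(generations):
        new = new * invites_per_user * join_prob
        total += new
    return total

# The mechanism pays off when invites_per_user * join_prob > 1: each
# generation of users recruits a larger one.
print(expected_population(1000, 4, 0.3, 10))  # effective growth factor 1.2
print(expected_population(1000, 4, 0.1, 10))  # factor 0.4: growth fizzles out
```

The design question is then how much reward (and how much fun) it takes to move `join_prob` above the break-even point without degrading the experience for existing users.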
Finally, here is a small sampling of other issues that arose in the panel as controversial points or interesting areas for future research:
Social networks draw massive amounts of user time. We are beginning to get some limited visibility into exactly how this attention is allocated, which raises the research question of how much utility users are actually deriving from this investment of time, either in information, entertainment, social grooming or other intangibles.
In certain online communities, we see behavioral norms that are skewed towards public visibility of essentially all activity. Do these norms reflect the desires of the populations that choose to join the community, or do they emerge specifically because of the technical tools offered by the website that hosts the community?
Social networks are increasingly offering richer tools to users in an attempt to capture nuances of interactions that exist in the real world. In the fullness of time, how close will we get, and when will this happen?
Social networks formalize the status of a friendship, with significant breakpoints at initiation, acceptance and removal of a binary tie. The visibility of these events leads to both "overfriending" and offense when friendships are refused or removed. Are there improved mechanisms to produce and manage the relationships in online social networks, and if so, what are these mechanisms?
Social network graphs are notoriously difficult to partition into large regions with few edges between them (the sole exception being parts of a network that interact using different languages). A series of computational challenges arise when attempting to shard these networks for distributed analysis or serving from multiple computers.
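The cost of a bad partition is easy to quantify: every edge whose endpoints land on different machines implies cross-machine traffic during analysis or serving. The toy sketch below (illustrative only) compares sharding by user id with a partition that happens to know the community structure; real social graphs rarely admit such a clean split, which is exactly the difficulty noted above.

```python
import random

def cut_fraction(edges, shard_of):
    """Fraction of edges whose endpoints fall on different shards."""
    cut = sum(shard_of(u) != shard_of(v) for u, v in edges)
    return cut / len(edges)

# A toy "social" graph: 10 tight communities of 20 users each, plus a few
# long-range ties (all sizes arbitrary, for illustration).
rng = random.Random(0)
edges = []
for c in range(10):
    members = range(c * 20, c * 20 + 20)
    edges += [(u, v) for u in members for v in members if u < v]
edges += [(rng.randrange(200), rng.randrange(200)) for _ in range(50)]

shards = 4
hash_cut = cut_fraction(edges, lambda v: v % shards)           # shard by user id
community_cut = cut_fraction(edges, lambda v: (v // 20) % shards)
print(hash_cut, community_cut)  # id-based sharding cuts roughly 1 - 1/m of edges
```

The community-aware partition wins easily here only because the toy graph has planted communities; the research challenge is that real social graphs, dominated as they are by long-range ties, offer no comparably clean cut.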
One thing is clear from the discussion on Friday: social networks are increasingly becoming a valuable area for academic study. Faculty from widely disparate areas of computer science have thought deeply about the issues and implications of these tools; active research is ongoing in essentially all top institutions; and social network dynamics are appearing in the undergraduate curriculum. On top of that, they are an interdisciplinary phenomenon, involving not only many aspects of CS (UX, mechanism design, intense system requirements, security and privacy) but also psychology, economics and ethics, to name a few. There is much to study in order to understand these networks and maximize their societal value.
Google North American Faculty Summit - cloud computing
Tuesday, August 03, 2010
Posted by Brian Bershad, Director of Engineering, Site Director, Google Seattle
Of the three themes of our 2010 Faculty Summit, cloud computing was the one that pervaded all others, from security in the cloud to the presumption of cloud infrastructure behind the social web. But in our more focused discussion on cloud computing last Thursday, we started with the premise of “prodigiousness,” a concept introduced by Alfred Spector, VP of Research and Special Initiatives.
While we all know that systems are huge and will get even huger, the implications of this size for programmability, manageability, power, etc., are hard to comprehend. Alfred noted that the Internet is predicted to be carrying a zettabyte (10²¹ bytes) per year in just a few years. And growth in the number of processing elements per chip may give rise to warehouse computers with an enormous number of processing elements. To use systems at this scale, we need new solutions for storage and computation. It was these solutions we focused on throughout our discussions.
In the plenary talk, Andrew Fikes spoke on storage system opportunities. Among many topics, he talked about shifting the engineering focus to storage management and optimization not just on an individual cluster of co-located systems, but across geographically distributed clusters. The goal is so-called planetary-scale systems. This brings up all manner of diverse challenges, ranging from the need to continually balance storage versus transmission costs, to the need to account for variable network latency characteristics, to the desire to optimize storage (e.g., by physically storing only one copy of a file that many users own or have rights to).
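That single-copy optimization is commonly implemented as content-addressed storage: index file bytes by a hash of their contents, so identical files are physically stored once however many users hold them. A minimal sketch (illustrative only, not Google's implementation):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical file contents are stored
    once and shared by reference."""
    def __init__(self):
        self.blobs = {}   # content hash -> bytes (one physical copy)
        self.files = {}   # per-user path -> content hash

    def put(self, path, data):
        key = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(key, data)   # no-op if these bytes already exist
        self.files[path] = key

    def physical_bytes(self):
        return sum(len(b) for b in self.blobs.values())

store = DedupStore()
song = b"x" * 1000
store.put("alice/song.mp3", song)
store.put("bob/song.mp3", song)   # identical contents: deduplicated
print(len(store.files), store.physical_bytes())  # 2 logical files, 1000 bytes
```

The hard parts at planetary scale are precisely what the sketch omits: deciding where the one physical copy lives, and reconciling deduplication with per-user rights and deletion.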
We had a few roundtables in the afternoon for deeper discussions. At the table I led, we discussed two systems for “programming the data center” developed by systems researchers at Google Seattle/Kirkland. The first, Dremel, is a scalable, interactive ad-hoc query system for analysis of read-only nested data. Dremel was recently presented in a paper at VLDB (Dremel: Interactive Analysis of Web-Scale Datasets, Sergey Melnik, Andrey Gubarev, Jing Jing Long, Geoffrey Romer, Shiva Shivakumar, Matt Tolton, Theo Vassilakis. In Proceedings of the 36th Int'l Conf on Very Large Data Bases, 2010). The system serves as the foundational technology behind BigQuery, a product announced in limited preview mode at Google I/O in May.
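A key idea that makes such interactive analysis feasible is columnar storage of nested records: an aggregate over one field reads only that field's column, not the full records. The sketch below illustrates the layout in miniature; it deliberately omits the repetition and definition levels the Dremel paper uses to losslessly encode repeated and optional fields.

```python
# Two nested records, as a query engine might receive them.
records = [
    {"doc_id": 1, "links": {"forward": [20, 40]}},
    {"doc_id": 2, "links": {"forward": [80]}},
]

columns = {}

def strip(rec, prefix=""):
    """Flatten one nested record into per-field columns, keyed by the
    dotted path of each field (a simplification of Dremel's column striping)."""
    for key, val in rec.items():
        name = prefix + key
        if isinstance(val, dict):
            strip(val, name + ".")
        else:
            columns.setdefault(name, []).append(val)

for r in records:
    strip(r)

# A query like "SELECT COUNT(links.forward) ..." now touches one column only.
print(sum(len(v) for v in columns["links.forward"]))  # 3
```

At web scale, reading one column instead of whole records is what turns an analysis that would take a batch job into one that returns interactively.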
We also discussed FlumeJava, a Java library that makes it easy to develop, test and run efficient data-parallel pipelines at data center scale. FlumeJava was developed by programming languages researchers at Google Seattle, and is currently in widespread use within Google. It was presented at the recent PLDI conference (FlumeJava: Easy, Efficient Data-Parallel Pipelines, Craig Chambers, Ashish Raniwala, Frances Perry, Stephen Adams, Robert R. Henry, Robert Bradshaw, Nathan Weizenbaum. In Proceedings of the 2010 ACM SIGPLAN Conference on Programming Language Design and Implementation). The work reflects Google’s commitment to programming language and compiler technologies at scale.
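The heart of the approach is a deferred-evaluation collection: pipeline operations are recorded rather than executed, which gives the runner a chance to fuse adjacent steps before running them in parallel. The Python sketch below mimics that shape only loosely; the names and API are illustrative, not FlumeJava's.

```python
class PCollection:
    """Toy deferred collection in the spirit of FlumeJava: each operation
    records a pipeline step; nothing executes until run() is called."""
    def __init__(self, source, ops=()):
        self.source, self.ops = list(source), list(ops)

    def parallel_do(self, fn):
        """Record a per-element transform (deferred)."""
        return PCollection(self.source, self.ops + [("map", fn)])

    def filter(self, pred):
        """Record a per-element filter (deferred)."""
        return PCollection(self.source, self.ops + [("filter", pred)])

    def run(self):
        """Execute the recorded steps; a real runner would fuse them into
        a single data-parallel stage and distribute it across machines."""
        data = self.source
        for kind, fn in self.ops:
            if kind == "map":
                data = [fn(x) for x in data]
            else:
                data = [x for x in data if fn(x)]
        return data

words = PCollection(["the", "quick", "brown", "fox"])
lengths = words.filter(lambda w: len(w) > 3).parallel_do(len)
print(lengths.run())  # [5, 5]
```

Because the pipeline exists as data before it runs, the optimizer can see the whole computation at once, which is what lets FlumeJava approach hand-tuned MapReduce performance from much simpler code.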
The field of data center programming has progressed substantially in the last 10 years. The Dremel and FlumeJava systems represent abstractions at a higher level than the MapReduce construct we previously introduced, and we think they are easier to use (within their domain of applicability) and more automatically optimizable. With time, the field will discover new “instructions” and even better abstractions, leading us to a point where computations that run on nearly unlimited processors can be expressed as easily as sequential programs. We are working hard to make progress here, and I look forward to reporting on it in the future.