Google Research Blog
The latest news from Research at Google
Research Areas of Interest: Building scalable, robust cluster applications
Wednesday, January 27, 2010
Posted by Brad Chen, Technical Lead/Manager
As part of our series on research areas of interest to Google, we discuss some important areas relating to cluster applications in distributed systems. In the last two decades distributed systems have undergone a metamorphosis from academic curiosities to the foundation of an entire industry. Despite these successes, at Google we see distributed systems as a technology in its infancy, with huge gaps in the supporting research that represent some of the most important problems in the space. Here are some examples:
Resource sharing:
Stranded resources like idle memory, CPU cycles, and disk bandwidth represent huge capital and operating expenses that deliver no business value. A cluster system based on the best published research would likely leave 50% or more of its hardware resources idle. We encourage researchers to explore hardware/software architectures that facilitate more flexible sharing, avoiding stranded and underutilized computational resources.
Balancing cost, performance, and reliability:
Current cluster applications tend to be excessively rigid and brittle, offering only coarse controls to tune the balance between reliability, performance and cost. We envision systems that allow cost to be optimized based on an input specification of performance and reliability requirements. An effective solution might allow service level settings to propagate downward through the layered structure of the system.
Self-maintaining systems:
The level of expertise required to troubleshoot today's large systems is one of the biggest barriers to more and larger deployments. The published research in this area has at best marginally reduced the need for such rare expertise. We envision systems that adapt automatically to changing conditions, in which redundancy and multiple geographically distributed data centers simplify rather than complicate manageability. This will require breakthroughs in monitoring and data analysis to address the diversity of failure modes and simplify the task of keeping systems healthy.
Research in these areas will improve the current state of cluster applications, enabling systems that are less expensive to run, easier to monitor, and able to scale more efficiently.
Previous posts in the series:
Google Cluster Data
Thursday, January 07, 2010
Posted by Joseph L. Hellerstein, Manager of Google Performance Analytics
Google faces a large number of technical challenges in the evolution of its applications and infrastructure. In particular, as we increase the size of our compute clusters and scale the work that they process, many issues arise in how to schedule the diversity of work that runs on Google systems.
We have distilled these challenges into the following research topics that we feel are interesting to the academic community and important to Google:
Workload characterization:
How can we characterize Google workloads in a way that readily generates synthetic work representative of production workloads, so that we can run stand-alone benchmarks?
Predictive models of workload characteristics:
What is normal and what is abnormal workload? Are there "signals" that can indicate problems in a time-frame that is possible for automated and/or manual responses?
New algorithms for machine assignment:
How can we assign tasks to machines so that we make best use of machine resources, avoid excess resource contention on machines, and manage power efficiently?
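To make the machine-assignment question concrete, here is a minimal sketch of one classical baseline: greedy first-fit placement of tasks onto machines by their core and memory demands. This is an illustrative toy, not a description of Google's actual scheduler; all names and capacities below are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    cores: float                 # remaining free cores
    memory: float                # remaining free memory (arbitrary units)
    tasks: list = field(default_factory=list)

def first_fit(tasks, machines):
    """Assign each (cores, memory) task to the first machine with room.

    Returns the list of tasks that could not be placed anywhere.
    """
    unplaced = []
    for cores, memory in tasks:
        for m in machines:
            if m.cores >= cores and m.memory >= memory:
                m.cores -= cores
                m.memory -= memory
                m.tasks.append((cores, memory))
                break
        else:  # no machine had room for this task
            unplaced.append((cores, memory))
    return unplaced

machines = [Machine(cores=4.0, memory=8.0), Machine(cores=4.0, memory=8.0)]
leftover = first_fit([(2.0, 3.0), (3.0, 2.0), (2.0, 6.0)], machines)
```

First-fit ignores resource contention, power, and failure domains entirely; the research question above is precisely how to do better than baselines of this kind at cluster scale.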
Scalable management of cell work:
How should we design the future cell management system to efficiently visualize work in cells, to aid in problem determination, and to provide automation of management tasks?
To aid researchers in addressing these questions in a realistic manner, we will provide data from Google production systems. The initial focus of these data will be workload characterization. The data are structured as follows:
Time (int) - time in seconds since the start of data collection
JobID (int) - Unique identifier of the job to which this task belongs
TaskID (int) - Unique identifier of the executing task
Job Type (0, 1, 2, 3) - class of job (a categorization of work)
Normalized Task Cores (float) - normalized value of the average number of cores used by the task
Normalized Task Memory (float) - normalized value of the average memory consumed by the task
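The schema above can be read with a few lines of code. The sketch below assumes the trace is distributed as comma-separated rows in the field order listed; the file layout and the names `TaskSample`, `parse_row`, and `load_trace` are our own assumptions for illustration, not part of the published data format.

```python
import csv
from collections import namedtuple

# One record per row, matching the six fields described above.
TaskSample = namedtuple(
    "TaskSample",
    ["time", "job_id", "task_id", "job_type", "norm_cores", "norm_memory"],
)

def parse_row(row):
    """Convert one raw row (a list of strings) into a typed TaskSample."""
    return TaskSample(
        time=int(row[0]),         # seconds since start of collection
        job_id=int(row[1]),
        task_id=int(row[2]),
        job_type=int(row[3]),     # one of 0, 1, 2, 3
        norm_cores=float(row[4]),
        norm_memory=float(row[5]),
    )

def load_trace(path):
    """Yield TaskSample records from a CSV trace file at `path`."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            yield parse_row(row)
```

Grouping samples by `job_type` or `job_id` is then a natural first step toward the workload-characterization questions posed above.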
We solicit your feedback in terms of: (a) the quality and content of the data we are providing; (b) technical approaches and/or results related to the topics above; and (c) other research topics that you feel Google should be addressing in the area of Cloud Computing (along with details of the data required to address these topics).