As technology advances and research areas widen, the demand for computational power grows day by day. This demand arises mainly in a few categories: physical simulations from the molecular scale to the cosmological scale; analysis of the large data sets produced by optical telescopes, gene sequencers, gravitational-wave detectors, and particle colliders; and biology-inspired algorithms.
These tasks call for high-performance computing (HPC), so one solution is to use supercomputers. Typically, though, the rate of job completion matters more than the turnaround time of any individual job, since it is the overall result that is useful. The term for that idea is high-throughput computing.
To achieve high-throughput computing, distributed computing is a better approach, since individual jobs can be processed in parallel in large quantities, as the sketch below illustrates.
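As an illustration (mine, not from the original post), here is a minimal Python sketch of the embarrassingly parallel pattern behind high-throughput computing: many independent jobs spread across workers, with jobs completed per second, rather than per-job latency, as the figure of merit. The job function and job count are invented for the example.

```python
import time
from multiprocessing import Pool

def run_job(seed):
    """Stand-in for one independent work unit (e.g. one simulation run)."""
    total = 0
    for i in range(200_000):
        total += (seed * i) % 97
    return total

if __name__ == "__main__":
    jobs = range(64)           # 64 independent jobs, no communication needed
    start = time.time()
    with Pool() as pool:       # one worker process per CPU core by default
        results = pool.map(run_job, jobs)
    elapsed = time.time() - start
    # Throughput (jobs/second) is what high-throughput computing optimizes.
    print(f"{len(results)} jobs in {elapsed:.1f}s "
          f"-> {len(results) / elapsed:.1f} jobs/s")
```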
Available distributed computing options are:
○ Cluster computing - dedicated computers in a single location.
○ Desktop grid computing - using the PCs within an organization as a computing resource.
○ Grid computing - computing resources shared by separate organizations.
○ Cloud computing - a company selling access to computing power.
○ Volunteer computing
Volunteer computing (also sometimes referred to as global computing) uses computational power volunteered by the general public to perform distributed scientific computing. Volunteers may include individuals as well as organizations such as universities.
This approach allows ordinary Internet users to volunteer their computers' idle time, forming parallel computing networks easily, quickly, and inexpensively, without needing expert help.
In volunteer computing, the volunteers who contribute resources are typically considered anonymous, although some frameworks collect information such as a nickname and an email address, for example to support a credit system.
Each of these distributed computing paradigms has a different resource pool: the computers owned by a particular university in grid computing, for example, or the servers owned by a company in cloud computing. In volunteer computing, the resource pool is the total population of personal computers.
To understand the importance of volunteer computing, we have to consider its resource pool.
The number of privately owned PCs around the globe is currently estimated at 1 billion and is expected to grow to 2 billion by 2015. Moreover, this resource pool is self-financing, self-updating, and self-maintaining: users buy and maintain their own computers, so various costs associated with other forms of grid computing do not apply to volunteer computing. Another important point is that the consumer market adopts the latest technology quickly. A supercomputer or a computing grid cannot be replaced or upgraded easily as newer technologies emerge, but the typical PC user can upgrade. For example, the fastest processors today are GPUs developed with computer games in mind. Due to these factors, volunteer computing has huge potential to meet the world's computational needs.
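For a rough sense of this pool's scale, here is a back-of-envelope sketch (my illustration, not from the original post). The PC count comes from the text above; the per-PC speed and availability figures are assumptions chosen purely for the example.

```python
# Back-of-envelope estimate of the volunteer computing resource pool.
pcs = 1_000_000_000          # ~1 billion privately owned PCs (from the text)
gflops_per_pc = 10           # assumed sustained GFLOPS per PC (illustrative)
availability = 0.10          # assume 10% are volunteering at any moment

total_pflops = pcs * gflops_per_pc * availability / 1e6  # GFLOPS -> PFLOPS
print(f"Aggregate compute: ~{total_pflops:,.0f} PFLOPS")
```

Even under these deliberately modest assumptions the pool works out to roughly a thousand PFLOPS, which is why the potential is described as huge.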
Berkeley Open Infrastructure for Network Computing (BOINC) is the predominant volunteer computing framework in use.
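To make the model concrete, here is a minimal Python simulation (mine, not from the original post) of the fetch-compute-report cycle a volunteer client in a BOINC-style framework runs. All function names and payloads here are invented stand-ins, not BOINC's actual API.

```python
import random
import time

# Local simulation of a volunteer client's work cycle. In a real framework
# the server_* calls would be network requests to a project server, which
# validates results (often by comparing the same unit computed by several
# volunteers) before granting credit.

def server_get_work():
    """Stand-in for downloading a work unit from the project server."""
    return {"id": random.randrange(10_000),
            "data": [random.random() for _ in range(1_000)]}

def science_app(work):
    """Stand-in for the project's science application."""
    return sum(x * x for x in work["data"])

def server_report(work_id, result):
    """Stand-in for uploading a result for validation and credit."""
    print(f"work unit {work_id}: result {result:.3f} reported")

for _ in range(3):               # a real client loops while the PC is idle
    unit = server_get_work()
    server_report(unit["id"], science_app(unit))
    time.sleep(0.1)              # a real client throttles its requests
```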
Some of the other volunteer computing frameworks are:
○ Bayanihan Computing Group
○ JADIF - Java Distributed (volunteer / grid) computing Framework
○ Javelin Global Computing Project
○ XtremWeb Platform
○ Entropia
Here is a list of the most active volunteer computing projects as of January 2012.
○ SETI@home: Search for extra-terrestrial life by analyzing radio frequencies emanating from space
○ Einstein@home: Search for pulsars using radio signals and gravitational wave data
○ World Community Grid: Humanitarian research on disease, natural disasters, and hunger
○ Climateprediction.net: Analyze ways to improve climate prediction models
○ Folding@home: Computational molecular biology
○ LHC@home: Improve the design of the Large Hadron Collider and its detectors
○ Milkyway@home: Create a highly accurate three-dimensional model of the Milky Way galaxy using data collected from the Sloan Digital Sky Survey
○ Spinhenge@home: Study nano-magnetic molecules for research into localized tumor chemotherapy and micro-memory
○ PrimeGrid: Generate a list of sequential prime numbers and search for particular types of primes
○ Malariacontrol.net: Simulate the transmission dynamics and health effects of malaria
(This post includes citations from several sources and aims to summarize volunteer computing)