Sunday, January 22, 2012

What Are Parallel Operating Systems?

Parallel operating systems are used to coordinate multiple networked computers so they can complete tasks in parallel. The software is often built on a UNIX-based platform, which allows it to distribute workloads across the computers in a network. A parallel operating system manages all of the different resources of the computers running in parallel, such as memory, caches, storage space, and processing power, and it also allows a user to interface directly with every computer in the network.
A parallel operating system works by dividing sets of calculations into smaller parts and distributing them between the machines on a network. To facilitate communication between the processor cores and memory arrays, the system either shares memory, assigning the same address space to all of the networked processors, or distributes memory, giving each processing core its own address space. Shared memory allows the operating system to run very quickly, but it usually does not scale as far. With distributed memory, each processor works from its own local memory and must communicate over the network to reach data held by other processors; this communication may slow the operating system, but the approach is often more flexible and efficient at larger scales.
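To make the divide-and-distribute idea concrete, here is a minimal sketch in Python. It uses the standard multiprocessing module as a stand-in for the scheduling software described above; the worker count, the chunking scheme, and the sum-of-squares task are illustrative choices, not part of any particular parallel operating system.

# Minimal sketch of dividing a calculation into smaller parts and
# distributing them across workers. Each worker process has its own
# address space, which mirrors the distributed-memory model rather
# than the shared-memory model.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work assigned to one worker: sum the squares of one slice of the data."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    """Split the data into chunks, distribute them, and combine the results."""
    chunk_size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(processes=workers) as pool:
        # Each chunk is sent to a separate process; the partial results
        # are gathered and combined here.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    print(parallel_sum_of_squares(numbers))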


Most fields of science, including biotechnology, cosmology, theoretical physics, astrophysics, and computer science, use parallel operating systems to harness the power of parallel computing. These setups also improve efficiency in industries such as consulting, finance, defense, telecom, and weather forecasting. In fact, parallel computing has become so robust that cosmologists have used it to answer questions about the origin of the universe. These scientists were able to simulate large sections of space at once; it took only one month to run a simulation of the formation of the Milky Way, a feat previously thought to be impossible.
Scientists, researchers, and industries often choose parallel operating systems for their cost-effectiveness as well. It costs far less to assemble a parallel computer network than to develop and build a supercomputer for research. Parallel systems are also completely modular, allowing inexpensive repairs and upgrades.
In 1967, Gene Amdahl, while working at IBM, conceptualized the idea of using software to coordinate parallel computing. He released his findings in a paper, and the argument it presented became known as Amdahl's Law, which describes the theoretical speedup one can expect from running a network with a parallel operating system, given the portion of a task that must still run sequentially. In the same era, the often overlooked development of packet switching provided another breakthrough: it launched the ARPANET project, which laid the foundation for the world's largest parallel computer network, the Internet.
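For the curious, here is a small illustration of what Amdahl's Law predicts. The function name, its parameters, and the 95% figure are example values chosen for this sketch, not numbers from Amdahl's paper.

# Amdahl's Law: if a fraction p of a task can be parallelized across n
# processors, the best possible overall speedup is 1 / ((1 - p) + p / n).
def amdahl_speedup(parallel_fraction, processors):
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# Example: even with 95% of the work parallelized, 1,024 processors give
# at most about a 20x speedup, because the remaining serial 5% dominates.
print(amdahl_speedup(0.95, 1024))   # ~19.6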