Distributed programming involves multiple computing units working on a problem, often with different hardware and configurations. Programs must be split into processes that communicate with each other. Coordination can be difficult, but the approach can lead to better solutions. Examples include analyzing data, modeling molecules, and searching for extraterrestrial life.
Distributed programming is a form of parallel programming, or parallel computing. Parallel programming involves computers, and the computing units within them, working simultaneously on a particular problem, such as predicting tomorrow’s weather. The computing units can sit close together and be tightly coordinated, or they can be spread far apart. When the computing units are separated, the activity is referred to as distributed programming. In this scenario the computing units very often differ from one another, as do their operating systems and network configurations, which makes programming the computing activity particularly challenging.
When solving a problem in a distributed way, the program must be split up so that its parts can run on the different computing units; these parts are often called “processes”. Processes run concurrently but must communicate their inputs and results to one another. If the processes run on different hardware, such as one part on an Intel processor and another on a Sun SPARC processor, each part must be compiled and optimized differently.
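The communication between processes can be sketched with Python's standard multiprocessing module. This is a minimal illustration, not any particular system's protocol: the worker name, the summing task, and the message contents are all placeholder choices.

```python
from multiprocessing import Process, Pipe

def worker(conn):
    numbers = conn.recv()          # receive the input from the coordinator
    conn.send(sum(numbers))        # send the computed result back
    conn.close()

def run(numbers):
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send(numbers)      # communicate the input to the process
    result = parent_conn.recv()    # communicate the result back
    p.join()
    return result

if __name__ == "__main__":
    print(run([1, 2, 3, 4]))       # prints 10
```

Here both processes run on the same machine; in a genuinely distributed setting the pipe would be replaced by sockets or a message-passing library, but the pattern of exchanging inputs and results is the same.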
One way to solve a sufficiently large problem is to split the input into pieces and have the different computational units work on those pieces using the same algorithm, or set of rules. For example, to decipher a genome of 10,000 base pairs, the first 1,000 pairs might be assigned to the first computational unit, the second 1,000 pairs to the second, and so on, all running the same algorithm. A further advantage of distributed programming is that different computational units can run different algorithms against the same problem, potentially leading to a significantly better solution. It is like solving a jigsaw puzzle with some people assembling the edge while others group pieces of a particular color.
Coordinating distributed computing processes can be a particularly difficult task. Some computing units may go down or be diverted to other work. Messages carrying calculation inputs or results may never reach their destinations. If the programs are written naively, the loss of one computing unit or a few messages can crash the entire set of computers.
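A defensive coordinator avoids that fate by waiting with a timeout and retrying rather than blocking forever on a result that may never arrive. The sketch below simulates a unit that dies on its first attempt; the function names, the retry count, and the two-second timeout are all illustrative assumptions.

```python
from multiprocessing import Process, Queue
import queue

def worker(task, results, fail):
    if fail:
        return                    # simulate a crashed unit: no result is sent
    results.put(task * task)

def run_with_retry(task, retries=2, timeout=2.0):
    for attempt in range(retries + 1):
        results = Queue()
        # Fail deliberately on the first attempt to exercise the retry path.
        p = Process(target=worker, args=(task, results, attempt == 0))
        p.start()
        try:
            value = results.get(timeout=timeout)  # wait, but not forever
            p.join()
            return value
        except queue.Empty:
            p.terminate()         # give up on this unit and try again
            p.join()
    raise RuntimeError("all attempts failed")

if __name__ == "__main__":
    print(run_with_retry(7))      # recovers on the second attempt: 49
```

Real systems layer more machinery on top of this idea, such as heartbeats, acknowledgments, and reassignment of lost work, but the core discipline is the same: never assume a message or a unit will come back.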
In distributed programming, one process may act as the controlling process, directing the work of the others, or all processes may cooperate peer-to-peer with no single “master”. Problems attempted with distributed programming include analyzing geological data for resources such as petroleum, modeling biological proteins and molecules, cracking coded messages, and running military simulations. The SETI@home project, which searches radio signals received on Earth for signs of intelligent extraterrestrial life, is perhaps the best-known example.