Cloud Computing A Series: Parallel Computing

Manasreddy
Jul 14, 2021


Parallel Computing

Parallel computing slowly revolutionized the computing landscape by allowing computers to do many things at once: execute thousands of instructions in seconds, or search millions, even billions, of web pages at a time. So how does it work, and how does it stack up against cloud computing and grid computing?

[Figure: Parallel Computing]

Parallel computing breaks a large problem down into several smaller problems (how it is split depends on the type of parallelism), processes them separately, and then glues the results back together.

The figure above sums up the theory behind parallel computing. The primary goal of parallel computing is to increase the available computational power for faster application processing and problem-solving.

There are four types of parallel computing:

Bit-Level Parallelism: Based on increasing the processor's word size, which reduces the number of instructions the processor must execute. For example, to add two 16-bit integers on an 8-bit processor, the lower-order bits must be added first, then the higher-order bits together with the carry, so the addition takes two instructions instead of one. A 16-bit processor can process the same addition in a single instruction.
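
A minimal Python sketch of that example (the function name is just for illustration): it performs the 16-bit addition using only 8-bit operations and a carry, then checks the result against a single native addition.

def add16_on_8bit(a, b):
    # An 8-bit processor adds the low-order bytes first...
    lo = (a & 0xFF) + (b & 0xFF)
    carry = lo >> 8
    # ...then the high-order bytes plus the carry: two add instructions instead of one.
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)

a, b = 0x1234, 0x0FCD
assert add16_on_8bit(a, b) == (a + b) & 0xFFFF  # a 16-bit processor does this in one add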

Instruction-level Parallelism: Based on executing multiple instructions of a program at the same time. For example, in a program with a for loop over a list, each iteration i can run on a different processor, making the loop up to as many times faster as there are processors available.
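
A sketch of that loop example in Python, assuming the iterations are independent of one another (work here is just a stand-in for the loop body):

from concurrent.futures import ProcessPoolExecutor

def work(i):
    # stand-in for one independent loop iteration
    return sum(range(i * 100_000))

if __name__ == "__main__":
    items = range(8)
    serial = [work(i) for i in items]           # one iteration after another
    with ProcessPoolExecutor() as pool:         # each iteration may run on a different core
        parallel = list(pool.map(work, items))
    assert serial == parallel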

Data-Level Parallelism: Based on concurrent execution of the same task on multiple computing systems. For example, the input data is split into batches and the exact same algorithm is applied to each batch.
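
A small Python sketch of that batching idea (process_batch and the batch size are illustrative choices): the data is split into batches, every batch goes through the same algorithm in a separate worker, and the results are glued back together.

from multiprocessing import Pool

def process_batch(batch):
    # the exact same algorithm applied to every batch (here: squaring)
    return [x * x for x in batch]

if __name__ == "__main__":
    data = list(range(1_000))
    batches = [data[i:i + 250] for i in range(0, len(data), 250)]
    with Pool() as pool:
        results = pool.map(process_batch, batches)   # one batch per worker
    flattened = [y for batch in results for y in batch]
    assert flattened == [x * x for x in data]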

Task-Level Parallelism: Based on the concurrent execution of different tasks on multiple computing systems.
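
For contrast with the data-level case, here is a sketch where two different tasks (the word_count and char_histogram functions are hypothetical) run concurrently on the same input:

from concurrent.futures import ProcessPoolExecutor

def word_count(text):
    return len(text.split())

def char_histogram(text):
    counts = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    return counts

if __name__ == "__main__":
    text = "parallel computing runs different tasks at the same time"
    with ProcessPoolExecutor() as pool:
        f1 = pool.submit(word_count, text)        # task 1
        f2 = pool.submit(char_histogram, text)    # task 2, a different computation
        print(f1.result(), f2.result())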

Parallel Computing Architecture

Parallel computing has two different types of architecture:

Multi-core computing: A multi-core processor is a single integrated circuit with two or more separate processing cores, each of which executes program instructions in parallel. The cores share one memory and communicate over a bus.

Symmetric Multiprocessing (SMP): A multiprocessor hardware and software architecture in which a single OS treats all processors equally, and every processor is connected to a single, shared memory.

Techniques and Solutions in Parallel Computing

Application checkpointing: Records the current state of all components in the system so that the application can be restored from that point in case of failure.
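
A toy sketch of the idea (the file name state.pkl and the step counter are hypothetical): the state is written to disk periodically, so a restarted run picks up from the last checkpoint instead of starting from scratch.

import os
import pickle

CHECKPOINT = "state.pkl"   # hypothetical checkpoint file

def run(total_steps=1_000_000):
    # restore the last recorded state if a previous run was interrupted
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"step": 0, "acc": 0}

    while state["step"] < total_steps:
        state["acc"] += state["step"]              # the actual work
        state["step"] += 1
        if state["step"] % 100_000 == 0:           # record the current state periodically
            with open(CHECKPOINT, "wb") as f:
                pickle.dump(state, f)
    return state["acc"]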

Automatic parallelisation: Automatic conversion of serial code into multi-threaded code that can run on an SMP machine.

Parallel programming languages: Classified as either distributed-memory (processes communicate by message passing) or shared-memory (threads communicate by manipulating shared variables).
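
A minimal sketch of both styles using Python's multiprocessing module: a Queue for the message-passing (distributed-memory) style, and a shared Value for the shared-memory style.

from multiprocessing import Process, Queue, Value

def producer(q):
    q.put(21)                    # distributed-memory style: send a message

def doubler(shared):
    with shared.get_lock():      # shared-memory style: manipulate a shared variable
        shared.value *= 2

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    print("received message:", q.get())
    p.join()

    shared = Value("i", 21)
    w = Process(target=doubler, args=(shared,))
    w.start()
    w.join()
    print("shared value:", shared.value)   # 42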

Difference Between Parallel Computing and Cloud Computing

Cloud Computing: The system components are located at multiple sites, it uses multiple computers, it has only distributed memory, and the machines communicate by passing messages over a network.

Parallel Computing: Many operations are performed simultaneously on a single computer, the memory can be either distributed or shared, and the processors communicate over a bus.
