
Distributed processing

Historically, and to some extent even today, mainframe computer systems were used to run a number of applications serving many users. Mainframes were housed in a central location called a data centre and connected to terminals in the various user departments. These terminals had little processing power compared with the personal computers of today, and were often referred to as dumb terminals.

A single mainframe computer served many different departmental users. It had a number of central processing units, each of which could execute programs independently, and users ran their applications on the mainframe on a time-sharing basis. At any given instant, applications from several departments could be running on the one machine, scheduled across its processors.

The development of such programs had to be highly disciplined, because an application bug or a faulty operating procedure could cause problems on a very wide scale, affecting the operation of many different departments at once. In most companies there was only one mainframe for the whole organization, so mainframe systems demanded very stringent application-development and operational controls.

Some modern software applications (as distinct from the networks themselves) use distributed processing, in which a task is divided among multiple processes residing on different computers, called servers or hosts. A company may run many such servers, possibly still housed in a single data room. During the design of an application, the processing is partitioned among different programs, each hosted on its own hardware server. Because the applications of different departments can be spread across separate servers, this arrangement offers flexibility in design, deployment and maintenance. Computer networks interconnect the servers, which must work together like a single integrated application; a minimal sketch of this split follows.
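To make the division of work concrete, here is a minimal sketch in Python using the standard-library xmlrpc modules. It is illustrative only: the service name summarize, the port 8000 and the host names are assumptions, not anything from the original text. This first program runs on a back-end server host and carries out the processing:

    # server.py - back-end half of a distributed application (illustrative).
    # Uses Python's standard-library xmlrpc.server module.
    from xmlrpc.server import SimpleXMLRPCServer

    def summarize(values):
        """Heavy processing stays on the server host."""
        return {"count": len(values), "total": sum(values)}

    # Listen on all interfaces; port 8000 is an arbitrary choice.
    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(summarize)
    print("summarize service listening on port 8000")
    server.serve_forever()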

PCs contribute to distributed processing as well: a powerful PC improves the user interface for data entry and for the presentation of results, as the client sketch below illustrates. Computer networks play a critical role here, linking the servers and the end-user PCs and moving data between them in a timely manner. For multi-location enterprises and big companies operating across national boundaries, distributed processing is essential; even in a single-site company, a properly designed distributed application provides flexibility for design, future upgrades and maintenance, and may well bring cost advantages.
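A matching front-end sketch, again assuming the standard-library xmlrpc modules: this program would run on a user's PC, gathering the input and presenting the result while the server above does the computation. The host name stats-server is a hypothetical placeholder.

    # client.py - front-end half, running on the user's PC (illustrative).
    from xmlrpc.client import ServerProxy

    # "stats-server" is a hypothetical host name for the back-end machine.
    proxy = ServerProxy("http://stats-server:8000")

    # Send the raw input across the network; the server does the work.
    result = proxy.summarize([3, 1, 4, 1, 5])

    # The PC handles the presentation of the result.
    print(f"{result['count']} readings, total {result['total']}")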

A mainframe computer still has distinct advantages in some cases. Mainframes remain widely used in banks and government, where centralized, intensive processing and centralized secure storage are required. Super mainframes are used in weather forecasting, which demands large volumes of fast, repetitive numerical computation. Even in these cases computer networks have a role: they connect the mainframe to its terminals.