Upper Division Under-Graduate College Students

11 February, 2015 - 12:31

The challenge is to try parallel computing, not just talk about it.

During the week of May 21st to May 26th, 2006, this author attended a workshop on Parallel and Distributed Computing. The workshop was given by the National Computational Science Institute and introduced parallel programming using multiple computers (a group of microcomputers clustered together to act as a single super-micro computer). The workshop emphasized several important points about the computer industry:

  1. During the past few years, super-micro computers have become more powerful and more available.
  2. Desktop computers are starting to be built with multiple processors (or cores), and we will have processors with many (10 to 30) cores within a few years.
  3. Use of super-micro computing power is widespread and growing in all areas: scientific research, engineering applications, 3D animation for computer games and education, etc.
  4. There is a shortage of educators, scientific researchers, and computer professionals who know how to manage and utilize this developing resource. The computer professionals needed include technicians who know how to create and maintain a super-micro computer, and programmers who know how to create computer applications that use parallel programming concepts.

This last item was emphasized for those of you beginning a career in computer programming: as you progress in your education, you should be aware of the changing nature of computer programming as a profession. Within a few years, all professional programmers will have to be familiar with parallel programming.

During the workshop this author wrote a program that sorts an array of 150,000 integers using two different approaches. The first way was without parallel processing. When it was compiled and executed on a single machine, it took 120.324 seconds (about 2 minutes) to run. The second way was to redesign the program so that parts of it could be run on several processors at the same time. When it was compiled and executed using 11 machines within a cluster of microcomputers, it took 20.974 seconds to run. That's almost 6 times faster. Thus, parallel programming will become a necessity in order to utilize the multi-processor hardware of the near future.
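The redesign described above (split the array, sort the pieces simultaneously, then combine the sorted pieces) can be sketched in ordinary C++. This is only a rough shared-memory analogue using `std::thread` on a multi-core machine, not the workshop's actual MPI cluster code; the function name `parallel_sort` and the choice of four threads are illustrative assumptions.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Sort a vector by splitting it into equal chunks, sorting each chunk on
// its own thread, then merging the sorted chunks back together.
// (The workshop program distributed the chunks across a cluster with MPI;
// this version illustrates the same divide-sort-merge idea on one machine.)
void parallel_sort(std::vector<int>& data, int num_threads = 4) {
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / num_threads;

    // Each thread sorts one contiguous slice of the array.
    for (int i = 0; i < num_threads; ++i) {
        auto first = data.begin() + i * chunk;
        auto last  = (i == num_threads - 1) ? data.end() : first + chunk;
        workers.emplace_back([first, last] { std::sort(first, last); });
    }
    for (auto& w : workers) w.join();

    // Merge the sorted slices pairwise until the whole array is sorted.
    for (int i = 1; i < num_threads; ++i) {
        auto mid  = data.begin() + i * chunk;
        auto last = (i == num_threads - 1) ? data.end() : mid + chunk;
        std::inplace_merge(data.begin(), mid, last);
    }
}
```

The reported times work out to a speedup of roughly 120.324 / 20.974 ≈ 5.7, which is why the text says "almost 6 times faster"; dividing the work does not give a perfect 11-fold gain because the pieces must still be distributed and merged.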

A distributed computing environment was set up in a normal computer lab using a Linux operating system stored on a CD. After booting several computers with the CD, the computers could communicate with each other with the support of "Message Passing Interface" or MPI commands. This model, known as the Bootable Cluster CD (BCCD), is available from:

Bootable Cluster CD, University of Northern Iowa.

The source code files used during the above workshop were modified into a version 8; thus, an 8 appears in the filenames. The non-parallel processing "super" code was named nonps8.cpp, and the parallel processing "super" code was named ps8.cpp. (Note: The parallel processing code contains comments that identify the part of the code run by the machine identified as the "SERVER NODE" and the part of the code run by the 10 other machines (the clients). The client machines communicate critical information to the server node using "Message Passing Interface" or MPI commands.)

Download the source code files here:

Two notable resources with supercomputer information were provided by presenters during the workshop:

Oklahoma University Supercomputing Center for Education & Research.

Contra Costa College High Performance Computing. You can also "Google" the topic's keywords and spend several days reading and experimenting with High Performance Computing.

Consider reviewing the "Educator Resources" links provided in the next section.