Wednesday, May 24, 2006

How many cores?

Parallel computing has been around for almost as long as computers themselves. Many people have spent their entire careers working on parallel computing and have made great progress. Parallel computers go back to the early 1960s, and a huge amount of knowledge, art, science, and experience has been developed since then. Even earlier, when computers were big and expensive, parallelism was needed to allow simultaneous sharing of the CPU among users and to allow interfacing with external processes and equipment. The whole field of operating systems development started thinking about concurrent processes and synchronization.

When scientists and engineers got hooked on computers, the need for speed led to parallelism as one way to get more work done. Seymour Cray and others pioneered parallel computing in hardware and gave birth to supercomputers. That trend hasn't subsided, even though there have been ups and downs. The whole area of high performance computing is a universe of its own, with vector processing, pipelining, shared-memory parallelism, and clusters among the things that have been brewing for a long time.

At the same time, the desktop computing world has largely been blissfully ignoring parallelism, even though PC processors have internally relied on parallelism of one sort or another to keep performance going. This trend has been fueled by Moore's Law to some extent, and it seems that a wall is approaching as far as performance is concerned. The software world has been limited to the thread-level parallelism provided by the OS and to the multimedia extensions provided by the processors. This is a gross simplification, of course, but the point is that awareness of parallelism has been limited in the PC software world.

Modern CPUs have been getting very dense thanks to Moore's Law, clock frequencies have been increasing (thanks to CPU vendors), and so has the internal use of instruction-level parallelism. The world at large hasn't had to bother much with parallelism. Things would have kept going except for one problem: the CPUs are getting hot as densities grow and frequencies rise. That means the only way to keep performance growing is to start putting many independent CPU cores on a single chip. That is where software can no longer avoid dealing with parallelism.

In the server world, software has been dealing with parallelism for a while, and it has been pretty hard to get things right and scalable. The end result is that software development costs are pretty high. Dealing with race conditions, synchronization bugs, and testing is not easy; developers have been at it for quite a while, but there are still no good solutions. These developers have been a small minority of all the software developers out there. Making more developers deal with parallelism is going to be very, very painful and may lead to a productivity crisis (though some might say we already have a crisis for single-threaded software).
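
To make that concrete, here is a small sketch (in C with POSIX threads, purely for illustration) of the simplest kind of race condition: two threads incrementing a shared counter with no locking. The final count comes out different from run to run, and that non-determinism is exactly what makes these bugs so painful to find and test for.

    /* Illustrative sketch: a lost-update race on a shared counter.
       Compile with: gcc -pthread race.c */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;               /* shared, unprotected state */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                     /* read-modify-write, not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* We'd expect 2000000, but the printed value varies between runs. */
        printf("counter = %ld\n", counter);
        return 0;
    }

Wrapping the increment in a mutex fixes this particular bug, but as the code grows, keeping track of which data needs which lock (and in what order) is where the real pain starts.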

It may take a generation before parallel programming and the parallel software world become easier, but I won't try to predict the future. I am starting this blog to discuss interesting issues, problems, solutions, and hopefully insights into the world of parallel hardware and software. I'd like to hear from readers about what they think the interesting problems and trends are, and any ideas on what might make for interesting discussions on this blog.

3 Comments:

Anonymous said...

This is neat

6:15 PM  
Anonymous said...

This blog surely is going to be interesting to read. Being a computer science student myself, I'm gonna look forward to this blog and its discussions.

1:06 AM  
Vinod G said...

Hi Divya - Thanks for the comment. I have been reticent about following up. I have several follow-up posts in various states of completion but never got around to finishing them :) . Any particular topic you would want to see?

Vinod

9:38 PM  
