January 22, 1998
The Race For Speed

The superpowers of the digital age will be the supercomputers. As information becomes the currency of the New World, the faster you process it, the more powerful you get. This is no theory.
If you could crunch all the figures being generated across all the stock markets of the world, in real time, and run specific queries to help you cut killer deals, what would it take? Not the 'really fast, latest' multimedia PC you picked up last Christmas. Surely not.
If you had to test the impact of a massive nuclear explosion in the middle of overpopulated China, only a moron would even contemplate warheads.
Problems like these could be reduced to mathematical models and run on very fast computers to give you the answers. But that range of computing speed has always remained elusive.
Even as our ability to calculate faster and faster rises at a phenomenal rate, scientists and businessmen can always pose a problem that the high-performance machines cannot handle.
And chasing this moving target are some of the finest minds in computing technology. Their work will build the arsenals of tomorrow's competitors, commercial or colonial.
Yet the experts themselves are not sure what they are grappling with, right from the definition of their field (supercomputing or high-performance computing?) to the results they have already achieved (gigaflops or teraflops?).
But such debates are routine when you are at the frontier, working miracles and discovering the unthinkable.
Rediff tapped seven supercomputing experts for a status report on the digital grand prix. They were at a Bangalore conference recently where Madhuri V Krishnan spoke with…
Viktor K Prasanna, Professor and Director, Computer Engineering Division, School of Engineering, University of Southern California.
Dr C J Tan, Senior Manager, Application Systems Technology, Research Division, IBM, New York.
Mary Jane Irwin, Vice-President, ACM, New York.
N Radhakrishnan, Director, Information Technology Laboratory, US Army Corps of Engineers, Waterways Experiment Station.
Dileep P Bhandarkar, Director, Systems Architecture Workstation Products Division, Intel Corporation Desktop Products Group, WA.
Professor George Milne, Director, Advanced Computing Research Centre, School of Computer and Information Science, University of South Australia.
Arvind ("the second name's a long story"), Charles W and Jennifer C Johnson Professor of Computer Science and Engineering, Laboratory of Computer Sciences, MIT.
And now, if you are ready, we shall begin.
How would you define 'high-performance computing'? Isn't it just a more academic sounding word for 'supercomputing'?
Prasanna: These are general terms, which could mean different things to different people. High-performance computing implies designing single-chip machines to be faster, with more visualisation power, more computing power and more problem solving power. Earlier the term supercomputing was used to mean this but now they prefer calling it high-performance computing.
Tan: Everybody wants their computer to be high-performance and very fast. I think the term high-performance probably means processors that use architectures that solve problems not normally solved by PCs or smaller computers. You need very high computing power to solve some problems.
Irwin: It can be explained in different contexts. Many people are interested in solving general problems and for them high-performance and supercomputing are equivalent terms. I come from an applications background, so I look at solving application-specific problems very efficiently with a high-performance machine. You have different architectures meant to solve different kinds of problems. For instance, take a specific problem of data compression. For this you want to use radio waves efficiently. You want to take the information and compress it into the smallest space possible, yet you must be able to stretch it out again at the other end without losing any information. Now this algorithm is very complex. But it's an important problem that we're facing.
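The lossless round-trip Irwin describes — squeeze the data down, then recover every bit at the other end — can be illustrated with any general-purpose compressor. A minimal sketch in Python using the standard zlib library (our choice for illustration, not anything the speakers mention):

```python
import zlib

# Redundant data, as telemetry streams often are.
message = b"pressure 1013 hPa; " * 200

compressed = zlib.compress(message, level=9)
restored = zlib.decompress(compressed)

# Lossless: the original comes back bit for bit...
assert restored == message
# ...yet the channel carries far fewer bytes.
print(len(message), "bytes ->", len(compressed), "bytes")
```

The complexity Irwin alludes to lives inside `compress`: finding the shortest representation quickly enough for a live radio link is the hard part, not the round-trip itself.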
Radhakrishnan: High-performance computing, in contrast to supercomputing, includes not only the computing aspect but also the communications aspect and the post-processing of data for visualisation. An easier way to define high-performance computing is that it provides the end user high-performance analysis. Workstations are part of high-performance computing, and one day personal computers may be too, because they sure are getting there. The supercomputing concept addresses only the computing aspect of a supercomputer. It does not involve data communications, post-processing visualisation, virtual reality and so on.
Bhandarkar: Everyone has a different definition. High-performance computing would range from what you can do with workstations to supercomputing. Typically, what people mean is scientific and engineering application. It's a term used to mean fast scientific computing.
Milne: There are some applications which are compute intensive and cannot be done by normal computers. Classic examples are weather forecasting and the analysis of seismic data. Then there is the computing work for oil rigs, which involves a lot of simulation.
Arvind: A high-performance machine would mean the fastest computer built at any time. It could be a personal or supercomputer. Basically, it implies very fast processing.
What is the status of high-performance computing today?
Prasanna: The field is maturing but there are not many people providing such machines in the market today. Things should stabilise and standardise within the next couple of years. People are using it to solve large-scale problems, using various applications, in areas like automotive design, weather forecasting, drug design, defence, air traffic control, radar and sonar work, navigation systems, banking, space and nuclear energy.
Tan: We've come a long way. We're now able to do a lot of physical and model simulation inside a computer instead of inside a laboratory. Many new areas of application will open up in the coming years and there will be new challenges. For instance, when we first developed the SP (IBM RISC System/6000 Scalable POWERparallel System) in 1992, most of the applications were scientific, but today almost 70 per cent of the applications are commercial in nature. Applications range from inventory management to business forecasting; banks use them to manage their investment flow. We're looking forward to newer challenges, as we have to make the technology useful to us. The technology is always improving, with new architectures and new structures. We need to do weather forecasting more accurately and more quickly. We use computer technology to simulate the physical world in the fields of physics, chemistry, science and finance.
Irwin: It's not as healthy as it used to be. Supercomputer companies have largely been bought up. Cray, one of the healthiest companies, has merged with SGI. The same is true of Convex, which is now part of HP. Intel's Paragon project is no more.
Radhakrishnan: The status is very dynamic and there is aggressive growth. In some ways, the field is slightly unstable too. This is because of changes in technology like the introduction of parallel or scalable computing.
Bhandarkar: I think that in the last few years a lot of high-performance computing happened on supercomputers and mainframes. Today, with advances in microprocessor technology, a lot of high-performance applications can run on high-end PCs and workstations. High-performance capability is now reaching a larger number of users without their needing access to supercomputers like the old vector processor machines.
Milne: The field is changing. Five or six years ago it was dominated by a few companies selling machines to the tune of $2 million each. Some of these companies went bankrupt because they had such small markets. Today, there is a change of approach. A more creative approach has come about in the field. The supercomputer world is busy upgrading performance now.
Arvind: It's thriving. The biggest change that has come about in the last five years is total convergence between the kind of stuff we build our PCs with and the kind of stuff we build our supercomputers with. The same microprocessors are used in both, and the largest machines are built out of commodity symmetric multiprocessors.
What kinds of speeds are achievable now and what improvements do you see in the near future?
Prasanna: Speeds depend on the problems you are looking at. You can't really quantify them. For instance, simulations that took months would probably take weeks now. The reality is that the more computing power you have, the more complex the problems people define, so you're always chasing a moving target. Earlier, people felt that gigaflops (giga floating point operations per second, a measure of computing speed) were enough. Now we're talking teraflops and petaflops. We already want something more.
Tan: That is a relative number. Today we already have machines that approach teraflops in the US. I'm not sure about India, but all you need is the requisite number of processors bunched together and you can get that figure here.
Irwin: In the US we're talking teraflops. We're getting close to it at least.
Radhakrishnan: Speeds are generally not a precise measure. For instance, MIPS (million instructions per second, another measure of computing speed) are jokingly called 'meaningless indicators of performance'. You can get a computer with half a trillion flops (500 gigaflops) right now. We'll have a trillion flops in the next couple of years. But the measure is not precise because it usually indicates peak performance, not sustained performance. Even if everything goes right, your sustained speed will be a fraction of your peak performance. In a vector machine, that could be 90 per cent, but in a scalable parallel-processing machine, it could be as low as 15 per cent.
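Radhakrishnan's peak-versus-sustained distinction is easy to put into numbers. A back-of-envelope sketch (the machine figures below are illustrative, not from the interview):

```python
def sustained_gflops(peak_gflops, efficiency):
    """Sustained throughput as a fraction of the advertised peak."""
    return peak_gflops * efficiency

# A vector machine may sustain ~90% of its peak...
vector = sustained_gflops(100, 0.90)     # ~90 Gflops sustained
# ...while a scalable parallel machine may sustain as little as 15%.
parallel = sustained_gflops(1000, 0.15)  # ~150 Gflops sustained

# A tenfold advantage on the spec sheet shrinks sharply
# once efficiency is counted.
print(vector, parallel)
```

This is why a quoted peak figure, like MIPS, says little on its own: the efficiency factor depends on how well the problem maps onto the architecture.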
Bhandarkar: I can only tell you where Intel is today. Our fastest processor for '98 will be a 333 MHz-plus processor dubbed Deschutes.
Milne: It's hard to quantify speeds as they mean different things to different people. One can gauge it, though: computation that took days can be done within an hour at the high speeds of today. A classic example is the simulation of integrated circuits. These simulations would run for weeks, but now they can be done in days, and in the future could be done in hours.
Arvind: In our lab at MIT we have two clusters of SMPs, each with an aggregate computing power of about 25 gigaflops. It's absolutely conceivable to go from this figure to several hundred gigaflops or even teraflops if you have the money to build such a machine. Hardware is not an issue but software is, because important applications require significant investments to move to these fast machines.
Which areas of research will benefit the most from advances in speed?
Prasanna: In almost every area there are benefits to research. People talk of using 0.1 micron technology to design microchips in the next five years. All those designs have to be simulated. Simulating a single circuit like that takes high computing power; with more of it, more simulations can be done. It's limitless.
Tan: Speed is always important so that you can get your results faster and solve larger problems, but speed alone is not the issue. Your data has to be fed into your computer, the analysis has to be done, and the results have to be available in a user-friendly way. The entire system has to be balanced. The Internet has the reverse problem. It has so much data but not enough speed to access it. Network technology has to improve to the stage where the whole system works efficiently.
Irwin: If you look at mapping the human genome on the SAP, it could not have been done unless we had harnessed the computing power to do pattern matching. High-performance technology enables many other scientific and medical disciplines, allowing them to do research which they could not even contemplate 10 years ago. Look at aerospace technology, drug design and so on.
Radhakrishnan: Speed is one of many factors that have a bearing on research and its productivity. Obviously, if something's fast you get your results fast, but that's not the real function of speed. Speed also allows you to run larger problems that you were unable to run before. You can come out with a more precise definition: a supercomputer is a system an order of magnitude slower than the problem at hand. Similarly with bandwidths, which are an order of magnitude slower than people want. But with speed must go memory. Data storage capacity is essential, because if you have something that computes so fast, where would it all go? That's where data mining and visualisation come in.
Bhandarkar: High speeds can enhance benefits to research in almost all fields I can think of. Simulation of car crashes, seismic activity, forecasting weather and so on.
Milne: Speed has great industrial benefits. With respect to research, you can explore more alternatives and make some computations feasible. When you have to repeat experiments, you can compress a wider range of experiments into a smaller timeframe. And from a commercial perspective, in integrated circuit design for instance, you can get the product into the marketplace faster, and that will have huge commercial benefits for the manufacturer and user.
Arvind: Weather forecasting is the major area where high-performance computing will have great benefits. Imagine you can predict the weather, earthquakes and other natural disasters quicker and within shorter timeframes, saving a million lives.
When will today's gigaflops come down to the desktop?
Prasanna: It won't be too long. We're already looking at a single desktop processor in the range of a few hundred gigaflops, like the RS/6000. If you're looking at workstations, it will be in the next year or two. If you're looking at home PCs, it will be a couple of years.
Tan: It won't be too long in the future. Not more than a few years.
Irwin: They're getting close. By 2000.
Radhakrishnan: Today's gigaflops are on the desktop today. What is on your desktop today has something like 1,000 times the computing power that was needed to launch Apollo 13 to the moon. We have Cray-type machines on our desktops today. You just need storage, which I think will be there by 2000.
Bhandarkar: We're looking at the IA-64 processor today; this level of performance will be available to people in their offices by 2000.
Milne: It's almost there. The issue, however, is that people have no requirement for this performance. It's a very specific, specialised market that needs such high computing performance. But more importantly, it will have an impact on a wider range of people than just those in the academic and commercial fields. There will be an environment where machines will be performance based and available at low cost. The technology will become more of a commodity in the coming years and will apply to a larger community. Image compression will create huge demand in the months and years to come.
Arvind: Certainly, the fastest microprocessor available is available on the PC first. The only reason you don't see a large number of them instantly is that when a new microprocessor comes out it is dangerously expensive. Only later does the price drop and become accessible to more users.
Do you think high-performance machines will become more and more open-architecture oriented?
Prasanna: Open could imply different levels. For instance, people are trying to standardise network interfaces and network protocols. In the case of proprietary protocols, whoever designed the switch defined its mechanism for message transfer. Now the trend is toward standardising this so that anybody who designs the interface can use the standard format. People are talking of standards, not yet defined, for building parallel machines: standards on how to programme them, programming models and languages, or what is called High Performance Fortran. Another case in point is the message-passing interface, which is also used inside India. Many universities and research labs jointly defined its standards. In fact, a lot of the parallelising effort that has come about is based on message-passing standards.
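The message-passing model Prasanna refers to (standardised as MPI) boils down to processes that share no memory and cooperate only through explicit send and receive operations. A toy sketch of that pattern using Python's standard multiprocessing pipes rather than an MPI library (the workload is invented for illustration):

```python
from multiprocessing import Pipe, Process

def worker(conn):
    # Each process owns its data; results travel only as messages.
    data = conn.recv()        # explicit receive
    conn.send(sum(data))      # explicit send
    conn.close()

def pipe_sum(data):
    """Hand a chunk of work to a separate process and collect the answer."""
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send(data)
    result = parent.recv()
    p.join()
    return result

if __name__ == "__main__":
    print(pipe_sum([1, 2, 3, 4]))
```

The point of standardising the interface, as Prasanna says, is that the same send/receive program can then run unchanged over any vendor's switch or network.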
Tan: Every specific problem requires a specific problem solving application. Sure, in some cases standardisation will work. But for more complex and larger problems, closed architecture will still exist.
Irwin: We sort of have it (open architecture) already in commodity parts. In vector processing, no, it's not possible.
Radhakrishnan: Look at parallel processing. Computers are being built with commodity parts. Param, for instance, is being built with Sun boards. I do not believe there will be open architecture. Or rather, all computers will not have open architectures. There are going to be many vector processors, just as there are going to be many architectures in several categories. And because you have different architectures, you have to have specific applications written for them, and there's not enough of a market for that to happen. I believe scalable computing will survive not because of scientific computing but because of business computing. Working out real-time applications (problem solving and deduction done in real time) is the goal we are working towards.
Bhandarkar: What you will see is people building large computing facilities based on a collection of smaller machines. Four-processor machines or multiprocessors would be connected and used as supercomputers. There will be homogeneous multiprocessors using Intel servers or workstations, and I think that is the direction people will take.
Arvind: Yes, we must have open architecture out of necessity. But the market is not large enough for any one company to do it alone. Open systems mean you can mix and match from different companies, or mix and match technology from the same company but from different generations. That's probably what's going to happen in a couple of years.
Countries like India have chosen parallel processing over vector processing due to denial of technology and paucity of funds. Which direction are other nations taking?
Prasanna: Almost all major vendors like Intel, DEC and Sun are looking at parallel processing. These machines are widely used. Indian organisations like the Centre for Development of Advanced Computing and the Bhabha Atomic Research Centre have adopted the technology. Aerospace simulations are also done with it. India had to adopt it; it was the only way it could get competitive computing power. The reality today is that single chips are becoming more and more powerful. India has been able to go for parallel technology because it used local technology; it even made the US and other governments who did not want to sell their technology to India rethink what they had to sell. In the past year, the US government lifted some restrictions. Now they've said machines of up to 750 megaflops are within the limit, so there's no need for universities or research centres to get permission. This has happened due to local initiatives inside the country. The technology to build parallel machines is freely available in India itself, and the technology has advanced quite a bit in the West too as it is being commercialised. It is fairly well understood that almost anyone can build these parallel machines. You don't need a major organisation to do it for you.
Tan: Parallel processing is important because it's a new way of using the commodity technology that large companies call for. The idea is to use commodity PCs and processors, linked up or networked with one another through a switch, to achieve high performance. But here there are two problems. You have to have an efficient network to connect them together. Then you have to have software that will be able to solve your problems; that's precisely when you need speed. Eventually, one kind of machine will solve some problems better than the other.
Irwin: Parallel processors work because you use commodity parts. You get an Intel server, buy commodity processors and think of an interesting way of bunching them together. In vector processing, nobody builds them as commodity parts, so you have to custom build your processor.
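The "bunch commodity processors together" idea Tan and Irwin describe scales down to a single box: split a big computation into chunks, hand one chunk to each ordinary processor, and combine the partial results. A toy sketch in Python (the sum-of-squares workload is invented for illustration):

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """One 'node' sums the squares in its slice of the problem."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Carve the range into roughly equal chunks, one per worker,
    # then combine the partial results -- the essence of the approach.
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    assert parallel_sum_of_squares(10_000) == sum(i * i for i in range(10_000))
```

Tan's two caveats apply even here: the interconnect (the `Pool` machinery) adds overhead, and the software must be written so the problem actually splits into independent chunks.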
Radhakrishnan: India has very few vector processors. Parallel processing helps India because a) it's affordable, so universities can get them, and b) it provides them an opportunity to play in the main field. The trend the world over is to move to parallel processing. India's strength lies in her supremacy in software, which is an important ingredient in operating systems and applications systems. India had very few mainframes or big computers but we had minicomputers like the VAX. When microprocessor technology came in, it provided an opportunity to enter that type of market. And because those microchips are used in large computers, it allowed us influence in both software and hardware markets. I believe that nothing replaces anything, most times; it's just a question of relative weight. Just because microprocessor computers came in doesn't mean mainframes will go, and just because we will have Web TV, PCs won't go away. It's really a question of access.
Bhandarkar: There are fewer and fewer companies investing in vector processors, except for a few Japanese companies and Cray. The rest of the world is taking existing microprocessors and connecting them together in a parallel fashion to get parallel processing.
Arvind: Parallel processing is the only way. Vector processing is important, but from the hardware point of view you won't have as much vector processing hardware. Even the largest vector processing machines are multiprocessor ones, so we're only debating details here. The first order of performance comes from parallel computing, and each node in your parallel computer could be a vector processor or an SMP or a single workstation; for cost reasons, it will not be a vector processor.
How long will it be before silicon chip technology is restricted by the speed of electricity?
Prasanna: If you look at all the projections, one way of looking at it is from the Moore's Law perspective, which states that every 18 months the speed of technology doubles; the projection is that for the next 20 years that law will hold. We're not going to be limited by these considerations. I am predicting that we'll have single-chip gigaflop machines.
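The doubling-every-18-months arithmetic Prasanna cites compounds quickly. A back-of-envelope sketch (the figures are illustrative, not from the interview):

```python
def doublings(years, period_months=18):
    """How many doubling periods fit into the given number of years."""
    return (years * 12) / period_months

def projected_speedup(years):
    """Speed multiple after `years`, doubling every 18 months."""
    return 2 ** doublings(years)

# Over the 20-year horizon Prasanna mentions, that is
# about 13.3 doublings -- a speedup of roughly 10,000x.
print(round(projected_speedup(20)))
```

The same arithmetic explains why the target keeps moving: a machine that looks untouchable today is commodity hardware a decade later.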
Tan: They're talking about running chips by moving individual atoms in the next few years. But for the next 15 to 20 years people will still be able to get high performance from the silicon chip, and maybe even beyond that. We've got a long way to go.
Irwin: By Moore's Law, say we've got till 2008-2010 as far as scaling goes. At the moment the design constraints are changing, so we're more concerned with feature size shrink. But with chip performance improving, the other main worry is power consumption. How can you cool these chips? Designing architectures that consume less power is going to become important.
Radhakrishnan: We want to travel faster than the speed of light, not just electricity. Chips have to communicate with each other, and they can't communicate faster than the speed of light, so we have a long, long way to go. Even the chips may change in the meanwhile; instead of silicon there may be another material. There's already one in use called gallium arsenide.
Bhandarkar: Silicon chip technology has enough headroom, so we don't have any fundamental limitations at least for the next five years. We haven't identified how it will be done but we don't see any end in sight.
Milne: Somebody said 2015.
Arvind: Silicon chip technology is here to stay for at least another 10-15 years.
Will supercomputers adopt exotic methods like quantum, biomolecular and optical computing?
Prasanna: There are a lot of things at the research level, like biologically-based computing methods, quantum computing and DNA computation, but none of them is anywhere close to being realised. Maybe give them another 20 years. I definitely don't see any of them coming out commercially for the next 5-10 years at least.
Tan: Molecular and other methods of computing are still subjects of theoretical interest. In practice, silicon technology has many years to go, though as I said there are people talking of moving atoms in computers at a quantum-mechanical level.
Irwin: There are some reports that I have read, but I know little about them. I've read there's a lot of work on optical communication, some work in optics, and less work on quantum devices and biological computing.
Radhakrishnan: People are always looking at various methods, but we're still at a rudimentary level. Our interaction with the computer is at the command level. It is not at the sensory level. We're trying to mimic the brain. We want to use the computer to talk, to feel, to understand, to see. That's not really going to happen. Supercomputers are just a small component of our desire to do more of the things the brain does.
Bhandarkar: I don't think any of those technologies are mature enough for serious commercial adoption. Research institutes need to evaluate them further.
Milne: It's all happening inside research labs. There are some new technologies, like the MPJ architecture, which allows you to send applications more effectively. But the impact will be felt only in five years. There's work going on in optical computing. Silicon technology will be here for another 20 years; it's doubling every two years. When it begins to die, other technologies will emerge.
Arvind: Right now none of these methods of computing is mature enough. Quantum computing is just an idea. It will be many years before it is realised. The same goes for biological computing. Optical computing is slightly better understood, but people don't see much advantage in an optical computer. They see more advantage in optical technology for inter-processor communications. It already exists in some form. Just imagine the whole of India being wired with fibre.