A peek into Intel's future
Ed Sperling, Forbes.com

February 19, 2009
Since the creation of Moore's Law in 1965, people have been predicting its death. Even Gordon Moore, the law's creator, had to revise it a couple of times to make it work.

The new wrinkle is that it's no longer just transistors doubling every couple of years. Now it's processor cores, different kinds of chips, and spaces between the wires on a chip so small you can literally count the atoms between them. Forbes caught up with Pat Gelsinger, senior vice president and general manager of Intel's (Nasdaq: INTC) Enterprise Group, to take a look into Intel's future.

Forbes: We've been adhering to Moore's Law since the 1960s. How much farther does it go?

Gelsinger: We see no end in sight. The analogy I like to use is it's like driving down a road on a foggy night. How far can you see in front of you? Maybe 100 yards. But if you go down the road 50 yards, you can see another 100 yards. For Moore's Law, it's always been about a decade of visibility into the future. Today we have about a decade of visibility. We're at 45 nanometers; 32 nanometers is looking healthy, 22 nanometers is healthy, 14 nanometers is well under way and we're doing the core research on 10 nanometers.
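
Those node numbers follow a simple cadence: each full process generation shrinks linear dimensions by roughly 0.7x, which halves transistor area and doubles density. A quick back-of-the-envelope check in Python (the arithmetic is idealized; actual node names are partly marketing):

    # Each full node shrink scales linear dimensions by ~1/sqrt(2),
    # halving transistor area -- the engine behind Moore's Law doubling.
    node_nm = 45.0
    for _ in range(4):
        node_nm *= 0.5 ** 0.5  # ~0.7x linear shrink per generation
        print(f"next node: ~{node_nm:.0f} nm")
    # prints ~32, ~23, ~16, ~11 -- close to the 32/22/14/10 nm roadmap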

Are we going to see the hundreds of cores Intel predicted several years ago?

Right now, many applications in the server space can scale to an almost unlimited core count.

For things like search and database applications, that makes sense, but how about Oracle Financials or SAP R/3?

The database portions of those applications scale well, but the other portions don't scale nearly as well as a Web transaction, for example, where each thread is a different user. Clearly search is infinitely scalable, except for the gather piece of that.
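
That per-user scaling is easy to sketch: when every transaction touches only its own user's data, threads never contend, so throughput grows with core count. A minimal Python illustration, with handle_request as a hypothetical stand-in for real transaction logic:

    # Each web transaction is independent -- the embarrassingly
    # parallel case that scales to high core counts.
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(user_id: int) -> str:
        # Touches only this user's data, so there is no lock
        # contention between threads.
        return f"response for user {user_id}"

    with ThreadPoolExecutor(max_workers=32) as pool:  # say, one per core
        results = list(pool.map(handle_request, range(1000)))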

Is the "gather piece" where you put all the independent searches back together?

Yes. The core breakthrough by Google was based on the MapReduce algorithm, and the "reduce" piece is the linear part where you gather together all the independent searches on the Web. Potentially, you can infinitely scale the size of the search, but then you have to pull the results together and figure out which is the single best result--that piece is not scalable. The other pieces scale wonderfully with cores, threads and more servers, and interestingly, there are lots of those kinds of problems.
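
To make that asymmetry concrete, here is a toy Python version of the split (not Google's code; score() and the shard contents are invented): the per-shard map work parallelizes across processes, while the final gather is a serial pass.

    # Toy MapReduce-style search: the per-shard "map" scales across
    # cores; the final gather/"reduce" is a single linear step.
    from concurrent.futures import ProcessPoolExecutor

    def score(doc: str) -> int:
        return doc.count("intel")      # pretend relevance score

    def best_in_shard(shard: list[str]) -> str:
        return max(shard, key=score)   # map: independent per shard

    if __name__ == "__main__":
        shards = [["intel cpu", "amd gpu"], ["intel intel atom", "arm soc"]]
        with ProcessPoolExecutor() as pool:
            candidates = list(pool.map(best_in_shard, shards))  # parallel
        print(max(candidates, key=score))  # gather: the linear piece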

So in the server space, the sky is the limit on the number of cores, even though there are a number of applications, like traditional mainframe applications, that were written single-threaded. The grandson of the guy who wrote them is now retiring. Those will never become multithreaded applications. There is a very long tail of server applications that will not become multithreaded for as long as we can see.

What's Intel's vision for those applications? Will they be replaced by new applications?

Some will be replaced, but before they are replaced they will be virtualized and containerized. You can wrap it up in a nice virtualized container and move it around. You can put a [service-oriented architecture] interface on it--pick your favorite, SOAP (simple object access protocol) or XML (extensible markup language)--so that all of your new scalable applications can interoperate with that legacy application. It becomes less and less critical to the overall enterprise and data center operation.
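
A minimal sketch of what "putting an SOA interface on it" can look like: a thin XML-over-HTTP facade in front of a legacy program. Everything here is illustrative; run_legacy(), the endpoint and the XML shape are placeholders, not any vendor's actual API.

    # A thin XML facade exposing a legacy call to newer applications.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import xml.etree.ElementTree as ET

    def run_legacy(account: str) -> str:
        return f"balance-for-{account}"  # stand-in for the real legacy call

    class LegacyFacade(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            account = ET.fromstring(body).findtext("account")
            reply = f"<result>{run_legacy(account)}</result>"
            self.send_response(200)
            self.send_header("Content-Type", "application/xml")
            self.end_headers()
            self.wfile.write(reply.encode())

    HTTPServer(("localhost", 8080), LegacyFacade).serve_forever()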

Why is Intel such a big fan of virtualization?

Virtualization has so many great attributes. One of those is the ability to use multiple cores. You can now consolidate multiple applications, each hosted in its own operating-system environment, onto one machine and take advantage of the scale of those new platforms.

One of the barriers to a data center IT manager upgrading to a new server might be a particular application that runs on Windows NT 3.51, an early multithreaded version of Windows. You really want to move it to a new server, but you don't want to go through the work of fixing the DLLs (dynamic link libraries), porting the application, and re-validating it in the new environment. But you can containerize that application in a virtual machine, put other virtual machines on that server as well, lower the operational cost and actually accelerate the migration to new server hardware.
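
As an illustration of that containerization step, here is a hedged sketch using the libvirt Python bindings to define and boot such a VM on a KVM host; the domain values (name, memory, disk image path) are invented for the example, and the libvirt bindings are assumed to be installed.

    # Register and boot a legacy workload as a KVM virtual machine.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>legacy-nt-app</name>
      <memory unit='MiB'>512</memory>
      <vcpu>1</vcpu>
      <os><type arch='i686'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/libvirt/images/legacy-nt.img'/>
          <target dev='hda' bus='ide'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
    dom = conn.defineXML(DOMAIN_XML)       # persist the VM definition
    dom.create()                           # boot the containerized legacy app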

There also are occasions where, in a production environment, you can put up a new operating system environment to do testing. You can test a database running in parallel while the old one is running somewhere else, or you can test a new Linux load. You don't need to install a whole set of parallel hardware to do that testing. You can just set up a virtual machine and have a checker running to verify the results are the same.
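
That "checker" reduces to running the same query against both environments and comparing the results. A minimal Python sketch, using sqlite3 as a stand-in for the real database drivers, with placeholder paths:

    # Verify a candidate environment returns the same results as
    # production before cutting over.
    import sqlite3  # stand-in for the real database drivers

    def fetch(db_path: str, query: str):
        with sqlite3.connect(db_path) as conn:
            return sorted(conn.execute(query).fetchall())

    query = "SELECT id, total FROM orders"
    old = fetch("/data/prod.db", query)       # existing environment
    new = fetch("/data/candidate.db", query)  # parallel VM under test
    assert old == new, "candidate environment diverges from production"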

Your definition of virtualization is much broader than how many IT people look at it, though. You're talking about virtualization across the enterprise, right?

Yes. When I spoke at the VMware user conference, I said that virtualization will be the disaggregation of the traditional operating system. It creates the opportunity to create the data-center-wide operating system of the future. In the past, I had one server tied to a single operating system. Virtualization disaggregates that view.

But you're also pointing to running more than one application or operating system on a server. You're talking about dividing up work across enterprise resources.

On the one hand, it could be multiple operating systems on a single piece of hardware. But once you've done that, I can aggregate my server resources. I can run one operating environment across many servers, as if it were one machine with more cores, or I can run many operating environments on one server.

In the future, I can do redundancy across those environments, so instead of having a very expensive fault-tolerant machine in the corner, I can do redundancy at the machine level. I can run a highly fault-tolerant environment on industry-standard hardware and meet or exceed the level of reliability I would have had on that dedicated machine.
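
One simple way to picture that machine-level redundancy is a watchdog that probes the active instance and routes to a standby when it fails. A toy Python sketch; the hosts and health endpoint are hypothetical:

    # Fail over between commodity VMs instead of buying a
    # fault-tolerant machine; runs forever as a daemon.
    import time
    import urllib.request

    HOSTS = ["http://vm-a:8080/health", "http://vm-b:8080/health"]

    def healthy(url: str) -> bool:
        try:
            return urllib.request.urlopen(url, timeout=2).status == 200
        except OSError:
            return False

    while True:
        active = next((h for h in HOSTS if healthy(h)), None)
        print("routing traffic to:", active or "NO HEALTHY INSTANCE")
        time.sleep(5)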

Do you introduce more complexity by doing that?

Hopefully not. Today each virtual machine becomes an entity that needs to be managed. If I have one piece of hardware running four virtual machines, each OS is an instance of a license and a management entity. So while you've done really well from a hardware manageability standpoint, you still need to establish more efficiency in software licensing and manageability.

One of the areas that can make this work better is a different view of the operating environment and the management environment. Now you can spawn virtual machines at almost no cost except for the management. Microsoft, Sun and VMware have different concepts in this area. But this is becoming a critical technology layer to accomplish these data-center-wide virtual operating environments of the future.

The jury seems to be out on how easy that is, even though the trends are pointing in that direction.

Clearly, the trend is there. The value is enormous. There are two aspects to this. One is how any one of them does in a homogeneous environment--VMware with VMware, Microsoft with Microsoft. The other is how they do in a heterogeneous environment, where you have Linux virtual machines running alongside Microsoft OSs and VMware virtual machines. And by the way, that's a lot of the unique value proposition of Intel. You need your hardware to run across all those environments.


