IBM Sets Data Transfer Record


Researchers looking to cut costs may give existing Internet technology an extended life.

According to Intel, 639,900 GB of data is transferred every minute on the Internet. That includes 204 million emails, 61,141 hours of music, and 277,000 Facebook logins. The task of keeping the Internet churning at this voracious pace falls largely on datacom technology (short for data communications), which helps us transfer data between computer systems and devices.

Our need for faster data rates seems close to testing the limits of current datacom technology, but a recent breakthrough from IBM researchers is cause for added optimism. Not only did they set a new record for data transmission over fiber optics, beating a previous record they'd set just last year, they did so using technology and methods that many people thought were outmoded. They achieved a data rate of 64 Gb/s, around 14 percent faster than their previous record and about 2.5 times faster than the general capabilities of current technology. Impressive on its own, the record also serves a higher purpose: much-desired evidence that the data communications technology we have now still has some years left in it.
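For the numbers-minded, the article's own figures let us back out the earlier marks. Here is a quick sketch in Python; the 14 percent and 2.5x values are as reported above, so the derived rates are only approximate:

```python
# Back-of-the-envelope check of the figures above. The 14 percent and
# 2.5x values are as reported; the derived rates are therefore rough.
record_gbps = 64.0                   # the new record, in Gb/s

previous_gbps = record_gbps / 1.14   # ~14 percent slower than the record
baseline_gbps = record_gbps / 2.5    # ~2.5x slower than the record

print(f"implied previous record : {previous_gbps:.1f} Gb/s")  # ~56.1
print(f"implied current baseline: {baseline_gbps:.1f} Gb/s")  # ~25.6
```

The implied baseline of roughly 25.6 Gb/s is consistent with the 25-28 Gb/s range cited below.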

“The general theme of the research is to try to explore the limit of the datacom technology that’s currently being used today,” says Dan Kuchta, one of the IBM researchers who worked on this project.

This milestone underscores the continued usefulness of current technology in several ways. The multimode optical fiber the researchers used is a relatively low-cost cable often found in data centers and supercomputers. These cables are limited to 57 meters in length, but Kuchta says the optical links in the last two systems his team built, including the Sequoia supercomputer at Lawrence Livermore National Laboratory, were all less than 20 meters. The researchers also sent the data with standard non-return-to-zero (NRZ) modulation (think 1's and 0's). Pairing the two keeps transmission time especially short, which is vital in high-performance computing.
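For readers who want the "1's and 0's" made concrete, here is a minimal sketch of NRZ signaling; the function name and signal levels are illustrative, not taken from the IBM work:

```python
# Minimal sketch of non-return-to-zero (NRZ) signaling: each bit maps to
# one of two levels for its entire bit period, with no return to a
# neutral level in between. One symbol per bit, so symbol rate = bit rate.
def nrz_encode(bits, high=1.0, low=-1.0):
    """Map a bit sequence to a sequence of NRZ levels."""
    return [high if b else low for b in bits]

print(nrz_encode([1, 0, 1, 1, 0]))  # [1.0, -1.0, 1.0, 1.0, -1.0]
```

That simplicity is part of the appeal: there is no multi-level encoding or decoding step adding delay at either end of the link.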

Before this research, some experts believed that transfer rates using NRZ modulation would be limited to 32 Gb/s, perilously close to the 25-28 Gb/s rates that much of our technology currently runs on. As many of us know, slow data rates hinder what we can do on computers, as when a video keeps stopping to buffer. If data rates fall too far behind demand, some applications may stop working entirely. The key to achieving the record rate was applying an electrical communication technique to this optical process.

“They’re using electronics to extend the bandwidth of the laser and that’s the big step,” says Stephen E. Ralph, director of Georgia Tech’s Terabit Optical Networking Consortium.

The researchers paired a vertical-cavity surface-emitting laser (VCSEL) with a custom-made silicon-germanium chip from IBM. The VCSEL, too, is a low-cost technology whose future has been in question. Its 26 GHz of bandwidth would ordinarily support a rate of about 44 Gb/s, but IBM's chip pushed it beyond that. Because the speedup comes from this pairing, optical systems already in place cannot reach the higher data rate without adjustment. But Kuchta says what his team built is essentially a higher-speed version of what is already shipping, which, unlike most research prototypes, would mean it's ready for commercialization right now. Ralph cautions, however, that an added step showing the method's viability across a range of standard fibers would probably still be necessary. In the end, this all comes down to cost savings. Kuchta estimates that this technology costs about one-third as much as alternatives that use single-mode fibers. In the price-sensitive datacom industry, that could do more than keep money in pockets.
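Ralph's remark about "using electronics to extend the bandwidth of the laser" describes transmitter equalization. The sketch below is an illustrative stand-in, not IBM's circuit: it models the laser as a simple low-pass filter, drives it with plain NRZ and then with a hypothetical two-tap feed-forward equalizer (FFE) that pre-emphasizes bit transitions, and compares the worst-case eye opening. The tap weights, the filter constant, and the rule-of-thumb ceiling in the comments are all assumptions chosen for illustration:

```python
# Illustrative sketch only, not IBM's design. A common heuristic puts an
# unequalized NRZ link's ceiling near 1.7x its 3-dB bandwidth, i.e. about
# 1.7 * 26 GHz ~= 44 Gb/s for the VCSEL described above. Transmitter
# equalization pre-distorts the drive signal to win margin beyond that.
import numpy as np

rng = np.random.default_rng(0)

SAMPLES_PER_BIT = 8
N_BITS = 64

# Random NRZ bit stream at levels -1 / +1.
bits = rng.integers(0, 2, N_BITS)
nrz = np.repeat(2 * bits - 1, SAMPLES_PER_BIT).astype(float)

# Hypothetical 2-tap feed-forward equalizer: a main cursor plus a negative
# post-cursor one bit later, boosting transitions relative to flat runs.
MAIN_TAP, POST_TAP = 1.0, -0.35  # illustrative values
ffe = MAIN_TAP * nrz
ffe[SAMPLES_PER_BIT:] += POST_TAP * nrz[:-SAMPLES_PER_BIT]

def low_pass(signal, alpha=0.15):
    """Single-pole low-pass filter standing in for the laser's limited
    bandwidth; alpha is small so the bit rate sits well above it."""
    out = np.empty_like(signal)
    acc = 0.0
    for i, sample in enumerate(signal):
        acc += alpha * (sample - acc)
        out[i] = acc
    return out

plain = low_pass(nrz)  # laser output, no equalization
eq = low_pass(ffe)     # laser output, with pre-emphasis

# Worst-case margin, sampled at each bit's center. A value near or below
# zero means that bit would be misread at the receiver.
centers = np.arange(N_BITS) * SAMPLES_PER_BIT + SAMPLES_PER_BIT // 2
polarity = 2 * bits - 1
print("min eye, plain NRZ:", round(float(np.min(polarity * plain[centers])), 3))
print("min eye, with FFE :", round(float(np.min(polarity * eq[centers])), 3))
```

In this toy model, the unequalized eye essentially closes when the bit rate is pushed well past the filter's bandwidth, while the pre-emphasized drive keeps it open; that is the same kind of leverage IBM's silicon-germanium chip applies to the real VCSEL.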

“Taking low-cost components and pushing them to their limits has enabled the datacom community to advance,” says Kuchta.

With cost and progress so intertwined, this research may even help free up the funds to find its own successor. Even with its expiration date extended, Kuchta predicts the technology is about six years from retirement. By about 2020, it will likely be unable to keep up with demand for ever-higher data rates, and something faster will have to come along to move our data in the next decade.