Kompella and Lee Present RLI at SIGCOMM 2010
An inexpensive solution for diagnosing delays in algorithmic trading and data center applications has been developed by computer scientists from Purdue University and will be presented at SIGCOMM 2010, the flagship networking conference. Professor Ramana Rao Kompella and Myungjin Lee, along with Nick Duffield of AT&T Laboratories, have proposed an architecture called reference latency interpolation (RLI) that would allow fine-grained per-flow latency measurements in routers and switches in data center environments. In trading applications, for instance, even a small additional latency in obtaining stock feeds could lead to multi-million dollar losses for investment banks. In high-performance computing applications, too, processors waste hundreds of thousands of cycles waiting for messages to arrive, reducing overall performance.
Until now, network operators have had very limited capabilities for diagnosing customer-specific problems in data center networks. A new market for high-fidelity measurement has sprung up, with companies such as Corvil providing expensive boxes for fine-grained measurements. "The prohibitive cost of commercial boxes, about 90K British Pounds for 2 x 10G interfaces, prevents operators from deploying these boxes ubiquitously. Our solution will allow native support for fine-grained measurements at a very low cost," says Prof. Kompella.
Last year, Prof. Kompella, along with colleagues from UCSD, proposed a high-speed data structure called the Lossy Difference Aggregator (LDA), but it is designed for aggregate measurements, not per-flow ones. "A serious limitation of LDA is that it only gives aggregate latency measurements, which may not be sufficient, since many SLAs are customer-specific and certain problems can affect only certain flows while averages across all packets may appear normal," says Myungjin Lee, the first author of the paper and a Ph.D. student advised by Prof. Kompella. "The foundation of RLI lies in the notion of delay locality, whereby packets that traverse the router close together in time are often subject to the same queuing and other effects. Thus, by injecting a few intelligent probes at regular intervals, one can obtain reference delay samples that can then be used to estimate per-packet, and in turn per-flow, latencies," explains Dr. Nick Duffield, a researcher at AT&T Labs and a co-author on the paper. The research, funded by the NSF and a grant from Cisco Systems, will certainly help network operators in these latency-sensitive environments diagnose and fix problems more easily.
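The interpolation idea Dr. Duffield describes can be sketched roughly as follows. This is a hypothetical simplification for illustration, not the authors' actual router implementation: the function names, linear interpolation rule, and data layout are assumptions. Given delay samples obtained from reference probes at known times, each packet's delay is estimated from the surrounding probe samples, and the estimates are aggregated into per-flow averages.

```python
from bisect import bisect_right
from collections import defaultdict

def interpolate_delay(probe_times, probe_delays, t):
    """Estimate the delay of a packet observed at time t by linearly
    interpolating between the two surrounding reference-probe samples.
    (Linear interpolation is an assumption made for this sketch.)"""
    i = bisect_right(probe_times, t)
    if i == 0:
        return probe_delays[0]      # before the first probe: nearest sample
    if i == len(probe_times):
        return probe_delays[-1]     # after the last probe: nearest sample
    t0, t1 = probe_times[i - 1], probe_times[i]
    d0, d1 = probe_delays[i - 1], probe_delays[i]
    frac = (t - t0) / (t1 - t0)
    return d0 + frac * (d1 - d0)

def per_flow_latency(packets, probe_times, probe_delays):
    """Aggregate interpolated per-packet delay estimates into per-flow
    averages. `packets` is a list of (flow_id, timestamp) pairs."""
    totals = defaultdict(lambda: [0.0, 0])
    for flow, t in packets:
        est = interpolate_delay(probe_times, probe_delays, t)
        totals[flow][0] += est
        totals[flow][1] += 1
    return {flow: s / n for flow, (s, n) in totals.items()}

# Example: reference probes every 10 time units with measured delays,
# and three packets belonging to two flows.
probes_t = [0, 10, 20, 30]
probes_d = [1.0, 3.0, 2.0, 2.0]
pkts = [("A", 5), ("A", 15), ("B", 25)]
print(per_flow_latency(pkts, probes_t, probes_d))  # {'A': 2.25, 'B': 2.0}
```

The sketch illustrates why delay locality matters: the estimate for each packet is only as good as the assumption that nearby probes experienced the same queuing conditions, which is why RLI injects probes at regular, closely spaced intervals.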