Infineta Systems

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 150.101.153.91 (talk) at 01:46, 30 August 2011 (→‎Competitors: added Exinda, organised alphabetically). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Infineta Systems
Company type: Private
Industry: Networking hardware
Founded: California, 2008
Headquarters: San Jose, California
Area served: Worldwide
Key people:
Raj Kanaya, CEO
K.V.S. Ramarao, CTO
Ainslie Mayberry, CFO
Haseeb S. Budhani, VP Products
Amol Mahajani, VP Engineering
John Oh, VP Marketing
Steven Velardi, VP Sales
Website: www.infineta.com

Infineta Systems is an information technology company that makes WAN optimization products for high-performance, latency-sensitive network applications. The company's flagship product, the Data Mobility Switch (DMS), makes it possible for application throughput to exceed the nominal bandwidth of the link.

Company

Infineta was founded by Raj Kanaya, the CEO, and Dr. K.V.S. Ramarao, the CTO, to address the business challenges facing large organizations in the era of Big Data. Dr. Ramarao concluded that the computational resources, especially I/O operations and CPU cycles, required by existing deduplication technologies would ultimately limit their scalability.[1] He and Mr. Kanaya determined that new technologies would be needed to address Big Data, and they founded Infineta to develop the necessary algorithms and hardware. The company currently has six patents pending for the technology.

Infineta is headquartered in San Jose, California and has attracted $30 million in two rounds of venture funding from Alloy Ventures, North Bridge Venture Partners, and Rembrandt Venture Partners.[2][3]

Products

Infineta launched its Data Mobility Switch in June 2011 after more than two years of development and extensive field trials. The DMS is the first WAN optimization technology to work at throughput rates of 10 Gbps and above.[4] Infineta designed the product in FPGA hardware around a multi-gigabit switch fabric to minimize latency. As a result, accelerated packets (that is, packets processed by the deduplication engine) incur an average port-to-port latency of no more than 50 microseconds, while unaccelerated packets are bridged through the system at wire speed.[5] The company decided against designs based on software, large dictionaries, and existing deduplication algorithms because it found that the high operational overhead and latency introduced by these legacy technologies do not permit scaling to the level required for data center applications, which include replication, data migration, and virtualization.[6]

The DMS works by removing redundant data bytes from network flows, a process known as data deduplication, which allows the same information to be transferred across a link using only 10–15% of the bytes otherwise required. As a result, either the applications generating the data respond with increased performance, or there is a net decrease in the amount of WAN bandwidth those applications consume.
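The general idea behind flow-level deduplication can be illustrated with a minimal Python sketch. This is not Infineta's algorithm: the fixed chunk size, SHA-256 fingerprints, and in-memory dictionary are illustrative assumptions, and real systems use variable-size chunking and synchronized sender/receiver state.

```python
import hashlib

CHUNK_SIZE = 64  # fixed-size chunks for simplicity; real systems often chunk variably

def deduplicate(stream: bytes, dictionary: dict) -> list:
    """Split a flow into chunks; send a short fingerprint for chunks seen before."""
    tokens = []
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).digest()
        if digest in dictionary:
            tokens.append(("ref", digest))   # redundant chunk: send 32-byte fingerprint
        else:
            dictionary[digest] = chunk
            tokens.append(("raw", chunk))    # first occurrence: send the literal bytes
    return tokens

def reassemble(tokens: list) -> bytes:
    """Receiver rebuilds the flow, learning each chunk from its first occurrence."""
    seen, parts = {}, []
    for kind, value in tokens:
        if kind == "raw":
            seen[hashlib.sha256(value).digest()] = value
            parts.append(value)
        else:
            parts.append(seen[value])
    return b"".join(parts)

# A highly redundant flow: 100 identical 64-byte chunks.
# Only the first chunk travels in full; the other 99 become fingerprints.
payload = (b"ABCD" * 16) * 100
tokens = deduplicate(payload, {})
assert reassemble(tokens) == payload
```

The bandwidth saving comes from the ratio of fingerprint size to chunk size: here each repeated 64-byte chunk is replaced by a 32-byte reference, and with larger chunks the reduction approaches the 10–15% figure cited above.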

The product is also designed to address the long-standing issue of TCP performance[7] on long fat networks ("LFNs"), so that even unreduced data can achieve throughput equal to the WAN bandwidth. To illustrate, consider transferring a 2.5 GB (20 billion bit) file from New York to Chicago (15 ms one-way latency, 30 ms round-trip time) over a 1 Gbps link. With standard TCP and its default 64 KB window size, the transfer would take about 19 minutes; at the link's theoretical maximum of 1 Gbps, it would take about 20 seconds. The DMS performs the transfer in 19.5 to 21 seconds.[8]
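The window-limit arithmetic in this example can be checked directly. The sketch below assumes the classic formula that window-limited TCP delivers at most one receive window per round trip (throughput ≤ RWIN / RTT); the constants are taken from the example above.

```python
# Window-limited TCP can deliver at most one receive window per round trip.
RWIN_BITS = 65_536 * 8        # default 64 KB window without window scaling
RTT_S = 0.030                 # New York-Chicago round-trip time from the example
LINK_BPS = 1_000_000_000      # 1 Gbps link
FILE_BITS = 20_000_000_000    # 2.5 GB file

throughput_bps = RWIN_BITS / RTT_S              # about 17.5 Mbit/s
window_limited_s = FILE_BITS / throughput_bps   # about 1,144 s, roughly 19 minutes
link_limited_s = FILE_BITS / LINK_BPS           # 20 s if the pipe is kept full

print(round(throughput_bps), round(window_limited_s), link_limited_s)
```

The gap between the two figures, roughly 19 minutes versus 20 seconds, is the LFN problem the product targets: the default window fills only a small fraction of the link's bandwidth-delay product.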

Competitors

Other vendors in the area of WAN optimization include Aryaka, Blue Coat, Cisco WAAS, Exinda, Riverbed Technology, and Silver Peak Systems.

References

  1. ^ Martynov, Maxim (2009). "Challenges for High-Speed Protocol-Independent Redundancy Eliminating Systems". Proceedings of the 18th International Conference on Computer Communications and Networks (ICCCN 2009): 6. doi:10.1109/ICCCN.2009.5235389. ISSN 1095-2055.
  2. ^ "San Jose-Based Infineta Systems Raises $15 Million in Second Round". Silicon Valley Wire. 2011-06-06. Retrieved 2011-07-29.
  3. ^ "Infineta raises $15M to move big data across data centers — Cloud Computing News". Gigaom.com. 2011-06-06. Retrieved 2011-07-29.
  4. ^ Rath, John. "Infineta Ships 10Gbps Data Mobility Switch". Retrieved 2011-06-07.
  5. ^ "Typically, the entire latency budget between two servers participating in long distance live migration is around 5-6 milliseconds. Enterprises will need to find ways to optimize this latency-sensitive workflow while remaining within the necessary latency budget." - Jim Metzler, Vice President, Ashton, Metzler & Associates
  6. ^ “Highly-scalable, multi-gigabit WAN optimization will play a critical role in next-generation data centers as more applications, data, and services become centralized and delivered to remote sites over a WAN.... Achieving the highest degree of performance while simplifying data center architecture around space, cooling, and power will be crucial.” Joe Skorupa, research vice president, data center convergence, Gartner.
  7. ^ Jacobson, Van. "TCP Extensions for High Performance". RFC 1323. IETF (ietf.org).
  8. ^ Throughput can be calculated as Throughput ≤ RWIN / RTT, where RWIN is the TCP receive window and RTT is the round-trip latency to and from the target. The default TCP window size in the absence of window scaling is 65,536 bytes, or 524,288 bits. So for this example, Throughput = 524,288 bits / 0.03 seconds ≈ 17,476,267 bits/second, or about 17.5 Mbit/s. Dividing the bits to be transferred by the rate of transfer gives 20,000,000,000 bits / 17,476,267 bits/second ≈ 1,144 seconds, or about 19.1 minutes.