WikiProject Computing / Networking (Rated C-class, Low-importance)
- 1 Technologies based on DisplayPort such as DockPort or Thunderbolt, capable of high-speed serial transport, not even mentioned
- 2 Neutrality of Unified Fabrics for High Performance Computing
- 3 What is Infiniband, anyway?
- 4 Performance Citations
- 5 Host Channel Adapter
- 6 Timeline?
- 7 Copyrighted Content? / Neutrality
- 8 Only the last two paragraphs seem to be an "advert"
- 9 History
- 10 An InfiniBand Boom?
- 11 Switched Fabric
- 12 Cray XD1/Mellanox misinformation
Technologies based on DisplayPort such as DockPort or Thunderbolt, capable of high-speed serial transport, not even mentioned
Although they are not meant for system buses or internal paths, they include features such as direct memory access and can externally connect devices with such capabilities. They can also encapsulate USB or PCIe transports.
Therefore I think they should be included in summary sections below the article or paragraph texts, just to give a complete (and unbiased) overview of computer communication links. — Preceding unsigned comment added by Ldx1 (talk • contribs) 19:33, 9 May 2014 (UTC)
Neutrality of Unified Fabrics for High Performance Computing
The first two paragraphs of the "Unified Fabrics for High Performance Computing" section sound like blatant advertising. —Preceding unsigned comment added by 188.8.131.52 (talk) 23:56, 4 February 2008 (UTC)
I would concur. This (second) paragraph in particular is full of ill-defined superlatives, the kind of style that emanates from advertising departments disconnected from technical reality. It is nothing more than a flagrant abuse of the free medium of Wikipedia.
"InfiniBand is an industry standard, advanced interconnect for high performance computing systems and enterprise applications.
The combination of high bandwidth, low latency and scalability makes InfiniBand the advanced interconnect of choice to power many of the world's biggest and fastest computer systems and enterprise data centers. Using these advanced interconnect solutions, even the most demanding HPC and enterprise grid applications can run on entry-level servers."
What is Infiniband, anyway?
It seems to me that we need to find a better expression for what Infiniband really is.
- a switched fabric communications link (current version of the article, 22 September 2006);
- a point-to-point high-speed switch fabric interconnect architecture (as of 00:21, 22 August 2006); or
I would say Infiniband is not "a bus", but also not "a link". "Architecture" is definitely better than "link", IMHO. How about "technology"? "Protocol"? "Network"? "Communication protocol"? "Serial network data transmission technology"? (just brainstorming)
- How about a "computer network technology" or "computer network architecture"? -- intgr 18:08, 22 September 2006 (UTC)
Infiniband is simply an "interconnect fabric", in the same category as HIPPI, Myrinet, Quadrics, ServerNet, SCI (Dolphin), etc. It may be time to create an article (I didn't find one) for the generic "Interconnect Fabric", as an umbrella parent article generically covering all the various computer communications interconnects: from ethernet and Infiniband, to HyperTransport and Quickpath, to SGI's NumaLink, Cray's SeaStar, and IBM's SP switch fabric. Hardwarefreak (talk) 10:49, 17 November 2010 (UTC)
Performance Citations
Someone needs to cite where the performance (specifically, throughput) results were found.
rivimey: I have included some example latencies for specific devices; there seems to be wide variation here. The devices chosen were simply those found in a Google search, and the numbers are those stated on the manufacturers' web pages. I found one archived email indicating that, as usual, the manufacturer had overstated things, in the author's opinion.
Host Channel Adapter
AFAIK, the host channel adapter (HCA) term is unique to InfiniBand (as is target channel adapter (TCA)), so it probably doesn't make sense to have a separate encyclopedia entry for either.
13:59, 10 January 2006 (UTC)
- There is no HCA article to merge with at the moment. I'm deleting the "suggested merge" tag. Alvestrand 10:02, 12 January 2006 (UTC)
Host Channel Adapter and InfiniBand should be merged at the present time, though that may change in the future.
Timeline?
Can the original author change the sentence in the top paragraph containing "In the past 12 months (...)"? I have no idea when the article was written, but maybe it should better be something like "Since 2001, all major vendors have been selling (...)". Mipmip 18:32, 29 July 2006 (UTC)
Copyrighted Content? / Neutrality
The initial paragraph seems to be copied exactly from "http://www.mellanox.com/company/infiniband.php". In fact, the whole article sounds very much as though it was written by an InfiniBand salesman (apologies if this suspicion is wrong). The information about the companies doesn't belong here in my opinion. It also doesn't mention similar competing technologies. I added the "Advert" tag. —Preceding unsigned comment added by 184.108.40.206 (talk • contribs)
If you look down through the history you'll see that various discussions of how InfiniBand relates to other technologies and challenges it faces have been deleted. Got tired of fighting with this one. Ghaff 23:04, 2 August 2006 (UTC)
- Looks like many offending edits have been done here: . The IP address 220.127.116.11 belongs to "Mellanox Technologies, Inc." so they obviously had an agenda. I'll revert these changes now, and I'd be happy to give a hand removing any future POV edits. I can't see any obvious fights (as you put it) in the history, though. -- intgr 08:49, 3 August 2006 (UTC)
rivimey: I have changed this paragraph significantly: I hope the result is (a) acceptable and (b) correct (I am not an IB expert!). It would be good if someone has the time to register at the InfiniBand Trade Association's site to get the spec; this might help with hard facts :-) However, I am not sure how copyright relates to such activity, so I haven't done so.
Only the last two paragraphs seem to be an "advert"
Most of this article is simply a fact-based history of InfiniBand. It seems NPOV, and points out the original broad expectation of InfiniBand and the narrower reality of today's uses of InfiniBand. The comment about the lack of information on InfiniBand competitors seems out of place. Including competitors would risk turning the article into an advertisement or sales discussion. No one has requested mentioning competitors to Fibre Channel in its Wikipedia article. With InfiniBand's application limited primarily to low-latency HPC clustering interconnects, the only competing/alternative technologies today are SCI and Myrinet, and both have been largely displaced by InfiniBand. The other alternatives for server clustering today are either higher-latency technologies such as Ethernet, or low-latency protocols over Ethernet media, which are not widely accepted.
Storage over InfiniBand is a niche area, but important in the InfiniBand discussion. The value proposition for InfiniBand-based storage is twofold: one, higher bandwidth than Fibre Channel (roughly 2X 4Gb Fibre Channel, going to 8X with DDR InfiniBand); and two, simpler networking within InfiniBand clusters, not requiring two different switching media and gateways/routers between the two.
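The bandwidth comparison above can be sanity-checked with a back-of-the-envelope sketch. This assumes the commonly cited line rates (4Gb Fibre Channel at 4.25 Gbit/s signaling, InfiniBand at 2.5 Gbit/s per lane for SDR and 5 Gbit/s for DDR) and 8b/10b line encoding on both; the exact multiples depend on link width and baseline, so the "8X" figure presumably assumes wider links than the 4X case computed here.

```python
# Rough effective-data-rate comparison: InfiniBand vs. 4Gb Fibre Channel.
# Rates are assumptions based on commonly cited figures, not spec quotations.

ENCODING_EFFICIENCY = 8 / 10  # 8b/10b encoding: 8 data bits per 10 line bits


def data_rate_gbps(signaling_rate_gbps: float) -> float:
    """Usable data rate after 8b/10b line-encoding overhead."""
    return signaling_rate_gbps * ENCODING_EFFICIENCY


fc_4g = data_rate_gbps(4.25)       # 4Gb Fibre Channel: 4.25 Gbit/s signaling
ib_sdr_4x = data_rate_gbps(2.5 * 4)  # InfiniBand SDR 4X: 4 lanes @ 2.5 Gbit/s
ib_ddr_4x = data_rate_gbps(5.0 * 4)  # InfiniBand DDR 4X: 4 lanes @ 5 Gbit/s

print(f"4Gb FC         : {fc_4g:.1f} Gbit/s data")
print(f"InfiniBand SDR 4X: {ib_sdr_4x:.1f} Gbit/s data ({ib_sdr_4x / fc_4g:.1f}x FC)")
print(f"InfiniBand DDR 4X: {ib_ddr_4x:.1f} Gbit/s data ({ib_ddr_4x / fc_4g:.1f}x FC)")
```

With these assumptions, SDR 4X comes out at roughly 2.4x a 4Gb FC link, consistent with the "roughly 2X" claim; reaching an 8X multiple would require wider (8X/12X) InfiniBand links.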
The HyperTunnel comments are irrelevant. There is no information at the link provided about HyperTunnel over InfiniBand; it is a nascent technology/proposal at best. Newisys Horus, to which HyperTunnel over InfiniBand is suggested as a competitor, has no major OEMs. The market for large Opteron SMP servers has not yet emerged, and it is premature to suggest HyperTunnel over InfiniBand is the answer here. AMD has suggested it will offer greater SMP scalability in future Opteron designs, so both Newisys Horus and HyperTunnel over InfiniBand may eventually be irrelevant.
I recommend the part about InfiniBand-based storage be rewritten to explain the why rather than the who, and the final paragraph be removed. Discussing competitors and/or alternatives to InfiniBand moves the discussion from objective to subjective and opens the floodgates to NPOV issues. —Preceding unsigned comment added by 18.104.22.168 (talk • contribs)
History
Nice to see a history section, but it seems a little short on dates. Did the events take place in the late 20th century, or some other time?
In particular, we need the individual revisions of the standard, the features (and compatibility limits) introduced by each one, and the documents that define them, cited by title and date. Also important are milestones (announcement, tapeout, shipment, etc…) for when (if ever) publicly available hardware implemented each one. 22.214.171.124 (talk) 02:48, 24 December 2013 (UTC)
An InfiniBand Boom?
It seems like we are on the verge of a boom in the use of InfiniBand, especially in the fast-growing high-performance computing arena. I don't have time to edit the Wikipedia article right now, but here are some references that could be used...
Westwind273 18:17, 11 July 2007 (UTC)
Switched Fabric
Point-to-point can simply refer to individual connections after establishment, rather than to the addressing scheme used to create and break connections. See the difference between "permanent" versus "switched" here. 126.96.36.199 (talk) 02:59, 24 December 2013 (UTC)
Cray XD1/Mellanox misinformation
The following snippet from the Infiniband article is factually incorrect:
"In another example of InfiniBand use within high performance computing, the Cray XD1 uses built-in Mellanox InfiniBand switches to create a fabric between HyperTransport-connected Opteron-based compute nodes."
The RapidArray router ASIC in the Cray XD1, originally called the OctigaBay 12K, is completely proprietary, originally developed by OctigaBay of Canada before the company was acquired by Cray Inc. The technology in the ASIC has its roots in high performance packet switching ASICs used in the telecom industry, from which both of the key venture partners, the CIO and CEO, had long employment histories.
Neither RapidArray nor the XD1 system contains InfiniBand technology, nor Mellanox technology. In fact, the signaling rate of the RapidArray interconnect links is many times that of InfiniBand, and the packet latency is much lower.
See: http://www.arsc.edu/news/archive/fpga/Tue-1130-Woods.pdf and http://etd.ohiolink.edu/send-pdf.cgi/DESAI%20ASHISH.pdf?ucin1141333144 Hardwarefreak (talk) 10:27, 17 November 2010 (UTC)
I just deleted the following due to the reasons above. Please do not add it back in. Deleted on 02/01/2012.
"In another example of InfiniBand use within high-performance computing, the Cray XD1 uses built-in Mellanox InfiniBand switches to create a fabric between HyperTransport-connected Opteron-based compute nodes."