Talk:Power over Ethernet

To-do list for Power over Ethernet:

There are no active tasks for this page
    • Discuss backwards compatibility with non-PoE devices. What keeps a non-PoE device from getting fried when plugged in to a PoE switch?

    Initial discussion[edit]

    I don't agree that this page should be speedily deleted. History of POE predates the IEEE standard, and similar to Wi-Fi and 10BASE-T it deserves its own article. I wrote a stub and put it in Talk:IEEE_802.3. Rhobite 22:41, Jul 5, 2004 (UTC)

    Personally, I'm all for keeping this page. --Kwnd 17:21, 16 July 2005 (UTC)

    I'm surprised there's no discussion of the EMI here. And, does POE have enough oomph to power monitors in a thin client environment? If yes, which? --DennisDaniels 13:58, 14 December 2005 (UTC)

    The current article says 16.8 watts. An old flat panel I just checked wants 12 V @ 4 A = 48 W, so offhand, I'd say NO. However, newer monitors might be closer. --ssd 21:05, 28 December 2005 (UTC)
    I've found a thinclient with a monitor using POE. Pricing is not available as yet:
    http://www.dspdesign.com/products/product_detail?product_id=118 --DennisDaniels 09:16, 3 February 2006 (UTC)
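    The arithmetic in the thread above can be checked in a couple of lines. This is only a sketch: the 12.95 W budget used here is the 802.3af per-device guarantee discussed elsewhere on this page, and the function name is mine, not from any source.

```python
def monitor_fits_poe_budget(volts: float, amps: float,
                            pd_budget_w: float = 12.95) -> bool:
    """True if a load drawing volts * amps fits the assumed PoE power budget."""
    return volts * amps <= pd_budget_w

# The old flat panel from the thread: 12 V at 4 A is 48 W, far over budget.
print(monitor_fits_poe_budget(12, 4))  # False
```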

    This article also contains many technical errors. Although a PoE switch can supply 15.4 W, the maximum power guaranteed to a device is 12.95 W once you take into account cable losses. I don't have time to edit this page now, because it needs A LOT of work. I will have to remember to come back to it later. In the meantime, this page must be flagged for poor technical accuracy. For those of you that need real numbers, I suggest you read the standard which is linked at the bottom of the article. --Babar77 17:12, 5 January 2006 (UTC)

    I have reviewed the standard and corrected these numbers. Let me know if you think I missed anything. The article still has some serious issues with organization and tangential material --Kvng (talk) 18:04, 18 July 2009 (UTC)

    Cisco EnergyWise technology could also be mentioned here. It flexibly controls the powered/unpowered state of PoE ports on their Catalyst switches thus reducing total power consumption. Lcsaszar (talk) 11:58, 15 October 2009 (UTC)

    Homebrew[edit]

    keep in mind there's also the more "homebrew" varieties of PoE which work equally well... we're currently using a version of the circuit mentioned here: http://www.wireless.org.au/~jhecker/poe/ to power orinoco AP2000s using a 12v supply from the linux firewall/router box. - --Darkaz 12:56, 6 April 2006 (UTC)

    Where does the power come from?[edit]

    When someone does get around to editing the page, please hear my request. I came looking for an answer to my (admittedly ignorant) question: "Where does the power come from?" In other words, assuming I connect a POE adapter to my router board, and connect it via ethernet cable to my switch, will the power come from the switch? Or do I have to plug the POE adapter into a wall outlet? I'm assuming it will get the power from the switch, but I wanted to be sure. I read the article but it didn't answer this question. (Or it did and I missed it.) Thanks. --Smithfarm 07:55, 22 May 2006 (UTC)

    Either you can use a switch with POE built in (in which case the power comes from the switch), or you can use a POE adaptor (in which case the adaptor will need to be plugged in). Plugwash 19:55, 16 August 2006 (UTC)
    This information needs to make it into the article. Probably in the introduction --Kvng (talk) 18:04, 18 July 2009 (UTC)

    The information is in the first sentence of the article. —Preceding unsigned comment added by Jean Couture (talkcontribs) 19:20, 26 November 2009 (UTC)

    I've edited the introduction. The information is there now. --Kvng (talk) 20:33, 27 November 2009 (UTC)
    I don't find the question ignorant at all; it's important to understand this fully. The power likely ultimately comes from the AC wall supply, which can create dangerous situations, as manufacturers of POE-supplying switches seem to ignore minor AC current flow caused by board capacitance. It's very easy to design a supply according to datasheet specs for the part you are using, only to find that it doesn't negotiate startup on some switch boxes when additional I/O is connected to your device. One can almost assume that non-differential I/O, such as RS-232, was not designed with full isolation in mind. I tried to add some warnings about this issue, but they also need more work. Danl999 (talk) 05:57, 27 October 2010 (UTC)danl999

    Agree merge is desirable, but should include midspans and RFC references[edit]

    The technology for PoE includes other components, such as Midspan power delivery, structured cabling and powered switches, and SNMP management. Merge would provide opportunity to show how the items work together without chasing all over for links. -- Kelley

    Advice against merging[edit]

    I do not think that IEEE 802.3af and PoE should be merged into one article. IEEE 802.3af is a standard, but PoE in general is a mix of all kinds of PoE, good and bad. The bad ones do not probe for a PD, and can damage network equipment.

    I will remove paragraphs that are better placed in IEEE 802.3af and IEEE 802.3at. --Glenn 06:51, 17 June 2006 (UTC)

    I agree with Glenn; PoE is a concept. Cisco EPS, 3Com EPS, IEEE 802.3af/at, etc. are instances of the concept. The PoE page as a gateway/portal to the specifics of each instance makes more sense, though a short description of each instance is useful for the casual reader to find out what it's all about. --Electron9 2006-06-19

    As far as I can tell, the merge has been completed. I think it is time for this discussion to be archived --Kvng (talk) 18:04, 18 July 2009 (UTC)

    Change Power Specifications[edit]

    The power specifications are wrong. PoE and IEEE 802.3af are near identical, but vendors like Cisco have their own PoE specifications. I would mention this and keep them separate. The power levels in Cisco PoE go up to 10 watts. The IEEE standard goes up to 15.4 watts. The definition is wrong. Here are the power specs (p. 41 of this link: http://standards.ieee.org/getieee802/download/802.3af-2003.pdf ) -- Dennis

    I have reviewed the standard and corrected these numbers. Let me know if you think I missed anything --Kvng (talk) 18:04, 18 July 2009 (UTC)

    What is a Midspan? Is it different from a switch?[edit]

    Sorry, I am not only new to Wiki, but new to PoE as well. Can anyone make a distinction for me as to what the difference is between a Midspan and a PoE switch? Thanks. 216.83.255.123 22:52, 7 March 2007 (UTC)

    From what I can tell, a midspan basically adds POE to an already established network. A POE switch is like a normal switch, but it also supplies power to devices on the network. 81.149.144.41 13:39, 2 May 2007 (UTC)

    I would like to see a reference to cable length[edit]

    I tried to implement POE over 140 meters of CAT5 cable and it failed. Only ETHERNET works. —The preceding unsigned comment was added by 193.108.195.249 (talk) 14:41, 15 April 2007 (UTC).

    Ethernet only supports a 100m cable run 86.129.12.70 20:38, 13 August 2007 (UTC)
    I would say that Ethernet officially only supports 100 m, meaning you might be able to make it work over a longer distance, but no guarantees are given. Data communication will probably be more relaxed on this issue, but not PoE, because cable resistance comes heavily into play and ruins any power output. This is also the reason for the large voltage range, 25-60 V. Electron9 23:24, 3 October 2007 (UTC)
    IEEE 802.3 Ethernet specifies a 100m maximum cable length for 100BaseTX and 1000BaseT. It's not about official support, it's about adhering to valid engineering recommendations. Having any expectation for a network to operate when your physical plant is out of spec is foolish. —Preceding unsigned comment added by 166.127.1.221 (talk) 15:45, 16 September 2009 (UTC)

    The 100 m length is made up of 90 m of solid (horizontal) cable and 2 x 5 m of (stranded) patch cords. If you vary this by adding too much stranded cable, you can be out of limit: e.g. if your patch cords total 20 metres, your horizontal run can only be 75 metres. There is a helpful downloadable tool that calculates lengths according to the standards on the Nexans LANSYSTEMS website. (Glynnphillips (talk) 13:53, 25 June 2010 (UTC))

    And for 10baseT, the 100m is on Cat 3 cable. 10baseT is known to work over longer distances on better cable. In the case of PoE, the voltage drop might be too high at maximum current level, and for a device that needs close to full voltage. Gah4 (talk) 01:08, 16 June 2017 (UTC)
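    The cable-resistance effect Electron9 mentions can be sketched numerically. The loop resistance per metre below is an assumed round figure for illustration, not a value from the standard, and this models only the DC voltage drop; past 100 m the data signaling itself is also out of spec.

```python
def pd_voltage(v_pse: float, current_a: float, length_m: float,
               loop_ohm_per_m: float = 0.125) -> float:
    """Estimate the DC voltage reaching the powered device after
    resistive loss along the cable run (assumed loop resistance)."""
    return v_pse - current_a * length_m * loop_ohm_per_m

print(pd_voltage(48.0, 0.35, 100))  # a nominal 100 m run
print(pd_voltage(48.0, 0.35, 140))  # the 140 m run from the thread
```

    With these assumed numbers the drop alone would not kill an 802.3af device, which suggests failures on long runs can also come from signaling or negotiation rather than raw voltage.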

    Midspans and RFCs[edit]

    A mid-span is what we sometimes call a power injector, it goes between the switch/router/whatever and the powered device. It provides signal connectivity between input and output but also supplies power to the PD. It does not interact with the data, it is just a power adapter. It allows PoE to be retrofitted without replacing the switch/router infrastructure.

    There are no RFCs for PoE. IEEE 802.3 is the standard for Ethernet cabling, the physical layer, well a physical layer, for IEEE 802. RFCs do govern the MIBs that allow PoE to be managed, but that has nothing in particular to do with PoE at the level it is being discussed here. Don't be confused with PPPoE, that's Point to Point Protocol over Ethernet, nothing to do with power. Chann94501 21:11, 29 June 2007 (UTC)

    More power..[edit]

    It seems the LM5072 manages 800 mA, and 26 W magnetics are available. So an unofficial 802.3af variant with 26 W capability may be on its way. Electron9 20:48, 5 November 2007 (UTC)

    This is being worked on as 802.3at. Some products are already available with higher output capability --Kvng (talk) 18:04, 18 July 2009 (UTC)

    Calculation error or wrong data?[edit]

    48 V times 400 mA equals 19.2 W, not 15.4 W. —Preceding unsigned comment added by 81.58.240.234 (talk) 10:56, 15 April 2008 (UTC)

    The equipment (PSE) can deliver max 400 mA and max 60 V, but only 15.40 W in total. Thus when the voltage is lower, more current is utilised. And vice versa. Electron9 (talk) 08:39, 14 May 2008 (UTC)
    I've corrected some errant figures. Fundamentally, though, less power is available to the receiving device than the PSE delivers, because some power is dissipated in the cable --Kvng (talk) 17:52, 18 July 2009 (UTC)
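    The correction above can be worked through with the numbers this page cites for 802.3af (350 mA maximum continuous current and a 20 ohm worst-case loop resistance; as always, check the standard itself before relying on these):

```python
i_max = 0.350   # A, 802.3af maximum continuous current
r_loop = 20.0   # ohm, worst-case 100 m channel loop resistance
pse_w = 15.4    # W, minimum PSE output power

cable_loss_w = i_max ** 2 * r_loop  # power dissipated as heat in the cable
pd_w = pse_w - cable_loss_w         # power left for the powered device

print(cable_loss_w, pd_w)  # about 2.45 W lost, 12.95 W guaranteed at the PD
```

    This is where the oft-quoted 15.4 W versus 12.95 W pair comes from: the difference is exactly the worst-case I²R loss in the cable.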

    Terminology Section[edit]

    The terminology section seems surplus to requirements to me - the terms have already been pretty well covered, and the blanket statement of "When the device is a switch, it's called an endspan. Else, if it's an intermediary device between a non PoE capable switch and a PoE device, it's called a midspan" has already been explained in much more detail further up the page. Suggest complete deletion of that section. THD TommyD (talk) 16:52, 17 September 2008 (UTC)

    USB replacement[edit]

    The article advocates Ethernet with PoE as a replacement for USB. USB has a distance limitation of 3 m and is PC centric, not really a network. The value of PoE is that it eliminates the need to supply power separately for the class of Ethernet-connected devices mentioned in the article - WAPs, cameras, phones. We can mention that USB and Firewire also deliver data and power over the same cable but I think the USB discussion should be trimmed --Kvng (talk) 17:52, 18 July 2009 (UTC)

    I guess you can DIY now with USB-PoE-USB adaptor-pairs ? --195.137.93.171 (talk) 13:08, 24 November 2010 (UTC)
    In the alternatives to PoE section, USB is claimed to be limited to 2.5 W of "non-isolated power". According to the USB article, USB can now go up to 100 W. I'm not well-enough informed to correct this article, though. ALittleSlow (talk) 01:55, 28 January 2015 (UTC)
    I've thinned this as threatened above. There are no longer power or distance specifications to get wrong. ~KvnG 16:18, 31 January 2015 (UTC)

    Fiber optics?[edit]

    Given that Ethernet is not only Category 5/5e/6 cables, I think a mention of the fiber-optic version of Ethernet would be worthwhile. A quick look found a mention of "Power over Fiber", but I'm unsure if any standards exist. 75.101.123.180 (talk) 10:28, 3 November 2009 (UTC)

    I would guess that the absence of electrical conductivity in glass could be a problem ;) --195.137.93.171 (talk) 13:10, 24 November 2010 (UTC)
    Most (all? almost all?) fiber-optics are actually plastics at this point. Aside from the direct method of photons carrying energy (think solar power). Depending on the added dopants, fiber-optics can be conductive or non-conductive. I imagine armored fibers might work even better for carrying electrical power. I wonder if anyone is working on making PoE work with GBICs/SFP. --75.101.123.182 (talk) 09:09, 15 April 2011 (UTC)
    "Most (all? almost all?) fiber-optics are actually plastics at this point." -- I dunno where you get this information from, but I'm pretty damn sure it's wrong. IIRC there is a company that has attempted to use plastic fiber for proprietary home networking, but all the serious long-distance networking products use glass fiber, as it has far better performance.
    "Depending on the added dopants, fiber-optics can be conductive or non-conductive." -- Sure, you can dope plastics to make them somewhat conductive, but AFAICT the resistances are terribly high compared to metals, so you wouldn't want to use them as "wires".
    Getting back to the original question one could divide powered fiber into two categories
    1: optical power transmission, highly inefficient and therefore will only be used in specialist applications where an extremely high degree of electrical isolation is needed
    2: combined power and fiber members in a single cable; this sort of thing is used for undersea cables, where there are both huge distances and huge data rates involved and no real way of injecting power in the middle, but within a building I don't see much point. 1000BASE-T is more than sufficient for hooking up VoIP phones and wireless access points to the network.
    -- Plugwash (talk) 06:17, 16 November 2012 (UTC)
    A few companies do have power-over-fiber products: a quick search turns up LaserMotive and RLH. Diode lasers and photovoltaics, I think. I'd argue that they're not Power over Ethernet in a useful sense, although of course you could run a power fiber in the same bundle as an Ethernet fiber. Wiml (talk) 18:17, 5 May 2013 (UTC)

    Possible to use a PoE switch instead of a PoE adaptor?[edit]

    This is a query regarding PoE; I couldn't see it answered here. I have bought a 2.4 GHz access point (vendor: Trango) which was supplied along with a PoE adaptor. I already have a PoE-enabled Dell switch in my office to which I have to connect the access point. Do I need to connect the PoE adaptor in between, or can I just connect the access point directly to the PoE-enabled PSE switch? Please include such information in this article —Preceding unsigned comment added by 78.101.203.222 (talk) 09:15, 30 December 2009 (UTC)

    @Unsigned: yes you can. If the power needed for the access point can be delivered by the switch you are using, there is no need to use the PoE adaptor. If the total power the switch can deliver, and the power each port can deliver (generally expressed either in (milli)watts or as a current in milliamperes, since the voltage is fixed), is the same as or larger than what the equipment requires, you can connect it directly to the switch. Depending on the power class of both switch and AP there is no need for an adaptor, though you might have to configure that specific port to deliver a higher power class, as in general an AP requires more power than a VoIP phone. Besides that, you can often configure a power preference: if the connected PoE devices request more power than the switch can deliver (e.g. because one of the switch's power supplies fails and it thus can't deliver full PoE power), you can configure which devices are really essential and which can be powered off in such a situation. You would normally want the phone of the reception desk/secretary to keep working, while an AP for company guests is not as important during partial power outages or equipment failures. Tonkie (talk) 23:45, 19 February 2012 (UTC)
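    The budget-and-priority behaviour Tonkie describes can be sketched as a small model. This is hypothetical code, not any vendor's actual API; port names, wattages, and the priority scheme are made up for illustration.

```python
def allocate(budget_w, ports):
    """ports: list of (name, requested watts, priority), lower number =
    more important. Returns the names of ports that stay powered when
    the chassis PoE budget is handed out in priority order."""
    powered = []
    for name, watts, _prio in sorted(ports, key=lambda p: p[2]):
        if watts <= budget_w:
            budget_w -= watts
            powered.append(name)
    return powered

ports = [("reception-phone", 7.0, 0), ("office-ap", 12.95, 1),
         ("guest-ap", 12.95, 2)]
print(allocate(25.0, ports))  # the lowest-priority guest AP is shed first
```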

    Maybe a little late for this, but if the device follows the appropriate IEEE standard, as does the switch, it should be fine. But it might be using some non-standard voltage, and also not do the negotiation required by the standard, in which case it won't work. Gah4 (talk) 01:15, 16 June 2017 (UTC)

    802.3at Layer 2 management?[edit]

    The IEEE 802.3at standard supports coarse resistor-signature power allocation AND Ethernet layer-2 [1] [2] based adjustment of required power, using Link Layer Discovery Protocol (LLDP, 802.1AB) frames formatted in type-length-value (TLV, 802.3bc) style. Any useful information on this layer-2 interface? Especially, which LLDP packets will PSEs support? Electron9 (talk) 17:46, 10 January 2010 (UTC)
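    To make the TLV mechanism concrete, here is a rough sketch of building the LLDP "Power via MDI" organizationally specific TLV (type 127, IEEE 802.3 OUI 00-12-0F, subtype 2). The field layout follows my reading of 802.1AB/802.3at and the three status bytes are placeholder values, so verify against the standards before relying on any byte positions.

```python
import struct

def power_via_mdi_tlv(requested_w: float, allocated_w: float) -> bytes:
    """Sketch of an LLDP Power via MDI TLV; power values are in 0.1 W units."""
    body = bytes([0x00, 0x12, 0x0F])   # IEEE 802.3 OUI
    body += bytes([0x02])              # subtype 2: Power via MDI
    body += bytes([0x07, 0x01, 0x05])  # placeholder: MDI support, pair, class
    body += bytes([0x00])              # placeholder: type/source/priority flags
    body += struct.pack("!H", int(requested_w * 10))  # PD requested power
    body += struct.pack("!H", int(allocated_w * 10))  # PSE allocated power
    # TLV header: 7-bit type (127) and 9-bit length, packed big-endian.
    header = struct.pack("!H", (127 << 9) | len(body))
    return header + body

tlv = power_via_mdi_tlv(13.0, 13.0)
print(len(tlv), tlv.hex())
```

    The point of the two 16-bit power fields is the negotiation Electron9 asks about: the PD requests a wattage and the PSE answers with what it actually allocated.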

    assume pin 4 + 5 as positive (+) and pin 7 + 8 as negative (-) ???[edit]

    In the Notes section of the article, version 16:59, 8 July 2010, I have trouble understanding the following sentences:

    A partial solution to the drawbacks of IEEE 802.3af is to assume pin 4 + 5 as positive (+) and pin 7 + 8 as negative (-). This would not be standards compliant but will make PD implementation easier and not damage anything. Any incompatibilities with IEEE 802.3af will only result in an unpowered device.

    It does not seem either to make sense or to be informative, given that the standard already specifies this polarity for Alternative B, and forbids a PD from supporting only one alternative. Also, the standard completely defines the polarity (except perhaps on devices supporting auto MDI/MDI-X, but because this is probably only useful on consumer laptop/desktop/tower computers, this does not matter).

    The Notes section probably needs to be completely rewritten or else removed. 92.103.5.174 (talk) 13:02, 9 July 2010 (UTC)

    I consider the Notes section to be raw material to be evaluated and incorporated into the article body if appropriate. Feel free to take a crack at this. Feel free to delete unhelpful material. --Kvng (talk) 13:47, 9 July 2010 (UTC)
    Deleted a lot of material following the notes section. The article should not be about how to design standard/nonstandard PoE. Even if it were, designers often insert diodes to protect against reverse bias. IIRC, one of the PoE standards stated that making a midspan power inserter was beyond the scope of the document. Left cable dissipation loss paragraph, but it should move up into the body. Glrx (talk) 17:12, 18 February 2011 (UTC)

    PoE UPS at PD ?[edit]

    Just wondering whether anyone is looking at putting a UPS not at the PoE switch or midspan, but next to the powered device (other-end-span?). I'm also thinking about providing high intermittent peak power, not just UPS - a little like those huge capacitors next to your car amplifier... but a rechargeable battery?

    Topic says

    "Many powered devices have another connector for an optional auxiliary power supply"

    Couldn't that also be charged by the same PoE link?

    Thought [3] might have been it, but

    "Mid-Span or End-point (PSE)"

    Maybe I should patent it myself? LOL

    --195.137.93.171 (talk) 13:27, 24 November 2010 (UTC)

    PoE Power on and operating voltages wrong[edit]

    For some reason, a datasheet for a PoE part is used as a reference for the power-on and operating voltages rather than the actual standard... and they don't agree. From the standard, 802.3at, Table 33-18 (PD power supply limits): power-on must occur at or below 42 volts for af and at devices, and the device must operate within 37 to 57 V for af and 42.5 to 57 V for at. The turn-off voltage is between the 30.0 V Voff value and the minimum operating voltage. See section 33.3.7.1 for clarification: "The PD shall turn on at a voltage less than or equal to Von [42.0V]. After the PD turns on, the PD shall stay on over the entire V_{Port_PD} range [37 to 57V for af, 42.5 to 57V for at]."

    70.133.83.60 (talk) 01:03, 11 December 2010 (UTC)
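    The thresholds quoted above can be summarized as a small state check. This is a simplified model (it ignores the turn-off hysteresis band mentioned in the comment); the numbers are the ones cited from Table 33-18.

```python
def pd_should_be_on(v_port: float, currently_on: bool,
                    at_type2: bool = False) -> bool:
    """Simplified 802.3at PD behaviour: turn on once the port voltage
    reaches 42 V, then stay on across the full operating range
    (37-57 V for af/Type 1, 42.5-57 V for at/Type 2)."""
    v_min = 42.5 if at_type2 else 37.0
    if not currently_on:
        return v_port >= 42.0              # guaranteed on by 42 V
    return v_min <= v_port <= 57.0         # stay on over the whole range

print(pd_should_be_on(44.0, currently_on=False))  # True: past turn-on point
print(pd_should_be_on(38.0, currently_on=True))   # True for af, in range
```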

    Is there any URL where the IEEE specification can be read freely? Anyway, I suggest adding another column for the specification values. They may also differ between 802.3af and 802.3at. Electron9 (talk) 03:00, 11 December 2010 (UTC)
    IEEE 802 standards may be downloaded without charge from [4] --Kvng (talk) 15:05, 11 December 2010 (UTC)
    Yes they're different...see my original post. —Preceding unsigned comment added by 70.133.83.60 (talk) 23:46, 13 December 2010 (UTC)

    Auxillary power isn't just for backup[edit]

    The auxiliary port is there to supply power for when a PoE supply is not available (including but not limited to backup situations). The wording makes it sound like it's there just for backup/failure. From section 33.3.7 PD Power, "The PD may be capable of drawing power from a local power source. When a local power source is provided, the PD may draw some, none, or all of its power from the PI [physical interface/RJ-45]." 70.133.83.60 (talk) 23:57, 13 December 2010 (UTC)


    Can't find reference for PSE class[edit]

    "PSEs classify as Class 0."

    The reference is to 802.3, section 2, 33.3. I can't find this mentioned anywhere. If a PSE is connected to another PSE, the signature will be invalid, and the "handshake" would stop at that. If this is suggesting that a PoE-powered PSE is always Class 0, then that's just wrong, since it would be up to the designer... and it would almost definitely be a Class 4, at device. I'll remove it if there isn't any disagreement.

    Multiple modes[edit]

    Apanella (talk · contribs) in an edit comment states, "Removed reference to "four pair" power from "derating section" as the standard IEEE 802.3at-2009 clearly states that any PSE is disallowed from powering all four pairs at once... thus reference afour pair power does n"

    On the other hand, one of the article's references states, "Interesting that Microsemi is able double the standard 802.3at-2009 maximum of 25W and go up to 51W without breaking the standard. I asked Daniel about this and he responded, "This is possible because the IEEE802.3at-2009 standard changed the definition of a Powered Device, compared to the text existing in IEEE802.3-2005's Clause 33. The new standard considers the PD the power interface, and not the whole device being powered. This means that one can have two power interfaces, each taking 25.5W inside the same box. And nothing precludes these to be connected one over the 2-pairs using lines 1,2,3,6 and the other using the 2-pairs that use lines 4,5,7,8. "


    Referenced material wins for now. I've reverted the recent edits. --Kvng (talk) 20:31, 7 June 2011 (UTC)


    The Keating blog article is VERY good in many ways... but using the Keating blog, which is referencing a marketing statement by Microsemi, is weak support for a wiki table that attempts to reference the IEEE standard. Apanella (talk) 17:00, 18 June 2011 (UTC)

    Modes... specifically powering mpairs in IEEE802.3at-2009[edit]

    Reviewing the IEEE 802.3at-2009 standard, I would like to point out the following: a) Every schematic in the standard references only two out of the four pairs connected to the power source. In the standard, a "PSE" is the power source.

    b) Section 33.2.3, PI pin assignments, states: "A PSE shall implement Alternative A, Alternative B, or both. While a PSE may be capable of both Alternative A and Alternative B, PSEs shall not operate Alternative A and Alternative B on the same segment simultaneously."

    c) Supporting "b": referencing section "33.3 Powered Devices (PDs)" in the same standard, see the note: "PDs that simultaneously require both Mode A and Mode B are specifically not allowed by this standard"

    Thus the standard specifically disallows 4-pair power. Clearly the article cited (Daniel) that is quoted in the last sentence of the "multiple modes" topic is in clear conflict with the IEEE standard.


    With regards to the article cited: I would submit that an IEEE standard takes precedence over product marketing materials.


    With regards to Wiki article accuracy: Also according to the standard... the maximum power is NOT 51W... but rather something in the 36.2W range... Thus in all actuality, there is a bit more work required to get the wiki article consistent with the industry standard and the 2-pair power that is allowed by the standard. I would be happy to start this work if there is agreement that the IEEE standard is an acceptable definitive source.

    A copy of the standard "IEEE 802.3at-2009" can be found at: http://standards.ieee.org/about/get/802/802.3.html

    Thanks for your time and best regards.

    Apanella (talk) 14:20, 11 June 2011 (UTC)

    Please read WP:PRIMARY before proceeding. On WP, secondary sources are preferred over primary ones. What you've written above is an interpretation of the standard (which is in conflict with a reliable secondary source). This potentially constitutes original research. If you believe the source (Keating) is wrong, the best way to address it is to find at least two other secondary sources that support your position (and find no others that support Keating in the process). --Kvng (talk) 13:05, 14 June 2011 (UTC)
    Agree in part; disagree in part.
    The bare statement (a) sounds factual, but it cannot be used to conclude that modes A/B must be separate. (That conclusion would also be a bad extrapolation/generalization.)
    Both (b) (not clear where quotation ends) and (c) are quotations from the standard, and the standard is a reliable source. With those quotations, stating that the standard disallows 4 pair power would not be WP:OR. Editors are allowed to paraphrase.
    If the standard says single modes are limited to 36.2 W, then WP:CALC would allow the statement that PSE's may source at most 36.2W.
    One could also provide a sourced statement that some manufacturers provide 4 pair power at 51 W.
    We could debate whether to state the obvious conclusion that 4-pair power equipment does not comply with the 802.3at-2009 standard (and include the "-2009" because the standard could change). I would let that statement in as a paraphrase or as a WP:CALC. I would also let in a statement that a non-compliant 4-pair PSE is probably compatible with other 802.3at-2009 PDs. If somebody gave a good engineering reason to challenge that statement, then the statements could be tossed. (But I would expect manufacturers to state that their PSEs interoperate with 802.3at PDs.)
    Any statement about why 51 W instead of 2 x 36.2 W requires a source. However, if someone made the unsourced statement 51 W was a cable thermal limit, I would not delete it. I would flag with {{cn}}.
    Glrx (talk) 16:36, 14 June 2011 (UTC)


    Thank you for the response, Glrx. Admittedly I am a bit new to Wiki articles... so I appreciate the direction given.

    There was no extrapolation or generalization (good or bad) in any statements cited in the earlier posts. However, there was "support" stated for the specific statement, directly from section 33.3.1: "PDs that simultaneously require power from both Mode A and Mode B are specifically not allowed by this standard."

    This statement indeed is the cause of the issue with the errors in the table ("Standard Implementation" section).

    Although I understand and can see how "claimed" does indeed sufficiently categorize the 51W comment in 3rd paragraph, clearly, the table in the "Standard Implementation" section attempts to describe what is actually in the IEEE standard... not how companies are trying to implement (or claim) things in the field. Thus the comment on "operating simultaneously" in both "Supported modes" and "Derating of maximum..." is indeed incorrect per previously cited references.

    Admittedly, the edits I made to remove "four pair" and simultaneous A/B operation are indeed in conflict with *"self-published" sources that claim 4-wire power. IMPORTANT: the edits are also in complete agreement with a reliable source (IEEE 802.3at-2009). As it is written today, with regards to 4-pair power, this page contains incorrect information obtained from questionable or "self-published" sources (9).

    Again, directly from the standard referenced in the table: Section 33.3.1 - "PDs that simultaneously require power from both Mode A and Mode B are specifically not allowed by this standard."

    • Please note, I am not sure I am correct in this terminology, please let me know if they are not correct. The references I am using to make this statement come from the following links:

    http://en.wikipedia.org/wiki/Wikipedia:Biographies_of_living_persons#Avoid_self-published_sources http://en.wikipedia.org/wiki/Wikipedia:SOURCES#Self-published_and_questionable_sources_as_sources_on_themselves

    Now, I do not know how to "flag" statements... maybe that is a good start... but in this case... probably not sufficient to maintain information accuracy and quality of the specific table entries.

    Apanella (talk) 16:37, 18 June 2011 (UTC)

    Does anyone actually know what the situation is with regards to maximum possible power available through PoE+? It appears that the standard does not recommend going past 36.2 W. It also appears that at least one company claims there's a loophole allowing them to go beyond that. What's actually available on the market? What are other (secondary) sources saying? --Kvng (talk) 01:49, 21 June 2011 (UTC)
    I would suggest that all the people of the IEEE standards committee and all PoE equipment vendors know the status of power limits. A quick review of switch manufacturer websites shows a plethora of ~36 W devices supporting IEEE 802.3at-2009. Arguably the largest manufacturer of switches does not even have a single 4-pair PoE switch product (at least none that I have been able to find). There is at least one company (not a switch manufacturer) that is claiming a loophole*. I am not sure that it is a secondary source, as I am still a bit confused by the definition, but as a possible example of a public-domain secondary reference, items #16 and #34 in this PDF file reference the rejection of 4-pair power as part of the present standard. (http://grouper.ieee.org/groups/802/3/at/comments/D3.1/P802d3at_D3p1_premtg_by_clause.pdf)
    I think it is correct to say that the market does have non-standards** compliant 4-pair power and standards compliant 2-pair power (** regardless of marketing claim). Albeit an informal supporting remark, it can easily be stated that the IEEE 802.3at-2009 2-pair power systems are multiple orders of magnitude more common in dollars sold, market share, and port count.
    I realize that there may be some confusion on my part as to the wiki version of the concept of a "reliable published source"... I guess, in the end, the IEEE standard is a source that is easily verifiable with regards to what is claimed in the wiki table in question that references the parameters of the standard. (Note: I have read http://en.wikipedia.org/wiki/Wikipedia:Verifiability to try to understand better.)
    Apanella (talk) 21:18, 21 June 2011 (UTC)
    Microsemi apparently pushed its interpretation of 802.3at-2009. Our article cites a blog, which is a poor source; googling turns up Microsemi press releases that support the blog. I could see deleting all mention of -2009 4-pair power because only Microsemi is pushing it. I'm waffling because Microsemi may be prominent enough to counter WP:UNDUE.
    There is another press release that claims the industry is moving to HDBaseT that will support 100W of power.
    Glrx (talk) 17:15, 22 June 2011 (UTC)

    File:1140E-7.JPG Nominated for speedy Deletion[edit]

    An image used in this article, File:1140E-7.JPG, has been nominated for speedy deletion at Wikimedia Commons for the following reason: Copyright violations
    What should I do?
    Speedy deletions at commons tend to take longer than they do on Wikipedia, so there is no rush to respond. If you feel the deletion can be contested then please do so (commons:COM:SPEEDY has further information). Otherwise consider finding a replacement image before deletion occurs.

    This notification is provided by a Bot --CommonsNotificationBot (talk) 09:08, 27 July 2011 (UTC)

    10ge + poe ?[edit]

    Can it be used for 10 Gigabit ethernet? `a5b (talk) 23:00, 6 August 2011 (UTC)

    At least one company has shown that it can be done: http://www.plxtech.com/about/news/pr/2011/0523 Mateo2 (talk) —Preceding undated comment added 14:56, 18 November 2011 (UTC).

    Cisco UPOE?[edit]

    I noticed a proprietary standard section and a Cisco proprietary section. Has anyone found good information related to the newer Cisco UPOE spec? — Preceding unsigned comment added by 163.150.25.75 (talk) 23:47, 23 January 2012 (UTC)

    Advantages[edit]

    The Power over Ethernet#Advantages over other integrated data and power standards section is unsourced, makes some dubious claims, and has a strong PoV. If the PoE article addresses the benefits over USB, it should not do it so early. PoE and USB have slightly different goals. PoE is a general peer connect; USB is a peripheral slave connect. USB power connections are cheaper than PoE; USB does not offer isolation. Data rate comparisons are to peak values rather than sustained. USB is underpowered, but FireWire offers similar power to early PoE. Connector variety seems petty; I'm not sure that I want an 8P8C connector on my digital camera, GPS, smart phone, or mouse. 802.11 might be better if power is available at both ends - no need to string an Ethernet or USB cord. Power variations are an issue -- an internet phone may not need to worry about the power at the wall socket, but my PoE switches need to get their power from the wall. Reading the section gives me the impression that PoE is so superior to USB that USB should not be used. Such a position requires a source. Glrx (talk) 17:17, 23 April 2012 (UTC)

    @Glrx : I fully agree with you, although one can give the advantages of using PoE instead of USB for specific purposes. I have changed the text in this section so that it does mention some of the advantages of PoE over USB in specific conditions, followed by some examples where USB is more logical to use and/or the disadvantages of using PoE compared to USB. With the new text of the entire section I think the "disputed" template is no longer required, thus I removed it. (If you disagree feel free to put it back in, I won't dispute that :-) Thanks, Tonkie (talk) 20:34, 25 April 2012 (UTC)
    Made a lot of changes. Statements are still unsourced, but they are not as biased. Still doubt position of section. Glrx (talk) 19:38, 26 April 2012 (UTC)
    Frankly I think comparing PoE and USB is a bit like comparing a van to a motorcycle. Yes, both can be used to get yourself from A to B, but they have totally different design goals deriving from their totally different targeted uses. Plugwash (talk) 06:54, 16 November 2012 (UTC)

    An editor removed a footnote about power consumption and added a citation-needed tag.[5] The editor claimed the footnote was WP:OR based on isolated examples. I have reverted. The power consumption for specific devices is a reasonable method of finding power consumption, editors are allowed to do WP:CALCulations, and the footnote explains the claim. Glrx (talk) 20:56, 24 February 2014 (UTC)

    Section on risks and backwards compatibility missing![edit]

    In this article there is apparently no word on non-PoE equipment (possibly older equipment) accidentally plugged into 48V PoE cables and possibly getting fried. USB 3.0 has a physically different shape for the plug and socket for easy differentiation, for example, while all RJ-45 connectors, powered and non-powered, look just the same, and the PoE lettering is often very minuscule around the sockets. This situation scares me and possibly many other IT guys big time. 82.131.210.163 (talk) 13:00, 2 August 2012 (UTC)

    This is handled by a sense operation performed by the switch before power is applied. Details are described on p. 5 here. This information should be included. I've added this to the todo list here. --Kvng (talk) 05:05, 6 August 2012 (UTC)
    Proper IEEE PoE won't apply significant voltage until it has detected a powered device on the other end, so it shouldn't cause any problems for existing equipment. Ghetto PoE on the other hand requires much more care. Plugwash (talk) 19:22, 16 November 2012 (UTC)
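    The sense operation described above can be sketched roughly as follows (a minimal illustration, not the actual standard text; the threshold values are approximate recollections of the 802.3af detection signature and should be checked against clause 33 of the standard):

```python
# Hedged sketch of the PSE detection step discussed above: the PSE probes the
# line at a low voltage and only applies full power if it measures a valid
# signature resistance. 802.3af specifies roughly a 25 kohm signature
# (valid window about 19-26.5 kohm) probed at a low voltage (~2.7-10 V).
# A non-PoE device looks like a near-short (transformer windings) or an
# open circuit, so it never sees 48 V. Values here are illustrative.

def detect_pd(signature_ohms: float) -> bool:
    """Return True if the measured signature resistance looks like a PoE PD."""
    return 19_000 <= signature_ohms <= 26_500

def pse_port_voltage(signature_ohms: float) -> float:
    """Voltage the PSE applies after detection: 48 V for a valid PD, else 0 V."""
    return 48.0 if detect_pd(signature_ohms) else 0.0

# A compliant PD presents ~25 kohm; a legacy NIC looks like a DC short or open.
assert pse_port_voltage(25_000) == 48.0   # valid PD: power up
assert pse_port_voltage(100) == 0.0       # legacy NIC (DC short): unpowered
assert pse_port_voltage(1e9) == 0.0       # open circuit: unpowered
```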

    High Wattage, 56V[edit]

    I was going to remove this section, but Kvng has tagged it with a cn. It's just another (minor) variation of the passive power scheme in the section above, so it is not a significant variation. WP:UNDUE. Maybe the above section could be modified to include this voltage variation, but my sense is the 48V PoE value was chosen as the maximum low voltage distribution value. Consequently, I'd just remove the section. Glrx (talk) 23:44, 13 August 2012 (UTC)

    A stand-alone section for this is probably WP:UNDUE. I wouldn't bother editing right now; if a citation doesn't show up, we'll want to just delete the whole section. I haven't been able to find any citations myself, so that's a likely outcome. --Kvng (talk) 13:37, 15 August 2012 (UTC)
    OK. Looks like class 2 supply may be up to 60 VDC. (http://ulstandardsinfonet.ul.com/scopes/1310.html) Some lighting sections of NEC allow 42 V peak. Glrx (talk) 16:25, 15 August 2012 (UTC)

    Dubious claims from Marvell in "Integrating EEE and PoE"[edit]

    The rather long quote from Marvell in that section seems to make two assumptions: firstly, that all links are maximum length, and secondly, that all the powered devices are drawing full power. These assumptions are clearly highly unrealistic, and as such real savings will be far lower. Plugwash (talk) 01:44, 25 March 2013 (UTC)

    AC vs. DC[edit]

    I have to object to the claim... "Critics of this approach argue that DC power is inherently less efficient than AC power due to the lower voltage, and this is made worse by the thin conductors of Ethernet."

    This claim doesn't make any sense in a system of relatively low-powered modern electronic devices. DC power is _not_ inherently less efficient or inherently of lower voltage. Modern electronic devices use DC-to-DC converter circuits (the basis of almost all modern computer power supplies). The whole idea of PoE is to use a relatively HIGH voltage such as 48-56VDC to reduce the resistive cable losses -- it doesn't matter that this is DC rather than AC. At the PoE-powered device a DC-to-DC converter then converts the 48VDC to perhaps 5VDC, or whatever voltage it needs. The advantage of AC only exists in larger power transmission systems such as the electric utility grid feeding a home or business. Galt57 (talk) 20:35, 13 January 2015 (UTC)Galt57
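    The resistive-loss argument above can be checked with a back-of-the-envelope I²R calculation. The figures below use the 802.3af worst-case assumptions (15.4 W sourced at the PSE, 44 V minimum PSE voltage, 20 ohm worst-case loop resistance); they are consistent with the 15.4 W vs 12.95 W numbers discussed elsewhere on this page, but treat them as an illustration, not a normative derivation:

```python
# Back-of-the-envelope I^2*R cable-loss calculation: raising the distribution
# voltage (48 V nominal rather than, say, 5 V) cuts resistive loss for the
# same delivered power. Assumes 802.3af worst case: 44 V minimum PSE output,
# 20 ohm loop resistance.

def cable_loss_watts(power_w: float, volts: float, loop_ohms: float) -> float:
    """I^2 * R loss in the cable for a given sourced power and voltage."""
    current = power_w / volts
    return current ** 2 * loop_ohms

# 15.4 W sourced at 44 V is 0.35 A; loss = 0.35^2 * 20 = 2.45 W.
loss_48v = cable_loss_watts(15.4, 44.0, 20.0)
delivered = 15.4 - loss_48v
assert abs(loss_48v - 2.45) < 0.01      # matches 15.4 W - 12.95 W
assert abs(delivered - 12.95) < 0.01

# The same power pushed at 5 V would need 3.08 A, and the I^2*R loss would
# exceed the sourced power entirely -- which is the point of using ~48 V DC.
loss_5v = cable_loss_watts(15.4, 5.0, 20.0)
assert loss_5v > 15.4
```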

    This is a comparison of 48 V PoE over CAT5 vs. power through standard 110 or 220 V AC receptacles. The fact that PoE is DC is not relevant. I've clarified. All of this discussion in the article is uncited and has previously been marked as such. ~KvnG 15:15, 16 January 2015 (UTC)
    There are some cases where AC is needed for galvanic reasons, but this isn't one. Modern ferrite core transformers are much more efficient than iron core transformers. If you include the energy needed to get standard AC power to where it isn't already supplied, I think PoE wins. Gah4 (talk) 04:59, 16 June 2017 (UTC)

    Mode A polarity?[edit]

    Standard Implementation:Powering Devices says "Mode A has two alternate configurations (MDI and MDI-X), using the same pairs but with different polarities." but then doesn't seem to say what those polarities are for the MDI configuration vs. the MDI-X. Starling2001 (talk) 21:09, 10 March 2015 (UTC)

    MDI vs. MDI-X is crossover cable vs. straight through cable, which reverses the pairs. If there is an MDI-X switch, take the power after the switch, otherwise put in a bridge rectifier. Auto MDI-X devices should put in the rectifier. Gah4 (talk) 05:02, 16 June 2017 (UTC)
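    The point about the bridge rectifier can be illustrated with a small sketch. The Mode A pairs (1,2) and (3,6) are standard, but the particular polarity assignment below is hypothetical, chosen only to show the mechanism: a crossover link swaps the pairs, which reverses the DC polarity the PD sees, and a diode bridge makes the PD indifferent to that:

```python
# Illustrative sketch of MDI vs. MDI-X polarity on the Mode A pairs.
# Pair assignments (1,2) and (3,6) are per 802.3; the specific polarity
# shown is a hypothetical example. A crossover swaps the pairs, flipping
# the sign of the voltage at the PD; a bridge rectifier normalizes it.

def crossover(pins: dict) -> dict:
    """Swap pairs (1,2) <-> (3,6), as a crossover cable / MDI-X port does."""
    swap = {1: 3, 2: 6, 3: 1, 6: 2}
    return {swap[p]: v for p, v in pins.items()}

def pd_input_volts(pins: dict) -> float:
    """Signed DC voltage across the Mode A pairs as seen at the PD."""
    return pins[1] - pins[3]  # one wire of each pair suffices for the sketch

def bridge_rectifier(v: float) -> float:
    """A diode bridge delivers the same output polarity for either input sign."""
    return abs(v)

# PSE drives one (hypothetical) polarity on the Mode A pairs:
straight = {1: 48.0, 2: 48.0, 3: 0.0, 6: 0.0}
crossed = crossover(straight)

assert pd_input_volts(straight) == 48.0
assert pd_input_volts(crossed) == -48.0                   # polarity reversed
assert bridge_rectifier(pd_input_volts(crossed)) == 48.0  # rectifier fixes it
```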

    PoE++?[edit]

    I'm led to believe that efforts are currently underway (this year, 2015) to standardise and ratify IEEE 802.3bt, which would become the standard for four-pair 95W PSE output. This would replace proprietary standards currently offered in this power range, e.g. so-called PoH (see http://hdbaset.org). As I don't have concrete references for most of this - does anyone better connected have the info to extend the article to say what is coming? I think it will provide designers with a very valuable extension to the PoE group of standards. — Preceding unsigned comment added by 108.171.128.169 (talk) 12:03, 27 March 2015 (UTC)

    The "Comparison with other" section[edit]

    The "Comparison with other (...)" section is in dire need of cleanup. It is basically a dumping ground for cheap computer-for-dummies style POV and unsourced blather, betrayed by weasel words that like "can be attractive" "are good choices" "may be more economical" "other approaches may be viable". Also completely unsourced claims about technologies that are not subject of the article are made (like "Remote weather sensors use very low data rates, so batteries (sometimes supplemented with solar power) and custom wireless data links are used.") This is unencyclopedic nonsense and does not belong here. I tried to trim away some of the worst, but got reverted. So - what do you think? Wefa (talk) 17:59, 15 September 2016 (UTC)

    Agreed. The comparison section doesn't really compare, and comparing networking technology with peripheral interfaces is a bad idea from the start (except for maybe a very limited aspect like e.g. how much power can be delivered and how far). I guess there are two choices: challenge the unsourced claims or salvage what's there, maybe rephrase the section to "Alternatives" and then discuss what roughly similar solutions there are from a user's POV. A comparison per se requires the same ballpark. --Zac67 (talk) 19:12, 15 September 2016 (UTC)
    now trimmed the whole thing. Basically, I don't see what should be said there. Before writing that section, an author should define the question that should be answered there ... Wefa (talk) 03:44, 14 March 2017 (UTC)
    • Oppose. The section puts PoE in perspective with other options. Where is the technology appropriate, and where is it inappropriate? PoE is a poor choice for a low data rate keyboard; USB or Bluetooth or other wireless is better. Its target is transmission over moderate distances where providing power is an issue. Glrx (talk) 17:28, 26 March 2017 (UTC)
    that seems to be a pointless exercise. PoE as an add-on standard on (certain variants of) Ethernet is well suited wherever Ethernet is, so the question seems moot. Wikipedia is not Computer Technology 101, it's an encyclopedia. If there were a need to explain such things, you could of course add a paragraph or two on PoE's main uses and its weaknesses for other uses - but please find a decent secondary source you can properly refer to for that. The comparison with USB seems far-fetched, since Ethernet and USB are differentiated based on much more primary issues than just power. Wefa (talk) 19:46, 1 April 2017 (UTC)

    sPoE[edit]

    I agree with the removal of a section on PoE without ethernet. I suspect that sPoE should belong here, but am not yet sure. Gah4 (talk) 21:50, 29 December 2017 (UTC)

    OK, it seems that Zmodo uses a two-pair (1,2) and (4,5) 12 volt PoE for their cameras. If it is only them, it probably doesn't belong, but if others are using the same thing, such that it is a defacto standard, it probably should be here. I suspect that the wiring is designed such that it won't cause problems if plugged into a regular ethernet port. Gah4 (talk) 03:22, 30 December 2017 (UTC)
    It seems proprietary, not even multi-vendor. Without specs it looks like a passive concept (real PoE is always active) although it may also be active and adhering to 802.3 standards (I couldn't find mention of 12V). Then, the only thing special is the integration of a PoE switch with the DVR – hardly mentionable. My vote is to remove until we've got more information. --Zac67 (talk) 11:07, 30 December 2017 (UTC)
    If it gets to be multi-vendor, though, I would vote to add it. What is special is that it works with two pairs, I believe with one pair for power and one for data. But yes, not add it yet. Gah4 (talk) 19:06, 30 December 2017 (UTC)
    Multi-vendor would be different, right. Single-pair network likely isn't Ethernet at all, if it isn't 100BASE-T1 intended for automotive. With data and power on the same pairs, dual-pair 100BASE-TX with Alternative A phantom power is very common though. By the way, 100BASE-T1 has recently been extended by "PoDL", to run data and power together on a single pair. That'd be novel for cameras. --Zac67 (talk) 20:48, 30 December 2017 (UTC)