Talk:SHA-3
This article has not yet been rated on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Please add the quality rating to the {{WikiProject banner shell}} template instead of this project banner. See WP:PIQA for details.
Presentation of the Block Permutation
The presentation of the block permutation differs from that in the reference. The mapping between bits in the state and the matrix is specified on page 8 as follows:
"The mapping between the bits of s and those of a is s[w(5y + x) + z] = a[x][y][z]."
The Wikipedia page basically switches the first and second coordinates:
"Let a[i][j][k] be bit (i×5 + j)×w + k of the input,[..]"
The following description of the algorithm is correct, but confusing for those comparing reference, implementation guidelines and other sources. I think it would be helpful to stay closer to the reference in this regard, especially as I don't see an advantage in presenting it with the coordinates switch.
If there is no sign of disagreement, I will come back and change the section accordingly. — Preceding unsigned comment added by Deejaydarvin (talk • contribs) 10:12, 25 June 2013 (UTC)
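The two indexing conventions can be compared directly. A minimal sketch (assuming the usual lane size w = 64 for Keccak-f[1600]; the function names are illustrative, not from either source):

```python
# Reference (Keccak team) mapping: s[w*(5y + x) + z] = a[x][y][z]
# The Wikipedia text instead defines a[i][j][k] as bit (i*5 + j)*w + k,
# i.e. the first two coordinates are exchanged relative to the reference.
w = 64  # lane size for Keccak-f[1600]

def ref_index(x, y, z):
    """Bit position of a[x][y][z] in the state s, per the reference."""
    return w * (5 * y + x) + z

def wiki_index(i, j, k):
    """Bit position of a[i][j][k] per the Wikipedia article's wording."""
    return (i * 5 + j) * w + k

# The two formulas agree only when the first two coordinates are swapped:
assert ref_index(3, 1, 7) == wiki_index(1, 3, 7)
```

This makes the comment's point concrete: the article's formula is correct, but its coordinates are transposed relative to the reference specification.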
Requested move
- The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.
The result of the proposal was moved. --BDD (talk) 19:19, 11 October 2012 (UTC) (non-admin closure)
Keccak → SHA-3 – Now that Keccak is the official SHA-3 algorithm, this article should be moved to SHA-3 (and perhaps recreate Keccak as a redirect to SHA-3 if it's felt warranted.) moof (talk) 16:56, 4 October 2012 (UTC)
- Support, just like Rijndael redirects to Advanced Encryption Standard (and not Advanced Encryption Standard process) -- intgr [talk] 17:01, 4 October 2012 (UTC)
- Support move. @moof: A move will automatically leave a redirect from Keccak. Nageh (talk) 12:35, 5 October 2012 (UTC)
- Support. SHA-3 will become the much more commonly used name for this algorithm, like AES. Make Keccak a redirect here, and include it in the history as the origin of SHA-3 —fudoreaper (talk) 06:09, 9 October 2012 (UTC)
- I was just about to suggest this. BrokenSegue 21:14, 10 October 2012 (UTC)
- The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.
Not yet finalized
Update: SHA-3 was added to the Secure Hash Standard by NIST today. (http://www.nist.gov/customcf/get_pdf.cfm?pub_id=919060) — Preceding unsigned comment added by Rbrightwell (talk • contribs) 17:50, 5 August 2015 (UTC)
The SHA-3 standard does not exist yet: the Secure Hash Standard (SHS) has not yet been updated. The only thing that is 100% sure is that SHA-3 will be based on Keccak. This fact was pointed out by the Keccak authors at FOSDEM 2013 (https://fosdem.org/2013/schedule/event/security_keccak/).
- At what time of the video do they make that statement? I am watching, but the video is pretty long. —fudoreaper (talk) 03:07, 13 February 2013 (UTC)
- Ha, i just found it. 40:45 is the time when he mentions this clearly. We may need to modify this article then... —fudoreaper (talk) 03:19, 13 February 2013 (UTC)
- I tried to see if Wikipedia has a template for upcoming standards or similar but couldn't find one. If such a template doesn't exist, then perhaps something along the lines of:
As of [date] NIST hasn't yet published the final SHA-3 specification. Contents of this article are subject to change once the final standard is published. Woupsi (talk) 22:06, 13 February 2013 (UTC)
- Yes, something like this should be clearly stated in the beginning. What happened was that the article called Keccak was moved to SHA-3, so a lot of the text comes from the days when it was only talking about Keccak. Go ahead and make some changes! —fudoreaper (talk) 08:12, 19 February 2013 (UTC)
- Updated the article to not mention any particular variants like "SHA3-256", because the standard is not published, and so it is not final! -- Sverdrup (talk) 16:04, 18 February 2013 (UTC)
news on finalization https://docs.google.com/file/d/0BzRYQSHuuMYOQXdHWkRiZXlURVE 80.98.89.22 (talk) 22:04, 27 August 2013 (UTC)
Can we delete the rhash sample data? SHA-3 is not standardized, and the trickle of changes during the standardization process is changing the test values; moreover, the current text suggests they're from a non-standard rhash utility. — Preceding unsigned comment added by 216.113.160.81 (talk) 00:52, 6 December 2014 (UTC)
I think it could be reasonable to remove the RHASH example. I changed it because, when researching the algorithm, I found this example only matched the FIPS standard and the competition output on an empty string. The original block simply gave the examples as if there was no difference. I could give sample data to prove the point of the small change for the FIPS standard, but I'm not aware of it being published, so I hesitate to do so. I think that is why the RHASH algorithm has become so popular: they have a far greater list of example inputs and outputs than the FIPS examples, which consist of a few bits. I've already seen the RHASH algorithm popping up in other applications, probably because it is easier to test. It may be useful to instead have a section that shows the difference between the standards; however, the problem remains that the only published input example that is the same between all three is the empty string, and that just happens to be the worst example to use. It will be interesting if early adopters of SHA-3 mostly get it wrong, simply because RHASH got it wrong. The fact that it is wrong seems to be important. 74.200.48.5 (talk) 14:35, 1 May 2015 (UTC)
- seconded. examples should be kept to a minimum, and only official values, the latest draft in our case. Krisztián Pintér (talk) 14:39, 1 May 2015 (UTC)
- removed the RHASH. also the "keccak" examples, unclear what they were. kept only the standardized ones, plus added the obligatory avalanche showcase. Pintér Krisztián (talk) 20:26, 15 August 2015 (UTC)
reopen the case for separate keccak article
in the light of recent documents, i suggest keccak and sha-3 to be separated. rationale: in this document http://keccak.noekeon.org/NoteSoftwareInterface.pdf authors suggest a wide array of uses for keccak outside the scope of a hash function. also there are different usage modes, namely the overwrite mode absorbing (as opposed to the xor method), reduced rounds for first Keccak-f in special cases like keyed mode, and sakura tree hashing with special padding. as of now, it is impossible to incorporate these into wikipedia, because they are not related to SHA-3, and there is no keccak article. 178.21.48.247 (talk) 14:32, 26 July 2013 (UTC)
- It's not necessary to create a separate article for that, just create a subsection about the non-SHA features and make that clear in text. As an example, the Advanced Encryption Standard article also discusses the Rijndael-specific block and key lengths which are not in AES. -- intgr [talk] 06:37, 28 July 2013 (UTC)
- not necessary but reasonable 178.21.48.155 (talk) 11:16, 29 July 2013 (UTC)
- The variant of Keccak now being proposed by NIST for SHA-3 standardization is a specific implementation of Keccak (http://keccak.noekeon.org/NoteSoftwareInterface.pdf). I believe separating Keccak and SHA-3 into two articles would be wise. Even if that can't be accommodated, the differences between Keccak as a family of primitives, Keccak as suggested for use as a hash function, and SHA-3 as defined by NIST should really be clarified. Now that the standardization process is nearing completion, they are diverging and are no longer equivalent. —Rbpolsen ᴛ · ᴄ
more: CAESAR contestants ketje http://competitions.cr.yp.to/round1/ketjev1.pdf and keyak http://competitions.cr.yp.to/round1/keyakv1.pdf are based on smaller state and reduced round keccak. (i am 178.21.48.247 above) Krisztián Pintér (talk) 12:34, 19 March 2014 (UTC)
now what? now we have the SHAKE's as well. where to put it? Krisztián Pintér (talk) 18:59, 7 May 2014 (UTC)
The name SHA-3 seems to be rather restrictive to SHA3-{224,256,384,512}, while there is much more than that: the SHAKEs indeed, but also the new functions standardized by NIST in SP 800-185 (cSHAKE, KMAC, etc.), Ketje, Keyak, the STROBE protocol... So I think the current redirect does not help in showing the bigger picture, which would be a nice added value of this Wikipedia page. 62.147.254.143 (talk) 03:04, 4 August 2017 (UTC)
controversy section
added a little bit of info about the fuss that is going on. sadly, due to US government ineptness, i can't cite the djb mail from the NIST mailing list, it is not available. — Preceding unsigned comment added by 80.98.89.22 (talk) 16:53, 13 October 2013 (UTC)
- turns out that it is a registration only site, and they don't seem to hand out accounts as easily as they claim. does anyone have an alternative source? 176.63.52.22 (talk) 22:14, 3 November 2013 (UTC)
Removing statements by Paul Crowley
I've never heard of "Paul Crowley", and he doesn't have a Wikipedia article (in contrast to e.g. Bruce Schneier who is cited in the same section). A Google search for "Paul Crowley" doesn't turn up any cryptologist (there's an Irish football player, and a lawyer that comes on the first page). The citation itself seems to be a blog site. I'm being bold and removing the statement, particularly considering the controversy around the weakening of Keccak by NIST. We need to be careful who is being cited and their weight in the cryptologic community. Please cite who he is before adding him back. 83.248.146.73 (talk) 14:16, 16 February 2014 (UTC)
- Paul Crowley cryptanalyzed Salsa20 and was awarded a prize for it. His comments on the controversy are technically substantiated and can bring another light to the controversy, so they are worth adding back. 82.220.1.204 (talk) 16:17, 3 March 2014 (UTC)
- i personally have no objection, but i made it a little shorter Krisztián Pintér (talk) 17:22, 3 March 2014 (UTC)
Problem with the third item of the references
I think there is a little problem with item three of the section "References", because it is showing the following string in red:
"|first1= missing |last1= in Authors list (help)"
I'm sorry, but I don't know how to fix it, so, I'm reporting here.
Regards,
Lp.vitor (talk) 22:39, 21 October 2014 (UTC)
Free implementations available already?
For the previous hash algorithms, there have been free implementations available under the BSD license. Is there such an implementation available for SHA-3 already? Schily (talk) 15:56, 18 August 2015 (UTC)
tweaks
i suggest the deletion of the tweaks section, or merging it into the history section. now that the standard is out, tweaks does not seem all that relevant. Krisztián Pintér (talk) 08:51, 8 September 2015 (UTC)
I think deletion is not good, but I support moving it to the history. --mafutrct (talk) 10:59, 8 October 2015 (UTC)
padding
AFAIK the padding is 1...0...1 for Keccak, 011...0...1 for SHA3 and 11111...0...1 for SHAKE. — Preceding unsigned comment added by 95.111.59.55 (talk) 12:13, 11 October 2015 (UTC)
- those bits are not part of the padding, though obviously it will be implemented as one step in libraries. for example the rule for sha3 is: M || 01 || 10*1. this is practically equivalent to M || 0110*1. this is already covered in the article. Krisztián Pintér (talk) 12:36, 11 October 2015 (UTC)
I have a question: The article now says that a maximum of r-1 zeros are added, but given that there is a 1 in front and a 1 at the end, and that the total number of bits should be r, it seems to me that the maximal number of zeros is r-2 instead of r-1. This should occur when padding the empty message.
- r-1 zeros are needed if there is only one bit of space left in the final block. the 1 goes to that block, followed by another block with r-1 zeros and a 1. this never happens in byte-based implementations. Krisztián Pintér (talk) 11:18, 26 July 2017 (UTC)
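For byte-aligned messages, the combined domain-separation and pad10*1 step discussed above can be sketched as follows (a minimal illustration, assuming the SHA3-256 rate of 136 bytes; the function name is made up for the example). The SHA-3 domain bits 01 and the first pad bit merge into the byte 0x06 in the LSB-first bit convention:

```python
RATE_BYTES = 136  # rate r = 1088 bits for SHA3-256; an assumption for this sketch

def sha3_pad(message: bytes, rate_bytes: int = RATE_BYTES) -> bytes:
    """Append the SHA-3 domain bits 01 followed by pad10*1 (byte-aligned sketch)."""
    q = rate_bytes - (len(message) % rate_bytes)  # bytes needed to complete a block
    if q == 1:
        # Only one byte of space: the domain bits, the leading pad 1 and the
        # trailing pad 1 all land in the same byte (0x06 | 0x80 = 0x86).
        return message + b"\x86"
    return message + b"\x06" + b"\x00" * (q - 2) + b"\x80"

# Padding the empty message fills exactly one full rate-sized block:
padded = sha3_pad(b"")
assert len(padded) == RATE_BYTES and padded[0] == 0x06 and padded[-1] == 0x80
```

Note that in this byte-granular view the "only one bit left" case from the reply above can never arise, since message lengths are multiples of 8 bits.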
Is the example section correct?
I made an implementation of SHA3, and ran SHA3-224(""), and I get: d46b7676dc1570d228520b1b9dd5caa1319ec425e2775ec399f96fb7.
I've noticed that there's no in-line citation in the examples section, so I doubt it's correct.
Do we have a reference implementation we can defer to? — Preceding unsigned comment added by Dannyniu (talk • contribs) 12:20, 14 May 2016 (UTC)
- i'm not really in the mood to track down more, but specifically empty-224 is here: http://csrc.nist.gov/groups/ST/toolkit/documents/Examples/SHA3-224_Msg0.pdf
- Krisztián Pintér (talk) 13:40, 14 May 2016 (UTC)
- and some more empty here http://csrc.nist.gov/groups/ST/toolkit/examples.html#aHashing Krisztián Pintér (talk) 13:41, 14 May 2016 (UTC)
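For what it's worth, the final FIPS 202 value for the empty message can also be reproduced with Python's standard library (hashlib has SHA-3 support since Python 3.6), matching the NIST example vector linked above:

```python
import hashlib

# SHA3-224 of the empty string per the final FIPS 202 standard.
digest = hashlib.sha3_224(b"").hexdigest()
print(digest)
# 6b4e03423667dbb73b6e15454f0eb1abd4597f9a1b078e3f5b5a6bc7
```

A value that differs from this (such as the one quoted at the top of this section) typically comes from a pre-standard Keccak implementation with the old padding.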
NIST controversy, capacity setting, pre-image and collision resistance.
I think some technical details may be worth mentioning. I can draft sentences on this, but finding reference may be over the wall for me.
Because of the birthday attack, the collision resistance of any hash function is at most half of its pre-image resistance. As NIST was questioning the necessity of pre-image resistance twice the minimal target security level, many in the field (cite Daniel J. Bernstein, Bruce Schneier, quotes here) have raised concern over the move. While favorable in performance, the tweak would cause SHA3 to have significantly less pre-image resistance than its SHA2 counterparts. — Preceding unsigned comment added by Dannyniu (talk • contribs) 09:22, 31 May 2017 (UTC)
Also, there's the trivia that quantum computers can pre-image a sponge function in capacity/3 compared to outlen/2 on true random oracles. Dannyniu (talk) 09:30, 31 May 2017 (UTC)
- @Dannyniu and Ciphergoth: I just added a paragraph explaining the capacity change:
- The hash function competition called for hash functions with a d-bit output to have d/2-bit resistance to collision attacks and d-bit resistance to preimage attacks, the maximum achievable for d bits of output. Keccak's security proof allows an adjustable level of security based on a "capacity" c, providing c/2-bit resistance to both collision and preimage attacks. To meet the original competition rules, Keccak's authors proposed c=2d. The announced change was to accept the same d/2-bit security for all forms of attack and standardize c=d. This would have sped up Keccak by allowing an additional d bits of input to be hashed each round.
- I didn't see this discussion until after writing the above, but I think it addresses the issue. Review and edits solicited! I considered adding wording of the form "without weakening the hash in typical applications where the output size is determined by the required collision resistance", but after adding all the necessary caveats and clarifications, it just got too ponderous. Maybe someone else can do better? 104.153.72.218 (talk) 20:57, 12 July 2017 (UTC)
- to my knowledge, NIST did not explicitly call for preimage/collision resistances; rather, the requirement was to provide the same security level as the four SHA2 variants, and that happened to be b and b/2. the current text suggests that they actually set specific requirements and then changed their minds. the reality is that it did not occur to them that these can be separately tuned. if anyone has any source for the opposite, please share. i suggest reverting edit 02:19, 13 July 2017. Krisztián Pintér (talk) 07:00, 13 July 2017 (UTC)
- I couldn't find a reference for the claim that "quantum computers can pre-image a sponge function in capacity/3 compared to outlen/2 on true random oracles", and I suspect that it is confusing pre-images and collisions; the restriction to sponge functions seems wrong as well (the result applies to any hash function), see User talk:Dannyniu#NIST controversy, capacity setting, pre-image and collision resistance. If anyone has a reference, please provide it. --rtc (talk) 21:55, 16 July 2017 (UTC)
- Ref seems to be [1], which doesn't give an academic source either. Anyway, capacity = 2*outlen, so capacity/3 = 2*outlen/3 = 2/3*outlen > outlen/2. --rtc (talk) 13:26, 22 July 2017 (UTC)
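The arithmetic behind the capacity-change speedup discussed in this section can be checked directly. A sketch for the 256-bit variant (the Keccak-f[1600] state size of 1600 bits is fixed; the variable names are illustrative):

```python
# Keccak-f[1600] splits its b = 1600-bit state into rate r and capacity c,
# with b = r + c. More capacity means more security margin but fewer input
# bits absorbed per permutation call.
b = 1600
c_original = 2 * 256  # c = 2d, as submitted to the competition
c_proposed = 256      # c = d, as NIST proposed in 2013

r_original = b - c_original  # 1088 bits absorbed per permutation call
r_proposed = b - c_proposed  # 1344 bits absorbed per permutation call

speedup = r_proposed / r_original  # ~1.24, i.e. roughly 24% more throughput
print(r_original, r_proposed, round((speedup - 1) * 100))
```

This is the "additional d bits of input ... hashed each round" mentioned in the quoted paragraph, made concrete for d = 256.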
Speed in lede
I added the following to the lede:
- SHA-3 has failed to find widespread adoption. While fast in hardware, it has been criticized for being slow in software; SHA2-512 is more than twice as fast as SHA3-512, and SHA-1 (though broken and with a much lower security claim) more than three times as fast, on a Skylake processor at 3.2 GHz.[1] On the other hand, SHA2-512 is susceptible to length extension attacks, so it is insecure for some applications, and some are thus recommending to use SHA3-512 instead.[2]
This was removed by User:Krisztián Pintér with the comment "removing irrelevant paragraph from lead. speed section suffices." I don't think so. The topic raises the question of why hardly anybody is using SHA-3. The lede should give an explanation and give the facts. I don't think that this crucially relevant information should be buried deep down in the article. --rtc (talk) 12:05, 17 July 2017 (UTC)
- whether SHA3 is slow is not a central point that would belong in the lead. it would be if speed were an actual issue, but your very link explains that this is mostly just warm air coming from half-educated blake2 fanboys. there is a section for explaining the facts, and that is more than enough. its adoption is not important either, considering that it was never meant to be used in the first place. NIST itself said that it is just a backup plan in case something happens to SHA2. i support putting a paragraph about this latter status. but i don't think you can successfully argue that the first thing a reader needs to know about sha3 is that it "failed" to spread and is criticized for speed. the lead is a place for priority information mostly in the "what the heck is this" domain, not minutia. Krisztián Pintér (talk) 12:22, 17 July 2017 (UTC)
- There are two crucial properties of cryptographic primitives: their security and their speed. It is easy to design something that is secure but slow, and it is easy to design something that is fast but insecure. And an encyclopedia article in its lead should report the crucial properties of the subject. Software speed IS an actual issue for SHA-3. The alternatives that the authors provide are not disputing this fact, and thus there is no "warm air". It's simple: SHA3-512 is slow compared to SHA2-512. The authors say a) use hardware (not an option, it's simply not available in standard CPUs), b) reduce the security parameters (SHAKE), c) use cSHAKE to exploit parallelism (which is NOT a SHA-3, but a derived algorithm -- this article is about SHA-3), d) reduce the security parameters even more (KangarooTwelve) (again, NOT a SHA-3). yeah, "what the heck is this" -- it's a SLOW algorithm. That's what it is and that's what the lead should say. The authors say: "the initially-intended outcome of the competition is a set of four functions called SHA3-224, SHA3-256, SHA3-384, and SHA3-512. If 'SHA-3' means these four functions, then indeed SHA3-512 is unnecessarily slowed down by an excessive security parameter." That's not a fair statement. SHA2 has the same specifications, and it isn't as slow. The specifications are not to blame. --rtc (talk) 21:04, 17 July 2017 (UTC)
- it is debatable how important speed is, but it is definitely not important if the primitive in question is not extraordinarily slow or fast. despite the noise, keccak is not slow. bear in mind that the fair comparison is shake256 vs sha512, even if the former is less secure, but it is not a practical difference, and the standard itself recommends shakes. your own link says that the speed is comparable to primitives used in the industry. once more, the lead is there to summarize the key points. if you truly believe that a major characteristic of keccak is its slow speed, you didn't pay attention. i would suggest putting something there, because now it is short. in my view, we should include its status as a plan-b, not something you should transition to. and maybe before that, some brief mention of the sponge construction, which is the main novelty of the design. try to argue that "slow speed" is more important than these two. Krisztián Pintér (talk) 21:20, 17 July 2017 (UTC)
- If SHA-2 is twice as fast as SHA-3, that MEANS that SHA-3 is extraordinarily slow. As you say, shake256 is less secure than SHA512. Thus it is not a fair comparison. We do not have to discuss whether "it is not a practical difference" (though quantum computing + someone finding a fast way to compute the Keccak permutation function + moore's law + advances in cryptography is certainly something one has to consider. I remember articles in computer magazines of the 90s which claimed that a 1024 bit RSA key will not be within the reach of breaking for the expected age of the universe); the difference is simply there. SHA-3 is not a plan b. It was designed specifically to address the undesirable properties, not to say weaknesses, of the MD construction. See Merkle–Damgård_construction#Security_characteristics. "Crypto 2004 Conference: The Sky Falls ... Joux shows a surprising property in Merkle-Damgaard hashes"[3] That advances were made against SHA-1 etc., leading to the suspicion that they might be advanced to attack SHA-2 as well one day, only added to that. I am NOT saying that "a major characteristic of keccak is its slow speed", I am saying that SHA-3 (not Keccak with its many non-SHA3 or SHA3-derived variants) has been criticized for its slow speed in software. --rtc (talk) 21:51, 17 July 2017 (UTC)
- it is your task to show that speed is a problem. and it should be well sourced, because the lead is the most important part of an article. if you really believe that being 20% slower than sha2 is such a serious point that it is more important than its novel construction or intended use, you need to find some very strong evidence for that. but i bet you can only find blogposts. Krisztián Pintér (talk) 22:02, 17 July 2017 (UTC)
- For the same security parameters, it is more than 50% slower. The source was given, it is the authors themselves. --rtc (talk) 22:08, 17 July 2017 (UTC)
- this is going nowhere. shake is the main instance, and recommended by NIST. the speed of SHA3-s does not matter, you will only use them in rare cases, where the bigger preimage resistance matters. but i really don't care about your opinion. it is not a forum, it is an encyclopedia. if you think speed is a huge issue, prove it. not that it is slower, but that it is so important that it should be in the lead. or show me proof that the speed is hindering its acceptance or adoption. your judgement is not relevant. Krisztián Pintér (talk) 22:14, 17 July 2017 (UTC)
- Your judgement is not relevant. I am merely reporting what the authors admit -- that SHA-3 has been criticized for being slow in software. --rtc (talk) 22:17, 17 July 2017 (UTC)
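For readers who want to reproduce the kind of software-speed comparison being debated above, a quick machine-dependent sketch using Python's hashlib (the absolute numbers will differ from the cited Skylake figures, and the Python overhead is small for inputs this large):

```python
import hashlib
import timeit

# Compare SHA2-512 and SHA3-512 software throughput on this machine.
# The exact ratio depends on the CPU and the OpenSSL build backing hashlib.
data = b"\x00" * (1 << 20)  # 1 MiB of input

for name in ("sha512", "sha3_512"):
    t = timeit.timeit(lambda: hashlib.new(name, data).digest(), number=20)
    print(f"{name}: {20 / t:.1f} MiB/s")
```

On typical x86 hardware without SHA-3 acceleration, sha512 comes out well ahead, which is the data point both sides of the discussion agree on; the disagreement is only over how much it matters.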
[outdent]
I’ve been following this discussion today. The Wikipedia policy is pretty clear: We should only put things in the lede which reliable sources say about the topic in question. If reliable sources give extensive coverage to SHA-3’s software performance, then we put that there. But, that means a reliable source: A book about the topic, or a peer reviewed paper which gives an overview of modern cryptographic hash functions. Keep in mind that Adam Langley’s blog is not a reliable source as per our guidelines on what is and is not a reliable source. To say that SHA-3 is slow just because some people on some random Reddit discussion board feel it’s slow is not a reliable source; remember, the Pizzagate conspiracy theory started as a thread in Reddit. Since reliable sources do not give extensive discussion about SHA-3 being slower in software (in fact, they say SHA-3 is incredibly fast when implemented in hardware), to put any discussion based on a Reddit or Ycombinator thread violates WP:UNDUE. Samboy (talk) 02:12, 18 July 2017 (UTC)
- It is not putting "any discussion based on a Reddit or Ycombinator", but based on the authors themselves, who react to such criticism. Obviously it is relevant enough for them to respond, and so it should be relevant enough for the lead. I find it quite ridiculous what specious arguments you and Krisztián use to keep the fundamental truth about SHA-3 out of the lead. The authors say, yes, it's slow compared to SHA-2, but only because "SHA3-512 is unnecessarily slowed down by an excessive security parameter". Well, that this is unnecessary is the authors' opinion; however, it does not change the fact, implied by the authors ("indeed"), that SHA-3 is slow compared to SHA-2. At least they admit that the decision to "unnecessarily" "slow" SHA-3 was made after "a fierce controversy in 2013" SHA-3#Capacity_change_controversy Apparently one can have a different opinion about the "slowing down" being necessary. But all that is irrelevant; what is relevant is that it is slow. --rtc (talk) 06:24, 18 July 2017 (UTC)
- bear in mind that nobody debates the data points. the speeds are known. the debate is about how important these are, and whether it is actually impeding its adoption in a significant manner. the fact that it is X% slower than X other hash is not something you put in the lead unless it is important for some reason. show that it did not make it into tls for this reason. or some other important protocol or software chose another primitive as it was faster. something of that nature. Krisztián Pintér (talk) 07:26, 18 July 2017 (UTC)
- “based on the authors themselves” Which is as per Wikipedia policy (observe the line “Articles should be based on reliable, third-party, published sources” emphasis mine), not the most reliable of source. “which specious arguments you and Krisztián try to keep the fundamental thruth about SHA-3 out of the lead” You claim our argument is “specious”, but you have failed to tell us which Wikipedia policy makes our argument specious. The argument, again, is that there is not enough independent, reliable third party discussion about SHA-3’s speed when rendered in software — the same old tired argument which only Blake2 advocates really care about now that SHA-3 is the most recent NIST hashing standard — for this discussion to be placed in the lede. If you disagree, please link to this third party discussion; please do not pretend we’re not making a reasonable valid argument just because you disagree with its conclusion. Samboy (talk) 10:04, 18 July 2017 (UTC)
- Another point is that the general consensus among the cryptographic community is that anything over 256 bits of security is redundant. 256 bits of security means that if every single atom in the solar system were making 10,000,000 instances of SHAKE256 a second, it would still take 350,000 years to brute-force through all possible 256-bit combinations. SHA3-512 is only more secure than SHAKE256 in an abstract manner. I’ve removed the non-neutral wording implying SHAKE256 is somehow, in a real-world manner, less secure than SHA3-512. Samboy (talk) 10:41, 18 July 2017 (UTC)
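As an aside for readers following along, the 350,000-year figure above can be sanity-checked with a short script. The ~1e57 atom count for the solar system (dominated by the Sun) is my own rough assumption; the 10,000,000 trials per atom per second is taken from the comment:

```python
# Back-of-the-envelope check of the "350,000 years" claim above.
# Assumption: ~1e57 atoms in the solar system (the Sun alone is ~1e57);
# each atom performs 10,000,000 hash trials per second, per the comment.

ATOMS = 1e57                  # rough solar-system atom count (assumption)
RATE = 1e7                    # trials per second per atom
SECONDS_PER_YEAR = 365.25 * 24 * 3600

keyspace = 2**256             # all possible 256-bit values
seconds = keyspace / (ATOMS * RATE)
years = seconds / SECONDS_PER_YEAR
print(f"{years:.3g} years")   # on the order of a few hundred thousand years
```

With these assumptions the result lands in the 300,000–400,000-year range, so the figure is at least the right order of magnitude.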
- It is NOT the general consensus that anything over 256 bits of security is redundant. Quantum computing can cut security in half. For instance, SHAKE128 only has 64 bits of security against quantum attacks, not that much. Also, advances in cryptanalysis can cut down security. Further, there may be applications that use SHA2-512 in such a way that only, say, 256 bits of security are left (not far-fetched at all: using the hash as a key for Disk_encryption_theory#XTS). And finally, there's Moore's law. You may believe that it cannot apply forever, but who knows? Advances in science and technology are often unexpected. Therefore, SHA3-512 is not more secure only in an abstract manner. The wording was giving the facts and added to the understanding of the article; please do not remove those facts, as that biases the article. Just accept that those who were advocating removal of 512-bit security in the debate lost. --rtc (talk) 15:47, 18 July 2017 (UTC)
- I removed your edits [2] and [3] because they are spin-doctoring the article. You claim they "imply" something you don't like, but in fact they are just giving the facts. The fact that the XOR is fast compared to the permutation. The fact that the hash becomes more secure the higher the capacity is chosen. The fact that preimage resistance was cut in half. All this is valuable information in the article. You removed "The authors have reacted to this criticism by suggesting to use SHAKE128 and SHAKE256 instead of SHA3-256 and SHA3-512, at the expense of cutting the preimage resistance in half (but while keeping the collision resistance)." because you find that "256 bits of security is every atom in the solar system making 10,000,000 SHAKE256 calculations a second for over 350,000 years; let’s not imply that cutting preimage resistance down to 256 bits decreases real-world security". Even if it were true, your removal is inappropriate, as this is about SHA3-512 AND SHA3-256, and half of SHA3-256 security is 128, not 256. You also removed "the less efficient, but more secure, the hashing becomes, since more bits of the message can be XORed into the state (a quick operation) before each application of the complex and slow f." with the comment "Removing implication that 512 bits of security is somehow, in any real world sense, more secure than 256 bits, and removing non-neutral wording". Again, this has nothing to do with 256 bits. It applies to any number of bits and is an important property of the cipher. And there's no neutrality problem with the text; on the contrary, it is very important to stress that f is complex and slow while XORing is quick and simple. That is the crucial point of how Keccak weighs speed against security in its variants. The article gives due weight to all views; there is no need to censor facts just because you want the article to support only the one view you prefer. If you are desperate about your 350,000 years argument, find a reliable source for it and add it to the article, but please do not remove something from it. --rtc (talk) 16:01, 18 July 2017 (UTC)
- it was probably a misunderstanding. actually i also interpreted the text first in the way as Samboy did, namely that it is slow in some absolute sense. probably the fact that you are pushing this speed agenda hard does not help. anyway, i changed the text so it is more precise, and cannot be interpreted incorrectly. Krisztián Pintér (talk) 22:15, 18 July 2017 (UTC)
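To make the XOR-versus-permutation point concrete for other readers, here is a toy cost count (a sketch only, not real Keccak: f is treated as a black box whose calls are merely counted, the state is the standard b = 1600 bits, and padding is ignored):

```python
# Toy illustration of the rate/capacity trade-off under discussion.
# With b = 1600 fixed, a smaller rate r (larger capacity c = b - r)
# means more applications of the expensive permutation f per message,
# since each f call absorbs only r message bits via a cheap XOR.

B = 1600  # Keccak-f[1600] state width in bits

def absorb_cost(message_bits: int, r: int) -> int:
    """Number of calls to f needed to absorb a message at rate r
    (padding ignored; this is just a ceiling division)."""
    return -(-message_bits // r)

# Rates of SHAKE128, SHA3-256 and SHA3-512 respectively.
for r in (1344, 1088, 576):
    c = B - r
    print(f"r={r:4}  c={c:4}  claimed security ~ {c // 2:3} bits  "
          f"f-calls for 1 MiB: {absorb_cost(8 * 2**20, r)}")
```

So for the same 1 MiB input, SHA3-512 (r = 576) needs more than twice the f calls of SHAKE128 (r = 1344), which is the speed-for-capacity trade both sides here agree on; the disagreement is only about how much capacity is worth paying for.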
Speed redux
Since changes to this section have resulted in heated discussion from one editor, here are the latest changes I have made to the speed section (in bold):
- The lower r is (and, conversely, the higher c = b – r = 1600 – r), the less efficient, but more theoretically secure, the hashing becomes, since fewer bits of the message can be XORed into the state (a quick operation) before each application of the computationally expensive f.
a) Theoretically secure, for two reasons:
The first reason is because, once we get to 256 bits of security (or, likewise, 128 bits of security against an imaginary quantum computer), the numbers just don’t make sense in the real world. Since that one editor didn’t like my own back-of-the-envelope calculation of making every atom in the solar system a brute-force cryptographic computer four times as fast as my Core i7-7600U, which would require over 350,000 years to brute-force 256 bits, let me quote Applied Cryptography to give an idea of how hard it would be to brute-force only 128 bits:
- Assume the typical algae cell is the size of a cube 10 microns on a side (this is probably a large estimate), then 10 ** 15 of them can fill a cubic meter. Pump them into the ocean and cover 200 square miles (518 square kilometers) of water to a meter deep (you figure out how to do it—I’m just the idea man), and you’d have 10 ** 23 (over a hundred billion gallons) of them floating in the ocean. (For comparison, the Exxon Valdez spilled 10 million gallons of oil.) If each of them can try a million keys per second, they will recover the key for a 128-bit algorithm in just over 100 years. (The resulting algae bloom is your problem.)
For 256 bits to be brute forced with a quantum computer, imagine this same algae bloom — but this time with each algae cell somehow being a quantum computer.
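For what it's worth, the quoted estimate checks out arithmetically; here is a quick sketch, with the cell count and per-cell key rate taken straight from the quote:

```python
# Sanity check of the Applied Cryptography algae estimate quoted above:
# 10**23 cells, each trying a million 128-bit keys per second.

cells = 10**23
keys_per_second_each = 10**6
total_rate = cells * keys_per_second_each  # 1e29 keys per second
seconds = 2**128 / total_rate
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1f} years")                # roughly 108: "just over 100 years"
```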
The other reason is that a cryptographic sponge would have to be indistinguishable from a random permutation to actually get more than 256 bits of security. Any cryptographic weakness in Keccak’s round function, no matter how small, would make it so having a huge capacity like 1024 bits won’t actually give us 512 bits of security (remember, with a sponge, the theoretical security level is the capacity in bits divided by two).
b) fewer bits
“bits” is a countable noun, not a mass noun, so it is grammatically correct to say “fewer bits”, not “less bits.”
Samboy (talk) 07:21, 24 July 2017 (UTC)
- I still think you get it wrong about what that statement says. It is not about increasing the security above 256 bits or 128 bits into a range which you deem only "theoretically" more secure. It is a general property of the algorithm, which applies to any r; r can take any value from 1 to 1600. Which of the resulting security levels are theoretically relevant and which are practically relevant is completely unrelated to what the statement says, and raising the question there only confuses the reader. Please remove the "theoretical" in this place and raise the matter somewhere else in the article, if it is so important to you. Preferably, it belongs in the "Capacity change controversy" section.
- About your other reason, I simply don't get how 256 bits are relevant here. A weakness in the permutation would decrease the security for all security levels, not merely 256 bits or 512 bits. And it would actually be an argument against the "theoretical", since if the algorithm is weaker than announced, then increasing the capacity (even above 512, which you deem the "theoretical" security range) may be a reasonable way to mitigate that. However, again, it is something completely unrelated to what that statement says. --rtc (talk) 08:07, 24 July 2017 (UTC)
- I haven't been following the discussion too closely, but I'm inclined to agree with rtc. In the real world, given enough time and effort, weaknesses will be found (which may be a cryptanalytic breakthrough, quantum computers or an advanced alien race attacking our communications ;) ). Looking at past breakages of ciphers and hash functions, a longer digest size (or key size) always gives you a larger security margin against attacks.
- So let's not make any judgments in the article whether above 256 bits is useless/theoretical or not, because that may yet change in the future. -- intgr [talk] 10:55, 24 July 2017 (UTC)
- the whole section is weird in my view. i would expect some data there, instead, i get some lengthy explanation about xor, author's response to criticism, which does not seem to exist, and a bare mention how shakes supposed to fix it despite the fact that shakes are the proposed main modes in the standard. Krisztián Pintér (talk) 07:23, 25 July 2017 (UTC)
- correction: i clearly remember reading that shakes are the main instances, but apparently it is either wrong or was dropped at some point. fips202 explicitly forbids shakes to be used as hash functions. on the other hand, NIST SP 800-185 exclusively uses shakes. Krisztián Pintér (talk) 10:06, 25 July 2017 (UTC)
- The section does contain some data. The explanation about XORs is crucial to understand how the sponge construction allows balancing speed versus security. And the criticism as well as the authors' reaction to the criticism should be mentioned. --rtc (talk) 17:48, 25 July 2017 (UTC)
Security against quantum attacks
It seems that the table on the security against quantum attacks is not correct for the SHAKEs. IMHO, the resistance should be:
- Collision (Brassard et al.): min(d/3, c/3)
- Collision (Bernstein), preimages: min(d/2, c/2)
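Taken at face value, the proposed formulas can be tabulated for the two SHAKEs; note this is a sketch of this thread's proposal, not of the published table (d is the chosen output length in bits; the capacity c is 256 for SHAKE128 and 512 for SHAKE256):

```python
# Sketch of the quantum-resistance formulas proposed in this thread
# (the IP editor's proposal, not figures taken from FIPS 202).
# d = output (digest) length in bits, c = capacity of the sponge.

def collision_brassard(d: int, c: int) -> float:
    """Collision resistance against the Brassard et al. quantum attack."""
    return min(d / 3, c / 3)

def collision_bernstein(d: int, c: int) -> float:
    """Collision resistance under Bernstein's cost analysis."""
    return min(d / 2, c / 2)

def preimage(d: int, c: int) -> float:
    """Preimage resistance against a quantum attacker."""
    return min(d / 2, c / 2)

# Capacities: SHAKE128 has c = 256, SHAKE256 has c = 512.
for name, c in (("SHAKE128", 256), ("SHAKE256", 512)):
    for d in (256, 512):
        print(f"{name} d={d}: Brassard {collision_brassard(d, c):.1f}, "
              f"Bernstein {collision_bernstein(d, c):.1f}, "
              f"preimage {preimage(d, c):.1f}")
```

One consequence of the min() terms: once d exceeds c, the resistance is capped by the capacity, so lengthening a SHAKE's output beyond c buys nothing under these formulas.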
62.147.254.143 (talk) 21:18, 30 July 2017 (UTC)
Just implemented it now. 62.147.254.143 (talk) 02:58, 4 August 2017 (UTC)