Provable security

From Wikipedia, the free encyclopedia

Provable security refers to any type or level of security that can be proven. It is used in different ways by different fields.

Usually, this refers to mathematical proofs, which are common in cryptography. In such a proof, the capabilities of the attacker are defined by an adversarial model (also referred to as attacker model): the aim of the proof is to show that the attacker must solve the underlying hard problem in order to break the security of the modelled system. Such a proof does not consider side-channel attacks or other implementation-specific attacks, because they are usually impossible to model without implementing the system (and thus, the proof only applies to this implementation).

Outside of cryptography, the term is often used in conjunction with secure coding and security by design, both of which can rely on proofs to show the security of a particular approach. As with the cryptographic setting, this involves an attacker model and a model of the system. For example, code can be verified to match the intended functionality, described by a model: this can be done through static checking. These techniques are sometimes used for evaluating products (see Common Criteria): the security here depends not only on the correctness of the attacker model, but also on the model of the code.

Finally, the term provable security is sometimes used by vendors of security software such as firewalls, antivirus software and intrusion detection systems. Because these products are typically not subject to scrutiny, many security researchers consider this type of claim to be selling snake oil.

In cryptography

In cryptography, a system has provable security if its security requirements can be stated formally in an adversarial model, as opposed to heuristically, with clearly stated assumptions about the adversary's access to the system and its computational resources. The proof of security (called a "reduction") shows that these security requirements are met provided the assumptions about the adversary's access to the system are satisfied and some clearly stated assumptions about the hardness of certain computational tasks hold. An early example of such requirements and proof was given by Goldwasser and Micali for semantic security and the construction based on the quadratic residuosity problem.
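
The notion can be illustrated with a simplified indistinguishability game of the kind used in such definitions. The following sketch is purely illustrative and is not drawn from the article: the scheme is a toy one-time-pad-style cipher, and the names (ind_experiment, keygen, encrypt) are hypothetical. The adversary chooses two messages, receives an encryption of one of them selected by a hidden bit, and wins if it guesses the bit; security means no efficient adversary wins with probability noticeably better than one half.

    import secrets

    def ind_experiment(keygen, encrypt, adversary_choose, adversary_guess):
        """Run one round of a simplified indistinguishability game."""
        key = keygen()                      # challenger generates a key
        m0, m1, state = adversary_choose()  # adversary picks two equal-length messages
        b = secrets.randbits(1)             # challenger's hidden bit
        challenge = encrypt(key, m1 if b else m0)
        return adversary_guess(challenge, state) == b   # True iff the adversary wins

    # Toy instantiation: XOR with a fresh random key (illustrative only, not a real proposal).
    def keygen():
        return secrets.token_bytes(16)

    def encrypt(key, m):
        return bytes(x ^ y for x, y in zip(key, m))

    def choose():
        return b"0" * 16, b"1" * 16, None

    def guess(ciphertext, _state):
        return secrets.randbits(1)          # blind guessing wins with probability 1/2

    wins = sum(ind_experiment(keygen, encrypt, choose, guess) for _ in range(1000))
    print(f"adversary won {wins}/1000 rounds (expected about 500)")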

There are several lines of research in provable security. One is to establish the 'correct' definition of security for a given, intuitively understood task. Another is to suggest constructions and proofs based on assumptions that are as general as possible, for instance the existence of a one-way function. A major open problem is to establish such proofs based on P ≠ NP, since the existence of one-way functions is not known to follow from the P ≠ NP conjecture.
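
For reference, a one-way function is commonly defined along the following lines (a standard textbook formulation, not specific to this article): f can be computed in polynomial time, but every probabilistic polynomial-time algorithm A inverts it only with negligible probability,

    \forall \text{ PPT } A: \quad \Pr_{x \leftarrow \{0,1\}^n}\Bigl[ f\bigl(A(1^n, f(x))\bigr) = f(x) \Bigr] \le \mathrm{negl}(n).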

Some security proofs are given in theoretical models such as the random oracle model, where real cryptographic hash functions are represented by an idealization. 'Exact security' or 'concrete security' is the name given to provable security reductions where one quantifies security by computing precise bounds on computational effort, rather than an asymptotic bound which is guaranteed to hold for 'sufficiently large' values of the security parameter.
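
A concrete-security statement typically takes the following shape (an illustrative template with generic names, not a bound taken from any particular paper): from any adversary A that attacks the scheme in time t using q queries, the reduction constructs an algorithm B against the underlying hard problem such that

    \mathbf{Adv}_{\mathrm{scheme}}(A) \;\le\; c \cdot \mathbf{Adv}_{\mathrm{problem}}(B) \;+\; \frac{q^2}{2^n}, \qquad \text{with } B \text{ running in time } t' \approx t + O(q),

where c and n are explicit parameters of the statement rather than hidden asymptotic constants.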

Controversies

Koblitz and Menezes have criticized aspects of provable security research in their papers Another Look at "Provable Security"[1] and Another Look at "Provable Security" II.[2] These views have been controversial in the community. A rebuttal, titled On Post-Modern Cryptography,[3] was posted by Oded Goldreich, who argues that the rigorous analysis methodology of provable security is the only one compatible with science.

In 2007, Koblitz published "The Uneasy Relationship Between Mathematics and Cryptography"[4] in the Notices of the American Mathematical Society. Several rebuttals have been written by Oded Goldreich, Avi Wigderson and other researchers in the field.[3][5] Ivan Damgård later wrote a position paper on the technical issues at ICALP 2007,[6] which Scott Aaronson recommended as a good in-depth analysis.[7]

Practice-oriented provable security

Classical provable security primarily aims at studying the relationship between asymptotically defined objects. By contrast, practice-oriented provable security is concerned with concrete objects of cryptographic practice, such as hash functions, block ciphers, and protocols, as they are deployed and used.[8]

Practice-oriented provable security uses concrete security to analyse practical constructions with fixed key sizes. It emphasizes the use of models such as the random oracle model as tools that are assessed with regard to their usefulness rather than as being either true or false, thereby largely avoiding the controversy around provable security.
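
A representative result in this style is the PRF/PRP switching lemma used in concrete-security analyses of block-cipher modes (stated here in its standard form, which does not appear in this article): an adversary that makes at most q queries distinguishes a random permutation \pi on n-bit blocks from a random function \rho with advantage at most

    \bigl| \Pr[A^{\pi} \Rightarrow 1] - \Pr[A^{\rho} \Rightarrow 1] \bigr| \;\le\; \frac{q(q-1)}{2^{n+1}}.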

References

  1. ^ "Cryptology ePrint Archive: Report 2004/152". iacr.org.
  2. ^ "Cryptology ePrint Archive: Report 2006/229". iacr.org.
  3. ^ a b "On Post-Modern Cryptography". weizmann.ac.il.
  4. ^ Koblitz, Neal (2007). "The Uneasy Relationship Between Mathematics and Cryptography" (PDF). Notices of the American Mathematical Society. http://www.ams.org/notices/200708/tx070800972p.pdf
  5. ^ "in theory". in-theory.blogspot.com.
  6. ^ Damgård, I. (2007). "A "proof-reading" of Some Issues in Cryptography". Automata, Languages and Programming, 34th International Colloquium, ICALP 2007, Wroclaw, Poland, July 9–13, 2007. Proceedings. LNCS 4596: 2–11. doi:10.1007/978-3-540-73420-8_2. ISBN 978-3-540-73419-2.
  7. ^ "Shtetl-Optimized". scottaaronson.com.
  8. ^ Rogaway, Phillip (May 6, 2009). "Practice-Oriented Provable Security and the Social Construction of Cryptography". Unpublished essay corresponding to an invited talk at EUROCRYPT 2009.