Information leakage happens whenever a system that is designed to be closed to an eavesdropper nevertheless reveals some information to unauthorized parties. For example, when designing an encrypted instant messaging network, a network engineer without the capacity to crack the encryption could still see when messages are transmitted, even without being able to read them. During the Second World War, the Japanese used secret codes such as PURPLE; even before such codes were cracked, some basic information could be extracted about the content of the messages by looking at which relay stations sent a message onward. As another example of information leakage, GPU drivers do not erase their memories, so data values in shared, local, and global memory persist after deallocation and can be retrieved by a malicious agent.
Designers of secure systems often forget to take information leakage into account. A classic example of this is when the French government designed a mechanism to aid encrypted communications over an analog line, such as at a phone booth. It was a device that clamped onto both ends of the phone, performed the encrypting operations, and sent the signals over the phone line. Unfortunately for the French, the rubber seal that attached the device to the phone was not airtight. It was later discovered that although the encryption itself was solid, one could hear the speaker by listening carefully, since the phone was picking up some of the speech. Information leakage can subtly or completely destroy the security of an otherwise secure system.
A modern example of information leakage is the leakage of secret information via data compression, by using variations in the data compression ratio to reveal correlations between known (or deliberately injected) plaintext and secret data combined in a single compressed stream. Another example is the key leakage that can occur in some public-key systems when the cryptographic nonce values used in signing operations are insufficiently random.
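The compression-ratio side channel described above can be illustrated with a minimal sketch (the secret string, the attacker's guesses, and the use of zlib are invented for the example; real attacks of this kind operate against compressed-then-encrypted protocol traffic):

```python
import zlib

# Hypothetical secret that gets compressed in the same stream as
# attacker-controlled input.
SECRET = b"password=hunter2"

def compressed_len(guess: bytes) -> int:
    # The attacker controls `guess`; the system compresses the guess
    # and the secret together and the attacker observes only the length.
    return len(zlib.compress(guess + SECRET))

# A guess that matches the secret compresses better (shorter output),
# because DEFLATE encodes the repeated substring as a back-reference.
right = compressed_len(b"password=hunter2")
wrong = compressed_len(b"password=letmein")
assert right < wrong
```

By submitting guesses and comparing output lengths, an eavesdropper who never sees the plaintext can confirm secret content byte by byte; this is the mechanism behind compression-oracle attacks on combined streams.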
Information leakage can sometimes be deliberate: for example, an algorithmic converter may be shipped that intentionally leaks small amounts of information, in order to provide its creator with the ability to intercept the users' messages, while still allowing the user to maintain an illusion that the system is secure. This sort of deliberate leakage is sometimes known as a subliminal channel.
Generally, only very advanced systems employ defenses against information leakage.
Commonly implemented countermeasures include:
- Use steganography to hide the fact that a message is transmitted at all.
- Use chaffing to make it unclear to whom messages are transmitted (but this does not hide from others the fact that messages are transmitted).
- For busy re-transmitting proxies, such as a Mixmaster node: randomly delay and shuffle the order of outbound packets. This helps disguise a given message's path, especially if there are multiple popular forwarding nodes, such as those employed with Mixmaster mail forwarding.
- When a data value is no longer going to be used, erase it from memory.
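The delay-and-shuffle countermeasure above can be sketched as follows (a minimal illustration only, not Mixmaster's actual algorithm; the batching function and its name are invented for the example):

```python
import random

def mix_batch(inbound_messages, rng=random):
    """Collect a batch of messages and emit them in a random order,
    so an observer cannot match outputs to inputs by arrival order."""
    batch = list(inbound_messages)
    rng.shuffle(batch)  # break the correlation between input and output order
    return batch

# Usage: three messages enter the node; the same three leave,
# but in an unpredictable order (and, in a real mix, after random delays).
out = mix_batch(["msg-A", "msg-B", "msg-C"])
assert sorted(out) == ["msg-A", "msg-B", "msg-C"]
```

The point of batching is that an eavesdropper watching both sides of the node sees only that some set of messages went in and the same set came out, not which output corresponds to which input.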
- Mittal, S.; et al. (2018). "A Survey of Techniques for Improving Security of GPUs". Journal of Hardware and Systems Security.
- Kelsey, J. (2002). "Compression and Information Leakage of Plaintext". Fast Software Encryption. Lecture Notes in Computer Science. 2365. p. 263. doi:10.1007/3-540-45661-9_21. ISBN 978-3-540-44009-3.
- Ron Rivest (October 3, 2002). "6.857 Computer and Network Security Lecture Notes 9 : DSA/DSS, RSA, chosen-ciphertext attack" (PDF). MIT. Retrieved 2012-09-14.