Computationally bounded adversary

From Wikipedia, the free encyclopedia

In information theory, the computationally bounded adversary problem is a different way of looking at the problem of sending data over a noisy channel. In previous models the best that could be done was ensuring correct decoding for up to d/2 errors, where d is the minimum Hamming distance of the code. The problem with this approach is that it does not take into consideration the actual amount of computing power available to the adversary; it only concerns itself with how many bits of a given code word can change and still have the message decode properly. In the computationally bounded adversary model the channel – the adversary – is restricted to performing only a reasonable amount of computation to decide which bits of the code word to change. In other words, this model does not need to consider how many errors can possibly be handled, but only how many errors could possibly be introduced given a reasonable amount of computing power on the part of the adversary. Once the channel has been given this restriction it becomes possible to construct codes that are faster to encode and decode than previous methods while also handling a large number of errors.

Comparison to other models

Worst-case model

At first glance, the worst-case model seems intuitively ideal. The guarantee that an algorithm will succeed no matter what is, of course, highly alluring. However, it demands too much. A real-life adversary cannot spend an indefinite amount of time examining a message in order to find the one error pattern which an algorithm would struggle with.

As a comparison, consider the Quicksort algorithm. In the worst-case scenario, Quicksort makes O(n²) comparisons; however, such an occurrence is rare. Quicksort almost invariably makes O(n log n) comparisons instead, and even outperforms other algorithms which can guarantee O(n log n) behavior. Suppose an adversary wishes to force the Quicksort algorithm to make O(n²) comparisons. Then he would have to search all of the n! permutations of the input and test the algorithm on each until he found one for which the algorithm runs significantly slower. But since this would take O(n!) time, it is clearly infeasible for an adversary to do this. Similarly, it is unreasonable to assume an adversary for an encoding and decoding system would be able to test every single error pattern in order to find the most effective one.
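The rarity of the worst case can be checked directly for small inputs. The sketch below (a simple first-element-pivot Quicksort, not any particular library implementation) counts comparisons over every permutation of a small input and shows how few permutations hit the quadratic worst case:

```python
import itertools

def quicksort_comparisons(arr):
    """Count comparisons made by a deterministic first-pivot quicksort."""
    if len(arr) <= 1:
        return 0
    pivot, rest = arr[0], arr[1:]
    less = [x for x in rest if x < pivot]
    greater = [x for x in rest if x >= pivot]
    # len(rest) comparisons to partition, plus recursive work on each side
    return len(rest) + quicksort_comparisons(less) + quicksort_comparisons(greater)

n = 7
counts = [quicksort_comparisons(list(p)) for p in itertools.permutations(range(n))]
worst = max(counts)                       # n(n-1)/2 = 21 comparisons for n = 7
bad = sum(1 for c in counts if c == worst)
print(worst, bad, len(counts))            # 21 64 5040
```

Only 64 of the 5040 permutations (those where every pivot is a minimum or maximum of its subarray) reach the worst case, and finding one by exhaustive search already requires enumerating all n! inputs.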

Stochastic noise model

The stochastic noise model can be described as a kind of "dumb" noise model: it lacks the adaptability to deal with "intelligent" threats. Even a computationally bounded attacker might still be able to overcome the stochastic model with a bit of cleverness. The stochastic model has no real way to fight against this sort of attack, and as such is unsuited to dealing with the kind of "intelligent" threats it would be preferable to have defenses against.


Therefore, a computationally bounded adversarial model has been proposed as a compromise between the two.[1] This forces one to consider that messages may be perverted in conscious, even malicious ways, but without forcing an algorithm designer to worry about rare cases which likely will never occur.

Applications

Comparison to stochastic noise channel

Since any computationally bounded adversary could in O(n) time flip a coin for each bit, it is intuitively clear that any encoding and decoding system which can work against this adversary must also work in the stochastic noise model. The converse is less simple; however, it can be shown that any system which works in the stochastic noise model can also efficiently encode and decode against a computationally bounded adversary, at an additional cost which is only polynomial in n.[1] The following method to accomplish this was designed by Dick Lipton:[1]

An illustration of the method. The first row gives the initial encoded message; the second, after random permutation and random R; the third, after the adversary adds N; the fourth, after unpermuting; the fifth, the encoded message with the adversary's error removed.

Let E() be an encoder for the stochastic noise model and D() be a simple decoder for the same, each of which runs in polynomial time. Furthermore, let both the sender and receiver share some random permutation function π and a random pattern R.

For encoding:

1. Let X = E(M).
2. Let Y = π(X) ⊕ R.
3. Transmit Y.

Then for decoding:

1. Receive Y'. Compute Z = π⁻¹(Y' ⊕ R).
2. Calculate M = D(Z).

Similarly to the Quicksort comparison above, if the channel wants to do something smart, it must first test all the permutations. However, this is infeasible for a computationally bounded adversary, so the most it can do is introduce a random error pattern N. But then:

Z = π⁻¹(Y' ⊕ R) = π⁻¹(Y ⊕ N ⊕ R),

since Y' = Y ⊕ N by definition,

= π⁻¹(Y ⊕ R) ⊕ N', where N' = π⁻¹(N),

since any permutation is linear with respect to XOR,

= X ⊕ N',

as per the definition of Y above.

Since π is random, N' is just random noise, and we can use the simple decoder D to decode the received message and get back M.
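The whole reduction can be sketched end to end. The code below is a toy illustration, not the construction from the reference: it stands in a 3× repetition code with majority-vote decoding for the stochastic-model pair E()/D(), and represents π as a shared index permutation and R as a shared random bit pad:

```python
import random

def encode_stochastic(msg_bits):
    """Stand-in stochastic-model encoder E: 3x repetition code."""
    return [b for b in msg_bits for _ in range(3)]

def decode_stochastic(code_bits):
    """Stand-in stochastic-model decoder D: majority vote per 3-bit group."""
    return [int(sum(code_bits[i:i + 3]) >= 2) for i in range(0, len(code_bits), 3)]

def send(msg_bits, perm, R):
    X = encode_stochastic(msg_bits)                   # 1. X = E(M)
    Y = [X[perm[i]] ^ R[i] for i in range(len(X))]    # 2. Y = pi(X) xor R
    return Y                                          # 3. transmit Y

def receive(Y_prime, perm, R):
    Z = [0] * len(Y_prime)
    for i in range(len(Y_prime)):                     # 1. Z = pi^{-1}(Y' xor R)
        Z[perm[i]] = Y_prime[i] ^ R[i]
    return decode_stochastic(Z)                       # 2. M = D(Z)

rng = random.Random(0)
M = [1, 0, 1, 1]
n = 3 * len(M)
perm = list(range(n)); rng.shuffle(perm)              # shared random permutation pi
R = [rng.randint(0, 1) for _ in range(n)]             # shared random pattern R

Y = send(M, perm, R)
N = [0] * n; N[5] = 1                                 # adversary's error pattern N
Y_prime = [y ^ e for y, e in zip(Y, N)]               # Y' = Y xor N
print(receive(Y_prime, perm, R))                      # recovers [1, 0, 1, 1]
```

Because the adversary never sees π or R, whatever pattern N it chooses lands on the inner codeword as π⁻¹(N), which looks like random noise to D, exactly as in the derivation above.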

Specific applications

By assuming a computationally bounded adversary, it is possible to design a locally decodable code which is both efficient and near-optimal, with a negligible error probability. These codes are used in complexity theory for things like self-correcting computations, probabilistically checkable proof systems, and worst-case to average-case hardness reductions in the constructions of pseudo-random generators. They are useful in cryptography as a result of their connection with private information retrieval protocols, and they appear in a number of database applications like fault-tolerant data storage.[2]

Furthermore, it is possible to construct codes which surpass known bounds for worst-case codes: specifically, unique decoding at error rates up to 1 − R.[3] This can be done by concatenating timestamped digital signatures onto messages. A computationally bounded channel cannot forge a signature; and while it may have valid past signatures, the receiver can use list decoding and select a message only if its signature has the correct timestamp.
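The receiver's selection step can be sketched as follows. This is only an illustration of the idea, not the scheme from the reference: an HMAC over the message and timestamp stands in for a real digital signature, and `select_from_list` plays the role of the receiver choosing among list-decoding candidates:

```python
import hashlib
import hmac

KEY = b"stand-in key"  # hypothetical shared key; the real scheme uses digital signatures

def sign(message: bytes, timestamp: int) -> bytes:
    """Stand-in timestamped signature: HMAC over message and timestamp."""
    return hmac.new(KEY, message + timestamp.to_bytes(8, "big"), hashlib.sha256).digest()

def select_from_list(candidates, timestamp):
    """Receiver side: list decoding yields several candidate (message, tag)
    pairs; accept only one whose tag verifies for the expected timestamp."""
    for msg, tag in candidates:
        if hmac.compare_digest(tag, sign(msg, timestamp)):
            return msg
    return None  # no candidate carries a valid, fresh signature

t = 1700000000
good = (b"the real message", sign(b"the real message", t))
stale = (b"an old message", sign(b"an old message", t - 60))   # valid but outdated
junk = (b"garbage candidate", b"\x00" * 32)                    # no valid signature

print(select_from_list([junk, stale, good], t))  # b'the real message'
```

The stale candidate illustrates the role of the timestamp: its signature is genuine, but it verifies only for an earlier time, so a replayed old message is rejected along with outright forgeries.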

References

  1. Lipton (6 May 2009). "Worst Case Complexity". Gödel's Lost Letter and P=NP. Retrieved 2013-04-01.
  2. Ostrovsky, Pandey, Sahai. "Private Locally Decodable Codes". Retrieved 2013-04-01.
  3. Micali, Peikert, Sudan, Wilson. "Optimal Error Correction for Computationally Bounded Noise". Retrieved 2013-04-01.