# Randomness tests

Randomness tests (or tests for randomness), in data evaluation, are used to analyze the distribution pattern of a set of data. In stochastic modeling, as in some computer simulations, a formal test for randomness can verify that the expected random input data were in fact randomized before the simulation runs were performed. In some cases, data reveal an obvious non-random pattern, as with so-called "runs in the data" (such as expecting random digits 0–9 but finding "4 3 2 1 0 4 3 2 1..." and rarely going above 4). If a selected set of data fails the tests, then the parameters can be changed, or other randomized data can be substituted which does pass the tests for randomness.
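As an illustration (not part of the source), one simple check for "runs in the data" is the Wald–Wolfowitz runs test: count the runs of identical symbols in a binary sequence and compare the count with what randomness would predict. A minimal sketch in Python, with function names of my own choosing:

```python
import math

def runs_test(bits):
    """Wald-Wolfowitz runs test for a sequence of 0s and 1s.

    Returns (observed_runs, expected_runs, z_score). A |z| much
    larger than about 2 suggests the sequence is not random.
    """
    n = len(bits)
    n1 = sum(bits)          # number of ones
    n0 = n - n1             # number of zeros
    # A new run starts wherever two adjacent symbols differ.
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    expected = 2.0 * n1 * n0 / n + 1
    variance = 2.0 * n1 * n0 * (2 * n1 * n0 - n) / (n * n * (n - 1))
    z = (runs - expected) / math.sqrt(variance)
    return runs, expected, z

# A perfectly alternating sequence has the maximum possible number
# of runs, so the test flags it as suspiciously regular.
alternating = [0, 1] * 32
runs, expected, z = runs_test(alternating)
```

Note that a large positive z-score indicates too many runs (over-alternation), while a large negative one indicates too few (clumping); both are departures from randomness.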


## Background

The issue of randomness is an important philosophical and theoretical question. Many random number generators in use today produce what are called "random sequences", but these are actually the result of prescribed algorithms, and so they are called pseudo-random number generators. These generators do not always produce sequences that are sufficiently random; some generate very repetitive patterns, such as the infamous RANDU, which fails many randomness tests dramatically, including the spectral test. The use of an ill-conceived random number generator will result in invalid experiments, due to the lack of randomness.
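RANDU's defect can even be seen without running a full statistical test. Its recurrence is x_{n+1} = 65539·x_n mod 2^31, and because the multiplier is 2^16 + 3, every three consecutive outputs satisfy a short linear relation, which is why consecutive triples fall on a small number of planes in the spectral test. A brief sketch:

```python
def randu(seed, count):
    """Generate `count` values from the classic RANDU generator:
    x_{n+1} = 65539 * x_n mod 2^31 (seed should be odd)."""
    x = seed
    out = []
    for _ in range(count):
        x = (65539 * x) % 2**31
        out.append(x)
    return out

# Because 65539 = 2**16 + 3, we have
# 65539**2 = 2**32 + 6*2**16 + 9 ≡ 6*65539 - 9 (mod 2**31),
# so every triple of outputs obeys
# x[k+2] = 6*x[k+1] - 9*x[k] (mod 2**31) -- a severe defect.
xs = randu(1, 100)
```

This exact linear dependence among all consecutive triples is what the spectral test detects geometrically.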

Tests for randomness are not restricted to analysing the output of pseudo-random number generators; they can also be used to determine whether a set of data has a recognisable pattern to it. For example, Wolfram used randomness tests on the output of Rule 30 to examine its potential for generating random numbers,[2] though it was shown to have an effective key size far smaller than its actual size[3] and to perform poorly on a chi-squared test.[4]
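The bit stream Wolfram examined is the centre column of Rule 30 grown from a single black cell. A minimal sketch of how such a stream is produced (the helper name is my own):

```python
def rule30_center_column(steps):
    """Run Rule 30 from a single 1 cell and collect the centre cell
    of each row. Rule 30: new_cell = left XOR (centre OR right)."""
    # Pad the row so the growing pattern never reaches the edges.
    width = 2 * steps + 3
    row = [0] * width
    row[width // 2] = 1
    bits = []
    for _ in range(steps):
        bits.append(row[width // 2])
        row = [row[i - 1] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]
    return bits

bits = rule30_center_column(64)
```

A stream produced this way can then be fed to any of the statistical tests discussed in this article, which is how its weaknesses were found.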

## Specific tests for randomness

There are many practical measures of randomness for a binary sequence. These include measures based on statistical tests, transforms, and complexity or a mixture of these. The use of Hadamard transform to measure randomness was proposed by S. Kak and developed further by Phillips, Yuen, Hopkins, Beth and Dai, Mund, and Marsaglia and Zaman.[1]

Several of these tests, which are of linear complexity, provide spectral measures of randomness. T. Beth and Z-D. Dai[5] purported to show that Kolmogorov complexity and linear complexity are practically the same. However, their proofs are incorrect, as shown by Y. Wang.[6] On the other hand, Y. Wang showed that for Martin-Löf random sequences, the Kolmogorov complexity is essentially the same as the linear complexity.
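The linear complexity of a finite bit string — the length of the shortest linear-feedback shift register (LFSR) that generates it — can be computed with the Berlekamp–Massey algorithm. A self-contained sketch over GF(2), written for illustration rather than taken from the cited papers:

```python
def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): returns the length of the
    shortest LFSR that reproduces the given 0/1 sequence."""
    n = len(bits)
    c = [0] * (n + 1); c[0] = 1   # current connection polynomial
    b = [0] * (n + 1); b[0] = 1   # previous connection polynomial
    L, m = 0, -1
    for i in range(n):
        # Discrepancy between the current LFSR's prediction and bits[i].
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            # c(x) <- c(x) + x^(i-m) * b(x)  over GF(2)
            for j in range(n + 1 - (i - m)):
                c[j + i - m] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

For example, a perfectly alternating string satisfies s_n = s_{n-2}, so its linear complexity is only 2 no matter how long it is — a direct, quantitative sign of its non-randomness.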

These practical tests make it possible to compare and contrast the randomness of strings. On probabilistic grounds, all strings of a given length, say 64 bits, are equally likely to occur. However, consider the two following strings:

```
string 1: 0101010101010101010101010101010101010101010101010101010101010101
string 2: 1100100001100001110111101110110011111010010000100101011110010110
```

They nevertheless have different Kolmogorov complexity. The first string admits a short linguistic description, namely "32 repetitions of '01'", which consists of 22 characters, and it can be efficiently constructed out of a few basis sequences. The second one has no obvious simple description other than writing down the string itself, which takes 64 characters, and it has no comparably efficient basis-function representation. Using linear Hadamard spectral tests (see Hadamard transform), the first of these sequences will be found to be of much less randomness than the second one, which agrees with intuition.
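The spectral comparison can be sketched with a fast Walsh–Hadamard transform: map the bits to ±1 and transform. A highly regular string concentrates all its energy in very few coefficients, while a random-looking string spreads it across many. A minimal illustration (my own code, not Kak's original test):

```python
def fwht(vec):
    """Fast Walsh-Hadamard transform (length must be a power of 2)."""
    a = list(vec)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def walsh_spectrum(bitstring):
    # Map '0' -> +1 and '1' -> -1, then transform.
    return fwht([1 - 2 * int(b) for b in bitstring])

s1 = "01" * 32
s2 = "1100100001100001110111101110110011111010010000100101011110010110"
spec1 = walsh_spectrum(s1)
spec2 = walsh_spectrum(s2)
# s1 is itself a Walsh function, so all 64 units of its energy land
# in a single coefficient; s2's energy is spread over many smaller
# coefficients, as expected of a random-looking string.
```

The concentration of energy in one coefficient for string 1, versus the flat spectrum of string 2, is the spectral counterpart of the Kolmogorov-complexity difference described above.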