# UMAC

In cryptography, a message authentication code based on universal hashing, or UMAC, is a type of message authentication code (MAC) calculated by choosing a hash function from a class of hash functions according to some secret (random) process and applying it to the message. The resulting digest or fingerprint is then encrypted to hide the identity of the hash function used. As with any MAC, it may be used to simultaneously verify both the data integrity and the authenticity of a message.

A specific type of UMAC, also commonly referred to simply as UMAC, is specified in RFC 4418. It has provable cryptographic strength and is usually far less computationally intensive than other MACs. UMAC's design is optimized for 32-bit architectures with SIMD support, achieving a performance of 1 CPU cycle per byte (cpb) with SIMD and 2 cpb without it. A closely related variant of UMAC optimized for 64-bit architectures is VMAC, which was submitted to the IETF as a draft (draft-krovetz-vmac-01) but never gathered enough support to become a standardized RFC.

## Background

### Universal hashing

Let's say the hash function is chosen from a class of hash functions H, which maps messages into D, the set of possible message digests. This class is called universal if, for any distinct pair of messages, there are at most |H|/|D| functions that map them to the same member of D.

This means that if an attacker wants to replace one message with another and, from his point of view, the hash function was chosen completely at random, the probability that the UMAC will not detect his modification is at most 1/|D|.

But this definition is not strong enough: if the possible messages are 0 and 1, D = {0,1} and H consists of the identity function and the NOT operation, then H is universal. But even if the digest is encrypted by modular addition, the attacker can change the message and the digest at the same time and the receiver would not notice the difference.
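This failure can be checked exhaustively. The sketch below (illustrative only, not from any standard) models the two-function family on 1-bit messages and shows that an attacker who flips both the message and the encrypted digest is accepted by the receiver, whichever hash function and pad were chosen:

```c
#include <stdint.h>

/* The universal-but-weak family H = { identity, NOT } on 1-bit messages. */
static uint8_t apply(int h, uint8_t m) { return h ? (m ^ 1) : m; }

/* Returns 1 if the forgery (flip message bit, flip digest bit) is accepted. */
static int forgery_accepted(int h, uint8_t pad, uint8_t m)
{
    uint8_t tag = apply(h, m) ^ pad;     /* sender: hash, then encrypt     */
    uint8_t forged_m = m ^ 1;            /* attacker flips the message     */
    uint8_t forged_tag = tag ^ 1;        /* ...and the encrypted digest    */
    return (apply(h, forged_m) ^ pad) == forged_tag;  /* receiver's check */
}

/* Returns 1 if the forgery succeeds for every hash, pad, and message. */
int forgery_always_works(void)
{
    for (int h = 0; h <= 1; h++)
        for (uint8_t pad = 0; pad <= 1; pad++)
            for (uint8_t m = 0; m <= 1; m++)
                if (!forgery_accepted(h, pad, m))
                    return 0;
    return 1;
}
```

The forgery succeeds in all eight cases, which is exactly why plain universality is insufficient.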

### Strongly universal hashing

A class of hash functions H that is good to use will make it difficult for an attacker to guess the correct digest d of a fake message f after intercepting one message a with digest c. In other words,

${\displaystyle \Pr _{h\in H}[h(f)=d|h(a)=c]\,}$

needs to be very small, preferably 1/|D|.

It is easy to construct a class of hash functions when D is a field. For example, if |D| is prime, all the operations are taken modulo |D|. The message a is then encoded as an n-dimensional vector over D (a1, a2, ..., an). H then has |D|^(n+1) members, each corresponding to an (n + 1)-dimensional vector over D (h0, h1, ..., hn). If we let

${\displaystyle h(a)=h_{0}+\sum _{i=1}^{n}h_{i}a_{i}\,}$

we can use the rules of probabilities and combinatorics to prove that

${\displaystyle \Pr _{h\in H}[h(f)=d|h(a)=c]={1 \over |D|}}$

If we properly encrypt all the digests (e.g. with a one-time pad), an attacker cannot learn anything from them and the same hash function can be used for all communication between the two parties. This may not be true for ECB encryption because it may be quite likely that two messages produce the same hash value. Then some kind of initialization vector should be used, which is often called the nonce. It has become common practice to set h0 = f(nonce), where f is also secret.
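As a small worked example (the numbers here are illustrative, not part of any standard), take |D| = 251 (a prime), a two-word message (a1, a2), and a key vector (h0, h1, h2). The hash is the affine polynomial evaluation described above:

```c
#include <stdint.h>

#define P 251  /* |D|: a small prime, so D is a field */

/* h(a) = h0 + h1*a1 + h2*a2 (mod P): one member of the strongly
 * universal family, selected by the key vector (h0, h1, h2). */
uint32_t poly_hash(uint32_t h0, uint32_t h1, uint32_t h2,
                   uint32_t a1, uint32_t a2)
{
    return (h0 + h1 * a1 + h2 * a2) % P;
}
```

For instance, with key (7, 3, 5) and message (2, 4) the digest is 7 + 3*2 + 5*4 = 33. Changing the message to (3, 4) moves the digest to 36, but without knowing (h1, h2) an attacker cannot predict by how much the digest moves.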

Notice that having massive amounts of computing power does not help the attacker at all. If the recipient limits the number of forgeries it accepts (by sleeping whenever it detects one), |D| can be 2^32 or smaller.

### Example

The following C function generates a 24-bit UMAC. It assumes that secret is a multiple of 24 bits, that msg is not longer than secret, and that result already contains 24 secret bits, e.g. f(nonce). The nonce does not need to be contained in msg.

C language code (original)

```c
#include <stdint.h>
#include <stddef.h>

#define uchar uint8_t

/* Hashes msg as a polynomial over GF(2^24) keyed by secret, and XORs the
 * 24-bit result into result (which already holds the one-time pad bits). */
void UHash24 (uchar *msg, uchar *secret, size_t len, uchar *result)
{
    uchar r1 = 0, r2 = 0, r3 = 0, s1, s2, s3, byteCnt = 0, byte;

    while (len-- > 0) {
        /* Fetch a new secret for every three message bytes. */
        if (byteCnt-- == 0) {
            s1 = *secret++;
            s2 = *secret++;
            s3 = *secret++;
            byteCnt = 2;
        }
        byte = *msg++;
        /* Each bit of the message selects whether the current secret
         * state is XORed into the hash. */
        for (uchar bitCnt = 0; bitCnt < 8; bitCnt++) {
            /* The lowest bit controls whether the secret is used. */
            if (byte & 1) {
                r1 ^= s1; /* low byte of the 24-bit state    */
                r2 ^= s2; /* middle byte                     */
                r3 ^= s3; /* high byte                       */
            }
            byte >>= 1; /* next bit */
            /* Multiply the secret by x (i.e. 2) in GF(2^24), reducing by
             * the field polynomial whenever the degree would reach 24. */
            uchar doSub = s3 & 0x80;
            s3 <<= 1;
            if (s2 & 0x80) s3 |= 1;
            s2 <<= 1;
            if (s1 & 0x80) s2 |= 1;
            s1 <<= 1;
            if (doSub) {
                s1 ^= 0x1B; /* reduction by x^24 + x^4 + x^3 + x + 1 */
            }
        } /* for each bit in the message byte */
    } /* for each byte in the message */
    *result++ ^= r1;
    *result++ ^= r2;
    *result++ ^= r3;
}
```

C language code (revised)

```c
#include <stdint.h>
#include <stddef.h>

#define uchar uint8_t

/* The same hash, but processing three bytes at a time in 32-bit
 * registers, which groups the work up and generates better assembly. */
void UHash24Ex (uchar *msg, uchar *secret, size_t len, uchar *result)
{
    uint32_t sec, ret = 0, content;
    size_t read;

    while (len > 0) {
        /* Read up to three message bytes into one word. */
        read = (len >= 3 ? 3 : len);
        content = 0;
        switch (read) {
        case 3: content |= (uint32_t) msg[2] << 16; /* FALLTHRU */
        case 2: content |= (uint32_t) msg[1] << 8;  /* FALLTHRU */
        case 1: content |= (uint32_t) msg[0];
        }
        msg += read;
        len -= read;

        /* Fetch a new secret for every three bytes. */
        sec = (uint32_t) secret[2] << 16 | (uint32_t) secret[1] << 8
            | (uint32_t) secret[0];
        secret += 3;

        /* The compression loop. Note the hard data dependency: each
         * output bit depends on the running secret state, so this does
         * not map onto CRC-style byte tables. */
        for (int bitCnt = 0; bitCnt < 24; bitCnt++) {
            if (content & 1) {
                ret ^= sec;
            }
            content >>= 1; /* next bit */
            /* Shift register: multiply by x and reduce in GF(2^24). */
            sec <<= 1;
            if (sec & 0x01000000)
                sec ^= 0x0100001B;
        } /* for each bit in the chunk */
    } /* for each 3 bytes of the message */
    result[0] ^= ret & 0xff;
    result[1] ^= (ret >>  8) & 0xff;
    result[2] ^= (ret >> 16) & 0xff;
}
```


## NH and the RFC UMAC

### NH

Functions in the strongly universal hash-function family above use n multiplications to compute a hash value.

The NH family halves the number of multiplications, which roughly translates to a two-fold speed-up in practice.[1] For speed, UMAC uses the NH hash-function family. NH is specifically designed to use SIMD instructions, and hence UMAC is the first MAC function optimized for SIMD.[2]

The following hash family is ${\displaystyle 2^{-w}}$-universal:[2]

${\displaystyle \operatorname {NH} _{K}(M)=\left(\sum _{i=0}^{(n/2)-1}((m_{2i}+k_{2i}){\bmod {~}}2^{w})\cdot ((m_{2i+1}+k_{2i+1}){\bmod {~}}2^{w})\right){\bmod {~}}2^{2w}}$.

where

• The message M is encoded as an n-dimensional vector of w-bit words (m0, m1, m2, ..., mn-1).
• The intermediate key K is encoded as an n-dimensional vector of w-bit words (k0, k1, k2, ..., kn−1). A pseudorandom generator generates K from a shared secret key.

Practically, NH is done in unsigned integer arithmetic: all additions are taken mod ${\displaystyle 2^{w}}$ (they simply wrap around), each product is taken in full ${\displaystyle 2w}$-bit width, and the products are accumulated mod ${\displaystyle 2^{2w}}$, with the inputs being vectors of w-bit words (${\displaystyle w=32}$ in UMAC). The algorithm then uses ${\displaystyle \lceil n/2\rceil }$ multiplications, where ${\displaystyle n}$ is the number of words in the vector. Thus, the algorithm runs at a "rate" of one multiplication per pair of input words.
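A scaled-down sketch of NH (using 16-bit words and a 32-bit accumulator purely for illustration; UMAC itself uses 32-bit words and a 64-bit accumulator): additions wrap mod 2^16, each product is taken in full 32-bit width, and the products are accumulated mod 2^32.

```c
#include <stdint.h>
#include <stddef.h>

/* NH over 16-bit words: n must be even, and m and k hold n words each. */
uint32_t nh16(const uint16_t *m, const uint16_t *k, size_t n)
{
    uint32_t y = 0;
    for (size_t i = 0; i < n; i += 2) {
        /* Additions wrap mod 2^16 (the casts make the wrap explicit). */
        uint16_t a = (uint16_t)(m[i]     + k[i]);
        uint16_t b = (uint16_t)(m[i + 1] + k[i + 1]);
        y += (uint32_t) a * b;  /* full-width product, accumulated mod 2^32 */
    }
    return y;
}
```

With m = (1, 2, 3, 4) and k = (5, 6, 7, 8) this computes (1+5)(2+6) + (3+7)(4+8) = 48 + 120 = 168: two multiplications for four words, which is the halving that gives NH its speed.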

### RFC 4418

RFC 4418 does a lot of work to wrap NH into a full MAC. The overall UHASH ("Universal Hash Function") routine produces tags of variable length, which corresponds to the number of iterations (and the total length of keys) needed across all three layers of its hashing. Several calls to an AES-based key derivation function are used to provide keys for all three keyed hashes.

• Layer 1 (1024-byte chunks -> 8-byte hashes, concatenated) uses NH because it is fast.
• Layer 2 hashes everything down to 16 bytes using a POLY function that performs prime-modulus arithmetic, with the prime growing as the input gets longer.
• Layer 3 hashes the 16-byte string down to a fixed 4 bytes. This is what one iteration generates.

In RFC 4418, NH is rearranged to take a form of:

```
Y = 0
for (i = 0; i < t; i += 8) do
    Y = Y +_64 ((M_{i+0} +_32 K_{i+0}) *_64 (M_{i+4} +_32 K_{i+4}))
    Y = Y +_64 ((M_{i+1} +_32 K_{i+1}) *_64 (M_{i+5} +_32 K_{i+5}))
    Y = Y +_64 ((M_{i+2} +_32 K_{i+2}) *_64 (M_{i+6} +_32 K_{i+6}))
    Y = Y +_64 ((M_{i+3} +_32 K_{i+3}) *_64 (M_{i+7} +_32 K_{i+7}))
end for
```
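In C, this loop maps directly onto 32-bit additions and a 64-bit accumulator. The following is a direct transcription of the RFC pseudocode (M and K are arrays of 32-bit words and t is a multiple of 8):

```c
#include <stdint.h>
#include <stddef.h>

/* NH as laid out in RFC 4418: operands four indices apart are paired,
 * so each group of four products vectorizes naturally. */
uint64_t nh_rfc(const uint32_t *M, const uint32_t *K, size_t t)
{
    uint64_t y = 0;
    for (size_t i = 0; i < t; i += 8) {
        for (size_t j = 0; j < 4; j++) {
            uint32_t a = M[i + j]     + K[i + j];     /* +_32 (wraps)   */
            uint32_t b = M[i + j + 4] + K[i + j + 4]; /* +_32 (wraps)   */
            y += (uint64_t) a * b;                    /* *_64, +_64     */
        }
    }
    return y;
}
```

For example, with M = (1, ..., 8) and K all ones, this computes 2*6 + 3*7 + 4*8 + 5*9 = 110, and the inner loop of four independent multiplications is what a compiler can turn into one vector operation.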


This definition is designed to encourage programmers to use SIMD instructions for the accumulation: operands four indices apart land in different SIMD registers, so the four multiplications in each round can be performed in bulk by a single vector instruction. On a hypothetical machine, it could simply translate to:

Hypothetical assembly
```
        movq        $0, regY         ; Y = 0
        movq        $0, regI         ; i = 0 (byte offset)
loop:
        add         reg1, regM, regI ; reg1 = M + i (key addition elided for brevity)
        add         reg2, reg1, $16  ; reg2 = M + i + 16, the operands four words away
        vldr.4x32   vec1, reg1       ; load 4x32-bit values from *reg1 into vec1
        vldr.4x32   vec2, reg2       ; load 4x32-bit values from *reg2 into vec2
        vmul.4x64   vec3, vec1, vec2 ; vec3 = vec1 * vec2 (four widening multiplies)
        uaddv.4x64  reg3, vec3       ; horizontally sum vec3 into reg3
        add         regY, regY, reg3 ; regY = regY + reg3
        add         regI, regI, $32  ; advance by eight 32-bit words
        blt         regI, regT, loop ; repeat while i < end of input
```