Adding controlled noise from predetermined distributions is a way of designing differentially private mechanisms. This technique is useful for designing private mechanisms for real-valued functions on sensitive data. Some commonly used distributions for adding noise include Laplace and Gaussian distributions.

## Definitions

Let ${\mathcal {D}}$ be the collection of all datasets and $f:{\mathcal {D}}\to \mathbb {R}$ a real-valued function. The sensitivity of a function, denoted $\Delta f$, is defined by

$\Delta f=\max |f(x)-f(y)|,$ where the maximum is taken over all pairs of datasets $x$ and $y$ in ${\mathcal {D}}$ differing in at most one element. For higher-dimensional functions, the sensitivity is usually measured under the $\ell _{1}$ or $\ell _{2}$ norm.
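As a concrete illustration, consider a hypothetical counting query over a list of ages: adding or removing one record changes the count by at most one, so its sensitivity is 1. A minimal sketch (the query and data are illustrative):

```python
def count_over_30(dataset):
    """Counting query: how many records have age > 30 (sensitivity Δf = 1)."""
    return sum(1 for age in dataset if age > 30)

# Neighboring datasets differ in at most one record.
x = [25, 34, 41, 29]
y = [25, 34, 41]  # one record removed

# The outputs differ by at most the sensitivity, Δf = 1.
print(abs(count_over_30(x) - count_over_30(y)))  # 0 here, never more than 1
```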

Throughout this article, ${\mathcal {M}}$ denotes a randomized algorithm that releases a sensitive function $f$ under $\epsilon$- (or $(\epsilon ,\delta )$-) differential privacy.

## Mechanisms for Real-Valued Functions

### Laplace Mechanism

Introduced by Dwork et al., this mechanism adds noise drawn from a Laplace distribution:

${\mathcal {M}}_{\mathrm {Lap} }(x,f,\epsilon )=f(x)+\mathrm {Lap} \left(\mu =0,b={\frac {\Delta f}{\epsilon }}\right),$ where $\mu$ is the expectation of the Laplace distribution and $b$ is the scale parameter. Roughly speaking, small-scale noise suffices for a weak privacy constraint (corresponding to a large value of $\epsilon$), while a greater level of noise provides a greater degree of uncertainty about the original input (corresponding to a small value of $\epsilon$).
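The mechanism can be sketched in a few lines of Python; the function name and interface below are illustrative, not from any particular library:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release a real value with Laplace noise of scale b = Δf / ε."""
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon  # smaller ε → larger scale → noisier output
    return value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count (sensitivity 1) with ε = 0.5.
rng = np.random.default_rng(seed=0)
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5, rng=rng)
```

The noise is unbiased, so averaging many independent releases would recover the true value; this is why repeated queries consume additional privacy budget.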

To argue that the mechanism satisfies $\epsilon$-differential privacy, it suffices to show that, for every output $z$, the output distribution of ${\mathcal {M}}_{\mathrm {Lap} }(x,f,\epsilon )$ is close in a multiplicative sense to that of ${\mathcal {M}}_{\mathrm {Lap} }(y,f,\epsilon )$ for any pair of neighboring datasets $x$ and $y$.

${\begin{aligned}{\frac {\mathrm {Pr} ({\mathcal {M}}_{\mathrm {Lap} }(x,f,\epsilon )=z)}{\mathrm {Pr} ({\mathcal {M}}_{\mathrm {Lap} }(y,f,\epsilon )=z)}}&={\frac {\mathrm {Pr} (f(x)+\mathrm {Lap} (0,{\frac {\Delta f}{\epsilon }})=z)}{\mathrm {Pr} (f(y)+\mathrm {Lap} (0,{\frac {\Delta f}{\epsilon }})=z)}}\\&={\frac {\mathrm {Pr} (\mathrm {Lap} (0,{\frac {\Delta f}{\epsilon }})=z-f(x))}{\mathrm {Pr} (\mathrm {Lap} (0,{\frac {\Delta f}{\epsilon }})=z-f(y))}}\\&={\frac {{\frac {1}{2b}}\exp \left(-{\frac {|z-f(x)|}{b}}\right)}{{\frac {1}{2b}}\exp \left(-{\frac {|z-f(y)|}{b}}\right)}}\\&=\exp \left({\frac {|z-f(y)|-|z-f(x)|}{b}}\right)\\&\leq \exp \left({\frac {|f(y)-f(x)|}{b}}\right)\\&\leq \exp \left({\frac {\Delta f}{b}}\right)=\exp(\epsilon ).\end{aligned}}$ The first inequality follows from the triangle inequality and the second from the sensitivity bound. A similar argument gives a lower bound of $\exp(-\epsilon )$.
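This multiplicative closeness can also be checked numerically: the ratio of the two Laplace densities never exceeds $\exp(\epsilon )$ as long as $|f(x)-f(y)|\leq \Delta f$. A small sketch (the specific values of $f(x)$ and $f(y)$ are illustrative):

```python
import numpy as np

def laplace_density(z, loc, scale):
    """Density of Lap(loc, scale) at point z."""
    return np.exp(-abs(z - loc) / scale) / (2 * scale)

epsilon, sensitivity = 0.5, 1.0
b = sensitivity / epsilon        # b = Δf / ε
f_x, f_y = 3.0, 3.7              # |f(x) - f(y)| ≤ Δf for neighboring datasets

# The density ratio stays within exp(±ε) at every output z.
for z in np.linspace(-10.0, 10.0, 201):
    ratio = laplace_density(z, f_x, b) / laplace_density(z, f_y, b)
    assert np.exp(-epsilon) - 1e-12 <= ratio <= np.exp(epsilon) + 1e-12
```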

A discrete variant of the Laplace mechanism, called the geometric mechanism, is universally utility-maximizing. This means that for any prior (such as auxiliary information or beliefs about data distributions) and any symmetric and monotone univariate loss function, the expected loss of any differentially private mechanism can be matched or improved by running the geometric mechanism followed by a data-independent post-processing transformation. The result also holds for minimax (risk-averse) consumers. No such universal mechanism exists for multivariate loss functions.

### Gaussian Mechanism

Analogous to the Laplace mechanism, the Gaussian mechanism adds noise drawn from a Gaussian distribution whose variance is calibrated according to the sensitivity and privacy parameters.

${\mathcal {M}}_{\text{Gauss}}(x,f,\epsilon ,\delta )=f(x)+{\mathcal {N}}\left(\mu =0,\sigma ^{2}={\frac {2\ln(1.25/\delta )\cdot (\Delta f)^{2}}{\epsilon ^{2}}}\right).$ Note that, unlike the Laplace mechanism, ${\mathcal {M}}_{\text{Gauss}}$ only satisfies $(\epsilon ,\delta )$-differential privacy (with this calibration of $\sigma$, for $\epsilon \in (0,1)$). To prove this, it suffices to show that, with probability at least $1-\delta$, the distribution of ${\mathcal {M}}_{\text{Gauss}}(x,f,\epsilon ,\delta )$ is close to that of ${\mathcal {M}}_{\text{Gauss}}(y,f,\epsilon ,\delta )$. The proof is a little more involved (see Appendix A in Dwork and Roth).
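A sketch of the Gaussian mechanism, assuming the standard calibration above; as before, the function name and interface are illustrative:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release a real value with Gaussian noise calibrated for (ε, δ)-DP.

    Uses σ = sqrt(2 ln(1.25/δ)) · Δf / ε, the calibration valid for ε in (0, 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return value + rng.normal(loc=0.0, scale=sigma)

# Example: ε = 0.5, δ = 1e-5 for a sensitivity-1 query.
rng = np.random.default_rng(seed=0)
noisy = gaussian_mechanism(42.0, sensitivity=1.0, epsilon=0.5, delta=1e-5, rng=rng)
```

Compared with the Laplace mechanism at the same $\epsilon$, the Gaussian mechanism's lighter tails come at the cost of the additive $\delta$ slack in the privacy guarantee.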

## Mechanisms for High Dimensional Functions

For high dimensional functions of the form $f:{\mathcal {D}}\to \mathbb {R} ^{d}$, where $d\geq 2$, the sensitivity of $f$ is measured under the $\ell _{1}$ or $\ell _{2}$ norm. The analogous Gaussian mechanism that satisfies $(\epsilon ,\delta )$-differential privacy for such a function is

${\mathcal {M}}_{\text{Gauss}}(x,f,\epsilon ,\delta )=f(x)+{\mathcal {N}}^{d}\left(\mu =0,\sigma ^{2}={\frac {2\ln(1.25/\delta )\cdot (\Delta _{2}f)^{2}}{\epsilon ^{2}}}\right),$ where $\Delta _{2}f$ denotes the sensitivity of $f$ under the $\ell _{2}$ norm and ${\mathcal {N}}^{d}(0,\sigma ^{2})$ denotes a $d$-dimensional vector in which each coordinate is noise sampled according to ${\mathcal {N}}(0,\sigma ^{2})$, independently of the other coordinates (see Dwork and Roth for a proof).
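The vector-valued case extends the scalar sketch by adding i.i.d. noise per coordinate, with $\sigma$ calibrated to the $\ell _{2}$ sensitivity; names and example values are illustrative:

```python
import numpy as np

def gaussian_mechanism_vec(values, l2_sensitivity, epsilon, delta, rng=None):
    """Release a d-dimensional vector with i.i.d. Gaussian noise per coordinate,
    σ calibrated to the ℓ2 sensitivity (valid for ε in (0, 1))."""
    values = np.asarray(values, dtype=float)
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    return values + rng.normal(loc=0.0, scale=sigma, size=values.shape)

# Example: a 3-bin histogram-style query with ℓ2 sensitivity 1.
rng = np.random.default_rng(seed=0)
noisy_vec = gaussian_mechanism_vec([10.0, 4.0, 7.0], l2_sensitivity=1.0,
                                   epsilon=0.5, delta=1e-5, rng=rng)
```

Note that the same $\sigma$ is shared across all $d$ coordinates; only the $\ell _{2}$ sensitivity, not the dimension itself, enters the calibration.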