Random number generation
Revision as of 23:01, 28 August 2013

From wikipedia:Random number generation:

A random number generator (RNG) is a computational or physical device designed to generate a sequence of numbers or symbols that lack any pattern, i.e. appear random.

Generation of random data is crucial for several applications, such as making cryptographic keys (e.g. for Disk Encryption), securely wiping disks, and running encrypted Software Access Points.

Kernel built-in RNG

The Linux kernel's built-in RNG, exposed through the device files /dev/random and /dev/urandom, is widely trusted to produce reliable random data at the security level required for the creation of cryptographic keys. The random number generator gathers environmental noise from device drivers and other sources into an entropy pool.

Note that running man random displays the C library function manpage random(3); for information about the /dev/random and /dev/urandom device files, run man 4 random to read random(4).


/dev/random uses an entropy pool of 4096 bits (512 bytes) to generate random data, and blocks when the pool is exhausted until it is (slowly) refilled. /dev/random is designed for generating cryptographic keys (e.g. SSL, SSH, dm-crypt's LUKS), but it is impractical for wiping current HDD capacities: what makes disk wiping take so long is waiting for the system to gather enough true entropy. In an entropy-starved situation (e.g. a remote server) this might never end. While operations such as searching large directories or moving the mouse in X can slowly refill the entropy pool, the pool size alone is indication enough of its inadequacy for wiping a disk.

You can always compare /proc/sys/kernel/random/entropy_avail against /proc/sys/kernel/random/poolsize to keep an eye on the system's entropy pool.
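Both files can be read like any other; a minimal sketch (the exact numbers will vary per system and kernel version):

```shell
# Current entropy estimate and total pool size, both in bits
avail=$(cat /proc/sys/kernel/random/entropy_avail)
pool=$(cat /proc/sys/kernel/random/poolsize)
echo "entropy pool: ${avail}/${pool} bits"
```

To watch the pool refill in real time, the first read can be wrapped in watch -n 1 cat /proc/sys/kernel/random/entropy_avail.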

While Linux kernel 2.4 had writable /proc entries for controlling the entropy pool size, in newer kernels only read_wakeup_threshold and write_wakeup_threshold are writable. The pool size is now hardcoded in the kernel source, at line 275 of drivers/char/random.c:

/*
 * Configuration information
 */
#define INPUT_POOL_WORDS 128

The kernel's input pool holds INPUT_POOL_WORDS words of 32 bits each, which makes, as already stated, 128 × 32 = 4096 bits.
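The arithmetic can be verified in a shell and compared against what the running kernel reports (newer kernels may use a different pool layout, so the two values are not guaranteed to match):

```shell
# 128 pool words of 32 bits each = 4096 bits
echo $(( 128 * 32 ))
# What the running kernel actually reports (may differ on newer kernels)
cat /proc/sys/kernel/random/poolsize
```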

Warning: Do not use even /dev/random to generate critical cryptographic keys on a system you do not control. If in doubt, for example in shared server environments, create the keys on another system and transfer them instead.


In contrast to /dev/random, /dev/urandom reuses existing entropy pool data while the pool is replenished: the output contains less entropy than a corresponding read from /dev/random, but its quality is sufficient for a paranoid disk wipe, preparing for block device encryption, wiping LUKS keyslots, wiping single files and many other purposes. Nevertheless, filling today's large drives with data can still take a long time.
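A minimal sketch of such a wipe with dd. For a real disk the target would be the block device itself (e.g. /dev/sdX, which destroys all data on it), so the example below writes to a throwaway file instead:

```shell
# Scratch file standing in for a block device.
# For a real wipe, point target at the device, e.g. target=/dev/sdX.
target=/tmp/wipe-demo.img

# Overwrite the target with pseudorandom data from /dev/urandom.
# count=4 limits this demo to 4 MiB; for a full wipe, omit count
# so dd runs until the device is full.
dd if=/dev/urandom of="$target" bs=1M count=4 status=none

# Confirm how much was written (4 MiB = 4194304 bytes here)
stat -c %s "$target"
```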

Warning: /dev/urandom is not recommended for the generation of long-term cryptographic keys.

Faster alternatives

A more practical compromise between performance and security is the use of a pseudorandom number generator. In the Arch Linux repositories, for example:

* Haveged
* Frandom

There are also cryptographically secure pseudorandom number generators, such as Yarrow (used by FreeBSD and OS X) or Fortuna (the intended successor of Yarrow).

See also