Frandom

Revision as of 05:17, 19 September 2011

Summary

frandom is a fast alternative to /dev/urandom. It can be used wherever fast random number generation is required, e.g. for randomizing large hard drives prior to encryption.

From the frandom page: "The frandom suite comes as a Linux kernel module for several kernels, or a kernel patch for 2.4.22. It implements a random number generator, which is 10-50 times faster than what you get from Linux' built-in /dev/urandom."

Does frandom generate good random numbers? Refer to the frandom page for this and other technical info.
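The speed claims above are easy to check locally with dd. A minimal benchmark sketch, shown against /dev/urandom (which exists on every system, even before the frandom module is loaded; substitute /dev/frandom to compare the two):

```shell
# Read 16 MiB from the random source and discard it;
# dd reports the transfer rate on stderr when it finishes.
dd if=/dev/urandom of=/dev/null bs=1M count=16
```

Run the same command once per source and compare the MB/s figures dd prints.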

Installation

Frandom is available as a package from the AUR.

The daemon is run in the usual way; once it has been started, the device is available at /dev/frandom:

# /etc/rc.d/frandom {start|stop|restart}

Or, if you prefer, it can be started at boot by adding it to the DAEMONS array in /etc/rc.conf:

DAEMONS=(... frandom ...)

Wiping a drive/partition

Use the following dd command. Take care: this will destroy all data on the specified device! A larger block size (e.g. bs=1M, as in the examples below) generally improves throughput.

# dd if=/dev/frandom of=/dev/sdx1
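A full-drive wipe takes hours and dd prints nothing until it finishes, but GNU dd reports its statistics when sent the USR1 signal. A sketch, demonstrated on a harmless /dev/zero copy so it is safe to try; for a real wipe substitute if=/dev/frandom and the target device:

```shell
# Run the copy in the background and ask dd for a progress report.
dd if=/dev/zero of=/dev/null bs=1M count=2048 &
pid=$!
sleep 1
# If dd is still running, it prints records in/out and bytes
# copied so far to stderr, then continues copying.
kill -USR1 "$pid" 2>/dev/null
wait "$pid"
```

For a long-running wipe, repeat the kill in a loop (e.g. once a minute) to get periodic progress reports.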

Example

1) On a 1.73 GHz ThinkPad T43 with 2 GB of RAM:

# time dd if=/dev/frandom of=/dev/sdb2
 dd: writing to `/dev/sdb2': No space left on device
 587384596+0 records in
 587384595+0 records out
 300740912640 bytes (301 GB) copied, 12844.6 s, 23.4 MB/s
 real    214m4.620s
 user    3m34.693s
 sys     77m28.660s

Summary: 300 GB in approx 3.5 hours


2) On a 2.4 GHz (T8300 Core 2 Duo) ThinkPad T61 with 2 GB of RAM:

# dd if=/dev/frandom of=/dev/sdb bs=1M
 dd: writing `/dev/sdb': No space left on device
 476941+0 records in
 476940+0 records out
 500107862016 bytes (500 GB) copied, 5954.52 s, 84.0 MB/s

Summary: 500 GB in approx 1.65 hours


3) On a 2.8 GHz (Athlon II X4) with 4 GB of RAM:

# dd if=/dev/frandom of=/dev/sdc3 bs=1M seek=100KB
 dd: writing `/dev/sdc3': No space left on device
 1807429+0 records in
 1807428+0 records out
 1895225712640 bytes (1.9 TB) copied, 20300.3 s, 93.4 MB/s

Summary: ~2 TB in ~5.64 hours. However, on the same machine:

# dd if=/dev/frandom of=/dev/null bs=1M count=1000
 1000+0 records in
 1000+0 records out
 1048576000 bytes (1.0 GB) copied, 7.81581 s, 134 MB/s

versus

# dd if=/dev/urandom of=/dev/null bs=1M count=1000
 1000+0 records in
 1000+0 records out
 1048576000 bytes (1.0 GB) copied, 144.296 s, 7.3 MB/s

This makes frandom 10-20 times faster on this machine, meaning it would take approx 50-120 hours (2-5 days!) to randomize 2 TB using urandom.
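That hour figure is back-of-the-envelope division of drive size by throughput. A quick arithmetic check using shell arithmetic, with the byte count and urandom rate taken from the measurements above (the exact result varies with the real drive size and sustained rate, hence the wide 50-120 hour range):

```shell
# Estimated hours to fill ~2 TB at the measured urandom rate of ~7.3 MB/s.
bytes=2000000000000   # ~2 TB
rate=7300000          # ~7.3 MB/s, in bytes per second
echo "$(( bytes / rate / 3600 )) hours"   # prints "76 hours"
```

76 hours sits inside the quoted 50-120 hour window; the same division against the measured frandom rate (93.4 MB/s) reproduces the ~5.6 hour figure from example 3.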