Securely wipe disk

From ArchWiki
Revision as of 13:07, 28 August 2013

Summary: Wipe all traces left from (un-)encrypted data and/or prepare for block device encryption.

Related articles: File Recovery, Benchmarking disk wipes, Frandom, Disk Encryption#Preparing the disk, dm-crypt with LUKS

Wiping a disk is done by writing new data over every single bit.

Note: References to "disks" in this article also apply to loopback devices.

Common use cases

Wipe all data left on the device

The most common use case for completely and irrevocably wiping a device is when it is going to be given away or sold. There may be (unencrypted) data left on the device, and you want to protect against simple forensic investigation, which is mere child's play with, for example, File Recovery software.

If you want to quickly wipe everything from the disk, /dev/zero or simple patterns allow maximum performance, while adequate randomness can be advantageous in some cases; these are covered in #Data remanence.

Overwriting every bit provides a level of data erasure that does not allow recovery with normal system functions (like standard ATA/SCSI commands) and hardware interfaces. Any file recovery software mentioned above would then need to be specialized in proprietary storage-hardware features.

In the case of an HDD, data recovery will not be possible without at least undocumented drive commands or tampering with the device's controller or firmware to make them read out, for example, reallocated sectors (bad blocks that S.M.A.R.T. retired from use).

Different physical storage technologies have different wiping issues, most notably all flash-memory-based devices and older magnetic storage (old HDDs, floppy disks, tape).

Preparations for block device encryption

If you want to prepare your drive to securely set up Disk Encryption#Block device encryption inside the wiped area afterwards, you really should use #Random data generated by a trusted cryptographically strong random number generator (referred to as RNG in this article from now on).

See also Wikipedia:Random number generation.

Warning: If block device encryption is mapped on a partition that contains anything other than random/encrypted data, disclosure of usage patterns on the encrypted drive is possible, weakening the encryption to a level comparable with filesystem-level encryption. Never use /dev/zero, simple patterns (e.g. badblocks) or other non-random data before setting up block device encryption if you are serious about it!

Select a data source for overwriting

As noted above, if you want to wipe sensitive data you can use any source matching your needs.

If you want to set up block device encryption afterwards, you should always wipe with at least pseudorandom data.

For data that is not truly random, your disk's writing speed should be the only limiting factor. If you need random data, the system performance required to generate it may depend heavily on what you choose as the source of entropy.
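To get a feel for that gap, both sources can be benchmarked against /dev/null before any disk is touched. This is a rough sketch: the 64 MiB sample size is an arbitrary choice, and the figures vary with kernel and CPU.

```shell
# Measure raw source throughput with no disk writes involved:
# 64 MiB (16384 blocks of 4096 bytes) from each source to /dev/null.
# /dev/zero should be far faster than /dev/urandom on most systems.
dd if=/dev/zero    of=/dev/null bs=4096 count=16384 2>&1 | tail -n 1
dd if=/dev/urandom of=/dev/null bs=4096 count=16384 2>&1 | tail -n 1
```

The last line of each dd run reports bytes copied, elapsed time and throughput, which you can compare directly.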


Unrandom data

Overwriting with /dev/zero or simple patterns is considered secure by most resources. In the case of current HDDs, it should be sufficient for fast disk wipes.

Warning: A drive that is abnormally fast at writing patterns or zeroing could be doing transparent compression. It is then likely that not all blocks actually get wiped. Some #Flash memory devices do "feature" this.

Pattern write test

#Badblocks can write simple patterns to every block of a device and then read and check them searching for damaged areas (just like memtest86* does with memory).

As the pattern is written to every accessible block, this effectively wipes the device.

Random data

For differences between random and pseudorandom data as source, please see Random Number Generation.

Note: Data that is hard to compress (random data) will be written more slowly if the drive logic mentioned in the #Unrandom data warning tries to compress it. This should not lead to #Data remanence, though. As the maximum write speed is not the performance bottleneck, it can be neglected entirely while wiping disks with random data.

Select a program

In the commands below, /dev/<drive> is the drive to be wiped.

Coreutils


Official documentation for dd and shred is linked to under #See also.

Dd

See also Wikipedia:Dd (Unix).

Note: Without any operands, cp does the same as dd, but it is not designed for more versatile disk wiping procedures.
Checking progress of dd while running

By default, there is no output from dd until the task has finished. With kill and the USR1 signal you can force status output without actually killing the program. Open up a second root terminal and issue the following command:

# killall -USR1 dd
Note: This will affect all other running dd processes as well.

Or:

# kill -USR1 <PID_OF_dd_COMMAND>

For example:

# kill -USR1 $(pidof dd)

This causes the terminal in which dd is running to output the progress at the time the command was run. For example:

605+0 records in
605+0 records out
634388480 bytes (634 MB) copied, 8.17097 s, 77.6 MB/s
Dd spin-offs

Other dd-like programs feature periodic status output, e.g. a simple progress bar.

dcfldd

dcfldd is an enhanced version of dd with features useful for forensics and security. It accepts most of dd's parameters and includes status output. The last stable version of dcfldd was released on December 19, 2006.[1]

ddrescue

GNU ddrescue is a data recovery tool. It is capable of ignoring read errors, a feature that is almost never useful for disk wiping. See the GNU ddrescue Manual.

shred

Shred is a Unix command that can be used to securely delete files and devices so that they can be recovered only with great difficulty with specialised hardware, if at all.[2] Shred uses three passes, writing pseudo-random data to the device during each pass. This can be reduced or increased.

The following command invokes shred with its default settings and displays the progress.

# shred -v /dev/<drive>

Alternatively, shred can be instructed to do only one pass with entropy from, e.g. /dev/urandom.

# shred --verbose --random-source=/dev/urandom -n1 /dev/<drive>

Badblocks

To let badblocks perform a disk wipe, a destructive read-write test has to be done.

# badblocks -c 10240 -wsv /dev/<drive>

Select a target

Note: Fdisk will not work on GPT formatted devices. Use gdisk (gptfdisk) instead.

Use fdisk to locate all read/write devices the user has read access to.

Check the output for lines that start with devices such as /dev/sdX.

This is an example of an HDD formatted to boot a Linux system:

# fdisk -l
Disk /dev/sda: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00ff784a

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      206847      102400   83  Linux
/dev/sda2          206848   488397167   244095160   83  Linux

Or the Arch Install Medium written to a 4GB USB thumb drive:

# fdisk -l
Disk /dev/sdb: 4075 MB, 4075290624 bytes, 7959552 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x526e236e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           0      802815      401408   17  Hidden HPFS/NTFS

Block size

See also Wikipedia:Dd (Unix)#Block size.

If you have an Advanced Format hard drive, it is recommended that you specify a block size larger than the default 512 bytes. To speed up the overwriting process, choose a block size matching your drive's physical geometry by appending the block size option to the dd command (i.e. bs=4096 for 4 KiB).

Fdisk prints physical and logical sector size for every disk.

Alternatively, sysfs exposes this information:

/sys/block/sdX/queue/physical_block_size
/sys/block/sdX/queue/logical_block_size
/sys/block/sdX/alignment_offset
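For example, a short loop over sysfs (a sketch; the device names and values depend on your hardware) prints these figures for every block device the kernel knows about:

```shell
# List physical/logical sector size and alignment offset
# for each block device present under /sys/block.
for dev in /sys/block/*; do
    [ -d "$dev" ] || continue
    printf '%s: physical=%s logical=%s offset=%s\n' "${dev##*/}" \
        "$(cat "$dev/queue/physical_block_size")" \
        "$(cat "$dev/queue/logical_block_size")" \
        "$(cat "$dev/alignment_offset")"
done
```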

Overwrite the disk

Warning: There is no confirmation regarding the sanity of this command so repeatedly check that the correct drive or partition has been targeted. Make certain that the of=... option points to the target drive and not to a system disk.

Zero-fill the disk by writing a zero byte to every addressable location on the disk using the /dev/zero stream. The iflag and oflag options shown below try to disable buffering, which is pointless for a constant stream.

# dd if=/dev/zero of=/dev/sdX iflag=nocache oflag=direct bs=4096

Or the /dev/urandom stream:

# dd if=/dev/urandom of=/dev/sdX bs=4096

The process is finished when dd reports No space left on device:

dd: writing to ‘/dev/sdb’: No space left on device
7959553+0 records in
7959552+0 records out
4075290624 bytes (4.1 GB) copied, 1247.7 s, 3.3 MB/s
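To check that a zero-fill actually reached every byte, the target can be compared against /dev/zero afterwards. The sketch below uses a small image file named disk.img as a hypothetical stand-in for a real device (per the loopback note at the top of the article); substitute your actual target with due care.

```shell
# Create a 1 MiB stand-in "disk" and zero-fill it.
truncate -s 1M disk.img
dd if=/dev/zero of=disk.img bs=4096 conv=notrunc status=none

# cmp -n limits the comparison to the image's exact size; exit status 0
# means every byte read back as zero, i.e. the wipe covered everything.
cmp -n "$(stat -c %s disk.img)" disk.img /dev/zero && echo "fully zeroed"
rm disk.img
```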

Data remanence

See also Wikipedia:Data remanence.

The residual representation of data may remain even after attempts have been made to remove or erase the data.

Residual data may be wiped by writing (random) data to the disk in one or more iterations. However, more than one iteration may not significantly decrease the possibility of reconstructing the data on hard disk drives. For more information see Secure deletion: a single overwrite will do it (The H Security).

Random data

If the data can be located exactly on the disk and was never copied anywhere else, wiping with random data can be thorough and impressively quick, as long as there is enough entropy in the pool.
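You can keep an eye on the kernel's entropy pool while doing so by comparing the available entropy against the total pool size via procfs:

```shell
# Available entropy (in bits) versus the total size of the pool.
cat /proc/sys/kernel/random/entropy_avail
cat /proc/sys/kernel/random/poolsize
```

If entropy_avail stays close to zero, blocking sources like /dev/random will stall.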

A good example is cryptsetup using /dev/urandom for wiping the LUKS keyslots.

Hardware specific issues

Flash memory

Wikipedia:Write amplification and other characteristics make flash memory a stubborn target for reliable wiping. Because of the layers of transparent abstraction between the data as seen by a device's controller chip and as seen by the operating system, data is never overwritten in place, and wiping particular blocks or files is not reliable.

Other "features" like transparent compression (all SandForce SSDs) can compress your /dev/zero or pattern stream, so if wiping finishes unbelievably fast, this may be the reason.

Disassembling flash memory devices, unsoldering the chips and analyzing the data content without the controller in between is feasible using simple hardware. Data recovery companies do it cheaply.

For more information see: Reliably Erasing Data From Flash-Based Solid State Drives.

Residual magnetism

Wiped hard disk drives and other magnetic storage can get disassembled in a cleanroom and then analyzed with equipment like a magnetic force microscope. This may allow the overwritten data to be reconstructed by analyzing the measured residual magnetics.

This method of data recovery for current HDDs is largely theoretical and would require substantial financial resources. Nevertheless, degaussing is still a practiced countermeasure.

Old magnetic storage

Securely wiping old magnetic storage (e.g. floppy disks, magnetic tape) is much harder due to its much lower memory storage density. Many iterations with random data might be needed to wipe any sensitive data. To ensure that data has been completely erased, most resources advise physical destruction.

Operating system, programs and filesystem

Note: Obviously, this is not specific to any particular hardware.

The operating system, executed programs or journaling file systems may copy your unencrypted data throughout the block device. When writing to plain disks this should only be relevant in conjunction with one of the above.

See also