Talk:Securely wipe disk

From ArchWiki
Revision as of 10:03, 1 October 2012 by Nonix (Talk | contribs) (Average block size: spaces needed)


Average block size

Based on my intuition I agree with Nonix's statement in the Accuracy template, which is located at the top of the Overwrite the disk section. A common block size of 4096 bytes is equivalent to 4 KiB. ~ Filam (talk) 03:29, 21 September 2012 (UTC)

Just fixed it!
(For the record only: The article suggested using block sizes of 4 MB!) --Nonix (talk) 10:55, 21 September 2012 (UTC)
By default, badblocks tests 64 KB at a time, in 1 KB blocks. Up to now, the article suggests testing 10 MB at a time in 1 KB blocks (-c 10240). Is something wrong with the general aim of the #Block size section, namely to align the block size to the physical geometry and write the disk block by block? Or is that aimed at non-HDD storage? Note what Wikipedia has to say on the matter. On the other hand, they only speak of "scan" speed, not write speed. Review is needed here. --Nonix (talk) 00:06, 27 September 2012 (UTC)
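For reference, the block sizes badblocks or dd should be aligned to can be read from sysfs before picking a value; sdX here is only a placeholder for the actual device, and 4096 is an assumed Advanced Format physical sector size:

```shell
# Query the kernel's reported sector sizes for the disk (sdX is a
# placeholder for your device name):
cat /sys/block/sdX/queue/physical_block_size
cat /sys/block/sdX/queue/logical_block_size

# Overwrite using a block size aligned to the physical sector size
# (4096 bytes for a typical Advanced Format drive):
dd if=/dev/urandom of=/dev/sdX bs=4096
```

Writing in multiples of the physical sector size avoids read-modify-write cycles inside the drive, which is the usual reason for aligning the block size to the geometry.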

Logical emulation of block size

I was unsure about the capabilities of dumpe2fs, so I added a new Accuracy template. It works at the filesystem layer, so I think it is not suitable for 512e. Besides, it assumes a working filesystem written directly on the device. Feel free to remove the template if you know better. :) --Nonix (talk) 10:55, 21 September 2012 (UTC)

I removed Nonix's Accuracy template regarding logical emulation of block size since the article does not reference it. Therefore the accuracy of the article is not in question. However, it is a fair point of discussion. Nonix wrote:
Does dumpe2fs recognize 512e-discs? (4K physical sectors on the drive media with 512 byte logical emulation)
~ Filam (talk) 14:17, 21 September 2012 (UTC)

I updated the section on fdisk. fdisk should be the standard tool for the job of printing the logical and physical sector sizes. It can detect 512e, as I read in a magazine, though I don't know if it always does. Any objections? --Nonix (talk) 11:01, 27 September 2012 (UTC)
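For reference, a sketch of what that check looks like; sdX is a placeholder, and the sample output line is the typical util-linux format for a 512e drive:

```shell
# fdisk -l prints both values; on a 512e drive the logical and
# physical sizes differ, e.g.:
#   Sector size (logical/physical): 512 bytes / 4096 bytes
fdisk -l /dev/sdX | grep "Sector size"
```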

Moved contributions from Talk:Dm-crypt_with_LUKS

How about including shred instead of or along with badblocks

I just used it recently and it worked great.

 # shred -v -n 1 /dev/sdb
 shred: /dev/sdb: pass 1/1 (random)...572GiB/932GiB 61%
 shred: /dev/sdb: pass 1/1 (random)...573GiB/932GiB 61%
 shred: /dev/sdb: pass 1/1 (random)...574GiB/932GiB 62%
 shred: /dev/sdb: pass 1/1 (random)...575GiB/932GiB 62%
 shred: /dev/sdb: pass 1/1 (random)...576GiB/932GiB 63%

Are there any objections or complaints about this tool? In my experience it was a lot faster than using /dev/random, and it displays its progress to the user. I'd say it should be the first recommended option. MrSk1yj8 (talk) 29 July 2012

I agree and took the liberty of adding shred. However, the random source is intrinsic to dm-crypt. A great feature of shred is that you can choose a random source other than the default (see the wiki example), so you can actually use /dev/urandom or whatever you prefer. That, together with the -v progress view, makes it a great replacement for dd in my view. --Indigo (talk) 20:23, 17 September 2012 (UTC)
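Combining the two points above, a single-pass invocation with an explicitly chosen random source could look like this; sdX is a placeholder for the target device:

```shell
# One pass of random data with progress output; --random-source lets
# you pick the entropy source instead of shred's internal generator.
shred -v -n 1 --random-source=/dev/urandom /dev/sdX
```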

on the bashing of /dev/urandom

I don't take an opinion on whether old overwritten data can be read.

However, there is an unrelated reason to fill a LUKS partition from /dev/urandom before LUKS-initializing it (and after checking for bad blocks if you wanted to do that).

It makes it harder for people trying to read your disk and find out what's on it. If you filled it with zeroes, for example, then they would be able to tell which portions of the partition had been written to since you initialized it.

compare gentoo docs, --Idupree 22:45, 3 March 2010 (EST)
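The point above can be illustrated on a small image file (a toy sketch, not a real wipe): after zero-filling, any block later written with ciphertext-like random data is trivially distinguishable from the untouched zeros, whereas on a urandom-filled device there is no such hint.

```shell
# Zero-fill an 8-block image, then write one block of random data
# (standing in for ciphertext) at block 3.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=4096 count=8 2>/dev/null
dd if=/dev/urandom of="$img" bs=4096 seek=3 count=1 conv=notrunc 2>/dev/null

# Check each block: only block 3 contains non-zero bytes, so an
# observer can tell exactly which region has been written to.
for i in 0 1 2 3 4; do
    if [ -n "$(dd if="$img" bs=4096 skip=$i count=1 2>/dev/null | tr -d '\000')" ]; then
        echo "block $i: written"
    else
        echo "block $i: untouched"
    fi
done
rm -f "$img"
```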

Agreed, /dev/urandom should be used to clear partitions, at least as default in the examples. If anyone wants to zero the partitions instead of using random data, they are free to do so. --Montschok 20:53, 11 August 2010 (EDT)