Talk:RAID

From ArchWiki

GPT partitions

zap (destroy) GPT and MBR data structures

 sgdisk --zap-all /dev/sdb

create largest possible new partition

 sgdisk --largest-new=1 /dev/sdb

check partition table integrity

 sgdisk --verify /dev/sdb

print partition table

 sgdisk --print /dev/sdb
Is this a mis-paste? I can't quite see why it is here.
jasonwryan (talk) 00:36, 19 July 2013 (UTC)
This is here because it's how I prepare hard drives before setting them up for RAID. Not everyone uses GPT *yet*, so I didn't want to just stick it on the main page. ~ AskApache (talk) 09:26, 3 October 2013 (UTC)
This is nice, especially if you want to script partitioning in a more readable way than piping input into fdisk or gdisk. --Nearwood (talk) 18:16, 15 March 2017 (UTC)
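As noted above, these sgdisk calls script well. A minimal dry-run sketch of the sequence, assuming /dev/sdb and /dev/sdc as the disks to prepare (the run wrapper only echoes the commands, since --zap-all is destructive; swap the echo for "$@" to actually execute):

```shell
# Dry-run wrapper: prints each sgdisk call instead of executing it.
# Replace the echo with "$@" to run the commands for real.
run() { echo "would run: $*"; }

for disk in /dev/sdb /dev/sdc; do        # assumed target disks
    run sgdisk --zap-all "$disk"         # destroy GPT and MBR data structures
    run sgdisk --largest-new=1 "$disk"   # one partition spanning the whole disk
    run sgdisk --verify "$disk"          # check partition table integrity
    run sgdisk --print "$disk"           # show the result
done
```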

Major re-write

I've done a pretty major overhaul of the article over the past week. Please check it for accuracy. One of my goals was to add a thread of continuity to the article so it reads as a complete work rather than as a hodgepodge of advice. I feel that mixing formatting types and utilities, for example, is confusing to newbies. I recommend sticking with GPT, as you can see in the text. Graysky (talk) 23:22, 5 October 2013 (UTC)

RAID 1 and Stride/Stripe

The section "Build the Array" mentions "In a RAID1 the chunk switch is actually not needed." and mdadm outputs "chunk size ignored for this level". cat /proc/mdstat outputs "65536KB chunk" regardless of what chunk size was chosen during creation.

Yet the section "Calculating the Stride and Stripe-width" has an example for RAID1 and uses a 64KB chunk size for calculating it. What is this math based on, if it is impossible to choose a chunk size for RAID1? —This unsigned comment is by Malstrond (talk) 13 January 2014. Please sign your posts with ~~~~!

I think you are right. The man page says "[chunk] is only meaningful for RAID0, RAID4, RAID5, RAID6, and RAID10.". RAID1 is a single chunk, so it ignores that flag. ~ Gcb (talk)

Note that /proc/mdstat has two "chunks" listed:
 Personalities : [raid6] [raid5] [raid4]
 md0 : active raid6 sda2[0] sdb2[1] sde2[4] sdd2[3] sdc2[2]
     2929501200 blocks super 1.2 level 6, 16k chunk, algorithm 2 [5/5] [UUUUU]
     bitmap: 1/8 pages [4KB], 65536KB chunk
The former is my specified chunk; the latter is for some other specification unknown to me.
--Nearwood (talk) 18:13, 15 March 2017 (UTC)
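For levels where the chunk size does apply, the stride/stripe-width arithmetic the article describes can be checked in shell. A sketch with assumed values (64 KiB chunk, 4 KiB ext4 block, 3-disk RAID5; for RAID1 the chunk is ignored, so the calculation is moot there):

```shell
chunk_kib=64    # assumed mdadm chunk size
block_kib=4     # assumed ext4 block size
disks=3         # assumed RAID5 member count
data_disks=$((disks - 1))               # RAID5: one disk's worth of parity
stride=$((chunk_kib / block_kib))       # chunk size / filesystem block size
stripe_width=$((stride * data_disks))   # stride * data-bearing disks
echo "stride=$stride stripe_width=$stripe_width"
# These values would then feed into mkfs.ext4's -E stride=...,stripe-width=...
```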

Substituting one identical disk for another

It is useful to run sfdisk -d /dev/sda | sfdisk /dev/sdb to copy the partition table from one of the array's disks to the replacement disk. External reference: http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array. If you do not use sfdisk, you may receive the error: mdadm: /dev/sdb1 not large enough to join array —This unsigned comment is by Xan (talk) 8 November 2014. Please sign your posts with ~~~~!
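Since this page favours GPT, sgdisk offers an equivalent table copy for GPT disks. A dry-run sketch, assuming /dev/sda is the surviving array member and /dev/sdb the replacement (after replicating, the GUIDs should be randomized so the two disks do not share identifiers):

```shell
run() { echo "would run: $*"; }   # dry-run; replace the echo with "$@" to execute

src=/dev/sda   # assumed: healthy array member
dst=/dev/sdb   # assumed: replacement disk, at least as large

run sgdisk "$src" --replicate="$dst"   # copy src's partition table onto dst
run sgdisk --randomize-guids "$dst"    # give dst unique disk/partition GUIDs
```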

Add a drive (RAID5, RAID6)

In order to maintain fail-safety in the event of an interruption, a backup file should be created with

 --backup-file=location

We shouldn't tell people to add a drive without it.

—This unsigned comment is by Orbita (talk) 01:14, 24 August 2019 (UTC). Please sign your posts with ~~~~!
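A dry-run sketch of what that would look like, assuming a RAID5 /dev/md0 growing from 3 to 4 devices with /dev/sdd1 as the new member (device names and backup path are placeholders; the backup file must live on a filesystem outside the array being reshaped):

```shell
run() { echo "would run: $*"; }   # dry-run; replace the echo with "$@" to execute

# Add the new disk as a spare, then grow the array onto it, keeping a
# critical-section backup so an interrupted reshape can be resumed.
run mdadm --add /dev/md0 /dev/sdd1
run mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-grow.backup
```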

Assemble a RAID10 n2 on aarch64 kernel 5.2.9-1 mdadm v4.1

The sequence proposed on the page to assemble the array was not sufficient in my case.

   mdadm --detail --scan >> /etc/mdadm.conf

Generated the following line in ``mdadm.conf``

   INACTIVE-ARRAY /dev/md127 metadata=1.2 name=raspi:myarray UUID=cec39bd8:b5a340f3:ca18cc5b:dcdedede

Which caused any ``mdadm`` command to spit out ``mdadm: Unknown keyword INACTIVE-ARRAY``.

I had to:

   mdadm --assemble --verbose --force /dev/md127 /dev/sd{a,b,c}

And then

   mdadm --detail --scan >> /etc/mdadm.conf

Generated

   ARRAY /dev/md127 metadata=1.2 name=raspi:myarray UUID=cec39bd8:b5a340f3:ca18cc5b:dcdedede

Which made ``mdadm`` happy and the array functional.

Should the page be updated? I am not sure that ``mdadm`` with --force and an explicit list of devices is correct in the general case.

Feryllt (talk) 19:41, 25 August 2019 (UTC)
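A less forceful sequence that may be safer in the general case (dry-run sketch; the device name is assumed): stop the inactive array, let mdadm re-assemble by scan, and only regenerate the config once the array is active, so no INACTIVE-ARRAY line is captured:

```shell
run() { echo "would run: $*"; }   # dry-run; replace the echo with "$@" to execute

run mdadm --stop /dev/md127            # release the half-assembled array
run mdadm --assemble --scan --verbose  # let mdadm find and assemble the members
# Only after the array is active, capture it into the config:
run sh -c 'mdadm --detail --scan >> /etc/mdadm.conf'
```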