How to Configure RAID in Linux Step by Step

This tutorial explains how to view, list, create, add, remove, delete, resize, format, mount and configure RAID levels (0, 1 and 5) in Linux step by step with practical examples. Learn the basic concepts of software RAID (chunk, mirroring, striping and parity) and the essential RAID device management commands in detail.

RAID stands for Redundant Array of Independent Disks. There are two types of RAID: hardware RAID and software RAID.

Hardware RAID

Hardware RAID is a physical storage device which is built from multiple hard disks. When connected to a system, all of its disks appear as a single SCSI disk. From the system's point of view there is no difference between a regular SCSI disk and a hardware RAID device; the system can use a hardware RAID device as a single SCSI disk.

Hardware RAID has its own independent disk subsystem and resources. It does not use any resources from the system, such as power, RAM and CPU, so it does not put any extra load on the system. Since it has its own dedicated resources, it provides high performance.

Software RAID

Software RAID is a logical storage device which is built from the disks attached to the system. It uses system resources, so it provides slower performance but costs nothing. In this tutorial we will learn how to create and manage software RAID in detail.

This tutorial is the last part of our article "Linux Disk Management Explained in Easy Language with Examples". You can read the other parts of this article here.

Linux Disk Management Tutorial

This is the first part of this article. This part explains basic concepts of Linux disk management such as BIOS, UEFI, MBR, GPT, SWAP, LVM, RAID, primary partition, extended partition and Linux file system types.

Manage Linux Disk Partitions with the fdisk Command

This is the second part of this article. This part explains how to create primary, extended and logical partitions with the fdisk command in Linux step by step with examples.

Manage Linux Disk Partitioning with the gdisk Command

This is the third part of this article. This part explains how to create GPT (GUID partition table) partitions with the gdisk command in Linux step by step with examples.

Linux Disk Management with the parted Command

This is the fourth part of this article. This part explains how to create primary, extended, logical and GPT partitions with the parted command in Linux step by step with examples.

How to Create a Swap Partition in Linux

This is the fifth part of this article. This part explains how to create a swap partition in Linux with examples, including basic swap management tasks such as how to increase, mount or clear swap memory.

Learn How to Configure LVM in Linux Step by Step

This is the sixth part of this article. This part explains basic concepts of LVM in detail with examples, including how to configure and manage LVM in Linux step by step.

Basic concepts of RAID

A RAID device can be configured in multiple ways. Depending on the configuration, it can be categorized into ten different levels. Before we discuss RAID levels in more detail, let's take a quick look at some important terminology used in RAID configuration.

Chunk: This is the size of the data block used in the RAID configuration. If the chunk size is 64KB, then there would be 16 chunks in 1MB (1024KB/64KB) of RAID array data.
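The chunk arithmetic above can be verified with plain shell arithmetic:

```shell
# 1 MiB of stripe data divided by a 64 KiB chunk size gives the chunk count.
chunk_kb=64
mib_kb=1024
echo "$((mib_kb / chunk_kb)) chunks per MiB"
```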

Hot spare: This is an additional disk in the RAID array. If any disk fails, data from the faulty disk will be migrated to this spare disk automatically.

Mirroring: If this feature is enabled, a copy of the same data is also saved on another disk. It is just like making an additional copy of the data for backup purposes.

Striping: If this feature is enabled, data is written across all available disks. It is just like sharing data between all disks, so all of them fill equally.

Parity: This is a method of regenerating lost data from saved parity information.

Different RAID levels are defined based on how mirroring and striping are required. Among these levels, only Level 0, Level 1 and Level 5 are mostly used in Red Hat Linux.

RAID Level 0

This level provides striping without parity. Since it does not store any parity data and performs read and write operations simultaneously, its speed is much faster than the other levels. This level requires at least two hard disks. All hard disks in this level are filled equally. You should use this level only if read and write speed are the concern. If you decide to use this level, then always deploy an alternative data backup plan, as any single disk failure in the array will result in total data loss.

RAID Level 1

This level provides mirroring without striping. It writes all data on two disks. If one disk fails or is removed, we still have all the data on the other disk. This level requires double the hard disks: if you want to use the capacity of two hard disks, you have to deploy four, and if you want the capacity of one hard disk, you have to deploy two. The first hard disk stores the original data while the other disk stores an exact copy of the first. Since data is written twice, performance is reduced. You should use this level only if data safety matters at any cost.

RAID Level 5

This level provides both parity and striping. It requires at least three disks. It writes parity information evenly across all disks. If one disk fails, data can be reconstructed from the parity data available on the remaining disks. This provides a combination of integrity and performance. Wherever possible, you should use this level.

If you want to use a hardware RAID device, use a hot-swappable hardware RAID device with spare disks. If any disk fails, data will be reconstructed on the first available spare disk without any downtime, and since it is a hot-swappable device, you can replace the failed device while the server is still running.

If the RAID device is properly configured, there will be no difference between software RAID and hardware RAID from the operating system's point of view. The operating system will access the RAID device as a regular hard disk, no matter whether it is software RAID or hardware RAID.

Linux provides the md kernel module for software RAID configuration. In order to use software RAID we have to configure an md RAID device, which is a composite of two or more storage devices.

How to configure software RAID step by step

For this tutorial I assume that you have un-partitioned disk space or additional hard disks for practice. If you are following this tutorial on virtualization software such as VMware Workstation, add three additional hard disks to the system. To learn how to add an additional hard disk to a virtual system, please see the first part of this tutorial. If you are following this tutorial on a physical machine, attach an additional hard disk. You can use a USB stick or pen drive for practice. For demonstration purposes I have attached three additional hard disks to my lab system.

Each disk is 2GB in size. We can list all attached hard disks with the fdisk -l command.

[Figure: fdisk -l output]

We can also use the lsblk command to view a structured overview of all attached storage devices.

[Figure: lsblk output]

As we can see in the above output, there are three un-partitioned disks available, each 2GB in size.

The mdadm package is used to create and manage software RAID. Make sure it is installed before we start working with software RAID. To learn how to install and manage packages in Linux, see the following tutorials:

How to configure YUM Repository in RHEL
RPM Command Explained with Example

For this tutorial I assume that the mdadm package is installed.

[Figure: rpm -qa mdadm output]

Creating RAID 0 Array

We can create a RAID 0 array with disks or partitions. To understand both options we will create two separate RAID 0 arrays; one with disks and the other with partitions. A RAID 0 array requires at least two disks or partitions. We will use the /dev/sdc and /dev/sdd disks to create a RAID 0 array from disks. We will create two partitions on /dev/sdb and later use them to create another RAID 0 array from partitions.

To create a RAID 0 array with disks, use the following command:

#mdadm --create --verbose /dev/[RAID array name or number] --level=[RAID level] --raid-devices=[number of storage devices] [storage device] [storage device]

Let's understand this command in detail.

mdadm: This is the main command.

--create: This option is used to create a new md (RAID) device.

--verbose: This option is used to view real-time updates of the process.

/dev/[RAID array name or number]: This argument provides the name and location of the RAID array. The md device should be created under the /dev/ directory.

--level=[RAID level]: This option and argument define the RAID level which we want to create.

--raid-devices=[number of storage devices]: This option and argument specify the number of storage devices or partitions which we want to use in this device.

[storage device]: This argument specifies the name and location of a storage device.

The following command will be used to create a RAID 0 array from the disks /dev/sdc and /dev/sdd with the name md0.

[Figure: mdadm --create /dev/md0]
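Filled in with the values from this example, the command can be assembled as below. This is only a sketch that prints the command for review: executing the printed line would destroy all data on /dev/sdc and /dev/sdd, so run it only as root on disposable lab disks.

```shell
# Assemble the mdadm invocation for a two-disk RAID 0 array named md0.
# WARNING: the assembled command wipes /dev/sdc and /dev/sdd when executed.
array=/dev/md0
cmd="mdadm --create --verbose $array --level=0 --raid-devices=2 /dev/sdc /dev/sdd"
echo "$cmd"
```

On the lab system, review the printed line and then execute it as root.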

To verify the array we can use the following command:

[Figure: cat /proc/mdstat output]

The above output confirms that the RAID array md0 has been successfully created from two disks (sdd and sdc) with a RAID level 0 configuration.
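In a script, the same check can be automated by searching /proc/mdstat for the array's state. The here-document below holds an assumed sample of typical mdstat output; on a real system you would read /proc/mdstat directly instead.

```shell
# Check whether md0 is reported as active, using sample /proc/mdstat text.
# On a real system: grep -q '^md0 : active' /proc/mdstat
mdstat=$(cat <<'EOF'
Personalities : [raid0]
md0 : active raid0 sdd[1] sdc[0]
      4188160 blocks super 1.2 512k chunks
EOF
)
echo "$mdstat" | grep -q '^md0 : active' && echo "md0 is active"
```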

Creating RAID 0 Array with partitions

Create a 1GiB partition with the fdisk command.

[Figure: fdisk, creating a new partition]

By default all partitions are created with the Linux standard type. Change the partition type to RAID and save the partition. Exit the fdisk utility and run the partprobe command to update the runtime kernel partition table.

[Figure: fdisk, changing the partition type]

To learn the fdisk command and its sub-commands in detail, please see the second part of this tutorial, which explains how to create and manage partitions with the fdisk command step by step.

Let's create one more partition, but this time use the parted command.

[Figure: parted, creating a new partition]

To learn the parted command in detail, please see the fourth part of this tutorial, which explains how to manage disks with the parted command step by step.

We have created two partitions. Let's build another RAID (Level 0) array, but this time use partitions instead of disks.

The same command will be used to create a RAID array from partitions.

[Figure: mdadm --create from partitions]

When we use the mdadm command to create a new RAID array, it puts its signature on the provided device or partition. This means we can create a RAID array from any partition type, or even from a disk which does not contain any partitions at all. So which partition type we use here is not important; the important point to always consider is that the partition should not contain any valuable data, because during this process all data on the partition will be wiped out.

Creating a File System in a RAID Array

We cannot use a RAID array for data storage until it contains a valid file system. The following command is used to create a file system in an array:

#mkfs -t [file system type] [RAID device]

Let's format md0 with the ext4 file system and md1 with the xfs file system.

[Figure: formatting the md devices]

The RAID 0 arrays are ready to use. In order to use them we have to mount them somewhere in the Linux file system. The Linux file system (the primary directory structure) starts with the root (/) directory, and everything goes under it or its subdirectories. We have to mount the arrays somewhere under this directory tree. We can mount them temporarily or permanently.

Mounting a RAID 0 Array temporarily

The following command is used to mount an array temporarily:

#mount [what to mount] [where to mount]

The mount command accepts several options and arguments which I will explain separately in another tutorial. For this tutorial this basic syntax is sufficient.

what to mount: This is the array.

where to mount: This is the directory which will be used to access the mounted resource.

Once mounted, whatever action we perform in the mount directory will be performed on the mounted resource. Let's understand it practically.

  • Create a mount directory in the / directory
  • Mount the /dev/md0 array
  • List the content
  • Create a test directory and file
  • List the content again
  • Un-mount the /dev/md0 array and list the content again
  • Now mount the /dev/md1 array and list the content
  • Again create a test directory and file. Use different names for the file and directory
  • List the content
  • Un-mount the /dev/md1 array and list the content again

The following figure illustrates this exercise step by step.

[Figure: temporary mount]

As the above figure shows, whatever action we performed in the mount directory was actually performed in the respective array.

The temporary mount option is good for arrays which we access occasionally. If we access an array on a regular basis, this approach is not helpful: each time we reboot the system, all temporarily mounted resources are un-mounted automatically. So if we have an array which is going to be used regularly, we should mount it permanently.

Mounting a RAID Array permanently

Each resource in the file system has a unique ID called a UUID. When mounting an array permanently we should use its UUID instead of its name. From version 7, RHEL also uses UUIDs instead of device names.

UUID stands for Universally Unique Identifier. It is a 128-bit number, expressed in hexadecimal (base 16) format.

If you have a static environment, you may use device names. But if you have a dynamic environment, you should always use UUIDs, because in a dynamic environment a device name may change on each boot. For example, suppose we attached an additional SCSI disk to the system; it is named /dev/sdb, and we mounted it permanently by its device name. Now suppose someone removed this disk and attached a new SCSI disk in the same slot. The new disk is also named /dev/sdb. Since the old and new disks have the same name, the new disk will be mounted in place of the old one. In this way, device names can create serious trouble in a dynamic environment. This issue can be solved with UUIDs: no matter how we attach the resource to the system, its UUID always remains fixed.


To find the UUIDs of all partitions we can use the blkid command. To find the UUID of a specific partition we have to use its name as an argument with this command.

[Figure: blkid output]

Once we know the UUID, we can use it instead of the device name. We can also copy and paste the UUID rather than typing it.

  • Use the blkid command to print the UUID of the array.
  • Copy the UUID of the array.
  • Use the mount command to mount the array, pasting the UUID instead of typing it.

The following figure illustrates the above steps.

[Figure: temporary mount using the UUID]
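The UUID can also be captured in a script instead of copied by hand. The blkid output line below is a fabricated sample (the UUID is invented for illustration); on a real system you would pipe the output of blkid /dev/md0 into the same sed expression.

```shell
# Extract the UUID value from one line of blkid-style output.
line='/dev/md0: UUID="ce21e5b6-3a12-4bd2-8a1f-62a27cfb8f27" TYPE="ext4"'
uuid=$(echo "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$uuid"
```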

When the system boots, it looks in the /etc/fstab file to find out which devices (partitions, LVs, swap or arrays) need to be mounted in the file system automatically. By default this file has entries for the partitions, logical volumes and swap space which were created during installation. To mount any additional device (array) automatically, we have to make an entry for that device in this file. Each entry in this file has six fields.

[Figure: default fstab file]

Number  Field            Description
1       What to mount    The device we want to mount. We can use the device name, UUID or label in this field to represent the device.
2       Where to mount   The directory in the main Linux file system where we want to mount the device.
3       File system      The file system type of the device.
4       Options          Just like the mount command, we can use supported options here to control the mount process. For this tutorial we will use the default options.
5       Dump support     To enable dump on this device, use 1. Use 0 to disable dump.
6       Automatic check  Whether this device should be checked while mounting or not. To disable, use 0; to enable, use 1 (for the root partition) or 2 (for all partitions except the root partition).

Let's make some directories to mount the arrays which we created recently.

[Figure: mkdir command]

Take a backup of the fstab file and open it for editing.

[Figure: backing up /etc/fstab]

Make entries for the arrays and save the file.

[Figure: fstab entries]
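Using the six fields described above, the new entries might look like the following sketch. The mount directories and the UUID here are placeholders for illustration; substitute the directories you created and the UUID reported by blkid on your system.

```
/dev/md0                                    /raidarray0   ext4   defaults   0 0
UUID=ce21e5b6-3a12-4bd2-8a1f-62a27cfb8f27   /raidarray1   xfs    defaults   0 0
```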

For demonstration purposes I used both the device name and the UUID to mount the partitions. After saving, always check the entries with the mount -a command. This command mounts everything listed in the /etc/fstab file, so if we made any mistake while updating this file, we will get an error as the output of this command.

If you get any error as the output of the mount -a command, correct it before rebooting the system. If there is no error, reboot the system.

[Figure: mount -a command]

The df -h command is used to check the available space in all mounted partitions. We can use this command to verify that all partitions are mounted correctly.

[Figure: df -h output]

The above output confirms that all partitions are mounted correctly. Let's list both RAID devices.

[Figure: listing the md devices]

How to delete a RAID Array

We cannot delete a mounted array. Un-mount all the arrays which we created in this exercise.

[Figure: umount command]

Use the following command to stop a RAID array:

#mdadm --stop /dev/[array name]

[Figure: mdadm --stop]

Remove the mount directories and copy the original fstab file back.

If you haven't taken a backup of the original fstab file, remove all the entries which you made in this file.

[Figure: restoring the fstab file]

Finally, reset all the disks used in this exercise.

[Figure: dd command]

The dd command is the easiest way to reset a disk. Disk utilities store their configuration parameters in the superblock. Usually the superblock size is defined in KB, so we simply overwrite the first 10MB of each disk with zero bytes. To learn the dd command in detail, see the fifth part of this tutorial, which explains this command in detail.
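The zero-fill step can be rehearsed safely on a regular file before pointing it at a real disk; only the of= target differs. The scratch file name below is arbitrary; on a real disk you would run the same dd line as root with something like of=/dev/sdb, which is destructive.

```shell
# Overwrite the first 10 MB of a target with zero bytes.
# Here the target is a scratch file; on a disk it would be of=/dev/sdb.
target=/tmp/raid-demo-disk.img
dd if=/dev/zero of="$target" bs=1M count=10 status=none
wc -c < "$target"   # prints 10485760 (10 MB)
rm -f "$target"
```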

Now reboot the system and use the df -h command again to verify that all the RAID devices which we created in this exercise are gone.

[Figure: df -h output]

How to create RAID 1 and RAID 5 arrays

We can create a RAID 1 or RAID 5 array by following the same procedure. All steps and commands will be the same except for the mdadm --create command. In this command you have to change the RAID level, the number of disks and the location of the associated disks.

To create a RAID 1 array from the /dev/sdd and /dev/sdb disks, use the following command:

[Figure: creating a RAID 1 array from disks]

To create a RAID 1 array from the /dev/sdb1 and /dev/sdb2 partitions, use the following command:

[Figure: creating a RAID 1 array from partitions]

You may get a metadata warning if you previously used the same disks or partitions to create a RAID array and they still contain metadata. Remember, we cleaned only the first 10MB, leaving the remaining space untouched. You can safely ignore this message, or clean the entire disk before using it again.

To create a RAID 5 array from the /dev/sdb, /dev/sdc and /dev/sdd disks, use the following command:

[Figure: creating a RAID 5 array from disks]

A RAID 5 configuration requires at least three disks or partitions. That's why we used three disks here.
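For reference, the full RAID 5 command for the three disks can be assembled as in the sketch below. The array name /dev/md0 is an assumption (use any free md name); executing the printed command destroys all data on the three disks, so run it only as root on lab disks.

```shell
# Assemble the mdadm invocation for a three-disk RAID 5 array.
# WARNING: the assembled command wipes /dev/sdb, /dev/sdc and /dev/sdd.
cmd="mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd"
echo "$cmd"
```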

To create a RAID 5 array from the /dev/sdb1, /dev/sdb2 and /dev/sdb3 partitions, use the following command:

[Figure: creating a RAID 5 array from partitions]

To avoid unnecessary errors, always reset the disks before using them in a new exercise.

So far in this tutorial we have learnt how to create, mount and remove a RAID array. In the following section we will learn how to manage and troubleshoot a RAID array. For this section I assume that you have at least one array configured. For demonstration purposes I will use the last configured example (RAID 5 with three partitions). Let's create a file system in this array and mount it.

[Figure: temporarily mounting the md device]

Let's put some dummy data in this directory.

[Figure: creating dummy data]

I redirected the manual page of the ls command into the /testingdata/manual-of-ls-command file. Later, to verify that the file contains actual data, I used the wc command, which counts the lines, words and characters of a file.

How to view the details of a RAID device

The following command is used to view detailed information about a RAID device:

#mdadm --detail /dev/[RAID device name]

This information includes the RAID level, the array size, the used size out of the total available size, the devices used in creating the array, the devices currently in use, spare devices, failed devices, the chunk size, the UUID of the array and much more.

[Figure: mdadm --detail output]

How to add an additional disk or partition to a RAID array

There are several situations where we have to increase the size of a RAID device; for example, a RAID device might be filled up with data, or a disk in the array might have failed. To increase the space of a RAID device we have to add an additional disk or partition to the existing array.

In the running example we used the /dev/sdb disk to create three partitions. The /dev/sdc and /dev/sdd disks are still available to use. Before we add them to this array, make sure they are cleaned. Last time we used the dd command to clean the disks. We can use that command again, or use the following command:

#mdadm --zero-superblock /dev/[disk name]

To check whether a disk contains a superblock or not, we can use the following command:

#mdadm --examine /dev/[disk name]

The following figure illustrates the use of both commands on both disks.

[Figure: mdadm --examine]

Now both disks are ready for the RAID array. The following command is used to add an additional disk to an existing array:

#mdadm --manage /dev/[RAID device] --add /dev/[disk or partition]

Let's add the /dev/sdc disk to this array and confirm the same.

[Figure: mdadm, adding additional space]
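Spelled out with example names, the add operation looks like this sketch. The array name /dev/md0 is an assumption, since your array name may differ; the command requires root and a running array, so it is only printed here for review.

```shell
# Print the command that adds /dev/sdc to the array /dev/md0 as a spare.
cmd="mdadm --manage /dev/md0 --add /dev/sdc"
echo "$cmd"
```

Afterwards, mdadm --detail on the array should list the new disk as a spare.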

Right now this disk has been added as a spare disk. It will not be used until a disk fails in the existing array or we manually force RAID to use it.

If any disk fails and spare disks are available, RAID will automatically select the first available spare disk to replace the faulty disk. Spare disks are the best backup plan in a RAID device.

We will add another disk to the array for backup; let's use this disk to increase the size of the array. The following command is used to grow the size of a RAID device:

#mdadm --grow --raid-devices=[number of devices] /dev/[RAID device]

RAID arranges all devices in a sequence. This sequence is built from the order in which the disks were added to the array. When we use this command, RAID will add the next working device to the active devices.

The following figure illustrates this command.

[Figure: mdadm --grow]

As we can see in the above output, the disk has been added to the array and the size of the array has been successfully increased.

Removing a faulty device

If a spare device is available, RAID will automatically replace a faulty device with the spare device. The end user will not see any change and will be able to access the data as usual. Let's understand it practically.

Right now there is no spare disk available in the array. Let's add one spare disk.

[Figure: mdadm, adding a spare disk]

When a disk fails, RAID marks that disk as a failed device. Once marked, it can be removed safely. If we want to remove any working device from the array for maintenance or troubleshooting purposes, we should always mark it as a failed device before removing it. When a device is marked as failed, all of its data is reconstructed on the working devices.

To mark a disk as a failed device, the following command is used:

#mdadm --manage --set-faulty /dev/[array name] /dev/[faulty disk]
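With this tutorial's example device, the fail-then-remove sequence looks like the sketch below. The array name /dev/md0 is an assumption; both commands require root and a running array, so they are only printed here for review.

```shell
# Print the two-step sequence: mark /dev/sdc failed, then remove it.
fail_cmd="mdadm --manage --set-faulty /dev/md0 /dev/sdc"
remove_cmd="mdadm --manage /dev/md0 --remove /dev/sdc"
printf '%s\n' "$fail_cmd" "$remove_cmd"
```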

We recently increased the size of this array, so before doing this exercise let's verify once again that the array still contains the valid data.

[Figure: wc command]

The above output confirms that the array still contains valid data. Now let's mark the device /dev/sdc as a faulty device in the array and confirm the operation.

[Figure: mdadm, setting a disk faulty]

The above output confirms that the device sdc, which is number four in the array sequence, has been marked as a failed [F] device.

As we know, if a spare disk is available it will automatically be used as the replacement for the faulty device; no manual action is required. Let's confirm that the spare disk has been used as the replacement for the faulty disk.

[Figure: mdadm, removing the faulty device]

Finally, let's verify that the data is still present in the array.

[Figure: verifying the data]

The above output confirms that the array still contains valid data.

That's all for this tutorial.
