Friday, August 17, 2007

Configuring RAID on Linux with mdadm

1: RAID definition
RAID stands for Redundant Array of Inexpensive Disks. It comes in two flavors: software RAID, which implements redundancy across multiple disks purely in software, and hardware RAID, which is usually implemented by a dedicated RAID card. Software RAID is simple to configure and flexible to manage, making it an excellent choice for small and medium-sized businesses. Hardware RAID tends to be expensive, but has a performance advantage.

2: RAID levels
RAID comes in several levels; a quick comparison table:
RAID 0    fastest access                      no fault tolerance
RAID 1    full fault tolerance                high cost, low disk utilization
RAID 3    best write performance              no multitasking capability
RAID 4    multitasking plus fault tolerance   the parity drive becomes a performance bottleneck
RAID 5    multitasking plus fault tolerance   writes carry a parity overhead
RAID 0+1  fast and fully fault-tolerant       high cost

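To make the cost column concrete, usable capacity can be worked out from the disk count and size. A small shell sketch, using the standard formulas for each level (the four 1 GB disks are chosen to match the experiment below):

```shell
#!/bin/sh
# Usable capacity for n equal-size disks, per RAID level.
n=4; size=1   # example: four 1 GB disks

echo "RAID 0:   $(( n * size )) GB"         # pure striping, no redundancy
echo "RAID 1:   $(( size )) GB"             # every disk mirrors the first
echo "RAID 5:   $(( (n - 1) * size )) GB"   # one disk's worth of parity
echo "RAID 0+1: $(( n * size / 2 )) GB"     # stripes, fully mirrored
```

With four 1 GB disks this gives 4, 1, 3 and 2 GB respectively, which is why RAID 5 is often the compromise between RAID 0's speed and RAID 1's cost.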
As an aside, mdadm handles every level with the same command shape; a two-disk RAID 1 mirror, for example, would be created with:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb3
3: A Linux RAID 5 walkthrough

Suppose we have four disks (if you don't have the hardware, a virtual machine can provide four virtual disks): /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd. The first step is partitioning them.
[root@localhost /]# fdisk /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n # enter n to create a new partition
Command action
e extended
p primary partition (1-4) # enter p to create a primary partition
p
Partition number (1-4): 1 # enter 1 to create the first primary partition
First cylinder (1-130, default 1): # press Enter to start the partition at cylinder 1
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130): # press Enter to use the whole disk
Using default value 130
Command (m for help): w # enter w to write the partition table to disk
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Partition the other disks the same way, one partition each. The full partition listing is then:
[root@localhost /]# fdisk -l
Disk /dev/sda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 130 1044193+ 83 Linux

Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 130 1044193+ 83 Linux

Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 130 1044193+ 83 Linux

Disk /dev/sdd: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 130 1044193+ 83 Linux
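Answering fdisk's prompts four times by hand is tedious; the same one-partition-per-disk layout can also be scripted with sfdisk. This is only a sketch: the ',,83' input line (default start, maximum size, partition type 83) is the traditional util-linux sfdisk syntax, and the device list is this article's four example disks. Because it overwrites partition tables, the loop runs only when explicitly confirmed:

```shell
#!/bin/sh
# Create a single Linux (type 83) partition spanning each disk.
partition_disk() {
    # sfdisk input ",,83": default start, default (maximum) size, type 83
    echo ',,83' | sfdisk "$1"
}

# Destructive -- only runs when CONFIRM_WIPE=yes is set in the environment.
if [ "${CONFIRM_WIPE:-no}" = yes ]; then
    for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
        partition_disk "$disk"
    done
fi
```

Double-check the device names against `fdisk -l` before setting CONFIRM_WIPE; pointing this at the wrong disk destroys its data.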

The next step is to create the RAID array.
[root@localhost ~]# mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[a-d]1 # create a RAID 5 array named md0 from three active devices, keeping one as a spare
mdadm: array /dev/md0 started.

OK, the array now exists; let's look at the details.
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Fri Aug 3 13:53:34 2007
Raid Level : raid5
Array Size : 2088192 (2039.25 MiB 2138.31 MB)
Device Size : 1044096 (1019.63 MiB 1069.15 MB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Aug 3 13:54:02 2007
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 -1 spare /dev/sdd1
UUID : e62a8ca6:2033f8a1:f333e527:78b0278a
Events : 0.2
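Besides mdadm --detail, the kernel exposes array state through /proc/mdstat, whose bracketed status field shows one 'U' per in-sync member and an '_' for a failed or missing one. A minimal health check built on that convention (the parsing is my assumption, matched to the /proc/mdstat output shown later in this article):

```shell
#!/bin/sh
# Classify md status text as healthy or degraded from its [UUU]-style field:
# all members up => only "U" characters between the brackets.
array_status() {
    if printf '%s\n' "$1" | grep -Eq '\[U+\]'; then
        echo healthy
    else
        echo degraded
    fi
}

if [ -r /proc/mdstat ]; then
    array_status "$(cat /proc/mdstat)"
fi
```

A degraded array shows something like [UU_] instead, which the pattern above deliberately fails to match.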

To have the RAID assembled automatically at boot, set up the mdadm configuration file. Its default name is mdadm.conf; the file does not exist by default, so it has to be created. Its main purpose is to let the system assemble the software RAID automatically at startup, and it also makes day-to-day management easier.
A note on its contents: mdadm.conf consists mainly of a DEVICE section, which specifies all the devices that make up the RAID, and ARRAY lines, which give the array's device name, RAID level, number of active devices, and the array's UUID.
[root@localhost ~]# mdadm --detail --scan > /etc/mdadm.conf
[root@localhost ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=e62a8ca6:2033f8a1:f333e527:78b0278a
devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1
#The format produced this way is not quite right; edit it as follows:
[root@localhost ~]# vi /etc/mdadm.conf
[root@localhost ~]# cat /etc/mdadm.conf
devices /dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=e62a8ca6:2033f8a1:f333e527:78b0278a
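For reference, the mdadm.conf man page writes the keyword as uppercase DEVICE and lists space-separated device names (or shell-style glob patterns such as /dev/sd[a-d]1) rather than a comma-separated word, so a rendering closer to the documented format, using this article's devices and UUID, would be:

```
DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=e62a8ca6:2033f8a1:f333e527:78b0278a
```

If the edited file fails to assemble the array at boot, this spelling is the first thing worth checking against your mdadm version's man page.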

Create a filesystem on /dev/md0:
[root@localhost ~]# mkfs.ext3 /dev/md0
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
261120 inodes, 522048 blocks
26102 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
16320 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Mount /dev/md0 and check that it is usable:
[root@localhost ~]# cd /
[root@localhost /]# mkdir mdadm
[root@localhost /]# mount /dev/md0 /mdadm/
[root@localhost /]# cd /mdadm/
[root@localhost mdadm]# ls
lost+found
[root@localhost mdadm]# cp /etc/services .
[root@localhost mdadm]# ls
lost+found services
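To mount the filesystem automatically at every boot (the mdadm.conf created earlier ensures /dev/md0 is assembled first), an entry along these lines can be added to /etc/fstab, using the /mdadm mount point created above:

```
/dev/md0    /mdadm    ext3    defaults    0 0
```

The trailing 0 0 skips dump backups and boot-time fsck ordering; set the last field to 2 if you want the filesystem checked automatically.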

Good. Now, what happens if one of the disks fails? The system stops using that disk and lets the spare take over its role. We can simulate this.
[root@localhost mdadm]# mdadm /dev/md0 --fail /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0
[root@localhost mdadm]# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdc1[3](F) sdd1[2] sdb1[1] sda1[0] # the (F) flag marks this disk as failed
2088192 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

What if we want to remove the failed disk, or add a new one?
#remove a disk
[root@localhost mdadm]# mdadm /dev/md0 --remove /dev/sdc1
mdadm: hot removed /dev/sdc1
[root@localhost mdadm]# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sdb1[1] sda1[0]
2088192 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
#add a disk
[root@localhost mdadm]# mdadm /dev/md0 --add /dev/sdc1
mdadm: hot added /dev/sdc1
[root@localhost mdadm]# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdc1[3] sdd1[2] sdb1[1] sda1[0]
2088192 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
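Spotting the failed member in /proc/mdstat by eye works for one small array, but the (F) markers can also be extracted mechanically, for use in a cron job or alert script. A sketch; the token format `name[slot](F)` is taken from the output shown above:

```shell
#!/bin/sh
# Print the name of every md member device marked failed, i.e. tokens
# shaped like "sdc1[3](F)" in /proc/mdstat-style text.
failed_devices() {
    printf '%s\n' "$1" | tr ' ' '\n' \
        | sed -n 's/^\([a-z0-9]*\)\[[0-9]*\](F)$/\1/p'
}

if [ -r /proc/mdstat ]; then
    failed_devices "$(cat /proc/mdstat)"
fi
```

An empty result means no member is currently flagged as failed; each printed name is a candidate for `mdadm /dev/md0 --remove`.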

http://wuqingying.blog.51cto.com/13185/36803
