Multipath config status check in Linux: How to?




Using the dmsetup command:

# ls -lrt /dev/mapper    # view the mapper disk paths and logical volumes

# dmsetup table     # show the mapping table for each device-mapper device

# dmsetup ls        # list the device-mapper devices

# dmsetup status    # show the status of each mapped device
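
On a server with many device-mapper devices, dmsetup can be restricted to multipath maps only. A quick sketch (the --target filter is a standard dmsetup option):

# dmsetup ls --target multipath       # list only multipath maps
# dmsetup status --target multipath   # status of multipath maps only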



Using the multipathd command (daemon):


# echo 'show paths' | multipathd -k

# echo 'show maps' | multipathd -k
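
Because these one-liners fail with a socket error when the daemon is down (see the error under section A below), a small guard sketch like this can be used in scripts; pgrep is assumed to be available:

if pgrep -x multipathd >/dev/null; then
    echo 'show maps' | multipathd -k
else
    echo "multipathd is not running" >&2
fi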


The multipathd interactive mode is explained below:


A. DISPLAY PATH STATUS

multipathd has an interactive mode (the -k flag) where it connects to the running multipathd process over a socket.

If there is no running multipathd, you will get the following error:

[root@k2 ~]# multipathd -k
ux_socket_connect: Connection refused

If the daemon is running, you can issue commands like the ones below:

# multipathd -k

multipathd>

multipathd> show multipaths status

name    failback   queueing  paths  dm-st
mpath0  immediate  -         4      active
mpath1  immediate  -         4      active
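
Based on the columns above, a minimal sketch that flags any map whose dm-st is not active (the awk field number assumes the exact layout shown; adjust it for your multipathd version):

echo 'show multipaths status' | multipathd -k | awk '$1 ~ /^mpath/ && $5 != "active" {print $1 " is " $5}'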


B. SHOW TOPOLOGY

multipathd> show topology

mpath0 (360050768018380367000000000000049) dm-0 IBM,2145
[size=5.0G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=100][enabled]
  \_ 1:0:3:0 sdg 8:96 [active][ready]
  \_ 1:0:1:0 sde 8:64 [active][ready]
\_ round-robin 0 [prio=20][enabled]
  \_ 1:0:0:0 sda 8:0  [active][ready]
  \_ 1:0:2:0 sdc 8:32 [active][ready]


C. SHOW PATHS

multipathd> show paths
hcil dev dev_t pri dm_st chk_st next_check 
1:0:0:0 sda 8:0 10 [active][ready] XXXXXXX... 14/20 
1:0:0:1 sdb 8:16 10 [active][ready] XXXXXXX... 14/20 
1:0:2:0 sdc 8:32 10 [active][ready] XXXXXXX... 14/20 
1:0:2:1 sdd 8:48 10 [active][ready] XXXXXXX... 14/20 
... excess deleted ... 
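
For a quick health count, the same output can be filtered for paths that are not ready (the faulty/failed strings are the usual checker states; a sketch):

echo 'show paths' | multipathd -k | grep -cE 'faulty|failed'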


D. FAIL A PATH

# multipathd -k"fail path sdc"

# multipathd -k"show paths" 
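
After a failover test, the path can be brought back with the standard reinstate command; a sketch of the full cycle:

# multipathd -k"fail path sdc"        # force the path down
# multipathd -k"show paths"           # confirm it shows as failed/faulty
# multipathd -k"reinstate path sdc"   # bring the path back online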

E. DELETE A PATH 

multipathd> del path sdc
ok 
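
The deleted path can be re-added later with the matching add command:

multipathd> add path sdc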

F. SUSPEND / RESUME A MAP

multipathd> suspend map mpath0
ok

And to resume (re-enable) the map:

multipathd> resume map mpath0
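
As a sketch, the suspend/resume pair can bracket a maintenance window so that I/O to the map is held while you work (mpath0 is just the example map from above):

# multipathd -k"suspend map mpath0"
  (perform the maintenance here)
# multipathd -k"resume map mpath0"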




Linux Software RAID: Disk status check / replace






A. Detecting a drive failure

 Check the log file /var/log/messages for disk-related errors.

 or 

 check the RAID status in /proc/mdstat

 # cat /proc/mdstat

 Note: If the output shows the array status as [U_] instead of [UU], one disk in the array has failed.
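
A minimal sketch that turns that note into a check, searching /proc/mdstat for an underscore inside the [UU] status brackets:

if grep -q '\[U*_U*\]' /proc/mdstat; then
    echo "WARNING: a RAID array is degraded" >&2
fi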


B. Check / query the RAID status

 # cat /proc/mdstat

  or

 # mdadm --detail /dev/mdx

  or


 # lsraid -a /dev/mdx
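
For scripting, the mdadm output can be filtered down to the array state and the failed-device count (a sketch; /dev/md1 is an assumed array name, and the labels match standard mdadm --detail output):

 # mdadm --detail /dev/md1 | grep -E 'State :|Failed Devices'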


C. Manually fail a disk in the RAID array


 # raidsetfaulty /dev/md1 /dev/sdc2

This should be enough to fail the disk /dev/sdc2 in the array /dev/md1. raidsetfaulty belongs to the legacy raidtools package; if you are using mdadm instead, type:

 # mdadm --manage --set-faulty /dev/md1 /dev/sdc2


D. Replace the failed disk in the RAID array


 # raidhotremove /dev/md1 /dev/sdc2    # assuming the failed disk is /dev/sdc2 in array /dev/md1

  or 

 # mdadm /dev/md1 -r /dev/sdc2 

Now we can install the new disk and run one of the commands below.

 # raidhotadd /dev/md1 /dev/sdc2

  or

 # mdadm /dev/md1 -a /dev/sdc2
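
Putting sections C and D together, a sketch of the complete replacement sequence with mdadm (device names are the examples used above):

 # mdadm --manage --set-faulty /dev/md1 /dev/sdc2   # mark the member as failed
 # mdadm /dev/md1 -r /dev/sdc2                      # remove it from the array
   (physically replace the disk)
 # mdadm /dev/md1 -a /dev/sdc2                      # add the new disk back
 # watch cat /proc/mdstat                           # follow the rebuild progress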


E. Monitor the RAID status


 # mdadm --monitor --mail=root@localhost --delay=1800 /dev/md2   # mail alerts to root, polling every 1800 seconds
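
To keep the monitor running in the background and to verify that mail delivery works, mdadm also provides a daemon mode and a one-shot test alert (both standard mdadm --monitor options):

 # mdadm --monitor --daemonise --mail=root@localhost --delay=1800 /dev/md2
 # mdadm --monitor --scan --oneshot --test --mail=root@localhost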






