File creation time in Linux : How to get it?


 
Linux file systems up to and including ext3 do not store a file creation time; ext4 added this extra field (crtime) in the inode.

First, get the inode number of the file whose creation time you want:

# ls -i file_name

135528 file_name    // 135528 is the inode number of this file

# debugfs -R 'stat <135528>' /dev/sda1

crtime: 0x3f1cacc966104fc -- Tue May 27 16:35:28 2012

Now you can see the file creation time in the crtime field.
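Both steps can be combined into one line; a minimal sketch, assuming the file lives on an ext4 file system on /dev/sda1 (adjust the device to match your mount):

# debugfs -R "stat <$(stat -c %i file_name)>" /dev/sda1 | grep crtime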



In Linux, and in any Unix flavor, every file carries three timestamps:

1. atime : the last time the file was read (opened or executed)
2. ctime : the last time the inode information was updated (chmod, chown etc. do that); ctime is also updated when the file is modified
3. mtime : the last time the file's contents were modified

Command to see atime # ls -lu
Command to see ctime # ls -lc
Command to see mtime # ls -l

In ext4, the crtime field additionally gives the creation timestamp.

You can use the stat command to see all three timestamps (atime, mtime, ctime):

# stat myfile

 File: `myfile'
 Size: 41              Blocks: 8          IO Block: 4096   regular file
Device: fd04h/64772d    Inode: 655406      Links: 1
Access: (0777/-rwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-07-30 13:06:26.000000000 +0100
Modify: 2012-07-30 13:06:26.000000000 +0100
Change: 2012-07-30 13:06:46.000000000 +0100
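To see for yourself how the three timestamps change, here is a small sketch (run stat myfile after each command and compare the three lines):

# echo test >> myfile        // write: updates mtime (and ctime)
# cat myfile > /dev/null     // read: updates atime (may be deferred on relatime mounts)
# chmod 755 myfile           // inode change only: updates ctime
# stat myfile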




Disk adding issue on HPVM : How to solve it?

How do you add a new disk to an HPVM guest when 9 disks are already attached?

I faced this issue while trying to add the 10th disk to a VM guest node. We already had 9 disks attached to the VM machine, and I got a request to add one more disk to the HPVM guest.

I got an error message saying the VM machine required a reboot for the change to take effect. I then found that the controller hardware path must be given in the add-disk command when attaching more than 9 disks.

Output from our server:

# hpvmstatus -P vmmachineA

(... output truncated ...)

Storage controller details

Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   1   0   0   0 lv        /dev/vg_name_00/rlv_os_00
disk    scsi         0   1   0   1   0 lv        /dev/vg_name_01/rlv_app_01
disk    scsi         0   1   0   2   0 lv        /dev/vg_name_02/rlv_app_01
disk    scsi         0   1   0   3   0 lv        /dev/vg_name_03/rlv_app_03
disk    scsi         0   1   0   4   0 lv        /dev/vg_name_04/rlv_app_05
disk    scsi         0   1   0   5   0 lv        /dev/vg_name_05/rlv_app_06
disk    scsi         0   1   0   6   0 lv        /dev/vg_name_06/rlv_app_00
disk    scsi         0   1   0   7   0 lv        /dev/vg_name_07/rlv_app_01
disk    scsi         0   1   0   8   0 lv        /dev/vg_name_08/rlv_app_02
disk    scsi         0   1   0   9   0 lv        /dev/vg_name_09/rlv_app_03   <-- the 10th disk, newly added using the method below


Steps to add the 10th disk to an HPVM guest:
 
On the host node:

# ioscan -eC disk

# inq -sym_wwn | grep -i "lun no"

# pvcreate /dev/rdisk/disk??

# vgcreate /dev/vg_name?? /dev/disk/disk??

# lvcreate -L <size in MB> -n lv_?? /dev/vg_name??


# hpvmmodify -P ngmhv456 -a disk:scsi:lv:/dev/vg_name??/rlv_name??


(this form works for up to 9 disks)


or


# hpvmmodify -P ngmhv456 -a disk:scsi:0,1,9:lv:/dev/vg_name??/rlv_name??


(this form, with the controller hardware path, is needed from the 10th disk onwards)

Problem :

If you try to add the 10th disk to your VM guest node on HPVM using the command below, you will get an error message like "you need to reboot your VM machine to make this change":

# hpvmmodify -P ngmhv456 -a disk:scsi:lv:/dev/vg_name??/rlv_name??


Reason :
One controller can hold a maximum of 15 disks, and up to 9 disks can be added without specifying the controller path. From the 10th disk onwards, the controller path must be given.


Solution 1:

Specify the controller path while adding the disk. Using the commands below, a maximum of 15 disks can be added to one controller.

# hpvmmodify -P ngmhv456 -a disk:scsi:lv:/dev/vg_name??/rlv_name??

(this form works for up to 9 disks)


For the 10th disk, use the command below:



# hpvmmodify -P ngmhv456 -a disk:scsi:0,1,9:lv:/dev/vg_name??/rlv_name?? 


Solution 2 (at the time of VM guest creation):

If you want to be able to add more than 15 disks to a VM guest without a reboot, follow the steps below.

At the time of VM guest creation, configure all three controller paths instead of only the default path:


# hpvmmodify -P ngmhv456 -a disk:scsi:0,1,0:lv:/dev/vg_name??/rlv_name??

# hpvmmodify -P ngmhv456 -a disk:scsi:0,2,0:lv:/dev/vg_name??/rlv_name??

# hpvmmodify -P ngmhv456 -a disk:scsi:0,3,0:lv:/dev/vg_name??/rlv_name??


The three controllers are 0/1/0, 0/2/0 and 0/3/0.


One controller can hold a maximum of 15 disks. Once a controller has 15 disks, adding another controller to the VM guest requires a reboot of the guest node. To avoid that reboot, configure the three different controllers at VM guest creation time.
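For example, the three controllers can be seeded while creating the guest; a rough sketch (hpvmcreate accepts the same -a resource syntax as hpvmmodify; the guest name, OS type and lvol paths are placeholders):

# hpvmcreate -P ngmhv456 -O hpux -a disk:scsi:0,1,0:lv:/dev/vg_name00/rlv_name00
# hpvmmodify -P ngmhv456 -a disk:scsi:0,2,0:lv:/dev/vg_name01/rlv_name01
# hpvmmodify -P ngmhv456 -a disk:scsi:0,3,0:lv:/dev/vg_name02/rlv_name02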



Adding a new disk to HPVM : How to?

Scenario :

The SAN team has allocated a new LUN to the host (the physical server) where the VM machine is running. Now you need to bring that LUN into the host node and then present it to the guest node as a disk.



Here I am going to create a new raw lvol on the VM host; that raw lvol is then presented as a disk to the VM guest.

A. On the host node (where the VM machine is running)

Once the SAN team has allocated the disk to the host node, follow the steps below:

# ioscan -fnC disk

# ioscan -eC disk

# inq -sym_wwn | grep -i "lun no"

# pvcreate /dev/rdisk/disk??

# vgcreate /dev/vg_name?? /dev/disk/disk??

# lvcreate -L <size in MB> -n lv_?? /dev/vg_name??

(Create a raw lvol only; there is no need to create a file system, since this raw lvol will be presented to the HPVM guest as a disk.)
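As a worked example with hypothetical names and a 10 GB lvol (on HP-UX the VG group file has to be created by hand before vgcreate; pick a minor number not used by any other local VG):

# mkdir /dev/vg_app10
# mknod /dev/vg_app10/group c 64 0x0a0000
# pvcreate /dev/rdisk/disk25
# vgcreate /dev/vg_app10 /dev/disk/disk25
# lvcreate -L 10240 -n lv_app10 /dev/vg_app10     // 10 GB; also creates the raw node /dev/vg_app10/rlv_app10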



B. Adding the disk to the guest:


# /opt/hpvm/bin/hpvmstatus -P VM_Machine_Name     // check the existing disk details for the VM machine


Now we need to add the newly created lvol to the guest node as a new disk.


# hpvmmodify -P vir_machine -a disk:scsi:lv:/dev/vg_name??/rlv_name??     // the new lvol which we created

or 

# hpvmmodify -P vir_machine -a disk:scsi:0,1,7:lv:/dev/vg_name??/rlv_name??     // mention the controller hardware path


Once the disk is added, check and confirm:

# /opt/hpvm/bin/hpvmstatus -P VM_Machine_Name     // check the storage interface details in the output


For example, you will see output like the below:

Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   1   0   0   0 lv        /dev/vg_vir_machine_00/rlv_os_00
disk    scsi         0   1   0   1   0 lv        /dev/vg_vir_machine_01/rlv_app_01
disk    scsi         0   1   0   2   0 lv        /dev/vg_vir_machine_02/rlv_app_02
disk    scsi         0   1   0   3   0 lv        /dev/vg_vir_machine_03/rlv_app_03
disk    scsi         0   1   0   4   0 lv        /dev/vg_vir_machine_04/rlv_app_04
disk    scsi         0   1   0   5   0 lv        /dev/vg_vir_machine_05/rlv_app_05
disk    scsi         0   1   0   6   0 lv        /dev/vg_vir_machine_06/rlv_app_06
disk    scsi         0   1   0   7   0 lv        /dev/vg_vir_machine_07/rlv_app_07   <-- consider this the new lvol (disk)



C. Now log in to the guest node (virtual machine)


# ioscan -fnC disk

# ioscan -eC disk

# pvcreate /dev/rdisk/disk??     // the new disk

Now you can add this disk to any VG and then create a new lvol or extend an existing file system.
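For instance, to grow an existing VG and file system with the new disk, a sketch with hypothetical names and sizes (the online VxFS resize requires the OnlineJFS license, and the fsadm size syntax can vary by OS release):

# vgextend /dev/vg_app /dev/disk/disk10
# lvextend -L 20480 /dev/vg_app/lv_data     // grow the lvol to 20 GB (size in MB)
# fsadm -F vxfs -b 20480M /app              // resize the file system online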


Q&A : How to find the new disk on the guest node


On the host:

# hpvmstatus -P vmmachine

[ disk    scsi         0   1   0   7   0 lv        /dev/vg_vir_machine_07/rlv_app_07   <-- consider this the new lvol (disk) ]

Note down the disk's hardware path: 0 1 0 7 0 (Bus Dev Ftn Tgt Lun).

On the guest:

# ioscan -fnC disk

The new disk's hardware path will appear at 0/1/7/0; check and confirm it against the hpvmstatus -P vmmachine output.

Then run # ls -lrt /dev/disk (or /dev/dsk); the most recently created entries are the device files for the new disk.




How to add more than 9 disks to HPVM : see the section "Disk adding issue on HPVM" above.


Controlling Services in Linux servers


The tools/commands below are used to control services on Linux servers:

    1. ntsysv
    2. chkconfig
    3. redhat-config-services
    4. service

ntsysv: a simple interface for configuring runlevels.

       A. It is a console-based interactive utility that allows you to control which services run when entering a given runlevel. It configures the current runlevel by default; using the --level option, you can configure other runlevels.
       B. ntsysv returns 0 on success, 2 on error, and 1 if the user cancelled (or backed out of) the program.

    
       
chkconfig: chkconfig provides a simple command-line tool for maintaining the /etc/rc[0-6].d directory hierarchy by relieving system administrators of the task of directly manipulating the numerous symbolic links in those directories.

         usage:  chkconfig --list [name]
                 chkconfig --add <name>
                 chkconfig --del <name>
                 chkconfig [--level <levels>] <name> <on|off|reset>

redhat-config-services: an X client that displays which services are started and stopped at each runlevel.
                        Services can be added, deleted, or re-ordered in runlevels 3 through 5 with this utility.

service:   It is used to start or stop a standalone service immediately. Most services accept the arguments start, stop, restart, reload, condrestart and status.

         usage:  service --status-all
                 service --status-all | grep ntpd
                 service <name> [start|stop|restart|status]
                 service vsftpd restart
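For example, a typical sequence to enable a service at boot and start it immediately (ntpd used here as an example):

# chkconfig --level 35 ntpd on     // start ntpd at boot in runlevels 3 and 5
# service ntpd start               // start it right now
# chkconfig --list ntpd            // confirm the runlevel settings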

List services and their open ports

              # netstat -tulpn



New in RHEL 6 : Quick View

Below are a few of the new features in RHEL 6:

1. ext4 is the default file system.
2. An improved level of security is introduced for virtual machines, named "sVirt".
3. UUIDs are used by default in the /etc/fstab file.
4. Upstart has replaced SysV init.
5. Mounting via NFS defaults to NFSv4.
6. In addition to /etc/sysctl.conf, there is now an /etc/sysctl.d directory: instead of modifying /etc/sysctl.conf directly, drop-in files can be placed under /etc/sysctl.d (see the example after this list).
7. The /etc/modprobe.d directory replaces /etc/modprobe.conf.
8. iSCSI partitions may be used as either root or boot filesystems.
9. Automated I/O alignment and self-tuning is supported.
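For item 6, a minimal sketch of a sysctl drop-in file (the filename and the parameter are arbitrary examples):

# echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/90-forwarding.conf
# sysctl -p /etc/sysctl.d/90-forwarding.conf     // load it now without a reboot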



Import a VG using the disks' VGID in HP-UX : How to?


If you want to import a VG using the disks' VGID, use the procedure below. Here we are not going to use a map file.


To get the VGID and PVID of a disk, use the commands below:

# ioscan -funC disk

# xd -An -j8200 -N16 -tx /dev/rdisk/disk??

(This command dumps the part of the disk's LVM header that holds the PVID and VGID.)

Once you have the VGID, you can import all the disks carrying that same VGID into a VG (VG name of your choice).
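Before running the import, the VG directory and group file must exist on the importing host; a short sketch (pick a minor number not used by any other local VG):

# mkdir /dev/MVG
# mknod /dev/MVG/group c 64 0x0a0000

(After the import succeeds, activate the VG with # vgchange -a y /dev/MVG and take a fresh config backup with # vgcfgbackup /dev/MVG.)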

# vgimport -v /dev/MVG /dev/disk/disk?? /dev/disk/disk??     // list every disk that carries that VGID




Note : Once the VG is imported this way, the LV names come back in a different order (with default names), not as recorded in /etc/lvmtab on the original host. This is the only difference between importing a VG with a map file and with this method.



PowerPath installation and config on Linux servers




1. Before installing PowerPath, modify the filter option in the lvm.conf file:

#vi /etc/lvm/lvm.conf

 filter = [ "a|/dev/cciss|", "a|/dev/emc|", "r/.*/" ]

Note : Modify the filter as needed using standard regular expressions. For example, to include partitions sda1 to sda9 for LVM2 while filtering out the remaining sd device nodes, set the filter field to filter = [ "a/sda[1-9]$/", "r/sd*/", "a/.*/" ].


2. Rebuild the LVM2 cache. Enter:


 # vgscan -v

3. Verify the filter is working:

 # lvmdiskscan


4. Recreate the initrd image to reflect the changes to the /etc/lvm/lvm.conf file. Enter:

 # mkinitrd

 ( e.g. /sbin/mkinitrd -v -f /boot/initrd-2.6.18-164.el5.img 2.6.18-164.el5 )
===========================

1. Install EMC PowerPath on Linux

Download the PowerPath software from the EMC Powerlink website. If you've purchased EMC support, you should have access to Powerlink.


# rpm -ivh EMCpower.LINUX-5.3.0.00.00-185.rhel5.i386.rpm
Preparing...                ########################################### [100%]
   1:EMCpower.LINUX         ########################################### [100%]

All trademarks used herein are the property of their respective owners.
NOTE:License registration is not required to manage the CLARiiON AX series array.


2. Register EMC PowerPath

Before you can use the EMC PowerPath software, you should register it using the PowerPath license key received when you purchased the software from EMC.

Use the emcpreg tool to install the license key as shown below.


# emcpreg -install

===========   EMC PowerPath Registration ===========
Do you have a new registration key or keys to enter?[n] y
Enter the registration keys(s) for your product(s),
one per line, pressing Enter after each key.
After typing all keys, press Enter again.

Key (Enter if done): **emc-powerpath-license-key**
1 key(s) successfully added.
Key successfully installed.

Key (Enter if done):
Key  is invalid, ignored.
Try again or press Enter if done.
1 key(s) successfully registered.




3. Verify EMC PowerPath registration

Use the EMC powermt command to check the registration, as shown below.

# powermt check_registration

Key **emc-powerpath-license-key**
  Product: PowerPath
  Capabilities: All

4. Start the PowerPath service:

# /etc/init.d/PowerPath start

5. Verify multiple paths

Once you've installed EMC PowerPath, execute powermt display dev=all as shown below to verify that multiple paths are displayed for each device.

root:ngmlx003# powermt display dev=all |more
Pseudo name=emcpowerac
Symmetrix ID=000290103691
Logical device ID=15AA
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
   2 qla2xxx                  sdaa      FA 10aA   active  alive       0      0
   3 qla2xxx                  sdbe      FA  7aA   active  alive       0      0
   4 qla2xxx                  sdci      FA 13aA   active  alive       0      0
   5 qla2xxx                  sddm      FA  4aA   active  alive       0      0

Pseudo name=emcpowerab
Symmetrix ID=000290103691
Logical device ID=1628
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
   2 qla2xxx                  sdab      FA 10aA   active  alive       0      0
   3 qla2xxx                  sdbf      FA  7aA   active  alive       0      0
   4 qla2xxx                  sdcj      FA 13aA   active  alive       0      0
   5 qla2xxx                  sddn      FA  4aA   active  alive       0      0
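With the lvm.conf filter from step 1 accepting /dev/emc* devices, the pseudo devices can now be used directly for LVM; a brief sketch (device name taken from the sample output above):

# pvcreate /dev/emcpowerac
# vgcreate vg_emc /dev/emcpowerac
# pvs | grep emcpower        // confirm LVM sits on the pseudo device, not on an individual sd path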



Quorum Server Setup in Service Guard cluster


The steps below explain the setup of the Quorum Server service, which is used within Service Guard to replace the "cluster lock disk".

At Quorum server:
==============

1. Download and install the Quorum Server software:

http://h20293.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=B8467BA

2. Configuration on the Quorum Server

A.  Create a directory called /var/adm/qs
B.  Create a directory called /etc/cmcluster (if it doesn’t already exist)
C.  Create a file called /etc/cmcluster/qs_authfile
D.  Insert into the qs_authfile the hostnames of all nodes in every cluster that needs to use the quorum service.

    Note:  All hosts contained in a cluster need to be specified.
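For example, for a two-node cluster the qs_authfile would simply contain the node names, one per line (hypothetical hostnames):

nodeA.example.com
nodeB.example.com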

3. Add the following entry to /etc/inittab:

  qs:345:respawn:/usr/lbin/qs >> /var/adm/qs/qs.log 2>&1

3.B. Tell init to re-read /etc/inittab

# init q


4. Check the log /var/adm/qs/qs.log for errors, and confirm that the service is up and listening:

Oct 19 11:29:21:0:Server is up and waiting for connections at port 1238


5. Check the process table for the Quorum Service.

serverA{root}# ps -ef | grep qs
    root 15663 15662  0 11:29:20 ?         0:00 /usr/lbin/qsc
    root 15662     1  0 11:29:20 ?         0:00 /usr/lbin/qsc


At Cluster nodes:
=============

 Adding a cluster to use the Quorum Server

6. Update /etc/cmcluster/qs_authfile on the quorum server with the hostnames of the cluster nodes.

7. Generate a new cluster configuration file as follows:

# cmquerycl -q <qs_host> -n <node1> -n <node2> -C <cluster_name>.config

Note : The cluster configuration ASCII file will contain the QS_HOST, QS_POLLING_INTERVAL, and QS_TIMEOUT_EXTENSION parameters.
Note : Increasing these values will impact the failover time accordingly.
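The relevant lines in the ASCII file look like the sketch below (hypothetical hostname; both interval values are in microseconds, so 300000000 is 5 minutes):

QS_HOST                 qshost.example.com
QS_POLLING_INTERVAL     300000000
QS_TIMEOUT_EXTENSION    2000000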


8. Verify and apply the new cluster config file using cmcheckconf and cmapplyconf:

# cmcheckconf -C <cluster_name>.config
# cmapplyconf -C <cluster_name>.config


9. The cluster will need to be down if converting from a cluster lock disk.

# cmhaltcl


