How to change Time zone in Linux


Method 1: Create a symbolic link to the corresponding time zone file

# cd /etc
# rm localtime    # delete the existing localtime file

Check the available time zones in the US:

# ls /usr/share/zoneinfo/US/    
Alaska          Arizona         Eastern         Hawaii          Michigan        Pacific
Aleutian        Central         East-Indiana    Indiana-Starke  Mountain        Samoa

Note: For other country timezones, browse the /usr/share/zoneinfo directory

Now change the time zone with the following step:

# ln -sf /usr/share/zoneinfo/Asia/Calcutta localtime

Now check the time:

# date
Mon Aug 17 23:10:14 IST 2013

However, if you later apply OS patches, a tzdata/glibc update may overwrite this file and reset the time zone, so this method is not a permanent solution.
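On Red Hat-family systems, a slightly more durable variant (a minimal sketch, assuming an RHEL-style /etc/sysconfig/clock is present) is to copy the zone file instead of linking it and to record the zone name, so that tools which regenerate /etc/localtime pick the same zone:

# cp /usr/share/zoneinfo/Asia/Calcutta /etc/localtime     # copy instead of symlink
# vi /etc/sysconfig/clock                                  # set or edit the line: ZONE="Asia/Calcutta"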


                                            



Method 2: Change the time zone using the /etc/timezone file


# vi /etc/timezone     (this file is used on Debian/Ubuntu-based systems)
America/Los_Angeles

Then export the TZ variable for the current shell session:

$ export TZ=America/Los_Angeles
$ date
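The TZ variable only affects the current shell session. To make it persistent for one user (a minimal sketch, assuming a bash login shell), add it to the user's profile:

$ echo 'export TZ=America/Los_Angeles' >> ~/.bash_profile    # applied at the next login
$ . ~/.bash_profile                                           # or source it for the current session
$ date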


Method 3: Command-line tools

The time zone can also be changed interactively with the distribution-specific tools below:

Ubuntu: dpkg-reconfigure tzdata
Redhat: redhat-config-date
CentOS/Fedora: system-config-date
FreeBSD/Slackware: tzselect
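On newer systemd-based distributions (an assumption; this does not apply to the older releases above), timedatectl can do the same job and persists the setting:

# timedatectl list-timezones | grep Los_Angeles    # confirm the zone name
# timedatectl set-timezone America/Los_Angeles     # set it system-wide
# timedatectl                                      # verify the current time zone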



FS extend and FS reduce in RHEL 4


How to reduce a file system's size in RHEL 4?


Step 1: umount /fs
Step 2: e2fsck -f /dev/vg??/lvol??
Step 3: resize2fs /dev/vg??/lvol?? "new size"
Step 4: lvreduce -L "new size" /dev/vg??/lvol??
Step 5: mount /fs


Example: I have reduced the /mnt/u001 file system from 58 GB to 50 GB. Check the output below.


ServerA:/tmp# umount /mnt/u001

ServerA:/tmp# e2fsck -f /dev/mapper/vg01-lvol0
e2fsck 1.35 (28-Feb-2004)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/vg01-lvol0: 254375/7602176 files (0.4% non-contiguous), 8528446/15204352 blocks
ServerA:/tmp#


ServerA:/tmp# resize2fs /dev/mapper/vg01-lvol0 50G
resize2fs 1.35 (28-Feb-2004)
Resizing the filesystem on /dev/mapper/vg01-lvol0 to 13107200 (4k) blocks.

The filesystem on /dev/mapper/vg01-lvol0 is now 13107200 blocks long.

ServerA:/tmp#


ServerA:/tmp# lvreduce -L 50G /dev/mapper/vg01-lvol0
  WARNING: Reducing active logical volume to 50.00 GB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvol0? [y/n]: y
  Reducing logical volume lvol0 to 50.00 GB
  Logical volume lvol0 successfully resized
ServerA:/tmp#
ServerA:/tmp#
ServerA:/tmp# lvdisplay /dev/mapper/vg01-lvol0
  --- Logical volume ---
  LV Name                /dev/vg01/lvol0
  VG Name                vg01
  LV UUID                ZbIk91-8OUL-Wleo-MfzZ-QkvV-A2c2-amKGil
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                50.00 GB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:35

ServerA:/tmp# 



Now check the new file system size:

serverA:/tmp# df -hP /mnt/u001
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg01-lvol0   50G   32G   16G  67% /mnt/u001
ServerA:/tmp#


How to extend a file system in RHEL 4?


Step 1: lvextend -L +"size" "lvolname"
Step 2: ext2online "lvolname"


serverA:/tmp# lvextend -L +5G /dev/mapper/vg01_app-lvol8
  Extending logical volume lvol8 to 90.98 GB
  Logical volume lvol8 successfully resized
serverA:/tmp#


serverA:/tmp# ext2online /dev/mapper/vg01_app-lvol8
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
serverA:/tmp#

serverA:/tmp# df -hP /mnt/u006
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg01_app-lvol8   90G   81G  7.7G  92% /mnt/u006
serverA:/tmp# 
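Note: ext2online is specific to older releases such as RHEL 4. On RHEL 5 and later, resize2fs itself can grow a mounted ext3/ext4 file system online, so the second step becomes (a sketch, reusing the same LV name):

# lvextend -L +5G /dev/mapper/vg01_app-lvol8
# resize2fs /dev/mapper/vg01_app-lvol8        # grows the FS online to fill the LV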



Locked VMDK files in ESXi server


 In ESX/ESXi 2.0

To determine which ESX host has locked the file:

    Execute:
    
    # vmkfstools -D vmhba0:2:0:8
    # less /var/log/vmkernel
    
    Look for the string lock= in the output for the affected VM's .vmdk file. The IP address shown after lock= is the IP of the ESX host that created the lock.
    
    Jul  3 10:13:22 cs_tse_d03 vmkernel: 13:20:24:17.562 cpu0)File descriptor 3 (RZ_NLB_TESTW2003EE1.vmdk) is allocated
    Jul  3 10:13:22 cs_tse_d03 vmkernel: 13:20:24:17.562 cpu0)^Ilength=6291456000 toolsVersion=6313 hwVersion=3 lock=10.16.156.45 RO=0
    
    Note: lock= 0.0.0.0 indicates that a file is not locked by another host.



In ESX/ESXi 3.x/4.x and ESXi 5.0


How to check whether a VMDK file is locked or not?


# vmkfstools -D /vmfs/volumes/LUN/VM/disk-flat.vmdk

You will see output similar to:

Lock [type 10c00001 offset 54009856 v 11, hb offset 3198976
gen 9, mode 0, owner  4655cd8b-3c4a19f2-17bc-00145e808070  mtime 114]
Addr <4, 116, 4>, gen 5, links 1, type reg, flags 0, uid 0, gid 0, mode 600
len 5368709120, nb 0 tbz 0, cow 0, zla 3, bs 1048576


Where:

    The owner 4655cd8b-3c4a19f2-17bc-00145e808070 indicates that the MAC address of the ESX/ESXi host locking the file is 00:14:5E:80:80:70.

    Note: If the owner has the entry 00000000-00000000-0000-000000000000 it indicates that the file has either a read-only lock or a multi-writer lock or there is no lock on the file.

    The mode indicates the type of lock that is on the file. The list of mode locks are:

        mode 0 = no lock
        mode 1 = is an exclusive lock (vmx file of a powered on VM, the currently used disk (flat or delta), *vswp, etc.)
        mode 2 = is a read-only lock (e.g. on the ..-flat.vmdk of a running VM with snapshots)
        mode 3 = is a multi-writer lock (e.g. used for MSCS clusters disks or FT VMs).
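    To match the owner field to a host, one approach (an addition to the KB steps above, assuming the classic ESX/ESXi shell or service console is available) is to compare the MAC address against each host's NIC MAC addresses:

    # esxcfg-vmknic -l     # vmkernel interfaces and their MAC addresses
    # esxcfg-nics -l       # physical NIC MAC addresses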


3.With ESX 3.x/4.x and ESXi 5.0, you can use the lsof command on the host holding the lock to identify the process that is holding it:

# lsof |grep /vmfs/volumes/LUN/VM/disk-flat.vmdk

4.Once the process locking the VMDK has been identified, it can be killed using #kill -9 PID.


or

Clearing the file lock by rebooting the ESX host  

As a final troubleshooting step, try restarting the ESX host that holds the lock. To restart the ESX host:

Note: Prior to restarting the entire VMware ESX host, restart the management agents. For more information see Restarting the Management agents on an ESX or ESXi Server (1003490).

   1.Migrate all virtual machines from the host to other hosts.
   2.When the virtual machines are moved, place the host in maintenance mode and reboot.
   Note: If you have only one ESX host or do not have the ability to migrate virtual machines, you must schedule downtime for all affected virtual machines prior to rebooting. When the host has rebooted, start the affected virtual machines.




Free up space in Linux & Reserved block size in Linux


Problem:

root@ServerA# df -h
Filesystem            Size  Used Avail Use% Mounted on

/dev/sda1              99M   13M   85M  14% /boot
none                  2.9G     0  2.9G   0% /dev/shm
/dev/sda1             148G  146G  1.1G 100% /u01
/dev/sdb1              74G   71G  2.2G  98% /u02
/dev/sdc1              74G   71G  1.6G  98% /u03

The above output shows that /dev/sda1 (mounted on /u01) is 100% used even though 1.1G of free space is still shown as available.

Solution:

This happens because of the reserved block percentage on ext2/ext3 file systems in Linux. Lower it with tune2fs:

#tune2fs -m 1 /dev/sda1
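As a rough worked example, 5% of the 148G file system above is about 7.4 GB reserved for root by default; lowering the reserve to 1% (about 1.5 GB) returns roughly 5.9 GB to ordinary users. Verify the change afterwards (a minimal check, reusing the same device name):

# tune2fs -l /dev/sda1 | grep -i 'reserved block count'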



What is reserved Blocks ?

Reserved blocks are disk blocks reserved by the kernel for processes owned by privileged users, to keep the operating system from failing when storage space runs out for critical processes.
For example, imagine the root file system is 14 GB and is 100% full: all non-privileged user processes would no longer be able to write data to it, whereas processes owned by the privileged user (root by default) would still be able to write to the file system. With the help of reserved blocks, the operating system keeps running for hours or sometimes days even though the root file system is 100% full.

The default reserved block percentage is 5% of the total file system size, and it can be increased or decreased as required.

Reserved blocks are supported on ext2 and ext3 file systems.


How to check how many blocks are reserved :

#dumpe2fs -h /dev/VolGroup00/LogVol00  | grep -i block

dumpe2fs 1.39 (29-May-2006)
Block count: 3637248
Reserved block count:  181862
Free blocks: 2709898
First block:  0
Block size:  4096
Reserved GDT blocks:  887
Blocks per group:  32768
Inode blocks per group:   1024
Reserved blocks uid:  0 (user root)
Reserved blocks gid:  0 (group root)
Journal backup:   inode blocks

In the above example, Block count (total blocks) = 3637248 and Reserved block count = 181862, so the reserved block percentage is 181862/3637248*100 ≈ 5% (the default value).

(or)

# tune2fs -l /dev/sdb1


How to change reserved block percentage value :



The value for Reserved Block Percentage can be set at the time of creating the file system as well as after creating the file system.


1) At the time of creating the file system:

# mkfs.ext3 -m 1 /dev/sda2 (replace sda2 with your partition name)


2) To set the reserved block percentage value after creating the file system, use the following command:

# tune2fs -m 3 /dev/VolGroup00/LogVol00

The above command sets the reserved block percentage to 3% of the total block count. Only processes owned by the reserved-blocks user (root by default, as shown by the "Reserved blocks uid" field in the dumpe2fs output above) can use the reserved blocks.
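If an exact block count is preferred over a percentage, tune2fs can also set the reserve by count with -r (a sketch; the count shown is purely illustrative):

# tune2fs -r 100000 /dev/VolGroup00/LogVol00                              # reserve exactly 100000 blocks
# dumpe2fs -h /dev/VolGroup00/LogVol00 | grep -i 'reserved block count'   # verify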



Linux LVM Interview Questions : Part 2

Questions: PART 2

1.What are LVM1 and LVM2?
2.What is the maximum size of a single LV?
3.List of important LVM-related files and directories?
4.What are the steps to create LVM in Linux?
5.How to extend a file system in Linux?
6.How to reduce the file system size in Linux?
7.How to add a new LUN from storage to a Linux server?
8.How to resize the root file system on RHEL 6?
9.How to find out whether a server is configured with LVM RAID?
10.How to check whether a Linux server is configured with PowerPath disks?
11.How to check whether a server is configured with multipath disks?




Answers:

1.What are LVM1 and LVM2?

LVM1 and LVM2 are the two versions of LVM.
LVM2 uses the device-mapper driver included in the 2.6 kernel series.
LVM1 was included in the 2.4 series kernels.

2.What is the maximum size of a single LV?

For 2.4 based kernels, the maximum LV size is 2TB. 
For 32-bit CPUs on 2.6 kernels, the maximum LV size is 16TB.
For 64-bit CPUs on 2.6 kernels, the maximum LV size is 8EB. 

3.List of important LVM-related files and directories?

## Directories
/etc/lvm                - default lvm directory location
/etc/lvm/backup         - where the automatic backups go
/etc/lvm/cache          - persistent filter cache
/etc/lvm/archive        - where automatic archives go after a volume group change
/var/lock/lvm             - lock files to prevent metadata corruption

# Files
/etc/lvm/lvm.conf       - main lvm configuration file
$HOME/.lvm              - lvm history 


4.What are the steps to create LVM in Linux?

Create a physical volume using the pvcreate command.

Assume the disk is local.

#fdisk -l

#fdisk /dev/sda

Press "n" to create a new partition, assign the partition number, and specify the size (or allocate the whole disk to a single partition).

Press "t" to change the partition type to LVM.

Enter "8e" (8e is the hexadecimal partition type code for Linux LVM).

Enter "w" to write the information to disk.

#fdisk -l    (now you will see the newly created partition)

#pvcreate /dev/sda2

Add the physical volume to a volume group with the “vgcreate” command

#vgcreate VG0 /dev/sda2

Create a logical volume from the volume group with the “lvcreate” command.

#lvcreate -L 1G -n LVM1 VG0

Now create a file system on the new logical volume with the “mke2fs” or "mkfs.ext3" command.

#mke2fs -j /dev/VG0/LVM1

or 

#mkfs.ext3 /dev/VG0/LVM1

Mount it as a file system:

#mkdir /test

#mount /dev/VG0/LVM1 /test  
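To make the mount persistent across reboots (a minimal sketch, assuming ext3 and the example names VG0/LVM1 used above), add an /etc/fstab entry:

#echo '/dev/VG0/LVM1  /test  ext3  defaults  1 2' >> /etc/fstab
#mount -a     # confirm the fstab entry mounts without errors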

5.How to extend a file system in Linux?

Check the free space in the VG:

#vgdisplay -v VG1

Now extend the LV and resize the file system:

# lvextend -L+1G /dev/VG1/lvol1

# resize2fs /dev/VG1/lvol1

6.How to reduce the file system size in Linux?

1.First unmount the file system, run "e2fsck -f", and reduce the file system size using "resize2fs"
2.Then reduce the lvol size using "lvreduce"

#resize2fs -f /dev/VG1/lvol1 5G

#lvreduce -L 5G /dev/VG1/lvol1


7.How to add a new LUN from storage to a Linux server?

Step 1: Get the list of HBAs and existing disk details.

#ls /sys/class/fc_host

#fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l

Step 2: Scan the HBA ports (all HBA ports need to be scanned)

#echo "1" > /sys/class/fc_host/host??/issue_lip

# echo "- - -" > /sys/class/scsi_host/host??/scan

Repeat the above steps for all HBA cards.

Step 3: Check for the newly added LUN

# cat /proc/scsi/scsi | egrep -i 'Host:' | wc -l

# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l


Once the new disk is found, run the steps below to add it to the volume group (a hypothetical example follows these commands).

#pvcreate /dev/diskpath

#vgextend /dev/vg1 /dev/diskpath

#vgs or #vgdisplay /dev/vg1
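A hypothetical example of the same steps, assuming the new LUN appears as /dev/sdc:

#pvcreate /dev/sdc                  # initialise the new LUN as a physical volume
#vgextend vg1 /dev/sdc              # add it to the existing volume group
#vgdisplay vg1 | grep -i free       # confirm the additional free PE/space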


8.How to resize the root file system on RHEL 6?

Here is the list of steps to reduce the root file system (lv_root) on a RHEL 6 Linux server:

Boot the system into rescue mode. Do not mount the file systems (select the option to 'Skip' in the rescue mode and start a shell)

Bring the Volume Group online

#lvm vgchange -a y

Run fsck on the FS

#e2fsck -f /dev/vg00/lv_root

Resize the file system to the new size

#resize2fs -f /dev/vg00/lv_root 20G

Reduce the logical volume to the new size

#lvreduce -L20G /dev/vg00/lv_root

Run fsck to make sure the FS is still ok

#e2fsck -f /dev/vg00/lv_root

Optionally mount the file system in the rescue mode

#mkdir -p /mnt/sysimage/root
#mount -t ext4 /dev/mapper/vg00-lv_root /mnt/sysimage/root
#cd /mnt/sysimage/root

Unmount the FS

#cd
#umount /mnt/sysimage/root

Exit rescue mode and boot the system from the hard disk
#exit

Select the reboot option from the rescue mode

9.How to find out whether a server is configured with LVM RAID?

1.Check the software RAID (md) status in /proc/mdstat:

 #cat /proc/mdstat 
 or
 # mdadm --detail /dev/mdx
  or
 # lsraid -a /dev/mdx

2.Check the Volume group disks 

 #vgdisplay -v vg01

 If the PV list shows device names like /dev/md1, /dev/md2, it means software RAID (md) devices are configured and added to the volume group.


10.How to check whether a Linux server is configured with PowerPath disks?

1.Check whether PowerPath is installed on the server:

#rpm -qa |grep -i emc

2.Check the PowerPath status on the server:

#/etc/init.d/PowerPath status

#chkconfig --list PowerPath

# lsmod |grep -i emc

3.Check the Volume group disks 

 #vgdisplay -v vg01

 If the PV list shows device names like /dev/emcpowera, /dev/emcpowerb, it means PowerPath disks are configured and added to the volume group.

4.Check the PowerPath disk status using the command below:

 #powermt display dev=all


11.How to check whether a server is configured with multipath disks?

1.Check the device-mapper devices and tables

# ls -lrt /dev/mapper    # to view the mapper disk paths and lvols

#dmsetup table 

#dmsetup ls 

#dmsetup status

2.Using the multipathd command (daemon)


#echo 'show paths' |multipathd -k

#echo 'show maps' |multipathd -k

3.Check whether the multipath daemon is running

#ps -eaf |grep -i multipathd

4.Check the VG disk paths

#vgs or vgdisplay -v vg01 

If multipath disks are configured and added to the VG, the PVs will show device paths like /dev/mpath0, /dev/mpath1.

5.To check the disk path status, you can also use the interactive multipathd shell:

# multipathd -k

multipathd> show multipaths status

multipathd> show topology

multipathd> show paths
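If the multipath-tools package is installed (an assumption), the non-interactive multipath command gives the same information in one shot:

# multipath -ll      # list multipath devices with their path states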

