ext2/ext3/ext4 features in Linux

EXT2
  • Ext2 does not have a journaling feature.
  • Read/write speed is slightly faster than ext3, since there is no journaling overhead.
  • Requires a manual fsck to recover after an unplanned reboot.
  • No online file system growth by default.
  • Format with mkfs.ext2 or mke2fs.

EXT3
  • The main benefit of ext3 is that it supports journaling.
  • Slightly slower than ext2 because of the journaling overhead.
  • Does not require a manual fsck; the journal is replayed automatically at boot time.
  • Supports online file system growth.
  • Format with mkfs.ext3 or mke2fs -j.


EXT4
  • Supports huge individual file sizes and overall file system sizes.
  • Maximum individual file size ranges from 16 GB up to 16 TB.
  • Overall maximum ext4 file system size is 1 EB (exabyte).
  • A directory can contain a maximum of 64,000 subdirectories (as opposed to 32,000 in ext3).
  • You can also mount an existing ext3 file system as ext4 (see the example below).
  • In ext4 you also have the option of turning the journaling feature "off".
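
For example, a minimal sketch of mounting an existing ext3 file system with the ext4 driver (the device /dev/sdb1 and mount point /data are hypothetical; substitute your own):

# mount -t ext4 /dev/sdb1 /data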


FSCK:

If you want to run fsck on any file system, run the command below (the file system must stay unmounted while fsck runs):

# umount /filesystem ; fsck -y /filesystem ; mount /filesystem ; mount -o remount,rw /filesystem



ext2 to ext3 conversion and revert: How to?

Converting ext2 to ext3:

For example, if you are upgrading /dev/sda2, mounted as /home, from ext2 to ext3, do the following.

# umount /dev/sda2

# tune2fs -j /dev/sda2

# mount -t ext3 /dev/sda2 /home
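
Also update the file system type in /etc/fstab so the conversion persists across reboots. A minimal sketch of the entry, using the device and mount point from this example:

/dev/sda2    /home    ext3    defaults    1 2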


Reverting ext3 to ext2:

# umount /dev/sda2

# tune2fs -O ^has_journal /dev/sda2

# e2fsck -y /dev/sda2

# mount -t ext2 /dev/sda2 /home

# rm -f /home/.journal



Bridge network creation in XEN

Bridging is a technique used for connecting different network segments. Bridged networking connects all the virtual machines to the outside world through virtual network interfaces attached to the bridges created by Xen.

When using bridged networking, Xen creates a network bridge and then connects the actual physical network interface to this bridge.

These are the lines needed for bridged networking in /etc/xen/xend-config.sxp:

(network-script network-bridge)
(vif-script vif-bridge)
# (network-script network-route)
# (vif-script vif-route)
# (network-script network-nat)
# (vif-script vif-nat)


Note: The network-bridge script is in the directory /etc/xen/scripts.

Steps performed when network-bridge runs in Xen:

1. xend executes the /etc/xen/scripts/network-bridge script configured in /etc/xen/xend-config.sxp.

2. This will create a new network bridge called xenbr0.

3. Copy the MAC address and IP address from the physical network interface eth0.

4. Stop the physical network interface eth0.

5. Create a new pair of connected virtual Ethernet interfaces, veth0 and vif0.0.

6. Assign the previously copied MAC address and IP address to the virtual interface veth0.

7. Rename the physical network interface from eth0 to peth0.

8. Rename the virtual network interface veth0 to eth0.

9. Attach peth0 and vif0.0 to the bridge xenbr0.

10. Bring up the bridge xenbr0, and the network interfaces peth0, eth0, and vif0.0.
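
Once the script completes, you can verify the result with brctl (a sketch; the bridge id shown is a placeholder, but with the default setup above xenbr0 should contain peth0 and vif0.0):

# brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.xxxxxxxxxxxx       no              peth0
                                                        vif0.0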



Enable SSH in HP-UX

To install and enable the SSH service on HP-UX, please follow the steps below.

1. swlist -l product | grep -i ssh   (check whether any SSH software is already installed)

2. If it is not installed, download the ".depot" file from www.software.hp.com

3. Install the SSH depot using the command below:

# swinstall -s /pathtodepot/sshdepot.depot


4. Edit /etc/rc.config.d/sshd and set SSHD_START=1


5. Start the SSH service now:

  # /sbin/init.d/sshd start   or   # /sbin/init.d/secsh start
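
To verify that the daemon is running, a quick sketch (the login test assumes a local account):

# ps -ef | grep sshd
# ssh localhost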




SAN - Storage reconfig in Linux

Dynamic SAN fabric reconfiguration

This section provides four methods that you can use to force the Linux operating system to recognize disks that are added to or removed from the fabric.

When you add or remove disks on the fabric, you can use any of the following four ways to force the Linux host to recognize these changes:

  1. Reboot the host
  2. Unload and reload the host adapter driver
  3. Rescan the bus by echoing to the /sys filesystem (Linux 2.6 kernels only)
  4. Manually add and remove SCSI disks by echoing to the /proc or /sys filesystem

1. Reboot the host, or unload and reload the host adapter driver

Since devices are discovered by scanning the SCSI bus, it is typically easiest to rescan the SCSI bus to detect any SAN fabric changes. A bus rescan is automatically triggered by reloading the host adapter driver or by rebooting the system.

Before unloading the host adapter driver or rebooting the host, you must:

    1. Stop all I/O.
    2. Unmount all file systems.
    3. If SDD is being used, unload the SDD driver with the sdd stop command before reloading the host adapter driver. After the host adapter driver is reloaded, reload SDD with the sdd start command.

Reloading the host adapter driver assumes that it is built as a module. Rebooting the system works regardless of whether the host adapter driver is compiled into the kernel or built as a module.


2. Rescan the bus by echoing to the /sys filesystem

For Linux 2.6 kernels only, a rescan can be triggered through the /sys interface without having to unload the host adapter driver or reboot the system. The following command will scan all channels, targets, and LUNs on host H.

echo "- - -" > /sys/class/scsi_host/hostH/scan

3. Manually add and remove SCSI disks

You can use the following commands to manually add and remove SCSI disks.

Note: In the following command examples, H, B, T and L are the host, bus, target, and LUN IDs for the device.

You can unconfigure and remove an unused SCSI disk with the following command:

    echo "scsi remove-single-device H B T L" > /proc/scsi/scsi

    If the driver cannot be unloaded and reloaded, and you know the host, bus, target and LUN IDs for the new devices, you can add them through the /proc/scsi/scsi file using the following command:
    echo "scsi add-single-device H B T L" > /proc/scsi/scsi

For Linux 2.6 kernels, devices can also be added and removed through the /sys filesystem. Use the following command to remove a disk from the kernel’s recognition:

    echo "1" > /sys/class/scsi_host/hostH/device/H:B:T:L/delete

    or, as a possible variant on other 2.6 kernels, you can use the command:

    echo "1" > /sys/class/scsi_host/hostH/device/targetH:B:T/H:B:T:L/delete

To re-register the disk with the kernel, use the command:

    echo "B T L" > /sys/class/scsi_host/hostH/scan
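
For example, a sketch with hypothetical IDs (host 0, bus 0, target 1, LUN 0) that removes a disk and then re-registers it:

    echo "1" > /sys/class/scsi_host/host0/device/0:0:1:0/delete
    echo "0 1 0" > /sys/class/scsi_host/host0/scan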

Network bonding in Linux: How to set it up?


Why do we need bonding?

Linux network bonding is the creation of a single bonded interface by combining two or more Ethernet interfaces. It provides high availability for your network interface and improves performance. Bonding is the same idea as port trunking or teaming.


Bonding allows you to aggregate multiple ports into a single group, effectively combining the bandwidth into a single connection. Bonding also allows you to create multi-gigabit pipes to transport traffic through the highest-traffic areas of your network. For example, you can aggregate three 1-gigabit ports into a single 3-gigabit trunk, which is equivalent to having one interface running at three times the speed.


Steps for bonding in Linux:

Step 1: Create a bond0 configuration file

Create the file ifcfg-bond0 with the IP address, netmask and gateway. Shown below is my test bonding config file.

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
IPADDR=192.168.1.5
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
USERCTL=no
BOOTPROTO=none
ONBOOT=yes

Step 2: Modify the eth0 and eth1 config files

Modify the eth0 and eth1 configurations as shown below. Comment out or remove the IP address, netmask, gateway and hardware address lines from each of these files, since these settings should come only from the ifcfg-bond0 file above. Make sure to add the MASTER and SLAVE settings to these files.

$ vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
# Settings for Bond
MASTER=bond0
SLAVE=yes

$ vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
# Settings for bonding
MASTER=bond0
SLAVE=yes



Step 3: Edit the config file for bonding

Set the parameters for the bond0 bonding kernel module. Select the network bonding mode based on your needs; the modes available are:

    mode=0 (Balance Round Robin)
    mode=1 (Active backup)
    mode=2 (Balance XOR)
    mode=3 (Broadcast)
    mode=4 (802.3ad)
    mode=5 (Balance TLB)
    mode=6 (Balance ALB)

Add the following lines to /etc/modprobe.conf:

# bonding commands
alias bond0 bonding
options bond0 mode=1 miimon=100
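
On newer distributions that no longer use a single /etc/modprobe.conf, the same options can go in a file under /etc/modprobe.d/ instead (a sketch; the file name bonding.conf is an assumption):

# /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=1 miimon=100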

Step 4: Test the configuration

Load the bonding driver module from the command prompt:

$ modprobe bonding

Step 5: Check the bonding status

Restart the network, or restart the computer.

$ service network restart # Or restart computer

5.1 - Check the bonding status:

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.2 (March 23, 2006)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:59
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:63



Look at ifconfig -a and check that your bond0 interface is active. That's it!

To verify that the failover bonding works (see the sketch after this list):

  •     Do an ifdown eth0, then check /proc/net/bonding/bond0 for the "Currently Active Slave".
  •     Start a continuous ping to the bond0 IP address from a different machine and ifdown the active interface; the ping should not break.
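
A minimal sketch of the first check (interface names from the example above; the output line assumes eth1 takes over):

# ifdown eth0
# grep "Currently Active Slave" /proc/net/bonding/bond0
Currently Active Slave: eth1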


Sample output of ifconfig:


bond0     Link encap:Ethernet  HWaddr 00:0C:29:C6:BE:59
 inet addr:192.168.1.20  Bcast:192.168.1.255  Mask:255.255.255.0
 inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
 UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
 RX packets:2804 errors:0 dropped:0 overruns:0 frame:0
 TX packets:1879 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:250825 (244.9 KiB)  TX bytes:244683 (238.9 KiB)
eth0      Link encap:Ethernet  HWaddr 00:0C:29:C6:BE:59
 inet addr:192.168.1.20  Bcast:192.168.1.255  Mask:255.255.255.0
 inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
 UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
 RX packets:2809 errors:0 dropped:0 overruns:0 frame:0
 TX packets:1390 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:251161 (245.2 KiB)  TX bytes:180289 (176.0 KiB)
 Interrupt:11 Base address:0x1400
eth1      Link encap:Ethernet  HWaddr 00:0C:29:C6:BE:59
 inet addr:192.168.1.20  Bcast:192.168.1.255  Mask:255.255.255.0
 inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
 UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
 RX packets:4 errors:0 dropped:0 overruns:0 frame:0
 TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:258 (258.0 b)  TX bytes:66516 (64.9 KiB)
 Interrupt:10 Base address:0x1480

Booting Problem in SOLARIS - Part 3

 Booting Problem in SOLARIS - Part 1 - Click Here
 Booting Problem in SOLARIS - Part 2 - Click Here


4. The file just loaded does not appear to be executable
The boot block on the hard disk is corrupted. Boot the system in single-user mode from the CD-ROM and reinstall the boot block:
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t3d0s0

5. bootblk: can't find the boot program
The boot block cannot find the boot program, ufsboot, in Solaris. Either ufsboot is missing or corrupted. In such cases it can be restored from the CD-ROM after booting from the CD-ROM and mounting the hard disk:
# cp /platform/`uname -i`/ufsboot /mnt/platform/`uname -i`

6. boot: cannot open kernel/unix
The kernel directory, or the unix kernel file in that directory, is not found; it was probably deleted during fsck or by mistake. Copy it from the CD-ROM or restore it from the backup tape:
# cp /platform/`uname -i`/kernel/unix /mnt/platform/`uname -i`/kernel

7. Error reading ELF header
The kernel directory or the unix kernel file in it is corrupted. Copy it from the CD-ROM or restore it from the backup tape:
# cp /platform/`uname -i`/kernel/unix /mnt/platform/`uname -i`/kernel

8. Cannot open /etc/path_to_inst
The system cannot find the /etc/path_to_inst file. It might be missing or corrupted and needs to be rebuilt.

To rebuild this file, boot the system with the -ar option:
ok> boot -ar

Press Enter to accept the default values for the questions asked during booting, and answer yes to rebuild /etc/path_to_inst:

The /etc/path_to_inst on your system does not exist or is empty. Do you want to rebuild this file [n]? y

The system will continue booting after rebuilding the file.


9. Can't stat /dev/rdsk/c0t3d0s0
When booted from the CD-ROM, fsck shows the root partition to be fine, but this error occurs when booting from the root disk. The device name for / is missing from the /dev/dsk directory; to resolve the issue, the /dev and /devices directories have to be restored from the root backup tapes.

Install the latest Linux release without CD/DVD/USB: How to?

Don't have a CD/DVD or USB stick to install a newly released Linux on your laptop/desktop?
No worries, you can do it from within Linux.

1. mkdir /fedora ; cp -rvf /home/saravanan/fedora.iso /fedora
2. mkdir -p /media/iso ; mount -o loop /fedora/fedora.iso /media/iso
3. cd /media/iso/isolinux
4. cp vmlinuz initrd.img /fedora/
5. Edit grub.conf --> /boot/grub/grub.conf
6. Add these lines at the end:

title Install Linux
root (hdX,Y)
kernel /fedora/vmlinuz
initrd /fedora/initrd.img

(Replace hdX,Y with the disk and partition holding /fedora; the kernel and initrd paths are relative to the root of that filesystem, matching the files copied in step 4.)

7. Save and exit, then reboot the system; you will see the new entry in the boot menu, as shown in the screenshots below.

screen1 -> http://cdn.linuxforu.com/wp-content/uploads/temp-uploads/2009/03/1.jpg?d9c344

screen2 -> http://cdn.linuxforu.com/wp-content/uploads/temp-uploads/2009/03/2.jpg?d9c344

screen3 -> http://cdn.linuxforu.com/wp-content/uploads/temp-uploads/2009/03/3.jpg?d9c344

screen4 -> http://cdn.linuxforu.com/wp-content/uploads/temp-uploads/2009/03/4.jpg?d9c344

tune2fs Command in Linux

Do you know that 5% of the space is reserved in your filesystem?

The 5% reservation is kept hidden so that standard users do not count the unavailable (reserved) space as usable; it remains available to root. You can see the full totals with tune2fs (run as root). For example:


# df /tmp

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdd1               404727    369777     18244  96% /tmp


# tune2fs -l /dev/sdd1

tune2fs 1.32 (09-Nov-2002)
Filesystem volume name:
Last mounted on:
Filesystem UUID:          6c114425-117e-4026-90df-4068ac4b7212
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal filetype needs_recovery sparse_super
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              102408
Block count:              417658
Reserved block count:     16706
Free blocks:              34950
Free inodes:              97512
First block:              1
Block size:               1024
Fragment size:            1024
Blocks per group:         8192
Fragments per group:      8192
Inodes per group:         2008
Inode blocks per group:   251
Last mount time:          Tue Dec 9 05:03:43 2003
Last write time:          Tue Dec 9 05:03:43 2003
Mount count:              13
Maximum mount count:      29
Last checked:             Sat Nov 29 05:23:33 2003
Check interval:           15552000 (6 months)
Next check after:         Thu May 27 06:23:33 2004
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal UUID:
Journal inode:            81
Journal device:           0x0000
First orphan inode:       19

Note that tune2fs reports 417658 blocks in total, with 34950 free blocks of which 16706 are reserved. 34950 - 16706 = 18244, which is the amount free reported by df.

Fix for this problem:

# tune2fs -r 0 /dev/file_system_name
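
Alternatively, if you would rather keep a small safety margin than remove the reservation entirely, the -m option sets the reserved percentage. A sketch using the device from the example above:

# tune2fs -m 1 /dev/sdd1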


NOTE: This works on Linux ext2/ext3 filesystems only, as tune2fs is a Linux utility for tuning ext2/ext3 filesystems.

Benefits: a tuned filesystem, and one that reports correct usage stats.

PowerPath Powermt Commands - EMC

There are 10 major commands to check the PowerPath config on HP-UX servers.
Please follow the commands below:

1.  powermt display ====> Display high-level HBA I/O paths
2.  powermt display dev=emcpowera ====> Display a specific LUN
3.  powermt display dev=all ====> Display all attached LUNs
4.  powermt check_registration ====> Display PowerPath registration key / status
5.  powermt display options ====> Display EMC PowerPath options
6.  powermt display hba_mode ====> Display PowerPath HBA mode
7.  powermt display paths ====> Display available I/O paths
8.  powermt display port_mode ====> Display port status
9.  powermt version ====> Display EMC PowerPath version
10. powermt check ====> Check the I/O paths




1. # powermt display ====> Display high-level HBA I/O paths

Example output:

Symmetrix logical device count=212
CLARiiON logical device count=0
Hitachi logical device count=0
Invista logical device count=0
HP xp logical device count=0
Ess logical device count=0
HP HSx logical device count=0
==============================================================================
----- Host Bus Adapters ---------  ------ I/O Paths -----  ------ Stats ------
###  HW Path                       Summary   Total   Dead  IO/Sec Q-IOs Errors
==============================================================================
   3 0/4/0/0/0/1                   optimal     424      0       -     0    848
   5 0/5/0/0/0/1                   optimal     424      0       -     0    848



2. # powermt display dev=emcpowera ====> Display a specific LUN

When there are multiple LUNs connected to a server, you might want to view information about a specific LUN by providing the logical name of the LUN as shown below.



3. # powermt display dev=all ====> Display all attached LUNs

This is the powermt command we run most often; it displays all the logical devices attached to the server.

Pseudo name=disk915
Symmetrix ID=000290103691
Logical device ID=06B8
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
   3 0/4/0/0/0/1.0x5006048c52a862e7.0x40a6000000000000 c14t4d6   FA  8cB   active  alive       0      2
   3 0/4/0/0/0/1.0x5006048c52a862f7.0x40a6000000000000 c15t4d6   FA  8dB   active  alive       0      2
   5 0/5/0/0/0/1.0x5006048c52a862e8.0x40a6000000000000 c16t4d6   FA  9cB   active  alive       0      2
   5 0/5/0/0/0/1.0x5006048c52a862f8.0x40a6000000000000 c17t4d6   FA  9dB   active  alive       0      2

Pseudo name=disk988
Symmetrix ID=000290103691
Logical device ID=074B
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
   5 0/5/0/0/0/1.0x5006048c52a862e8.0x40dc000000000000 c16t11d4  FA  9cB   active  alive       0      2
   3 0/4/0/0/0/1.0x5006048c52a862e7.0x40dc000000000000 c14t11d4  FA  8cB   active  alive       0      2
   3 0/4/0/0/0/1.0x5006048c52a862f7.0x40ce000000000000 c15t9d6   FA  8dB   active  alive       0      2
   5 0/5/0/0/0/1.0x5006048c52a862f8.0x40ce000000000000 c17t9d6   FA  9dB   active  alive       0      2


Details:

    a. Pseudo name=emcpowera – the device name that can be used by the server, for example /dev/emcpowera.
    b. CLARiiON ID=AAA00000000000 [dev-server] – the EMC CLARiiON CX3 serial number and the server name.
    c. Logical device ID=11111111 [LUN 1] – the LUN number, for example LUN 1.
    d. state=alive; policy=CLAROpt; – shows that this particular LUN is valid and uses the CLAROpt policy.
    e. Owner: default=SP B, current=SP B – indicates that the default (and current) owner of this LUN is storage processor SP B.

(These field descriptions come from a CLARiiON example; the output above is from a Symmetrix array, where the corresponding fields are Symmetrix ID and Logical device ID.)


4. # powermt check_registration ====> Display PowerPath registration key / status

If you've lost the PowerPath registration key that you used during the EMC PowerPath installation, you can retrieve it using the following command:

# powermt check_registration
Key AAAA-BBBB-CCCC-DDDD-EEEE-FFFF
  Product: PowerPath
  Capabilities: All


5. # powermt display options ====> Display EMC PowerPath options

Displays the high-level EMC SAN array options.


6. # powermt display hba_mode ====> Display PowerPath HBA mode

This is similar to #1, but also displays whether each HBA is enabled, as shown in the last column of the output.

Example output:

Symmetrix logical device count=212
CLARiiON logical device count=0
Hitachi logical device count=0
Invista logical device count=0
HP xp logical device count=0
Ess logical device count=0
HP HSx logical device count=0
==============================================================================
----- Host Bus Adapters ---------  ------ I/O Paths -----  Stats
###  HW Path                       Summary   Total   Dead  Q-IOs Mode
==============================================================================
   3 0/4/0/0/0/1                   optimal     424      0     0 Enabled
   5 0/5/0/0/0/1                   optimal     424      0     0 Enabled

7. # powermt display paths ====> Display available I/O paths

This displays all available paths for your SAN devices.

8. # powermt display port_mode ====> Display port status

Displays the status of the individual ports on the HBA, i.e. whether each port is enabled or not.

9. # powermt version ====> Display EMC PowerPath version

Identifies the version number of the EMC PowerPath software.


10. # powermt check ====> Check the I/O paths

If you have made changes to the HBAs or I/O paths, run powermt check to take the appropriate action. For example, if you manually removed an I/O path, the check command will detect the dead path and remove it from the EMC path list.

Enable HP-UX agile naming in VxVM: How to?

By default, Veritas Volume Manager uses the HP-UX legacy naming scheme instead of the agile naming mode.

Follow the steps below to change it to the agile naming mode.

1. Display the VxVM disk information and get the current naming scheme:

# vxdisk list

# vxddladm get namingscheme

NAMING_SCHEME       PERSISTENCE         MODE               
===============================================
OS Native           Yes                 Legacy


2. To change this, use the vxddladm command again:


# vxddladm set namingscheme=osn mode=new


The parameters used are namingscheme and mode. The available options for the first are:

    ebn – enclosure-based names.
    osn – operating system native names.

If ebn is used, neither legacy mode nor new mode can be set, since the hardware names provided by the disk array will be used; so use osn as the namingscheme.

The second parameter, mode, defines which naming model will be used in the osn naming scheme.

The following three values can be set:

  1. default
  2. legacy
  3. new


3. Now you can check the changes with the commands below:

# vxdisk list
# vxddladm get namingscheme

NAMING_SCHEME       PERSISTENCE         MODE               
===============================================
OS Native           Yes                 New    

You can set back to the legacy naming scheme using the same procedure, as sketched below.
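
A minimal sketch of the revert, using the same vxddladm syntax and the mode values listed above:

# vxddladm set namingscheme=osn mode=legacy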

Booting problem in SOLARIS - Part 2

Author : GUNA

To View Booting problem in Solaris :part 1 - CLICK HERE

2. Making a boot device alias

If the system cannot boot from the primary disk and another boot disk is needed to access the data, the nvalias command is used.

The nvalias command creates a device alias, assigning an alternate name to a physical disk. The physical address of the target disk is required, which can be obtained with the
show-disks command at the ok> prompt.

ok> nvalias disk7 /iommu@f,e0000000/sbus@f,e0001000/dma@3,81000/esp@3,80000/sd2,0
The newly aliased disk can be set as the boot device, or used for booting by referring to its name:
ok> setenv boot-device disk7
ok> reset
or
ok> boot disk7

3. Timeout waiting for ARP/RARP packet

At the ok> prompt, type printenv and look for these parameters:
boot-device disk
mfg-switch? false
diag-switch? false

If you see "boot-device net", or a true value for either of the other two parameters, change it to the values above.

If you want to boot from the network, make sure your client is properly configured on the boot server and that the network connections and configuration are correct.
               

Booting problem in SOLARIS - Part 1

1. Booting in single-user mode and mounting the root hard disk

The most important step in diagnosing booting problems is booting the system in single-user mode and examining the hard disk for possible errors, then working out the corrective measures.

Single-user mode can be achieved by any of the following methods:

ok> boot -s (from the root disk)
ok> boot net -s (from the network)
ok> boot cdrom -s (from the cdrom)

Rebooting with command: cdrom -s
Configuring the /devices directory
Configuring the /dev directory
INIT: SINGLE USER MODE
#
# fsck /dev/rdsk/c0t3d0s0
# mount /dev/dsk/c0t3d0s0 /mnt

Perform the required operations on the mounted disk, now accessible through /mnt, and unmount the hard disk after you are done:

# umount /mnt
# reboot



To View Booting problem in Solaris : part 2 - CLICK HERE  



Configure HACMP: Step-by-Step Procedure

1. Install the nodes; make sure redundancy is maintained for power supplies, networks and fibre networks. Then install AIX on the nodes.

2. Install all the HACMP filesets except HAview and HATivoli.

3. Install all the RSCT filesets from the AIX base CD. Make sure that the AIX and HACMP patches and the server code are at the latest level (ideally recommended).

4. Check that the fileset bos.clvm is present on both nodes. This is required to make the VGs enhanced-concurrent capable.

5. V.IMP: Reboot both the nodes after installing the HACMP filesets.

6. Configure shared storage on both nodes. In the case of a disk heartbeat, also assign a 1 GB shared storage LUN to both nodes.

7. Create the required VGs on the first node only. The VGs can be either normal VGs or enhanced-concurrent VGs. Assign a particular major number to each VG while creating it, and record the major number information.

To check the major number, use the command:
# ls -lrt /dev | grep <vg_name>

"Mount automatically at system restart" should be set to NO.

8. Vary on the VGs that were just created.

9. V.IMP: Create a log LV on each VG before creating any other LV. Give the logLV a unique name.

Format the logLV (this destroys its contents) with: # logform /dev/loglvname

Repeat this step for all the VGs that were created.

10. Create all the necessary LVs on each VG.

11. Create all the necessary filesystems on each LV created; you can create the mount points as per the customer's requirements.

"Mount automatically at system restart" should be set to NO.

12. Unmount all the filesystems and vary off all the VGs.

13. Run chvg -an for each VG, so that all VGs are set to not mount automatically at system restart.

14. Go to node 2 and run cfgmgr -v to detect the shared volumes.

15. Import all the VGs on node 2:
use smitty importvg -> import with the same major number as assigned on node 1, as sketched below.
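
A command-line equivalent of the importvg step (a sketch; the major number 101, VG name test1vg and disk hdisk2 are hypothetical examples):

# importvg -V 101 -y test1vg hdisk2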

16. Run chvg –an for all VGs on node 2.

17. V.IMP: Identify the boot1, boot2, service and persistent IPs for both nodes and make the entries in /etc/hosts.

Make sure that the /etc/hosts file is the same across both nodes; the entries should be identical and consistent.

18. Assign the boot1 and boot2 IPs to the Ethernet interfaces (en#) on both nodes.

Use smitty chinet -> assign the boot IPs to two interfaces on each node.

19. Here the planning ends. Now we can start with the actual HACMP setup.

20. Define the name for the cluster:

smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure an HACMP Cluster -> Add an HACMP Cluster.
Give the name of the cluster and press Enter.

21. Define the cluster nodes:

smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure an HACMP Node -> Add a Node to an HACMP Cluster.

Define both nodes, one after the other.
22. Discover the HACMP config. This will import, for both nodes, all the node info, boot IPs and service IPs from /etc/hosts.

smitty hacmp -> Extended Configuration -> Discover HACMP-related Information.



23. Add the HACMP networks (ether and diskhb):

smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Networks -> Add a Network to the HACMP Cluster.

Select ether and press Enter. Then select diskhb and press Enter; diskhb is your non-TCP/IP heartbeat.


24. Include the interfaces/devices in the ether network and the diskhb already defined:

smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Communication Interfaces/Devices -> Add Communication Interfaces/Devices.

Include all four boot IPs (two per node) in the ether network already defined.
Then include the heartbeat disk on both nodes in the diskhb network already defined.


25. Add the persistent IPs:

smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Persistent Node IP Labels/Addresses.

Add a persistent IP label for each node.


26. Define the service IP labels for both nodes:

smitty hacmp -> Extended Configuration -> Extended Resource Configuration -> HACMP Extended Resources Configuration -> Configure HACMP Service IP Labels.


27. Add the resource groups:

smitty hacmp -> Extended Configuration -> Extended Resource Configuration -> HACMP Extended Resource Group Configuration.

Continue similarly for all the resource groups.
The node selected first while defining a resource group will be the primary owner of that resource group; the node after that is the secondary node.
Make sure you set the primary node correctly for each resource group.
Also set the fallover/fallback policies as per the requirements of the setup.


28. Set the attributes of the resource groups already defined. Here you actually assign the resources to the resource groups.

smitty hacmp -> Extended Configuration -> Extended Resource Configuration -> HACMP Extended Resource Group Configuration.

Add the service IP label for the owner node, and also the VGs owned by the owner node of each resource group.

Continue similarly for all the resource groups.
29. Synchronize the cluster. This will sync the configuration from one node to the other:

# smitty cl_sync

30. That's it. Now you are ready to start the cluster:

# smitty clstart

You can start the cluster on both nodes together, or start it individually on each node.
31. Wait for the cluster to stabilize. You can check that the cluster is up with the following commands:

a. netstat -i

b. ifconfig -a: look out for the service IP; it will show on each node if the cluster is up.

c. Check whether the VGs under the cluster's RGs are varied on, and whether the filesystems in those VGs are mounted, after the cluster start.

Here test1vg and test2vg are VGs which are varied on when the cluster is started, and /test2 and /test3 are filesystems mounted when the cluster starts. /test2 and /test3 are in test2vg, which is part of the RG owned by this node.

32. Perform all the tests, such as resource takeover, node failure and network failure, and verify the cluster before releasing the system to the customer.

Thanks, and have fun with HACMP!

P.S.: Only one piece of advice: DO YOUR PLANNING THOROUGHLY and DOCUMENT THE CONFIGURATION.

GBL_LOST_MI_TRACE_BUFFERS warnings: How to fix?



Error: Measurement Buffers Lost; see metric GBL_LOST_MI_TRACE_BUFFERS warnings on HP-UX.

Reason: These warnings are frequently expected on high-spec, busy servers. The midaemon process gets its raw information from "event traces" that are sent from the kernel instrumentation (KI) that is an integral part of HP-UX. This instrumentation is written into virtually every system call in the kernel. Whenever a process enters or leaves a system call, one or more event traces are sent "over the fence" to the midaemon process.

Midaemon has two threads: a reader thread and a writer thread. The reader thread accepts these event traces, while the writer thread "massages" them somewhat and then writes the pertinent information out to the midaemon "shared memory database" or SMD, a table allocated in shared memory space.

Now, think of a large funnel: at the top (large) end of the funnel, multiple processors pour in event traces, while at the bottom (small) end a single processor running the midaemon reader thread pulls them out. On a busy, good-sized system, say 16 processors or more, it is possible for this logical funnel to overflow, resulting in the GBL_LOST_MI_TRACE_BUFFERS messages.

How to fix?

What can be done to reduce or eliminate the warnings:

For Glance version C.04.XX.XXX:

1. Contact support and request the C.04.73.115 hotfix for OVPA.

2. Set the -bufsets and -skipbuf parameters in the midaemon startup command appropriately. System administrators could consider these values as appropriate settings:

No. of CPUs    midaemon parameters
8              -bufsets 16 -skipbuf 8  -smdvss 256M
16             -bufsets 24 -skipbuf 12 -smdvss 256M
32             -bufsets 32 -skipbuf 16 -smdvss 512M
64             -bufsets 32 -skipbuf 16 -smdvss 512M

Appropriate values for larger numbers of CPUs, e.g. 128 and 256, are still being investigated. Setting the above parameters requires that the maxdsiz kernel parameter be set to at least 2 GB.

3. Use the undocumented -no_fileio_traces parameter in the midaemon startup command string.

This is best done by editing the /etc/rc.config.d/ovpa file. For example, setting the appropriate specifications for a 64-CPU system with -no_fileio_traces would entail adding the following two lines to that file:

MIPARMS="-p -no_fileio_traces -bufsets 32 -skipbuf 16 -smdvss 512M"
export MIPARMS



For Glance version C.05.XX.XXX:

To activate logical I/O metric collection and display, use the following procedures.
If the Performance Agent (PA) and GlancePlus are both installed, follow these seven steps:

1. Change directory to /etc/rc.config.d.
2. Edit the ovpa file using vi or the editor of your choice.
3. Find the line immediately preceding MWA_START=1.
4. Add the following two statements and save the change:
MIPARMS="-fileio_traces -p"
export MIPARMS
5. Run the commands ovpa stop and midaemon -T. Check the midaemon status with perfstat -p to make sure it has terminated and is ready to use the new configuration.
6. Execute the command ovpa start.
7. Start Glance, go to the "IO by Disk" screen using the d key, and check that Glance is now populating the logl rds and logl wrts (logical I/O) metrics.

If only GlancePlus is installed on the system, i.e. the Performance Agent is not installed, replace the above steps with the following:

1. Stop all running instances of glance and midaemon. Use ps -ef | grep midaemon to ensure that there is no active midaemon process.
2. Start a new midaemon process using the command midaemon -p -fileio_traces.
3. Execute step 7 above.
 


vx_nospace error while extending a FS: How to resolve?

Error : vxfs fsadm: V-3-23643: Retry the operation after freeing up some space


You may face this issue while extending a file system on HP-UX.

Extending the logical volume completes as normal:

root:test1# lvextend -L 2048 /dev/vg_test/lvol02
Logical volume "/dev/vg_test/lvol02" has been successfully extended.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test_1.conf
root:test1#

Error while extending the file system to the new size:

root:test1# fsadm -b 2048M /ofa/test
fsadm: /etc/default/fs is used for determining the file system type
vxfs fsadm: V-3-23585: /dev/vg_test/rlvol02 is currently 1048576 sectors - size will be increased

vxfs: msgcnt 610590 mesg 001: V-2-1: vx_nospace - /dev/vg_test/lvol02 file system full (256 block extent)vxfs fsadm: V-3-20340: attempt to resize /dev/vg_test/rlvol02 failed with errno 28
vxfs fsadm: V-3-23643: Retry the operation after freeing up some space
root:test1#

Reason: The issue is due to a fragmented file system leaving no usable free space at the end of the file system; fsadm believes there is no room to extend the FS and fails with the error.

Solution: To resolve this issue we need to defragment the file system using fsadm. Please follow the steps below to address the issue.

# fstyp -v /dev/vg_test/lvol02 (check the FS type)

1. # sync (flush out any unwritten data)

2. # fsadm -E -D /ofa/test (report extent and directory fragmentation, to identify whether fragmentation is causing the issue)

3. # fsadm -e -d /ofa/test (perform the defragmentation)

4. # fsadm -F vxfs -b 2048M /ofa/test (now the file system should extend as normal)

5. # bdf /ofa/test (verify the file system size to confirm it is extended)


If your FS is 100% utilized and there is no free space at all, unmount the FS and extend it offline:

# umount /FS


Then run fsadm -b as normal.