How to get CPU / processor / core info in Linux?

To get processor info, run one of the commands below in Linux:

1. #cat /proc/cpuinfo

or

2. #dmidecode

To get core info in Linux, use the commands below. If a processor is multi-core (dual core, quad core, etc.), /proc/cpuinfo shows a "core id" value for each logical CPU. A single-core processor has no "core id" entry. So from the presence or absence of "core id" we can tell whether we are looking at cores within a package or at separate physical processors.

For example, here I took two servers:

server1 - has 4 physical single-core processors

server2 - has 2 physical dual-core processors

With 4 physical processors and no cores (single core), we get output like below:


root:server1#  egrep "^processor|^cpu cores|^core id" /proc/cpuinfo
processor       : 0
processor       : 1
processor       : 2
processor       : 3
root:server1#

To check whether cores are present:

root:server1# grep core\ id /proc/cpuinfo |uniq -d |wc -l
0

To get the total number of physical processors:

root:server1# grep core\ id /proc/cpuinfo | grep -c \ 0$ | grep ^0$ >> /dev/null && grep -c processor /proc/cpuinfo || grep core\ id /proc/cpuinfo | grep -c \ 0$
4


With two dual-core processors (2 physical processors), we get output like below:


root:server2# egrep "^processor|^cpu cores|^core id" /proc/cpuinfo
processor       : 0
core id         : 0  ----------------- processor 1
cpu cores       : 2
processor       : 1
core id         : 0  ----------------- processor 1
cpu cores       : 2

processor       : 2
core id         : 1  ----------------- processor 2
cpu cores       : 2
processor       : 3
core id         : 1  ----------------- processor 2
cpu cores       : 2

root:server2#

To get core info, run the command below:

root:server2# grep core\ id /proc/cpuinfo | uniq -d | wc -l     # gives the core count; prints 0 if there are no "core id" entries (single core)
2


To find how many physical processors there are in the server, run the command below:

root:server2# grep core\ id /proc/cpuinfo | grep -c \ 0$ | grep ^0$ >> /dev/null && grep -c processor /proc/cpuinfo || grep core\ id /proc/cpuinfo | grep -c \ 0$
2
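
Note: if your /proc/cpuinfo also exposes a "physical id" field (most recent kernels do, though the fields vary by architecture and are not shown in the outputs above), the same information can be pulled out with a simpler sketch:

# grep -c '^processor' /proc/cpuinfo                                     # logical CPUs
# grep '^physical id' /proc/cpuinfo | sort -u | wc -l                    # physical processors (sockets)
# awk -F: '/^physical id/{p=$2} /^core id/{print p":"$2}' /proc/cpuinfo | sort -u | wc -l    # total distinct cores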


====================================

How to enable quota in HP-UX?

Note: Thanks to Shameer for sharing this tip.


Enable the HP-UX default quota by following these steps:


For each file system for which quotas are to be enabled, perform the following tasks:
           1.   Mount the file system.
           2.   Add quota to the existing options list in /etc/fstab.  For
                example, change the string default for the root (/) entry to
                default,quota.  Once this is done, quotas will automatically
                be enabled for all relevant file systems on system reboot.
           3.   Create the quotas file in the mount directory of the file
                system.  For example, for the /mnt file system, run the
                command
                       cpset /dev/null /mnt/quotas 600 root bin
           4.   Establish one or more prototype user quotas using the
                edquota command (see edquota(1M)).
                If you want a number of users on your system to have the
                same limits, use edquota to set those quotas for a prototype
                user; then use the edquota -p command to replicate those
                limits for that group of users.
           5.   Turn on the quotas on the file system using quotaon.  For
                example, run the command
                       /usr/sbin/quotaon /mnt
           6.   Run quotacheck (see quotacheck(1M)) on the file system to
                record the current usage statistics.
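
As a quick recap, a hedged command sequence for the /mnt example above might look like this (user1, user2 and user3 are hypothetical user names, and quota must already be present in the /etc/fstab options for /mnt):

# cpset /dev/null /mnt/quotas 600 root bin      # step 3: create the quotas file
# edquota user1                                 # step 4: set limits for a prototype user
# edquota -p user1 user2 user3                  # step 4: copy those limits to other users
# /usr/sbin/quotaon /mnt                        # step 5: turn quotas on
# quotacheck /mnt                               # step 6: record current usage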

    Adding a new user

      To add a new user to the quota system:
           1.   Use edquota to copy the quotas of an existing user.
           2.   Run quotacheck.

    Adding a new file system to an established system

      Repeat steps 1 through 5 above under "Initial Setup" for the new file
      system.

======================================================================================


We can enable the HP-UX default quota with the following steps:

1) Mount the corresponding file system with the quota option.

2) Activate the quota.
3) Specify the users and their quota limits for the file system.

Here is an example of enabling quota for the user id user1.

STEP 1: Mount the file system with the quota option.

# mount |grep quota
/quotatest on /dev/vg00/lv_test quota,delaylog on Wed Oct 19 16:13:39 2011
#
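
For reference, a hypothetical /etc/fstab entry that would make this quota mount persistent (the backup-frequency and pass-number fields are assumed):

/dev/vg00/lv_test /quotatest vxfs delaylog,quota 0 2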

STEP 2: Edit the quota for the user (in this example, user1).

serverA {root}# edquota user1
"/var/tmp/EdP.a18825" 1 line, 71 characters
fs /quotatest blocks (soft = 1024, hard = 2048) inodes (soft = 0, hard = 0)
~
"/var/tmp/EdP.a18825" 1 line, 76 characters

STEP 3: Check the quota status.

# quota -v
Disk quotas for user1 (uid 44312):
Filesystem     usage  quota  limit    timeleft  files  quota  limit    timeleft
/quotatest        7   1024   2048                  3      0      0


STEP 4: Create test files to increase usage for the user "user1" in that file system. A warning message is displayed when the soft limit is crossed.

#cd /quotatest/user1

# ls -lrt
total 14
-rw-r-----   1 user1     osg           6000 Oct 19 16:22 testfile
-rw-r-----   1 user1     osg            500 Oct 19 16:23 testfile3

# prealloc testfile4 50000000
msgcnt 6 vxfs: mesg 044: vx_bsdquotaupdate - warning: /quotatest file system user 44312 disk quota exceeded
prealloc: Disk quota exceeded

# quota -v
Disk quotas for user1 (uid 44312):
Filesystem     usage  quota  limit    timeleft  files  quota  limit    timeleft
/quotatest      937   1024   2048                  6      0      0


STEP 5: Create more test files so the usage for "user1" crosses the threshold. When the hard limit is reached, an error is returned stating that the quota is exceeded.

# prealloc testfile7 450000
msgcnt 10 vxfs: mesg 044: vx_bsdquotaupdate - warning: /quotatest file system user 44312 disk quota exceeded

# ls -lrt
total 2752
-rw-r-----   1 user1     osg           6000 Oct 19 16:22 testfile
-rw-r-----   1 user1     osg            500 Oct 19 16:23 testfile3
-rw-r-----   1 user1     osg          50000 Oct 19 16:34 testfile4
-rw-r-----   1 user1     osg         450000 Oct 19 16:36 testfile5
-rw-r-----   1 user1     osg         450000 Oct 19 16:37 testfile6
-rw-r-----   1 user1     osg         450000 Oct 19 16:37 testfile7

# quota -v
Disk quotas for user1 (uid 44312):
Filesystem     usage  quota  limit    timeleft  files  quota  limit    timeleft
/quotatest     1377   1024   2048    7.0 days      7      0      0

======================================================================================

Dump device is too small? (AIX)

 - Thanks to Arun.

1) # /usr/lib/ras/dumpcheck -p
You can also find this output with # errpt -a.
The output is as below:
The largest dump device is too small.
Largest dump device
dumplv01
Largest dump device size in kb
3538944
Current estimated dump size in kb
3561676

Synchronizing

# lsvg rootvg

VOLUME GROUP: rootvg VG IDENTIFIER: 00cdd76d00004c00000001079dbde229
VG STATE: active PP SIZE: 128 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 2184 (279552 megabytes)
MAX LVs: 256 FREE PPs: 1995 (255360 megabytes)
LVs: 12 USED PPs: 189 (24192 megabytes)
OPEN LVs: 11 QUORUM: 1
TOTAL PVs: 4 VG DESCRIPTORS: 4
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 4 AUTO ON: yes
MAX PPs per PV: 1016 MAX PVs: 32
LTG size: 128 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable

FREE PPs shows how much space is free on rootvg (here 1995 PPs, about 250 GB) and PP SIZE is 128 MB (i.e. you can allocate LV extensions in 128 MB chunks).

Now show your dump devices :
# sysdumpdev -l

primary /dev/lg_dumplv
secondary /dev/lg_dumplv2
copy directory /var/adm/ras
forced copy flag TRUE
always allow dump TRUE
dump compression OFF

# lslv lg_dumplv
LOGICAL VOLUME: lg_dumplv VOLUME GROUP: rootvg
LV IDENTIFIER: 00cdd76d00004c00000001079dbde229.10 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: sysdump WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 128 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 8 PPs: 8
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 32
MOUNT POINT: N/A LABEL: None

8 PPs * 128 MB * 1024 = 1048576 KB, which matches the current size.

Calculate the new dump size from the high-water mark:

1049600 / 1024 / 128 = 8.0078125, so allocate one more PP just to keep it happy (a mild example; I have had to add 12 PPs on systems before).
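
The same round-up can be scripted; this is only a sketch using the figures above (128 MB PP size, 1049600 KB high-water mark, 8 current PPs):

# PP_KB=$((128 * 1024))                           # one PP expressed in KB
# EST_KB=1049600                                  # high-water mark in KB
# CUR_PP=8                                        # current PPs of the dump LV (from lslv)
# NEED_PP=$(( (EST_KB + PP_KB - 1) / PP_KB ))     # round up to whole PPs
# echo "need $NEED_PP PPs, add $((NEED_PP - CUR_PP))"
need 9 PPs, add 1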

Extend the dump device:

# extendlv lg_dumplv 1

# lslv lg_dumplv


LOGICAL VOLUME: lg_dumplv VOLUME GROUP: rootvg
LV IDENTIFIER: 00cdd76d00004c00000001079dbde229.10 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: sysdump WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 128 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 9 PPs: 9
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 32
MOUNT POINT: N/A LABEL:

New size!

Now clear the error
# errclear 0

LP configuration save/restore: How to?

The SAM utility provides a facility to save and restore the entire lp configuration.  It is saved in /var/sam/lp. 

To do this from the command line, use the commands below.

Assuming that you want to copy the lp configuration from server1 to server2:

1. server1# /usr/sam/lbin/lpmgr -S

2. server1# rcp -r /var/sam/lp server2:/var/sam/lp

3. server2# /usr/sam/lbin/lpmgr -R

What is APA? How to configure APA using the nwmgr utility?

By Guna :

What is APA ?

HP Auto Port Aggregation (APA) is a software product that creates link aggregates, often called
trunks, which provide a logical grouping of two or more physical ports into a single fat pipe.


HP APA provides the following:

• Automatic link failure detection and recovery
• Support for load balancing of network traffic across all of the links in the aggregation.
• Support for the creation of failover groups, providing a failover capability for links. In the
event of a link failure, LAN Monitor automatically migrates traffic to a standby link.
• Support for the TCP Segmentation Offload (Large Send) feature, if an aggregate is created
with all Ethernet cards capable of TCP Segmentation Offload (TSO).
• Support for virtual LANs (VLANs) over APA link aggregates and failover groups.
• Support for 64-bit MIB (RFC 2863) statistics, if all the interfaces within a link aggregate or
failover group support 64-bit statistics.
• Support for IPv6 addresses on a link aggregate or failover group.

Verify that APA is installed on your system:


1. Verify that the product was installed by issuing the following command:
# swlist -l product | grep -i HP-APA

Output similar to the following displays:
HP-APA-FMT B.11.31.20 HP Auto-Port Aggregation APA formatter product.
HP-APA-KRN B.11.31.20 HP Auto-Port Aggregation kernel products.
HP-APA-LM B.11.31.20 HP Auto-Port Aggregation LM commands.
HP-APA-NETMOD B.11.31.20 HP Auto-Port Aggregation nwmgr/NCweb libraries.
HP-APA-RUN B.11.31.20 HP Auto-Port Aggregation APA command products.


2. Verify that the software is configured in the kernel by issuing the following command:
# what /stand/vmunix | egrep -i hp_apa
Output similar to the following displays:
$Revision: hp_apa: HP Auto-Port Aggregation (APA): B.11.31.20 Aug 20 2008 11:30
If nothing is displayed, rebuild the kernel.

Configure APA using nwmgr :

Step 1 : Check first if any APA is already configured

# nwmgr -S apa

Note: If LAN_MONITOR is enabled, you need to remove it first (it is only required when you plan to have failover capability over APA).

Delete failover group first
# nwmgr -d -S apa -A links=all -c lan900
# nwmgr -s -S apa -A all --saved --from cu


Step 2 : Check the status of the LAN interfaces and unplumb them.

#ifconfig lan3
#ifconfig lan3 unplumb
#ifconfig lan4
#ifconfig lan4 unplumb
#netstat -in

Step 3 : Create Link Aggregate.

#nwmgr -a -S apa -c lan900 -A links=3,4 -A mode=MANUAL -A lb=LB_MAC
Save configuration.

# nwmgr -s -S apa -A all --saved --from cu
Check status of newly created link aggregate.
#nwmgr -S apa -I 900 -A all

Step 4 : Configure an IP address on the link aggregate (lan900) and update the netconf file.

#vi /etc/rc.config.d/netconf

Then add the IP address and save the file.
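
For example, the netconf entries might look like the following (the index [0], the IP address and the netmask are illustrative only; use the next free index and your own values):

INTERFACE_NAME[0]="lan900"
IP_ADDRESS[0]="192.168.1.10"
SUBNET_MASK[0]="255.255.255.0"
INTERFACE_STATE[0]="up"
DHCP_ENABLE[0]=0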

Step 5 : Restart APA.

#/sbin/init.d/hpapa stop
#/sbin/init.d/hpapa start

Step 6 : Check the IP configuration.

#netstat -in
#exit



Configure link aggregates using APA (auto port aggregation) :Quick Guide

There are 2 Config files for APA:

/etc/rc.config.d/hp_apaconf
/etc/rc.config.d/hp_apaportconf


and one more for Network config 

/etc/rc.config.d/netconf


Note : We can configure the APA using NWMGR command utility also.

Load balancing policy


1.LB_MAC
2.LB_IP
3.LB_PORT

Protocol Types: 

 
Note: ROUTER and Switches should support APA.

1. FEC_AUTO - Cisco's proprietary Fast EtherChannel (FEC/PAgP) technology. This is NOT standard on all Cisco switches.
2. LACP_AUTO - IEEE 802.3ad Link Aggregation Control Protocol (LACP)
3. MANUAL - manually configured port trunks (default)


Steps to configure APA

Step 1. Choose the network cards
Step 2. Choose the protocol
Step 3. Edit the config files according to the protocol
Step 4. Start APA
Step 5. Assign an IP address
Step 6. Verify the status



STEP 1: Which LANs can we use for APA?

Link Speeds and Duplex settings should be the same.

#ioscan -fnkC lan       ## to determine available lans

Each LAN used in the aggregate must be disabled before starting APA.

# ifconfig lan(n) down
# ifconfig lan(n) unplumb


STEP 2: If we plan to aggregate lan1 and lan2, edit the config files as below.

#vi /etc/rc.config.d/hp_apaconf

HP_APA_INTERFACE_NAME[0]=lan900
HP_APA_LOAD_BALANCE_MODE[0]=LB_MAC
HP_APA_MANUAL_LA[0]="1,2"      # lans 1 and 2


Then Edit the hp_apaportconf file

#vi /etc/rc.config.d/hp_apaportconf

HP_APAPORT_INTERFACE_NAME[0]=lan1
HP_APAPORT_CONFIG_MODE[0]=MANUAL
HP_APAPORT_INTERFACE_NAME[1]=lan2
HP_APAPORT_CONFIG_MODE[1]=MANUAL

STEP 3: Modify the config files as per the protocol you have selected.

Mode 1: FEC_AUTO port configuration mode

If you chose FEC_AUTO port configuration mode, modify the conf files as below:

#vi /etc/rc.config.d/hp_apaconf

HP_APA_INTERFACE_NAME[0]=lan900
HP_APA_LOAD_BALANCE_MODE[0]=LB_MAC
HP_APA_GROUP_CAPABILITY[0]=900      # any integer value matching the physical ports in hp_apaportconf


#vi /etc/rc.config.d/hp_apaportconf

HP_APAPORT_INTERFACE_NAME[0]=lan1
HP_APAPORT_GROUP_CAPABILITY[0]=900      # must be the same value as in hp_apaconf
HP_APAPORT_CONFIG_MODE[0]=FEC_AUTO
HP_APAPORT_INTERFACE_NAME[1]=lan2
HP_APAPORT_GROUP_CAPABILITY[1]=900
HP_APAPORT_CONFIG_MODE[1]=FEC_AUTO


Mode 2: LACP_AUTO port configuration mode

If you chose LACP_AUTO port configuration mode, modify the conf files as below:

#vi /etc/rc.config.d/hp_apaconf

HP_APA_INTERFACE_NAME[0]=lan900
HP_APA_LOAD_BALANCE_MODE[0]=LB_MAC
HP_APA_KEY[0]=900      # an integer value matching the physical ports in hp_apaportconf


#vi /etc/rc.config.d/hp_apaportconf

HP_APAPORT_INTERFACE_NAME[0]=lan1
HP_APAPORT_KEY[0]=900      # must be the same value as HP_APA_KEY in hp_apaconf
HP_APAPORT_CONFIG_MODE[0]=LACP_AUTO
HP_APAPORT_INTERFACE_NAME[1]=lan2
HP_APAPORT_KEY[1]=900
HP_APAPORT_CONFIG_MODE[1]=LACP_AUTO

STEP 4: Stop and start APA to apply the new configuration (APA does not require a reboot to take effect).

/sbin/init.d/hpapa stop
/sbin/init.d/hpapa start

STEP 5: Assign an IP address to lan900.

# ifconfig lan900 ipaddress netmask netmaskaddress
# ifconfig lan900                 # to check
# vi /etc/rc.config.d/netconf          ## to make the IP address permanent
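
For instance, with illustrative values only:

# ifconfig lan900 192.168.1.10 netmask 255.255.255.0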

STEP 6: Verify the status of the link aggregate.

# lanadmin -x -v 900           ## will show the number of ports, state, mode
# lanscan -v                              ## verify which link aggregates have been configured.
# lanadmin -x -p 2 900      ## verify the status of a particular port.
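
On HP-UX 11i v3 the same checks can also be done with nwmgr (the commands below mirror those used in the nwmgr section above):

# nwmgr -S apa                     ## list APA link aggregates and their ports
# nwmgr -S apa -I 900 -A all       ## show the attributes of lan900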

Cluster and package status in an MC/ServiceGuard cluster

The tables below show the required cluster and package state for each type of cluster or package configuration change. Hope this helps system admins.


Task / change in cluster configuration : required cluster state

Add a new node : All cluster nodes must be running.
Delete an existing node : A node can be deleted even if it is down or unreachable.
Change maximum configured packages : Cluster must not be running.
Change timing parameters : Cluster must not be running.
Change cluster lock configuration : Cluster must not be running.
Change serial heartbeat configuration : Cluster must not be running.
Change IP address for heartbeats : Cluster must not be running.
Change addresses of monitored subnets : Cluster must not be running.




Package modification : required package state

Add a new package : Other packages can be running.
Remove a package : Package must be halted. Cluster can be running.
Add a new node : Package(s) may be running.
Remove a node : Package(s) may be running on different nodes.
Add/remove a service process : Package must be halted.
Add/remove a subnet : Package must be halted. Cluster may need halting if the subnet is new to the cluster.
Add/remove an EMS resource : Package must be halted. Cluster may need halting if the EMS resource is new to the cluster.
Changes to the run/halt script contents : Package should be halted to avoid timing problems while running the script.
Script timeouts : Package may be running.
Service timeouts : Package must be halted.
Failfast parameters : Package must be halted.
Auto_Run : Package may be running.
Local LAN failover : Package may be running.
Change node adoption order : Package may be running.

XEN - Virtualization

Hi all, here I have shared a Xen Linux virtualization reference guide.

Which disk did the server boot from?

To find the disk from which the server booted, use the command below:

#ll  /dev/disk | grep $(echo "bootdev/x"|adb /stand/vmunix /dev/kmem | awk '/0x/ {print substr($1,5)}')

Example:
root:server1# ll /dev/disk | grep $(echo "bootdev/x"|adb /stand/vmunix /dev/kmem | awk '/0x/ {print substr($1,5)}')
brw-r-----   1 bin        sys          1 0x000003 Jul 13  2010 disk3_p2
root:server1#

HP SUM - Smart update manager - Guide

HP SUM is a technology, included in many HP products, for installing and updating firmware and software components on HP ProLiant and HP Integrity servers, enclosures, and options.

For more details about HP SUM, please go through the document below.

HP SUM Guide.pdf



 

Useful commands in HP UX


1.To get hardware details and server info


#/opt/ignite/bin/print_manifest

#cat /var/opt/ignite/local/manifest/manifest.info

2.Swap info in HP UX

#swapinfo (displayed in KB)
#swapinfo -m (displayed in MB)
#swapinfo -tm (totals, in MB)

3.Kernel Bit check in HP UX


#getconf KERNEL_BITS ( version 11)
#/opt/ignite/bin/print_manifest |grep -i 'os mode'


Note: to determine whether the hardware supports 64-bit:
#getconf HW_CPU_SUPP_BITS
#/opt/ignite/bin/print_manifest |grep -i 'hw capability'

4.To check memory details

#dmesg | grep -i physical
#/usr/sam/lbin/getmem
#/opt/ignite/bin/print_manifest

5.Network tracing and logging (nettl)

#nettl -start
#nettl -status all

Xterm in HP UX


The xterm terminal is powerful. It is available on almost all variants of Unix.


Explanation of switches available for xterm:
======================================================================
-T "Title name" : Used for displaying title
-bg : For background color
-fg : For foreground color
-fn : For font name (it can be name as well as size also)
-e : This x-application will open in xterm window
-geometry 180x60 : To define xterm's window size and location (optional)
-display : To send the xterm's display on "Hostname" (i.e. for remote display)

How to open Xterm on local system:
======================================================================
xterm -T "This xterm is from: `whoami`@`hostname`" -bg black -fg white -fn fixed -geometry 180x60
xterm -T "This xterm is from: `whoami`@`hostname`" -bg black -fg white -fn "*-fixed-*-*-*-20-*" -geometry 180x60


How to open Xterm on remote system:
======================================================================
xterm -T "This xterm is from: `whoami`@`hostname`" -bg black -fg white -fn fixed -display 10.10.10.230:6.0
xterm -T "This xterm is from: `whoami`@`hostname`" -bg black -fg white -fn "*-fixed-*-*-*-20-*" -display 10.10.10.230:6.0
xterm -T "This xterm is from: `whoami`@`hostname`" -bg black -fg white -fn fixed -display 10.10.10.230:6.0 -e xclock
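
Note: for the remote display examples to work, the X server that owns the display must accept connections from this host. A hedged example, run on the machine that owns the display (10.10.10.230 here):

# xhost +<hostname_of_this_server>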

Multipath in HP UX

Multipath:

A simple example would be a SCSI disk connected to two SCSI controllers on the same computer or a disk connected to two Fibre Channel ports. Should one controller, port or switch fail, the operating system can route I/O through the remaining controller transparently to the application, with no changes visible to the applications, other than perhaps incremental latency.

Features of Multipath:

    * Dynamic load balancing
    * Traffic shaping
    * Automatic path management
    * Dynamic reconfiguration

How to find multipath information on HP-UX?


1. PV links (LVM alternate links)

   # strings /etc/lvmtab
   # vgdisplay -v vgname

2. Native multipathing (11.31, agile device names)

  # ioscan -m dsf <device file>

    Example: ioscan -m dsf /dev/rdisk/disk586
              Persistent DSF      Legacy DSF(s)
          =============================================
              /dev/rdisk/disk586  /dev/rdsk/c15t0d0
                                  /dev/rdsk/c16t0d0


  # scsimgr lun_map -D <device file> (gives the number of lunpaths to that LUN)

  # ioscan -kfnNC disk  (will show the persistent device files in 11.31)

  # evainfo (EVA) and xpinfo (XP) - these commands show the multipath details for LUNs on the respective arrays.

HP UX Operating system OEs

HP-UX 11i v3 Base OE (BOE)

    Delivers the full HP-UX 11i operating system plus file system and partitioning software and applications for Web serving, system management and security. BOE includes all the software formerly in FOE & TCOE (see below), plus software formerly sold stand-alone (e.g. Auto Port Aggregator).

HP-UX 11i v3 Virtualization Server OE (VSE-OE)


    Delivers everything in BOE plus GlancePlus performance analysis and software mirroring, and all Virtual Server Environment software which includes virtual partitions, virtual machines, workload management, capacity advisor and applications. VSE-OE includes all the software formerly in EOE (see below), plus additional virtualization software.

HP-UX 11i v3 High Availability OE (HA-OE)

    Delivers everything in BOE plus HP Serviceguard clustering software for system failover and tools to manage clusters, as well as GlancePlus performance analysis and software mirroring applications.

HP-UX 11i v3 Data Center OE (DC-OE)

    Delivers everything in one package, combining the HP-UX 11i operating system with virtualization and high availability. Everything in the HA-OE and VSE-OE is in the DC-OE. Solutions for wide-area disaster recovery and the compiler bundle are sold separately.

HP-UX 11i v2 (11.23)

    HP's public roadmap indicates v2 availability through December, 2010, while recommending upgrading to v3. The following lists the currently available HP-UX 11i v2 OEs:

HP-UX 11i v2 Foundation OE (FOE)

    Designed for Web servers, content servers and front-end servers, this OE includes applications such as HP-UX Web Server Suite, Java, and Mozilla Application Suite. This OE is bundled as HP-UX 11i FOE.

HP-UX 11i v2 Enterprise OE (EOE)


    Designed for database application servers and logic servers, this OE contains the HP-UX 11i v2 Foundation OE bundles and additional applications such as GlancePlus Pak to enable an enterprise-level server. This OE is bundled as HP-UX 11i EOE.

HP-UX 11i v2 Mission Critical OE (MCOE)

    Designed for the large, powerful back-end application servers and database servers that access customer files and handle transaction processing, this OE contains the Enterprise OE bundles, plus applications such as MC/ServiceGuard and Workload Manager to enable a mission-critical server. This OE is bundled as HP-UX 11i MCOE.


HP-UX 11i v2 Minimal Technical OE (MTOE)

    Designed for workstations running HP-UX 11i v2, this OE includes the Mozilla Application Suite, Perl, VxVM, and Judy applications, plus the OpenGL Graphics Developer's Kit. This OE is bundled as HP-UX 11i MTOE.

HP-UX 11i v2 Technical Computing OE (TCOE)


    Designed for both compute-intensive workstation and server applications, this OE contains the MTOE bundles plus extensive graphics applications and Math Libraries. This OE is bundled as HP-UX 11i-TCOE.

HP-UX 11i v1 (11.11)

    According to HP's roadmap, v1 was sold through December 2009, with continued support at least until December 2013.