Useful AIX, Solaris, HP-UX Commands
/**************************************************************************/
/* Document     : UNIX command examples, mainly based on Solaris, AIX,    */
/*                HP-UX, and of course also Linux.                        */
/* Doc. Version : 115                                                     */
/* File         : unix.txt                                                */
/* Purpose      : some examples for the Oracle, DB2, SQLServer DBA        */
/* Date         : 07-07-2009                                              */
/* Compiled by  : Albert van der Sel                                      */
/* Best use     : Use find/search in your editor to find a string,        */
/*                command, or any identifier                              */
/**************************************************************************/
############################################
SECTION 1. COMMANDS TO RETRIEVE SYSTEM INFO:
############################################
==========================
1. HOW TO GET SYSTEM INFO:
==========================
Memory:
-------
AIX: bootinfo -r
lsattr -E -l mem0
lsattr -E -l sys0 -a realmem
svmon -G
vmstat -v
vmo -L
lparstat -i
or use a tool such as "topas" or "nmon"
Swap:
-----
AIX: lsps -a
lsps -s
pstat -s
HP: swapinfo -a
Solaris: swap -l
prtswap -l
Linux: swapon -s
cat /proc/swaps
cat /proc/meminfo
cpu:
----
Solaris: psrinfo -v
prtconf
psrset -p
prtdiag
OS version:
-----------
HP: uname -a
Solaris: uname -a
cat /etc/release    (or another way to view that file, like "more /etc/release")
Tru64: /usr/sbin/sizer -v
AIX Example:
# oslevel -s
5300-08-03-0831
# oslevel -qs
Known Service Packs
-------------------
5300-08-03-0831
5300-08-02-0822
5300-08-01-0819
5300-08-00-0000
5300-07-05-0831
5300-07-04-0818
5300-07-03-0811
5300-07-02-0806
5300-07-01-0748
5300-06-08-0831
5300-06-07-0818
5300-06-06-0811
5300-06-05-0806
5300-06-04-0748
5300-06-03-0732
5300-06-02-0727
5300-06-01-0722
5300-05-CSP-0000
5300-05-06-0000
5300-05-05-0000
5300-05-04-0000
5300-05-03-0000
5300-05-02-0000
5300-05-01-0000
5300-04-CSP-0000
5300-04-03-0000
5300-04-02-0000
5300-04-01-0000
5300-03-CSP-0000
AIX firmware:
lsmcode -c             display the system firmware level and service processor
lsmcode -r -d scraid0  display the adapter microcode levels for a RAID adapter scraid0
lsmcode -A             display the microcode level for all supported devices
prtconf                shows many settings including memory, firmware, serial# etc.
# uname -L
1 lpar01
For HP-UX:
Use commands like "parstatus" or "getconf PARTITION_IDENT" to get npar
information.
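For example (the flags shown here are an assumption; verify against the parstatus(1) man page on your release):
# parstatus -w                 (show the local partition number)
# getconf PARTITION_IDENT      (show the partition's unique identifier)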
patches:
--------
The last six digits of the ROM level represent the platform firmware date in
the format YYMMDD.
Netcards:
---------
Network sniffing:
-----------------
-- Solaris:
-- AIX:
# tcpdump port 23
# tcpdump -i en0
A good way to use tcpdump is to save the network trace to a file with
the -w flag and then analyze the trace by using different
filtering options together with the -r flag. The following example shows
how to run a basic tcpdump network trace,
saving the output in a file with the -w flag (on an Ethernet network
interface):
# tcpdump -w /tmp/tcpdump.en0 -i en0
To limit the number of traced packets, use the -c flag and specify the
number, such as in the following example
that traces the first 128 packets (on a token-ring network interface):
# tcpdump -c 128 -w /tmp/tcpdump.tr0 -i tr0
To start the iptrace daemon with the System Resource Controller (SRC),
# startsrc -s iptrace -a "/tmp/nettrace"
The recorded packets are received on and sent from the local host. All packet
flow between the local host and all other hosts on any interface is recorded.
The trace information is placed into the /tmp/nettrace file.
To record packets coming in and going out from a specific remote host,
enter the command in the following format:
# iptrace -i en0 -s airmail -b /tmp/telnet.trace
-- HPUX:
nettl command:
1. Turn on inbound and outbound PDU tracing for the transport and session
(OTS/9000) subsystems, and send binary trace messages to file /var/adm/trace.TRC000:
# nettl -traceon pduin pduout -entity transport session \
-file /var/adm/trace
2. Reproduce the problem.
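3. Turn tracing off again. A likely form (the exact flags are an assumption here; verify with nettl(1M)):
# nettl -traceoff -e all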
-- AIX:
The following subsystems are part of the nfs group: nfsd, biod,
rpc.lockd, rpc.statd, and rpc.mountd.
The nfs subsystem group is under control of the System Resource Controller
(SRC), so starting and stopping NFS is easy:
# startsrc -g nfs
# stopsrc -g nfs
Or use smitty.
-- Redhat Linux:
# /sbin/service nfs restart
# /sbin/service nfs start
# /sbin/service nfs stop
-- Solaris:
If the nfs daemons aren't running, then you will need to run:
# /etc/init.d/nfs.server start
-- HP-UX:
Issue the following command on the NFS server to start all the necessary
NFS processes (HP):
# /sbin/init.d/nfs.server start
-- AIX:
# refresh -s inetd
-- HPUX:
# /usr/sbin/inetd -c
-- Solaris:
# /etc/init.d/inetd stop
# /etc/init.d/inetd start
# pkill -HUP inetd        # restarts inetd and rereads the configuration
-- RedHat / Linux
# service xinetd restart
or
# /etc/init.d/inetd restart
prtconf:
--------
Use this command to obtain detailed system information about your Sun
Solaris installation
# /usr/sbin/prtconf
# prtconf -v
Displays the size of the system memory and reports information about
peripheral devices
Other commands:
---------------
# prtmem
# memps -m
# bootinfo -r
# lsattr -El sys0 -a realmem
# prtconf (you can grep it on memory)
You can have a more detailed and comprehensive look at AIX memory by
using "vmstat -v" and "vmo -L" or "vmo -a":
For example:
# vmstat -v
524288 memory pages
493252 lruable pages
67384 free pages
7 memory pools
131820 pinned pages
80.0 maxpin percentage
20.0 minperm percentage
80.0 maxperm percentage
25.4 numperm percentage
125727 file pages
0.0 compressed percentage
0 compressed pages
25.4 numclient percentage
80.0 maxclient percentage
125575 client pages
0 remote pageouts scheduled
14557 pending disk I/Os blocked with no pbuf
6526890 paging space I/Os blocked with no psbuf
18631 filesystem I/Os blocked with no fsbuf
0 client filesystem I/Os blocked with no fsbuf
49038 external pager filesystem I/Os blocked with no fsbuf
0 Virtualized Partition Memory Page Faults
0.00 Time resolving virtualized partition memory page faults
The vmo command really gives lots of output. In the following example
only a small fraction of the output is shown:
# vmo -L
..
lrubucket     128K    128K    128K    64K            4KB pages    D
--------------------------------------------------------------------------------
maxclient%    80      80      80      1      100     % memory     D
     maxperm%
     minperm%
--------------------------------------------------------------------------------
maxfree       1088    1088    1088    8      200K    4KB pages    D
     minfree
     memory_frames
--------------------------------------------------------------------------------
maxperm       394596  394596                                      S
--------------------------------------------------------------------------------
maxperm%      80      80      80      1      100     % memory     D
     minperm%
     maxclient%
--------------------------------------------------------------------------------
maxpin        424179  424179                                      S
..
..
>> To further look at your virtual memory and its causes, you can use a combination of:
-----------------------------------------------------------------------------------------
To print out the memory usage statistics of the top 10 users, taking into
account only working segments, type:
svmon -U -g -t 10
To print out the memory usage statistics for the user steve, including the
list of the process identifiers, type:
svmon -U steve -l
svmon -U emcdm -l
# vmo -o npswarn=value
# schedo -o pacefork=15
Note: sysdumpdev -e
Although the sysdumpdev command is used to show or alter the dump device for
a system dump, you can also use it to show how much real memory is used.
The command
# sysdumpdev -e
provides an estimated dump size, taking into account the real memory (not
paging space) currently in use by the system.
Warning:
The pstat command displays many system tables, such as the process table,
inode table, or processor status table. It interprets the contents of the
various system tables and writes them to standard output.
Use the pstat command from the AIX 5.2 command prompt. See the command
reference for details and examples, or use the syntax summary in the table below.
Flags
-a            Displays entries in the process table
-A            Displays all entries in the kernel thread table
-f            Displays the file table
-i            Displays the i-node table and the i-node data block addresses
-p            Displays the process table
-P            Displays runnable kernel thread table entries only
-s            Displays information about the swap or paging space usage
-S            Displays the status of the processors
-t            Displays the tty structures
-u ProcSlot   Displays the user structure of the process in the designated
              slot of the process table. An error message is generated if you
              attempt to display a swapped-out process.
-T            Displays the system variables. These variables are briefly
              described in var.h
-U ThreadSlot Displays the user structure of the kernel thread in the
              designated slot of the kernel thread table. An error message is
              generated if you attempt to display a swapped-out kernel thread.
---------------------------------------------------------------------------------
Note 1: How to get a "reasonable" view on memory consumption of a process in UNIX:
---------------------------------------------------------------------------------
-- Some people like to use the ps command with some special flags, like
ps -vg
ps auxw     # or ps auxw | sort -r +3 | head -10   (top users)
But those commands seem not very satisfactory, and not "complete" in their output.
-- There are some great common utilities like topas, nmon, top etc., or
tools specific to a certain Unix, like SMC for Solaris.
Nothing bad about those tools, because they are great. But some people think
that they are not satisfactory on the subject of memory consumption of a
process (although they show a lot of other interesting information).
Those tools also show a "total" memory usage, which is a good indicator.
For example:
# pmap -x $$
492328: -ksh
Address Kbytes RSS Anon Locked Mode Mapped File
00010000 192 192 - - r-x-- ksh
00040000 8 8 8 - rwx-- ksh
00042000 40 40 8 - rwx-- [ heap ]
FF180000 680 680 - - r-x-- libc.so.1
FF23A000 24 24 - - rwx-- libc.so.1
FF240000 8 8 8 - rwx-- libc.so.1
FF280000 576 576 - - r-x-- libnsl.so.1
FF310000 40 40 - - rwx-- libnsl.so.1
FF31A000 24 16 - - rwx-- libnsl.so.1
FF350000 16 16 - - r-x-- libmp.so.2
FF364000 8 8 - - rwx-- libmp.so.2
FF380000 40 40 - - r-x-- libsocket.so.1
FF39A000 8 8 - - rwx-- libsocket.so.1
FF3A0000 8 8 - - r-x-- libdl.so.1
FF3B0000 8 8 8 - rwx-- [ anon ]
FF3C0000 152 152 - - r-x-- ld.so.1
FF3F6000 8 8 8 - rwx-- ld.so.1
FFBFC000 16 16 8 - rw--- [ stack ]
-------- ------- ------- ------- -------
total Kb 1856 1848 48 -
# svmon -G
# svmon -U
# svmon -P -t 10 (top 10 users)
# svmon -U steve -l (memory stats for user steve)
# ls -l /proc/{pid}/as
# prstat -a -s rss
The ipcs, vmstat, iostat and similar commands are of course more or less the
same in Linux as they are in Solaris or AIX.
# ps -k | grep aioserver
331962 - 0:15 aioserver
352478 - 0:14 aioserver
450644 - 0:12 aioserver
454908 - 0:10 aioserver
565292 - 0:11 aioserver
569378 - 0:10 aioserver
581660 - 0:11 aioserver
585758 - 0:17 aioserver
589856 - 0:12 aioserver
593954 - 0:15 aioserver
598052 - 0:17 aioserver
602150 - 0:12 aioserver
606248 - 0:13 aioserver
827642 - 0:14 aioserver
991288 - 0:14 aioserver
995388 - 0:11 aioserver
1007616 - 0:12 aioserver
1011766 - 0:13 aioserver
1028096 - 0:13 aioserver
1032212 - 0:13 aioserver
AIX 5L supports asynchronous I/O (AIO) for database files created both
on file system partitions and on raw devices.
AIO on raw devices is fully implemented in the AIX kernel, and does not
require database processes
to service the AIO requests. When using AIO on file systems, the kernel
database processes (aioserver)
control each request from the time a request is taken off the queue
until it completes. The kernel database
processes are also used with I/O with virtual shared disks (VSDs) and
HSDs with FastPath disabled. By default,
FastPath is enabled. The number of aioserver servers determines the
number of AIO requests that can be executed
in the system concurrently, so it is important to tune the number of
aioserver processes when using file systems
to store Oracle Database data files.
- Use one of the following commands to set the number of servers. This
applies only when using asynchronous I/O
on file systems rather than raw devices:
# smit aio
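Alternatively, on AIX 5.x you can inspect and change the AIO device attributes directly. A minimal sketch, assuming the usual aio0 device with the minservers/maxservers attributes:
# lsattr -El aio0                       (show the current minservers/maxservers values)
# chdev -l aio0 -a maxservers='10' -P   (-P records the change so it takes effect at the next reboot)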
su - oracle
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk async_on
make -f ins_rdbms.mk ioracle
A command like "ipcs -m" might give you, for example, the shared memory
identifier "50855977".
Now clear the segment:
# ipcrm -m 50855977
It might also be that a semaphore and/or message queue is still left over.
In that case, inspect them with commands like the following:
ipcs -q
ipcs -s
Note: in some cases the "slibclean" command can be used to clear unused
modules in kernel and library memory.
Just give as root the command:
# slibclean
Other Example:
--------------
If you run the following command to remove a shared memory segment and
you get this error:
# ipcrm -m 65537
ipcrm: 0515-020 shmid(65537) was not found.
However, if you run the ipcs command, you still see the segment there.
If you look carefully, you will notice a "D" in the fourth (mode) column. The
"D" means the segment is marked for deletion, but it only disappears when the
last process attached to it detaches.
So, to clear the shared memory segment, find the process which is still
associated with the segment, for example with:
# ps -ef | grep process_owner
where process_owner is the name of the owner using the shared segment.
Kill that process (or let it detach). Running another ipcs command will then
show that the shared memory segment no longer exists.
Solaris:
========
showrev:
--------
# showrev
Displays system summary information.
# showrev -p
Reports which patches are installed.
AIX versions:
-------------
# oslevel
# oslevel -r tells you which maintenance level you have.
# oslevel -q -s
Known Service Packs
-------------------
5300-05-04
5300-05-03
5300-05-02
5300-05-01
5300-05-00
5300-04-CSP
5300-04-03
5300-04-02
5300-04-01
5300-03-CSP
>> Example:
5300-02 is TL 02
5300-02-04 is TL 02 and SP 04
5300-02-CSP is TL 02 and CSP for TL 02
(and there won't be any more SPs, because when you see a CSP the next TL has
been released; in this case it would be TL 03).
>> How can I determine which fileset updates are missing from a
particular AIX level?
To determine which fileset updates are missing from 5300-04, for
example, run the following command:
# oslevel -s
5300-04-02
# oslevel -s
5300-03-CSP
HP-UX example:
# model
9000/800/rp7410
Machine info on AIX:
How do I find out the Chip type, System name, Node name, Model Number
etc.?
The uname command provides details about your system. uname -p Displays
the chip type of the system.
For example, powerpc.
Architecture:
-------------
To see if you have a CHRP machine, log into the machine as the root
user, and run the following command:
# bootinfo -p
chrp
- Solaris:
# isainfo -kv
if [ -x /usr/bin/isainfo ]; then
bits=`/usr/bin/isainfo -b`
else
bits=32
fi
- AIX:
Command: /bin/lslpp -l bos.64bit   ...to see if bos.64bit is installed & committed
-or-     /bin/locale64             ...gives an error message on a 32-bit machine, such as:
                                      Could not load program /bin/locale64:
                                      Cannot run a 64-bit program on a 32-bit machine.
Or use:
# /usr/bin/getconf HARDWARE_BITMODE
64
Note:
-----
AIX 5L has pre-configured kernels. These are listed below for Power
processors:
/unix
/usr/lib/boot/unix
The base operating system 64-bit runtime fileset is bos.64bit.
Installing bos.64bit also installs
the /etc/methods/cfg64 file. The /etc/methods/cfg64 file provides the
option of enabling or disabling
the 64-bit environment via SMIT, which updates the /etc/inittab file
with the load64bit line.
(Simply adding the load64bit line does not enable the 64-bit
environment).
IMPORTANT NOTE: If you are changing the kernel mode to 32-bit and you will
run Oracle 9.2 on this server, the following line should be included in
/etc/inittab:
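A sketch of that line, based on the load64bit entry mentioned earlier (an assumption; verify against your own /etc/inittab and system documentation):
load64bit:2:wait:/etc/methods/cfg64 >/dev/console 2>&1 # Enable 64-bit execs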
This allows 64-bit applications to run on the 32-bit kernel. Note that
this
line is also mandatory if you are using the 64-bit kernel.
scinstall:
----------
# scinstall -pv
Displays Sun Cluster software release and package version information
Solaris:
--------
# psrinfo -v
Shows the number of processors and their status.
Linux:
------
# cat /proc/cpuinfo
# cat /proc/cpuinfo | grep processor|wc -l
Especially with Linux, the /proc directory contains special "files" that
either extract information from, or send information to, the kernel.
HP-UX:
------
The "getconf" command can give you a lot of interesting info. The
parameters are:
Example:
# getconf CPU_VERSION
get_cpu_version()
{
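The helper function that originally followed this example did not survive here. Below is a minimal sketch of what such a function could look like, assuming the standard PA-RISC CPU_VERSION constants (523 = PA-RISC 1.0, 528 = PA-RISC 1.1, 532 = PA-RISC 2.0; the 768 = IA-64 value is an assumption, check sys/unistd.h):
get_cpu_version()
{
  case `getconf CPU_VERSION` in
    523) echo "PA-RISC 1.0" ;;
    528) echo "PA-RISC 1.1" ;;
    532) echo "PA-RISC 2.0" ;;
    768) echo "IA-64" ;;              # assumption; verify on your release
    *)   echo "unknown CPU_VERSION" ;;
  esac
}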
AIX:
----
# pmcycles -m
Cpu 0 runs at 1656 MHz
Cpu 1 runs at 1656 MHz
Cpu 2 runs at 1656 MHz
Cpu 3 runs at 1656 MHz
# schedo -a
When you want to keep the setting across reboots, you must use the
bosboot command
in order to create a new boot image.
runlevel:
---------
To show the init runlevel:
# who -r
Top users:
----------
To get a quick impression about the top 10 users in the system at this
time:
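For example, reusing the ps pipeline shown earlier in this note (the old-style "+3" skips three fields, so the sort runs on the fourth column, %MEM):
# ps auxw | sort -r +3 | head -10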
ps -vg:
-------
Using "ps vg" gives a per process tally of memory usage for each running
process. Several fields give memory usage
in different units, but these numbers do not tell the whole story on
where all the memory goes.
First of all, the man page for ps does not give an accurate description
of the memory related fields.
Here is a better description:
RSS - This tells how much RAM resident memory is currently being used
for the text and data segments
for a particular process in units of kilobytes. (this value will always
be a multiple of 4 since memory is allocated in 4 KB pages).
%MEM - This is the fraction of RSS divided by the total size of RAM for
a particular process.
Since RSS is some subset of the total resident memory usage for a
process, the %MEM value will also be lower than actual.
TRS - This tells how much RAM resident memory is currently being used
for the text segment for a particular process
in units of kilobytes. This will always be less than or equal to RSS.
SIZE - This tells how much paging space is allocated for this process
for the text and data segments in units
of kilobytes. If the executable file is on a local filesystem, the page
space usage for text is zero.
If the executable is on an NFS filesystem, the page space usage will be
nonzero. This number may be greater
than RSS, or it may not, depending on how much of the process is paged
in. The reason RSS can be larger is that
RSS counts text whereas SIZE does not.
These fields only report on a process text and data segments. Segment
size which cannot be interrogated at this time are:
shared memory:
--------------
To check shared memory segment, semaphore array, and message queue
limits, issue the ipcs -l command.
# ipcs
The following tools are available for monitoring the performance of your
UNIX-based system.
pfiles:
-------
/usr/proc/bin/pfiles
This shows the open files for this process, which helps you diagnose
whether you are having problems
caused by files not getting closed.
lsof:
-----
This utility lists open files for running UNIX processes, like pfiles.
However, lsof gives more
useful information than pfiles. You can find lsof at
ftp://vic.cc.purdue.edu/pub/tools/unix/lsof/.
You can see CIO (concurrent IO) in the FILE-FLAG column if you run lsof
+fg, e.g.:
tarunx01:/home/abielewi:# /p570build/LSOF/lsof-4.76/usr/local/bin/lsof
+fg /baanprd/oradat
You should also see O_CIO in your file open calls if you run truss,
e.g.:
open("/opt/oracle/rcat/oradat/redo01.log",
O_RDWR|O_CIO|O_DSYNC|O_LARGEFILE) = 18
VMSTAT SOLARIS:
---------------
# vmstat
This command is ideal for monitoring paging rate, which can be found
under the page in (pi) and page out (po) columns.
Other important columns are the amount of allocated virtual storage
(avm) and free virtual storage (fre).
This command is useful for determining if something is suspended or just
taking a long time.
Example:
When analyzing vmstat output, there are several metrics to which you
should pay attention. For example,
keep an eye on the CPU run queue column. The run queue should never
exceed the number of CPUs on the server.
If you do notice the run queue exceeding the amount of CPUs, it's a good
indication that your server
has a CPU bottleneck.
To get an idea of the RAM usage on your server, watch the page in (pi)
and page out (po) columns
of vmstat's output. By tracking common virtual memory operations such as
page outs, you can infer
the times that the Oracle database is performing a lot of work. Even
though UNIX page ins must correlate
with the vmstat's refresh rate to accurately predict RAM swapping,
plotting page ins can tell you
when the server is having spikes of RAM usage.
Once captured, it's very easy to take the information about server
performance directly from the
Oracle tables and plot them in a trend graph. Rather than using an
expensive statistical package
such as SAS, you can use Microsoft Excel. Copy and paste the data from
the tables into Excel.
After that, you can use the Chart Wizard to create a line chart that
will help you view server
usage information and discover trends.
VMSTAT AIX:
-----------
vmstat can be used to give multiple statistics on the system. For CPU-
specific work, try the following command:
# vmstat -t 1 3
This will take 3 samples, 1 second apart, with timestamps (-t). You can, of
course, change the parameters as you like.
If columns r (run queue) and b (blocked) start going up, especially above 10,
this usually is an indication that you have too many processes competing for CPU.
In the cpu section, us (user time) indicates the time being spent in
programs. Assuming Java is at the top of the list in tprof, then you need to
tune the Java application.
In the cpu section, if sys (system time) is higher than expected, and
you still have id (idle) time left,
this may indicate lock contention. Check the tprof for lock related
calls in the kernel time. You may want
to try multiple instances of the JVM. It may also be possible to find
deadlocks in a javacore file.
In the cpu section, if wa (I/O wait) is high, this may indicate a disk
bottleneck, and you should use
iostat and other tools to look at the disk usage.
Non-zero values in the pi, po (page in/out) columns may indicate that you are
paging and need more memory.
It may be possible that you have the stack size set too high for some of
your JVM instances.
It could also mean that you have allocated a heap larger than the amount
of memory on the system. Of course,
you may also have other applications using memory, or file pages may be
taking up too much of the memory.
Other example:
--------------
# vmstat 1
Let's look at the last section, which also comes up in most other CPU
monitoring tools, albeit with different headings:
us -- user time
sy -- system time
id -- idle time
wa -- waiting on I/O
IOSTAT:
-------
This command is useful for monitoring I/O activities. You can use the
read and write rate to estimate the
amount of time required for certain SQL operations (if they are the only
activity on the system).
This command is also useful for determining if something is suspended or
just taking a long time.
Basic syntax is: iostat <options> <interval> <count>
option - lets you specify the device for which information is needed, like
disk, cpu, or terminal (-d, -c, -t or -tdc). The x option gives the extended
statistics.
Example:
$ iostat -xtc 5 2
                 extended disk statistics                  tty         cpu
disk  r/s   w/s  Kr/s  Kw/s wait actv svc_t  %w  %b  tin tout  us sy wt id
sd0   2.6   3.0  20.7  22.7  0.1  0.2  59.2   6  19    0   84   3 85 11  0
sd1   4.2   1.0  33.5   8.0  0.0  0.2  47.2   2  23
sd2   0.0   0.0   0.0   0.0  0.0  0.0   0.0   0   0
sd3  10.2   1.6  51.4  12.8  0.1  0.3  31.2   3  31
# netstat
This command lets you know the network traffic on each node, and the
number of error packets encountered.
It is useful for isolating network problems.
Example:
To find out all listening services, you can use the command
# netstat -a -f inet
# top
For example:
PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
2795 oraclown 1 59 0 265M 226M sleep 0:13 4.38% oracle
2294 root 11 59 0 8616K 7672K sleep 10:54 3.94% bpbkar
13907 oraclown 11 59 0 271M 218M cpu2 4:02 2.23% oracle
14138 oraclown 12 59 0 270M 230M sleep 9:03 1.76% oracle
2797 oraclown 1 59 0 189M 151M sleep 0:01 0.96% oracle
2787 oraclown 11 59 0 191M 153M sleep 0:06 0.69% oracle
2799 oraclown 1 59 0 190M 151M sleep 0:02 0.45% oracle
2743 oraclown 11 59 0 191M 155M sleep 0:25 0.35% oracle
2011 oraclown 11 59 0 191M 149M sleep 2:50 0.27% oracle
2007 oraclown 11 59 0 191M 149M sleep 2:22 0.26% oracle
2009 oraclown 11 59 0 191M 149M sleep 1:54 0.20% oracle
2804 oraclown 1 51 0 1760K 1296K cpu2 0:00 0.19% top
2013 oraclown 11 59 0 191M 148M sleep 0:36 0.14% oracle
2035 oraclown 11 59 0 191M 149M sleep 2:44 0.13% oracle
114 root 10 59 0 5016K 4176K sleep 23:34 0.05% picld
Process ID
This column shows the process ID (pid) of each process. The process ID
is a positive number,
usually less than 65536. It is used for identification during the life
of the process.
Once a process has exited or been killed, the process ID can be reused.
Username
This column shows the name of the user who owns the process. The kernel
stores this information
as a uid, and top uses an appropriate table (/etc/passwd, NIS, or NIS+)
to translate this uid into a name.
Threads
This column displays the number of threads for the current process. This
column is present only
in the Solaris 2 port of top.
For Solaris, this number is actually the number of lightweight processes
(lwps) created by the
threads package to handle the threads. Depending on current resource
utilization, there may not
be one lwp for every thread. Thus this number is actually less than or
equal to the total number
of threads created by the process.
Nice
This column reflects the "nice" setting of each process. A process's
nice value is inherited from its parent.
Most user processes run at a nice of 0, indicating normal priority.
Users have the option of starting
a process with a positive nice value to allow the system to reduce the
priority given to that process.
This is normally done for long-running cpu-bound jobs to keep them from
interfering with
interactive processes. The Unix command "nice" controls setting this
value. Only root can set
a nice value lower than the current value. Nice values can be negative.
On most systems they range from -20 to 20.
The nice value influences the priority value calculated by the Unix
scheduler.
Size
This column shows the total amount of memory allocated by each process.
This is virtual memory
and is the sum total of the process's text area (program space), data
area, and dynamically
allocated area (or "break"). When a process allocates additional memory
with the system call "brk",
this value will increase. This is done indirectly by the C library
function "malloc".
The number in this column does not reflect the amount of physical memory
currently in use by the process.
Resident Memory
This column reflects the amount of physical memory currently allocated
to each process.
This is also known as the "resident set size" or RSS. A process can have
a large amount
of virtual memory allocated (as indicated by the SIZE column) but still
be using very little physical memory.
Process State
This column reflects the last observed state of each process. State
names vary from system to system.
These states are analogous to those that appear in the process states
line: the second line of the display.
The more common state names are listed below.
cpu - Assigned to a CPU and currently running
run - Currently able to run
sleep - Awaiting an external event, such as input from a device
stop - Stopped by a signal, as with control Z
swap - Virtual address space swapped out to disk
zomb - Exited, but parent has not called "wait" to receive the exit
status
CPU Time
This column displays the accumulated CPU time for each process. This is
the amount of time
that any cpu in the system has spent actually running this process. The
standard format shows
two digits indicating minutes, a colon, then two digits indicating
seconds.
For example, the display "15:32" indicates fifteen minutes and thirty-
two seconds.
When a time value is greater than or equal to 1000 minutes, it is
displayed as hours with the suffix H.
For example, the display "127.4H" indicates 127 hours plus four tenths
of an hour (24 minutes).
When the number of hours exceeds 999.9, the "H" suffix is dropped so
that the display
continues to fit in the column.
CPU Percentage
This column shows the percentage of the cpu that each process is
currently consuming.
By default, top will sort this column of the output.
Some versions of Unix will track cpu percentages in the kernel, as the
figure is used in the calculation
of a process's priority. On those versions, top will use the figure as
calculated by the kernel.
Other versions of Unix do not perform this calculation, and top must
determine the percentage explicitly by monitoring the changes in cpu time.
On most multiprocessor machines, the number displayed in this column is
a percentage of the total
available cpu capacity. Therefore, a single threaded process running on
a four processor system will never
use more than 25% of the available cpu cycles.
Command
This column displays the name of the executable image that each process
is running.
In most cases this is the base name of the file that was invoked with
the most recent kernel "exec" call.
On most systems, this name is maintained separately from the zeroth
argument. A program that changes
its zeroth argument will not affect the output of this column.
# modinfo
The modinfo command provides information about the modules currently
loaded by the kernel.
# uptime
uptime - show how long the system has been up
/export/home/oraclown>uptime
11:32am up 4:19, 1 user, load average: 0.40, 1.17, 0.90
The proc tools are called that way because they retrieve information from
the /proc virtual filesystem. They are:
-- pfiles:
reports all the files which are opened by a given pid
-- pldd
lists all the dynamic libraries linked to the process
-- pwdx
gives the directory from which the process is running
-- ptree
The ptree utility prints the process trees containing the specified pids
or users, with child processes
indented from their respective parent processes. An argument of all
digits is taken to be a process-ID,
otherwise it is assumed to be a user login name. The default is all
processes.
Use it like
# ptree <PID>
----------------------------------------------------------------------
Remark: many Linux distros adopted a similar tool, known as the "pstree"
command.
------------------------------------------------------------------------
# pfiles 13789
13789: /apps11i/erpdev/10GAS/Apache/Apache/bin/httpd -d
/apps11i/erpdev/10G
Current rlimit: 1024 file descriptors
0: S_IFIFO mode:0000 dev:350,0 ino:114723 uid:65060 gid:54032 size:301
O_RDWR
1: S_IFREG mode:0640 dev:307,28001 ino:612208 uid:65060 gid:54032
size:386
O_WRONLY|O_APPEND|O_CREAT
/apps11i/erpdev/10GAS/opmn/logs/HTTP_Server~1
2: S_IFIFO mode:0000 dev:350,0 ino:143956 uid:65060 gid:54032 size:0
O_RDWR
3: S_IFREG mode:0600 dev:307,28001 ino:606387 uid:65060 gid:54032
size:1056768
O_RDWR|O_CREAT
/apps11i/erpdev/10GAS/Apache/Apache/logs/mm.19389.mem
4: S_IFREG mode:0600 dev:307,28001 ino:606383 uid:65060 gid:54032 size:0
O_RDWR|O_CREAT
5: S_IFREG mode:0600 dev:307,28001 ino:621827 uid:65060 gid:54032
size:1056768
O_RDWR|O_CREAT
6: S_IFDOOR mode:0444 dev:351,0 ino:58 uid:0 gid:0 size:0
O_RDONLY|O_LARGEFILE FD_CLOEXEC door to nscd[421]
/var/run/name_service_door
7: S_IFIFO mode:0000 dev:350,0 ino:143956 uid:65060 gid:54032 size:0
O_RDWR
8: S_IFCHR mode:0666 dev:342,0 ino:47185924 uid:0 gid:3 rdev:90,0
O_RDONLY
/devices/pseudo/kstat@0:kstat
etc..
..
..
O_RDWR|O_CREAT
/apps11i/erpdev/10GAS/Apache/Apache/logs/dms_metrics.19389.shm.sem
21: S_IFREG mode:0600 dev:307,28001 ino:603445 uid:65060 gid:54032
size:17408
O_RDONLY FD_CLOEXEC
/apps11i/erpdev/10GAS/rdbms/mesg/ocius.msb
23: S_IFSOCK mode:0666 dev:348,0 ino:60339 uid:0 gid:0 size:0
O_RDWR
SOCK_STREAM
SO_SNDBUF(49152),SO_RCVBUF(49152),IP_NEXTHOP(0.0.192.0)
sockname: AF_INET 3.56.189.4 port: 45395
peername: AF_INET 3.56.189.4 port: 12501
256: S_IFREG mode:0444 dev:85,0 ino:234504 uid:0 gid:3 size:1616
O_RDONLY|O_LARGEFILE
/etc/inet/hosts
Suppose you ran pldd on the same process and got this result:
# pldd 13789
13789: /apps11i/erpdev/10GAS/Apache/Apache/bin/httpd -d /apps11i/erpdev/10G
/apps11i/erpdev/10GAS/lib32/libdms2.so
/lib/libpthread.so.1
/lib/libsocket.so.1
/lib/libnsl.so.1
/lib/libdl.so.1
/lib/libc.so.1
/platform/sun4u-us3/lib/libc_psr.so.1
/lib/libmd5.so.1
/platform/sun4u/lib/libmd5_psr.so.1
/lib/libscf.so.1
/lib/libdoor.so.1
/lib/libuutil.so.1
/lib/libgen.so.1
/lib/libmp.so.2
/lib/libm.so.2
/lib/libresolv.so.2
/apps11i/erpdev/10GAS/Apache/Apache/libexec/mod_onsint.so
/lib/librt.so.1
/apps11i/erpdev/10GAS/lib32/libons.so
/lib/libkstat.so.1
/lib/libaio.so.1
/apps11i/erpdev/10GAS/Apache/Apache/libexec/mod_mmap_static.so
/apps11i/erpdev/10GAS/Apache/Apache/libexec/mod_vhost_alias.so
/apps11i/erpdev/10GAS/Apache/Apache/libexec/mod_env.so
..
..
etc
/usr/lib/libsched.so.1
/apps11i/erpdev/10GAS/lib32/libclntsh.so.10.1
/apps11i/erpdev/10GAS/lib32/libnnz10.so
/apps11i/erpdev/10GAS/Apache/Apache/libexec/mod_wchandshake.so
/apps11i/erpdev/10GAS/Apache/Apache/libexec/mod_oc4j.so
/apps11i/erpdev/10GAS/Apache/Apache/libexec/mod_dms.so
/apps11i/erpdev/10GAS/Apache/Apache/libexec/mod_rewrite.so
/apps11i/erpdev/10GAS/Apache/oradav/lib/mod_oradav.so
/apps11i/erpdev/10GAS/Apache/modplsql/bin/modplsql.so
# pmap -x $$
492328: -ksh
Address Kbytes RSS Anon Locked Mode Mapped File
00010000 192 192 - - r-x-- ksh
00040000 8 8 8 - rwx-- ksh
00042000 40 40 8 - rwx-- [ heap ]
FF180000 680 680 - - r-x-- libc.so.1
FF23A000 24 24 - - rwx-- libc.so.1
FF240000 8 8 8 - rwx-- libc.so.1
FF280000 576 576 - - r-x-- libnsl.so.1
FF310000 40 40 - - rwx-- libnsl.so.1
FF31A000 24 16 - - rwx-- libnsl.so.1
FF350000 16 16 - - r-x-- libmp.so.2
FF364000 8 8 - - rwx-- libmp.so.2
FF380000 40 40 - - r-x-- libsocket.so.1
FF39A000 8 8 - - rwx-- libsocket.so.1
FF3A0000 8 8 - - r-x-- libdl.so.1
FF3B0000 8 8 8 - rwx-- [ anon ]
FF3C0000 152 152 - - r-x-- ld.so.1
FF3F6000 8 8 8 - rwx-- ld.so.1
FFBFC000 16 16 8 - rw--- [ stack ]
-------- ------- ------- ------- -------
total Kb 1856 1848 48 -
1.2.13 Well-known tools for AIX:
================================
1. commands:
------------
2. topas:
---------
The information on the bottom left side shows the most active processes;
here, java is consuming 83.6% of CPU.
The middle right area shows the total physical memory (1 GB in this
case) and Paging space (512 MB),
as well as the amount being used. So you get an excellent overview of
what the system is doing
in a single screen, and then you can select the areas to concentrate
based on the information being shown here.
Don't get caught up in this whole wait I/O thing: a single cpu system with
1 outstanding I/O and no other runnable threads (i.e. idle) will show 100%
wait I/O. There was a big discussion a couple of years ago on removing this
kernel tick, as it has confused many, many techs.
So, if you have only one or a few cpus, then you are going to see high wait
I/O figures; it does not necessarily mean your disk subsystem is slow.
4. nmon:
---------
nmon is a free software tool that gives much of the same information as
topas, but saves the information to a file in Lotus 123 and Excel format.
The download site is
http://www.ibm.com/developerworks/eserver/articles/analyze_aix/.
The information that is collected includes CPU, disk, network, adapter
statistics, kernel counters, memory, and the "top" process information.
5. tprof:
---------
tprof is one of the AIX legacy tools that provides a detailed profile of
CPU usage for every
AIX process ID and name. It has been completely rewritten for AIX 5.2,
and the example below uses
the AIX 5.1 syntax. You should refer to AIX 5.2 Performance Tools
update: Part 3 for the new syntax.
This example shows that over half the CPU time is associated with the
oracle application and that Java
is using about 3970/19577 or 1/5 of the CPU. The wait usually means idle
time, but can also include
the I/O wait portion of the CPU usage.
svmon:
------
svmon is the most useful tool at your disposal when monitoring a Java
process, especially native heap.
The article "When segments collide" gives examples of how to use svmon
-P <pid> -m to monitor the
native heap of a Java process on AIX. But there is another variation,
svmon -P <pid> -m -r, that is very
effective in identifying native heap fragmentation. The -r switch prints
the address range in use, so it gives
a more accurate view of how much of each segment is in use.
As an example, look at the partially edited output below:
Other example:
In this example, there are 16384 pages of total memory. Multiply this number
by 4096 to get the total real memory size; in this case, the total memory is
64 MB.
filemon:
--------
filemon can be used to identify the files that are being used most
actively. This tool gives a very
comprehensive view of file access, and can be useful for drilling down
once vmstat/iostat confirm disk
to be a bottleneck.
Example:
The generated log file is quite large. Some sections that may be useful
are:
------------------------------------------------------------------------
#MBs #opns #rds #wrs file volume:inode
------------------------------------------------------------------------
------------------------------------------------------------------------
Detailed File Stats
------------------------------------------------------------------------
curt:
-----
curt Command
Purpose
The CPU Utilization Reporting Tool (curt) command converts an AIX trace
file into a number of statistics related
to CPU utilization and either process, thread or pthread activity. These
statistics ease the tracking of
specific application activity. curt works with both uniprocessor and
multiprocessor AIX Version 4 and AIX Version 5
traces.
Syntax
curt -i inputfile [-o outputfile] [-n gennamesfile] [-m trcnmfile]
     [-a pidnamefile] [-f timestamp] [-l timestamp] [-ehpstP]
Description
The curt command takes an AIX trace file as input and produces a number
of statistics related to
processor (CPU) utilization and process/thread/pthread activity. It will
work with both uniprocessor and
multiprocessor AIX traces if the processor clocks are properly
synchronized.
genkld:
-------
genkld Command
Purpose
The genkld command extracts the list of shared objects currently loaded
onto the system and displays the address,
size, and path name for each object on the list.
Syntax
genkld
Description
For shared objects loaded onto the system, the kernel maintains a linked
list consisting of data structures
called loader entries. A loader entry contains the name of the object,
its starting address, and its size.
This information is gathered and reported by the genkld command.
Implementation Specifics
This command is valid only on the POWER-based platform.
Examples
To obtain a list of loaded shared objects, enter:
# genkld
..
d0791c00 18ab27 /usr/lib/librtl.a[shr.o]
d0194500 7e07 /usr/lib/libbsd.a[shr.o]
d019d0f8 3d39 /usr/lib/libbind.a[shr.o]
d0237100 1eac0 /usr/lib/libwlm.a[shr.o]
d01d5100 1fff9 /usr/lib/libC.a[shr.o]
d02109e0 262b2 /usr/lib/libC.a[shrcore.o]
d01f6c60 190dc /usr/lib/libC.a[ansicore_32.o]
d01b0000 24cfd /usr/lib/boot/bin/libcfg_chrp
d010a000 367ad /usr/lib/libpthreads.a[shr_xpg5.o]
d0142000 3cee /usr/lib/libpthreads.a[shr_comm.o]
d017f100 1172a /usr/lib/libcfg.a[shr.o]
d016c100 128b2 /usr/lib/libodm.a[shr.o]
d014c100 b12d /usr/lib/libi18n.a[shr.o]
d0158100 13b41 /usr/lib/libiconv.a[shr4.o]
d01410f8 846 /usr/lib/libcrypt.a[shr.o]
..
etc..
1.2.14 Not so well known tools for AIX: the proc tools:
=======================================================
--proctree
Displays the process tree containing the specified process IDs or users.
To display the ancestors and all the children of process 21166, enter:
# proctree 21166
11238 /usr/sbin/srcmstr
21166 /usr/sbin/rsct/bin/IBM.AuditRMd
# proctree -a 21166
1 /etc/init
11238 /usr/sbin/srcmstr
21166 /usr/sbin/rsct/bin/IBM.AuditRMd
-- procstack
Displays the hexadecimal addresses and symbolic names for each of the
stack frames of the current thread
in processes. To display the current stack of process 15052, enter:
# procstack 15052
15052 : /usr/sbin/snmpd
d025ab80 select (?, ?, ?, ?, ?) + 90
100015f4 main (?, ?, ?) + 1814
10000128 __start () + 8c
-- procmap
Displays a process address map. To display the address space of process
13204, enter:
# procmap 13204
13204 : /usr/sbin/biod 6
10000000 3K read/exec biod
20000910 0K read/write biod
d0083100 79K read/exec /usr/lib/libiconv.a
20013bf0 41K read/write /usr/lib/libiconv.a
d007a100 34K read/exec /usr/lib/libi18n.a
20011378 4K read/write /usr/lib/libi18n.a
d0074000 11K read/exec /usr/lib/nls/loc/en_US
d0077130 8K read/write /usr/lib/nls/loc/en_US
d00730f8 2K read/exec /usr/lib/libcrypt.a
f03c7508 0K read/write /usr/lib/libcrypt.a
d01d4e20 1997K read/exec /usr/lib/libc.a
f0337e90 570K read/write /usr/lib/libc.a
-- procldd
Displays a list of libraries loaded by a process. To display the list of
dynamic libraries loaded by
process 11928, enter
# procldd 11928
11928 : -sh
/usr/lib/nls/loc/en_US
/usr/lib/libcrypt.a
/usr/lib/libc.a
-- procflags
Displays a process tracing flags, and the pending and holding signals.
To display the tracing flags of
process 28138, enter:
# procflags 28138
28138 : /usr/sbin/rsct/bin/IBM.HostRMd
data model = _ILP32 flags = PR_FORK
/64763: flags = PR_ASLEEP | PR_NOREGS
/66315: flags = PR_ASLEEP | PR_NOREGS
/60641: flags = PR_ASLEEP | PR_NOREGS
/66827: flags = PR_ASLEEP | PR_NOREGS
/7515: flags = PR_ASLEEP | PR_NOREGS
/70439: flags = PR_ASLEEP | PR_NOREGS
/66061: flags = PR_ASLEEP | PR_NOREGS
/69149: flags = PR_ASLEEP | PR_NOREGS
-- procsig
Lists the signal actions for a process. To list all the signal actions
defined for process 30552, enter:
# procsig 30552
30552 : -ksh
HUP caught
INT caught
QUIT caught
ILL caught
TRAP caught
ABRT caught
EMT caught
FPE caught
KILL default RESTART BUS caught
-- proccred
Prints a process' credentials. To display the credentials of process
25632, enter:
# proccred 25632
25632: e/r/suid=0 e/r/sgid=0
-- procfiles
Prints a list of open file descriptors. To display status and control
information on the file descriptors
opened by process 20138, enter:
# procfiles -n 20138
20138 : /usr/sbin/rsct/bin/IBM.CSMAgentRMd
Current rlimit: 2147483647 file descriptors
0: S_IFCHR mode:00 dev:10,4 ino:4178 uid:0 gid:0 rdev:2,2
O_RDWR name:/dev/null
2: S_IFREG mode:0311 dev:10,6 ino:250 uid:0 gid:0 rdev:0,0
O_RDWR size:0 name:/var/ct/IBM.CSMAgentRM.stderr
4: S_IFREG mode:0200 dev:10,6 ino:255 uid:0 gid:0 rdev:0,0
-- procwdx
Prints the current working directory for a process. To display the
current working directory
of process 11928, enter:
# procwdx 11928
11928 : /home/guest
-- procstop
Stops a process. To stop process 7500 on the PR_REQUESTED event, enter:
# procstop 7500 .
-- procrun
Restart a process. To restart process 30192 that was stopped on the
PR_REQUESTED event, enter:
# procrun 30192 .
-- procwait
Waits for all of the specified processes to terminate. To wait for
process 12942 to exit and display
the status, enter
# procwait -v 12942 .
12942 : terminated, exit status 0
Nagios:
-------
Overview / System Requirements:
You are not required to use the CGIs included with Nagios. However, if you
do decide to use them, you will need to have the following software installed...
Ports exist for most unixes, like Linux, Solaris, AIX etc.
rstat:
------
rstat is an RPC client program to get and print statistics from any
machine running the rpc.rstatd daemon,
its server-side counterpart. The rpc.rstad daemon has been used for many
years by tools such as Sun's perfmeter
and the rup command. The rstat program is simply a new client for an old
daemon. The fact that the rpc.rstatd daemon
is already installed and running on most Solaris and Linux machines is a
huge advantage over other tools
that require the installation of custom agents.
The rstat client compiles and runs on Solaris and Linux as well and can
get statistics from any machine running
a current rpc.rstatd daemon, such as Solaris, Linux, AIX, and OpenBSD.
The rpc.rstatd daemon is started
from /etc/inetd.conf on Solaris. It is similar to vmstat, but has some
advantages over vmstat:
You can get statistics without logging in to the remote machine,
including over the Internet.
It includes a timestamp.
The fact that it runs remotely means that you can use a single central
machine to monitor the performance
of many remote machines. It also has a disadvantage in that it does not
give the useful scan rate measurement
of memory shortage, the sr column in vmstat. rstat will not work across
most firewalls because it relies on
port 111, the RPC port, which is usually blocked by firewalls.
To use rstat, simply give it the name or IP address of the machine you
wish to monitor. Remember that rpc.rstatd
must be running on that machine. The rup command is extremely useful
here because with no arguments,
it simply prints out a list of all machines on the local network that
are running the rstatd demon.
If a machine is not listed, you may have to start rstatd manually.
On Solaris, first try running the rstat client, because inetd is often
already configured to automatically start rpc.rstatd on request. If the
client fails with the error "RPC: Program not registered," make sure you have
this line in your /etc/inet/inetd.conf and kill -HUP your inetd process to
get it to re-read inetd.conf, as follows:
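The entry typically looks like the following (paths can differ per release; treat this as an assumption to verify on your system):
rstatd/2-4 tli rpc/datagram_v wait root /usr/lib/netsvc/rstat/rpc.rstatd rpc.rstatd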
% rstat enkidu
2001 07 10 10 36 08  0  0  0 100    0   27   54    1    0    0   12  0.1
This command will give you a one-second average and then it will exit.
If you want to continuously monitor, give an interval in seconds on the
command line. Here's an example of one line of output every two seconds:
% rstat enkidu 2
2001 07 10 10 36 28  0  0  1  98    0    0    7    2    0    0   61  0.0
2001 07 10 10 36 30  0  0  0 100    0    0    0    2    0    0   15  0.0
2001 07 10 10 36 32  0  0  0 100    0    0    0    2    0    0   15  0.0
2001 07 10 10 36 34  0  0  0 100    0    5   10    2    0    0   19  0.0
2001 07 10 10 36 36  0  0  0 100    0    0   46    2    0    0  108  0.0
^C
To get a usage message, the output format, the version number, and where
to go for updates, just type rstat
with no parameters:
% rstat
usage: rstat machine [interval]
output:
yyyy mm dd hh mm ss usr wio sys idl pgin pgout intr ipkts opkts coll cs
load
docs and src at http://patrick.net/software/rstat/rstat.html
Notice that the column headings line up with the output data.
-- AIX:
In order to get rstat working on AIX, you may need to configure rstatd.
As root:
1. Edit /etc/inetd.conf: uncomment or add the entry for rstatd, e.g.
   rstatd sunrpc_udp udp wait root /usr/sbin/rpc.rstatd rstatd 100001 1-3
2. Edit /etc/services: uncomment or add the entry for rstatd, e.g.
   rstatd 100001/udp
3. Refresh services:
   refresh -s inetd
4. Start rstatd:
   /usr/sbin/rpc.rstatd
==================================
2. NFS and Mount command examples:
==================================
AIX:
----
# mount -r -v cdrfs /dev/cd0 /cdrom
Solaris:
--------
# mount -r -F hsfs /dev/dsk/c0t6d0s2 /cdrom
HPUX:
-----
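A typical HP-UX CD-ROM mount uses the cdfs filesystem type (the device file below is only an example; adjust it to your own hardware):
# mount -F cdfs -o ro /dev/dsk/c1t2d0 /cdrom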
SuSE Linux:
-----------
# mount -t iso9660 /dev/cdrom /cdrom
# mount -t iso9660 /dev/cdrom /media/cdrom
Redhat Linux:
-------------
# mount -t iso9660 /dev/cdrom /media/cdrom
Sometimes, on some Linux systems with certain SCSI CD-ROM devices, you might
need to try a different device file.
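For example, assuming the SCSI CD-ROM appears as /dev/sr0 (an assumption; the device name varies per system):
# mount -t iso9660 /dev/sr0 /media/cdrom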
2.1 NFS:
========
We will discuss the most important features of NFS by showing how it is
implemented on Solaris, Redhat and SuSE Linux. Most of this applies to HP-UX
and AIX as well.
rpc.mountd - The running process that receives the mount request from
an NFS client and checks to see
if it matches with a currently exported file system.
rpc.nfsd    - The process that implements the user-level part of the NFS
              service. It works with the Linux kernel to meet the dynamic
              demands of NFS clients, such as providing additional server
              threads for NFS clients to use.
rpc.lockd - A daemon that is not necessary with modern kernels. NFS
file locking is now done by the kernel.
It is included with the nfs-utils package for users of
older kernels that do not include this
functionality by default.
rpc.statd - Implements the Network Status Monitor (NSM) RPC protocol.
This provides reboot notification
when an NFS server is restarted without being gracefully
brought down.
rpc.rquotad - An RPC server that provides user quota information for
remote users.
Not all of these programs are required for NFS service. The only
services that must be enabled are rpc.mountd,
rpc.nfsd, and portmap. The other daemons provide additional
functionality and should only be used if your server
environment requires them.
Warning
NFS mount privileges are granted specifically to a client, not a user.
If you grant a client machine access
to an exported file system, any users of that machine will have access
to the data.
The portmap service can be used with the host access files
(/etc/hosts.allow and /etc/hosts.deny) to control
which remote systems are permitted to use RPC-based services on your
machine. Access control rules for portmap
will affect all RPC-based services. Alternatively, you can specify each
of the NFS RPC daemons to be affected
by a particular access control rule. The man pages for rpc.mountd and
rpc.statd contain information regarding
the precise syntax of these rules.
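For example, a common sketch (the subnet shown is only an illustration; use your own network) is to deny portmap to everyone in /etc/hosts.deny and allow only your own network in /etc/hosts.allow:
In /etc/hosts.deny :  portmap: ALL
In /etc/hosts.allow:  portmap: 192.168.0.0/255.255.255.0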
-- portmap Status
As portmap provides the coordination between RPC services and the port
numbers used to communicate with them,
it is useful to be able to get a picture of the current RPC services
using portmap when troubleshooting.
The rpcinfo command shows each RPC-based service with its port number,
RPC program number, version,
and IP protocol type (TCP or UDP).
To make sure the proper NFS RPC-based services are enabled for portmap,
rpcinfo -p can be useful:
# rpcinfo -p
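Typical output looks like the following; the program numbers are standard RPC assignments, but the ports shown here are only illustrative:
   program vers proto   port  service
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  32772  status
    100005    1   udp  32781  mountd
    100003    2   udp   2049  nfs
    100021    1   udp   4045  nlockmgr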
-- /etc/exports
The /etc/exports file is the standard for controlling which file systems
are exported to which hosts,
as well as specifying particular options that control everything. Blank
lines are ignored, comments can be made
using #, and long lines can be wrapped with a backslash (\). Each
exported file system should be on its own line.
Lists of authorized hosts placed after an exported file system must be
separated by space characters.
Options for each of the hosts must be placed in parentheses directly
after the host identifier, without any spaces
separating the host and the first parenthesis.
/some/directory bob.domain.com
/another/exported/directory 192.168.0.3
After re-exporting /etc/exports with the "/sbin/service nfs reload"
command, the bob.domain.com host will be
able to mount /some/directory and 192.168.0.3 can mount
/another/exported/directory. Because no options
are specified in this example, several default NFS preferences take
effect.
Warning
The way in which the /etc/exports file is formatted is very important,
particularly concerning the use of
space characters. Remember to always separate exported file systems
from hosts and hosts from one another
with a space character. However, there should be no other space
characters in the file unless they are used
in comment lines.
For example, the following two lines do not mean the same thing:
/home bob.domain.com(rw)
/home bob.domain.com (rw)
The first line allows only users from bob.domain.com read-write access
to the /home directory.
The second line allows users from bob.domain.com to mount the
directory read-only (the default), but the rest
of the world can mount it read-write. Be careful where space
characters are used in /etc/exports.
Any NFS share made available by a server can be mounted using various
methods. Of course, the share can be
manually mounted, using the mount command, to acquire the exported file
system at a particular mount point.
However, this requires that the root user type the mount command every
time the system restarts.
In addition, the root user must remember to unmount the file system when
shutting down the machine.
Two methods of configuring NFS mounts include modifying the /etc/fstab
or using the autofs service.
> /etc/fstab
Placing a properly formatted line in the /etc/fstab file has the same
effect as manually mounting the
exported file system. The /etc/fstab file is read by the
/etc/rc.d/init.d/netfs script at system startup.
The proper file system mounts, including NFS, are put into place.
The <options> area specifies how the file system is to be mounted. For
example, if the options
area states rw,suid on a particular mount, the exported file system will
be mounted read-write and the
user and group ID set by the server will be used. Note, parentheses are
not to be used here.
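A typical NFS line in /etc/fstab might look like this (server name and paths are only examples; the fields are device, mount point, type, options, dump, and fsck order):
server1:/usr/local/pub    /pub    nfs    rw,suid    0 0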
File systems can easily be imported manually from an NFS server. The
only prerequisite is a running
RPC port mapper, which can be started by entering the command
# rcportmap start
With YaST, turn a host in your network into an NFS server - a server
that exports directories and files
to all hosts granted access to it. This could be done to provide
applications to all coworkers of a group
without installing them locally on each and every host. To install such
a server, start YaST and select
`Network Services' -> `NFS Server'
Next, activate `Start NFS Server' and click `Next'. In the upper text
field, enter the directories to export.
Below, enter the hosts that should have access to them.
There are four options that can be set for each host: single host,
netgroups, wildcards, and IP networks.
A more thorough explanation of these options is provided by man exports.
`Exit' completes the configuration.
If you do not want to use YaST, make sure the following services run on the
NFS server: the RPC portmapper (portmap), rpc.mountd, and rpc.nfsd.
Also define which file systems should be exported to which host in the
configuration file "/etc/exports".
For each directory to export, one line is needed to set which machines
may access that directory
with what permissions. All subdirectories of this directory are
automatically exported as well.
Authorized machines are usually specified with their full names
(including domain name), but it is possible
to use wild cards like * or ? (which expand the same way as in the Bash
shell). If no machine is specified here,
any machine is allowed to import this file system with the given
permissions.
Set permissions for the file system to export in brackets after the
machine name. The most important options are:
#
# /etc/exports
#
/home sun(rw) venus(rw)
/usr/X11 sun(ro) venus(ro)
/usr/lib/texmf sun(ro) venus(rw)
/ earth(ro,root_squash)
/home/ftp (ro)
# End of exports
A mount command of the form "mount -t type device dir" tells the kernel to
attach the file system found on "device" (which is of type "type") at the
directory "dir". The previous contents (if any), owner, and mode of dir
become invisible, and as long as this file system remains mounted, the
pathname dir refers to the root of the file system on device.
In Solaris:
===========
In Linux:
=========
In AIX:
=======
In HP-UX:
=========
An example of /etc/vfstab:
--------------------------
At Remote server:
share, shareall, or add entry in /etc/dfs/dfstab
# share -F nfs /var/mail
Unmount a mounted FS
Before you can mount file systems located on a remote system, NFS
software must be installed and
configured on both local and remote systems. Refer to Installing and
Administering NFS for information.
For information on mounting NFS file systems using SAM, see SAM's online
help.
You must know the name of the host machine and the file system's
directory on the remote machine.
Establish communication over a network between the local system (that
is, the "client") and the
remote system. (The local system must be able to reach the remote system
via whatever hosts database is in use.)
(See named(1M) and hosts(4).) If necessary, test the connection with
/usr/sbin/ping; see ping(1M).
Make sure the file /etc/exports on the remote system lists the file
systems that you wish to make available
to clients (that is, to "export") and the local systems that you wish to
mount the file systems.
For example, to allow machines called rolf and egbert to remotely mount
the /usr file system, edit the file
/etc/exports on the remote machine and include the line:
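For example, using the pre-exportfs HP-UX exports syntax (an assumption; verify against exports(4) on your release):
/usr -access=rolf:egbert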
NOTE: If you wish to invoke exportfs -a at boot time, make sure the NFS
configuration file /etc/rc.config.d/nfsconf
on the remote system contains the following settings: NFS_SERVER=1 and
START_MOUNTD=1.
The client's /etc/rc.config.d/nfsconf file must contain NFS_CLIENT=1.
Then issue the following command
to run the script:
/sbin/init.d/nfs.server start
# mount
# mount -a
# mountall -l
# mount -t type device dir
# mount -F pcfs /dev/dsk/c0t0d0p0:c /pcfs/c
# mount /dev/md/dsk/d7 /u01
# mount sun:/home /home
# mount -t nfs 137.82.51.1:/share/sunos/local /usr/local
# mount /dev/fd0 /mnt/floppy
# mount -o ro /dev/dsk/c0t6d0s1 /mnt/cdrom
# mount -V cdrfs -o ro /dev/cd0 /cdrom
Once the file system is mounted, the directory becomes the mount point.
All the file systems will now be usable
as if they were subdirectories of the file system they were mounted on.
The table of currently mounted file systems
can be found by examining the mounted file system information file. This
is provided by a file system that is usually
mounted on /etc/mnttab.
1. The superblock for the mounted file system is read into memory
2. An entry is made in the /etc/mnttab file
3. An entry is made in the inode for the directory on which the file
system is mounted which marks the directory
as a mount point
OPTIONS
-F FSType
Used to specify the FSType on which to operate. The FSType must be
specified or must be determinable from
/etc/vfstab, or by consulting /etc/default/fs or /etc/dfs/fstypes.
-a [ mount_points. . . ]
Perform mount or umount operations in parallel, when possible.
If mount points are not specified, mount will mount all file systems
whose /etc/vfstab "mount at boot"
field is "yes". If mount points are specified, then /etc/vfstab "mount
at boot" field will be ignored.
If mount points are specified, umount will only umount those mount
points. If none is specified, then umount
will attempt to unmount all file systems in /etc/mnttab, with the
exception of certain system
required file systems: /, /usr, /var, /var/adm, /var/run, /proc, /dev/fd
and /tmp.
-v Print the list of mounted file systems in verbose format. Must be the
only option specified.
-V Echo the complete command line, but do not execute the command.
umount generates a command line by using the
options and arguments provided by the user and adding to them
information derived from /etc/mnttab. This
option should be used to verify and validate the command line.
generic_options
Options that are commonly supported by most FSType-specific command
modules. The following options are
available:
Example mount:
Typical examples:
Note 1:
-------
If you specify only the Directory parameter, the mount command takes it
to be the name of the directory or file on which
a file system, directory, or file is usually mounted (as defined in
the /etc/filesystems file). The mount command looks up
the associated device, directory, or file and mounts it. This is the
most convenient way of using the mount command,
because it does not require you to remember what is normally mounted on
a directory or file. You can also specify only
the device. In this case, the command obtains the mount point from
the /etc/filesystems file.
The mount all command causes all file systems with the mount=true
attribute to be mounted in their normal places.
This command is typically used during system initialization, and the
corresponding mounts are referred to as
automatic mounts.
Example /etc/filesystems stanzas on AIX:
----------------------------------------
(This is the file the mount command consults; the stanzas document what
"mount" and "mount all" will do.)
$ cat /etc/filesystems
/var:
dev = /dev/hd9var
vfs = jfs2
log = /dev/hd8
mount = automatic
check = false
type = bootfs
vol = /var
free = false
/tmp:
dev = /dev/hd3
vfs = jfs2
log = /dev/hd8
mount = automatic
check = false
vol = /tmp
free = false
/opt:
dev = /dev/hd10opt
vfs = jfs2
log = /dev/hd8
mount = true
check = true
vol = /opt
free = false
/dev/lv01 = /u01
/dev/lv02 = /u02
/dev/lv03 = /u03
/dev/lv04 = /data
/dev/lv00 = /spl
fsstat command:
---------------
On Solaris, the following example shows the statistics for each file
operation for "/" (using the -f option):
$ fsstat -f /
Mountpoint: /
operation #ops bytes
open 8.54K
close 9.8K
read 43.6K 65.9M
write 1.57K 2.99M
ioctl 2.06K
setfl 4
getattr 40.3K
setattr 38
access 9.19K
lookup 203K
create 595
remove 56
link 0
rename 9
mkdir 19
rmdir 0
readdir 2.02K 2.27M
symlink 4
readlink 8.31K
fsync 199
inactive 2.96K
fid 0
rwlock 47.2K
rwunlock 47.2K
seek 29.1K
cmp 42.9K
frlock 4.45K
space 8
realvp 3.25K
getpage 104K
putpage 2.69K
map 13.2K
addmap 34.4K
delmap 33.4K
poll 287
dump 0
pathconf 54
pageio 0
dumpctl 0
dispose 23.8K
getsecattr 697
setsecattr 0
shrlock 0
vnevent 0
fuser command:
--------------
AIX:
Purpose
Identifies processes using a file or file structure.
Syntax
fuser [ -c | -d | -f ] [ -k ] [ -u ] [ -x ] [ -V ]File ...
Description
The fuser command lists the process numbers of local processes that use
the local or remote files
specified by the File parameter. For block special devices, the command
lists the processes that use
any file on that device.
Flags
To list the process numbers and user login names of processes using
the /etc/filesystems file, enter:
# fuser -u /etc/filesystems
Either command lists the process number and user name, and then
terminates each process that is using
the /dev/hd1 (/home) file system. Only the root user can terminate
processes that belong to another user.
You might want to use this command if you are trying to unmount the
/dev/hd1 file system and a process
that is accessing the /dev/hd1 file system prevents this.
To list all processes that are using a file which has been deleted from
a given file system, enter:
# fuser -d /usr
- To kill all processes accessing the file system /home in any way.
# fuser -km /home
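Putting this together, a typical sequence to free up and unmount a busy
file system could look like the following sketch (flags as in the AIX
syntax above; check your platform's man page):
# umount /home             (fails with "Device busy")
# fuser -cu /home          (list the pids and login names of the users)
# fuser -kc /home          (terminate those processes)
# umount /home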
Short note on stopping and starting NFS. See other sections for more
detail.
-- AIX:
The following subsystems are part of the nfs group: nfsd, biod,
rpc.lockd, rpc.statd, and rpc.mountd.
The nfs subsystem (group) is under control of the "resource controller",
so starting and stopping nfs
is actually easy
# startsrc -g nfs
# stopsrc -g nfs
Or use smitty.
-- Redhat Linux:
# /sbin/service nfs restart
# /sbin/service nfs start
# /sbin/service nfs stop
-- Solaris:
If the nfs daemons aren't running, then you will need to run:
# /etc/init.d/nfs.server start
-- HP-UX:
Issue the following command on the NFS server to start all the necessary
NFS processes (HP):
# /sbin/init.d/nfs.server start
# cd /sbin/init.d
# ./nfs.client start
===========================================
3. Change ownership file/dir, adding users:
===========================================
Examples:
chown -R oracle:oinstall /opt/u01
chown -R oracle:oinstall /opt/u02
chown -R oracle:oinstall /opt/u03
chown -R oracle:oinstall /opt/u04
# groupadd dba
# useradd oracle
# mkdir /usr/oracle
# mkdir /usr/oracle/9.0
# chown -R oracle:dba /usr/oracle
# touch /etc/oratab
# chown oracle:dba /etc/oratab
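As a slightly fuller sketch (Solaris/Linux syntax; the UID, home directory
and shell are example values), creating an oracle account in one go:
# groupadd -g 300 dba
# useradd -u 300 -g dba -d /export/home/oracle -s /usr/bin/ksh -m oracle
# passwd oracle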
>>> Solaris:
Whether ordinary users may give away their files with chown is controlled
by the rstchown parameter in /etc/system:
set rstchown=1     (the default: POSIX behavior, only root can change the
                    ownership of a file)
set rstchown=0     (file owners may chown their own files to other users)
When a system disallows use of the chown command, you can expect to see a
dialog like this:
Examples:
# passwd tempusr
UID must be unique and is typically a number between 100 and 60002
GID is a number between 0 and 60002
When the POSIX or Korn Shell is your login shell, it looks for these
following files and executes them, if they exist:
/etc/profile
This default system file is executed by the shell program and sets up
default environment variables.
.profile
If this file exists in your home directory, it is executed next at
login.
ENV
When you invoke the shell, it looks for a shell variable called ENV
which is usually set in your .profile. ENV is evaluated and if it is set
to an existing file, that file is executed. By convention, ENV is
usually set to .kshrc but may be set to any file name.
These files provide the means for customizing the shell environment to
fit your needs.
# mkuser albert
The mkuser command does not create password information for a user. It
initializes the password field
with an * (asterisk). Later, this field is set with the passwd or pwdadm
command.
New accounts are disabled until the passwd or pwdadm commands are used
to add authentication
information to the /etc/security/passwd file.
You can use the Users application in Web-based System Manager to change
user characteristics. You could also
use the System Management Interface Tool (SMIT) "smit mkuser" fast path
to run this command.
There are two stanzas, user and admin, that can contain all defined
attributes except the id and admin attributes.
The mkuser command generates a unique id attribute. The admin attribute
depends on whether the -a flag is used with
the mkuser command.
user:
pgroup = staff
groups = staff
shell = /usr/bin/ksh
home = /home/$USER
auth1 = SYSTEM
To create the davis user account with the default values in the
/usr/lib/security/mkuser.default file, type:
# mkuser davis
Only the root user or users with the UserAdmin authorization can create
davis as an administrative user.
To create the davis user account and set the su attribute to a value of
false, type:
# mkuser su=false davis
smit <Enter>
The utility displays a form for adding new user information. Use the
<Up-arrow> and <Down-arrow> keys to move through
the form. Do not use <Enter> until you are finished and ready to exit
the screen.
Fill in the appropriate fields of the Create User form (as listed in
Create User Form) and press <Enter>.
The utility exits the form and creates the new user.
-- Example 1:
Add user john to the system with all of the default attributes:
# useradd john
Add the user john to the system with a UID of 222 and a primary group of
staff:
# useradd -u 222 -g staff john
-- Example 2:
You can use tools like useradd or groupadd to create new users and
groups from the shell prompt.
But an easier way to manage users and groups is through the graphical
application, User Manager.
Or invoke the Gnome Linuxconf GUI Tool by typing "linuxconf". In Red Hat
Linux, linuxconf is found in the
/bin directory.
================================
4. Change filemode, permissions:
================================
Examples:
---------
To remove read, write and execute permissions on the file biglist for the
group and others:
% chmod go-rwx biglist
make executable:
% chmod +x mycommand
set mode:
% chmod 644 filename
rwxrwxrwx=777
rw-rw-rw-=666
rw-r--r--=644 corresponds to umask 022
r-xr-xr-x=555
rwxrwxr-x=775
1 = execute
2 = write
4 = read
so a file with, say 640, means, the owner can read and write (4+2=6),
the group can read (4)
and everyone else has no permission to use the file (0).
chmod -R a+X .
This command would set the executable bit (for all users) of all
directories and executables
below the current directory that presently have an execute bit set. Very
helpful when you want to set
all your binary files executable for everyone other than you without
having to set the executable bit
of all your conf files, for instance. *wink*
chmod -R g+w .
This command would set all the contents below the current directory
writable by your current group.
chmod -R go-rwx .
This command would remove permissions for group and world users without
changing the bits for the file owner. Now you don't have to worry that
'find . -type f -exec chmod 600 {} \;' will make your binary files
non-executable. Further, you don't need to run an additional command to
chmod your directories.
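As a quick illustration of how the symbolic and numeric forms relate (the
file name is just an example):
% chmod 750 myscript.sh               (numeric: rwxr-x---)
% chmod u=rwx,g=rx,o= myscript.sh     (the same mode in symbolic notation)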
========================
5. About the sticky bit:
========================
- This info is valid for most Unix OS including Solaris and AIX:
----------------------------------------------------------------
A 't' or 'T' as the last character of the "ls -l" mode characters
indicates that the "sticky" (save text image) bit is set. See ls(1) for
an explanation of the distinction between 't' and 'T'.
The sticky bit has a different meaning, depending on the type of file it
is set on...
[Example]
drwxrwxrwt 104 bin bin 14336 Jun 7 00:59 /tmp
Only root is permitted to turn the sticky bit on or off. In addition the
sticky bit applies to anyone
who accesses the file. The syntax for setting the sticky bit on a dir
/foo directory is as follows:
chmod +t /foo
[Example]
-r-xr-xr-t   6 bin      bin        241664 Nov 14  2000 /usr/bin/vi
Solaris:
--------
castle% ls -l /var/spool/uucppublic
drwxrwxrwt 2 uucp uucp 512 Sep 10 18:06 uucppublic
castle%
You can set sticky bit permissions by using the chmod command to assign
the octal value 1 as the first number
in a series of four octal values. Use the following steps to set the
sticky bit on a directory:
1. If you are not the owner of the file or directory, become superuser.
2. Type chmod <1nnn> <filename> and press Return.
3. Type ls -l <filename> and press Return to verify that the
permissions of the file have changed.
The following example sets the sticky bit permission on the pubdir
directory:
# chmod 1777 pubdir
================
6. About SETUID:
================
The real user ID identifies the owner of the process, the effective uid
is used in most
access control decisions, and the saved uid stores a previous user ID so
that it
can be restored later.
Similarly, a process has three group IDs.
When a process is created by fork, it inherits the three uid's from the
parent process.
When a process executes a new file by exec..., it keeps its three uid's
unless the
set-user-ID bit of the new file is set, in which case the effective uid
and saved uid
are assigned the user ID of the owner of the new file.
castle% ls -l /usr/bin/passwd
-r-sr-sr-x 3 root sys 96796 Jul 15 21:23 /usr/bin/passwd
castle%
You set setuid permissions by using the chmod command to assign the octal
value 4 as the first number in a series of four octal values. Use the
following steps to set setuid permissions:
1. If you are not the owner of the file or directory, become superuser.
2. Type chmod <4nnn> <filename> and press Return.
3. Type ls -l <filename> and press Return to verify that the
permissions of the file have changed.
castle% ls -l /usr/bin/mail
-r-x--s--x   1 bin      mail       64376 Jul 15 21:27 /usr/bin/mail
castle%
Setgid permissions are set in the same way, but with the octal value 2 as
the first number:
1. If you are not the owner of the file or directory, become superuser.
2. Type chmod <2nnn> <filename> and press Return.
3. Type ls -l <filename> and press Return to verify that the
permissions of the file have changed.
The following example sets setgid permission on the myprog2 file:
# chmod 2551 myprog2
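To see the setuid variant on an ordinary file (myprog is a hypothetical
program):
# chmod 4755 myprog      (the mode becomes -rwsr-xr-x: the owner execute
                          bit shows as 's')
# ls -l myprog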
=========================
7. Find command examples:
=========================
Introduction
The find command allows the Unix user to process a set of files and/or
directories in a file subtree.
EXAMPLES
--------
# find . -name rc.conf -print
This command will search the current directory and all subdirectories for
a file named rc.conf.
Note: The -print option will print out the path of any file that is found
with that name. In general, -print will print out the path of any file
that meets the find criteria.
# find . -name rc.conf -exec chmod o+r '{}' \;
This command will search the current directory and all subdirectories.
All files named rc.conf will be processed by the "chmod o+r" command.
The argument '{}' inserts each found file into the chmod command line.
The \; argument indicates the exec command line has ended.
The end result of this command is that all rc.conf files have the "other"
permissions set to read access (if the operator is the owner of the
files).
# find . -exec grep -l "some string" '{}' \;
This command will search the current directory and all subdirectories.
All files that contain the string will have their path printed to
standard output.
# find . -type f -exec grep -l "CI_ADJ_TYPE" {} \;
This command searches all files in all subdirectories for the text
CI_ADJ_TYPE.
# find / -size +1000000c -print
This command will find all files in the root directory larger than 1 MB.
thread 1:
---------
olddate="200407010001"
newdat="200407312359"
touch -t $olddate ./tmpoldfile
touch -t $newdat ./tmpnewfile
find /path/to/directory -type f -anewer ./tmpoldfile ! -anewer ./tmpnewfile
The "-anewer" test compares the access time (available in GNU and BSD
find); use plain "-newer" to compare the modification time instead.
thread 2:
---------
On to find, you can find -newer and then ! -newer, like this:
$ find /dir -newer start_date ! -newer stop_date -print
Combine that with ls -l, you get:
$ find /dir -newer start_date ! -newer stop_date -print0 | xargs -0 ls
-l
(Or you can try -exec to execute ls -l. I am not sure of the format, so
you have to muck around a little bit)
HTH
thread 3:
---------
thread 4:
---------
Other examples:
---------------
# find . -name file -print
# find / -name $1 -exec ls -l {} \;
* Search and list all files from current directory and down for the
string ABC:
find ./ -name "*" -exec grep -H ABC {} \;
find ./ -type f -print | xargs grep -H "ABC" /dev/null
egrep -r ABC *
* Find all files of a given type from current directory on down:
find ./ -name "*.conf" -print
* Find all user files larger than 5Mb:
find /home -size +5000000c -print
* Find all files owned by a user (defined by user id number. see
/etc/passwd) on the system: (could take a very long time)
find / -user 501 -print
* Find all files created or updated in the last five minutes: (Great for
finding effects of make install)
find / -cmin -5
* Find all users in group 20 and change them to group 102: (execute as
root)
find / -group 20 -exec chown :102 {} \;
* Find all suid and setgid executables:
find / \( -perm -4000 -o -perm -2000 \) -type f -exec ls -ldb {} \;
find / -type f -perm +6000 -ls
Example:
--------
cd /database/oradata/pegacc/archive
archdir=`pwd`
if [ "$archdir" = "/database/oradata/pegacc/archive" ]
then
find . -name "*.dbf" -mtime +5 -exec rm {} \;
else
echo "error in onderhoud PEGACC archives" >>
/opt/app/oracle/admin/log/archmaint.log
fi
Example:
--------
The following example shows how to find files larger than 400 blocks in
the current directory:
# find . -size +400 -print
Find can also be used to remove a file with a strange (for example,
unprintable) name via its inode number. First list the inode numbers:
# ls -li
Suppose the inode for this file is 153805. Use find -inum [inode] to make
sure that the file is correctly identified:
# find . -inum 153805 -print
Here, we see that it is. Then use the -exec functionality to do the
remove:
# find . -inum 153805 -exec rm {} \;
Note that if this strangely named file were not of zero length, it might
contain accidentally misplaced and wanted data. Then you might want to
determine what kind of data the file contains and move the file to some
temporary directory for further investigation, for example:
# find . -inum 153805 -exec mv {} unknown.file \;
This will rename the file to unknown.file, so you can easily inspect it.
COOL EXAMPLE: Using find and cpio to create really good backups:
----------------------------------------------------------------
If you want to copy a complete directory tree, preserving ownership,
permissions and links, then DO NOT USE "cp -R" or something similar.
Instead use "find" in combination with the "cpio" backup command.
# cd /dir1/dira
# find . | cpio -pvdm /dir2/dirb
In using the find command where you want to delete files older than a
certain date, you can use
commands like
find . -name "*.log" -mtime +30 -exec rm {} \; or
find . -name "*.dbf" -atime +30 -exec rm {} \;
Why should you choose, or not choose, between atime and mtime?
atime -- The atime--access time--is the time when the data of a file was
last accessed. Displaying the contents
of a file or executing a shell script will update a file's
atime, for example.
mtime -- The mtime--modify time--is the time when the actual contents of
a file was last modified.
This is the time displayed in a long directory listing (ls -l).
That's why backup utilities use the mtime when performing incremental
backups:
When the utility reads the data for a file that is to be included in a
backup, it does not
affect the file's modification time, but it does affect the file's
access time.
So for most practical reasons, if you want to delete logfiles (or other
files) older than a certain
date, its best to use the mtime attribute.
On AIX, the istat command shows a file's inode information:
pago-am1:/usr/local/bb>istat bb18b3.tar.gz
Inode 20 on device 10/9 File
Protection: rw-r--r--
Owner: 100(bb) Group: 100(bb)
Link count: 1 Length 427247 bytes
===================
8. Crontab command:
===================
crontab [ -e | -l | -r | -v | File ]
A crontab file contains entries for each cron job. Entries are separated
by newline characters.
Each crontab file entry contains six fields separated by spaces or tabs
in the following form:
minute hour day_of_month month weekday command
For example, the following entry runs a maintenance script at midnight on
every day of August:
0 0 * 8 * /u/harry/bin/maintenance
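The crontab file itself is managed with the flags shown in the syntax
line above, for example:
# crontab -l                  (list your current crontab)
# crontab -l > /tmp/cronjobs  (save a copy before you change anything)
# crontab -e                  (edit your crontab in $EDITOR)
# crontab /tmp/cronjobs       (replace your crontab with that file)
# crontab -r                  (remove your crontab entirely)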
Notes:
------
# /etc/init.d/cron stop
# /etc/init.d/cron start
Note 2:
-------
Or if you used a name other than "cronjobs", substitute the name you
selected for the occurrence of "cronjobs" above.
Note 3:
-------
# use /bin/sh to run commands, no matter what /etc/passwd says
SHELL=/bin/sh
# mail any output to `paul', no matter whose crontab this is
MAILTO=paul
#
# run at 5 minutes past every hour, from 6:00 through 18:00, every day
5 6-18 * * * /opt/app/oracle/admin/scripts/grepora.sh
# run at 2:15pm on the first of every month -- output mailed to paul
15 14 1 * * $HOME/bin/monthly
# run at 10 pm on weekdays, annoy Joe
0 22 * * 1-5 mail -s "It's 10pm" joe%Joe,%%Where are your kids?%
23 0-23/2 * * * echo "run 23 minutes after midn, 2am, 4am ..., everyday"
5 4 * * sun echo "run at 5 after 4 every sunday"
2>&1 means: redirect standard error (file descriptor 2) to wherever
standard output (file descriptor 1) is currently going, so both end up in
the same file or pipe.
Note 4:
-------
thread
Q:
> Isn't there a way to refresh cron to pick up changes made using
> crontab -e? I made the changes but the specified jobs did not run.
> I'm thinking I need to refresh cron to pick up the changes. Is this
> true? Thanks.
A:
Crontab -e should do that for you, that's the whole point of using
it rather than editing the file yourself.
Why do you think the job didn't run?
Post the crontab entry and the script. Give details of the version of
Tru64 and the patch level.
Then perhaps we can help you to figure out the real cause of the
problem.
Hope this helps
A:
I have seen the following problem when editing the cron file for another
user with:
crontab -e idxxxxxx
The changes were not picked up. A workaround is to reload the crontab as
the user himself:
su - idxxxxxx
crontab -l | crontab
or to edit it directly as that user:
su - idxxxxxx
crontab -e
Note 5:
-------
daemon    Defines whether the user can execute programs using the system
          resource controller (SRC). Possible values: true or false.
Note 6:
-------
su - user
and put the following in the crontab of that user:
* * * * * date >/tmp/elog
After checking the /tmp/elog file, which will rapidly fill with dates,
don't forget to remove the crontab entry shown above.
On many unix systems the scheduling "at" command and "atq" commands are
available.
With "at", you can schedule commands, and with "atq" you can view all
your, or other users, scheduled tasks.
To submit an at job, type at followed by the time that you would like
the program to execute. You'll see the at> prompt displayed and it's
here
that you enter the at commands. When you are finished entering the at
command, press control-d to exit the at prompt
and submit the job as shown in the following example:
# at 07:45am today
at> who > /tmp/log
at> <Press Control-d>
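Afterwards you can list and, if necessary, remove pending jobs (flags as
on Solaris; on Linux the equivalents are atq and atrm):
# at -l              (list your pending at jobs)
# at -r <job-id>     (remove a pending job)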
To show jobs:
# jobs
To show processes:
# ps
# ps -ef | grep ora
Stop a process:
# kill -9 3535 (3535 is the pid, process id)
Another way:
Use who to check out your current users and their terminals. Kill all
processes related to a specific terminal:
# fuser -k /dev/pts[#]
or
When working with the UNIX operating system, there will be times when
you will want to run commands that are immune
to log outs or unplanned login session terminations. This is especially
true for UNIX system administrators.
The UNIX command for handling this job is the nohup (no hangup) command.
Normally when you log out, or your session terminates unexpectedly, the
system will kill all processes you have started.
Starting a command with nohup counters this by arranging for all
stopped, running, and background jobs to ignore
the SIGHUP signal.
You may optionally add an ampersand to the end of the command line to
run the job in the background:
nohup command [arguments] &
If you do not redirect output from a process kicked off with nohup, both
standard output (stdout) and
standard error (stderr) are sent to a file named nohup.out. This file
will be created in $HOME (your home directory)
if it cannot be created in the working directory. Real-time monitoring
of what is being written to nohup.out
can be accomplished with the "tail -f nohup.out" command.
The nohup command runs the command specified by the Command parameter
and any related Arg parameters,
ignoring all hangup (SIGHUP) signals. Use the nohup command to run
programs in the background after logging off.
To run a nohup command in the background, add an & (ampersand) to the
end of the command.
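A typical invocation, putting the above together (the script name and log
path are just examples):
# nohup /home/oracle/scripts/longjob.sh > /tmp/longjob.out 2>&1 &
# tail -f /tmp/longjob.out      (watch the job's output in real time)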
==========================================
9. Backup commands, TAR, and Zipped files:
==========================================
For Solaris as well as AIX, and many other Unixes, the following commands
can be used:
tar, cpio, dd, gzip/gunzip, compress/uncompress, backup and restore.
Very important:
If you will back up to tape, make sure you know which device name is the
"rewinding" class and which is the "non-rewinding" class of your tape
device.
9.1 tar: Short for "Tape Archiver":
===================================
-c create
-r append
-x extract
-v verbose
-t list
Extract the contents of example.tar and display the files as they are
extracted.
# tar -xvf example.tar
Create a tar file named backup.tar from the contents of the directory
/home/ftp/pub
# tar -cf backup.tar /home/ftp/pub
If you use an absolute path, you can only restore into the same (absolute)
destination directory.
If you use a relative path, you can restore into any directory.
In this case, use tar with a relative pathname; for example, if you want
to back up /home/bcalkins, change to that directory and use:
# cd /home/bcalkins
# tar -cvf /dev/rmt0 .
Example:
--------
mt -f /dev/rmt1 rewind
mt -f /dev/rmt1.1 fsf 6
tar -xvf /dev/rmt1.1 /data/download/expdemo.zip
Most common errors messages with tar:
-------------------------------------
Possible Causes
From the command line, you issued the tar command to extract files from
an archive that was not created
with the tar command.
Possible Causes
You issued the tar command to read an archive from a tape device that
has a different block size
than when the archive was created.
Solution:
Actually, on AIX this is not OK. The tape will rewind after each tar
command, so effectively you will end up with ONLY the last backup on the
tape. You should use the non-rewinding class instead, for example
/dev/rmt1.1 rather than /dev/rmt1.
The following table shows the names of the rmt special files and their
characteristics:

/dev/rmt*      rewind on close,    no retension on open
/dev/rmt*.1    no rewind on close, no retension on open
/dev/rmt*.2    rewind on close,    retension on open
/dev/rmt*.3    no rewind on close, retension on open
(rmt*.4 through rmt*.7 repeat this pattern at the second density setting)

mt -f /dev/rmt1 rewind
mt -f /dev/rmt1.1 fsf 2     in order to put the pointer to the beginning
                            of file 3 (counting from 1).
Another example:
mt -f /dev/rmt1 rewind
mt -f /dev/rmt1.1 fsf 8
tar -xvf /dev/rmt1.1 /u01/oradata/spltrain/temp01.dbf
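So a multi-archive backup to one tape could look like this sketch (the
/data* paths are examples), using the non-rewinding device for every
write and rewinding explicitly at the end:
# mt -f /dev/rmt1 rewind
# tar -cvf /dev/rmt1.1 /data1      (archive 1; the tape stays positioned)
# tar -cvf /dev/rmt1.1 /data2      (archive 2)
# tar -cvf /dev/rmt1.1 /data3      (archive 3)
# mt -f /dev/rmt1 rewind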
Tapedrives on Solaris:
----------------------
You can also add a letter to specify the density, and an "n" suffix for
the no-rewind class, using the following format:
/dev/rmt/Xy[n]
where X is the drive number and y is the density (l=low, m=medium,
h=high, u=ultra, c=compressed). For example, /dev/rmt/0hn is the first
drive, high density, no rewind on close.
#!/usr/bin/ksh
# VERSION: 0.1
# DATE   : 27-12-2005
# PURPOSE OF THE SCRIPT:
# - STOP THE APPLICATIONS
# - THEN BACK UP TO TAPE
# - RESTART THE APPLICATIONS
# BEFOREHAND, CHECK THAT THE TAPE LIBRARY IS LOADED, VIA
# "/opt/backupscripts/load_lib.sh"
BACKUPLOG=/opt/backupscripts/backup_to_rmt1.log
export BACKUPLOG
########################################
# 1. LOG THE START TIME                #
########################################
########################################
# 2. STOP THE APPLICATIONS             #
########################################
########################################
# 3. BACKUP COMMANDS                   #
########################################
# (assumed, not in the original fragment: DAYNAME must be set for the
# case statement below, e.g.:)
DAYNAME=`date +%a`
case $DAYNAME in
Tue) tapeutil -f /dev/smc0 move 256 4116
tapeutil -f /dev/smc0 move 4101 256
;;
Wed) tapeutil -f /dev/smc0 move 256 4117
tapeutil -f /dev/smc0 move 4100 256
;;
Thu) tapeutil -f /dev/smc0 move 256 4118
tapeutil -f /dev/smc0 move 4099 256
;;
Fri) tapeutil -f /dev/smc0 move 256 4119
tapeutil -f /dev/smc0 move 4098 256
;;
Sat) tapeutil -f /dev/smc0 move 256 4120
tapeutil -f /dev/smc0 move 4097 256
;;
Mon) tapeutil -f /dev/smc0 move 256 4121
tapeutil -f /dev/smc0 move 4096 256
;;
esac
sleep 50
sleep 10
# TEMPORARY ACTION
date >> /opt/backupscripts/running.log
ps -ef | grep pmon >> /opt/backupscripts/running.log
ps -ef | grep BBL >> /opt/backupscripts/running.log
ps -ef | grep was >> /opt/backupscripts/running.log
who >> /opt/backupscripts/running.log
defragfs /prj
########################################
# 4. START THE APPLICATIONS            #
########################################
sleep 30
########################################
# 5. LOG THE END TIME                  #
########################################
weekday=`date +%a-%A`
echo $weekday
Thu-Thursday
%a
Displays the locale's abbreviated weekday name.
%A
Displays the locale's full weekday name.
%b
Displays the locale's abbreviated month name.
%B
Displays the locale's full month name.
%c
Displays the locale's appropriate date and time
representation. This is the default.
%C
Displays the first two digits of the four-digit year as a
decimal number (00-99). A year is divided by 100 and truncated to an
integer.
%d
Displays the day of the month as a decimal number (01-31).
In a two-digit field, a 0 is used as leading space fill.
%D
Displays the date in the format equivalent to %m/%d/%y.
%e
Displays the day of the month as a decimal number (1-31). In
a two-digit field, a blank space is used as leading space fill.
9.2 compress:
=============
# compress -v bigfile.exe
Would compress bigfile.exe and rename that file to bigfile.exe.Z.
# uncompress *.Z
would uncompress the files *.Z
9.3 gzip:
=========
# gzip filename.tar
To decompress:
# gzip -d filename.tar.gz
# gunzip filename.tar.gz
# gzip -d users.dbf.gz
9.4 bzip2:
==========
#bzip2 filename.tar
This will become filename.tar.bz2
9.5 dd:
=======
Solaris:
--------
to duplicate a tape:
# dd if=/dev/rmt/0 of=/dev/rmt/1
AIX:
----
The same command syntax applies to IBM AIX. Here is an example on an AIX
pSeries machine with a floppy drive:
clone a diskette:
# dd if=/dev/fd0 of=/tmp/ddcopy
# dd if=/tmp/ddcopy of=/dev/fd0
Note:
9.6 cpio:
=========
solaris:
--------
cpio <mode><option>
copy-out: cpio -o
copy_in : cpio -i
pass : cpio -p
# cd /var/bigspace
# gunzip -c Linux9i_Disk1.cpio.gz | cpio -idmv
# gunzip -c Linux9i_Disk2.cpio.gz | cpio -idmv
# gunzip -c Linux9i_Disk3.cpio.gz | cpio -idmv
(cpio -i reads the archive from standard input, so the .gz files must
first be decompressed, or streamed through gunzip -c as shown here.)
# cd /work
# ls -R | cpio -ocB > /dev/rmt/0
# cd /work
# cpio -icvdB < /dev/rmt/0
AIX uses the same syntax. Usually, you should use the following command:
Example:
--------
Just cd to the directory that you want to clone and use a command
similar to the following examples.
cd /spl/SPLDEV1
find . -print | cpio -pdmv /spl/SPLDEVT
find . -print | cpio -pdmv /backups2/data
Example:
--------
Example:
--------
# cd filesystem1
Example:
--------
Copying directories
Both cpio and tar may be used to copy directories while preserving
ownership, permissions, and directory structure.
cpio example:
cd fromdir
find . | cpio -pdumv todir
tar example:
cd fromdir; tar cf - . | (cd todir; tar xfp -)
Errors:
-------
cpio: 0511-903
cpio: 0511-904
The pax utility supports several archive formats, including tar and
cpio.
-r: Read mode .when -r is specified, pax extracts the filenames and
directories found in the archive.
The archive is read from disk or tape. If an extracted file is a
directory, the hierarchy
is extracted as well. The extracted files are created relative to
the current directory.
-w: Write mode. If you want to create an archive, you use -w.
Pax writes the contents of the file to the standard output in an
archive format specified
by the -x option.
-rw: Copy mode. When both -r and -w are specified, pax copies the
specified files to
the destination directory.
Examples:
To list a verbose table of contents of an archive stored on tape rmt0,
use "None" mode (neither -r nor -w, which simply lists the archive) with
the -f flag:
# pax -v -f /dev/rmt0
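A minimal create/extract round trip with pax (the paths are examples):
# cd /home/albert
# pax -w -f /tmp/home.pax .        (write mode: archive the current tree)
# mkdir /tmp/restore; cd /tmp/restore
# pax -r -v -f /tmp/home.pax       (read mode: extract relative to here)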
9.8 pkzip25:
============
PKZIP Usage:
Examples:
extract     extracts files from a .ZIP file. It is a configurable switch;
            default = all.
Example:
Examples:
---------
To generate a system backup with a new /image.data file, but exclude the
files in directory /home/user1/tmp,
create the file "/etc/exclude.rootvg" containing the line
/home/user1/tmp/, and type:
# mksysb -i -e /dev/rmt1
This command will back up the /home/user1/tmp directory itself, but not
the files it contains.
There will be four images on the mksysb tape, and the fourth image will
contain ONLY rootvg JFS or JFS2
mounted file systems. The target tape drive must be local to create a
bootable tape.
+---------------------------------------------------------+
| Bosboot | Mkinsttape | Dummy TOC | rootvg |
| Image | Image | Image | data |
|-----------+--------------+-------------+----------------|
|<----------- Block size 512 ----------->| Blksz defined |
| | by the device |
+---------------------------------------------------------+
Special notes:
--------------
Question:
I'm attempting to restore a mksysb tape to a system that only has 18GB
of drive space available for the Rootvg.
Does the mksysb try to restore these mirrored LVs, or does it just make
one copy?
If it is trying to rebuild the mirror, is there a way that I can get
around that?
Answer:
I had this same problem and received a successful resolution. I place
those same tasks here:
1) Create a new image.data file, run mkszfile file.
2) Change the image.data as follows:
a) cd /
b) vi image.data
c) In each lv_data stanza of this file, change the values of the copies
line by one-half (i.e. copies = 2, change to copies = 1)
Also, change the number of Physical Volumes "hdisk0 hdisk1" to "hdisk0".
d) Save this file.
3) Create another mksysb from the command line that will utilize the
newly edited image.data file by the command:
mksysb /dev/rmt0 (Do not use smit, and do not run with the -i flag;
both will generate a new image.data file.)
4) Use this new mksysb to restore your system on other box without
mirroring.
---------------------------------------------------------
$ tctl fsf 3
$ restore -xvf /dev/rmt0.1 ./your/file/name
For example, if you need to get the vi command back, put the mksysb tape
in the tape drive
(in this case, /dev/rmt0) and do the following:
Further explanation of why you must use fsf 3 (forward space over 3 file
marks):
The format of the tape is as follows:
1. A BOS boot image
2. A BOS install image
3. A dummy Table Of Contents
4. The system backup of the rootvg
So if you just need to restore some files, first forward the tape
pointer to position 3, counting from 0.
With a mksysb image on disk you don't have any positioning to do, like
with
a tape.
Prepare for migrating to the AIX 5.3 BOS by completing the following
steps:
The following steps migrate your current version of the operating system
to AIX 5.3.
If you are using an ASCII console that was not defined in your previous
system, you must define it.
For more information about defining ASCII consoles, see Step 3. Setting
up an ASCII terminal.
Turn the system unit power switch from Off (0) to On (|).
When the system beeps twice, press F5 on the keyboard (or 5 on an ASCII
terminal). If you have a graphics display,
you will see the keyboard icon on the screen when the beeps occur. If
you have an ASCII terminal
(also called a tty terminal), you will see the word "keyboard" when the
beeps occur.
Note: If your system does not boot using the F5 key (or the 5 key on an
ASCII terminal), refer to your
hardware documentation for information about how to boot your system
from an AIX product CD.
The system begins booting from the installation media. The mksysb
migration installation proceeds
as an unattended installation (non-prompted) unless the
MKSYSB_MIGRATION_DEVICE is the same CD or DVD drive
as the one being used to boot and install the system. In this case, the
user is prompted to switch
the product CD for the mksysb CD or DVD(s) to restore the image.data and
the /etc/filesystems file.
After this happens the user is prompted to reinsert the product media
and the installation continues.
When it is time to restore the mksysb image, the same procedure repeats.
The BOS menus do not currently support mksysb migration, so they cannot
be loaded. In a traditional migration,
if there are errors that can be fixed by prompting the user for
information through the menus,
the BOS menus are loaded. If such errors or problems are encountered
during mksysb migration,
the installation asserts, and an error stating that the migration cannot
continue is displayed.
Depending on the error that caused the assertion, information specific
to the error might be displayed.
If the installation asserts, the LED shows "088".
Question:
I have to clone a standalone 6H1 equipped with a 4mm tape, from
another 6H1 which is node of an SP and which does not own a tape !
The consequence is that my source mksysb is a file that is recorded in
/spdata/sys1/install/aixxxx/images
How will I copy this file to a tape to create the correct mksysb tape
that could be used to restore on my target machine ?
Answer:
Use the following method if the two servers are at the same AIX level and
kernel type (32/64 bits, JFS or JFS2):
- both servers must be able to communicate over an IP network and have a
.rhosts file configured (for using rsh)
cp /var/adm/ras/bosinst.data /bosinst.data
mkszfile
copy these files (bosinst.data and image.data) under "/" on the remote
system
on the server:
tctl -f /dev/rmt0 status
if the block size is not 512, set it:
chdev -l rmt0 -a block_size=512
echo " Dummy tape TOC" | dd of=/dev/rmt0.1 conv=sync bs=512 > /dev/null
2>&1     (create the third file, the "dummy TOC")
mknod /tmp/pipe p
(the pipe is then used, by commands not shown in this fragment, to write
the fourth file: the rootvg image in backup/restore format)
You can use Web-based System Manager or SMIT to create a root volume
group backup on CD or DVD with the
ISO9660 format, as follows:
Use the Web-based System Manager Backup and Restore application and
select System backup wizard method.
This method lets you create bootable or non-bootable backups on CD-R,
DVD-R, or DVD-RAM media.
OR
The following procedure shows you how to use SMIT to create a system
backup to CD.
(The SMIT procedure for creating a system backup to an ISO9660 DVD is
similar to the CD procedure.)
Type the smit mkcd fast path. The system asks whether you are using an
existing mksysb image.
Type the name of the CD-R device. (This can be left blank if the Create
the CD now? field is set to no.)
If you are creating a mksysb image, select yes or no for the mksysb
creation options, Create map files?
and Exclude files?. Verify the selections, or change as appropriate.
The mkcd command always calls the mksysb command with the flags to
extend /tmp.
Enter the file system in which to store the mksysb image. This can be a
file system that you created in the rootvg,
in another volume group, or in NFS-mounted file systems with read-write
access. If this field is left blank,
the mkcd command creates the file system, if the file system does not
exist, and removes it when the command completes.
Enter the file systems in which to store the CD or DVD file structure
and final CD or DVD images. These can be
file systems you created in the rootvg, in another volume group, or in
NFS-mounted file systems. If these fields
are left blank, the mkcd command creates these file systems, and removes
them when the command completes,
unless you specify differently in later steps in this procedure.
If you did not enter any information in the file systems' fields, you
can select to have the mkcd command either
create these file systems in the rootvg, or in another volume group. If
the default of rootvg is chosen
and a mksysb image is being created, the mkcd command adds the file
systems to the exclude file and calls
the mksysb command with the -e exclude files option.
If you change the Remove final images after creating CD? field to no,
the file system for the CD images
(that you specified earlier in this procedure) remains after the CD has
been recorded.
If you change the Create the CD now? field to no, the file system for
the CD images (that you specified earlier
in this procedure) remains. The settings that you selected in this
procedure remain valid, but the CD is not
created at this time.
If you intend to use an Install bundle file, type the full path name to
the bundle file. The mkcd command copies
the file into the CD file system. You must have the bundle file already
specified in the BUNDLES field,
either in the bosinst.data file of the mksysb image or in a user-
specified bosinst.data file. When this
option is used to have the bundle file placed on the CD, the location in
the BUNDLES field of the bosinst.data
file must be as follows:
/../usr/sys/inst.data/user_bundles/bundle_file_name
You can specify the full path name to a customization script in the
Customization script field. If given,
the mkcd command copies the script to the CD file system. You must have
the CUSTOMIZATION_FILE field already set
in the bosinst.data file in the mksysb image or else use a user-
specified bosinst.data file with the CUSTOMIZATION_FILE field set. The
mkcd command copies this file to the RAM file system. Therefore, the
path in the CUSTOMIZATION_FILE field must be as follows:
/../filename
You can use your own bosinst.data file, rather than the one in the
mksysb image, by typing the full path name
of your bosinst.data file in the User supplied bosinst.data file field.
To turn on debugging for the mkcd command, set Debug output? to yes. The
debug output goes to the smit.log.
You can use your own image.data file, rather than the image.data file in
the mksysb image, by typing the
full path name of your image.data file for the User supplied image.data
file field.
== Technote:
APAR status
Closed as program error.
Error description
On a system, that does not have tape support
installed, running mkszfile will show the
following error:
0301-150 bosboot: Invalid or no boot device
specified.
Local fix
Install device support for scsi tape devices.
Problem summary
Error message when creating backup if devices.scsi.tape.rte
not installed even if the system does not have a tape drive.
Problem conclusion
Redirect message to /dev/null.
Temporary fix
Ignore message.
Comments
APAR information
APAR number IY52551
Reported component name AIX 5L POWER V5
Reported component ID 5765E6200
Reported release 520
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2004-01-12
Closed date 2004-01-12
Last modified date 2004-02-27
== Technote:
APAR status
Closed as program error.
Error description
If /dev/ipldevice is missing, mkszfile will show the
bosboot usage statement.
Problem conclusion
Do not run bosboot against /dev/ipldevice.
Temporary fix
Comments
APAR information
APAR number IY95261
Reported component name AIX 5.3
Reported component ID 5765G0300
Reported release 530
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2007-02-22
Closed date 2007-02-22
Last modified date 2007-06-06
Publications Referenced
Fix information
Fixed component name AIX 5.3
Fixed component ID 5765G0300
== thread:
Q:
>
> Someone out there knows the fix for this one; if you get a moment,
would you
> mind giving me the fix?
>
>
> # mksysb -i /dev/rmt0
>
> /dev/ipldevice not found
>
A:
Q:
I was installing Atape driver and noticed bosboot failure when installp
calls bosboot with /dev/ipldevice. Messages below:
ln /dev/rhdisk0 /dev/ipldevice
A:
Are you using EMC disk? There is a known problem with the later
Powerpath versions where the powerpath startup script removes the
/dev/ipldevice file if there is more than one device listed in the
bootlist.
A:
Yes, running EMC PowerPath 4.3 for AIX, with EMC Clariion CX600 Fibre
disks attached to SAN. I always boot from, and mirror the OS on IBM
internal disks. We order 4 internal IBM drives. Two for primary OS and
mirror, the other two for alt_disk and mirrors.
Thanks for the tip. I will investigate at EMC Powerlink site for fix. I
know PowerPath 4.4 for AIX is out, but still pretty new.
A:
-----Original Message-----
From: IBM AIX Discussion List [mailto:[email protected]] On Behalf Of
Robert Miller
Sent: Wednesday, April 07, 2004 6:13 PM
To: [email protected]
Subject: Re: 64 Bit Kernel
It may be one of those odd IBMisms where they want to call something a
certain name so they put it in as a link to the actual critter...
Looking on my box, the /dev/ipldevice has the same device major and
minor numbers as hdisk0 - tho it is interesting that ipldevice is a
character device, where a drive is usually a block device:
mybox:rmiller$ ls -l /dev/ipl*
crw------- 2 root system 23, 0 Jan 15 2002 /dev/ipldevice
mybox:rmiller$ ls -l /dev/hdisk0
brw------- 1 root system 23, 0 Sep 13 2002 /dev/hdisk0
A:
> Hi,
ln /dev/rhdisk0 /dev/ipldevice
== thread:
When running the command "bosboot -ad /dev/ipldevice" in IBM AIX, you
get the following error:
A device specified with the bosboot -d command is not valid. The bosboot
command was unable to finish processing
because it could not locate the required boot device. The installp
command calls the bosboot command
with /dev/ipldevice. If this error does occur, it is probably because
/dev/ipldevice does not exist.
/dev/ipldevice is a link to the boot disk.
# ls -l /dev/ipldevice
ls: 0653-341 The file /dev/ipldevice does not exist.
2) In this case, it does not exist. To identify the boot disk, enter
"lslv -m hd5". The boot disk name displays.
# lslv -m hd5
hd5:N/A
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0001 hdisk4 0001 hdisk1
# ln /dev/boot_device_name /dev/ipldevice
(An example of boot_device_name is rhdisk0.)
In my case, I ran:
# ln /dev/rhdisk4 /dev/ipldevice
Q:
Hello, we have a weird and urgent problem with a few of our p595 LPARs
running AIX 5.3. The LPARs ran AIX 5.3 TL 7
and booted off EMC SAN disks, using EMC Powerpath. Every boot we run
"pprootdev on" and
"pprootdev fix". We can issue "bosboot -a" and we can reboot the
machines.
Now, on two occasions, right after the update to AIX 5.3 Technology
Level 9, Service Pack 3
the system fails to reboot. When starting the partition to the SMS menu
you can see the correct (rootvg) devices
being scanned, but the devices are NOT listed as a possible boot device.
When booting off a NIM server and trying to restore an mksysb, the
entire rootvg (all disks in it)
is invisible and cannot be selected. Some user-defined volume groups are
also "missing".
When giving the partition access to a 'new' EMC disk, this new disk
shows up and can be used to restore
the mksysb. When that is complete, the original disks show up properly
(using lspv etc.) and seem perfectly alright.
Has anyone run into this same problem? Any ideas, suggestions, fixes?
A:
Being able to see a device but being unable to access the data area
sounds like a SCSI disk reservation problem.
It turns out that on AIX 5.3, at certain ML/TL levels (below TL 6), an
mksysb error turns up if you have volume groups defined other than
rootvg while there is NO filesystem created on those volume groups.
>> thread 1:
Q:
Hi
# ls -al /tmp/vgdata/vg_dev/
total 16
drwxr-xr-x 2 root staff 256 Apr 02 08:38 .
drwxrwxr-x 5 root system 256 Apr 02 08:20 ..
-rw-r--r-- 1 root staff 2002 Apr 02 08:35 filesystems
-rw-r--r-- 1 root staff 1537 Apr 02 08:35 vg_dev.data
# oslevel -r
5300-05
# df -k | grep tmp
/dev/hd3 1310720 1309000 1% 42 1% /tmp
A:
I had this issue as well with VIO 1.3. I called IBM support
about it and it is a known issue. The APAR is IY87935. The fix
will not be released until AIX 5.3 TL 6, which is due out in
June. It occurs when you run savevgstruct on a user defined
volume group that contains volumes where at least one does not
have a filesystem defined on it. The workaround is to define a
filesystem on every volume in the user defined volume group.
>> thread 2:
http://www-1.ibm.com/support/docview.wss?uid=isg1IY87935
APAR status
Closed as program error.
Error description
The mkvgdata command when executed on a volume group that does
not have any mounted filesystems:
Local fix
Problem summary
The mkvgdata command when executed on a volume group that does
not have any mounted filesystems:
Problem conclusion
Check variable.
Temporary fix
Comments
APAR information
APAR number IY87935
Reported component name AIX 5.3
Reported component ID 5765G0300
Reported release 530
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2006-08-09
Closed date 2006-08-09
Last modified date 2006-08-09
The backup command creates copies of your files on a backup medium, such
as a magnetic tape or diskette. But you can also back up to disk space.
The copies are in one of two backup formats: backup by name (the -i
flag), or backup of an entire file system by i-node (using the -0 .. -9
dump levels).
Unless you specify another backup medium with the -f parameter, the
backup command automatically writes its output to /dev/rfd0, which is the
diskette drive.
On Sunday:
# backup -0 -uf /dev/rmt0 /data
On Monday:
# backup -1 -uf /dev/rmt0 /data
..
..
On Saturday:
# backup -6 -uf /dev/rmt0 /data
Note that we do not use the -i flag; instead we back up an entire file
system ("/data") by i-node, using incremental levels: a level n backup
saves everything changed since the last backup at a lower level.
Other examples:
---------------
Another example:
Checking (or listing) the backup can be done with a command similar to
the following example:
# restore -Tvf /dev/rmt0 > /dev/null; echo $?
Because $? holds the exit status of the listing, a result of 0 means that
listing the backup was successful.
If you do not use the echo, the above command just shows the complete
listing of the backup contents.
It does not do the actual restore; it just shows a listing.
If you actually want to restore the backup, use a command like:
# restore -rvf /dev/rmt0
To backup a user Volume Group (VG, see also sections 30 and 31) you can
use savevg to backup a VG
and restvg to restore a VG.
Purpose
Gives subcommands to a streaming tape device.
Syntax
tctl [ -f Device ] [ eof | weof | fsf | bsf | fsr | bsr | rewind |
offline | rewoffl | erase | retension | reset | status ] [ Count ]
Description
The tctl command gives subcommands to a streaming tape device. If you do
not specify the Device variable
with the -f flag, the TAPE environment variable is used. If the
environment variable does not exist,
the tctl command uses the /dev/rmt0.1 device. (When the tctl command
gives the status subcommand,
the default device is /dev/rmt0.) The Device variable must specify a raw
(not block) tape device.
The Count parameter specifies the number of end-of-file markers, number
of file marks, or number of records.
If the Count parameter is not specified, the default count is 1.
Examples
To rewind the rmt1 tape device, enter:
tctl -f /dev/rmt1 rewind
To move forward two file marks on the default tape device, enter:
tctl fsf 2
To read a tape device formatted in 80-byte blocks and put the result in
a file, enter:
tctl -b 80 read > file
Note: The only valid block sizes for quarter-inch (QIC) tape drives are
0 and 512.
To write over one of several backups on an 8 mm tape, position the tape
at the start of the backup file
and issue these commands:
tctl bsf 1
tctl eof 1
Purpose
Gives subcommands to streaming tape device.
Syntax
mt [ -f TapeName ] Subcommand [ Count ]
Description
The mt command gives subcommands to a streaming tape device. If you do
not specify the -f flag
with the TapeName parameter, the TAPE environment variable is used. If
the environment variable
does not exist, the mt command uses the /dev/rmt0.1 device. The TapeName
parameter must be a raw (not block)
tape device. You can specify more than one operation with the Count
parameter.
Subcommands
Examples
To rewind the rmt1 tape device, enter:
mt -f /dev/rmt1 rewind
To move forward two files on the default tape device, enter:
mt fsf 2
To write two end-of-file markers on the tape in the /dev/rmt0.6 file,
enter:
mt -f /dev/rmt0.6 weof 2
A command such as the following (the slot and drive element numbers will
depend on your own individual tape library; may I suggest the manual?)
moves the tape in slot 10 to the drive:
# tapeutil -f /dev/smc0 move 10 256
Example:
--------
We are using a 3583 automated tape library for backups. For the tapeutil
command you need to have the Atape driver installed on your system. To
identify the positions of the tape drives and the source slot, just type
tapeutil; it will give you a number of options. Choose "element
information" to identify the source and tape drive numbers.
In our case the tape drive element numbers are 256 and 257, and the
source (I/O) slot to insert the tape is 16. We usually give the
following commands to load and move the tape.
Loading Tape:-
tapeutil -f /dev/smc0 move -s 16 -d 256
(to insert the tape in tapedrive 1,where 16 is source and 256 is
destination)
to take the backup:-
Example:
--------
In order to move tapes in and out of the Library here is what I do.
First I unload the tape with the command #tapeutil -f /dev/rmtx unload
Where x is 0,1,2,3...
then I move the tape from external slot (16) using the media changer,
not the tape drive.
Example:
--------
You can get the slot numbers, and volsers in them, with the command:
/usr/bin/tapeutil -f /dev/smc0 inventory
To find an open slot just look for a slot with a blank "Volume Tag".
Example:
--------
#!/bin/ksh
DEVICE=$1
HOST=$2
TAPE=$3
case $TAPE in
2) tapeutil -f /dev/smc0 move 23 10
tapeutil -f /dev/smc0 move 11 23
;;
3) tapeutil -f /dev/smc0 move 23 11
tapeutil -f /dev/smc0 move 12 23
;;
4) tapeutil -f /dev/smc0 move 23 12
tapeutil -f /dev/smc0 move 13 23
;;
5) tapeutil -f /dev/smc0 move 23 13
tapeutil -f /dev/smc0 move 14 23
;;
esac
Example:
--------
Example:
--------
case $DAYNO in
01) tapeutil -f /dev/smc0 move 23 10
tapeutil -f /dev/smc0 move 11 23
;;
02) tapeutil -f /dev/smc0 move 23 10
tapeutil -f /dev/smc0 move 11 23
;;
03) tapeutil -f /dev/smc0 move 23 10
tapeutil -f /dev/smc0 move 11 23
;;
04) tapeutil -f /dev/smc0 move 23 10
tapeutil -f /dev/smc0 move 11 23
;;
05) tapeutil -f /dev/smc0 move 23 10
tapeutil -f /dev/smc0 move 11 23
;;
06) tapeutil -f /dev/smc0 move 23 10
tapeutil -f /dev/smc0 move 11 23
;;
07) tapeutil -f /dev/smc0 move 23 10
tapeutil -f /dev/smc0 move 11 23
;;
esac
Example:
--------
case $DAYNAME in
Sun) tapeutil -f /dev/smc0 move 256 4098
tapeutil -f /dev/smc0 move 4099 256
;;
Mon) tapeutil -f /dev/smc0 move 256 4099
tapeutil -f /dev/smc0 move 4100 256
;;
Tue) tapeutil -f /dev/smc0 move 256 4100
tapeutil -f /dev/smc0 move 4113 256
;;
Wed) tapeutil -f /dev/smc0 move 256 4113
tapeutil -f /dev/smc0 move 4114 256
;;
Thu) tapeutil -f /dev/smc0 move 256 4114
tapeutil -f /dev/smc0 move 4109 256
;;
Fri) tapeutil -f /dev/smc0 move 256 4109
tapeutil -f /dev/smc0 move 4124 256
;;
Sat) tapeutil -f /dev/smc0 move 256 4124
tapeutil -f /dev/smc0 move 4110 256
;;
esac
Example:
--------
Example:
--------
mt -f /dev/rmt1 rewind
mt -f /dev/rmt1.1 fsf 6
tar -xvf /dev/rmt1.1 /data/download/expdemo.zip
SPL bld
About Ts3310:
-------------
Abstract
Configuration Information for IBM TS3310 (IBM TotalStorage 3576)
Content
Notes:
4. The IBM device driver is required. The IBM device drivers are
available at ftp://ftp.software.ibm.com/storage/devdrvr.
Example:
--------
Then, start moving the tape to each drive in turn, and verify which
device name it is associated with
by running tctl or mt rewoffl. If it returns without error, the device
name matches the element number.
Move the tape from the tape slot to the first drive:
tapeutil -f /dev/smc0 move 1025 256
tctl -f /dev/rmt0 rewoffl
If the command returns with no errors, then element # 256 matches device
name /dev/rmt0.
NOTE: the 'rewoffl' flag on tctl simply rewinds and ejects the tape from
the drive.
Contents:
1. How to view the bootlist:
2. How to change the bootlist:
3. How to make a device bootable:
4. How to make a backup of the OS:
5. Shutdown a pSeries AIX system in the most secure way:
7. Recovery of rootvg
At boottime, once the POST is completed, the system will search the boot
list for a
bootable image. The system will attempt to boot from the first entry in
the bootlist.
Its always a good idea to see what the OS thinks are the bootable
devices and the order of what the OS
thinks it should use. Use the bootlist command to view the order:
# bootlist -m normal -o
As the first item returned, you will see hdisk0, the bootable harddisk.
If you need to check the bootlist in "service mode", for example if you
want to boot from tape to restore the rootvg, use
# bootlist -m service -o
# bootlist -m normal hdisk0
This command makes sure that hdisk0 is the first device used to boot the
system.
If you want to change the bootlist for the system in service mode, you
can change the list in order to use rmt0, for example:
# bootlist -m service rmt0 hdisk0
So, if hdisk0 must be bootable, or you want to be sure it is bootable,
use:
# bosboot -ad /dev/hdisk0
You can use this backup to reinstall a system to its original state
after it has been corrupted.
If you create the backup on tape, the tape is bootable and includes the
installation programs
needed to install from the backup.
# mksysb -i /dev/rmt0
# mksysb -i -e /dev/rmt0
$ tctl fsf 3
$ restore -xvf /dev/rmt0.1 ./your/file/name
For example, if you need to get the vi command back, put the mksysb tape
in the tape drive (in this case, /dev/rmt0)
(Why you must first skip 3 file marks, that is, the four-image layout of
a mksysb tape, was explained in section 9 above.)
7. Recovery of rootvg
Type the number of your choice and press Enter. Choice is indicated by
>>>.
Maintenance
Type the number of the tape drive containing the system backup to be
installed and press Enter.
Type the number that corresponds to the tape drive that the mysysb tape
is in and press enter.
The next screen you should see is :-
+-----------------------------------------------------
88 Help ? |Select 1 or 2 to install from tape device
/dev/rmt0
99 Previous Menu |
|
>>> Choice [1]:
There are two ways you can recover from a tape with make_net_recovery.
The method you choose depends on your needs.
- Use make_medialif
This method is useful when you want to create a totally self-contained
recovery tape. The tape will be bootable
and will contain everything needed to recover your system, including the
archive of your system. During recovery,
no access to an Ignite-UX server is needed. Using make_medialif is
described beginning on
"Create a Bootable Archive Tape via the Network" and also on the Ignite-
UX server in the file:
/opt/ignite/share/doc/makenetrec.txt
- Use make_boot_tape
This method is useful when you do not have the ability to boot the
target machine via the network, but are still
able to access the Ignite-UX server via the network for your archive and
configuration data. This could happen
if your machine does not support network boot or if the target machine
is not on the same subnet as the
Ignite-UX server. In these cases, use make_boot_tape to create a
bootable tape with just enough information
to boot and connect with the Ignite-UX server. The configuration files
and archive are then retrieved from the
Ignite-UX server. See the make_boot_tape(1M) manpage for details.
-- make_boot_tape:
make_boot_tape(1M)
NAME
make_boot_tape - make a bootable tape to connect to an Ignite-UX
server
SYNOPSIS
      /opt/ignite/bin/make_boot_tape [-d device-file-for-tape]
            [-f config-file] [-t tmpdir] [-v]
DESCRIPTION
      The tape created by make_boot_tape is a bootable tape that contains
      just enough information to boot the system and then connect to the
      Ignite-UX server where the tape was created. Once the target system
      has connected with the Ignite-UX server, it can be installed or
      recovered using Ignite-UX. The tape is not a fully self-contained
      install tape; an Ignite-UX server must also be present. The
      configuration information and software to be installed on the target
      machine reside on the Ignite-UX server, not on the tape. If you need
      to build a fully self-contained recovery tape, see make_recovery(1M)
      or make_media_lif(1M).
Examples:
---------
# make_boot_tape
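For example, writing to a specific (hypothetical) tape device file, with verbose
output, using the -d and -v flags from the synopsis above:

# make_boot_tape -d /dev/rmt/1mn -v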
For example:
# cat /etc/dumpdates
/dev/rdsk/c0t0d0s0 0 Wed Jul 28 16:13:52 2004
/dev/rdsk/c0t0d0s7 0 Thu Jul 29 10:36:13 2004
/dev/rdsk/c0t0d0s7 9 Thu Jul 29 10:37:12 2004
Use the /etc/dumpdates file to verify that backups are being done. This
verification is particularly important
if you are having equipment problems. If a backup cannot be completed
because of equipment failure, the backup
is not recorded in the /etc/dumpdates file.
If you need to restore an entire disk, check the /etc/dumpdates file for
a list of the most recent dates and levels
of backups so that you can determine which tapes you need to restore the
entire file system.
The following example shows how to create a snapshot of the /usr file
system.
The backing-store file is /scratch/usr.back.file. The virtual device
is /dev/fssnap/1.
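The corresponding command (standard Solaris fssnap syntax; on success the snapshot
device name is printed):

# fssnap -F ufs -o bs=/scratch/usr.back.file /usr
/dev/fssnap/1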
You can display the current snapshots on the system by using the fssnap
-i option. If you specify a file system,
you see detailed information about that snapshot. If you don't specify a
file system, you see information about all
of the current UFS snapshots and their corresponding virtual devices.
For example:
# /usr/lib/fs/ufs/fssnap -i
Snapshot number : 0
Block Device : /dev/fssnap/0
Raw Device : /dev/rfssnap/0
Mount point : /usr
Device state : idle
Backing store path : /var/tmp/snapshot3
Backing store size : 256 KB
Maximum backing store size : Unlimited
Snapshot create time : Wed Oct 08 10:38:25 2003
Copy-on-write granularity : 32 KB
Snapshot number : 1
Block Device : /dev/fssnap/1
Raw Device : /dev/rfssnap/1
Mount point : /
Device state : idle
Backing store path : /tmp/bs.home
Backing store size : 448 KB
Maximum backing store size : Unlimited
Snapshot create time : Wed Oct 08 10:39:29 2003
Copy-on-write granularity : 32 KB
Note 1:
------
-- To restore the / (root) file system, boot from the Solaris CD-ROM and
then run ufsrestore.
1. Insert the Solaris 8 Software CD 1, and boot the CD-ROM with the
single-user mode option.
ok boot cdrom -s
# newfs /dev/rdsk/c0t0d0s0
# mount /dev/dsk/c0t0d0s0 /a
# cd /a
# ufsrestore rf /dev/rmt/0
Note - Remember to always restore a file system starting with the level
0 backup tape and continuing with the next lowest level
tape up through the highest level tape.
# rm restoresymtable
6. Install the bootblk in sectors 1-15 of the boot disk. Change to the
directory containing the bootblk, and run the installboot command.
# cd /usr/platform/`uname -m`/lib/fs/ufs
# installboot bootblk /dev/rdsk/c0t0d0s0
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
# init 6
-- To restore the /usr and /var file systems repeat the steps described
above, except step 6.
This step is required only when restoring the (/) root file system.
Example
# newfs /dev/rdsk/c#t#d#s#
# mount /dev/dsk/c#t#d#s# /mnt
# cd /mnt
# ufsrestore rf /dev/rmt/#
# rm restoresymtable
# cd /
# umount /mnt
# fsck /dev/rdsk/c#t#d#s#
# ufsdump 0uf /dev/rmt/# /dev/rdsk/c#t#d#s#
=============
10. uuencode:
=============
Example:
The following example packages up a source tree, compresses it,
uuencodes it and mails it to
a user on another system. When uudecode is run on the target system, the
file ``src_tree.tar.Z''
will be created which may then be uncompressed and extracted into the
original tree.
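A sketch of such a pipeline (the remote user address is hypothetical):

# tar cf - src_tree | compress | uuencode src_tree.tar.Z | mail user@othersystem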
example:
uuencode <file_a> <file_b> > <uufile>

note: here, file_a is encoded and a new file named uufile is produced;
when you decode the file uufile, a file named file_b is produced.
# uuencode dipl.doc dipl.doc >dipl.uu
Here the file dipl.doc (e.g. a WinWord document) is converted into the file dipl.uu.
In doing so, we specify that the file should again be named dipl.doc after decoding.
example:
uuencode long_name.tar.Z arc.trz > arc.uue
# sort +1 -2 people
# sort +2b people
# sort +2n +1 people
# sort +1 -2 *people > everybody
# sort -u +1 hardpeople softpeople > everybody # -u=unique
# sort -t: +5 /etc/passwd # -t field sep.
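Note: the "+pos" sort syntax above is obsolete; modern systems use the -k flag.
For example, the equivalent of "sort +1 -2 people" is:

# sort -k2,2 people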
13. SED:
========
Note: depending on your shell and system, in most cases you might need to enclose
s/string/newstring in a " or a ' quote.
You can also use a regular expression; for instance, we can put a left margin of 5
spaces on the people file, or use _ as a delimiter:
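For example (the people file as used in the sort examples above):

# sed 's/^/     /' people          (adds a left margin of 5 spaces)
# sed 's_people_persons_' people   (the same s command, with _ as the delimiter)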
Example:
--------
spool Publisher.06.PublisherDefineChangeTable.tdba_cdc.cdc_LEG.log ;
connect / as sysdba ;
begin
dbms_cdc_publish.create_change_table
( owner => 'tdba_cdc'
, change_table_name => 'cdc_LEG'
, change_set_name => 'BODI_CDC_SET'
, source_schema => 'rm_live'
, source_table => 'LEG'
, column_type_list => ' IDFLT NUMBER(9) , IDLEG NUMBER(9) ,
LEGDATE DATE , IDLEGDATA NUMBER(9) , CANCELLED CHAR(1) ,
IDWORKSET NUMBER(9) , IDTEXTTTS NUMBER(9) , IDSEGMENTDATACOMBINE
NUMBER(9) '
, capture_values => 'both'
, source_colmap => 'y'
, target_colmap => 'y'
, options_string => 'tablespace tdba_cdc'
) ;
end ;
/
Try:
gives:
spool Publisher.06.PublisherDefineChangeTable.tdba_cdc.cdc_LEG.log ;
connect / as sysdba ;
begin
dbms_cdc_publish.create_change_table
( owner => 'tdba_cdc'
, change_table_name => 'cdc_LEG'
, change_set_name => 'BODI_CDC_SET'
, source_schema => 'rm_live'
, source_table => 'LEG'
, column_type_list => ' IDFLT NUMBER(9) , IDLEG NUMBER(9) ,
LEGDATE DATE , IDLEGDATA NUMBER(9) , CANCELLED CHAR(1) ,
IDWORKSET NUMBER(9) , IDTEXTTTS NUMBER(9) , IDSEGMENTDATACOMBINE
NUMBER(9) '
, capture_values => 'both'
, source_colmap => 'y'
, target_colmap => 'y'
, options_string => 'tablespace tdba_cdc'
) ;
end ;
/
Other example:
--------------
14. AWK:
========
When lines containing `foo' are found, they are printed, because `print
$0' means print the current line:
# awk '/foo/ { print $0 }' BBS-list
looks for all files in the ls listing whose month field matches Nov, and prints the
total number of bytes (with the usual ls -l output, the size is field 5 and the
month is field 6):
# ls -l | awk '$6 == "Nov" { sum += $5 }
               END { print sum }'
Example:
--------
Suppose you have a text file with lines much longer than, for example, 72 characters,
and you want a file with lines of at most 72 characters. Then you might use awk in the
following way (the bash fragment sets up the input location; the awk program itself,
r13.awk, follows it, and an example invocation is shown after the listing):
#!/bin/bash
DIR=/cygdrive/c/exports
FILE=result24.txt
-- r13.awk
BEGIN { maxlength=72 }
{
  l=length();
  if (l > maxlength) {
    i=(l/maxlength)
    for (j=0; j<i; j++) {
      printf "%s\r\n", substr($0, (j*maxlength)+1, maxlength)
    }
  } else {
    printf "%s\r\n", $0
  }
}
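A hypothetical invocation, combining the bash fragment and the awk program above
(a sketch):

awk -f r13.awk "$DIR/$FILE" > "$DIR/wrapped_$FILE"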
15. tr command:
===============
#! /bin/sh
#
# recursive dark side repair technique
# eliminates spaces in file names from current directory down
# useful for supporting systems where clueless vendors promote NT
#
for name in `find . -depth -print`
do
   na=`echo "$name" | tr ' ' '_'`
   if [ "$na" != "$name" ]
   then
      echo "renaming $name to $na"
      mv "$name" "$na"
   fi
done
note:
> I have finally competed setting up the samba server and setup the
share
> between NT and Samba server.
>
> However, when I open a unix text file in Windows NT using notepad, i
see
> many funny characters and the text file is not in order (Just like
when I
> ftp the unix text file out into NT in binary format) ...I think this
has to
> be something to do with whether the file transfer is in Binary format
or
> ASCII ... Is there a parameter to set for this ? I have checked the
> documents ... but couldn't find anything on this ...
>
This is a FAQ, but in brief, it's like this. Unix uses a single newline
character to end a line ("\n"), while DOS/Win/NT use a
carriage-return/newline pair ("\r\n"). FTP in ASCII mode translates
these for you. FTP in binary mode, or other forms of file transfer, such
as Samba, leave the file unaltered. Translating automatically would be
extremely dangerous, as there's no clear way to isolate which files
should be translated.
You can get Windows editors that understand Unix line-end conventions
(UltraEdit is one), or you can use DOS line endings on the files, which
will then look odd from the Unix side. You can stop using Notepad, and
use WordPad instead, which will deal appropriately with Unix line
endings.
You can convert a DOS format text file to Unix with this:-
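For example, with tr (the file names are hypothetical):

# tr -d '\r' < dosfile.txt > unixfile.txt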
The best solution to this seems to be using a Windows editor that can
handle working with Unix line endings.
HTH
Mike.
Note:
There are two ways of moving to a new line...carriage return, which is
chr(13),
and new line which is chr(10). In windows you're supposed to use a
sequence
of a carriage return followed by a new line.
For example, in VB you can use Wrap$=Chr$(13)+Chr$(10) which creates a
wrap character.
cutting columns:
cutting fields:
paste:
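For example (people and phones are hypothetical files):

# cut -c1-8 people            (cut columns: characters 1 through 8 of each line)
# cut -d: -f1,5 /etc/passwd   (cut fields: fields 1 and 5, with : as the separator)
# paste people phones         (merge the two files line by line, tab separated)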
17. mknod:
==========
Thus, a special file takes almost no place on disk, and is used only for
communication
with the operating system, not for data storage. Often special files
refer to hardware devices
(disk, tape, tty, printer) or to operating system services
(/dev/null, /dev/random).
p for a FIFO
b for a block (buffered) special file
c for a character (unbuffered) special file
When making a block or character special file, the major and minor
device numbers must be given
after the file type (in decimal, or in octal with leading 0; the GNU
version also allows hexadecimal
with leading 0x). By default, the mode of created files is 0666 (`a/rw')
minus the bits set in the umask.
If one cannot afford to buy extra disk space one can run the export and
compress
utilities simultaneously.
This will prevent the need to get enough space for both the export file
AND the
compressed export file. Eg:
# Make a pipe
mknod expdat.dmp p # or mkfifo pipe
# Start compress sucking on the pipe in background
compress < expdat.dmp > expdat.dmp.Z &
# Wait a second or two before kicking off the export
sleep 5
# Start the export
exp scott/tiger file=expdat.dmp
Extended Example:
-----------------
# Load the cron environment
. ~/cronjobs/.profile.cron
##################################################################
compareVersionDBMS 10.2.0.1.0 10.2.0.2.0 10.2.0.3.0
##################################################################
wantedSchemas=""
wantedDatabase=""
wantedInputDir=""
##################################################################
if [ $# -ne 0 ]
then
function showSyntaxParam
{ ( Comment "\t[-db=<Database>] [-dir=<InputDir>] [-schema=<Schema,schema,schema>]"
  )
}
for param in $*
do
echo ${param} \
| awk 'BEGIN {FS="="}{print $1,$2}' \
| read flag value
if [ "${flag}" != "-h" ] && [ "${value}" = "" ]
then
Error "Empty value for ${flag}"
showSyntaxSystemParam
showSyntaxParam
else
case ${flag} in
-schema) wantedSchemas=${value} ;;
-dir) wantedInputDir=${value} ;;
*) checkSystemParam ${param} || showSyntaxParam ;;
esac
fi
done
fi
##################################################################
BlankLine
Comment "Selected options are:"
Comment " Database : -db=${wantedDatabase}"
Comment " Schema's : -schema=${wantedSchemas}"
Comment " Input directory : -dir=${wantedInputDir}"
Line
##################################################################
if [ ${continue} = true ]
then
if [ "${wantedDatabase}" = "" ]
then
BlankLine
Error "No database specified"
BlankLine
else
echo ${wantedDatabase} \
| grep -i prod \
| wc -l \
| read prod
if [ ${prod} -ne 0 ]
then
BlankLine
Error "Production environment not allowed!!"
BlankLine
else
moveLogFile ${wantedDatabase}
fi
fi
#
if [ "${wantedInputDir}" = "" ]
then
BlankLine
Error "No input directory specified"
BlankLine
else
if [ ! -d ${wantedInputDir} ]
then
BlankLine
Error "Input directory ${wantedInputDir} doesn't exists"
BlankLine
fi
fi
#
if [ "${wantedSchemas}" = "" ]
then
BlankLine
Error "No schema's to load"
BlankLine
fi
fi
##################################################################
wantedSchemas=`echo ${wantedSchemas} | sed 's/,/ /g'`
if [ ${continue} = true ]
then
for schema in ${wantedSchemas}
do
impPipeFile=${wantedInputDir}/${schema}.${currentUser}.load.pipe
impLogFile=${wantedInputDir}/${schema}.${currentUser}.load.log
impCompressFile=${wantedInputDir}/${schema}.data.Z
#
if [ ${continue} = true ]
then
Message "Check file permissions"
if [ ! -w ${wantedInputDir} ]
then
Error "Unable to write in ${wantedInputDir}"
fi
fi
#
if [ ${continue} = true ]
then
rm ${impPipeFile} 2> /dev/null
rm ${impLogFile} 2> /dev/null
fi
#
if [ ${continue} = true ]
then
Message "Load schema ${schema} into database ${wantedDatabase}"
fi
#
if [ ${continue} = true ]
then
Message "Create pipe for load"
CmdCapture "mknod ${impPipeFile} p"
fi
#
if [ ${continue} = true ]
then
if [ ! -f ${impCompressFile} ]
then
BlankLine
Error "File not found: ${impCompressFile}"
BlankLine
fi
fi
#
if [ ${continue} = true ]
then
Message "Start uncompression into background"
uncompress -c < ${impCompressFile} > ${impPipeFile} &
#
Message "Start import"
imp \"sys/change_on_install as sysdba\" file=${impPipeFile} log=$
{impLogFile} full=y statistics=always >/dev/null 2>/dev/null
#
Message "Output of import"
CmdCapture "cat ${impLogFile}"
#
Message "Allowed warnings are:"
Comment " IMP-00017 IMP-00041 IMP-00003 ORA-14063 ORA-14048 ORA-
02270"
cat ${impLogFile} \
| egrep '^ORA-|^ERROR|^IMP-' \
| egrep -v 'IMP-00017|IMP-00041|IMP-00003|ORA-14063|ORA-14048|ORA-02270' \
| wc -l \
| read count
if [ ${count} -ne 0 ]
then
Error "Problem with import !!"
else
Message "Import succesful"
fi
fi
#
rm ${impPipeFile} 2> /dev/null
if [ ${continue} = true ]
then
Line
fi
done
fi
##################################################################
finish
##################################################################
18. Links:
==========
# ln -s fromfile /other/directory/tolink
Example:
--------
/opt/myprog
albert@starboss:/opt/myprog $ ls -al
total 8
drwxr-x--- 2 root system 256 Apr 21 10:12 .
drwxrwxrwx 3 root system 4096 Apr 21 09:59 ..
-r-xr-xr-x 1 albert beab_krn 9544 Apr 21 10:12 mvdat
Other examples:
---------------
This example shows copying three files from a directory into the current
working directory.
[2]%cp ~team/IntroProgs/MoreUltimateAnswer/more* .
[3]%ls -l more*
-rw-rw-r-- 1 mrblobby mrblobby 632 Sep 21 18:12
moreultimateanswer.adb
-rw-rw-r-- 1 mrblobby mrblobby 1218 Sep 21 18:19
moreultimatepack.adb
-rw-rw-r-- 1 mrblobby mrblobby 784 Sep 21 18:16
moreultimatepack.ads
The three files take a total of 2634 bytes. The equivalent ln commands
would be:
[2]%ln -s ~team/IntroProgs/MoreUltimateAnswer/moreultimateanswer.adb
.
[3]%ln -s ~team/IntroProgs/MoreUltimateAnswer/moreultimatepack.adb .
[4]%ln -s ~team/IntroProgs/MoreUltimateAnswer/moreultimatepack.ads .
[5]%ls -l
lrwxrwxrwx 1 mrblobby mrblobby 35 Sep 22 08:50
moreultimateanswer.adb ->
/users/team/IntroProgs/MoreUltimateAnswer/moreultimateanswer.adb
lrwxrwxrwx 1 mrblobby mrblobby 37 Sep 22 08:49
moreultimatepack.adb ->
/users/team/IntroProgs/MoreUltimateAnswer/moreultimatepack.adb
lrwxrwxrwx 1 mrblobby mrblobby 37 Sep 22 08:50
moreultimatepack.ads ->
/users/team/IntroProgs/MoreUltimateAnswer/moreultimatepack.ads
info:
showrev -p
pkginfo -i
relink:
make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk install
make -f $ORACLE_HOME/svrmgr/lib/ins_svrmgr.mk install
make -f $ORACLE_HOME/network/lib/ins_network.mk install
20. trace:
==========
20.1 truss:
-----------
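A good general-purpose invocation follows child processes and writes the trace to a
logfile (a sketch; the script name is hypothetical):

# truss -f -o /tmp/truss.out ./yourscript.sh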
Of course, you can choose another path and logfile to trace to.
The above command is quite good for tracing a shell script or program that starts up,
does some work, and then terminates. If an error occurs during runtime, it's likely
that you will find some pointers in the logfile that truss made for you.
This tool has so many options, for example, you can focus your trace on
a certain library etc..
Anyway, even the above example of truss can already be very helpful.
So, for example, if you find the error "EACCES" in the log that truss produced, which is
"errno 13 = Permission denied", that would really be helpful. Obviously, your shell
script or program tries to access a certain object to which it has insufficient
permissions, and thus may fail.
Be warned though, that some errno's might be found multiple times, while
it's actually not
something to worry about. For example "ENOENT= No such file or
directory" might be found
quite often. Here, your script or program seems to be unable to find a
file or directory.
Well, if it's related to the $PATH environment variable, it could be
quite reasonable.
Your shell will search your $PATH from beginning, to the end, until the
object has been found.
Thus, it's quite possible that some ENOENT errors occurred.
NOTE: The "truss" command works on SUN and Sequent. Use "tusc" on HP-UX,
"strace" on Linux,
"trace" on SCO Unix or call your system administrator to find the
equivalent command on your system.
Monitor your Unix system:
Solaris:
Truss is used to trace the system/library calls (not user calls) and
signals made/received
by a new or existing process. It sends the output to stderr.
Truss examples
# truss -rall -wall -f -p <PID>
# truss -rall -wall lsnrctl start
# truss -aef lsnrctl dbsnmp_start
1. syscalls Command
Purpose
Provides system call tracing and counting for specific processes and the
system.
Syntax
To Create or Destroy Buffer:
syscalls [ [ -enable bytes ]| -disable ]
Description
The syscalls (system call tracing) command, captures system call entry
and exit events by individual processes
or all processes on the system. The syscalls command can also maintain
counts for all system calls
made over long periods of time.
Notes:
System call events are logged in a shared-memory trace buffer. The same
shared memory identifier may be used
by other processes resulting in a collision. In such circumstances, the
-enable flag needs to be issued.
The syscalls command does not use the trace daemon.
The system crashes if ipcrm -M sharedmemid is run after syscalls has
been run.
Run stem -shmkill instead of running ipcrm -M to remove the shared
memory segment.
Flags
-c Prints a summary of system call counts for all processes. The
counters are not reset.
-disable Destroys the system call buffer and disables system call
tracing and counting.
-enable bytes Creates the system call trace buffer. If this flag is not
used, the syscalls command
creates a buffer of the default size of 819,200 bytes. Use this flag if
events are not being logged
in the buffer. This is the result of a collision with another process
using the same shared memory buffer ID.
-p pid When used with the -start flag, only events for processes with
this pid will be logged
in the syscalls buffer. When used with the -stop option, syscalls
filters the data in the buffer
and only prints output for this pid.
-start Resets the trace buffer pointer. This option enables the buffer
if it does not exist and resets
the counters to zero.
-stop Stops the logging of system call events and prints the contents
of the buffer.
-t Prints the time associated with each system call event alongside the
event.
-x program Runs program while logging events for only that process. The
buffer is enabled if needed.
Security
Access Control: You must be root or a member of the perf group to run
this command.
Examples
To collect system calls for a particular program, enter:
syscalls -x /bin/ps
Output similar to the following appears:
PID TTY TIME CMD
19841 pts/4 0:01 /bin/ksh
23715 pts/4 0:00 syscalls -x /bin/ps
30720 pts/4 0:00 /bin/ps
34972 pts/4 0:01 ksh
PID System Call
30720 .kfork Exit , return=0 Call preceded tracing.
30720 .getpid () = 30720
30720 .sigaction (2, 2ff7eba8, 2ff7ebbc) = 0
30720 .sigaction (3, 2ff7eba8, 2ff7ebcc) = 0
30720 .sigprocmask (0, 2ff7ebac, 2ff7ebdc) = 0
30720 .sigaction (20, 2ff7eba8, 2ff7ebe8) = 0
30720 .kfork () = 31233
30720 .kwaitpid (2ff7ebfc, 31233, 0, 0) = 31233
30720 .sigaction (2, 2ff7ebbc, 0) = 0
30720 .sigaction (3, 2ff7ebcc, 0) = 0
30720 .sigaction (20, 2ff7ebe8, 0) = 0
30720 .sigprocmask (2, 2ff7ebdc, 0) = 0
30720 .getuidx (4) = 0
30720 .getuidx (2) = 0
30720 .getuidx (1) = 0
30720 .getgidx (4) = 0
30720 .getgidx (2) = 0
30720 .getgidx (1) = 0
30720 ._load NoFormat, (0x2ff7ef54, 0x0, 0x0, 0x2ff7ff58) =
537227760
30720 .sbrk (65536) = 537235456
30720 .getpid () = 30720
AIX 5.1,5.2,5.3
A good and simple way to use truss is using a command like shown in
section 20.1:
Further notes:
# truss -a sleep
# truss -c ls
root@zd93l14:/tmp#cat tst
= 0
_nsleep(0x4128B8E0, 0x4128B958) = 0
_nsleep(0x4128B8E0, 0x4128B958) = 0
_nsleep(0x4128B8E0, 0x4128B958) = 0
_nsleep(0x4128B8E0, 0x4128B958) = 0
thread_tsleep(0, 0xF033159C, 0x00000000, 0x43548E38) = 0
thread_tsleep(0, 0xF0331594, 0x00000000, 0x434C3E38) = 0
thread_tsleep(0, 0xF033158C, 0x00000000, 0x4343FE38) = 0
thread_tsleep(0, 0xF0331584, 0x00000000, 0x433BBE38) = 0
thread_tsleep(0, 0xF0331574, 0x00000000, 0x432B2E38) = 0
thread_tsleep(0, 0xF033156C, 0x00000000, 0x4322EE38) = 0
thread_tsleep(0, 0xF0331564, 0x00000000, 0x431AAE38) = 0
thread_tsleep(0, 0xF0331554, 0x00000000, 0x42F99E38) = 0
thread_tsleep(0, 0xF033154C, 0x00000000, 0x4301DE38) = 0
thread_tsleep(0, 0xF0331534, 0x00000000, 0x42E90E38) = 0
thread_tsleep(0, 0xF033152C, 0x00000000, 0x42E0CE38) = 0
thread_tsleep(0, 0xF033157C, 0x00000000, 0x43337E38) = 0
thread_tsleep(0, 0xF0331544, 0x00000000, 0x42F14E38) = 0
= 0
thread_tsleep(0, 0xF033153C, 0x00000000, 0x42D03E38) = 0
_nsleep(0x4128B8E0, 0x4128B958) = 0
Flags
-f Follows all children created by the fork system call and includes
their
signals, faults, and system calls in the trace output. Normally, only
the
first-level command or process is traced. When the -f flag is specified,
the
process id is included with each line of trace output to show which
process
executed the system call or received the signal.
-l Display the id (thread id) of the responsible LWP process along with
truss
output. By default LWP id is not displayed in the output.
-r [!] FileDescriptor Displays the full contents of the I/O buffer for
each read
on any of the specified file descriptors. The output is formatted 32
bytes per
line and shows each byte either as an ASCII character (preceded by one
blank) or
as a two-character C language escape sequence for control characters,
such as
horizontal tab (\t) and newline (\n). If ASCII interpretation is not
possible,
the byte is shown in two-character hexadecimal representation. The first
16
bytes of the I/O buffer for each traced read are shown, even in the
absence of
the -r flag. The default is -r!all.
-t [!] Syscall Includes or excludes system calls from the trace process.
System
calls to be traced must be specified in a list and separated by commas.
If the
list begins with an "!" symbol, the specified system calls are excluded
from the
trace output. The default is -tall.
-u [!]LibraryName[::[!]FunctionName] Traces dynamically loaded user level function
calls from user libraries. The LibraryName is a comma-separated list of library names.
The FunctionName is a comma-separated list of function names. In both cases the names
can include name-matching metacharacters *, ?, [] with the same meanings as interpreted
by the shell, but as applied to the library/function name spaces, and not to files.
-w [!] FileDescriptor Displays the contents of the I/O buffer for each
write on
any of the listed file descriptors (see -r). The default is -w!all.
Examples
2. To trace the lseek, close, statx, and open system calls, type:
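A sketch, using the -t flag documented above (the traced command is arbitrary):

# truss -t lseek,close,statx,open ls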
5. To display delta times along with regular output for find command,
enter:
truss -D find . -print >find.out
The trace facility and commands are provided as part of the Software
Trace Service Aids fileset
named bos.sysmgt.trace.
Taking a trace:
---------------
When tracing, you can select the hook IDs of interest and exclude others that are
not relevant to your problem. A trace hook ID is a 3-digit hexadecimal number
that identifies an event being traced.
Trace hook IDs are defined in the "/usr/include/sys/trchkid.h" file.
The currently defined trace hook IDs can be listed using the trcrpt
command:
# trcrpt -j | sort | pg
001 TRACE ON
002 TRACE OFF
003 TRACE HEADER
004 TRACEID IS ZERO
005 LOGFILE WRAPAROUND
006 TRACEBUFFER WRAPAROUND
..
..
The trace daemon configures a trace session and starts the collection of
system events.
The data collected by the trace function is recorded in the trace log. A
report from the trace log
can be generated with the trcrpt command.
When invoked with the -a, -x, or -X flags, the trace daemon is run
asynchronously (i.e. as a background task).
Otherwise, it is run interactively and prompts you for subcommands.
Examples
1 To format the trace log file and print the result, enter:
trcrpt | qprt
2 To send a trace report to the /tmp/newfile file, enter:
trcrpt -o /tmp/newfile
3 To display process IDs and exec path names in the trace
report, enter:
trcrpt -O hist=on
5 To produce a list of all event groups, enter:
trcrpt -G
The format of this report is shown under the trcevgrp
command.
6 To generate back-to-back LMT reports from the common and
rare buffers, specify:
trcrpt -M all
7 If, in the above example, the LMT files reside at
/tmp/mydir, and we want the LMT traces to be merged,
specify:
trcrpt -m -M all:/tmp/mydir
8 To merge the system trace with the scdisk.hdisk0 component
trace, specify:
Note: This example is more educational if the input file is not already
cached in system memory. Choose as the source
file any file that is about 50KB and has not been touched recently.
We use the following form of the trcrpt command for our report:
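A sketch, assuming the raw trace was saved in a file named trc_raw (hypothetical name):

# trcrpt -O "exec=on,pid=on" trc_raw > cp.rpt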
This reports both the fully qualified name of the file that is execed
and the process ID that is assigned to it.
A quick look at the report file shows us that there are numerous VMM
page assign and delete events in the trace,
like the following sequence:
delete_in_progress process_private working_storage
The header of the trace report tells you when and where the trace was
taken, as well as the command that was
used to produce it:
When possible, the disk device driver coalesces multiple file requests
into one I/O request to the drive.
The trace output looks a little overwhelming at first. This is a good
example to use as a learning aid.
If you can discern the activities described, you are well on your way to
being able to use the trace facility
to diagnose system-performance problems.
The full detail of the trace data may not be required. You can choose
specific events of interest to be shown.
For example, it is sometimes useful to find the number of times a
certain event occurred. To answer the question
"How many opens occurred in the copy example?" first find the event ID
for the open system call.
This can be done as follows:
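For example, using trcrpt -j as shown earlier:

# trcrpt -j | grep -i open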
You should be able to see that event ID 15b is the open event. Now,
process the data from the copy example as follows:
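A sketch, again assuming the raw trace file trc_raw (hypothetical); the -d flag limits
the report to the listed hook IDs:

# trcrpt -d 15b -O "exec=on" trc_raw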
The report is written to standard output, and you can determine the
number of open subroutines that occurred.
If you want to see only the open subroutines that were performed by the
cp process, run the report command
again using the following:
$ trcrpt -o /tmp/newfile
-trace
-trcon
-trcoff
-trcstop
-trcrpt
-atrace
-atrcrpt
IGNORE="$IGNORE_VMM,$IGNORE_LOCK,$IGNORE_PCI,$IGNORE_SCSI,$IGNORE_LVM,$IGNORE_OTHER"
Setup instructions
edit atrace and atrcrpt and ensure that names of files for raw and
formatted trace are appropriate
Please see the comments in the scripts about 4.3.3 ML 10 being broken
for trcrpt, such that the default file name
needs to be used. You may find that specifying non-default filenames
does not have the desired effect.
make atrace and atrcrpt executable via chmod
Data collection
X'30D'
This event is recorded by WebSphere MQ on entry to or exit from a
subroutine.
X'30E'
This event is recorded by WebSphere MQ to trace data such as that being
sent or received across a
communications network. Trace provides detailed execution tracing to
help you to analyze problems.
IBM service support personnel might ask for a problem to be re-created with trace
enabled. The files produced by trace can be very large, so it is important to qualify
a trace, where possible. For example, you can optionally qualify a trace by time and
by component.
>> Interactively.
>> Asynchronously.
Each UNIX system provides its own commands for tracing. This article introduces you
to truss, which Solaris and AIX support. On Linux, you perform tracing with the strace
command. Although the command-line parameters might be slightly different, application
tracing on other UNIX flavors might go by the names ptrace, ktrace, trace, and tusc.
$ ./openapp
This should never happen!
$ truss ./openapp
execve("openapp", 0xFFBFFDEC, 0xFFBFFDF4) argc = 1
getcwd("/export/home/sean", 1015) = 0
stat("/export/home/sean/openapp", 0xFFBFFBC8) = 0
open("/var/ld/ld.config", O_RDONLY) Err#2 ENOENT
stat("/opt/csw/lib/libc.so.1", 0xFFBFF6F8) Err#2 ENOENT
stat("/lib/libc.so.1", 0xFFBFF6F8) = 0
resolvepath("/lib/libc.so.1", "/lib/libc.so.1", 1023) = 14
open("/lib/libc.so.1", O_RDONLY) = 3
memcntl(0xFF280000, 139692, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
close(3) = 0
getcontext(0xFFBFF8C0)
getrlimit(RLIMIT_STACK, 0xFFBFF8A0) = 0
getpid() = 7895 [7894]
setustack(0xFF3A2088)
open("/etc/configfile", O_RDONLY) Err#13 EACCES
[file_dac_read]
ioctl(1, TCGETA, 0xFFBFEF14) = 0
fstat64(1, 0xFFBFEE30) = 0
stat("/platform/SUNW,Sun-Blade-100/lib/libc_psr.so.1", 0xFFBFEAB0) = 0
open("/platform/SUNW,Sun-Blade-100/lib/libc_psr.so.1", O_RDONLY) = 3
close(3) = 0
This should never happen!
write(1, " T h i s s h o u l d ".., 26) = 26
_exit(3)
Each line of the output represents a function call that the application
made along with the return value,
if applicable. (You don't need to know each function call, but for more
information, you can call up the
man page for the function, such as with the command man open.) To find
the call that is potentially
causing the problem, it's often easiest to start at the end (or as close
as possible to where
the problems start). For example, you know that the application outputs
This should never happen!,
which appears near the end of the output. Chances are that if you find
this message and work your way up
through the truss command output, you'll come across the problem.
Scrolling up from the error message, notice the line beginning with
open("/etc/configfile"...,
which not only looks relevant but also seems to return an error of
Err#13 EACCES. Looking at the man page
for the open() function (with man open), it's evident that the purpose
of the function is to open a file
-- in this case, /etc/configfile -- and that a return value of EACCES
means that the problem is related
to permissions. Sure enough, a look at /etc/configfile shows that the
user doesn't have permissions to read
the file. A quick chmod later, and the application is running properly.
The output of Listing 1 shows two other calls, open() and stat(), that
return an error. Many of the calls
toward the beginning of the application, including the other two errors,
are added by the operating system
as it runs the application. Only experience will tell when the errors
are benign and when they aren't.
In this case, the two errors and the three lines that follow them are
trying to find the location of libc.so.1,
which they eventually do. You'll see more about shared library problems
later.
While the first code example showed an obvious link between the system
call causing the problem and the file,
the example you're about to see requires a bit more sleuthing. Listing 2
shows a misbehaving application
called Getlock run under truss.
Listing 2. Getlock run under truss
$ truss ./getlock
execve("getlock", 0xFFBFFDFC, 0xFFBFFE04) argc = 1
getcwd("/export/home/sean", 1015) = 0
resolvepath("/export/home/sean/getlock", "/export/home/sean/getlock",
1023) = 25
resolvepath("/usr/lib/ld.so.1", "/lib/ld.so.1", 1023) = 12
stat("/export/home/sean/getlock", 0xFFBFFBD8) = 0
open("/var/ld/ld.config", O_RDONLY) Err#2 ENOENT
stat("/opt/csw/lib/libc.so.1", 0xFFBFF708) Err#2 ENOENT
stat("/lib/libc.so.1", 0xFFBFF708) = 0
resolvepath("/lib/libc.so.1", "/lib/libc.so.1", 1023) = 14
open("/lib/libc.so.1", O_RDONLY) = 3
close(3) = 0
getcontext(0xFFBFF8D0)
getrlimit(RLIMIT_STACK, 0xFFBFF8B0) = 0
getpid() = 10715 [10714]
setustack(0xFF3A2088)
open("/tmp/lockfile", O_WRONLY|O_CREAT, 0755) = 3
getpid() = 10715 [10714]
fcntl(3, F_SETLKW, 0xFFBFFD60) (sleeping...)
The man page for fcntl() (man fcntl) describes the function simply as
"file control" on Solaris and
"manipulate file descriptor" on Linux. In all cases, fcntl() requires a
file descriptor, which is an integer
describing a file the process has opened, a command that specifies the
action to be taken on the file descriptor,
and finally any arguments required for the specific function. In the
example in Listing 2, the file descriptor is 3,
and the command is F_SETLKW. (The 0xFFBFFD60 is a pointer to a data
structure, which doesn't concern us now.)
Digging further, the man page states that F_SETLKW opens a lock on the
file and waits until the lock can be obtained.
From the first example involving the open() system call, you saw that a
successful call returns a file descriptor.
In the truss output of Listing 2, there are two cases in which the
result of open() returns 3.
Because file descriptors are reused after they are closed, the relevant
open() is the one just above fcntl(),
which is for /tmp/lockfile. A utility like lsof lists any processes
holding open a file. Failing that,
you could trace through /proc to find the process with the open file.
However, as is usually the case,
a file is locked for a good reason, such as limiting the number of
instances of the application or configuring
the application to run in a user-specific directory.
>> Attaching to a running process
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
11063 sean 1 0 0 1872K 952K run 87.9H 94.68% udpsend
$ truss -p 11063
The sendto() function's man page (man sendto) shows that this function
is used to send a message from a socket
-- typically, a network connection. The output of truss shows the file
descriptor (the first 3) and the data
being sent (abc). Indeed, capturing a sample of network traffic with the
snoop or tcpdump tool shows a large amount
of traffic being directed to a particular host, which is likely not the
result of a properly behaving application.
Note that truss was not able to show the creation of file descriptor 3,
because you had attached after the descriptor
was created. This is one limitation of attaching to a running process
and the reason why you should gather
other information using a tool, such as a packet analyzer before jumping
to conclusions.
This example might seem somewhat contrived (and technically it was,
because I wrote the udpsend application
to demonstrate how to use truss), but it is based on a real situation. I
was investigating a process running
on a UNIX-based appliance that had a CPU-bound process. Tracing the
application showed the same packet activity.
Tracing with a network analyzer showed the packets were being directed
to a host on the Internet. After escalating
with the vendor, I determined that the problem was their application
failing to perform proper error checking
on a binary configuration file. The file had somehow become corrupted.
As a result, the application interpreted
the file incorrectly and repeatedly hammered a random IP address with
User Datagram Protocol (UDP) datagrams.
After I replaced the file, the process behaved as expected.
After a while, you'll get the knack of what to look for. While it's
possible to use the grep command to go through
the output, it's easier to configure truss to focus only on certain
calls. This practice is common if you're trying
to determine how an application works, such as which configuration files
the application is using. In this case,
the open() and stat() system calls point to any files the application is
trying to open.
You use open() to open a file, but you use stat() to find information
about a file. Often, an application looks for
a file with a series of stat() calls, and then opens the file it wants.
For truss, you add filtering system calls with the -t option. For strace
under Linux, you use -e. In either case,
you pass a comma-separated list of system calls to be shown on the
command line. By prefixing the list with the
exclamation mark (!), the given calls are filtered out of the output.
Listing 5 shows a fictitious application
looking for a configuration file.
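For example, to watch only the file lookups of a (hypothetical) application:

$ truss -topen,stat ./appname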
# snap -gc
/tmp/ibmsupt/snap.pax.Z
Further info:
snap Command
Purpose
Gathers system configuration information.
Syntax
snap [ -a ] [ -A ] [ -b ] [ -B ] [ -c ] [ -C ] [ -D ] [ -f ] [ -g
] [ -G ] [ -i ] [ -k ] [ -l ] [ -L ][ -n ] [ -N ]
[ -p ] [ -r ] [ -R ] [ -s ] [ -S ] [ -t ] [ -T Filename ] [ -w
] [ -o OutputDevice ] [ -d Dir ] [ -v Component ]
[ -O FileSplitSize ] [ -P Files ]
[ script1 script2 ... | All | file:filepath ]
snap [ -a ] [ -A ] [ -b ] [ -B ] [ -c ] [ -C ] [ -D ] [ -f ] [ -g
] [ -G ] [ -i ] [ -k ] [ -l ] [ -L ][ -n ] [ -N ]
[ -p ] [ -r ] [ -R ] [ -s ] [ -S ] [ -t ] [ -T Filename ] [ -o
OutputDevice ] [ -d Dir ] [ -v Component ]
[ -O FileSplitSize ] [ -P Files ] [
script1 script2 ... | All | file:filepath ]
Description
Flags:
-a
Gathers all system configuration information. This option
requires approximately 8MB of temporary disk space.
-A
Gathers asynchronous (TTY) information.
-b
Gathers SSA information.
-B
Bypasses collection of SSA adapter dumps. The -B flag only
works when the -b flag is also specified; otherwise, the -B flag is
ignored.
-c
Creates a compressed pax image (snap.pax.Z file) of all
files in the /tmp/ibmsupt directory tree or other named output
directory. Note: Information not gathered with this option
should be copied to the snap directory tree before using the -c flag.
If a test case is needed to demonstrate the system problem,
copy the test case to the /tmp/ibmsupt/testcase directory before
compressing the pax file.
-C
Retrieves all the files in the fwdump_dir directory. The
files are placed in the "general" subdirectory. The -C snap option
behaves the same as -P*.
-D
Gathers dump and /unix information. The primary dump device
is used. Notes:
1 If bosboot -k was used to specify the running kernel
to be other than /unix, the incorrect kernel is gathered. Make sure
that /unix is , or is linked to, the kernel in use
when the dump was taken.
2 If the dump file is copied to the host machine, the
snap command does not collect the dump image in the /tmp/ibmsupt/dump
directory. Instead, it creates a link in the dump
directory to the actual dump image.
-d AbsolutePath
Identifies the optional snap command output directory
(/tmp/ibmsupt is the default). You must specify the absolute path.
-e
Gathers HACMP(TM) specific information. Note: HACMP specific
data is collected from all nodes belonging to the cluster . This
flag cannot be used with any other flags except -m and -d.
-f
Gathers file system information.
-g
Gathers the output of the lslpp -hac command, which is
required to recreate exact operating system environments. Writes output
to the /tmp/ibmsupt/general/lslpp.hBc file. Also collects
general system information and writes the output to the
/tmp/ibmsupt/general/general.snap file.
-G
Includes predefined Object Data Manager (ODM) files in
general information collected with the -g flag.
-i
Gathers installation debug vital product data (VPD)
information.
One main trace utility on most Linux distros is the "strace" command.
You can use it with many parameters, but the "-o outputfile" option is very important,
in order to save the output to a file.
Use it like:
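For example (the program name is hypothetical; the logfile name matches the one
examined below):

# strace -o /tmp/strace_example.log ./yourprogram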
Because strace will show you the systemcalls and signals, you can use it
to reveal whether a program cannot
find a file, or does not have permissions to read (or write to) a file.
In such a case, a program might fail.
Example:
A trace file can get pretty long, but you should just browse it and be
alert on what seems to be an error reported.
So, if we take a look in the logfile "strace_example.log"
..
..
open("/etc/security.conf", O_RDONLY|O_LARGEFILE) = -1 EACCES (Permission
denied)
write(2, "/etc/security.conf: Permission denied\n", 32) = 32
..
..
We can clearly see, that our program failed due to lack of read
permission.
=============
21. Logfiles:
=============
21.1 Solaris:
=============
Unix message files record all system problems like disk errors, swap
errors, NFS problems, etc.
Monitor the following files on your system to detect system problems:
tail -f /var/adm/syslog
tail -f /var/adm/messages
tail -f /var/log/syslog
Diagnostics can be done from the OK prompt after a reboot, with commands like
probe-scsi, show-devs, show-disks, test memory, etc.
You can also use the SunVTS tool to run diagnostics. SunVTS is Sun's
Validation Test Suite package.
System dumps:
You can manage system dumps by using the dumpadm command.
Logfiles:
---------
/var/adm/messages
The syslogd daemon logs its findings into this file
/var/adm/lastlog
This file holds the most recent login time for each user of the system
/var/adm/utmpx
This database file contains user access and accounting information for
commands such as
who, write, login. The utmpx file is where information such as the
terminal and login time
are stored, and if you use the who command, it will retrieve that
information.
/var/adm/wtmpx
This file contains the history of user access and accounting
information, for the utmpx database.
The "last" command will use this file, to show you the historical login
and logout info, since the last reboot.
/var/adm/sulog
This file shows you which users have used the su command to switch to
another user.
/var/adm/acct
If accounting is enabled, accounting information is recorded in that
file.
/var/adm/loginlog
If it is important for you to track whether users are trying to log in
to your user accounts,
you can create a /var/adm/loginlog file with read and write permissions
for root only. After you create the loginlog file,
all failed login activity is written to this file automatically after
five failed attempts. The five-try limit avoids recording
failed attempts that are the result of typographical errors.
The loginlog file contains one entry for each failed attempt. Each entry
contains the user's login name,
tty device, and time of the attempt.
AIX:
----
Periodically, the following files have to be reduced in size. You can use the
cat /dev/null command for that, as shown after this list:
/var/adm/sulog
/var/adm/cron/log
/var/adm/wtmp
/etc/security/failedlogin
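For example:

# cat /dev/null > /var/adm/sulog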
errdemon:
---------
On most UNIX systems, information and errors from system events and
processes are managed by the
syslog daemon (syslogd); depending on settings in the configuration file
/etc/syslog.conf, messages are passed
from the operating system, daemons, and applications to the console, to
log files, or to nowhere at all.
AIX includes the syslog daemon, and it is used in the same way that
other UNIX-based operating systems use it.
In addition to syslog, though, AIX also contains another facility for
the management of hardware, operating system,
and application messages and errors. This facility, while simple in its
operation, provides unique and valuable
insight into the health and happiness of an AIX system.
The AIX error logging facility components are part of the bos.rte and
the bos.sysmgt.serv_aid packages,
both of which are automatically placed on the system as part of the base
operating system installation.
The actual file in which error entries are stored is configurable; the
default is /var/adm/ras/errlog.
That file is in a binary format and so should never be truncated or
zeroed out manually. The errlog file
is a circular log, storing as many entries as can fit within its defined
size. A memory buffer is set
by the errdemon process, and newly arrived entries are put into the
buffer before they are written to the log
to minimize the possibility of a lost entry. The name and size of the
error log file and the size of the memory buffer
may be viewed with the errdemon command:
[aixhost:root:/] # /usr/lib/errdemon -l
Error Log Attributes
--------------------------------------------
Log File /var/adm/ras/errlog
Log Size 1048576 bytes
Memory Buffer Size 8192 bytes
The root crontab typically contains entries like these, to periodically clear old
software/operator (S,O) and hardware (H) entries from the error log:

0 11 * * * /usr/bin/errclear -d S,O 30
0 12 * * * /usr/bin/errclear -d H 90
The errdemon daemon constantly checks the /dev/error special file, and when new data
is written, the daemon conducts a series of operations.
- To determine the path to your system's error logfile, run the command:
# /usr/lib/errdemon -l
Error Log Attributes
Log File /var/adm/ras/errlog
Log Size 1048576 bytes
Memory 8192 bytes
You can generate the error reports using smitty or through the errpt
command.
# smitty errpt gives you a dialog screen where you can select
types of information.
# errpt -a
# errpt -d H
If you use the errpt without any options, it generates a summary report.
If used with the -a option, a detailed report is created.
You can also display errors of a particular class, for example for the
Hardware class.
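To display a report of all errors logged in the past 24 hours, enter:

# errpt -s mmddhhmmyy | pg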
where the mmddhhmmyy string equals the current month, day, hour, minute,
and year, minus 24 hours.
To list error-record templates for which logging is turned off for any
error-log entries, enter:
errpt -t -F log=0
To display a detailed report of all errors logged for the error label
ERRLOG_ON, enter:
errpt -a -J ERRLOG_ON
errpt -aD
To display a detailed report of all errors logged for the error labels
DISK_ERR1 and DISK_ERR2 during
the month of August, enter:
errpt -a -J DISK_ERR1,DISK_ERR2 -s 0801000004 -e 0831235904
errclear:
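For example, to delete all entries from the error log, or only the hardware-class
entries:

# errclear 0
# errclear -d H 0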
Example errorreport:
--------------------
Example 1:
----------
P550:/home/reserve $ errpt
You might create a script called alert.sh and call it from your .profile
#!/usr/bin/ksh
cd ~
rm -rf /root/alert.log
echo "Important alerts in errorlog: " >> /root/alert.log
errpt | grep -i STORAGE >> /root/alert.log
errpt | grep -i QUORUM >> /root/alert.log
errpt | grep -i ADAPTER >> /root/alert.log
errpt | grep -i VOLUME >> /root/alert.log
errpt | grep -i PHYSICAL >> /root/alert.log
errpt | grep -i STALE >> /root/alert.log
errpt | grep -i DISK >> /root/alert.log
errpt | grep -i LVM >> /root/alert.log
errpt | grep -i LVD >> /root/alert.log
errpt | grep -i UNABLE >> /root/alert.log
errpt | grep -i USER >> /root/alert.log
errpt | grep -i CORRUPT >> /root/alert.log
cat /root/alert.log
Example 2:
----------
Note 1:
-------
thread 1:
Q:
Has anyone seen these errors before? We're running 6239 fc cards on a
CX600. AIX level is 52-03 with the latest patches for
devices.pci.df1000f7
as well.
LABEL: SC_DISK_ERR4
IDENTIFIER: DCB47997
A:
thread 2:
Q:
> Has anyone corrected this issue? SC_DISK_ERR2 with EMC Powerpath
> filesets listed below? I am using a CX-500.
>
A:
A:
We have the same problem as well. EMC say its a firmware error on the
FC adapters
A:
1. In the Navisphere main screen, select tools and then click the
Failover Setup Wizard. Click next to continue.
2. From the drop-down list select the host server you wish to
modify and click next
7. Next login to the AIX command prompt as root and perform the
following commands to complete stopping the SC_DISK_ERR2.
a. lsdev -Cc disk | grep LUNZ
(Monitor the AIX error log to ensure the SC_DISK_ERR2 errors are gone)
Task Complete...
HACMP error:
------------
LABEL: LVM_GS_RLEAVE
IDENTIFIER: AB59ABFF
Description
Remote node Concurrent Volume Group failure detected
Probable Causes
Remote node Concurrent Volume Group forced offline
Failure Causes
Remote node left VGSA/VGDA groups due to failure
Recommended Actions
Examine error log on identified remote node
Detail Data
Remote Node Name
vleet
Volume Group ID
00CC 94EE 0000 4C00 0000 0111 4FE8 8651
MAJOR/MINOR DEVICE NUMBER
0045 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000
thread 1:
Q:
Hello ...
# ls -l /var/adm/ras/errlog
-rw-r--r-- 1 root system 0 Jun 14 17:31 /var/adm/ras/errlog
A:
Some err identifiers that can sometimes be hard to trace to their true
sources:
========================================================================
=======
------------------------------------------------------------------------
--
ERRPT ENTRY 1:
--------------
LABEL: CORE_DUMP
IDENTIFIER: C69F5C9B
Date/Time: Thu Jan 15 02:00:45 MET 2009
Sequence Number: 999
Machine Id: 00CC94EE4C00
Node Id: srv1
Class: S
Type: PERM
Resource Name: SYSPROC
Description
SOFTWARE PROGRAM ABNORMALLY TERMINATED
Probable Causes
SOFTWARE PROGRAM
User Causes
USER GENERATED SIGNAL
Recommended Actions
CORRECT THEN RETRY
Failure Causes
SOFTWARE PROGRAM
Recommended Actions
RERUN THE APPLICATION PROGRAM
IF PROBLEM PERSISTS THEN DO THE FOLLOWING
CONTACT APPROPRIATE SERVICE REPRESENTATIVE
Detail Data
SIGNAL NUMBER
11
USER'S PROCESS ID:
1298680
FILE SYSTEM SERIAL NUMBER
57
INODE NUMBER
37134
CORE FILE NAME
/var/core/core.1298680.15010044
PROGRAM NAME
BS_sear
STACK EXECUTION DISABLED
0
COME FROM ADDRESS REGISTER
PROCESSOR ID
hw_fru_id: 1
hw_cpu_id: 9
ADDITIONAL INFORMATION
??
??
Unable to generate symptom string.
(or as another example of the last lines, where you can see the
"program name")
PROGRAM NAME
opmn
STACK EXECUTION DISABLED
0
COME FROM ADDRESS REGISTER
PROCESSOR ID
hw_fru_id: 0
hw_cpu_id: 2
ADDITIONAL INFORMATION
strlen 0
pmStrdup 14
Symptom Data
REPORTABLE
1
INTERNAL ERROR
0
SYMPTOM CODE
PCSS/SPI2 FLDS/opmn SIG/11 FLDS/strlen VALU/0 FLDS/pmStrdup
------------------------------------------------------------------------
--
POSSIBLE EXPLANATION:
=====================
http://publib.boulder.ibm.com/infocenter/systems/index.jsp?topic=/com.ibm.aix.security/doc/security/stack_exec_disable.htm
AIX has enabled the stack execution disable (SED) mechanism to disable
the execution of code on a stack and select data areas of a process.
Beginning with the POWER4 family of processors, you can use a page-level
execution enable and/or disable feature
for the memory. The AIX SED mechanism uses this underlying hardware
support for implementing a
no-execution feature on select memory areas. Once this feature is
enabled, the operating system checks
and flags various files during the executable programs. It then alerts
the operating system memory manager
and the process managers that the SED is enabled for the process being
created. The select memory areas
are marked for no-execution. If any execution occurs on these marked
areas, the hardware raises
an exception flag and the operating system stops the corresponding
process. The exception and application
termination details are captured through the AIX error log events.
SED is implemented mainly through the sedmgr command. The sedmgr command
permits control
of the systemwide SED mode of operation as well as setting the
executable file based SED flags.
While systemwide flags control the systemwide operation of the SED, file
level flags indicate
how files should be treated in SED. The buffer overflow protection (BOP)
mechanism provides
for four systemwide modes of operation:
-- off
The SED mechanism is turned off and no process is marked for SED
protection.
--select
Only a select set of files are enabled and monitored for SED protection.
The select set of files
are chosen by reviewing the SED related flags in the executable program
binary headers.
The executable program header enables SED related flags to request to be
included in the select mode.
-- setidfiles
Permits you to enable SED, not only for the files requesting such a
mechanism, but all the important
setuid and setgid system files. In this mode, the operating system not
only provides SED for the files
with the request SED flag set, but also enables SED for the executable
files with the following
characteristics (except the files marked for exempt in their file
headers):
.SETUID files owned by root
.SETGID files with primary group as system or security
-- all
All executable programs loaded on the system are SED protected except
for the files requesting
an exemption from SED mode. Exemption related flags are part of the
executable program headers.
The SED feature on AIX also provides the ability to monitor instead of
stopping the process when
an exception happens. This systemwide control permits a system
administrator to check for breakdowns
and issues in the system environment by monitoring it before the SED is
deployed in the production systems.
The sedmgr command provides an option that permits you to enable SED to
monitor files instead
of stopping the processes when exceptions occur. The system
administrator can evaluate whether
an executable program is doing any legitimate stack execution. This
setting works in conjunction
with the systemwide mode set using the -c option. When the monitor mode
is turned on, the system permits
the process to continue operating even if an SED-related exception
occurs. Instead of stopping the process,
the operating system logs the exception in the AIX error log. If SED
monitoring is off,
the operating system stops any process that violates and raises an
exception per SED facility.
Any changes to the SED mode systemwide flags requires that you restart
the system for the changes
to take effect. All of these types of events are audited.
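For example (a sketch; as noted above, systemwide mode changes require a reboot):

# sedmgr              (display the current SED settings)
# sedmgr -c select    (set the systemwide SED mode to "select")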
------------------------------------------------------------------------
--
ERRPT ENTRY 2:
--------------
LABEL: SRC
IDENTIFIER: E18E984F
Description
SOFTWARE PROGRAM ERROR
Probable Causes
APPLICATION PROGRAM
Failure Causes
SOFTWARE PROGRAM
Recommended Actions
PERFORM PROBLEM RECOVERY PROCEDURES
Detail Data
SYMPTOM CODE
0
SOFTWARE ERROR CODE
-9053
ERROR CODE
2
DETECTING MODULE
'tellsrc.c'@line:'87'
FAILING MODULE
Duplicates
Number of duplicates
3
Time of first duplicate
Fri Jan 16 09:31:18 MET 2009
Time of last duplicate
Fri Jan 16 09:31:33 MET 2009
POSSIBLE EXPLANATIONS:
======================
http://www-01.ibm.com/support/docview.wss?uid=isg1IZ03064
APAR status
Closed as program error.
Error description
"varyonvg -c" fails to varyon concurrent volume group and
reports the following error message:
LABEL: SRC
IDENTIFIER: E18E984F
Class: S
Type: PERM
Resource Name: SRC
Description
SOFTWARE PROGRAM ERROR
Probable Causes
APPLICATION PROGRAM
Failure Causes
SOFTWARE PROGRAM
Recommended Actions
PERFORM PROBLEM RECOVERY PROCEDURES
Detail Data
SYMPTOM CODE
0
SOFTWARE ERROR CODE
-9053
ERROR CODE
74
DETECTING MODULE
'srcmstr.c'@line:'529'
FAILING MODULE
Local fix
This problem occurs when multiple "varyonvg -nc"
commands are performed together. By serializing
these commands, this can be avoided.
Problem summary
Multiple varyonvg -c processes will all create threads in
the gsclvmd daemon. With certain timing, these threads can
interfere with each other's global variables and possibly cause
varyonvg to fail.
Problem conclusion
Privatize variables so multiple VGs coming online can't
interfere with each other.
Temporary fix
Comments
5200-10 - use AIX APAR IZ05735
5300-06 - use AIX APAR IZ02334
5300-07 - use AIX APAR IZ03064
APAR information
APAR number IZ03064
Reported component name AIX 5.3
Reported component ID 5765G0300
Reported release 530
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2007-08-14
Closed date 2007-09-04
Last modified date 2007-12-06
Publications Referenced
Fix information
Fixed component name AIX 5.3
Fixed component ID 5765G0300
error INTRPPC_ERR:
------------------
LABEL: INTRPPC_ERR
IDENTIFIER: 853015D6
Description
UNDETERMINED ERROR
Probable Causes
SYSTEM I/O BUS
SOFTWARE PROGRAM
ADAPTER
DEVICE
Recommended Actions
PERFORM PROBLEM DETERMINATION PROCEDURES
Detail Data
BUS NUMBER
9001 00C0
INTERRUPT LEVEL
0009 0001
Number of Occurrences
0000 0001
Possible explanations:
----------------------
thread 1:
A fix is available
APAR status
Closed as program error.
Error description
INTRPPC_ERR errors were observed in the error log while
customer ran a testcase as mentioned in the defect.
Local fix
Problem summary
INTRPPC_ERR errors were observed in the error log while
customer ran a testcase, which brings up and down the phxentdd
interface in an infinite loop. A ping is executed using the IP
address associated with this interface.
Problem conclusion
A simple code change to ignore the interrupts
while driver is in closing state.
Temporary fix
Comments
APAR information
APAR number IY58847
Reported component name AIX 5L FOR POWE
Reported component ID 5765E6100
Reported release 510
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2004-07-13
Closed date 2004-07-13
Last modified date 2004-10-29
Publications Referenced
Fix information
Fixed component name AIX 5L FOR POWE
Fixed component ID 5765E6100
thread 2:
.... but it is more than likely a device driver issue rather than the
device itself.
thread 3:
>Detail Data
>BUS NUMBER
>0000 00C0
>INTERRUPT LEVEL
>0000 0005
Example:
Detail Data
BUS NUMBER
0000 00C0
INTERRUPT LEVEL
0000 0003
--------------------------------------------------------------------------
Technote (FAQ)
Question
Why is errpt showing an ECH_CANNOT_SET_CLBK error for an etherchannel
that is configured in Network Interface Backup mode
using virtual I/O network adapters?
Answer
To verify this:
# lsattr -El entX (where entX is the etherchannel)
ERRPT entry:
============
LABEL: TS_LATEHB_PE
IDENTIFIER: 3C81E43F
Description
Late in sending heartbeat
Probable Causes
Heavy CPU load
Severe physical memory shortage
Heavy I/O activities
Failure Causes
Daemon can not get required system resource
Recommended Actions
Reduce the system load
Detail Data
DETECTING MODULE
rsct,bootstrp.C,1.213,4835
ERROR ID
6zESUw.88Yn7/UKN0Nr9GF0...................
REFERENCE CODE
diag command:
-------------
The diag command is the starting point to run a wide choice of tasks and
service aids.
Most of the tasks/service aids are platform specific.
# diag -d scdisk0 -c
System dumps:
-------------
A system dump is created when the system has an unexpected system halt
or system failure.
In AIX 5L the default dump device is /dev/hd6, which is also the default
paging device.
You can use the sysdumpdev command to manage system crash dumps.
If no flags are used with the sysdumpdev command, the dump devices
defined in the SWservAt
ODM object class are used. The default primary dump device is /dev/hd6.
The default secondary dump device is
/dev/sysdumpnull.
Examples
To display current dump device settings, enter:
sysdumpdev -l
To permanently change the database object for the primary dump device to
/dev/newdisk1, enter:
sysdumpdev -P -p /dev/newdisk1
To designate remote dump file /var/adm/ras/systemdump on host mercury
for a primary dump device, enter:
sysdumpdev -p mercury:/var/adm/ras/systemdump
A : (colon) must be inserted between the host name and the file name.
To specify the directory that a dump is copied to after a system crash,
if the dump device is /dev/hd6, enter:
sysdumpdev -d /tmp/dump
This attempts to copy the dump from /dev/hd6 to /tmp/dump after a system
crash. If there is an error during the copy,
the system continues to boot and the dump is lost.
To specify the directory that a dump is copied to after a system crash,
if the dump device is /dev/hd6, enter:
sysdumpdev -D /tmp/dump
This attempts to copy the dump from /dev/hd6 to the /tmp/dump directory
after a crash. If the copy fails,
you are prompted with a menu that allows you to copy the dump manually
to some external media.
If you have the Software Service Aids Package installed, you have access
to the sysdumpstart command.
You can start the system dump by entering:
# sysdumpstart -p
note 1:
-------
>
> Description
> The copy directory is too small.
>
> Recommended Actions
> Increase the size of that file system.
>
> Detail Data
> File system name
> /var/adm/ras
>
> Current free space in kb
> 7636
> Current estimated dump size in kb
> 207872
> I guess /dev/hd6 is not big enough to contain a system dump. So how
> can I change that?
> How can I configure a secondary sysdump space in case the primary
> would be unavailable?
sysdumpdev -s /dev/whatever
That's where the crash dump will be put when you reboot after the crash.
/dev/hd6 will be needed for other purposes (paging space), so you cannot
keep your system dump there.
And that file system is too small to contain the dump, that's the
meaning
of the error message.
- increase the /var file system (it should have ample free space
anyway).
- change the dump directory to something where you have more space:
sysdumpdev -D /something/in/rootvg/with/free/space
Yours,
Laurenz Albe
Note 2:
-------
$ errpt
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
F89FB899 0822150005 P O dumpcheck The copy directory is too small
This message is the result of a dump device check. You can fix this by
increasing the size of your dump device. If you are using the default
dump device (/dev/hd6) then increase your paging size or go to smit dump
and "select System Dump Compression". Myself, I don't like to use the
default dump device so I create a sysdumplv and make sure I have enough
space. To check space needed go to smit dump and select "Show Estimated
Dump Size" this will give you an idea about the size needed.
# sysdumpdev -e
0453-041 Estimated dump size in bytes: 57881395
Divide this number by 1024; that is the free space (in KB) needed in
your copy directory, which you can compare to the output of df -k.
Alternatively, divide the number by 512 and compare the result to the
512-byte blocks reported by a plain df.
Note 3:
-------
selalbe@wijting:/home/beab_krn/selalbe $ errpt
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
E87EF1BE 0309150009 P O dumpcheck The largest dump device is too
small.
thread:
Do "sysdumpdev -l"; you should see both primary and secondary dump
devices. From this you need to ensure that these are big enough to hold
a system dump, so type "sysdumpdev -e" to get an estimate of the dump
size, and resize your dump devices accordingly.
Try to increase these above the value you have; if it is a new system,
allow for growth of the system and give it plenty of space if possible.
thread:
IZ05158: POSSIBLE SYSTEM CRASH AFTER PCI BUS ERROR AFFECTING FC ADAPTER
APPLIES TO AIX 5300-06
A fix is available
Obtain fix for this APAR
APAR status
Closed as program error.
Error description
A system crash can occur if a Fibre Channel adapter
suffers PCI bus errors around the time of an adapter
reset.
(2)> f
pvthread+800000 STACK:
[03CE5678]efc_read_reg+000048 ()
[03CFF738]efc_finish_read_rev_mb+000290 ()
[03CF1154]efc_adap_post_trb+00049C ()
[00031F3C]clock+00017C ()
[000DF250]i_softmod+00027C ()
[000DE928].finish_interrupt+000024 ()
Local fix
Respond promptly to any PCI_RECOVERABLE_ERR errors that
appear in the AIX error log. It will not be possible to
respond quickly enough to prevent a crash associated with
a particular error, but addressing the underlying problem
causing repeated PCI errors will lessen the risk to the
environment.
Problem summary
A system crash can occur if a Fibre Channel adapter
suffers PCI bus errors around the time of an adapter
reset.
Problem conclusion
Handling of a possibly erroneous value read around an EEH
event was introduced in the relevant code path.
Temporary fix
Comments
5300-06 - use AIX APAR IZ05158
5300-07 - use AIX APAR IZ25395
5300-08 - use AIX APAR IZ24657
5300-09 - use AIX APAR IZ19062
6100-00 - use AIX APAR IZ23541
6100-01 - use AIX APAR IZ19769
6100-02 - use AIX APAR IZ19384
APAR information
APAR number IZ05158
Reported component name AIX 5.3
Reported component ID 5765G0300
Reported release 530
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2007-09-21
Closed date 2008-03-18
Last modified date 2008-11-17
Publications Referenced
Fix information
Fixed component name AIX 5.3
Fixed component ID 5765G0300
23. DOS2UNIX:
=============
If you want to convert an ASCII PC file to Unix format, you can use
many tools like tr etc..
Or scripts like:
#!/bin/sh
perl -p -i -e 'BEGIN { print "Converting DOS to UNIX.\n" ; } END { print
"Done.\n" ; } s/\r\n$/\n/' $*
Or, on many Unixes you can use the utility "dos2unix" to remove the
^M characters.
Just type: dos2unix <filename1> <filename2> [RETURN]
-ascii
Removes extra carriage returns and converts end of file characters in
DOS format text files to conform to SunOS requirements.
-iso
This is the default. It converts characters in the DOS extended
character set to the corresponding ISO standard characters.
-7
Convert 8 bit DOS graphics characters to 7 bit space characters so that
SunOS can read the file.
#!/bin/sh
# a script to strip carriage returns from DOS text files
# (note: the original used "$.tmp", which creates a literal file named
# "$.tmp"; "$1.tmp" is what was intended)
if test -f "$1"
then
  tr -d '\r' < "$1" > "$1.tmp"
  rm "$1"
  mv "$1.tmp" "$1"
fi
1. nvdmetoa command:
Converts an EBCDIC file taken off an AS/400 to an ASCII file
for the pSeries or RS/6000.
2. od command:
The od command translates a file into other formats, like for example
hexadecimal format.
To translate a file into several formats at once, enter:
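(The od invocation was lost here; presumably something along these
lines, where -b gives octal bytes and -c characters side by side; the
filename is illustrative:)
# od -bc myfile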
ssh:
====
OpenSSH delivers code that communicates using SSH1 and SSH2 protocols.
What's the difference? The SSH2 protocol
is a re-write of SSH1. SSH2 contains separate, layered protocols, but
SSH1 is one large set of code. SSH2 supports
both RSA & DSA keys, but SSH1 supports only RSA, and SSH2 uses a strong
crypto integrity check, where SSH1 uses
a CRC-32 check. The Internet Engineering Task Force (IETF) maintains the
secure shell standards.
Example 1:
----------
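(The actual command was lost here; it would have been something like
the following, where "acme" stands for the remote host:)
$ ssh username@acme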
Replace "username" with your Prism ID. If this is your first time
connecting to acme, you will see
a warning similar to this:
Type the word "yes" and hit <ENTER>. You should see the following
warning:
Next, you will be prompted for your password. Type your password and hit
<ENTER>.
Example 2:
----------
pscp:
=====
------------------------------------
@echo off
REM Script to copy a file from a UNIX system to the workstation via pscp.exe.
------------------------------------
@echo off
REM Script to copy a file from the workstation to a UNIX system via pscp.exe.
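(The pscp commands themselves were lost from these scripts; a sketch of
what such calls look like, with host, account and paths illustrative:)
pscp albert@starboss:/home/albert/file.txt C:\temp\file.txt
pscp C:\temp\file.txt albert@starboss:/home/albert/file.txt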
Either the source or the destination may be on the remote machine; i.e.,
you may copy files or directories
into the account on the remote system OR copy them from the account on
the remote system into the account
you are logged into.
Example:
# scp conv1.tar.gz [email protected]:/backups/520backups/splenvs
# scp conv2.tar.gz [email protected]:/backups/520backups/splenvs
Example:
# scp myfile xyz@sdcc7:myfile
Example:
To copy a directory, use the -r (recursive) option.
# scp -r mydir xyz@sdcc7:mydir
Example:
cd /oradata/arc
/usr/local/bin/scp *.arc SPRAT:/oradata/arc
Example:
While logged into xyz on sdcc7, copy file "letter" into file
"application" in remote account abc on sdcc3:
% scp letter abc@sdcc3:application
While logged into abc on sdcc3, copy file "foo" from remote account xyz
on sdcc7 into filename "bar" in abc:
% scp xyz@sdcc7:foo bar
1. Decide which user account to use (on all hosts), and log on to the
local host with that account
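2. (This step was lost in the original; presumably:) generate a keypair
on the local host, for example with
   $ ssh-keygen -t dsa
   accepting the default location, which yields the files ~/.ssh/id_dsa
   (private key) and ~/.ssh/id_dsa.pub (public key).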
3. Note the name and location of the public key just generated. It
always ends in .pub.
4. Change the permissions of the generated .pub file to 600, for example
chmod 600 id_dsa.pub (or 700).
In effect, make sure that no group, or everyone (world), has any
access to the file.
5. Copy the public key just generated to all of your remote boxes. You
can use scp or FTP
or whatever to make the copy.
if you are logging in as a user, for example, albert, you should copy
it to
"/home/albert/.ssh/authorized_keys". But (!) first check whether that
file already exists.
If the file already exists and contains text,
you need to append the contents of your public key file to what
already is there.
If you want to do the same for scp from hostb to hosta, perform the same
steps again, but now
of course with the server roles reversed.
Notes:
ssh on AIX:
===========
After you download the OpenSSL package, you can install OpenSSL and
OpenSSH.
Use the Y flag to accept the OpenSSH license agreement after you have
reviewed the license agreement.
(Note: we have seen this line as well:
# geninstall -Y -d/dev/cd0 I:openssh.base)
Installation Summary
--------------------
Name                  Level       Part  Event  Result
-------------------------------------------------------------------------------
openssh.base.client   3.8.0.5200  USR   APPLY  SUCCESS
openssh.base.server   3.8.0.5200  USR   APPLY  SUCCESS
openssh.base.client   3.8.0.5200  ROOT  APPLY  SUCCESS
openssh.base.server   3.8.0.5200  ROOT  APPLY  SUCCESS
You can also use the SMIT install_software fast path to install OpenSSL
and OpenSSH.
The sshd daemon is under AIX SRC control. You can start, stop, and view
the status of the daemon
by issuing the following commands:
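# startsrc -s sshd
# stopsrc -s sshd
# lssrc -s sshd
(Depending on how OpenSSH was packaged, the subsystem may also be
addressed per group, as in "startsrc -g ssh"; see the script below.)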
root@zd110l14:/etc/rc.d/rc2.d#cat Ssshd
#!/bin/ksh
##################################################
# name: Ssshd
# purpose: script that will start or stop the sshd daemon.
##################################################
case "$1" in
start )
startsrc -g ssh
;;
stop )
stopsrc -g ssh
;;
* )
echo "Usage: $0 (start | stop)"
exit 1
esac
EXAMPLE: Using the more command, and a pipe, send the contents of
your .profile and .shrc files to the
screen by typing
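(The command was lost here; it would have been along these lines:)
cat .profile .shrc | more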
EXERCISE: How could you use head and tail in a pipeline to display lines
25 through 75 of a file?
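(The answer was lost here; note that lines 25 through 75 inclusive are
51 lines, so a pipeline like)
cat file | head -75 | tail -51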
would work. The cat command feeds the file into the pipeline. The head
command gets the first 75 lines
of the file, and passes them down the pipeline to tail. The tail command
then filters out all but the last
51 lines of the input it received from head. It is important to note
that in the above example, tail never
sees the original file, but only sees the part of the file that was
passed to it by the head command.
It is easy for beginners to confuse the usage of the input/output
redirection symbols < and >, with the
usage of the pipe. Remember that input/output redirection connects
processes with files, while the pipe connects
processes with other processes.
Grep
The grep utility is one of the most useful filters in UNIX. Grep
searches line-by-line for a specified pattern,
and outputs any line that matches the pattern. The basic syntax for the
grep command is
grep [-options] pattern [file]. If the file argument is omitted, grep
will read from standard input.
It is always best to enclose the pattern within single quotes, to
prevent the shell
from misinterpreting the command.
For example, to search the /etc/passwd file for any lines containing
the string "jon", type:
EXERCISE:List all the files in the /tmp directory owned by the user
root.
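(One possible answer, as a sketch:)
ls -l /tmp | grep root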
Redirecting:
------------
CONCEPT: Every program you run from the shell opens three files:
Standard input, standard output,
and standard error. The files provide the primary means of
communications between the programs,
and exist for as long as the process runs.
The standard output provides a means for the program to output data. As
a default, the standard output
goes to the terminal display screen.
The standard error is where the program reports any errors encountered
during execution.
By default, the standard error goes to the terminal display.
CONCEPT: A program can be told where to look for input and where to send
output, using input/output
redirection. UNIX uses the "less than" and "greater than" special
characters (< and >) to signify input
and output redirection, respectively.
Redirecting input
Using the "less-than" sign with a file name like this:
< file1
in a shell command instructs the shell to read input from a file called
"file1" instead of from the keyboard.
Many UNIX commands that will accept a file name as a command line
argument, will also accept input from
standard input if no file is given on the command line.
EXAMPLE: To see the first ten lines of the /etc/passwd file, the
command:
head /etc/passwd
will work just the same as the command:
head < /etc/passwd
Redirecting output
Using the "greater-than" sign with a file name like this:
> file2
causes the shell to place the output from the command in a file called
"file2" instead of on the screen.
If the file "file2" already exists, the old version will be overwritten.
>> file2
causes the shell to append the output from a command to the end of a
file called "file2". If the file
"file2" does not already exist, it will be created.
EXAMPLE: In this example, I list the contents of the /tmp directory, and
put it in a file called myls.
Then, I list the contents of the /etc directory, and append it to the
file myls:
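(The commands were lost here; they would have been:)
ls /tmp > myls
ls /etc >> myls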
Redirecting error
Redirecting standard error is a bit trickier, depending on the kind of
shell you're using
(there's more than one flavor of shell program!). In the POSIX shell and
ksh, redirect the standard error
with the symbol "2>".
EXAMPLE: Sort the /etc/passwd file, place the results in a file called
foo, and trap any errors in a file
called err with the command:
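(The command was lost here; in the POSIX shell or ksh it would be:)
sort < /etc/passwd > foo 2> err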
===========================
27. UNIX DEVICES and mknod:
===========================
27.1 Note 1:
============
the files in the /dev directory are a little different from anything you
may be used to in
other operating systems.
The very first thing to understand is that these files are NOT the
drivers for the devices. Drivers are in
the kernel itself (/unix etc..), and the files in /dev do not actually
contain anything at all:
they are just pointers to where the driver code can be found in the
kernel. There is nothing more to it
than that. These aren't programs, they aren't drivers, they are just
pointers.
That also means that if the device file points at code that isn't in the
kernel, it obviously is not
going to work. Existence of a device file does not necessarily mean that
the device code is in the kernel,
and creating a device file (with mknod) does NOT create kernel code.
Unix actually even shows you what the pointer is. When you do a long
listing of a file in /dev,
you may have noticed that there are two numbers where the file size
should be:
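(The listing was lost here; it would have looked something like this,
with the major,minor pair where the file size normally appears; names
and dates are illustrative:)
brw-rw-rw-   2 bin   bin    2, 60 Sep 14 09:28 /dev/fd0135ds18
crw-rw-rw-   1 bin   bin    5,  0 Sep 14 09:28 /dev/tty1a
crw-rw-rw-   1 bin   bin    5,  8 Sep 14 09:28 /dev/tty2a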
Notice the "b" and the "c" as the first characters in the mode of the
file. It designates whether
we have a block "b", or a character "c" device.
Notice that each of these files shares the "5" part of the pointer, but
that the other number is different.
The "5" means that the device is a serial port, and the other number
tells exactly which com port you are
referring to. In Unix parlance, the 5 is the "major number" and the
other is the "minor number".
These numbers get created with a "mknod" command. For example, you could
type "mknod /dev/myfloppy b 2 60" and
then "/dev/myfloppy" would point to the same driver code that
/dev/fd0135ds18 points to, and it would
work exactly the same.
But if you didn't know that the magic numbers were "2,60", how could you
find out?
First, have a look at "man idmknod". The idmknod command wipes out all
non-required devices, and then recreates them.
Sounds scary, but this gets called every time you answer "Y" to that
"Rebuild Kernel environment?" question that
follows relinking. Actually, on 5.0.4 and on, the existing /dev files
don't get wiped out; the command simply
recreates whatever it has to.
idmknod requires several arguments, and you'd need to get them right to
have success. You could make it easier
by simply relinking a new kernel and answering "Y" to the "Rebuild"
question, but that's using a fire hose to
put out a candle.
A less dramatic method would be to look at the files that idmknod uses
to recreate the device nodes. These are found
in /etc/conf/node.d
In this case, the file you want would be "fd". A quick look at part of
that shows:
This gives you *almost* everything you need to know about the device
nodes in the "fd" class. The only thing it
doesn't tell you is the major number, but you can get that just by doing
an "l" of any other fd entry:
mknod /dev/fd0135ds18 b 2 60
chown bin /dev/fd0135ds18
chgrp bin /dev/fd0135ds18
chmod 666 /dev/fd0135ds18
If you examined the node file closely, you would also notice that
/dev/rfd0135ds18 and /dev/fd0135ds18 differ only
in that the "r" version is a "c" or character device and the other is
"b" or block. If you had already known that,
you wouldn't have even had to look at the node file; you'd simply have
looked at an "l" of the /dev/rfd0135ds18 and
recreated the block version appropriately.
There are other fascinating things that can be learned from the node
files. For example, fd096ds18 is also minor number 60,
and can be used in the same way with identical results. In other words,
if you z'd out (were momentarily inattentive,
not CTRL-Z in a job control shell) and dd'd an image to /dev/fd096ds18,
it would write to your hd floppy without incident.
If you have a SCSI tape drive, notice what happens when you set it to be
the "default" tape drive.
It creates device files that have different names (rct0, etc.) but that
have the same major and minor numbers.
Knowing that it's easy to recreate missing device files also means that
you can sometimes capture the output
of programs that write directly to a device. For example, suppose some
application prints directly to /dev/lp
but you need to capture this to a file. In most situations, you can
simply "rm /dev/lp" (after carefully noting
its current ownership, permissions and, of course, major/minor numbers),
and then "touch /dev/lp" to create an
ordinary file. You'll need to chmod it for appropriate permissions, and
then run your app. Unless the app has
tried to do ioctl calls on the device, the output will be there for your
use. This can be particularly useful
for examining control characters that the app is sending.
The real difference lies in what the kernel does when a device file is
accessed for reading or writing. If the device
is a block device, the kernel gives the driver the address of a kernel
buffer that the driver will use as the source
or destination for data. Note that the address is a "kernel" address;
that's important because that buffer will be
cached by the kernel. If the device is raw , then the address it will
use is in the user space of the process that is
using the device. A block device is something you could make a
filesystem on (a disk). You can move forward and backward,
from the beginning of a block device to its end, and then back to the
beginning again. If you ask to read a block that
the kernel has buffered, then you get data from the buffer. If you ask
for a block that has not yet been buffered,
the kernel reads that block (and probably a few more following it) into
the buffer cache. If you write to a block device,
it goes to the buffer cache (eventually to the device, of course). A raw
(or character) device is often something that
doesn't have a beginning or end; it just gives a stream of characters
that you read. A serial port is an excellent
example- however, it is not at all unusual to have character (raw)
drivers for things that do have a beginning
and an end- a tape drive, for example. And many times there are BOTH
character and block devices for the same
physical device- disks, for example. Nor does using a raw device
absolutely mean that you can't move forward and back,
from beginning to end- you can move wherever you want with a tape or
/dev/rfd0.
You'd use a block device when you want to take advantage of the caching
provided by the kernel. You'd use the raw device
when you don't, or for ioctl operations like "tape status" or "stty -a".
27.2 Note 2:
============
Ordinary or plain files in Unix are not necessarily text files. They may
contain ASCII text, binary data, or program input
or output. Executable binaries (programs) are also files, as are
commands. When a user enters a command, the associated
file is retrieved and executed. This is an important feature and
contributes to the flexibility of Unix.
Special files are also known as device files. In Unix all physical
devices are accessed via device files; they are
what programs use to communicate with hardware. Files hold information
on location, type, and access mode for a
specific device. There are two types of device files; character and
block, as well as two modes of access.
- Block device files are used to access block device I/O. Block devices
do buffered I/O, meaning that the data is
collected in a buffer until a full block can be transferred.
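- Character (raw) device files are used for unbuffered I/O: data is
transferred directly between the device and the process, without the
kernel buffer cache. (This bullet was lost in the original and is
reconstructed from the surrounding text.)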
Device files are found in the /dev directory. Each device is assigned a
major and minor device number. The major
device number identifies the type of device, i.e. all SCSI devices would
have the same number as would all the keyboards.
The minor device number identifies a specific device, i.e. the keyboard
attached to this workstation.
Device files are created using the mknod command. The form for this
command is:
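(The form itself was lost here; the standard syntax is:)
mknod <name> <b|c> <major-number> <minor-number>
For example, major numbers might be assigned as follows: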
0 keyboard
1 SCSIbus
2 tty
3 disk
Using the ls command in the /dev directory will show entries that look
like:
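(Hypothetical example, consistent with the text below:)
brw-r-----   1 root  sys    1,  0 Sep 14 09:28 /dev/sd1a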
The "b" before the permissions indicates that this is a block device
file. When a user enters /dev/sd1a the kernel sees
the file opening, realizes that it's major device number 1, and calls up
the SCSIbus function to handle it.
====================
28. Solaris devices:
====================
Solaris stores the entries for physical devices under the /devices
directory,
and the logical device entries behind the /dev directory.
- A "physical device name" represents the full pathname of the device.
Physical device files are found in the /devices directory and have a
naming convention like the following example:
/devices/sbus@1,f8000000/esp@0,40000/sd@3,0:a
Each device has a unique name representing both the type of device and
the location of that device
in the system-addressing structure called the "device tree". The
OpenBoot firmware builds the
device tree for all devices from information gathered at POST. The
device tree is loaded in memory
and is used by the kernel during boot to identify all configured
devices.
A device pathname is a series of node names separated by slashes.
Each device has the following form:
driver-name@unit-address:device-arguments
- The "instance name" represents the kernel's abbreviated name for every
possible device
on the system. For example, sd0 and sd1 represents the instance names
of two SCSI disk devices.
Instance names are mapped in the /etc/path_to_inst file, and are
displayed by using the
commands dmesg, sysdef, and prtconf
- The "Logical device names" are used with most Solaris file system
commands to refer to devices.
Logical device files in the /dev directory are symbolically linked to
physical device files
in the /devices directory. Logical device names are used to access
disk devices in the
following circumstances:
- adding a new disk to the system and partitioning the disk
- moving a disk from one system to another
- accessing or mounting a file system residing on a local disk
- backing up a local file system
- repairing a file system
Logical device files have a major and minor number that indicate
device drivers,
hardware addresses, and other characteristics.
Furthermore, a device filename must follow a specific naming
convention.
A logical device name for a disk drive has the following format:
/dev/[r]dsk/cxtxdxsx
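For example, /dev/dsk/c0t3d0s2 denotes controller 0, SCSI target 3,
disk (LUN) 0, slice 2; the rdsk variant is the raw (character) device.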
===========================
29. filesystems in Solaris:
===========================
The UFS filesystem has always been the most popular fs on Solaris.
Of course, when the newer ZFS filesystem became available, it was
rapidly adopted.
We will first take a look at a few classical commands that you would
typically use on a UFS filesystem.
Of course, many "listing commands", like for example df (to show what's
used and what is free space),
can be used on ZFS as well. But creating an fs on ZFS works quite
differently from what you can find in section 29.1
# du -ks /home/fred
# du -ks /home/fred/*
# du -s /home/fred
# du -sg /data
# format -> specify disk -> choose partition -> choose print to get the
partition table
# cfgadm -al
-- pointer 1.
If you have a CD put in the drive, and it was automounted, simply use
the "df" command to view your filesystems:
# df -k or df -h
-- pointer 2.
# iostat -En
you could figure out what logical device name your CDROM has.
-- pointer 3.
Solaris uses the same naming conventions for a CDROM as it uses for
hard disks (the cXtXdXsX scheme).
The simplest way to mount a CDROM on Solaris is to use the vold daemon. The vold
daemon in Solaris manages the CD-ROM device
and automatically performs the mounting similar to how Windows manages
CDROMs (but not as transparent or reliable).
If a CD is detected in the drive, it should be automatically mounted to
the /cdrom/cdrom0 directory.
# newfs /dev/rdsk/c0t3d0s7
FSCK in Solaris:
----------------
# fsck -m /dev/rdsk/c0t0d0s6
ok probe-scsi
..
Target 3
Unit 0 Disk Seagate ST446452W 0001
..
ok boot -r
In this example, our disk is SCSI target 3, so we can refer to the whole
disks as
/dev/rdsk/c0t3d0s2 # slice 2, or partition 2, s2 refers to the
whole disk
We now use the format program to partition the disk, and afterwards
create filesystems.
# format /dev/rdsk/c0t3d0s2
(.. output..)
FORMAT MENU:
format>label
Ready to label disk, continue? y
format>partition
PARTITION MENU:
partition>
Once you have created and sized the partitions, you can get a list with
the "partition>print" command.
Now, for example, you can create a filesystem like in the following
command:
# newfs /dev/rdsk/c0t3d0s0
devfsadm:
---------
As from Solaris 8:
Examples:
# devfsadm -i sd
# devfsadm -c tape
There are at least 4 different types of filesystems you can use with
Solaris 10 (except for ZFS, the same list
applies to the older Solaris 8 and 9 versions).
These are:
-- UFS
The traditional filesystem for Solaris systems. UFS is old technology
but it is a stable and fast filesystem.
Sun has continuously tuned and improved the code over the years.
Solaris 10 (and older, of course) can only boot from a UFS root filesystem.
In the future,
ZFS boot will be available, as it already is in OpenSolaris. But for
now, every Solaris system must have
at least one UFS filesystem.
Note: This "boot-statement" was true at the time of writing. Maybe you
read this way after that time, and maybe
Solaris can now boot from zfs or other filesystem.
-- ZFS
We will talk a bit on ZFS in section 29.3
-- VxFS
The Veritas filesystem and volume manager have their roots in a fault-
tolerant proprietary minicomputer
built by Veritas in the 1980s. They have been available for Solaris
since at least 1993 and have been
ported to AIX and Linux. They are integrated into HP-UX and SCO UNIX,
and Veritas Volume Manager code
has been used (and extensively modified) in Tru64 UNIX and even in
Windows.
VxFS has never been part of Solaris but, when UFS was the only option,
it was a popular addition.
VxVM and VxFS are tightly integrated. Through vxassist, one may shrink
and grow filesystems and their
underlying volumes with minimal trouble.
-- PCFS
It's even possible to use the DOS FAT filesystem.
-- HSFS
Of course, the CDROM filesystem HSFS can be used.
Everything you hate about managing file systems and volumes is gone: you
don't have to use format, and create slices/partitions, use newfs,
mount, edit /etc/vfstab,
fsck, growfs, metadb, metainit, etc.
ZFS is easy, so let's get on with it! It's time to create your first
pool:
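(The command was lost here; per the standard ZFS example, with the disk
name purely illustrative:)
# zpool create tank c1t0d0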
You now have a single-disk storage pool named tank, with a single file
system mounted at /tank. There is nothing else to do.
Yes, its really true:
The new ZFS file system, tank, can use as much of the disk space as
needed, and is automatically mounted at /tank.
You can determine if your pool was successfully created by using the
zpool list command.
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
tank 80G 137K 80G 0% ONLINE -
Suppose we create a file in /tank and want to see how things looks like:
# mkfile 100m /tank/foo
# df -h /tank
Filesystem size used avail capacity Mounted on
tank 80G 100M 80G 1% /tank
If you want mirrored storage for mail and home directories, that's easy
too:
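(The commands were lost here; based on the standard ZFS examples, they
would be along these lines, with the disk names and the user name
ahrens illustrative:)
# zpool create tank mirror c1t0d0 c2t0d0
# zfs create tank/home
# zfs set mountpoint=/export/home tank/home
# zfs create tank/home/ahrens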
ZFS file systems are hierarchical: each one inherits properties from
above. In this example, the mountpoint property is inherited
as a pathname prefix. That is, tank/home/ahrens is automatically mounted
at /export/home/ahrens because tank/home is mounted at /export/home.
You don't have to specify the mountpoint for each individual user - you
just tell ZFS the pattern.
ZFS uses a commit and rollback mechanism, to ensure that all data is
written completely, and if not, everything is rolled back.
You probably know that with former filesystems, that you could choose
- for a filesystem without journaling (logging)
- or indeed use journaling (or logging).
Now you have a third option: using a transactional filesystem, like zfs.
ZFS is a transactional file system, which means that the file system
state is always consistent on disk. Traditional file systems (with no
logging)
overwrite data in place, which means that if the machine loses power,
for example, between the time a data block is allocated and
when it is linked into a directory, the file system will be left in an
inconsistent state. Historically, this problem was solved through the
use
of the fsck command. This command was responsible for going through and
verifying file system state, making an attempt to repair any
inconsistencies
in the process. This problem sometimes caused great pain to
administrators and was never guaranteed to fix all possible problems.
ZFS has been designed from the ground up to be a very scalable file
system. The file system itself is 128-bit, allowing for 256 quadrillion
zettabytes
of storage. All metadata is allocated dynamically, so no need exists to
pre-allocate inodes or otherwise limit the scalability
of the file system when it is first created. All the algorithms have
been written with scalability in mind.
Directories can have up to 248 (256 trillion) entries, and no limit
exists on the number of file systems or number of files
that can be contained within a file system.
-- To scrub all disks and verify the integrity of all data in the pool:
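# zpool scrub tank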
-- To move your pool from SPARC machine 'sparky' to AMD machine 'amdy':
[on sparky]
# zpool export tank
[on amdy]
# zpool import tank
The easiest way to determine if any known problems exist on the system
is to use the "zpool status -x" command.
This command describes only pools exhibiting problems. If no bad pools
exist on the system,
then the command displays a simple message, as follows:
# zpool status -x
Without the -x flag, the command displays the complete status for all
pools (or the requested pool, if specified on the command line),
even if the pools are otherwise healthy.
See section 29.2 for a general description about the filesystems you can
use on Solaris.
Example 1:
----------
Example 2:
----------
We are going to show how to create a mirrored volume and a striped
volume with Veritas Storage Foundation
on Solaris 10.
The first step is to check quantity of disks you have available on the
server.
A simple way to check this on solaris is using format utility:
bash-3.00# format
Also, you can check disks available to Veritas Storage Foundation using
vxdisk command:
You can see above that there are 4 disks on the server that are
available to Veritas but they have not yet
been initialized by Veritas (invalid status). To use a disk on Veritas
SF you need to initialize this
using Veritas utilities.
NOTE: If you are going to use a disk on Veritas, pay attention that you
should give this whole disk to Veritas.
Disk will be formatted and you will lose all data in the disk when you
are allocating a disk to Veritas Storage.
In this example the only disk that is in use for O.S Solaris is the
first one. (c1t0d0s2).
# vxdisksetup -i c1t1d0
#
# vxdisksetup -i c1t2d0
# vxdisksetup -i c1t3d0
# vxdg list
NAME STATE ID
>>> Volumes
Volume VolS - Striping layout using the c1t1d0 and c1t2d0 disks (RAID 0).
Volume VolM - Mirroring layout using c1t2d0 and c1t3d0 (RAID 1).
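(The disk group and volume creation commands were lost here; a plausible
reconstruction, with the disk group name DG1 taken from the vxprint
output below, and the sizes and disk media names assumed:)
# vxdg init DG1 DG101=c1t1d0 DG102=c1t2d0 DG103=c1t3d0
# vxassist -g DG1 make VolS 1g layout=stripe ncol=2 DG101 DG102
# vxassist -g DG1 make VolM 1g layout=mirror DG102 DG103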
# vxprint -g DG1
dg DG1 DG1 - - - - - -
# vxprint -g DG1
dg DG1 DG1 - - - - - -
Note: You can see above that both Volumes were created successfully.
Also, you can note the difference
between stripping and mirroring volume layouts.
VolM is using two different plexes on different disks. This means that if
you lose one disk (plex)
you still have the data on the other disk (the other plex). This is the
main configuration of mirrored volumes.
VolS is using only one plex divided over 2 disks. This means that the data
will be split across those 2 disks.
If you lose one disk you would lose the whole plex, and therefore you
would lose the data.
This is the main configuration of striped volumes. It does not provide
data protection, but it is very useful
for performance purposes.
Also, you can combine those 2 layouts into a single layout that provides
both data protection and better performance.
That is the case with RAID 0+1 or RAID 1+0.
>>> Filesystem
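(The filesystem creation commands were lost here; with VxFS they would
look something like:)
# mkfs -F vxfs /dev/vx/rdsk/DG1/VolS
# mkfs -F vxfs /dev/vx/rdsk/DG1/VolM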
version 7 layout
20480 sectors, 10240 blocks of size 1024, log size 1024 blocks
largefiles supported
version 7 layout
20480 sectors, 10240 blocks of size 1024, log size 1024 blocks
largefiles supported
Now there are 2 filesystems configured, and you can mount them at the
Solaris mount point level.
Example 3:
----------
Rather than mess with vxmake you can employ vxassist to do all the
dirty work. If you have any amount of experience with vxassist
you'll know that the more information you can supply to vxassist the
better the end product will be.
I'm going to use vxassist to build a stripe-pro volume from four disks
and I want the volume to be 1G in size:
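(The command was lost here; a sketch, assuming the disk group name
rootdg and the layered striped-mirror layout, which older Veritas docs
called a "striped pro" volume:)
# vxassist -g rootdg make newvol 1g layout=stripe-mirror ncol=4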
Pretty kool, huh? Quick, efficient, and poorly named; everything you
love about vxassist. I can then go a bit further
and explore my sizing options to see how much I can grow my new volume
if I need to:
See? Just like a normal volume. Now comes the beauty part. When you look
at that seemingly unmanageable mess of objects above
does it really make you want to tear it apart and work on it like you
might other "normal" volumes? Probably not. And you'd be wise
to feel that way, there are just too many places to get confused or make
a mistake when real data is involved. What if you could get back
to a more normal point of view? Luckily you can, check this out:
Veritas terminology:
Example 4:
----------
View configuration:
vxprint -th
List disks:
vxdisk list
vxdisk -o alldgs list (shows deported disks)
To encapsulate use:
vxencap -g <discgroup> <devicename>
Resize a filesystem:
vxresize -g <disk group> -F <fstype> <volume> <size>
vxedit rm disk10
to remove a greyed out or obsolete disk in this case disk10
or to remove a disk from a diskgroup
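vxmake sd <subdisk> <diskmedia>,<offset>,<length> (assumed syntax; the
command line for this entry was missing in the original)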
to make a subdisk
vxdisk rm c#t#d#s2
to remove a disk so it's out of vm control
vxdiskadd c#t#d#
to add bring a new disk under vm control
================
30. AIX devices:
================
/etc/objrepos
/usr/lib/objrepos
/usr/share/lib/objrepos
Devices in the ODM are classified by:
1. Type
2. Class
3. Subclass
odmadd,
odmdrop,
odmshow,
odmdelete,
odmcreate,
odmchange
Examples:
AIX includes both logical devices and physical devices in the ODM device
configuration database.
Logical devices include Volume Groups, Logical Volumes, network
interfaces and so on.
Physical devices are adapters, modems etc..
If you have installed an adapter for example, and you have put the
software in a directory
like /usr/sys/inst.images, you can call cfgmgr to install device drivers
as well with
# cfgmgr -i /usr/sys/inst.images
Device information:
-------------------
The most important AIX command to show device info is "lsdev". This
command queries the ODM, so we can use
it to locate the customized or the predefined devices.
If you need to see disk or other devices, defined or available, you can
use the lsdev command
as in the following examples:
Remark:
For locally attached SCSI devices, the general format of the LOCATION code
"AB-CD-EF-GH" is actually "AB-CD-EF-G,H";
the first three sections are the same, and for the GH section, the G is
the SCSI ID and the H is the LUN.
For adapters, only the AB-CD is mentioned in the location code.
- For SCSI devices we have a location code like AB-CD-EF-S,L where the
S,L fields identifies
the SCSI ID and LUN of the device.
To lists all devices in the Predefined object class with column headers,
use
# lsdev -P -H
To list the adapters that are in the Available state in the Customized
Devices object class, use
# lsdev -C -c adapter -S
lsattr examples:
----------------
This command gets the current attributes (-E flag) for a tape drive:
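# lsattr -E -l rmt0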
(Of course, the equivalent of the above command is # lsattr -l rmt0 -E)
To list the default values for that tape device (-D flag), use
# lsattr -D -l rmt0
# lsattr -E -l ent1
busmem 0x3cfec00 Bus memory address False
busintr 7 Bus interrupt level False
..
..
To list only a certain attribute (-a flag), use the command as in the
following example:
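(For example, assuming the tape drive's block_size attribute:)
# lsattr -E -l rmt0 -a block_size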
You must specify one of the following flags with the lsattr command:
-D Displays default values.
-E Displays effective values (valid only for customized devices
specified with the -l flag).
-F Format Specifies the user-defined format.
-R Displays the range of legal values.
-a Displays values for the specified attribute.
lscfg examples:
---------------
Example 1:
This command gets the Vital Product Data for the tape drive rmt0:
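# lscfg -vl rmt0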
-v Displays the VPD found in the Customized VPD object class. Also, on
AIX 4.2.1
or later, displays platform specific VPD when used with the -p flag.
-s Displays the device description on a separate line from the name and
location.
sample output:
Platform Firmware:
ROM Level.(alterable).......3R040602
Version.....................RS6K
System Info Specific.(YL)...U1.18-P1-H2/Y2
Physical Location: U1.18-P1-H2/Y2
The ROM Level denotes the firmware/microcode level
Platform Firmware:
ROM Level ............. RH020930
Version ................RS6K
..
Example 2:
The following command shows details about the Fibre Channel cards:
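(Assuming the first adapter is fcs0:)
# lscfg -vl fcs0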
Adding a device:
----------------
To add a device you can run cfgmgr, or shutdown the system, attach the
new device and boot the system.
There are also many smitty screens to accomplish the task of adding a
new device.
The mkdev command also creates the ODM entries for the device and loads
the device driver.
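(A sketch; the exact flags depend on the device. This example would
define and configure an 8mm SCSI tape drive at SCSI ID 5, LUN 0 on
adapter scsi0:)
# mkdev -c tape -s scsi -t 8mm -p scsi0 -w 5,0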
Suppose you have just added a new disk. Suppose the cfgmgr has run and
detected the disk.
The first field identifies the system-assigned name of the disk. The
second field displays the
"physical volume id" PVID. If that is not shown, you can use chdev:
Examples:
If you really want to remove it from the system, use the -d flag as well
# rmdev -l rmt0 -d
To unconfigure the children of PCI bus pci1 and all devices under them,
while retaining their
device definitions in the Customized Devices object class:
# rmdev -p pci1
rmt0 Defined
hdisk1 Defined
scsi1 Defined
ent0 Defined
In AIX 5.x we have a special device named sys0 that is used to manage
some kernel parameters.
The way to change these values is by using smitty, the chdev command or
WSM.
Example.
To change the maxuproc (maximum user processes) parameter, you can for example use the
Web-based System Manager.
You can also use the chdev command:
#chdev -l sys0 -a maxuproc=50
sys0 changed
Device drivers:
---------------
============================
31. filesystem commands AIX:
============================
In AIX, it's common to use a Logical Volume Manager LVM to cross the
boundaries posed by
traditional disk management.
Traditionally, a filesystem was on a single disk or on a single
partition.
Changing a partition size was a difficult task. With an LVM, we can
create logical volumes
which can span several disks.
The LVM has been a feature of the AIX operating system since version 3,
and it is installed
automatically with the Operating System.
mkvg (or the mkvg4vp command in case of SAN vpath disks. See section
31.3)
cplv
rmlv
mklvcopy
extendvg
reducevg
getlvcb
lspv
lslv
lsvg
mirrorvg
chpv
migratepv
exportvg, importvg
varyonvg, varyoffvg
Volume group:
-------------
What a physical disk is, or a physical volume is, is evident. When you
add a physical volume to a volume group,
the physical volume is partitioned into contiguous equal-sized units of
space called "physical partitions".
A physical partition is the smallest unit of storage space allocation
and is a contiguous space
on a physical volume.
The physical volume must now become part of a volume group. The disk
must be in a available state
and must have a "physical volume id" assigned to it.
You create a volume group with the "mkvg" command. You add a physical
volume to an existing volume group with
the "extendvg" command, you make use of the changed size of a physical
volume with the "chvg" command,
and remove a physical volume from a volume group with the "reducevg"
command.
Some of the other commands that you use on volume groups include:
list (lsvg), remove (exportvg), install (importvg), reorganize
(reorgvg), synchronize (syncvg),
make available for use (varyonvg), and make unavailable for use
(varyoffvg).
Typical example:
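(Illustrative; -y sets the VG name, -s the PP size in MB, and the disk
names are assumptions:)
# mkvg -y datavg -s 64 hdisk2 hdisk3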
In case you use the so-called SDD subsystem with vpath SAN storage, you
should use the "mkvg4vp" command,
which works similarly (same flags) to the mkvg command.
Types of VG's:
==============
There are 3 kinds of VG's:
Normal VG:
----------
Big VG:
-------
Number of disks Max number of partitions/disk
1 130048
2 65024
4 32512
8 16256
16 8128
32 4064
64 2032
128 1016
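Scalable VG:
------------
(Not detailed in the original; from AIX 5.3 onwards there is also the
scalable VG type, created with the -S flag of mkvg, which allows up to
1024 disks per VG.)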
Physical Partition:
===================
You can change the NUMBER of PPs in a VG, but you cannot change the SIZE
of PPs afterwards.
Defaults:
- 4 MB partition size. It can be a multiple of that amount. The Max size
is 1024 MB
- The default is 1016 PPs per disk. You can increase the number of PPs
in powers of 2 per PV, but the number
of maximum disks per VG is decreased.
Logical Partition:
------------------
A LP maps to (at least) one PP, and is actually the smallest unit of
allocatable space.
Logical Volume:
---------------
Consists of LPs in a VG. A LV consists of LPs from actual PPs from one
or more disks.
|-----| | ----|
|LP1 | ---> | PP1 |
|-----| | ----|
|LP2 | ---> | PP2 |
|-----| | ----|
|.. | hdisk 1 (Physical Volume 1)
|.. |
|.. |
|-----| |---- |
|LPn | ---> |PPn |
|-----| |---- |
|LPn+1| ---> |PPn+1|
|-----| |---- |
Logical Volume hdisk2 (Physical Volume 2)
So, a VG is a collection of related PVs, but you know that actually LVs
are created in the VG.
For the applications, the LVs are the entities they work with.
In AIX, a filesystem like "/data", corresponds to a LV.
lspv Command
------------
lspv [ -L ] [ -l | -p | -M ] [ -n DescriptorPhysicalVolume] [ -v
VolumeGroupID] PhysicalVolume
-p: lists range, state, region, LV names, type and mount points
# lspv
# lspv hdisk3
# lspv -p hdisk3
# lspv
hdisk0 00453267554 rootvg
hdisk1 00465249766 rootvg
# lspv hdisk23
PHYSICAL VOLUME: hdisk23 VOLUME GROUP: oravg
PV IDENTIFIER: 00ccf45d564cfec0 VG IDENTIFIER
00ccf45d00004c0000000104564d2386
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 256 megabyte(s) LOGICAL VOLUMES: 3
TOTAL PPs: 947 (242432 megabytes) VG DESCRIPTORS: 1
FREE PPs: 247 (63232 megabytes) HOT SPARE: no
USED PPs: 700 (179200 megabytes)
FREE DISTRIBUTION: 00..00..00..57..190
USED DISTRIBUTION: 190..189..189..132..00
# lspv -p hdisk23
hdisk23:
PP RANGE STATE REGION LV NAME TYPE MOUNT
POINT
1-22 used outer edge u01 jfs2 /u01
23-190 used outer edge u02 jfs2 /u02
191-379 used outer middle u01 jfs2 /u01
380-568 used center u01 jfs2 /u01
569-600 used inner middle u02 jfs2 /u02
601-700 used inner middle u03 jfs2 /u03
701-757 free inner middle
758-947 free inner edge
# lspv -p hdisk0
hdisk0:
PP RANGE STATE REGION LV NAME TYPE MOUNT
POINT
1-1 used outer edge hd5 boot N/A
2-48 free outer edge
49-51 used outer edge hd9var jfs /var
52-52 used outer edge hd2 jfs /usr
53-108 used outer edge hd6 paging N/A
109-116 used outer middle hd6 paging N/A
117-215 used outer middle hd2 jfs /usr
216-216 used center hd8 jfslog N/A
217-217 used center hd4 jfs /
218-222 used center hd2 jfs /usr
223-320 used center hd4 jfs /
..
..
# lslv -l lv06
lv06:/backups
PV COPIES IN BAND DISTRIBUTION
hdisk3 512:000:000 100% 000:218:218:076:000
# lslv lv06
LOGICAL VOLUME: lv06 VOLUME GROUP: backupvg
LV IDENTIFIER: 00c8132e00004c0000000106ef70cec2.2 PERMISSION:
read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 64
megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 512 PPs: 512
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 32
MOUNT POINT: /backups LABEL: /backups
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
# lslv -p hdisk3
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE
1-10
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE
11-20
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE
21-30
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE
31-40
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE
41-50
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE
51-60
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE
61-70
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE
71-80
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE
81-90
..
..
# lsvg -l backupvg
backupvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT
POINT
loglv02 jfslog 1 1 1 open/syncd N/A
lv06 jfs 512 512 1 open/syncd /backups
# lsvg -l splvg
splvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT
POINT
loglv01 jfslog 1 1 1 open/syncd N/A
lv04 jfs 240 240 1 open/syncd /data
lv00 jfs 384 384 1 open/syncd /spl
lv07 jfs 256 256 1 open/syncd /apps
-redovg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT
POINT
redo1lv jfs2 42 42 3 open/syncd /u05
redo2lv jfs2 1401 1401 3 open/syncd /u04
loglv03 jfs2log 1 1 1 open/syncd N/A
-db2vg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT
POINT
db2lv jfs2 600 600 2 open/syncd
/db2_database
loglv00 jfs2log 1 1 1 open/syncd N/A
-oravg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT
POINT
u01 jfs2 800 800 2 open/syncd /u01
u02 jfs2 400 400 2 open/syncd /u02
u03 jfs2 200 200 2 open/syncd /u03
logfs jfs2log 2 2 1 open/syncd N/A
-rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT
POINT
hd5 boot 1 2 2 closed/syncd N/A
hd6 paging 36 72 2 open/syncd N/A
hd8 jfs2log 1 2 2 open/syncd N/A
hd4 jfs2 8 16 3 open/syncd /
hd2 jfs2 24 48 2 open/syncd /usr
hd9var jfs2 9 18 3 open/syncd /var
hd3 jfs2 11 22 3 open/syncd /tmp
hd1 jfs2 10 20 2 open/syncd /home
hd10opt jfs2 2 4 2 open/syncd /opt
fslv00 jfs2 1 2 2 open/syncd /XmRec
fslv01 jfs2 2 4 3 open/syncd /tmp/m2
paging00 paging 32 32 1 open/syncd N/A
sysdump1 sysdump 80 80 1 open/syncd N/A
oralv jfs2 100 100 1 open/syncd
/opt/app/oracle
fslv03 jfs2 63 63 2 open/syncd /bmc_home
lsvg Command:
-------------
Examples:
# lsvg
rootvg
informixvg
oravg
# lsvg -o
rootvg
oravg
# lsvg oravg
VOLUME GROUP: oravg VG IDENTIFIER:
00ccf45d00004c0000000104564d2386
VG STATE: active PP SIZE: 256 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 1894 (484864
megabytes)
MAX LVs: 256 FREE PPs: 492 (125952
megabytes)
LVs: 4 USED PPs: 1402 (358912
megabytes)
OPEN LVs: 4 QUORUM: 2
TOTAL PVs: 2 VG DESCRIPTORS: 3
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 2 AUTO ON: yes
MAX PPs per PV: 1016 MAX PVs: 32
LTG size: 128 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
# lsvg -p informixvg
informixvg
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk3 active 542 462
109..28..108..108..109
hdisk4 active 542 447
109..13..108..108..109
# lsvg -l rootvg
LV NAME TYPE LPs PPs PVs LV STATE MOUNT
POINT
hd5 boot 1 1 1 closed/syncd N/A
hd6 paging 24 24 1 open/syncd N/A
hd8 jfslog 1 1 1 open/syncd N/A
hd4 jfs 4 4 1 open/syncd /
hd2 jfs 76 76 1 open/syncd /usr
hd9var jfs 4 4 1 open/syncd /var
hd3 jfs 6 6 1 open/syncd /tmp
paging00 paging 20 20 1 open/syncd N/A
..
..
extendvg command:
-----------------
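To add a disk to an existing VG, for example (names illustrative):
# extendvg myvg hdisk3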
reducevg command:
-----------------
To remove a VG:
When you delete the last disk from the VG, the VG is also removed.
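For example (names illustrative; the -d flag also deletes any logical
volumes residing on the disk):
# reducevg -d myvg hdisk3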
When you activate a VG for use, all its resident filesystems are mounted
by default if they have
the flag mount=true in the /etc/filesystems file.
# varyonvg apachevg
# varyoffvg apachevg
To use this command, you must be sure that none of the logical volumes
are opened, that is, in use.
mkvg command:
-------------
You can create a new VG by using "smitty mkvg" or by using the mkvg
command.
As with physical volumes, volume groups can be created and removed and
their characteristics
can be modified.
Before a new volume group can be added to the system, one or more
physical volumes not used
in other volume groups, and in an available state, must exist on the
system.
The following example shows the use of the mkvg command to create a
volume group myvg
using the physical volumes hdisk1 and hdisk5.
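# mkvg -y myvg hdisk1 hdisk5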
mklv command:
-------------
To create a LV, you can use the smitty command "smitty mklv" or just use
the mklv command
by itself.
The mklv command creates a new logical volume within the VolumeGroup.
For example, all file systems
must be on separate logical volumes. The mklv command allocates the
number of logical partitions
to the new logical volume. If you specify one or more physical volumes
with the PhysicalVolume parameter,
only those physical volumes are available for allocating physical
partitions; otherwise, all the
physical volumes within the volume group are available.
The default settings provide the most commonly used characteristics, but
use flags to tailor the logical volume
to the requirements of your system. Once a logical volume is created,
its characteristics can be changed
with the chlv command.
When you create a LV, you also specify the number of LP's, and how a LP
maps to PP's.
Later, you can create one filesystem per LV.
Examples
The following example shows the use of mklv command to create a new LV
newlv in the rootvg
and it will have 10 LP's and each LP consists of 2 physical partitions.
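(The command was lost here; with -y naming the LV and -c giving the
number of copies per LP, it would be:)
# mklv -y newlv -c 2 rootvg 10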
To make a logical volume in volume group vg02 with one logical partition
and a total of two copies of the data, enter:
# mklv -c 2 vg02 1
# mklv -c 3 -u 2 -s n vg03 9
The following example uses a "map file /tmp/mymap1" which list which PPs
are to be used in creating a LV:
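(Assumed invocation; the -m flag points mklv at the map file:)
# mklv -m /tmp/mymap1 rootvg 10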
rmlv command:
-------------
# rmlv newlv
Warning, all data on logical volume newlv will be destroyed.
rmlv: Do you wish to continue? y(es) n(o) y
#
extendlv command:
-----------------
The following example shows the use of the extendlv command to add 3
more LPs to the LV newlv:
# extendlv newlv 3
cplv command:
-------------
Purpose
Copies the contents of a logical volume to a new logical volume.
Syntax
To Copy to a New Logical Volume
Description
Attention: Do not copy from a larger logical volume containing data to a
smaller one. Doing so results
in a corrupted file system because some data is not copied.
The cplv command copies the contents of SourceLogicalVolume to a new or
existing logical volume.
The SourceLogicalVolume parameter can be a logical volume name or a
logical volume ID.
The cplv command creates a new logical volume with a system-generated
name by using the default syntax.
The system-generated name is displayed.
Note:
The cplv command cannot copy logical volumes which are in the open
state,
including logical volumes
that are being used as backing devices for virtual storage.
Flags
-f Copies to an existing logical volume without requesting user
confirmation.
-lv NewLogicalVolume Specifies the name to use, in place of a system-
generated name,
for the new logical volume. Logical volume names must be unique
systemwide names, and can range
from 1 to 15 characters.
-prefix Prefix Specifies a prefix to use in building a system-generated
name for the new logical volume.
The prefix must be less than or equal to 13 characters. A name cannot
be a name already used by another device.
-vg VolumeGroup Specifies the volume group where the new logical volume
resides. If this is not specified,
the new logical volume resides in the same volume group as the
SourceLogicalVolume.
Examples
To copy the contents of logical volume fslv03 to a new logical volume,
type:
# cplv fslv03
The new logical volume is created, placed in the same volume group as
fslv03,
and named by the system.
Errors:
-------
==========================================================================
CASES of usage of cplv command:
CASE 1:
-------
--------------------------------------------------------------------------
In the following example, an RS6000 has one disk with rootvg on it, and
has
just had a second disk installed. The second disk needs a volume group
created on it and a data filesystem transferred to the new disk.
Ensure
that you have a full system backup before you start.
lspv
df -k
In this example the /usr2 filesystem needs to be moved to the new disk
drive, freeing up space in the root volume group.
1, Create a data volume group on the new disk (hdisk1), the command
below
will create a volume group called datavg on hdisk1 with a PP size of
32 Meg:-
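mkvg -y datavg -s 32 hdisk1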
logform /dev/datalog
umount /usr2
5, Copy the /usr2 logical volume (lv01) to a new logical volume (lv11)
on
the new volume group :-
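(The command was lost here; assumed syntax, with -v giving the target VG
and -y the new LV name:)
cplv -v datavg -y lv11 lv01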
7, Change the /usr2 filesystem to use the jfslog on the new volume group
(/dev/datalog) :-
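(The command was lost here; presumably:)
chfs -a log=/dev/datalog /usr2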
mount /usr2
df -k
9, Once the filesystem has been checked out, the old logical volume can
be removed :-
rmfs /dev/lv01
==========================================================================
CASE 2:
-------
Goal:
-----
ROOTVG WASVG
-------------- --------------
|/usr (hd2) | | |
|.. | | |
|/prj (prjlv)|----------->|/prj (lvprj) |
|.. | | |
-------------- -------------
hdisk0,hdisk1 hdisk12,hdisk13
umount /prj
chfs -m /prj_old /prj
mount /prj
==========================================================================
migratepv command:
------------------
Use the following command to move PPs from hdisk1 to hdisk6 and hdisk7
(all PVs must be in 1 VG)
# migratepv hdisk1 hdisk6 hdisk7
Use the following command to move PPs in LV lv02 from hdisk1 to hdisk6
# migratepv -l lv02 hdisk1 hdisk6
chvg command:
-------------
chpv command:
-------------
The chpv command changes the state of the physical volume in a volume
group by setting allocation
permission to either allow or not allow allocation and by setting the
availability to either
available or removed. This command can also be used to clear the boot
record for the given physical volume.
Characteristics for a physical volume remain in effect unless explicitly
changed with the corresponding flag.
Examples
The -v r flag closes a physical volume to logical input and output; the
physical volume stays closed until the -v a flag is used to open it again.
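For example (hdisk03 assumed):
# chpv -v r hdisk03      (close the PV to logical I/O)
# chpv -v a hdisk03      (open the PV for logical I/O again)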
syncvg Command
Purpose
Synchronizes logical volume copies that are not current.
Syntax
syncvg [ -f ] [ -i ] [ -H ] [ -P NumParallelLps ] { -l | -p | -v }
Name ...
Description
The syncvg command synchronizes the physical partitions, which are
copies of the original physical partition,
that are not current. The syncvg command can be used with logical
volumes, physical volumes,
or volume groups, with the Name parameter representing the logical
volume name, physical volume name,
or volume group name. The synchronization process can be time consuming,
depending on the
hardware characteristics and the amount of data.
When the -f flag is used, a good physical copy is chosen and propagated
to all other copies
of the logical partition, whether or not they are stale. Using this flag
is necessary
in cases where the logical volume does not have the mirror write
consistency recovery.
Note:
For the syncvg command to be successful, at least one good copy of the
logical volume should be accessible, and the physical volumes that contain
this copy should be in ACTIVE state. If the -f option is used, the above
condition applies to all mirror copies.
If the -P option is not specified, syncvg will check for the
NUM_PARALLEL_LPS environment variable.
The value of NUM_PARALLEL_LPS will be used to set the number of logical
partitions to be synchronized in parallel.
Examples
To synchronize the copies on physical volumes hdisk04 and hdisk05,
enter:
# syncvg -p hdisk04 hdisk05
The lvmstat command displays statistics values since the previous lvmstat
command. The -e flag enables statistics collection for the volume group,
and the -C flag clears the counters.
# lvmstat -v rootvg -e
# lvmstat -v rootvg -C
# lvmstat -v rootvg
Logical Volume iocnt KB_read KB_wrtn Kbps
hd8 4 0 0 0.00
paging01 0 0 0 0.00
..
..
The mklv command allows you to select one or two additional copies for
each logical volume.
example:
mklv -c 3 -u 2 -s n vg03 9
Now replace the failed disk with a new one and name it hdisk7
# extendvg workvg hdisk7
# mirrorvg workvg
mirrorvg command:
-----------------
mirrorvg Command
Purpose
Mirrors all the logical volumes that exist on a given volume group.
This command only applies to AIX 4.2.1 or later.
Syntax
mirrorvg [ -S | -s ] [ -Q ] [ -c Copies] [ -m ] VolumeGroup
[ PhysicalVolume ... ]
Description
The mirrorvg command takes all the logical volumes on a given volume
group and mirrors
those logical volumes. This same functionality may also be accomplished
manually if you execute
the mklvcopy command for each individual logical volume in a volume
group. As with mklvcopy,
the target physical drives to be mirrored with data must already be
members of the volume group.
To add disks to a volume group, run the extendvg command.
Note: To use this command, you must either have root user authority or
be a member of the system group.
Flags
-c Copies Specifies the minimum number of copies that each logical
volume must have after
the mirrorvg command has finished executing. It may be possible,
through the independent use
of mklvcopy, that some logical volumes may have more than the minimum
number specified after
the mirrorvg command has executed. Minimum value is 2 and 3 is the
maximum value.
A value of 1 is ignored.
-m exact map Allows mirroring of logical volumes in the exact physical
partition order that
the original copy is ordered. This option requires you to specify a
PhysicalVolume(s) where the exact map
copy should be placed. If the space is insufficient for an exact
mapping, then the command will fail.
You should add new drives or pick a different set of drives that will
satisfy an exact
logical volume mapping of the entire volume group. The designated
disks must be equal to or exceed
the size of the drives which are to be exactly mirrored, regardless of
whether the entire disk is used.
Also, if any logical volume to be mirrored is already mirrored, this
command will fail.
-Q Quorum Keep By default in mirrorvg, when a volume group's contents
becomes mirrored, volume group
quorum is disabled. If the user wishes to keep the volume group
quorum requirement after mirroring
is complete, this option should be used in the command. For later
quorum changes, refer to the chvg command.
-S Background Sync Returns the mirrorvg command immediately and starts
a background syncvg of the volume group.
With this option, it is not obvious when the mirrors have completely
finished their synchronization.
However, as portions of the mirrors become synchronized, they are
immediately used by the operating system
in mirror usage.
-s Disable Sync Returns the mirrorvg command immediately without
performing any type of
mirror synchronization. If this option is used, the mirror may exist
for a logical volume but
is not used by the operating system until it has been synchronized
with the syncvg command.
- rootvg mirroring When the rootvg mirroring has completed, you must
perform three additional tasks:
bosboot, bootlist, and reboot.
The bosboot command is required to customize the bootrec of the newly
mirrored drive.
The bootlist command needs to be run to instruct the system which disks,
and in which order, to use for the mirrored boot process.
Finally, the default of this command is for Quorum to be turned off. For
this to take effect
on a rootvg volume group, the system must be rebooted.
- non-rootvg mirroring When this volume group has been mirrored, the
default command causes Quorum to be deactivated. The user must close all
open logical volumes, execute varyoffvg and then varyonvg on the volume
group for the system to understand that quorum is or is not needed for
the volume group. If you do not revaryon the volume group, mirroring will
still work correctly. However, any quorum changes will not have taken
effect.
rootvg and non-rootvg mirroring The system dump devices, primary and
secondary, should not be mirrored.
In some systems, the paging device and the dump device are the same
device. However, most users want
the paging device mirrored. When mirrorvg detects that a dump device and
the paging device are the same,
the logical volume will be mirrored automatically.
If mirrorvg detects that the dump and paging device are different
logical volumes, the paging device
is automatically mirrored, but the dump logical volume is not. The dump
device can be queried and modified
with the sysdumpdev command.
Remark:
-------
Run bosboot to initialize all boot records and devices by executing the
following command:
bosboot -a -d /dev/hdisk?
hdisk? is the first hdisk listed under the PV heading after the command
lslv -l hd5 has executed.
Secondly, you need to understand that mirroring under AIX is at the
logical volume level. The mirrorvg command is a high-level command that
uses the "mklvcopy" command.
So, all LVs created before running the mirrorvg command are kept
synchronised, but if you add a new LV after running mirrorvg, you need to
mirror it manually using "mklvcopy".
Remark:
-------
lresynclv
Method 1:
---------
Make sure you have an empty disk; in this example it is hdisk1.
Add the disk to the vg via:
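# extendvg rootvg hdisk1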
# mirrorvg -s rootvg
# syncvg -v rootvg
# bosboot -a
Method 2:
---------
------------------------------------------------------------------------
-------
# Add the new disk, say it is hdisk5, to rootvg
# If you use one mirror disk, be sure that a quorum is not required for
varyon:
# If you have other LVs in your rootvg, be sure to create copies for
them as well !! (see the sketch below)
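A minimal sketch of this method (hdisk5 and standard rootvg LV names
assumed):
# extendvg rootvg hdisk5
# chvg -Qn rootvg
# mklvcopy hd5 2 hdisk5      (repeat for hd6, hd8, hd4, hd2, hd9var, hd3, ...)
# syncvg -v rootvg
# bosboot -a -d /dev/hdisk5
# bootlist -m normal hdisk0 hdisk5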
------------------------------------------------------------------------
------
# lspv -l hdisk0
hd5 1 1 01..00..00..00..00 N/A
prjlv 256 256 108..44..38..50..16 /prj
hd6 59 59 00..59..00..00..00 N/A
fwdump 5 5 00..05..00..00..00
/var/adm/ras/platform
hd8 1 1 00..00..01..00..00 N/A
hd4 26 26 00..00..02..24..00 /
hd2 45 45 00..00..37..08..00 /usr
hd9var 10 10 00..00..02..08..00 /var
hd3 22 22 00..00..04..10..08 /tmp
hd1 8 8 00..00..08..00..00 /home
hd10opt 24 24 00..00..16..08..00 /opt
Method 3:
---------
In the following example, an RS6000 has 3 disks, 2 of which have the AIX
filesystems mirrored on them. The bootlist contains both hdisk0 and hdisk1.
There are no other logical volumes in rootvg other than the AIX system
logical volumes. hdisk0 has failed and needs replacing; both hdisk0 and
hdisk1 are in "Hot Swap" carriers and therefore the machine does not need
shutting down.
lspv
lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT
POINT
hd6 paging 4 8 2 open/syncd N/A
hd5 boot 1 2 2 closed/syncd N/A
hd8 jfslog 1 2 2 open/syncd N/A
hd4 jfs 1 2 2 open/syncd /
hd2 jfs 12 24 2 open/syncd /usr
hd9var jfs 1 2 2 open/syncd /var
hd3 jfs 2 4 2 open/syncd /tmp
hd1 jfs 1 2 2 open/syncd /home
1, Reduce the logical volume copies from both disks to hdisk1 only :-
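For example, using unmirrorvg (or rmlvcopy per LV):
# unmirrorvg rootvg hdisk0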
lspv -p hdisk0
hdisk0:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-101 free outer edge
102-201 free outer middle
202-301 free center
302-401 free inner middle
402-501 free inner edge
bosboot -a -d /dev/hdisk1
bootlist -m normal rmt0 cd0 hdisk1
lspv
6, Delete hdisk0 :-
rmdev -l hdisk0 -d
7, Remove the failed hard drive and replace with a new hard drive.
cfgmgr
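Before resyncing, the new disk must be added back into rootvg and the LVs
re-mirrored, for example (assuming the new disk appears as hdisk0 again):
# extendvg rootvg hdisk0
# mirrorvg -s rootvg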
syncvg -v rootvg
chvg -Q n rootvg
Method 4:
---------
Make sure you have an empty disk; in this example it is hdisk1.
Add the disk to the vg via: "extendvg rootvg hdisk1"
Mirror the vg via: "mirrorvg rootvg"
Adapt the bootlist to add the current disk; the system will then fail over
to hdisk1 if hdisk0 fails during startup.
do bootlist -o -m normal
this will list currently 1 disk, in this example hdisk0
do bootlist -m normal hdisk0 hdisk1
Run a bosboot on both new disks, this will install all software needed
for boot on the disk
bosboot -ad hdisk0
bosboot -ad hdisk1
Method 5:
---------
Although the steps to mirror volume groups between HP and AIX are
incredibly similar,
there are enough differences to send me through hoops if/when I ever
have to do that.
Therefore, the following checklist:
Otherwise:
chvg -Q n rootvg
bosboot -ad ${disk}
Method 6:
---------
For more information, and a comprehensive procedure, see the man pages for
mirrorvg and mklvcopy.
The logical volume modified with this command uses the Copies parameter
as its new copy characteristic.
The data in the new copies are not synchronized until one of the
following occurs:
the -k option is used, the volume group is activated by the varyonvg
command, or the volume group
or logical volume is synchronized explicitly by the syncvg command.
Individual logical partitions
are always updated as they are written to.
# smit mklv
or
# smit mklvcopy
Using "smit mklv" you can create a new LV and at the same time tell the
system to create a mirror
(2 or 3 copies) of each LP and which PV's are involved.
After a VG is created, you can create filesystems. You can use smitty or
the crfs and mkfs commands.
File systems are confined to a single logical volume.
The journaled file system (JFS) and the enhanced journaled file system
(JFS2) are built into the
base operating system. Both file system types link their file and
directory data to the structure
used by the AIX Logical Volume Manager for storage and retrieval. A
difference is that JFS2 is designed to accommodate
a 64-bit kernel and larger files.
Run lsfs -v jfs2 to determine if your system uses JFS2 file systems.
This command returns no output if it finds only standard file systems.
crfs:
-----
- To make a JFS on the rootvg volume group with nondefault fragment size
and nondefault nbpi, enter:
# crfs -v jfs -g rootvg -m /test -a size=32768 -a frag=512 -a
nbpi=1024
This command creates the /test file system on the rootvg volume group
with a fragment size of 512 bytes,
a number of bytes per i-node (nbpi) ratio of 1024, and an initial size
of 16MB (512 * 32768).
- To make a JFS on the rootvg volume group with nondefault fragment size
and nondefault nbpi, enter:
# crfs -v jfs -g rootvg -m /test -a size=16M -a frag=512 -a nbpi=1024
This command creates the /test file system on the rootvg volume group
with a fragment size of 512 bytes,
a number of bytes per i-node (nbpi) ratio of 1024, and an initial size
of 16MB.
- To create a JFS2 file system which can support NFS4 ACLs, type:
# crfs -v jfs2 -g rootvg -m /test -a size=1G -a ea=v2
- This command creates the /test JFS2 file system on the rootvg volume
group with an initial size of 1 gigabyte.
The file system will store extended attributes using the v2 format.
# crfs -v jfs -g backupvg -m /backups -a size=32G -a bf=true
Extended example:
-----------------
Note that we did not mention the size of the filesystem. This is
because we use a previously defined LV
with a known size.
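For example, creating a filesystem on an existing logical volume (lv05
assumed here) with the -d flag; no size parameter is given:
# crfs -v jfs -d lv05 -m /data -a bf=true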
Notes:
1. The option -a bf=true allows large files [ > 2Gb];
mkfs:
-----
The mkfs command makes a new file system on a specified device. The mkfs
command initializes the volume label,
file system label, and startup block.
The Device parameter specifies a block device name, raw device name, or
file system name. If the parameter
specifies a file system name, the mkfs command uses this name to obtain
the following parameters from the
applicable stanza in the /etc/filesystems file, unless these parameters
are entered with the mkfs command.
- To specify the volume and file system name for a new file system,
type:
# mkfs -lworks -vvol001 /dev/hd3
This command creates an empty file system on the /dev/hd3 device, giving
it the volume serial number vol001
and file system name works. The new file system occupies the entire
device.
The file system has a default fragment size (4096 bytes) and a default
nbpi ratio (4096).
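- To create a large file enabled JFS on an existing logical volume, the
man page example takes this form:
# mkfs -V jfs -o nbpi=131072,bf=true,ag=64 /dev/lv01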
This creates a large file enabled JFS file system with an allocation
group size of 64 megabytes and 1 inode
for every 131072 bytes of disk. The size of the file system will be the
size of the logical volume lv01.
- To create a JFS2 file system which can support NFS4 ACLs, type:
# mkfs -V jfs2 -o ea=v2 /dev/lv01
This command creates an empty file system on the /dev/lv01 device with
v2 format for extended attributes.
chfs command:
-------------
- Example 1:
To split off a copy of a mirrored file system and mount it read-only for
use as an online backup, enter:
# chfs -a splitcopy=/backup -a copy=2 /testfs
This mounts a read-only copy of /testfs at /backup.
- Example 3:
- Example 4:
- Example 5:
2) umount old_filename
3) mount new_filename
lsfs command:
-------------
Syntax
lsfs [ -q ] [ -c | -l ] [ -a | -v VfsType | -u MountGroup|
[FileSystem...] ]
Description
The lsfs command displays characteristics of file systems, such as mount
points, automatic mounts, permissions,
and file system size. The FileSystem parameter reports on a specific
file system.
The following subsets can be queried for a listing of characteristics:
To show the file system size, the fragment size, the compression
algorithm (if any), and the
number of bytes per i-node as recorded in the superblock of the root
file system, enter:
# lsfs -q /
If you use advanced storage on AIX, the workings on disks and volume
groups are a bit different
from the traditional ways, using local disks, as described above.
You can use SDD or SDDPCM Multipath IO. This section describes SDD. See
section 31.5 for SDDPCM.
The IBM System Storage Multipath Device Driver SDD provides multipath
configuration environment support
for a host system that is attached to storage devices. It provides:
The IBM System Storage Multipath Subsystem Device Driver Path Control
Module SDDPCM provides
AIX MPIO support. It's a loadable module. During the configuration of
supported devices, SDDPCM is loaded
and becomes part of the AIX MPIO Fibre Channel protocol device driver.
The AIX MPIO-capable device driver
with the SDDPCM module provides the same functions that SDD provides.
Note that before attempting to exploit the Virtual shared disk support
for the Subsystem device driver,
you must read IBM Subsystem Device Driver Installation and User's Guide.
-------------------------------
| Host System |
| ------- ------- |
| |FC 0 | | FC 1| |
| ------- ------- |
-------------------------------
| |
| |
----------------------------------
ESS | -------- -------- |
| |port 0| |port 1| |
| -------- \ /-------- |
| | \ / | |
| | \/ | |
| | / \ | |
| -----------/ \---------- |
| |Cluster 1| |Cluster 2||
| ----------- -----------|
| | | | | | | | | |
| | | | | | | | | |
| O--|--|--|-------| | | | |
| lun0| | | | | | |
| O--|--|---------| | | |
| lun1| | | | |
| O--|-----------| | |
| lun2| | |
| O--------------| |
| lun3 |
---------------------------------
DPO (Data Path Optimizer) was renamed by IBM a couple of years ago and
became SDD (Subsystem Device Driver).
When redundant paths are configured to ESS logical units, and the SDD is
installed and configured,
the AIX(R) lspv command shows multiple hdisks as well as a new construct
called a vpath. The hdisks and vpaths
represent the same logical unit. You will need to use the lsvpcfg
command to get more information.
Each SDD vpath device represents a unique physical device on the storage
server.
Each physical device is presented to the operating system as an
operating system disk device.
So, essentially, a vpath device acts like a disk.
You will see later on that a hdisk is actually a "path" to a LUN, that
can be reached either by fscsi0 or fscsi1.
Also you will see that a vpath represents the LUN.
Examples:
---------
For example, after issuing lspv, you see output similar to this:
# lspv
hdisk0 000047690001d59d rootvg
hdisk1 000047694d8ce8b6 None
hdisk18 000047694caaba22 None
hdisk19 000047694caadf9a None
hdisk20 none None
hdisk21 none None
hdisk22 000047694cab2963 None
hdisk23 none None
hdisk24 none None
vpath0 none None
vpath1 none None
vpath2 000047694cab0b35 gpfs1scsivg
vpath3 000047694cab1d27 gpfs1scsivg
# lsvpcfg
vpath0 (Avail ) 502FCA01 = hdisk18 (Avail pv )
vpath1 (Avail ) 503FCA01 = hdisk19 (Avail pv )
vpath2 (Avail pv gpfs1scsivg) 407FCA01 = hdisk20 (Avail ) hdisk24 (Avail )
- vpath2 has two paths (hdisk20 and hdisk24) and has a volume group
defined on it. Notice that with the
lspv command, hdisk20 and hdisk24 look like newly installed disks with
no PVIDs. The lsvpcfg command had
to be used to determine that hdisk20 and hdisk24 make up vpath2, which
has a PVID.
Warning: so be very careful not to use a hdisk for a "local" VG, if it is
already used for a vpath.
Other Example:
--------------
# lspv
hdisk0 00c49e8c8053fe86 rootvg
active
hdisk1 00c49e8c841a74d5 rootvg
active
-hdisk2 none None
-hdisk3 none None
vpath0 00c49e8c94c02c15 datavg
active
vpath1 00c49e8c94c050d4 appsvg
active
-hdisk4 none None
vpath2 00c49e8c2806dc22 appsvg
active
-hdisk5 none None
-hdisk6 none None
-hdisk7 none None
# lsvpcfg
Active Adapters :2
Total Devices : 3
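The following adapter data comes from a command such as "lscfg -vl fcs0":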
Part Number.................03N6441
EC Level....................A
Serial Number...............1D54508045
Manufacturer................001D
Feature Code................280B
FRU Number.................. 03N6441
Device Specific.(ZM)........3
Network Address.............10000000C94F91CD
ROS Level and ID............0288193D
Device Specific.(Z0)........1001206D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF801412
Device Specific.(Z5)........0288193D
Device Specific.(Z6)........0683193D
Device Specific.(Z7)........0783193D
Device Specific.(Z8)........20000000C94F91CD
Device Specific.(Z9)........TS1.90X13
Device Specific.(ZA)........T1D1.90X13
Device Specific.(ZB)........T2D1.90X13
Device Specific.(YL)........U7879.001.DQDKCPR-P1-C2-T1
Please note, from the above output, that fscsi0 can be "linked" to hdisk2,
hdisk3 and hdisk4, due to the location code.
You can compare that to the output of "datapath query device".
Also interesting can be the following:
# lsdev -C | grep fc
fcnet0 Defined 05-08-02 Fibre Channel Network Protocol
Device
fcnet1 Defined 07-08-02 Fibre Channel Network Protocol
Device
fcs0 Available 05-08 FC Adapter
fcs1 Available 07-08 FC Adapter
From this, you can see that fcs0 is the "parent" of the child "fscsi0".
# lsattr -D -l fscsi0
attach none How this adapter is CONNECTED False
dyntrk no Dynamic Tracking of FC Devices True
fc_err_recov delayed_fail FC Fabric Event Error RECOVERY Policy True
scsi_id Adapter SCSI ID False
sw_fc_class 3 FC Class for Fabric True
# lsattr -D -l fcs0
bus_intr_lvl                Bus interrupt level                                 False
bus_io_addr    0x00010000   Bus I/O address                                     False
bus_mem_addr   0x01000000   Bus memory address                                  False
init_link      al           INIT Link flags                                     True
intr_priority  3            Interrupt priority                                  False
lg_term_dma    0x800000     Long term DMA                                       True
max_xfer_size  0x100000     Maximum Transfer Size                               True
num_cmd_elems  200          Maximum number of COMMANDS to queue to the adapter  True
pref_alpa      0x1          Preferred AL_PA                                     True
sw_fc_class    2            FC Class for Fabric                                 True
From this you can see that a hdisk is actually a "path" to a LUN, that
can be reached either by fscsi0 or fscsi1.
Also you can see that a vpath represents the LUN.
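Adapter statistics like the following can be displayed with, for example,
"datapath query adaptstats":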
Adapter #: 0
=============
        Total Read   Total Write   Active Read   Active Write   Maximum
I/O:       9595892       4371836             0              0        23
SECTOR:  176489389     138699019             0              0      5128
Adapter #: 1
=============
        Total Read   Total Write   Active Read   Active Write   Maximum
I/O:      10238891       4523508             0              0        24
SECTOR:  188677891     143739157             0              0      5128
Note: 2105 devices' essid has 5 digits, while 1750/2107 device's essid
has 7 digits.
# /usr/sbin/cfgvpath
Example:
# mkvg4vp -B -t 32 -s 4 -y DB01_RECOV_VG1 vpath4 vpath10
Before installing SDD, you should check firmware levels, and AIX APAR
requirements. See the following sites:
-- AIX APAR:
www-03.ibm.com/servers/eserver/support/unixservers/aixfixes.html
or,
www.ibm.com/servers/eserver/support/pseries/aixfixes.html
or,
www14.software.ibm.com/webapp/set2/sas/f/genunix3/aixfixes.html
If you have "sdd" installed use the datapath command, and with sddpcm
use the pcmpath command.
The commands are the same as those shown in section 31.4; just replace
datapath with pcmpath, for example:
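# pcmpath query adapter
# pcmpath query device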
On a system with SDDPCM, you will see the SDDPCM server daemon,
"pcmsrv", running.
This process checks available paths and does other checks and
monitoring.
# stopsrc -s pcmsrv
# startsrc -s pcmsrv
Note 1:
-------
thread
Q +A:
> I've been reading IBM web sites and PDF manuals and still can't decide
> on exactly how to upgrade my AIX 4.3.3 machine to AIX 5.2 and have my
> ESS SDD vpath disks visible and working when I'm done.
>
> Has someone done this? Can you comment on my proposed method here?
> 2. After the migration, and reboot, I understand that the ESS disks
will
> not "be there", since the migration does not upgrade the SDD (subsystem
> device driver). Question: Is this true?
Yes, the datapath devices will be gone because you deleted the SDD
software; IIRC, that is part of the un-install process. After your
upgrade, install SDD just like the first time. This will get you your
hdisks and vpaths back, though not necessarily with the same numbers;
have
a 'lsvpcfg' from before your upgrade to cross-reference your new setup
to.
'importvg' the VG(s) one at a time, using one of the hdisks which
constitute the vpath, then run 'hd2vp' on the VG. That will convert the
VG back to using the vpaths.
>
> 3. Vary off all ESS volume groups, if I shouldn't have done this back
in
> step 1.
>
> 4. Remove all the "datapath devices", via: rmdev -dl dpo -R
>
> 5. Uninstall the 4.3 version of the SDD.
>
> 6. Install the 5.2 version of the SDD.
>
> 7. Install the latest PTF of the 5.2 SDD, that they call version
> 1.5.1.3.
>
> 8. Reboot.
>
>
> If you can tell me how to make this procedure more nearly correct, I'd
> greatly appreciate it.
Note 2:
-------
thread
Q + A:
>
> I need a quick refresher here. I've got a HACMP (4.4) cluster with
SAN- attached
> ESS storage. SDD is installed. Can I add volumes to one of these
volume groups on
> the fly, or does HA need to be down? It's been awhile since I have
done this and I
> can't quite remember if I have to jump through any hoops. Thanks for
the help.
Note 3:
-------
The hd2vp script converts a volume group from supported storage device
hdisks to SDD vpath devices, and the vp2hd script converts a volume
group from SDD vpath devices to supported storage device hdisks.
Use the vp2hd program when you want to configure your applications back
to original supported storage device hdisks, or when you want to remove
SDD from your AIX host system.
Note 4:
-------
thread
Q:
Hi There,
I want to add a vpath to a running hacmp cluster with HACMP 5.1 on AIX 5.2
with a Rotating Resource Group.
If anyone has done it before, could you provide a step-by-step procedure
for this? Do I need to stop and start
HACMP for this?
A:
On Vg active node :
#extendvg4vp vg00 vpath10 vpath11
#smitty chfs ( Increase the f/s as required )
#varyonvg -bu vg00 ( this is to un-lock the vg)
Regards
Note 5:
-------
> HI,
Regards,
Actually, I have the same question as Frederic and you have not
quite answered it. Sure, lsdev can tell you that "hdisk5" is
matched to "fcs0" . . . but what tells you that "fcs0" in turn
matches to "fscsi0"? And if "hdisk126" matches to adapter "fchan1",
how do I determine what that matches to? I've checked all of the
various lsxxxx commands but can't find this bit of info.
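The parent/child relation can be queried directly with lsdev, for example:
# lsdev -p fcs0                  (lists the children of fcs0, e.g. fscsi0)
# lsdev -C -l fscsi0 -F parent   (prints the parent of fscsi0, i.e. fcs0)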
Note 6:
-------
thread
Q:
where to find a guide for the adapter (describing all its states, LED
blinking/lighting)
Adapter is cabled by SAN guys, they double checked it and when I run:
thx in advance.
A:
Regards,
Do something like:
HTH,
thx anyway,
I will ask my SAN team to check cables once more.
Note 7:
-------
thread
# cfallvpath
# rmdev -l fcs1 -d
Example
rmdev -dl dpo -R ; rmdev -dl fscsi0 -R ; cfgmgr -vl fcs0 ; cfallvpath
Note 8:
-------
Technote (FAQ)
Problem
When non-root AIX users issue SDD datapath commands, the "No device file
found" message results.
Cause
AIX SDD does not distinguish between file not found and invalid
permissions.
Solution
Login as the root user or "su" to root user and re-execute command in
order to obtain the desired SDD datapath
command output.
Note 9:
-------
Question:
Hi,
I have an AIX 5.3 server running with 2 FCs. One on a DS8300 and one on
a DS4300.
On the server, i have a filesystems that is mounted and active (hdisks
are from the DS8300).
I can access it fine, write, delete etc...
Answer:
Hi.
The reason is that the vpaths are not part of a varied on volume group.
If you do a 'datapath query device' you should find all the paths will
be
state=closed.
If the vpaths are being used by a volume group, do a varyonvg xxxx.
Then display the datapath and the paths should be active.
Note 10:
--------
thread
Q:
Hi All,
A:
Hi,
At that time I did not have the correct version of the device driver for
the fibre cards in the P570.
/HGA
Note 11:
--------
Greetings:
The "0514-061 Cannot find a child device" is common when the FC card is
either
not attached to a FC device, or if it is attached, then I would look at
the
polarity of the cable
ie. (tx -> rx and rx -> tx) NOT (tx -> tx and rx -> rx)
I would make sure the FC card has connectivity to a FC device, not just
the
fabric and re-run cfgmgr.
-=Patrick=-
To: [email protected]
cc: (bcc: Patrick Bigelbach/DSS)
Subject Re: Cannot cfgmgr on a new FC
this should load any filesets you need for the adapter if they are not
already there. You should then see the adapter in lsdev -Cc adapter | grep fcs.
HTH
Vince
-----Original Message-----
From: IBM AIX Discussion List [mailto:[email protected]] On Behalf Of
Calderon, Linda
Sent: Wednesday, February 19, 2003 10:12 AM
To: [email protected]
Subject: Cannot cfgmgr on a new FC
0514-519 The following device was not found in the customized device
configuration database: name 'fcs1'
* cfgmgr
Note 12:
--------
thread
Q:
Hi All AIXers,
I am trying to add some vpath to Current Volume Group (which is on
vpath)and i
am getting this error
Does anybody have any idea about this error? I have never seen this error
before.
Thanks
A:
James,
If you're adding a vpath to a volume group that has other vpaths, you
will need to use extendvg4vp instead of extendvg.
Note 14:
--------
thread
as a backup action
After a talk with the country IBM representative we modified the action
plan into:
Anyway, before applying the modified action plan I tried to follow the
original one, but with unpredictable return codes: with some vpaths it
works, with some others it half-works (updates the VGDA, but not the ODM),
and with others it returns the original error.
Regards.
1:
==
APAR status
Closed as program error.
Error description
Users of the 64bit kernel may observe an error when cfgmgr is
invoked at runtime in the cfgsisscsi or cfgsisioa config
methods. Following is an example:
# cfgmgr
Method error (/usr/lib/methods/cfgsisscsi -l sisscsia0 ):
0514-061 Cannot find a child device.
APAR information
APAR number IY48873
Reported component name AIX 5L POWER V5
Reported component ID 5765E6200
Reported release 520
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2003-09-19
Closed date 2003-09-19
Last modified date 2003-10-24
Q:
I have an IBM DS4400 with two EXP 700s expansion units connected to a
pSeries 650 with AIX 5.1.I have
created two logical drives in the storage unit.When i run "cfgmgr" to
recognise the new raw physical volume
each disk is reported twice.
A:
Ed,
what you probably should do is run the cfgmgr command without the
device name behind it. Because you deleted the scsi device with the
options -dR you also removed any child devices.
Q:
Hi...
Does someone know what to do with an SDD driver which can't detect
vpaths
from an ESS F20 but hdisks are already available on AIX?
Regards
Luis A. Rojas
A:
I solved the problem using the hd2vp command, which converts the logical
hdisk to its related vpath. And voilà! The vpaths were suddenly recognized
by the cfgvpath command.
Best Regards
Note 19: fget_config:
---------------------
How to show the current state and volume (hdisk) ownership in an IBM
DS4000:
Description
The fget_config command shows the current state and volume (hdisk)
ownership.
# fget_config
# fget_config -A
Note 20:
--------
Q:
What filesets do dpovgfix, hd2vp and vp2hd belong to. I installed my sdd
driver and can see everything but can't find these commands.
A:
They are part of your SDD drivers. You probably installed the
devices.xxx filesets. Did you also
install the host attachment script... the ibm2105 filesets?
Note 21:
--------
thread
Q:
Hi
I have several AIX LPARS running on SVC controlled disks. Right now i
have SDD SW 1.6.1.2. After configuration
i have some vpath devices that can be managed using the datapath
command.
Now in a recent training of SVC i was asked to install the new SDDPCM
driver in order to get some of the benefits
of this SW driver.
SDDPCM does not use the concept of vpath anymore, instead a hdisk device
object is created.
This object has definitions and attributes in ODM files.
Recently i had to change a faulty HBA under SDD drivers. I was able to:
How to do the same with SDDPCM even when there's no concept of vpath
anymore.
Thanks in advanced
A:
Hello ,
You can do the same with sddpcm , either using the MPIO commands or
smitty screens , smitty devices ---> MPIO devices
there you can list paths , remove paths , adapters.
IN the SDD user guide there is a complete section describing what you
can do , but same functions you use
for the vpath , you can use for sddpcm.
Here is the link for the latest user guide
http://www-1.ibm.com/support/docview.wss?rsP3&context=ST52G7&dc=DA490&dc=DA4A30&dc=DA480&dc=D700&dc=DA410&dc=DA4A20&dc=DA460&dc=DA470&dc=DA400&uid=ssg1S7000303&loc=en_US&cs=utf-8&lang=en
Note 22:
--------
thread
Q:
Greetings:
Thanks in advance.
Jay.
A:
Hi
If using vpath devices then you can confirm that you can open any given
device by running:
Also you can review the errpt reports in order to look for VPATH OPEN
messages. You can also use
the lquerypr command in order to check for SCSI reservations in the SAN
box previously set
by another host (in case of a cluster).
Note 23:
--------
thread
Q:
All,
Any of you scripters out there have any suggestions? Thanks for your
help in
advance!
A:
Create the VG
>mkvg -B -y datavg vpathN
Extend it
for i in `lspv | grep vpath | grep None | awk '{print $1}'`
do
extendvg datavg $i
done
That would assign all unused vpaths to the VG. BTW Use the vpath and
not the hdisk. You could add a count into it to limit the number of
disks you assign.
Note 24:
--------
thread
Q:
it sounds like the vpath is showing correctly after cfgmgr, so that's OK.
But you need to use extendvg4vp and not just extendvg
Do a 'smitty vg' and choose
'Add a Data Path Volume to a Volume Group'
Examples mkpath:
--To define and configure an already defined path between scsi0 and the
hdisk1 device at SCSI ID 5
and LUN 0 (i.e., connection 5,0), enter:
# mkpath -l hdisk1 -p scsi0 -w 5,0
--To only add to the Customized Paths object class a path definition
between scsi0 and the hdisk1 disk device
at SCSI ID 5 and LUN 0, enter:
# mkpath -d -l hdisk1 -p scsi0 -w 5,0
Examples lspath:
--To display the set of paths whose operational status is failed, enter:
# lspath -s failed
Note that this output shows both the path status and the operational
status of the device.
The path status simply indicates whether the path is configured or not.
The operational status indicates
how the path is being used with respect to path selection processing in
the device driver.
Only paths with a path status of available also have an operational
status. If a path is not currently configured
into the device driver, it does not have an operational status.
Examples of displaying path attributes:
--If the target device is a SCSI disk, to display all attributes for the
path to parent scsi0 at connection 5,0,
use the command:
# lspath -AHE -l hdisk10 -p scsi0 -w "5,0"
The system will display a message similar to the following:
attribute value description user_settable
weight 1 Order of path failover selection true
IBM TotalStorage FAStT has been renamed IBM TotalStorage DS4000 series
Q20:
What's the difference between using an ESS with or without SDD or SDDPCM
installed on the host?
A20:
The use of SDD or SDDPCM gives the AIX host the ability to access
multiple paths to a single LUN
within an ESS. This ability to access a single LUN on multiple paths
allows for a higher degree of
data availability in the event of a path failure. Data can continue to
be accessed within the ESS
as long as there is at least one available path. Without one of these
installed, you will lose access
to the LUN in the event of a path failure.
However, your choice of whether to use SDD or SDDPCM impacts your
ability to use single-node quorum:
Q3: What disk support guidelines must be followed when running GPFS in
an sp cluster type?
Q6: What disk support guidelines must be followed when running GPFS in
an rpd cluster type?
Q9:What are the disk support guidelines that must be followed when
running GPFS in an hacmp cluster type
Examples:
The following settings enable fast failure of I/O on fabric events and
dynamic tracking of FC devices:
# chdev -l fscsi0 -a fc_err_recov=fast_fail
# chdev -l fscsi0 -a dyntrk=yes
Display attributes:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S1002295&loc=en_US&cs=utf-8&lang=en
All hdisks and vpath devices must be removed from host system before
upgrading to SDD host attachment script
32.6.100.21 and above. All MPIO hdisks must be removed from host system
before upgrading to SDDPCM host attachment
script 33.6.100.9.
Flash (Alert)
Abstract
When upgrading from SDDPCM host attachment script
devices.fcp.disk.ibm2105.mpio.rte version 33.6.100.8 or below
to 33.6.100.9, all SDDPCM MPIO hdisks must be removed from the AIX host
system before the upgrade.
When upgrading from SDD host attachment script ibm2105.rte version
32.6.100.18 or below to 32.6.100.21 or later,
all AIX hdisks and SDD vpath devices must be removed from the AIX host
system before the upgrade.
Content
Please note that this document contains the following sections:
- AIX OS only*
- Host attachment + AIX OS*
- SDD + AIX OS*
- Host attachment + SDD
- Host attachment only
- SDD + Host attachment + AIX OS*
* Upgrading the AIX OS will always require you to install the SDD which
corresponds to the new AIX OS level.
To upgrade SDD only, follow the procedure in the SDD User's Guide.
Example:
===================================================
Previous lspv output (from step 4):
hdisk0 000bc67da3945d3c None
hdisk1 000bc67d531c699f rootvg active
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
hdisk7 none None
hdisk8 none None
hdisk9 none None
hdisk10 none None
hdisk11 none None
hdisk12 none None
hdisk13 none None
hdisk14 none None
hdisk15 none None
hdisk16 none None
hdisk17 none None
hdisk18 none None
hdisk19 none None
hdisk20 none None
hdisk21 none None
vpath0 000bc67d318fb8ea SDDVG0
vpath1 000bc67d318fde50 SDDVG1
vpath2 000bc67d318ffbb0 SDDVG2
vpath3 000bc67d319018f3 SDDVG3
vpath4 000bc67d319035b2 SDDVG4
Current lspv output (from this step):
hdisk0 000bc67da3945d3c None
hdisk1 000bc67d531c699f rootvg active
hdisk2 000bc67d318fb8ea None
hdisk3 000bc67d318fde50 None
hdisk4 000bc67d318ffbb0 None
hdisk5 000bc67d319018f3 None
hdisk6 000bc67d319035b2 None
hdisk7 000bc67d318fb8ea None
hdisk8 000bc67d318fde50 None
hdisk9 000bc67d318ffbb0 None
hdisk10 000bc67d319018f3 None
hdisk11 000bc67d319035b2 None
hdisk12 000bc67d318fb8ea None
hdisk13 000bc67d318fde50 None
hdisk14 000bc67d318ffbb0 None
hdisk15 000bc67d319018f3 None
hdisk16 000bc67d319035b2 None
hdisk17 000bc67d318fb8ea None
hdisk18 000bc67d318fde50 None
hdisk19 000bc67d318ffbb0 None
hdisk20 000bc67d319018f3 None
hdisk21 000bc67d319035b2 None
vpath0 none None
vpath1 none None
vpath2 none None
vpath3 none None
vpath4 none None
In this case, hdisk2, hdisk7, hdisk12, and hdisk17 from the current lspv
output have the PVID that matches the PVID of SDDVG0 from the previous
lspv output.
So, use either hdisk2, hdisk7, hdisk12, or hdisk17 to import the volume
group
So, use either hdisk2, hdisk7, hdisk12, or hdisk17 to import the volume
group
with the name SDDVG0
IBM Flash Alert: SDD 1.6.2.0 requires minimum AIX code levels; possible
0514-035 error:
------------------------------------------------------------------------
---------------
Flash (Alert)
Abstract
SDD 1.6.2.0 requires minimum AIX code levels. Not upgrading to correct
AIX version and level can result in
0514-035 error when attempting removal of dpo or vpath device
Content
Starting from SDD version 1.6.2.0, a unique ID attribute is added to SDD
vpath devices, in order to
support AIX5.3 VIO future features. AIX device configure methods have
been changed in both AIX52 TL8 and
AIX53 TL4 for this support.
Following are the requirements for this version of SDD with:
If upgraded to SDD 1.6.2.0 and above without first upgrading AIX to the
levels listed above the following error
will be experienced when attempting to remove any vpath devices using
the:
or the
Solution:
1) Upgrade AIX to correct level and ptf, or
2) Contact SDD support at 1-800-IBM-SERV for steps to clean up ODM to
allow for downgrading the SDD level
from 1.6.2.0, if unable to upgrade AIX to a newer technology level.
Note 30:
--------
fcnet0 deleted
fscsi0 deleted
fcs0 deleted
# cfgmgr
root@n5114l02:/root#
The adapter was checked with several commands;
connection with the SAN seems impossible.
root@n5114l02:/root#lsattr -El fscsi0
attach none How this adapter is CONNECTED False
dyntrk no Dynamic Tracking of FC Devices True
fc_err_recov delayed_fail FC Fabric Event Error RECOVERY Policy True
scsi_id Adapter SCSI ID False
sw_fc_class 3 FC Class for Fabric True
Note 31:
--------
A fix is available
Obtain fix for this APAR
APAR status
Closed as program error.
Error description
#---------------------------------------------------
chvg -t renumber pvs that have pv numbers greater than
maxpvs with the new factor. chvg -t is only updating the
new pv_num in lvmrec and not updating the VGDA.
chvg -t leaves the vg in an inconsistent state and any changes to the
vg may get unpredictable results like a system crash.
Local fix
Problem summary
#---------------------------------------------------
chvg -t renumber pvs that have pv numbers greater than
maxpvs with the new factor. chvg -t is only updating the
new pv_num in lvmrec and not updating the VGDA.
chvg -t leaves the vg in an inconsistent state and any changes to the
vg may get unpredictable results like a system crash.
Problem conclusion
Fix chvg -t to update the VGDA with the new pv number.
Add a check in hd_kextendlv to make sure that the pvol we
are trying to access is not null.
Temporary fix
Comments
APAR information
APAR number IY83872
Reported component name AIX 5.3
Reported component ID 5765G0300
Reported release 530
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2006-04-11
Closed date 2006-04-11
Last modified date 2006-05-03
Publications Referenced
Fix information
Fixed component name AIX 5.3
Fixed component ID 5765G0300
Note 32:
========
--------------------------------------------------------------------------
===========================================================================
AUSCERT External Security Bulletin Redistribution
ESB-2008.0267 -- [AIX]
AIX Logical Volume Manager buffer overflow
14 March 2008
===========================================================================
Original Bulletin:
http://www14.software.ibm.com/webapp/set2/subscriptions/pqvcmjd?mode=18&ID=4169
I. OVERVIEW
II. DESCRIPTION
/usr/sbin/lchangevg
/usr/sbin/ldeletepv
/usr/sbin/putlvodm
/usr/sbin/lvaryoffvg
/usr/sbin/lvgenminor
/usr/sbin/tellclvmd
III. IMPACT
The successful exploitation of this vulnerability allows a
non-privileged user to execute code with root privileges.
V. SOLUTIONS
A. APARS
http://www.ibm.com/support/docview.wss?uid=isg1IZ00559
http://www.ibm.com/support/docview.wss?uid=isg1IZ10828
http://www.ibm.com/support/docview.wss?uid=isg1IY98331
http://www.ibm.com/support/docview.wss?uid=isg1IY98340
http://www.ibm.com/support/docview.wss?uid=isg1IY99537
B. FIXES
ftp://aix.software.ibm.com/aix/efixes/security/lvm_ifix.tar
-----------------------------------------------------------------
bos.lvm.rte 5200-08
IZ10828_08.071212.epkg.Z
bos.lvm.rte 5200-08
IZ00559_8a.071212.epkg.Z
bos.clvm.enh 5200-08
IZ00559_8b.071212.epkg.Z
bos.lvm.rte 5200-09
IZ10828_09.071212.epkg.Z
bos.lvm.rte 5200-09
IZ00559_9a.071211.epkg.Z
bos.clvm.enh 5200-09
IZ00559_9b.071211.epkg.Z
bos.lvm.rte 5200-10
IZ10828_10.071212.epkg.Z
bos.lvm.rte 5200-10
bos.rte.lvm.5.2.0.107.U
bos.clvm.enh 5200-10
bos.clvm.enh.5.2.0.107.U
bos.lvm.rte 5300-05
IY98331_05.071212.epkg.Z
bos.lvm.rte 5300-05
IY99537_05.071212.epkg.Z
bos.lvm.rte 5300-05
IY98340_5a.071211.epkg.Z
bos.clvm.enh 5300-05
IY98340_5b.071211.epkg.Z
sum filename
------------------------------------
14660 17 IY98331_05.071212.epkg.Z
26095 9 IY98340_5a.071211.epkg.Z
40761 8 IY98340_5b.071211.epkg.Z
10885 16 IY99537_05.071212.epkg.Z
24909 10 IZ00559_8a.071212.epkg.Z
64769 9 IZ00559_8b.071212.epkg.Z
65110 10 IZ00559_9a.071211.epkg.Z
25389 9 IZ00559_9b.071211.epkg.Z
26812 26 IZ10828_08.071212.epkg.Z
55064 26 IZ10828_09.071212.epkg.Z
55484 26 IZ10828_10.071212.epkg.Z
03885 157 bos.clvm.enh.5.2.0.107.U
30581 128 bos.clvm.enh.5.3.0.61.U
48971 1989 bos.rte.lvm.5.2.0.107.U
64179 2603 bos.rte.lvm.5.3.0.63.U
cksum filename
-------------------------------------------
3121912357 16875 IY98331_05.071212.epkg.Z
107751313 9190 IY98340_5a.071211.epkg.Z
1129637178 7735 IY98340_5b.071211.epkg.Z
4019303479 16201 IY99537_05.071212.epkg.Z
1791374386 9289 IZ00559_8a.071212.epkg.Z
3287090389 8299 IZ00559_8b.071212.epkg.Z
565672617 9294 IZ00559_9a.071211.epkg.Z
257555679 8302 IZ00559_9b.071211.epkg.Z
3930477686 26525 IZ10828_08.071212.epkg.Z
1199269029 26533 IZ10828_09.071212.epkg.Z
358657844 26480 IZ10828_10.071212.epkg.Z
3753492719 160768 bos.clvm.enh.5.2.0.107.U
4180839749 131072 bos.clvm.enh.5.3.0.61.U
3765659627 2036736 bos.rte.lvm.5.2.0.107.U
3338925192 2665472 bos.rte.lvm.5.3.0.63.U
------------------------------------------------------------------
d9929214a4d85b986fb2e06c9b265c768c7178a9
IY98331_05.071212.epkg.Z
0f5fbcdfbbbf505366dad160c8dec1c1ce75285e
IY98340_5a.071211.epkg.Z
cf2cda3b8d19b73d06b69eeec7e4bae192bec689
IY98340_5b.071211.epkg.Z
9d8727b5733bc34b8daba267b82864ef17b7156f
IY99537_05.071212.epkg.Z
e7a366956ae7a08deb93cbd52bbbbf451d0f5565
IZ00559_8a.071212.epkg.Z
1898733cdf6098e4f54ec36132a03ebbe0682a7e
IZ00559_8b.071212.epkg.Z
f68c458c817f99730b193ecbd02ae24b9e51cc67
IZ00559_9a.071211.epkg.Z
185954838c439a3c7f8e5b769aa6cc7d31123b59
IZ00559_9b.071211.epkg.Z
6244138dc98f3fd16928b2bbcba3c5b4734e9942
IZ10828_08.071212.epkg.Z
98bfaf44ba4bc6eba452ea074e276b8e87b41c9d
IZ10828_09.071212.epkg.Z
2a9c0dd75bc79eba153d0a4e966d930151121d45
IZ10828_10.071212.epkg.Z
96706ec5afd792852350d433d1bf8d8981b67336
bos.clvm.enh.5.2.0.107.U
91f6d3a4d9ffd15d258f4bda51594dbce7011d8a
bos.clvm.enh.5.3.0.61.U
4589a5bca998f437aac5c3bc2c222eaa51490dab
bos.rte.lvm.5.2.0.107.U
3449afd795c24594c7a0c496f225c7148b4071ab bos.rte.lvm.5.3.0.63.U
These sums should match exactly. The PGP signatures in the tar
file and on this advisory can also be used to verify the
integrity of the fixes. If the sums or signatures cannot be
confirmed, contact IBM AIX Security at
[email protected] and describe the discrepancy.
installp -a -d . -p all
installp -a -d . -X all
VI. WORKAROUNDS
A. OPTION 1
http://www.redbooks.ibm.com/abstracts/sg247430.html
An fpm level of high will remove the setuid bit from the
affected commands. For example:
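# fpm -l high -p      (preview the changes first)
# fpm -l high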
ftp://aix.software.ibm.com/aix/efixes/security
http://www.ibm.com/eserver/support/fixes/fixcentral/main/pseries/aix
http://www14.software.ibm.com/webapp/set2/subscriptions/pqvcmjd
B. Download the key from a PGP Public Key Server. The key ID is:
0xA6A36CCC
Please contact your local IBM AIX support center for any
assistance.
IX. ACKNOWLEDGMENTS
df Command
Purpose
Reports information about space on file systems. This document describes
the AIXr df command as well as
the System V version of df.
Syntax
df [ [ -P ] | [ -I | -M | -i | -t | -v ] ] [ -k ] [ -m ] [ -g ] [ -s ]
[FileSystem ... | File... ]
Description
The df command displays information about total space and available
space on a file system.
The FileSystem parameter specifies the name of the device on which the
file system resides, the directory
on which the file system is mounted, or the relative path name of a file
system. The File parameter specifies
a file or a directory that is not a mount point.
If the File parameter is specified, the df command displays information
for the file system on which the file
or directory resides.
If you do not specify the FileSystem or File parameter, the df command
displays information for all
currently mounted file systems.
File system statistics are displayed in units of 512-byte blocks by
default.
The df command gets file system space statistics from the statfs system
call. However, specifying the -s flag
gets the statistics from the virtual file system (VFS) specific file
system helper. If you do not specify
arguments with the -s flag and the helper fails to get the statistics,
the statfs system call statistics
are used. Under certain exceptional conditions, such as when a file
system is being modified while
the df command is running, the statistics displayed by the df command
might not be accurate.
Note:
Some remote file systems, such as the Network File System (NFS), do not
provide all the information
that the df command needs. The df command prints blanks for statistics
that the server does not provide.
flags:
examples:
df
If your system has the /, /usr, /site, and /usr/venus file systems
mounted, the output from the df command
resembles the following:
cd /
df .
The output from this command resembles the following:
defragfs Command
Purpose
Increases a file system's contiguous free space.
Syntax
defragfs [ -q | -r | -s] { Device | FileSystem }
Description
The defragfs command increases a file system's contiguous free space by
reorganizing allocations to be
contiguous rather than scattered across the disk. The file system to be
defragmented can be specified
with the Device variable, which is the path name of the logical volume
(for example, /dev/hd4).
It can also be specified with the FileSystem variable, which is the
mount point in the /etc/filesystems file.
You must mount the file system read-write for this command to run
successfully. Using the -q flag,
the -r flag or the -s flag generates a fragmentation report. These flags
do not alter the file system.
The defragfs command is slow against a JFS2 file system with a snapshot
due to the amount of data
that must be copied into the snapshot storage object. The defragfs command
issues a warning message
if there are snapshots. The snapshot command can be used to delete the
snapshots and then used again
to create a new snapshot after the defragfs command completes.
Flags
Output
On a JFS filesystem, the definitions for the messages reported by the
defragfs command are as follows:
Examples:
To defragment the /data1 file system located on the /dev/lv00 logical
volume, enter:
defragfs /data1
Purpose
Checks file system consistency and interactively repairs the file
system.
Syntax
fsck [ -n ] [ -p ] [ -y ] [ -dBlockNumber ] [ -f ] [ -ii-NodeNumber ] [
-o Options ] [ -tFile ]
[ -V VfsName ] [ FileSystem1 - FileSystem2 ... ]
Description
Attention: Always run the fsck command on file systems after a system
malfunction. Corrective actions
may result in some loss of data. The default action for each consistency
correction is to wait for the operator
to enter yes or no. If you do not have write permission for an affected
file system, the fsck command defaults
to a no response in spite of your actual response.
Notes:
The fsck command does not make corrections to a mounted file system.
The fsck command can be run on a mounted file system for reasons other
than repairs.
However, inaccurate error messages may be returned when the file system
is mounted.
The fsck command checks and interactively repairs inconsistent file
systems. You should run this command
before mounting any file system. You must be able to read the device
file on which the file system resides
(for example, the /dev/hd0 device). Normally, the file system is
consistent, and the fsck command merely reports
on the number of files, used blocks, and free blocks in the file system.
If the file system is inconsistent,
the fsck command displays information about the inconsistencies found
and prompts you for permission to repair them.
If a JFS2 file system has snapshots, the fsck command will attempt to
preserve them. If this action fails,
the snapshots cannot be guaranteed to contain all of the before-images
from the snapped file system.
The fsck command will delete the snapshots and the snapshot logical
volumes.
If you do not specify a file system with the FileSystem parameter, the
fsck command checks all file systems
listed in the /etc/filesystems file for which the check attribute is set
to True. You can enable this type of
checking by adding a line in the stanza, as follows:
check=true
You can also perform checks on multiple file systems by grouping the
file systems in the /etc/filesystems file.
To do so, change the check attribute in the /etc/filesystems file as
follows:
check=Number
The Number parameter tells the fsck command which group contains a
particular file system.
File systems that use a common log device should be placed in the same
group. File systems are checked,
one at a time, in group order, and then in the order that they are
listed in the /etc/filesystems file.
All check=true file systems are in group 1. The fsck command attempts to
check the root file system before
any other file system regardless of the order specified on the command
line or in the /etc/filesystems file.
In addition to its messages, the fsck command records the outcome of its
checks and repairs through its exit value.
This exit value can be any sum of the following conditions:
When the system is booted from a disk, the boot process explicitly runs
the fsck command,
specified with the -f and -p flags on the /, /usr, /var, and /tmp file
systems. If the fsck command
is unsuccessful on any of these file systems, the system does not boot.
Booting from removable media and
performing maintenance work will then be required before such a system
will boot.
If the fsck command successfully runs on /, /usr, /var, and /tmp, normal
system initialization continues.
During normal system initialization, the fsck command specified with the
-f and -p flags runs from the
/etc/rc file. This command sequence checks all file systems in which the
check attribute is set to True (check=true).
If the fsck command executed from the /etc/rc file is unable to
guarantee the consistency of any file system,
system initialization continues. However, the mount of any inconsistent
file systems may fail.
A mount failure may cause incomplete system initialization.
Note:
By default, the /, /usr, /var, and /tmp file systems have the check
attribute set to False (check=false)
in their /etc/filesystem stanzas. The attribute is set to False for the
following reasons:
The boot process explicitly runs the fsck command on the /, /usr, /var,
and /tmp file systems.
The /, /usr, /var, and /tmp file systems are mounted when the /etc/rc
file is executed. The fsck command
will not modify a mounted file system. Furthermore, the fsck command run
on a mounted file system produces
unreliable results.
You can use the File Systems application in Web-based System Manager
(wsm) to change file system characteristics.
You could also use the System Management Interface Tool (SMIT) smit fsck
fast path to run this command.
Flags
Examples
To check all the default file systems, enter:
fsck
This command checks all the file systems marked check=true in the
/etc/filesystems file.
This form of the fsck command asks you for permission before making any
changes to a file system.
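To fix minor problems with the default file systems automatically, enter: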
fsck -p
To check a specific file system, enter:
fsck /dev/hd1
This command checks the unmounted file system located on the /dev/hd1
device.
31.6 DESCRIPTOR AREA'S:
-----------------------
With Scalable VG's, LVCM info is no longer stored in the first user
block of any LV.
All relevant LVCM info is kept in the VGDA.
The lqueryvg command reads the VGDA from a specified disk in a VG.
Flags:
-p: which PV
-A: show all available information
-t: show descriptive tags
Example:
#lqueryvg -Atp hdisk0
Max LVs: 256
PP Size: 25
Free PPs: 468
LV count: 20
PV count: 2
Total VGDAs: 3
Conc Allowed: 0
MAX PPs per PV 1016
MAX PVs: 32
Conc Autovaryo 0
Varied on Conc 0
Logical: 00c665ed00004c0000000112b7408848.1 hd5 1
00c665ed00004c0000000112b7408848.2 hd6 1
00c665ed00004c0000000112b7408848.3 hd8 1
00c665ed00004c0000000112b7408848.4 hd4 1
00c665ed00004c0000000112b7408848.5 hd2 1
00c665ed00004c0000000112b7408848.6 hd9var 1
00c665ed00004c0000000112b7408848.7 hd3 1
00c665ed00004c0000000112b7408848.8 hd1 1
00c665ed00004c0000000112b7408848.9 hd10opt 1
00c665ed00004c0000000112b7408848.10 hd7 1
00c665ed00004c0000000112b7408848.11 hd7x 1
00c665ed00004c0000000112b7408848.12 beheerlv 1
00c665ed00004c0000000112b7408848.13 varperflv 1
00c665ed00004c0000000112b7408848.14 loglv00 1
00c665ed00004c0000000112b7408848.15 db2_server_v8 1
00c665ed00004c0000000112b7408848.16 db2_var_v8 1
00c665ed00004c0000000112b7408848.17 db2_admin_v8 1
00c665ed00004c0000000112b7408848.18 db2_adminlog_v8 1
00c665ed00004c0000000112b7408848.19 db2_dasscr_v8 1
00c665ed00004c0000000112b7408848.20 db2_Fixpak10 1
Physical: 00c665edb74079bc 2 0
00c665edb7f2987a 1 0
Total PPs: 1022
LTG size: 128
HOT SPARE: 0
AUTO SYNC: 0
VG PERMISSION: 0
SNAPSHOT VG: 0
IS_PRIMARY VG: 0
PSNFSTPP: 4352
VARYON MODE: 0
VG Type: 0
Max PPs: 32512
-------
How do I find out what the maximum supported logical track group (LTG)
size of my hard disk?
You can use the lquerypv command with the -M flag. The output gives the
LTG size in KB. For instance,
the LTG size for hdisk0 in the following example is 256 KB.
/usr/sbin/lquerypv -M hdisk0
256
------
First find the executable (probably man, but man may have invoked
something else in the background), then attach dbx and run
dbx> where
and paste the stack output; it should be possible to find it from there.
Also paste the level of the fileset you are on for the executable.
How do you break a storage lock on a SAN disk?
You finally got the long-awaited SAN disk, and then this: no Volume
Group can be created on it.
# mkvg -f vpath100
-------
# lquerypv -h /dev/hdisk9 80 10
00000080 00001155 583CD4B0 00000000 00000000 |...UX<..........|
# lquerypv -h /dev/hdisk1
00000000 C9C2D4C1 00000000 00000000 00000000 |................|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00C665ED B7F2987A 00000000 00000000 |..e....z........|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
# lquerypv -h /dev/hdisk0 80 10
00000080 00C665ED B74079BC 00000000 00000000 |..e..@y.........|
The LVCB stores attributes of an LV. The getlvcb command reads the LVCB
of a specified LV and displays a formatted output of the data in it.
Example:
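For instance, to show all attributes in the LVCB of hd2 (hd2 is just an
example LV):
# getlvcb -TA hd2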
The putlvcb command writes control block information (only the
specified fields) into block 0 of a logical volume (the LVCB).
The bootlist needs to be changed so that cd0 is the first boot device.
Shut down and reboot; the system will come back online in its proper
state.
1. Vary off the volume group:
# varyoffvg myvg
2. Now remove the complete information of that VG from ODM. The VGDA and
LVCB
on the actual disks are NOT touched by the exportvg command.
# exportvg myvg
3. Now import the VG and create new ODM objects associated with that VG:
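# importvg -y myvg hdiskX     (hdiskX is a placeholder for any disk in the VG)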
You only need to specify one intact PV of the VG in the above command.
Any disk in the VG
will have a VGDA which contains all necessary information.
The importvg command reads the VGDA and LVCB on that disk and creates
completely new ODM entries.
rvgrecover:
-----------
Use the following shell script to reinitialize the ODM entries for the
rootvg volume group. Save it as /bin/rvgrecover, make it executable with
chmod +x /bin/rvgrecover
and then run:
/bin/rvgrecover
PV=/dev/ipldevice # PV=hdisk0
VG=rootvg
cp /etc/objrepos/CuAt /etc/objrepos/CuAt.$$
cp /etc/objrepos/CuDep /etc/objrepos/CuDep.$$
cp /etc/objrepos/CuDv /etc/objrepos/CuDv.$$
cp /etc/objrepos/CuDvDr /etc/objrepos/CuDvDr.$$
lqueryvg -Lp $PV | awk '{ print $2 }' | while read LVname; do
odmdelete -q "name = $LVname" -o CuAt
odmdelete -q "name = $LVname" -o CuDv
odmdelete -q "value3 = $LVname" -o CuDvDr
done
odmdelete -q "name = $VG" -o CuAt
odmdelete -q "parent = $VG" -o CuDv
odmdelete -q "name = $VG" -o CuDv
odmdelete -q "name = $VG" -o CuDep
odmdelete -q "dependency = $VG" -o CuDep
odmdelete -q "value1 = 10" -o CuDvDr
odmdelete -q "value3 = $VG" -o CuDvDr
importvg -y $VG $PV # ignore lvaryoffvg errors
varyonvg $VG
redefinevg:
-----------
redefinevg Command
Purpose
Redefines the set of physical volumes of the given volume group in the
device configuration database.
Syntax
redefinevg { -d Device | -i Vgid } VolumeGroup
Description
During normal operations the device configuration database remains
consistent with the
Logical Volume Manager (LVM) information in the reserved area on the
physical volumes.
If inconsistencies occur between the device configuration database and
the LVM, the redefinevg command
determines which physical volumes belong to the specified volume group
and re-enters this information
in the device configuration database. The redefinevg command checks for
inconsistencies by reading
the reserved areas of all the configured physical volumes attached to
the system.
Note: To use this command, you must either have root user authority or
be a member of the system group.
Flags
-d Device The volume group ID, Vgid, is read from the specified physical
volume device.
You can specify the Vgid of any physical volume belonging to the
volume group that you are redefining.
-i Vgid The volume group identification number of the volume group to be
redefined.
Example
To redefine rootvg physical volumes in the Device Configuration
Database, enter a command similar to the following:
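For example, assuming hdisk0 is a rootvg disk:
# redefinevg -d hdisk0 rootvg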
synclvodm:
----------
synclvodm Command
Purpose
Synchronizes or rebuilds the logical volume control block, the device
configuration database,
and the volume group descriptor areas on the physical volumes.
Syntax
synclvodm [ -v ] VolumeGroup [ LogicalVolume ... ]
Description
During normal operations, the device configuration database remains
consistent with the
logical volume manager information in the logical volume control blocks
and the volume group descriptor
areas on the physical volumes. If for some reason the device
configuration database is not consistent
with Logical Volume Manager information, the synclvodm command can be
used to resynchronize the database.
The volume group must be active for the resynchronization to occur (see
varyonvg).
If logical volume names are specified, only the information related to
those logical volumes is updated.
Attention: Do not remove the /dev entries for volume groups or logical
volumes. Do not change the
device configuration database entries for volume groups or logical
volumes using the object data manager.
Note: To use this command, you must either have root user authority or
be a member of the system group.
Flags
-v verbose
Example
synclvodm rootvg
1. Short version for normal VG (not rootvg) and the disk is working:
--------------------------------------------------------------------
extendvg VolumeGroupName hdiskY
migratepv hdiskX hdiskY
reducevg -d VolumeGroupName hdiskX
2. More Detail:
---------------
2.2 The disk was not mirrored, or you want to replace a working disk:
---------------------------------------------------------------------
If hdiskX contains the primary dump device, you must deactivate it:
# sysdumpdev -p /dev/sysdumpnull
Note 1:
-------
Q:
spiderman# cd probes
spiderman# pwd
/opt/diagnostics/probes
spiderman# ls -la
ls: 0653-341 The file . does not exist.
spiderman# cd ..
spiderman# ls -la probes
ls: probes: Invalid file system control data detected.
total 0
spiderman#
A:
Some good news here. Yes, your directory is hosed, but the important
thing is that a directory is just a repository for storing inode numbers
and associated (human-readable) file names. Since fsck is so nicely
generating all of those now currently inaccessible inode numbers, a find
command can be used to move them into a new directory. Once the old
directory is empty, you can (hopefully) rm -r it.
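A minimal sketch, assuming fsck reported inode 12345 and the directory
/recovered exists on the same filesystem:
# find /opt/diagnostics -xdev -inum 12345 -exec mv {} /recovered \;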
http://www-1.ibm.com/support/docview.wss?uid=isg1IY94101
APAR status
Closed as program error.
Error description
After shrinking a filesystem, J2_DMAP_CORRUPT reports
appear in the error report and some file creates/writes
fail with "Invalid file system control data detected".
Local fix
Problem summary
Problem conclusion
Temporary fix
Comments
APAR information
APAR number IY94101
Reported component name AIX 5.3
Reported component ID 5765G0300
Reported release 530
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2007-01-26
Closed date 2007-01-29
Last modified date 2007-05-25
Publications Referenced
Fix information
Fixed component name AIX 5.3
Fixed component ID 5765G0300
Note 3:
-------
Q:
Since applying ML7 for AIX 5.1 I have been getting file corruption error
messages on a particular filesystem and the only way to fix it is to
umount
the filesystem and fsck it. I thought it might be a hardware problem but
now it is also happening on another machine I put the ML7 on and it is
happening to the same filesystem (one machine is a test server of the
other). The only unique thing about the filesystem is that it is not in
rootvg and it is large (1281228 1024-blocks). Has anyone heard of this?
Below is the error I am getting:
LABEL: JFS_META_CORRUPTION
IDENTIFIER: 684A365B
Description
FILE SYSTEM CORRUPTION
Probable Causes
INVALID FILE SYSTEM CONTROL DATA
Recommended Actions
PERFORM FULL FILE SYSTEM RECOVERY USING FSCK UTILITY OBTAIN
DUMP
CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
Failure Causes
ADAPTER HARDWARE OR MICROCODE
DISK DRIVE HARDWARE OR MICROCODE
SOFTWARE PROGRAM
STORAGE CABLE LOOSE, DEFECTIVE, OR UNTERMINATED
Recommended Actions
CHECK CABLES AND THEIR CONNECTIONS
INSTALL LATEST ADAPTER AND DRIVE MICROCODE
INSTALL LATEST STORAGE DEVICE DRIVERS
IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE
Detail Data
FILE NAME
xix_lookup.c
LINE NO.
300
MAJOR/MINOR DEVICE NUMBER
0026 0006
ADDITIONAL INFORMATION
4A46 5345 426E 8C46 0000 000E 0000 001D 0003 0610 0000 0000 0000 0000 0000 0002
164D A330 0001 86D3 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL: JFS_FSCK_REQUIRED
IDENTIFIER: CD546B25
Description
FILE SYSTEM RECOVERY REQUIRED
Recommended Actions
PERFORM FULL FILE SYSTEM RECOVERY USING FSCK UTILITY
Detail Data
MAJOR/MINOR DEVICE NUMBER
0026 0006
FILE SYSTEM DEVICE AND MOUNT POINT
/dev/lv04, /opt/egate
Note 3:
-------
Q:
How do I remove a file with a strange (unprintable) name?
A:
Type:
ls -li
to list the files with their inode numbers. The inode for this file is
153805. Use find -inum [inode] to make sure that the file is correctly
identified. Here, we see that it is. Then use the -exec functionality
of find to do the remove.
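For example, assuming inode 153805 as above:
# find . -inum 153805 -exec rm -i {} \;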
Note that if this strangely named file were not of zero-length, it might
contain accidentally misplaced
and wanted data. Then you might want to determine what kind of data the
file contains and move the file
to some temporary directory for further investigation, for example:
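# find . -inum 153805 -exec mv {} unknown.file \;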
This will rename the file to unknown.file, so you can easily inspect it.
$ ls
-????*'?
$ ls | cat -v
-^B^C?^?*'
* filesystem corruption (in which case touching the filesystem any more
can really stuff things up)
If you discover that you have two files of the same name, one of the
files probably has a bizarre
(and unprintable) character in its name. Most probably, this unprintable
character is a backspace.
For example:
$ ls
filename filename
$ ls -q
filename fl?ilename
$ ls | cat -v
filename
fl^Hilename
Note 2:
-------
Q:
Hi,
I get an error concerning a filesystem.
Now I have 2 questions:
LABEL: J2_FS_FULL
IDENTIFIER: CED6B4B5
Date/Time: Mon Dec 27 12:49:35 NFT
Sequence Number: 3420
Machine Id: 00599DDD4C00
Node Id: srvdms0
Class: O
Type: INFO
Resource Name: SYSJ2
Description
UNABLE TO ALLOCATE SPACE IN FILE SYSTEM
Probable Causes
FILE SYSTEM FULL
Recommended Actions
INCREASE THE SIZE OF THE ASSOCIATED FILE SYSTEM REMOVE UNNECESSARY
DATA FROM FILE SYSTEM USE FUSER UTILITY TO LOCATE UNLINKED FILES STILL
REFERENCED
Detail Data
JFS2 MAJOR/MINOR DEVICE NUMBER
002B 000B
A:
Q:
Subject
Re: error concerning filesystem [Virus checked]
Hi Holger,
A small query... how did you arrive at this figure of 43 from the error
code? The decimal value of B is 11, but I could not understand the 2*16.
A:
The major/minor numbers (002B 000B) are in hex: hex abcd =
a*16^3 + b*16^2 + c*16^1 + d. Therefore hex
002B = 0*16^3 + 0*16^2 + 2*16^1 + 11 = 2*16 + 11 = 43 (the major
number), and hex 000B = 11 (the minor number).
thread:
Use the following command in case the superblock is corrupted. This will
restore the BACKUP COPY of the superblock to the CURRENT copy.
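A sketch for a JFS logical volume (here /dev/lv02, as in the steps
below; the backup superblock lives at 4K block 31, the primary at
block 1, matching the od offsets 0x1f000 and 0x1000 shown below):
# dd count=1 bs=4k skip=31 seek=1 if=/dev/lv02 of=/dev/lv02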
Note:
fuser
Identifies processes using a file or file system
# fuser -u /dev/hd3
Sample output: /dev/hd3: 2964(root) 6615c(root) 8465(casado)
11290(bonner)
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?
topic=/com.ibm.aix.howtos/doc/howto/HT_baseadmn_badmagnumber.htm
1. Unmount the damaged file system. For example:
# umount /home/myfs
2. To confirm damage to the file system, run the fsck command against
the file system. For example:
# fsck -p /dev/lv02
If the problem is damage to the superblock, the fsck command returns one
of the following messages:
3. With root authority, use the od command to display the superblock for
the file system,
as shown in the following example:
# od -x -N 64 /dev/lv02 +0x1000
Where the -x flag displays output in hexadecimal format and the -N flag
instructs the system to format
no more than 64 input bytes from the offset parameter (+), which
specifies the point in the file where
the file output begins. The following is an example output:
In the preceding output, note the corrupted magic value at 0x1000 (1234
0234). If all defaults were taken
when the file system was created, the magic number should be 0x43218765.
If any defaults were overridden,
the magic number should be 0x65872143.
# od -x -N 64 /dev/lv02 +0x1f000
Use the fsck command to clean up inconsistent files caused by using the
secondary superblock. For example:
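# fsck /dev/lv02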
1.
DAMAGED SUPERBLOCK (ext2/ext3)
A damaged superblock means that the filesystem check will fail.
Our luck is that there are backups of the superblock located at several
positions, and we can restore them with a simple command.
The usual (and only) positions are: 8193, 32768, 98304, 163840, 229376
and 294912 (8193 in many cases only on older systems; 32768 is the most
common position for the first backup).
You can check this out and have a lot more info about a particular
partition you have on your HD by:
CODE
# dumpe2fs /dev/hda5
You will see that the primary superblock is located at position 0, and
the first backup on position 32768.
O.K. let's get serious now, suppose you get a "Damaged Superblock" error
message at filesystem check
( after a power failure ) and you get a root-prompt in a recovery
console, then you give the command:
CODE
# e2fsck -b 32768 /dev/hda5
It will then check the filesystem with the information stored in that
backup superblock and if the check
was successful it will restore the backup to position 0.
Now imagine the backup at position 32768 was damaged too . . . then you
just try again with the backup
stored at position 98304, and 163840, and 229376 etc. etc. until you
find an undamaged backup
(there are five backups, so if at least one of those five is okay, it's
bingo!)
So next time don't panic . . just get the paper where you printed out
this Tip and give the magic command
CODE
# e2fsck -b 32768 /dev/hda5
/*************************************************************************
 * rsb.c - Read Super Block. Allows a jfs superblock to be dumped, inode
 * table to be listed or specific inodes data pointers to be chased and
 * dumped to standard out (undelete).
 *
 * Phil Gibbs - Trinem Consulting ([email protected])
 *************************************************************************/
#include <stdio.h>
#include <jfs/filsys.h>
#include <jfs/ino.h>
#include <sys/types.h>
#include <pwd.h>
#include <grp.h>
#include <unistd.h>
#include <time.h>
void PrintSep()
{
int k=80;
while (k)
{
putchar('-');
k--;
}
putchar('\n');
}
res=getpwuid(uid);
if (res->pw_name[0])
{
return res->pw_name;
}
else
{
sprintf(replystr,"%d",uid);
return replystr;
}
}
if (sb->s_version==fsv3pvers)
{
TotalFrags=(sb->s_fsize*512)/sb->s_fragsize;
MaxInodes=(TotalFrags/sb->s_agsize)*sb->s_iagsize;
}
else
{
MaxInodes=(sb->s_fsize*512)/sb->s_bsize;
}
return MaxInodes;
}
PrintSep();
printf("SuperBlock Details:\n-------------------\n");
printf("File system size: %ld x 512 bytes (%ld Mb)\n",
sb->s_fsize,
(sb->s_fsize*512)/(1024*1024));
printf("Block size: %d bytes\n",sb->s_bsize);
printf("Flags: ");
switch (sb->s_fmod)
{
case (char)FM_CLEAN:
break;
case (char)FM_MOUNT:
printf("mounted ");
break;
case (char)FM_MDIRTY:
printf("mounted dirty ");
break;
case (char)FM_LOGREDO:
printf("log redo failed ");
break;
default:
printf("Unknown flag ");
break;
}
if (sb->s_ronly) printf("(read-only)");
printf("\n");
printf("Last SB update at: %s",ctime(&(sb->s_time)));
printf("Version: %s\n",
sb->s_version?"1 - fsv3pvers":"0 - fsv3vers");
printf("\n");
if (sb->s_version==fsv3pvers)
{
TotalFrags=(sb->s_fsize*512)/sb->s_fragsize;
printf("Fragment size: %5d ",sb-
>s_fragsize);
printf("inodes per alloc: %8d\n",sb->s_iagsize);
printf("Frags per alloc: %5d ",sb->s_agsize);
printf("Total Fragments: %8d\n",TotalFrags);
printf("Total Alloc Grps: %5d ",
TotalFrags/sb->s_agsize);
printf("Max inodes: %8ld\n",NumberOfInodes(sb));
}
else
{
printf("Total Alloc Grps: %5d ",
(sb->s_fsize*512)/sb->s_agsize);
printf("inodes per alloc: %8d\n",sb->s_agsize);
printf("Max inodes: %8ld\n",NumberOfInodes(sb));
}
PrintSep();
}
AllocBlock=(StartInum/InodesPerAllocBlock);
BlockNumber=(StartInum-(AllocBlock*InodesPerAllocBlock))/
(PAGESIZE/DILENGTH);
OffsetInBlock=(StartInum-(AllocBlock*InodesPerAllocBlock))-
(BlockNumber*(PAGESIZE/DILENGTH));
SeekPoint=(AllocBlock)?
(BlockNumber*PAGESIZE)+(AllocBlock*AllocBlockSize):
(BlockNumber*PAGESIZE)+(INODES_B*PAGESIZE);
if (SeekPoint!=LastSeekPoint)
{
sync();
fseek(in,SeekPoint,SEEK_SET);
fread(I_NODES,PAGESIZE,1,in);
LastSeekPoint=SeekPoint;
}
*inode=I_NODES[OffsetInBlock];
}
ReadInode( in,
inode,
&DiskInode,
InodesPerAllocBlock,
AllocBlockSize);
FileSize=DiskInode.di_size;
if (FileSize>FOUR_MB)
{
/* Double indirect mapping */
}
else
if (FileSize>THIRTY_TWO_KB)
{
/* Indirect mapping */
SeekPoint=DiskInode.di_rindirect & Mask;
SeekPoint=SeekPoint*Multiplier;
DiskPointers=(ulong *)malloc(1024*sizeof(ulong));
fseek(in,SeekPoint,SEEK_SET);
fread(DiskPointers,1024*sizeof(ulong),1,in);
NumPtrs=1024;
}
else
{
/* Direct Mapping */
DiskPointers=&(DiskInode.di_rdaddr[0]);
NumPtrs=8;
}
BytesToRead=(FileSize>sizeof(Buffer))?
sizeof(Buffer):FileSize;
fseek(in,SeekPoint,SEEK_SET);
fread(Buffer,BytesToRead,1,in);
FileSize=FileSize-BytesToRead;
write(1,Buffer,BytesToRead);
}
}
void ExitWithUsageMessage()
{
fprintf(stderr,"USAGE: rsb [-i inode] [-d] [-s]
<block_device>\n");
exit(1);
}
if (strlen(argv[optind])) in=fopen(argv[optind],"r");
else ExitWithUsageMessage();
if (in)
{
fseek(in,SUPER_B*PAGESIZE,SEEK_SET);
fread(&SuperBlock,sizeof(SuperBlock),1,in);
switch (SuperBlock.s_version)
{
case fsv3pvers:
Valid=!strncmp(SuperBlock.s_magic,fsv3pmagic,4);
InodesPerAllocBlock=SuperBlock.s_iagsize;
AllocBlockSize=
SuperBlock.s_fragsize*SuperBlock.s_agsize;
Multiplier=SuperBlock.s_fragsize;
Mask=0x3ffffff;
break;
case fsv3vers:
Valid=!strncmp(SuperBlock.s_magic,fsv3magic,4);
InodesPerAllocBlock=SuperBlock.s_agsize;
AllocBlockSize=SuperBlock.s_agsize*PAGESIZE;
Multiplier=SuperBlock.s_bsize;
Mask=0xfffffff;
break;
default:
Valid=0;
break;
}
if (Valid)
{
if (DumpSuperBlockFlag==1)
{
AnalyseSuperBlock(&SuperBlock);
}
MaxInodes=NumberOfInodes(&SuperBlock);
if (DumpFlag==1)
{
if (inode)
DumpInodeContents(inode,in,InodesPerAllocBlock,AllocBlockSize,Mask,Multiplier);
else
DumpInodeList(in,MaxInodes,InodesPerAllocBlock,AllocBlockSize);
}
}
else
{
fprintf(stderr,"Superblock - bad magic
number\n");
exit(1);
}
}
else
{
fprintf(stderr,"couldn't open ");
perror(argv[optind]);
exit(1);
}
}
Where,
-i : ignore case distinctions in both the PATTERN and the input files,
i.e. match both uppercase and lowercase characters.
-a : process a binary file as if it were text.
-B N: print N lines of leading context before matching lines.
-A N: print N lines of trailing context after matching lines.
To recover text file starting with "nixCraft" word on /dev/sda1 you can
try following command:
# grep -i -a -B10 -A100 'nixCraft' /dev/sda1 > file.txt
Next use vi to see file.txt. This method is ONLY useful if the deleted
file is a text file.
If you are using the ext2 file system, try the recover command.
Note 3:
-------
This delay is your key to a quick and happy recovery: if a process still
has the file open, the data's there
somewhere, even though according to the directory listing the file
already appears to be gone.
To know where to go, you need to get the id of the process that has the
file open, and the file descriptor.
These you get with lsof, whose name means "list open files." (It
actually does a whole lot more than this
and is so useful that almost every system has it installed. If yours
isn't one of them, you can grab the latest
version straight from its author.)
Once you get that information from lsof, you can just copy the data out
of /proc and call it a day.
This whole thing is best demonstrated with a live example. First, create
a text file that you can delete
and then bring back:
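$ man lsof | col -b > myfile
(The same command is used at the end of this walkthrough to recreate
the file for comparison.)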
Then have a look at the contents of the file that you just created:
$ less myfile
You should see a plaintext version of lsof's huge man page looking out
at you, courtesy of less.
Now press Ctrl-Z to suspend less. Back at a shell prompt make sure your
file is still there:
$ ls -l myfile
-rw-r--r-- 1 jimbo jimbo 114383 Oct 31 16:14 myfile
$ stat myfile
File: `myfile'
Size: 114383 Blocks: 232 IO Block: 4096 regular file
Device: 341h/833d Inode: 1276722 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1010/ jimbo) Gid: ( 1010/
jimbo)
Access: 2006-10-31 16:15:08.423715488 -0400
Modify: 2006-10-31 16:14:52.684417746 -0400
Change: 2006-10-31 16:14:52.684417746 -0400
Yup, it's there all right. OK, go ahead and oops it:
$ rm myfile
$ ls -l myfile
ls: myfile: No such file or directory
$ stat myfile
stat: cannot stat `myfile': No such file or directory
$
It's gone.
At this point, you must not allow the process still using the file to
exit, because once that happens,
the file will really be gone and your troubles will intensify. Your
background less process in this walkthrough
isn't going anywhere (unless you kill the process or exit the shell),
but if this were a video or sound file that
you were playing, the first thing to do at the point where you realize
you deleted the file would be to
immediately pause the application playback, or otherwise freeze the
process, so that it doesn't eventually
stop playing the file and exit.
Now to bring the file back. First see what lsof has to say about it:
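$ lsof | grep myfile
In the output, note the process id of the less process (4158 here) and
the file descriptor it has the file open on (4), as used below.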
You might think that using the -a flag with cp is the right thing to do
here, since you're restoring the file --
but it's actually important that you don't do that. Otherwise, instead
of copying the literal data contained
in the file, you'll be copying a now-broken symbolic link to the file as
it once was listed in its original directory:
$ ls -l /proc/4158/fd/4
lr-x------ 1 jimbo jimbo 64 Oct 31 16:18 /proc/4158/fd/4 ->
/home/jimbo/myfile (deleted)
$ cp -a /proc/4158/fd/4 myfile.wrong
$ ls -l myfile.wrong
lrwxr-xr-x 1 jimbo jimbo 24 Oct 31 16:22 myfile.wrong ->
/home/jimbo/myfile (deleted)
$ file myfile.wrong
myfile.wrong: broken symbolic link to `/home/jimbo/myfile (deleted)'
$ file /proc/4158/fd/4
/proc/4158/fd/4: broken symbolic link to `/home/jimbo/myfile (deleted)'
So instead of all that, just a plain old cp will do the trick:
$ cp /proc/4158/fd/4 myfile.saved
$ ls -l myfile.saved
-rw-r--r-- 1 jimbo jimbo 114383 Oct 31 16:25 myfile.saved
$ man lsof | col -b > myfile.new
$ cmp myfile.saved myfile.new
No complaints from cmp -- your restoration is the real deal.
Incidentally, there are a lot of useful things you can do with lsof in
addition to rescuing lost files.
32.7 Some notes about disks on x86 systems: MBR and Partition Bootsector:
==========================================================================
There are two sectors on the disk that are critical to starting the
computer:
The MBR is created when you create the first partition on the harddisk.
The location is always cylinder 0, head 0 and sector 1.
The MBR contains the Partition Table for the disk and a small amount of
executable code.
On x86 machines, this executable code examines the Partition Table and
identifies
the system partition. The code then finds the system partition's
starting location on the disk, and loads a copy of its Partition Boot
Sector into memory.
The Partition Boot Sector, has its own "layout" depending on the type of
system.
#############################
33. Filesystems in Linux:
#############################
33.1 Disks:
===========
- IDE:
/dev/hda is the primary IDE master drive,
/dev/hdb is the primary IDE slave drive,
/dev/hdc is the secondary IDE master,
/dev/hdd is the secondary IDE slave,
- SCSI:
/dev/sda is the first SCSI interface and 1st device id number
etc..
Partitions are numbered per disk, e.g. /dev/hda1 is the first partition
on the primary IDE master.
Floppydrive:
/dev/fd0
# mount -t auto /dev/fd0 /mnt/floppy
# mount -t vfat /dev/fd0 /mnt/floppy
# mount /dev/fd0 /mnt/floppy
Zipdrive:
33.2 Filesystems:
=================
- ReiserFS
A journaled filesystem
- Ext2
The most popular filesystem for years, but it does not use a
log/journal, so it is gradually becoming less important.
- Ext3
Closely related to Ext2, but this one supports journaling.
An Ext2 filesystem can easily be upgraded to Ext3.
# fdisk /dev/sda
The number of cylinders for this disk is set to ..
(.. more output..)
Command:
Command: new
Command action
e extended
p primary partition (1-4): 1
(.. more output..)
Command: print
Command: new
Command action
e extended
p primary partition (1-4): 2
(.. more output..)
Command: type
Partition number (1-4): 2
Hex code: 82 # which is a Linix swap partition
Changed system type of partition 2 to 82 (Linux swap)
Command: print
Command: write
(.. more output..)
Of course, we now would like to create the filesystems and the swap.
If you want to use the Ext2 filesystem on partition one, use the
following command:
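# mke2fs /dev/sda1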
# mkdir /bkroot
# mount /dev/sda1 /bkroot
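Initialize the swap partition before enabling it:
# mkswap /dev/sda2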
# swapon /dev/sda2
Note 1:
=======
------------------------------------------------------------------------
--------
There is one limitation with Linux Software RAID: a /boot partition
can only reside on a RAID-1 array.
Linux supports several hardware RAID devices as well as software RAID,
which allows you to use any IDE or SCSI drives as the physical devices.
In all cases I'll refer to software RAID.
LVM stands for Logical Volume Manager and is a way of grouping drives
and/or partitions so that, instead of dealing with hard and fast
physical partitions, the data is managed on a virtual basis where the
virtual partitions can be resized. The Red Hat website has a great
overview of the Logical Volume Manager.
There is one limitation: an LVM cannot be used for /boot.
------------------------------------------------------------------------
--------
So now, with all three drives partitioned with id fd (Linux raid
autodetect), you can go ahead and combine the partitions into a RAID
array:
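A sketch of the creation command; the partition names are assumptions
based on the three-drive setup described here:
# mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/hda1 /dev/hdb1 /dev/hdc1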
Wow, that was easy. That created a special device, /dev/md0, which can
be used instead of a physical partition.
You can check on the status of that RAID array with the mdadm command:
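# mdadm --detail /dev/md0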
Layout : left-symmetric
Chunk Size : 64K
UUID : 36161bdd:a9018a79:60e0757a:e27bb7ca
Events : 0.10670
The important lines to see are the State line which should say clean
otherwise there might be a problem.
At the bottom you should make sure that the State column always says
active sync which says each device
is actively in the array. You could potentially have a spare device
that's on hand should any drive fail.
If you have a spare you'll see it listed as such here.
One thing you'll see above, if you're paying attention, is the fact
that the size of the array is 240G even though three 120G drives are
part of the array. That's because one drive's worth of space is used
for the parity data that is needed to survive the failure of one of
the drives.
------------------------------------------------------------------------
--------
# pvcreate /dev/md0
# vgcreate lvm-raid /dev/md0
The default value for the physical extent size can be too low for a
large RAID array. In those cases you'll need
to specify the -s option with a larger than default physical extent
size. The default is only 4MB as of the
version in Fedora Core 5. For example, to successfully create a 550G
RAID array a size of 2G works well:
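For example, using the names from this note:
# vgcreate -s 2G lvm-raid /dev/md0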
Ok, you've created a blank receptacle, but now you have to tell it how
many Physical Extents from the
physical device (/dev/md0 in this case) will be allocated to this Volume
Group. In my case I wanted all the data
from /dev/md0 to be allocated to this Volume Group. If later I wanted to
add additional space I would create
a new RAID array and add that physical device to this Volume Group.
To find out how many PEs are available, use the vgdisplay command.
Then I can create a Logical Volume using all (or some) of the space
in the Volume Group.
In my case I call the Logical Volume lvm0.
# vgdisplay lvm-raid
.
.
Free PE / Size 57235 / 223.57 GB
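So, to use all 57235 free PEs, something like:
# lvcreate -l 57235 lvm-raid -n lvm0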
In the end you will have a device you can use very much like a plain
'ol partition, called /dev/lvm-raid/lvm0.
You can now check on the status of the Logical Volume with the lvdisplay
command. The device can then be used to create a filesystem on.
# lvdisplay /dev/lvm-raid/lvm0
--- Logical volume ---
LV Name /dev/lvm-raid/lvm0
VG Name lvm-raid
LV UUID FFX673-dGlX-tsEL-6UXl-1hLs-6b3Y-rkO9O2
LV Write Access read/write
LV Status available
# open 1
LV Size 223.57 GB
Current LE 57235
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:2
# mkfs.ext3 /dev/lvm-raid/lvm0
.
.
# mount /dev/lvm-raid/lvm0 /mnt
# df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/lvm--raid-lvm0
224G 93M 224G 1% /mnt
------------------------------------------------------------------------
--------
You'll notice in this case I had /dev/hdb fail. I replaced it with a
new drive with the same capacity and was able to add it back to the
array. The first step is to partition the new drive just like when
first creating the array. Then you can simply add the partition back
to the array and watch the status as the data is rebuilt onto the
newly replaced drive.
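A sketch, assuming the replacement partition is /dev/hdb1:
# mdadm /dev/md0 --add /dev/hdb1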
Layout : left-symmetric
Chunk Size : 64K
- Expanding an Array/Filesystem
The answer to how to expand a RAID-5 array is very simple: You can't.
I'm used to working with a NetApp Filer where you plug in a drive, type
a simple command and that drive was added
to the existing RAID array, no muss, no fuss. While you can't add space
to a RAID-5 array directly in Linux you CAN
add space to an existing Logical Volume and then expand the ext3
filesytem on top of it. That's the main reason you
want to run LVM on top of RAID.
Before you start it's probably a good idea to back up your data just in
case something goes wrong.
Assuming you want your data to be protected from a drive failing you'll
need to create another RAID array
per the instructions above. In my case I called it /dev/md1 so after
partitioning I can create the array:
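A sketch; the partition names for the second array are placeholders:
# mdadm --create /dev/md1 --level=5 --raid-devices=3 \
    /dev/hdd1 /dev/hde1 /dev/hdf1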
The next couple steps will add the space from the new RAID array to the
space available to be used by Logical Volumes.
You then check to see how many Physical Extents you have and add them to
the Logical Volume you're using.
Remember that since you can have multiple Logical Volumes on top of a
physical RAID array you need to do this extra step.
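A sketch of those steps (the PE count for lvextend comes from the
vgdisplay output; +57235 is a placeholder):
# pvcreate /dev/md1
# vgextend lvm-raid /dev/md1
# vgdisplay lvm-raid
# lvextend -l +57235 /dev/lvm-raid/lvm0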
There, you now have a much larger Logical Volume which is using space on
two separate RAID arrays.
You're not done yet; you now have to extend your filesystem to make use
of all that new space. Fortunately this is easy on FC4 and RHEL4, since
there is a command to expand an ext3 filesystem without even unmounting
it!
Be patient, expanding the file system takes a while.
# lvdisplay /dev/lvm-raid/lvm0
.
.
LV Size 447.14 GB
.
# df /raid-array
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/lvm--raid-lvm0
230755476 40901348 178132400 19% /raid-array
# ext2online /dev/lvm-raid/lvm0 447g
Get yourself a sandwich
# df /raid-array
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/lvm--raid-lvm0
461510952 40901348 40887876 9% /raid-array
Congrats, you now have more space. Now go fill it with something.
Note 2:
=======
What is an LVM ?
LVM stands for Logical Volume Manager, which is the fundamental way to
manage UNIX/Linux storage systems in a scalable manner. An LVM abstracts
disk devices into pools of storage space called Volume Groups.
These volume groups are in turn subdivided into virtual disks called
Logical Volumes. The logical volumes
may be used just like regular disks with filesystem created on them and
mounted in the Unix/Linux
filesystem tree. The logical volumes can span multiple disks. Even
though a lot of companies have implemented their own LVMs for *nixes,
the one created by the Open Software Foundation (OSF) was integrated
into many Unix systems, and it serves as a base for the Linux
implementation of LVM.
Note: Sun Solaris ships with LVM from Veritas which is substantially
different from the OSF implementation.
LVM created in conjunction with RAID can provide fault tolerance coupled
with scalability and easy disk management.
Create a logical volume and filesystem which spans multiple disks.
Note : Before you move to implement LVMs in Linux, make sure your
kernel is 2.4 or above; otherwise you will have to recompile your
kernel from source to include support for LVM.
LVM Creation
To create a LVM, we follow a three step process.
Step One : We need to select the physical storage resources that are
going to be used for LVM. Typically, these
are standard partitions but can also be Linux software RAID volumes that
we've created. In LVM terminology,
these storage resources are called "physical volumes" (eg: /dev/hda1,
/dev/hda2 ... etc).
The above step creates a physical volume from 3 partitions which I want
to initialize for inclusion
in a volume group.
Step Two : Creating a volume group. You can think of a volume group as a
pool of storage that consists of one
or more physical volumes. While LVM is running, we can add physical
volumes to the volume group or even remove them.
# vgscan
Now you can create a volume group and assign one or more physical
volumes to the volume group.
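For example, using the volume group name that the rest of this note
assumes:
# vgcreate my_vol_grp /dev/hda1 /dev/hda2 /dev/hda3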
You can check the result of your work at this stage by entering the
command:
# vgdisplay
This command displays the total physical extends in a volume group, size
of each extent,
the allocated size and so on.
Step Three : This step involves the creation of one or more "logical
volumes" using our volume group storage pool.
The logical volumes are created from volume groups, and may have
arbitrary names. The size of the new volume may be requested in either
extents (-l switch) or in KB, MB, GB or TB (-L switch), rounding up to
whole extents.
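For example (the 1 GB size is just an illustration):
# lvcreate -L 1G -n my_logical_vol my_vol_grp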
Now you can check if you got the desired results by using the command :
# lvdisplay
# mke2fs -j /dev/my_vol_grp/my_logical_vol
#File: /etc/fstab
/dev/my_vol_grp/my_logical_vol /data ext3 defaults 0 0
Now you can start using the newly created logical volume, accessible
at the /data mount point.
Next : Resizing Logical Volumes
Command Library
NAME
vgcreate - create a volume group
SYNOPSIS
vgcreate [-A|--autobackup {y|n}] [-d|--debug] [-h|--help] [-l|--
maxlogicalvolumes MaxLogicalVolumes]
[-p|--maxphysicalvolumes MaxPhysicalVolumes] [-s|--physicalextentsize
PhysicalExtentSize[kKmMgGtT]]
[-v|--verbose] [--version] VolumeGroupName PhysicalVolumePath
[PhysicalVolumePath...]
DESCRIPTION
vgcreate creates a new volume group called VolumeGroupName using the
block special device
PhysicalVolumePath previously configured for LVM with pvcreate(8).
OPTIONS
-A, --autobackup {y|n}
Controls automatic backup of VG metadata after the change (see
vgcfgbackup(8)). Default is yes.
-d, --debug
Enables additional debugging output (if compiled with DEBUG).
-h, --help
Print a usage message on standard output and exit successfully.
-l, --maxlogicalvolumes MaxLogicalVolumes
Sets the maximum possible logical volume count. More logical
volumes can't be created in this volume group.
Absolute maximum is 256.
-p, --maxphysicalvolumes MaxPhysicalVolumes
Sets the maximum possible physical volume count. More physical
volumes can't be included in this volume group. Absolute maximum is 256.
-s, --physicalextentsize PhysicalExtentSize[kKmMgGtT]
Sets the physical extent size on physical volumes of this volume
group. A size suffix
(k for kilobytes up to t for terabytes) is optional, megabytes is
the default if no suffix is present.
Values can be from 8 KB to 16 GB in powers of 2. The default of 4
MB causes maximum LV sizes of ~256GB
because as many as ~64k extents are supported per LV. In case
larger maximum LV sizes are needed (later),
you need to set the PE size to a larger value as well. Later
changes of the PE size in an existing VG are
not supported.
-v, --verbose
Display verbose runtime information about vgcreate's activities.
--version
Display tool and IOP version and exit successfully.
EXAMPLES
To create a volume group named test_vg using physical volumes /dev/hdk1,
/dev/hdl1, and /dev/hdm1
with default physical extent size of 4MB:
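# vgcreate test_vg /dev/hdk1 /dev/hdl1 /dev/hdm1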
NOTE: If you are using devfs, it is essential to use the full devfs
name of the device rather than the symlinked name in /dev; the above
command would then use the full devfs paths to those devices.
Command Library
NAME
vgextend - add physical volumes to a volume group
SYNOPSIS
vgextend [-A|--autobackup{y|n}] [-d|--debug] [-h|--help] [-v|--verbose]
VolumeGroupName
PhysicalVolumePath [PhysicalVolumePath...]
DESCRIPTION
vgextend allows you to add one or more initialized physical volumes
( see pvcreate(8) ) to an existing
volume group to extend it in size.
OPTIONS
-A, --autobackup y/n
Controls automatic backup of VG metadata after the change ( see
vgcfgbackup(8) ). Default is yes.
-d, --debug
Enables additional debugging output (if compiled with DEBUG).
-h, --help
Print a usage message on standard output and exit successfully.
-v, --verbose
Gives verbose runtime information about lvextend's activities.
Examples
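# vgextend vg00 /dev/sdn1 /dev/sda4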
tries to extend the existing volume group "vg00" by the new physical
volumes (see pvcreate(8)) "/dev/sdn1" and "/dev/sda4".
Command Library
NAME
pvcreate - initialize a disk or partition for use by LVM
SYNOPSIS
pvcreate [-d|--debug] [-f[f]|--force [--force]] [-y|--yes] [-h|--help]
[-v|--verbose] [-V|--version]
PhysicalVolume [PhysicalVolume...]
DESCRIPTION
pvcreate initializes PhysicalVolume for later use by the Logical Volume
Manager (LVM). Each PhysicalVolume
can be a disk partition, whole disk, meta device, or loopback file. For
DOS disk partitions,
the partition id must be set to 0x8e using fdisk(8), cfdisk(8), or a
equivalent. For whole disk devices
only the partition table must be erased, which will effectively destroy
all data on that disk. This can be done
by zeroing the first sector with:
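# dd if=/dev/zero of=PhysicalVolume bs=512 count=1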
OPTIONS
-d, --debug
Enables additional debugging output (if compiled with DEBUG).
-f, --force
Force the creation without any confirmation. You can not recreate
(reinitialize) a physical volume belonging
to an existing volume group. In an emergency you can override this
behaviour with -ff. In no case can you
initialize an active physical volume with this command.
-s, --size
Overrides the size of the physical volume which is normally
retrieved. Useful in rare case where this value
is wrong. More useful to fake large physical volumes of up to 2
Terabytes - 1 Kilobyte on smaller devices
for testing purposes only where no real access to data in created
logical volumes is needed. If you wish
to create the supported maximum, use "pvcreate -s 2147483647k
PhysicalVolume [PhysicalVolume ...]".
All other LVM tools will use this size with the exception of
lvmdiskscan(8)
-y, --yes
Answer yes to all questions.
-h, --help
Print a usage message on standard output and exit successfully.
-v, --verbose
Gives verbose runtime information about pvcreate's activities.
-V, --version
Print the version number on standard output and exit successfully.
Example
Initialize partition #4 on the third SCSI disk and the entire fifth SCSI
disk for later use by LVM:
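# pvcreate /dev/sdc4 /dev/sde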
This example uses /dev/sdb (an empty SCSI disk with no existing
partitions) to create a single partition for the entire disk (36 GB).
We will do this for all disks.
Ex:
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF
disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Ex:
# partprobe
Obtain OCFS2
To determine the kernel-specific module that you need, use uname -r.
# uname -r
2.6.9-22.ELsmp
Preparing... ###########################################
[100%]
1:ocfs2-tools ###########################################
[ 33%]
2:ocfs2console ###########################################
[ 67%]
3:ocfs2-2.6.9-22.ELsmp ###########################################
[100%]
Configure OCFS2
Once all of the nodes have been added, click on Cluster --> Propagate
Configuration. This will copy the OCFS2 configuration file
to each node in the cluster. You may be prompted for root passwords as
ocfs2console uses ssh to propagate the configuration file.
Leave the OCFS2 console by clicking on File --> Quit. It is possible to
format and mount the OCFS2 partitions using the ocfs2console GUI;
however,
this guide will use the command line utilities.
As root, execute the following command on each cluster node to allow the
OCFS2 cluster stack to load at boot time:
Ex:
# /etc/init.d/o2cb enable
As root on each of the cluster nodes, create the mount point directory
for the OCFS2 filesystem
Ex:
# mkdir /u03
Ex:
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L /u03 /dev/sdc1
mkfs.ocfs2 1.0.3
Filesystem label=/u03
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=36413280256 (1111245 clusters) (8889960 blocks)
35 cluster groups (tail covers 14541 clusters, rest cover 32256
clusters)
Journal size=33554432
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing lost+found: done
mkfs.ocfs2 successful
Since this filesystem will contain the Oracle Clusterware files and
Oracle RAC database files, we must ensure that all I/O
to these files uses direct I/O (O_DIRECT). Use the "datavolume" option
whenever mounting the OCFS2 filesystem to enable direct I/O.
Failure to do this can lead to data loss in the event of system failure.
Ex:
# mount -t ocfs2 -L /u03 -o datavolume /u03
Notice that the mount command uses the filesystem label (-L /u03) given
during the creation of the filesystem. This is a handy way to refer
to the filesystem without having to remember the device name.
To verify that the OCFS2 filesystem is mounted, issue the mount command
or run df:
# mount -t ocfs2
/dev/sdc1 on /u03 type ocfs2 (rw,_netdev,datavolume)
# df /u03
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdc1 35559840 138432 35421408 1% /u03
The OCFS2 filesystem can now be mounted on the other cluster nodes.
Database files
mkdir /u03/oradata
chown oracle:oinstall /u03/oradata
chmod 775 /u03/oradata
34.1 Solaris:
-------------
# swap -l
The -l option can be used to list swap space. The system displays
information like:
swapfile dev swaplo blocks free
/dev/dsk/c0t0d0s3 136,3 16 302384 302384
swapfile: the pathname of the swap area. In this example it is
/dev/dsk/c0t0d0s3.
dev : the major/minor device number, shown in decimal if it's a block
special device; zeroes otherwise.
swaplo: the offset in 512-byte blocks where usable swap space begins.
blocks: the size in 512-byte blocks. The swaplen value can be adjusted
as a kernel parameter.
free : free 512-byte blocks.
The swap -l command does not include physical memory in its calculation
of swap space.
# swap -s
The -s option can be used to list a summary of the system's virtual swap
space.
total: 31760k bytes allocated + 5952k reserved = 37712k used, 202928k
available
There are 2 methods available for adding more swap to your system.
(2) The other method to add more swap space is to use the mkfile and
swap commands
to designate a part of an existing UFS filesystem as a supplementary
swap area.
You can use it as a temporary solution, or as a solution for longer
duration as well,
but a swap file is just another file in the filesystem, so you cannot
unmount that
filesystem while the swapfile is in use.
The following steps enable you to add more swap space without
repartitioning a disk.
- As root, use df -k to locate a suitable filesystem. Suppose /data
looks allright
for this purpose
- Use the mkfile command to add a 50MB swapfile named swapfile in the
/data partition.
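A sketch of those steps:
# mkfile 50m /data/swapfile
# swap -a /data/swapfile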
-- Removing a swapfile:
-- --------------------
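First remove the file from the swap pool, then delete it:
# swap -d /data/swapfile
# rm /data/swapfile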
Create a directory which will serve as the mount point for the TMPFS
file system.
There is no command such as newfs to create a TMPFS file system before
mounting it.
The TMPFS file system actually gets created in RAM when you execute the
mount command
and specify a filesystem type of TMPFS. The following example creates a
new directory
/export/data and mounts a TMPFS filesystem, limiting it to 25MB.
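A sketch of those two steps:
# mkdir /export/data
# mount -F tmpfs -o size=25m swap /export/data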
34.2 AIX:
---------
The reports from the "vmstat" and "topas" commands indicate the amount
of paging space I/O that is
taking place.
# lsps -a
Page Space Physical Volume Volume Group Size %Used
Active Auto Type
paging00 hdisk1 rootvg 80MB 1 yes
yes lv
hd6 hdisk1 rootvg 256MB 1 yes
yes lv
# pstat -s
Note 1:
-------
The deferred page space allocation policy is the default policy in AIX.
The "vmo -o defps" tunable turns deferred page space allocation (DPSA)
on or off; turning DPSA off preserves the late page space allocation
(LPSA) policy.
Note 2:
-------
Technote (FAQ)
Question
During an online backup, you might see a high paging space usage, which
will not be released
even after online backup completion in DB2® Universal Database™ (DB2
UDB) Version 8.
This problem does not occur during an offline backup.
Cause
Paging space usage increases during online database backups on AIX® 5.2
and 5.3.
This is an expected behavior from ML4 of AIX 5L™ 5.2 and ML1 of AIX 5L
5.3 onwards.
During an online database backup operation, file pages are loaded into
memory by AIX
in order for the backup processes to read them. If DB2 UDB runs out of
memory,
AIX has to free memory to fit additional file pages into RAM. It does
this by writing
DB2 UDB shared memory segments out to paging space. When the backup
completes,
these pages in paging space are not released because they are still in
use by the other
DB2 UDB processes. They will only be freed when the database is
deactivated.
Answer
To free up paging space without stopping the database, use the AIX
tuning parameter lru_file_repage.
It affects Virtual Memory Manager (VMM) page replacement. By setting
this parameter to 0,
you force the system to only free file pages when you run out of memory
and to not write
working pages out to paging space. This will stop paging use from
increasing.
To set this parameter to zero, use vmo command. For example:
vmo -o lru_file_repage=0
This parameter was introduced in ML4 of AIX5L 5.2 and ML1 of AIX5L 5.3.
The default value is 1.
Note 4:
-------
When the (dynamic) value of numperm% drops below minperm%, it will cause
the paging of computational pages to page space.
Note 5:
-------
# lsps -a
# lsps -s
# chps -d 1 hd6
mkps:
-----
To create a paging space in volume group myvg that has four logical
partitions and is activated immediately
and at all subsequent system restarts, enter:
# mkps -a -n -s 4 myvg
rmps:
-----
AIX 5.1 or later:
Use the swapoff command to dynamically deactivate the paging space, then
use the rmps command.
# swapoff /dev/paging03
# rmps paging03
chps:
-----
As from AIX 5L you can use the chps -d command to decrease the size of
a paging space without having to deactivate it, then reboot, then
remove, and then recreate it with a smaller size.
Decrease it with a number of LP's like:
# chps -d 2 paging03
# lsps -a or lsps -s
# pg /etc/swapspaces
hd6:
        dev = /dev/hd6
paging00:
        dev = /dev/paging00
If the amount of paging space is less than the amount of real memory in
the system, it's possible the system will run out of paging space before
real memory. This is because AIX performs early allocation of page
space: when a page is referenced, real memory and paging space blocks
are both allocated. If there are fewer paging space blocks than real
memory pages, paging space will be exhausted before all of real memory
is consumed.
# export PSALLOC=early
This example causes all future programs to be executed in the
environment to use early allocation.
The currently executing shell is not affected.
When using PSALLOC=early, the user should set a handler for the SIGSEGV
signal, pre-allocating and setting up the memory as a stack using the
sigaltstack function. Even though PSALLOC=early is specified, when there
is not enough paging space and a program attempts to expand the stack,
the program may receive the SIGSEGV signal.
On some systems, paging space might not ever be needed even if all the
pages accessed have been touched.
This situation is most common on systems with very large amount of RAM.
However, this may result in overcommitment
of paging space in cases where more virtual memory than available RAM is
accessed.
To disable DPSA and preserve the Late Page Space Allocation policy, run
the following command:
# vmo -o defps=0
To re-enable DPSA:
# vmo -o defps=1
34.3 Linux:
-----------
# mkswap -c /dev/hda4
# swapon /dev/hda4
If you want the swap space enabled after boot, include the appropriate
entry into /etc/fstab, for example
/dev/hda4 swap swap defaults 0 0
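A swap file can be used instead of a partition; a sketch of the usual
steps (the 64 MB size is just an example):
# dd if=/dev/zero of=/swapfile bs=1024 count=65536
# mkswap /swapfile
# swapon /swapfile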
When you are done using the swapfile, you can turn it off and remove
with
# swapoff /swapfile
# rm /swapfile
Abstract
While the virtual memory management in Linux 2.2 has decent performance
for many workloads, it suffers from
a number of problems.
The first part of this paper contains a description of how the Linux 2.2
VMM works and an analysis of why
it has bad behaviour in some situations.
The way in which a lot of this behaviour has been fixed in the Linux 2.4
kernel is described in the second part of the paper.
Due to Linux 2.4 being in a code freeze period while these improvements
were implemented, only known-good solutions
have been integrated. A lot of the ideas used are derived from
principles used in other operating systems,
mostly because we have certainty that they work and a good understanding
of why, making them suitable for integration
into the Linux codebase during a code freeze.
The slab cache: this is the kernel's dynamically allocated heap storage.
This memory is unswappable,
but once all objects within one (usually page-sized) area are unused,
that area can be reclaimed.
The page cache: this cache is used to cache file data for both mmap()
and read() and is indexed by (inode, index) pairs.
No dirty data exists in this cache; whenever a program writes to a page,
the dirty data is copied to the buffer cache,
from where the data is written back to disk.
The buffer cache: this cache is indexed by (block device, block number)
tuples and is used to cache raw disk devices,
inodes, directories and other filesystem metadata. It is also used to
perform disk IO on behalf of the page cache
and the other caches. For disk reads the pagecache bypasses this cache
and for network filesystems it isn't used at all.
The inode cache: this cache resides in the slab cache and contains
information about cached files in the system.
Linux 2.2 cannot shrink this cache, but because of its limited size it
does need to reclaim individual entries.
The dentry cache: this cache contains directory and name information in
a filesystem-independent way and is used
to lookup files and directories. This cache is dynamically grown and
shrunk on demand.
SYSV shared memory: the memory pool containing the SYSV shared memory
segments is managed pretty much like the page cache,
but has its own infrastructure for doing things.
Process mapped virtual memory: this memory is administrated in the
process page tables. Processes can have page cache
or SYSV shared memory segments mapped, in which case those pages are
managed in both the page tables
and the data structures used for respectively the page cache or the
shared memory code.
shm_swap scans the SYSV shared memory segments, swapping out those pages
that haven't been referenced recently
and which aren't mapped into any process.
Under very heavy loads, NRU replacement of pages simply doesn't cut it.
More careful and better balanced pageout eviction and flushing is called
for. With the fragility of the Linux 2.2 pageout framework this goal
doesn't really seem achievable.
The facts that shrink_mmap is a simple clock algorithm and relies on
other functions to make process-mapped pages freeable makes it fairly
unpredictable. Add to that the balancing loop in try_to_free_pages and
you get a VM subsystem which is extremely sensitive to minute changes in
the code and a fragile beast at its best when it comes to maintenance or
(shudder) tweaking.
Unification of the buffer cache and the page cache. While in Linux 2.2
the page cache used the buffer cache to write back its data, needing an
extra copy of the data and doubling memory requirements for some write
loads, in Linux 2.4 dirty page cache pages are simply added in both the
buffer and the page cache. The system does disk IO directly to and from
the page cache page. That the buffer cache is still maintained
separately for filesystem metadata and the caching of raw block devices.
Note that the cache was already unified for reads in Linux 2.2, Linux
2.4 just completes the unification.
Support for systems with up to 64GB of RAM (on x86). The Linux kernel
previously had all physical memory directly mapped in the kernel's
virtual address space, which limited the amount of supported memory to
slightly under 1GB. For Linux 2.4 the kernel also supports additional
memory (so called "high memory" or highmem), which can not be used for
kernel data structures but only for page cache and user process memory.
To do IO on these pages they are temporarily mapped into kernel virtual
memory and the data is copied to or from a bounce buffer in "low
memory".
At the same time the memory zone for ISA DMA (0 - 16 MB physical address
range) has also been split out into a separate page zone. This means
larger x86 systems end up with 3 memory zones, which all need their free
memory balanced so we can continue allocating kernel data structures and
ISA DMA buffers. The memory zones logic is generalised enough to also
work for NUMA systems.
The SYSV shared memory code has been removed and replaced with a simple
memory filesystem which uses the page cache for all its functions. It
supports both POSIX SHM and SYSV SHM semantics and can also be used as a
swappable memory filesystem (tmpfs).
Since the changes to the page replacement code took place after all
these changes and in the (one and a half year long) code freeze period
of the Linux 2.4 kernel, the changes have been kept fairly conservative.
On the other hand, we have tried to fix as many of the Linux 2.2 page
replacement problems as possible. Here is a short overview of the page
replacement changes: they'll be described in more detail below.
Page aging, which was present in the Linux 1.2 and 2.0 kernels and in
FreeBSD has been reintroduced into the VM. However, a few small changes
have been made to avoid some artifacts of virtual page based aging.
--Page aging
Page aging was the first easy step in making the bad border-case
behaviour from Linux 2.2 go away, it works reasonably well in Linux 1.2,
Linux 2.0 and FreeBSD. Page aging allows us to make a much finer
distinction between pages we want to keep in memory and pages we want to
swap out than the NRU aging in Linux 2.2.
Page aging in these OSes works as follows: for each physical page we
keep a counter (called age in Linux, or act_count in FreeBSD) that
indicates how desirable it is to keep this page in memory. When scanning
through memory for pages to evict, we increase the page age (adding a
constant) whenever we find that the page was accessed and we decrease
the page age (substracting a constant) whenever we find that the page
wasn't accessed. When the page age (or act_count) reaches zero, the page
is a candidate for eviction.
These two problems are solved by doing exponential decline of the page
age (divide by two instead of substracting a constant) whenever we find
a page that wasn't accessed, resulting in page replacement which is
closer to LRU[Note] than LFU. This reduces the CPU overhead of page
aging drastically in some cases; however, no noticeable change in swap
behaviour has been observed.
Another artifact comes from the virtual address scanning. In Linux 1.2
and 2.0 the system reduces the page age of a page whenever it sees that
the page hasn't been accessed from the page table which it is currently
scanning, completely ignoring the fact that the page could have been
accessed from other page tables. This can put a severe penalty on
heavily shared pages, for example the C library.
This problem is fixed by simply not doing "downwards" aging from the
virtual page scans, but only from the physical-page based scanning of
the active list. If we encounter pages which are not referenced, present
in the page tables but not on the active list, we simply follow the
swapout path to add this page to the swap cache and the active list so
we'll be able to lower the page age of this page and swap it out as soon
as the page age reaches zero.
Both a large and a small target size for the inactive page list have
their benefits. In Linux 2.4 we have chosen a middle ground by
letting the system dynamically vary the size of the inactive list
depending on VM activity, with an artificial upper limit to make sure
the system always preserves some aging information.
Linux 2.4 keeps a floating average of the amount of pages evicted per
second and sets the target for the inactive list and the free list
combined to the free target plus this average number of page steals per
second. Not only does this give us enough time to do all kinds of
page flushing optimisations, it also is small enough to keep page age
distribution within the system intact, allowing us to make good choices
on which pages to evict and which pages to keep.
This means that under loads where data is seldom written we can avoid
writing out dirty inactive pages most of the time, giving us much better
latencies in freeing pages and letting streaming reads continue without
the disk head moving away to write out data all the time. Only under
loads where lots of pages are being dirtied quickly does the system
suffer a bit from syncing out dirty data irregularly.
Another alternative would have been the strategy used in FreeBSD 4.3,
where dirty pages get to stay in the inactive list longer than clean
pages but are synced out before the clean pages are exhausted. This
strategy gives more consistent pageout IO in FreeBSD during heavy write
loads. However, a big factor causing the irregularities in pageout
writes under the simpler strategy above may well be the huge inactive
list target in FreeBSD (33% of RAM). It is not at all clear what this
more complicated strategy would do when used on the dynamically sized
inactive list of Linux 2.4; because of this, Linux 2.4 uses the better
understood strategy of evicting clean inactive pages first and syncing
out the dirty ones only after those are gone.
--Conclusions
Since the Linux 2.4 kernel's VM subsystem is still being tuned heavily,
it is too early to come up with conclusive figures on performance. However,
initial results seem to indicate that Linux 2.4 generally has better
performance than Linux 2.2 on the same hardware.
One big difference between the VM in Linux 2.4 and the VM in Linux 2.2
is that the new VM is far less sensitive to subtle changes. While in
Linux 2.2 a subtle change in the page flushing logic could upset page
replacement, in Linux 2.4 it is possible to tweak the various aspects of
the VM with predictable results and little to no side-effects in the
rest of the VM.
--Remaining issues
The Linux 2.4 VM mainly contains easy to implement and obvious to verify
solutions for some of the known problems Linux 2.2 suffers from. A
number of issues are either too subtle to implement during the code
freeze or will have too much impact on the code. The complete list of
TODO items can be found on the Linux-MM page[Linux-MM]; here are the
most important ones:
Load control: no matter how good we can get the page replacement code,
there will always be a point where the system ends up thrashing to
death. Implementing a simple load control system, where processes get
suspended in round-robin fashion when the paging load gets too high, can
keep the system alive under heavy overload and allow the system to get
enough work done to bring itself back to a sane state.
Note 1:
-------
Q:
Hi All,
Now, this system works for the most part like I understand it should.
Via nmon, I can watch it stealing memory from the FileSystemCache
(numclient values decrease) when the box gets under memory pressure.
However, every once in a while when under memory pressure,
I can see that the system starts writing to the paging space when there
is plenty of FileSystemCache available to steal from.
So, my question is, why does AIX page out when under memory pressure
instead of stealing from the FileSystemCache memory like I want it to?
A:
Look at the Paging to/from the Paging Space - it's zero. Once info is
in the paging space, it's left there until the space is needed for
something else. So at this point the server isn't actually paging.
Note 2:
-------
AIX will always try to use 100% of real memory --> AIX will use the
amount of memory solicited by your processes. The remaining capacity
will be used as filesystem cache.
You can change the minimum and maximum amounts of memory used to cache
files with vmtune (vmo for 5.2+), and it is advised to do so if you're
running databases with data on raw devices (since the db engine usually
has its own cache algorithm, and AIX can't cache data on raw devices).
The values to modify are minperm, maxperm, minclient and maxpin (use at
your own risk!!!).
Paging space use will be very low: 5% is about right --> A paging space
so little used seems to be oversized. In general, paging space use
should stay under 40%, and the size must be determined according to the
application running (i.e. 4X the physical memory size for Oracle). In
AIX 5L a paging space can be reduced without rebooting. Anyway, AIX
always uses some paging space, even keeping copies of the data in
memory and on disk, as a kind of "predictive" paging.
Look in topas at the values "comp mem" (processes) and "non comp mem"
(filesystem cache) to see the distribution of the memory usage. Nmon
can show you the top processes by memory usage, along with many other
statistics.
There are several tools which can give you a more detailed picture of
how memory is being used. "svmon" is very comprehensive. Tools such as
topas and nmon will also give you a bit more information.
Note 3:
-------
Finally, the "svmon" command can be used to list how much memory is
used by each process. The interpretation of the svmon output
requires some expertise. See the AIX documentation for details.
==================================================================
35 Volume group, logical volumes, and filesystem commands in HPUX:
==================================================================
Up through the 10.0 release of HP-UX, HFS has been the only available
locally mounted read/write file system.
Beginning at 10.01, you also have the option of using VxFS. (Note,
however, that VxFS cannot be used
as the root file system.)
As compared to HFS, VxFS allows much shorter recovery times in the event
of system failure.
It is also particularly useful in environments that require high
performance or deal with large
volumes of data. This is because the unit of file storage, called an
extent, can be multiple blocks,
allowing considerably faster I/O than with HFS. It also provides for
minimal downtime by allowing
online backup and administration - that is, unmounting the file system
will not be necessary for
certain tasks. You may not want to configure VxFS, though, on a system
with limited memory
because VxFS memory requirements are considerably larger than those
for HFS.
Create a file system using the newfs command. Note the use of the
character device file. For example:
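A minimal sketch (the file system type and logical volume name are
illustrative; note the raw device file):
# newfs -F vxfs /dev/vg02/rlvol1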
If you do not use the -F FStype option, by default, newfs creates a file
system based on the content
of your /etc/fstab file. If there is no entry for the file system in
/etc/fstab, then the file system type
is determined from the file /etc/default/fs. For information on
additional options, see newfs(1M).
$ cat /etc/default/fs
LOCAL=vxfs
For HFS, you can explicitly specify that newfs create a file system that
allows short file names or long file names
by using either the -S or -L option. By default, these names will be
as short or long as those allowed
by the root file system. Short file names are 14 characters maximum.
Long file names allow up to 255 characters.
Generally, you use long file names to gain flexibility in naming files.
Also, files created on other systems
that use long file names can be moved to your system without being
renamed.
When creating a VxFS file system, file names will automatically be long.
Choose an empty directory to serve as the mount point for the file
system. Use the mkdir command to
create the directory if it does not currently exist. For example, enter:
# mkdir /test
Mount the file system using the mount command. Use the block device file
name that contains the file system.
You will need to enter this name as an argument to the mount command.
Note:
The newfs command is a "friendly" front-end to the mkfs command (see
mkfs(1M)). The newfs command
calculates the appropriate parameters and then builds the file system by
invoking the mkfs command.
-- vgdisplay:
-- ----------
Examples:
# vgdisplay
# vgdisplay -v vgdatadir
-- pvdisplay:
-- ----------
EXAMPLES
# pvdisplay /dev/dsk/c102t9d3
-- lvdisplay:
-- ----------
Examples:
# lvdisplay lvora_p0gencfg_apps
# lvdisplay -v lvora_p0gencfg_apps
# lvdisplay -v /dev/vg00/lvol2
# lvdisplay /dev/vgora_e0etea_data/lvora_e0etea_data
--- Logical volumes ---
LV Name /dev/vgora_e0etea_data/lvora_e0etea_data
VG Name /dev/vgora_e0etea_data
LV Permission read/write
LV Status available/syncd
Mirror copies 1
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 17020
Current LE 4255
Allocated PE 8510
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default
-- vgchange:
-- ---------
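A typical use is activating or deactivating a volume group; a sketch
(volume group name illustrative):
# vgchange -a y /dev/vg03      (activate)
# vgchange -a n /dev/vg03      (deactivate)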
-- vgcreate:
-- ---------
/usr/sbin/vgcreate [-f] [-A autobackup] [-x extensibility] [-e max_pe]
[-l max_lv] [-p max_pv]
[-s pe_size] [-g pvg_name] vg_name pv_path ...
EXAMPLES
First, create the directory /dev/vg00 with the character special file
called group.
mkdir /dev/vg00
mknod /dev/vg00/group c 64 0x030000
The minor number for the group file should be unique among all the
volume groups on the system.
It has the format 0xNN0000, where NN runs from 00 to ff. The maximum
value of NN is controlled by the kernel
tunable parameter maxvgs.
pvcreate /dev/rdsk/c1t0d0
pvcreate /dev/rdsk/c1t2d0
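Then create the volume group from the initialized disks, using their
block device files; a sketch continuing the example above:
vgcreate /dev/vg00 /dev/dsk/c1t0d0 /dev/dsk/c1t2d0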
Physical volumes are identified by their device file names, for example
/dev/dsk/cntndn
/dev/rdsk/cntndn
Note that each disk has a block device file and a character or raw
device file, the latter identified by the r.
Which name you use depends on what task you are doing with the disk. In
the notation above, the first name
represents the block device file while the second is the raw device
file.
-- Use a physical volume's raw device file for these two tasks only:
-> When creating a physical volume. Here, you use the device file for
the disk. For example,
this might be /dev/rdsk/c3t2d0 if the disk were at card instance 3,
target address 2, and device number 0.
(The absence of a section number beginning with s indicates you are
referring to the entire disk.)
For all other tasks, use the block device file. For example, when you
add a physical volume to a volume group,
you use the disk's block device file for the disk, such as
/dev/dsk/c5t3d0.
-- vgextend:
-- ---------
Examples:
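A sketch (names illustrative): add another physical volume, by its
block device file, to an existing volume group:
# vgextend /dev/vg00 /dev/dsk/c2t5d0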
-- pvcreate:
-- ---------
# pvcreate -f /dev/rdsk/c1d0s2
-- lvcreate:
-- ---------
The lvcreate command creates a new logical volume within the volume
group specified by vg_name.
Up to 255 logical volumes can be created in one volume group
SYNOPSIS
/etc/lvcreate [-d schedule] {-l logical_extents_number | -L
logical_volume_size} [-m mirror_copies] [-n lv_path] [-p permission]
[-r relocate] [-s strict] [-C contiguous] [-M mirror_write_cache]
[-c mirror_consistency] vol_group_name
Examples:
# lvcreate /dev/vg02
# lvcreate -s n /dev/vg03
# lvcreate -L 90 -i 3 -I 64 /dev/vg03
-- fstyp:
-- ------
SYNOPSIS
/usr/sbin/fstyp [-v] special
The fstyp command allows the user to determine the file system type of a
mounted or unmounted file system.
special represents a device special file (for example: /dev/dsk/c1t6d0).
Examples:
# fstyp /dev/dsk/c1t6d0
# fstyp /dev/vg00/lvol6
Find the file system type for a particular device file and also
information about its super block:
# fstyp -v /dev/dsk/c1t6d0
-- mkboot:
-- -------
Boot programs are stored in the boot area in Logical Interchange Format
(LIF), which is similar to a file system.
For a device to be bootable, the LIF volume on that device must contain
at least the ISL
(the initial system loader) and HPUX (the HP-UX bootstrap utility) LIF
files. If, in addition, the device
is an LVM physical volume, the LABEL file must be present (see
lvlnboot(1M) ).
For the VERITAS Volume Manager (VxVM) layout on the Itanium-based system
architecture, the only relevant
LIF file is the LABEL file. All other LIF files are ignored. VxVM uses
the LABEL file when the system boots
to determine the location of the root, stand, swap, and dump volumes.
EXAMPLES
# mkboot -l /dev/dsk/c0t5d0
Use the existing layout, and install only SYSLIB and ODE files and
preserve the EST file on the disk:
Install only the SYSLIB file and retain the ODE file on the disk. Use
the Whole Disk layout. Use the file
/tmp/bootlf to get the boot programs rather than the default. (The -i
ODE option will be ignored):
# mkboot -e -l /dev/dsk/c3t1d0
Create an AUTO file containing the string "autofile command" on a
device. If the device is on an Itanium-based system,
the file is created as /EFI/HPUX/AUTO in the EFI partition. If the
device is on a PA-RISC system, the file
is created as a LIF file in the boot area.
-- bdf:
-- ----
bdf prints out the amount of free disk space available on the specified
filesystem (/dev/dsk/c0d0s0, for example)
or on the file system in which the specified file ($HOME, for example)
is contained.
If no file system is specified, the free space on all of the normally
mounted file systems is printed.
The reported numbers are in kilobytes.
Examples:
# bdf
oranh300:/home/se1223>bdf | more
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 434176 165632 266504 38% /
/dev/vg00/lvol1 298928 52272 216760 19% /stand
/dev/vg00/lvol8 2097152 1584488 508928 76% /var
/dev/vg00/lvol11 524288 2440 490421 0% /var/tmp
/dev/vg00/lvucmd 81920 1208 75671 2% /var/opt/universal
/dev/vg00/lvol9 1048576 791925 240664 77% /var/adm
/dev/vg00/lvol10 2064384 47386 1890941 2% /var/adm/crash
/dev/vg00/lvol7 1548288 1262792 283320 82% /usr
/dev/vg00/vsaunixlv
311296 185096 118339 61% /usr/local/vsaunix
/dev/vg00/lvol4 1867776 5264 1849784 0% /tmp
/dev/vg00/lvol6 1187840 757456 427064 64% /opt
/dev/vg00/lvol5 262144 34784 225632 13% /home
/dev/vg00/lvbeheer 131072 79046 48833 62% /beheer
/dev/vg00/lvbeheertmp
655360 65296 553190 11% /beheer/tmp
/dev/vg00/lvbeheerlog
524288 99374 398407 20% /beheer/log
/dev/vg00/lvbeheerhistlog
..
..
# bdf /tmp
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol4 1867776 5264 1849784 0% /tmp
-- lvextend:
-- ---------
WARNINGS
The -m option cannot be used on HP-IB devices.
EXAMPLES
- Increase the number of logical extents of a logical volume to one
hundred:
# lvextend -l 100 /dev/vg01/lvol5
- Increase the number of mirror copies of a logical volume to two:
# lvextend -m 2 /dev/vg01/lvol5
-- extendfs:
-- ---------
If the original hfs filesystem image created on special does not make
use of all of the available space,
extendfs can be used to increase the capacity of an hfs filesystem by
updating the filesystem structure
to include the extra space.
The command-line parameter special specifies the character device
special file of either a logical volume
or a disk partition. If special refers to a mounted filesystem, special
must be un-mounted
before extendfs can be run (see mount(1M)).
EXAMPLES
To increase the capacity of a filesystem created on a logical volume,
enter:
# umount /dev/vg00/lvol1
# extendfs /dev/vg00/rlvol1
-- fsadm:
-- ------
EXAMPLES
Convert a HFS file system from a nolargefiles file system to a
largefiles file system:
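A sketch, assuming the file system is unmounted (device name
illustrative):
# fsadm -F hfs -o largefiles /dev/vg02/rlvol1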
-- diskinfo:
-- ---------
SYNOPSIS
/etc/diskinfo [-b|-v] character_devicefile
DESCRIPTION
diskinfo determines whether the character special file named by
character_devicefile is associated with a SCSI, CS/80, or
Subset/80
disk drive; if so, diskinfo summarizes the disk's characteristics.
Example:
# diskinfo /dev/rdsk/c31t1d3
SCSI describe of /dev/rdsk/c31t1d3:
vendor: IBM
product id: 2105800
type: direct access
size: 13671904 Kbytes
bytes per sector: 512
Example 1:
----------
Choose an empty directory to serve as the mount point (create it with
mkdir if it does not exist) and mount the file system with the mount
command, using the block device file name that contains the file
system. A combined sketch follows.
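A minimal sketch (logical volume name illustrative):
# mkdir /test
# mount /dev/vg02/lvol1 /test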
Example 2:
----------
Example 3:
----------
To use mkfs to determine the command that was used to create the VxFS
file system on /dev/rdsk/c0t6d0:
http://www.docs.hp.com/en/B2355-90672/index.html
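A sketch, using the -m (display) option of mkfs:
# mkfs -F vxfs -m /dev/rdsk/c0t6d0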
Example 4:
----------
Select one or more disks. ioscan(1M) shows the disks attached to the
system and their device file names.
Initialize each disk as an LVM disk by using the pvcreate command. For
example, enter
# pvcreate /dev/rdsk/c0t0d0
Note that using pvcreate will result in the loss of any existing data
currently on the physical volume.
You use the character device file for the disk.
Once a disk is initialized, it is called a physical volume.
- Pool the physical volumes into a volume group. To complete this step:
# mkdir /dev/vgnn
Create a device file named group in the above directory with the mknod
command.
The c following the device file name specifies that group is a character
device file.
The 64 is the major number for the group device file; it will always be
64.
The 0xNN0000 is the minor number for the group file in hexadecimal. Note
that each particular NN must be a
unique number across all volume groups.
Use the block device file to include each disk in your volume group. You
can assign all the physical volumes
to the volume group with one command. No physical volume can already be
part of an existing volume group.
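A sketch of these two steps (the NN value and disk names are
illustrative):
# mknod /dev/vg01/group c 64 0x010000
# vgcreate /dev/vg01 /dev/dsk/c0t0d0 /dev/dsk/c1t0d0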
Once you have created a volume group, you can now create a logical
volume using lvcreate. For example:
# lvcreate /dev/vgnn
Using the above command creates the logical volume /dev/vgnn/lvoln with
LVM automatically assigning
the n in lvoln.
When LVM creates the logical volume, it creates the block and character
device files and places them in the directory
/dev/vgnn.
You can create a file system with large file capability by entering the
following command:
Specifying largefiles sets the largefiles flag, which allows the file
system to hold files
up to one terabyte in size. Conversely, the default nolargefiles option
clears the flag and limits
files being created to a size of two gigabytes or less:
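A sketch (logical volume name illustrative):
# newfs -F vxfs -o largefiles /dev/vg02/rlvol1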
Notes:
------
# for P in 1 2 3 4 5 6 7 8 9 10
> do
> lvextend -m 1 /dev/vg00/lvol$P /dev/dsk/c2t5d0
> sleep 1
> done
Note: c1t2d0 is the boot disk and c2t2d0 is the mirrored disk.
3) Use mkboot to place the boot utilities in the boot area and add the
AUTO file.
mkboot /dev/dsk/c2t2d0
mkboot -a "hpux -lq" /dev/rdsk/c2t2d0
4) Use mkboot to update the AUTO file on the primary boot disk.
mkboot -a "hpux -lq" /dev/rdsk/c1t2d0
5) Repeat the lvextend for all other logical volumes on the boot mirror.
lvextend -m 1 /dev/vg00/lvol4
lvextend -m 1 /dev/vg00/lvol5
lvextend -m 1 /dev/vg00/lvol6
lvextend -m 1 /dev/vg00/lvol7
lvextend -m 1 /dev/vg00/lvol8
6) Modify your alternate boot path to point to the mirror copy of the
boot disk.
Note: Use the Hardware path for your new boot disk.
setboot -a 0/0/2/0.2.0
Example 1:
----------
In this example, you would need to increase the file system size of /var
by 10 MB, which actually needs
to be rounded up to 12 MB.
Increase /var
Follow these steps to increase the size limit of /var.
- Check the free space in volume group vg00:
# /sbin/vgdisplay /dev/vg00
- Bring the system down to single-user mode:
# /sbin/shutdown
- Check which file systems are mounted:
# /sbin/mount
- Unmount /var:
# /sbin/umount /var
This is required for the next step, because extendfs can only work on
unmounted volumes. If you get a
"device busy" error at this point, reboot the system and log on in
single-user mode before continuing.
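- Extend the logical volume first; a sketch, assuming /var lives on
lvol7 as in the extendfs command below (the new total size in MB is
illustrative):
# /sbin/lvextend -L 2060 /dev/vg00/lvol7
- Extend the file system: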
# /sbin/extendfs /dev/vg00/rlvol7
- Mount /var:
# /sbin/mount /var
Example 2:
----------
# umount /dev/vg00/lvol1
# lvextend -L larger_size /dev/vg00/lvol1
# extendfs -F hfs /dev/vg00/rlvol1     -- for operations like mkfs
or extendfs, you should use the raw device interface.
# mount /dev/vg00/lvol1 mount_directory
Example 3:
----------
>
> Date: 12/14/99
> Document description: Extending /var, /usr, /tmp without Online JFS
> Document id: KBRC00000204
>
>
> You may provide feedback on this document
>
>
> Extending /var, /usr, /tmp without Online JFS DocId: KBRC00000204
Updated:
> 12/14/99 1:14:29 PM
>
> PROBLEM
> Since /var, /usr, /tmp (and sometimes /opt) are always in use by the
> operating system, they cannot be unmounted with the umount command. In
order
> to extend these filesystems, the system must be in single user mode.
>
> RESOLUTION
> This example will show how to extend /usr to 400MB without Online JFS
>
>
> 1.. Backup the filesystem before extending
>
>
> 2.. Display disk information on the logical volume
>
> lvdisplay -v /dev/vg00/lvol4 | more
>
>
> a.. Make sure there are enough free PEs to increase this filesystem.
> b.. Make sure that allocation is NOT strict/contiguous.
>
>
> 3.. Reboot the machine
>
> shutdown -r now
>
>
> 4.. When prompted, press "ESC" to interrupt the boot.
>
>
> 5.. Boot from the primary device and invoke ISL interaction.
>
> bo pri isl
>
> NOTE: If prompted to interact with ISL, respond "y"
>
>
> 6.. Boot into single user mode
>
> hpux -is
>
> NOTE:Nothing will be mounted.
>
>
> 7.. Extend the logical volume that holds the filesystem.
>
> /sbin/lvextend -L 400 /dev/vg00/lvol4
>
>
> 8.. Extend the file system.
>
> /sbin/extendfs -F hfs /dev/vg00/rlvol4
>
> NOTE: The use of the character device.
>
>
> 9.. Ensure the filesystem now reports to be the new size
>
> bdf
>
>
> 10.. Reboot the system to its normal running state.
>
> shutdown -r now
>
>
>
The only thing is that you have to have contiguous lvols to do that. The
best way is to do an Ignite make_tape_recovery -i for vg00 and then
resize it when you recreate it. If you have vg00 on a separate disk then
it is real easy, the backup can run in the background, and the restore
interactive will take about 2.5 hours for a 9GB root disk, you can make
the lvols any size you want and it also puts it back in place in order
so you save space.
Example 4:
----------
The right way to extend a file system with "OnLine JFS" is using the
command "fsadm".
For example, if you want to extend the fs /mk2/toto in
/dev/vgmk2/lvtoto from 50 MB to 60 MB, you must first extend the
logical volume:
# lvextend -L 60 /dev/vgmk2/lvtoto
Now use fsadm (this supposes you have vxfs; with hfs it is not
possible to increase on-line, or at least I don't know how).
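A sketch of the fsadm step, assuming OnLine JFS with a vxfs file
system (the new size is given in 512-byte sectors; 60 MB = 122880):
# fsadm -F vxfs -b 122880 /mk2/toto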
Note 4:
-------
commands are:
swagentd -r
swinstall -x mount_all_filesystems=false -x enforce_dependencies=true -s
hpdepot.ao.nl.abnamro.com:/beheer/depot/OnlineJFS_License OnlineJFS
swagentd -k
nfile:
------
Acceptable Values:
Minimum: 14
Maximum: Memory limited
Default: ((16*(Nproc+16+MaxUsers))/10)+32+2*(Npty+Nstrpty)
Description
nfile defines the maximum number of files that can be open at any one
time, system-wide.
It is the number of slots in the file descriptor table. Be generous
with this number because the required memory is minimal, and not
having enough slots restricts system processing capacity.
Every process uses at least three file descriptors (standard input,
standard output, and standard error).
Every pipe uses two file descriptors (one per side). Stream pipes also
use streams ptys, which are limited by nstrpty.
Entering Values:
Use the kcweb web interface or the kmtune command to view and change
values. kcweb is described
in the kcweb(1M) manpage and in the program's help topics. You can run
kcweb from the command line
or from the System Administration Manager (SAM); see sam(1M). You run
kmtune from the command line;
see kmtune(1M) for details.
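For example, to query the current value of nfile and then set a new
one with kmtune (the value is illustrative; a static parameter takes
effect only after a kernel rebuild and reboot):
# kmtune -q nfile
# kmtune -s nfile=20000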
Accounting
acctresume
Resume accounting when free space on the file system where accounting
log files reside rises above acctresume plus minfree percent of total
usable file system size. Manpage: acctsuspend(5).
Accounting
acctsuspend
Suspend accounting when free space on the file system where accounting
log files reside drops below acctsuspend plus minfree percent of total
usable file system size. Manpage: acctsuspend(5).
Asynchronous I/O
aio_listio_max
Maximum number of POSIX asynchronous I/O operations allowed in a single
lio_listio() call. Manpage: aio_listio_max(5).
Asynchronous I/O
aio_max_ops
System-wide maximum number of POSIX asynchronous I/O operations allowed
at one time. Manpage: aio_max_ops(5).
Asynchronous I/O
aio_physmem_pct
Maximum percentage of total system memory that can be locked for use in
POSIX asynchronous I/O operations. Manpage: aio_physmem_pct(5).
Asynchronous I/O
aio_prio_delta_max
Maximum priority offset (slowdown factor) allowed in a POSIX
asynchronous I/O control block (aiocb). Manpage: aio_prio_delta_max(5).
Memory Paging
allocate_fs_swapmap
Enable or disable preallocation of file system swap space when swapon()
is called as opposed to allocating swap space when malloc() is called.
Enabling allocation reduces risk of insufficient swap space and is used
primarily where high availability is important. Manpage:
allocate_fs_swapmap(5).
Spinlock Pool
bufcache_hash_locks
Buffer-cache spinlock pool. NO MANPAGE.
Spinlock Pool
chanq_hash_locks
Channel queue spinlock pool. Manpage: chanq_hash_locks(5).
IPC: Share
core_addshmem_read
Flag to include readable shared memory in a process core dump. Manpage:
core_addshmem_read(5).
IPC: Share
core_addshmem_write
Flag to include read/write shared memory in a process core dump.
Manpage: core_addshmem_write(5).
Miscellaneous: Links
create_fastlinks
Create fast symbolic links using a newer, more efficient format to
improve access speed by reducing disk block accesses during path name
look-up sequences. Manpage: create_fastlinks(5).
Spinlock Pool
dnlc_hash_locks
Number of locks for directory cache synchronization. NO MANPAGE.
Miscellaneous: Clock
dst
Enable/disable daylight savings time. Manpage: timezone(5).
Miscellaneous: IDS
enable_idds
Flag to enable the IDDS daemon, which gathers data for IDS/9000.
Manpage: enable_idds(5).
Miscellaneous: Memory
eqmemsize
Number of pages of memory to be reserved for equivalently mapped
memory, used mostly for DMA transfers. Manpage: eqmemsize(5).
ProcessMgmt: Process
executable_stack
Allows or denies program execution on the stack. Manpage:
executable_stack(5).
Spinlock Pool
ftable_hash_locks
File table spinlock pool. NO MANPAGE.
Spinlock Pool
hdlpreg_hash_locks
Set the size of the pregion spinlock pool. Manpage:
hdlpreg_hash_locks(5).
Spinlock Pool
io_ports_hash_locks I/O port spinlock pool. NO MANPAGE.
Miscellaneous: Queue
ksi_alloc_max
Maximum number of system-wide queued signals that can be allocated.
Manpage: ksi_alloc_max(5).
Miscellaneous: Queue
ksi_send_max
Maximum number of queued signals that a process can send and have
pending at one or more receivers. Manpage: ksi_send_max(5).
ProcessMgmt: Memory
maxdsiz
Maximum process data storage segment space that can be used for statics
and strings, as well as dynamic data space allocated by sbrk() and
malloc() (32-bit processes). Manpage: maxdsiz(5).
ProcessMgmt: Memory
maxdsiz_64bit
Maximum process data storage segment space that can be used for statics
and strings, as well as dynamic data space allocated by sbrk() and
malloc() (64-bit processes). Manpage: maxdsiz(5).
ProcessMgmt: Memory
maxrsessiz
Maximum size (in bytes) of the RSE stack for any user process on the
IPF platform. Manpage: maxrsessiz(5).
ProcessMgmt: Memory
maxrsessiz_64bit
Maximum size (in bytes) of the RSE stack for any user process on the
IPF platform. Manpage: maxrsessiz(5).
ProcessMgmt: Memory
maxssiz
Maximum dynamic storage segment (DSS) space used for stack space (32-
bit processes). Manpage: maxssiz(5).
ProcessMgmt: Memory
maxssiz_64bit
Maximum dynamic storage segment (DSS) space used for stack space (64-
bit processes). Manpage: maxssiz(5).
ProcessMgmt: Memory
maxtsiz
Maximum allowable process text segment size, used by unchanging
executable-code (32-bit processes). Manpage: maxtsiz(5).
ProcessMgmt: Memory
maxtsiz_64bit
Maximum allowable process text segment size, used by unchanging
executable-code (64-bit processes). Manpage: maxtsiz(5).
ProcessMgmt: Process
maxuprc
Maximum number of processes that any single user can have running at
the same time, including login shells, user interface processes, running
programs and child processes, I/O processes, etc. If a user is using
multiple, simultaneous logins under the same login name (user ID) as is
common in X Window, CDE, or Motif environments, all processes are
combined, even though they may belong to separate process groups.
Processes that detach from their parent process group, where that is
possible, are not counted after they detach (line printer spooler jobs,
certain specialized applications, etc.). Manpage: maxuprc(5).
Miscellaneous: Users
maxusers
Maximum number of users expected to be logged in on the system at one
time; used by other system parameters to allocate system resources.
Manpage: maxusers(5).
Accounting
max_acct_file_size
Maximum size of the accounting file. Manpage: max_acct_file_size(5).
Asynchronous I/O
max_async_ports
System-wide maximum number of ports to the asynchronous disk I/O driver
that processes can have open at any given time. Manpage:
max_async_ports(5).
Memory Paging
max_mem_window
Maximum number of group-private 32-bit shared memory windows. Manpage:
max_mem_window(5).
ProcessMgmt: Threads
max_thread_proc
Maximum number of threads that any single process can create and have
running at the same time. Manpage: max_thread_proc(5).
IPC: Message
mesg
Enable or disable IPC messages at system boot time. Manpage: mesg(5).
IPC: Message
msgmap
Size of free-space resource map for allocating shared memory space for
messages. Manpage: msgmap(5).
IPC: Message
msgmax
System-wide maximum size (in bytes) for individual messages. Manpage:
msgmax(5).
IPC: Message
msgmnb
Maximum combined size (in bytes) of all messages that can be queued
simultaneously in a message queue. Manpage: msgmnb(5).
IPC: Message
msgmni
Maximum number of message queues allowed on the system at any given
time. Manpage: msgmni(5).
IPC: Message
msgseg
Maximum number of message segments that can exist on the system.
Manpage: msgseg(5).
IPC: Message
msgssz
Message segment size in bytes. Manpage: msgssz(5).
IPC: Message
msgtql
Maximum number of messages that can exist on the system at any given
time. Manpage: msgtql(5).
Miscellaneous: CD
ncdnode
Maximum number of entries in the vnode table and therefore the maximum
number of open CD-ROM file system nodes that can be in memory. Manpage:
ncdnode(5).
Miscellaneous: Terminal
nclist
Maximum number of cblocks available for data transfers through tty and
pty devices. Manpage: nclist(5).
ProcessMgmt: Threads
nkthread
Maximum number of kernel threads allowed on the system at the same
time. Manpage: nkthread(5).
ProcessMgmt: Process
nproc
Defines the maximum number of processes that can be running
simultaneously on the entire system, including remote execution
processes initiated by other systems via remsh or other networking
commands. Manpage: nproc(5).
Miscellaneous: Terminal
npty
Maximum number of pseudo-tty entries allowed on the system at any one
time. Manpage: npty(5).
Streams
NSTREVENT
Maximum number of outstanding streams bufcalls that are allowed to
exist at any given time on the system. This number should be equal to or
greater than the maximum bufcalls that can be generated by the combined
total modules pushed onto any given stream, and serves to limit run-away
bufcalls. Manpage: nstrevent(5).
Miscellaneous: Terminal
nstrpty
System-wide maximum number of streams-based pseudo-ttys that are
allowed on the system. Manpage: nstrpty(5).
Streams
nstrpty
System-wide maximum number of streams-based pseudo-ttys that are
allowed on the system. Manpage: nstrpty(5).
Streams
NSTRPUSH
Maximum number of streams modules that are allowed to exist in any
single stream at any one time on the system. This provides a mechanism
for preventing a software defect from attempting to push too many
modules onto a stream, but it is not intended as adequate protection
against malicious use of streams. Manpage: nstrpush(5).
Streams
NSTRSCHED
Maximum number of streams scheduler daemons that are allowed to run at
any given time on the system. This value is related to the number of
processors installed in the system. Manpage: nstrsched(5).
Miscellaneous: Terminal
nstrtel
Number of telnet session device files that are available on the system.
Manpage: nstrtel(5).
Memory Paging
nswapdev
Maximum number of devices, system-wide, that can be used for device
swap. Set to match actual system configuration. Manpage: nswapdev(5).
Memory Paging
nswapfs
Maximum number of mounted file systems, system-wide, that can be used
for file system swap. Set to match actual system configuration. Manpage:
nswapfs(5).
Miscellaneous: Memory
nsysmap
Number of entries in the kernel dynamic memory virtual address space
resource map (32-bit processes). Manpage: nsysmap(5).
Miscellaneous: Memory
nsysmap64
Number of entries in the kernel dynamic memory virtual address space
resource map (64-bit processes). Manpage: nsysmap(5).
ProcessMgmt: Memory
pa_maxssiz_64bit
Maximum size (in bytes) of the stack for a user process running under
the PA-RISC emulator on IPF. Manpage: pa_maxssiz(5).
Spinlock Pool
pfdat_hash_locks
Pfdat spinlock pool. Manpage: pfdat_hash_locks(5).
Spinlock Pool
region_hash_locks
Process-region spinlock pool. Manpage: region_hash_locks(5).
Memory Paging
remote_nfs_swap
Enable or disable swap to mounted remote NFS file system. Used on
cluster clients for swapping to NFS-mounted server file systems.
Manpage: remote_nfs_swap(5).
Miscellaneous: Schedule
rtsched_numpri
Number of distinct real-time interrupt scheduling priority levels
available on the system. Manpage: rtsched_numpri(5).
Miscellaneous: Terminal
scroll_lines
Defines the number of lines that can be scrolled on the internal
terminal emulator (ITE) system console. Manpage: scroll_lines(5).
ProcessMgmt: Process
secure_sid_scripts
Controls whether setuid and setgid bits on scripts are honored.
Manpage: secure_sid_scripts(5).
IPC: Semaphore
sema
Enable or disable IPC semaphores at system boot time. Manpage: sema(5).
IPC: Semaphore
semaem
Maximum value by which a semaphore can be changed in a semaphore "undo"
operation. Manpage: semaem(5).
IPC: Semaphore
semmni
Maximum number of sets of IPC semaphores allowed on the system at any
one time. Manpage: semmni(5).
IPC: Semaphore
semmns
Maximum number of individual IPC semaphores available to system users,
system-wide. Manpage: semmns(5).
IPC: Semaphore
semmnu
Maximum number of processes that can have undo operations pending on
any given IPC semaphore on the system. Manpage: semmnu(5).
IPC: Semaphore
semmsl
Maximum number of individual System V IPC semaphores per semaphore
identifier. Manpage: semmsl(5).
IPC: Semaphore
semume
Maximum number of IPC semaphores that a given process can have undo
operations pending on. Manpage: semume(5).
IPC: Semaphore
semvmx
Maximum value any given IPC semaphore is allowed to reach (prevents
undetected overflow conditions). Manpage: semvmx(5).
Miscellaneous: Web
sendfile_max
The amount of buffer cache that can be used by the sendfile() system
call on HP-UX web servers. Manpage: sendfile_max(5).
IPC: Share
shmem
Enable or disable shared memory at system boot time. Manpage: shmem(5).
IPC: Share
shmmax
Maximum allowable shared memory segment size (in bytes). Manpage:
shmmax(5).
IPC: Share
shmmni
Maximum number of shared memory segments allowed on the system at any
given time. Manpage: shmmni(5).
IPC: Share
shmseg
Maximum number of shared memory segments that can be attached
simultaneously to any given process. Manpage: shmseg(5).
Streams
STRCTLSZ
Maximum number of control bytes allowed in the control portion of any
streams message on the system. Manpage: strctlsz(5).
Streams
streampipes
Force all pipes to be streams-based. Manpage: streampipes(5).
Streams
STRMSGSZ
Maximum number of bytes that can be placed in the data portion of any
streams message on the system. Manpage: strmsgsz(5).
Memory Paging
swapmem_on
Enable or disable pseudo-swap allocation. This allows systems with
large installed memory to allocate memory space as well as disk swap
space for virtual memory use instead of restricting availability to
defined disk swap area. Manpage: swapmem_on(5).
Memory Paging
swchunk
Amount of space allocated for each chunk of swap area. Chunks are
allocated from device to device by the kernel. Changing this parameter
requires extensive knowledge of system internals. Without such
knowledge, do not change this parameter from the normal default value.
Manpage: swchunk(5).
Spinlock Pool
sysv_hash_locks
System V interprocess communication spinlock pool. Manpage:
sysv_hash_locks(5).
Miscellaneous: Network
tcphashsz
TCP hash table size, in bytes. Manpage: tcphashsz(5).
ProcessMgmt: CPU
timeslice
Maximum time a process can use the CPU until it is made available to
the next process having the same process execution priority. This
feature also prevents runaway processes from causing system lock-up.
Manpage: timeslice(5).
Miscellaneous: Clock
timezone
The offset between the local time zone and Coordinated Universal Time
(UTC), often called Greenwich Mean Time or GMT. Manpage: timezone(5).
Miscellaneous: Memory
unlockable_mem
Amount of system memory to be reserved for system overhead and virtual
memory management, that cannot be locked by user processes. Manpage:
unlockable_mem(5).
Spinlock Pool
vnode_cd_hash_locks
Vnode clean/dirty spinlock pool. NO MANPAGE.
Spinlock Pool
vnode_hash_locks
Vnode spinlock pool. NO MANPAGE.
Panic reboots:
Check /var/tombstones/ts99 and /etc/shutdownlog.
Bad disk:
Filesystem full:
Show directories on the local filesystem and how much space they are
taking up.
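A sketch of such a command (flags vary somewhat per platform):
# du -kx / | sort -rn | head -20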
NFS Server
/sbin/init.d/nfs.server stop
The processes should NOT be running:
# ps -ef|grep nfsd
# ps -ef|grep rpc.mountd
# ps -ef|grep rpc.lockd
# ps -ef|grep rpc.statd
/sbin/init.d/nfs.server start
# ps -ef|grep nfsd
root 3444 1 0 10:39:12 ? 0:00 /usr/sbin/nfsd 4
root 3451 3444 0 10:39:12 ? 0:00 /usr/sbin/nfsd 4
root 3449 3444 0 10:39:12 ? 0:00 /usr/sbin/nfsd 4
root 3445 3444 0 10:39:12 ? 0:00 /usr/sbin/nfsd 4
# ps -ef|grep rpc.mountd
root 3485 1 0 10:42:09 ? 0:00 rpc.mountd
# ps -ef|grep rpc.lockd
root 3459 1 0 10:39:12 ? 0:00 /usr/sbin/rpc.lockd
# ps -ef|grep rpc.statd
root 3453 1 0 10:39:12 ? 0:00 /usr/sbin/rpc.statd
# ps -ef|grep rpc.mountd
# rpc.mountd or /usr/sbin/rpc.mountd
# ps -ef|grep rpc.mountd
root 3485 1 0 10:42:09 ? 0:00 rpc.mountd
##
# WARNING: The rpc.mountd should now be started from a startup script.
# Please enable the mountd startup script to start rpc.mountd.
##
#rpc stream tcp nowait root /usr/sbin/rpc.rexd 100017 1 rpc.rexd
rpc dgram udp wait root /usr/lib/netsvc/rstat/rpc.rstatd 100001 2-4 rpc.rstatd
rpc dgram udp wait root /usr/lib/netsvc/rusers/rpc.rusersd 100002 1-2 rpc.rusersd
rpc dgram udp wait root /usr/etc/rpc.mountd 100005 1 rpc.mountd -e
rpc dgram udp wait root /usr/lib/netsvc/rwall/rpc.rwalld 100008 1 rpc.rwalld
#rpc dgram udp wait root /usr/sbin/rpc.rquotad 100011 1 rpc.rquotad
rpc dgram udp wait root /usr/lib/netsvc/spray/rpc.sprayd 100012 1 rpc.sprayd
NIC problems:
The lanadmin utility provides NIC statistics
The nettladmin utility provides packet trace information
If you've connected to a central UCS computer to use vi, first tell that
host about your communications software
(e.g., NCSA Telnet). At IUB, your software will typically emulate a VT-
100 terminal.
To find out what shell program you use, type:
echo $SHELL
Then if you use ksh, bash, or sh, type:
TERM=vt100; export TERM
You can automate this task by adding the appropriate command to your
default command shell's configuration file.
Using vi modes:
---------------
If you make a typing mistake, press ESC to return to edit mode and then
reposition the cursor at the error,
and press i to get back to insert mode.
The VI editor has two kinds of searches: string and character. For a
string search, the / and ? commands are used.
When you start these commands, the command just typed will be shown on
the bottom line, where you type the particular
string to look for. These two commands differ only in the direction
where the search takes place.
The / command searches forwards (downwards) in the file, while the ?
command searches backwards (upwards) in the file.
The n and N commands repeat the previous search command in the same or
opposite direction, respectively.
Some characters have special meanings to VI, so they must be preceded by
a backslash (\) to be included as part
of the search expression.
/text Search forward (down) for text (text can include spaces
and characters with special meanings.)
% Find matching ( ), { }, or [ ]
Some other:
-----------
:set number
If you tire of the line numbers, enter the following command to turn
them off:
:set nonumber
36. ulimit:
===========
/usr/bin/ulimit
Example 1: Limiting the stack size
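A minimal sketch (sh/ksh; on most implementations the value is in
kilobytes):
$ ulimit -s 8192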
ULIMIT - Sets the file size limit for the login. Units are disk blocks.
Default is zero (no limit).
Be sure to specify even numbers, as the ULIMIT variable accepts a number
of 512-byte blocks.
If you see a core file lying around, just type "file core" to get some
details about it. Example:
$ file core
core:ELF-64 core file - PA-RISC 2.0 from 'sqlplus' - received SIGABRT
Run the Unix process debugger to obtain more information about where and
why the process abended.
This information is normally requested by Oracle Support for in-depth
analysis of the problem. Some example:
Solaris:
$ gdb $ORACLE_HOME/bin/sqlplus core
bt # backtrace of all stack frames
quit
Sequent:
$ debug -c core $ORACLE_HOME/bin/sqlplus
debug> stack
debug> quit
AIX:
Purpose
Sets or reports user resource limits.
Syntax
ulimit [ -H ] [ -S ] [ -a ] [ -c ] [ -d ] [ -f ] [ -m ] [ -n ] [ -s ] [
-t ] [ Limit ]
Description
The ulimit command sets or reports user process resource limits, as
defined in the /etc/security/limits file.
This file contains these default limits:
fsize = 2097151
core = 2097151
cpu = -1
data = 262144
rss = 65536
stack = 65536
nofiles = 2000
These values are used as default settings when a new user is added to
the system. The values are set with the
mkuser command when the user is added to the system, or changed with the
chuser command.
Limits are categorized as either soft or hard. With the ulimit command,
you can change your soft limits,
up to the maximum set by the hard limits. You must have root user
authority to change resource hard limits.
Many systems do not contain one or more of these limits. The limit for a
specified resource is set when the
Limit parameter is specified. The value of the Limit parameter can be a
number in the unit specified with
each resource, or the value unlimited. To set the specific ulimit to
unlimited, use the word unlimited.
For more information about user and system resource limits, refer to the
getrlimit, setrlimit, or vlimit
subroutine in AIX 5L Version 5.2 Technical Reference: Base Operating
System and Extensions Volume 1.
You can check the current ulimit settings using the ulimit -a
command. At least the following three commands should be run, as the
user account that will launch Java:
ulimit -m unlimited
ulimit -d unlimited
ulimit -f unlimited
=====================================
37. RAM disks:
=====================================
37.1 AIX:
=========
Example:
--------
# mkramdisk SIZE
/dev/rramdiskxx
# mkfs -V jfs /dev/ramdiskxx
# mount -V jfs -o nointegrity /dev/ramdiskxx /whatever_mountpoint
mkramdisk Command:
------------------
Purpose
Creates a RAM disk using a portion of RAM that is accessed through
normal reads and writes.
Syntax
mkramdisk [ -u ] size[ M | G ]
Description
The mkramdisk command is shipped as part of bos.rte.filesystems, which
allows the user to create a RAM disk.
Upon successful execution of the mkramdisk command, a new RAM disk is
created, a new entry added to /dev,
the name of the new RAM disk is written to standard output, and the
command exits with a value of 0.
If the creation of the RAM disk fails, the command prints an
internalized error message, and the command
will exit with a nonzero value.
The names of the RAM disks are in the form of /dev/rramdiskx where x is
the logical RAM disk number (0 through 63).
The mkramdisk command also creates block special device entries (for
example, /dev/ramdisk5) although use
of the block device interface is discouraged because it adds overhead.
The device special files in /dev are owned
by root with a mode of 600. However, the mode, owner, and group ID can
be changed using normal system commands.
Note:
The size of a RAM disk cannot be changed after it is created.
The mkramdisk command is responsible for generating a major number,
loading the ram disk kernel extension,
configuring the kernel extension, creating a ram disk, and creating the
device special files in /dev.
Once the device special files are created, they can be used just like
any other device special files through
normal open, read, write, and close system calls.
RAM disks can be removed by using the rmramdisk command. RAM disks are
also removed when the machine is rebooted.
By default, RAM disk pages are pinned. Use the -u flag to create RAM
disk pages that are not pinned.
Flags
-u Specifies that the ram disk that is created will not be pinned. By
default, the ram disk will be pinned.
Parameters
size Indicates the amount of RAM (in 512 byte increments) to use for the
new RAM disk. For example, typing:
# mkramdisk 1
# mkramdisk 40000
Exit Status
The following exit values are returned:
0 Successful completion.
>0 An error occurred.
Examples:
To create a new ram disk using a default 512-byte block size, where
the size is 512 MB (1048576 blocks * 512 bytes), enter:
# mkramdisk 1048576
/dev/rramdisk0
# mkramdisk 500M
/dev/rramdisk0
The /dev/rramdisk0 ramdisk is created. Note that the ramdisk has the
same size as example 1 above.
# mkramdisk 2G
/dev/rramdisk0
# mkramdisk 40000
# ls -l /dev | grep ram
# mkfs -V jfs /dev/ramdiskx
# mkdir /ramdisk0
# mount -V jfs -o nointegrity /dev/ramdiskx /ramdiskx
37.2 Linux:
===========
Redhat:
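A sketch of the three commands referred to below (device and mount
point as used in the df example further down):
# mkdir /tmp/ramdisk0
# mke2fs /dev/ram0
# mount /dev/ram0 /tmp/ramdisk0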
Those three commands will make a directory for the ramdisk , format the
ramdisk (create a filesystem),
and mount the ramdisk to the directory "/tmp/ramdisk0". Now you can
treat that directory as a pretend partition!
Go ahead and use it like any other directory or as any other partition.
If the formatting of the ramdisk failed then you might have no support
for ramdisk compiled into the Kernel.
The Kernel configuration option for ramdisk is CONFIG_BLK_DEV_RAM .
The default size of the ramdisk is 4Mb=4096 blocks. You saw what ramdisk
size you got while you were running mke2fs.
mke2fs /dev/ram0 should have produced a message like this:
Running df -k /dev/ram0 tells you how much of that you can really use
(The filesystem takes also some space):
>df -k /dev/ram0
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/ram0 3963 13 3746 0% /tmp/ramdisk0
What are some catches? Well, when the computer reboots, it gets wiped.
Don't put any data there that isn't
copied somewhere else. If you make changes to that directory, and you
need to keep the changes, figure out
some way to back them up.
Okay, first the hard way. Add this line to your lilo.conf file:
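ramdisk_size=10000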
and it will make the default ramdisks 10 megs after you type the "lilo"
command and reboot the computer.
Here is an example of my /etc/lilo.conf file.
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
image=/boot/vmlinuz
label=linux
root=/dev/hda2
read-only
ramdisk_size=10000
If the ramdisk driver is compiled as a module, the size can instead be
set at module load time, via the modules configuration file or
directly with insmod:
options rd rd_size=10000
insmod rd rd_size=10000
37.3 Solaris:
=============
Note 1:
-------
Quick example:
# ramdiskadm -a mydisk 2m
/dev/ramdisk/mydisk
# ramdiskadm
Block Device Size Removable
/dev/ramdisk/miniroot 134217728 No
/dev/ramdisk/certfs 1048576 No
/dev/ramdisk/mydisk 2097152 Yes
NAME
ramdiskadm- administer ramdisk pseudo device
SYNOPSIS
/usr/sbin/ramdiskadm -a name size [g | m | k | b]
/usr/sbin/ramdiskadm -d name
/usr/sbin/ramdiskadm
DESCRIPTION
The ramdiskadm command administers ramdisk(7D), the ramdisk driver. Use
ramdiskadm to create a new named ramdisk device, delete an existing
named ramdisk, or list information about existing ramdisks.
OPTIONS
The following options are supported:
-a name size
Create a ramdisk named name of size size and its corresponding block and
character device nodes.
The size can be a decimal number, or, when prefixed with 0x, a
hexadecimal number, and can specify the size
in bytes (no suffix), 512-byte blocks (suffix b), kilobytes (suffix k),
megabytes (suffix m)
or gigabytes (suffix g). The size of the ramdisk actually created might
be larger than that specified,
depending on the hardware implementation.
-d name
Delete an existing ramdisk of the name name. This command succeeds only
when the named ramdisk is not open.
The associated memory is freed and the device nodes are removed.
Without options, ramdiskadm lists any existing ramdisks, their sizes (in
decimal), and whether they can be removed
by ramdiskadm (see the description of the -d option, above).
Note 2:
-------
thread:
Is there anyone who could tell me how to make a ram disk in Solaris 8?
I have a Sun Sparc Box running Solaris 8, and I want to use some of
its memory to mount a new file-system.
Thanks in advance,
The solution:
mkdir /ramdisk
mount -F tmpfs -o size=500m swap /ramdisk
However this is not a true ramdisk (it really uses VM, not RAM, and the
size
is an upper limit, not a reservation) This is what Solaris provides.
======================
38. Software Packages:
======================
Software packages are grouped into software clusters, which are
logical collections of software packages. Some clusters contain just 1
or 2 packages, while others may contain many more.
Solaris provides the tools for adding and removing software from a
system.
You can use pkgadd command to install packages, and the pkgrm command to
remove packages.
There are also GUI tools to install and remove packages.
Package files are delivered in package format and are unusable as they
are delivered. The pkgadd command interprets the software package's
control files, and then uncompresses and installs the product files onto
the system's local disk.
Although the pkgadd and pkgrm commands do not log their output to a
standard location, they do keep track of the product
that is installed or removed. The pkgadd and pkgrm commands store
information about a package that has been installed
or removed in a software product database.
By updating this database, the pkgadd and pkgrm commands keep a record
of all software products installed on the system.
-- pkgadd:
-- -------
pkgadd [-nv] [-a admin] [-d device] [[-M]-R root_path] [-r response] [-V
fs_file] [pkginst...]
pkgadd -s spool [-d device] [pkginst...]
-a admin
Define an installation administration file, admin, to
be used in place of the default administration file.
The token none overrides the use of any admin file,
and thus forces interaction with the user. Unless a
full path name is given, pkgadd first looks in the
current working directory for the administration file.
If the specified administration file is not in the
current working directory, pkgadd looks in the
/var/sadm/install/admin directory for the administra-
tion file.
-d device
Install or copy a package from device. device can be a
full path name to a directory or the identifiers for
tape, floppy disk, or removable disk (for example,
/var/tmp or /floppy/floppy_name ). It can also be a
device alias (for example, /floppy/floppy0).
Or just
-a admin-file
(Optional) Specifies an administration file that the pkgadd command
should consult during the installation.
-d device-name
Specifies the absolute path to the software packages. device-name can
be the path to a device, a directory, or a spool directory.
If you do not specify the path where the package resides, the pkgadd
command checks the default spool directory (/var/spool/pkg).
If the package is not there, the package installation fails.
pkgid
(Optional) Is the name of one or more packages (separated by spaces) to
be installed.
If omitted, the pkgadd command installs all available packages.
# pkgchk -v pkgid
Example 1:
following example shows how to install the SUNWpl5u package from a mounted
Solaris 9 CD.
The example also shows how to verify that the package files were
installed properly.
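A sketch (the CD path varies per release):
# pkgadd -d /cdrom/cdrom0/s0/Solaris_9/Product SUNWpl5u
# pkgchk -v SUNWpl5u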
Example 2:
# pkgadd -d /cdrom/cdrom0/s0/Solaris_2.6
Example 3:
# pkgadd -d /tmp/signed_pppd
The following packages are available:
1 SUNWpppd Solaris PPP Device Drivers
(sparc) 11.10.0,REV=2003.05.08.12.24
Example 4:
# pkgadd -d http://install/signed-video.pkg
## Downloading...
..............25%..............50%..............75%..............100%
## Download Complete
Example 5:
# pkgadd -d . DISsci
The command will create a new directory structure in /opt/DISsci.
Example 6:
Spooling the packages to a spool directory
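A sketch (paths illustrative); the -s option copies the package to a
spool directory instead of installing it:
# pkgadd -d /cdrom/cdrom0/s0/Solaris_9/Product -s /var/spool/pkg SUNWpl5u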
Example 7:
The following example shows how install software packages from a remote
system. In this example, assume that the remote system
named package-server has software packages in the /latest-packages
directory. The mount command mounts the packages locally on /mnt,
and the pkgadd command installs the SUNWpl5u package.
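A sketch based on the names above:
# mount -F nfs package-server:/latest-packages /mnt
# pkgadd -d /mnt SUNWpl5u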
- pkgrm
- pkgchk
- pkginfo
- pkgask
- pkgparam
If prior to installation, you know that the package you want to install
is an interactive package, and you want to store
your answers to prevent user interaction during future installations of
this package, you can use the pkgask command to save your response.
Once you have stored your responses to the questions asked by the
request script, you can use the pkgadd -r command
to install the package without user interaction.
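A sketch (response file and package name illustrative):
# pkgask -r /var/tmp/response.txt -d /var/spool/pkg SUNWabc
# pkgadd -d /var/spool/pkg -r /var/tmp/response.txt SUNWabc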
-- pkginfo
-- -------
# pkginfo
system SUNWaccr System Accounting, (Root)
system SUNWaccu System Accounting, (Usr)
system SUNWadmap System administration applications
system SUNWadmc System administration core libraries
.
.
etc..
# pkginfo -l SUNWcar
PKGINST: SUNWcar
NAME: Core Architecture, (Root)
CATEGORY: system
ARCH: sparc.sun4u
VERSION: 11.9.0,REV=2001.10.16.17.05
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: core software for a specific hardware platform group
PSTAMP: crash20011016171723
INSTDATE: Nov 02 2001 08:53
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 111 installed pathnames
36 shared pathnames
40 directories
56 executables
17626 blocks used (approx)
For the spool directory, you may use the token spool.
-- pkgrm:
-- ------
Always use the pkgrm command to remove installed packages. Do not use
the rm command, which will corrupt
the system's record-keeping of installed packages.
Examples:
# pkgrm SUNWctu
The Solaris Product Registry is a GUI tool that enables you to install
and uninstall software packages.
To startup the Solaris Product Registry to view, install or uninstall
software, use the command
/usr/bin/prodreg
The Solaris Management Console provides a new Patches Tool for managing
patches. You can only use the Patches Tool
to add patches to a system running the Solaris 9 or later release.
Installing Patches:
-------------------
#patchadd
#patchrm
Examples:
Example 1:
Show the patches on your system:
# showrev -p shows all patches applied to a system
# patchadd -p same as above
# pkgparam <pkgid> PATCHLIST shows all patches applied to the package
identified by <pkgid>
Example 2:
# patchadd /var/spool/patch/104945-02
# patchadd -R /export/root/client1 /var/spool/patch/104945-02
# patchadd -M /var/spool/patch 104945-02 104946-02 102345-02
# patchadd -M /var/spool/patch patchlist
# patchadd -M /var/spool/patch -R /export/root/client1 -B
/export/backoutrepository 104945-02 104946-02 102345-02
- Fileset:
A fileset is the smallest individually installable unit. It's a
collection of files that provide a specific
function. For example, the "bos.net.tcp.client" is a fileset in the
"bos.net" package.
- Package:
A package contains a group of filesets with a common function. This is
a single installable image, for example "bos.net".
- LPP:
This is a complete software product collection, including all the
packages and filesets required.
LPP's are separately orderable products that will run on the AIX
operating system, for example
BOS, DB2, CICS, ADSM and so on.
P521:/apps $lppchk -l
lppchk: No link found from /etc/security/mkuser.sys to
/usr/lib/security/mkuser.sys.
lppchk: No link found from /etc/security/mkuser.default to
/usr/lib/security/mkuser.default.
smitty update_all
Install a fix with instfix:
---------------------------
P521:/apps $instfix
Usage: instfix [-T [-M platform]] [-s string] [ -k keyword | -f file ]
[-d device] [-S] [-p | [-i [-c] [-q] [-t type] [-v] [-F]]] [-a]
Another option is to use the instfix command. Any fix can have a single
fileset or multiple filesets that
comprise that fix. Fix information is organized in the Table of Contents
(TOC) on the installation media.
After a fix is installed, fix information is kept on the system in a fix
database.
Examples:
# smitty instfix
[Entry Fields]
* INPUT device / directory for software []
+
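Some direct instfix examples (the APAR number IY12345 is hypothetical):
# instfix -k IY12345 -d /dev/cd0     install the fix for APAR IY12345 from CD
# instfix -ik IY12345                check whether APAR IY12345 is installed
# instfix -i | grep ML               list the known maintenance levels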
lslpp Command
Purpose
Syntax
lslpp -L -c [ -v]
lslpp -S [A|O]
lslpp -e
Description
To display information about installed filesets, you can use the lslpp
command.
If you need to check whether certain filesets have been installed, use
the lslpp command
as in the following example:
lslpp options:
-l: Displays the name, level, state and description of the fileset.
-h: Displays the installation and update history for the fileset.
-p: Displays requisite information for the fileset.
-d: Displays dependent information for the fileset.
-f: Displays the filenames added to the system during installation of
the fileset.
-w: Lists the fileset that owns a file or files.
Examples:
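The listing below would be produced by an invocation such as (fileset name inferred from the output shown):
# lslpp -l bos.adt.include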
------------------------------------------------------------------------
Path: /usr/lib/objrepos
bos.adt.include 5.2.0.95 COMMITTED Base Application
Development
Include Files
# lslpp -w "*vmstat*"
File Fileset Type
------------------------------------------------------------------------
/usr/sbin/lvmstat bos.rte.lvm File
/usr/share/man/info/EN_US/a_doc_lib/cmds/aixcmds6/vmstat.htm
infocenter.man.EN_US.commands File
/usr/share/man/info/EN_US/a_doc_lib/cmds/aixcmds3/lvmstat.htm
infocenter.man.EN_US.commands File
/usr/bin/vmstat bos.acct File
/usr/bin/vmstat64 bos.acct File
/usr/es/sbin/cluster/OEM/VxVM40/cllsvxvmstat
cluster.es.server.utils File
Similarly, to find out which fileset contains the make command:
# lslpp -w "*make*"
Removing a fix:
---------------
On AIX you can use either the installp -r command or the "smitty reject" fastpath.
Smitty fastpaths:
-----------------
From here you can commit or reject installed software. You can also copy
the filesets from the installation media
to a directory on disk. The default directory for doing this is
/usr/sys/inst.images
-- To commit software:
# smitty install_commit
-- To reject software:
# smitty install_reject
installp Command
Purpose
Installs available software products in a compatible installation
package.
Syntax
To Install with Apply Only or with Apply and Commit
installp [ -a | -ac [ -N ] ] [ -eLogFile ] [ -V Number ] [ -dDevice ] [
-b ] [ -S ] [ -B ] [ -D ] [ -I ] [ -p ]
[ -Q ] [ -q ] [ -v ] [ -X ] [ -F | -g ] [ -O { [ r ] [ s ]
[ u ] } ] [ -tSaveDirectory ] [ -w ] [ -zBlockSize ]
{ FilesetName [ Level ]... | -f ListFile | all }
When updates are committed with the -c flag, the user is making a
commitment to that version of the software product,
and the saved files from all previous versions of the software product
are removed from the system, thereby making
it impossible to return to a previous version of the software product.
Software can be committed at the time of installation by using the -ac
flags. Note that committing already
applied updates does not change the currently active version of a
software product.
It merely removes saved files for previous versions of the software
product.
Examples:
# installp -L -d /dev/cd0
Lists the table of contents (the installable products) on the install/update media.
# installp -q -d/dev/rmt1.1 -l > /tmp/toc.list
Lists the table of contents for the install/update media and saves it
into a file named /tmp/toc.list.
Lists the lpps that have been applied but not yet committed or rejected:
# installp -s
[P521]root@ol116u106:installp -s
0503-459 installp: No filesets were found in the Software
Vital Product Database in the APPLIED state.
With the geninstall command, you can list and install packages from
media that contains installation images
packaged in any of the listed formats. The geninstall and gencopy
commands recognize the non-installp
installation formats and either call the appropriate installers or copy
the images, respectively.
Beginning in AIX 5L, you can not only install installp formatted
packages, but also RPM and
Install Shield Multi-Platform (ISMP) formatted packages. Use the Web-
based System Manager,
SMIT, or the geninstall command to install and uninstall these types of
packages.
The geninstall command is designed to detect the format type of a
specified package and run the
appropriate install command.
Syntax
geninstall -d Media [ -I installpFlags ] [ -E | -T ] [ -t ResponseFileLocation ]
[ -e LogFile ] [ -p ] [ -F ] [ -Y ] [ -Z ] [ -D ] { -f File | Install_List | all }
Description
Accepts all current installp flags and passes them on to installp. Some
flags (for example, -L) are overloaded
to mean list all products on the media. Flags that don't make sense for
ISMP packaged products are ignored.
This allows programs (like NIM) to continue to always send in installp
flags to geninstall, but only the flags
that make sense are used.
Note:
Refer to the README.ISMP file in the /usr/lpp/bos directory to learn
more about ISMP-packaged installations
and using response files.
Examples:
I: installp format
R: RPM format
J: ISMP format
For example, to install the cdrecord RPM package and the bos.games
installp package, type the following:
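# geninstall -d /dev/cd0 R:cdrecord I:bos.games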
There is a tool named fixdist you can use to download fixes from IBM.
Maintenance levels:
===================
Notes:
Note 1:
-------
04: V5.2 with the 5200-04 Recommended Maintenance Package APAR IY56722
plus APAR IY60347
Use this package to update to 5200-05 (ML 05) an AIX 5.2.0 system whose
current ML is 5200-00 (i.e. base level) or higher.
(Note: ML 05 notably brings the fileset bos.mp.5.2.0.54)
This package, 5200-05, updates AIX 5.2 from base level (no maintenance
level) to maintenance level 05 (5200-05).
This package is a recommended maintenance package for AIX 5.2. IBM
recommends that customers install the latest
available maintenance package for their AIX release.
General description
This package contains code corrections for the AIX operating system and
many related subsystems.
Unless otherwise stated, this package is released
for all languages. For additional information, refer to the Package
information
Note: IBM recommends that you create a separate file system for
/usr/sys/inst.images to prevent the expansion
of the /usr file system.
More information
# inutoc /usr/sys/inst.images
# installp -acgXd /usr/sys/inst.images bos.rte.install
# smit update_all
Reboot your system. This maintenance package replaces critical operating
system code.
Installation Tips
* The latest AIX 5.2 installation hints and tips are available
from the eServer Subscription Services web site at:
https://techsupport.services.ibm.com/server/pseries.subscriptionSvcs
Installation
smit update_by_fix
smit update_all
Always run the inutoc command to ensure the installation subsystem will
recognize the new fix packages
you download. This command creates a new .toc file for the fix package.
Run the inutoc command in
the same directory where you downloaded the package filesets. For
example, if you downloaded the
filesets to /usr/sys/inst.images, run the following command:
# inutoc /usr/sys/inst.images
# smit update_by_fix
To install all updates from this package that apply to the installed
filesets on your system,
use the following command:
# smit update_all
It is highly recommended that you apply all updates from this package.
Reboot the system. A reboot is required for this update to take effect.
--
# inutoc /software/ML07
# smitty update_all
Create an LV and file system for the install images if needed, then mount it:
# mount /usr/sys/inst.images
Note 5: About the inutoc command:
---------------------------------
inutoc Command
Purpose
Creates a .toc file for directories that have backup format file install
images.
This command is used by the installp command and the install scripts.
Syntax
inutoc [ Directory ]
Description
The inutoc command creates the .toc file in Directory. If a .toc file
already exists, it is recreated with new information.
The default installation image Directory is /usr/sys/inst.images. The
inutoc command adds table of contents entries
in the .toc file for every installation image in Directory.
The installp command and the bffcreate command call this command
automatically upon the creation or use
of an installation image in a directory without a .toc file.
Examples
To create the .toc file for the /usr/sys/inst.images directory, enter:
# inutoc
bffcreate Command
Purpose
Creates installation image files in backup format.
Syntax
bffcreate [ -q ] [ -S ] [ -U ] [ -v ] [ -X ] [ -d Device ] [ -t
SaveDir ] [ -w Directory ]
[ -M Platform ] { [ -l | -L ] | -c [ -s LogFile ] | Package
[Level ] ... | -f ListFile | all }
Description
The bffcreate command creates an installation image file in backup file
format (bff) to support
software installation operations.
The installation image file name has the form Package.Level.I. The
Package is the name of the software package,
as described for the Package Name parameter. Level has the format of
v.r.m.f, where v = version, r = release,
m = modification, f = fix. The I extension means that the image is an
installation image rather than an update image.
Update image files containing an AIX 3.1 formatted update have a service
number extension following the level.
The Servicenum parameter can be up to 4 digits in length. One example is
xlccmp.3.1.5.0.1234.
Update image files containing an AIX 3.2 formatted update have a ptf
extension following the level.
One example is bosnet.3.2.0.0.U412345.
AIX Version 4 and later update image file names begin with the fileset
name, not the PackageName.
They also have U extensions to indicate that they are indeed update
image files, not installation images.
One example of an update image file is bos.rte.install.4.3.2.0.U.
The all keyword indicates that installation image files are created for
every installable software package on the device.
You can extract a single update image with the AIX Version 4 and later
bffcreate command.
Then you must specify the fileset name and the v.r.m.f. parameter. As in
example 3 in the Examples section,
the PackageName parameter must be the entire fileset name,
bos.net.tcp.client, not just bos.net.
Examples
To create an installation image file from the bos.net software package
on the tape in the /dev/rmt0 tape drive
and use /var/tmp as the working directory, type:
# bffcreate -d /dev/rmt0.1 -w /var/tmp bos.net
To list all the Neutral software packages on the CD-ROM media, type:
# bffcreate -d /dev/cd0 -MN -l
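Example 3, extracting a single update image as described above (the level shown is illustrative):
# bffcreate -d /dev/rmt0.1 bos.net.tcp.client 4.2.1.0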
Note 1:
-------
# rpm -q kernel
kernel-2.4.7-10
# rpm -q glibc
glibc-2.2.4-19.3
# rpm -q gcc
gcc-2.96-98
Show everything:
# rpm -qa
Note:
the -U switch really means starting an Upgrade, but if the package is not
yet installed, an installation will take place.
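For example:
# rpm -Uvh foobar-1.0-1.i386.rpm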
Note 2:
-------
What is RPM?
Red Hat, Inc. encourages other distribution vendors to take the time to
look at RPM and use it
for their own distributions. RPM is quite flexible and easy to use,
though it provides the base
for a very extensive system. It is also completely open and available,
though we would appreciate
bug reports and fixes. Permission is granted to use and distribute RPM
royalty free under the GPL.
Acquiring RPM
The best way to get RPM is to install Red Hat Linux. If you don't want
to do that, you can still get
and use RPM. It can be acquired from ftp.redhat.com.
RPM Requirements
RPM itself should build on basically any Unix-like system. It has been
built and used on Tru64 Unix,
AIX, Solaris, SunOS, and basically all flavors of Linux.
To build RPMs from source, you also need everything normally required to
build a package, like gcc, make, etc.
# rpm -i foobar-1.0-1.i386.rpm      (install a package)
# rpm -e foobar                     (erase/uninstall a package)
One of the more complex but highly useful commands allows you to install
packages via FTP.
If you are connected to the net and want to install a new package, all
you need to do is specify
the file with a valid URL, like so:
# rpm -i ftp://ftp.redhat.com/pub/redhat/rh-2.0-beta/RPMS/foobar-1.0-1.i386.rpm
Please note that RPM will now query and/or install via FTP.
While these are simple commands, rpm can be used in a multitude of ways.
To see which options are available
in your version of RPM, type:
# rpm --help
You can find more details on what those options do in the RPM man page,
found by typing:
# man rpm
RPM is a very useful tool and, as you can see, has several options. The
best way to make sense of them
is to look at some examples. I covered simple install/uninstall above,
so here are some more examples:
Let's say you delete some files by accident, but you aren't sure what
you deleted. If you want to verify
your entire system and see what might be missing, you would do:
# rpm -Va
Let's say you run across a file that you don't recognize. To find out
which package owns it, you would do (file path as in the classic RPM HOWTO
example):
# rpm -qf /usr/X11R6/bin/xjewel
xjewel-1.6-1
You find a new koules RPM, but you don't know what it is. To find out
some information on it, do (filename as in the RPM HOWTO example):
# rpm -qpi koules-1.2-2.i386.rpm
Now you want to see what files the koules RPM installs. You would do:
# rpm -qpl koules-1.2-2.i386.rpm
/usr/doc/koules
/usr/doc/koules/ANNOUNCE
/usr/doc/koules/BUGS
/usr/doc/koules/COMPILE.OS2
/usr/doc/koules/COPYING
/usr/doc/koules/Card
/usr/doc/koules/ChangeLog
/usr/doc/koules/INSTALLATION
/usr/doc/koules/Icon.xpm
/usr/doc/koules/Icon2.xpm
/usr/doc/koules/Koules.FAQ
/usr/doc/koules/Koules.xpm
/usr/doc/koules/README
/usr/doc/koules/TODO
/usr/games/koules
/usr/games/koules.svga
/usr/games/koules.tcl
/usr/man/man6/koules.svga.6
Note 3:
-------
NAME
rpm - RPM Package Manager
SYNOPSIS
QUERYING AND VERIFYING PACKAGES:
MISCELLANEOUS:
rpm {--initdb|--rebuilddb}
rpm {--addsign|--resign} PACKAGE_FILE ...
rpm {--querytags|--showrc}
rpm {--setperms|--setugids} PACKAGE_NAME ...
select-options
query-options
verify-options
install-options
DESCRIPTION
rpm is a powerful Package Manager, which can be used to build, install,
query, verify, update, and erase
individual software packages. A package consists of an archive of files
and meta-data used to install
and erase the archive files. The meta-data includes helper scripts, file
attributes, and descriptive
information about the package. Packages come in two varieties: binary
packages, used to encapsulate
software to be installed, and source packages, containing the source
code and recipe necessary
to produce binary packages.
GENERAL OPTIONS
These options can be used in all the different modes.
-?, --help
Print a longer usage message than normal.
--version
Print a single line containing the version number of rpm being used.
--quiet
Print as little as possible - normally only error messages will be
displayed.
-v
Print verbose information - normally routine progress messages will be
displayed.
-vv
Print lots of ugly debugging information.
--rcfile FILELIST
Each of the files in the colon separated FILELIST is read sequentially
by rpm for configuration information.
Only the first file in the list must exist, and tildes will be expanded
to the value of $HOME.
The default FILELIST is
/usr/lib/rpm/rpmrc:/usr/lib/rpm/redhat/rpmrc:~/.rpmrc.
--pipe CMD
Pipes the output of rpm to the command CMD.
--dbpath DIRECTORY
Use the database in DIRECTORY rather than the default path /var/lib/rpm.
--root DIRECTORY
Use the file system tree rooted at DIRECTORY for all operations. Note
that this means the database within
DIRECTORY will be used for dependency checks and any scriptlet(s) (e.g.
%post if installing, or %prep if building,
a package) will be run after a chroot(2) to DIRECTORY.
--aid
Add suggested packages to the transaction set when needed.
--allfiles
Installs or upgrades all the missingok files in the package, regardless
if they exist.
--badreloc
Used with --relocate, permit relocations on all file paths, not just
those OLDPATH's included in the binary package relocation hint(s).
--excludepath OLDPATH
Don't install files whose name begins with OLDPATH.
--excludedocs
Don't install any files which are marked as documentation (which
includes man pages and texinfo documents).
--force
Same as using --replacepkgs, --replacefiles, and --oldpackage.
-h, --hash
Print 50 hash marks as the package archive is unpacked. Use with
-v|--verbose for a nicer display.
--ignoresize
Don't check mount file systems for sufficient disk space before
installing this package.
--ignorearch
Allow installation or upgrading even if the architectures of the binary
package and host don't match.
--ignoreos
Allow installation or upgrading even if the operating systems of the
binary package and host don't match.
--includedocs
Install documentation files. This is the default behavior.
--justdb
Update only the database, not the filesystem.
--nodigest
Don't verify package or header digests when reading.
--nosignature
Don't verify package or header signatures when reading.
--nodeps
Don't do a dependency check before installing or upgrading a package.
--nosuggest
Don't suggest package(s) that provide a missing dependency.
--noorder
Don't reorder the packages for an install. The list of packages would
normally be reordered to satisfy dependencies.
--noscripts
--nopre
--nopost
--nopreun
--nopostun
Don't execute the scriptlet of the same name. The --noscripts option is
equivalent to
--nopre --nopost --nopreun --nopostun
and turns off the execution of the corresponding %pre, %post, %preun,
and %postun scriptlet(s).
--notriggers
--notriggerin
--notriggerun
--notriggerpostun
Don't execute any trigger scriptlet of the named type. The --notriggers
option is equivalent to
--notriggerin --notriggerun --notriggerpostun
--oldpackage
Allow an upgrade to replace a newer package with an older one.
--percent
Print percentages as files are unpacked from the package archive. This
is intended to make rpm easy to run from other tools.
--prefix NEWPATH
For relocateable binary packages, translate all file paths that start
with the installation prefix in the package relocation hint(s) to
NEWPATH.
--relocate OLDPATH=NEWPATH
For relocatable binary packages, translate all file paths that start
with OLDPATH in the package relocation hint(s) to NEWPATH. This option
can be used repeatedly if several OLDPATH's in the package are to be
relocated.
--repackage
Re-package the files before erasing. The previously installed package
will be named according to the macro %_repackage_name_fmt and will be
created in the directory named by the macro %_repackage_dir (default
value is /var/tmp).
--replacefiles
Install the packages even if they replace files from other, already
installed, packages.
--replacepkgs
Install the packages even if some of them are already installed on this
system.
--test
Do not install the package, simply check for and report potential
conflicts.
ERASE OPTIONS
The general form of an rpm erase command is
rpm {-e|--erase} [--allmatches] [--nodeps] [--noscripts] [--notriggers] [--repackage] [--test] PACKAGE_NAME ...
--allmatches
Remove all versions of the package which match PACKAGE_NAME. Normally an
error is issued if PACKAGE_NAME matches multiple packages.
--nodeps
Don't check dependencies before uninstalling the packages.
--noscripts
--nopreun
--nopostun
Don't execute the scriptlet of the same name. The --noscripts option
during package erase is equivalent to
--nopreun --nopostun
and turns off the execution of the corresponding %preun, and %postun
scriptlet(s).
--notriggers
--notriggerun
--notriggerpostun
Don't execute any trigger scriptlet of the named type. The --notriggers
option is equivalent to
--notriggerun --notriggerpostun
You may specify the format that package information should be printed
in. To do this, use the
--qf|--queryformat QUERYFMT
option.
There are two subsets of options for querying: package selection, and
information selection.
PACKAGE_NAME
Query installed package named PACKAGE_NAME.
-a, --all
Query all installed packages.
-f, --file FILE
Query package owning FILE.
--fileid MD5
Query package that contains a given file identifier, i.e. the MD5 digest
of the file contents.
-g, --group GROUP
Query packages with the group of GROUP.
--hdrid SHA1
Query package that contains a given header identifier, i.e. the SHA1
digest of the immutable header region.
-p, --package PACKAGE_FILE
Query an (uninstalled) package PACKAGE_FILE. The PACKAGE_FILE may be
specified as an ftp or http style URL, in which case the package header
will be downloaded and queried. See FTP/HTTP OPTIONS for information on
rpm's internal ftp and http client support. The PACKAGE_FILE
argument(s), if not a binary package, will be interpreted as an ASCII
package manifest. Comments are permitted, starting with a '#', and each
line of a package manifest file may include white space separated glob
expressions, including URL's with remote glob expressions, that will be
expanded to paths that are substituted in place of the package manifest
as additional PACKAGE_FILE arguments to the query.
--pkgid MD5
Query package that contains a given package identifier, i.e. the MD5
digest of the combined header and payload contents.
--querybynumber HDRNUM
Query the HDRNUMth database entry directly; this is useful only for
debugging.
--specfile SPECFILE
Parse and query SPECFILE as if it were a package. Although not all the
information (e.g. file lists) is available, this type of query permits
rpm to be used to extract information from spec files without having to
write a specfile parser.
--tid TID
Query package(s) that have a given TID transaction identifier. A unix
time stamp is currently used as a transaction identifier. All package(s)
installed or erased within a single transaction have a common
identifier.
--triggeredby PACKAGE_NAME
Query packages that are triggered by package(s) PACKAGE_NAME.
--whatprovides CAPABILITY
Query all packages that provide the CAPABILITY capability.
--whatrequires CAPABILITY
Query all packages that require CAPABILITY for proper functioning.
PACKAGE QUERY OPTIONS:
--changelog
Display change information for the package.
-c, --configfiles
List only configuration files (implies -l).
-d, --docfiles
List only documentation files (implies -l).
--dump
Dump file information as follows:
path size mtime md5sum mode owner group isconfig isdoc rdev symlink
This option must be used with at least one of -l, -c, -d.
--filesbypkg
List all the files in each selected package.
-i, --info
Display package information, including name, version, and description.
This uses the --queryformat if one was specified.
--last
Orders the package listing by install time such that the latest packages
are at the top.
-l, --list
List files in package.
--provides
List capabilities this package provides.
-R, --requires
List packages on which this package depends.
--scripts
List the package specific scriptlet(s) that are used as part of the
installation and uninstallation processes.
-s, --state
Display the states of files in the package (implies -l). The state of
each file is one of normal, not installed, or replaced.
--triggers, --triggerscripts
Display the trigger scripts, if any, which are contained in the package.
VERIFY OPTIONS
The general form of an rpm verify command is
rpm {-V|--verify} [select-options] [verify-options]
--nodeps
Don't verify dependencies of packages.
--nodigest
Don't verify package or header digests when reading.
--nofiles
Don't verify any attributes of package files.
--noscripts
Don't execute the %verifyscript scriptlet (if any).
--nosignature
Don't verify package or header signatures when reading.
--nolinkto
--nomd5
--nosize
--nouser
--nogroup
--nomtime
--nomode
--nordev
Don't verify the corresponding file attribute.
The format of the output is a string of 8 characters, a possible
attribute marker (c %config, d %doc, g %ghost, l %license, r %readme)
from the package header, followed by the file name. Each of the 8
characters denotes the result of a comparison of attribute(s) of the
file to the value of those attribute(s) recorded in the database. A
single "." (period) means the test passed, while a single "?" (question
mark) indicates the test could not be performed (e.g. file permissions
prevent reading). Otherwise, the (mnemonically emBoldened) character
denotes failure of the corresponding --verify test:
S file Size differs
M Mode differs (includes permissions and file type)
5 MD5 sum differs
D Device major/minor number mismatch
L readLink(2) path mismatch
U User ownership differs
G Group ownership differs
T mTime differs
The --checksig option checks all the digests and signatures contained in
PACKAGE_FILE to ensure the integrity and origin of the package. Note
that signatures are now verified whenever a package is read, and
--checksig is useful to verify all of the digests and signatures
associated with a package.
Finally, public keys can be erased after importing just like packages.
Here's how to remove the Red Hat GPG/DSA key
rpm -e gpg-pubkey-db42a60e
SIGNING A PACKAGE
Both of the --addsign and --resign options generate and insert new
signatures for each package PACKAGE_FILE given, replacing any existing
signatures. There are two options for historical reasons, there is no
difference in behavior currently.
For compatibility with older versions of GPG, PGP, and rpm, only V3
OpenPGP signature packets should be configured. Either DSA or RSA
verification algorithms can be used, but DSA is preferred.
If you want to be able to sign packages you create yourself, you also
need to create your own public and secret key pair (see the GPG manual).
You will also need to configure the rpm macros
%_signature
The signature type. Right now only gpg and pgp are supported.
%_gpg_name
The name of the "user" whose key you wish to use to sign your packages.
For example, to be able to use GPG to sign packages as the user "John
Doe <[email protected]>" from the key rings located in /etc/rpm/.gpg using
the executable /usr/bin/gpg you would include
%_signature gpg
%_gpg_path /etc/rpm/.gpg
%_gpg_name John Doe <[email protected]>
%_gpgbin /usr/bin/gpg
SHOWRC
The command
rpm --showrc
shows the values rpm will use for all of the options that are currently
set in rpmrc and macros configuration file(s).
FTP/HTTP OPTIONS
rpm can act as an FTP and/or HTTP client so that packages can be queried
or installed from the internet.
Package files for install, upgrade, and query operations may be
specified as an ftp or http style URL:
ftp://USER:PASSWORD@HOST:PORT/path/to/package.rpm
--ftpproxy HOST
The host HOST will be used as a proxy server for all ftp transfers,
which allows users to ftp through firewall machines which use proxy
systems. This option may also be specified by configuring the macro
%_ftpproxy.
--ftpport PORT
The TCP PORT number to use for the ftp connection on the proxy ftp
server instead of the default port. This option may also be specified by
configuring the macro %_ftpport.
rpm allows the following options to be used with http URLs:
--httpproxy HOST
The host HOST will be used as a proxy server for all http transfers.
This option may also be specified by configuring the macro %_httpproxy.
--httpport PORT
The TCP PORT number to use for the http connection on the proxy http
server instead of the default port. This option may also be specified by
configuring the macro %_httpport.
LEGACY ISSUES
Executing rpmbuild
The build modes of rpm are now resident in the /usr/bin/rpmbuild
executable. Although legacy compatibility provided by the popt aliases
below has been adequate, the compatibility is not perfect; hence build
mode compatibility through popt aliases is being removed from rpm.
Install the rpmbuild package, and see rpmbuild(8) for documentation of
all the rpm build modes previously documented here in rpm(8).
SEE ALSO
popt(3),
rpm2cpio(8),
rpmbuild(8),
http://www.rpm.org/
39. Simplified overview Kernel parameters Solaris, AIX, Linux:
==============================================================
Throughout this document, you can find many other examples of settings.
This section is only a simplified overview.
39.1 Solaris:
-------------
Some examples:
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=2500
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767
..
..
You can use, among others, the "ipcs" command and "adb" command to
retrieve kernel parameters and mem info.
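For example:
# ipcs -a
(reports on all IPC facilities: shared memory, semaphores, and message queues)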
- Shared Memory
Shared memory provides the fastest way for processes to pass large
amounts of data to one another.
As the name implies, shared memory refers to physical pages of memory
that are shared by more than one process.
Solaris 10 only uses the shmmax and shmmni parameters. (Other parameters
are set dynamically within the
Solaris 10 IPC model.)
- Semaphores
Semaphores are a shareable resource that take on a non-negative integer
value. They are manipulated with the semaphore system calls, such as semop().
Solaris 10 only uses the semmni, semmsl and semopm parameters. (Other
parameters are dynamic within
the Solaris 10 IPC model.)
semmap: This sets the number of entries in the semaphore map. This
should never be greater than semmni. If the number
of semaphores per semaphore set used by the application is "n" then set
semmap = ((semmni + n - 1)/n)+1
or more. Alternatively, we can set semmap to semmni x semmsl. An
undersized semmap leads to "WARNING:
rmfree map overflow" errors. The default setting is 10; the maximum for
Solaris 2.6 is 2GB. The default for
Solaris 9 was 25; Solaris 10 increased the default to 512. The limit is
SHRT_MAX.
semmni (max-sem-ids in Solaris 10+): Maximum number of systemwide
semaphore sets. Each control structure consumes
84 bytes. For Solaris 2.5.1-9, the default setting is 10; for Solaris
10, the default setting is 128.
The maximum is 65535
semmns: Maximum number of semaphores in the system. Each structure uses
16 bytes. This parameter should be set
to semmni x semmsl. The default is 60; the maximum is 2GB.
semmnu: Maximum number of undo structures in the system. This should be
set to semmni so that each control structure
has an undo structure. The default is 30, the maximum is 2 GB.
semmsl (max-sem-nsems in Solaris 10+): Maximum number of semaphores per
semaphore set. The default is 25,
the maximum is 65535.
semopm (max-sem-ops in Solaris 10+): Maximum number of semaphore
operations that can be performed in each
semop call. The default in Solaris 2.5.1-9 is 10, the maximum is 2 GB.
Solaris 10 increased the default to 512.
semume: Maximum number of undo structures per process. This should be
set to semopm times the number of processes
that will be using semaphores at any one time. The default is 10; the
maximum is 2 GB.
semusz: Number of bytes required for semume undo structures. This should
not be tuned; it is set to
semume x (1 + sizeof(undo)). The default is 96; the maximum is 2 GB.
semvmx: Maximum value of a semaphore. This should never exceed 32767
(default value) unless SEM_UNDO
is never used. The default is 32767; the maximum is 65535.
semaem: Maximum adjust-on-exit value. This should almost always be left
alone. The default is 16384;
the maximum is 32767.
39.2 Linux:
-----------
# echo 2048 > /proc/sys/kernel/msgmax
The above command sets the value of the msgmax parameter to 2048.
Within the /proc/ directory, one can find a wealth of information about
the system hardware and any processes
currently running. In addition, some of the files within the /proc/
directory tree can be manipulated by users
and applications to communicate configuration changes to the kernel.
Under Linux, all data are stored as files. Most users are familiar with
the two primary types of files:
text and binary. But the /proc/ directory contains another type of file
called a virtual file.
It is for this reason that /proc/ is often referred to as a virtual file
system.
These virtual files have unique qualities. Most of them are listed as
zero bytes in size and yet when one
is viewed, it can contain a large amount of information. In addition,
most of the time and date settings
on virtual files reflect the current time and date, indicative of the
fact that they are constantly changing.
By using the cat, more, or less commands on files within the /proc/
directory, you can immediately access
an enormous amount of information about the system. For example, if you
want to see what sort of CPU
your computer has, type "cat /proc/cpuinfo" and you will see something
similar to the following:
processor : 0
vendor_id : AuthenticAMD
cpu family : 5
model : 9
model name : AMD-K6(tm) 3D+ Processor
stepping : 1
cpu MHz : 400.919
cache size : 256 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr mce cx8 pge mmx syscall 3dnow
k6_mtrr
bogomips : 799.53
When viewing different virtual files in the /proc/ file system, you will
notice some of the information is
easily understandable while some is not human-readable. This is in part
why utilities exist to pull data
from virtual files and display it in a useful way. Some examples of such
applications are
lspci, apm, free, and top.
As a general rule, most virtual files within the /proc/ directory are
read only. However, some can be used
to adjust settings in the kernel. This is especially true for files in
the /proc/sys/ subdirectory.
To change the value of a virtual file, use the echo command and a >
symbol to redirect the new value to the file.
For instance, to change your hostname on the fly, you can type:
# echo www.example.com > /proc/sys/kernel/hostname
Other files act as binary or boolean switches. For instance, if you type
cat /proc/sys/net/ipv4/ip_forward,
you will see either a 0 or a 1. A 0 indicates the kernel is not
forwarding network packets. By using the
echo command to change the value of the ip_forward file to 1, you can
immediately turn packet forwarding on.
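For example:
# cat /proc/sys/net/ipv4/ip_forward
0
# echo 1 > /proc/sys/net/ipv4/ip_forward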
-- sysctl:
Linux also provides the sysctl command to modify kernel parameters at
runtime.
Sysctl uses parameter information stored in a file called
/etc/sysctl.conf. If, for example, we wanted to
change the value of the msgmax parameter as we did above, but this time
using sysctl, the command would
look like this:
# sysctl -w kernel.msgmax=2048
You can also type "uname -a" to see the kernel version.
/proc/cmdline
This file shows the parameters passed to the kernel at the time it is
started. A sample /proc/cmdline file
looks like this:
ro root=/dev/hda2
If you type "sysctl -a |more" you will see a long list of kernel
parameters.
You can use this sysctl program to modify these parameters, for example:
# sysctl -w kernel.shmmax=100000000
# sysctl -w fs.file-max=65536
# echo "kernel.shmmax = 100000000" >> /etc/sysctl.conf
Most out-of-the-box kernel parameters (of RHEL 3, 4, and 5) are set
correctly for Oracle, except a few.
You can check the most important parameters using a command like the one
shown below.
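# sysctl -a | egrep 'shmmax|shmall|shmmni|sem|file-max'
(the exact parameter list to verify is given in the Oracle release notes)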
For Linux, use the ipcs command to obtain a list of the system's current
shared memory segments and
semaphore sets, and their identification numbers and owner.
Perform the following steps to modify the kernel parameters by using the
/proc file system.
Review the current semaphore parameter values in the sem file by using
the cat or more utility.
For example, using the cat utility, enter the following command:
# cat /proc/sys/kernel/sem
The output lists, in order, the values for the SEMMSL, SEMMNS, SEMOPM,
and SEMMNI parameters.
The following example shows how the output appears:
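250     32000   32      128
(typical Linux defaults; actual values vary per system)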
Replace the parameter variables with the values for your system in the
order that they are entered
in the preceding example. For example:
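# echo 250 32000 100 128 > /proc/sys/kernel/sem
(illustrative values, in the order SEMMSL SEMMNS SEMOPM SEMMNI)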
Review the current shared memory parameters by using the cat or more
utility. For example, using the cat utility, enter the following command,
where shared_memory_parameter is shmmax, shmmni, or shmall:
# cat /proc/sys/kernel/shared_memory_parameter
Modify the shared memory parameters by using the echo utility. For
example, to modify the SHMMAX, SHMMNI, or SHMALL parameter, redirect the
new value into the corresponding file, as shown in the sketch below.
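A sketch with typical values (illustrative only; size them for your system):
# echo 2147483648 > /proc/sys/kernel/shmmax
# echo 4096 > /proc/sys/kernel/shmmni
# echo 2097152 > /proc/sys/kernel/shmall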
See Also:
Your system vendor's documentation for more information on script files
and init files.
Set the Process limit by using ulimit -u. This will give you the number
of processes per user.
ulimit -u 16384
lsmod:
------
SYNOPSIS
lsmod [-hV]
DESCRIPTION
lsmod shows information about all loaded modules.
The format is name, size, use count, list of referring modules. The
information displayed is identical
to that available from "/proc/modules".
If the module controls its own unloading via a can_unload routine then
the user count displayed by lsmod
is always -1, irrespective of the real use count.
insmod:
-------
SYNOPSIS
insmod [-fhkLmnpqrsSvVxXyYN] [-e persist_name] [-o module_name] [-O
blob_name] [-P prefix] module [ symbol=value ... ]
DESCRIPTION
insmod installs a loadable module in the running kernel.
insmod tries to link a module into the running kernel by resolving all
symbols from the kernel's
exported symbol table.
If the module file name is given without directories or extension,
insmod will search for the module
in some common default directories. The environment variable MODPATH can
be used to override this default.
If a module configuration file such as /etc/modules.conf exists, it will
override the paths defined in MODPATH.
rmmod:
------
If more than one module is named on the command line, the modules will
be removed in the given order.
This supports unloading of stacked modules.
- Handling Modules
The following commands are available:
insmod
insmod loads the requested module after searching for it in a
subdirectory of /lib/modules/<version>.
It is better, however, to use modprobe rather than insmod.
rmmod
Unloads the requested module. This is only possible if this module is no
longer needed. For example,
the isofs module cannot be unloaded while a CD is still mounted.
depmod
Creates the file modules.dep in /lib/modules/<version> that defines the
dependencies of all the modules.
This is necessary to ensure that all dependent modules are loaded with
the selected ones.
This file will be built after the system is started if it does not
exist.
modprobe
Loads or unloads a given module while taking into account dependencies
of this module. This command
is extremely powerful and can be used for a lot of things (e.g., probing
all modules of a given type
until one is successfully loaded). In contrast to insmod, modprobe
checks /etc/modprobe.conf and therefore
is the preferred method of loading modules. For detailed information
about this topic, refer to the
corresponding man page.
lsmod
Shows which modules are currently loaded as well as how many other
modules are using them. Modules started
by the kernel daemon are tagged with autoclean. This label denotes that
these modules will automatically
be removed once they reach their idle time limit.
modinfo
Shows module information.
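A small usage sketch of modprobe and lsmod (the dummy network module is just an example):
# modprobe dummy          load the module plus any modules it depends on
# lsmod | grep dummy      verify that it is loaded
# modprobe -r dummy       unload it again, together with unused dependencies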
/etc/modprobe.conf
The loading of modules is affected by the files /etc/modprobe.conf
and /etc/modprobe.conf.local
and the directory /etc/modprobe.d. See man modprobe.conf. Parameters for
modules that access hardware directly
must be entered in this file. Such modules may need system-specific
options (e.g., CD-ROM driver or network driver).
The parameters used here are described in the kernel sources. Install
the package kernel-source and read the
documentation in the directory /usr/src/linux/Documentation.
modprobe.conf:
--------------
Example 1:
#irda
alias tty-ldisc-11 irtty
alias char-major-161-* ircomm-tty
Example 2:
/etc/sysconfig:
---------------
Note 1:
-------
/etc/sysconfig/clock
Used to configure the system clock to Universal or local time and set
some other clock parameters. An example file:
UTC=false
ARC=false
Options:
UTC - true means the clock is set to UTC time otherwise it is at local
time
ARC - Set true on alpha stations only. It indicates the ARC console's
42-year time offset is in effect. If not set to true, the normal Unix
epoch is assumed.
ZONE="filename" - indicates the zonefile under the directory
/usr/share/zoneinfo that the /etc/localtime file is a copy of. This may
be set to:
ZONE="US/Eastern"
/etc/sysconfig/init
This file is used to set some terminal characteristics and environment
variables. A sample listing:
# color => new RH6.0 bootup
# verbose => old-style bootup
# anything else => new style bootup without ANSI colors or positioning
BOOTUP=color
# column to start "[ OK ]" label in
RES_COL=60
# terminal sequence to move to that column. You could change this
# to something like "tput hpa ${RES_COL}" if your terminal supports it
MOVE_TO_COL="echo -en \\033[${RES_COL}G"
# terminal sequence to set color to a 'success' color (currently: green)
SETCOLOR_SUCCESS="echo -en \\033[1;32m"
# terminal sequence to set color to a 'failure' color (currently: red)
SETCOLOR_FAILURE="echo -en \\033[1;31m"
# terminal sequence to set color to a 'warning' color (currently:
yellow)
SETCOLOR_WARNING="echo -en \\033[1;33m"
# terminal sequence to reset to the default color.
SETCOLOR_NORMAL="echo -en \\033[0;39m"
# default kernel loglevel on boot (syslog will reset this)
LOGLEVEL=1
# Set to something other than 'no' to turn on magic sysrq keys...
MAGIC_SYSRQ=no
# Set to anything other than 'no' to allow hotkey interactive startup...
PROMPT=yes
Options:
BOOTUP=bootupmode - Choices are color, or verbose. The choice color sets
new boot display. The choice verbose sets old style display. Anything
else sets a new display without ANSI formatting.
LOGLEVEL=number - Sets the initial console logging level for the kernel.
The default is 7. The values are:
emergency, panic - System is unusable
alert - Action must be taken immediately
crit - Critical conditions
err, error (deprecated) - Error conditions
warning, warn (deprecated) - Warning conditions
notice - Normal but significant conditions
info - Informational message
debug - Debug level message
RES_COL=number - Screen column to start status labels at. The default is
60.
MOVE_TO_COL=command - A command to move the cursor to $RES_COL.
SETCOLOR_SUCCESS=command - Set the color used to indicate success.
SETCOLOR_FAILURE=command - Set the color used to indicate failure.
SETCOLOR_WARNING=command - Set the color used to indicate warning.
SETCOLOR_NORMAL=command - Set the color used for normal text.
MAGIC_SYSRQ=yes|no - Set to 'no' to disable the magic sysrq key.
PROMPT=yes|no - Set to 'no' to disable the key check for interactive
mode.
/etc/sysconfig/keyboard
Used to configure the keyboard. Used by the startup script
/etc/rc.d/rc.sysinit. An example file:
KEYTABLE="us"
Options:
KEYTABLE="keytable file" - The line
[ KEYTABLE="/usr/lib/kbd/keytables/us.map" ] tells the system to use the
file shown for keymapping.
KEYBOARDTYPE=sun|pc - The selection "sun" indicates that a Sun keyboard
is attached on /dev/kbd. The selection "pc" indicates a PS/2 keyboard is
on the PS/2 port.
/etc/sysconfig/mouse
This file is used to configure the mouse. An example file:
FULLNAME="Generic - 2 Button Mouse (PS/2)"
MOUSETYPE="ps/2"
XEMU3="yes"
XMOUSETYPE="PS/2"
Options:
MOUSETYPE=type - Choices are microsoft, mouseman, mousesystems, ps/2,
msbm, logibm, atibm, logitech, mmseries, or mmhittab.
XEMU3=yes|no - If yes, emulate three buttons, otherwise not.
/etc/sysconfig/network
Used to configure networking options. All IPX options default to off. An
example file:
NETWORKING=yes
FORWARD_IPV4="yes"
HOSTNAME="mdct-dev3"
GATEWAY="10.1.0.25"
GATEWAYDEV="eth0"
Options:
NETWORKING=yes|no - Sets network capabilities on or off.
HOSTNAME="hostname". To work with old software, the /etc/HOSTNAME file
should contain the same hostname.
FORWARD_IPV4=yes|no - Turns the ability to perform IP forwarding on or
off. Turn it on if you want to use the machine as a router. Turn it off
to use it as a firewall or IP masquerading.
DEFRAG_IPV4=yes|no - Set this to automatically defragment IPv4 packets.
This is good for masquerading, and a bad idea otherwise. It defaults to
'no'.
GATEWAY="gateway IP"
GATEWAYDEV="gateway device" Possible values include eth0, eth1, or ppp0.
NISDOMAIN="nis domain name"
IPX=yes|no - Turn IPX ability on or off.
IPXAUTOPRIMARY=on|off - Must not be yes or no.
IPXAUTOFRAME=on|off
IPXINTERNALNETNUM="netnum"
IPXINTERNALNODENUM="nodenum"
/etc/sysconfig/static-routes
Configures static routes on a network. Used to set up static routing. An
example file:
eth1 net 192.168.199.0 netmask 255.255.255.0 gw 192.168.199.1
eth0 net 10.1.0.0 netmask 255.255.0.0 gw 10.1.0.153
eth1 net 255.255.255.255 netmask 255.255.255.255
The device may be a device name such as eth0 which is used to have the
route brought up and down as the device is brought up or down. The value
can also be "any" to let the system calculate the correct devices at run
time.
/etc/sysconfig/routed
Sets up dynamic routing policies. An example file:
EXPORT_GATEWAY="no"
SILENT="yes"
Options:
SILENT=yes|no
EXPORT_GATEWAY=yes|no
/etc/sysconfig/pcmcia
Used to configure pcmcia network cards. An example file:
PCMCIA=no
PCIC=
PCIC_OPTS=
CORE_OPTS=
Options:
PCMCIA=yes|no
PCIC=i82365|tcic
PCIC_OPTS=socket driver (i82365 or tcic) timing parameters
CORE_OPTS=pcmcia_core options
CARDMGR_OPTS=cardmgr options
/etc/sysconfig/amd
Used to configure the auto mount daemon. An example file:
ADIR=/.automount
MOUNTPTS='/net /etc/amd.conf'
AMDOPTS=
Options:
ADIR=/.automount (normally never changed)
MOUNTPTS='/net /etc/amd.conf' (standard automount stuff)
AMDOPTS= (extra options for AMD)
/etc/sysconfig/tape
Used for backup tape device configuration. Options:
DEV=/dev/nst0 - The tape device. Use the non-rewinding tape for these
scripts. For SCSI tapes the device is /dev/nst#, where # is the number
of the tape drive you want to use. If you only have one then use nst0.
For IDE tapes the device is /dev/ht#. For floppy tape drives the device
is /dev/ftape.
ADMIN=root - The person to mail to if the backup fails for any reason
SLEEP=5 - The time to sleep between tape operations.
BLOCKSIZE=32768 - This worked fine for 8mm, then 4mm, and now DLT. An
optimal setting is probably the amount of data your drive writes at one
time.
SHORTDATE=$(date +%y:%m:%d:%H:%M) - A short date string, used in backup
log filenames.
DAY=$(date +log-%y:%m:%d) - Used for the log file directory.
DATE=$(date) - Date string, used in log files.
LOGROOT=/var/log/backup - Root of the logging directory
LIST=$LOGROOT/incremental-list - This is the file name the incremental
backup will use to store the incremental list. It will be $LIST-{some
number}.
DOTCOUNT=$LOGROOT/.count - For counting as you go to know which
incremental list to use.
COUNTER=$LOGROOT/counter-file - For rewinding when done...might not use.
BACKUPTAB=/etc/backuptab - The file in which we keep our list of
backup(s) we want to make.
/etc/sysconfig/sendmail
An example file:
DAEMON=yes
QUEUE=1h
Options:
DAEMON=yes|no - yes implies -bd
QUEUE=1h - Given to sendmail as -q$QUEUE. The -q option is not given to
sendmail if /etc/sysconfig/sendmail exists and QUEUE is empty or
undefined.
/etc/sysconfig/i18n
Controls the system font settings. The language variables are used in
/etc/profile.d/lang.sh. An example i18n file:
LANG="en_US"
LC_ALL="en_US"
LINGUAS="en_US"
Options:
LANG= set locale for all categories, can be any two letter ISO language
code.
LC_CTYPE= localedata configuration for classification and conversion of
characters.
LC_COLLATE= localedata configuration for collation (sort order) of
strings.
LC_MESSAGES= localedata configuration for translation of yes and no
messages.
LC_NUMERIC= localedata configuration for non-monetary numeric data.
LC_MONETARY= localedata configuration for monetary data.
LC_TIME= localedata configuration for date and time.
LC_ALL= localedata configuration overriding all of the above.
LANGUAGE= can be a : separated list of ISO language codes.
LINGUAS= can be a ' ' separated list of ISO language codes.
SYSFONT= any font that is legal when used as /usr/bin/consolechars -f
$SYSFONT ... (See console-tools package for consolechars command)
UNIMAP= any SFM (screen font map, formerly called Unicode mapping table
- see consolechars(8))
/usr/bin/consolechars -f $SYSFONT --sfm $UNIMAP
/etc/sysconfig/network-scripts/ifup:
/etc/sysconfig/network-scripts/ifdown:
These are symbolic links to /sbin/ifup and /sbin/ifdown, respectively.
These symlinks are here for legacy purposes only. They will probably be
removed in future versions. These scripts take one argument normally:
the name of the device (e.g. eth0). They are called with a second
argument of "boot" during the boot sequence so that devices that are not
meant to be brought up on boot (ONBOOT=no, see below) can be ignored at
that time.
/etc/sysconfig/network-scripts/network-functions
This is not really a public file. Contains functions which the scripts
use for bringing interfaces up and down. In particular, it contains most
of the code for handling alternative interface configurations and
interface change notification through netreport.
/etc/sysconfig/network-scripts/ifcfg-interface
/etc/sysconfig/network-scripts/ifcfg-interface-clone
Defines an interface. An example file called ifcfg-eth0:
DEVICE="eth0"
IPADDR="10.1.0.153"
NETMASK="255.255.0.0"
ONBOOT="yes"
BOOTPROTO="none"
IPXNETNUM_802_2=""
IPXPRIMARY_802_2="no"
IPXACTIVE_802_2="no"
IPXNETNUM_802_3=""
IPXPRIMARY_802_3="no"
IPXACTIVE_802_3="no"
IPXNETNUM_ETHERII=""
IPXPRIMARY_ETHERII="no"
IPXACTIVE_ETHERII="no"
IPXNETNUM_SNAP=""
IPXPRIMARY_SNAP="no"
IPXACTIVE_SNAP="no"
NAME="friendly name for users to see" - Most important for PPP. Only
used in front ends.
DEVICE="name of physical device"
IPADDR=
NETMASK=
GATEWAY=
ONBOOT=yes|no
USERCTL=yes|no
BOOTPROTO=none|bootp|dhcp - If BOOTPROTO is not "none", then the only
other item that must be set is the DEVICE item; all the rest will be
determined by the boot protocol. No "dummy" entries need to be created.
Base items being deprecated:
NETWORK="will be calculated automatically with ifcalc"
BROADCAST="will be calculated automatically with ifcalc"
Ethernet-only items:
{IPXNETNUM,IPXPRIMARY,IPXACTIVE}_{802_2,802_3,ETHERII,SNAP}
configuration matrix for IPX. Only used if IPX is active. Managed
from /etc/sysconfig/network-scripts/ifup-ipx
PPP/SLIP items:
PERSIST=yes|no
MODEMPORT=device - An example device is /dev/modem.
LINESPEED=speed - An example speed is 115200.
DEFABORT=yes|no - Tells netcfg whether or not to put default abort
strings in when creating/editing the chat script and/or dip script for
this interface.
PPP-specific items
WVDIALSECT="list of sections from wvdial.conf to use" - If this variable
is set, then the chat script (if it exists) is ignored, and wvdial is
used to open the PPP connection.
PEERDNS=yes|no - Modify /etc/resolv.conf if peer uses msdns extension.
DEFROUTE=yes|no - Set this interface as default route?
ESCAPECHARS=yes|no - The simplified interface here doesn't let people
specify which characters to escape; almost everyone can use asyncmap
00000000 anyway, and they can set PPPOPTIONS to "asyncmap foobar" if they
want the option set differently.
HARDFLOWCTL=yes|no - Yes implies "modem crtscts" options.
PPPOPTIONS="arbitrary option string" - It is placed last on the command
line, so it can override other options like asyncmap that were specified
differently.
PAPNAME="name $PAPNAME" - On pppd command line. Note that the
"remotename" option is always specified as the logical ppp device name,
like "ppp0" (which might perhaps be the physical device ppp1 if some
other ppp device was brought up earlier...), which makes it easy to
manage pap/chap files -- name/password pairs are associated with the
logical ppp device name so that they can be managed together.
REMIP="remote ip address" - Normally unspecified.
MTU=
MRU=
DISCONNECTTIMEOUT="number of seconds" The current default is 5. This is
the time to wait before re-establishing the connection after a
successfully-connected session terminates before attempting to establish
a new connection.
RETRYTIMEOUT="number of seconds" - The current default is 60. This is
the time to wait before re-attempting to establish a connection after a
previous attempt fails.
/etc/sysconfig/network-scripts/chat-interface - This is the chat script
for PPP or SLIP connection intended to establish the connection. For
SLIP devices, a DIP script is written from the chat script; for PPP
devices, the chat script is used directly.
/etc/sysconfig/network-scripts/dip-interface
A write-only script created from the chat script by netcfg. Do not
modify this. In the future, this file may disappear by default and
created on-the-fly from the chat script if it does not exist.
/etc/sysconfig/network-scripts/ifup-post
Called when any network device EXCEPT a SLIP device comes up. Calls
/etc/sysconfig/network-scripts/ifup-routes to bring up static routes
that depend on that device. Calls /etc/sysconfig/network-scripts/ifup-
aliases to bring up aliases for that device. Sets the hostname if it is
not already set and a hostname can be found for the IP for that device.
Sends SIGIO to any programs that have requested notification of network
events. It could be extended to fix up nameservice configuration, call
arbitrary scripts, etc, as needed.
/etc/sysconfig/network-scripts/ifup-routes
Set up static routes for a device. An example file:
#!/bin/sh
if [ ! -f /etc/sysconfig/static-routes ]; then
exit 0
fi
/etc/sysconfig/network-scripts/ifup-aliases
Bring up aliases for a device.
/etc/sysconfig/network-scripts/ifdhcpc-done
Called by dhcpcd once dhcp configuration is complete; sets up
/etc/resolv.conf from the version dhcpcd dropped in
/etc/dhcpc/resolv.conf
Note 3:
-------
Red Hat Linux 8.0: The Official Red Hat Linux Reference Guide
Chapter 3. Boot Process, Init, and Shutdown
------------------------------------------------------------------------
The /etc/sysconfig directory normally contains the following files:
amd
apmd
arpwatch
authconfig
cipe
clock
desktop
dhcpd
firstboot
gpm
harddisks
hwconf
i18n
identd
init
ipchains
iptables
irda
keyboard
kudzu
mouse
named
netdump
network
ntpd
pcmcia
radvd
rawdevices
redhat-config-users
redhat-logviewer
samba
sendmail
soundcard
squid
tux
ups
vncservers
xinetd
/etc/sysconfig/amd
The /etc/sysconfig/amd file contains various parameters used by amd
allowing for the automounting and
automatic unmounting of file systems.
/etc/sysconfig/apmd
The /etc/sysconfig/apmd file is used by apmd as a configuration for what
things to start/stop/change
on suspend or resume. It is set up to turn on or off apmd during
startup, depending on whether your hardware
supports Advanced Power Management (APM) or if you choose not to use it.
apm is a monitoring daemon that works
with power management code within the Linux kernel. It can alert you to
a low battery if you are using
Red Hat Linux on a laptop, among other things.
/etc/sysconfig/arpwatch
The /etc/sysconfig/arpwatch file is used to pass arguments to the
arpwatch daemon at boot time.
The arpwatch daemon maintains a table of Ethernet MAC addresses and
their IP address pairings.
For more information about what parameters you can use in this file,
type man arpwatch. By default,
this file sets the owner of the arpwatch process to the user pcap.
/etc/sysconfig/authconfig
The /etc/sysconfig/authconfig file sets the kind of authorization to be
used on the host.
It contains one or more of the following lines:
/etc/sysconfig/clock
The /etc/sysconfig/clock file controls the interpretation of values read
from the system hardware clock.
SRM=true|yes - Indicates the SRM console's 1900 epoch is in effect. This
setting is only for SRM-based Alpha systems. Any other value indicates
that the normal UNIX epoch is in use.
ZONE="America/New York" - the zone file under /usr/share/zoneinfo that
/etc/localtime is a copy of.
Earlier releases of Red Hat Linux used the following values (which are
deprecated):
GMT - Indicates that the clock is set to Universal Time (Greenwich Mean
Time).
ARC - Indicates the ARC console's 42-year time offset is in effect (for
Alpha-based systems only).
/etc/sysconfig/desktop
The /etc/sysconfig/desktop file specifies the desktop manager to be run,
such as:
DESKTOP="GNOME"
/etc/sysconfig/dhcpd
The /etc/sysconfig/dhcpd file is used to pass arguments to the dhcpd
daemon at boot time.
The dhcpd daemon implements the Dynamic Host Configuration Protocol
(DHCP) and the Internet Bootstrap
Protocol (BOOTP). DHCP and BOOTP assign hostnames to machines on the
network. For more information
about what parameters you can use in this file, type man dhcpd.
/etc/sysconfig/firstboot
Beginning with Red Hat Linux 8.0, the first time you boot the system,
the /sbin/init program calls
the /etc/rc.d/init.d/firstboot script. This allows the user to install
additional applications
and documentation before the boot process completes.
/etc/sysconfig/gpm
The /etc/sysconfig/gpm file is used to pass arguments to the gpm daemon
at boot time. The gpm daemon is the
mouse server which allows mouse acceleration and middle-click pasting.
For more information about what
parameters you can use in this file, type man gpm. By default, it sets
the mouse device to /dev/mouse.
/etc/sysconfig/harddisks
The /etc/sysconfig/harddisks file allows you to tune your hard drive(s).
You can also use /etc/sysconfig/harddiskhd[a-h] to configure parameters
for specific drives.
Warning
Do not make changes to this file lightly. If you change the default
values stored here, you could
corrupt all of the data on your hard drive(s).
/etc/sysconfig/hwconf
The /etc/sysconfig/hwconf file lists all the hardware that kudzu
detected on your system, as well as
the drivers used, vendor ID and device ID information. The kudzu program
detects and configures new and/or
changed hardware on a system. The /etc/sysconfig/hwconf file is not
meant to be manually edited.
If you do edit it, devices could suddenly show up as being added or
removed.
/etc/sysconfig/i18n
The /etc/sysconfig/i18n file sets the default language, such as:
LANG="en_US"
/etc/sysconfig/identd
The /etc/sysconfig/identd file is used to pass arguments to the identd
daemon at boot time.
The identd daemon returns the username of processes with open TCP/IP
connections. Some services on
the network, such as FTP and IRC servers, will complain and cause slow
responses if identd is not running.
But in general, identd is not a required service, so if security is a
concern, you should not run it.
For more information about what parameters you can use in this file,
type man identd. By default,
the file contains no parameters.
/etc/sysconfig/init
The /etc/sysconfig/init file controls how the system will appear and
function during the boot process.
/etc/sysconfig/ipchains
The /etc/sysconfig/ipchains file contains information used by the kernel
to set up ipchains packet filtering rules at boot time or whenever the
service is started.
/etc/sysconfig/iptables
Like /etc/sysconfig/ipchains, the /etc/sysconfig/iptables file stores
information used by the kernel to set up packet filtering services at
boot time or whenever the service is started.
You should not modify this file by hand unless you are familiar with how
to construct iptables rules. The simplest way to add rules is to use the
/usr/sbin/lokkit command or the gnome-lokkit graphical application to
create your firewall. Using these applications will automatically edit
this file at the end of the process.
If you wish, you can manually create rules using /sbin/iptables and then
type /sbin/service iptables save to add the rules to the
/etc/sysconfig/iptables file.
Once this file exists, any firewall rules saved there will persist
through a system reboot or a service restart.
/etc/sysconfig/irda
The /etc/sysconfig/irda file controls how infrared devices on your
system are configured at startup.
/etc/sysconfig/keyboard
The /etc/sysconfig/keyboard file controls the behavior of the keyboard.
The following values may be used:
/etc/sysconfig/kudzu
The /etc/sysconfig/kudzu file allows you to specify a safe probe of your
system's hardware by kudzu at boot time. A safe probe is one that
disables serial port probing.
/etc/sysconfig/mouse
The /etc/sysconfig/mouse file is used to specify information about the
available mouse. The following values may be used:
/etc/sysconfig/named
The /etc/sysconfig/named file is used to pass arguments to the named
daemon at boot time. The named daemon is a Domain Name System (DNS)
server which implements the Berkeley Internet Name Domain (BIND) version
9 distribution. This server maintains a table of which hostnames are
associated with IP addresses on the network.
OPTIONS="<value>", where <value> is any option listed in the man page for
named except -t. In place of -t, use the ROOTDIR directive instead.
For more information about what parameters you can use in this file,
type man named. For detailed information on how to configure a BIND DNS
server, see Chapter 16. By default, the file contains no parameters.
/etc/sysconfig/netdump
The /etc/sysconfig/netdump file is the configuration file for the
/etc/init.d/netdump service. The netdump service sends both oops data
and memory dumps over the network. In general, netdump is not a required
service, so you should only run it if you absolutely need to. For more
information about what parameters you can use in this file, type man
netdump.
/etc/sysconfig/network
The /etc/sysconfig/network file is used to specify information about the
desired network configuration. The following values may be used:
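For example (hypothetical values):
NETWORKING=yes
HOSTNAME=myhost.mydomain.com
GATEWAY=10.0.1.1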
Note
For compatibility with older software that people might install (such
as trn), the /etc/HOSTNAME file should contain the same value as here.
/etc/sysconfig/ntpd
The /etc/sysconfig/ntpd file is used to pass arguments to the ntpd
daemon at boot time. The ntpd daemon sets and maintains the system clock
to synchronize with an Internet standard time server. It implements
version 4 of the Network Time Protocol (NTP). For more information about
what parameters you can use in this file, point a browser at the
following file: /usr/share/doc/ntp-<version>/ntpd.htm (where <version>
is the version number of ntpd). By default, this file sets the owner of
the ntpd process to the user ntp.
/etc/sysconfig/pcmcia
The /etc/sysconfig/pcmcia file is used to specify PCMCIA configuration
information. The following values may be used:
/etc/sysconfig/radvd
The /etc/sysconfig/radvd file is used to pass arguments to the radvd
daemon at boot time. The radvd daemon listens for router requests and
sends router advertisements for the IP version 6 protocol. This service
allows hosts on a network to dynamically change their default routers
based on these router advertisements. For more information about what
parameters you can use in this file, type man radvd. By default, this
file sets the owner of the radvd process to the user radvd.
/etc/sysconfig/rawdevices
The /etc/sysconfig/rawdevices file is used to configure raw device
bindings, such as:
/dev/raw/raw1 /dev/sda1
/dev/raw/raw2 8 5
/etc/sysconfig/redhat-config-users
The /etc/sysconfig/redhat-config-users file is the configuration file
for the graphical application, User Manager. Under Red Hat Linux 8.0
this file is used to filter out system users such as root, daemon, or
lp. This file is edited by the Preferences => Filter system users and
groups pull-down menu in the User Manager application and should not be
edited by hand. For more information on using this application, see the
chapter called User and Group Configuration in the Official Red Hat
Linux Customization Guide.
/etc/sysconfig/redhat-logviewer
The /etc/sysconfig/redhat-logviewer file is the configuration file for
the graphical, interactive log viewing application, Log Viewer. This
file is edited by the Edit => Preferences pull-down menu in the Log
Viewer application and should not be edited by hand. For more
information on using this application, see the chapter called Log Files
in the Official Red Hat Linux Customization Guide.
/etc/sysconfig/samba
The /etc/sysconfig/samba file is used to pass arguments to the smbd and
the nmbd daemons at boot time. The smbd daemon offers file sharing
connectivity for Windows clients on the network. The nmbd daemon offers
NetBIOS over IP naming services. For more information about what
parameters you can use in this file, type man smbd. By default, this
file sets smbd and nmbd to run in daemon mode.
/etc/sysconfig/sendmail
The /etc/sysconfig/sendmail file allows messages to be sent to one or
more recipients, routing the message over whatever networks are
necessary. The file sets the default values for the Sendmail application
to run. Its default values are to run as a background daemon, and to
check its queue once an hour in case something has backed up.
/etc/sysconfig/soundcard
The /etc/sysconfig/soundcard file is generated by sndconfig and should
not be modified. The sole use of this file is to determine what card
entry in the menu to pop up by default the next time sndconfig is run.
Sound card configuration information is located in the /etc/modules.conf
file.
/etc/sysconfig/squid
The /etc/sysconfig/squid file is used to pass arguments to the squid
daemon at boot time. The squid daemon is a proxy caching server for Web
client applications. For more information on configuring a squid proxy
server, use a Web browser to open the /usr/share/doc/squid-<version>/
directory (replace <version> with the squid version number installed on
your system). By default, this file sets squid to start in daemon mode
and sets the amount of time before it shuts itself down.
/etc/sysconfig/tux
The /etc/sysconfig/tux file is the configuration file for the Red Hat
Content Accelerator (formerly known as TUX), the kernel-based web
server. For more information on configuring the Red Hat Content
Accelerator, use a Web browser to open the /usr/share/doc/tux-
<version>/tux/index.html (replace <version> with the version number of
TUX installed on your system). The parameters available for this file
are listed in /usr/share/doc/tux-<version>/tux/parameters.html.
/etc/sysconfig/ups
The /etc/sysconfig/ups file is used to specify information about any
Uninterruptible Power Supplies (UPS) connected to your system. A UPS can
be very valuable for a Red Hat Linux system because it gives you time to
correctly shut down the system in the case of power interruption. The
following values may be used:
/etc/sysconfig/vncservers
The /etc/sysconfig/vncservers file configures the way the Virtual
Network Computing (VNC) server starts up.
Note that when you use a VNC server, your communication with it is
unencrypted, and so it should not be used on an untrusted network. For
specific instructions concerning the use of SSH to secure the VNC
communication, please read the information found at
http://www.uk.research.att.com/vnc/sshvnc.html. To find out more about
SSH, see Chapter 9 or Official Red Hat Linux Customization Guide.
/etc/sysconfig/xinetd
The /etc/sysconfig/xinetd file is used to pass arguments to the xinetd
daemon at boot time.
The xinetd daemon starts programs that provide Internet services when a
request to the port for that service
is received. For more information about what parameters you can use in
this file, type man xinetd.
For more information on the xinetd service, see the Section called
Access Control Using xinetd in Chapter 8.
apm-scripts - This contains the Red Hat APM suspend/resume script. You
should not edit this file directly. If you need customization, simply
create a file called /etc/sysconfig/apm-scripts/apmcontinue and it will
be called at the end of the script. Also, you can control the script by
editing /etc/sysconfig/apmd.
Scripts used to bring up and down network interfaces, such as ifup and
ifdown.
Scripts used to bring up and down ISDN interfaces, such as ifup-isdn and
ifdown-isdn
rhn - This directory contains the configuration files and GPG keys for
the Red Hat Network. No files in this directory should be edited by
hand. For more information on the Red Hat Network, see the Red Hat
Network website at the following URL: https://rhn.redhat.com.
Throughout this document, you can find many AIX kernel parameter
statements.
Most commands are related to retrieving or changing attributes on the
sys0 object.
40. NFS:
========
On Solaris:
-----------
NFS uses a number of daemons to handle its services. These services are
initialized at startup
from the "/etc/init.d/nfs.server" and "/etc/init.d/nfs.client" startup
scripts.
On AIX:
-------
To start the NFS daemons for each system, whether client or Server, you
can use either
# smitty mknfs
# mknfs -N (or -B or -I)
The mknfs command configures the system to run the NFS daemons. The
command also adds an entry
to the /etc/inittab file, so that the /etc/rc.nfs file is executed on
system restart.
mknfs flags:
-B: adds an entry to the inittab and it also executes /etc/rc.nfs to
start the daemons now.
-I: adds an entry to the inittab to execute rc.nfs at system restart.
-N: executes rc.nfs now to start the daemons.
# startsrc -g nfs
1. Verify that NFS is already running using the command "lssrc -g nfs".
The output should indicate
that the nfsd and rpc.mountd daemons are active.
# lssrc -g nfs
Subsystem Group PID Status
biod nfs 1234 active
nfsd nfs 5678 active
rpc.mountd nfs 9101 active
rpc.statd nfs 1213 active
rpc.lockd nfs 1516 active
# smitty mknfsexp or
# mknfsexp or
# edit the /etc/exports file, like for example
vi /etc/exports
/home1
/home2
etc..
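A somewhat fuller /etc/exports sketch with per-directory options
(hypothetical client names; see the exports man page for all options):
/home1 -rw,access=client1:client2
/home2 -ro
After editing the file, export everything listed in it with:
# exportfs -a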
41.1 SOLARIS:
=============
ifconfig:
---------
ifconfig enables or disables a network interface, sets its IP address,
subnet mask, and sets
various other options.
syntax:
ifconfig interface address options .. up
Examples:
# ifconfig -a
Displays the system's IP address and MAC address for each interface.
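For example, to assign an address to interface le0 and bring it up
(hypothetical address):
# ifconfig le0 192.168.1.10 netmask 255.255.255.0 up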
rpcinfo:
--------
This utility can list all registered RPC services running on a system,
for example
# rpcinfo -p 192.168.1.21
You can also unregister an rpc service using the -d option, for example
# rpcinfo -d sprayd 1
route:
------
Syntax:
route [-f] add/delete destination gateway [hop-count]
files:
------
- /etc/hostname.interface
The file contains the hostname or IP address associated with the
network interface.
Suppose the system is called system1 and the interface is le0
then the file would be "hostname.le0" and contains the entry "system1".
- /etc/nodename
The file should contain one entry: the hostname of the local machine.
- /etc/defaultdomain
The file is present if the network uses a name service. The file should
contain
one entry: the fully qualified Domain name of the administrative domain
to which
the local host belongs.
- /etc/inet/hosts or /etc/hosts
This is the well known local hosts file, which resolves names to IP
addresses.
The /etc/hosts is a symbolic link to /etc/inet/hosts.
- /etc/defaultrouter
This file should contain an entry for each router directly connected to
the network.
- /etc/inetd.conf
The inetd daemon runs on behalf of other network services. It starts the
appropriate server process
when a request for that service is received. The /etc/inetd.conf file
lists the services that
inetd is to provide
- /etc/services
This file lists the well known ports.
- /etc/hosts.equiv
This file contains a list of trusted hosts for a remote system, one per
line.
It has the following structure:
system1
system2 user_a
If the user attempts to log in remotely by using rlogin from one of the
hosts listed
hosts listed
in this file, the system allows the user to login without a password.
~/.rhosts
- /etc/resolv.conf
# cat resolv.conf
domain yourdomain.com
search yourdomain.com
search client1.com
nameserver 192.168.0.9
nameserver 192.168.0.11
41.2 AIX:
=========
At IPL time, the init process will run /etc/rc.tcpip after starting
the SRC.
This is so because in /etc/inittab the following record is present:
rctcpip:2:wait:/etc/rc.tcpip > /dev/console 2>&1 # Start TCP/IP daemons
There are also daemons specific to the bos or to other applications that
can be started through
the rc.tcpip file. These daemons are lpd, portmap, sendmail, syslogd
(started by default)
The subsystems started from rc.tcpip can be stopped and restarted using
the stopsrc and startsrc commands.
Example:
# stopsrc -s inetd
# mktcpip
or use smitty
# smitty configtcp
- BIND/DNS (named)
- Network Information Service (NIS)
- Local /etc/hosts file
You can override the order by creating the /etc/netsvc.conf file with an
entry.
If /etc/netsvc.conf does not exist, it will be just like you have the
following entry:
hosts = bind,nis,local
You can override the order by changing the NSORDER environment variable.
If it is not set,
it will be just like you have issued the command:
export NSORDER=bind,nis,local
If you use name services, you can provide the minimal information needed
through the mktcpip command.
Typically, the "/etc/resolv.conf" file stores your domain name and name
server ip addresses.
The mktcpip command creates or updates the /etc/resolv.conf file for
you.
41.2.3 Adapter:
---------------
# lsdev -Cc if
en0 Defined 10-80 Standard Ethernet Network Interface
en1 Defined 20-60 Standard Ethernet Network Interface
et0 Defined 10-80 IEEE 802.3 Ethernet Network Interface
et1 Defined 20-60 IEEE 802.3 Ethernet Network Interface
lo0 Available Loopback Network Interface
iptrace:
--------
The iptrace command can be used to record the packets that are exchanged
on an interface to and from
a remote host. This is like a Solaris snoop facility.
Examples
1. To start the iptrace daemon with the System Resource Controller (SRC),
enter:
# startsrc -s iptrace -a "/tmp/nettrace"
To do the same thing without the SRC, enter:
# iptrace /tmp/nettrace
The recorded packets are received on and sent from the local host.
All
packet flow between the local host and all other hosts on any
interface is
recorded. The trace information is placed into the /tmp/nettrace
file.
2. To stop the iptrace daemon that was started with the SRC, enter:
# stopsrc -s iptrace
3. To record packets received on an interface from a specific remote
host, enter the command in the following format:
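For example, to record telnet traffic exchanged in both directions with
the (hypothetical) remote host airmail:
# iptrace -i en0 -p telnet -s airmail -b /tmp/telnet.trace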
Adding routes:
--------------
# smitty mktcpip
# smitty chinet
Smitty and chdev will update the ODM database, and makes changes
permanent, while ifconfig commands will not.
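For example (hypothetical addresses): add a route from the command line,
or use the smitty fastpath, which stores the route in the ODM and so
makes it permanent:
# route add -net 192.168.2.0 -netmask 255.255.255.0 192.168.1.254
# smitty mkroute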
- /etc/hosts.equiv
This file contains a list of trusted hosts for a remote system, one per
line.
It has the following structure:
system1
system2 user_a
If the user attempts to log in remotely by using rlogin from one of the
hosts listed
hosts listed
in this file, the system allows the user to login without a password.
~/.rhosts
For example, to allow all the users on the hosts toaster and starboss to
log in to the local host,
you would have a hosts.equiv file like
toaster
starboss
To allow only the user bob to login from starboss, you would have
toaster
starboss bob
To allow the user lester to login from any host, you would have
toaster
starboss bob
+ lester
# entstat -d en0
This command shows Media speed and that kind of stuff etc..
# netstat -nr
# no -o ipforwarding=1
note:
-----
Some examples:
# no -o thewall=3072
# no -o tcp_sendspace=16384
# no -o ipqmaxlen=512 (controls the number of incoming packets
that can exist on the IP interrupt queue)
# no -a
arpqsize = 12
arpt_killc = 20
arptab_bsiz = 7
arptab_nb = 149
bcastping = 0
clean_partial_conns = 1
delayack = 0
delayackports = {}
dgd_packets_lost = 3
dgd_ping_time = 5
dgd_retry_time = 5
directed_broadcast = 0
extendednetstats = 0
fasttimo = 200
icmp6_errmsg_rate = 10
icmpaddressmask = 0
ie5_old_multicast_mapping = 0
ifsize = 256
inet_stack_size = 16
ip6_defttl = 64
ip6_prune = 1
ip6forwarding = 0
ip6srcrouteforward = 0
ip_ifdelete_notify = 0
ip_nfrag = 200
ipforwarding = 0
ipfragttl = 2
ipignoreredirects = 1
ipqmaxlen = 100
ipsendredirects = 1
ipsrcrouteforward = 0
ipsrcrouterecv = 0
ipsrcroutesend = 0
llsleep_timeout = 3
lo_perf = 1
lowthresh = 90
main_if6 = 0
main_site6 = 0
maxnip6q = 20
maxttl = 255
medthresh = 95
mpr_policy = 1
multi_homed = 1
nbc_limit = 891289
nbc_max_cache = 131072
nbc_min_cache = 1
nbc_ofile_hashsz = 12841
nbc_pseg = 0
nbc_pseg_limit = 1048576
ndd_event_name = {all}
ndd_event_tracing = 0
ndp_mmaxtries = 3
ndp_umaxtries = 3
ndpqsize = 50
ndpt_down = 3
ndpt_keep = 120
ndpt_probe = 5
ndpt_reachable = 30
ndpt_retrans = 1
net_buf_size = {all}
net_buf_type = {all}
net_malloc_police = 0
nonlocsrcroute = 0
nstrpush = 8
passive_dgd = 0
pmtu_default_age = 10
pmtu_expire = 10
pmtu_rediscover_interval = 30
psebufcalls = 20
psecache = 1
pseintrstack = 24576
psetimers = 20
rfc1122addrchk = 0
rfc1323 = 0
rfc2414 = 1
route_expire = 1
routerevalidate = 0
rto_high = 64
rto_length = 13
rto_limit = 7
rto_low = 1
sack = 0
sb_max = 1048576
send_file_duration = 300
site6_index = 0
sockthresh = 85
sodebug = 0
sodebug_env = 0
somaxconn = 1024
strctlsz = 1024
strmsgsz = 0
strthresh = 85
strturncnt = 15
subnetsarelocal = 1
tcp_bad_port_limit = 0
tcp_ecn = 0
tcp_ephemeral_high = 65535
tcp_ephemeral_low = 32768
tcp_finwait2 = 1200
tcp_icmpsecure = 0
tcp_init_window = 0
tcp_inpcb_hashtab_siz = 24499
tcp_keepcnt = 8
tcp_keepidle = 14400
tcp_keepinit = 150
tcp_keepintvl = 150
tcp_limited_transmit = 1
tcp_low_rto = 0
tcp_maxburst = 0
tcp_mssdflt = 1460
tcp_nagle_limit = 65535
tcp_nagleoverride = 0
tcp_ndebug = 100
tcp_newreno = 1
tcp_nodelayack = 0
tcp_pmtu_discover = 0
tcp_recvspace = 16384
tcp_sendspace = 16384
tcp_tcpsecure = 0
tcp_timewait = 1
tcp_ttl = 60
tcprexmtthresh = 3
thewall = 1048576
timer_wheel_tick = 0
udp_bad_port_limit = 0
udp_ephemeral_high = 65535
udp_ephemeral_low = 32768
udp_inpcb_hashtab_siz = 24499
udp_pmtu_discover = 0
udp_recvspace = 42080
udp_sendspace = 9216
udp_ttl = 30
udpcksum = 1
use_isno = 1
use_sndbufpool = 1
rcp command:
------------
Purpose
Transfers files between a local and a remote host or between two remote
hosts.
Syntax (basic form):
rcp [-p] [-r] source destination
where source and destination may each be a local path or [User@]Host:Path.
-p Preserves modification times and modes
-r Recursively copies
Description
The /usr/bin/rcp command is used to copy one or more files between the
local host and a remote host,
between two remote hosts, or between files at the same remote host.
To use rcp, one of the following conditions must be met: the local host
is included in the remote host's /etc/hosts.equiv file (and the remote
user is not the root user), or the local host and user name are included
in a $HOME/.rhosts file on the remote user account.
Although you can set any permissions for the $HOME/.rhosts file, it is
recommended that the permissions
of the .rhosts file be set to 600 (read and write by owner only).
Examples:
- The following example uses rcp to copy the local file, YTD_sum from
the directory /usr/reports
on the local host to the file year-end in the directory /usr/acct on the
remote host moon:
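In that case, the command would be:
# rcp /usr/reports/YTD_sum moon:/usr/acct/year-end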
- To copy a remote file from one remote host to another remote host,
enter:
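For example (hypothetical hosts host1 and host2):
# rcp host1:/home/jane/new.c host2:/home/jane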
- To send the directory subtree from the local host to a remote host and
preserve the modification times and modes,
enter:
# rcp -p -r report jane@host2:report
The directory subtree report is copied from the local host to the home
directory of user jane
at remote host host2 and all modes and modification times are preserved.
The remote file /home/jane/.rhosts includes an entry specifying the
local host and user name.
Note:
rcp is of course used to copy files between unix systems. On NT/W2K/XP
computers, rcp could be available
with some different syntax, like
rcp [{-a | -b}] [-h] [-r] [Host][.User:] [Source] [Host][.User:]
[Path\Destination]
Note 2:
=======
ftpd Daemon
Purpose
Provides the server function for the Internet FTP protocol.
Syntax
Note: The ftpd daemon is normally started by the inetd daemon. It can
also be controlled from the command line,
using SRC commands.
/usr/sbin/ftpd [ -d ] [ -k ] [ -l ] [ -t TimeOut ] [ -T MaxTimeOut ] [
-s ] [ -u OctalVal ]
Description
The /usr/sbin/ftpd daemon is the DARPA Internet File Transfer Protocol
(FTP) server process. The ftpd daemon
uses the Transmission Control Protocol (TCP) to listen at the port
specified with the ftp command service
specification in the /etc/services file.
Changes to the ftpd daemon can be made using the System Management
Interface Tool (SMIT) or
System Resource Controller (SRC), by editing the /etc/inetd.conf or
/etc/services file.
Entering ftpd at the command line is not recommended. The ftpd daemon is
started by default when it is
uncommented in the /etc/inetd.conf file.
The inetd daemon gets its information from the /etc/inetd.conf file and
the /etc/services file.
Note 3:
=======
A quick way to measure raw FTP throughput is to send a stream of zeros,
generated by dd, to the remote /dev/null, so that no disk I/O is involved
on either side:
ftp> bin
ftp> put "| dd if=/dev/zero bs=512k count=2000" /dev/null
Note 4:
=======
Document Text
Title : How to setup anonymous ftp, and troubleshooting ftp
Date : 970828
Type : EN
Document ID : A4786122
Problem Description
Can you explain the proper setup of anonymous FTP and how to
troubleshoot any problems?
Configuration Info
Solution
10.X:
ftp stream tcp nowait root /usr/lbin/ftpd ftpd
9.X:
ftp stream tcp nowait root /etc/ftpd ftpd
or
netstat -a | grep ftp
the output should look like:
tcp        0      0  *.ftp                 *.*                   LISTEN
1. Create an ftp user in /etc/passwd, with an invalid password and a
non-functional login shell:
10.X:
ftp:*:500:1:Anonymous FTP user:/home/ftp:/usr/bin/false
9.X:
ftp:*:500:1:Anonymous FTP user:/users/ftp:/bin/false
*Note: If UID 500 is not available, use a UID that
is not currently being used.
*Note: GID 1 is usually group 'other'; verify that this is the case
on your system.
2. Create a home directory for the ftp user that is owned by ftp and
has permissions set to 0555:
10.X:
mkdir /home/ftp
chmod 555 /home/ftp
chown ftp:other /home/ftp
9.X:
mkdir /users/ftp
chmod 555 /users/ftp
chown ftp:other /users/ftp
3. Create a bin directory (under usr for 10.X) that is owned by root
and has permissions set to 0555:
10.X:
mkdir -p /home/ftp/usr/bin
chmod 555 /home/ftp/usr/bin /home/ftp/usr
chown root /home/ftp/usr/bin /home/ftp/usr
9.X:
mkdir /users/ftp/bin
chmod 555 /users/ftp/bin
chown root /users/ftp/bin
4. Copy 'ls' to the new bin directory with permissions set to 0111:
10.X:
cp /sbin/ls /home/ftp/usr/bin/ls
chmod 111 /home/ftp/usr/bin/ls
9.X:
cp /bin/ls /users/ftp/bin/ls
chmod 111 /users/ftp/bin/ls
5. Create an etc directory, owned by root with permissions 0555, and
copy in the passwd and group files:
10.X:
mkdir /home/ftp/etc
chmod 555 /home/ftp/etc
chown root /home/ftp/etc
9.X:
mkdir /users/ftp/etc
chmod 555 /users/ftp/etc
chown root /users/ftp/etc
10.X:
cp /etc/passwd /etc/group /home/ftp/etc
chown root /home/ftp/etc/passwd /home/ftp/etc/group
chmod 444 /home/ftp/etc/passwd /home/ftp/etc/group
9.X:
cp /etc/passwd /etc/group /users/ftp/etc
chown root /users/ftp/etc/passwd /users/ftp/etc/group
chmod 444 /users/ftp/etc/passwd /users/ftp/etc/group
6. OPTIONAL:
Create a dist directory that is owned by root and has permissions
of 755. Superuser can put read-only files in this directory to
make them available to anonymous ftp users.
10.X:
mkdir /home/ftp/dist
chown root /home/ftp/dist
chmod 755 /home/ftp/dist
9.X:
mkdir /users/ftp/dist
chown root /users/ftp/dist
chmod 755 /users/ftp/dist
7. OPTIONAL:
Create a pub directory that is owned by ftp and writable by all.
Anonymous ftp users can put files in this directory to make them
available to other anonymous ftp users.
10.X:
mkdir /home/ftp/pub
chown ftp:other /home/ftp/pub
chmod 777 /home/ftp/pub
9.X:
mkdir /users/ftp/pub
chown ftp:other /users/ftp/pub
chmod 777 /users/ftp/pub
Troubleshooting FTP:
A. Check file permissions: the write and execute permission bits for
group and other must all be zero, and the file must be readable by
its owner. Otherwise, the file is ignored.
B. Check /etc/ftpusers.
ftpd rejects remote logins to local user accounts that are
named in /etc/ftpusers. Each restricted account name must
appear alone on a line in the file. The line cannot contain
any white space. User accounts that specify a restricted
login shell in /etc/passwd should be listed in /etc/ftpusers
because ftpd accesses local accounts without using their
login shells.
Example entries:
/bin/sh <<<-
/bin/rsh |
/bin/ksh |
/bin/rksh > 9.X valid shells
/bin/csh |
/bin/pam |
/usr/bin/keysh |
/bin/posix/sh <<<-
/sbin/sh <<<-
/usr/bin/sh |
/usr/bin/rsh |
/usr/bin/ksh > 10.X valid shells
/usr/bin/rksh |
/usr/bin/csh |
/usr/bin/keysh <<<-
All shells referred to in /etc/passwd or in the NIS passwd map
should be valid shells or links on this system and be listed
in /etc/shells.
Note 6:
=======
On HP-UX, make inetd re-read its configuration file (/etc/inetd.conf)
with:
# /usr/sbin/inetd -c
Note 7:
=======
There are five files used to hold FTP configuration information:
ftpaccess, ftpgroups, ftpusers, ftphosts, and ftpconversion.
The configuration files allow you to configure FTP features, such as the
number of FTP login tries permitted,
FTP banner displays, logging of incoming and outgoing file transfers,
access permissions,
use of regular expressions, etc. For complete details on these files,
see the ftpaccess(4), ftpgroups(4),
ftpusers(4), ftphosts(4), and ftpconversion(4) manpages.
Example 1:
----------
#!/usr/bin/ksh
ftp -v -n "YOUR.IP.ADD.RESS" << cmd
user "user" "passwd"
cd /distant/directory
lcd /local/directory
get ssh_install
get (or put) your files
quit
cmd
Example 2:
----------
autounix.sh
#!/bin/ksh
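# Note: this script assumes that $s_backuppath, $user, $passwd, $destdir,
# $s_filename and $s_donefilename have been set by the calling environment.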
cd $s_backuppath
ftp -in ftp-out.sapservx.com << EndHere
user $user $passwd
cd $destdir
bin
put $s_filename
rename $s_filename $s_donefilename
quit
EndHere
41.3 Linux:
===========
- TCP wrappers: access to network services can be controlled through the
/etc/hosts.allow and /etc/hosts.deny files.
Formatting Rules
All access control rules are placed on lines within hosts.allow and
hosts.deny, and any blank lines
or lines that start with the comment character (#) are ignored. Each
rule needs to be on its own line.
You can use wildcards (such as ALL) in combination with the EXCEPT operator.
Example:
Users that wish to prevent any hosts other than specific ones from
accessing services usually place
ALL: ALL in hosts.deny. Then, they place lines in hosts.allow, such as:
in.telnetd: 10.0.1.24
in.ftpd: 10.0.1. EXCEPT 10.0.1.1
- xinetd: the global defaults for xinetd-managed services are set in
/etc/xinetd.conf, for example:
defaults
{
instances = 60
log_type = SYSLOG authpriv
log_on_success = HOST PID
log_on_failure = HOST
cps = 25 30
}
includedir /etc/xinetd.d
To get an idea of how these files are structured, consider the wu-ftp
file:
service ftp
{
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.ftpd
server_args = -l -a
log_on_success += DURATION USERID
log_on_failure += USERID
nice = 10
disable = yes
}
The first line defines the service's name. The lines within the brackets
contain settings that define how this
service is supposed to be started and used. The wu-ftp file states that
the FTP service uses a
stream socket type (rather than dgram), the binary executable file to
use, the arguments to pass
to the binary, the information to log in addition to the
/etc/xinetd.conf settings, the priority with which
to run the service, and more.
The use of xinetd with a service also can serve as a basic level of
protection from a
Denial of Service (DoS) attack. The max_load option takes a floating
point value to set a CPU usage
threshold when no more connections for a particular service will be
accepted, preventing certain services
from overwhelming the system. The cps option accepts an integer value to
set a rate limit on the number
of connections available per second. Configuring this value to something
low, such as 3, will help prevent
attackers from being able to flood your system with too many
simultaneous requests for a particular service.
service telnet
{
disable = no
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
no_access = 10.0.1.0/24
log_on_success += PID HOST EXIT
access_times = 09:45-16:15
}
In this example, when any system from the 10.0.1.0/24 subnet, such as
10.0.1.2, tries to telnet into the server,
they will receive a message stating Connection closed by foreign host.
In addition, their login attempt
is logged in /var/log/secure.
- Network Scripts
Using Red Hat Linux, all network communications occur between configured
interfaces and physical
networking devices connected to the system. The different types of
interfaces that exist are as varied
as the physical devices they support.
The configuration files for network interfaces and the scripts to
activate and deactivate them are located in the
"/etc/sysconfig/network-scripts/" directory.
While the existence of interface files can differ from system to system,
the three different types of files
that exist in this directory, interface configuration files, interface
control scripts, and network
function files, work together to enable Red Hat Linux to use various
network devices.
This chapter will explore the relationship between these files and how
they are used.
Ethernet Interfaces
One of the most common interface files is ifcfg-eth0, which controls the
first network interface card or
NIC in the system. In a system with multiple NICs, you will also have
multiple ifcfg-eth files,
each one with a unique number at the end of the file name. Because each
device has its own configuration file,
you can control how each interface functions individually.
For example, a device configured with a static IP address:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.0.1.0
NETMASK=255.255.255.0
IPADDR=10.0.1.27
USERCTL=no
The same device configured to obtain its address via DHCP looks simpler:
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
Most of the time you will probably want to use a GUI utility, such as
Network Administration Tool
(redhat-config-network) to make changes to the various interface
configuration files.
You can also edit the configuration file for a given network interface
by hand. Below is a listing of the parameters
one can expect to configure in an interface configuration file.
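A few of the parameters one can expect, based on the examples above
(a partial list):
DEVICE=<name of the physical device>
BOOTPROTO=<none|bootp|dhcp>
IPADDR=<IP address>
NETMASK=<netmask value>
GATEWAY=<gateway IP address>
ONBOOT=<yes|no, whether to activate the device at boot time>
USERCTL=<yes|no, whether non-root users may control the device>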
- Network Functions
Red Hat Linux makes use of several files that contain important
functions that are used in various ways
to bring interfaces up and down. Rather than forcing each interface
control file to contain the same functions
as another, these functions are grouped together in a few files that can
be sourced when needed.
As the functions required for IPv6 interfaces are different than IPv4
interfaces, a network-functions-ipv6 file
exists specifically to hold this information. IPv6 support must be
enabled in the kernel in order to communicate
via that protocol. A function is present in this file that checks for
the presence of IPv6 support.
Additionally, functions that configure and delete static IPv6 routes,
create and remove tunnels, add and
remove IPv6 addresses to an interface, and test for the existence of an
IPv6 address on an interface can also
be found in this file.
Linux comes with advanced tools for packet filtering - the process of
controlling network packets as they enter,
move through, and exit the network stack within the kernel. Pre-2.4
kernels relied on ipchains for
packet filtering and used lists of rules applied to packets at each step
of the filtering process.
The introduction of the 2.4 kernel brought with it iptables (also called
netfilter), which is similar
to ipchains but greatly expands on the scope and control available for
filtering network packets.
Warning
The default firewall mechanism under the 2.4 kernel is iptables, but
iptables cannot be used if ipchains
are already running. If ipchains are present at boot time, the kernel
will issue an error and fail
to start iptables.
- Packet Filtering
Every chain has a default policy to ACCEPT, DROP, REJECT, or QUEUE the
packet to be passed to user-space.
If none of the rules in the chain apply to the packet, then the packet
is dealt with in accordance
with the default policy.
The iptables command allows you to configure these rule lists, as well
as set up new tables to be used
for your particular situation.
- iptables command:
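A minimal sketch of typical iptables usage (hypothetical rules, not a
complete firewall):
# iptables -P INPUT DROP (set the default policy for incoming packets)
# iptables -A INPUT -i lo -j ACCEPT (accept loopback traffic)
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT (accept incoming ssh)
# iptables -L -n (list the current rules)
# /sbin/service iptables save (save the rules to /etc/sysconfig/iptables)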
BIND as a Nameserver:
Red Hat Linux includes BIND, which is a very popular, powerful, open
source nameserver. BIND uses the named
daemon to provide name resolution services.
Note 1:
-------
To view:
# /usr/sbin/no -o tcp_keepinit
The output should be something like:
tcp_keepinit = 150
To set:
# /usr/sbin/no -o tcp_keepinit=100
(note: "no -d tcp_keepinit" would reset the option to its default value,
not set it)
Note 2:
-------
Use the following steps to change the TCP/IP timeout for your computer.
no -o tcp_keepinit=<timeout_value>
The value is expressed in half-seconds; the default of 150 corresponds
to 75 seconds.
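On AIX 5.3 and later, the -p flag can be added to make the change
persistent across reboots (it is recorded in /etc/tunables/nextboot):
# no -p -o tcp_keepinit=<timeout_value>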
========================
42. SOME NOTES ON IPSEC:
========================
- Authentication: proof that the identity of the host on the other end
of the connection is valid and correct.
- Integrity Checking: assurance that no data sent over the network
connection was modified in transit.
- Encryption: the rendering of network communications indecipherable to
anyone who might intercept the transmitted data.
IPsec protocols operate at the network layer, layer 3 of the OSI model.
Other Internet security protocols
in widespread use, such as SSL, TLS and SSH, operate from the transport
layer up (OSI layers 4 - 7).
This makes IPsec more flexible, as it can be used for protecting layer 4
protocols, including both TCP and UDP,
the most commonly used transport layer protocols. IPSec has an advantage
over SSL and other methods that operate
at higher layers. For an application to use IPsec no code change in the
applications is required whereas
to use SSL and other higher level protocols, applications must undergo
code changes.
-- Transport mode
-- --------------
In transport mode, only the payload (the data you transfer) of the IP
packet is authenticated and/or encrypted.
The routing is intact, since the IP header is neither modified nor
encrypted; however, when the authentication
header is used, the IP addresses cannot be translated, as this will
invalidate the hash value. The transport
and application layers are always secured by hash, so they cannot be
modified in any way (for example by
translating the port numbers). Transport mode is used for host-to-host
communications.
In its most simple form, using only an Authentication Header (AH) for
identifying your communication
partner, the packet looks like this:
---------------------------------------
| Original IP header | AH | TCP| DATA |
---------------------------------------
In transport mode, IPSec inserts the AH header after the IP header. The
IP data and header are used to calculate
the AH authentication value.
-- Tunnel mode
-- -----------
In tunnel mode, the entire IP packet (data plus the message headers) is
encrypted and/or authenticated.
It must then be encapsulated into a new IP packet for routing to work.
Tunnel mode is used for
network-to-network communications (secure tunnels between routers) or
host-to-network and host-to-host
communications over the Internet.
You should be aware that tunnel mode is probably the most widely used
implementation.
Many organizations use the Internet, to tunnel their traffic from site
to site.
In its most simple form, using only an Authentication Header (AH) for
identifying your communication
partner, the packet looks like this:
--------------------------
|NEW IP Header | Payload |
--------------------------
which is
----------------------------------------------------
|NEW IP Header| AH | Original IP header| TCP| DATA |
----------------------------------------------------
Symmetric key hash functions are also known as shared key hash functions
because the sender and receiver
must use the same (symmetric) key for the hash functions. In addition,
the key must only be known by the
sender and receiver, so this class of hash functions is sometimes
referred to as secret key hash functions.
So, secret key must not be confused with the well-known Public/Private
key encryptions.
DES-CBC (Data Encryption Standard Cipher Block Chaining Mode, 56-bit key
length)
3DES-CBC (Triple-DES CBC, three encryption iterations, each with a
different 56-bit key)
AES128-CBC (Advanced Encryption Standard CBC, 128-bit key length).
So IPSec uses "shared key" technology. If you use manual keys, it's
clear how they get generated: by you. But even if you use IKE, you still
have a "negotiation phase" before the keys are actually determined. In
this phase, two models can be used: authentication based on pre-shared
keys, or authentication based on digital certificates.
Notes:
-----
Note 1:
-------
IPSec can be employed between hosts (that is, end nodes), between
gateways, or between a host and a gateway
in an IP network. Some implementations, like HP-UX IPSec, can only be
installed on end nodes.
Note 3:
-------
In IPSec, you will often see the term "SA". This stands for "Security
Association", which is actually a term describing and collecting all
relevant parameters like Destination Address, Security Parameter Index
(SPI), Key, Authentication Algorithm, Key lifetime etc..
- Installing IPSec:
lslpp -L '*ipsec*'
The output from that command should contain the IP Security filesets
(typically bos.net.ipsec.rte and bos.net.ipsec.keymgt).
The IP Security software uses syslog to process messages and errors that
it generates.
Messages are sent to syslogd at the local4 facility. It is a good idea
to setup logging of these messages
before activating IPSec, to make troubleshooting easier. Add a line like
the following to /etc/syslog.conf:
local4.debug /var/adm/ipsec.log
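Then create the log file and have syslogd re-read its configuration:
# touch /var/adm/ipsec.log
# refresh -s syslogd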
or use the commandline with, for example, the "genfilt", "lsfilt" and
other commands.
Syntax
genfilt -v 4|6 [ -n fid] [ -a D|P] -s s_addr -m s_mask [-d d_addr] [ -M
d_mask] [ -g Y|N ]
[ -c protocol] [ -o s_opr] [ -p s_port] [ -O d_opr] [ -P
d_port] [ -r R|L|B ] [ -w I|O|B ] [ -l Y|N ]
[ -f Y|N|O|H ] [ -t tid] [ -i interface]
Description
Use the genfilt command to add a filter rule to the filter rule table.
The filter rules generated by this command
are called manual filter rules. IPsec filter rules can be configured
using the genfilt command,
IPsec smit (IP version 4 or IP version 6), or Web-based System Manager
in the Virtual Private Network submenu.
Examples:
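A sketch following the syntax above (hypothetical addresses): permit
telnet (TCP port 23) traffic between two hosts, in both directions, on
all interfaces, without logging:
# genfilt -v 4 -a P -s 10.1.1.1 -m 255.255.255.255 -d 10.1.1.2 -M 255.255.255.255 -c tcp -O eq -P 23 -w B -l N -i all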
Purpose
Lists filter rules from either the filter table or the IP Security
subsystem.
Syntax
lsfilt -v 4|6 [-n fid_list] [-a] [-d]
Description
Use the lsfilt command to list filter rules and their status.
You can configure IP Sec using the Web-based System Manager application
Network or SMIT. If using SMIT,
the following fastpaths will take you directly to the configuration
panels you need:
- ips4_basic
Basic configuration for IP version 4
- ips6_basic
Basic configuration for IP version 6
This section on IP Security Configuration discusses the following
topics:
There are two related but distinct parts of IP Security: tunnels and
filters. Tunnels require filters,
but filters do not require tunnels.
A packet comes in the network adapter to the IP stack. From there, the
filter module is called to determine
if the packet should be permitted or denied. If a tunnel ID is
specified, the packet will be checked against
the existing tunnel definitions. If the decapsulation from the tunnel is
successful, the packet will be passed
to the upper layer protocol. This function will occur in reverse order
for outgoing packets. The tunnel
relies on a filter rule to associate the packet with a particular
tunnel, but the filtering function can occur
without passing the packet to the tunnel.
----------- ---------
|Host A | |Host B |
| |------------------------------| |
| |------------------------------| |
| | | |
----------- SA A-------------------> ---------
<------------------ SA B
The Security Parameter Index (SPI) and the destination address identify
a unique security association.
Therefore, these two parameters are required for uniquely specifying a
tunnel. Other parameters such as
cryptographic algorithm, authentication algorithm, keys, and lifetime
can be specified or defaults can be used.
The decision to use IBM tunnels, manual tunnels, or, for AIX versions
4.3.2 and later, IKE tunnels,
depends on the tunnel support of the remote end and the type of key
management desired. IKE tunnels
are preferable (when available) because they offer secure key
negotiation and key refreshment in an
industry-standard way. They also take advantage of the new IETF ESP and
AH header types and support
anti-replay protection.
If the remote end does not support IBM tunnels, or uses one of the
algorithms requiring manual tunnels,
manual tunnels should be used. Manual tunnels ensure interoperability
with a large number of hosts.
Because the keys are static and difficult to change and may be
cumbersome to update, they are not as secure.
IBM Tunnels may be used between any two AIX machines running AIX Version
4.3 or higher, or between an AIX 4.3 host and
a host running IBM Secure Network Gateway 2.2 or IBM Firewall 3.1/3.2.
Manual tunnels may be used between a host
running AIX Version 4.3 and any other machine running IP Security and
having a common set of cryptographic
and authentication algorithms. Almost all vendors offer Keyed MD5 with
DES, or HMAC MD5 with DES.
This is a base subset that works with almost all implementations of IP
Security.
When setting up manual or IBM tunnels, the procedure depends on whether
you are setting up the first host
of the tunnel or setting up the second host, which must have parameters
matching the first host's setup.
When setting up the first host, the keys may be autogenerated, and the
algorithms can be defaulted.
When setting up the second host, it is best to import the tunnel
information from the remote end, if possible.
This will create a tunnel with output (using lstun -v 4) that looks
similar to:
Tunnel ID : 1
IP Version : IP Version 4
Source : 5.5.5.19
Destination : 5.5.5.8
Policy : auth/encr
Tunnel Mode : Tunnel
Send AH Algo : HMAC_MD5
Send ESP Algo : DES_CBC_8
Receive AH Algo : HMAC_MD5
Receive ESP Algo : DES_CBC_8
Source AH SPI : 300
Source ESP SPI : 300
Dest AH SPI : 23576
Dest ESP SPI : 23576
Tunnel Life Time : 480
Status : Inactive
Target : -
Target Mask : -
Replay : No
New Header : Yes
Snd ENC-MAC Algo : -
Rcv ENC-MAC Algo : -
# mktun -v 4 -t1
The filter rules associated with the tunnel are automatically generated
and output (using lsfilt -v 4)
looks similar to:
Rule 4:
Rule 5:
These filter rules in addition to the default filter rules are activated
by the mktun -v 4 -t 1 command.
To set up the other side (when it is another AIX machine), the tunnel
definition can be exported on host A
then imported to host B.
To export:
# exptun -v 4 -t 1 -f /tmp
# imptun -v 4 -t 1 -f /tmp
where 1 is the tunnel to be imported and /tmp is the directory where the
import files reside. This tunnel number
is system generated and must be referenced from the output of the gentun
command, or by using the lstun command
to list the tunnels and determine the correct tunnel number to import.
If there is only one tunnel in the
import file, or if all the tunnels are to be imported, then the -t
option is not needed.
If the remote machine is not AIX 4.3, the export file can be used as a
reference for setting up the algorithm,
keys, and SPI values for the other end of the tunnel.
Export files from the IBM Secure Network Gateway (SNG) can be imported
to create tunnels in AIX 4.3. To do this,
use the -n option when importing the file:
# imptun -v 4 -f /tmp -n
# mktun -v 4 -t1
To set up the other side, if the other host is an AIX 4.3 IP Security
machine, the tunnel definition can be exported
on host A, then imported to host B.
To export:
# exptun -v 4 -f /tmp
Below is a sample set of filter rules. Within each rule, fields are
shown in the following order
(an example of each field from rule 1 is shown in parentheses):
Rule_number (1), Action (permit),
Source_addr (0.0.0.0), Source_mask (0.0.0.0), Dest_addr (0.0.0.0),
Dest_mask (0.0.0.0), Source_routing (no),
Protocol (udp), Src_prt_operator (eq), Src_prt_value (4001),
Dst_prt_operator (eq), Dst_prt_value (4001),
Scope (both), Direction (both), Logging (no), Fragment (all packets),
Tunnel (0), and Interface (all).
3 permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 no esp any 0 any 0 both both no all packets 0 all
18 permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 no all any 0 any 0 both both yes all packets
Rule 1 is for the IBM Session Key daemon and will only appear in IP
Version 4 filter tables. It uses port number
4001 to control packets for refreshing the session key. It is an example
of how the port number can be used
for a specific purpose. This filter rule should not be modified except
for logging purposes.
Rules 10 and 11 are a set of user-defined rules that filter both inbound
and outbound icmp services of any type
between addresses 10.0.0.1 and 10.0.0.4 through tunnel #3.
Each rule may be viewed separately (using lsfilt) to make each field
clear.
HP-UX IPSec is administered with the following commands:
ipsec_config
ipsec_report
ipsec_admin
ipsec_policy
The syntax with respect of addresses and ports, resembles somewhat the
common syntax found in many
types of router, gateway, firewall products.
For example
0.0.0.0 here means all possible IPv4 addresses
10.0.0.0 here means all possible IPv4 addresses in network 10
For example, the "ipsec_config show all" command displays the entire
contents of the database.
profiles:
You can specify a profile file name with the -profile argument as part
of an ipsec_config command. By default,
ipsec_config uses the /var/adm/ipsec/.ipsec_profile profile file, which
is shipped with HP-UX IPSec.
In most topologies, you can use the default values supplied in the
/var/adm/ipsec/.ipsec_profile file.
Installation:
-------------
The software takes about 110MB. Most of the software goes into
/var/adm/ipsec.
As root:
# swinstall
This opens the "Software Selection" window and the "Specify Source"
window.
On the Specify Source window, change the Source Host Name if necessary.
Enter the mount point of the drive in the Source Depot Path field and
click OK to return to the
Software Selection window.
swinstall loads the fileset, runs the control scripts for the fileset,
and builds the kernel.
Estimated time for processing: 3 to 5 minutes.
When you install HP-UX IPSec, the HP-UX IPSec password is set to ipsec.
You must change the HP-UX IPSec password
after installing the product to use the autoboot feature and to load and
configure security certificates.
HP-UX IPSec uses the password to encrypt certificate files that contain
cryptography keys for
security certificates, and to control access to the ipsec_mgr security
certificate configuration GUI.
# ipsec_admin -newpasswd
-- Getting help
-- ------------
ok help / ok help [category] / ok help command
For example, if you want to see the help messages for all commands in
the category "diag", type the following:
ok help diag
-- OpenBoot Diagnostics
-- --------------------
Various hardware diagnostics can be run in OpenBoot.
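For example (the available tests depend on the OpenBoot version and the
hardware):
ok probe-scsi (list devices on the built-in SCSI bus)
ok test net (run the network interface selftest)
ok watch-net (monitor incoming network packets)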
-- OpenBoot NVRAM
-- --------------
System configuration parameters, like "auto-boot", are stored in NVRAM.
You can list or modify these configuration parameters and any changes
you make
remain in effect, even after a power cycle because the are stored in
NVRAM.
Once unix is loaded, root can also use the /usr/sbin/eeprom command to
view or change an OpenBoot parameter.
/usr/sbin/eeprom 'auto-boot?=true'
(quote the parameter so the shell does not interpret the "?" as a wildcard)
Solaris:
--------
nice:
-----
A high nice value means a low priority for your process: you are going
to be nice.
A low or negative value means a high priority: you are not very nice.
Examples:
The nice command uses the programname as an argument. The renice command
takes the PID as argument.
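For example (hypothetical program name and PID):
# nice -n 10 myprog & (start myprog with a lower priority)
# renice -n 5 -p 8200 (lower the priority of the process with PID 8200)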
System   Range
------   -----
Solaris  0-39
HP-UX    0-39
Red Hat  -20 to 20
FreeBSD  -20 to 20
prioctl:
--------
Syntax:
# prioctl -s -p <new_priority> -i pid <process_id>
Example:
# prioctl -s -p -5 -i pid 8200
AIX:
----
Syntax
schedtune [ -D | { [ -d n ] [ -e n ] [ -f n ] [ -h n ] [ -m n ] [ -p n ]
[ -r n ] [ -t n ] [ -w n ] } ]
Description
Priority-Calculation Parameters
The priority of most user processes varies with the amount of CPU time
the process has used recently. The CPU scheduler's priority calculations
are based on two parameters that are set with schedtune: -r and -d. The
r and d values are in thirty-seconds (1/32); that is, the formula used
by the scheduler to calculate the amount to be added to a process's
priority value as a penalty for recent CPU use is:
CPU penalty = (recently used CPU value of the process) * (r/32)
and the once-per-second recalculation of each process's recently used
CPU value is:
new recently used CPU value = (old recently used CPU value of the
process) * (d/32)
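For example (illustrative values only, not recommendations):
# schedtune (without flags: display the current settings)
# schedtune -r 10 (lower the weight of recent CPU usage in the penalty)
# schedtune -D (restore all values to their defaults)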
Solaris:
--------
Note 1:
-------
$ cd /etc
$ ls -al get*
lrwxrwxrwx 1 root root 21 Aug 10 2004 getty ->
../usr/lib/saf/ttymon
/var/saf/zsmon >sacadm -l
PMTAG PMTYPE FLGS RCNT STATUS COMMAND
zsmon ttymon - 0 ENABLED /usr/lib/saf/ttymon #
$ pmadm -l
PMTAG PMTYPE SVCTAG FLGS ID <PMSPECIFIC>
zsmon ttymon ttya u root /dev/term/a I
- /usr/bin/login - 9600 ldterm,ttcompat ttya login: - tvi925 y #
zsmon ttymon ttyb u root /dev/term/b I
- /usr/bin/login - 9600 ldterm,ttcompat ttyb login: - tvi925 y #
ls -al /dev/term
Note 2:
-------
Solaris 2.x systems come with a ttymon port monitor named zsmon and with
serial ports A and B already
configured with default settings for terminals, as shown in the
following example:
castle% /usr/sbin/sacadm -l
PMTAG PMTYPE FLGS RCNT STATUS COMMAND
zsmon ttymon - 0 ENABLED /usr/lib/saf/ttymon #
castle% /usr/sbin/pmadm -l
PMTAG PMTYPE SVCTAG FLGS ID <PMSPECIFIC>
tcp listen lp - root - p -
$ sacadm -l
PMTAG PMTYPE FLGS RCNT STATUS COMMAND
zsmon ttymon - 0 ENABLED /usr/lib/saf/ttymon #
Note 4:
-------
ttymon then writes the prompt and waits for user input. If
the user indicates that the speed is inappropriate by press-
ing the BREAK key, ttymon tries the next speed and writes
the prompt again. When valid input is received, ttymon
interprets the per-service configuration file for the port,
if one exists, creates a utmpx entry if required (see
utmpx(4)), establishes the service environment, and then
invokes the service associated with the port. Valid input
consists of a string of at least one non-newline character,
terminated by a carriage return. After the service ter-
minates, ttymon cleans up the utmpx entry, if one exists,
and returns the port to its initial state.
SERVICE INVOCATION
The service ttymon invokes for a port is specified in the
ttymon administrative file. ttymon will scan the character
string giving the service to be invoked for this port, look-
ing for a %d or a %% two-character sequence. If %d is found,
ttymon will modify the service command to be executed by
replacing those two characters by the full path name of this
port (the device name). If %% is found, they will be
replaced by a single %. When the service is invoked, file
descriptor 0, 1, and 2 are opened to the port device for
reading and writing. The service is invoked with the user
ID, group ID and current home directory set to that of the
user name under which the service was registered with
ttymon. Two environment variables, HOME and TTYPROMPT, are
added to the service's environment by ttymon. HOME is set to
the home directory of the user name under which the service
is invoked. TTYPROMPT is set to the prompt string configured
for the service on the port. This is provided so that a ser-
vice invoked by ttymon has a means of determining if a
prompt was actually issued by ttymon and, if so, what that
prompt actually was.
See ttyadm(1M) for options that can be set for ports moni-
tored by ttymon under the Service Access Controller.
SECURITY
ttymon uses pam(3PAM) for session management. The PAM con-
figuration policy, listed through /etc/pam.conf, specifies
the modules to be used for ttymon. Here is a partial
pam.conf file with entries for ttymon using the UNIX session
management module.
Note 5:
-------
1. Become superuser.
2. Type sacadm -l and press Return. Check the output to make sure that
a ttymon port monitor is configured.
It is unlikely that you will need to add a new port monitor. If you
do need to add one, use "sacadm -a" as shown in the transcript below.
3. Type
oak% su
Password:
# sacadm -l
PMTAG PMTYPE FLGS RCNT STATUS COMMAND
zsmon ttymon - 0 ENABLED /usr/lib/saf/ttymon #
# sacadm -a -p ttymon0 -t ttymon -c /usr/lib/saf/ttymon -v `ttyadm -V`
# sacadm -l
PMTAG PMTYPE FLGS RCNT STATUS COMMAND
ttymon0 ttymon - 0 STARTING /usr/lib/saf/ttymon #
zsmon ttymon - 0 ENABLED /usr/lib/saf/ttymon #
# pmadm -a -p ttymon0 -s tty00 -i root -fu -v `ttyadm -V` -m "`ttyadm -t tvi925 -d /dev/term/00 -l 9600 -s /usr/bin/login`"
# pmadm -l
PMTAG PMTYPE SVCTAG FLGS ID <PMSPECIFIC>
zsmon ttymon ttya u root /dev/term/a I - /usr/bin/login - 9600 ldterm,ttcompat ttya login: - tvi925 y #
zsmon ttymon ttyb u root /dev/term/b I - /usr/bin/login - 9600 ldterm,ttcompat ttyb login: - tvi925 y #
ttymon0 ttymon tty00 u root /dev/term/00 - - - /usr/bin/login - 9600 login: - tvi925 - #
Note 7:
-------
3.23) What has happened to getty? What is pmadm and how do you use it?
I was hoping you wouldn't ask. PMadm stands for Port Monitor Admin, and
it's part of a ridiculously complicated
bit of software over-engineering that is destined to make everybody an
expert.
Best advice for workstations: don't touch it! It works out of the box.
For servers, you'll have to read the manual.
This should be in admintool in Solaris 2.3 and later. For now, here are
some basic instructions from Davy Curry.
1. Do a "pmadm -l" to see what's running. The serial ports on the CPU
board are probably already being monitored by "zsmon".
2. If the port you want is not being monitored, you need to create a new
port monitor with the command
sacadm -a -p PMTAG -t ttymon -c /usr/lib/saf/ttymon -v VERSION
where PMTAG is the name of the port monitor, e.g. "zsmon" or "alm1mon",
and VERSION is the output of "ttyadm -V".
3. If the port you want is already being monitored, and you want to
change something, you need to delete the current instance of the port
monitor. To do this, use the command
pmadm -r -p PMTAG -s SVCTAG
where PMTAG and SVCTAG are as given in the output from "pmadm -l". Note
that if the "I" is present in the <PMSPECIFIC> field (as it is above),
you need to get rid of it.
4. Then add the service, with a command like the following (a sketch
reconstructed from the placeholder list below):
pmadm -a -p PMTAG -s SVCTAG -i root -fu -v `ttyadm -V` -m "`ttyadm -m ldterm,ttcompat -p \"PROMPT\" -S YORN -T TERMTYPE -d DEVICE -l TTYID -s /usr/bin/login`"
Note the assorted quotes; Bourne shell (sh) and Korn (ksh) users leave
off the second backslash!
In the above:
PMTAG is the port monitor name you made with "sacadm", e.g. "zsmon".
SVCTAG is the service tag, which can be the name of the port, e.g.,
"ttya" or "tty21".
PROMPT is the prompt you want to print, e.g. "login: ".
YORN is "y" to turn software carrier on (you want this for directly
connected terminals) and "n" to leave it off
(you want this for modems).
TERMTYPE is the value you want in $TERM.
DEVICE is the name of the device, e.g. "/dev/term/a" or "/dev/term/21".
TTYID is the line you want from /etc/ttydefs that sets the baud rate and
stuff. I suggest you use one of the
"contty" ones for directly connected terminals.
Note 8:
-------
-- The sacadm command adds and removes port monitors. This command is
your main link with the Service Access Controller (SAC)
and its administrative file (/etc/saf/_sactab).
45: CDE:
========
The login Server, also called the Login Manager, usually starts up the
CDE environment when the system
is booted and the "/etc/rc2.d/S99dtlogin" script is run.
The login Server is a server responsible for displaying a graphical
logon screen, authenticating users,
and starting a user session.
It can display a login screen on local or network bitmap displays
It can also be started from the command line, for example, to start the
Login Server use either:
# /etc/init.d/dtlogin start
or
# /usr/dt/bin/dtlogin -daemon; exit
To set the Login Manager to start CDE the next time the system is
booted, give the command
# /usr/dt/bin/dtconfig -e
To stop the Login Server, use either:
# /etc/init.d/dtlogin stop
or
# /usr/dt/bin/dtconfig -kill
If you do not want CDE to start when the system is booted, use
# /usr/dt/bin/dtconfig -d
Upon startup, the Login Server checks the Xservers file to determine if
an X server needs to be
started and to determine if and how login screens should be displayed on
local or network displays.
To modify Xservers, copy Xservers from /usr/dt/config to /etc/dt/config.
After modifying, tell the login server to reread Xservers by
# /usr/dt/bin/dtconfig -reset
If your login server has no bitmap display, you should comment out the
corresponding line in the Xservers file.
So when the login server starts, it runs in the background waiting for
requests from
network displays.
ABOUT MAKE
The make utility executes a list of shell commands associated with each
target, typically to create or update a file of the same name. The
makefile contains entries that describe how to bring a target up to date
with respect to those on which it depends, which are called
dependencies. Since each dependency is itself a target, it may have
dependencies of its own. Targets, dependencies, and sub-dependencies
comprise a tree structure that make traces when deciding whether or not
to rebuild a target.
/opt/app/oracle/product/9.2/sqlplus/lib >/usr/ccs/bin/make -f
ins_sqlplus.mk install
To be able to use 'make', 'as', and 'ld', you need to make sure that
/usr/ccs/bin is in your path.
> For users and scripts that expect the BSD style options, in cases such
> as ps & ls where they are incompatible with the SysV options found in
> the /usr/bin versions.
It's there for historical reasons. SunOS 4.x was based on BSD unix.
Solaris 2.x (= SunOS 5.x) was based on SYSV, with a bunch of commands
having different syntax and behavior. To ease the transition, the
/usr/ucb directory was created to hold the incompatible BSD versions.
People who really wanted BSD could put /usr/ucb before /usr in their
PATH.
Note 3:
-------
Now suppose only one file changes, and the files are not small but
contain many lines of code; then a better approach could be this:
Suppose you separate the compilation and linking stages:
- compile into object files:
f90 -c sortit_main.f90 readN.f90 sortarray.f90
Suppose there were many source files, and thus many object files.
In that case it's better to make one definition file which explains it
all. If one source changes, the corresponding object file is out of
date and needs to be recreated. All that information can be put in a
definition file, for example:
sortit_main.o: sortit_main.f90
f90 -c sortit_main.f90
readN.o: readN.f90
f90 -c readN.f90
sortarray.o: sortarray.f90
f90 -c sortarray.f90
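To complete the picture, a minimal sketch of the link target that such a
definition file would also contain (assuming the executable is to be
named sortit; note that the command line under a target must begin with
a tab):
sortit: sortit_main.o readN.o sortarray.o
        f90 -o sortit sortit_main.o readN.o sortarray.o
Now make only recompiles the object files whose sources changed, and
then relinks.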
make -f makefile1.mk
or
make -f makefile1.mk install
One of the labels (targets) present in the Makefile happens to be named
'install'.
Further explanation:
--------------------
To make this even easier, the make utility has a set of built-in rules
so you only need to tell it what new things
it needs to know to build your particular utility. For example, if you
typed in make love, make would first look
for some new rules from you. If you didn't supply it any then it would
look at its built-in rules. One of those
built-in rules tells make that it can run the linker (ld) on a program
name ending in .o to produce the
executable program.
So, make would look for a file named love.o. But, it wouldn't stop
there. Even if it found the .o file,
it has some other rules that tell it to make sure the .o file is up to
date. In other words, newer than
the source program. The most common source program on Linux systems is
written in C and its file name ends in .c.
The old UNIX joke, by the way, is what early versions of make said when
it could not find the necessary files.
In the example above, if there was no love.o, love.c or any other source
format, the program would have said:
make: don't know how to make love. Stop.
Getting back to the task at hand, the default file for additional rules
is Makefile in the current directory.
If you have some source files for a program and there is a Makefile file
there, take a look. It is just text.
The lines that have a word followed by a colon are targets. That is,
these are words you can type following
the make command name to do various things. If you just type make with
no target, the first target will be executed.
What you will likely see at the beginning of most Makefile files are
what look like some assignment statements.
That is, lines with a couple of fields with an equal sign between them.
Surprise, that is what they are.
They set internal variables in make. Common things to set are the
location of the C compiler (yes, there is a default),
version numbers of the program and such.
As more and more software became available and more and more POSIX-
compliant platforms appeared, this got harder
and harder. This is where configure comes in. It is a shell script
(generally written by GNU Autoconf) that goes out and looks for software
and even tries various things to see what works.
It then takes its instructions
from Makefile.in and builds Makefile (and possibly some other files)
that work on the current system.
You run configure (you usually have to type ./configure as most people
don't have the current directory in their
search path). This builds a new Makefile.
Type make. This builds the program. That is, make would be executed, it
would look for the first target in Makefile
and do what the instructions said. The expected end result would be to
build an executable program.
Now, as root, type make install. This again invokes make; make finds the
target install in Makefile and follows
the directions to install the program.
This is a very simplified explanation but, in most cases, this is what
you need to know. With most programs,
there will be a file named INSTALL that contains installation
instructions that will fill you in on
other considerations. For example, it is common to supply some options
to the configure command to change the
final location of the executable program. There are also other make
targets such as clean that remove unneeded
files after an install and, in some cases test which allows you to test
the software between the make and
make install steps.
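As a sketch, the whole sequence for a typical source distribution then
looks like this (the --prefix value is just an example):
$ ./configure --prefix=/usr/local
$ make
$ make test          (only if the package provides a test target)
# make install       (as root)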
47. mkitab:
===========
AIX:
mkitab Command
Purpose
Makes records in the /etc/inittab file.
Syntax
mkitab [ -i Identifier ] { [ Identifier ] : [ RunLevel ] : [ Action ] :
[ Command ] }
Description
The mkitab command adds a record to the /etc/inittab file.
The Identifier:RunLevel:Action:Command parameter string specifies the
new entry to the /etc/inittab file.
You can insert a record after a specific record using the -i Identifier
flag. The command finds the field
specified by the Identifier parameter and inserts the new record after
the one identified by
the -i Identifier flag.
Example:
To add a new record to the /etc/inittab file, telling the init command
to handle a login on tty2,
enter:
# mkitab "tty002:2:respawn:/usr/sbin/getty /dev/tty2"
To change currently existing entries in the file, use the chitab
command. For example, to change
tty2's runlevel, enter the command
# chitab "tty002:23:respawn:/usr/sbin/getty /dev/tty2"
rmitab Command
Purpose
Removes records in the /etc/inittab file.
Syntax
rmitab Identifier
Description
The rmitab command removes an /etc/inittab record. You can specify a
record to remove by
using the Identifier parameter. The Identifier parameter specifies a
field of one to fourteen
characters used to uniquely identify an object. If the Identifier field
is not unique, the command is unsuccessful.
Examples
To remove the tty entry for tty2 , enter:
rmitab "tty002"
48. refresh:
============
AIX:
----
You can also use the refresh command, after for example editing a .conf
file and you need the
subsystem to reparse the config file.
For example, you have started the httpd demon
# startsrc -s httpd
Now you have edited the /etc/httpd.conf file. To refresh the daemon, use
the following command:
# refresh -s httpd
In general, and in most cases, daemons which are not under the control
of some resource controller, can be
stopped or started in a way as shown in the following "stanza":
# <script_name> stop
# <script_name> start
On many occasions, a script associated with the daemon is available that
will take "stop" or "start" as an argument.
49. Filesystems:
================
49.1 Solaris:
-------------
When you create a UFS filesystem, the disk slice is divided into
cylinder groups. The slice is then divided
into blocks to control and organize the structure of files within the
cylinder group.
Each block performs a specific function in the filesystem.
A UFS filesystem has the following types of blocks:
Boot block: stores information used when booting the system, and is the
first 8KB in a slice (partition).
Superblock: stores much of the information about the filesystem. It's
located after the bootblock.
Inode : stores all information about a file except its name
datablock : stores data for each file
The bootblock stores the procedures used in booting the system. Without
a bootblock the system does not boot.
If a filesystem is not used for booting, the bootblock is left blank.
The bootblock appears only
in the first cylinder group (cylinder group 0) and is the first 8KB in a
slice.
An inode contains all the information about a file except its name which
is kept in a directory.
An inode is 128 bytes. For each file there corresponds one inode.
The inode information is kept in the cylinder information block and
contains the
following:
The maximum number of files per UFS file system is determined by the
number of inodes
allocated for a filesystem. The number of inodes depends on the amount
of diskspace that
is allocated for each inode and the total size of the filesystem.
By default, one inode is allocated for each 2KB of dataspace. You can
change this default
with the newfs command.
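For example, a sketch of newfs with a different inode density (the -i
flag gives the number of bytes per inode; 4096 means fewer inodes than
the default):
# newfs -i 4096 /dev/rdsk/c0t3d0s7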
 inode: 15 block addresses (12 direct; 1 single, 1 double, 1 triple indirect)
 -------------------------------------------------------------
 | d1 | .. | d12 | single ind. | double ind. | triple ind.   |
 -------------------------------------------------------------
    |               |              |
    v               v              v
 data blocks    ----------     ----------
                | ptrs .. |    | ptrs .. |
                ----------     ----------
                    |              |
                    v              v
                data blocks    ----------
                               | ptrs .. |
                               ----------
                                   |
                                   v
                               data blocks
 ---------------------------------------------------------------------
 | B.B. | S.B. | Inodes |  ...  | Many Data Blocks ......             |
 ---------------------------------------------------------------------
  (B.B. = boot block, S.B. = superblock)
# newfs /dev/rdsk/c0t3d0s7
49.2 AIX:
---------
Although we use the LVM to create Volume Groups, and Logical Volumes
within a Volume Group,
a file system resides on a single logical volume.
Every file and directory belongs to a file system within a logical
volume.
The mkfs (make file system) command, or crfs command, or the System
Management Interface Tool (smit command)
creates a file system on a logical volume.
- crfs
The crfs command creates a file system on a logical volume within a
previously created volume group.
A new logical volume is created for the file system unless the name of
an existing logical volume
is specified using the -d flag. An entry for the file system is put into
the /etc/filesystems file.
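A minimal sketch of such a crfs command (the volume group, mount point
and size are just examples; on older AIX levels, give the size in
512-byte blocks instead of "1G"):
# crfs -v jfs2 -g datavg -m /data -a size=1G
This creates a new logical volume in volume group datavg, makes a JFS2
filesystem on it, and adds an entry for mount point /data to
/etc/filesystems.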
By the way, a newly installed AIX 5.x system has the following
filesystem structure:
"/" root is a filesystem. Certain standard directories are present
within "/", like for example /bin.
But also a set of separate filesystems like hd2=/usr, hd3=/tmp,
hd9var=/var, are MOUNTED over the
correspondingly named directories or mountpoints.
/
|
----------------------------------------
| | | | | | |
/bin /dev /etc /usr /tmp /var /home
directories file systems
So, when you unmount all extra (later on) defined filesystems like
/export, /software etc..
you still have / (with its standard directories like /etc, /bin etc..)
and the standard filesystems
like /usr etc..
inodes:
-------
The offset of a particular i-node within the i-node list of the file
system produces the unique number
(i-number) by which the operating system identifies the i-node. A bit
map, known as the i-node map, tracks the
availability of free disk i-nodes for the file system.
Field Contents
i_mode Type of file and access permission mode bits
i_size Size of file in bytes
i_uid Access permissions for the user ID
i_gid Access permissions for the group ID
i_nblocks Number of blocks allocated to the file
i_mtime Last time file was modified
i_atime Last time file was accessed
i_ctime Last time i-node was modified
i_nlink Number of hard links to the file
i_rdaddr[8] Real disk addresses of the data
i_rindirect Real disk address of the indirect block, if any
The i_rdaddr field within the disk i-node contains 8 disk addresses.
These addresses point to the first
8 data blocks assigned to the file. The i_rindirect field address points
to an indirect block.
Indirect blocks are either single indirect or double indirect. Thus,
there are three possible geometries
of block allocation for a file: direct, indirect, or double indirect.
Use of the indirect block and other
file space allocation geometries are discussed in the article JFS File
Space Allocation.
The i-nodes that represent files that define devices contain slightly
different information from i-nodes
for regular files. Files associated with devices are called special
files. There are no data block addresses
in special device files, but the major and minor device numbers are
included in the i_rdev field.
When a file is opened, the information in the disk i-node is copied into
an in-core i-node for easier access.
The in-core i-node structure contains additional fields which manage
access to the disk i-node's valuable data.
The fields of the in-core i-node are defined in the inode.h file. Some
of the additional information tracked
by the in-core i-node is:
Field Contents
di_mode Type of file and access permission mode bits
di_size Size of file in bytes
di_uid Access permissions for the user ID
di_gid Access permissions for the group ID
di_nblocks Number of blocks allocated to the file
di_mtime Last time file was modified
di_atime Last time file was accessed
di_ctime Last time i-node was modified
di_nlink Number of hard links to the file
di_btroot Root of B+ tree describing the disk addresses of the data
50. sendmail:
=============
Solaris:
--------
To receive SMTP mail from the network, run sendmail as a daemon during
system startup. The sendmail daemon listens
to TCP port 25 and processes incoming mail. In most cases the code to
start sendmail is already in one of
your boot scripts. If it isn't, add it.
First, this code checks for the existence of the sendmail program. If
the program is found, the code displays
a startup message on the console and runs sendmail with two command-line
options.
One option, the -q option, tells sendmail how often to process the mail
queue. In the sample code, the queue is
processed every 15 minutes (-q15m), a fairly frequent setting.
Don't set this time too low. Processing the queue too often can cause
problems if the queue grows very large,
due to a delivery problem such as a network outage. For the average
desktop system, every hour (-q1h) or
half hour (-q30m) is an adequate setting.
The other option relates directly to receiving SMTP mail. The option
(-bd) tells sendmail to run as a daemon
and to listen to TCP port 25 for incoming mail. Use this option if you
want your system to accept incoming TCP/IP mail.
The Linux example is a simple one. Some systems have a more complex
startup script.
Solaris 2.5, which dedicates the entire /etc/init.d/sendmail script to
starting sendmail, is a notable example.
The mail queue directory holds mail that has not yet been delivered. It
is possible that the system went down while
the mail queue was being processed. Versions of sendmail prior to
sendmail V8, such as the version that comes
with Solaris 2.5, create lock files when processing the queue. Therefore
lock files may have been left
behind inadvertently and should be removed during the boot. Solaris
checks for the existence of the mail queue directory
and removes any lock files found there. If a mail queue directory
doesn't exist, it creates one. The additional
code found in some startup scripts is not required when running sendmail
V8.
All you really need is the sendmail command with the -bd option.
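Put together, the classic startup line is simply (a sketch; adjust the
queue interval to taste):
# /usr/lib/sendmail -bd -q15m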
groupadd -g 25 smmsp
useradd -u 25 -g smmsp -d / smmsp
Then edit /etc/passwd and remove the shell. You want the line to look
something like "smmsp:x:25:25::/:".
I notice that Slackware has the line set to
"smmsp:x:25:25:smmsp:/var/spool/clientmqueue:", and that's okay too,
so I leave it at that.
# mail -f
Mail [5.2 UCB] [AIX 5.X] Type ? for help.
"/root/mbox": 0 messages
# mail -f
Mail [5.2 UCB] [AIX 5.X] Type ? for help.
"/root/mbox": 3 messages
> 1 root Tue Nov 1 17:05 13/594
2 MAILER-DAEMON Sun Oct 30 07:59 109/3527 "Postmaster notify:
see trans"
3 daemon Wed Jan 26 10:59 34/1618
? 1
Message 1:
From root Tue Nov 1 17:05:34 2005
Date: Tue, 1 Nov 2005 17:05:34 +0100
From: root
To: root
..
..
51. SAR:
========
AIX:
----
sar Command
Purpose
Collects, reports, or saves system activity information.
Syntax
/usr/sbin/sar [ { -A | [ -a ] [ -b ] [ -c ] [ -k ] [ -m ] [ -q ] [ -r ]
[ -u ] [ -V ] [ -v ] [ -w ] [ -y ] } ]
[ -P ProcessorIdentifier, ... | ALL ] [ -ehh [ :mm [ :ss ] ] ] [
-fFile ] [ -iSeconds ] [ -oFile ] [ -shh [ :mm [ :ss ] ] ]
[ Interval [ Number ] ]
To report current tty activity every 2 seconds, 20 times (40 seconds in
all), enter:
# sar -y -r 2 20
To report message, semaphore, and cpu activity for all processors and
system-wide, enter:
# sar -mu -P ALL
On a four-processor system, this produces output similar to the
following (the last line indicates
system-wide statistics for all processors):
cpu msgs/s sema/s %usr %sys %wio %idle
0 7 2 45 45 5 5
1 5 0 27 65 3 5
2 3 0 55 40 1 4
3 4 1 48 41 4 7
- 19 3 44 48 3 5
On the obsolete AIX versions 4.2 through 5.1, you should also make sure
that the schedtune and vmtune utilities
can be found in /usr/samples/kernel. If they're not there, install
bos.adt.samples. These utilities are used
to report on the tunable parameters for the VMM and the scheduler, and
SarCheck is much more useful if it can
analyze the values of these parameters. On newer versions of AIX, this
is not necessary because we look at
ioo, schedo, vmo, and vmstat -v for the data we need.
Solaris:
--------
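For example, a sketch of basic sar usage on Solaris (sample CPU
utilization every 2 seconds, 5 times; assumes the system accounting
packages are installed):
$ sar -u 2 5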
52. Xwindows:
=============
The client will often run on another host, often a powerful Unix box
that would commonly be known as a "server."
The X client might itself also be a "server process" from some other
point of view; there is no contradiction here.
(Although calling it such may be unwise as it will naturally result in
further confusion.)
The upshot (and the point) of all this is that the X system allows
processes on various computers on a network to display their output on
display devices elsewhere on the network.
- GNOME:
It seeks to provide:
KDE had been using the MICO CORBA ORB to construct an application
embedding framework known as KOM and OpenParts.
According to the [ KDE-Two: Second KDE Developers Conference], they
found themselves unable to use
the standardized CORBA framework, citing problems with concurrency,
reliability and performance, and have
instead decided to create Yet Another IPC Framework involving a shared
library called libICE.
On the other hand, the KDE Technology Overview for Version 2.0 provides
a somewhat different story,
so it's not completely clear just what is going on; they indicate the
use of an IPC scheme called DCOP,
indicating it to be a layer atop libICE, with the option of also using
XML-RPC as an IPC scheme.
X &
xhost +
export DISPLAY=:0
When using X from a terminal server session, take note of the right ip
and port.
$ cd ~
$ chmod 755 .
$ chmod 777 .Xauthority
$ set |grep DISPLAY
DISPLAY=localhost:10.0
Starting xdm
xdm is typically started at system boot time. This is typically done in
either an rc file in the /etc directory,
or in the inittab file.
IBM wants xdm to integrate into their src subsystem. The AIX version of
the above command is a bit different.
The problem with this is that since xdm is not supported in R4 under
AIX, it is not really integrated into
the src subsystem, so the attendant startup, shutdown, and other src
commands do not work properly.
An alternative, which works on many other systems as well, is to start
xdm from the inittab file.
The -nodaemon flag keeps xdm from starting a daemon and exiting, which
would cause the respawn option
to start another copy of xdm, whereupon the process would repeat itself,
quickly filling up your
process table and dragging your system to its knees attempting to run
oodles of managers and servers.
xdm attempts to use system lock calls to prevent this from happening. It
nevertheless happens on some systems.
While the heart of Red Hat Linux is the kernel, for many users, the face
of the operating system is the
graphical environment provided by the X Window System, also called
simply X.
This chapter is an introduction to the behind-the-scenes world of
XFree86, the open-source implementation
of X provided with Red Hat Linux.
Red Hat Linux 8.0 uses XFree86 version 4.2 as the base X Window System,
which includes the various
necessary X libraries, fonts, utilities, documentation, and development
tools.
/usr/X11R6/ directory
A directory containing X client binaries (the bin directory), assorted
header files (the include directory),
libraries (the lib directory), and manual pages (the man directory), and
various other X documentation
(the /usr/X11R6/lib/X11/doc/ directory).
/etc/X11/ directory
The /etc/X11/ directory hierarchy contains all of the configuration
files for the various components
that make up the X Window System. This includes configuration files for
the X server itself,
the X font server (xfs), the X Display Manager (xdm), and many other
base components.
Display managers such as gdm and kdm, as well as various window
managers, and other X tools also store their
configuration in this hierarchy.
53.1 AIX:
---------
# mksysb -i /dev/rmt0              (bootable system image of rootvg)
# backup -0 -uf /dev/rmt0 /data    (level 0 backup by name of /data)
# tctl -f /dev/rmt0 rewind         (rewind the tape)
# savevg -if /dev/rmt0 uservg      (backup of the uservg volume group)
It's very important which /dev/rmtx.y you use in some backup command
like tar. See the following table:

special file    rewind on close   retension on open   density setting
----------------------------------------------------------------------
/dev/rmtx       yes               no                  #1
/dev/rmtx.1     no                no                  #1
/dev/rmtx.2     yes               yes                 #1
/dev/rmtx.3     no                yes                 #1
/dev/rmtx.4     yes               no                  #2
/dev/rmtx.5     no                no                  #2
/dev/rmtx.6     yes               yes                 #2
/dev/rmtx.7     no                yes                 #2
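For example, the no-rewind device lets you stack several archives on one
tape (a sketch):
# tar -cvf /dev/rmt0.1 /data1
# tar -cvf /dev/rmt0.1 /data2     (written after the first archive)
# tctl -f /dev/rmt0 rewind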
AIX only:
---------
- The WSM can be run in stand-alone mode, that is, you can use the tool
to perform system administration
on the AIX system you are currently running on.
- However, the WSM also supports a client-server environment.
In this environment, it is possible to administer an AIX system from a
remote PC or from another AIX system
using a graphics terminal.
In this environment, the AIX system being administered is the Server and
the system you are
performing the administration functions from is the client.
The client can operate in either application mode on AIX with Java 1.3,
or in applet mode
on platforms that support Java 1.3. Thus, the AIX system can be managed
from another AIX system
or from a PC with a browser and Java.
IBM VisualAge is a command-line C and C++ compiler for the AIX operating
system.
You can use VisualAge as a C compiler for files with a .c suffix, or as
a C++ compiler
for files with a .C, .cc, .cpp or .cxx suffix. The compiler processes
your text-based
program source files to create an executable object module.
In most cases you should use the xlC command to compile your C++ source
files,
and the xlc command to compile C source files.
You can use VisualAge to develop both 32-bit and 64-bit applications.
If you want to install VisualAge C++ for AIX, check first if the
following required filesets are installed.
Make sure the AppDev package has been installed in order to have access
to commands like "make" etc...
Notes:
======
Note 1:
-------
Usage:
xlC [ option | inputfile ]...
xlc [ option | inputfile ]...
cc [ option | inputfile ]...
c89 [ option | inputfile ]...
xlC128 [ option | inputfile ]...
xlc128 [ option | inputfile ]...
cc128 [ option | inputfile ]...
xlC_r [ option | inputfile ]...
xlc_r [ option | inputfile ]...
cc_r [ option | inputfile ]...
xlC_r4 [ option | inputfile ]...
xlc_r4 [ option | inputfile ]...
cc_r4 [ option | inputfile ]...
CC_r4 [ option | inputfile ]...
xlC_r7 [ option | inputfile ]...
xlc_r7 [ option | inputfile ]...
cc_r7 [ option | inputfile ]...
Description:
The xlC and related commands compile C and C++ source files.
They also process assembler source files and object files. Unless the
-c option is specified, xlC calls the linkage editor to produce a
single object file. Input files may be any of the following:
1. file name with .C suffix: C++ source file
2. file name with .i suffix: preprocessed C or C++ source file
3. file name with .c suffix: C source file
4. file name with .o suffix: object file for ld command
5. file name with .s suffix: assembler source file
6. file name with .so suffix: shared object file
xlc : ANSI C compiler with UNIX header files. Use this command for most
new C programs.
c89 : Strict ANSI C compiler with ANSI header files. Use this command
for maximum portability of your C programs.
xlC : Native (i.e., non-cfront) C++ compiler. Use this command for
compiling and linking all C++ code.
The following additional command names, plus their "-tst" and "-old"
variants, are also available at SLAC
for compiling and linking reentrant programs:
xlc_r, cc_r; xlC_r : For use with POSIX threads
xlc_r4, cc_r4; xlC_r4, CC_r4 : For use with DCE threads
Note 2:
-------
- insert CD
- smitty install_latest
- press F4 to display all devices
- select CDROM device
- press F4 to select the filesets you want to install
After you have installed VisualAge C++ for AIX, you need to enroll your
license for the product
before using it.
Note 3:
-------
Example 1:
Example 2:
input_files are source files (.c or .i), object files (.o), or assembler
files (.s)
xlc prog.c
After the xlc command completes, you will see a new executable file
named "a.out" in your directory.
For example, you may compile a subprogram "second.c" and then use it in
your main program "prog.c"
with the following sequence of commands:
xlc -c second.c
xlc prog.c second.o
/usr/lib/crt0_64.o
/usr/css/lib/crt0_64.o
root@zd110l02:/root#lslpp -l vacpp*
lslpp: Fileset vacpp* not installed.
root@zd110l02:/root#lslpp -l xlC*
Fileset Level State Description
------------------------------------------------------------------------
----
Path: /usr/lib/objrepos
xlC.aix50.rte 7.0.0.6 COMMITTED C Set ++ Runtime for
AIX 5.0
xlC.cpp 6.0.0.0 COMMITTED C for AIX Preprocessor
xlC.rte 7.0.0.1 COMMITTED C Set ++ Runtime
Note 4:
-------
install:
# cd /prj/tmp
# tar xv (tape in rmt0)
# ./driver
config licentie:
# /usr/vac/bin/vac6_licentie
# /usr/opt/ifor/ls/aix/bin/i4blt -r6
test:
For example, compile and run a trivial program:
#include <stdio.h>
int main(void)
{
printf("Hello World!\n");
return 0;
}
now compile it
# /usr/vac/bin/xlc hello.c -o hello
now run it
# ./hello
Note 5: LUM
-----------
The i4lmd subsystem starts the network license server on the local node.
Examples
Start a license server and do not log checkin, vendor, product, timeout,
or message events:
In /etc/inittab:
cat /etc/i4ls.rc
#!/bin/ksh
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos520 src/bos/usr/opt/ifor/var/i4ls.rc 1.8
#
# Licensed Materials - Property of IBM
#
# (C) COPYRIGHT International Business Machines Corp. 1996,2001
# All Rights Reserved
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# IBM_PROLOG_END_TAG
/usr/opt/ifor/ls/os/aix/bin/i4cfg -start -nopause
exit 0
# ps -ef
inittab:
--------
init:2:initdefault:
brc::sysinit:/sbin/rc.boot 3 >/dev/console 2>&1 # Phase 3 of system boot
powerfail::powerfail:/etc/rc.powerfail 2>&1 | alog -tboot > /dev/console # Power Failure Detection
mkatmpvc:2:once:/usr/sbin/mkatmpvc >/dev/console 2>&1
atmsvcd:2:once:/usr/sbin/atmsvcd >/dev/console 2>&1
load64bit:2:wait:/etc/methods/cfg64 >/dev/console 2>&1 # Enable 64-bit execs
tunables:23456789:wait:/usr/sbin/tunrestore -R > /dev/console 2>&1 # Set tunables
rc:23456789:wait:/etc/rc 2>&1 | alog -tboot > /dev/console # Multi-User checks
rcemgr:23456789:once:/usr/sbin/emgr -B > /dev/null 2>&1
fbcheck:23456789:wait:/usr/sbin/fbcheck 2>&1 | alog -tboot > /dev/console # run /etc/firstboot
srcmstr:23456789:respawn:/usr/sbin/srcmstr # System Resource Controller
rctcpip:23456789:wait:/etc/rc.tcpip > /dev/console 2>&1 # Start TCP/IP daemons
sniinst:2:wait:/var/adm/sni/sniprei > /dev/console 2>&1
: rcnfs:23456789:wait:/etc/rc.nfs > /dev/console 2>&1 # Start NFS Daemons
cron:23456789:respawn:/usr/sbin/cron
: piobe:2:wait:/usr/lib/lpd/pio/etc/pioinit >/dev/null 2>&1 # pb cleanup
qdaemon:23456789:wait:/usr/bin/startsrc -sqdaemon
: writesrv:23456789:wait:/usr/bin/startsrc -swritesrv
uprintfd:23456789:respawn:/usr/sbin/uprintfd
shdaemon:2:off:/usr/sbin/shdaemon >/dev/console 2>&1 # High availability daemon
l2:2:wait:/etc/rc.d/rc 2
logsymp:2:once:/usr/lib/ras/logsymptom # for system dumps
: itess:23456789:once:/usr/IMNSearch/bin/itess -start search >/dev/null 2>&1
diagd:2:once:/usr/lpp/diagnostics/bin/diagd >/dev/console 2>&1
: httpdlite:23456789:once:/usr/IMNSearch/httpdlite/httpdlite -r /etc/IMNSearch/httpdlite/httpdlite.conf & >/dev/console 2>&1
ha_star:h2:once:/etc/rc.ha_star >/dev/console 2>&1
cons:0123456789:respawn:/usr/sbin/getty /dev/console
hntr2mon:2:once:/opt/hitachi/HNTRLib2/etc/D002start
dlmmgr:2:once:startsrc -s DLMManager
ntbl_reset:2:once:/usr/bin/ntbl_reset_datafiles
rcml:2:once:/usr/sni/aix52/rc.ml > /dev/console 2>&1
perfstat:2:once:/usr/lib/perf/libperfstat_updt_dictionary >/dev/console 2>&1
ctrmc:2:once:/usr/bin/startsrc -s ctrmc > /dev/console 2>&1
tty1:2:off:/usr/sbin/getty /dev/tty1
tty0:2:off:/usr/sbin/getty /dev/tty0
: i4ls:2:wait:/etc/i4ls.rc > /dev/null 2>&1 # Start i4ls
mF:2345:wait:sh /etc/mflmrcscript > /dev/null 2>&1
i4ls:2:wait:/etc/i4ls.rc > /dev/null 2>&1 # Start i4ls
documentum:2:once:/etc/rc.documentum start >/dev/null 2>&1
Note 7:
-------
Contents
Invoking the Compiler
C Compiler Modes
C++ Compiler Modes
Source Files and Preprocessing
Default Datatype Sizes
Distributed-memory parallelism
Shared-memory parallelism
64-bit addressing
Optimization
Related Information
Memory Management
Porting programs from the Crays to the SP
Mixing C and Fortran
------------------------------------------------------------------------
--------
% xlc source.c
This will produce an executable named a.out. The other C Compiler modes
are described below in the
section C Compiler Modes.
% xlC source.C
This will produce an executable named a.out. The other C++ Compiler
modes are described below
in the section C++ Compiler Modes.
Note: There is no on-line man page for the C++ compiler. "man xlC"
brings up the man page for the C compiler.
For complete documentation of C++ specific options and conventions see
the on-line C++ manual.
The commands xlc, mpcc, and mpCC all have on-line man pages.
C Compiler Modes
There are four basic compiler invocations for C compiles: xlc, cc, c89,
and mpcc. All but c89 have one or more
subinvocations with different defaults.
xlc
xlc invokes the compiler for C with an ansi language level. This is the
basic invocation that IBM recommends.
xlc_r
This invokes the thread safe version of xlc. It should be used when any
kind of multi-threaded code is being built.
This is equivalent to invoking the compiler as xlc -D_THREAD_SAFE and
the loader as
xlc -L/usr/lib/threads -L/usr/lib/dce -lc_r -lpthreads.
xlc128
This is equivalent to invoking the compiler as xlc -qldbl128 -lC128. It
increases the size of long double data types
from 64 to 128 bits.
cc
cc invokes the compiler for C with an extended language level. This is
for source files with legacy C code
that IBM refers to as "RT compiler extensions". This includes older
pre-ANSI features such as those in
Kernighan and Ritchie's "The C Programming Language".
The two most useful subinvocations are cc_r which is the cc equivalent
of xlc_r and cc128 which is the cc equivalent
of xlc128.
c89
c89 should be used when strict conformance to the ANSI C standard
(ISO/IEC 9899:1990) is desired.
There are no subinvocations associated with this compiler invocation.
mpcc
mpcc is a shell script that compiles C programs with the cc compiler
while linking in the Partition Manager,
Message Passing Interface (MPI), and/or Message Passing Library (MPL).
Flags are passed by mpcc to the xlc command,
so any of the xlc options can be used with mpcc as well. When mpcc is
used to link a program the Partition Manager
and message passing interface are automatically linked in. The script
creates an executable that dynamically binds
with the message passing libraries.
Compiler summary
This table summarizes the features of several different C compiler
invocations:
All of the C++ invocations will compile source files with a .c suffix as
ansi C source files unless the
-+ option to the C++ compiler is specified. Any of the C compiler
invocations will also compile a file with
the appropriate suffix as a C++ file.
xlC
Among the subinvocations of xlC are:
xlC_r: the xlC equivalent of xlc_r
xlC128: the xlC equivalent of xlc128
xlC128_r: this combines the features of the xlC_r and xlC128
subinvocations.
mpCC
mpCC is a shell script that compiles C++ programs with the xlC compiler
while linking in the Partition Manager,
Message Passing Interface (MPI), and/or Message Passing Library (MPL).
Flags are passed by mpCC to the xlC command,
so any of the xlC options can be used on the mpCC shell script. When
mpCC is used to link a program the
Partition Manager and message passing interface are automatically linked
in. The script creates an executable
that dynamically binds with the message passing libraries.
By default, the mpCC compiler uses the regular C program MPI bindings.
In order to use the full C++ MPI bindings
use the compiler flag -cpp
Invoking any of the compilers starting with "mp" enables the program for
running across several nodes.
Of course, you are responsible for using a library such as MPI to
arrange communication and coordination
in such a program. Any of the mp compilers sets the include path and
library paths to pick up the MPI library.
To use the MPI with C++ or to use the MPI I/O subroutines, the thread-
safe version of the compiler must be used.
% mpcc_r a.c
% mpCC_r -cpp a.C
The example, hello.C, demonstrates the use of MPI from a C++ code.
Shared-Memory Parallelism
The IBM C and C++ compilers support a variety of shared-memory
parallelism.
OpenMP
OpenMP directives are fully supported by the IBM C and C++ compilers
when one of the invocations with _r suffix
is used. See Using OpenMP on seaborg for details.
Automatic Parallelization
The IBM C compiler will attempt to automatically parallelize simple loop
constructs. Use the option "-qsmp"
with one of the _r invocations:
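A minimal sketch (using a thread-safe invocation, as required for
-qsmp):
% xlc_r -qsmp source.c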
64 Bit Addressing
Both the IBM C and C++ compilers support 64-bit addressing through the
-q64 option. The default mode can be set
through the environment variable OBJECT_MODE. On Bassi, OBJECT_MODE=64
has been set to make 64-bit mode the default.
On Seaborg the default is 32-bit addressing mode. In 64-bit mode all
pointers are 64 bits in length and the length
of long datatypes increases from 32 to 64 bits. It does not change the
default size of any other datatype.
If you have some object files that were compiled in 32-bit mode and
others compiled in 64-bit mode the objects
will not bind. You must recompile to ensure that all objects are in the
same mode.
Your link options must reflect the type of objects you are linking. If
you compiled 64-bit objects, you must
also link these objects with the -q64 option.
Optimization
The default for all IBM compilers is for there to be no optimization.
The NERSC/IBM recommended optimization options
for both C and C++ compiles are -O3 -qstrict -qarch=auto -qtune=auto.
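For example (a sketch):
% xlc -O3 -qstrict -qarch=auto -qtune=auto source.c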
Before installing, make sure you understand the BEA and Tuxedo home
directories, and give appropriate
ownership/permissions to a dedicated BEA account.
GUI:
====
Go to the directory where you downloaded the installer and invoke the
installation procedure by entering
the following command:
prompt> sh filename.bin
Select the install set that you want installed on your system. The
following seven choices are available:
For a detailed list of software components for each install set, see
Install Sets.
Specify the BEA Home directory that will serve as the central support
directory for all BEA products
installed on the target system. If you already have a BEA Home directory
on your system, you can select
that directory (recommended) or create a new BEA Home directory. If you
choose to create a new directory,
the BEA Tuxedo installer program automatically creates the directory for
you. For details about the
BEA Home directory, see BEA Home Directory.
Choose a BEA Home directory and then click Next to continue with the
installation.
Console mode:
=============
Go to the directory where you downloaded the installer and invoke the
installation procedure in console mode
by entering the following command:
prompt> sh filename.bin -i console
/spl/SPLDEV1/product/tuxedo8.1/bin:>ls
AUTHSVR TMNTSFWD_T dmadmin
snmp_integrator.pbk tpaclcvt
AUTHSVR.pbk TMQFORWARD dmadmin.pbk
snmp_version tpacldel
BBL TMQUEUE dmloadcf
snmp_version.pbk tpaclmod
BBL.pbk TMS dmloadcf.pbk snmpget
tpaddusr
BRIDGE TMS.pbk dmunloadcf
snmpget.pbk tpdelusr
BRIDGE.pbk TMSYSEVT dmunloadcf.pbk
snmpgetnext tpgrpadd
BSBRIDGE TMSYSEVT.pbk epifreg
snmpgetnext.pbk tpgrpdel
BSBRIDGE.pbk TMS_D epifregedt snmptest
tpgrpmod
CBLDCLNT TMS_QM epifunreg
snmptest.pbk tpmigldap
CBLDSRVR TMS_QM.pbk esqlc snmptrap
tpmodusr
CBLVIEWC TMS_SQL evt2trapd
snmptrap.pbk tpusradd
CBLVIEWC32 TMS_SQL.pbk evt2trapd.pbk snmptrapd
tpusrdel
DBBL TMUSREVT genicf
snmptrapd.pbk tpusrmod
DMADM TMUSREVT.pbk idl snmpwalk
tux_snmpd
DMADM.pbk WSH idl2ir
snmpwalk.pbk tux_snmpd.pbk
GWADM WSH.pbk idltojava sql
tuxadm
GWTDOMAIN WSL idltojava.pbk
stop_agent tuxadm.pbk
GWTDOMAIN.pbk bldc_dce ir2idl
stop_agent.pbk tuxwsvr
GWTOPEND blds_dce irdel tidl
txrpt
ISH build_dgw jrly tlisten
ud
ISH.pbk buildclient jrly.pbk
tlisten.pbk ud32
ISL buildish mkfldhdr tlistpwd
uuidgen
ISL.pbk buildobjclient mkfldhdr32 tmadmin
viewc
JRAD buildobjserver ntsadmin
tmadmin.pbk viewc.pbk
JRAD.pbk buildserver qmadmin tmboot
viewc32
JREPSVR buildtms reinit_agent
tmboot.pbk viewc32.pbk
JSH buildwsh reinit_agent.pbk tmconfig
viewdis
JSH.pbk cleanupsrv restartsrv tmipcrm
viewdis32
JSL cleanupsrv.pbk restartsrv.pbk
tmipcrm.pbk wgated
LAUTHSVR cns rex tmloadcf
wgated.pbk
TMFFNAME cnsbind rmskill
tmloadcf.pbk wlisten
TMFFNAME.pbk cnsls sbbl
tmshutdown wlisten.pbk
TMIFRSVR cnsunbind show_agent
tmshutdown.pbk wtmconfig
TMNTS cobcc show_agent.pbk
tmunloadcf wud
TMNTSFWD_P cobcc.pbk snmp_integrator tpacladd
wud32
txrpt:
------
Name
txrpt - BEA TUXEDO system server/service report program
Synopsis
txrpt [-t] [-n names] [-d mm/dd] [-s time] [-e time]
Description
txrpt analyzes the standard error output of a BEA TUXEDO system server
to provide a summary
of service processing time within the server. The report shows the
number of times dispatched
and average elapsed time in seconds of each service in the period
covered. txrpt takes its input
from the standard input or from a standard error file redirected as
input. Standard error files
are created by servers invoked with the -r option from the servopts(5)
selection; the file can be
named by specifying it with the -e servopts option. Multiple files can
be concatenated into a single
input stream for txrpt. Options to txrpt have the following meaning:
-t
order the output report by total time usage of the services, with those
consuming the most total time printed first.
If not specified, the report is ordered by total number of invocations
of a service.
-n names
restrict the report to those services specified by names. names is a
comma-separated list of service names.
-d mm/dd
limit the report to service requests on the month, mm, and day, dd,
specified. The default is the current day.
-s time
restrict the report to invocations starting after the time given by the
time argument.
The format for time is hr[:min[:sec]].
-e time
restrict the report to invocations that finished before the specified
time. The format for time is the
same as the -s flag.
The report produced by txrpt covers only a single day. If the input file
contains records from more than one day,
the -d option controls the day reported on.
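A sketch of typical use, assuming the server's standard error was
redirected to a file named stderr:
$ txrpt -t -d 07/07 < stderr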
tuxadm:
-------
Name
Synopsis
http://.../cgi-bin/tuxadm[TUXDIR=tuxedo_directory |
INIFILE=initialization_file][other_parameters]
Description
Errors
See Also
tuxwsvr(1), wlisten(1)
MSTMACH:
--------
Is the machine name, and usually corresponds to the LMID, the logical
machine ID.
There should be an entry of the hostname in /etc/hosts.
tmboot:
-------
tmboot(1)
Name
Synopsis
tmboot [-l lmid] [-g grpname] [-i srvid] [-s aout] [-o sequence] [-S]
[-A] [-b] [-B lmid] [-T grpname] [-e command]
[-w] [-y] [-q] [-n] [-c] [-M] [-d1]
Description
For servers in the SERVERS section, only CLOPT, SEQUENCE, SRVGRP, and
SRVID are used by tmboot. Collectively,
these are known as the server's boot parameters. Once the server has
been booted, it reads the configuration file
to find its run-time parameters. (See UBBCONFIG(5) for a description of
all parameters.)
The ULOGPFX for the server is also set up at boot time based on the
parameter for the machine in the
configuration file. If not specified, it defaults to $APPDIR/ULOG.
Many of the command line options of tmboot serve to limit the way in
which the system is booted and can be used
to boot a partial system. The following options are supported.
-l lmid
For each group whose associated LMID parameter is lmid, all TMS and
gateway servers associated with the group
are booted and all servers in the SERVERS section associated with those
groups are executed.
-g grpname
All TMS and gateway servers for the group whose SRVGRP parameter is
grpname are started, followed by all servers
in the SERVERS section associated with that group. TMS servers are
started based on the TMSNAME and TMSCOUNT
parameters for the group entry.
-i srvid
All servers in the SERVERS section whose SRVID parameter is srvid are
executed.
-s aout
All servers in the SERVERS section with name aout are executed. This
option can also be used to boot TMS and
gateway servers; normally this option is used in this way in conjunction
with the -g option.
-o sequence
All servers in the SERVERS section with SEQUENCE parameter sequence are
executed.
-S
All servers in the SERVERS section are executed.
-A
All administrative servers for machines in the MACHINES section are
executed.
-b
Boot the system from the BACKUP machine (without making this machine the
MASTER).
-B lmid
A BBL is booted on the machine with logical name lmid.
-M
Boot the administrative servers on the MASTER machine.
-d1
Display level-1 debugging detail.
-T grpname
All TMS servers for the group whose SRVGRP parameter is grpname are
started (based on the TMSNAME and TMSCOUNT
parameters associated with the group entry). This option is the same as
booting based on the TMS server name
(-s option) and the group name (-g).
-e command
Causes command to be executed if any process fails to boot successfully.
command can be any program, script,
or sequence of commands understood by the command interpreter specified
in the SHELL environment variable.
This allows an opportunity to bail out of the boot procedure. If command
contains white space, the entire
string must be enclosed in quotes. This command is executed on the
machine on which tmboot is being run,
not on the machine on which the server is being booted.
-w
Boot the servers without waiting for them to complete initialization.
-y
Assume a yes answer to any prompt.
-q
Suppress the printing of the execution sequence on the standard output.
-n
Do not actually boot anything; only show the execution sequence.
-c
The standard input, standard output, and standard error file descriptors
are closed for all booted servers.
Interoperability
Portability
Environment Variables
Link-Level Encryption
Diagnostics
Examples
To start only those servers located on the machines logically named CS0
and CS1, enter the following command:
tmboot -l CS0 -l CS1
To boot a BBL on the machine logically named PE8, as well as all those
servers with a location specified as PE8,
enter the following command:
tmboot -B PE8 -l PE8
To view minimum IPC resources needed for the configuration, enter the
following command:
tmboot -c
The minimum IPC requirements can be compared to the parameters set for
your machine. See the system administration
documentation for your machine for information about how to change these
parameters. If the -y option is used,
the display will differ slightly from the previous example.
Notices
Minimum IPC resources displayed with the -c option apply only to the
configuration described in the configuration
file specified; IPC resources required for a resource manager or for
other BEA Tuxedo configurations are not
considered in the calculation.
See Also
Het "gedrag" van Tuxedo wordt bijna geheel bepaald door de configuratie
file
"/prj/spl/<ETM_Instance_Name>/etc/tuxconfig.bin".
./gentuxedo.sh -m
Opmerkingen:
tmloadcf -y $SPLEBASE/etc/ubb
#% USAGE: gentuxedo.sh
#% USAGE: -h = HELP
#% USAGE: -r = Recreate the default tuxedo server
#% USAGE: This will recreate all of the default service
lists
#% USAGE: ( see option -n ) as well as create UBB files
#% USAGE: -u = Create the UBB file from template only
#% USAGE: -m = use tmloadcf to recreate ubb binary from
$SPLEBASE/etc/ubb
#% USAGE: Once modifications have been made to the
$SPLEBASE/etc/ubb
#% USAGE: file it is necessary to compile those changes.
use the -m
#% USAGE: option to do this.
#% USAGE: -s = Create the Servers
#% USAGE: This will create only the Servers as defined in
the -n option.
Note 2:
-------
On an AIX LPAR (logical partition, that is, a virtual machine,
effectively a fully independent AIX machine),
one or more ETM instances run. An ETM instance is middleware consisting
of Tuxedo services, a
Cobol application server, and Cobol business objects.
The ETM user (or software owner) can "attach" to such an instance, for
example to perform
administrative actions such as starting or stopping the instance.
The .profile of the ETM user must, however, be set up such that a number
of environment variables
are already set correctly and point to the right Cobol, Tuxedo and DB2
locations.
Assuming the .profile is correct, the ETM user can attach to an instance
with:
splenviron.sh -e <Instance_Name>
Example:
Suppose an AIX machine (or LPAR) has the ETM instance "SPLDEV1", which
is installed in the directory
"/spl/SPLDEV1".
/spl/SPLDEV1/tuxedo/genSources:/spl/SPLDEV1/cobol/source:
/spl/SPLDEV1/product/tuxedo8.1/cobinclude
Besides using splenviron.sh, it is quite possible that a number of
"aliases" are already defined in the
.profile of the ETM user.
If aliases are indeed defined, attaching to an instance is very easy:
you only need to enter the
alias at the Unix prompt.
Example:
Suppose the .profile of the ETM user contains the appropriate alias
definition. Then the ETM user can attach directly to SPLDEV1 with the
command:
SPLDEV1
Note 3:
-------
Syntax:
co_BD.sh -p <CobolSourceName>.cbl
How to use it:
1. Log on as the correct ETM user on AIX.
2. Now run the alias of the correct instance, to set up the right
environment and to attach to the
correct ETM instance.
3. Make sure you know the correct DB2 user and DB2 password.
Now you can compile Cobol objects, as in the following example:
Note 4:
-------
db2 =>
Voer nu in:
Voorbeeld:
Als extra test, kun je ook proberen om de huidige datum uit een DB2
dummy table op te vragen, via het commando:
Opmerking:
De ETM instance owner dient wel in zijn .profile een aantal DB2
environment variables te hebben staan,
zodat DB2 correct werkt, zoals:
export DB2_HOME=/prj/db2/admin/<db2_user>
. $DB2_HOME/sqllib/db2profile
Note 5:
-------
The ETM software owner, also known as the ETM instance owner, needs a
number of required
environment variables in the .profile file on Unix/AIX.
1. General variables that point to supporting software such as Java,
Perl, DB2 Connect and the like:
SPLAPP=/spl/splapp/V1515_SFix2_BASE_SUN_DB2
SPLBCKLOGDIR=/tmp
SPLBUILD=/spl/V1515_SFix2_BASE_SUN_DB2/cobol/build
SPLCOBCPY=/spl/V1515_SFix2_BASE_SUN_DB2/cobol/source/cm:/spl/V1515_SFix2
_BASE_SUN_DB2/tuxedo/templates:/spl/V1515_SFix2_BASE_SUN_DB2/tuxedo/genS
ources:/spl/V1515_SFix2_BASE_SUN_DB2/cobol/source:/spl/V1515_SFix2_BASE_
SUN_DB2/product/tuxedo8.1/cobinclude
SPLCOMMAND='ksh -o vi'
SPLCOMP=microfocus
SPLDB=db2
SPLEBASE=/spl/V1515_SFix2_BASE_SUN_DB2
SPLENVIRON=V1515_SFix2_BASE_SUN_DB2
SPLFUNCGETOP=''
SPLGROUP=cisusr
SPLHOST=sf-sunapp-22
SPLLOCALLOGS=/spl/vInd/local/logs
SPLLOGS=/spl/V1515_SFix2_BASE_SUN_DB2/logs
SPLQUITE=N
SPLRUN=/spl/V1515_SFix2_BASE_SUN_DB2/runtime/db2
SPLSOURCE=/spl/V1515_SFix2_BASE_SUN_DB2/cobol/source
SPLSUBSHELL=ksh
SPLSYSTEMLOGS=/spl/V1515_SFix2_BASE_SUN_DB2/logs/system
SPLUSER=cissys
SPLVERS=1
SPLVERSION=V1.5.15.1
SPLWEB=/spl/V1515_SFix2_BASE_SUN_DB2/cisdomain/applications
T=/spl/V135_MASTERTEMPLATE_UNIX
TERM=ansi
THREADS_FLAG=native
TUXCONFIG=/spl/V1515_SFix2_BASE_SUN_DB2/etc/tuxconfig.bin
TUXDIR=/spl/V1515_SFix2_BASE_SUN_DB2/product/tuxedo8.1
ULOGPFX=/spl/V1515_SFix2_BASE_SUN_DB2/logs/system/ULOG
THREADSTACKSIZE:
----------------
What is it?:
------------
- SDK, JDK
Java 2 Platform, Standard Edition (J2SE) provides a complete
environment for applications development
on desktops and servers and for deployment in embedded environments.
It also serves as the foundation
for the Java 2 Platform, Enterprise Edition (J2EE) and Java Web
Services.
- The CLASSPATH tells the Java virtual machine and other applications
(which are located in the
"jdk_<version>\bin" directory) where to find the class libraries, such
as classes.zip file
(which is in the lib directory).
The LIBPATH environment variable tells AIX applications, such as the JVM
where to find shared libraries.
This is equivalent to the use of LD_LIBRARY_PATH in other Unix-based
systems.
How to install?:
----------------
Java 1.3.0
PATH=/usr/java130/jre/bin:/usr/java130/bin:$PATH
Java 1.3.1
PATH=/usr/java131/jre/bin:/usr/java131/bin:$PATH
Java 1.4
PATH=/usr/java14/jre/bin:/usr/java14/bin:$PATH
For update images the .bff files are ready to be installed. Before
installing, remove the old .toc file (if it exists)
in the directory containing the .bff images.
You can use the smitty command to install (both base and update images):
Install JRE:
/software/java:>java -version
java version "1.3.1"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.1)
Classic VM (build 1.3.1, J2RE 1.3.1 IBM AIX build ca131ifx-20040721a
SR7P (JIT enabled: jitc))
/software/java:>java -fullversion
java full version "J2RE 1.3.1 IBM AIX build ca131ifx-20040721a SR7P"
/root:>which java
/usr/java131/bin/java
Notes:
------
Note 1:
-------
thread
Q:
Selected Filesets
-----------------
Java14_64.sdk 1.4.0.1 # Java SDK 64-bit
FILESET STATISTICS
------------------
1 Selected to be installed, of which:
1 Passed pre-installation verification
----
1 Total to be installed
Java14_64.sdk
A:
Paul Landay
Other notes:
------------
For AIX 4.3.3, which is out of support, Java 1.3.1 requires the AIX
4330-10 Recommended Maintenance Level.
For AIX 5.1, Java 1.3.1 requires the AIX 5100-03 Recommended Maintenance
Level.
For AIX 5.2, Java 1.3.1 requires the AIX 5200-01 Recommended Maintenance
Level.
For AIX 5.3, Java 1.3.1 requires Version 5.3.0.1 (APAR IY58143) or
later.
AIXTHREAD_SCOPE=S
This setting is used to ensure that each Java thread maps 1x1 to a
kernel thread. The advantage of this approach
is seen in several places; a notable example is how Java exploits
Dynamic Logical Partitioning (DLPAR);
when a new CPU is added to the partition, a Java thread can be scheduled
on it. This setting should not be
changed under normal circumstances.
LDR_CNTRL=MAXDATA=0x80000000
This is the default setting on Java 1.3.1, and controls how large the
Java heap can be allowed to grow.
Java 1.4 decides the LDR_CNTRL setting based on requested heap. See
Getting more memory in AIX for your
Java applications for details on how to manipulate this variable.
JAVA_COMPILER
This decides what the Just-In-Time compiler will be. The default is
jitc, which points to the IBM JIT compiler.
It can be changed to jitcg for the debug version of JIT compiler, or to
NONE for switching the JIT compiler off
(which in most cases is the absolute worst thing you can do for
performance).
IBM_MIXED_MODE_THRESHOLD
This decides the number of invocations after which the JVM JIT-compiles
a method. This setting varies
by platform and version; for example, it is 600 for Java 1.3.1 on AIX.
Note 1:
-------
... space for the native heap. Moving the fence down allows the native
heap to grow, while reducing shared memory.
For a setting of o_maxdata = N, the fence is placed at 0x30000000+N. For
several good reasons,
it is recommended to set o_maxdata to a value that is the start of a
particular segment,
such as 0xn0000000. In this case, the fence sits between segments 2+n
and 3+n, which translates
to n segments for the native heap, and 10-n segments for shared memory.
# export IBM_JAVA_MMAP_JAVA_HEAP=true
So, if you need to enhance memory for Websphere 5.x 32 bits, put the
following lines
into the startServer.sh script, or in /prj/was/omgeving.rc:
export LDR_CNTRL=MAXDATA=0xn0000000
export IBM_JAVA_MMAP_JAVA_HEAP=true
try:
export AIXTHREAD_SCOPE=S
export AIXTHREAD_MUTEX_DEBUG=OFF
export AIXTHREAD_RWLOCK_DEBUG=OFF
export AIXTHREAD_COND_DEBUG=OFF
export LDR_CNTRL=MAXDATA=0x40000000
export IBM_JAVA_MMAP_JAVA_HEAP=TRUE
or
export IBM_JAVA_MMAP_JAVA_HEAP=true
export LDR_CNTRL=MAXDATA=0x80000000
or
export IBM_JAVA_MMAP_JAVA_HEAP=true
export LDR_CNTRL=MAXDATA=0x80000000
Note 2:
-------
I think the problem is that there are typically a lot of JNI allocations
in the heap that are pinned and are allocated for the life of the
application. Most of these are allocated during startup. If the min and
max heap sizes are the same, these pinned allocations are scattered
throughout the heap. Whereas if the min heap size is quite low, most of
these allocations will be closer together at the start of the heap,
leaving the bulk of the heap (when it's expanded) more free of pinned
memory.
55.5 Installing Perl:
=====================
Note that starting from Perl 5.7.2 (and consequently 5.8.0) and AIX 4.3
or newer Perl uses the AIX native
dynamic loading interface in the so called runtime linking mode instead
of the emulated interface that
was used in Perl releases 5.6.1 and earlier or, for AIX releases 4.2 and
earlier. This change does break
backward compatibility with compiled modules from earlier perl releases.
The change was made to make
Perl more compliant with other applications like Apache/mod_perl which
are using the AIX native interface.
This change also enables the use of C++ code with static constructors
and destructors in perl extensions,
which was not possible using the emulated interface.
Starting from AIX 4.3.3 Perl 5 ships standard with AIX. (Perl 5.8.0 with
AIX 5L V5.2, 5.6.0 with AIX 5L V5.1,
5.005_03 with AIX 4.3.3.)
You either get the source code and compile Perl, or in some situations
you might be happy with installing
a binary build.
DB2 Connect
DB2(R) Connect provides fast and robust connectivity to IBM(R) mainframe
databases for e-business
and other applications running under UNIX(R) and Windows(R) operating
systems.
Note 1:
-------
For example, if the product name for DB2 Enterprise Server Edition is
ese, then enter the following command:
When you have completed your installation, DB2 will be installed in the
one of the following directories:
For AIX:
/usr/opt/db2_08_01
If you want your DB2 product to have access to DB2 documentation either
on your
local computer or on another computer on your network, then you must
install the DB2 Information Center.
The DB2 Information Center contains documentation for DB2 Universal
Database and DB2 related products.
Note 2: db2admin
----------------
Authorization
Local administrator on Windows, or DASADM on UNIX based systems.
Required connection
None
Command syntax
>>-db2admin----------------------------------------------------->
>--+-----------------------------------------------------------------+-
><
+-START-----------------------------------------------------------+
+-STOP--+--------+------------------------------------------------+
| '-/FORCE-' |
+-CREATE--+----------------------+--+---------------------------+-+
| '-/USER:--user-account-' '-/PASSWORD:--user-password-' |
+-DROP------------------------------------------------------------+
+-SETID--user-account--user-password------------------------------+
+-SETSCHEDID--sched-user--sched-password--------------------------+
+- -?-------------------------------------------------------------+
'- -q-------------------------------------------------------------'
Note:
If no parameters are specified, and the DB2 Administration Server
exists, this command returns the name
of the DB2 Administration Server.
START
Start the DB2 Administration Server.
STOP /FORCE
Stop the DB2 Administration Server. The force option is used to force
the DB2 Administration Server to stop,
regardless of whether or not it is in the process of servicing any
requests.
db2admin stop
db2admin start
Note 3: db2start
----------------
Using AIX, you would use the command ps -ef in order to examine
processes. On Solaris and HP-UX, ps -ef will only show the db2sysc
process (the main DB2 engine process) for all server-side processes
(e.g. agents, loggers, page cleaners, and prefetchers). If you're using
Solaris or HP-UX, you can see these side processes with the command
/usr/ucb/ps -axw. Both of these versions of the ps command work on Linux.
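As a quick illustration (the grep filter is just an example to narrow
the output to DB2 processes):

# ps -ef | grep db2             # AIX/Linux: shows the DB2 engine processes
# /usr/ucb/ps -axw | grep db2   # Solaris/HP-UX: also shows agents, loggers etc.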
Example 1:
Example 2:
- db2dasrrm: The DB2 Admin Server process. This process supports both
local and remote administration requests
using the DB2 Control Center
- db2gds: The DB2 Global Daemon Spawner process that starts all DB2 EDUs
(processes) on UNIX.
There is one db2gds per instance or database partition
- Agents
Subagent (db2agntp)
When the intra_parallel database manager configuration parameter is
enabled, the coordinator agent distributes
the database requests to subagents (db2agntp). These agents perform
the requests for the application.
Once the coordinator agent is created, it handles all database
requests on behalf of its application
by coordinating subagents (db2agent) that perform requests on the
database.
When an agent or subagent completes its work it becomes idle. When a
subagent becomes idle, its name changes
from db2agntp to db2agnta.
For example:
db2agnta processes are idle subagents that were used in the past by a
coordinator agent.
In DB2(R) Universal Database(TM) (DB2 UDB) Version 8.1, the db2hmon process
was controlled by the HEALTH_MON
database manager configuration parameter. When HEALTH_MON was set to
ON, a single-threaded independent
coordinator process named db2hmon would start. This process would
terminate if HEALTH_MON was set to OFF.
In DB2 UDB Version 8.2, the db2hmon process is no longer controlled by
the HEALTH_MON database manager
configuration parameter. Rather, it is a stand-alone process that is
part of the database server
so when DB2 is started, the db2hmon process starts. db2hmon is a
special multi-threaded DB2FMP process
that is named db2hmon on UNIX/Linux platforms and DB2FMP on Windows.
Note 6: db2icrt
---------------
>>-db2icrt--+-----+--+-----+--+---------------+----------------->
+- -h-+ '- -d-' '- -a--AuthType-'
'- -?-'
>--+---------------+--+---------------+--+----------------+----->
'- -p--PortName-' '- -s--InstType-' '- -w--WordWidth-'
>--+---------------+--InstName---------------------------------><
'- -u--FencedID-'
Example 1
On a client machine:
/usr/opt/db2_08_01/instance/db2icrt db2inst1
On a server machine:
/usr/opt/db2_08_01/instance/db2icrt -u db2fenc1 db2inst1
where db2fenc1 is the user ID under which fenced user-defined functions
and fenced stored procedures will run.
Example 2
On an AIX machine, if you have Alternate FixPak 1 installed, run the
following command to create an instance
running FixPak 1 code from the Alternate FixPak install path:
/usr/opt/db2_08_FP1/instance/db2icrt -u db2fenc1 db2inst1
You can have one or more databases in each instance but a database is
not exactly the same as you have
on z/OS either. On z/OS you have one catalog per subsystem and a
database is merely a logical collection of tables,
indexes that usually have a distinct relationship to a given
application. On the LUW platform each database has
its own catalogs associated with it which stores all the metadata about
that database.
Why the difference? Well, as with many of the differences you will find
at the server or storage layer,
they are mostly due to the "culture" or "industry standard terms" that
are typically used in a Linux,
UNIX or for that matter a Windows environment. An Instance is a common
term across a number of distributed
platform RDBMSs to represent a copy of the database management code
running on a server. And you won't likely
find the term subsystem used to describe anything on a distributed
platform (except for maybe some people
talking about storage but if you dig a bit you will likely find that in
a past life these people
worked on a mainframe).
 ----------------------------     ----------     ------------------------
|         Subsystem          |   | INSTANCE |   | INSTANCE               |
 ----------------------------     ----------     ------------------------
    |        |     |                  |              |            |
 ---------  ----  ----          ------------   ------------  ------------
|CATALOG |  |DB|  |DB|         |DB+catalog |  |DB+catalog | |DB+catalog |
 ---------  ----  ----          ------------   ------------  ------------
After the installation of DB2, you can only communicate with DB2 by
instantiating it. In other words, you create an object (read: a Database
Manager) within DB2, which handles the communication for you.
Suppose you have created an instance of a Database Manager. This
Database Manager handles the communication with both local and remote
databases. You must instruct the Database Manager how, and in what way,
certain databases can be accessed. You also specify under which `simple'
name this set of instructions can be used. This is the so-called Alias.
        AIX                                    z/OS
 -----------------------------      ------------------------------------
|  -------------              |    | A Partition                        |
| | Application |             |    |                                    |
|  ------|------              |    |   ------------------               |
|        |                    |    |  | DBMS 1    port A |              |
|  -----------------------    |    |  |   ----           |              |
| | Instance =           |----|------>|  |DB|            |              |
| | Database Manager     |    |    |  |   ----           |              |
| |                      |----|------>|   ----           |              |
| |  ----------          |    |    |  |  |DB|            |              |
| | | Alias 1  |         |    |    |  |   ----           |              |
| |  ----------          |    |    |   ------------------               |
| |  ----------          |    |    |   ------------------               |
| | | Alias 2  |---------|----|------>| DBMS 2    port B |              |
| |  ----------   alias  |    |    |  |   ----           |              |
| |                      |    |    |  |  |DB|            |              |
|  -----------------------    |    |  |   ----           |              |
|                             |    |   ------------------               |
 -----------------------------      ------------------------------------
 -------------------------------------------------------------------
| Alias (links the database on the mainframe to the Node)           |
 -------------------------------------------------------------------
                                |
                                |
 -------------------------------------------------------------------
| Node (knows the IP address of the mainframe and the port number   |
| of the DBMS on the partition)                                     |
 -------------------------------------------------------------------
One alias has one connection to one DBMS on one partition on the
mainframe. This is because multiple databases `live' within the DBMS.
If a connection must be made between AIX and another database in another
DBMS on the same partition, a new alias (and thus a new node) has to be
created.
When configuring a Remote Database, we are thus talking about the
connection between DB2 and a database on a partition on the mainframe.
We have to go through the following steps to create a working connection:
------------------------------------------------
Node name: the name of the node. You can choose this yourself
(e.g. NOO49: "NOde Ontwikkeling 49", i.e. development node 49).
Mainframe IP address, T-partition: 10.73.64.183
Mainframe port number, T-partition: 447 or 448 (depending on the DBMS):
BACDB2O = 447 (development environment)
-- To verify:
-----------------------------------------------
Next, we link the database on the mainframe, via the node, to an alias.
Run:
-- To verify:
Now we do
-- To verify:
So, establishing a session goes as in the following example:
First of all install the DB2 client (for me it was DB2connect 7.1) and
register it
with the proper license (using db2licm).
For every DB, I need three registrations: tcp/ip node, database and DCS.
db2inst1@brepredbls01:~> db2
(c) Copyright IBM Corporation 1993,2001
Command Line Processor for DB2 SDK 7.2.0
db2 =>
example:
to unregister it:
Node Directory
Node 1 entry:
Node 2 entry:
Node 3 entry:
Where DBname is the name of the remote database, DBalias is the name you
are going to use in your connection
and nodename is the node alias you registered above.
The chosen authentication has been DCS for my environment.
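The catalog commands themselves are not shown above. A minimal sketch,
using names that appear in this note (node NOO49, database DB2DSPT, and
the T-partition address and port from above) as assumed values:

db2 catalog tcpip node NOO49 remote 10.73.64.183 server 447
db2 catalog database DB2DSPT as DB2DSPT at node NOO49 authentication dcs
db2 catalog dcs database DB2DSPT as DB2DSPT
-- To verify:
db2 list node directory
db2 list database directory
db2 list dcs directory
-- To unregister again:
db2 uncatalog node NOO49
db2 uncatalog database DB2DSPT
db2 uncatalog dcs database DB2DSPT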
Example:
Database 1 entry:
Database 2 entry:
Database 3 entry:
example:
to unregister:
DCS 1 entry:
Local database name = DB2DSPT
Target database name = DB2DSPT
Application requestor name =
DCS parameters =
Comment =
DCS directory release level = 0x0100
DCS 2 entry:
DCS 3 entry:
ex:
/usr/opt/db2_08_01/adm/db2licm -a /prj/db2/install/udb/8.1/db2ese.lic
Note 10: DB2 Connect Configuration files:
-----------------------------------------
- db2nodes.cfg
1.
I tried starting the database, and I still get the above error.
Exactly how am I supposed to start the database, and how do I get rid of
the above error?
- or use mkdev
To validate that the tty has been added to the customized VPD object
class, enter
# lscfg -vp | grep tty
tty0 01-S1-00-00 Asynchronous Terminal
57: chroot:
===========
chroot
SYNTAX
chroot NEWROOT [COMMAND [ARGS]...]
'chroot' changes the root to the directory NEWROOT (which must exist)
and then runs COMMAND with optional ARGS.
AIX:
----
chroot Command
Purpose
Changes the root directory of a command.
Syntax
chroot Directory Command
Description
The Directory path name is always relative to the current root. Even if
the chroot command is in effect,
the Directory path name is relative to the current root of the running
process.
A majority of programs may not operate properly after the chroot command
runs. For example, the commands
that use the shared libraries are unsuccessful if the shared libraries
are not in the new root file system.
The most commonly used shared library is the /usr/ccs/lib/libc.a
library.
Examples
# mkdir /usr/bin/lib
# cp /usr/ccs/lib/libc.a /usr/bin/lib
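The example stops before the actual chroot invocation; the command the
next paragraph describes (running ksh with /var/tmp as the new root) is:

# chroot /var/tmp /usr/bin/ksh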
This makes the directory name / (slash) refer to the /var/tmp for the
duration of the /usr/bin/ksh command.
It also makes the original root file system inaccessible. The file
system on the /var/tmp file must contain
the standard directories of a root file system. In particular, the shell
looks for commands in the
/bin and /usr/bin files on the /var/tmp file system.
The date command can be very useful in shell scripts, for example for
testing purposes. You can devise a test like:
daynumber=`date -u +%d`
export daynumber
if [ "$daynumber" = "31" ]; then
  ...
fi
The following shows what can be done using date.
NAME
date - print or set the system date and time
SYNOPSIS
date [OPTION]... [+FORMAT]
date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]
DESCRIPTION
Display the current time in the given FORMAT, or set the system
date.
-d, --date=STRING
display time described by STRING, not `now'
-f, --file=DATEFILE
like --date once for each line of DATEFILE
-ITIMESPEC, --iso-8601[=TIMESPEC]
output date/time in ISO 8601 format. TIMESPEC=`date' for date only;
`hours', `minutes', or `seconds' for date and time to the indicated
precision. --iso-8601 without TIMESPEC defaults to `date'.
-r, --reference=FILE
display the last modification time of FILE
-R, --rfc-822
output RFC-822 compliant date string
-s, --set=STRING
set time described by STRING
--version
output version information and exit
FORMAT controls the output. The only valid option for the second
form
specifies Coordinated Universal Time. Interpreted sequences are:
%% a literal %
%D date (mm/dd/yy)
%F same as %Y-%m-%d
%h same as %b
%H hour (00..23)
%I hour (01..12)
%k hour ( 0..23)
%l hour ( 1..12)
%m month (01..12)
%M minute (00..59)
%n a newline
%N nanoseconds (000000000..999999999)
%t a horizontal tab
%Y year (1970...)
%z RFC-822 style numeric timezone (-0500) (a nonstandard
extension)
`-' (hyphen) do not pad the field; `_' (underscore) pad the field
with spaces
ENVIRONMENT
TZ Specifies the timezone, unless overridden by command line
parameters. If neither is specified, the setting from /etc/localtime
is used.
DATE=$(date +%d"-"%B"-"%Y)
ERRORDATE=$(date +%m%d0000%y)
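A small illustration of how such variables are typically used in a
script (the logfile name is just an example):

DATE=$(date +%d"-"%B"-"%Y)
LOGFILE=/tmp/backup_${DATE}.log            # e.g. /tmp/backup_07-July-2009.log
echo "Backup started on ${DATE}" >> ${LOGFILE}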
==================================
59. SOME NOTES ON LPARS ON POWER5:
==================================
Before the POWER5 architecture, you could only use lpars with dedicated
cpu's, and disks dedicated to an lpar. As from POWER5 you can use
"Micro Partitioning" (assign fractions of a cpu to lpars, with a minimum
of 1/10 of a cpu and increments of 1/100), you can use "Dynamic LPAR"
(reassign resources to and from lpars without a reboot of lpars), and
every resource (SCSI, netcards etc..) can be virtualized. But DLPAR was
also available before POWER5.
- You can use HMC to define partitions and administer partitions (e.g.
start, shutdown an lpar)
The HMC is a desktop connected with ethernet to the pSeries machine.
HMC makes use of "partition profiles", in which you can define, for
example, the desired, minimum, and maximum resource values for an lpar.
The IVM does not make use of profiles.
You can create a "system profile" that lists which partition profiles
are to be used when the Server is restarted.
Take notice of the fact that the HMC has the lpar configuration
information in the form of saved profiles.
IVM does not have a commandline interface. You can telnet or ssh from
your PC to the VIOS lpar, and use the "mkvt" command to create a
virtual terminal to another lpar.
In order to use the PLM, you need to have a HMC connected to the managed
Server, and you must have
an AIX 5.2 ML4 or 5.3 lpar or Server where PLM will be running.
You can create a Virtual Ethernet and VLAN's with VID's which enables
lpars to communicate
with each other through this "internal" network.
Server Operating Systems can be placed in LPARS, like AIX 5.2, AIX 5.3,
Linux and some others.
For AIX, only 5.3 can be a virtual client of virtualized resources.
To enable POWER5 partitioning, you must have obtained a key from IBM.
But on the 570 and above, this feature is implemented by default.
An AIX 5.2 lpar needs dedicated resources. AIX 5.3 can use all
virtualization features.
- logon to HMC
- Choose "Server and Partition"
- Choose "Server management"
- Choose your Server from list
- Right-click on Partitions -> Click Create -> Click Logical Partition
- logon to HMC
- Choose "Server and Partition"
- Choose "Server management"
- Choose your Server from list
- Click on Partitions -> right-click the partition profile of the
  partition that is about to use the virtual ethernet adapter
  -> Select Dynamic Logical Partitions -> Virtual adapter resources ->
  Add/Remove
  -> Choose the tab Virtual I/O -> Choose Ethernet -> Create
  -> A Properties dialog will be displayed
  -> Fill in the slot number and the Port Virtual LAN ID (PVID)
Preparation:
============
In order for DLPAR to work on an lpar, you need to see the following
subsystems installed and active:
Subsystem
ctrmc            Resource monitoring and control subsystem
IBM.CSMAgentRM   is for handshaking between the lpar and hmc
IBM.ServiceRM
IBM.DRM          is for executing the dlpar commands on the lpar
IBM.HostRM       is for obtaining OS information
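You can check these on the lpar itself with the standard AIX lssrc
command (the grep pattern is just an example):

# lssrc -a | grep rsct     # shows ctrmc and the RSCT resource managers
# lssrc -ls IBM.DRM        # long status of the DLPAR resource manager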
On the HMC, you can check which lpars are ready for DLPAR with the
following command:
# lspartition -dlpar
4. You need to have rsh and rcp access for all lpars.
If those are not enabled, then do the following:
- edit the .rhosts file on any lpar, and type in the lines
plmserver1 root
plmserver1.domain.com root
- edit /etc/inetd.conf and make sure that this line is not commented
out:
shell stream tcp6 nowait root /usr/sbin/rshd rshd
- You need to have an ssh connection between the HMC and the PLM Server.
Install Openssh on the PLM Server, and create a ssh user on the HMC.
To install Openssh on AIX, you need to have Openssl as well.
Create the ssh keys to make communication possible from HMC to PLM
Server.
Installation:
=============
IOSCLI:
=======
- In interactive mode, you use the aliases for the ioscli subcommands.
That is, start the ioscli, and then just type the subcommand, like in
# ioscli
# lsdev -virtual
You cannot run external commands from interactive mode, like grep or
sed.
First leave the interactive mode with "exit".
lsmap command:
--------------
Description
The lsmap command displays the mapping between virtual host adapters and
the physical devices they are backed to.
Given a device name (ServerVirtualAdapter) or physical location code
(PhysicalLocationCode) of a
server virtual adapter, the device name of each connected virtual target
device (child devices),
its logical unit number, backing device(s) and the backing devices
physical location code is displayed.
If the -net flag is specified the supplied device must be a virtual
server Ethernet adapter.
Examples:
- To list all virtual target devices and backing devices mapped to the
server virtual SCSI adapter vhost2, type:
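(The command itself is missing here; by analogy with the -net example
below, it would be:)

$ lsmap -vadapter vhost2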
VTD vtscsi0
LUN 0x8100000000000000
Backing device vtd0-1
Physloc
VTD vtscsi1
LUN 0x8200000000000000
Backing device vtd0-2
Physloc
VTD vtscsi2
LUN 0x8300000000000000
Backing device hdisk2
Physloc U787A.001.0397658-P1-T16-L5-L0
- To list the shared Ethernet adapter and backing device mapped to the
virtual server Ethernet adapter ent4, type:
# lsmap -vadapter ent4 -net
SEA ent5
Backing device ent1
Physloc P2-I4/E1
- To list the shared Ethernet adapter and backing device mapped to the
virtual server Ethernet adapter ent5
in script format separated by a : (colon), type:
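Presumably, using the -fmt flag of lsmap:

$ lsmap -vadapter ent5 -net -fmt ":"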
- To list all virtual target devices and backing devices, where the
backing devices are of type disk or lv, type:
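The command for this listing (using the -all and -type flags of lsmap)
would be:

$ lsmap -all -type disk lv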
VTD vtscsi0
LUN 0x8100000000000000
Backing device hdisk0
Physloc U7879.001.DQD0KN7-P1-T12-L3-L0
VTD vtscsi2
LUN 0x8200000000000000
Backing device lv04
Physloc
VTD vtscsi1
LUN 0x8100000000000000
Backing device lv03
Physloc
mkvdev command:
---------------
Purpose
Adds a virtual device to the system.
Syntax
To create a virtual target device:
Description
The mkvdev command creates a virtual device. The name of the virtual
device will be automatically generated
and assigned unless the -dev DeviceName flag is specified, in which case
DeviceName will become
the device name. If the -lnagg flag is specified, a Link Aggregation or
IEEE 802.3 Link Aggregation
(automatic Link Aggregation) device is created. To create an IEEE 802.3
Link Aggregation set the mode attribute
to 8023ad. If the -sea flag is specified, a Shared Ethernet Adapter is
created. The TargetDevice may be a
Link Aggregation adapter (note, however, that the VirtualEthernetAdapter
may not be Link Aggregation adapters).
The default virtual Ethernet adapter, DefaultVirtualEthernetAdapter, must
also be included as one of the
virtual Ethernet adapters, VirtualEthernetAdapter. The -vlan flag is
used to create a VLAN device and
the -vdev flag creates a virtual target device which maps the
VirtualServerAdapter to the TargetDevice.
Examples:
---------
Example 1:
----------
Suppose you have VIOS running, and you want to create three AIX53 client
lpars, LPS1, LPS2 and LPS3.
Suppose from VIOS, you have created a number of virtual scsi
controllers:
# lsdev -virtual
Suppose hdisk2, hdisk3, and hdisk4 are not yet assigned, and thus are
free to create VG's.
- Create mappings.
vhostx = LV \
vhosty = LV -> VG {disk(s)}
vhostz = LV /
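A concrete sketch of such a setup (the VG name, LV names, sizes, and the
hdisk/vhost pairings below are assumptions for illustration only):

$ mkvg -f -vg rootvg_clients hdisk2        # create a VG on a free disk
$ mklv -lv lv_lps1 rootvg_clients 10G      # one LV per client lpar
$ mklv -lv lv_lps2 rootvg_clients 10G
$ mklv -lv lv_lps3 rootvg_clients 10G
$ mkvdev -vdev lv_lps1 -vadapter vhost0    # map each LV to a vhost adapter
$ mkvdev -vdev lv_lps2 -vadapter vhost1
$ mkvdev -vdev lv_lps3 -vadapter vhost2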
More examples:
--------------
- From an AIX 5.3 client partition, run the lsdev command, like
# lsdev -Cc disk -s vscsi
hdisk2 Available Virtual SCSI Disk Drive
PLATFORM SPECIFIC
Name: disk
Node: disk
Device Type: block
- To create a virtual target device that maps the logical volume lv20 as
a virtual disk for a client partition
hosted by the vhost0 virtual server adapter, type:
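The corresponding command (as in the IBM mkvdev examples) is:

$ mkvdev -vdev lv20 -vadapter vhost0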
- To create a virtual target device that maps the physical volume hdisk6
as a virtual disk for a client partition
served by the vhost2 virtual server adapter, type:
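Likewise:

$ mkvdev -vdev hdisk6 -vadapter vhost2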
# tn vioserver1
Trying...
Connected to vioserver1.
Escape character is '^T'.
telnet (vioserver1)
login: padmin
padmin's Password:
Last unsuccessful login: Mon Sep 24 04:25:04 CDT 2007 on /dev/vty0
Last login: Wed Nov 21 05:10:29 CST 2007 on /dev/pts/0 from
starboss.antapex.org
Suppose you have logged on as padmin on a VIO server. Now you try the
following commands
to retrieve information of the system:
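The commands that produced the listings below are not shown; presumably
the IOSCLI lsdev command was used, e.g.:

$ lsdev -type adapter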
name             status      description
fcs0 Available FC Adapter
fcs1 Available FC Adapter
fcs2 Available FC Adapter
fcs3 Available FC Adapter
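For the detailed VPD of a single adapter (assumed command):

$ lsdev -dev fcs0 -vpd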
name             status      description
fcs0 Available FC Adapter
Part Number.................03N7069
EC Level....................A
Serial Number...............1B64505069
Manufacturer................001B
Feature Code/Marketing ID...280B
FRU Number.................. 03N7069
Device Specific.(ZM)........3
Network Address.............10000000C95CDBFD
ROS Level and ID............02881955
Device Specific.(Z0)........1001206D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF801413
Device Specific.(Z5)........02881955
Device Specific.(Z6)........06831955
Device Specific.(Z7)........07831955
Device Specific.(Z8)........20000000C95CDBFD
Device Specific.(Z9)........TS1.91A5
Device Specific.(ZA)........T1D1.91A5
Device Specific.(ZB)........T2D1.91A5
Device Specific.(YL)........U7879.001.DQDTZXG-P1-C6-T1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP10000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U7879.001.DQDTZXG-P1-C6-T1
$ lsdev
name             status      description
hdisk0 Available 16 Bit LVD SCSI Disk Drive
hdisk1 Available 16 Bit LVD SCSI Disk Drive
..
hdisk10 Available 16 Bit LVD SCSI Disk Drive
hdisk11 Available SAN Volume Controller MPIO Device
hdisk12 Available SAN Volume Controller MPIO Device
..
hdisk35 Available SAN Volume Controller MPIO Device
vg01sanl02 Available Virtual Target Device - Disk
vg01sanl03 Available Virtual Target Device - Disk
..
vg04sanl14 Available Virtual Target Device - Disk
vg05sanl14 Available Virtual Target Device - Disk
vzd110l01 Available Virtual Target Device - Logical Volume
vzd110l02 Available Virtual Target Device - Logical Volume
..
vzd110l14 Available Virtual Target Device - Logical Volume
$ lsdev -virtual
name             status      description
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
..
vhost33 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter
vg01sanl02 Available Virtual Target Device - Disk
vg01sanl03 Available Virtual Target Device - Disk
..
vg05sanl14 Available Virtual Target Device - Disk
..
vzd110l01 Available Virtual Target Device - Logical Volume
vzd110l14 Available Virtual Target Device - Logical Volume
sys0                                            System Object
sysplanar0                                      System Planar
vio0                                            Virtual I/O Bus
vhost33          U9117.570.65B61FE-V17-C324     Virtual SCSI Server Adapter
      Device Specific.(YL)........U9117.570.65B61FE-V17-C324
vg05sanl14       U9117.570.65B61FE-V17-C324-L2  Virtual Target Device - Disk
vg03sanl14       U9117.570.65B61FE-V17-C324-L1  Virtual Target Device - Disk
vhost32          U9117.570.65B61FE-V17-C323     Virtual SCSI Server Adapter
      Device Specific.(YL)........U9117.570.65B61FE-V17-C323
vg04sanl14       U9117.570.65B61FE-V17-C323-L2  Virtual Target Device - Disk
vg03sanl13       U9117.570.65B61FE-V17-C323-L1  Virtual Target Device - Disk
vhost31          U9117.570.65B61FE-V17-C224     Virtual SCSI Server Adapter
      Device Specific.(YL)........U9117.570.65B61FE-V17-C224
vg02sanl14       U9117.570.65B61FE-V17-C224-L1  Virtual Target Device - Disk
vhost30          U9117.570.65B61FE-V17-C223     Virtual SCSI Server Adapter
      Device Specific.(YL)........U9117.570.65B61FE-V17-C223
..
..
vg01sanl05       U9117.570.65B61FE-V17-C115-L1  Virtual Target Device - Disk
vhost15          U9117.570.65B61FE-V17-C113     Virtual SCSI Server Adapter
      Device Specific.(YL)........U9117.570.65B61FE-V17-C113
vg04sanl03       U9117.570.65B61FE-V17-C113-L3  Virtual Target Device - Disk
vg03sanl03       U9117.570.65B61FE-V17-C113-L2  Virtual Target Device - Disk
vg01sanl03       U9117.570.65B61FE-V17-C113-L1  Virtual Target Device - Disk
vhost14          U9117.570.65B61FE-V17-C112     Virtual SCSI Server Adapter
      Device Specific.(YL)........U9117.570.65B61FE-V17-C112
..
..
      Device Specific.(YL)........U9117.570.65B61FE-V17-C0
vty0             U9117.570.65B61FE-V17-C0-L0    Asynchronous Terminal
pci6             U7311.D11.655158B-P1           PCI Bus
      Device Specific.(YL)........U7311.D11.655158B-P1
pci14            U7311.D11.655158B-P1           PCI Bus
      Device Specific.(YL)........U7311.D11.655158B-P1
fcs3             U7311.D11.655158B-P1-C6-T1     FC Adapter
      Part Number.................03N7069
      EC Level....................A
      Serial Number...............1B64504CA3
      Manufacturer................001B
      Feature Code/Marketing ID...280B
      FRU Number.................. 03N7069
      Device Specific.(ZM)........3
      Network Address.............10000000C95CDDEE
      ROS Level and ID............02881955
      Device Specific.(Z0)........1001206D
      Device Specific.(Z1)........00000000
      Device Specific.(Z2)........00000000
      Device Specific.(Z3)........03000909
      Device Specific.(Z4)........FF801413
      Device Specific.(Z5)........02881955
      Device Specific.(Z6)........06831955
      Device Specific.(Z7)........07831955
      Device Specific.(Z8)........20000000C95CDDEE
      Device Specific.(Z9)........TS1.91A5
      Device Specific.(ZA)........T1D1.91A5
      Device Specific.(ZB)........T2D1.91A5
      Device Specific.(YL)........U7311.D11.655158B-P1-C6-T1
fcnet3           U7311.D11.655158B-P1-C6-T1     Fibre Channel Network Protocol Device
fscsi3           U7311.D11.655158B-P1-C6-T1     FC SCSI I/O Controller Protocol Device
pci5             U7311.D11.655157B-P1           PCI Bus
      Device Specific.(YL)........U7311.D11.655157B-P1
Other Example:
==============
root@zd110l06:/root#lspv
hdisk0          00cb61fe223c3926    rootvg    active
hdisk1          00cb61fe2360b1b7    rootvg    active
hdisk2          00cb61fe3339af9f    appsvg    active
hdisk3          00cb61fe3339b066    datavg    active

root@<other lpar>:/root#lspv
hdisk0          00cb61fe09fe92bd    rootvg    active
hdisk1          00cb61fe0a47a802    rootvg    active
hdisk2          00cb61fe336bc95b    appsvg    active
hdisk3          00cb61fe321664d1    datavg    active
Purpose
Displays Virtual I/O Server devices and their characteristics.
Syntax
To list devices
lsdev -vpd
lsdev -slots
Description:
The lsdev command displays information about devices in the Virtual I/O
Server. If no flags are specified,
a list of all devices, both physical and virtual, in the Virtual I/O
Server is displayed.
To list devices, both physical and virtual, of a specific type use the
-type DeviceType flag.
Use the -virtual flag to list only virtual devices. Combining both the
-type and -virtual flags
will list the virtual devices of the specified type.
Examples
- To list all virtual adapters and display the name and status fields,
type:
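The command is not shown; per the lsdev flags described above, it would
presumably be:

$ lsdev -type adapter -virtual -field name status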
vhost0 Available
vhost1 Available
vhost2 Available
ent6 Available
ent7 Available
ent8 Available
ent9 Available
- To list all devices of type disk and display the name and physical
location fields, type:
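Presumably:

$ lsdev -type disk -field name physloc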
hdisk0 U9111.520.10004BA-T15-L5-L0
hdisk1 U9111.520.10004BA-T15-L8-L0
hdisk2 U9111.520.10004BA-T16-L5-L0
hdisk3 U9111.520.10004BA-T16-L8-L0
hdisk4 UTMP0.02E.00004BA-P1-C4-T1-L8-L0
hdisk5 UTMP0.02E.00004BA-P1-C4-T2-L8-L0
hdisk6 UTMP0.02F.00004BA-P1-C8-T2-L8-L0
hdisk7 UTMP0.02F.00004BA-P1-C4-T2-L8-L0
hdisk8 UTMP0.02F.00004BA-P1-C4-T2-L11-L0
vtscsi0 U9111.520.10004BA-V1-C2-L1
vtscsi1 U9111.520.10004BA-V1-C3-L1
vtscsi2 U9111.520.10004BA-V1-C3-L2
vtscsi3 U9111.520.10004BA-V1-C4-L1
vtscsi4 U9111.520.10004BA-V1-C4-L2
vtscsi5 U9111.520.10004BA-V1-C5-L1
scsi0
- To display all I/O slots that are not hot-pluggable but can have DLPAR
operations performed on them, type:
# lsdev -slots
On the client partition, run the cfgmgr command and a cd0 device will be
configured for use. Mounting the CD device is now possible, as is using
the mkdvd command.
rmvdev command:
---------------
Purpose
To remove the connection between a physical device and its associated
virtual SCSI adapter.
Syntax
rmvdev [ -f ] { -vdev TargetDevice | -vtd VirtualTargetDevice } [-rmlv]
Description
The rmvdev command removes the connection between a physical device and
its associated virtual SCSI adapter. The connection can be identified by
specifying the backing (physical) device or the virtual target device.
If the connection is specified by the device name and there are multiple
connections between the physical device and virtual SCSI adapters, an
error is returned unless the -f flag is also specified. If -f is
included, then all connections associated with the physical device are
removed. If the backing (physical) device is a logical volume and the
-rmlv flag is specified, then the logical volume will be removed as well.
Example:
# lsslot -c slot
# Slot Description Device(s)
U1.5-P2/Z2 Logical I/O Slot pci15 scsi2
U1.9-P1-I8 Logical I/O Slot pci13 ent0
U1.9-P1-I10 Logical I/O Slot pci14 scsi0 scsi1
2) Delete the PCI adapter and all of its children in AIX before removal:
# rmdev -l pci14 -d -R
cd0 deleted
rmt0 deleted
scsi0 deleted
scsi1 deleted
pci14 deleted
3) Now, you can remove the PCI I/O slot device using the HMC:
f) In the newly created popup, select the task "Remove resource from
this partition"
g) Select the appropriate adapter from the list (only desired one will
appear)
i) You should have a popup window which tells you if it was successful.
Example
# mkdvd -d /dev/cd1
Note:
All savevg backup images are non-bootable.
To generate a non-bootable system backup, stop mkdvd before the DVD is
created, save the final images to the /mydata/my_cd file system, and
create the other mkdvd file systems in myvg, enter:
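The command is not shown in the original; by analogy with the documented
mkcd example, the flags (assumed to carry over to mkdvd) would be:

# mkdvd -B -I /mydata/my_cd -V myvg -S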
lparstat command:
-----------------
From the AIX prompt in a lpar, you can enter the lparstat -i command to
get a list
of names and resources like, for example, if the partition is capped or
uncapped etc..
# lparstat -i
cfgdev command:
---------------
On the VIOS partition, run the "cfgdev" command to rebuild the list of
visible devices. This is necessary after you have created the partition
and have added virtual controllers.
The virtual SCSI server adapters are now available to the VIOS.
The names of these adapters are vhostx, where x is a number assigned by
the system.
Use the following command to make sure your adapters are available:
$ lsdev -virtual
name status description
ent2 Available Virtual Ethernet Adapter
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter
lspath command:
---------------
lspath Command
Purpose
Displays information about paths to a MultiPath I/O (MPIO) capable
device.
Syntax
lspath [ -dev DeviceName ] [ -pdev Parent ] [ -status Status ] [ -conn
Connection ] [ -field FieldName ]
[ -fmt Delimiter ]
Description
The lspath command displays one of three types of information about
paths to an MPIO capable device. It either
displays the operational status for one or more paths to a single
device, or it displays one or more attributes
for a single path to a single MPIO capable device. The first syntax
shown above displays the operational status
for one or more paths to a particular MPIO capable device. The second
syntax displays one or more attributes
for a single path to a particular MPIO capable device. Finally, the
third syntax displays the possible range
of values for an attribute for a single path to a particular MPIO
capable device.
Displaying Path Status with the lspath Command
When displaying path status, the set of paths to display is obtained by
searching the device configuration database
for paths that match the following criteria:
The target device name matches the device specified with the -dev flag.
If the -dev flag is not present, then the
target device is not used in the criteria.
The parent device name matches the device specified with the -pdev flag.
If the -pdev flag is not present, then
parent is not used in the criteria.
The connection matches the connection specified with the -conn flag. If
the -conn flag is not present, then
connection is not used in the criteria.
The path status matches status specified with the -status flag. If the
-status flag is not present, the path
status is not used in the criteria.
If none of the -dev, -pdev, -conn, or -status flags are specified, then
all paths known to the system are displayed.
-enabled
Indicates that the path is configured and operational. It will be
considered when paths are selected for IO.
-disabled
Indicates that the path is configured, but not currently operational. It
has been manually disabled and will
not be considered when paths are selected for IO.
-failed
Indicates that the path is configured, but it has had IO failures that
have rendered it unusable. It will not be considered when paths are
selected for IO.
-defined
Indicates that the path has not been configured into the device driver.
-missing
Indicates that the path was defined in a previous boot, but it was not
detected in the most recent boot of the system.
-detected
Indicates that the path was detected in the most recent boot of the
system, but for some reason it was not configured. A path should only
have this status during boot and so this status should never appear as a
result of the lspath command.
Exit Status
Return code Description
1 Invalid status value.
Examples:
If the target device is a SCSI disk, to display all attributes for the
path to parent scsi0 at connection 5,0,
use the command:
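The command is missing in the original; in AIX lspath syntax (the disk
name hdisk2 is assumed), it would presumably be:

# lspath -AHE -l hdisk2 -p scsi0 -w "5, 0"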
To display the status of all paths to hdisk1 with column headers and I/O
counts, type:
# lspath -l hdisk1 -H
# lspath -s disabled
hdisk1 scsi1 5, 0
hdisk2 scsi1 6, 0
hdisk23 scsi8 3, 0
hdisk25 scsi8 4, 0
chpath command:
---------------
chpath Command
Purpose
Changes the operational status of paths to an MultiPath I/O (MPIO)
capable device, or changes an attribute
associated with a path to an MPIO capable device.
Syntax
chpath -l Name -s OpStatus [ -p Parent ] [ -w Connection ]
chpath -h
Description
The chpath command either changes the operational status of paths to the
specified device (the -l Name flag)
or it changes one, or more, attributes associated with a specific path
to the specified device. The required syntax
is slightly different depending upon the change being made.
The first syntax shown above changes the operational status of one or
more paths to a specific device.
The set of paths to change is obtained by taking the set of paths which
match the following criteria:
Disabling a path affects path selection at the device driver level. The
path_status of the path is not changed in the device configuration
database. The lspath command must be used to see current operational
status of a path.
The second syntax shown above changes one or more path specific
attributes associated with a particular path to a particular device.
Note that multiple attributes can be changed in a single invocation of
the chpath command; but all of the attributes must be associated with a
single path. In other words, you cannot change attributes across
multiple paths in a single invocation of the chpath command. To change
attributes across multiple paths, separate invocations of chpath are
required; one for each of the paths that are to be changed.
Flags
-a Attribute=Value Identifies the attribute to change as well as the new
value for the attribute. The Attribute is the name of a path specific
attribute. The Value is the value which is to replace the current value
for the Attribute. More than one instance of the -a Attribute=Value can
be specified in order to change more than one attribute.
-h Displays the command usage message.
-l Name Specifies the logical device name of the target device for the
path(s) affected by the change. This flag is required in all cases.
-p Parent Indicates the logical device name of the parent device to use
in qualifying the paths to be changed. This flag is required when
changing attributes, but is optional when change operational status.
-P Changes the path's characteristics permanently in the ODM object
class without actually changing the path. The change takes affect on the
path the next time the path is unconfigured and then configured
(possibly on the next boot).
-w Connection Indicates the connection information to use in qualifying
the paths to be changed. This flag is optional when changing operational
status. When changing attributes, it is optional if the device has only
one path to the indicated parent. If there are multiple paths from the
parent to the device, then this flag is required to identify the
specific path being changed.
-s OpStatus Indicates the operational status to which the indicated
paths should be changed. The operational status of a path is maintained
at the device driver level. It determines if the path will be considered
when performing path selection.The allowable values for this flag are:
enable
Mark the operational status as enabled for MPIO path selection. A path
with this status will be considered for use when performing path
selection. Note that enabling a path is the only way to recover a path
from a failed condition.
disable
Mark the operational status as disabled for MPIO path selection. A path
with this status will not be considered for use when performing path
selection.
This flag is required when changing operational status. When used in
conjunction with the -a Attribute=Value flag, a usage error is
generated.
Security
Privilege Control: Only the root user and members of the system group
have execute access to this command.
Auditing Events:
Event Information
DEV_Change The chpath command line.
Examples
To disable the paths between scsi0 and the hdisk1 disk device, enter:
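The command itself (the standard chpath example form) would be:

# chpath -l hdisk1 -p scsi0 -s disable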
paths disabled
or
some paths disabled
The first message indicates that all PATH_AVAILABLE paths from scsi0 to
hdisk1 have been successfully disabled. The second message indicates
that only some of the PATH_AVAILABLE paths from scsi0 to hdisk1 have
been successfully disabled.
# uname -L
12 zd110l12
# oslevel -r
5300-05

# lsdev -Cc disk -s vscsi
hdisk0 Available Virtual SCSI Disk Drive
hdisk1 Available Virtual SCSI Disk Drive

# lscfg -vpl hdisk1
hdisk1  U9117.570.65B61FE-V12-C6-T1-L810000000000  Virtual SCSI Disk Drive  <----

# lsslot -c slot
# Slot                      Description        Device(s)
U7879.001.DQDTZXG-P1-C2     Logical I/O Slot   pci2 fcs0
U7879.001.DQDTPAK-P1-C6     Logical I/O Slot   pci3 fcs1
U9117.570.65B61FE-V12-C0    Virtual I/O Slot   vsa0
U9117.570.65B61FE-V12-C2    Virtual I/O Slot   ent0
U9117.570.65B61FE-V12-C5    Virtual I/O Slot   vscsi0
U9117.570.65B61FE-V12-C6    Virtual I/O Slot   vscsi1   <----
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000043268101002
Device Specific.(Z1)........0200640
Serial Number...............600507680190014E3000000000000199
(LUN)
PLATFORM SPECIFIC
Name: disk
Node: disk
Device Type: block
HMC commands:
lssyscfg List the hardware resource configuration
mksyscfg Creates the hardware resource configuration
chsyscfg Changes the hardware resource configuration
rmsyscfg Removes the hardware resource configuration
Example:
Detail on lssyscfg:
-------------------
NAME
SYNOPSIS
DESCRIPTION
lssyscfg can also list the attributes of cages in the managed-frame, the
attributes of the managed-frame,
or the attributes of all of the frames managed by this HMC.
OPTIONS
-r
The type of resources to list. Valid values are lpar for partitions,
prof for partition profiles, sys for
managed systems, sysprof for system profiles, cage for managed frame
cages, and frame for managed frames.
-m
The name of either the managed system to list, or the managed system
which has the system resources to list.
The name may either be the user-defined name for the managed system, or
be in the form tttt-mmm*ssssssss,
where tttt is the machine type, mmm is the model, and ssssssss is the
serial number of the managed system.
The tttt-mmm*ssssssss form must be used if there are multiple managed
systems with the same user-defined name.
This option is required when listing partitions, partition profiles, or
system profiles. This option is optional
when listing managed systems, and if it is omitted, then all of the
systems managed by this HMC will be listed.
This option is not valid when listing managed frame cages or managed
frames.
-e
The name of either the managed frame to list, or the managed frame which
contains the cages to list.
The name may either be the user-defined name for the managed frame, or
be in the form tttt-mmm*ssssssss,
where tttt is the type, mmm is the model, and ssssssss is the serial
number of the managed frame.
The tttt-mmm*ssssssss form must be used if there are multiple managed
frames with the same user-defined name.
This option is required when listing managed frame cages. This option is
optional when listing managed frames,
and if it is omitted, then all of the frames managed by this HMC will be
listed. This option is not valid when
listing partitions, partition profiles, system profiles, or managed
systems.
--filter
The filter(s) to apply to the resources to be listed. Filters are used
to select which resources of the specified
resource type are to be listed. If no filters are used, then all of the
resources of the specified resource type
will be listed. For example, specific partitions can be listed by using
a filter to specify the names or IDs of the partitions to list.
Otherwise, if no filter is used, then all of the partitions in the
managed system will be listed. The filter data consists of filter
name/value pairs, which are in comma separated value (CSV) format. The
filter data must be enclosed in double quotes.
The format of the filter data is as follows:
"filter-name=value,filter-name=value,..."
""filter-name=value,value,...",..."
lpar_names | lpar_ids
Either the name or the ID of the partition which has the partition
profiles to be listed must be specified.
Only one partition name or ID can be specified.
profile_names
-F
A delimiter separated list of attribute names for the desired attribute
values to be displayed for each resource.
If no attribute names are specified, then values for all of the
attributes for the resource will be displayed.
When this option is specified, only attribute values will be displayed.
No attribute names will be displayed.
The attribute values displayed will be separated by the delimiter which
was specified with this option.
This option is useful when only attribute values are desired to be
displayed, or when the values of only
selected attributes are desired to be displayed.
--header
Display a header record, which is a delimiter separated list of
attribute names for the attribute values that
will be displayed. This header record will be the first record
displayed. This option is only valid when used
with the -F option.
--help
Display the help text for this command and exit.
EXAMPLES
lssyscfg -r sys
List only the user-defined name, machine type and model, and serial
number for all of the systems managed by this HMC,
and separate the output values with a colon:
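Presumably (per the HMC lssyscfg documentation):

lssyscfg -r sys -F name:type_model:serial_num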
List all partitions in the managed system, and only display attribute
values for each partition, following a header
of attribute names:
lssyscfg -r lpar -m 9406-570*12345678 -F --header
List only the names, IDs, and states of partitions lpar1, lpar2, and
lpar3, and separate the output values
with a comma:
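A likely form (the managed system name system1 is assumed):

lssyscfg -r lpar -m system1 -F name,lpar_id,state --filter "\"lpar_names=lpar1,lpar2,lpar3\""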
List the partition profiles prof1 and prof2 defined for the partition
that has an ID of 2:
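A likely form (managed system name assumed):

lssyscfg -r prof -m system1 --filter "lpar_ids=2,\"profile_names=prof1,prof2\""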
lssyscfg -r frame
get_partition_state
To pop a hung partition into the debugger (aka 'soft reset'):
lssyscfg -r sys
To list only the "name" field of all of the systems, use the -F flag,
together with the name of the field
(in this case, name):
The above may be combined with the -F flag as well, to list only one
attribute for one machine.
lssyscfg -r frame
To add one virtual CPU: (note these use -p instead of -n for the
partition name)
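The command itself is not shown; a plausible chhwres form (managed
system and partition names are placeholders):

chhwres -r proc -m <managed-system> -o a -p <partition-name> --procs 1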
chhwres command:
----------------
NAME
SYNOPSIS
chhwres -r io -m managed-system -o {a | r | m}
{-p partition-name | --id partition-ID}
[{-t target-partition-name |
--tid target-partition-ID}]
-l slot-DRC-index [-a "attributes"]
[-w wait-time] [-d detail-level] [--force]
chhwres -r io -m managed-system -o s
{-p partition-name | --id partition-ID}
--rsubtype {iopool | taggedio}
-a "attributes"
OPTIONS
-r
--rsubtype
-m
The name of the managed system for which the hardware resource
configuration is to be changed.
The name may either be the user-defined name for the managed system, or
be in the form tttt-mmm*ssssssss,
where tttt is the machine type, mmm is the model, and ssssssss is the
serial number of the managed system.
The tttt-mmm*ssssssss form must be used if there are multiple managed
systems with the same user-defined name.
-o
-p
--id
-t
The name of the target partition for a move operation. The partition
must be in the running state.
You can either use this option to specify the name of the target
partition, or use the --tid option to specify
the ID of the partition. The -t and the --tid options are mutually
exclusive.
--tid
The ID of the target partition for a move operation. The partition must
be in the running state.
You can either use this option to specify the ID of the target
partition, or use the -t option to specify
the name of the target partition. The --tid and the -t options are
mutually exclusive.
-l
The DRC index of the physical I/O slot to add, remove, or move.
-s
The virtual slot number of the virtual I/O adapter to add or remove.
When adding a virtual I/O adapter, if this option is not specified then
the next available virtual slot number
will be assigned to the virtual I/O adapter.
-q
--procs
--procunits
--5250cpwpercent
-w
This option is valid for all add, remove, and move operations for
AIX(R), Linux(TM), and virtual I/O server
partitions. This option is also valid for memory add, remove, and move
operations for i5/OS partitions.
-d
This option is valid for all add, remove, and move operations for AIX,
Linux, and virtual I/O server partitions.
--force
-a
The configuration data needed to create virtual I/O adapters or set
hardware resource related attributes.
The configuration data consists of attribute name/value pairs, which are
in comma separated value (CSV) format.
The configuration data must be enclosed in double quotes.
attribute-name=value,attribute-name=value,...
"attribute-name=value,value,...",...
Valid attribute names for attributes that can be set when adding,
removing, or moving a physical I/O slot:
slot_io_pool_id
Valid attribute names for setting I/O pool attributes:
lpar_io_pool_ids
comma separated
Valid attribute names for setting tagged I/O resources (i5/OS partitions
only):
load_source_slot
DRC index of I/O slot, or virtual slot number
alt_restart_device_slot
DRC index of I/O slot, or virtual slot number
console_slot
DRC index of I/O slot, virtual slot number, or the value hmc
alt_console_slot
DRC index of I/O slot, or virtual slot number
op_console_slot
DRC index of I/O slot, or virtual slot number
Valid attribute names for adding a virtual ethernet adapter:
ieee_virtual_eth
Valid values:
0 - not IEEE 802.1Q compatible
1 - IEEE 802.1Q compatible
Required
port_vlan_id
Required
addl_vlan_ids
is_trunk
Valid values:
0 - no
1 - yes
trunk_priority
Valid values are integers between 1 and 15, inclusive
Required for a trunk adapter
Valid attribute names for adding a virtual SCSI adapter:
adapter_type
Valid values are client or server (server adapters can only be added to
i5/OS partitions on IBM(R)
eServer(TM) i5 servers, or virtual I/O server partitions)
Required
remote_lpar_id | remote_lpar_name
One of these attributes is required for a client adapter
remote_slot_num
Required for a client adapter
Valid attribute names for adding a virtual serial adapter:
adapter_type
Valid values are client or server (client adapters cannot be added to
i5/OS partitions on IBM System p5 or
eServer p5 servers, and server adapters can only be added to i5/OS or
virtual I/O server partitions)
Required
remote_lpar_id | remote_lpar_name
One of these attributes is required for a client adapter
remote_slot_num
Required for a client adapter
supports_hmc
The only valid value is 0 for no
Valid attribute names for setting virtual ethernet attributes:
mac_prefix
Valid attribute names for setting HSL OptiConnect attributes (i5/OS
partitions only):
hsl_pool_id
Valid values are:
0 - HSL OptiConnect is disabled
1 - HSL OptiConnect is enabled
Valid attribute names for setting virtual OptiConnect attributes (i5/OS
partitions only):
virtual_opti_pool_id
Valid values are:
0 - virtual OptiConnect is disabled
1 - virtual OptiConnect is enabled
Valid attribute names for setting memory attributes:
requested_num_sys_huge_pages
Valid attribute names for setting processing attributes:
sharing_mode
Valid values are:
keep_idle_procs - valid with dedicated processors
share_idle_procs - valid with dedicated processors
cap - valid with shared processors
uncap - valid with shared processors
uncap_weight
--help
EXAMPLES
Add the I/O slot with DRC index 21010001 to partition p1 and set the I/O
pool ID for the slot to 3:
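A likely form (managed system name assumed):

chhwres -r io -m system1 -o a -p p1 -l 21010001 -a "slot_io_pool_id=3"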
Add 128 MB of memory to the partition with ID 1, and time out after 10
minutes:
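Presumably:

chhwres -r mem -m system1 -o a --id 1 -q 128 -w 10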
Remove 512 MB of memory from the AIX partition aix_p1, return a detail
level of 5:
chhwres -r mem -m 9406-520*1234321A -o r -p aix_p1 -q 512 -d 5
Set the number of pages of huge page memory requested for the managed
system to 2 (the managed system must be
powered off):
Add .25 processing units to the i5/OS partition i5_p1 and add 10 percent
5250 CPW:
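A plausible form (managed system name assumed):

chhwres -r proc -m system1 -o a -p i5_p1 --procunits .25 --5250cpwpercent 10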
lshwres command:
----------------
List system level memory information and include the minimum memory
required to support a maximum of 1024 MB:
List all memory information for partitions lpar1 and lpar2, and only
display attribute values, following a
header of attribute names:
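A likely form of that command (managed system name assumed):

lshwres -r mem -m system1 --level lpar --filter "\"lpar_names=lpar1,lpar2\"" -F --header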
lpar_netboot command:
---------------------
NAME
SYNOPSIS
lpar_netboot [-v] [-x] [-f] [-i] [-g args] [{-A -D | [-D] -l physical-
location-code | [-D] -m MAC-address}]
-t ent -s speed -d duplex -S server -G gateway -C client
partition-name partition-profile managed-system
lpar_netboot [-v] [-x] [-f] [-i] [-g args] [{-A -D | [-D] -l physical-
location-code | [-D] -m MAC-address}]
-t ent -s speed -d duplex -S server -G gateway -C client
managed-system managed-system
DESCRIPTION
OPTIONS
-A
Perform a ping test and use the adapter that successfully pings the
server specified with the -S option.
-G
The IP address of the machine from which to retrieve the network boot
image during network boot.
-d
The duplex setting of the partition specified with the -C option. Valid
values are full, half, and auto.
-l
The physical location code of the network adapter to use for network
boot.
-m
The MAC address of the network adapter to use for network boot.
-s
The speed setting of the partition specified with the -C option. Valid
values are 10, 100, 1000, and auto.
-t
The type of adapter for MAC address or physical location code discovery
or for network boot. The only valid value is
ent for ethernet.
-v
partition-profile
managed-system
--help
EXAMPLES
To retrieve the MAC address and physical location code for partition
machA with partition profile machA_prof
on managed system test_sys:
To network boot the partition machA using the network adapter with a MAC
address of 00:09:6b:dd:02:e8 with
partition profile machA_prof on managed system test_sys:
To network boot the partition machA using the network adapter with a
physical location code of
U1234.121.A123456-P1-T6 with partition profile machA_prof on managed
system test_sys:
To perform a ping test along with a network boot of the partition machA
with partition profile machA_prof on
managed system test_sys:
Example mksyscfg:
-----------------
If you already have LPARs created you can use this command to get their
configuration which can be reused as template:
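A typical way to dump an existing profile as a template (assumed form):

lssyscfg -r prof -m <managed-system> --filter "lpar_names=LPAR1"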
If you want to create more LPARS at once you can use a configuration
file and provide it as input for mksyscfg.
Here is an example for 3 LPARs, each definition starting at new line:
name=LPAR1,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=9216,max_mem=9216,proc_mode=shared,min_proc_units=0.3,desired_proc_units=1.0,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=3,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/11/1,7/client/9/vio2a/11/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1
name=LPAR2,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=9216,max_mem=9216,proc_mode=shared,min_proc_units=0.3,desired_proc_units=1.0,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=3,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/12/1,7/client/9/vio2a/12/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1
name=LPAR3,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=15360,max_mem=15360,proc_mode=shared,min_proc_units=0.4,desired_proc_units=1.0,max_proc_units=4.0,min_procs=1,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/13/1,7/client/9/vio2a/13/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1
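The file can then be fed to mksyscfg; a likely invocation (managed
system name and file name are placeholders):

mksyscfg -r lpar -m <managed-system> -f /tmp/lpar_definitions.txt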
Main menu
1. Select Language
2. Setup Remote IPL
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
When the installation has finished, use the padmin user to login.
After logging in, you will be placed in the IOSCLI. Type the following
command
to accept the license:
# license -accept
Note:
For the installation method that you choose, ensure that you follow the
sequence of steps as shown.
Within each procedure, you must use AIX to complete some installation
steps, while other steps are completed
using the HMC interface.
In this procedure, you will perform a New and Complete Base Operating
System Installation on
a logical partition using the partition's CD-ROM device. This procedure
assumes that there is an HMC
attached to the managed system.
Prerequisites
Before you begin this procedure, you should have already used the HMC to
create a partition
and partition profile for the client. Assign the SCSI bus controller
attached to the CD-ROM device,
a network adapter, and enough disk space for the AIX operating system to
the partition.
Set the boot mode for this partition to be SMS mode. After you have
successfully created the partition
and partition profile, leave the partition in the Ready state. For
instructions about how to create
a logical partition and partition profile, refer to the Creating logical
partitions and partition profiles
article in the IBM eServer Hardware Information Center.
1. Activate and install the partition (perform these steps in the HMC
interface)
Insert the AIX 5L Volume 1 CD into the CD device of the managed system.
Right-click on the partition to open the menu.
Select Activate. The Activate Partition menu opens with a selection of
partition profiles.
Be sure the correct profile is highlighted.
Select Open a terminal window or console session at the bottom of the
menu to open
a virtual terminal (vterm) window.
Press the 5 key and press Enter to select 5. Select Boot Options.
PowerPC Firmware
Version SF220_001
SMS 1.5 (c) Copyright IBM Corp. 2000, 2003 All rights reserved.
------------------------------------------------------------------------
-------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
------------------------------------------------------------------------
-------
Press the 2 key and press Enter to select 2. Select Boot Devices.
Press the 1 key and press Enter to select 1. Select 1st Boot Device.
Press the 3 key and press Enter to select 3. CD/DVD.
Select the media type that corresponds to the CD-ROM device and press
Enter.
Select the device number that corresponds to the CD-ROM device and press
Enter.
The CD-ROM device is now the first device in the Current Boot Sequence
list.
Press the ESC key until you return to the Configure Boot Device Order
menu.
Press the 2 key to select 2. Select 2nd Boot Device.
Press the 5 key and press Enter to select 5. Hard Drive.
If you have more than one hard disk in your partition, determine which
hard disk you will use
to perform the AIX installation. Select the media type that corresponds
to the hard disk and press Enter.
Select the device number that corresponds to the hard disk and press
Enter.
Press the x key to exit the SMS menu. Confirm that you want to exit SMS.
Type the number of your choice and press Enter. Choice is indicated by
>>>.
88 Help ?
99 Previous Menu
>>> Choice [1]: 2
Type the number for each disk you choose in the Choice field and press
Enter. Do not press Enter
a final time until you have finished selecting all disks. If you must
deselect a disk, type its number
a second time and press Enter.
When you have finished selecting the disks, type 0 in the Choice field
and press Enter.
The Installation and Settings screen displays with the selected disks
listed under System Settings.
Note:
Changes to the primary language environment do not take effect until
after the Base Operating System Installation
has completed and your system is rebooted.
Monitoring VIOS:
----------------
Note 1:
-------
With Virtual I/O Server fix pack 8.1.0, you can install and configure
the "IBM Tivoli Monitoring System Edition for System p agent" on the
Virtual I/O Server.
IBM Tivoli Monitoring System Edition for System p enables you to monitor
the health and availability of multiple IBM System p servers (including
the Virtual I/O Server) from the Tivoli Enterprise Portal.
IBM Tivoli Monitoring System Edition (SE) for System p V6.1 is a new
offering of the popular IBM Tivoli Monitoring (ITM) product,
specifically designed for IBM System p AIX customers. ITM SE for
System p V6.1 monitors the health and availability of System p servers,
providing rich graphical views of your AIX, LPAR, CEC, and VIOS
resources in a single console, delivering robust monitoring and quick
time to value.
Note 2:
-------
Download the latest agent- (or whole source-) pack from the OpenSMART
home page.
telnet vio-server
login: padmin
padmin's Password:
Last unsuccessful login: Tue Feb 28 03:08:08 CST 2006 on /dev/vty0
Last login: Wed Mar 15 16:14:11 CST 2006 on /dev/pts/0 from 192.168.1.1
$ oem_setup_env
# mkdir /home/osmart
# useradd -c "OpenSMART Monitoring" -d /home/osmart osmart
# chown -R osmart:staff /home/osmart
# passwd osmart
Changing password for "osmart"
osmart's New password: ******
Enter the new password again: ******
# su - osmart
$ mkdir ostemp
$ cd ostemp
$ gunzip /tmp/opensmart-client-0.4.tar.gz
$ tar -xf /tmp/opensmart-client-0.4.tar
$ ./agent/install_agent ~
[ ... ]
Copy ../lib/opensmartresponse.dtd ->
/usr/local/debis/os/etc/opensmartresponse.dtd
chmod 644 /usr/local/debis/os/etc/opensmartresponse.dtd
**********************************************
* OpenSMART agent installed successfully *
**********************************************
$ cd ~
$ rm -rf ostemp
Note 3: lpar2rrd
----------------
FEATURES
www.ibm.com/servers/eserver/pseries/lpar/
publib.boulder.ibm.com/infocenter/pseries/index.jsp?topic=/com.ibm.help.doc/welcome.htm
Errors at VIOS:
---------------
Note 1:
-------
# Login as "padmin"
# Switch to "oem" prompt
oem_setup_env
# Remove all hdisks except for hdisk0 and hdisk1 - assumed to be rootvg
for i in $( lsdev -Cc disk -F name | grep hdisk | egrep -v 'hdisk0$|hdisk1$' )
do
rmdev -Rdl ${i}
done
# Set fast fail Parameter for SCSI Adapters and Reconfigure FC Adapters
chdev -l fscsi0 -a fc_err_recov=fast_fail
chdev -l fscsi1 -a fc_err_recov=fast_fail
chdev -l fscsi2 -a fc_err_recov=fast_fail
cfgmgr -vl fcs0
cfgmgr -vl fcs1
cfgmgr -vl fcs2
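To verify that the fast fail setting took effect, you can query the
attribute afterwards (a sketch; fscsi0 is the first of the adapters
changed above), which should return output along these lines:
# lsattr -El fscsi0 -a fc_err_recov
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True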
Note 2:
-------
VIOS Install
On our p5-550, I have allocated most physical devices to the VIOS LPAR
so it can be used to divide these
amongst AIX/Linux LPARs. The VIOS LPAR has four gigabit ethernet
adapters allocated to it.
Presently only two are in use as an aggregated link to the "real world".
It also has a virtual ethernet adapter
which connects internally to the p5-550.
Two of the Fibre Channel HBAs are assigned to the VIO partition and
connected to port 13 of both switches in the SAN fabric. The SAN has
been configured to attach an 860 GB RAID5 LUN to the IBM system. Due to
the lack of multipathing support in VIOS, there are multiple apparent
disks (hdisk6 ... hdisk13) which are in fact one.
The first (hdisk6) was used to create the client_data volume group. It
is intended that this volume group will
be used for /data filesystems.
Note: To add virtual devices dynamically to the VIOS partition, use the
"Dynamic" option in the HMC.
Networking on the VIOS LPAR:
----------------------------
Two (at present) of the gigabit adapters assigned to the VIOS LPAR are
channelled together for redundancy.
Telecomms will deliver all the relevant VLANS down this interface which
can be bridged to internal Ethernets.
Note that VLAN14 is configured as the native VLAN of the channel. To do
this :-
- Channel the two Ethernet NICs attached to the network: mkvdev -lnagg
ent2 ent3 which produced ent5
- Bridge between the channelled adapter and the internal network.
mkvdev -sea ent5 -vadapter ent4 -default ent4 -defaultid 1 which
produced ent6
- Configure the new bridge with an IP address:
mktcpip -hostname name -inetaddr 148.197.14.x -netmask 255.255.255.0
-gateway 148.197.14.254 -interface ent6
- VLAN interfaces are unlikely to be necessary on the VIOS, but can be
created :-
mkvdev -vlan ent6 -tagid 240.
-Create a logical volume for the relevant client. The name should be
easily identifiable as being associated with the relevant client, for
example: mklv -lv clientname_sys clients 18G. This creates a logical
volume 18 GB in size (enough for an AIX or Linux operating system) on
the clients volume group.
-Mirror the logical volume for safety: mklvcopy lv_name 2. Warning:
this is SLOW.
-Assign the logical volume to a virtual adaptor:
mkvdev -vdev logical-volume -vadapter vhostN -dev name_for_target
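Putting the above steps together, a minimal sketch for one client might
look like this (the names client1_sys, vhost0 and vclient1 are
hypothetical; lsmap verifies the resulting mapping):
$ mklv -lv client1_sys clients 18G
$ mklvcopy client1_sys 2
$ mkvdev -vdev client1_sys -vadapter vhost0 -dev vclient1
$ lsmap -vadapter vhost0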
Note 3:
-------
NIM Hacking
Error ED995F18
--------------
VSCSI_ERR3
ED995F18
000DRCFFFF FFF9
The Virtual SCSI server adapter (partition number and slot number)
specified in the client adapter definition
does not exist
Error BFE4C025
--------------
thread:
A:
http://www-912.ibm.com/eserver/support/fixes/fcgui.jsp
Q:
I get the error "BFE4C025" in errpt, and it happens on almost all types
of RS/6000. When it happens, the system keeps running fine, and the
'diag' tools report no error information either. I don't know what is
happening on the system. Who can help?
A:
Your word 3 is 0007, so it looks like someone forced a dump from the
O/S, the Op Panel, or the HMC, either by powering off an LPAR configured
to dump, or because the LPAR crashed and dumped.
http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/index.jsp?topic=/iphb5/f22msdc.htm
Note:
-----
error detail:
LABEL: SCAN_ERROR_CHRP
IDENTIFIER: BFE4C025
Description
UNDETERMINED ERROR
Failure Causes
UNDETERMINED
Recommended Actions
RUN SYSTEM DIAGNOSTICS.
Detail Data
PROBLEM DATA
..
lots of hex numbers
..
DLPAR scripts:
==============
Note 1:
-------
Abstract
For related information about this topic, refer to the following IBM
Redbooks publication:
AIX 5L Differences Guide Version 5.2 Edition, SG24-5765-02
DLPAR scripts, used to automate LPAR reconfiguration, are written by
system administrators or software vendors.
Scripts can be implemented in any scripting language, such as Perl or
shell, or they can be compiled programs.
They are maintained by the system administrator using the drmgr command.
Descriptions of the syntax and of the most important flags of the drmgr
command are provided in its man page; for a complete reference, refer to
the man page or the documentation. For example, a test script is
installed as follows:
drmgr -i /root/root_dlpar_test.sh
To list the details of the installed scripts, the drmgr -l command is
used.
The execution user ID and group ID are set to the uid or gid of the
script.
The PATH environment is set to /usr/bin:/etc:/usr/sbin.
The working directory is /tmp.
Environment variables that describe the DLPAR event are set.
DLPAR scripts can write any necessary output to stdout. The format of
the output should be name=value pair strings
separated by newline characters to relay specific information to the
drmgr. For example, the output DR_VERSION=1.0
could be produced with the following ksh command:
echo "DR_VERSION=1.0"
Error and logging messages are provided by DLPAR scripts in the same way
as regular output by writing
name=value pairs to stdout. The DR_ERROR=message pair should be used to
provide error descriptions.
The name=value pairs contain information to be used to provide error and
debug output for the syslog.
DLPAR scripts can also write additional information to stdout that will
be reflected to the HMC.
The level of information that should be provided is based on the detail
level passed to the script
in the DR_DETAIL_LEVEL=N environment variable. N must be in the range of
0 to 5, where the default value
of zero (0) signifies no information. A value of one (1) is reserved for
the operating system and is used
to present the high-level flow. The remaining levels (2-5) can be used
by the scripts to provide information
with the assumption that larger numbers provide greater detail.
A number of environment variables identifying the CPU being added or
removed are also set for processor add and remove operations. The
following sample script handles the processor add and remove phases:
#!/usr/bin/ksh
# Sample DLPAR script: drmgr passes the subcommand (scriptinfo,
# register, checkacquire, ...) as the first argument.
if [[ $# -eq 0 ]]
then
   echo "DR_ERROR=Script usage error"
   exit 1
fi
ret_code=0
command=$1
case $command in
scriptinfo )
   echo "DR_VERSION=1.0"
   echo "DR_DATE=19092002"
   echo "DR_SCRIPTINFO=DLPAR test script"
   echo "DR_VENDOR=IBM";;
usage )
   echo "DR_USAGE=root_dlpar_test.sh command [parameter]";;
register )
   echo "DR_RESOURCE=cpu";;
checkacquire )
   :;;
preacquire )
   :;;
undopreacquire )
   :;;
postacquire )
   :;;
checkrelease )
   :;;
prerelease )
   :;;
undoprerelease )
   :;;
postrelease )
   :;;
* )
   ret_code=10;;
esac
exit $ret_code
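To try the script out, it can be installed and then listed as sketched
below (the path matches the drmgr -i example above; a subsequent DLPAR
processor add or remove from the HMC would then invoke it):
# drmgr -i /root/root_dlpar_test.sh
# drmgr -l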
=======================================
60. SOME NOTES ON VIRTUALIZATION HP-UX:
=======================================
HP has had nPar hard partitions in the HP 9000 midrange and Superdome
computers since the September 2000 launch
of the Superdomes. These servers are based on a four-way cell board, and
each cell board can be logically
and electronically isolated from the others in the system, have its own
HP-UX operating system installed on it,
and function like a free-standing Unix server. In August 2001, HP
announced vPar virtual partitions,
which it rolled out first with the Superdomes and then cascaded down the
HP 9000 server line.
The Itanium-based Integrity server line has had static partitions for
HP-UX and Windows operating systems
at the high-end, and has supported HP-UX, Linux, and Windows at the low
end. Only two weeks ago, HP announced
that Linux was available on eight-way partitions on the 16-way and 64-
way variants of the Integrity Superdome boxes
through eight-way nPars. (Linux was not supported on the Superdomes
until then.)
In both of the above cases one server box can be divided into multiple
servers, thus allowing consolidation.
Each nPar or vPar is a separate machine. You can transfer CPUs between
vPars on the fly, but in a serious hardware failure you can lose all
vPars. An nPar is more solid than a vPar, but you cannot transfer CPUs
on the fly: it requires a reboot, and you can only transfer whole cell
boards, meaning a single CPU cannot be transferred to another nPar.
1. ISL (Initial System Loader)
2. hpux (secondary system loader)
3. /stand/vmunix (kernel)
-- Adding vPars adds the monitor layer, so now hpux loads the monitor
and then the monitor boots the kernels
of the virtual partitions. The boot sequence becomes
1. ISL
2. hpux
3. /stand/vpmon (vPars monitor and partition database)
4. /stand/vmunix (kernels of the virtual partitions)
ISL>
MON>
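At these prompts, the monitor is loaded from ISL and a partition is then
launched from MON>, along these lines (a sketch; szilva1 is the
partition used in the example below):
ISL> hpux /stand/vpmon
MON> vparload -p szilva1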
In this example, the vPars monitor would load the virtual partition
szilva1 and launch the kernel from the
boot device specified for szilva1. (The boot device is assigned when the
virtual partition is created and is
recorded in the monitor database.)
It's possible to install AIX onto another disk on the same system. This
is not partitioning; it's just a second install of the BOS on another
disk.
Once you have installed the required filesets (the bos.alt_disk_install
filesets), the alternate disk installation functions are available
to you.
You can use the "smitty alt_install" or "smitty alt_clone" or "smitty
alt_mksysb" fastpath:
# smitty alt_install
-----------------------------------------------
So, the Alternate Disk Installation can be used in one of two ways:
- Cloning the current rootvg to an alternate disk.
- Installing a mksysb image on another disk.
# smitty alt_mksysb
-----------------------------------------------
Install mksysb on an Alternate Disk
-----------------------------------------------
You can also use the "alt_disk_install" command to clone the rootvg to
another disk.
The command creates an "altinst_rootvg" volume group on the destination
disk and prepares the same logical volumes as in the rootvg, except that
the names are prepended with "alt_", for example, alt_hd1. Similarly,
the filesystems are renamed to "/alt_inst/filesystemname", and the
original data (mksysb or rootvg) is copied.
After this first phase, a second phase begins in which an optional
configuration action can be performed: either a custom script, or an
update of software when cloning rootvg.
Example:
# lspv
hdisk0 00fa7377474 rootvg
hdisk1 00hdgfh6374 None
The following clones hdisk0 to hdisk1, where hdisk1 will hold the new
(alternate) rootvg.
Boot the managed system as a Full System Partition so you have access to
all the disks in the managed system.
Configure the system and install the necessary applications.
Run the alt_disk_install command to begin cloning the rootvg on hdisk0
to hdisk1, as follows:
# /usr/sbin/alt_disk_install -O -B -C hdisk1
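Afterwards, the result can be checked, and the alternate rootvg
definition can be removed again if needed (a sketch, assuming the
AIX 5.x alt_disk_install flags):
# lspv                    (hdisk1 should now show altinst_rootvg)
# bootlist -m normal -o   (shows the disk the system will boot from next)
# alt_disk_install -X     (removes the altinst_rootvg definition from the ODM)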
Logical partitioning
Frequently asked questions
DLPAR
AIX: 5.2
HMC: Release 3, Version 1.0
Platform Firmware: 10/2002 system firmware or later.
To determine platform firmware level, on any AIX partition type:
lscfg -vp | grep -p Platform.
The last 6 digits of the ROM Level represent the Platform Firmware date
in the format: "YYMMDD".
Question: Does the upgrade of the HMC or Platform Firmware affect my AIX
5.1 partitions?
Answer: The upgrade of Platform Firmware on some 5.1 systems may cause
some systems difficulty rebooting.
Thus, users are encouraged to apply APAR IY31961 on their AIX 5.1
partitions before upgrading Platform Firmware.
Question: What is the order for AIX, HMC, and Platform Hardware updates?
Question: Where would I find latest versions or upgrades for: AIX or HMC
or Platform Firmware?
Question: Is there a way to tell whether a partition is DLPAR enabled?
Answer: Yes. The HMC GUI will not display Dynamic LPAR menus for
partitions that are not DLPAR enabled.
Question: Does Linux support DLPAR?
Answer: Yes. Linux distros that use the Linux 2.6 kernel or higher have
the capability of supporting DLPAR on POWER5 systems. Currently, both
the Novell/SUSE Linux for POWER and Red Hat Linux for POWER distros
support DLPAR capabilities.
Question: Do all DLPAR operations have to be done through the HMC GUI?
Answer: While it is recommended that users use the HMC GUI for dynamic
resource re-allocation, it is possible for a user or script to execute
commands on the HMC command line to perform dynamic resource operations
on a dynamic capable partition.
Question: What conditions may impede DLPAR operations?
Answer: There may be cases where the resource that users wish to
deallocate are not available because they are in use by the operating
system or applications. In those cases, the operation may not complete
until these resources are freed. Dynamic LPAR operations are also
constrained by the resource specifications in the active LPAR profile,
such as maximum/minimum processors or memory, or required I/O slots.
Question: How much time does it take for a DLPAR operation to complete?
Answer: This sets the various levels of debug output displayed during
DLPAR operations. Additionally, it allows the user to see all the
steps that AIX performed in the DLPAR operation, providing
tracing/logging information for debug and problem determination.
Question: How is the timeout value for DLPAR operations used by the HMC?
Answer: The user can set a time limit (in minutes) setting so that the
DLPAR operation request will be canceled if the pre-set time limit is
exceeded. An example is a situation requiring memory moves. When the
memory cannot be re-allocated because resource memory is pinned to the
physical memory, sometimes certain operations will take a very long time
to complete. A time limit in this case may be used to limit the amount
of retries that take place. A time limit of zero implies that there is
no time limit.
Question: With a timeout limit of zero, how can I stop a command that
may not complete because the DLPAR command will not succeed?
Answer: Although a user may set the timeout limit to zero, HMC and AIX
each have a set of default behaviors that ensure a DLPAR command that
will eventually fail returns with the appropriate error message.
Question: Are there special AIX filesets or PTF levels required for
DLPAR?
Answer: DLPAR can be used to bring online a resource that has been
activated through CUoD.
Answer: Users can perform DLPAR operations on I/O slots with affinity
partitions, but not with processor or memory resources.
Question: Are there any examples of using the HMC command line to
automate DLPAR?
AIX only.
If you are editing the bosinst.data file, use one of the following
procedures:
Verify the contents of the edited bosinst.data file using the bicheck
command:
/usr/lpp/bosinst/bicheck filename
touch /save_bosinst.data_file
Back up the system, using one of the following: the Web-based System
Manager Backups application,
the System Management Interface Tool (SMIT), or mksysb command.
Create one customized bosinst.data file for each client and, using the
Network Installation Manager (NIM),
define the files as resources. Refer to AIX Version 4.3 Network
Installation Management Guide and Reference
for more information about how to use the bosinst.data file as a
resource in network installations.
Back up the edited bosinst.data file and the new signature file to
diskette with the following command:
ls ./bosinst.data ./signature | backup -iqv
OR
Put the diskette in the diskette drive of the target machine you are
installing.
Boot the target machine from the install media (tape, CD-ROM, or
network) and install AIX.
The BOS installation program will use the diskette file, rather than the
default bosinst.data file
shipped with the installation media.
control_flow:
CONSOLE = Default
INSTALL_METHOD = overwrite
PROMPT = no
EXISTING_SYSTEM_OVERWRITE = yes
RUN_STARTUP = no
RM_INST_ROOTS = yes
ERROR_EXIT =
CUSTOMIZATION_FILE =
TCB = no
BUNDLES =
RECOVER_DEVICES = Default
BOSINST_DEBUG = no
ACCEPT_LICENSES = yes
INSTALL_CONFIGURATION =
DESKTOP = CDE
INSTALL_DEVICES_AND_UPDATES = yes
IMPORT_USER_VGS = yes
ENABLE_64BIT_KERNEL = yes
CREATE_JFS2_FS = yes
ALL_DEVICES_KERNELS = yes
GRAPHICS_BUNDLE = no
DOC_SERVICES_BUNDLE = no
NETSCAPE_BUNDLE = yes
HTTP_SERVER_BUNDLE = yes
KERBEROS_5_BUNDLE = yes
SERVER_BUNDLE = yes
ALT_DISK_INSTALL_BUNDLE = yes
REMOVE_JAVA_118 = no
target_disk_data:
PVID =
CONNECTION =
LOCATION =
SIZE_MB =
HDISKNAME = hdisk0
locale:
BOSINST_LANG = en_US
CULTURAL_CONVENTION = en_US
MESSAGES = en_US
KEYBOARD = en_US
64. NIM:
========
=======
Note 1:
=======
http://freeunixtips.com/2009/02/create-aix-nim-master/
The following shows a quick way to build a NIM master that you can use
to install the AIX OS.
This allows for more customization than using eznim if you use
“Configure a Basic NIM Environment (Easy Startup).”
You can use the following instructions to build a NIM master from 4.3
on. Keep in mind that the newer
the AIX OS, the better and more NIM functions are introduced. I did not
like NIM previous to AIX 5.2.
Install the AIX OS at the release that you want the NIM master to serve
(i.e., 5.1, 5.2, 5.3 or 6.1).
If you want it to be AIX 6.1 TL 02 SP2, you will need to patch the NIM
master to that level after installing the base OS.
Once you are satisfied with the hostname, IP, network interface, and
interface speed/setup (etherchannel, trunked, etc.), you can set up the
NIM master. You can make changes to the network configuration later in
NIM; however, it's easier if you take care of any configuration before
making the server a NIM master.
I would also make /tftpboot a separate filesystem. This can get quite
large depending on how many clients you are building.
bos.sysmgt.nim.master
bos.sysmgt.nim.spot
bos.sysmgt.nim.client
bos.net.nfs.server
The following 2 fields are all that are required; everything else can be
left as default:
* Network Name []
* Primary Network Install Interface [] +
Once you select “enter” you should see the following indicating that you
now have a NIM master:
Now you need to create the lpp_source and spot resources so the NIM
master can build an OS:
Create a filesystem that you will use to lay down your base lpp_source
and spot directories.
These can be large since each lpp_source is usually 4G or larger and
each spot is usually around 700 MB.
I use /export/nim as a filesystem.
You can use “smitty bffcreate” or copy the installp/ppc directory from
the install CD/DVD’s
to /export/nim/6100/lpp_source
I get rid of any filesets that are not en_US or EN_US since I don't need
other languages; this will save 1 to 1.5 GB of disk space.
# /usr/lib/instl/lppmgr -d /<installp/ppc_directory> -u -b -x -r
This should only take a few minutes to complete; it is done once you
see "ok".
For "Source of Install Images" select F4 and choose the lpp_source you
created in the previous step.
This can take from 15-30 minutes to complete; it is done once you
see "ok".
You now have a NIM master that you can use to build out the OS on an AIX
server.
If you have a mksysb of a server you want to use, copy it to the server,
create a mksysb NIM resource.
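A minimal sketch of defining such a mksysb resource (the location and
object name here are hypothetical):
# nim -o define -t mksysb -a server=master \
 -a location=/export/nim/mksysb/server1.mksysb server1_mksysb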
=======
Note 2:
=======
The SPOT (Shared Product Object Tree) is created from the "lpp_source".
=======
Note 3:
=======
AIX only.
With NIM, you can have unattended installation of clients. The NIM
Server also provides you with
the backup images of all your Servers (the NIM clients).
NIM objects:
------------
This topic explains the objects concept as it is used in the NIM
environment.
The machines you want to manage in the NIM environment, their resources,
and the networks through
which the machines communicate are all represented as objects within a
central database that resides
on the master. Network objects and their attributes reflect the physical
characteristics
of the network environment. This information does not affect the running
of a physical network
but is used internally by NIM for configuration information.
Each object in the NIM environment has a unique name that you specify
when the object is defined.
The NIM name is independent of any of the physical characteristics of
the object it identifies
and is only used for NIM operations. The benefit of unique names is that
an operation can be performed
using the NIM name without having to specify which physical attribute
should be used.
NIM determines which object attributes to use. For example, to easily
identify NIM clients,
the host name of the system can be used as the NIM object name, but
these names are independent
of each other. When an operation is performed on a machine, the NIM name
is used, and all other data
for the machine (including the host name) is retrieved from the NIM
database.
NIM machines:
-------------
The types of machines that can be managed in the NIM environment are
standalone, diskless,
and dataless clients. This section describes the differences between the
machines, the attributes required
to define the machines, and the operations that can be performed on
them.
The NIM environment is composed of two basic machine roles: master and
client. The NIM master manages
the installation of the rest of the machines in the NIM environment. The
master is the only machine
that can remotely run NIM commands on the clients. All other machines
participating in the NIM environment
are clients to the master, including machines that may also serve
resources.
- bosinst_data: This data file contains information that drives the BOS
install
(e.g., prompt vs. no-prompt, which disk to install the OS on, and the
type of installation
(Overwrite, Preservation, or Migration) to name a few). First, we
created separate bosinst_data resources
for each machine type (S80, H70, B50, M80, P680, and 43P). Then, by
specifying two disks to target
in our bosinst_data resource and specifying copies in the image_data
resource, we could set up
mirroring during the initial load.
We used the 43P systems as our NIM masters for each data center because
they could complete remote
installations of machines or be moved and directly connected to a server
for OS installations.
These NIM masters were also designated as the resource servers in our
environment.
To ensure consistency and standardization of each NIM master (for the
different data centers),
we created a standard NIM master machine, which we cloned. We made a
stacked tape containing a mksysb image
and a savevg image of the standard NIM master to sync up and update the
other NIM masters.
Here are the commands we ran on the standard NIM master to create this
stacked single tape:
# mksysb -i /dev/rmt0
# tctl -f /dev/rmt0.1 fsf 4
# savevg -i -m {volume_group_name} -f/dev/rmt0.1
# mt -f/dev/rmt0 rewind
To restore the tape to the other NIM masters, we did the following:
Booted and restored the mksysb image from the stacked tape
# tctl -f /dev/rmt0.1 fsf 4
# restvg volume_group_name
Setup NIM:
----------
Needed Filesets:
You need to install the NIM client, master, and spot filesets:
Installation Summary
Name Level Part Event Result
To install NIM:
# smitty nim_config_env
to setup the basic NIM environment for the first time. It needs a
minimum of two pieces of information.
- Input device for installation images
- Primary network interface
Default values are provided for the remaining options. Once this smitty
panel has been completed successfully,
the following actions will have been completed:
. NIM master initialized on the primary interface
. NIM daemons running
. lpp_source created and available
. SPOT resource created and available (Shared Product Object Tree)
EZNIM:
------
1. smitty eznim
2. Select "Configure as a NIM Master"
3. Select "Setup the NIM Master Environment"
4. Verify the default selections for software source, volume group etc..
To display the NIM resources that have been created, do the following:
use "smit eznim_master_panel" fast path, or select "Show the NIM
environment".
The nim_master_setup command uses the rootvg volume group and creates an
"/export/nim" file system, by default.
You can change these defaults using the volume_group and file_system
options. The nim_master_setup command
also allows you to optionally not create a system backup, if you plan to
use a mksysb image
from another system. The nim_master_setup usage is as follows:
Default values:
mk_resource = yes
file_system = /export/nim
volume_group = rootvg
device = /dev/cd0
To install the NIM master fileset and initialize the NIM environment
using install media located
in device /dev/cd1, type:
# nim_master_setup -a device=/dev/cd1
Note: If no client object names are given, all clients in the NIM
environment are enabled for BOS installation, unless clients are defined
using the -c option.
Examples:
To define client objects from /export/nim/client.defs file, initialize
the newly defined clients
for BOS install using resources from the basic_res_grp resource group,
and reboot the clients to begin install, type:
# nim_clients_setup -c -r
To initialize clients client1 and client2 for BOS install, using the
backup file
/export/resource/NIM/530mach.sysb as the restore image, type:
# nim_clients_setup -m /export/resource/NIM/530mach.sysb \ client1
client2
To initialize all clients in the NIM environment for native (rte) BOS
install using resources
from the basic_res_grp resource group, type:
# nim_clients_setup -n
nim -o bos_inst \
 -a source=mksysb \
 -a spot=aix520-01_spot \
 -a mksysb=base520-02-64bit_mksysb \
 -a accept_licenses=yes \
 -a preserve_res=yes \
 -a installp_flags="cNgXY" \
 -a fb_script=osg-mksysb-install_firstboot \
 <name of the client machine>
(use base520-02-32bit_mksysb instead for a 32-bit image)
If you do not want the machine to be rebooted right now, then add the
following:
-a no_client_boot=yes
nim -o reset \
 -a force=yes \
 <name of the client machine>
If, after you reset the state and try to install again, you are told
that the resource is still allocated, run the following:
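Assuming standard NIM operations, deallocating all resources from the
client usually clears this condition:
# nim -o deallocate -a subclass=all <client machine name>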
AIX 5.3 uses NIM Service Handler (NIMSH) to eliminate the need for rsh
services during NIM client communication.
The NIM client daemon (NIMSH) uses reserved ports 3901 and 3902, and it
installs as part of the
bos.sysmgt.nim.client fileset.
While NIMSH eliminates the need for rsh, it does not provide trusted
authentication based on key encryption.
To use cryptographic authentication with NIMSH, you can configure
OpenSSL in the NIM environment.
When you install OpenSSL on a NIM client, SSL socket connections are
established during NIMSH
service authentication. Enabling OpenSSL provides SSL key generation and
includes all cipher suites
supported in SSL version 3.
bos.sysmgt.nim.client
bos.sysmgt.nim.master
bos.sysmgt.nim.spot
These are available on the AIX Product CD 1.
Installation Summary
--------------------
Name Level Part Event Result
3. Configure the NIM environment (ensure you have AIX product CD 1
loaded in the CD or DVD drive):
# smitty nim_config_env
You also need to specify the primary network interface and path to the
CD or DVD drive
Copy the contents of AIX Volumes 2-5, the Expansion Pack, and the AIX
Toolbox to the lpp_source; for each CD, enter the below.
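Assuming bffcreate is used to copy the filesets (target path per the
earlier example):
# bffcreate -d /dev/cd0 -t /export/nim/6100/lpp_source all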
If the AIX CDs you are using to create the lpp and spot resources are
base-level AIX CDs, and the clients you intend to build are at a higher
level than the base level, you will need to update the lpp and spot
resources.
Identify the location of your update filesets and update with the below
command
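A sketch of those updates, assuming the NIM update and cust operations
(the update fileset path is hypothetical; object names follow the
earlier examples):
# nim -o update -a packages=all -a source=/tmp/updates lpp_source1
# nim -o cust -a lpp_source=lpp_source1 -a fixes=update_all spot1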
Once complete, confirm the maintenance level of the spot1 resource with
the below command
# lsnim -l spot1
In this example, I have updated the lpp_source1 and spot1 to AIX 5.3 ML
3
spot1:
class = resources
type = spot
plat_defined = chrp
arch = power
bos_license = yes
Rstate = ready for use
prev_state = verification is being performed
location = /export/spot/spot1/usr
version = 5
release = 3
mod = 0
oslevel_r = 5300-01
alloc_count = 0
server = master
Rstate_result = success
mk_netboot = yes
mk_netboot = yes
mk_netboot = yes
Before you can start a BOS install task you need to define the machines
you are going to install.
a. server hostname
b. platform
c. netboot_kernel
d. subnet mask
e. default gateway of the master
f. master name
If you are adding a machine that is already running, you need to ensure
the bos.sysmgt.nim.client fileset
is installed and issue the following command on the client
note: change the name= and master= to match the client and master you
are adding
The output from the following command will show your newly defined
machine
# lsnim -c machines
To get detailed output of your newly created machine, run the below
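Assuming standard NIM usage, that would be:
# lsnim -l <machine name>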
a.On the master server and clients install the openssl rpm from the AIX
toolbox
# nimconfig -c
# mv /etc/niminfo /etc/niminfo.bak
# niminit -aname=pr-testdb -amaster=pr-tsm -a connect=nimsh
# nimclient -C
Once you have defined your machines, add them to a mac_group. This will
aid administration for future installation tasks.
To define a group containing the sp-tsm2 machine run the below command
# nim -o define -t mac_group -a add_member=sp-tsm2 speedy_mac_group
For each machine to be added, use the option and argument `-a
add_member=<hostname>' where <hostname> is the name
of the server you are adding
control_flow:
CONSOLE = Default
INSTALL_METHOD = overwrite
PROMPT = no
EXISTING_SYSTEM_OVERWRITE = yes
INSTALL_X_IF_ADAPTER = yes
RUN_STARTUP = yes
RM_INST_ROOTS = no
ERROR_EXIT =
CUSTOMIZATION_FILE =
TCB = no
INSTALL_TYPE =
BUNDLES =
RECOVER_DEVICES = no
BOSINST_DEBUG = no
ACCEPT_LICENSES = yes
DESKTOP = NONE
INSTALL_DEVICES_AND_UPDATES = yes
IMPORT_USER_VGS =
ENABLE_64BIT_KERNEL = yes
CREATE_JFS2_FS = yes
ALL_DEVICES_KERNELS = yes
GRAPHICS_BUNDLE = yes
MOZILLA_BUNDLE = no
KERBEROS_5_BUNDLE = no
SERVER_BUNDLE = yes
REMOVE_JAVA_118 = no
HARDWARE_DUMP = yes
ADD_CDE = yes
ADD_GNOME = no
ADD_KDE = no
ERASE_ITERATIONS = 0
ERASE_PATTERNS =
target_disk_data:
LOCATION =
SIZE_MB =
HDISKNAME = hdisk0
locale:
BOSINST_LANG = en_US
CULTURAL_CONVENTION = en_GB
MESSAGES = en_US
KEYBOARD = en_GB
large_dumplv:
DUMPDEVICE=lg_dumplv
SIZEGB=2
dump:
PRIMARY=/dev/lg_dumplv
SECONDARY=/dev/sysdumpnull
FORCECOPY=no
COPYDIR=/dump
ALWAYS_ALLOW=yes
Once you have created the bosinst.data file, you need to define it to
the NIM environment with the below command
Once created, define the script to the NIM server with the below command
Details of your newly created script resource can be viewed with the
below
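Assuming standard NIM resource definitions, and using the object names
inst_script and bosinst referred to below (the locations are
hypothetical), these commands would look something like:
# nim -o define -t bosinst_data -a server=master \
 -a location=/export/nim/bosinst/bosinst.data bosinst
# nim -o define -t script -a server=master \
 -a location=/export/nim/scripts/inst_script.sh inst_script
# lsnim -l inst_script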
Now that you have created a number of resources and machines, it would
be a good idea to add a cron job to take a backup of the NIM database on
a weekly basis. By default this backup will be picked up by Tivoli and
the mksysb, and then sent to tape.
#!/bin/sh
#--------------------------------------------------------------------------
#
# File          : nim_backup_db.sh
#
# Author        : Steve Burgess
#
# Description   : Wrapper script to backup the NIM database
#
# Change History:
#
# Date      Version  Author            Description
# -------   -------  ----------------  -----------------------------
#--------------------------------------------------------------------------

#-------------------------
# Backup The NIM database
#-------------------------
# The success test is done on the backup command itself (not on tee,
# which would always succeed).
if /usr/lpp/bos.sysmgt/nim/methods/m_backup_db /etc/objrepos/nimdb.backup \
   > /usr/red2/logs/nim_backup.log 2>&1
then
   echo "`date +%Y%m%d` NIM_BACKUP_SUCCESS" | tee -a /usr/red2/logs/nim_backup.log
else
   echo "`date +%Y%m%d` NIM_BACKUP_FAILURE" | tee -a /usr/red2/logs/nim_backup.log
fi
# /usr/lpp/bos.sysmgt/nim/methods/m_restore_db -f /etc/objrepos/nimdb.backup
You are now ready to initiate a BOS install for one of your defined
machines. Run the below command to initiate a BOS install for sp-tsm2:
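Assuming an rte install using the resources created earlier in this
section (resource names as used above), the command would be along the
lines of:
# nim -o bos_inst -a source=rte -a lpp_source=lpp_source1 -a spot=spot1 \
 -a bosinst_data=bosinst -a script=inst_script -a accept_licenses=yes sp-tsm2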
This will make the previously created resources, inst_script and bosinst
available to the server.
Next you need to follow the below procedure to boot your machine from
the NIM server
Following successful BOS installation, you will need to confirm the post
tasks you defined in your inst_script have completed. Anything that has
failed will need to be run manually
Enter the below command to initiate the restore from the NIM server
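Assuming a mksysb restore through NIM (the resource names are
hypothetical), the command would be something like:
# nim -o bos_inst -a source=mksysb -a mksysb=server1_mksysb \
 -a spot=spot1 -a accept_licenses=yes <client machine name>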
Once entered, refer to section 11 to boot the server you are recovering
over the network
15. Booting the server into diagnostics
Occasionally you may need to boot the server into diagnostic mode to
allow you to resolve a hardware issue. To do this, first enter the below
Occasionally you may need to boot the server into maintenance mode. To
do this, first enter the below
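Assuming the standard NIM diag and maint_boot operations, the commands
would be:
# nim -o diag -a spot=spot1 sp-tsm2         (boot into diagnostics over the network)
# nim -o maint_boot -a spot=spot1 sp-tsm2   (boot into maintenance mode over the network)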
To update a client with the whole contents of an lpp resource, enter the
below
19. To add a new lpp resource that contains a new AIX level, then apply
that update to a NIM client.
To update a server from the new AIX maint level:
# nim -o cust -a lpp_source=aix_maint_ML3 -a fixes=update_all \
 -a installp_flags="a c g X p" sp-tsm2
65. ACCOUNTING:
===============
General in unix:
----------------
- remove: "cleans up" the saved pacct and wtmp files left in the sum
directory by runacct.
into /var/adm/wtmp.
Any date changes (made by running date with an argument) are also
written to /var/adm/wtmp.
Reboots and shutdowns (via acctwtmp) are also recorded in /var/adm/wtmp.
When a process ends, the kernel writes one record per process, in the
form of acct.h, in the /var/adm/pacct file.
Two programs track disk usage by login: acctdusg and diskusg. They are
invoked by the shell script dodisk.
Every hour cron executes the ckpacct program to check the size of
/var/adm/pacct.
If the file grows past 500 blocks (default), turnacct switch is
executed. (The turnacct switch program
moves the pacct file and creates a new one.) The advantage of having
several smaller pacct files
becomes apparent when trying to restart runacct if a failure occurs when
processing these records.
On AIX:
-------
When your login program ends (when you logout), the init command records
the end of the session
by writing another record in the "/var/adm/wtmp" file.
Both the login and logout records have the form described in the utmp.h
file.
- Shutdown:
acctwtmp command:
The "acctwtmp" command also writes special entries in the /var/adm/wtmp
file concerning
system shutdowns and startups.
- Process accounting:
accton command:
The system collects data on resource usage for each process as it runs,
including
the memory use, elapsed time and processor time, user and group id under
which the process runs etc..
The "accton" command records these data in the "/var/adm/pacct" file.
Note 1:
-------
There are some differences between EtherChannel and IEEE 802.3ad Link
Aggregation. Consider the differences
given in Table 15 to determine which would be best for your situation.
Table 15.
Differences between EtherChannel and IEEE 802.3ad Link Aggregation.
Supported Adapters
EtherChannel and IEEE 802.3ad Link Aggregation are supported on the
following Ethernet adapters:
Important:
Mixing adapters of different speeds in the same EtherChannel, even if
one of them is operating
as the backup adapter, is not supported. This does not mean that such
configurations will not work.
The EtherChannel driver makes every reasonable attempt to work even in a
mixed-speed scenario.
For information on configuring and using EtherChannel, see EtherChannel.
For more information on configuring
and using IEEE 802.3ad link aggregation, see IEEE 802.3ad Link
Aggregation. For information on the different
AIX and switch configuration combinations and the results they produce,
see Interoperability Scenarios.
EtherChannel
The adapters that belong to an EtherChannel must be connected to the
same EtherChannel-enabled switch.
You must manually configure this switch to treat the ports that belong
to the EtherChannel
as an aggregated link. Your switch documentation might refer to this
capability as link aggregation
or trunking.
For example, ent0 and ent1 could be configured as the main EtherChannel
adapters, and ent2 as the backup adapter,
creating an EtherChannel called ent3. Ideally, ent0 and ent1 would be
connected to the same
EtherChannel-enabled switch, and ent2 would be connected to a different
switch. In this example, all traffic
sent over en3 (the EtherChannel's interface) would be sent over ent0 or
ent1 by default (depending on the
EtherChannel's packet distribution scheme), whereas ent2 will be idle.
If at any time both ent0 and ent1 fail,
all traffic would be sent over the backup adapter, ent2. When either
ent0 or ent1 recover, they will once again
be used for all traffic.
Configuring EtherChannel:
-------------------------
Follow these steps to configure an EtherChannel.
Considerations
You can have up to eight primary Ethernet adapters and only one backup
Ethernet adapter per EtherChannel.
You can configure multiple EtherChannels on a single system, but each
EtherChannel constitutes an additional
Ethernet interface. The no command option, ifsize, may need to be
increased to include not only the
Ethernet interfaces for each adapter, but also any EtherChannels that
are configured.
In AIX 5.2 and earlier, the default ifsize is eight. In AIX 5.3 and
later, the default size is 256.
You can use any supported Ethernet adapter in an EtherChannel (see
Supported Adapters). However, the Ethernet adapters
must be connected to a switch that supports EtherChannel. See the
documentation that came with your switch
to determine if it supports EtherChannel (your switch documentation may
refer to this capability also as
link aggregation or trunking).
All adapters in the EtherChannel should be configured for the same speed
(100 Mbps, for example) and should be
full duplex.
The adapters used in the EtherChannel cannot be accessed by the system
after the EtherChannel is configured.
To modify any of their attributes, such as media speed, transmit or
receive queue sizes, and so forth,
you must do so before including them in the EtherChannel.
The adapters that you plan to use for your EtherChannel must not have an
IP address configured on them
before you start this procedure. When configuring an EtherChannel with
adapters that were previously configured
with an IP address, make sure that their interfaces are in the detach
state. The adapters to be added
to the EtherChannel cannot have interfaces configured in the up state in
the Object Data Manager (ODM),
which will happen if their IP addresses were configured using SMIT. This
may cause problems bringing up
the EtherChannel when the machine is rebooted because the underlying
interface is configured before the
EtherChannel with the information found in ODM. Therefore, when the
EtherChannel is configured, it finds
that one of its adapters is already being used. To change this, before
creating the EtherChannel,
type smit chinet, select each of the interfaces of the adapters to be
included in the EtherChannel,
and change its state value to "detach". This will ensure that when the
machine is rebooted the EtherChannel
can be configured without errors.
For more information about ODM, see Object Data Manager (ODM) in AIX 5L
Version 5.3
General Programming Concepts: Writing and Debugging Programs.
Note:
In AIX 5.2 with 5200-03 and later, enabling the link polling mechanism
is not necessary. The link poller
will be started automatically.
If you plan to use jumbo frames, you may need to enable this feature in
every adapter before creating
the EtherChannel and in the EtherChannel itself. Type smitty chgenet at
the command line.
Change the Enable Jumbo Frames value to yes and press Enter. Do this for
every adapter for which you want
to enable Jumbo Frames. You will enable jumbo frames in the EtherChannel
itself later.
Note:
In AIX 5.2 and later, enabling the jumbo frames in every underlying
adapter is not necessary once it is enabled
in the EtherChannel itself. The feature will be enabled automatically if
you set the Enable Jumbo Frames attribute to yes.
Configure an EtherChannel:
--------------------------
Type "smit etherchannel" at the command line.
Select Add an EtherChannel / Link Aggregation from the list and press
Enter.
Select the primary Ethernet adapters that you want on your EtherChannel
and press Enter. If you are planning to use
EtherChannel backup, do not select the adapter that you plan to use for
the backup at this point.
The EtherChannel backup option is available in AIX 5.2 and later.
Note:
The Available Network Adapters displays all Ethernet adapters. If you
select an Ethernet adapter that is already
being used (has an interface defined), you will get an error message.
You first need to detach this interface
if you want to use it.
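The smit panel drives mkdev under the covers; a rough command-line
equivalent would be the following (a sketch; the adapter names follow
the earlier examples, and the attribute names are those of the
EtherChannel pseudo-device):
# mkdev -c adapter -s pseudo -t ibm_ech \
 -a adapter_names=ent0,ent1 -a backup_adapter=ent2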
netif_backup: This option is available only in AIX 5.1 and AIX 4.3.3. In
this mode, the EtherChannel
will activate only one adapter at a time. The intention is that the
adapters are plugged into different
Ethernet switches, each of which is capable of getting to any other
machine on the subnet or network.
When a problem is detected either with the direct connection (or
optionally through the inability
to ping a machine), the EtherChannel will deactivate the current adapter
and activate a backup adapter.
This mode is the only one that makes use of the Internet Address to
Ping, Number of Retries, and
Retry Timeout fields.
Network Interface Backup Mode does not exist as an explicit mode in AIX
5.2 and later.
To enable Network Interface Backup Mode in AIX 5.2 and later, you must
configure one adapter in the
main EtherChannel and a backup adapter. For more information, see
Configure Network Interface Backup.
8023ad: This option enables the use of the IEEE 802.3ad Link
Aggregation Control Protocol (LACP)
for automatic link aggregation. For more details about this feature, see
IEEE 802.3ad Link Aggregation.
Hash Mode: You can choose from the following hash modes, which will
determine which data value will be used
by the algorithm to determine the outgoing adapter:
default: In this hash mode the destination IP address of the packet will
be used to determine the outgoing adapter.
For non-IP traffic (such as ARP), the last byte of the destination MAC
address is used to do the calculation.
This mode will guarantee packets are sent out over the EtherChannel in
the order they were received, but it may
not make full use of the bandwidth.
src_port: In this hash mode the source UDP or TCP port value of the
packet will be used to determine the
outgoing adapter. If the packet is not UDP or TCP traffic, the last byte
of the destination IP address will be used.
If the packet is not IP traffic, the last byte of the destination MAC
address will be used.
dst_port: In this hash mode the destination UDP or TCP port value of the
packet will be used to determine
the outgoing adapter. If the packet is not UDP or TCP traffic, the last
byte of the destination IP will be used.
If the packet is not IP traffic, the last byte of the destination MAC
address will be used.
src_dst_port: In this hash mode both the source and destination UDP or
TCP port values of the packet will be used
to determine the outgoing adapter (specifically, the source and
destination ports are added and then divided
by two before being fed into the algorithm). If the packet is not UDP or
TCP traffic, the last byte of the
destination IP will be used. If the packet is not IP traffic, the last
byte of the destination MAC address
will be used. This mode can give good packet distribution in most
situations, both for clients and servers.
Note:
Backup Adapter: This field is optional. Enter the adapter that you want
to use as your EtherChannel backup.
EtherChannel backup is available in AIX 5.2 and later.
Internet Address to Ping: This field is optional and only takes effect
if you are running Network Interface
Backup mode or if you have only one adapter in the EtherChannel and a
backup adapter. The EtherChannel will
ping the IP address or host name that you specify here. If the
EtherChannel is unable to ping this address
for the Number of Retries times in Retry Timeout intervals, the
EtherChannel will switch adapters.
Number of Retries: Enter the number of ping response failures that are
allowed before the EtherChannel
switches adapters. The default is three. This field is optional and
valid only if you have set an
Internet Address to Ping.
Retry Timeout: Enter the number of seconds between the times when the
EtherChannel will ping the Internet Address
to Ping. The default is one second. This field is optional and valid
only if you have set an Internet Address to Ping.
The Network Interface Backup setup is most effective when the adapters
are connected to different network switches,
as this provides greater redundancy than connecting all adapters to one
switch. When connecting to different switches,
make sure there is a connection between the switches. This provides
failover capabilities from one adapter
to another by ensuring that there is always a route to the currently-
active adapter.
Additionally, AIX 5.2 and later versions provide priority, meaning that
the adapter configured in the primary
EtherChannel will be used preferentially over the backup adapter. As
long as the primary adapter is functional,
it will be used. This contrasts with the behavior of Network Interface
Backup mode in releases prior to AIX 5.2,
where the backup adapter was used until it also failed, regardless of
whether the primary adapter had
already recovered.
For example, ent0 could be configured as the main adapter, and ent2 as
the backup adapter, creating an
EtherChannel called ent3. Ideally, ent0 and ent2 would be connected to
two different switches. In this example,
all traffic sent over en3 (the EtherChannel's interface) would be sent
over ent0 by default, whereas ent2
will be idle. If at any time ent0 fails, all traffic would be sent over
the backup adapter, ent2.
When ent0 recovers, it will once again be used for all traffic.
Load-balancing options
There are two load balancing methods for outgoing traffic in
EtherChannel, as follows: round-robin, which spreads the outgoing
traffic evenly across all the adapters in the EtherChannel; and
standard, which selects the adapter using an algorithm. The Hash Mode
parameter determines which numerical value is fed to the algorithm.
Round-Robin
All outgoing traffic is spread evenly across all of the adapters in the
EtherChannel. It provides the highest bandwidth optimization for the AIX
server system. While round-robin distribution is the ideal way to
utilize all the links equally, consider that it also introduces the
potential for out-of-order packets at the receiving system.
Standard or 8023ad
Standard algorithm. The standard algorithm is used for both standard and
IEEE 802.3ad-style link aggregations. AIX divides the last byte of the
"numerical value" by the number of adapters in the EtherChannel and uses
the remainder to identify the outgoing link. If the remainder is zero,
the first adapter in the EtherChannel is selected; a remainder of one
means the second adapter is selected, and so on (the adapters are
selected in the order they are listed in the adapter_names attribute).
The Hash Mode selection determines the numerical value used in the
calculation. By default, the last byte of the destination IP address or
MAC address is used in the calculation, but the source and destination
TCP or UDP port values may also be used. These alternatives allow you to
fine-tune the distribution of outgoing traffic across the real adapters
in the EtherChannel.
In src_dst_port hash mode, the TCP or UDP source and destination port
values of the outgoing packet are added, then divided by two. The
resultant whole number (no decimals) is plugged into the standard
algorithm. TCP or UDP traffic is sent on the adapter selected by the
standard algorithm and selected hash mode value. Non-TCP or UDP traffic
will fall back to the default hash mode, meaning the last byte of either
the destination IP address or MAC address. The src_dst_port hash mode
option considers both the source and the destination TCP or UDP port
values. In this mode, all of the packets in one TCP or UDP connection
are sent over a single adapter so they are guaranteed to arrive in
order, but the traffic is still spread out because connections (even to
the same host) may be sent over different adapters. The results of this
hash mode are not skewed by the connection establishment direction
because it uses both the source and destination TCP or UDP port values.
In src_port hash mode, the source TCP or UDP port value of the outgoing
packet is used. In dst_port hash mode, the destination TCP or UDP port
value of the outgoing packet is used. Use the src_port or dst_port hash
mode options if port values change from one connection to another and if
the src_dst_port option is not yielding a desirable distribution.
On AIX 5.2 with 5200-01 and earlier, type ifconfig interface detach,
where interface is your EtherChannel's or Link Aggregation's interface.
(On AIX 5.2 with 5200-03 and later, you can change the alternate address
of the EtherChannel without detaching its interface).
On the command line, type smit etherchannel.
Select Change / Show Characteristics of an EtherChannel and press Enter.
If you have multiple EtherChannels, select the EtherChannel for which
you want to create an alternate address.
Change the value in Enable Alternate EtherChannel Address to yes.
Enter the alternate address in the Alternate EtherChannel Address field.
The address must start with 0x and be a 12-digit hexadecimal address
(for example, 0x001122334455).
Press Enter to complete the process.
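The same change can be made from the command line with chdev (a sketch;
ent3 is the EtherChannel from the earlier examples, and
use_alt_addr/alt_addr are the EtherChannel device attributes):
# chdev -l ent3 -a use_alt_addr=yes -a alt_addr=0x001122334455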
Note:
Changing the EtherChannel's MAC address at runtime may cause a temporary
loss of connectivity. This is because the adapters need to be reset so
they learn of their new hardware address, and some adapters take a few
seconds to be initialized.
Dynamic Adapter Membership
Prior to AIX 5.2 with 5200-03, in order to add or remove an adapter from
an EtherChannel, its interface first had to be detached, temporarily
interrupting all user traffic. To overcome this limitation, Dynamic
Adapter Membership (DAM) was added in AIX 5.2 with 5200-03. It allows
adapters to be added or removed from an EtherChannel without having to
disrupt any user connections. A backup adapter can also be added or
removed; an EtherChannel can be initially created without a backup
adapter, and one can be added at a later date if the need arises.
You may also turn a regular EtherChannel into an IEEE 802.3ad Link
Aggregation (or vice versa), allowing users to experiment with this
feature without having to remove and recreate the EtherChannel.
Furthermore, with DAM, you may choose to create a one-adapter
EtherChannel. A one-adapter EtherChannel behaves exactly like a regular
adapter; however, should this adapter ever fail, it would be possible to
replace it at runtime without ever losing connectivity. To accomplish
this, you would add a temporary adapter to the EtherChannel, remove the
defective adapter from the EtherChannel, replace the defective adapter
with a working one using Hot Plug, add the new adapter to the
EtherChannel, and then remove the temporary adapter. During this process
you would never notice a loss in connectivity. If the adapter had been
working as a standalone adapter, however, it would have had to be
detached before being removed using Hot Plug, and during that time any
traffic going over it would simply have been lost.
Notes:
When adding an adapter at runtime, note that different Ethernet adapters
support different capabilities (for example, the ability to do checksum
offload, to use private segments, to do large sends, and so forth). If
different types of adapters are used in the same EtherChannel, the
capabilities reported to the interface layer are those supported by all
the adapters (for example, if all but one adapter supports the use of
private segments, the EtherChannel will state it does not support
private segments; if all adapters do support large send, the channel
will state it supports large send). When adding an adapter to an
EtherChannel at runtime, be sure that it supports at least the same
capabilities as the other adapters already in the EtherChannel. If you
attempt to add an adapter that does not support all the capabilities the
EtherChannel supports, the addition will fail. Note, however, that if
the EtherChannel's interface is detached, you may add any adapter
(regardless of which capabilities it supports), and when the interface
is reactivated the EtherChannel will recalculate which capabilities it
supports based on the new list of adapters.
If you are not using an alternate address and you plan to delete the
adapter whose MAC address was used for the EtherChannel (the MAC address
used for the EtherChannel is "owned" by one of the adapters), the
EtherChannel will use the MAC address of the next adapter available (in
other words, the one that becomes the first adapter after the deletion,
or the backup adapter in case all main adapters are deleted). For
example, if an EtherChannel has main adapters ent0 and ent1 and backup
adapter ent2, it will use by default ent0's MAC address (it is then said
that ent0 "owns" the MAC address). If ent0 is deleted, the EtherChannel
will then use ent1's MAC address. If ent1 is then deleted, the
EtherChannel will use ent2's MAC address. If ent0 were later re-added to
the EtherChannel, it will continue to use ent2's MAC address because
ent2 is now the owner of the MAC address. If ent2 were then deleted from
the EtherChannel, it would start using ent0's MAC address again.
Deleting the adapter whose MAC address was used for the EtherChannel may
cause a temporary loss of connectivity, because all the adapters in the
EtherChannel need to be reset so they learn of their new hardware
address. Some adapters take a few seconds to be initialized.
Tracing EtherChannel
Use tcpdump and iptrace to troubleshoot the EtherChannel. The trace hook
id for the transmission packets is 2FA and for other events is 2FB. You
cannot trace receive packets on the EtherChannel as a whole, but you can
trace each adapter's receive trace hooks.
Note:
In the General Statistics section, the number shown in Adapter Reset
Count is the number of failovers. In EtherChannel backup, coming back to
the main EtherChannel from the backup adapter is not counted as a
failover. Only failing over from the main channel to the backup is
counted.
In the Number of Adapters field, the backup adapter is counted in the
number displayed.
Switch ports have a forwarding delay counter that determines how soon
after initialization each port should begin forwarding or sending
packets. For this reason, when the main channel is re-enabled, there is
a delay before the connection is re-established, whereas the failover to
the backup adapter is faster. Check the forwarding delay counter on your
switch and make it as small as possible so that coming back to the main
channel occurs as fast as possible.
Adapters that have a link polling mechanism have an ODM attribute called
poll_link, which must be set to yes for the link polling to be enabled.
Before creating the EtherChannel, use the following command on every
adapter to be included in the channel:
smit chgenet
Change the Enable Link Polling value to yes and press Enter.
To enable jumbo frames on each adapter, use:
smitty chgenet
Change the Enable Jumbo Frames value to yes and press Enter. On AIX 5.2
and later, jumbo frames are enabled automatically in every underlying
adapter when the EtherChannel's jumbo frames attribute is set to yes.
Remote Dump
Remote dump is not supported over an EtherChannel.
Although the IEEE 802.3ad specification does not allow the user to
choose which adapters are aggregated, the AIX implementation does allow
the user to select the adapters. According to the specification, the
LACP determines, completely on its own, which adapters should be
aggregated together (by making link aggregations of all adapters with
similar link speeds and duplexity settings). This prevents you from
deciding which adapters should be used standalone and which ones should
be aggregated together. The AIX implementation gives you control over
how the adapters are used, and it never creates link aggregations
arbitrarily.
You can also configure an IEEE 802.3ad Link Aggregation if the switch
supports EtherChannel but not IEEE 802.3ad. In that case, you would have
to manually configure the ports as an EtherChannel on the switch (just
as if a regular EtherChannel had been created). By setting the mode to
8023ad, the aggregation will work with EtherChannel-enabled as well as
IEEE 802.3ad-enabled switches. For more information about
interoperability, see Interoperability Scenarios.
Note:
The steps to enable the use of IEEE 802.3ad vary from switch to
switch. You should consult the documentation for your switch to
determine what initial steps, if any, must be performed to enable LACP
in the switch.
For information on how to configure an IEEE 802.3ad aggregation, see
Configuring IEEE 802.3ad Link Aggregation.
Considerations
Consider the following before configuring an IEEE 802.3ad Link
Aggregation. The aggregation status can be checked with:
entstat -d device
where device is the Link Aggregation device. The LACP status reported can
be one of the following:
Inactive: LACP has not been initiated. This is the status when a Link
Aggregation has not yet been configured, either because it has not yet
been assigned an IP address or because its interface has been detached.
Negotiating: LACP is in progress, but the switch has not yet aggregated
the adapters. If the Link Aggregation remains on this status for longer
than one minute, verify that the switch is correctly configured. For
instance, you should verify that LACP is enabled on the ports.
Aggregated: LACP has succeeded and the switch has aggregated the
adapters together.
Failed: LACP has failed. Some possible causes are that the adapters in
the aggregation are set to different line speeds or duplex modes or that
they are plugged into different switches. Verify the adapters'
configuration.
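(A quick, illustrative way to check this status, assuming the Link
Aggregation device is ent8, is to filter the entstat output:)
# entstat -d ent8 | grep -i aggregat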
In addition, some switches allow only contiguous ports to be aggregated
and may have a limitation on the number of adapters that can be
aggregated. Consult the switch documentation to determine any
limitations that the switch may have, then verify the switch
configuration.
Note:
The Link Aggregation status is a diagnostic value and does not affect
the AIX side of the configuration. This status value was derived using a
best-effort attempt. To debug any aggregation problems, it is best to
verify the switch's configuration.
Interoperability Scenarios
The following table shows several interoperability scenarios. Consider
these scenarios when configuring your EtherChannel or IEEE 802.3ad Link
Aggregation. Additional explanation of each scenario is given after the
table.
Table 17. Different AIX and switch configuration combinations and the
results each combination will produce:

EtherChannel mode         Switch configuration   Result
------------------------  ---------------------  ------------------------------------------
8023ad                    IEEE 802.3ad LACP      OK - AIX initiates LACPDUs, which triggers
                                                 an IEEE 802.3ad Link Aggregation on the
                                                 switch.
standard or round_robin   EtherChannel           OK - Results in traditional EtherChannel
                                                 behavior.
8023ad                    EtherChannel           OK - Results in traditional EtherChannel
                                                 behavior. AIX initiates LACPDUs, but the
                                                 switch ignores them.
standard or round_robin   IEEE 802.3ad LACP      Undesirable - Switch cannot aggregate. The
                                                 result may be poor performance as the
                                                 switch moves the MAC address between
                                                 switch ports.
Note:
In this case, the entstat -d command will always report the aggregation
is in the Negotiating state.
standard or round_robin with IEEE 802.3ad LACP:
This setup is invalid. If the switch is using LACP to create an
aggregation, the aggregation will
never happen because AIX will never reply to LACPDUs. For this to work
correctly, 8023ad should be
the mode set on AIX.
Note 5:
-------
In order to use IP over Fibre Channel, your system must have a Fibre
Channel switch and either
the 2 Gigabit Fibre Channel Adapter for 64-bit PCI Bus or the 2 Gigabit
Fibre Channel PCI-X Adapter. The following filesets must be installed:
devices.common.ibm.fc
devices.pci.df1000f7
devices.pci.df1080f9
devices.pci.df1000f9
After the adapter has been enabled, IP needs to be configured over it.
Follow these steps to configure IP:
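(The individual steps are summarized here as a minimal hedged example,
using the interface and address from the verification output below; on a
real system this can also be done via smitty inet:)
# ifconfig fc1 inet 11.11.11.18 netmask 255.255.255.0 up
Then verify with: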
ifconfig -a
If your configuration was successful, you will see results similar to
the following among the results:
fc1: flags=e000843
<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,PSEG,CH
AIN>
inet 11.11.11.18 netmask 0xffffff00 broadcast 11.11.11.255
Additionally, you can run the following command:
ifconfig fcx
where x is the minor number of the interface.
To look at your virtual memory usage and its causes, you can use a
combination of tools such as vmstat, svmon, and topas. Which command you
use to tune the VMM depends on the AIX version:
- vmtune command in AIX lower than AIX 5L, like AIX 4.1:
--------------------------------------------------------
The vmtune command can be used to modify the VMM parameters that control
the behavior of the memory-management
subsystem. Some options are available to alter the defaults for LVM and
file systems; the options dealing
with disk I/O are discussed in the following sections.
Because vmtune settings did not survive a reboot, they were typically
made permanent via an /etc/inittab entry, for example:
vmtune:2:wait:/usr/samples/kernel/vmtune -P 50
To use the vmtune command, specify a flag representing the parameter you
want to change, for example "maxfree":
maxfree
Purpose: The maximum size to which the VMM page-frame free list will
grow by page stealing.
Values: Default: configuration-dependent, Range: 16 to 204800 (4KB
frames)
Display: vmtune
Change: vmtune -F NewValue
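For example, to display all current values and then raise maxfree (the
new value is illustrative):
# /usr/samples/kernel/vmtune
# /usr/samples/kernel/vmtune -F 144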
Introduction
By default, AIX is tuned for a mixed workload, and will grow its VMM
file cache up to 80% of physical RAM.
While this may be great for an NFS server, SMTP relay or web server, it
is very poor for running any application
which does its own cache management. This includes most databases
(Oracle, DB2, Sybase, PostgreSQL,
MySQL using InnoDB tables, TSM) and some other software (eg. the Squid
web cache).
Common symptoms include high paging (high pgspin and pgspout in topas),
high system CPU time,
the lrud kernel thread using CPU, slow overall system throughput, slow
backups and slow process startup.
For most database systems, the ideal solution is to use raw logical
volumes. If this is not acceptable,
then direct I/O and concurrent I/O should be used. If for some reason
this is not possible, then the last solution
is to tune the AIX file caches to be less aggressive.
Parameters
The three main parameters that should be tuned are those controlling the
size of the persistent file cache
(minperm% and maxperm%) used for JFS filesystems, and the client file
cache (maxclient%) used by
NFS, CDRFS, and JFS2 filesystems:
- numperm%
Defines the current size of the persistent file cache.
- minperm%
Defines the minimum amount of RAM the persistent file cache may
occupy. If numperm% is less than or equal
to minperm%, file pages will not be stolen when RAM is required.
- maxperm%
Defines the maximum amount of RAM the persistent file cache may occupy
before it is used as the sole source
of new pages by the page stealing algorithm. By default, numperm% may
exceed maxperm% if there is
free memory available. The setting strict_maxperm may be set to one to
change maxperm% into a hard limit,
guaranteeing numperm% will never exceed maxperm%.
- strict_maxperm
As above, if set to 1, changes maxperm% into a hard limit.
- numclient%
Defines the current size of the client file cache.
- maxclient%
Defines the hard maximum size of the client file cache.
- strict_maxclient
Introduced in 5.2 ML4, allows the changing of maxclient% into a soft
limit, similar to strict_maxperm.
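For a database server that does its own caching, a commonly used
(illustrative) tuning is to shrink these caches, for example:
# vmo -p -o minperm%=5 -o maxperm%=20 -o maxclient%=20
You can inspect all current VMM settings with vmo -L, for example: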
# vmo -L
NAME              CUR     DEF     BOOT    MIN   MAX    UNIT         TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
memory_frames     512K            512K                 4KB pages      S
--------------------------------------------------------------------------------
pinnable_frames   427718          427718               4KB pages      S
--------------------------------------------------------------------------------
maxfree           128     128     128     16    200K   4KB pages      D
     minfree
     memory_frames
...
Kernel tuning parameters:
AIX 5.2 introduces a new method that is more flexible and centralized
for setting most of the AIX kernel
tuning parameters. It is now possible to make permanent changes without
having to edit any rc files.
This is achieved by placing the reboot values for all tunable parameters
in a new stanza file,
/etc/tunables/nextboot. When the machine is rebooted, the values in that
file are automatically applied.
Another stanza file, /etc/tunables/lastboot is automatically generated
with all the values as they were set
just after the reboot. This provides the capability to return to those
values at any time. The log file for
any changes made or impossible to make during reboot is stored in
/etc/tunables/lastboot.log. There are sets
of SMIT panels and a WebSm plug-in also available to manipulate current
and reboot values for all tuning
parameters as well as the files in the /etc/tunables directory.
There are four new commands introduced in AIX 5.2 to modify the tunables
files. The tunsave command is used
to save values to a stanza file. The tunrestore command is used to apply
a file, for example, to change all
tunables parameter values to those listed in a file. The command
tuncheck must be used to validate a file
created manually and the tundefault command is available to reset
tunable parameters to their default values.
All four commands work on both current and reboot tunables parameters
values. See the respective man pages
for more information.
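An illustrative sequence (the file name is made up):
# tunsave -f /etc/tunables/mytunables      (save the current values)
# tuncheck -f /etc/tunables/mytunables     (validate a hand-edited file)
# tunrestore -f /etc/tunables/mytunables   (apply the values in the file)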
The ioo command will handle all the I/O related tuning parameters, while
the vmo command will handle
all the other VMM parameters previously managed by vmtune. All three
commands are part of the new fileset
"bos.perf.tune" which also contains tunsave, tunrestore, tuncheck, and
tundefault.
The bos.adt.samples fileset will still include the vmtune and schedtune
commands, which will simply
be compatibility shell scripts calling vmo, ioo, and schedo as
appropriate. The compatibility scripts
only support changes to parameters which can be changed interactively.
That is, parameters that need bosboot
and then require a reboot of the machine to be effective are no longer
supported by the vmtune script.
To change those parameters, users must now use vmo -r. The options (all
from vmtune) and parameters
in question are as follows:
vmtune option   parameter name                                  new command
-C 0|1          page coloring                                   vmo -r -o pagecoloring=0|1
-g n1 -L n2     large page size /
                number of large pages to reserve                vmo -r -o lgpg_size=n1 -o lgpg_regions=n2
-m n            memory pools                                    vmo -r -o mempools=n
-v n            number of frames per memory pool                vmo -r -o framesets=n
-i n            interval for special data segment identifiers   vmo -r -o spec_dataseg_int=n
-V n            number of special data segment identifiers
                to reserve                                      vmo -r -o num_spec_dataseg=n
-y 0|1          p690 memory affinity                            vmo -r -o memory_affinity=0|1
Purpose
Manages Virtual Memory Manager tunable parameters.
Syntax
vmo [ -p | -r ] { -o Tunable [= Newvalue]}
vmo [ -p | -r ] -D
vmo [ -p | -r ] -a
vmo -?
vmo -h [ Tunable ]
vmo -L [ Tunable ]
vmo -x [ Tunable ]
Note:
Multiple -o, -d, -x and -L are allowed.
Description
Note:
The vmo command can only be executed by root.
Use the vmo command to configure Virtual Memory Manager tuning
parameters. This command sets or displays
current or next boot values for all Virtual Memory Manager tuning
parameters. This command can also make
permanent changes or defer changes until the next reboot. Whether the
command sets or displays a parameter
is determined by the accompanying flag. The -o flag performs both
actions. It can either display the
value of a parameter or set a new value for a parameter.
If the number of file pages (permanent pages) in memory is less than the
number specified by the
minperm% parameter, the VMM steals frames from either computational or
file pages, regardless
of repage rates. If the number of file pages is greater than the number
specified by the maxperm% parameter,
the VMM steals frames only from file pages. Between the two, the VMM
normally steals only file pages,
but if the repage rate for file pages is higher than the repage rate for
computational pages,
computational pages are stolen as well.
You can also modify the thresholds that are used to decide when the
system is running out of paging space.
The npswarn parameter specifies the number of paging-space pages
available at which the system begins
warning processes that paging space is low. The npskill parameter
specifies the number of paging-space
pages available at which the system begins killing processes to release
paging space.
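An illustrative way to display and then raise these thresholds (the new
values are made up; npswarn should stay well above npskill):
# vmo -o npswarn -o npskill
# vmo -p -o npswarn=8192 -o npskill=2048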
The purpose of the free list is to keep track of real-memory page frames
released by terminating processes
and to supply page frames to requestors immediately, without forcing
them to wait for page steals
and the accompanying I/O to complete.
The minfree limit specifies the free-list size below which page stealing
to replenish the free list
is to be started. The maxfree parameter is the size above which stealing
ends. In the case of enabling
strict file cache limits, like the strict_maxperm or strict_maxclient
parameters,
the minfree value is used to start page stealing.
Examples:
You must configure your system to use large pages and you must also
specify the amount of physical memory
that you want to allocate to back large pages. The system default is to
not have any memory allocated
to the large page physical memory pool. You can use the vmo command to
configure the size of the large page
physical memory pool. The following example allocates 4 GB (256 regions
of 16 MB each) to the large page physical memory pool; this is a reboot
tunable, so a bosboot and reboot may be required:
# vmo -r -o lgpg_regions=256 -o lgpg_size=16777216
Databases often combine this with pinned shared memory, which is enabled
with:
# vmo -p -o v_pinshm=1
To see how many large pages are in use on your system, use the vmstat -l
command as in the following example:
# vmstat -l
In the output (not reproduced here), the alp column shows the number of
active large pages and the flp column the number of free large pages; in
this example, 16 of each.
2. Tuning Examples:
smitty chgsys
Another example:
----------------
1. java.lang.OutOfMemory
2. javax.naming.NameNotFoundException
3. javax.servlet.ServletException
4. java.lang.StringIndexOutOfBoundsException
5. java.net.SocketException
6. java.io.IOException
7. java.io.FileNotFoundException
8. java.util.MissingResourceException
9. java.lang.ClassNotFoundException
10. java.lang.StringIndexOutOfBoundsException
11. java.io.InterruptedIOException
12. com.splwg.cis.common.NestedRuntimeException
The number that is associated with action determines the type of garbage
collection that is being done:
action=1 means a preemptive garbage collection cycle.
action=2 means a full allocation failure.
action=3 means that a heap expansion takes place.
action=4 means that all known soft references are cleared.
action=5 means that stealing from the transient heap is done.
action=6 means that free space is very low.
Note 1 on java.lang.OutOfMemory
-------------------------------
The Java process has two memory areas: the Java heap and the "native
heap", which together make up the total memory usage of the process.
The Java heap is controlled via the -Xms and -Xmx settings, and the space
available to the native heap is whatever is not used by the Java heap.
The act of reducing the maximum Java heap size has made the "native heap"
bigger, and this is the area that was memory constrained.
We know this because, when the OutOfMemoryError was generated, the
message informed you that the JVM was unable to allocate a new native
stack; this is allocated on the native heap (there is also a Java thread
object, which is created and allocated on the Java heap).
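For example, shrinking the maximum Java heap leaves more address space
for the native heap (class name and sizes are illustrative):
# java -Xms256m -Xmx512m MyApp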
Note 2 on java.lang.OutOfMemory
-------------------------------
Hi,
Any suggestion?
Lishin

Hi Lishin,
We are running tomcat on Solaris 2.6. Each new connection uses at least
one socket connection, which is treated as a file descriptor. There is a
default (per-user) limit of 64 file descriptors.
Display the current limit:
  ulimit -n
Set a new limit:
  ulimit -n <num>
Display the hard (system) limit:
  ulimit -Hn
I hope this helps - I had a very frustrating time solving this one!
Joe.
Note 3 on java.lang.OutOfMemory
-------------------------------
5.0.2:
------
5.0.2.x x in 2-16
5.0.2.1
5.0.2.2
5.0.2.3
5.0.2.4
5.0.2.5
5.0.2.6
5.0.2.7
5.0.2.8
5.0.2.9
5.0.2.10
5.0.2.11
5.0.2.12
5.0.2.13
5.0.2.14
5.0.2.15 SDK is not updated
5.1:
----
5.1.1 ca1420-20040626
5.1.1.1
5.1.1.2
5.1.1.3
5.1.1.4
5.1.1.5
5.1.1.6
5.1.1.7
5.1.1.8
5.1.1.9 SDK is not updated
6.0:
----
6.0 ca142sr1w-20041028
6.0.0.2
6.0.0.3 SDK is not updated
6.0.1 ca142sr1a-20050209(SR1a)
6.0.1.1
6.0.1.2 SDK is not updated
6.0.2 ca142-20050609
6.0.2.1
6.0.2.3
6.0.2.5
6.0.2.7 SDK is not updated
- maxuproc
Purpose: Specifies the maximum number of processes per user ID.
Values: Default: 40; Range: 1 to 131072
Display: lsattr -E -l sys0 -a maxuproc
Change: chdev -l sys0 -a maxuproc=NewValue
Change takes effect immediately and is preserved over boot. If value is
reduced, then it goes into effect
only after a system boot.
Diagnosis: Users cannot fork any additional processes.
Tuning: This is a safeguard to prevent users from creating too many
processes.
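For example, to raise the limit and verify it (the value is
illustrative):
# chdev -l sys0 -a maxuproc=2000
# lsattr -E -l sys0 -a maxuproc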
- ncargs
Purpose: Specifies the maximum allowable size of the ARG/ENV list (in
4KB blocks) when running exec() subroutines.
Values: Default: 6; Range: 6 to 1024
Display: lsattr -E -l sys0 -a ncargs
Change: chdev -l sys0 -a ncargs=NewValue
Change takes effect immediately and is preserved over boot.
Diagnosis: Users cannot execute any additional processes because the
argument list passed to the exec()
system call is too long. A low default value might cause some programs
to fail with the arg list too long
error message, in which case you might try increasing the ncargs value
with the chdev command above and then
rerunning the program.
Tuning: This is a mechanism to prevent the exec() subroutines from
failing if the argument list
is too long. Please note that tuning to a higher ncargs value puts
additional constraints on system memory resources.
- minpout
Purpose: Specifies the point at which programs that have reached maxpout
can resume writing to the file.
Values: Default: 0 (no checking); Range: 0 to n (n should be a multiple
of 4 and should be at least 4 less than maxpout)
Display: lsattr -E -l sys0 -a minpout
Change: chdev -l sys0 -a minpout=NewValue
Change is effective immediately and is permanent. If the -T flag is
used, the change is immediate and lasts until
the next boot. If the -P flag is used, the change is deferred until the
next boot and is permanent.
Diagnosis: If the foreground response time sometimes deteriorates when
programs with large amounts of sequential
disk output are running, sequential output may need to be paced.
Tuning: Set maxpout to 33 and minpout to 16. If sequential performance
deteriorates unacceptably,
increase one or both. If foreground performance is still unacceptable,
decrease both.
- Paging Space Size
Purpose: The amount of disk space required to hold pages of working
storage.
Values: Default: configuration-dependent; Range: 32 MB to n MB for hd6,
16 MB to n MB for non-hd6
Display: lsps -a
Change: mkps, chps, or smitty pgsp
Change is effective immediately and is permanent. Paging space is not
necessarily put into use immediately, however.
Diagnosis: Run: lsps -a. If processes have been killed for lack of
paging space, monitor the situation with the
psdanger() subroutine.
Tuning: If it appears that there is not enough paging space to handle
the normal workload,
add a new paging space on another physical volume or make the existing
paging spaces larger.
- Disk Drive Queue Depth
Purpose: Maximum number of requests the disk device can hold in its
queue.
Values: Default: IBM disks=3; Non-IBM disks=0; Range: specified by
manufacturer
Display: lsattr -E -l hdiskn
Change: chdev -l hdiskn -a q_type=simple -a queue_depth=NewValue
Change is effective immediately and is permanent. If the -T flag is
used, the change is immediate and lasts until the next boot. If the -P
flag is used, the change is deferred until the next boot and is
permanent.
Diagnosis: N/A
Tuning: If the non-IBM disk drive is capable of request-queuing, make
this change to ensure that the operating system takes advantage of the
capability.
Refer to: Setting SCSI-Adapter and Disk-Device Queue Limits
MALLOCBUCKETS options and their default values:
number_of_buckets                 16
bucket_sizing_factor (32-bit)     32
bucket_sizing_factor (64-bit)     64
blocks_per_bucket                 1024
- EXTSHM (AIX 4.2.1 and later)
Purpose: Turns on the extended shared memory facility.
Values: Default: Not set
Possible Value: ON
Display: echo $EXTSHM
Change: EXTSHM=ON; export EXTSHM
Change takes effect immediately in this shell. Change is effective until
logging out of this shell. Permanent change is made by adding EXTSHM=ON
to the /etc/environment file.
Diagnosis: N/A
Tuning: Setting value to ON will allow a process to allocate shared
memory segments as small as 1 byte (though this will be rounded up to
the nearest page); this effectively removes the limitation of 11 user
shared memory segments. Maximum size of all segments together can still
only be 2.75 GB worth of memory for 32-bit processes. 64-bit processes
do not need to set this variable since a very large number of segments
is available. Some restrictions apply for processes that set this
variable, and these restrictions are the same as with processes that use
mmap buffers.
Refer to: Extended Shared Memory (EXTSHM)
- NODISCLAIM
Purpose: Controls how calls to free() are being handled. When PSALLOC is
set to early, all free() calls result in a disclaim() system call. When
NODISCLAIM is set to True, this does not occur.
Values: Default: Not set
Possible Value: True
Display: echo $NODISCLAIM
Change: NODISCLAIM=true; export NODISCLAIM
Change takes effect immediately in this shell. Change is effective until
logging out of this shell. Permanent change is made by adding
NODISCLAIM=true to the /etc/environment file.
Diagnosis: If number of disclaim() system calls is very high, you may
want to set this variable.
Tuning: Setting this variable will eliminate calls to disclaim() from
free() if PSALLOC is set to early.
Refer to: Early Page Space Allocation
- NSORDER
Purpose: Overwrites the set name resolution search order.
Values: Default: bind, nis, local
Possible Values: bind, local, nis, bind4, bind6, local4, local6, nis4,
or nis6
Display: echo $NSORDER (this is turned on internally, so the initial
default value will not be seen with the echo command)
Change: NSORDER=value,value,...; export NSORDER
Change takes effect immediately in this shell. Change is effective until
logging out of this shell. Permanent change is made by adding
NSORDER=value to the /etc/environment file.
Diagnosis: N/A
Tuning: NSORDER overrides the /etc/netsvc.conf file.
Refer to: Tuning Name Resolution
- RT_MPC (AIX 4.3.3 and later)
Purpose: When running the kernel in real-time mode (see the bosdebug
command), an MPC can be sent to a different CPU to interrupt it if a
better priority thread is runnable, so that this thread can be dispatched
immediately.
Values: Default: Not set; Range: ON
Display: echo $RT_MPC
Change: RT_MPC=ON
export RT_MPC
Change takes effect immediately. Change is effective until next boot.
Permanent change is made by adding RT_MPC=ON command to the
/etc/environment file.
Diagnosis: N/A
Note on LDR_CNTRL:
------------------
Setting the maximum number of AIX data segments that a process can use
(LDR_CNTRL)
In AIX, Version 4.3.3 and later, the number of segments that a process
can use for data is controlled
by the LDR_CNTRL environment variable. It is defined in the parent
process of the process that
is to be affected. For example, the following defines one additional
data segment:
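(A commonly documented value, with each data segment being 256 MB, is
shown below; run the application from the same shell, then unset the
variable so other programs are not affected:)
# export LDR_CNTRL=MAXDATA=0x10000000
# /path/to/your/application
# unset LDR_CNTRL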
The following table shows the LDR_CNTRL setting and memory increase for
various numbers of data segments:
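(Reconstructed from the 256 MB segment size; treat these figures as
approximate:)
LDR_CNTRL setting               Extra data segments   Extra memory
MAXDATA=0x10000000              1                     256 MB
MAXDATA=0x20000000              2                     512 MB
MAXDATA=0x40000000              4                     1 GB
MAXDATA=0x80000000              8                     2 GB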
Most UNIX systems use the LANG variable to specify the desired locale.
Different UNIX operating systems, however,
require different locale names to specify the same language. Be sure to
use a value for LANG that is supported
by the UNIX operating system that you are using.
To obtain the locale names for your UNIX system, enter the following:
# locale -a
The locale settings are divided into the following categories:
LC_COLLATE
LC_CTYPE
LC_MONETARY
LC_NUMERIC
LC_TIME
LC_MESSAGES
LC_ALL
To verify that you have a language package installed for your UNIX or
Linux system, enter the following:
# locale
LANG=en_US
LC_COLLATE="en_US"
LC_CTYPE="en_US"
LC_MONETARY="en_US"
LC_NUMERIC="en_US"
LC_TIME="en_US"
LC_MESSAGES="en_US"
LC_ALL=
The following output, on the other hand, shows a system where the LC_*
categories resolve to the default "C" locale:
LANG=en_US
LC_COLLATE="C"
LC_CTYPE="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_MESSAGES="C"
LC_ALL=
# export LANG=De_A.88591
With this setting it should be possible for that user to find any
relevant catalogs should they exist.
Should the LANG variable not be set, the value of
LC_MESSAGES as returned by setlocale() is used.
If this is NULL, the default path as defined in <nl_types.h> is used.
The proper way to activate UTF-8 is the POSIX locale mechanism. A locale
is a configuration setting that
contains information about culture-specific conventions of software
behaviour, including the character encoding,
the date/time notation, alphabetic sorting rules, the measurement system
and common office paper size, etc.
The names of locales usually consist of ISO 639-1 language and ISO 3166-1
country codes, sometimes with additional encoding names or other
qualifiers.
You can get a list of all locales installed on your system (usually
in /usr/lib/locale/) with the command
locale -a. Set the environment variable LANG to the name of your
preferred locale. When a C program executes
the setlocale(LC_CTYPE, "") function, the library will test the
environment variables
LC_ALL, LC_CTYPE, and LANG in that order, and the first one of these
that has a value will determine which
locale data is loaded for the LC_CTYPE category (which controls the
multibyte conversion functions).
The locale data is split up into separate categories. For example,
LC_CTYPE defines the character encoding
and LC_COLLATE defines the string sorting order. The LANG environment
variable is used to set the default locale
for all categories, but the LC_* variables can be used to override
individual categories. Do not worry too much
about the country identifiers in the locales. Locales such as en_GB
(English in Great Britain) and en_AU
(English in Australia) differ usually only in the LC_MONETARY category
(name of currency, rules for printing
monetary amounts), which practically no Linux application ever uses.
LC_CTYPE=en_GB and LC_CTYPE=en_AU have exactly
the same effect.
You can query the name of the character encoding in your current locale
with the command locale charmap.
This should say UTF-8 if you successfully picked a UTF-8 locale in the
LC_CTYPE category. The command locale -m
provides a list with the names of all installed character encodings.
If you use exclusively C library multibyte functions to do all the
conversion between the external character
encoding and the wchar_t encoding that you use internally, then the C
library will take care of using the right
encoding according to LC_CTYPE for you and your program does not even
have to know explicitly what the current
multibyte encoding is.
# export LANG=en_GB.UTF-8
# export LANG=en_US.UTF-8
Note:
For some apps you must have the LANG and LC_ALL environment variables
set to the appropriate locale
in your current session before you start that app.
LANG
This variable determines the locale category for native language, local
customs and coded character set
in the absence of the LC_ALL and other LC_* (LC_COLLATE, LC_CTYPE,
LC_MESSAGES, LC_MONETARY, LC_NUMERIC,
LC_TIME) environment variables. This can be used by applications to
determine the language to use for
error messages and instructions, collating sequences, date formats, and
so forth.
LC_ALL
This variable determines the values for all locale categories. The value
of the LC_ALL environment variable
has precedence over any of the other environment variables starting with
LC_ (LC_COLLATE, LC_CTYPE, LC_MESSAGES,
LC_MONETARY, LC_NUMERIC, LC_TIME) and the LANG environment variable.
LC_COLLATE
This variable determines the locale category for character collation. It
determines collation information
for regular expressions and sorting, including equivalence classes and
multi-character collating elements,
in various utilities and the strcoll() and strxfrm() functions.
Additional semantics of this variable, if any,
are implementation-dependent.
LC_CTYPE
This variable determines the locale category for character handling
functions, such as tolower(), toupper()
and isalpha(). This environment variable determines the interpretation
of sequences of bytes of text data
as characters (for example, single- as opposed to multi-byte
characters), the classification of characters
(for example, alpha, digit, graph) and the behaviour of character
classes. Additional semantics of
this variable, if any, are implementation-dependent.
LC_MESSAGES
This variable determines the locale category for processing affirmative
and negative responses and the language
and cultural conventions in which messages should be written. It also
affects the behaviour of the
catopen() function in determining the message catalogue. Additional
semantics of this variable, if any,
are implementation-dependent. The language and cultural conventions of
diagnostic and informative messages
whose format is unspecified by this specification set should be affected
by the setting of LC_MESSAGES.
LC_MONETARY
This variable determines the locale category for monetary-related
numeric formatting information.
Additional semantics of this variable, if any, are implementation-
dependent.
LC_NUMERIC
This variable determines the locale category for numeric formatting (for
example, thousands separator
and radix character) information in various utilities as well as the
formatted I/O operations in printf()
and scanf() and the string conversion functions in strtod(). Additional
semantics of this variable, if any,
are implementation-dependent.
LC_TIME
This variable determines the locale category for date and time
formatting information. It affects the behaviour
of the time functions in strftime(). Additional semantics of this
variable, if any, are implementation-dependent.
NLSPATH
This variable contains a sequence of templates that the catopen()
function uses when attempting to locate
message catalogues. Each template consists of an optional prefix, one or
more substitution fields, a filename
and an optional suffix. For example:
NLSPATH="/system/nlslib/%N.cat"
Note 1:
-------
ar Command
Purpose
Maintains the indexed libraries used by the linkage editor.
Syntax
ar [ -c ] [ -l ] [ -g | -o ] [ -s ] [ -v ] [ -C ] [ -T ] [ -z ]
   { -h | -p | -t | -x } [ -X {32|64|32_64} ] ArchiveFile [ File ... ]

ar [ -c ] [ -l ] [ -g | -o ] [ -s ] [ -v ] [ -C ] [ -T ] [ -z ]
   { -m | -r [ -u ] } [ { -a | -b | -i } PositionName ]
   [ -X {32|64|32_64} ] ArchiveFile File ...

ar [ -c ] [ -l ] [ -g | -o ] [ -s ] [ -v ] [ -C ] [ -T ] [ -z ]
   { -d | -q } [ -X {32|64|32_64} ] ArchiveFile File ...

ar [ -c ] [ -l ] [ -v ] [ -C ] [ -T ] [ -z ] { -g | -o | -s | -w }
   [ -X {32|64|32_64} ] ArchiveFile
Description
There are two file formats that the ar command recognizes. The Big
Archive
Format, ar_big, is the default file format and supports both 32-bit and
64-bit
object files. The Small Archive Format can be used to create archives
that are
recognized on versions older than AIX 4.3, see the -g flag. If a 64-bit
object
is added to a small format archive, ar first converts it to the big
format,
unless -g is specified. By default, ar only handles 32-bit object files;
any
64-bit object files in an archive are silently ignored. To change this
behavior,
use the -X flag or set the OBJECT_MODE environment variable.
Flags
In an ar command, you can specify any number of optional flags from the
set
cClosTv. You must specify one flag from the set of flags dhmopqrstwx. If
you
select the -m or -r flag, you may also specify a positioning flag (-a,
-b, or
-i); for the -a, -b, or -i flags, you must also specify the name of a
file
within ArchiveFile (PositionName), immediately following the flag list
and
separated from it by a blank.
-h Sets the modification times in the member headers of the named files
to the
current date and time. If you do not specify any file names, the ar
command sets
the time stamps of all member headers. This flag cannot be used with the
-z
flag.
-q Adds the named files to the end of the library. In addition, if you
name the same file twice, it may be put in the library twice.
-r Replaces a named file if it already appears in the library. If a named
file does not already appear in the library, the ar command adds it. In
this case, positioning flags do affect placement. If you do not specify a
position, new files are placed at the end of the library. If you name the
same file twice, it may be put in the library twice.
-T Allows file name truncation if the archive member name is longer than
the
file system supports. This option has no effect because the file system
supports
names equal in length to the maximum archive member name of 255
characters.
-u Copies only files that have been changed since they were last copied
(see the
-r flag discussed previously).
-w Displays the archive symbol table. Each symbol is listed with the
name of the
file in which the symbol is defined.
-x Extracts the named files by copying them into the current directory.
These
copies have the same name as the original files, which remain in the
library. If
you do not specify any files, the -x flag copies all files out of the
library.
This process does not alter the library.
-X mode Specifies the type of object file ar should examine. The mode
must be
one of the following:
32
Processes only 32-bit object files
64
Processes only 64-bit object files
32_64
Processes both 32-bit and 64-bit object files
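Illustrative usage (archive and member names are made up): add members to
an archive, list its table of contents, and extract one member:
# ar -v -q libfoo.a foo.o bar.o
# ar -t libfoo.a
# ar -x libfoo.a foo.o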
$ enq -q -PAMSPM00040
Queue   Dev   Status    Job Files              User       PP %  Blks Cp Rnk
------- ----- --------- --- ------------------ ---------- ---- ----- -- ---
AMSPM00 hp@UT READY
$ lp -dAMSPM00040 testje.txt
Job number is: 351
$ rm testje.txt
$ qchk -#351
Queue   Dev   Status    Job Files              User       PP %  Blks Cp Rnk
------- ----- --------- --- ------------------ ---------- ---- ----- -- ---
prtdumm null  READY
qstatus: (WARNING): 0781-350 Job 351 not found -- perhaps it's done?
Further information:
--------------------
The following defines terms commonly used when discussing UNIX printing.
* Print Job
A print job is a unit of work to be run on a printer. A print job can
consist of
printing one or more files depending on how the print job is requested.
The
system assigns a unique job number to each job it runs.
* Queue
The queue is where you direct a print job. It is a stanza in the
/etc/qconfig
file whose name is the name of the queue and points to the associated
queue device.
* Queue Device
The queue device is the stanza in the /etc/qconfig file that normally
follows
the local queue stanza. It specifies the /dev file (printer device) that
should
be used.
* qdaemon
The qdaemon is a process that runs in the background and controls the
queues. It is generally started during IPL.
* Print Spooler
The spooler is not specifically a print job spooler. Instead, it
provides a
generic spooling function that can be used for queuing various types of
jobs including print jobs queued to a printer.
The spooler does not normally know what type of job it is queuing.
The main spooler command is the enq command. Although you can invoke
this command directly to queue a print job, three front-end commands are
defined for submitting a print job: The lp, lpr, and qprt commands. A
print
request issued by one of these commands is first passed to the enq
command, which then places the information about the file in the queue
for
the qdaemon to process.
* Real Printer
A real printer is the printer hardware attached to a serial or parallel
port at
a unique hardware device address. The printer device driver in the
kernel
communicates with the printer hardware and provides an interface
between the printer hardware and a virtual printer, but it is not aware
of
the concept of virtual printers. Real printers sometimes run out of
paper.
* Printer Backend
The printer backend is a collection of programs called by the spooler's
qdaemon command to manage a print job that is queued for printing. The
printer backend performs the following functions:
AIX supports both the AIX print subsystem and the System V / BSD-like
print subsystem.
Individual device files can be listed with the ls command, for example
# ls -al /dev/lp0
crw-rw-rw- 1 root system 25,0 Oct 19 13:52 /dev/lp0
The file that holds the configuration for the printers that exist on the
system is
the /etc/qconfig file. It is the most important file in the spooler
domain for
these reasons:
..
..
lpforu:
device = lp0
lp0:
file = /dev/lp0
header = never
trailer = never
access = both
backend = /usr/lib/lpd/piobe
Quick checks:
-------------
- The lpq command reports the status of the specified job or all jobs
associated with the specified UserName and JobNumber variables.
The lpq command syntax is as follows:
lpq [ + [ Number ] ] [ -l | -W ] [-P Printer ] [JobNumber] [UserName]
The following is an example of the lpq command without any flags.
# lpq
Queue Dev Status Job Files User PP% Blks Cp Rnk
------ ---- ------- --- ---------------- ------------ --- ---- -- ---
lpforu lp0 READY
- The lpr command uses a spooling daemon to print the named File
parameter when facilities become available.
The lpr command syntax is as follows:
lpr [ -f ] [ -g ] [ -h ] [ -j ] [ -l ] [ -m ] [ -n ] [ -p ] [ -r ] [ -s
] [ -P Printer ] [ -# NumberCopies ] [ -C Class ] [ -J Job ] [ -T Title
] [ -i [ NumberColumns ] ] [ -w Width ] [ File ... ]
The following is an example of using the lpr command to print the file
/etc/passwd.
# lpr /etc/passwd
# lpstat
Queue Dev Status Job Files User PP % Blks Cp Rnk
------ ---- -------- --- ---------------- -------- ---- -- ---- -- ---
lpforu lp0 RUNNING 3 /etc/passwd root 1 100 1 1 1
Example:
--------
In the following scenario, you have a job printing on a print queue, but
you
need to stop the queue so that you can put more paper in the printer.
# lpstat -vlpforu
Disable the print queue using the enq command as shown in the following
example.
# enq -D -P 'lpforu:lp0'
Checking the printer queue using the qchk command as shown in the
following example.
# qchk -Plpforu
Queue Dev Status Job Files User PP % Blks Cp Rnk
------ ---- -------- --- ---------------- -------- ---- -- ---- -- ---
lpforu lp0 DOWN 3 /etc/passwd root 1 100 1 1 1
You have replaced the paper, and you now want to restart the print queue
so
that it will finish your print job. Here is how you would do this.
# lpstat -vlpforu
Queue Dev Status Job Files User PP % Blks Cp Rnk
------ ---- -------- --- ---------------- -------- ---- -- ---- -- ---
lpforu lp0 DOWN 3 /etc/passwd root 1 100 1 1 1
# enq -U -P 'lpforu:lp0'
# qchk -P lpforu
Queue Dev Status Job Files User PP % Blks Cp Rnk
------ ---- -------- --- ---------------- -------- ---- -- ---- -- ---
lpforu lp0 RUNNING 3 /etc/passwd root 1 100 1 1
# enq -q -P<QUEUENAME>
for example
# enq -q -PAMSPM00028
# smitty lsallq
# lsallq -c
- Deleting a queue:
# smitty rmpq
or
# rmvirprt
# rmquedev
# rmque
# smitty qstop
# smitty qstart
Or use the "qadm" command to bring printers, queues, and the spooling
system up or down.
Example:
To bring down the PCL-mv200 queue, enter one of the following commands:
# qadm -D PCL-mv200
# disable PCL-mv200
System5: lp
BSD : lpr
AIX : qprt
1. To submit a print job, use either lp, lpr, or qprt. All jobs will go
to the system default queue unless the PRINTER or LPDEST variables are
set. You can also specify on the command line which queue to use:
use -d with lp, or use -P with qprt and lpr.
All the print commands lp, lpr, and qprt actually call the "enq" command,
which places the print request in a queue.
To print multiple copies, use the "qprt -N #" or "lp -n #" command.
For lpr, use just a dash followed by the number of copies, like "lpr -#3"
for three copies.
- Checking the status of a printjob:
Examples:
# smitty qstatus
# smitty qchk
System5: lpstat
BSD : lpq
AIX : qchk
- Cancelling a printjob:
System5: cancel
BSD : lprm
AIX : qcan
For example, to cancel Job Number 127 on whatever queue the job is on,
run
# qcan -x 127
# cancel 127
To cancel all jobs queued on printer lp0, run
# qcan -X -Plp0
# cancel lp0
- Switching between the System V and AIX print subsystems:
# switch.prt -s System5
# switch.prt -s AIX
73. Apache:
===========
The default configuration file installed with the Apache HTTP Server
works without alteration
for most situations. This chapter, however, outlines how to customize
the Apache HTTP Server
configuration file (/etc/httpd/conf/httpd.conf) for situations where the
default configuration does not
suit your needs.
.The mod_dav package has been incorporated into the httpd package.
.The mod_put and mod_roaming packages have been removed, since their
functionality is a subset of that
provided by mod_dav.
.The version number for the mod_ssl package is now synchronized with the
httpd package. This means that the
mod_ssl package for Apache HTTP Server 2.0 has a lower version number
than mod_ssl package for
Apache HTTP Server 1.3.
Warning
It is vital that this line be inserted when migrating an existing
configuration.
. The dbmmanage command has been replaced. - The dbmmanage command has
been replaced by htdbm.
. The logrotate configuration file has been renamed. - The logrotate
configuration file has been renamed from /etc/logrotate.d/apache to
/etc/logrotate.d/httpd.
- After Installation
After you have installed the httpd package, the Apache HTTP Server's
documentation is available by
installing the httpd-manual package and pointing a Web browser to
http://localhost/manual/ or you can
browse the Apache documentation available on the Web at
http://httpd.apache.org/docs-2.0/.
The Apache HTTP Server's documentation contains a full list and complete
descriptions of all
configuration options. For your convenience, this chapter provides short
descriptions of the configuration
directives used by Apache HTTP Server 2.0.
The version of the Apache HTTP Server included with Red Hat Linux
includes the ability to set up secure Web servers
using the strong SSL encryption provided by the mod_ssl and openssl
packages. As you look through the
configuration files, be aware that it includes both a non-secure and a
secure Web server.
The secure Web server runs as a virtual host, which is configured in the
/etc/httpd/conf.d/ssl.conf file.
To start your server, type the command:
# /sbin/service httpd start
Note
If you are running the Apache HTTP Server as a secure server, you will
be prompted to type your password.
To stop your server, type the command:
# /sbin/service httpd stop
Note
If you are running the Apache HTTP Server as a secure server, you will
not need to type your password when
using the reload option as the password will remain cached across
reloads.
By default, the httpd process will not start automatically when your
machine boots. You will need to configure
the httpd service to start up at boot time using an initscript utility,
such as /sbin/chkconfig, /sbin/ntsysv,
or the Services Configuration Tool program.
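For example, with chkconfig:
# /sbin/chkconfig httpd on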
Note
If you are running the Apache HTTP Server as a secure server, you will
be prompted for the secure server's
password after the machine boots, unless you generated a specific type
of server key file.
If you need to configure the Apache HTTP Server, edit httpd.conf and
then either reload, restart,
or stop and start the httpd process. How to reload, stop and start the
Apache HTTP Server is covered in the
Section called Starting and Stopping httpd.
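For example, after editing httpd.conf:
# /sbin/service httpd reload
# /sbin/service httpd restart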
- Default Modules
The Apache HTTP Server is distributed with a number of modules. By
default the following modules are installed
and enabled with the httpd package on Red Hat Linux:
mod_access
mod_auth
mod_auth_anon
mod_auth_dbm
mod_auth_digest
mod_include
mod_log_config
mod_env
mod_mime_magic
mod_cern_meta
mod_expires
mod_headers
mod_usertrack
mod_unique_id
mod_setenvif
mod_mime
mod_dav
mod_status
mod_autoindex
mod_asis
mod_info
mod_cgi
mod_dav_fs
mod_vhost_alias
mod_negotiation
mod_dir
mod_imap
mod_actions
mod_speling
mod_userdir
mod_alias
mod_rewrite
mod_auth_mysql
mod_auth_pgsql
mod_perl
mod_python
mod_ssl
php
squirrelmail
Note
You cannot use name-based virtual hosts with your Red Hat Linux Advanced
Server, because the SSL handshake
occurs before the HTTP request which identifies the appropriate name-
based virtual host. If you want to use
name-based virtual hosts, they will only work with your non-secure Web
server.
The configuration directives for your secure server are contained within
virtual host tags in the
/etc/httpd/conf.d/ssl.conf file. If you need to change anything about
the configuration of your secure server,
you will need to change the configuration directives inside the virtual
host tags.
By default, both the secure and the non-secure Web servers share the
same DocumentRoot. To change the DocumentRoot
so that it is no longer shared by both the secure server and the non-
secure server, change one of the DocumentRoot
directives. The DocumentRoot either inside or outside of the virtual
host tags in httpd.conf defines the
DocumentRoot for the non-secure Web server. The DocumentRoot within the
virtual host tags in
conf.d/ssl.conf define the document root for the secure server.
The secure Apache HTTP Server listens on port 443, while your non-secure
Web server listens on port 80.
To stop the non-secure Web server from accepting connections, comment
out any line in httpd.conf which reads Listen 80.
#<VirtualHost *>
# ServerAdmin [email protected]
# DocumentRoot /www/docs/dummy-host.example.com
# ServerName dummy-host.example.com
# ErrorLog logs/dummy-host.example.com-error_log
# CustomLog logs/dummy-host.example.com-access_log common
#</VirtualHost>
Uncomment all of the lines, and add the correct information for the
virtual host.
In the first line, change * to your server's IP address. Change the
ServerName to a valid DNS name to use
for the virtual host.
You will also need to uncomment one of the NameVirtualHost lines below:
NameVirtualHost *
Next, change the IP address to the IP address (and port, if necessary)
of the virtual host. When finished, it will look similar to the following
example:
NameVirtualHost 192.168.1.1:80
Then add the port number to the first line of the virtual host
configuration as in the following example:
<VirtualHost ip_address_of_your_server:12331>
This line would create a virtual host that listens on port 12331.
You must restart httpd to start a new virtual host. See the Section
called Starting and Stopping httpd for
instructions on how to start and stop httpd.
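Putting these steps together, a completed name-based virtual host might
look like the following (the address, names, and paths are illustrative):
NameVirtualHost 192.168.1.1:80
<VirtualHost 192.168.1.1:80>
    ServerAdmin [email protected]
    DocumentRoot /www/docs/www.example.com
    ServerName www.example.com
    ErrorLog logs/www.example.com-error_log
    CustomLog logs/www.example.com-access_log common
</VirtualHost>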
- Using Apache
To display static web pages with Apache, simply place your files in the
correct directory. In SUSE LINUX,
the correct directory is /srv/www/htdocs. A few small example pages may
already be installed there.
Use these pages to check if Apache was installed correctly and is
currently active. Subsequently, you can
simply overwrite or uninstall these pages. Custom CGI scripts are
installed in /srv/www/cgi-bin.
During operation, Apache writes log messages to the file
/var/log/httpd/access_log or /var/log/apache2/access_log.
These messages show which resources were requested and delivered at what
time and with which method
(GET, POST, etc.). Error messages are logged to /var/log/apache2.
- Active Contents
Apache provides several possibilities for the delivery of active
contents. Active contents are HTML pages
that are generated on the basis of variable input data from the client,
such as search engines that respond
to the input of one or several search strings (possibly interlinked with
logical operators like AND or OR)
by returning a list of pages containing these search strings.
Module
Apache offers interfaces for executing any modules within the scope of
request processing. Apache gives these
programs access to important information, such as the request or the
HTTP headers. Programs can take part
in the generation of active contents as well as in other functions (such
as authentication). The programming
of such modules requires some expertise. The advantages of this approach
are high performance and possibilities
that exceed those of SSI and CGI.
While CGI scripts are executed directly by Apache (under the user ID of
their owner), modules are controlled
by a persistent interpreter that is embedded in Apache. In this way,
separate processes do not need to be
started and terminated for every request (this would result in a
considerable overhead for the process management,
memory management, etc.). Rather, the script is handled by the
interpreter running under the ID of the web server.
However, this approach has a catch. Compared to modules, CGI scripts are
relatively tolerant of careless
programming. With CGI scripts, errors, such as a failure to release
resources and memory, do not have a
lasting effect, because the programs are terminated after the request
has been processed. This results in the
clearance of memory that was not released by the program due to a
programming error. With modules, the
effects of programming errors accumulate, as the interpreter is
persistent. If the server is not restarted
and the interpreter runs for several months, the failure to release
resources, such as database connections,
can be quite disturbing.
The main advantage of CGI is that this technology is quite simple. The
program merely must exist in a
specific directory to be executed by the web server just like a command-
line program. The server sends
the program output on the standard output channel (stdout) to the
client.
> With POST, the server passes the parameters to the program
on the standard input channel (stdin). The program would receive its
input in the same way when
started from a console.
> With GET, the server uses the environment variable QUERY_STRING to
pass the parameters to the program.
An environment variable is a variable made available globally by the
system (such as the variable PATH,
which contains a list of paths the system searches for executable
commands when the user enters a command).
Languages for CGI
Theoretically, CGI programs can be written in any programming language.
Usually, scripting languages
(interpreted languages), such as Perl or PHP, are used for this purpose.
If speed is critical,
C or C++ may be more suitable.
> First, there are modules that can be integrated in Apache for the
purpose of handling specific functions,
such as modules for embedding programming languages. These modules are
introduced below.
mod_perl
Perl is a popular, proven scripting language. There are numerous modules
and libraries for Perl, including
a library for expanding the Apache configuration file. The home page for
Perl is http://www.perl.com/.
A range of libraries for Perl is available in the Comprehensive Perl
Archive Network (CPAN) at http://www.cpan.org/.
Setting up mod_perl
To set up mod_perl in SUSE LINUX, simply install the respective package
(see Section 15.6. "Installation").
Following the installation, the Apache configuration file will include
the necessary entries
(see /etc/apache2/mod_perl-startup.pl). Information about mod_perl is
available at http://perl.apache.org/.
mod_perl versus CGI
In the simplest case, run a previous CGI script as a mod_perl script by
requesting it with a different URL.
The configuration file contains aliases that point to the same directory
and execute any scripts it contains
either via CGI or via mod_perl. All these entries already exist in the
configuration file. The alias entry for
CGI is as follows:
<IfModule mod_perl.c>
# Provide two aliases to the same cgi-bin directory,
# to see the effects of the 2 different mod_perl modes.
# for Apache::Registry Mode
ScriptAlias /perl/ "/srv/www/cgi-bin/"
# for Apache::Perlrun Mode
ScriptAlias /cgi-perl/ "/srv/www/cgi-bin/"
</IfModule>
The following entries are also needed for mod_perl. These entries
already exist in the configuration file.
#
# If mod_perl is activated, load configuration information
#
<IfModule mod_perl.c>
Perlrequire /usr/include/apache/modules/perl/startup.perl
PerlModule Apache::Registry
#
# set Apache::Registry Mode for /perl Alias
#
<Location /perl>
SetHandler perl-script
PerlHandler Apache::Registry
Options ExecCGI
PerlSendHeader On
</Location>
#
# set Apache::PerlRun Mode for /cgi-perl Alias
#
<Location /cgi-perl>
SetHandler perl-script
PerlHandler Apache::PerlRun
Options ExecCGI
PerlSendHeader On
</Location>
</IfModule>
Apache::PerlRun
The scripts are recompiled for every request. Variables and subroutines
disappear from the namespace between
the requests (the namespace is the entirety of all variable names and
routine names that are defined at a
given time during the existence of a script). Therefore, Apache::PerlRun
does not necessitate painstaking
programming, as all variables are reinitialized when the script is
started and no values are kept from previous
requests. For this reason, Apache::PerlRun is slower than
Apache::Registry but still a lot faster than CGI
(in spite of some similarities to CGI), because no separate process is
started for the interpreter.
mod_php4
PHP is a programming language that was especially developed for use with
web servers. In contrast to other languages
whose commands are stored in separate files (scripts), the PHP commands
are embedded in an HTML page
(similar to SSI). The PHP interpreter processes the PHP commands and
embeds the processing result in the HTML page.
The home page for PHP is http://www.php.net/. For PHP to work, install
mod_php4-core and, in addition,
apache2-mod_php4 for Apache 2.
mod_python
Python is an object-oriented programming language with a very clear and
legible syntax. An unusual but convenient
feature is that the program structure depends on the indentation. Blocks
are not defined with braces (as in C and
Perl) or other demarcation elements (such as begin and end), but by
their level of indentation. The package to
install is apache2-mod_python.
mod_ruby
Ruby is a relatively new, object-oriented high-level programming
language that resembles certain aspects of Perl
and Python and is ideal for scripts. Like Python, it has a clean,
transparent syntax. On the other hand, Ruby has adopted abbreviations,
such as $. for the number of the last line read in the input file - a
feature that
is welcomed by some programmers and abhorred by others. The basic
concept of Ruby closely resembles Smalltalk.
=============================
74. Distributed shell (dsh):
=============================
Note 1:
-------
Note 2:
-------
OPTIONS
--quiet | -q
Makes output quieter.
--machine | -m [machinename[,machinename]*]
Adds machinename to the list of machines on which the command is
executed. The syntax of machinename allows username@machinename, where
the remote shell is invoked as username.
From version 0.21.4, it is possible to specify in the format of
"username@machinename,username@machinename,
username@machinename" so that multiple hosts can be specified with
comma-delimited values.
--all | -a
Add all machines found in /etc/dsh/machines.list to the list of machines
on which the specified command is executed.
--help | -h
Output help message and exits.
--wait-shell | -w
Executes on each machine and waits for the execution to finish before
moving on to the next machine.
--concurrent-shell | -c
Executes shell concurrently.
--show-machine-names | -M
Prepends machine names on the standard output. Useful in conjunction
with the --concurrent-shell option so that the output is slightly more
parsable.
--duplicate-input | -i
Duplicates the input to the dsh process to the individual processes
that are remotely invoked. Requires --concurrent-shell to be set.
Due to limitations in the current implementation, it is only useful for
running a shell. Terminate the shell session with Ctrl-D.
--version | -V
Outputs version information and exits.
--num-topology | -N
Changes the current topology from 1. 1 is the default behavior of
spawning the shell from one node to every node.
Changing the number to a value greater than 2 would result in dsh being
spawned on other machines as well.
EXAMPLES
dsh -a w
Shows list of users logged in on all workstations.
dsh -r ssh -a -- w
Shows the list of users logged in on all workstations, using the ssh
command to connect.
(Note that when using ssh, ssh-agent is handy.)
FILES
/etc/dsh/machines.list | $(HOME)/.dsh/machines.list
List of machine names used when the -a command-line option is
specified.
/etc/dsh/dsh.conf | $(HOME)/.dsh/dsh.conf
Configuration file containing the day-to-day defaults.
Note 3:
-------
PSSP's distributed shell commands "dsh" and "dshbak" are now standard in
AIX 5.2. They run commands in parallel
on multiple hosts, and format the output. The dsh commands greatly
simplify managing server farms.
The set of nodes to which commands are sent can be set on the command
line or by the contents of a file named
by the DSH_LIST environment variable.
Here are a couple of simple examples of how these commands can be used.
(Assume DSH_LIST has been set to the name of the
file containing the list of servers. In this case, just three servers:
dodgers, surveyor and pioneer)
# dsh date
dodgers: Fri Jun 4 14:46:06 PDT 2004
surveyor: Fri Jun 4 14:16:18 PDT 2004
pioneer: Fri Jun 4 14:32:28 PDT 2004
You can also use "dshbak" to group common output from the dsh command.
This makes it easier to identify differences when you have a lot of
servers. For example, we can consolidate the output of an instfix
command run via dsh, as sketched below.
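The instfix invocation itself is not included in this excerpt; the
pattern is the same for any dsh command. A minimal sketch, assuming
dshbak's -c option (collapse identical output) from PSSP:

# dsh oslevel | dshbak -c

With -c, identical output from multiple servers is printed only once,
under a single header listing all hosts that produced it, so any
differing output stands out immediately.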
========================================
75. General Parallel File System (GPFS):
========================================
GPFS often operates within the context of an HACMP cluster, but you can
build pure GPFS "clusters" as well.
Suppose we have two nodes named node2 and node3. Our goal is to create a
single GPFS filesystem, named "/my_gpfs", consisting of 2 disks used for
data and metadata. These disks are housed by two DS4300 storage
subsystems. A tiebreaker disk, in a separate DS4100, will be used to
maintain node quorum during single-node failures. Additionally, a
"filesystem descriptor" disk for /my_gpfs is located at the same site.
When node quorum is not met, GPFS stops its cluster-wide services and
access to all filesystems within the cluster is no longer possible. If
more than half of the disks serving a GPFS file system fail, disk
quorum - that is, the number of available "filesystem descriptors" for
that particular file system - is no longer met and the filesystem will
be unmounted.
-- Preparations:
-- -------------
example:
root@starboss:/var/mmfs/etc#cat mmfs.cfg
#
# WARNING: This is a machine generated file. Do not edit!
# Use the mmchconfig command to change configuration parameters.
#
clusterName cluster_name.starboss
clusterId 729741152660153204
clusterType lc
autoload no
useDiskLease yes
maxFeatureLevelAllowed 912
tiebreakerDisks gpfs3nsd;gpfs4nsd
[zd110l13]
takeOverSdrServ yes
-- Creating the GPFS cluster:
-- ---------------------------
The first step is to create a GPFS cluster named TbrCl. The node
descriptor file contains:
# Node2 can be a file system manager and is relevant for GPFS quorum
node2:manager-quorum
# Node3 can be a file system manager and is relevant for GPFS quorum
node3:manager-quorum
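The mmcrcluster invocation itself is not shown in the original; going
by the mmcrcluster synopsis later in this section, it would be
something like the following (the node file path /tmp/gpfs.nodes is
only an illustration):

# mmcrcluster -n /tmp/gpfs.nodes -p node2 -s node3 -C TbrCl -A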
Each node can fulfill the function of a file system manager and is
relevant for maintaining node quorum.
A GPFS cluster designates a primary cluster manager (node2) and appoints
a backup (node3) in case the
primary fails. Cluster services will be started automatically during
node boot (-A). After successfully
creating the cluster, you can verify your setup:
# mmlscluster
# mmstartup -a
With GPFS you can administer the whole cluster from any cluster node.
After starting GPFS services you
should examine the state of the cluster:
# mmgetstate -aL
 Node number  Node name  Quorum  Nodes up  Total nodes  GPFS state
 -------------------------------------------------------------
      1        node2        2        2          2        active
      2        node3        2        2          2        active
At this point, the cluster software is running, but you haven't done
anything yet on the filesystems.
Before starting with the configuration of GPFS disks, you have to make
sure that each cluster node has
access to each SAN attached disk when running in a shared disk
environment. With AIX 5L, you can use
the lspv command to verify your disks (hdisk) are properly configured:
# lspv
If you look for LUN related information (e.g. volume names) issue the
following command against a
dedicated hdisk:
..
.... (in the output, you will also see SAN stuff)
..
It's very important to keep a well-balanced disk configuration when
using GPFS, because this ensures optimal performance by distributing
I/O requests evenly among storage subsystems and attached data disks.
Keep in mind that all GPFS disks belonging to a particular file system
should be of the same size.
GPFS uses a mechanism called Network Shared Disk (NSD) to provide file
system access to cluster nodes which do not have direct physical access
to the file system disks. A diskless node accesses an NSD via the
cluster network, and I/O operations are handled as if they ran against
a directly attached disk from an operating system's perspective. A
special device driver handles data shipping over the cluster network.
NSDs can also be used in a purely SAN-based GPFS configuration where
each node can directly access any disk. In case a node loses direct
disk access, it automatically switches to NSD mode, sending I/O
requests via the network to other directly disk-attached nodes. This
mechanism increases file system availability, and should normally be
used.
When using NSD, a primary and a backup server are assigned to each NSD.
In case a node loses its direct disk attachment, it contacts the
primary NSD server, or the backup server in case the primary is not
available.
# cat /var/mmfs/conf/diskfile
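(The file contents are not reproduced in the original. A plausible
sketch for the two data/metadata disks, using the
DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup descriptor
format shown with the mmcrnsd command later in this section; the
tiebreaker and descriptor disks would get entries of their own:)

hdisk2:node2:node3:dataAndMetadata:1
hdisk3:node3:node2:dataAndMetadata:2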
Here, our cluster uses 4 disks with GPFS. Filesystem "/my_gpfs" uses
hdisk2 and hdisk3 for data and metadata.
Therefore these disks will use the NSD mechanism to provide file system
data access in case direct disk access
fails on one of the cluster nodes.
Node2 is the primary NSD server for hdisk2, with node3 being its
backup. For hdisk3 it is the other way around.
Each of these disks belongs to a different "failure group" (1=site A,
2=site B) which basically enables
replication of file system data and metadata between the two sites.
# mmlsnsd -aL
During NSD creation, the diskfile is rewritten. Each hdisk stanza is
commented out, and an equivalent NSD stanza is inserted.
After issuing the mmcrnsd command, we have made the disks available and
ready to create GPFS filesystems.
-- Activating tiebreaker mode
-- --------------------------
When using a two-node cluster with tiebreaker disks, the cluster
configuration must be switched to tiebreaker mode. Of course, you need
to know which disks are being used as tiebreaker disks. Up to 3 disks
are allowed. In our example, gpfs4nsd (that is, hdisk5) is the only
tiebreaker disk.
With the following command sequence, tiebreaker mode is turned on:
# mmshutdown -a
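(The actual switch to tiebreaker mode is missing from the original
listing; presumably an mmchconfig call naming the tiebreaker disk(s),
run while GPFS is down, for example:)

# mmchconfig tiebreakerDisks="gpfs4nsd"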
# mmstartup -a
# mmgetstate -aL
 Node number  Node name  Quorum  Nodes up  Total nodes  GPFS state
 ---------------------------------------------------------------
      1        node2        1*       2          2        active
      2        node3        1*       2          2        active
# mmlsconfig
Filesystem /my_gpfs is now available on both nodes, with all three file
system descriptors well balanced across failure groups and disks.
# mmlsdisk my_gpfs
Notes:
------
Note: pv listing:
In a GPFS cluster, lspv might show output like the following example:
root@zd110l13:/root# lspv
hdisk0          00cb61fe0b562af0    rootvg          active
hdisk1          00cb61fe0fb40619    rootvg          active
hdisk2          00cb61fe33429fa6    vge0corddap01   active
hdisk3          00cb61fe3342a096    vge0corddap01   active
hdisk4          00cb61fe3342a175    gpfs3nsd
hdisk5          00cb61fe33536125    gpfs4nsd
..
The corresponding stanza for the GPFS filesystem in /etc/filesystems
looks like:
/data/documentum/dmadmin:
dev = /dev/gpfsfs0
vfs = mmfs
nodename = -
mount = mmfs
type = mmfs
account = false
options = rw,mtime,atime,dev=gpfsfs0
..
..
Name
mmcrcluster - Creates a GPFS cluster from a set of nodes.
Synopsis
mmcrcluster -n NodeFile -p PrimaryServer [-s SecondaryServer] [-r
RemoteShellCommand]
[-R RemoteFileCopyCommand] [-C ClusterName] [-U
DomainName] [-A] [-c ConfigFile]
Description
Use the mmcrcluster command to create a GPFS cluster.
Upon successful completion of the mmcrcluster command, the
/var/mmfs/gen/mmsdrfs and the /var/mmfs/gen/mmfsNodeData
files are created on each of the nodes in the cluster. Do not delete
these files under any circumstances.
For further information, see the General Parallel File System: Concepts,
Planning, and Installation Guide.
You must follow these rules when creating your GPFS cluster:
While a node may mount file systems from multiple clusters, the node
itself may only be added to a single cluster
using the mmcrcluster or mmaddnode command.
The nodes must be available for the command to be successful. If any of
the nodes listed are not available
when the command is issued, a message listing those nodes is displayed.
You must correct the problem on each node
and issue the mmaddnode command to add those nodes.
You must designate at least one node as a quorum node. You are strongly
advised to designate the cluster configuration servers as quorum nodes.
How many quorum nodes you will have altogether depends on whether you
intend to use the node quorum with tiebreaker algorithm or the regular
node-based quorum algorithm.
For more details, see the General Parallel File System: Concepts,
Planning, and Installation Guide and
search for designating quorum nodes.
Parameters
-A
Specifies that GPFS daemons are to be automatically started when nodes
come up. The default is not to start
daemons automatically.
-C ClusterName
Specifies a name for the cluster. If the user-provided name contains
dots, it is assumed to be a fully
qualified domain name. Otherwise, to make the cluster name unique, the
domain of the primary configuration
server will be appended to the user-provided name.
If the -C flag is omitted, the cluster name defaults to the name of the
primary GPFS cluster configuration server.
-c ConfigFile
Specifies a file containing GPFS configuration parameters with values
different than the documented defaults.
A sample file can be found in /usr/lpp/mmfs/samples/mmfs.cfg.sample. See
the mmchconfig command for a detailed
description of the different configuration parameters.
The -c ConfigFile parameter should only be used by experienced
administrators. Use this file to set up only those parameters that
appear in the mmfs.cfg.sample file. Changes to any other values may be
ignored by GPFS. When in doubt, use the mmchconfig command instead.
-n NodeFile
NodeFile consists of a list of node descriptors, one per line, to be
included in the GPFS cluster.
Node descriptors are defined as:
NodeName:NodeDesignations
where:
Format Example
Short hostname k145n01
Long hostname k145n01.kgn.ibm.com
IP address 9.119.19.102
You must provide a descriptor for each node to be added to the GPFS
cluster.
-p PrimaryServer
Specifies the primary GPFS cluster configuration server node used to
store the GPFS configuration data.
This node must be a member of the GPFS cluster.
-R RemoteFileCopy
Specifies the fully-qualified path name for the remote file copy program
to be used by GPFS. The default value is
/usr/bin/rcp.
The remote copy command must adhere to the same syntax format as the rcp
command, but may implement an
alternate authentication mechanism.
-r RemoteShellCommand
Specifies the fully-qualified path name for the remote shell program to
be used by GPFS. The default value is
/usr/bin/rsh.
The remote shell command must adhere to the same syntax format as the
rsh command, but may implement an
alternate authentication mechanism.
-s SecondaryServer
Specifies the secondary GPFS cluster configuration server node used to
store the GPFS cluster data.
This node must be a member of the GPFS cluster.
It is suggested that you specify a secondary GPFS cluster configuration
server to prevent the loss of
configuration data in the event your primary GPFS cluster configuration
server goes down. When the GPFS daemon
starts up, at least one of the two GPFS cluster configuration servers
must be accessible.
If your primary GPFS cluster configuration server fails and you have not
designated a secondary server,
the GPFS cluster configuration files are inaccessible, and any GPFS
administrative commands that are issued fail.
File system mounts or daemon startups also fail if no GPFS cluster
configuration server is available.
-U DomainName
Specifies the UID domain name for the cluster.
A detailed description of the GPFS user ID remapping convention is
contained in UID Mapping for GPFS In a
Multi-Cluster Environment at
www.ibm.com/servers/eserver/clusters/library/wp_aix_lit.html.
Exit status
0
Successful completion.
1
A failure has occurred.
Security
You must have root authority to run the mmcrcluster command.
You may issue the mmcrcluster command from any node in the GPFS cluster.
A properly configured .rhosts file must exist in the root user's home
directory on each node in the GPFS cluster.
If you have designated the use of a different remote communication
program on either the mmcrcluster or the
mmchcluster command, you must ensure:
Example 1:
----------
To create a GPFS cluster made of all of the nodes listed in the file
/u/admin/nodelist, using node k164n05 as the primary server and node
k164n04 as the secondary server, where /u/admin/nodelist contains:
k164n04.kgn.ibm.com:quorum
k164n05.kgn.ibm.com:quorum
k164n06.kgn.ibm.com
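issue:

# mmcrcluster -n /u/admin/nodelist -p k164n05 -s k164n04

(the command line is not in the original excerpt, but follows directly
from the synopsis above). To confirm the result: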
# mmlscluster
--------------------------------------------------------------------------
   1   k164n04   198.117.68.68   k164n04.kgn.ibm.com   quorum node
   2   k164n05   198.117.68.69   k164n05.kgn.ibm.com   quorum node
   3   k164n06   198.117.68.70   k164n06.kgn.ibm.com
Example 2:
----------
# mmstartup -a
# mmlscluster
75.2.2 Other GPFS commands:
---------------------------
# mmlscluster
--------------------------------------------------------------------------
   1   k164n04   198.117.68.68   k164n04.kgn.ibm.com   quorum node
   2   k164n05   198.117.68.69   k164n05.kgn.ibm.com   quorum node
   3   k164n06   198.117.68.70   k164n06.kgn.ibm.com
# mmgetstate -aL
 Node number  Node name  Quorum  Nodes up  Total nodes  GPFS state
 -------------------------------------------------------------
      1        node2        2        2          2        active
      2        node3        2        2          2        active
# mmlsconfig
Configuration data for cluster TbrCl.node2:
-------------------------------------------
ClusterName TbrCl.node2
ClusterId 8262362723390
ClusterType lc
Multinode yes
autoload yes
useDiskLease yes
MaxFeatureLevelAllowed 809
tiebreakerDisks gpfs4nsd
root@zd110l13:/root#mmlsconfig
Configuration data for cluster cluster_name.zd110l13:
-----------------------------------------------------
clusterName cluster_name.zd110l13
clusterId 729741152660153204
clusterType lc
autoload no
useDiskLease yes
maxFeatureLevelAllowed 912
tiebreakerDisks gpfs3nsd;gpfs4nsd
[zd110l13]
takeOverSdrServ yes
You can change the cluster configuration servers with the mmchcluster
command; for example, to make k145n03 the new primary configuration
server:
# mmchcluster -p k145n03
Use the mmchfs command to change the attributes of a GPFS file system.
With the mmchfs command you can, for example, change the number of
inodes of a GPFS filesystem. First inspect the current inode usage with
mmdf:
# mmdf /dev/gpfsfs0
disk                disk size  failure holds    holds         free KB in          free KB in
name                    in KB    group metadata data         full blocks           fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system
gpfs3nsd              7340032        1 yes      yes        5867008 ( 80%)      434992 (  6%)
gpfs1nsd            314572800        1 yes      yes      268067328 ( 85%)    17170032 (  5%)
gpfs2nsd            115343360        1 no       no               0 (  0%)           0 (  0%)
                -------------                        -------------------- -------------------
(pool total)        437256192                           273934336 ( 63%)     17605024 (  4%)
                =============                        ==================== ===================
(total)             437256192                           273934336 ( 63%)     17605024 (  4%)

Inode Information
-----------------
Number of used inodes:       177011
Number of free inodes:       679053
Number of allocated inodes:  856064
Maximum number of inodes:    856064
To raise the maximum number of inodes (and the number of preallocated
inodes), use the -F option, then verify with mmdf:
# mmchfs gpfsfs0 -F 2457612:2457612
# mmdf /dev/gpfsfs0
mmchfs Device [-A {yes | no | automount}] [-E {yes | no}] [-D {nfs4 | posix}]
       [-F MaxNumInodes[:NumInodesToPreallocate]] [-k {posix | nfs4 | all}]
       [-K {no | whenpossible | always}] [-m DefaultMetadataReplicas]
       [-o MountOptions] [-Q {yes | no}]
       [-r DefaultDataReplicas] [-S {yes | no}] [-T Mountpoint]
       [-V] [-z {yes | no}]
# mmchfs fs0 -m 2 -r 2
# mmlsfs fs0 -m -r
Example:
To add the nodes "k164n06" and "k164n07" as quorum nodes, designating
"k164n06" to be available as a manager node, use the following command:
# mmaddnode -N k164n06:quorum-manager,k164n07:quorum
Use the mmmount and mmumount commands to mount or unmount a GPFS
filesystem on one or more nodes in the cluster.
Examples:
# mmmount all -a
# mmmount fs2 -o ro
# mmumount fs1 -a
The mmcrnsd command is used to create cluster-wide names for NSDs used
by GPFS.
This is the first GPFS step in preparing a disk for use by a GPFS file
system. A disk descriptor file supplied
to this command is rewritten with the new NSD names and that rewritten
disk descriptor file can then be supplied
as input to the mmcrfs command.
To identify that the disk has been processed by the mmcrnsd command, a
unique NSD volume ID is written on
sector 2 of the disk. All of the NSD commands (mmcrnsd, mmlsnsd, and
mmdelnsd) use this unique
NSD volume ID to identify and process NSDs.
After the NSDs are created, the GPFS cluster data is updated and they
are available for use by GPFS.
Example: to create NSDs from a descriptor file nsdesc containing:
sdav1:k145n05:k145n06:dataOnly:4
sdav2:k145n04::dataAndMetadata:5:ABC
enter:
# mmcrnsd -F nsdesc
Creation of a file that contains all of the nodes in your GPFS cluster
prior to the installation of GPFS,
will be useful during the installation process. Using either host names
or IP addresses when constructing
the file will allow you to use this information when creating your
cluster through the mmcrcluster command.
For example, create the file /tmp/gpfs.allnodes, listing the nodes one
per line:
k145n01.dpd.ibm.com
k145n02.dpd.ibm.com
k145n03.dpd.ibm.com
k145n04.dpd.ibm.com
k145n05.dpd.ibm.com
k145n06.dpd.ibm.com
k145n07.dpd.ibm.com
k145n08.dpd.ibm.com
>>Installation procedures
Follow these steps to install the GPFS software using the installp
command:
# mkdir /tmp/gpfslpp
Copy the installation images from the CD-ROM to the new directory, by
issuing:
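(The copy command is not shown in the original; on AIX this is
typically done with bffcreate, where the device name /dev/cd0 is an
assumption:)

# bffcreate -qvX -t /tmp/gpfslpp -d /dev/cd0 all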
This will place the following GPFS images in the image directory:
gpfs.base
gpfs.docs
gpfs.msg.en_US
# cd /tmp/gpfslpp
Use the inutoc command to create a .toc file. The .toc file is used by
the installp command.
# inutoc .
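(The installp invocation itself is not included in this excerpt;
presumably something like:)

# installp -agXYd /tmp/gpfslpp gpfs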
In order to use the GPFS man pages you must install the gpfs.docs image.
The GPFS manual pages will be
located at /usr/share/man/.
Installation consideration:
The gpfs.docs image need not be installed on all nodes if man pages are
not desired or local file system space
on the node is minimal.
If you are installing on a shared file system network, to place the GPFS
images on each node in your network,
issue:
Otherwise, issue:
If you have made changes to any of these files, you will have to
reconcile the differences with the
new versions of the files in directory /var/mmfs/etc. This does not
apply to file /var/mmfs/etc/mmfs.cfg
which is automatically maintained by GPFS.
Use the lslpp command to verify the installation of GPFS file sets on
each node:
lslpp -l gpfs\*
--------------------------------------------------------------------------
Path: /usr/lib/objrepos
  gpfs.base       2.3.0.0  COMMITTED  GPFS File Manager
  gpfs.docs.data  2.3.0.0  COMMITTED  GPFS Server Manpages
  gpfs.msg.en_US  2.3.0.0  COMMITTED  GPFS Server Messages - U.S. English
Path: /etc/objrepos
  gpfs.base       2.3.0.0  COMMITTED  GPFS File Manager
Example:
root@zd110l14:/root#lslpp -L "*gpfs*"
  Fileset         Level     State  Type  Description (Uninstaller)
  --------------------------------------------------------------------------
  gpfs.base       3.1.0.11    C     F    GPFS File Manager
  gpfs.docs.data  3.1.0.4     C     F    GPFS Server Manpages and Documentation
  gpfs.msg.en_US  3.1.0.10    C     F    GPFS Server Messages - U.S. English
State codes:
A -- Applied.
B -- Broken.
C -- Committed.
E -- EFIX Locked.
O -- Obsolete. (partially migrated to newer version)
? -- Inconsistent State...Run lppchk -v.
Type codes:
F -- Installp Fileset
P -- Product
C -- Component
T -- Feature
R -- RPM Package
E -- Interim Fix
root@zd110l14:/root#lslpp -l gpfs\*
  Fileset         Level     State      Description
  --------------------------------------------------------------------------
Path: /usr/lib/objrepos
  gpfs.base       3.1.0.11  COMMITTED  GPFS File Manager
  gpfs.msg.en_US  3.1.0.10  COMMITTED  GPFS Server Messages - U.S. English
Path: /etc/objrepos
  gpfs.base       3.1.0.11  COMMITTED  GPFS File Manager
Path: /usr/share/lib/objrepos
  gpfs.docs.data  3.1.0.4   COMMITTED  GPFS Server Manpages and Documentation
Note 1:
-------
Example:
root@zd110l13:/var/adm/ras#cat mmfs.log.latest
Sun May 20 22:10:37 DFT 2007 runmmfs starting
Removing old /var/adm/ras/mmfs.log.* files:
Loading kernel extension from /usr/lpp/mmfs/bin . . .
GPFS: 6027-500 /usr/lpp/mmfs/bin/aix64/mmfs64 loaded and configured.
Sun May 20 22:10:39 2007: GPFS: 6027-310 mmfsd64 initializing. {Version:
3.1.0.11 Built: Apr 6 2007 09:38:56} ...
Sun May 20 22:10:44 2007: GPFS: 6027-1710 Connecting to 10.32.143.184
zd110l14.nl.eu.abnamro.com
Sun May 20 22:10:44 2007: GPFS: 6027-1711 Connected to 10.32.143.184
zd110l14.nl.eu.abnamro.com
Sun May 20 22:10:44 2007: GPFS: 6027-300 mmfsd ready
Sun May 20 22:10:44 DFT 2007: mmcommon mmfsup invoked
Sun May 20 22:10:44 DFT 2007: mounting /dev/gpfsfs0
Sun May 20 22:10:44 2007: Command: mount gpfsfs0 323816
Sun May 20 22:10:46 2007: Command: err 0: mount gpfsfs0 323816
Sun May 20 22:10:46 DFT 2007: finished mounting /dev/gpfsfs0
At GPFS startup, old mmfs.log.* files that have not been accessed
during the last ten days are deleted. If you want to save old log
files, copy them elsewhere.
The example above shows normal operational messages that appear in the
MMFS log file.
The GPFS log is a repository of error conditions that have been detected
on each node, as well as
operational events such as file system mounts. The GPFS log is the first
place to look when attempting
to debug abnormal events. Since GPFS is a cluster file system, events
that occur on one node may affect
system behavior on other nodes, and all GPFS logs may have relevant
data.
Note 2:
-------
errpt -a
The error log contains information about several classes of events or
errors. These classes are:
MMFS_ABNORMAL_SHUTDOWN
MMFS_DISKFAIL
MMFS_ENVIRON
MMFS_FSSTRUCT
MMFS_GENERIC
MMFS_LONGDISKIO
MMFS_PHOENIX
MMFS_QUOTA
MMFS_SYSTEM_UNMOUNT
MMFS_SYSTEM_WARNING
MMFS_DISKFAIL
The MMFS_DISKFAIL error log entry indicates that GPFS has detected the
failure of a disk and forced the disk
to the stopped state. Unable to access disks describes the actions taken
in response to this error.
This is ordinarily not a GPFS error but a failure in the disk subsystem
or the path to the disk subsystem.
See the book AIX 5L System Management Guide: Operating System and
Devices and search on logical volume. Follow the problem determination
and repair actions specified.
MMFS_ENVIRON
MMFS_ENVIRON error log entry records are associated with other records
of the MMFS_GENERIC or MMFS_SYSTEM_UNMOUNT types.
They indicate that the root cause of the error is external to GPFS and
usually in the network that supports GPFS.
Check the network and its physical connections. The data portion of this
record supplies the return code provided
by the communications code.
MMFS_FSSTRUCT
The MMFS_FSSTRUCT error log entry indicates that GPFS has detected a
problem with the on-disk structure of
the file system. The severity of these errors depends on the exact
nature of the inconsistent data structure.
If it is limited to a single file, EIO errors will be reported to the
application and operation will continue.
If the inconsistency affects vital metadata structures, operation will
cease on this file system.
These errors are often associated with an MMFS_SYSTEM_UNMOUNT error log
entry and will probably occur on all nodes.
If the error occurs on all nodes, some critical piece of the file
system is inconsistent. This may occur as a result of a GPFS error or
an error in the disk system. Issuing the mmfsck command may repair the
error.
MMFS_GENERIC
The MMFS_GENERIC error log entry means that GPFS self diagnostics have
detected an internal error, or that
additional information is being provided with an MMFS_SYSTEM_UNMOUNT
report. If the record is associated with an
MMFS_SYSTEM_UNMOUNT report, the event code fields in the records will be
the same. The error code and return code
fields may describe the error. See Messages for a listing of codes
generated by GPFS.
MMFS_LONGDISKIO
The MMFS_LONGDISKIO error log entry indicates that GPFS is experiencing
very long response time for disk requests.
This is a warning message and may indicate that your disk system is
overloaded or that a failing disk is requiring
many I/O retries. Follow your operating system's instructions for
monitoring the performance of your I/O subsystem
on this node. The data portion of this error record specifies the disk
involved.
There may be related error log entries from the disk subsystems that
will pinpoint the actual cause of the problem.
See the book AIX 5L Performance Management Guide.
MMFS_PHOENIX
MMFS_PHOENIX error log entries reflect a failure in GPFS interaction
with Group Services. Go to the book
Reliable Scalable Cluster Technology: Administration Guide. Search for
diagnosing group services problems.
Follow the problem determination and repair action specified. These
errors are usually not GPFS problems,
although they will disrupt GPFS operation.
MMFS_QUOTA
The MMFS_QUOTA error log entry is used when GPFS detects a problem in
the handling of quota information.
This entry is created when the quota manager has a problem reading or
writing the quota file. If the quota manager
cannot read all entries in the quota file when mounting a file system
with quotas enabled, the quota manager
shuts down, but file system manager initialization continues. Client
mounts will not succeed and will return
an appropriate error message.
MMFS_SYSTEM_UNMOUNT
The MMFS_SYSTEM_UNMOUNT error log entry means that GPFS has discovered a
condition which may result in
data corruption if operation with this file system continues from this
node. GPFS has marked the file system
as disconnected and applications accessing files within the file system
will receive ESTALE errors.
This may be the result of a number of conditions.
MMFS_SYSTEM_WARNING
The MMFS_SYSTEM_WARNING error log entry means that GPFS has detected a
system level value approaching its
maximum limit. This may occur as a result of the number of inodes
(files) reaching its limit. Issue the mmchfs command to increase the
number of inodes for the file system so that at least a minimum of 5%
is free.
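For example, using the -F option from the mmchfs synopsis shown earlier
(the number is illustrative only):

# mmchfs fs0 -F 1200000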
LABEL: MMFS_ABNORMAL_SHUTD
IDENTIFIER: 1FB9260D
Description
SOFTWARE PROGRAM ABNORMALLY TERMINATED
Probable Causes
SOFTWARE PROGRAM
Failure Causes
SOFTWARE PROGRAM
Recommended Actions
CONTACT APPROPRIATE SERVICE REPRESENTATIVE
Detail Data
COMPONENT ID
5765B9500
PROGRAM
mmfsd64
DETECTING MODULE
/fs/mmfs/ts/phoenix/PhoenixInt.C
MAINTENANCE LEVEL
2.2.0.0
LINE
4409
RETURN CODE
668
REASON CODE
0000 0000
EVENT CODE
0
Note 3:
-------
A fix is available
Download fix packs
APAR status
Closed as program error.
Error description:
When starting gpfs, mmfsd64 on the 64-bit kernel may segfault
with a stack trace similar to:
cxiMapShSeg__Fv() at 0x1003579d4
CleanOldSharedMemory__Fv() at 0x1000025dc
mainBody__FiPPc(??, ??) at 0x100334c20
main(??, ??) at 0x10000257c
Local fix
Problem summary
When starting gpfs, mmfsd64 on the 64-bit kernel may segfault
with a stack trace similar to:
cxiMapShSeg__Fv() at 0x1003579d4
CleanOldSharedMemory__Fv() at 0x1000025dc
mainBody__FiPPc(??, ??) at 0x100334c20
main(??, ??) at 0x10000257c
SYMPTOM STRING
Problem conclusion
Make sure to update the current cpu's ppda rather than another
cpu's ppda
Temporary fix
Comments
APAR information
APAR number IY35279
Reported component name AIX 5L POWER
Reported component ID 5765E6100
Reported release 510
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2002-10-02
Closed date 2002-10-02
Last modified date 2002-11-07
Note 4:
-------
IY56448: WHEN CLLSIF OUTPUT IS NOT CORRECT, MMCOMMON DOES NOT HANDLE
A fix is available
Obtain fix for this APAR
APAR status
Closed as program error.
Error description
from GPFS log:
sort: 0653-655 Cannot open /var/mmfs/tmp/cllsifOutput.mmcommon.2
82794
Local fix
Correct the cluster information so that the cllsif output is correct.
Problem summary
WHEN CLLSIF OUTPUT IS NOT CORRECT, MMCOMMON DOES NOT HANDLE
Problem conclusion
add checks for invalid data from HACMP, RPD, or SDR when
getNodeData is called
Temporary fix
Comments
APAR information
APAR number IY56448
Reported component name GPFS FOR AIX
Reported component ID 5765F6400
Reported release 220
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2004-05-03
Closed date 2004-05-03
Last modified date 2004-06-24
http://book.opensourceproject.org.cn/enterprise/cluster/ibmcluster/opensource/7819/ddu0070.html
The first thing to check is the connection authorization from one node
to other nodes and for extraneous messages
in the command output. You can find information on OpenSSH customization
in Appendix B, "Common facilities"
on page 275. Check that all nodes can connect to all others without any
password prompt.
You can also check if your GPFS cluster has been configured correctly to
use the specified remote shell
and remote copy commands by issuing the mmlscluster command, as in
Example 8-17. Verify the contents
of the remote shell command and remote file copy command fields.
Nodes in nodeset 1:
-------------------
   1  storage001-myri0  10.2.1.141  storage001-myri0.cluster.com  10.0.3.141
   2  node001-myri0     10.2.1.1    node001-myri0.cluster.com     10.0.3.1
   3  node002-myri0     10.2.1.2    node002-myri0.cluster.com     10.0.3.2
   4  node003-myri0     10.2.1.3    node003-myri0.cluster.com     10.0.3.3
   5  node004-myri0     10.2.1.4    node004-myri0.cluster.com     10.0.3.4
[root@storage001 root]#
There are many things that could cause this problem: cable failures,
network card problems, switch failures, and so on. You can start by
checking whether the affected node is powered on. If the node is up,
check the node connectivity and verify that the sshd daemon is running
on the remote node. If not, restart the daemon by issuing:
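(The restart command is not shown in the original; on the Linux cluster
nodes this example describes, it would be something like:)

# /etc/init.d/sshd restart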
Sometimes you may see a mmdsh error message due to the lack of an mmfsd
process on some of the nodes,
as in Example 8-18. Make sure the mmfsd is running on all nodes, using
lssrc -a, as in Example 8-19.
# lssrc -a
Subsystem Group PID Status
cthats cthats 843 active
cthags cthags 943 active
ctrmc rsct 1011 active
ctcas rsct 1018 active
IBM.HostRM rsct_rm 1069 active
IBM.FSRM rsct_rm 1077 active
IBM.CSMAgentRM rsct_rm 1109 active
IBM.ERRM rsct_rm 1110 active
IBM.AuditRM rsct_rm 1148 active
mmfs aixmm 1452 active
IBM.SensorRM rsct_rm inoperative
IBM.ConfigRM rsct_rm inoperative
You can verify whether the disk is reachable by the operating system
using mmlsnsd -m, as shown in Example 8-20.
In this situation, the GPFS disk gpfs1nsd is unreachable. This could
mean that the disk has been turned off,
has been removed from its bay, or has failed for some other reason.
To correct this problem, you must first verify whether the disk is
correctly attached and that it is not dead. After that, you can verify
whether the driver for the disk is operational, and reload the driver
using the rmmod and insmod commands. If the disk had only been removed
from its bay or turned off, reloading the driver will activate the disks
again, and then you can enable them again following the steps in "The
disk is down and will not come up" on page 241. If the disk had any kind
of hardware problem that will require replacing the disk, refer to
8.1.3, "Replacing a failing disk in an existing GPFS file system" on
page 230.
For our example, we see that the gpfs0 file system has lost two of its
three disks: gpfs1nsd and gpfs3nsd. In this situation, we have to
recover the two disks, run a file system check, and then re-stripe the
file system.
Because the file system check and re-stripe require access to the file
system, which is down, you must first re-activate the disks. Once the
file system is up again, recovery may be undertaken. In Example 8-21, we
verify which disks are down using the mmlsdisk command, re-activate the
disks by using the mmchdisk command, and then verify the disks again
with mmlsdisk.
Now that we have the three disks up, it is time to verify the file
system consistency. Additionally, because some operations could have
occurred on the file system when only one of the disks was down, we must
re-balance it. We show the output of the mmfsck and mmrestripefs
commands in Example 8-22. The mmfsck command has some important options
you may need to use, like -r, for read-only access, and -y, to
automatically correct problems found in the file system.
33792 inodes
14 allocated
0 repairable
0 repaired
0 damaged
0 deallocated
0 orphaned
0 attached
384036 subblocks
4045 allocated
0 unreferenced
0 deletable
0 deallocated
231 addresses
0 suspended
==========
76. HACMP:
==========
You can use the following hardware for your CSM management server,
install server, and nodes:
IBM System x: System x, IBM xSeries, IBM BladeCenter*, and IBM eServer
325, 326, and 326m hardware
IBM System p: System p, IBM pSeries, IBM BladeCenter*, System p5, IBM
eServer OpenPower
*The BladeCenter JS models use the POWER architecture common to all
System p servers.
-- GPFS:
-- -----
GPFS is a high-performance cluster file system for AIX 5L, Linux and
mixed clusters that provides users
with shared access to files spanning multiple disk drives. By dividing
individual files into blocks
and reading/writing these blocks in parallel across multiple disks, GPFS
provides very high bandwidth;
in fact, GPFS has won awards and set world records for performance. In
addition, GPFS's multiple data paths
can also eliminate single points of failure, making GPFS extremely
reliable. GPFS currently powers many of
the world's largest scientific supercomputers and is increasingly used
in commercial applications requiring
high-speed access to large volumes of data such as digital media,
engineering design, business intelligence,
financial analysis and geographic information systems. GPFS is based on
a shared disk model, providing lower
overhead access to disks not directly attached to the application nodes,
and using a distributed protocol
to provide data coherence for access from any node.
IBM's General Parallel File System (GPFS) provides file system services
to parallel and serial applications.
GPFS allows parallel applications simultaneous access to the same files,
or different files, from any node
which has the GPFS file system mounted while managing a high level of
control over all file system operations.
GPFS is particularly appropriate in an environment where the aggregate
peak need for data bandwidth exceeds
the capability of a distributed file system server.
GPFS allows users shared file access within a single GPFS cluster and
across multiple GPFS clusters.
A GPFS cluster consists of:
AIX 5L nodes, Linux nodes, or a combination thereof (see GPFS cluster
configurations). A node may be:
An individual operating system image on a single computer within a
cluster.
A system partition containing an operating system. Some System p5 and
pSeries machines allow multiple system partitions, each of which is
considered to be a node within the GPFS cluster.
Network shared disks (NSDs) created and maintained by the NSD component
of GPFS
All disks utilized by GPFS must first be given a globally accessible NSD
name.
The GPFS NSD component provides a method for cluster-wide disk naming
and access.
On Linux machines running GPFS, you may give an NSD name to:
Physical disks
Logical partitions of a disk
Representations of physical disks (such as LUNs)
On AIX machines running GPFS, you may give an NSD name to:
Physical disks
Virtual shared disks
Representations of physical disks (such as LUNs)
With PSSP 3.5, AIX 5L 5.1 or 5.2 must be on the control workstation.
Note that your control workstation
must be at the highest AIX level in the system. If you have any HMC-
controlled servers in your system,
AIX 5L 5.1 or 5.2 must be on each HMC-controlled server node. Other
nodes can have AIX 5L 5.1 and PSSP 3.4,
or AIX 4.3.3 with PSSP 3.4 or PSSP 3.2. However, you can only run with
the 64-bit AIX kernel and switch
between 64-bit and 32-bit AIX kernel mode on nodes with PSSP 3.5.
-- HACMP:
-- ------
-- RSCT:
-- -----
Group Services and Topology Services, although included in RSCT, are not
used in the management
domain structure of CSM. These two components are used in peer domain
clusters for applications,
such as High-Availability Cluster Multiprocessing (HACMP) and General
Parallel File System (GPFS),
providing node and process coordination and node and network failure
detection. Therefore, for these
applications, a .rhosts file may be needed (for example, for HACMP
configuration synchronization).
>> You configure a set of nodes for manageability using the Clusters
Systems Management (CSM) product as
described in IBM Cluster Systems Management: Administration Guide. The
set of nodes configured for manageability
is called a management domain of your cluster.
-- HPSS:
-- -----
-- C-SPOC:
-- -------
-- HA Network Server:
-- ------------------
Resource Groups can be available from a single node or, in the case of
concurrent applications,
available simultaneously from multiple nodes.
thread:
Q:
Hi All,
We have 2 servers running HACMP 4.3.1 in non-concurrent rotating mode
with the IP Take Over facility enabled. We have only one resource group
running on Server A. In case of failure, services transfer to Server B
(a backup server with the same configuration).
A:
After you define the application server, you can add it to a resource
group. A resource group is a set of
resources that you define so that the HACMP software can treat them as a
single unit.
76.4 Daemons:
=============
Cluster Services:
Notice that if you list the daemons in the AIX System Resource
Controller (SRC), you will see ES appended
to their names. The actual executables do not have the ES appended; the
process table shows the executable
by path (/usr/es/sbin/cluster...).
This daemon monitors the status of the nodes and their interfaces, and
invokes the appropriate scripts
in response to node or network events. It also centralizes the storage
of and publishes updated information
about HACMP-defined resource groups. The Cluster Manager on each node
coordinates information gathered from
the HACMP global ODM, and other Cluster Managers in the cluster to
maintain updated information about the content,
location, and status of all HACMP resource groups. This information is
updated and synchronized among all nodes
whenever an event occurs that affects resource group configuration,
status, or location.
All cluster nodes must run the clstrmgr daemon.
-- Cluster SMUX Peer daemon (clsmuxpd):
The HACMP/ES daemons are collected into the following SRC subsystems and
groups:
When using the SRC commands, you can control the clstrmgr, clinfo, and
clsmuxpd daemons by specifying
the SRC cluster group.
Starting with hacmp 5.3, the cluster manager process is always running.
It can be in one of two states,
as displayed by the command
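(the command itself is missing in the original; with HACMP 5.3 this is
typically:)

# lssrc -ls clstrmgrES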
Using smitty:
-------------
To start the HACMP cluster (the HACMP Cluster Manager) on the cluster
nodes, there are two methods.
1. The first method is the most convenient; however, it can only be used
if rsh is enabled. It allows the
Cluster Manager to be started on both nodes with a single command:
% smitty hacmp
   -> Cluster Services
Using the C-SPOC utility, you can start cluster services on any node (or
on all nodes) in a cluster
by executing the C-SPOC /usr/es/sbin/cluster/sbin/cl_rc.cluster command
on a single cluster node.
The C-SPOC cl_rc.cluster command calls the rc.cluster command to start
cluster services on the nodes specified
from the one node. The nodes are started in sequential order, not in
parallel. The output of the command
run on the remote node is returned to the originating node. Because the
command is executed remotely,
there can be a delay before the command output is returned.
The following example shows the major commands and scripts executed on
all cluster nodes when cluster
services are started in clusters using the C-SPOC utility.
NODE A NODE B
cl_rc.cluster
| \rsh
| \
rc.cluster rc.cluster
| |
| |
clstart clstart
| |
| |
startsrc startsrc
The following figure illustrates the major commands and scripts called
at cluster shutdown:
Using the C-SPOC utility, you can stop cluster services on a single node
or on all nodes in a cluster
by executing the C-SPOC /usr/es/sbin/cluster/sbin/cl_clstop command on a
single node. The C-SPOC cl_clstop
command performs some cluster-wide verification and then calls the
clstop command to stop cluster services
on the specified nodes. The nodes are stopped in sequential order, not
in parallel. The output of the command
run on the remote node is returned to the originating node. Because the
command is executed remotely,
there can be a delay before the command output is returned.
NODE A NODE B
cl_clstop
| \rsh
| \
clstop clstop
| |
| |
stopsrc stopsrc
smit cl_admin -> Manage HACMP Services -> Start Cluster Services
smit cl_admin -> Manage HACMP Services -> Stop Cluster Services
If you consider the question of how the failover node takes control of a
Resource Group, we can consider
the following options:
/usr/sbin/cluster/history/cluster.mmdd
    Contains time-stamped, formatted messages generated by the HACMP
    for AIX scripts. The system creates a new cluster history log file
    every day that has a cluster event occurring. It identifies each
    day's file by the file name extension, where mm indicates the month
    and dd indicates the day.

/tmp/cm.log
    Contains time-stamped, formatted messages generated by HACMP for
    AIX clstrmgr activity. Information in this file is used by IBM
    Support personnel when the clstrmgr is in debug mode. Note that
    this file is overwritten every time cluster services are started;
    so, you should be careful to make a copy of it before restarting
    cluster services on a failed node.

/tmp/cspoc.log
    Contains time-stamped, formatted messages generated by HACMP for
    AIX C-SPOC commands. Because the C-SPOC utility lets you start or
    stop the cluster from a single cluster node, the /tmp/cspoc.log is
    stored on the node that initiates a C-SPOC command.

/tmp/dms_logs.out
    Stores log messages every time HACMP for AIX triggers the deadman
    switch.

/tmp/emuhacmp.out
    Contains time-stamped, formatted messages generated by the HACMP
    for AIX Event Emulator. The messages are collected from output
    files on each node of the cluster, and cataloged together into the
    /tmp/emuhacmp.out log file. In verbose mode (recommended), this log
    file contains a line-by-line record of every event emulated.
    Customized scripts within the event are displayed, but commands
    within those scripts are not executed.

/var/hacmp/clverify/clverify.log
    Contains messages when the cluster verification has run.
Note 1:
-------
thread:
Q:
A:
Yes, they can co-exist. But my question is: why complicate things? You
cannot have a RAC cluster without Oracle Clusterware, meaning that if
you install HACMP you will have to install Oracle Clusterware on top of
it as well. Why complicate the stack... keep it simple. We have been
using Oracle Clusterware on AIX without HACMP without any issues so
far.
thread:
Q:
I've also heard that if RAC is used for a cold failover solution, then
the
price is discounted.
A:
Note 1:
-------
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD101347
Document Author:
Shawn Bodily
Document ID:
TD101347
Doc. Organization:
Advanced Technical Support
Document Revised:
03/06/2007
Product(s) covered:
HACMP
               AIX 4.3.3  AIX 5.1  AIX 5.1 (64-bit)  AIX 5.2  AIX 5.3
HACMP 4.5         No        Yes       No               Yes      No
HACMP/ES 4.5      No        Yes       Yes              Yes      No
HACMP/ES 5.1      No        Yes       Yes              Yes      Yes
HACMP/ES 5.2      No        Yes       Yes              Yes      Yes
HACMP/ES 5.3      No        No        No               Yes      Yes
HACMP/ES 5.4      No        No        No               Yes      Yes
Note 2:
-------
HACMP 5.2:
AIX
Each cluster node must have one of the following installed:
AIX 5L v5.1 plus the most recent maintenance level (minimum ML 5)
AIX 5L v5.2 plus the most recent maintenance level (minimum ML 2)
Do all cluster nodes need to be at the same version of HACMP and AIX 5L
operating system?
Can I use an existing Enhanced Concurrent Mode volume group for disk
heartbeat? Or do I need to define a new one?
Answer: Before HACMP can manage and keep your application highly
available, you need to tell HACMP about
your cluster and the application. There are 4 steps:
Step 1) Define the nodes that will keep your application highly
available
The local node (the one where you are configuring HACMP) is assumed to
be one of the cluster nodes
and you must give HACMP the name of the other nodes that make up the
cluster. Just enter a hostname or IP address
for each node.
To see just how easy it is to configure HACMP, look for Using the SMIT
Assistant in Chapter 11 of the
Installation Guide. View the online documentation for HACMP. HACMP for
Linux does not include the advanced
discovery and verification features available on AIX 5L. When
configuring HACMP for Linux you must manually
define the cluster, networks and network interfaces. Any changes to the
configuration require HACMP for Linux
to be restarted on all nodes.
HACMP V5 includes a new feature whereby you may be able to avoid some of
the subnet requirements
by configuring HACMP to use a different set of IP alias addresses for
heartbeat. With this feature you provide
a base or starting address and HACMP calculates a set of addresses in
proper subnets-when cluster services
are active, HACMP adds these addresses as IP alias addresses to the
interfaces and then uses these alias
addresses exclusively for heartbeat traffic. You can then assign your
"regular" boot, service and persistent
labels in any subnet, but be careful: although this feature avoids
multipath routing for heartbeat,
multipath routing may adversely affect your application. Heartbeat via
IP Aliasing is discussed in Chapter 2
of the Concepts and Facilities Guide and Chapter 3 of the Administration
and Troubleshooting Guide.
View the online documentation for HACMP.
Answer: 1) Make the nodes look at /etc/hosts first before the nameserver
by creating a
/etc/netsvc.conf file with the following entry:
hosts=local,bind
where local tells it to look at /etc/hosts first and then the nameserver
If the config_too_long event is run, you should check the hacmp.out file
to determine the cause and if manual
intervention is required. For more information on recovery after an
event failure, refer to Recover from HACMP
Script Failure in Chapter 18 of the Administration and Troubleshooting
Guide.
Answer: No, though there are some restrictions when running mixed mode
clusters.
Mixed levels of AIX 5L on cluster nodes do not cause problems for HACMP
as long as the level of AIX 5L
is adequate to support the level of HACMP being run on that node. All
cluster operations are supported
in such an environment. The HACMP install and update packaging will
enforce the minimum level of AIX 5L
required on each system.
If you configure the disk heartbeat path using the same disk and vg as
is used by the application, the best practice
is to select a disk which does not have frequently accessed or
performance critical application data:
although the disk heartbeat overhead is small (2-4 seeks/sec), it could
potentially impact application performance or,
conversely, excess application access could cause the disk hb connection
to appear to go up and down.
Ultimately the decision of which disk and volume group to use for
heartbeat depends on what makes sense for
your shared disk environment and management procedures. For example,
using a separate vg just for heartbeat
isolates the heartbeat from the application data, but adds another
volume group that has to be maintained
(during upgrades, changes, etc) and consumes another LUN.
Note 5:
-------
thread:
Q:
Hi,
A:
You need to install the bos.clvm.rte fileset from the HACMP CD in order
to make HACMP start the gsclvmd service
# /usr/es/sbin/cluster/clstat -a -o
Other example:
root@n5101l01:/root#clstat -a -o
Start the daemons on all of the nodes in the nodeset by issuing the
mmstartup command:
# mmstartup -C set1
If GPFS does not start, see the General Parallel File System for AIX 5L
in an HACMP Cluster:
Problem Determination Guide and search for the GPFS daemon will not come
up.
# mmshutdown -C set1
7.1.
----
The following lines are added to /etc/inittab when you initially
install HACMP.
# startsrc -s clcomdES
7.2.
----
To install HACMP:
# smitty install_all
7.3.
----
Devices supported:
7.4.
----
7.5.
----
Shared Logical Volume:
7.6.
----
Note the use of the -w option on the grep invocation - this ensures that
if you have a sharedvg and a sharedvg2
volume group then the grep only finds the sharedvg line (if it exists).
If you need to do something if the volume group is offline and don't
need to do anything if it is online
then use this:
Some people don't like the null command in the above example. They may
prefer the following alternative:
Although we're not particularly keen on the null command in the first
approach, we really don't like the use of $? in if tests, since it is
far too easy for the command generating the $? value to become
separated from the if test (a classic example of how this happens is
adding an echo command immediately before the if command when you're
debugging the script). If we find ourselves needing to test the exit
status of a command in an if test, then we either use the command
itself as the if test (as in the first approach) or we do the
following:
In our opinion (yours may vary), this makes it much more obvious that
the exit status of the grep command is important and must be preserved.
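A minimal sketch of the patterns discussed above (the volume group name
sharedvg and the use of lsvg -o to list online volume groups are
assumptions for illustration):

#!/bin/ksh
# first approach: the command itself is the if test
if lsvg -o | grep -qw sharedvg ; then
   :   # null command - volume group is online, nothing to do
else
   echo "sharedvg is offline - run the offline actions here"
fi

# alternative: capture the exit status immediately, so a later
# command (e.g. a debugging echo) cannot clobber $?
lsvg -o | grep -qw sharedvg
vg_online=$?
if [ $vg_online -ne 0 ] ; then
   echo "sharedvg is offline - run the offline actions here"
fi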
su - dbadmin -c "/usr/local/db/startmeup.sh"
This will run the startmeup.sh script in a process owned by the dbadmin
user. Note that it is possible
to pass parameters to the script/program as well:
DBUSER=dbadmin
DBNAME=PRODDB
STARTCMD="/usr/local/db/startmeup.sh $DBNAME"
su - $DBUSER -c "$STARTCMD"
DBUSER=dbadmin
kill ` ps -u $DBUSER -o pid= `
Since a simple kill is rarely enough and a kill -9 is a rather rude way
to start a conversation, the following
sequence might be useful:
DBUSER=dbadmin
kill ` ps -u $DBUSER -o pid= `
sleep 10
kill -9 ` ps -u $DBUSER -o pid= `
To see how this works, just enter the ps command. It produces output
along these lines:
12276
12348
Note that equal sign in the pid= part is important as it eliminates the
normal PID title which would appear
at the top of the column of output. I.e. without the equal sign, you'd
get this:
PID
12276
12348
Passing that PID title line to the kill command is just a bad idea:
writing scripts which normally produce error messages makes it much
more difficult to know if things are working correctly.
#!/bin/ksh
DBUSER=dbadmin
STOPCMD="/usr/local/db/stopdb.sh"
# ask nicely
su - $DBUSER -c "$STOPCMD"
# then clean up whatever is left, as in the kill sequence shown above
sleep 10
kill ` ps -u $DBUSER -o pid= ` 2>/dev/null
sleep 10
kill -9 ` ps -u $DBUSER -o pid= ` 2>/dev/null
>> thread:
Q:
Hello,
A:
OK, I think I have found it - it is a bug in RSCT 2.4.6 and can be
fixed by installing the fix for APAR IY91960
http://www-1.ibm.com/support/docview.wss?uid=isg1IY91960
it is:
rsct.basic.rte.2.4.6.3.bff
rsct.core.hostrm.2.4.6.1.bff
rsct.core.rmc.2.4.6.3.bff
rsct.core.sec.2.4.6.3.bff
rsct.core.sensorrm.2.4.6.1.bff
rsct.core.utils.2.4.6.3.bff
rsct.opt.saf.amf.2.4.6.1.bff
rsct.opt.storagerm.2.4.6.3.bff
>> thread:
APAR IY26257
APAR status
Closed as program error.
Error description
Problem summary
When the RSCT Topology Services daemon exits in one node, it
takes a finite time for the node to be detected as down by
the other nodes on each of the networks being monitored.
This happens because the other nodes need to go through a
process of missing incoming heartbeats from the given node,
and can only declare the node down after enough heartbeats
are missed. If a new instance of the daemon is started
then it is possible for the old instance to be still
thought as alive by other nodes by the time the new
instance starts.
LABEL: GS_DOM_NOT_FORM_WA
IDENTIFIER: AA8DB7B3
Type: INFO
Resource Name: grpsvcs
Description: Group Services daemon has not been
established.
LABEL: GS_ERROR_ER
IDENTIFIER: 463A893D
Note 9:
-------
#
# HACMP - Do not modify!
#
10.17.4.11 n5101l01-boot.nl.eu.abnamro.com n5101l01-boot
10.17.4.10 zd101l01-boot.nl.eu.abnamro.com zd101l01-boot
10.17.3.59 n5101l01.nl.eu.abnamro.com n5101l01
10.17.3.51 zd101l01.nl.eu.abnamro.com zd101l01
10.17.3.100 sonriso.nl.eu.abnamro.com sonriso
#
# End of HACMP
#
======================================================================
79. Notes on Installation and Migration AIX, HP-UX, Linux:
======================================================================
79.1 Migrations AIX 5.1,AIX 5.2,AIX 5.3:
---------------------------------------
-- Preservation
This method replaces an earlier version of the BOS but retains the root
volume group, the user-created logical volumes,
and the /home file system. The system file systems /usr, /var, /tmp, and
/ (root) are overwritten.
Product (application) files and configuration data stored in these file
systems will be lost.
Information stored in other non-system file systems will be preserved.
For instructions on preserving the user-defined structure of an existing
BOS, refer to Installing new and
complete BOS overwrite or preservation.
-- Migration
This method upgrades from AIX 4.2, 4.3, 5.1, or 5.2 versions of the BOS
to AIX 5.3 (see the release notes
for restrictions). The migration installation method is used to upgrade
from an existing version or release
of AIX to a later version or release of AIX. A migration installation
preserves most file systems,
including the root volume group, logical volumes, and system
configuration files. It overwrites the /tmp file system.
Note 1:
-------
thread
Found that the prngd subsystem (used with ssh, random number generator)
on AIX 5.1 is incompatible with the
AIX 5.2 upgrade. BEFORE migration this subsystem should be disabled
either in /etc/rc.local or erased completely:
rmssys -s prngd
Boot into maintenance mode (needs first 5.2 CD and SMS console)
Limited function shell (or getrootfs)
vi /etc/rc.local to disable prngd
- Firmware/Microcode upgrade
It is wise to update the firmware/microcode of your system before
upgrading the system. Checkout the IBM support
site Directly via ftp site.
- Base system
Straightforward like installing from scratch. When asked, select
"Migration" instead of "Overwrite" installation.
Note 3:
-------
--------------------------------------------------------------------------
This document contains the latest tips for successful installation of
AIX 5.2, and will be updated as new tips become available.
APARs and PTFs mentioned in this document, when available, can be
obtained from the following web site.
http://www.ibm.com/servers/eserver/support/pseries/aixfixes.html
http://www14.software.ibm.com/webapp/set2/sas/f/genunix3/aixfixes.html
The AIX installation CD-ROMs and the level of AIX pre-installed on new
systems may not contain the latest fixes available at the time you
install the system, and may contain errors. Some these fixes may be
critical to the proper operation of your system. We recommend that you
update to the latest service level, which can be obtained from
http://www.ibm.com/servers/eserver/support/pseries/aixfixes.html.
The compare_report command, which is documented in the AIX Commands
Reference, can be used to determine which available updates are newer
than those installed on your system.
--------------------------------------------------------------------------
http://techsupport.services.ibm.com/server/mdownload/
http://www14.software.ibm.com/webapp/set2/firmware/gjsn
--------------------------------------------------------------------------
Inventory Scout
Inventory Scout introduces a new microcode management graphical user
interface (GUI). This feature is available on your AIX system by
installing an additional fileset, invscout.websm, onto the system, or if
a Hardware Management Console (HMC) is attached, using the microcode
update function. This GUI is a Web-based System Manager plug-in that
surveys the microcode levels of the system, and on POWER4 systems,
downloads and installs microcode. Inventory Scout continues to work with
the applet found at
https://techsupport.services.ibm.com/server/aix.invscoutMDS to survey
only.
This release of Inventory Scout significantly changes the method used to
determine the microcode levels of systems, adapters, and devices to
compare to the latest available levels. Previously, data was collected
and sent to IBM to determine the current state of the system.
invscout.com 2.1.0.1
invscout.ldb 2.1.0.2
invscout.rte 2.1.0.1
invscout.websm 2.1.0.1
To obtain the required filesets, order APAR IY44381. Go to the following
URL:
http://www.ibm.com/servers/eserver/support/pseries/aixfixes.html
If you are using this microcode management feature tool through the HMC,
your HMC must be at Release 3, Version 2.2. This can be obtained by
ordering APAR IY45844.
Known Problems:
For more information about these devices, see the Readme files at
http://techsupport.services.ibm.com/server/mdownload.
When updating system firmware from an HMC, the connection between the
HMC and the system might get out of sync. This situation can be
recovered by going to your server management panel on the HMC and
selecting Rebuild Managed System.
Due to the changes in how the survey works, you can no longer
concatenate survey results prior to sending them to IBM.
--------------------------------------------------------------------------------
See: http://www-1.ibm.com/support/docview.wss?uid=isg1SSRVAIX52TIPS081512_450
Note 4:
-------
thread:
Q:
Hi,
I am running AIX 5.2 ML03. I am receiving the following Attention msg
during the mksysb:
****ATTENTION****
The boot image you created might fail to boot because the size exceeds
the
system limit. For information about fixes or workarounds,
see/usr/lpp/bos.sysmgt/README.
****ATTENTION****
..
Creating list of files to back up..
Backing up 569000 files.................................
This solution DOES NOT WORK on models 7028, 7029, 7038, 7039, and 7040
systems, see option 4 regarding these models.
If APAR IY40824 (AIX 5.1) or IY40975 (AIX 5.2) was installed prior to
making
the backup, then you may boot from the backup and go to the open
firmware
prompt. To get to the open firmware prompt, when the system beeps twice
after
powering it on, press F8 on the keyboard (or the 8 key on an ASCII
terminal).
You can also get to the open firmware prompt from SMS. The open firmware
prompt is also referred to as the "OK" prompt. On some systems there
will be
a menu option located on the initial SMS menu. On others, it will be
located
under the Multiboot menu. From the open firmware prompt execute the
following:
Notes:
a) To use this option, the backup must have this APAR in it and
therefore
must be created after installing the APAR.
b) The above commands will have to be executed each time you boot from
the
large boot image backup media.
Note 5:
-------
If there are problems with the bosinst.data file, the image.data file,
or the tapeblksz file, these files
can be restored from the second image on the tape and checked. These
files, as well as commands necessary
for execution in the RAM file system (when running in maintenance mode
after booting from the tape),
are stored in the second image.
Note 6:
-------
thread
Before you migrate 5.1 -> 5.2, do as an absolute minimum the following:
- errpt, and resolve all serious issues. If you can't, then STOP.
- enough free space rootvg, /, /tmp, /usr, /var
- lppchk -v If dependencies are not OK, then correct or STOP.
- check firmware. Is the current firmware ok for AIX52? Use "prtconf" or
"lsmcode".
Example:
# lsmcode -c
Or use:
# lscfg -vp | grep -p Platform
Note: it's quite likely that your apps still need a number of AIX fixes
(APARs) before they can run on AIX52.
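As a rough sketch, the checks in this note could be run as follows
(exact flags depend on your AIX level):
# errpt | more                        check the error log for serious issues
# df -m / /tmp /usr /var              check free space in the rootvg filesystems
# lppchk -v                           verify fileset dependencies
# prtconf | grep -i firmware          check the current firmware level
# lsmcode -c                          same, via lsmcode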
Note 7:
-------
thread
During the reboot, the file "/etc/rc.net" is executed (boot stage two).
It calls "/usr/lib/methods/cfgif",
which configures the network (ethernet adapter, server name, default
gateway, static routes).
Because of the two unconfigured cards, the execution of
"/usr/lib/methods/cfgif" made the server do a "SYSTEM DUMP DEV"
and reboot again.
SO, BEFORE AN AIX UPGRADE FROM 4.3.3 TO 5.2, BE SURE THAT ALL CARDS
ARE CORRECTLY CONFIGURED.
Thanks
Note 8:
-------
AIX can use the /etc/rc.local but you need an entry in /etc/inittab as
follows:
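The inittab entry itself is missing here; a typical entry, added with
mkitab (the label "rclocal" is just an example), would be:
# mkitab "rclocal:2:wait:/etc/rc.local > /dev/console 2>&1"
# chmod +x /etc/rc.local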
==========================
80. Some IP subnetting:
==========================
Notice the first bits in the first byte of the class address.
The value of a byte is (where a7..a0 are the bits of the byte):
a7x2^7 + a6x2^6 + a5x2^5 + a4x2^4 + a3x2^3 + a2x2^2 + a1x2^1 + a0x2^0
So, for example, in class A: 0xxxxxxx means that the first byte has a
maximum value of
0x128 + 1x64 + 1x32 + 1x16 + 1x8 + 1x4 + 1x2 + 1x1 = 127, but 127 is
reserved, so Class A runs from 1 - 126
Subnetting:
Class C subnetting:
No of No of No of No of
subnets hosts subnetbits hostbits
-----------------------------------------------------------
*255.255.255.128 NA NA 1 7 * not valid
with most routers
255.255.255.192 2 62 2 6
255.255.255.224 6 30 3 5
255.255.255.240 14 14 4 4
255.255.255.248 30 6 5 3
255.255.255.252 62 2 6 2
Class B subnetting:
No of No of No of No of
subnets hosts subnetbits hostbits
-----------------------------------------------------------
255.255.128.0 NA NA 1 15
255.255.192.0 2 16382 2 14
255.255.224.0 6 8190 3 13
255.255.240.0 14 4094 4 12
255.255.248.0 30 2046 5 11
255.255.252.0 62 1022 6 10
255.255.254.0 126 510 7 9
255.255.255.0 254 254 8 8
255.255.255.128 510 126 9 7
255.255.255.192 1022 62 10 6
255.255.255.224 2046 30 11 5
255.255.255.240 4094 14 12 4
255.255.255.248 8190 6 13 3
255.255.255.252 16382 2 14 2
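A worked example, using the class C table above: take network
192.168.1.0 with mask 255.255.255.224, so 3 subnet bits and 5 host bits.
Number of subnets = 2^3 - 2 = 6, hosts per subnet = 2^5 - 2 = 30.
The first usable subnet is 192.168.1.32, with hosts
192.168.1.33 - 192.168.1.62 and broadcast address 192.168.1.63.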
========================================
81. Notes on TSM:
========================================
Data that has been backed up or archived from a TSM v5.1 client cannot
be restored or retrieved to any
previous level client. The data must be restored or retrieved by a
v5.1.0 or higher level client.
Once you migrate to 5.1 you cannot go back to an older client (but you
can certainly restore older data).
This is non-negotiable. You have been warned.
This product installs into /usr/tivoli/tsm/client. It requires 40 to 50
megabytes of space.
During the installation SMIT will extend the filesystem if you do not
have enough space.
If you do not have space in rootvg, you can symlink
/usr/tivoli/tsm/client into a directory
where you do have enough space.
You must have registered a node and have received confirmation of your
node name. Make sure you know
the password that you specified when applying for the node.
You must have xlC.rte installed in order to install the client. If you
wish to use the graphical client
under AIX you must have AIXwindows X11R6, Motif 1.2 or Motif 2.0, and
the CDE installed.
Acquire the software from Tivoli. You can use wget or lynx to retrieve
the files from their web site
(or use the "Save Target As..." feature of your browser):
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/client/v5r1/
Start SMIT to install the software:
smitty install
Select "Install and Update Software", then "Install and Update from
LATEST Available Software".
When it prompts you for the "INPUT device / directory for software"
specify the directory in which
you saved the installation files. Proceed to install the software
("_all_latest")
cd /usr/tivoli/tsm/client/ba/bin
Create and edit the dsm.sys, dsm.opt, and inclexcl files for your
system. Sample files are linked.
At a minimum, you will have to edit dsm.sys and insert your node name.
Start dsmc by using the ./dsmc command. Enter the command "query
schedule" and you will be prompted
for your node's password. Enter your password and press enter. Once it
successfully displays the node's
backup schedule, enter the command "quit" to exit it. This saves your
node's password, so that backups
and other operations can happen automatically.
To start the TSM client on reboot, edit /etc/inittab and insert the line
(all one line):
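The inittab line itself did not survive in this copy; a commonly used
entry (the label "tsm" is arbitrary) looks like:
tsm::once:/usr/tivoli/tsm/client/ba/bin/dsmc schedule > /dev/null 2>&1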
Verify that the client has started and is working by checking the log
files in /usr/tivoli/tsm/client/ba/bin.
You can perform a manual backup to test your settings using the command:
/usr/tivoli/tsm/client/ba/bin/dsmc incremental
To upgrade the TSM client from 4.2.1 to 5.1 use the following procedure:
Obtain a copy of the software (use the links at the top of this page).
Kill the running copy of dsmc (a "ps -ef | grep dsmc" will show you what
is running. Kill the parent process).
Upgrade the TSM client packages using "smitty install". Select "Install
and Update Software",
then "Update Installed Software to Latest Level (Update All)". Specify
the directory in which
the software was downloaded.
Edit your dsm.sys file and ensure that the TCPServeraddress flag is set
to buckybackup2.doit.wisc.edu
OR buckybackup3.doit.wisc.edu (this just ensures future compatibility
with changes to the service).
This setting could be either server, depending on when you registered
your node.
Watch your logs to ensure that a backup happened. You can also invoke a
manual backup using
"dsmc incremental" from the command line.
So how to install:
zd77l06:/usr/tivoli/tsm/client/ba/bin>cat dsm.opt
SErvername ZTSM01
dateformat 4
compressalways no
followsymbolic yes
numberformat 5
subdir yes
timeformat 1
zd77l06:/usr/tivoli/tsm/client/ba/bin>cat dsm.sys
SErvername ZTSM01
COMMmethod TCPip
TCPPort 1500
TCPServeraddress cca-tsm01.ao.nl.abnamro.com
HTTPPort 1581
PASSWORDACCESS GENERATE
schedmode PROMPTED
nodename zd77l06
compression yes
SCHEDLogretention 7
ERRORLogretention 7
ERRORLogname /beheer/log/tsm/dsmerror.log
SCHEDLogname /beheer/log/tsm/dsmsched.log
If you need to exclude a filesystem in the backup run, you can edit
dsm.sys and put in an exclude statement
like in the following example:
SErvername ZTSM01
Exclude "/data/documentum/dmadmin/*"
COMMmethod TCPip
TCPPort 1500
TCPServeraddress cca-tsm01.ao.nl.abnamro.com
HTTPPort 1581
PASSWORDACCESS GENERATE
schedmode PROMPTED
nodename zd110l14
compression yes
SCHEDLogretention 7
ERRORLogretention 7
ERRORLogname /beheer/log/tsm/dsmerror.log
SCHEDLogname /beheer/log/tsm/dsmsched.log
81.2 Examples of the dsmc command:
==================================
To view schedules that are defined for your client node, enter:
# dsmc query schedule
# dsmc q ses
-- Example 1:
To restore a file /a/b to /c/d:
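(The command itself is missing here; presumably:)
# dsmc restore /a/b /c/d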
-- Example 2:
Restore the most recent backup version of the /home/monnett/h1.doc file,
even if the backup is inactive.
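(Presumably, using the -latest option:)
# dsmc restore /home/monnett/h1.doc -latest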
-- Example 3:
Display a list of active and inactive backup versions of files from
which you can select versions to restore.
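(Presumably, using the -pick and -inactive options; the path is
illustrative:)
# dsmc restore "/home/monnett/*" -pick -inactive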
-- Example 4:
Restore the files in the /home file system and all of its
subdirectories.
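(Presumably, using -subdir=yes:)
# dsmc restore /home/ -subdir=yes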
-- Example 5:
Restore all files in the /home/mydir directory to their state as of 1:00
PM on August 17, 2002.
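(Presumably, using the point-in-time options:)
# dsmc restore -pitdate=8/17/2002 -pittime=13:00:00 /home/mydir/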
-- Example 6:
Restore all files from the /home/projecta directory that end with .bak
to the /home/projectn/ directory.
# dsmc restore "/home/projecta/*.bak" /home/projectn/
#!/usr/bin/ksh
cd /data/documentum/dmadmin
dsmc restore /data/documentum/dmadmin/backup_3011/*
-- Example 7:
- Use of FROMDate=date
Specify a beginning date for filtering backup versions. Do not
restore files that were backed up before this date.
You can use this option with the TODATE option to create a time
window for backup versions. You can list files that were backed
up between two dates.
For example, to restore all the files that you backed up from
the /home/case directory from April 7, 1995 through April 14,
1995, enter:
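(The command is missing here; with the documented options it would
presumably be:)
# dsmc restore "/home/case/*" -fromdate=04/07/1995 -todate=04/14/1995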
The date must be in the format you select with the DATEFORMAT
option. For example, the date for date format 1 is mm/dd/yyyy,
where mm is month; dd is day; and yyyy is year. If you include
DATEFORMAT with the command, it must precede FROMDATE and
TODATE.
- Use of TODate=date
Specify an end date for filtering backup versions. ADSM does not
restore backup versions that were backed up after this date.
You can use this option with the FROMDATE option to create a
time window for backups. You can restore backup versions that
were backed up between two dates
The date must be in the format you select with the DATEFORMAT
option. For example, the date for date format 1 is mm/dd/yyyy,
where mm is month; dd is day; and yyyy is year. If you include
DATEFORMAT with the command, it must precede FROMDATE and
TODATE.
To start the client's Graphical User Interface, enter dsm. The TSM GUI
appears.
Example (translated from Dutch):
--------------------------------
Files that you put on the system and throw away again on the same day
cannot be retrieved with a restore!
-- An example
Example of a restore.
The example file:
$ rm testfile
$ ls -l testfile
ls: testfile: No such file or directory
pick> 1
Confirm with OK
pick> o
** Interrupted **
ANS1114I Waiting for mount of offline media.
$ ls -l testfile
-rw-rw-r-- 1 faq faq 269 Mar 18 16:12 testfile
Example (translated from German):
---------------------------------
...which lists all of your backed-up files. The option -inactive
additionally allows listing of all stored earlier versions of your
files. If, for example, under Unix you have saved several versions of
your letter konferenz97.tex in the directory /u/holo/briefe/1997
through successive changes, you get a list of all those files with:
The command:
Specify -replace=yes.
With the use of RMAN, TDP for Oracle allows you to perform the following
functions:
TDPO.OPT File
This feature provides a centralized place to define all the options
needed by RMAN for TDP for Oracle backup
and restore operations. This eliminates the need to specify environment
variables for each session,
thereby reducing the potential for human error. This also simplifies the
establishment of multiple sessions.
The Data Protection for Oracle options file, tdpo.opt, contains options
that determine the behavior and performance
of Data Protection for Oracle. The only environment variable Data
Protection for Oracle Version 5.2 recognizes
within an RMAN script is the fully qualified path name to the tdpo.opt
file. Therefore, some RMAN scripts may need
to be edited to use TDPO_OPTFILE=fully qualified path and file name of
options file variable in place of other
environment variables. For example:
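A typical RMAN channel allocation along these lines (the path to
tdpo.opt is illustrative):
allocate channel t1 type 'sbt_tape' parms
'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';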
Note 1:
-------
Make sure these conditions exist before installing Data Protection for
Oracle:
Attention: A root user must install the Tivoli Storage Manager API
before installing Data Protection
for Oracle on the workstation where the target database resides.
After Data Protection for Oracle is installed, you must perform the
following configuration tasks:
Note 2:
-------
For more information, please see the 'Using the Utilities' section in
the
Data Protection for Oracle User's Guide.
Note 3:
-------
thread
Q:
Any good step-by-step docs out there? I just need to get this thing
setup and working quickly.
Don't have time (unless it is my only choice of course) to filter
through several manuals to pick out
the key info... Help if you can - I surely would appreciate it
A:
3. Add a stanza in dsm.sys for the TDP, this should be the second or
third stanza since the first stanza is
for the client.
4. In the TDP installation directory, modify the tdpo.opt file - this is
the configuration file for the TDP.
This file is self explanatory
5. Execute tdpoconf showenv - you should get a response back from tsm.
7. Once you have gotten this far, in Oracle's home directory - create
the dsm.opt file and make sure it contains
only one line, the servername line of the newly created stanza. The
file needs to be owned by oracle.
8. If you are using tracing, the tdpo.opt file will identify the
location.
9. Configure RMAN
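For illustration, such a second stanza in dsm.sys could look like this
(reusing the server values from the earlier examples; the stanza and
node names are made up):

SErvername ZTSM01_ORA
   COMMmethod         TCPip
   TCPPort            1500
   TCPServeraddress   cca-tsm01.ao.nl.abnamro.com
   PASSWORDACCESS     generate
   nodename           zd77l06_ora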
Note 4:
-------
thread
Q:
I see this question has been asked several times in the list, but I fail
to see any answers on ADSM.ORG.
A:
Dale,
Did you check the basics of, as oracle, or your tdpo user:
Make sure the DSMI variables point to the right locations, then verify
those files are readable by your user.
If after verifying this, you might want to let us know what version of
oracle, tdpo and tsmc you have on this node.
A:
We had an issue with this and discovered that it was looking in the api
directory for the dsm.sys and not the ba/bin directory so we just put a
link
in api to bin and it worked.
A:
You may want to break the link to prevent TDP from using the INCLEXCL
file that's
normally in a dsm.sys file. If you don't, you'll generate errors. If
linked, and
commented out, your normal backups
won't have an INCLEXCL file, hence, you'll backup everything on your
client server
during your regular client backup.
Note 5:
-------
http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg24012732
Abstract
Data Protection for Oracle v5.3.3 refresh.
Download Description
Data Protection for Oracle v5.3.3 refresh.
These packages contains no license file. The customer must already have
a Paid version of the package
to obtain the license file.
Prerequisites
A Paid version of the Data Protection for Oracle package is required.
Installation Instructions
See the README.TDPO in the download directory.
Note 6:
-------
APAR status
Closed as documentation error.
Error description
When installing tivoli.tsm.client.api.64bit 5.3.0.0 on AIX,
tivoli.tsm.client.api.32bit 5.3.0.0 is required as a prerequisite
for the installation. The installation will fail if
tivoli.tsm.client.api.32bit 5.3.0.0 is not available for install.
tivoli.tsm.client.api.32bit 5.3.0.0 is needed because of
language enhancements in 5.3.
Local fix
Problem summary
****************************************************************
* USERS AFFECTED: AIX CLIENTS *
****************************************************************
* PROBLEM DESCRIPTION: API 32bit PREREQ for API 64bit not in *
* README. *
****************************************************************
* RECOMMENDATION: apply next available fix. *
****************************************************************
Problem conclusion
Add info to README files and database.
tivoli.tivguid Y
tivoli.tsm.books.en_US.client.htm Y
tivoli.tsm.books.en_US.client.pdf Y
tivoli.tsm.client.api.32bit Y
tivoli.tsm.client.api.64bit Y
tivoli.tsm.client.ba.32bit.base Y
tivoli.tsm.client.ba.32bit.common Y
tivoli.tsm.client.ba.32bit.image Y
tivoli.tsm.client.ba.32bit.nas Y
tivoli.tsm.client.ba.32bit.web Y
tivoli.tsm.client.oracle.aix.64bit Y
tivoli.tsm.client.oracle.books.htm Y
tivoli.tsm.client.oracle.books.pdf Y
tivoli.tsm.client.oracle.tools.aix.64bit
#! /bin/sh
# Copyright (c) 1989, Silicon Graphics, Inc.
#ident "$Revision: 1.1 $"
ECHO=echo      # the original SGI script inherited ECHO from its environment
state=$1
case $state in
'start')
        set `who -r`
        if [ "$8" != "0" ]
        then
                exit
        fi
        if [ -f /usr/tivoli/tsm/client/ba/bin/dsmcad ]; then
                # start the TSM client acceptor daemon in the background
                /usr/tivoli/tsm/client/ba/bin/dsmcad > /dev/null 2>&1 &
                if [ $? -eq 0 ]; then
                        $ECHO " done"
                else
                        $ECHO " failed"
                        exit 2
                fi
        else
                $ECHO " failed, no dsm installed"
                exit 3
        fi
        ;;
'stop')
        $ECHO "Stopping dsm schedule:"
        # note: this is the IRIX/Linux killall (kill by name);
        # on AIX, killall kills ALL processes, so use ps/kill there instead
        killall dsmcad
        ;;
esac
It is also possible now to start and stop dsmcad using the script. For
example :
/etc/init.d/dsmcad start
/etc/init.d/dsmcad stop
- To restart dsmcad (for example to refresh daemon after dsm.sys or
dsm.opt modification)
/etc/init.d/dsmcad restart
/etc/init.d/dsmcad status
-or-
ps -ef | grep dsmcad
Or use:
root@zd111l08:/etc#./rc.dsm stop
dsmcad and scheduler stopped
root@zd111l08:/etc#./rc.dsm start
UNIX
/opt/IBM/SCM/client
./jacclient status
HCVIN0033I The Tivoli Security Compliance Manager client is currently
running.
Problem
ANS1005E TCP/IP read error on socket = 6, errno = 73, reason: 'A
connection with a remote socket was reset
by that socket.'.
Cause
The same ANS1005E message with errno 10054 is well-documented, but very
little documentation exists for errno 73.
Solution
ANS1005E TCP/IP read error on socket = 6, errno = 73, reason: 'A
connection with a remote socket was reset
by that socket.'.
The errno 73 seen in the message above indicates that the connection was
reset by the peer, usually an indication
that the session was cancelled or terminated on the TSM Server. In all
likelihood these sessions were terminated
on the server because they were in an idle wait for a period of time
exceeding the idletimeout value on
the TSM Server. We see that the sessions successfully reconnected and no
further errors were seen.
Sessions sitting in an idle wait is not uncommon and is frequently seen
when backing up large amounts of data.
With multi-threaded clients, some sessions are responsible for querying
the server to identify which files
are eligible to be backed up (producer sessions) while the other
sessions are responsible for the actual transfer
of data (consumer sessions). It usually takes longer to backup files
across the network than it takes for a list
of eligible files to be generated. Once the producer sessions have
completed building lists of eligible files
they will sit idle while the consumer sessions actually back up these
files to the TSM Server. After some time,
the TSM Server will terminate the producer sessions because they have
been idle for a period of time longer
than the IDLETIMEOUT value specified on the server.
Many times this issue can be seen in firewall environment and has been
seen with network DNS problems and/or network
config problems. One of the most common is when a passive device
(router, switch, hub, etc.) is in between the
client & the server. If the port on the passive device is set to Auto-
Negotiate, it will automatically defer
to the active device (the NIC in the client) to set the connection
speed. If the NIC is also set to Auto-Negotiate
(default in most OS's) this often causes excessive delays and
interruptions in connectivity. This is because the NIC
is looking to the network appliance to set the connection speed and
vice-versa, so it takes some time before
the network device will find a suitable connection speed (not always
optimal, just suitable) and begin data transfer.
This repeats every time a data packet is sent across the network. While
the negotiating period is relatively short
by human standards (usually in the nanosecond range) it adds up over
time when trying to send a large amount
of data at a high speed and causes the connection to be broken. The best
workaround for that is to hard code
both the NIC and the network port for a specific setting. This is
usually 100Mb Full Duplex for a standard
CAT-5 copper connection, although older equipment may require
reconfiguration of 10/100 NICs to allow for that speed.
The other possible workaround for this issue is to estimate the file
transfer time and increase the IDLETIMEOUT
to a level higher than that time.
=========
82. LDAP:
=========
82.1: Introduction:
===================
-- Example 1:
-- Example 2:
CN=jdoe.OU=hrs.O=ADMN
or abbreviated to
jdoe.hrs.admn
-- Example 3:
cn=Oatmeal Deluxe,ou=recipes,dc=foobar,dc=com
Which means: In com, then in foobar, then in recipes, we can find the
object "Oatmeal Deluxe".
Note:
LDAP processes listen per default on port 389.
-- Programming:
-- ------------
-- Utilities:
-- ----------
The two LDIF files immediately following represent a directory entry for
a printer.
The string in the first line of each entry is the entry's name, called a
distinguished name.
The difference between the files is that the first describes the entry--
that is, the format is an index
of the information that the entry contains. The second, when used as
input to the command-line utility,
adds information about the speed of the printer.
Description
Modification
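The LDIF files themselves did not survive in this copy; a minimal
sketch of such a pair could look like this (all names are illustrative).
First the entry itself:

dn: cn=prn01,ou=printers,dc=example,dc=com
objectclass: top
objectclass: device
cn: prn01
description: HP LaserJet, room 12

And an LDIF modification, as input for ldapmodify, adding the speed of
the printer:

dn: cn=prn01,ou=printers,dc=example,dc=com
changetype: modify
add: description
description: speed 20 ppm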
Java example:
-------------
Listing 1 shows a simple JNDI program that will print out the cn
attributes of all the Person type objects
on your console.
Listing 1. SimpleLDAPClient.java
// imports and class wrapper restored; URL and credentials as in the
// original listing
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class SimpleLDAPClient {
    public static void main(String[] args) {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:10389/ou=system");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "uid=admin,ou=system");
        env.put(Context.SECURITY_CREDENTIALS, "secret");
        DirContext ctx = null;
        NamingEnumeration results = null;
        try {
            ctx = new InitialDirContext(env);
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            results = ctx.search("", "(objectclass=person)", controls);
            while (results.hasMore()) {
                SearchResult searchResult = (SearchResult) results.next();
                Attributes attributes = searchResult.getAttributes();
                Attribute attr = attributes.get("cn");
                String cn = (String) attr.get();
                System.out.println(" Person Common Name = " + cn);
            }
        } catch (NamingException e) {
            throw new RuntimeException(e);
        } finally {
            if (results != null) {
                try { results.close(); } catch (Exception e) { }
            }
            if (ctx != null) {
                try { ctx.close(); } catch (Exception e) { }
            }
        }
    }
}
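To try the listing, compile and run it against a reachable LDAP server
(the URL and credentials in the listing are examples):
# javac SimpleLDAPClient.java
# java SimpleLDAPClient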
VB.Net example:
---------------
Try
oSearcher.PropertiesToLoad.Add("uid")
oSearcher.PropertiesToLoad.Add("givenname")
oSearcher.PropertiesToLoad.Add("cn")
oResults = oSearcher.FindAll
Catch e As Exception
End Try
Return RetArray
End Function

A C++ example (fragment):

LDAPConnection lc("localhost");
try {
    lc.bind("cn=user,dc=example,dc=org","secret");
} catch (LDAPException e) {
    std::cerr << "Bind failed: " << e << std::endl;
}

Another VB.Net fragment (enabling a newly created user):

objChild.NativeObject.AccountDisabled = False
objChild.CommitChanges()
Console.WriteLine("Added user")
When users log in, the LDAP client sends a query to the LDAP server to
get the user and group information
from the centralized database. DB2® is a database used for storing the
user and group information.
The LDAP database stores and retrieves information based on a
hierarchical structure of entries,
each with its own distinguishing name, type, and attributes. The
attributes (properties) define
acceptable values for the entry. An LDAP database can store and maintain
entries for many users.
An LDAP security load module was implemented as from AIX Version 4.3.
This load module provides
user authentication and centralized user and group management functions
through the IBM SecureWay® Directory.
A user defined on an LDAP server can be configured to log in to an LDAP
client even if that user
is not defined locally. The AIX LDAP load module is fully integrated
with the AIX operating system
http://www.ibm.com/developerworks/aix/library/au-ldapconfg/index.html?
ca=drs-
The following file sets are required to configure IBM Directory Server:
AIX provides the mksecldap command to set up the IBM Directory servers
and clients to exploit the servers.
The mksecldap command performs the following tasks for the new server
setup:
The "ldap.client" file set contains the IBM Directory client libraries,
header files, and utilities.
You can use the mksecldap command to configure the AIX client against
the IBM Directory Server,
as in the following example:
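A sketch of such a client configuration (hostname, DN and password are
illustrative):
# mksecldap -c -h ldapserver1.example.com -a cn=admin -p adminpwd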
You must have the IBM Directory Server administrator DN and password to
configure the AIX client.
Once the AIX client is configured, the secldapclntd daemon starts
running. Once the AIX client is configured
against the IBM Directory Server, change the SYSTEM attribute in the
"/etc/security/user" file to "LDAP or compat"
to authenticate users against the AIX client system.
The LDAP stanza in /usr/lib/security/methods.cfg looks like:

LDAP:
        program = /usr/lib/security/LDAP
        program_64 = /usr/lib/security/LDAP64
ldapcfg utility:
----------------
The ldapcfg utility is a command-line tool that you can use to configure
IBM Tivoli Directory Server.
You can use ldapcfg instead of the Configuration Tool for the following
tasks:
where
adminDN is the administrator DN you want.
password is the password for the administrator DN.
Note:
Double byte character set (DBCS) characters in the password are not
supported.
For example:
Note:
Do not use single quotation marks (') to define DNs with spaces in them.
They are not interpreted correctly.
To accept the default administrator DN of cn=root and define a password,
type the following command
at a command prompt:
# ldapcfg -p password
where password is the password for the administrator DN.
For example:
# ldapcfg -p secret
When you configure the database, you must always specify a user ID and
password on the command line.
The instance name is, by default, the same as the user ID. The user ID
must already exist and must meet
certain requirements. If you want a different instance name you can
specify it using the -t option.
This name must also be an existing user ID that meets certain
requirements.
See Before you configure: creating the DB2 database owner and database
instance owner for information about
these requirements on both Windows and UNIX platforms.
Attention:
Before configuring the database, be sure that the environment variable
DB2COMM is not set.
Be sure to read this section before you use the ldapcfg command. Some
options (such as -f and -s) have changed.
Unpredictable results will occur if you use them incorrectly or as they
were used in previous releases.
The server must be stopped before you configure the database.
To configure a database, the following options are available:
-l location
Specifies the location of the DB2 database. For UNIX systems, this is a
directory name such as /home/ldapdb.
For Windows systems, this is a drive letter such as C:
-a id
Specifies the DB2 administrator ID.
-c
Creates a database in UTF-8 format. (The default, if you do not specify
this option, is to create a database
that is in the local code page.)
-i
Destroys any instance currently configured with IBM Tivoli Directory
Server. All databases associated with the
instance are also destroyed.
-w password
Specifies the DB2 administrator password.
Note:
The ldapcfg -w password command no longer changes the system password of
the database owner. It only updates
the ibmslapd.conf file. See Changing the DB2 administrator password for
information about using the -w option alone.
-d database
Specifies the DB2 database name.
-t dbinstance
Specifies the database instance. If you do not specify an instance, the
instance name is the same as the
DB2 administrator ID.
-o
Overwrites the database if one already exists. By default, the database
being overwritten is not deleted.
-r
Destroys any database currently configured with IBM Tivoli Directory
Server.
-f
Specifies the full path of a file to redirect output into. If used in
conjunction with the -q option,
only errors will be sent to the file.
-q
Runs in quiet mode. All output is suppressed except for errors.
-n
Runs in no prompt mode. All output is generated except for messages
requiring user interaction.
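Putting several of the options above together, a database configuration
command could look like this (all values illustrative):
# ldapcfg -a ldapdb2 -w secret -d ldapdb2 -l /home/ldapdb -c -n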
If you change the password for the DB2 administrator through the
operating system, you must also change it
using ldapcfg with the -w option. This changes the password in the
server configuration file. Similarly,
if you change the password for the DB2 administrator with the ldapcfg
command, you must also change it through
the operating system.
ldapcfg -w newpassword
Note:
Double byte character set (DBCS) characters in the password are not
supported.
Notes:
------
Note 1:
-------
http://www-03.ibm.com/systems/p/os/aix/whitepapers/ldap_client.html
AIX first implemented an LDAP security load module in version 4.3. The
implementation worked well in a
uniform AIX environment. However, users have found it hard to configure
AIX systems to work with third party
LDAP servers. This shortcoming is primarily the result of the
proprietary schema used by AIX.
Since AIX 5L version 5.2, AIX supports the schema defined in RFC 2307
which is widely used among IBM peers
and which is becoming the industry standard for network entities. The
schema defines attributes and object classes
for such entities as users, groups, networks, services, hosts,
protocols, rpc, etc.
The RFC 2307 schema is often referred to as the nisSchema. Both of these
terms are used interchangeably
in this paper.
Client support for the nisSchema in AIX is part of Configurable Schema
Support Mechanism (CSSM),
which is a bigger effort to support arbitrary schema. With CSSM, AIX
systems can be configured to support
LDAP directory servers using any schema. At present, CSSM is implemented
for users and groups only.
Note 2:
-------
http://www.redbooks.ibm.com/abstracts/sg247165.html
Note 3:
-------
thread
Q:
All-
>
> Having a problem installing a DB2 client on a machine running AIX
> version 5.0. Client appeared to install one time succesfully, then
> was uninstalled and a reinstall was attempted. For some reasons, it
> does not complete the reinstall. See the status report from the GUI
> installer at the end of this note. Errors are towards the bottom.
> Everything installed in /usr/opt for DB2 but the sqllib folder that is
> supposed to be created in the home directory of the instance ownder is
> not installed (in our case the instance ownder is db2inst1). Have
> tried installing DB2 with the user db2inst1 already existing and not.
> Same error seems to appear. The key errors from the output below
> appear to be:
>
> ERROR:Could not switch current DB2INSTANCE to "db2inst1". The return
> code is
> "-2029059916".
> ERROR:DBI1122E Instance db2inst1 cannot be updated.[/color]
A:
Most likely, when you uninstalled, you removed the ~db2inst1/sqllib via
rm -rf, rather than via db2idrop. There are crumbs still sticking
around in your system.
A:
Note 4:
-------
Technote:
http://www-1.ibm.com/support/docview.wss?
rs=71&context=SSEPGG&q1=loopback+extshm&uid=swg21009742&loc=en_US&cs=utf
-8&lang=en
DB2 issues SQL1224N and WebSphere Application Server (WAS) admin server
fails with StaleConnectionException
when attempting more than 10 local concurrent DB2 connections from a
single process.
Problem
On AIX 4.3.3 or later, DB2 will issue SQL1224N and WebSphere
administration server will fail with
StaleConnectionException when attempting more than 10 local concurrent
DB2 connections from a single process.
JDK 1.1.8 allows a maximum number of 10 local concurrent DB2
connections. JDK 1.2.2 allows a maximum of 4
local connections. JDK 1.3.0 allows a maximum of 2 local connections.
Solution
Symptoms
DB2 errors:
at java.lang.Throwable.<init>(Throwable.java:96)
at java.lang.Exception.<init>(Exception.java:44)
at java.sql.SQLException.<init>(SQLException.java:45)
at COM.ibm.db2.jdbc.DB2Exception.<init>(DB2Exception.java:93)
at COM.ibm.db2.jdbc.app.SQLExceptionGenerator.throw_SQLException(SQLExceptionGenerator.java:164)
at COM.ibm.db2.jdbc.app.SQLExceptionGenerator.check_return_code(SQLExceptionGenerator.java:402)
at COM.ibm.db2.jdbc.app.DB2Connection.connect(DB2Connection.java(Compiled Code))
at COM.ibm.db2.jdbc.app.DB2Connection.<init>(DB2Connection.java(Compiled Code))
at COM.ibm.db2.jdbc.app.DB2Driver.connect(DB2Driver.java(Compiled Code))
at java.sql.DriverManager.getConnection(DriverManager.java(Compiled Code))
at java.sql.DriverManager.getConnection(DriverManager.java:183)
at newtest.connectDM(newtest.java:35)
at newtest.run(newtest.java:109)
at java.lang.Thread.run(Thread.java:498)
Possible cause
The error return code 18 indicates that there are too many files open
and therefore, no available
segment registers. The Websphere application has reached AIX's limit of
10 shared memory segments per process,
and so DIA9999E is generated.
Action
DB2 UDB Version 7.2 (DB2 UDB Version 7.1 FixPak 3) or later
The support of EXTSHM has been added to V7.2 (V7.1 Fixpak 3). By
default, AIX does not permit 32-bit applications
to attach to more than 11 shared memory segments per process, of which a
maximum of 10 can be used for
local DB2 connections. To use EXTSHM with DB2, do the following:
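The steps themselves are missing in this copy; the usual way to enable
EXTSHM for DB2 (per the Release Notes referenced below) is, as the
instance owner:
$ export EXTSHM=ON
$ db2set DB2ENVLIST=EXTSHM
$ db2start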
The above information has been documented in the DB2 UDB Release Notes
for Version 7.2 / Version V7.1 FixPak 3, page 366.
You can get it from:
ftp://ftp.software.ibm.com/ps/products/db2/info/vr7/pdf/letter/db2ire71.pdf
Note 5:
-------
http://publib.boulder.ibm.com/infocenter/wpdoc/v510/index.jsp?
topic=/com.ibm.wp.ent.doc_5.1/wps/tbl_adm.html
When modifying user information via WebSphere Portal, if you receive the
error Backend storage system failed.
Please try again later. or the user attributes are not updated in LDAP,
it might mean that the default
tuning parameters for use with DB2 and IBM Tivoli Directory Server need
to be adjusted.
APP_CTL_HEAP_SZ 128
APPLHEAP_SZ 128
The parameters above are too small for IBM Tivoli Directory Server and
WebSphere Portal on AIX with 2000 user entries.
su - ldapdb2
db2 -c update db cfg for ldap using APP_CTL_HEAP_SZ 1024
db2 -c update db cfg for ldap using APPLHEAP_SZ 1024
The suite of OpenLDAP libraries and tools is spread out over the
following packages:
The slapd daemon is the standalone LDAP server while the slurpd daemon
is used to synchronize changes from
one LDAP server to other LDAP servers on the network. The slurpd daemon
is only necessary when dealing
with multiple LDAP servers.
Warning
Be sure to stop slapd by issuing "/usr/sbin/service slapd stop" before
using slapadd, slapcat or slapindex.
Otherwise, the consistency of the LDAP directory is at risk.
/lib/security/pam_ldap.so
Note
If the nss_ldap package is installed, it will create a file named
/etc/ldap.conf. This file is used by the
PAM and NSS modules supplied by the nss_ldap package. See the Section
called Configuring Your System to
Authenticate Using OpenLDAP for more information about this
configuration file.
-- slapd.conf
In order to use the slapd LDAP server, you will need to modify its
configuration file,
/etc/openldap/slapd.conf. You must edit this file to make it specific
to your domain and server.
The suffix line names the domain for which the LDAP server will provide
information. The suffix line should be
changed from:
suffix "dc=your-domain,dc=com"
to reflect your own domain, for example:
suffix "dc=example,dc=com"
The rootdn entry is the Distinguished Name (DN) for a user who is
unrestricted by access controls or
administrative limit parameters set for operations on the LDAP
directory. The rootdn user can be thought of as
the root user for the LDAP directory. In the configuration file, change
the rootdn line from its default value
to something like the example below:
rootdn "cn=root,dc=example,dc=com"
rootpw {SSHA}vv2y+i6V6esazrIv70xSSnNAJE18bb2u
In the rootpw example, you are using an encrypted root password, which
is a much better idea than leaving a
plain text root password in the slapd.conf file. To make this encrypted
string, type the following command:
# slappasswd
You will be prompted to type and then re-type a password. The program
prints the resulting encrypted password
to the terminal.
Warning
LDAP passwords, including the rootpw directive specified in
/etc/openldap/slapd.conf, are sent over the network
in plain text unless you enable TLS encryption.
For added security, the rootpw directive should only be used if the
initial configuration and population
of the LDAP directory occurs over a network. After the task is
completed, it is best to comment out the rootpw
directive by preceding it with a pound sign (#).
Tip
If you are using the slapadd command-line tool locally to populate the
LDAP directory, using the rootpw directive
is not necessary.
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema
include /etc/openldap/schema/rfc822-MailMember.schema
include /etc/openldap/schema/autofs.schema
include /etc/openldap/schema/kerberosobject.schema
Caution
You should not modify any of the schema items defined in the schema
files installed by OpenLDAP.
include /etc/openldap/schema/local.schema
Next, go about defining your new attribute types and object classes
within the local.schema file.
Many organizations use existing attribute types and object classes from
the schema files installed by default
and modify them for use in the local.schema file. This can help you to
learn the schema syntax while meeting
the immediate needs of your organization.
/sbin/service ldap start
After you have configured LDAP correctly, you can use chkconfig, ntsysv,
or Services Configuration Tool
to configure LDAP to start at boot time. For more information about
configuring services,
see the chapter titled Controlling Access to Services in the Official
Red Hat Linux Customization Guide.
=========================
83. Introduction SAMBA:
=========================
83.1 Introduction:
==================
The SMB protocol can be installed on unix as well, making it "look" like
a Windows Server
as far as Windows clients are concerned, when they want to use a server
for file and print services.
To make this a reality, you can install "Samba" on your unix machine.
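As a minimal sketch, an smb.conf along the following lines would share
one directory (all names are illustrative):

[global]
   workgroup = MYGROUP
   security = user

[public]
   path = /export/public
   read only = no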
-- Authentication:
For example, on a unix machine, a user "can log on locally", using the
local password file (in reality,
this could be more complex), or be authenticated "remotely" by "NIS"
(Network Information System),
or be authenticated by an LDAP server, etc.
In the next sections, we take a look on how samba can be used on HP-UX,
Solaris, RedHat, and AIX.
=========================
84. AIX and SNA:
=========================
Note 1:
-------
SNA defines a set of rules that systems use to communicate. These rules
define the layout of the data
that flows between the systems and the action the systems take when they
receive the data.
SNA does not specify how a system implements the rules. A fundamental
objective of SNA is to allow
systems that have very different internal hardware and software designs
to communicate.
The only requirement is that the externals meet the rules of the
architecture.
When defining a CICS region, you must also identify the SNA
synchronization level required.
CICS supports all three synchronization levels defined by SNA: none
(level 0), confirm (level 1), and syncpoint (level 2).
PU type 2.1 nodes may have support for Advanced Peer-to-Peer Networking
(APPN). This support enables a node
to search for an LU in the network, rather than requiring a remote LU's
location to be preconfigured locally.
There are two types of APPN nodes: end nodes and network nodes. An end
node can receive a search request
for an LU and respond, indicating whether the LU is local to the node or
not. A network node can issue search
requests, as well as respond to them, and maintains a dynamic database
that contains the results of
the search requests. Support for APPN can greatly reduce the maintenance
work in an SNA network, especially
if the network is large or dynamic. Communications Server for AIX
supports APPN.
Note 2:
-------
-Reaps the benefits of IBM's years of experience with SNA, TCP/IP, and
network computing
-Enables customers and Business Partners to choose applications based on
their business needs,
not their network infrastructure
-Provides an excellent offering for multi-protocol networking
environments with Enterprise Extender,
enhanced TN3270E Server, Telnet Redirector, and Remote API
client/server support
-Offers use of comprehensive Secure Sockets Layer (SSL) data encryption,
and SSL client and server
authentication with the TN3270E Server, the Telnet Redirector and the
Remote API Client/Server using
HTTPS connections for access to SNA networks
-Offers the ideal choice for customers who need more secure, robust
Telnet and Remote API networking environments
-Includes full implementation of APPN (network node and end node), HPR,
and DLUR, along with integrated
gateway capabilities, positioning itself as a participant in a host
(hierarchical) or peer-to-peer distributed
network environment
-Operating systems supported: AIX
Note 3:
-------
Introduction to SNA
Summary: In the early 1970s, IBM discovered that large customers were
reluctant to trust unreliable
communications networks to properly automate important transactions. In
response, IBM developed
Systems Network Architecture (SNA). "Anything that can go wrong will go
wrong," and SNA may be unique
in trying to identify literally everything that could possibly go wrong
in order to specify the proper response.
Certain types of expected errors (such as a phone line or modem failure)
are handled automatically.
Other errors (software problems, configuration tables, etc.) are
isolated, logged, and reported
to the central technical staff for analysis and response. This SNA
design worked well as long as communications
equipment was formally installed by a professional staff. It became less
useful in environments when any PC
simply plugs in and joins the LAN. Two forms of SNA developed: Subareas
(SNA Classic) managed by mainframes,
and APPN (New SNA) based on networks of minicomputers.
The mainframe runs an IBM product called VTAM, which controls the
network. Although individual messages
will flow from one NCP to another over a phone line, VTAM maintains a
table of all the machines and
phone links in the network. It selects the routes and the alternate
paths that messages can take between
different NCP nodes.
In the SNA network, a client and server cannot exchange messages unless
they first establish a session.
In a Subarea network, the VTAM program on the mainframe gets involved in
creating every session.
Furthermore, there are control blocks describing the session in the NCP
to which the client talks
and the NCP to which the server talks. Intermediate NCPs have no control
blocks for the session.
In APPN SNA, there are control blocks for the session in all of the
intermediate nodes through which
the message passes.
Most of APPN is the set of queries and replies that manage names,
routes, and sessions. Like the rest of SNA,
it is a fairly complicated and exhaustively documented body of code.
The native programming interface for modern SNA networks is the Common
Programming Interface for Communications
(CPIC). This provides a common set of subroutines, services, and return
codes for programs written in COBOL,
C, or REXX. It is documented in the IBM paper publication SC26-4399, but
it is also widely available in
softcopy on CD-ROM.
The traditional SNA network has been installed and managed by a central
technical staff in a large corporation.
If the network goes down, a company like Aetna Insurance is temporarily
out of business. TCP/IP is designed to be
casual about errors and to simply discard undeliverable messages.
Note 4:
-------
--------------------------------------------------------------------------------
This command starts SNA, the node, and the main SNA process. It also
starts the links that listen
for other machines calling to activate links if the activation parameter
on the configuration of the DLC,
port, and link station is set to start the links at startup time.
If you have defined a link that calls another machine, you can start
this link by using the following command:
+------------------------------------------------------------------------------+
|                             Start an SNA Session                             |
|                                                                              |
| Type or select values in entry fields.                                       |
| Press Enter AFTER making all desired changes.                                |
|                                                                              |
|                                                      [Entry Fields]          |
| Enter one of:                                                                |
|   Local LU alias                                     [OPENCICS]            + |
|   Local LU name                                      []                    + |
|                                                                              |
| Enter one of:                                                                |
|   Partner LU alias                                   [CICSESA]             + |
|   Fully-qualified Partner LU name                    []                    + |
|                                                                              |
| * Mode name                                          [CICSISC0]            + |
|   Session polarity                                    POL_EITHER           + |
|   CNOS permitted?                                     YES                  + |
|                                                                              |
| F1=Help            F2=Refresh          F3=Cancel           F4=List           |
| F5=Reset           F6=Command          F7=Edit             F8=Image          |
| F9=Shell           F10=Exit            Enter=Do                              |
+------------------------------------------------------------------------------+
If the command returns an error indicating that no sessions can be
activated between LUs, one of the
following problems exists:
Note 5:
-------
Problem(Abstract)
Versions of IBM's SNA Services for AIX and Communications Server
The listed AIX levels are the minimum levels required for CS/AIX to
function.
The only currently supported version is 6.3 on AIX 5.2 and higher.
EOS = End Of Service: No defect work will be performed after this date.
Note 1:
-------
http://www.codecoffee.com/tipsforlinux/articles/036.html
In this article, Sam Chessman explains the use of the dd command with a
lot of useful examples. This article is not aimed at absolute beginners.
Once you are familiar with the basics of Linux, you would be in a better
position to use the dd command.
The ' dd ' command is one of the original Unix utilities and should be
in everyone's tool box. It can strip headers, extract parts of
binary files and write into the middle of floppy disks; it is used by
the Linux kernel Makefiles to make boot images.
It can be used to copy and convert magnetic tape formats, convert
between ASCII and EBCDIC, swap bytes, and force to upper and lowercase.
For blocked I/O, the dd command has no competition in the standard tool
set. One could write a custom utility to do specific I/O or
formatting but, as dd is already available almost everywhere, it makes
sense to use it.
Like most well-behaved commands, dd reads from its standard input and
writes to its standard output, unless a command line specification
has been given. This allows dd to be used in pipes, and remotely with
the rsh remote shell command.
Unlike most commands, dd uses a keyword=value format for its parameters.
This was reputedly modeled after IBM System/360 JCL,
which had an elaborate DD 'Dataset Definition' specification for I/O
devices. A complete listing of all keywords is available from GNU dd
with
$ dd --help
Example 1
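The dd command this example discusses did not survive in this copy;
from the byte arithmetic below it would presumably be:
$ dd bs=2x80x18b if=/dev/fd0 of=/tmp/floppy.image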
The 18b specifies 18 sectors of 512 bytes, the 2x multiplies the sector
size by the number of heads, and the 80x is for the cylinders--
a total of 1474560 bytes. This issues a single 1474560-byte read request
to /dev/fd0 and a single 1474560 write request to
/tmp/floppy.image, whereas a corresponding cp command
cp /dev/fd0 /tmp/floppy.image
issues 360 reads and writes of 4096 bytes. While this may seem
insignificant on a 1.44MB file, when larger amounts of data are
involved,
reducing the number of system calls and improving performance can be
significant.
This example also shows the factor capability in the GNU dd number
specification. This has been around since before the Programmers Work
Bench and,
while not documented in the GNU dd man page, is present in the source
and works just fine, thank you.
Example 2
The original need for dd came with the 1/2" tapes used to exchange data
with other systems and boot and install Unix on the PDP/11.
Those days are gone, but the 9-track format lives. To access the
venerable 9-track, 1/2" tape, dd is superior. With modern SCSI tape
devices,
blocking and unblocking are no longer a necessity, as the hardware reads
and writes 512-byte data blocks.
However, the 9-track 1/2" tape format allows for variable length
blocking and can be impossible to read with the cp command. The dd
command allows
for the exact specification of input and output block sizes, and can
even read variable length block sizes, by specifying an input buffer
size larger
than any of the blocks on the tape. Short blocks are read, and dd
happily copies those to the output file without complaint, simply
reporting on the
number of complete and short blocks encountered.
Then there are the EBCDIC datasets transferred from such systems as MVS,
which are almost always 80-character blank-padded Hollerith Card Images!
No problem for dd, which will convert these to newline-terminated
variable record length ASCII. Making the format is just as easy and dd
again
is the right tool for the job.
Notice the output record count is smaller than the input record count.
This is due to the padding spaces eliminated from the output file and
replaced with newline characters.
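Presumably a conversion along these lines (device name illustrative;
conv=ascii converts EBCDIC to ASCII and, with cbs=80, unblocks the card
images into newline-terminated records):
$ dd if=/dev/rmt0 of=file.ascii cbs=80 conv=ascii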
Example 3
The dd runs on the SGI and swaps the bytes before writing to the tar
command running on the local host.
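Presumably something along these lines (host and device names are
illustrative; conv=swab swaps each pair of bytes):
$ rsh sgi dd if=/dev/tape conv=swab | tar xvf -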
Example 4
Murphy's Law was postulated long before digital computers, but it seems
it was specifically targeted for them.
When you need to read a floppy or tape, it is the only copy in the
universe and you have a deadline past due, that is when you will have a
bad spot
on the magnetic media, and your data will be unreadable. To the rescue
comes dd, which can read all the good data around the bad spot and
continue
after the error is encountered. Sometimes this is all that is needed to
recover the important data.
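Presumably using conv=noerror (device and file names illustrative;
sync pads short reads so the offsets stay aligned):
$ dd if=/dev/fd0 of=/tmp/rescue.image conv=noerror,sync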
Example 5
The Linux kernel Makefiles use dd to build the boot image. In the Alpha
Makefile /usr/src/linux/arch/alpha/boot/Makefile,
the srmboot target issues the command:
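The command did not survive in this copy; from the description below it
would presumably be along these lines:
dd if=bootimage of=$(BOOTDEV) bs=512 skip=1 seek=1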
This skips the first 512 bytes of the input bootimage file (skip=1) and
writes starting at the second sector of the $(BOOTDEV) device (seek=1).
A typical use of dd is to skip executable headers and begin writing in
the middle of a device, skipping volume and partition data.
As this can cause your disk to lose file system data, please test and
use these applications with care.
Note:
-------
Jane's web browser then decrypts the greeting with the bank's public
key. If the decrypted greeting matches the original greeting
sent by the browser, then Jane's browser can be sure it is really
talking to the owner of the private key -
because only the holder of the private key can encrypt a message in such
a way that the corresponding public key will decrypt it.
When the bank's web server sends its certificate to Jane's browser,
Jane's browser decrypts it with the public key of the
certificate authority. If the certificate is fake, the decryption
results in garbage. If the certificate is valid, out pops
the bank's public key, along with the identifying statement. And if that
statement doesn't include, among other information,
the same hostname that Jane connected to, Jane receives an appropriate
warning message and decides not to continue the connection.
Now, let's return to Bob. Can he substitute himself convincingly for the
bank? No, he can't, because he doesn't have the certificate authority's
private key. That means he can't sign a certificate claiming that he is
the bank.
Now that Jane's browser is thoroughly convinced that the bank is what it
appears to be, the conversation can continue.
-- certlist
Purpose
certlist lists the contents of one or more certificates.
Syntax
certlist [-c] [-a attr [attr....] ]tag [username]
Description
The certlist command lists the contents of one or more certificates.
Using the -c option causes the output to be formatted
as colon-separated data with the attribute names associated with each
field on the previous line as follows:
user:
attribute1=value
attribute2=value
attribute3=value
When neither of these command line options is selected, the attributes
are output as attribute=value pairs.
Flags
-c Displays the output in colon-separated records.
-f Displays the output in stanzas.
-a attr Selects one or more attributes to be displayed.
===================================================
86. Kernel parameters HP-UX, Solaris, Linux for MQ:
===================================================
-------
Note:
-------
Article:
HP-UX
Kernel configuration
WebSphere® MQ uses semaphores and shared memory. It is possible,
therefore, that the default kernel configuration is not adequate.
Note:
On platforms earlier than HP-UX 11i v1.6 (11.22), if you intended to run
a high number of concurrent connections
to WebSphere MQ, you were required to configure the number of kernel
timers (CALLOUTS) by altering the NCALLOUT kernel parameter.
On HP-UX 11i v1.6 (11.22) platforms or later, the NCALLOUT parameter is
obsolete as the kernel automatically adjusts the data structures.
Semaphore and swap usage does not vary significantly with message rate
or message persistence.
WebSphere MQ queue managers are generally independent of each other.
Therefore system tunable kernel parameters,
for example shmmni, semmni, semmns, and semmnu need to allow for the
number of queue managers in the system.
See the HP-UX documentation for information about changing these values.
shmmax 536870912
shmseg 1024
shmmni 1024
semaem 16384
semvmx 32767
semmns 16384
semmni 1024 (semmni < semmns)
semmnu 16384
semume 256
max_thread_proc 66
maxfiles 10000
maxfiles_lim 10000
nfile 10000
Note: For HP-UX 11.23 (11i V2) and later operating systems, the tunable
kernel parameters: shmem, sema, semmap, and maxusers, are obsolete.
This applies to the Itanium and PA-RISC platforms.
You must restart the system once you have made any changes to the
tunable kernel parameters.
If other software on the same machine recommends higher values, then the
operation of WebSphere MQ will not
be adversely affected if those higher values are used.
For the full documentation for these parameters see the HP-UX product
documentation.
To apply the settings to an HP-UX 11i system which has the System
Administration Manager (SAM) utility,
you can use SAM to achieve the following steps:
Select and alter the parameters
Process the new kernel
Apply the changes and restart the system
It is possible that other releases of HP-UX provide different facilities
to set the tunable kernel parameters. If so, then please
consult your HP-UX product documentation for the relevant information.
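For example, on HP-UX 11i v2 and later the kctune command can be used to
inspect and set a tunable from the command line (a sketch; the value shown
is just the one recommended above):
# kctune shmmax               (display the current value)
# kctune shmmax=536870912     (set a new value; some tunables need a reboot)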
ulimit -Ha
ulimit -Sa
Amongst the console output you should see:
data(kbytes) 1048576
stack(kbytes) 8192
If lower numbers are returned, then a ulimit command has been issued in
the current shell to lower the limits.
You should consult with your system administrator to resolve the issue.
-------
Note:
-------
Article:
You must change the default resource limits for each zone WebSphere MQ
will be installed in.
To set new default limits for all users in the mqm group, set up a
project for the mqm group in each zone.
To find out if you already have a project for the mqm group, log in as
root and enter the following command:
projects -l
If you do not already have a group.mqm project defined, enter a projadd
command (see the sketch below) that sets the following resource controls:
process.max-file-descriptor=(basic,10000,deny)
project.max-sem-ids=(priv,1024,deny)
project.max-shm-ids=(priv,1024,deny)
project.max-shm-memory=(priv,4294967296,deny)
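A minimal sketch of such a projadd command (the comment text is an
assumption; adjust the values to your own needs):
# projadd -c "WebSphere MQ settings" \
    -K "process.max-file-descriptor=(basic,10000,deny)" \
    -K "project.max-sem-ids=(priv,1024,deny)" \
    -K "project.max-shm-ids=(priv,1024,deny)" \
    -K "project.max-shm-memory=(priv,4294967296,deny)" group.mqm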
If you need to change any of these values, enter the following command:
projmod -s -K "process.max-file-descriptor=(basic,10000,deny)" \
        -K "project.max-shm-memory=(priv,4GB,deny)" \
        -K "project.max-shm-ids=(priv,1024,deny)" \
        -K "project.max-sem-ids=(priv,1024,deny)" group.mqm
Note that you can omit any attributes from this command that are already
correct.
For example, to change only the number of file descriptors, enter the
following command:
projmod -s -K "process.max-file-descriptor=(basic,10000,deny)" group.mqm
(To set only the limits for starting the queue manager under the mqm
user,
login as mqm and enter the command projects. The first listed project is
likely to be default,
and so you can use default instead of group.mqm, with the projmod
command.)
You can find out what the file descriptor limits for the current project
are, by compiling and running the following program:
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>

/* Repeatedly opens ./tryfd (the file must exist) without closing it,
 * until open() fails; the last fd printed before -1 shows the limit. */
int main(void)
{
    int fd;
    for (;;) {
        fd = open("./tryfd", O_RDONLY);
        printf("fd is %d\n", fd);
        if (fd == -1)
            break;
    }
    return 0;
}
To ensure that the attributes for the project group.mqm are used by a
user session when running Websphere MQ,
make sure that the primary group of that user ID is mqm. In the above
examples, the group.mqm project ID will be used.
For further information on how projects are associated with user
sessions, see Sun's System Administration Guide:
Solaris Containers-Resource Management and Solaris Zones for your
release of Solaris.
As the root user, load the relevant kernel modules into the running
system by typing the following commands:
modload -p sys/msgsys
modload -p sys/shmsys
modload -p sys/semsys
sysdef
Check that the following parameters are set to the minimum values
required by WebSphere MQ, or higher.
The minimum values required by WebSphere MQ are documented in the tables
below.
To change any parameters that are lower than the minimum value required
by WebSphere MQ, edit your
/etc/system file to include the relevant lines from the following list:
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=1024
set semsys:seminfo_semmni=1024
set semsys:seminfo_semaem=16384
set semsys:seminfo_semvmx=32767
set semsys:seminfo_semmns=16384
set semsys:seminfo_semmsl=100
set semsys:seminfo_semopm=100
set semsys:seminfo_semmnu=16384
set semsys:seminfo_semume=256
set shmsys:shminfo_shmseg=1024
set rlim_fd_cur=10000
set rlim_fd_max=10000
Note:
These values are suitable for running WebSphere MQ; other products on
the system might require higher values.
Do not change the value of shmmin from the system default value.
Semaphore and swap usage does not vary significantly with message rate
or persistence.
WebSphere MQ queue managers are generally independent of each other.
Therefore system kernel parameters,
for example shmmni, semmni, semmns, and semmnu need to allow for the
number of queue managers in the system.
After saving the /etc/system file, you must reboot your system.
-------
Note:
-------
Article:
Kernel configuration
WebSphere® MQ makes use of System V IPC resources, in particular shared
memory and semaphores. The default configuration
of these resources, supplied with your installation, is probably
adequate for WebSphere MQ but if you have
a large number of queues or connected applications, you might need to
increase this configuration.
For example, to view the maximum size of a shared memory segment that
can be created enter:
cat /proc/sys/kernel/shmmax
To view the maximum number of semaphores and semaphore sets which can be
created enter:
cat /proc/sys/kernel/sem
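To raise such a value at runtime you would typically use sysctl (a sketch;
the value is illustrative only):
# sysctl -w kernel.shmmax=268435456
To make the change persistent across reboots, add the corresponding
"kernel.shmmax = 268435456" line to /etc/sysctl.conf and run "sysctl -p".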
------
Note:
------
MQ Processes: List 1:
=====================
MQSERIES PROCESSES BY PLATFORM
PLATFORM = AIX
ProcName Process Function
amqhasmx logger
amqharmx log formatter, used only if the queue manager has linear
logging selected
amqzllp0 checkpoint processor
amqzlaa0 queue manager agent(s)
amqzxma0 processing controller
runmqsc MQ Command interface
amqpcsea PCF command processor
amqcrsta Any remotely started channel over TCP/IP - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
amqcrs6a Any remotely started channel over LU62/SNA - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
runmqchl Any locally started channel over any protocol - Could be
SENDER,SERVER,CLUSSDR,REQUESTER
runmqlsr listener process
runmqchi channel initiator
PLATFORM = AS/400
ProcName Process Function
AMQHIXK4 Storage Manager (Housekeeper)
AMQMCPRA Data Store (Object Cache)
AMQCLMAA Listener
AMQALMP4 Check Point Process
AMQRMCLA Sender channel
AMQPCSVA PCF command processor
AMQRIMNA Channel initiator (trigger monitor to start channel)
AMQIQES4 Quiesce (forces user logoffs - for upgrades)
AMQIQEJ4 Quiesce (without user logoffs - for daily use if
desired)
AMQCRSTA Any remotely started channel over TCP/IP - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
AMQCRS6A Any remotely started channel over LU62/SNA - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
PLATFORM = HP/UX
ProcName Process Function
amqhasmx logger
amqharmx log formatter, used only if the queue manager has linear
logging selected
amqzllp0 checkpoint processor
amqzlaa0 queue manager agents
amqzxma0 processing controller
runmqsc MQ Command interface
amqpcsea PCF command processor
amqcrsta Any remotely started channel over TCP/IP - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
amqcrs6a Any remotely started channel over LU62/SNA - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
runmqchl Any locally started channel over any protocol - Could be
SENDER,SERVER,CLUSSDR,REQUESTER
runmqlsr listener process
runmqchi channel initiator
PLATFORM = OS2
ProcName Process Function
AMQHASM2.EXE The logger
AMQHARM2.EXE Log formatter (LINEAR logs only)
AMQZLLP0.EXE Checkpoint process
AMQZLAA0.EXE LQM agents
AMQZXMA0.EXE Execution controller
AMQXSSV2.EXE Shared memory servers
RUNMQSC.EXE MQSeries Command processor
AMQPCSEA.EXE PCF command processor
AMQCRSTA.EXE Any remotely started channel over TCP/IP - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
AMQCRS6A.EXE Any remotely started channel over LU62/SNA - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
RUNMQCHL.EXE Any locally started channel over any protocol - Could be
SENDER,SERVER,CLUSSDR,REQUESTER
RUNMQLSR LISTENER PROCESS
RUNMQCHI CHANNEL INITIATOR
PLATFORM = SOLARIS
ProcName Process Function
amqhasmx logger
amqharmx log formatter, used only if the queue manager has linear
logging selected
amqzllp0 checkpoint processor
amqzlaa0 queue manager agents
amqzxma0 processing controller
amqcrsta Any remotely started channel over TCP/IP - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
amqcrs6a Any remotely started channel over LU62/SNA - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
runmqchl Any locally started channel over any protocol - Could be
SENDER,SERVER,CLUSSDR,REQUESTER
runmqlsr listener process
runmqchi channel initiator
runmqsc MQ Command interface
amqpcsea PCF command processor
PLATFORM = Windows/NT
ProcName Process Function
AMQHASMN.EXE The logger
AMQHARMN.EXE Log formatter (LINEAR logs only)
AMQZLLP0.EXE Checkpoint process
AMQZLAA0.EXE LQM agents
AMQZTRCN.EXE Trace
AMQZXMA0.EXE Execution controller
AMQXSSVN.EXE Shared memory servers
AMQCRSTA.EXE Any remotely started channel over TCP/IP - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
AMQCRS6A.EXE Any remotely started channel over LU62/SNA - Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
RUNMQCHL.EXE Any locally started channel over any protocol - Could be
SENDER,SERVER,CLUSSDR,REQUESTER
RUNMQLSR LISTENER PROCESS
RUNMQCHI CHANNEL INITIATOR
RUNMQSC.EXE MQSeries Command processor
AMQPCSEA.EXE PCF command processor
AMQSCM.EXE Service Control Manager
MQ Processes: List 2:
=====================
Windows/NT
AMQHASMN.EXE - The logger
AMQHARMN.EXE - Log formatter (LINEAR logs only)
AMQZLLP0.EXE - Checkpoint process
AMQZLAA0.EXE - LQM agents
AMQZTRCN.EXE - Trace
AMQZXMA0.EXE - Execution controller
AMQXSSVN.EXE - Shared memory servers
AMQCRSTA.EXE - Any remotely started channel over TCP/IP
- Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
AMQCRS6A.EXE - Any remotely started channel over LU62/SNA
- Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
RUNMQCHL.EXE - Any locally started channel over any protocol
- Could be SENDER,SERVER,CLUSSDR,REQUESTER
RUNMQLSR - LISTENER PROCESS
RUNMQCHI - CHANNEL INITIATOR
RUNMQSC.EXE - MQSeries Command processor
AMQPCSEA.EXE - PCF command processor
AMQSCM.EXE - Service Control Manager
SOLARIS
amqhasmx - logger
amqharmx - log formatter, used only if the queue manager has linear
logging selected
amqzllp0 - checkpoint processor
amqzlaa0 - queue manager agents
amqzxma0 - processing controller
amqcrsta - Any remotely started channel over TCP/IP
- Could be RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
amqcrs6a - Any remotely started channel over LU62/SNA
- Could be RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
runmqchl - Any locally started channel over any protocol
- Could be SENDER,SERVER,CLUSSDR,REQUESTER
runmqlsr - listener process
runmqchi - channel initiator
runmqsc - MQ Command interface
amqpcsea - PCF command processor
AS/400
AMQHIXK4 - Storage Manager (Housekeeper)
AMQMCPRA - Data Store (Object Cache)
AMQCLMAA - Listener
AMQALMP4 - Check Point Process
AMQRMCLA - Sender channel
AMQCRSTA - Any remotely started channel over TCP/IP
- Could be RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
AMQCRS6A - Any remotely started channel over LU62/SNA
- Could be RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
AMQPCSVA - PCF command processor
AMQRIMNA - Channel initiator (trigger monitor to start channel)
AMQIQES4 - Quiesce (forces user logoffs - for upgrades)
AMQIQEJ4 - Quiesce (without user logoffs - for daily use if desired)
AIX
amqhasmx - logger
amqharmx - log formatter, used only if the queue manager has linear
logging selected
amqzllp0 - checkpoint processor
amqzlaa0 - queue manager agent(s)
amqzxma0 - processing controller
amqcrsta - Any remotely started channel over TCP/IP
- Could be RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
amqcrs6a - Any remotely started channel over LU62/SNA
- Could be RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
runmqchl - Any locally started channel over any protocol
- Could be SENDER,SERVER,CLUSSDR,REQUESTER
runmqlsr - listener process
runmqchi - channel initiator
runmqsc - MQ Command interface
amqpcsea - PCF command processor
HP/UX
amqhasmx - logger
amqharmx - log formatter, used only if the queue manager has linear
logging selected
amqzllp0 - checkpoint processor
amqzlaa0 - queue manager agents
amqzxma0 - processing controller
amqcrsta - Any remotely started channel over TCP/IP
- Could be RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
amqcrs6a - Any remotely started channel over LU62/SNA
- Could be RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
runmqchl - Any locally started channel over any protocol
- Could be SENDER,SERVER,CLUSSDR,REQUESTER
runmqlsr - listener process
runmqchi - channel initiator
runmqsc - MQ Command interface
amqpcsea - PCF command processor
OS2
AMQHASM2.EXE - The logger
AMQHARM2.EXE - Log formatter (LINEAR logs only)
AMQZLLP0.EXE - Checkpoint process
AMQZLAA0.EXE - LQM agents
AMQZXMA0.EXE - Execution controller
AMQXSSV2.EXE - Shared memory servers
AMQCRSTA.EXE - Any remotely started channel over TCP/IP
- Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
AMQCRS6A.EXE - Any remotely started channel over LU62/SNA
- Could be
RECEIVER,REQUESTER,CLUSRCVR,SVRCONN,SENDER,SERVER
RUNMQCHL.EXE - Any locally started channel over any protocol
- Could be SENDER,SERVER,CLUSSDR,REQUESTER
RUNMQLSR - LISTENER PROCESS
RUNMQCHI - CHANNEL INITIATOR
RUNMQSC.EXE - MQSeries Command processor
AMQPCSEA.EXE - PCF command processor
PLATFORM = AIX
ProcName Process Function
amqhasmx logger
amqharmx log formatter, used only if the queue manager has linear
logging selected
amqzllp0 checkpoint processor
amqzlaa0 queue manager agent(s)
amqzxma0 processing controller
amqcrsta TCPIP Receiver channel & Client Connection
amqcrs6a LU62 Receiver channel & Client Connection
runmqchl Sender Channel
runmqsc MQ Command interface
amqpcsea PCF command processor
PLATFORM = AS/400
ProcName Process Function
PLATFORM = HP/UX
ProcName Process Function
amqhasmx logger
amqharmx log formatter, used only if the queue manager has linear
logging selected
amqzllp0 checkpoint processor
amqzlaa0 queue manager agents
amqzxma0 processing controller
amqcrsta TCPIP Receiver channel & Client Connection
amqcrs6a LU62 Receiver channel & Client Connection
runmqchl Sender Channel
runmqsc MQ Command interface
amqpcsea PCF command processor
PLATFORM = OS2
ProcName Process Function
AMQHASM2.EXE The logger
AMQHARM2.EXE Log formatter (LINEAR logs only)
AMQZLLP0.EXE Checkpoint process
AMQZLAA0.EXE LQM agents
AMQZXMA0.EXE Execution controller
AMQXSSV2.EXE Shared memory servers
AMQCRSTA.EXE TCPIP Receiver channel & Client Connection
AMQCRS6A.EXE LU62 Receiver channel & Client Connection
RUNMQCHL.EXE Sender Channel
RUNMQSC.EXE MQSeries Command processor
AMQPCSEA.EXE PCF command processor
PLATFORM = SOLARIS
ProcName Process Function
amqhasmx logger
amqharmx log formatter, used only if the queue manager has linear
logging selected
amqzllp0 checkpoint processor
amqzlaa0 queue manager agents
amqzxma0 processing controller
amqcrsta TCPIP Receiver channel & Client Connection
MQ Processes: List 3:
=====================
You can view all jobs connected to a queue manager, except listeners
(which do not connect),
using option 22 on the Work with Queue Manager (WRKMQM) panel. You can
view listeners using the WRKMQMLSR command.
=========================
87. Connect Direct:
=========================
--------------
Note 1: Intro:
--------------
Typical processes:
=========================
88. OTHER STUFF SECTION:
=========================
lrud:
=====
lrud is the AIX virtual memory manager's page-replacement (least recently
used) kernel daemon.
To strictly set the maximum number of file pages cached you would set
strict_maxperm, but you usually do not have to do this unless you are
working with a very large amount of memory (64 GB and up), so I
would leave well alone if you only have a couple of GB.
gil:
====
GIL is one of the kprocs (kernel processes) in AIX 4.3.3, 5.1 and 5.2.
Since the advent of topas in AIX 4.3.3 and changes made to the ps
command in AIX 5.1, system administrators have become aware of this
class of processes, which are not new to AIX. These kprocs have no
user interfaces and have been largely undocumented in base
documentation. Once a kproc is started, typically it stays in the
process table until the next reboot. The system resources used by any
one kproc are accounted as kernel resources, so no separate account is
kept of resources used by an individual kproc.
Most of these kprocs are NOT described in base AIX documentation and
the descriptions below may be the most complete that can be found.
The term GIL is an acronym for "Generalized Interrupt Level" and was
created by the Open Software Foundation (OSF). This is the networking
daemon responsible for processing all the network interrupts, including
incoming packets, tcp timers, etc.
Exactly how these kprocs function and much of their expected behavior
is considered IBM proprietary information.
picld:
------
Upon startup, the PICL daemon loads and initializes the plug-in modules.
These modules use the
libpicltree(3PICLTREE) interface to create nodes and properties in the
PICL tree to publish
platform configuration information. After the plug-in modules are
initialized, the daemon opens
the PICL daemon door to service client requests to access information in
the PICL tree.
arraymon:
---------
sar:
----
Note 1:
-------
I need to find out what is listening on the ports below and how to
disable services for them.
This host will run only a standalone firewall and sendmail.
On Solaris 2.6 these listeners and procs do not exist.
Regarding the "smcboot" process the answer is simple. This is the boot
process for the
Solaris Management Console (SMC) which is a GUI (well - more a framework
with a several existing modules)
to manage your system.
If you're not interested in managing your host using SMC, then you can
safely disable this
(remove or disable /etc/rc2.d/S90wbem). This smc process is also
responsible for listening on ports 898 and 5987.
The port 32768 is not used for a fixed service. You should check your
system to identify
which process is using this port. This can be done by using the pfiles
command, e.g.
"cd /proc; /usr/proc/bin/pfiles * > /tmp/pfiles.out" and then look in
/tmp/pfiles.out for the portnumber.
The picld process is a new abstraction layer for programs that want to
access platform-specific information.
Instead of using a platform-specific program, applications can use the
picl library to access
information in a generic way.
Disabling the picld daemon will affect applications which are using the
libpicltree.
You can use the "ldd" command to identify such applications and decide
whether you're using them or not.
Example applications are "prtpicl" or "locator" (see the manpages).
bpbkar:
=======
<defunct> process:
==================
Note 1:
In Solaris 2.3 (and presumably earlier) there is a bug in the pseudo tty
modules that makes them hang in close.
This causes processes to hang forever while exiting.
In all Solaris 2 releases prior to 2.5 (also fixed in the latest 2.4
kernel jumbo patch),
init (process 1) calls sync() every five minutes which can hang init for
some considerable time.
This can cause a lot of zombies accumulating with process 1 as parent,
but occurs only in rare circumstances.
Note 2:
My app has a parent that forks a child. Sometimes, one of them dies and
leaves a defunct process,
along with shared memory segments. I try to get rid of the shared memory
and kill the defunct task,
but to no avail. I then have to reboot the system to clean up the shared
memory and to get rid
of the defunct process. How can I kill a defunct process and get rid of
the associated shared memory ?
Note 3:
A zombie process is a process which has died and whose parent process is
still running
and has not wait()ed for it. In other words, if a process becomes a
zombie, it means
that the parent process has not called wait() or waitpid() to obtain the
child process's
termination status. Once the parent retrieves a child's termination
status, that child process
no longer appears in the process table.
If any process terminates before its children do, init inherits those
children.
When they die, init calls one of the wait() functions to retrieve the
child's termination status,
and the child disappears from the process table.
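You can demonstrate this in any POSIX shell (a sketch): the subshell below
forks a child that exits at once, then replaces itself with a sleep that
never calls wait(), so the dead child lingers as <defunct> until the sleep
ends and init reaps it:
$ ( : & exec sleep 60 ) &
$ ps -ef | grep defunct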
Note 4:
# /usr/proc/bin/pfiles 194
194: /usr/sbin/nscd
Current rlimit: 256 file descriptors
0: S_IFCHR mode:0666 dev:85,1 ino:3291 uid:0 gid:3 rdev:13,2
O_RDWR
1: S_IFCHR mode:0666 dev:85,1 ino:3291 uid:0 gid:3 rdev:13,2
O_RDWR
2: S_IFCHR mode:0666 dev:85,1 ino:3291 uid:0 gid:3 rdev:13,2
O_RDWR
3: S_IFDOOR mode:0777 dev:275,0 ino:0 uid:0 gid:0 size:0
O_RDWR FD_CLOEXEC door to nscd[194]
# /usr/proc/bin/pfiles 254
254: /usr/dt/bin/dtlogin -daemon
  Current rlimit: 2014 file descriptors
   0: S_IFDIR mode:0755 dev:32,24 ino:2 uid:0 gid:0 size:512
      O_RDONLY|O_LARGEFILE
   1: S_IFDIR mode:0755 dev:32,24 ino:2 uid:0 gid:0 size:512
      O_RDONLY|O_LARGEFILE
   2: S_IFREG mode:0644 dev:32,24 ino:143623 uid:0 gid:0 size:41
      O_WRONLY|O_APPEND|O_LARGEFILE
   3: S_IFCHR mode:0666 dev:32,24 ino:207727 uid:0 gid:3 rdev:13,12
      O_RDWR
   4: S_IFSOCK mode:0666 dev:174,0 ino:4686 uid:0 gid:0 size:0
      O_RDWR|O_NONBLOCK
   5: S_IFREG mode:0644 dev:32,24 ino:143624 uid:0 gid:0 size:4
      O_WRONLY|O_LARGEFILE advisory write lock set by process 245
   7: S_IFSOCK mode:0666 dev:174,0 ino:3717 uid:0 gid:0 size:0
      O_RDWR
   8: S_IFDOOR mode:0444 dev:179,0 ino:65516 uid:0 gid:0 size:0
      O_RDONLY|O_LARGEFILE FD_CLOEXEC door to nscd[171]
This listing shows the files open by the dtlogin process. Notice how
easy it is to decipher the file types
in this output. We have: directories (S_IFDIR), regular files (S_IFREG),
a character device (S_IFCHR), sockets (S_IFSOCK), and a door (S_IFDOOR).
Limits on the number of files that a process can open can be changed
system-wide in the /etc/system file.
If you support a process that opens a lot of sockets, then you can
monitor the number of open files
and socket connections by using a command such as this:
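A sketch of such a command, counting the open sockets of one process (the
pid is just an example):
# /usr/proc/bin/pfiles 254 | grep -c S_IFSOCK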
The third limit determines how many file references can be held in
memory at any time (in the inode cache).
If you're running the sar utility, then a sar -v command will show you,
in the inod-sz column of its output,
the number of references in memory and the maximum possible. On most
systems, these two numbers will be oddly
stable throughout the day. The system maintains the references even
after a process has stopped running
-- just in case it might need them again. These references will be
dropped and the space reused as needed.
The sar output might look like this:
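(Illustrative sample only; the exact columns vary per platform and release:)
00:00:00 proc-sz    ov  inod-sz     ov  file-sz    ov  lock-sz
01:00:00  69/5930    0  2480/34952   0   527/527    0    0/0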
The 4th field reports the number of files currently referenced in the
inode cache and
the maximum that can be stored.
#!/usr/bin/ksh
# Full Oracle export of the ECM instance, keeping the previous dump.
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
export NLS_LANG
ORACLE_SID=ECM
export ORACLE_SID
cd /u03/dumps/ECM
mv ECM.dmp.Z ECMformer.dmp.Z      # keep the previous compressed dump
exp system/arcturus81 file=ECM.dmp full=y statistics=none
cp ECM.dmp /u01/dumps/ECM         # secondary copy on another filesystem
compress -v ECM.dmp
xntpd:
======
# lssrc -s xntpd
utmpd:
======
Solaris:
NAME
utmpd - utmpx monitoring daemon
SYNOPSIS
utmpd [-debug]
DESCRIPTION
The utmpd daemon monitors the /var/adm/utmpx file. See
utmpx(4) (and utmp(4) for historical information).
OPTIONS
-debug
Run in debug mode, leaving the process connected to
the controlling terminal. Write debugging information
to standard output.
HP-UX 11i:
pwconv:
=======
NAME
pwconv - installs and updates /etc/shadow with information
from /etc/passwd
DESCRIPTION
The pwconv command creates and updates /etc/shadow with
information from /etc/passwd.
TCL:
====
Tcl
Description
ESCON:
======
FICON:
======
nscd:
=====
Example /etc/nscd.conf settings (Solaris):
logfile /var/adm/nscd.log
enable-cache passwd no
enable-cache group no
positive-time-to-live hosts 3600
negative-time-to-live hosts 5
suggested-size hosts 211
keep-hot-count hosts 20
old-data-ok hosts no
check-files hosts yes
enable-cache exec_attr no
enable-cache prof_attr no
enable-cache user_attr no
If your system has any instability with respect to host names and/or IP
addresses, it is possible
to substitute the following line for all the above lines containing
hosts.
This may slow down host name lookups, but it should fix the name
translation problem.
enable-cache hosts no
thread 1:
---------
thread 2:
---------
1. nvdmetoa command:
Examples:
Converts an EBCDIC file taken off an AS/400 to an ASCII file
for the pSeries or RS/6000.
thread 3:
---------
od command:
The od command translates a file into other formats, like for example
hexadecimal format.
To translate a file into several formats at once, enter:
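For example (a sketch; -t x1 and -t c are standard od conversions, printing
each byte in hexadecimal and as a character):
# od -t x1 -t c /etc/motd | head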
thread 4:
---------
AIX:
----
Purpose
Describes formats for user and accounting information.
Description
The utmp file, the wtmp file, and the failedlogin file contain records
with user and accounting information.
When a user attempts to log in, the login program writes entries in two
files:
The /etc/utmp file, which contains a record of users logged into the
system.
The /var/adm/wtmp file (if it exists), which contains connect-time
accounting records.
On an invalid login attempt, due to an incorrect login name or password,
the login program makes an entry in the /etc/security/failedlogin file.
failedlogin:
Use the who command to read the contents of the
/etc/security/failedlogin file:
# who /etc/security/failedlogin
# cp /dev/null /etc/security/failedlogin
By putting the line
CONSOLE=/dev/console
into the "/etc/default/login" file, root can only log on from the console,
and not from other terminals.
What is it?
-----------
libc = C Library
glibc = GNU C library (on linux and open systems)
It is an XCOFF shared library under AIX and hence a critical part of the
running
system.
For each function or variable that the library provides, the definition
of that symbol will include
information on which header files to include in your source to obtain
prototypes and type definitions
relevant to the use of that symbol.
Note that many of the functions in `libm.a' (the math library) are
defined in `math.h' but are not present
in libc.a. Some are, which may get confusing, but the rule of thumb is
this: the C library contains
those functions that ANSI dictates must exist, so that you don't need
the -lm if you only use ANSI functions.
In contrast, `libm.a' contains more functions and supports additional
functionality such as the matherr
call-back and compliance to several alternative
standards of behavior in case of FP errors.
Version:
--------
On AIX, you can determine the version of the libc fileset on your
machine as follows:
# lslpp -l bos.rte.libc
Note: You might want to look at the "recsh" recovery shell command
first.
export LIBPATH=/other/directory
And your future commands will work. But if you renamed libc.a, this
won't do it. If you have an NFS mounted directory somewhere, you can
put libc.a on that host, and point LIBPATH to that directory as
shown above.
Or..
a. # cp -f your_dir/locale_format/lib/libc.a /usr/ccs/lib/
b. # chown bin.bin /usr/ccs/lib/libc.a
c. # chmod 555 /usr/ccs/lib/libc.a
d. # ln -sf /usr/ccs/lib/libc.a /usr/lib/libc.a
e. # unset LIBPATH
f. # slibclean
Now Reboot.
The information in this how-to was tested using AIXr 5.3. If you are
using a different version or level of AIX,
the results you obtain might vary significantly.
At this point, commands should run as before. If you still do not have
access to a shell,
skip the rest of this procedure and continue with the next section,
Restore a Deleted System Library File.
Type the following command to unset the LIBPATH environment variable.
unset LIBPATH
Note 1:
-------
Note 2:
-------
thread:
Q:
Hi there
I've just tried to install Informix 9.3 64-bit on AIX 5.2. It failed with
the
error shown below. Any suggestions as to what could be wrong? I tried to
find information on the web as to what versions of Informix (if any) are
supported on AIX 5.2, but could not find anything.
A:
Did you enable AIX aio? If not then run the following smit command.
$ smit aio
Also check that you enabled 64-bit version of AIX run time.
Note 3:
-------
Q:
Examples:
Error Message
Executing some ArcInfo Workstation commands, or running ArcView GIS
causes the following errors to occur:
root@n5110l13:/appl/emcdctm/dba/log#cat dmw_et.log
Could not load program ./documentum:
Symbol resolution failed for /usr/lib/libc_r.a(aio.o) because:
Symbol kaio_rdwr (number 0) is not exported from dependent
module /unix.
Symbol listio (number 1) is not exported from dependent
module /unix.
Symbol acancel (number 2) is not exported from dependent
module /unix.
Symbol iosuspend (number 3) is not exported from dependent
module /unix.
Symbol aio_nwait (number 4) is not exported from dependent
module /unix.
Symbol aio_nwait64 (number 5) is not exported from dependent
module /unix.
Symbol aio_nwait_timeout (number 6) is not exported from
dependent
module /unix.
Symbol aio_nwait_timeout64 (number 7) is not exported from
dependent
module /unix.
System error: Error 0
A:
Cause
The AIX asynchronous I/O module has not been loaded.
Solution or Workaround
Load asynchronous I/O. You must do this as a ROOT user:
Use SMITTY and navigate to Devices > Async I/O > Change/Show.
Make the defined option available.
Reboot the machine.
or
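Equivalently, from the command line on AIX 5.x (a sketch):
# chdev -l aio0 -a autoconfig=available   (make aio available at boot)
# mkdev -l aio0                           (load it into the running system)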
AIX Only
-- kdb command:
This command is implemented as an ordinary user-space program and is
typically used for post-mortem analysis
of a previously-crashed system by using a system dump file. The kdb
command includes subcommands specific to the
manipulation of system dumps.
Both the KDB kernel debugger and kdb command allow the developer to
display various structures normally found
in the kernel's memory space. Both do the following:
slibclean:
==========
AIX:
Note 1:
Syntax
# slibclean
Description
The slibclean command unloads all object files with load and use counts
of 0. It can also be used to
remove object files that are no longer required from the shared library
region and from the shared library and kernel text regions.
Files
/usr/sbin/slibclean Contains the slibclean command.
Note 1:
-------
thread:
Q:
thread_waitlock
Hello all
Can someone please provide me with a link to where the above function is
documented ?? I know its part of libc_r.a and is used for thread
synchronization ... I need to get some details on the function as to
what exactly it does since a program I'm trying to debug is getting a
ENOTSUP error while calling this function ...
A:
thread_waitlock()
Reply from Richard Joltes on 8/25/2003 5:48:00 PM
Note 2:
-------
thread:
PROBLEM DESCRIPTION:
Program termination due to an assert in thread_setstate_fast.
PROBLEM SUMMARY:
Assert in thread_setstate_fast
PROBLEM CONCLUSION:
Increase lock scope.
Note 3:
-------
thread:
> Increasing swap might help, but I would not expect it.
> You are running out of *heap* space. Check your limits, e.g. 'ulimit
> -a' in *sh or 'limit' in *csh.
Note 4:
-------
e.g.
skulker command:
================
AIX:
The find command and the xargs command form a powerful combination for
use in the skulker command.
Most file selection criteria can be expressed conveniently with find
expressions. The resulting file list
can be segmented and inserted into rm commands using the xargs command
to reduce the overhead that would
result if each file were deleted with a separate command.
Note
Because the skulker command is run by a root user and its whole purpose
is to remove files,
it has the potential for unexpected results. Before installing a new
skulker command, test any additions
to its file removal criteria by running the additions manually using the
xargs -p command. After you have
verified that the new skulker command removes only the files you want
removed, you can install it.
To enable the skulker command, you should use the crontab -e command to
remove the comment statement
by deleting the # (pound sign) character from the beginning of the
/usr/sbin/skulker line in the
/var/spool/cron/crontabs/root file.
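A sketch of the kind of find/xargs pipeline the skulker command is built
on (the path and age are illustrative; the -p flag makes xargs prompt
before each removal, which is useful for testing):
# find /tmp -type f -atime +7 -print | xargs -p rm -f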
Note 1:
-------
Q:
What is L3 cache?
A:
Answer from LinksMaster "CPU Cache (the example CPU is a little old but
the concepts are still the same)
Note 2:
-------
L2 cache, short for Level 2 cache, is cache memory that is external to the
microprocessor. In general, L2 cache memory,
also called the secondary cache, resides on a separate chip from the
microprocessor chip,
although more and more microprocessors are including L2 caches in
their architectures.
xcom:
=====
Used for file transfer between systems, with many nice features like
printing a report of the transfer,
queuing of transfers, EBCDIC - ASCII conversion, scheduling, etc.
Use:
xcomd -c to kill/stop
xcomd to start the daemon.
Example commands:
xcomtcp -c1 -f /tmp/xcom.cnf LOCAL_FILE=/tmp/xcomtest.txt
REMOTE_FILE=Q:\
REMOTE_SYSTEM=NLPA020515.patest.nl.eu.abnamro.com QUEUE=NO
PROTOCOL=TCPIP PORT=8044
Note 1:
-------
/etc/services
# 32775-32895 # Unassigned
Problem:
This error may arise due to various causes. The most common cause is an
incorrect permission on
some mount point.
Here, some cases are presented which may resemble your situation:
Case 1:
=======
Incorrect underlying mount point permissions prevent WebSphere
Application Server from starting
Technote (troubleshooting)
Problem(Abstract)
When the filesystem in which Application Server is mounted has incorrect
mount point permissions,
the server may not start when attempting to start the server as a non-
root user.
Cause
The JVM is unable to initialize because the user does not have the
proper permission to access the necessary
files, due to incorrect underlying mount point permissions.
Case 2:
=======
WebSphere Application Server V6.1's Java on AIX will not start as a non-
root user and will report error
"JVMXM008" or "JVM not found: libjvm.a - libjvm.a"
Technote (troubleshooting)
Problem(Abstract)
When attempting to run WebSphere Application Server V6.1 as a non-root
user on AIX,
it may fail to initialize. It produces an error message, "JVMXM008:
Error occured while initialising System ClassException
in thread "main" Could not create the Java virtual machine." or a
different error message,
"JVM not found: libjvm.a - libjvm.a". Running the same process as the
root user does not produce any problems.
Symptom
This article describes an issue clients may encounter after successfully
installing WebSphere Application Server V6.1
on AIX as either the root user or a non-root user.
Examples
In these examples illustrating the Java initialization errors, assume
that WebSphere Application Server
has been successfully installed as a non-root user to the /usr/WAS
directory.
Example One: Java fails to initialize when running it from the root directory
cd /
/usr/WAS/java/jre/bin/java -version
Example Two: Java fails to initialize when running it from inside its
"bin" directory
cd /usr/WAS/java/jre/bin
./java -version
The error shown in Example Two occurs even though the libjvm.a file is
present in the proper location
and has correct permissions.
Cause
If the mount point for the filesystem to which WebSphere is installed is
not set up correctly, this can cause
problems for Java initialization.
A non-root user must have read access to all files installed with
WebSphere Application Server V6.1, including all
of the files in the product's "java" subdirectory. This can be
accomplished by granting ownership of all
the product's files to that non-root user.
A non-root user should also have access to the AIX system libraries,
such as the libraries in /usr/lib and /usr/ccs/lib .
If each of those points is checked and appears to be in good order, then
there is one more aspect of the WebSphere
configuration to review. Follow these steps to check and correct an
issue which could lead to problems
when initializing Java as a non-root user:
Check the permissions of the blank directory which acts as the mount
point for the filesystem.
The permissions should be at least "755". The ownership and permissions
of that directory must be configured
in a manner which allows the non-root user read access to that
directory.
Check that the directory which acts as the mount point is empty. If the
directory contains anything,
remove that content so that the directory becomes empty.
Case 3:
=======
JVMXM008 workaround
Brian D. Carlstrom wrote:
I noticed that Sergiy Kyrylkov had posted in his blog about a similar
issue when running
JikesRVM regressions from cron:
http://sergiy.kyrylkov.name/blog/Jikes%20RVM/
I found that if I forced the ssh remote bash to be a login shell, I did
not get the error.
Eventually I narrowed it down to the fact that /etc/profile was sourcing
/etc/profile.d/lang.sh
which was setting the LANG environment variable. I found that if my
remote command set LANG explicitly,
I could build jikesrvm over ssh without the JVMXM008 error.
I hope I'll have time soon to try this workaround on UNM machines.
Case 4:
=======
Q:
Hi
When I try to install PT 8.46.02 in an AIX box, I get the below error
message.
"JVMXM008: Error occurred while initializing System Class Exception in
thread "main" Could not create
the Java virtual machine".
A:
Set JAVA_HOME and the class path before installing.
Case 5:
=======
Everywhere else in the filesystem, there was no problem and java spits
out its version string.
The cause was, that the /usr/ibm/WebSphere/Appserver filesystem was a
mounted volumegroup.
/dev/volg /usr/ibm
Although the rights on /usr/ibm looked good when the FS was mounted, the
JVM did not work.
drwxr-xr-x 13 root system 4096 Jul 29 17:56 /usr/ibm
The root cause shows up, when the /usr/ibm fs gets unmounted:
drwxr-x--- 13 root system 4096 Jan 01 17:56 /usr/ibm
So changing the /usr/ibm directory rights while the fs was unmounted
solved the problem.
Posted by awaldman at 4:35 PM in IBM TIM TDI
Or
# killall
Example:
# mkramdisk 40000
# ls -l /dev | grep ram
# mkfs -V jfs /dev/ramdiskx
# mkdir /ramdiskx
# mount -V jfs -o nointegrity /dev/ramdiskx /ramdiskx
thread:
Q:
Hi all,
A:
Hi,
# ls -l /usr/sbin/ping
-r-sr-xr-x 1 root system 31598 Dec 17 2002 /usr/sbin/ping
A:
Technote IBM
When trying to ping as a user that is not root, the following error
message was
displayed:
Solution
------------------------------------------------------------------------
--------
Environment
AIX Version 5.x Change the setuid bit permissions for /usr/sbin/ping.
Enter:
chmod 4555 /usr/sbin/ping
# mount /dev/dsk/c0t0d0s0 /a
# vi /a/etc/shadow
Now remove the password entry, i.e. the second field:
root:$1$NYDu1c8p$Mdm2n6IPb9k14pP2s2FXZ.:13063:0:99999:7:::
# umount /a
ok boot -s
# passwd
New Password:
Verify Password:
This will reset the password for root and you will be able to login to
the box using this password.
itm_ora_App2
Note 1:
-------
thread
Q:
A:
You can use the certlist command without any parameters or flags, which
will show
you all installed certificates for your account on your system.
certlist Command
Purpose
certlist lists the contents of one or more certificates.
Syntax
certlist [-c] [-a attr [attr....] ]tag [username]
Description
The certlist command lists the contents of one or more certificates.
Using the -c option causes the output
to be formatted as colon-separated records. Using the -f option causes
the output to be formatted in stanzas,
with each attribute listed under the user name as follows:
user:
attribute1=value
attribute2=value
attribute3=value
When neither of these command-line options is selected, the attributes
are output as attribute=value pairs.
Flags
-c Displays the output in colon-separated records.
-f Displays the output in stanzas.
-a attr Selects one or more attributes to be displayed.
The username parameter specifies the name of the AIX user to be queried.
If invoked without the username parameter,
the certlist command uses the name of the current user.
Exit Status
0 If successful.
EINVAL If the command is ill-formed or the arguments are invalid.
ENOENT If a) the user doesn't exist, b) the tag does not exist c) the
file does not exist.
EACCES If the attribute cannot be listed, for example, if the invoker
does not have read_access to the user data-base.
EPERM If the user identification and authentication fails.
errno If system error.
Security
This command can be executed by any user in order to list the attributes
of a certificate.
Certificates listed may be owned by another user.
Audit
This command records the following event information:
CERT_List <username>
Examples
$ certlist -f -a verified keystore label signcert bob
bob:
verified=false
keystore=file:/var/pki/security/keys/bob
label=signcert
SAM on HP-UX:
=============
# sam
..
..
Starting the terminal version of sam...
------------------------------------------------------
|File View Options Actions Help|
| ---- ---- ------- ------------------------------- ---|
On any screen, press "CTRL-K" for more information on how to use the
keyboard.
Choose "Accounts for Users and Groups" and the following screen shows:
[pl003][tdbaeduc][/dbms/tdbaeduc/educroca/admin/dump/bdump] vmstat -v
1572864 memory pages
1506463 lruable pages
36494 free pages
7 memory pools
336124 pinned pages
80.0 maxpin percentage
20.0 minperm percentage
80.0 maxperm percentage
43.4 numperm percentage
654671 file pages
0.0 compressed percentage
0 compressed pages
45.8 numclient percentage
80.0 maxclient percentage
690983 client pages
0 remote pageouts scheduled
0 pending disk I/Os blocked with no pbuf
--> 8868259 paging space I/Os blocked with no psbuf
--> 2740 filesystem I/Os blocked with no fsbuf
--> 13175 client filesystem I/Os blocked with no fsbuf
--> 319766 external pager filesystem I/Os blocked with no
fsbuf
Note 1:
-------
http://www.circle4.com/jaqui/eserver/eserver-AugSep06-AIXPerformance.pdf
..
..
The last five lines of the vmstat -v report are useful when you're
looking for I/O problems. The first line is
for disk I/Os that were blocked because there were no pbufs. Pbufs are
pinned memory buffers used
to hold I/O requests at the logical volume manager layer. Prior to AIX
v5.3, this was a systemwide parameter.
It's now tuneable on a volume-group basis using the lvmo command. The
ioo parameter that controls the default
number of pbufs to add when a disk is added to a volume group is
pv_min_pbuf, and it defaults to 512.
This specifies the minimum number of pbufs per PV that the LVM uses, and
it's a global value that applies to all
VGs on the system. If you see the pbuf blocked I/Os field above
increasing over time, you may want to use the
lvmo -a command to find out which volume groups are having problems with
pbufs and then slowly increase
pbufs for that volume group using the lvmo command. A reasonable value
could be 1024.
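A sketch of those lvmo commands (the volume group name is an example):
# lvmo -v datavg -a                       (show the pbuf statistics for datavg)
# lvmo -v datavg -o pv_pbuf_count=1024    (raise the pbufs per PV for datavg)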
Paging space I/Os blocked with no psbuf refers to the number of paging
space I/O requests blocked
because no psbuf was available. These are pinned memory buffers used to
hold I/O requests at the
virtual memory manager layer. If you see these increasing, then you need
to either find out why the system
is paging or increase the size of the page datasets. Filesystem I/Os
blocked with no fsbufs refers to the
number of filesystem I/O requests blocked because no fsbuf was
available. Fsbufs are pinned memory buffers
used to hold I/O requests in the filesystem layer. If this is constantly
increasing, then it may be necessary
to use ioo to increase numfsbufs so that more bufstructs are available.
The default numfsbufs value
is determined by the system and seems to normally default to 196. I
regularly increase this to either 1,024 or 2,048.
Note 2:
-------
thread:
Q:
What does these indicate, short of real mem or does some kernal
parameters need to be adjusted?
A:
HTH,
p5wizard
p5wizard, thanks, I don't have access to it; I will get the SA to get
me the output.
Are these figures cumulative since the last reboot of the box?
What is a good setting for this?
The above are our settings. Are these the default settings?
yes, cumulative (so depends on how long the system's been running to
interpret the values).
Run "topas 2" for a few iterations, and post that screenful please.
Also "vmo -L|egrep 'perm%|client%'" output please.
I googled for "aix psbufs" and found an Oracle AIX performance Technical
Brief, here's an excerpt:
Note 3:
-------
thread:
4.2) File System Buffers. By default, the number of file system buffers
is set to 196. For high I/O systems,
this is typically too small. To see if you are blocking I/O due to not
having enough
file system buffers, run: vmstat -v.
For JFS file systems, look at the "filesystem I/Os blocked with no
fsbuf" line.
For JFS2 file systems, look at the "client filesystem I/Os blocked with
no fsbuf" line.
If these values are more than a couple thousand, you may need to
increase the respective parameters.
For JFS file systems, you will need to change the numfsbufs parameter.
For JFS2 file systems,
change the j2_nBufferPerPagerDevice parameter. Changing this parameter
does not require a reboot,
but will only take effect when the file system is mounted, so you will
have to unmount/mount the file system.
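A sketch of the corresponding ioo commands (the values are illustrative):
# ioo -o numfsbufs=1024                   (JFS)
# ioo -o j2_nBufferPerPagerDevice=1024    (JFS2)
Remember that the new values only apply to filesystems mounted after the
change.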
4.3) JFS Log Devices. Heavily used filesystems should ALWAYS have their
own JFS log on a
separate physical disk. All writes to a JFS (or JFS2) file system are
written to the JFS log.
By default, there is only one JFS log created for any volume group
containing JFS file systems.
This means that ALL writes to ALL the file systems in the volume group
go to ONE PHYSICAL DISK!!
(This is, unless, your underlying disk structure is striped or another
form of RAID for performance.)
Creating separate JFS logs on different physical disks is very important
to getting the most out
of the AIX I/O subsystem.
/usr/ccs/bin/shlap64:
=====================
The /usr/ccs/bin/shlap64 process is the Shared Library Support Daemon.
ntpdate:
========
The ntpdate command sets the local date and time by polling the NTP
servers specified to determine the correct time.
It obtains a number of samples from each server specified and applies
the standard NTP clock filter and
selection algorithms to select the best of the samples.
To set the local date and time by polling the NTP servers at address
9.3.149.107, enter:
/usr/sbin/ntpdate 9.3.149.107
/etc/ncs/glbd:
==============
glbd Daemon
Purpose
Manages the global location broker database.
Syntax
/etc/ncs/glbd [ -create { -first [-family FamilyName] | -from HostName }
] [ -change_family FamilyName ]
[ -listen FamilyList] [ -version ]
Description
The glbd daemon manages the global location broker (GLB) database. The
GLB database, part of the
Network Computing System (NCS), helps clients to locate
servers on a network or internet.
The GLB database stores the locations (specifically, the network
addresses and port numbers) of servers
on which processes are running. The glbd daemon maintains this database
and provides access to it.
There are two versions of the GLB daemon, glbd and nrglbd.
RBAC:
=====
RBAC allows the creation of roles for system administration and the
delegation of administrative tasks
across a set of trusted system users. In AIX, RBAC provides a mechanism
through which the administrative
functions typically reserved for the root user can be assigned to
regular system users.
Beginning with AIX 6.1, a new implementation of RBAC provides for a very
fine granular mechanism
to segment system administration tasks. Since these two RBAC
implementations differ greatly in functionality,
the following terms are used:
llbd:
=====
llbd Daemon
Purpose
Manages the information in the local location broker database.
Syntax
llbd [-family FamilyName] [ -version]
Description
The llbd daemon is part of the Network Computing System (NCS). It
manages the local location broker (LLB) database,
which stores information about NCS-based server programs running on the
local host.
A host must run the llbd daemon to support the location broker
forwarding function or to allow remote access
(for example, by the lb_admin tool) to the LLB database. In general, any
host that runs an NCS-based server
program should run an llbd daemon, and llbd should be running before any
such servers are started.
Additionally, any network or internet supporting NCS activity should
have at least one host running a
global location broker daemon (glbd).
startsrc -s llbd
/etc/ncs/llbd &
TCP/IP must be configured and running on your system before you start
the llbd daemon.
(You should start the llbd daemon before starting the glbd or nrglbd
daemon.)
tripwire:
=========
While Tripwire is a valuable tool for auditing the security state of Red
Hat Linux systems, Tripwire is not
supported by Red Hat, Inc. Refer to the Tripwire project's website
(http://www.tripwire.org/) for more
information about Tripwire.
SA-Agent uctsp0:
================
EMC Documentum:
===============
General:
--------
Client Connect:
---------------
http://<servername>:<portnumber>/bps/http/http
or
https://<servername>:<portnumber>/bps/http/http
As you can see from the URL, the http listener can use both the http and
https protocols. However, it should
be kept in mind that the application server uses two separate ports to
communicate with the http and https protocols.
If we provide http protocol port number (say 8080) to construct the
https URL, it will not work.
This is a common error one can make while configuring BPS http listener.
In the following pages we will
step through the installation, configuration and testing of BPS http
listener.
Configuring the BPS Handlers
<connections>
<docbase-connection name="connection">
<docbase-name>zebra</docbasename>
<user-name>dmadmin</user-name>
<password>mypassword</password>
</docbase-connection>
</connections>
Either enable some of them or point towards your own custom handler
class like this
<handlers>
..
..
<handler name="LinkToFolderExample">
<service-name>
com.documentum.bps.handlers.LinkToFolderService
</service-name>
<params>
<param name="folderName" value="/bpsinbound/"/>
</params>
</handler>
</handlers>
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<processors>
<local name="default"/>
</processors>
<connections>
<docbase-connection name="myconnection1?>
<docbase-name>mydocbase1</docbase-name>
<user-name>myname1</user-name>
<password>mypassword1</password>
</docbase-connection>
<docbase-connection name="myconnection2?>
<docbase-name>mydocbase2</docbase-name>
<user-name>myname2</user-name>
<password>mypassword2</password>
</docbase-connection>
</connections>
<handlers>
<handler name="ErrorHandlerService">
<service-name>com.documentum.bps.handlers.ErrorHandlerService</service-
name>
</handler>
<handler name="Redirector">
<service-name>com.documentum.bps.handlers.SubjectMessageFilter</service-
name>
</handler>
<handler name="LinkToFolderExample">
<service-name>com.documentum.bps.handlers.LinkToFolderService</service-
name>
<params>
<param name="folderName" value="/bpsinbound"/>
</params>
</handler>
</handlers>
<listeners>
<http-listener>
<local-listener>
<allow-non-ssl>true</allow-non-ssl>
</local-listener>
<remote-listener>
<allow-non-ssl>true</allow-non-ssl>
</remote-listener>
</http-listener>
</listeners>
</config>
<HTML>
<h1>BPS http listener and LinkToFolder handler test</h1>
<form method="post" enctype="multipart/form-data"
ACTION="http://localhost:8080/bps/http/http">
<input type="hidden" name="DctmBpsHandler" value="LinkToFolderExample">
<input type="hidden" name="DctmBpsId" value="4b08ac1980001d29?>
Connection name: <input type="text" name="DctmBpsConnection" size="20?
><br/>
File1 to upload: <input type="file" name="file to upload1? id="file1?
size="20?>
<br/>
File2 to upload: <input type="file" name="file to upload2? id="file2?
size="20?>
<br/>
<br/>
<input type="submit" value="Submit">
</form>
</HTML>
Create a file called test.html out of this code and save it in the bps
root folder.
Start the application server where BPS is deployed and then invoke the
html page by typing the following URL
in your browser
http://<servername>:<portnumber>/bps/test.html
A page should appear in your browser. If not, then please check whether your
application server is running or
whether it has been installed properly.
Fill in the connection name, such as myconnection1, then select a file
to upload and hit submit.
This will cause the html form to be submitted to the BPS http listener,
which will pass the message
to the LinkToFolder message handler, and the file will be stored in the
bpsinbound folder. Once the message handler succeeds,
it will present a success page.
Expanding the root virtual document will show the attached file.
Summary- BPS http Installation and Configuration
BPS http listener can be installed by selecting proper option in the BPS
installer. To run the
http listener, you will require an application server like Tomcat. The
handler is implemented as Servlet.
Before using the listener and the message handlers, the BPS default.xml file
needs to be configured.
Please follow the instructions provided in this White Paper to configure
the default.xml file. Once it is configured,
the http listener is ready for test. Use the test.html file provided in
this White Paper to test
the http listener.
TAKE NOTICE:
First of all, on any documentum server, find the account of the software
owner.
Since there are several accounts, depending on the site, you must check
this
before starting or stopping a Service.
You can always check for the correct owner by looking at the owner of
the
"/appl/emcdctm" directory:
root@zd111l13:/appl#ls -al
total 16
drwxr-xr-x 4 root staff 256 Jul 13 15:43 .
drwxr-xr-x 24 root system 4096 Aug 21 15:09 ..
drwxr-xr-x 13 emcdmeu emcdgeu 4096 Aug 9 15:04 emcdctm
drwxr-xr-x 3 root staff 256 Jun 29 15:35 oracle
Now you do a switch user to the owner. In the example it would be "su -
emcdmeu".
1. Docbroker processes:
-----------------------
Start
$DOCUMENTUM/dba/dm_launch_Docbroker
Stop
$DOCUMENTUM/dba/dm_stop_Docbroker
Logs:
tail -f $DOCUMENTUM/dba/log/docbroker.<host name>.1489.log
* for example
tail -f
$DOCUMENTUM/dba/log/docbroker.ZD110L12.nl.eu.abnamro.com.1489.log
2. Content Server:
------------------
Start (dmwpreu1)
$DOCUMENTUM/dba/dm_launch_Docbroker
$DOCUMENTUM/dba/dm_start_dmwpreu1
$DM_HOME/tomcat/bin/startup.sh
Stop (dmwpreu1)
$DM_HOME/tomcat/bin/shutdown.sh
$DOCUMENTUM/dba/dm_shutdown_dmwpreu1
$DOCUMENTUM/dba/dm_stop_Docbroker
Stop (dmw_eu)
$DM_HOME/tomcat/bin/shutdown.sh
$DOCUMENTUM/dba/dm_shutdown_dmw_eu
$DOCUMENTUM/dba/dm_stop_Docbroker
Start
$DOCUMENTUM/dba/dm_launch_Docbroker
$DOCUMENTUM/dba/dm_start_dmw_et
$DOCUMENTUM/dba/dm_start_dmw_et3
$DM_HOME/tomcat/bin/startup.sh
Stop
$DM_HOME/tomcat/bin/shutdown.sh
$DOCUMENTUM/dba/dm_shutdown_dmw_et
$DOCUMENTUM/dba/dm_shutdown_dmw_et3
$DOCUMENTUM/dba/dm_stop_Docbroker
Logs
*Repository
tail -f $DOCUMENTUM/dba/log/dmw_et.log
*JMS
tail -f $DM_HOME/tomcat/logs/catalina.out
Or:
3. BPS:
-------
Start
# As user {NL} emcdm, or {EU} wasemceu
cd $DOCUMENTUM/dfc/bps/inbound/bin
./start_jms_listener.sh
Better is:
Stop
# As user {NL} emcdm, or {EU} wasemceu
ps -ef | grep bps
kill -9 <process id>
4. Index Server:
----------------
Indexer - server IX
Index servers have 3 services: Docbroker, Index Server,
and Index Agent {per repository}
Start
$DOCUMENTUM/dba/dm_launch_Docbroker
$DOCUMENTUM/fulltext/IndexServer/bin/startup.sh
$DOCUMENTUM_SHARED/IndexAgents/IndexAgent1/startupIndexAgent.sh
Stop
$DOCUMENTUM_SHARED/IndexAgents/IndexAgent1/shutdownIndexAgent.sh
$DOCUMENTUM/fulltext/IndexServer/bin/shutdown.sh
$DOCUMENTUM/dba/dm_stop_Docbroker
Logs
tail -f $DOCUMENTUM/dfc/logs/IndexAgent1.log
5. Websphere:
-------------
su - wasemceu
START:
/appl/was51/bin/startNode.sh
/appl/was51/bin/startServer.sh server1
/appl/was51/bin/startServer.sh STM1DNL
/appl/was51/bin/startServer.sh STM1DAN
tail -f /beheer/log/was51/server1/SystemOut.log
tail -f /beheer/log/was51/STM1DNL/SystemOut.log
tail -f /beheer/log/was51/STM1DNL/SystemErr.log
tail -f /beheer/log/was51/STM1DAN/SystemOut.log
STOP:
/appl/was51/bin/stopServer.sh STM1DAN
/appl/was51/bin/stopServer.sh STM1DNL
/appl/was51/bin/stopServer.sh server1
/appl/was51/bin/stopNode.sh
1 cold backup:
--------------
The method above is referred to as a full, cold backup. There are options
for hot
and/or incremental backups, but those do get
more complicated (and possibly expensive). The full, cold backup is the
simplest option available.
Catalina:
=========
Catalina is the Java servlet container component of the Apache Tomcat
server. It lets Java
servlets handle HTTP requests.
Catalina is the name of the Java class of Tomcat from version 4.0;
Tomcat's servlet container was redesigned as Catalina in Tomcat version 4.x.
XMWLM:
======
Note 1:
-------
xmwlm Command
Purpose
Syntax
Description
The xmwlm agent provides recording capability for a limited set of local
system performance metrics. These include
common CPU, memory, network, disk, and partition metrics typically
displayed by the topas command. Daily recordings
are stored in the /etc/perf/daily directory. The topasout command is
used to output these recordings in raw ASCII or
spreadsheet format. The xmwlm agent can also be used to provide recording
data from Workload Management (WLM). This is
the default format used when xmwlm is run without any flags. Daily
recordings are stored in the /etc/perf/wlm
directory. The wlmmon command can be used to process WLM-related
recordings. The xmwlm agent can be started from the
command line, from a user script, or can be placed near the end of
the /etc/inittab file. All recordings cover 24-
hour periods and are only retained for two days.
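For example, to dump a daily recording to raw ASCII with topasout (a
sketch; the recording file name is illustrative, check /etc/perf/daily
on your system for the actual names):
# topasout -a /etc/perf/daily/xmwlm.070707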
Note 2:
-------
A fix is available
Obtain fix for this APAR
APAR status
Closed as program error.
Error description
xmwlm daemon may consume well over 1% of CPU resources
some disk counter values may be inaccurate in topasout output
Local fix
Problem summary
xmwlm daemon may consume well over 1% of CPU resources
some disk counter values may be inaccurate in topasout output
Problem conclusion
Reduce SPMI instrumentations internal polling frequency for
filesystem metrics. Update topasout for certain counter data
types.
Temporary fix
Comments
APAR information
APAR number IY78009
Reported component name AIX 5.3
Reported component ID 5765G0300
Reported release 530
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2005-10-21
Closed date 2005-10-21
Last modified date 2005-11-17
Publications Referenced
Fix information
Fixed component name AIX 5.3
Fixed component ID 5765G0300
Note 3:
-------
APAR status
Closed as program error.
Error description
High cpu consumption by xmwlm
Local fix
Problem summary
High cpu consumption by xmwlm
Problem conclusion
Stop xmwlm from looking infinitely in signal handler and
avoid xmwlm from crashing when it has to record more than
4096 metrics by recording only 4096 metrics at max.
Temporary fix
Comments
APAR information
APAR number IY95912
Reported component name AIX 5.3
Reported component ID 5765G0300
Reported release 530
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Submitted date 2007-03-11
Closed date 2007-03-11
Last modified date 2007-03-15
Second superuser:
=================
For safety reasons, you might want to have a second root user on your
system.
Note 1:
-------
-- Creating a second root user
Create a user.
Manually edit the user ID field and group ID field in the /etc/passwd
file.
Change the user ID and the group ID to 0.
For a typical user, for example, change the entry from:
russ:!:206:1::/u/russ:/bin/ksh
to
russ:!:0:0::/u/russ:/bin/ksh
This creates a user (in this case, russ) with identical permissions to
root.
Special users that have root authority but can only execute one command
may also be created. For instance, to create a user that can only reboot
the system, create a regular user called shutdown and modify the
/etc/passwd file to change the user and group ID to 0. For example, in
AIX 3.2, change the entry:
shutdown:!:0:0::/u/shutdown:/bin/ksh
to:
shutdown:!:0:0::/u/shutdown:/etc/shutdown -Fr
For AIX 4, the /etc/passwd entry for the user called shutdown should be:
shutdown:!:0:0::/u/shutdown:/usr/sbin/shutdown -Fr
Appendix A. Base Operating System Error Codes for Services That Require
Path-Name Resolution
The following errors apply to any service that requires path name
resolution:
/*
* COMPONENT_NAME: CMDERRLG system error logging and reporting
facility
*
* External definitions and declarations for liberrlog.a
*
*/
#include <sys/types.h>
#include <sys/err_rec.h>
/*
* These magic numbers will indicate which version of errlog
* entry is being returned.
* All users of errlog_entry_t should use only LE_MAGIC.
*/
#define LE_MAGIC_41 0x0C3DF420
/* LE_MAGIC434_INTERUM is an interum 43T magic, before le_errdiag was added. */
#define LE_MAGIC434_INTERUM 0x0C3DF434
#define LE_MAGIC434 0x0C4DF434
#define LE_MAGIC52F 0x0C4DF52F
#define LE_MAGIC53D 0x0C4DF53D
#define LE_MAGIC LE_MAGIC53D /* current errlog_open magic # */
/* VALID_LE_MAGIC gives valid magic numbers for an error log record. */
#define VALID_LE_MAGIC(m) (((m) == LE_MAGIC_41) || \
((m) == LE_MAGIC434_INTERUM) || ((m) == LE_MAGIC434))
/* VALID_LENTRY_MAGIC gives valid magic numbers for errlog_open(). */
#define VALID_LENTRY_MAGIC(m) (((m) == LE_MAGIC) || ((m) == LE_MAGIC434) || \
((m) == LE_MAGIC52F))
/*
* Optional duplicate information.
*/
struct errdup {
unsigned int ed_dupcount;
time32_t ed_time1;
time32_t ed_time2;
};
/*
* This structure is used to pass search criteria to errlog_find_first.
*/
/* Flags to combine with the field id to indicate the data type of the field */
#define LE_TYPE 0xff00
#define LE_TYPE_INT 0x0100
#define LE_TYPE_STRING 0x0200
#define LE_TYPE_BOOLEAN 0x0300
/*
* Define the directions find can walk through the errlog file.
*/
/*
* Define the errors that the functions can return.
*/
/*
* These are the functions that comprise the API
*/
extern int errlog_open(char *path,
int mode,
unsigned int magic,
errlog_handle_t *handle);
#endif
clsprod@starboss:/usr/include $
#ifndef _H_ERRNO
#define _H_ERRNO
#include <standards.h>
/*
* Error codes
*
* The ANSI, POSIX, and XOPEN standards require that certain values
be
* in errno.h. The standards allow additional macro definitions,
* beginning with an E and an uppercase letter.
*
*/
#ifdef _ANSI_C_SOURCE
#ifndef _KERNEL
#else
#endif /* _KERNEL */
#ifdef _ALL_SOURCE
#endif /* _ALL_SOURCE */
/* ipc/network software */
/* argument errors */
#define ENOTSOCK 57 /* Socket operation on non-socket */
#define EDESTADDRREQ 58 /* Destination address required */
#define EDESTADDREQ EDESTADDRREQ /* Destination address required */
#define EMSGSIZE 59 /* Message too long */
#define EPROTOTYPE 60 /* Protocol wrong type for socket */
#define ENOPROTOOPT 61 /* Protocol not available */
#define EPROTONOSUPPORT 62 /* Protocol not supported */
#define ESOCKTNOSUPPORT 63 /* Socket type not supported */
#define EOPNOTSUPP 64 /* Operation not supported on socket */
#define EPFNOSUPPORT 65 /* Protocol family not supported */
#define EAFNOSUPPORT 66 /* Address family not supported by protocol family */
#define EADDRINUSE 67 /* Address already in use */
#define EADDRNOTAVAIL 68 /* Can't assign requested address */
/* operational errors */
#define ENETDOWN 69 /* Network is down */
#define ENETUNREACH 70 /* Network is unreachable */
#define ENETRESET 71 /* Network dropped connection on reset */
#define ECONNABORTED 72 /* Software caused connection abort */
#define ECONNRESET 73 /* Connection reset by peer */
#define ENOBUFS 74 /* No buffer space available */
#define EISCONN 75 /* Socket is already connected */
#define ENOTCONN 76 /* Socket is not connected */
#define ESHUTDOWN 77 /* Can't send after socket shutdown */
/*
* AIX returns EEXIST where 4.3BSD used ENOTEMPTY;
* but, the standards insist on unique errno values for each errno.
* A unique value is reserved for users that want to code case
* statements for systems that return either EEXIST or ENOTEMPTY.
*/
#if defined(_ALL_SOURCE) && !defined(_LINUX_SOURCE_COMPAT)
#define ENOTEMPTY EEXIST /* Directory not empty */
#else /* not _ALL_SOURCE */
#define ENOTEMPTY 87
#endif /* _ALL_SOURCE */
/* disk quotas */
#define EDQUOT 88 /* Disc quota exceeded */
/* errnos 90-92 reserved for future use compatible with AIX PS/2 */
/* errnos 94-108 reserved for future use compatible with AIX PS/2 */
/* security */
#define ENOATTR 112 /* no attribute found */
#define ESAD 113 /* security authentication denied */
#define ENOTRUST 114 /* not a trusted program */
/* SVR4 STREAMS */
#define ENOSR 118 /* temp out of streams resources */
#define ETIME 119 /* I_STR ioctl timed out */
#define EBADMSG 120 /* wrong message type at stream head */
#define EPROTO 121 /* STREAMS protocol error */
#define ENODATA 122 /* no message ready at stream head */
#define ENOSTR 123 /* fd is not a stream */
#endif /* _ANSI_C_SOURCE */
#endif /* _H_ERRNO */
clsprod@starboss:/usr/include/sys $
/*
** SYSEXITS.H -- Exit status codes for system programs.
**
** This include file attempts to categorize possible error
** exit statuses for system programs, notably delivermail
** and the Berkeley network.
**
** Error numbers begin at EX__BASE to reduce the possibility of
** clashing with other exit statuses that random programs may
** already return. The meaning of the codes is approximately
** as follows:
**
** EX_USAGE -- The command was used incorrectly, e.g., with
** the wrong number of arguments, a bad flag, a bad
** syntax in a parameter, or whatever.
** EX_DATAERR -- The input data was incorrect in some way.
** This should only be used for user's data & not
** system files.
** EX_NOINPUT -- An input file (not a system file) did not
** exist or was not readable. This could also include
** errors like "No message" to a mailer (if it cared
** to catch it).
** EX_NOUSER -- The user specified did not exist. This might
** be used for mail addresses or remote logins.
** EX_NOHOST -- The host specified did not exist. This is used
** in mail addresses or network requests.
** EX_UNAVAILABLE -- A service is unavailable. This can occur
** if a support program or file does not exist. This
** can also be used as a catchall message when something
** you wanted to do doesn't work, but you don't know
** why.
** EX_SOFTWARE -- An internal software error has been detected.
** This should be limited to non-operating system related
** errors as possible.
** EX_OSERR -- An operating system error has been detected.
** This is intended to be used for such things as "cannot
** fork", "cannot create pipe", or the like. It includes
** things like getuid returning a user that does not
** exist in the passwd file.
** EX_OSFILE -- Some system file (e.g., /etc/passwd, /etc/utmp,
** etc.) does not exist, cannot be opened, or has some
** sort of error (e.g., syntax error).
** EX_CANTCREAT -- A (user specified) output file cannot be
** created.
** EX_IOERR -- An error occurred while doing I/O on some file.
** EX_TEMPFAIL -- temporary failure, indicating something that
** is not really an error. In sendmail, this means
** that a mailer (e.g.) could not create a connection,
** and the request should be reattempted later.
** EX_PROTOCOL -- the remote system returned something that
** was "not possible" during a protocol exchange.
** EX_NOPERM -- You did not have sufficient permission to
** perform the operation. This is not intended for
** file system problems, which should use NOINPUT or
** CANTCREAT, but rather for higher level permissions.
** For example, kre uses this to restrict who students
** can send mail to.
**
*/
A fix is available
3.9.0.8-TIV-TEC-IF0106 IBM Tivoli Enterprise Console Version 3.9 Interim
Fix
APAR status
Closed as program error.
Error description
TEC 3.9
AIX 5.3 SP8 or greater
When traps are received by the snmp adapter, it crashes
Truss 1 Output:
open("/usr/lib/nls/msg/en_US/libc.cat", O_RDONLY) = 9
kioctl(9, 22528, 0x00000000, 0x00000000) Err#25 ENOTTY
kfcntl(9, F_SETFD, 0x00000001) = 0
kioctl(9, 22528, 0x00000000, 0x00000000) Err#25 ENOTTY
kread(9, "\0\001 ?\007\007 I S O 8".., 4096) = 4096
lseek(9, 0, 1) = 4096
lseek(9, 0, 1) = 4096
lseek(9, 0, 1) = 4096
_getpid() = 101604
lseek(9, 0, 1) = 4096
lseek(9, 8069, 0) = 8069
kread(9, " T h e s y s t e m c".., 4096) = 4096
close(9) = 0
__loadx(0x07000000, 0xF01E0438, 0x0000001A, 0xF015B6F8, 0x100140A3) = 0xF015C35C
__loadx(0x07000000, 0xF01E0444, 0x0000001A, 0xF015B6F8, 0x100140A3) = 0xF015C3A4
__loadx(0x07000000, 0xF01E0450, 0x0000001A, 0xF015B6F8, 0x100140A3) = 0xF015C314
__loadx(0x07000000, 0xF01E045C, 0x0000001A, 0xF015B6F8, 0x100140A3) = 0xF015C3EC
__loadx(0x07000000, 0xF01E0468, 0x0000001A, 0xF015B6F8, 0x100140A3) = 0xF015C428
__loadx(0x05000000, 0x2FF1F6A8, 0x00000960, 0xF015B6F8, 0x00000000) = 0x00000000
kread(8, " h o s t s n i s _ l d".., 4096) = 0
close(8) = 0
getdomainname(0xF023D178, 1024) = 0
getdomainname(0xF023D178, 1024) = 0
getdomainname(0xF023D178, 1024) = 0
getdomainname(0xF023D178, 1024) = 0
_getpid() = 101604
getuidx(1) = 0
kwrite(7, " 2 7", 2) Err#32 EPIPE
Received signal #13, SIGPIPE [default]
*** process killed ***
Truss 2 Output
tecad_snmp.err output:
Universal Command:
==================
AIX:
# lslpp -La 'UCmdP'
HP:
# swlist -l subproduct UCmd
Tiger:
======
AIX ONLY:
[pl101][tdbaprod][/home/tdbaprod] errpt
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
5FC2DD4B 0225151808 I H ent2 PING TO REMOTE HOST FAILED
9F7B0FA6 0225151108 I H ent2 PING TO REMOTE HOST FAILED
LABEL: ECH_PING_FAIL_BCKP
IDENTIFIER: 5FC2DD4B
Description
PING TO REMOTE HOST FAILED
Probable Causes
CABLE
SWITCH
ADAPTER
Failure Causes
CABLES AND CONNECTIONS
Recommended Actions
CHECK CABLE AND ITS CONNECTIONS
IF ERROR PERSISTS, REPLACE ADAPTER CARD.
Detail Data
FAILING ADAPTER
ent1
SWITCHING TO ADAPTER
ent0
Unable to reach remote host through backup adapter: switching over to
primary adapter
-- thread 1:
The details of the message says it can't ping the default gateway
through backup adapter.
Why does it try this? Why does it fail because if we pull the primary
cable it switches
to the backup adapter with no problems.
Cheers
-- thread 2:
Hello:
I've seen similar things happen when the switch is not on "port host"
(meaning the port begins receiving and sending
packets quickly, instead of running Spanning Tree Protocol before going
in the FORWARDING state): in this case,
the EtherChannel sends the ping packets, they are dropped because the
switch is still initializing,
and the cycle continues on and on. Still, 30 minutes sounds like a long
time.
You can try the following:
- verify that the EtherChannel switch ports are set to "port host"
(i.e., STP should be disabled)
Does this ONLY happen when updating from FP74 to FP8, or every time the
VIOS boots?
Kind regards,
-- thread 3:
Hi All,
I am getting the following error consistently on one of my servers. When
I do an entstat -d ent3 | grep "Active channel", it does come back with
Active
channel: primary channel. Could you please provide me with any
suggestions
or steps I can take to fix this error?
Hi
Just Etherchannel or Etherchannel with Backup Adapter connected to a
failover Switch just in case everything fails ??
If so, please take a read of the following:
http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.rsct.doc/rsct_aix5l53/bl5adm05/bl5adm0559.html
Hope this helps
-- thread 4:
Note 1:
-------
Q:
Out of memory during "large" request for 528384 bytes, total sbrk() is
16302080 bytes at ... line 176.
when the server is busy. This is for a background perl script. What
I would like to do is to check to see if memory for the request is
available before running the offending the command, and just pause
everything until memory is available. Is that possible?
Thanks all,
A:
Note 2:
-------
According to that output, perl was already using 464MB, and a malloc
request for 64MB failed, which is reasonable since the default hard
datasize limit on FreeBSD is 512MB. To raise it, put this in
/boot/loader.conf and reboot:
kern.maxdsiz="1024M"
Also running perl 5.8.6, on the same mailbox? Maybe different perl
versions allocate memory differently.
Note 3:
-------
Out of memory during "large" request for 528384 bytes, total sbrk() is
2330624 bytes at citationBuilder.pl line 37.
The script parses through a table to build another table. The first
table has about 550 entries and the second
that gets built by the script should have around 3500.
Out of memory during request for 4064 bytes, total sbrk() is 41822208
bytes!
Segmentation fault: 11
Code:
foreach my $line (<FILE>) # reads the whole file into @_
{
process($line);
}
Do this:
Code:
while (my $line = <FILE>) # only reads one line into memory at a time
{
process($line);
}
Note 4:
-------
Out of memory!
Callback called exit.
Sometimes you discover that your server is not responding and its
error_log file has filled up the remaining space
on the filesystem. When you finally get to see the contents of the
error_log file, it includes millions of lines like this:
Perl Version 5.005 and higher is recommended for its improved malloc.c,
and also for other features that improve
the performance of mod_perl and are turned on by default.
Note 5:
-------
Glenn wrote:
[...]
>>http://perl.apache.org/docs/1.0/guide/troubleshooting.html#Callback_called_exit
>
>
> I've followed that advice and explicitly allocated memory into $^M.
> I have the following in my mod_perl_startup.pl, which I run from
> httpd.conf with PerlRequire /path/to/mod_perl_startup.pl
> If 64K is not enough for you, try increasing the allocation.
>
> Cheers,
> Glenn
>
> use strict;
>
> ## ----------
> ## ----------
>
> ## This section is similar in scope to Apache::Debug.
> ## Delivers a stack backtrace to the error log when perl code dies.
>
> ## Allocate 64K as an emergency memory pool for use in out of memory
situation
> $^M = 0x00 x 65536;
>
> ## Little trick to initialize this routine here so that in the case of
OOM,
> ## compiling this routine doesn't eat memory from the emergency memory
pool $^M
> use CGI::Carp ();
> eval { CGI::Carp::confess('init') };
>
> ## Importing CGI::Carp sets $main::SIG{__DIE__} = \&CGI::Carp::die;
> ## Override that to additionally give a stack backtrace
> $main::SIG{__DIE__} = \&CGI::Carp::confess;
Brian, you may want to include Glenn's useful tips as well in the patch.
__________________________________________________________________
Stas Bekman JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide ---> http://perl.apache.org
mailto:[email protected] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org http://ticketmaster.com
--
Report problems: http://perl.apache.org/bugs/
Mail list info: http://perl.apache.org/maillist/modperl.html
List etiquette: http://perl.apache.org/maillist/email-etiquette.html
Note 6:
-------
oranh202:/home/se1223>mwa status
MeasureWare scope status:
WARNING: scopeux is not active (MWA data collector)
root@zd110l13:/etc#rc.mwa stop
root@zd110l13:/etc#
For example, to start up scopeux but not the servers, change the value
to "/opt/perf/bin/mwa start scope".
To disable MeasureWare Agent startup when the system reboots, change the
variable MWA_START=1 to MWA_START=0.
MWA Command:
SYNOPSIS
mwa [action] [subsystem] [parms]
DESCRIPTION
mwa is a script that is used to start, stop, and re-
initialize MeasureWare Agent processes.
ACTION
-? List all mwa options.
If your shell interprets ? as a wildcard character, use an
invalid option such as -xxx instead of -?.
start Start all or part of MeasureWare Agent. (default)
stop Stop all or part of MeasureWare Agent.
restart Reinitialize all or part of MWA. This option causes some
processes to be stopped and restarted.
status List the status of all or part of MWA processes.
version List the version of the all or part of MWA files.
SUBSYSTEM
all Perform the selected action on all MWA components. (default)
scope Perform the selected action on the scopeux collector.
The restart operation causes the scopeux collector to stop, then
restart.
This causes the parm and ttd.conf files to be re-read.
alarm Perform the selected action on the MWA server alarm component.
Restart is the only valid option and causes the alarmdef file
to be reprocessed.
db Perform the selected action on the MWA server db component.
PARMS
-midaemon <miparms> Provide the midaemon with parameters to initiate it
with other than default parameters.
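For example, to reinitialize only the scopeux collector (so that the parm
and ttd.conf files are re-read) and then check the result, using just the
actions and subsystems documented above:
# mwa restart scope
# mwa status all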
Example:
phred01:/> mwa status
MeasureWare scope status:
WARNING: scopeux is not active (MWA data collector)
References:
HP OpenView Performance Agent for HP-UX 10.20 and 11 Installation &
Configuration Guide
man mwa(Command)
mwa status reports scopeux not running. Manual states to use restart
command to retain existing logs (status.scope).
But, I'm more concerned about the database collected prior to
"mysterious" end of scopeux. Will restart (or start)
of scope (scopeux) preserve existing data?
Thanks.
Vic.
Once the data is written to the log files, it stays there when scopeux
stops and starts. The data is deleted
after the logfile reaches its size limit and starts to wrap. The oldest
data is overwritten first.
HTH
Marty
If you only want to work with the Scope Collector itself (I.E. All other
MeasureWare processes are running)
do the following:
This will narrow down what part of the MeasureWare product you are
working with.
The status.scope file might help you figure out why scope stopped.
To see what may have happened to the scope daemon, look in its status
file /var/opt/perf/status.scope.
You can also use "perfstat -t" to see the last few lines of all the OVPA
status files.
Since the "perfstat" command shows glance and OVPA (mwa) status, I
recommend using perfstat instead of
"mwa status" (also less to type!).
I could not get midaemon and scopeux to start. When using glance, the
following error message appears; what does it mean?
Measureware ran for 10 days, and during this period, it had the
following error message and then finally one day
it stopped running.
It looks like your OS is not allocating enough buffer space. You will
need to increase your kernel parameters
pertaining to buffer space and regen the kernel.
HTH
Marty
ctcasd daemon:
==============
The ctcasd daemon is used by the cluster security services library when
UNIX-identity-based authentication
is configured and active within the cluster environment. The cluster
security services uses ctcasd
when service requesters and service providers try to create a secured
execution environment through
a network connection. ctcasd is not used when service requesters and
providers establish
a secured execution environment through a local operating system
connection such as a UNIX domain socket.
-- Tivoli servers
The Tivoli server includes the libraries, binaries, data files, and the
graphical user interface (GUI)
(the Tivoli desktop) needed to install and manage your Tivoli
environment. The Tivoli server performs
all authentication and verification necessary to ensure the security of
Tivoli data. The following components
comprise a Tivoli server:
- An object database, which maintains all object data for the entire
Tivoli region.
- An object dispatcher, which coordinates all communication with managed
nodes and gateways.
The object dispatcher process is the oserv, which is controlled by the
oserv command.
- An endpoint manager, which is responsible for managing all of the
endpoints in the Tivoli region.
When you install the Tivoli server on a UNIX operating system, the
Tivoli desktop is automatically installed.
When you install the Tivoli server on a Windows operating system, you
must install Tivoli Desktop for Windows
separately to use the Tivoli desktop.
-- Managed nodes
A managed node runs the same software that runs on a Tivoli server.
Managed nodes maintain their own
object databases that can be accessed by the Tivoli server. When managed
nodes communicate directly
with other managed nodes, they perform the same communication or
security operations that are performed
by the Tivoli server.
The difference between a Tivoli server and a managed node is that the
Tivoli server object database is global
to the entire region including all managed nodes. In contrast, the
managed node database is local to the
particular managed node.
-- Gateways
--- Autostart:
thread:
Technote (FAQ)
Problem
The AIX server comes preloaded with the Tivoli Endpoint software
installed. How can you make this process
autostart at bootup?
Solution
Create the /etc/inittab entry:
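The entry itself is not shown in this note. On AIX such an entry is
typically added with mkitab; a sketch, assuming the endpoint install
created the usual /etc/rc.tma1 start script (verify the path on your
system):
# mkitab "tma:2:once:/etc/rc.tma1 > /dev/console 2>&1"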
thread:
aix comes with tivoli agents installed (but not configured). you should
configure the last.cfg file (usually,
you can find this in /../lcf/dat/1) add a line
'lcs.logininterfaces=gatewayip+9494' this line will instruct
the endpoint to report on you existing tivoli gateway.
Hi, the previous info is correct except for a writing mistake.
It should be lcs.login_interfaces=ipoftmrserver
stop the endpoint (sh lcfd.sh stop) and start it (sh lcfd.sh start)
After succ. registration you should see an lcf.dat file in the lcf/dat/1
folder.
Note 1:
-------
thread:
Q:
Dear friends,
thank's a lot.
A:
Its an agent that runs on your system as part of the Tivoli Distributed
Monitoring. It reports various things about your sysem back to the
Tivoli
Enterprise Console - usually your help desk. The basic monitors include
things like file system usage (e.g if a FS is more than 80% used the
system
gets flagged at the console), or monitoring log files. Basically you can
configure it to monitor whatever you want.
Note 3:
-------
Of the three log files, the lcfd.log file is sometimes the most useful
for debugging endpoint problems.
However, remote access to the endpoint is necessary for one-to-one
contact.
timestamp
Displays the date and time that the message was logged.
level
Displays the logging level of the message.
app_name
Displays the name of the application that generated the message.
message
Displays the full message text. The content of message is provided by
the application specified in app_name.
The default limit of the log file is 1 megabyte, which you can adjust
with the lcfd (or lcfd.sh) command
with the -D log_size =max_size option. The valid range is 10240 through
10240000 bytes. When the maximum size
is reached, the file reduces to a size of approximately 200 messages and
continues to log.
lcfd_port=9495
lcfd_preferred_port=9495
gateway_port=9494
protocol=TCPIP
log_threshold=1
start_timeout=120
run_timeout=120
lcfd_version=41100
logfile=C:\Program Files\Tivoli\lcf\dat\1\lcfd.log
config_path=C:\Program Files\Tivoli\lcf\dat\1\last.cfg
run_dir=C:\Program Files\Tivoli\lcf\dat\1
load_dir=C:\Program Files\Tivoli\lcf\bin\w32-ix86\mrt
lib_dir=C:\Program Files\Tivoli\lcf\bin\w32-ix86\mrt
cache_loc=C:\Program Files\Tivoli\lcf\dat\1\cache
cache_index=C:\Program Files\Tivoli\lcf\dat\1\cache\Index.v5
cache_limit=20480000
log_queue_size=1024
log_size=1024000
udp_interval=300
udp_attempts=6
login_interval=1800
lcs.machine_name=andrew1
lcs.crypt_mode=196608
lcfd_alternate_port=9496
recvDataTimeout=2
recvDataNumAttempts=10
recvDataQMaxNum=50
login_timeout=300
login_attempts=3
When you change endpoint configuration with the lcfd command, the
last.cfg file changes. Therefore, you should
not modify the last.cfg file. If you require changes, use the lcfd
command to make any changes.
However, running the lcfd command requires stopping and restarting the
endpoint.
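For example, to enlarge the log file limit with the -D option documented
above (a sketch; the endpoint must be stopped and restarted for the
change to take effect):
# sh lcfd.sh stop
# lcfd -D log_size=2048000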
./tmp/.tivoli/.tecad_logfile.fifo.zd110l05.aix-default
./tmp/.tivoli/.tecad_logfile.lock.zd110l05.aix-default
./tmp/.tivoli/.tecad_logfile.fifo.zd110l05.aix-defaultlogsourcepipe
./etc/Tivoli/tecad
./etc/Tivoli/tecad.1011792
./etc/Tivoli/tecad.1011792/bin/init.tecad_logfile
./etc/Tivoli/tec/tecad_logfile.cache
./etc/rc.tecad_logfile
./etc/rc.shutdown-pre-tecad_logfile
./etc/rc.tecad_logfile-pre-tecad_logfile
./etc/rc.tivoli_tecad_mqseries
find: 0652-023 Cannot open file ./proc/278708.
find: 0652-023 Cannot open file ./proc/315572.
find: 0652-023 Cannot open file ./proc/442616.
find: 0652-023 Cannot open file ./proc/475172.
./beheer/Tivoli/lcf/dat/1/cache/out-of-date/init.tecad_logfile
./beheer/Tivoli/lcf/dat/1/cache/out-of-date/tecad-remove-logfile.sh
./beheer/Tivoli/lcf/dat/1/cache/bin/aix4-r1/TME/TEC/adapters/bin/tecad_logfile.cfg
./beheer/Tivoli/lcf/dat/1/LCFNEW/CTQ/logs/trace_mqs_start_tecad__MQS_CC.Q3P0063__1__p1052790.log
./beheer/Tivoli/lcf/bin/aix4-r1/TME/TEC/adapters/bin/tecad_logfile
./beheer/Tivoli/lcf/bin/aix4-r1/TME/TEC/adapters/bin/init.tecad_logfile
./beheer/Tivoli/lcf/bin/aix4-r1/TME/TEC/adapters/bin/tecad-remove-logfile.sh
./beheer/Tivoli/lcf/bin/aix4-r1/TME/TEC/adapters/aix-default/etc/C/tecad_logfile.fmt
./beheer/Tivoli/lcf/bin/aix4-r1/TME/TEC/adapters/aix-default/etc/tecad_logfile.err
./beheer/Tivoli/lcf/bin/aix4-r1/TME/TEC/adapters/aix-default/etc/tecad_logfile.conf
./beheer/Tivoli/lcf/bin/aix4-r1/TME/TEC/adapters/aix-default/etc/tecad_logfile.cds
./beheer/Tivoli/lcf/bin/aix4-r1/TME/MQS/bin/tecad_mqseries.cfg
./beheer/Tivoli/lcf/bin/aix4-r1/TME/MQS/bin/tecad_mqseries.mqsc
./beheer/Tivoli/lcf/bin/aix4-r1/TME/MQS/bin/tecad_mqseries_nontme
./beheer/Tivoli/lcf/bin/aix4-r1/TME/MQS/bin/tecad_mqseries_tmegw
./beheer/Tivoli/lcf/bin/generic_unix/TME/MQS/sh/mqs_start_tecad.sh
./beheer/Tivoli/lcf/bin/generic_unix/TME/MQS/sh/mqs_stop_tecad.sh
./beheer/Tivoli/lcf/bin/generic_unix/TME/MQS/teccfg/tecad_mqseries.Q3P0063.cfg
dircmp:
=======
About dircmp
Lists files in both directories and indicates whether the files in the
directories are the same and/or different.
Syntax
dircmp [-d] [-s] [-w n] directoryone directorytwo.
-d Compare the contents of files with the same name in both directories
and output a list telling what
must be changed in the two files to bring them into agreement. The list
format is described in diff(1).
-s Does not tell you about the files that are the same.
-w n Change the width of the output line to n characters. The default
width is 72.
directoryone The first directory for comparing.
directorytwo The second directory for comparing.
Examples
dircmp dir1 dir2 - Compares the directory dir1 with the directory dir2.
Below is an example of the output
you may receive when typing this command.
directory .
same ./favicon.ico
same ./logo.gif
same ./question.gif
kmcrca:
=======
FLASHCOPY:
==========
Note 1:
=======
What is FlashCopy?
FlashCopy is a function designed to create an instant "copy" of some
data. When an administrator issues a
FlashCopy command that essentially says "make a copy of this data," SVC
via FlashCopy immediately provides
the appearance of having created a copy of the data, when in reality it
creates the physical copy
in the background before moving that copy to an alternative data-storage
device, which can take some time
depending on the size of the backup copy. However, it creates the
appearance of having completed
the copy instantaneously, so customers can have a backup copy available
as soon as the command is issued,
even though copying to a different storage medium takes place behind the
scenes.
"Because it operates very quickly in this way, FlashCopy allows
customers to make a copy and immediately
move on to other work without having to wait for the data to actually
physically be copied from one place
to another," says Saul. "In that regard, SVC FlashCopy is very similar
to FlashCopy on the DS8000, for example,
with the difference being SVC FlashCopy operates on most storage devices
attached to the SVC, spanning many
different disk systems."
Note 2:
=======
FlashCopy
FlashCopy is an IBM feature supported on ESS (Enterprise Storage
Servers) that allows you to make nearly
instantaneous Point in Time copies of entire logical volumes or data
sets. The HDS (Hitachi Data Systems)
implementation providing similar function is branded as ShadowImage.
Using either implementation,
the copies are immediately available for both read and write access.
-- FlashCopy Version 1
The first implementation of FlashCopy, Version 1 allowed entire volumes
to be instantaneously "copied" to
another volume by using the facilities of the newer Enterprise Storage
Subsystems (ESS).
-- FlashCopy Version 2
FlashCopy Version 2 introduced the ability to flash individual data sets
and more recently added support
for "consistency groups". FlashCopy consistency groups can be used to
help create a consistent point-in-time
copy across multiple volumes, and even across multiple ESSs, thus
managing the consistency of dependent writes.
http://www.ibm.com/developerworks/forums/thread.jspa?messageID=13967589
Q:
Using target volume from FlashCopy on same LPAR as source volume going
thru VIO server
Posted: Jun 28, 2007 12:21:09 PM Reply
Synopsis:
A:
Did you rmdev the vpath and hdisks before recreating the flash copy?
Then you will need to run recreatevg again,
as restarting the flash copy will change the pvid back to the same as
the source volume.
Why not just attach the flash copy to another host? Then you won't need
to run recreate vg and you could assign
the flash copy to the original host if you need to recover the data.
==============================
2. NOTES ABOUT SHELL PROGRAMS:
==============================
-------------------------------------------------------------
NOTE 1:
This means that commands are read from the string between two ` `.
Usage in a nested command goes like this:
font=`grep font \`cat filelist\``
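In shells that support it (ksh, bash, POSIX sh), the $() form is
equivalent and nests without the backslash escapes:
font=$(grep font $(cat filelist))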
-------------------------------------------------------------
NOTE 3:
To extend the PATH variable, on most systems use a statement like the
following example:
$ export PATH=$PATH:$ORACLE_HOME/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
PATH=.:/usr/bin:/$HOME/bin:/net/glrr/files1/bin
export PATH
-------------------------------------------------------------
NOTE 4:
Positional parameters:
# cat makelist
sort +1 -2 people | tr -d "[0-9]" | pr -h Distribution | lp
This will only work on that filename, that is in this example, people.
# cat makelist
sort +1 -2 $1 | tr -d "[0-9]" | pr -h Distribution | lp
# makelist file1
# makelist file2
-------------------------------------------------------------
NOTE 5:
-------------------------------------------------------------
NOTE 6:
- A variable name must begin with a letter and can contain letters,
digits, and underscores,
but no special characters.
ME=bill
BC="bill clinton"
Now the shell can react and use the variable $ME and it substitutes the
value for that variable.
Note that there must be no spaces around the "=" sign: VAR=value works;
VAR = value doesn't work
Local and environment variables:
--------------------------------
variables that you set are local to the current shell unless you mark
them for export.
Variables marked for export are called environment variables, and will
be made available
to any command that the shell creates. The following command marks the
variable BC for export:
export BC
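A quick illustration: a child shell only sees the variable after the
export (the ksh -c child is just for demonstration):
$ BC="bill clinton"
$ ksh -c 'echo $BC'        # prints an empty line: BC is local
$ export BC
$ ksh -c 'echo $BC'        # the child shell now inherits BC
bill clinton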
-------------------------------------------------------------
NOTE 7:
if test $# -eq 0
then echo "You must give a filename"
exit 1
fi
-------------------------------------------------------------
NOTE 8:
-------------------------------------------------------------
NOTE 9:
Here is a simple Bash script which prints out all its arguments.
#!/bin/bash
#
# Print all arguments (version 1)
#
for arg in $*
do
echo Argument $arg
done
The `$*' symbol stands for the entire list of arguments and `$#' is the
total number of arguments.
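Note that an unquoted $* splits arguments that contain spaces; "$@"
preserves each argument exactly as typed. A variant of the same script:
#!/bin/bash
#
# Print all arguments (version 2)
#
for arg in "$@"
do
echo "Argument $arg"
done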
-------------------------------------------------------------
NOTE 10: Start and End of Command
A command starts with the first word on a line or if it's the second
command on a line
with the first word after a";'.
A command ends either at the end of the line or whith a ";". So one can
put several commands onto one line:
One can continue commands over more than one line with a "\" immediately
followed by a newline, which is made by the return key:
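For example (illustrative commands and paths):
$ ls -l; date; who
$ cp /a/very/long/source/path/file.txt \
/a/very/long/destination/path/file.txt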
-------------------------------------------------------------
NOTE 11:
Bash and the Bourne shell has an array of tests. They are written as
follows.
Notice that `test' is itself not a part of the shell, but is a program
which works out
conditions and provides a return code. See the manual page on `test' for
more details.
string comparisons:
Note that an alternate syntax for writing these commands is to use the
square brackets,
instead of writing the word test.
Just as with the arithmetic expressions, Bash 2.x provides a syntax for
conditionals which is more similar to Java and C. While arithmetic C-like
expressions can be used within double parentheses, C-like tests can be
used within double square brackets. This C-like syntax is not allowed in
the Bourne shell, but is equivalent to the test/[ ] forms above.
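For instance, the same numeric test in both styles, plus a C-like pattern
test (a sketch):
if [ "$num" -gt 10 ]; then echo "big"; fi            # Bourne style
if (( num > 10 )); then echo "big"; fi               # C-like, Bash 2.x/ksh
if [[ $name == A* ]]; then echo "starts with A"; fi  # C-like pattern test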
Test is used by virtually every shell script written. It may not seem
that way, because test
is not often called directly. test is more frequently called as [. [ is
a symbolic link to test,
just to make shell programs more readable. If is also normally a shell
builtin (which means that the shell
itself will interpret [ as meaning test, even if your Unix environment
is set up differently):
$ type [
[ is a shell builtin
$ which [
/usr/bin/[
$ ls -l /usr/bin/[
lrwxrwxrwx 1 root root 4 Mar 27 2000 /usr/bin/[ -> test
This means that '[' is actually a program, just like ls and other
programs, so it must be surrounded by spaces:
Test is a simple but powerful comparison utility. For full details, run
man test on your system,
but here are some usages and typical examples.
Test is most often invoked indirectly via the if and while statements.
It is also the reason you will come
into difficulties if you create a program called test and try to run it,
as this shell builtin
will be called instead of your program!
if [ ... ]
then
# if-code
else
# else-code
fi
Note that fi is if backwards! This is used again later with case and
esac.
Also, be aware of the syntax - the "if [ ... ]" and the "then" commands
must be on different lines. Alternatively, the semicolon ";" can separate
them:
if [ ... ]; then
# do something
fi
if [ something ]; then
echo "Something"
elif [ something_else ]; then
echo "Something else"
else
echo "None of the above"
fi
#!/bin/ksh
if (( FILE_FOUND == 0 ))
then
Checking char/strings:
if [ $myvar = "hello" ]
then
echo "We have a match"
fi
Checking numbers:
if [ $# -gt 1 ]
then
echo "ERROR: should have 0 or 1 command-line parameters"
fi
Boolean operators
! not
-a and
-o or
if [ $num -lt 10 -o $num -gt 100 ]
then
echo "Number $num is out of range"
elif [ ! -w $filename ]
then
echo "Cannot write to $filename"
fi
if [ $myvar = "y" ]
then
echo "Enter count of number of items"
read num
if [ $num -le 0 ]
then
echo "Invalid count of $num was given"
else
... do whatever ...
fi
fi
-------------------------------------------------------------
NOTE 12:
alphabet="a b c d e" #
Initialise a string
count=0 #
Initialise a counter
while [ $count -lt 5 ] # Set up a
loop control
do # Begin
the loop
count=`expr $count + 1` #
Increment the counter
position=`bc $count + $count - 1` # Position
of next letter
letter=`echo "$alphabet" | cut -c$position-$position` # Get next
letter
echo "Letter $count is [$letter]" # Display
the result
done # End of
loop
if [ -f $dirname/$filename ]
then
echo "This filename [$filename] exists"
elif [ -d $dirname ]
then
echo "This dirname [$dirname] exists"
else
echo "Neither [$dirname] or [$filename] exist"
fi
-------------------------------------------------------------
NOTE 13: Loops and conditionals:
loops:
for-do-done
while-do-done
until-do-done
conditionals:
if-then-else-fi
case-esac
&&
||
IF
==
The basic type of condition is "if".
if [ $? -eq 0 ] ; then
print we are okay
else
print something failed
fi
if [ $? -eq 0 ] ; then
print we are okay
print We can do as much as we like here
fi
if [ -f /tmp/errlog ]
then
rm /tmp/errlog
else
echo "no errorlog found"
fi
if [ ! -f /tmp/errlog ]
then
#!/usr/bin/ksh
if [ `cat alert.log|wc -l` -gt 1 ]
then
echo "something you want to say if alert.log contains more than 1
line"
else
echo "something else you want to say"
fi
# Based on:
# full core SAA running: 4 mxs processes, all hk BS_ processes #
# hk mode : 0 mxs processes, all hk BS_ processes #
# not running : 0 mxs processes, 0 hk BS_ processes #
integer NO_MXS
integer NO_BS
#echo $NO_MXS
#echo $NO_BS
if (( NO_MXS==4 ))
then
echo "Running"
else
if (( NO_BS==0 ))
then
echo "Notrunning"
else
echo "HousekeepingMode"
fi
fi
CASE
====
The case statement functions like 'switch' in some other languages.
Given a particular variable,
jump to a particular set of commands, based on the value of that
variable.
While the syntax is similar to C on the surface, there are some major
differences;
*)
echo This is the default clause. we are not sure why or
echo what someone would be typing, but we could take
echo action on it here
;;
esac
Sometimes you want to break out a while loop, which contains a case,
like in this example:
#!/bin/sh
In this example, the program loops if the user typed "hello". But if the
user types "bye", the "break" statement will
quit the loop.
Note: ":" evaluates to "true", but you might also have used "while
true".
&& and ||
=========
The simplest conditional in the Bourne shell is the double ampersand &&.
When two commands are separated by a double ampersand, the second command
executes only if the first command returns a zero exit status (successful
completion).
Example:
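An illustrative one (the directory name is arbitrary):
mkdir /tmp/work && cd /tmp/work     # cd runs only if mkdir succeeded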
The opposite of && is the ||. When two commands are separated by ||, the
second command executes
only if the first command returns a nonzero exit status (indicating
failure).
Example:
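An illustrative one:
cp file.txt /backup || echo "copy failed"     # echo runs only if cp failed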
Loops
WHILE
=====
The basic loop is the 'while' loop; "while" something is true, keep
looping.
There are two ways to stop the loop. The obvious way is when the
'something' is no longer true.
The other way is with a 'break' command.
keeplooping=1;
while [[ $keeplooping -eq 1 ]] ; do
read quitnow
if [[ "$quitnow" = "yes" ]] ; then
keeplooping=0
fi
if [[ "$quitnow" = "q" ]] ; then
break;
fi
done
UNTIL
=====
The other kind of loop in ksh is 'until'. The difference between them is
that 'while' implies looping while something remains true.
'until' implies looping until something that is false becomes true.
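A small sketch, mirroring the while example above:
answer=""
until [[ "$answer" = "yes" ]] ; do
read answer
done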
FOR
===
A "for loop", is a "limited loop". It loops a specific number of times,
to match a specific number of items.
Once you start the loop, the number of times you will repeat is fixed.
The basic syntax is:
for var in one two three
do
print $var
done
Whatever name you put in place of 'var' will be updated by each value
following "in".
So the above loop will print out
one
two
three
But you can also have variables defining the item list. They will be
checked ONLY ONCE, when you start the loop.
for i in 1 2 3 4 5 6 7
do
cp x.txt $i
done
-------------------------------------------------------------
NOTE 14: Arrays
Arrays
Yes, you CAN have arrays in ksh, unlike the old Bourne shell. The syntax
is as follows:
array[1]=one
array[2]=two
array[3]=three
print ${array[1]}
print ${array[2]}
print ${array[3]}
An individual element is referred to as:
arrayname[subscript]
The first element in an array uses a subscript of 0, and the last
element position (subscript value)
is dependent on what version of the Korn shell you are using. Review
your system's Korn shell (ksh)
man page to identify this value.
In this first example, the colors red, green, and blue are assigned to
the first three positions of an array
named colors:
$ colors[0]=RED
$ colors[1]=GREEN
$ colors[2]=BLUE
Adding a dollar sign and an opening brace to the front of the general
syntax and a closing brace on the end
allows you to access individual array elements:
${arrayname[subscript]}
Using the array we defined above, let's access (print) each array
element one by one:
$ print ${colors[0]}
RED
$ print ${colors[1]}
GREEN
$ print ${colors[2]}
BLUE
$
$ print ${colors}      # no subscript: the first element (subscript 0) is printed
RED
$
The while construct can be used to loop through each position in the
array:
$ i=0
$ while [ $i -lt 3 ]
> do
> print ${colors[$i]}
> (( i=i+1 ))
> done
RED
GREEN
BLUE
$
Notice that a variable (i) was used for the subscript value each time
through the loop.
As another example:
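A sketch using the colors array defined above: ${colors[@]} expands to
all elements, so it can also drive a for loop:
$ print ${colors[@]}
RED GREEN BLUE
$ for c in ${colors[@]} ; do print $c ; done
RED
GREEN
BLUE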
Special variables
There are some "special" variables that ksh itself gives values to. Here
are the ones I find interesting
PWD - always the current directory
RANDOM - a different number every time you access it
$$ - the current process id (of the script, not the user's shell)
PPID - the "parent process"s ID. (BUT NOT ALWAYS, FOR FUNCTIONS)
$? - exit status of last command run by the script
PS1 - your "prompt". "PS1='$PWD:> '" is interesting.
$1 to $9 - arguments 1 to 9 passed to your script or function
APP_DIR=${APP_DIR:-/usr/local/bin}
(KSH only)
You can also get funky, by running an actual command to generate the
value. For example
DATESTRING=${DATESTRING:-$(date)}
(KSH only)
To count the number of characters contained in a variable string, use
${#varname}.
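For example:
$ X=hello
$ echo ${#X}
5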
Example 1:
----------
# mv logfile logfile.`date`
# mv logfile logfile.`date +%Y.%m.%d`
Example 2:
----------
MS korn shell:
# now=`date -u +%d`; export now
# echo $now
24
------------------------------------------------------------
NOTE 16: tput
What is tput?
The tput command, like most commands in UNIX, can be used either at your
shell command line or inside a shell script.
To gain a better understanding of tput, this article starts with the
command line, and then continues into shell script examples.
Cursor attributes
tput cup takes the row number first and the column number second. To move
the cursor to row 5, column 1 on a device, simply execute tput cup 5 1.
Another example would be tput cup 23 45, which would move the cursor to
row 23, column 45.
tput sc
The current cursor location must be saved first. To save the current
cursor position, include the sc option,
or "save cursor position."
tput cup 23 45
After the cursor location has been saved, the cursor coordinates will be
moved to 23,45.
tput rc
When the information has been displayed, the cursor must return to the
original location that was saved with tput sc.
To return the cursor to its last saved location, include the rc option,
or "restore cursor position."
------------------------------------------------------------
NOTE 17: Doing some arithmetic
CleanOldArchiveFiles()
{
cd $T2_ARCH_DIR
COUNT_BEFORE=$(find ${T2_ARCH_DIR} -type f -name "T2*" -exec ls -al
{} \; | wc -l)
PRESENT_DIR=`pwd`
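The function above is truncated here. As for the arithmetic itself, a
minimal ksh sketch along the same lines (COUNT_AFTER is an assumed helper
variable, not part of the original script):
COUNT_AFTER=$(find ${T2_ARCH_DIR} -type f -name "T2*" | wc -l)
(( REMOVED = COUNT_BEFORE - COUNT_AFTER ))   # ksh arithmetic with (( ))
echo "Removed ${REMOVED} archive files"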
------------------------------------------------------------
NOTE 18: shell script debugging
$ bash -x script-name
set
Note 1:
------
Write your shell scripts this way to make them easier to test and debug.
Shell script debugging is not easy. You have to put set -x or the echo
command into the script. You may want to test the script
in a test environment. And at least before publishing the script, you
have to delete the debug lines.
The following tip gives a hint about how to solve this problem without
errors.
if [[ -z $DEBUG ]];then
alias DEBUG='# '
else
alias DEBUG=''
fi
Everywhere you put a line that is only for testing, write it in the
following way:
DEBUG set -x
Or echo a parameter:
DEBUG echo $PATH
Or set a parameter that is valid only during the test:
Sample Script
#!/usr/bin/ksh
if [[ -z $DEBUG ]];then
alias DEBUG='# '
else
alias DEBUG=''
fi
LOG_FILE=/var/script.out
DEBUG LOG_FILE=$HOME/tmp/script.out
function add {
DEBUG set -x
a=$1
b=$2
return $(( a + b ))
}
# MAIN
add 2 2
echo " 2 + 2 = $?"
------------------------------------------------------------
NOTE 19: Examples
Example 1:
----------
#!/usr/bin/ksh
# Monitor the SPL p550 server
# By Albert
# version 0.1
umask 022
date=`date +%d-%m-%y`
time=`date +%H:%M`
[email protected]
exit 0
Example 2:
----------
#!/bin/ksh
# Monitor rsp logfile
#
PATH=/usr/ucb:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/etc:/usr/opt/SUNWmd/sbin
export PATH
umask 022
date=`date +%d-%m-%y`
time=`date +%H:%M`
[email protected],[email protected]
if [ -s /tmp/brokencursor.err ]
then
# echo "$date $time" > /tmp/brokencursor.err
mailx -r [email protected] -s "::: Check on ORA-01000 :::" $emailers < /tmp/brokencursor.err
else
echo "all OK" >> /tmp/brokencursor.log
fi
/bin/rm /tmp/brokencursor.err
exit 0
#!/bin/ksh
# name: spl
# purpose: script that will start or stop the spl stuff.
case "$1" in
start )
echo "starting spl"
echo "su - ccbsys -c '/prj/spl/SPLS3/bin/splenviron.sh -e SPLS3
-c "spl.sh -t start"'"
su - ccbsys -c '/prj/spl/SPLS3/bin/splenviron.sh -e SPLS3 -c
"spl.sh -t start"'
;;
stop )
echo "stopping spl"
echo "su - ccbsys -c '/prj/spl/SPLS3/bin/splenviron.sh -e SPLS3
-c "spl.sh -t stop"'"
su - ccbsys -c '/prj/spl/SPLS3/bin/splenviron.sh -e SPLS3 -c
"spl.sh -t stop"'
;;
* )
echo "Usage: $0 (start | stop)"
exit 1
esac
#!/usr/bin/ksh
SAVEPERIOD=5
echo "/prj/spl/splapp/SPLQ3"| \
while read DIR
do
cd ${DIR}
find . -type f -mtime +${SAVEPERIOD} -exec rm {} \;
done
exit 0
Initialise()
{
export SPLcomplog=$SPLSYSTEMLOGS/initialSetup.sh.log
if [ -f $SPLcomplog ]
then
rm -f $SPLcomplog
export RSP=$?
if [ $RSP -ne 0 ]
then
echo "ERROR - Cannot remove the old Log file $SPLcomplog "
exitFunc $RSP
fi
fi
touch $SPLcomplog
export RSP=$?
if [ $RSP -ne 0 ]
then
echo "ERROR - Cannot create Log file $SPLcomplog "
exitFunc $RSP
fi
export TMP1=$SPLSYSTEMLOGS/initialSetup.sh.tmp
}
exitFunc()
{
export RSP=$1
Log "Exiting $SCRIPTNAME with return code $RSP"
if [ -f $TMP1 ]
then
rm -f $TMP1 > /dev/null 2>&1
fi
exit $RSP
}
testDBconnection()
{
Log "Testing Database connection parameters entered in
configureEnv.sh"
if [ `which db2|wc -w` -gt 1 ]
then
Log "ERROR : cannot find \"db2\" Program. This is a PATH
prerequisit to the Install"
exitFunc 1
fi
. cisconnect.sh > $TMP1 2>&1
export RSP=$?
if [ $RSP -ne 0 ]
then
Log "ERROR : connecting to Database:"
Log -f "$TMP1"
Log "ERROR : Rerun configureEnv.sh and ensure database
connection parameters are correct"
Log "ERROR : Check DB2 Connect configuration to ensure
connection is o.K."
exitFunc $RSP
fi
}
Other example:
check_cron() {
# check whether the command is run by cron or by hand #
CRON_PID=`ps -ef | grep check_sudo | grep -v grep | awk '{print $3}'`
if [[ `ps -p ${CRON_PID} | grep -v TIME | awk '{print $4}'` == "cron" ]]
then
CRON_RUN="yes"
# Generate a random sleep time, to prevent all clients from hitting the distribution server at the same time #
random_sleeptime
else
CRON_RUN="no"
SLEEPTIME="1"
fi
}
Example 6:
----------
fi
if [ $status = UP ] ; then
# check logs
echo
echo "Check backup to rmt0"
echo "--------------------"
tail -2 /opt/back*/backup_to_rmt0.log
echo
echo "Check backup to rmt1"
echo "--------------------"
tail -7 /opt/backupscripts/backup_to_rmt1.log
echo
echo "Check backup from 520"
echo "---------------------"
ls -l /backups/520backups/oradb/conv.dmp
ls -l /backups/520backups/splenvs/*tar*
Example 7:
----------
#!/bin/sh
getinfo() {
USER=$1
PASS=$2
DB=$3
CONN="${USER}/${PASS}@${DB}"
echo "
set linesize 1000
set pagesize 1000
set trimspool on
SELECT CIS_RELEASE_ID,':', CM_RELEASE_ID
FROM CI_INSTALLATION;
" | sqlplus -S $CONN | grep '[0-9a-zA_Z]'
}
if [ $# -gt 0 ]
then
DB="$1"
else
DB="$SPLENVIRON"
fi
if [ "x$DB" = x ]
then
echo "dbvers: no environment"
exit 1
fi
Example 8:
----------
#!/usr/bin/sh
MARKER=/home/cissys/etc/marker-file
if [ $1 = "setmarker" ]
then
/bin/touch $MARKER
exit 0
fi
if [ $1 = "cleanup" ]
then
[ \! -f $MARKER ] && exit 1
fi
if [ $1 = "runbatch" ]
then
for ETM in `cut -d: -f1 /etc/cistab`
do
DIR1=`grep $ETM /etc/cistab|cut -d: -f3`
DIR2=`grep $ETM /etc/cistab|cut -d: -f4`
$DIR1/bin/splenviron.sh -q -e $ETM -c cdxcronbatch.sh \
>>$DIR2/cdxcronbatch.out 2>&1
done
exit 0
fi
exit 1
Example 9:
----------
cd /backups/oradb
if [ -f spltrain.dmp ]
then
echo "backup of spltrain is OK" >>
/opt/backupscripts/backupdatabases.log
else
echo "error backup of spltrain " >>
/opt/backupscripts/backupdatabases.log
fi
Example 10:
-----------
#!/usr/bin/ksh
Example 11:
-----------
Make dynamic Oracle exports from a shell script. You do not need to list
exp statements per database; the list of databases is extracted from some
file, like /etc/oratab.
#!/usr/bin/ksh
DATE=`date +%Y%m%d`
HOSTNAME=`hostname`
ORACONF=/etc/rc.oratab
set -x
# MAKE SURE THE ENVIRONMENT IS OK
ORACLE_BASE=/apps/oracle; export ORACLE_BASE
ORACLE_HOME=/apps/oracle/product/9.2; export ORACLE_HOME
LIBPATH=/apps/oracle/product/9.2/lib; export LIBPATH
ORACLE_TERM=xterm;export ORACLE_TERM
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
export TNS_ADMIN=/apps/oracle/product/9.2/network/admin
export ORAENV_ASK=NO
PATH=/usr/local/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java131/jre/bin; export PATH
PATH=$ORACLE_HOME/bin:$PATH;export PATH
# SAVE THE FORMER BACKUPS: LETS KEEP 1 EXTRA DAY ONLINE
# Lets copy the current file to another filesystem:
cd /backups/oradb
# Now lets save the current file on the same filesystem in 1dayago
cd /backups/oradb/1dayago
mv spl*dmp /backups/oradb/2dayago
cd /backups/oradb
mv spl*dmp /backups/oradb/1dayago
ExpOracle()
{
set -x
for i in `cat ${ORACONF} | grep -v \# | awk '{ print $1 }'`
do
SID_NAME=$i
BOOT=`grep $SID_NAME $ORACONF | awk '{ print $2}'`
if [ $BOOT = Y ] ;
then
su - oracle -c "
ORACLE_SID=${SID_NAME}
export ORACLE_SID
cd /backups/oradb
exp system/cygnusx1@$SID_NAME file=$SID_NAME.$HOSTNAME.$DATE.dmp full=y statistics=none
"
fi
sleep 5
if [ -f $SID_NAME.$HOSTNAME.$DATE.dmp ]
then
echo "backup of $SID_NAME is OK" >>
/opt/backupscripts/backupdatabases.log
else
echo "error backup of $SID_NAME " >>
/opt/backupscripts/backupdatabases.log
fi
done
}
ExpOracle
Example 12:
-----------
Example 13:
-----------
kill `ps -ef | grep /dir/dir/abc | grep -v grep | awk '{print $2}'`
Example 14:
-----------
#!/usr/bin/ksh
#
# description: start and stop the Documentum Content Server environment
from dmadmin account
# called by: dmadmin
#
DOCBASE_NM1=dmw_et
DOCBASE_NM2=dmw_et3
function log
{
echo $(date +"%Y/%m/%d %H.%M.%S %Z") 'documentum.sh:' ${@}
}
case ${1} in
start)
# Starting DocBroker
cd $DOCUMENTUM/dba
./dm_launch_Docbroker
./dm_start_$DOCBASE_NM1
./dm_start_$DOCBASE_NM2
# Starting Tomcat services
cd $DM_HOME/tomcat/bin
./startup.sh
;;
stop)
# Stopping Tomcat services
cd $DM_HOME/tomcat/bin
./shutdown.sh
# Stopping DocBroker
cd $DOCUMENTUM/dba
./dm_shutdown_$DOCBASE_NM1
./dm_shutdown_$DOCBASE_NM2
./dm_stop_Docbroker
;;
clean_logs)
# Call myself to stop stuff
${0} stop
# Remove the log files
find $DOCUMENTUM/dba/log -type f -name "*" -exec rm -rf {} \;
# Call myself to restart stuff
${0} start
;;
clean_performance)
# Call myself to stop stuff
${0} stop
# Remove the performance test logs
find $DOCUMENTUM/dba/log -type d -name "perftest*" -exec rm -rf {} \;
find $DOCUMENTUM/dba/log -type d -name "perfuser*" -exec rm -rf {} \;
find $DOCUMENTUM/dba/log -type d -name "RefAppUser*" -exec rm -rf {} \;
# Call myself to restart stuff
${0} start
;;
kill)
cd $DOCUMENTUM/dba
./dm_launch_Docbroker -k
;;
*)
echo "Usage: $0 {start|stop|kill|clean_logs|clean_performance}"
exit 1
esac
exit 0
Example 15:
-----------
# At logon, check whether a dmgr or nodeagent is running for this domain
WASUSR=`whoami`
echo ""
Example 16:
-----------
echo ""
read CLEARMSG?"Press C or c to clear this message or any other key to
keep it : "
if [ "${CLEARMSG}" = "C" ] || [ "${CLEARMSG}" = "c" ]; then
if [ -f ~/ExtraMessage.txt ]; then
rm ~/ExtraMessage.txt
fi
fi
Example 17:
-----------
# Check arguments
if [ ${#} != 3 ]
then
log "Usage: ${0} <enviroment> <installFilesFolder> <installTarget>"
exit 1
fi
#
if [ -z "$1" ]
then
echo "use : build.sh PROGNAME
e.g. build.sh CLBDSYNC
"
exit 1
fi
Example 18:
-----------
- Read in a Variable
From a user we read with: read var. Then the users can type something
in. One should first print something like:
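For example (an illustrative prompt):
print -n "Enter a filename: "
read var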
last | sort | {
while read myline;do
# commands
done }
Example 19:
-----------
#!/bin/sh
#
************************************************************************
****
# This script is used to start Tomcat
# It calls the startup.sh script under $CATALINA_HOME/bin.
#
#
************************************************************************
****
if [ ! -d $CATALINA_HOME/bin ]
then
echo "Unable to find directory $CATALINA_HOME/bin"
else
$CATALINA_HOME/bin/startup.sh
fi
Example 20:
-----------
export SCRIPTNAME=$0
export SPLQUITE=N
export SPLCOMMAND=""
export SPLENVIRON=""
export MYID=`id |cut -d'(' -f2|cut -d')' -f1`
export SPLSUBSHELL=ksh
Example 21:
-----------
Get a certain number of columns from df -k output:
#!/usr/bin/ksh
for i in `df -k | awk '{print $7}' | grep -v "Filesystem"`
do
echo "Albert"
done
#!/usr/bin/ksh
cd ~
rm -rf /root/alert.log
echo "Important alerts in errorlog: " >> /root/alert.log
errpt | grep -i STORAGE >> /root/alert.log
errpt | grep -i QUORUM >> /root/alert.log
errpt | grep -i ADAPTER >> /root/alert.log
errpt | grep -i VOLUME >> /root/alert.log
errpt | grep -i PHYSICAL >> /root/alert.log
errpt | grep -i STALE >> /root/alert.log
errpt | grep -i DISK >> /root/alert.log
errpt | grep -i LVM >> /root/alert.log
errpt | grep -i LVD >> /root/alert.log
errpt | grep -i UNABLE >> /root/alert.log
errpt | grep -i USER >> /root/alert.log
errpt | grep -i CORRUPT >> /root/alert.log
cat /root/alert.log
Example 22:
-----------
Notes:
------
% cd /u01/app/oracle/product/9.2.0/network/log
% lsnrctl set log_status off
% mv listener.log listener.old
% lsnrctl set log_status on
case $IN in
start)
for dbase in `grep -v "^#" /etc/oratab | sed -e 's/:.*$//'`
do
su - $dbase -c "/beheer/oracle/cluster/orapkg.sh start"
done;;
Note 1:
-------
LONGFILE=myfile
filename=`echo $LONGFILE | awk '{print substr($0,1,3)}'`
echo $filename
Note 2:
-------
filename=whatever
first3chars=`echo $filename | cut -c 1-3`
echo $first3chars
Note 3:
-------
Q:
hi
i need to name a file with a substring of a another file name.
A:
#!/bin/sh
F=abc.txt
F1="${F%.*}_1.${F##*.}"
echo "F1=$F1
A:
t1=`basename $0 .txt`
t2=${t1}_1.txt
echo "new file name:-${t2}"
Example 24:
-----------
split command:
--------------
To split large files into smaller files in Unix, use the split command.
At the Unix prompt, enter:
split [options] filename prefix
Replace filename with the name of the large file you wish to split, and
prefix with the name to give the output file pieces. The options include:
-l linenumber
-b bytes
The split command will give each output file it creates the name prefix
with an extension tacked to the end
that indicates its order. By default, the split command adds aa to the
first output file, proceeding through
the alphabet to zz for subsequent files. If you do not specify a prefix,
most systems use x .
Examples
In this simple example, assume myfile is 3,000 lines long:
split myfile
This will output three 1000-line files: xaa, xab, and xac.
for f in x*
do
runDataProcessor $f > $f.out &
done
wait
for k in *.out
do
cat $k >> combinedResult.txt
done
csplit command:
---------------
$ csplit original.txt 11 72 98
the csplit command would create four files: the xx00 file would contain
lines 1-10, the xx01 file would contain lines 11-71,
the xx02 file would contain lines 72-97, the xx03 file would contain
lines 98-108.
The Argument parameter can also contain the following symbols and
pattern strings:
/Pattern/
Creates a file that contains the segment from the current line up to,
but not including, the line containing the specified pattern.
The line containing the pattern becomes the current line.
Example:
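The split_it.sh listing itself is missing here; a minimal sketch that reproduces the output below, assuming an input file letters.txt that holds the letters a through l, one per line:
#!/bin/sh
# split_it.sh -- cut letters.txt into 3-line pieces named part.00 .. part.03
# csplit flags: -s = silent, -f = filename prefix, -n = number of suffix digits
csplit -s -f part. -n 2 letters.txt 4 7 10
for f in part.0*
do
echo "File $f contains:"
cat $f
done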
exit 0
$ ./split_it.sh
File part.00 contains:
a
b
c
File part.01 contains:
d
e
f
File part.02 contains:
g
h
i
File part.03 contains:
j
k
l
Example:
To split the text of book into a separate file for each chapter, enter:
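(The command itself is missing here; the standard csplit invocation matching this description is:)
csplit book "/^Chapter *[0-9]/" {8}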
This creates 10 files, xx00 through xx09. The xx00 file contains the
front matter that comes before the first chapter.
Files xx01 through xx09 contain individual chapters. Each chapter begins
with a line that contains only the word Chapter and the chapter number.
Example:
To specify the prefix chap for the files created from book, enter:
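(Again the command line is missing; it would be:)
csplit -f chap book "/^Chapter *[0-9]/" {8}
This creates the files chap00 through chap09.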
Example:
#!/bin/sh
#
# Split a file
#
if [ $# -lt 2 ]
then
echo "Syntax is ./lineSplit.sh <file name> <split line number>"
echo "Example: ./lineSplit.sh bp.out 25000"
exit 1
fi
file=$1
split=$2
lineMax=`wc -l $file | awk '{print $1}'`
counter=1
i=1
while [ $counter -lt $lineMax ]
do
split1=`expr $counter`
split2=`expr $counter + $split - 1`
sed -n "$split1","$split2"p $file > fragment."$i"
i=`expr $i + 1`
counter=`expr $split2 + 1`
done
Example 25:
-----------
If you consider some line of text, you can tell your shell what the field separator is.
In normal text, a separator will be a "space" or a "tab" etc., but if you look for example at your $PATH
variable, the field separator will be the ":" symbol.
#!/usr/bin/ksh
IFS=:    # Here we say that the field separator is the : character.
for p in $PATH
do
if [ -x $p/$1 ]
then
echo $p/$1
exit 0
fi
done
echo "No $1 found in path"
exit 1
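Assuming the script above is saved as inpath.sh, a quick usage example: "./inpath.sh ls" prints something like /usr/bin/ls, while "./inpath.sh nosuchcmd" prints "No nosuchcmd found in path".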
Example 26:
-----------
MAIL_TEXT="Goodmorning,"
MAIL_TEXT="${MAIL_TEXT}\n\nThere's no file found in ${ARCH_DIR}
younger than 1 day."
MAIL_TEXT="${MAIL_TEXT}\nThat means that we have one of the following
situations:"
MAIL_TEXT="${MAIL_TEXT}\n- Something went wrong trying to receive the
file."
MAIL_TEXT="${MAIL_TEXT}\n- Or it was a special day where no files are
send."
ABC_INST=$1
typeset -i ABC_PROCS
STATUS=""
Status_ABC()
{
# Description: Function which gives the status of all processes: Running or Notrunning.
# Calling: Status_ABC
case ${ABC_INST} in
aa)
ABC_PROCS=$(ps -ef | grep Command | grep -v grep | grep "s 1" | wc -l)
;;
bb)
ABC_PROCS=$(ps -ef | grep Command | grep -v grep | grep "s 2" | wc -l)
;;
cc)
ABC_PROCS=$(ps -ef | grep $USER | grep -v grep | grep "abc.$(uname -n)" | wc -l)
;;
*)
echo "Wrong parameter. Use \"aa\", \"bb\" or \"cc\""
exit
;;
esac
if [[ ${ABC_PROCS} = 2 ]]
then
STATUS="Running"
else
STATUS="Notrunning"
fi
echo "${STATUS}"
}
# MAIN
Status_ABC
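A usage note (assuming the script above is saved as status_abc.sh): running "./status_abc.sh aa" prints Running when exactly two matching processes are found, and Notrunning otherwise.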
Note:
-----
typeset
$ typeset -r year=2000
$ echo $year
2000
$ year=2001
ksh: year: is readonly
Example 28:
-----------
#!/bin/bash
##### Constants
##### Functions
function system_info
{
echo "<h2>System release info</h2>"
echo "<p>Function not yet implemented</p>"
} # end of system_info
function show_uptime
{
echo "<h2>System uptime</h2>"
echo "<pre>"
uptime
echo "</pre>"
} # end of show_uptime
function drive_space
{
echo "<h2>Filesystem space</h2>"
echo "<pre>"
df
echo "</pre>"
} # end of drive_space
function home_space
{
# Only the superuser can get this information
} # end of home_space
##### Main
Or...
#!/bin/bash
##### Constants
##### Functions
function system_info
{
echo "<h2>System release info</h2>"
echo "<p>Function not yet implemented</p>"
} # end of system_info
function show_uptime
{
echo "<h2>System uptime</h2>"
echo "<pre>"
uptime
echo "</pre>"
} # end of show_uptime
function drive_space
{
echo "<h2>Filesystem space</h2>"
echo "<pre>"
df
echo "</pre>"
} # end of drive_space
function home_space
{
# Only the superuser can get this information
} # end of home_space
function write_page
{
cat <<- _EOF_
<html>
<head>
<title>$TITLE</title>
</head>
<body>
<h1>$TITLE</h1>
<p>$TIME_STAMP</p>
$(system_info)
$(show_uptime)
$(drive_space)
$(home_space)
</body>
</html>
_EOF_
} # end of write_page
function usage
{
echo "usage: system_page [[[-f file ] [-i]] | [-h]]"
}
##### Main
interactive=
filename=~/system_page.html
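The option-processing loop that normally completes this Main section is missing here; a minimal sketch of how it could continue (the standard positional-parameter pattern):
while [ "$1" != "" ]; do
    case $1 in
        -f | --file )          shift
                               filename=$1
                               ;;
        -i | --interactive )   interactive=1
                               ;;
        -h | --help )          usage
                               exit
                               ;;
        * )                    usage
                               exit 1
    esac
    shift
done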
=====================
3. BOOT and Shutdown:
=====================
init or shutdown are normally best: they run the kill scripts.
halt or reboot do not run the kill scripts properly.
- If you say init 6, or shutdown -i6, the system reboots and restarts into the runstate defined as the default
in the inittab file.
- If you say init 0, the system cleanly shuts down, and you can power off the system.
- If you say init 5 (on Solaris), that is equivalent to the poweroff command: the system cleanly shuts down,
and you can power off the system.
The exact options differ per platform, so be sure to read the man page for shutdown for your operating system.
With no argument, shutdown will take the system into single user mode.
sync<enter>
sync<enter>
init 0<enter>
- Shutdown scripts:
Like startup scripts, the system initialization directories (usually /etc/rcN.d) contain shutdown scripts
which are fired up by init during an orderly shutdown (i.e. when either the init command is used to change
the runlevel or when the shutdown command is used).
The usual convention is to use the letter K in front of a number,
followed by a service name, such as K56network.
The number determines the order in which the scripts are fired up when
the system transitions into a particular run level.
You can use the init, shutdown and halt commands. The shutdown command
stops the system in an orderly fashion.
If you need a customized shutdown sequence, you can create a file called
/etc/rc.shutdown.
If this file exists, it is called by the shutdown command and is
executed first.
This can be useful, for example, if you need to close a database prior to a shutdown.
If rc.shutdown fails (non-zero return code value), the shutdown cycle is terminated.
Example rc.shutdown:
--------------------
#cat /etc/rc.shutdown
#!/bin/ksh
# stop Control-SA/Agent
/etc/rc.ctsa stop
/etc/rc.mwa stop
/etc/rc.opc stop
/etc/rc.directoryserver stop
#Stop the Tivoli Enterprise Console Logfile Adapter
if [ -f /beheer/Tivoli/lcf/bin/aix4-r1/TME/TEC/adapters/bin/init.tecad_logfile ]; then
/beheer/Tivoli/lcf/bin/aix4-r1/TME/TEC/adapters/bin/init.tecad_logfile stop aix-default >/dev/null 2>&1
echo "Tivoli Enterprise Console Logfile Adapter stopped."
fi
exit 0
3.1.3 Shutdown a Linux system:
==============================
To shut down Red Hat Linux, issue the shutdown command. You can read the
shutdown man page for complete details,
but the two most common uses are:
/sbin/shutdown -h now
/sbin/shutdown -r now
You must run shutdown as root. After shutting everything down, the -h
option will halt the machine,
and the -r option will reboot.
Non-root users can use the reboot and halt commands to shut down the system while in runlevels 1 through 5.
However, not all Linux operating systems support this feature.
If your computer does not power itself down, be careful not to turn off the computer until you see a message
indicating that the system is halted.
3.2 Booting:
============
SunOS: /vmunix
Solaris 8 = SunOS 5.8: /kernel/unix
AIX: /unix
- OpenBoot PROM:
With the command "ok boot disk5 kernel/unix -s", the PROM will look for the primary boot program bootblk
on the alias disk5, which could be a physical device such as
/iommu/sbus/espdma@f,400000/esp@f,800000/sd@0,0
The primary boot program will then load "ufsboot", which in turn loads the kernel as specified.
Thus, after the simple boot command, the boot process goes on in the following manner:
To view boot messages:
$ more /var/adm/messages
$ /usr/sbin/dmesg
(Before editing kernel parameters: 1. login as root; 2. keep a copy of /etc/system, for example as /etc/system.orig.)
An entry in /etc/inittab has the format: id:rstate:action:process
In /sbin we find the scripts rc0 - rc6, rcS. These are not links, but true shell scripts.
In /etc we find the links rc0 - rc6, rcS.
Contents of /etc/rc2.d:
# ls /etc/rc2.d
K20spc@ S70uucp* S80lp*
K60nfs.server* S71rpc* S80spc@
K76snmpdx* S71sysid.sys* S85power*
K77dmi* S72autoinstall* S88sendmail*
README S72inetsvc* S88utmpd*
S01MOUNTFSYS* S73nfs.client* S89bdconfig@
S05RMTMPFILES* S74autofs* S91leoconfig*
S20sysetup* S74syslog* S92rtvc-config*
S21perf* S74xntpd* S92volmgt*
S30sysid.net* S75cron* S93cacheos.finish*
S47asppp* S76nscd* S99audit*
S69inet* S80PRESERVE* S99dtlogin*
The /etc/rcn.d scripts are always run in ASCII sort order. The scripts have names of the form:
[K,S][0-9][0-9]*
Files beginning with K are run to terminate (kill) a system process.
Files beginning with S are run to start a system process.
The advantage of having individual scripts is that you can stop or start individual processes
by running such a script, without rebooting or changing the run level.
Restart functionality
# /etc/init.d/filename start
For example, if you want to restart the NFS server, you can do the
following:
# /etc/init.d/nfs.server stop
# /etc/init.d/nfs.server start
Use the ps and grep commands to verify whether the service has been
stopped or started.
# ps -ef | grep service
If you want to add a run control script to start and stop a service,
copy the script into the /etc/init.d directory and create links in the
rc*.d
directory you want the service to start and stop.
See the README file in each /etc/rc*.d directory for more information
on naming run control scripts.
The procedure below describes how to add a run control script.
# cd /etc/init.d
# ln filename /etc/rc2.d/Snnfilename
# ln filename /etc/rcn.d/Knnfilename
(or
cd /etc/rc2.d
ln /etc/init.d/filename S22filename
)
Use the ls command to verify that the script has links in the
specified directories.
# cp xyz /etc/init.d
# cd /etc/init.d
# ln xyz /etc/rc2.d/S100xyz
# ln xyz /etc/rc0.d/K100xyz
# ls /etc/init.d /etc/rc2.d /etc/rc0.d
#!/bin/ksh
# name: spl
# purpose: script that will start or stop the spl stuff.
case "$1" in
start )
echo "starting spl"
echo "su - ccbsys -c '/prj/spl/SPLS3/bin/splenviron.sh -e SPLS3
-c "spl.sh -t start"'"
su - ccbsys -c '/prj/spl/SPLS3/bin/splenviron.sh -e SPLS3 -c
"spl.sh -t start"'
;;
stop )
echo "stopping spl"
echo "su - ccbsys -c '/prj/spl/SPLS3/bin/splenviron.sh -e SPLS3
-c "spl.sh -t stop"'"
su - ccbsys -c '/prj/spl/SPLS3/bin/splenviron.sh -e SPLS3 -c
"spl.sh -t stop"'
;;
* )
echo "Usage: $0 (start | stop)"
exit 1
esac
#!/sbin/sh
# Copyright (c) 1984, 1986, 1987, 1988, 1989 AT&T
# All Rights Reserved
PATH=/usr/sbin:/usr/bin
set `/usr/bin/who -r`
if [ -d /etc/rc3.d ]
then
for f in /etc/rc3.d/K*
{
if [ -s ${f} ]
then
case ${f} in
*.sh) . ${f} ;;           # source it
*) /sbin/sh ${f} stop ;;  # sub shell
esac
fi
}
for f in /etc/rc3.d/S*
{
if [ -s ${f} ]
then
case ${f} in
*.sh) . ${f} ;;           # source it
*) /sbin/sh ${f} start ;; # sub shell
esac
fi
}
fi
if [ $9 = 'S' -o $9 = '1' ]
then
echo 'The system is ready.'
fi
Example:
--------
mt -f /dev/rmt0 rewind
tar -xvf /dev/rmt1.1 filename
mt -f /dev/rmt0.1 fsf 2 (for the third file; afterwards the tape pointer is at the start of file 4)
(fsf = forward space file, bsf = backward space file)
#
# Startup for Oracle Databases
#
ORACLE_HOME=/opt/oracle/product/8.0.6
ORACLE_OWNER=oracle
if [ ! -f $ORACLE_HOME/bin/dbstart ] ;then
echo "Oracle startup: cannot start"
exit
fi
case "$1" in
'start')
# Start the Oracle databases
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbstart" > /dev/null 2>&1
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl start" > /dev/null 2>&1
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl dbsnmp_start" > /dev/null 2>&1
;;
'stop')
# Stop the Oracle databases
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl dbsnmp_stop" > /dev/null 2>&1
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl stop" > /dev/null 2>&1
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbshut" > /dev/null 2>&1
;;
*)
echo "Usage: $0 { start | stop }"
;;
esac
Another example:
----------------
more /etc/init.d/dbora
###################################
#
# usage: dbstart
#
# This script is used to start ORACLE from /etc/rc(.local).
# It should ONLY be executed as part of the system boot procedure.
#
#####################################
ORATAB=/var/opt/oracle/oratab
trap 'exit' 1 2 3
case $ORACLE_TRACE in
T) set -x ;;
esac
#
# Loop for every entry in oratab file and try to start
# that ORACLE
#
PFILE=${ORACLE_HOME}/dbs/init${ORACLE_SID}.ora
else
VERSION="5"
fi
fi
if test -f $ORACLE_HOME/dbs/sgadef${ORACLE_SID}.dbf -o \
-f $ORACLE_HOME/dbs/sgadef${ORACLE_SID}.ora
then
STATUS="-1"
else
STATUS=1
fi
case $STATUS in
1) if [ -f $PFILE ] ; then
case $VERSION in
5) ior w pfile=$PFILE
;;
6) sqldba command=startup
;;
7) sqldba <<EOF
connect internal
startup
EOF
;;
7.3) svrmgrl <<EOF
connect internal
startup
EOF
;;
esac
if test $? -eq 0 ; then
echo ""
echo "Database \"${ORACLE_SID}\" warm started."
else
echo ""
echo "Database \"${ORACLE_SID}\" NOT started."
fi
else
echo ""
echo "Can't find init file for Database \"${ORACLE_SID}\"."
echo "Database \"${ORACLE_SID}\" NOT started."
fi
else
echo "Database \"${ORACLE_SID}\" NOT started."
fi
;;
esac
fi
;;
esac
done
DBPASSWORD=abc
DBPASSWORDFE=mrx
DBUSER=xyz
DBUSERFE=mry
EDITOR=vi
HOME=/opt/home/oracle
HZ=100
INPUTRC=/usr/local/etc/inputrc
LD_LIBRARY_PATH=/opt/oracle/product/8.0.6/lib
LESSCHARSET=latin1
LOG=/var/opt/oracle
LOGNAME=oracle
MANPATH=/usr/share/man:/usr/openwin/share/man:/usr/opt/SUNWmd/man:/opt/SUNWsymon/man:/opt/SUNWswusg/man:/opt/SUNWadm/2.2/man:/opt/local/man
NLS_LANG=american_america.we8iso8859p1
OPENWINHOME=/usr/openwin
ORACLE_BASE=/opt/oracle
ORACLE_HOME=/opt/oracle/product/8.0.6
ORACLE_SID=ORCL
ORA_NLS33=/opt/oracle/product/8.0.6/ocommon/nls/admin/data
PATH=/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/ucb:/usr/openwin/bin:/opt/oracle/product/8.0.6/bin
PROGRAMS=/opt/local/bin/oracle
PS1=\u@\h[\w]>
SHELL=/sbin/sh
TERM=vt100
TZ=MET
http://publib.boulder.ibm.com/infocenter/pseries/index.jsp?topic=/com.ibm.aix.doc/aixbman/admnconc/under_sys.htm
During a hard disk boot, the boot image is found on a local disk created
when the operating system was installed.
During the boot process, the system configures all devices found in the
machine and initializes other basic software
required for the system to operate (such as the Logical Volume Manager).
At the end of this process,
the file systems are mounted and ready for use. For more information
about the file system used during boot processing,
see Understanding the RAM File System.
Most users perform a hard disk boot when starting the system for general
operations.
The system finds all information necessary to the boot process on its
disk drive.
When the system is started by turning on the power switch (a cold boot)
or restarted with the
reboot or shutdown commands (a warm boot), a number of events must occur
before the system is ready for use.
These events can be divided into the following phases:
2. The ROS initial program load (IPL) checks the user bootlist, a list
of available boot devices.
This boot list can be altered to suit your requirements using the
bootlist command. If the user boot list
in non-volatile random access memory (NVRAM) is not valid or if a valid
boot device is not found,
the default boot list is then checked. In either case, the first valid
boot device found in the boot list
is used for system startup. If a valid user boot list exists in NVRAM,
the devices in the list are checked in order.
If no user boot list exists, all adapters and devices on the bus are
checked. In either case, devices are checked
in a continuous loop until a valid boot device is found for system
startup.
Note:
The system maintains a default boot list located in ROS and a user boot
list stored in NVRAM,
for a normal boot. Separate default and user boot lists are also
maintained for booting from the Service key position.
3. When a valid boot device is found, the first record or program sector
number (PSN) is checked.
If it is a valid boot record, it is read into memory and is added to the
IPL control block in memory.
Included in the key boot record data are the starting location of the
boot image on the boot device,
the length of the boot image, and instructions on where to load the boot
image in memory.
4. The boot image is read sequentially from the boot device into memory
starting at the location
specified in NVRAM. The disk boot image consists of the kernel, a RAM
file system, and base customized
device information (customized reduced ODM).
The init process starts the rc.boot script. Phase 1 of the rc.boot
script performs the base device configuration,
and it includes the following steps:
- The boot script calls the restbase program to build the customized Object Data Manager (ODM) database
in the RAM file system from the compressed customized data.
- The boot script starts the configuration manager, which accesses phase 1 ODM configuration rules to configure
the base devices.
- The configuration manager starts the sys, bus, disk, SCSI, and the Logical Volume Manager (LVM) and
rootvg volume group configuration methods.
- The configuration methods load the device drivers, create special files, and update the customized data
in the ODM database.
The init process starts phase 2 of the rc.boot script. Phase 2 of rc.boot includes the following steps:
- Call the ipl_varyon program to vary on the rootvg volume group.
- Mount the hard disk file systems onto their normal mount points.
- Run the swapon program to start paging.
- Copy the customized data from the ODM database in the RAM file system to the ODM database in the hard disk file system.
- Exit the rc.boot script.
- After phase 2 of rc.boot, the boot process switches from the RAM file
system to the hard disk root file system.
- Then the init process runs the processes defined by records in the
/etc/inittab file.
One of the instructions in the /etc/inittab file runs phase 3 of the
rc.boot script,
At the end of this process, the system is up and ready for use.
# who -r
# cat /etc/.init.state
The telinit command directs the actions of the init process by taking a
one character parameter
and signaling the init process to perform the appropriate action.
So the telinit command sets the system at a specific runlevel.
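For example, to signal init to move the system to runlevel 2:
# telinit 2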
Describe LED codes (121, 223, 229, 551, 552, 553, 581, OC31, OC32)
-reduced ODM from BLV copied into RAMFS: OK=510, NOT OK=LED 548:
-LED 511: bootinfo -b is called to determine the last bootdevice
-ipl_varyon of rootvg: OK=517,ELSE 551,552,554,556:
-LED 555,557: mount /dev/hd4 on temporary mountpoint /mnt
-LED 518: mount /usr, /var
-LED 553: syncvg rootvg, or inittab problem
-LED 549
-LED 581: tcp/ip is being configured, and there is some problem
105
CPU planar board is not securely seated in the adapter slot on the
microchannel bus.
--------------------------------------------------------------------------------
200
Key is in SECURE mode and the system will NOT boot until the key is
turned to either
NORMAL or SERVICE mode.
--------------------------------------------------------------------------------
201
LV hd5 (boot logical volume) has been corrupted. To correct this
situation, perform the following:
. Boot system in service mode. Either boot the system from boot
diskettes or boot tape OF THE SAME VERSION AND
LEVEL AS THE SYSTEM.
. To perform system maintenance functions from the INSTALL and
MAINTENANCE menu, enter the following command,
where hdisk0 is the drive that contains the boot logical volume (/blv)
/usr/sbin/getrootfs hdisk0
. From maintenance mode make sure /tmp has at least enough free disk
space to create the tape image when
the 'bosboot' command is executed.
. Make sure /dev/hd6 is swapped on via the lsps -a command.
You don't want to get 'paging space low' messages when creating a new
boot image on /dev/hd5. Recreate
a new boot image by executing the command:
bosboot -a -d /dev/hdisk0
Turn key to normal mode
shutdown -Fr
--------------------------------------------------------------------------------
221
Boot system in service mode. Either boot the system from boot diskettes
or boot tape
Select option to perform system maintenance functions from the INSTALL
and MAINTENANCE menu.
Enter the following command:
/usr/sbin/getrootfs hdisk0 from maintenance mode
Enter the command
bootlist -m normal hdisk0 or whatever your boot drive name is (eg.,
hdisk1)
shutdown -Fr
If the above method fails, try the following:
Shutdown your machine and unplug your system battery before you power
up.
Wait 30 minutes for battery to drain.
Reconnect battery.
Once you power up and a 221 is displayed on your LED
flip the key to service mode then back to normal mode
plug in system battery
Once this is done, the NVRAM should return to normal.
--------------------------------------------------------------------------------
223/229
Cannot boot in normal mode from any of the devices listed in the NVRAM
bootlist.
Typically the cause of this problem is the machine has just been moved
and the SCSI adapter card is not
firmly seated in the adapter slot on the microchannel bus. Make sure the
card is seated properly and all
internal and external SCSI connectors are firmly attached.
Another possibility is that a NEW SCSI device has been added to the
system and there are two or more devices
with the same SCSI ID.
--------------------------------------------------------------------------------
233
Attempting to IPL from devices specified in NVRAM device list. If
diagnostics indicate a bad drive is
suspected, BEFORE replacing the physical volume, replace the LOGIC
ASSEMBLY on the drive housing first.
Saves time in retrying to rebuild a system especially if full backups
haven't been made recently.
--------------------------------------------------------------------------------
552
BAD ERROR. The VG rootvg could not be varied on. Most likely scenario is
that the VGDA on the default
boot drive (hdisk0) got hammered/corrupted. To resolve this problem, try
the following:
1) Boot system in service mode. Either boot the system from boot
diskettes or boot tape
2) Select option to perform system maintenance functions from the
INSTALL and MAINTENANCE menu.
3) Enter the following command:/usr/sbin/getrootfs hdisk0 from
maintenance mode. If there are at least two PVs in the VG rootvg, if one
fails to work with this command, try any of the remaining PVs (eg,
/etc/continue hdisk0 or /etc/continue hdisk1)
4) If the importvg command fails, as should the varyonvg command, then
perform the following from the command line:
8) If there are no error messages from the synclvodm command or the fsck
command, then mount the following
file systems:
9) If there are no error messages from these mount commands, then go to step '11'
10) If the previous step fails or the log redo process fails or
indicates any filesystems with an
unknown log device, then do the following 2 steps:
sync; sync;
halt
13) If the problem still persists, consult your local SE before you
attempt to RE-INSTALL your system.
--------------------------------------------------------------------------------
553
Your /etc/inittab file has been corrupted or truncated. To correct this
situation, perform the following:
boot system in service mode. Either boot the system from boot diskettes or boot tape, and select option 5
(perform system maintenance) from the INSTALL and MAINTENANCE menu.
Enter the command /etc/continue hdisk0 from maintenance mode.
Check to see that you have free space on those file systems that are
mounted on logical volumes /dev/hd3 and /dev/hd4.
If they are full, erase files that aren't needed.
Some space needs to be free on these logical volumes for the system to
boot properly.
Check to see if the /etc/inittab file looks OK. If not, go to the next
step, else consult your local SE
for further advice.
Place the MOST recent 'mksysb' tape into the tape drive. If you don't
have a 'mksysb' tape, get your
INSTALL/MAINT floppy and insert into your diskette drive.
Extract the /etc/inittab file from the media device mentioned.
Change directories to root (eg., cd /) first, then execute the following
command:
restore -xvf /dev/fd0 ./etc/inittab  - if a floppy disk
restore -xvf /dev/rmt0 ./etc/inittab - if a tape device
This will restore the contents of the /etc/inittab file to a reasonable
format to boot the system up with.
Depending on how current the /etc/inittab file is, you may have to
manually add, modify, or delete the
contents of this file.
shutdown -Fr
--------------------------------------------------------------------------------
581
This LED is displayed when the /etc/rc.net script is executed.
--------------------------------------------------------------------------------
727
Printer port is being configured BUT there is NO cable connected to the
configured port on the 16-port
concentrator OR the RJ-45 cable from the concentrator back to the 64-
port card isn't connected.
Either remove the printer in question from the ODM database (eg., rmdev
-l lp0 -d) OR
Reconnect the printer cable back to the port on the 16-port concentrator
OR
Re-connect the 16-port concentrator back to the 64-port adapter card.
To determine WHICH concentrator box that printer is connected to
--------------------------------------------------------------------------------
869
Most likely scenario is that you have two or more SCSI devices with the
same SCSI id on one SCSI controller.
To correct this situation...
Change one of the conflicting SCSI devices to use an UNUSED SCSI address
(0-7).
If this case fails, RESET your SCSI adapter(s).
--------------------------------------------------------------------------------
sysdumpdev -l This will determine which device has been assigned as the
primary and secondary dump devices
sysdumpstart -p (initiate dump to primary device)
sysdumpstart -s (initiate dump to secondary device)
sysdumpdev -z (indicates if a NEW dump exists)
sysdumpdev -L (indicates info about a previous dump)
Press keyboard sequence: CTRL-ALT-NUMPAD1 (for primary device)
Press keyboard sequence: CTRL-ALT-NUMPAD2 (for secondary device)
Insert a tape in the tape device you wish to dump the kernel data to
/usr/sbin/snap -gfkD -o /dev/rmt0
If your system is hung, the user MUST initiate or force a dump of the kernel data via the keyboard sequences
or the sysdumpstart commands listed above.
ROS IPL (Read Only Storage Initial Program Load). This phase includes a
power-on selftest, the location
of the bootdevice, and loading of the boot kernel into memory.
At boot time, once the POST is completed, the system will search the boot
list for a
bootable image. The system will attempt to boot from the first entry in
the bootlist.
Pressing the F5 key (or 5) during boot, will invoke the service
bootlist, which includes
the CDROM.
fd0
cd0
hdisk0
The bootlist can be changed using the same command, for example
# bootlist -m normal hdisk0 cd0
To record the current date and time in alog file named /tmp/mylog, enter
# date | alog -f /tmp/mylog
To see the list of logs defined in the alog database, run
# alog -L
AIX uses the default runlevel 2. This is the normal multi-user mode.
Runlevels 0,1 are reserved, 2 is normal, and 3-9 are configurable by the
Administrator.
init:2:initdefault:
brc::sysinit:/sbin/rc.boot 3 >/dev/console 2>&1 # Phase 3 of system boot
mkatmpvc:2:once:/usr/sbin/mkatmpvc >/dev/console 2>&1
atmsvcd:2:once:/usr/sbin/atmsvcd >/dev/console 2>&1
load64bit:2:wait:/etc/methods/cfg64 >/dev/console 2>&1 # Enable 64-bit execs
tunables:23456789:wait:/usr/sbin/tunrestore -R > /dev/console 2>&1 # Set tunables
rc:23456789:wait:/etc/rc 2>&1 | alog -tboot > /dev/console # Multi-User checks
fbcheck:23456789:wait:/usr/sbin/fbcheck 2>&1 | alog -tboot > /dev/console # run /etc/firstboot
srcmstr:23456789:respawn:/usr/sbin/srcmstr # System Resource Controller
rctcpip:23456789:wait:/etc/rc.tcpip > /dev/console 2>&1 # Start TCP/IP daemons
rcnfs:23456789:wait:/etc/rc.nfs > /dev/console 2>&1 # Start NFS Daemons
cron:23456789:respawn:/usr/sbin/cron
nimclient:2:once:/usr/sbin/nimclient -S running
piobe:2:wait:/usr/lib/lpd/pio/etc/pioinit >/dev/null 2>&1 # pb cleanup
qdaemon:23456789:wait:/usr/bin/startsrc -sqdaemon
writesrv:23456789:wait:/usr/bin/startsrc -swritesrv
uprintfd:23456789:respawn:/usr/sbin/uprintfd
shdaemon:2:off:/usr/sbin/shdaemon >/dev/console 2>&1 # High availability daemon
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6
l7:7:wait:/etc/rc.d/rc 7
l8:8:wait:/etc/rc.d/rc 8
l9:9:wait:/etc/rc.d/rc 9
logsymp:2:once:/usr/lib/ras/logsymptom # for system dumps
itess:23456789:once:/usr/IMNSearch/bin/itess -start search >/dev/null 2>&1
diagd:2:once:/usr/lpp/diagnostics/bin/diagd >/dev/console 2>&1
httpdlite:23456789:once:/usr/IMNSearch/httpdlite/httpdlite -r /etc/IMNSearch/httpdlite/httpdlite.conf & >/dev/console 2>&1
ha_star:h2:once:/etc/rc.ha_star >/dev/console 2>&1
dt_nogb:2:wait:/etc/rc.dt
cons:0123456789:respawn:/usr/sbin/getty /dev/console
srv:2:wait:/usr/bin/startsrc -s sddsrv > /dev/null 2>&1
perfstat:2:once:/usr/lib/perf/libperfstat_updt_dictionary >/dev/console 2>&1
ctrmc:2:once:/usr/bin/startsrc -s ctrmc > /dev/console 2>&1
lsof:2:once:/usr/lpp/aix4pub/lsof/mklink
monitor:2:once:/usr/lpp/aix4pub/monitor/mklink
nmon:2:once:/usr/lpp/aix4pub/nmon/mklink
ptxnameserv:2:respawn:/usr/java14/jre/bin/tnameserv -ORBInitialPort 2279 2>&1 >/dev/null # Start jtopasServer
ptxfeed:2:respawn:/usr/perfagent/codebase/jtopasServer/feed 2>&1 >/dev/null # Start jtopasServer
ptxtrend:2:once:/usr/bin/xmtrend -f /etc/perf/jtopas.cf -d /etc/perf/Top -n jtopas 2>&1 >/dev/null # Start trend
direct:2:once:/tmp/script_execute_after_reboot_pSeries 2>>/tmp/pSeries.050527_16:56.log
fmc:2:respawn:/usr/opt/db2_08_01/bin/db2fmcd #DB2 Fault Monitor Coordinator
smmonitor:2:wait:/usr/sbin/SMmonitor start > /dev/console 2>&1 # start SMmonitor daemon
Other observations:
-------------------
Purpose
Displays and alters the list of boot devices available to the system.
Syntax
bootlist [ { -m Mode } [ -r ] [ -o ] [ [ -i ] [ -V ] [ -F ] | [ [ -f File ] [ Device [ Attr=Value ... ] ... ] ] ] [ -v ]
The bootlist command allows the user to display and alter the list of
possible boot devices from which
the system may be booted. When the system is booted, it will scan the
devices in the list and attempt to
boot from the first device it finds containing a boot image.
The AIX "bootlist" command can be used to select the boot disk. This is
useful if you want to test
different AIX levels on the same system.
For example, assume hdisk0 has AIX 4.2.1 installed and hdisk1 AIX 4.3.3 installed.
Use one of the following "bootlist" commands to select which version will come up on the next reboot:
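(The command lines themselves are missing in this copy; they would be:)
# bootlist -m normal hdisk0    (next reboot comes up on AIX 4.2.1)
# bootlist -m normal hdisk1    (next reboot comes up on AIX 4.3.3)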
The second disk can be installed from CD, a "mksysb" tape, or using AIX
4.3's "alt_disk_install" capability.
Both CD and mksysb installs require downtime. The "alt_disk_install"
allows you to install the second disk from
a "mksysb" or clone your existing OS while the system is running
Purpose
Creates boot image.
Syntax
For General Use:
bosboot -Action [ -d Device ] [ -Options ... ]
Description
The bosboot command creates the boot image that interfaces with the
machine boot ROS (Read-Only Storage)
EPROM (Erasable Programmable Read-Only Memory).
The bosboot command creates a boot file (boot image) from a RAM (Random
Access Memory) disk file system and a kernel.
This boot image is transferred to a particular media that the ROS boot
code recognizes.
When the machine is powered on or rebooted, the ROS boot code loads the boot image from the media into memory.
ROS then transfers control to the loaded image's kernel.
Examples
- To create a boot image on the default boot logical volume on the fixed
disk from which the system is booted, enter:
bosboot -a
- To create a bootable image called /tmp/tape.bootimage for a tape
device, enter:
bosboot -ad /dev/rmt0 -b /tmp/tape.bootimage
- When you have migrated a disk like disk0 to disk1, and you need to
make the second disk bootable,
proceed as follows:
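(The first step is missing here; creating the boot image on the target disk would be:)
bosboot -ad /dev/DestinationDiskNumber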
Then:
bootlist -m normal DestinationDiskNumber
Then:
mkboot -c -d /dev/SourceDiskNumber
Once loaded, the BIOS tests the system, looks for and checks peripherals
and then locates a valid device
with which to boot the system. Usually, it first checks any floppy
drives and CD-ROM drives present for
bootable media, then it looks to the system's hard drives. The order of
the drives searched for booting
can often be controlled with a setting in BIOS. Often, the first hard
drive set to boot is the C drive or
the master IDE device on the primary IDE bus. The BIOS loads whatever
program is residing in the first sector
of this device, called the Master Boot Record or MBR, into memory. The
MBR is only 512 bytes in size and
contains machine code instructions for booting the machine along with the partition table. Once found and loaded,
the BIOS passes control to whatever program (the bootloader) is on the MBR.
3. bootloader in MBR
Linux boot loaders for the x86 platform are broken into at least two
stages. The first stage is a small
machine code binary on the MBR. Its sole job is to locate the second
stage boot loader and load the first part
of it into memory. Under Red Hat Linux you can install one of two boot
loaders: GRUB or LILO.
GRUB is the default boot loader, but LILO is available for those who
require it for their hardware setup
or who prefer it.
> If you are using LILO under Red Hat Linux, the second stage boot
loader uses information on the MBR
to determine what boot options are available to the user. This means
that any time a configuration change
is made or you upgrade your kernel manually, you must run the
/sbin/lilo -v -v command to write the appropriate
information to the MBR. For details on doing this, see the Section
called LILO in Chapter 4.
> GRUB, on the other hand, can read ext2 partitions and therefore simply
loads its configuration file
- /boot/grub/grub.conf - when the second stage loader is called.
Once the second stage boot loader is in memory, it presents the user
with the Red Hat Linux initial,
graphical screen showing the different operating systems or kernels it
has been configured to boot.
If you have only Red Hat Linux installed and have not changed anything in
/etc/lilo.conf or /boot/grub/grub.conf, the default entry boots automatically.
4. Kernel
Once the second stage boot loader has determined which kernel to boot,
it locates the corresponding
kernel binary in the /boot/ directory. The proper binary is the
/boot/vmlinuz-2.4.x-xx file that corresponds
to the boot loader's settings. Next the boot loader places the
appropriate initial RAM disk image,
called an initrd, into memory. The initrd is used by the kernel to load
any drivers not compiled into it
that are necessary to boot the system. This is particularly important if
you have SCSI hard drives or
are using the ext3 file system [1].
After the kernel has initialized all the devices on the system, it
creates a root device, mounts the root partition
read-only, and frees unused memory.
At this point, the kernel is loaded into memory and operational.
However, with no user applications to give
the user the ability to provide meaningful input to the system, not much
can be done with it.
5. init
The init program coordinates the rest of the boot process and configures
the environment for the user.
When the init command starts, it becomes the parent or grandparent of
all of the processes that start up
automatically on a Red Hat Linux system. First, it runs the
/etc/rc.d/rc.sysinit script, which sets
your environment path, starts swap, checks the file systems, and so on.
Basically, rc.sysinit takes care of
everything that your system needs to have done at system initialization.
For example, most systems use a clock,
so on them rc.sysinit reads the /etc/sysconfig/clock configuration file
to initialize the clock.
Another example is if you have special serial port processes which must
be initialized, rc.sysinit will
execute the /etc/rc.serial file.
/sbin/init
-> runs /etc/rc.d/rc.sysinit
-> runs /etc/inittab
-> inittab contains default runlevel: init runs all processes
for that runlevel /etc/rc.d/rcN.d/ ,
-> runs /etc/rc.d/rc.local
As you can see, none of the scripts that actually start and stop the
services are located in the
/etc/rc.d/rc5.d/ directory. Rather, all of the files in /etc/rc.d/rc5.d/
are symbolic links pointing
to scripts located in the /etc/rc.d/init.d/ directory. Symbolic links
are used in each of the rc directories
so that the runlevels can be reconfigured by creating, modifying, and
deleting the symbolic links without
affecting the actual scripts they reference.
As usual, the K* scripts are kill/stop scripts, and the S* scripts are
started in sequence by number.
The last thing the init program does is run any scripts located in
/etc/rc.d/rc.local.
At this point, the system is considered to be operating at runlevel 5.
You can use this file to add additional commands necessary for your
environment. For instance, you can start
additional daemons or initialize a printer.
For example, the Alpha architecture uses the aboot boot loader, while
the Itanium architecture uses
the ELILO boot loader.
- Runlevels
SysV Init
The SysV init is a standard process used by Red Hat Linux to control
which software the init command
launches or shuts off on a given runlevel. SysV init was chosen because it
is easier to use and more flexible
than the traditional BSD-style init process.
The configuration files for SysV init are in the /etc/rc.d/ directory.
Within this directory,
are the rc, rc.local, and rc.sysinit scripts as well as the following
directories:
init.d
rc0.d
rc1.d
rc2.d
rc3.d
rc4.d
rc5.d
rc6.d
The init.d directory contains the scripts used by the init command when
controlling services.
Each of the numbered directories represent the six default runlevels
configured by default under Red Hat Linux.
id:3:initdefault:
0 - Halt
1 - Single-user mode
2 - Not used (user-definable)
3 - Full multi-user mode
4 - Not used (user-definable)
5 - Full multi-user mode (with an X-based login screen)
6 - Reboot
If you are using LILO, you can enter single-user mode by typing "linux
single" at the LILO boot: prompt.
If you are using GRUB as your boot loader, you can enter single-user
mode using the following steps.
- In the graphical GRUB boot loader screen, select the Red Hat Linux
boot label and press [e] to edit it.
- Arrow down to the kernel line and press [e] to edit it.
- At the prompt, type single and press [Enter].
- You will be returned to the GRUB screen with the kernel information.
Press the [b] key to boot the system
into single user mode.
mount -n /proc
mount -o rw,remount /
- Installing GRUB:
Once the GRUB rpm package is installed, open a root shell prompt and run
the command
/sbin/grub-install <location>,
The following command installs GRUB to the MBR of the master IDE device on the primary IDE bus,
also known as the C drive:
/sbin/grub-install /dev/hda
(<type-of-device><bios-device-number>,<partition-number>)
The parentheses and comma are very important to the device naming
conventions. The <type-of-device> refers
to whether a hard disk (hd) or floppy disk (fd) is being specified.
The <bios-device-number> is the number of the device according to the
system's BIOS, starting with 0.
The primary IDE hard drive is numbered 0, while the secondary IDE hard
drive is numbered 1.
The ordering is roughly equivalent to the way the Linux kernel arranges
the devices by letters,
where the a in hda relates to 0, the b in hdb relates to 1, and so on.
File Names
When typing commands to GRUB involving a file, such as a menu list to
use when allowing the booting
of multiple operating systems, it is necessary to include the file
immediately after specifying
the device and partition. A sample file specification to an absolute
filename is organized as follows:
(<type-of-device><bios-device-number>,<partition-number>)/path/to/file,
for example, (hd0,0)/grub/grub.conf.
- Example grub.conf:
default=0
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
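The two menu sections themselves are missing in this copy; a minimal sketch of what they could look like (kernel version and partitions borrowed from the lilo.conf example below, so treat them as illustrative):
title Red Hat Linux (2.4.0-0.43.6)
        root (hd0,0)
        kernel /vmlinuz-2.4.0-0.43.6 ro root=/dev/hda5
        initrd /initrd-2.4.0-0.43.6.img
title DOS
        rootnoverify (hd0,1)
        chainloader +1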
This file would tell GRUB to build a menu with Red Hat Linux as the
default operating system, set to autoboot
it after 10 seconds. Two sections are given, one for each operating
system entry, with commands specific
to this system's disk partition table.
- Example lilo.conf:
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
message=/boot/message
lba32
default=linux
image=/boot/vmlinuz-2.4.0-0.43.6
label=linux
initrd=/boot/initrd-2.4.0-0.43.6.img
read-only
root=/dev/hda5
other=/dev/hda1
label=dos
Change to the directory that contains the image file. That might be on the original Red Hat CD.
Then use the following command:
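(The command itself is missing in this copy; for a 1.44 MB boot diskette it would typically be:)
dd if=boot.img of=/dev/fd0 bs=1440k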
System Shutdown
To shut down HP-UX for power-off, you can do any of the following:
# init 0
# shutdown -h -y now
The -h option to the shutdown command halts the system completely but
will prompt you for a message to issue users.
The -y option completes the shutdown without asking you any of the
questions it would normally ask.
When HP-UX is running on an nPartition, you can shut down HP-UX using
the shutdown command.
On nPartitions you have the following options when shutting down HP-UX:
Shutting down
/sbin/shutdown -r -y now Reboot
/sbin/shutdown -h -y now Stop system
/sbin/shutdown -y now Single user mode
When you are at the root prompt (from a single user mode restart), type the following command:
# reboot -h
Note 1:
-------
PDC
HP-UX systems come with firmware installed called Processor Dependent
Code. After the system is powered on
or the processor is RESET, the PDC runs self-test operations and
initializes the processor. PDC also identifies
the console path so it can provide messages and accept input. PDC would
then begin the "autoboot" process
unless you interrupt it during the 10-second interval that is supplied.
If you interrupt the "autoboot" process,
you can issue a variety of commands. The interface to PDC commands is
called the Boot Console Handler (BCH).
This is sometimes a point of confusion; that is, are we issuing PDC
commands or BCH commands?
The commands are normally described as PDC commands, and the interface
through which you execute them is the BCH.
ISL
The Initial System Loader is run after PDC. You would normally just run
an "autoboot" sequence from ISL;
however, you can run a number of commands from the ISL prompt.
hpux
The hpux utility manages loading the HP-UX kernel and gives control to
the kernel. ISL can have hpux run
an "autoexecute" file, or commands can be given interactively. In most
situations, you would just want to
automatically boot the system; however, I cover some of the hpux
commands you can execute. This is sometimes
called the Secondary System Loader (SSL).
Note 2:
-------
HP-UX
Normal Boot
- pdc
- isl
- hpux
Booting disk(scsi.6;0)/stand/vmunix
966616+397312+409688 start 0x6c50
- Single-user Boot
In this case the system automatically searches the SCSI, LAN, and EISA
interfaces for all potential boot devices
-devices for which boot I/O code (IODC) exists. The key to booting to
single-user mode is first to boot to ISL
using the b) option. The ISL is the program that actually controls the
loading of the operating system.
To do this using the above as an example, you would type the following
at the Select from menu: prompt:
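(The command is missing in this copy; on such a system it would be along these lines:)
Select from menu: b p0 isl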
This tells the system to boot to the ISL using the SCSI drive at address
6 (since the device path of P0 is scsi.6.0).
After displaying a few messages, the system then produces the ISL>
prompt.
Pressing the Escape key at the boot banner on newer Series 700 machines
produces the Boot Administration Utility,
as shown below.
Command Description
------- -----------
Auto [boot|search] [on|off] Display or set auto flag
Boot [pri|alt|scsi.addr][isl] Boot from primary, alt or SCSI
Boot lan[.lan_addr][install][isl] Boot from LAN
Chassis [on|off] Enable chassis code
Diagnostic [on|off] Enable/disable diag boot mode
Fastboot [on|off] Display or set fast boot flag
Help Display the command menu
Information Display system information
LanAddress Display LAN station addresses
Monitor [type] Select monitor type
Path [pri|alt] [lan.id|SCSI.addr] Change boot path
Pim [hpmc|toc|lpmc] Display PIM info
Search [ipl] [scsi|lan [install]] Display potential boot devices
Secure [on|off] Display or set security mode
-----------------------------------------------------------------
BOOT_ADMIN>
To display bootable devices with this menu you have to execute the
Search command at the BOOT_ADMIN> prompt:
BOOT_ADMIN> search
Searching for potential boot device.
This may take several minutes.
BOOT_ADMIN>
To boot to ISL from the disk at device path scsi.6.0 type the following:
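(The command line is missing here; it would be:)
BOOT_ADMIN> boot scsi.6.0 isl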
Once you get the ISL prompt you can run the hpux utility to boot the
kernel to single-user mode:
ISL>hpux -is
Boot
: disk(scsi.6;0)/stand/vmunix
966616+397312+409688 start 0x6c50
- Startup
/sbin/rc
This script invokes execution scripts based on run levels. It is also
known as the startup and shutdown
sequencer script.
Execution scripts
These scripts start up and shut down various subsystems and are found in
the /sbin/init.d directory.
/sbin/rc invokes each execution script with one of four arguments, indicating the "mode":
start_msg, start, stop_msg, or stop.
Configuration variable scripts in /etc/rc.config.d customize the execution scripts; for example, the
routing variables in /etc/rc.config.d/netconf look like:
ROUTE_DESTINATION[0]="default"
ROUTE_GATEWAY[0]="gateway_address"
ROUTE_COUNT[0]="1"
Both the execution scripts and the configuration files are named after
the subsystem they control. For example,
the /sbin/init.d/cron execution script controls the cron daemon, and it
is customized by the /etc/rc.config.d/cron
configuration variable script.
Link Files
These files control the order in which execution scripts run. The
/sbin/rc#.d (where # is a run-level) directories
are startup and shutdown sequencer directories. They contain only
symbolic links to the execution scripts in
/sbin/init.d that are executed by /sbin/rc on transition to a specific
run level. For example, the /sbin/rc3.d
directory contains symbolic links to scripts that are executed when
entering run level 3.
These directories contain two types of link files: start links and kill
links. Start links have names beginning
with the capital letter S and are invoked with the start argument at
system boot time or on transition to a higher
run level. Kill links have names beginning with the capital letter K and
are invoked with the stop argument
at system shutdown time, or when moving to a lower run level.
The table below shows some samples from the run-level directories. (The
sequence numbers shown are only for example
and may not accurately represent your system.)
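(The sample table itself is missing in this copy; a sketch of typical entries, with illustrative sequence numbers except S730cron, which is referenced below:)
/sbin/rc0.d/K480syslogd -> /sbin/init.d/syslogd
/sbin/rc2.d/S560syslogd -> /sbin/init.d/syslogd
/sbin/rc1.d/K270cron    -> /sbin/init.d/cron
/sbin/rc2.d/S730cron    -> /sbin/init.d/cron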
Example
If you want cron to start when entering run level 2, you would modify
the configuration variable script
/etc/rc.config.d/cron to read as follows:
# cron config
#
# CRON=1 to start
CRON=1
if [ $CRON = 1 ]
then /usr/sbin/cron
fi
cron will start at run level 2 because in /sbin/rc2.d a link exists from
S730cron to /sbin/init.d/cron.
/sbin/rc will invoke /sbin/init.d/cron with a start argument because the
link name starts with an S.
End Of File
===========================================================================
4. Most important and current AIX, SOLARIS, and Linux fixes:
===========================================================================
4.1 AIX:
========
4.2 SOLARIS:
============
(From here on: old text. Ignore all text below, because it is too old. It is only interesting to Albert.)
For example, on Linux, glibc 2.1.3 is needed for Oracle version 8.1.7.
Linux is very particular about the libraries used in combination with Oracle.
# sysctl -w kernel.shmmax=100000000
# sysctl -w fs.file-max=65536
# echo "kernel.shmmax = 100000000" >> /etc/sysctl.conf
# echo "kernel.shmmax = 2147483648" >> /etc/sysctl.conf
For the 8.1.7 installation, the Java JDK 1.1.8 is also needed.
It can be downloaded from www.blackdown.org
5.1.2 Environment variables:
---------------------------
LD_LIBRARY_PATH=/u01/app/oracle/product/8.1.7/lib; export LD_LIBRARY_PATH
/u01/app/oracle/product/8.1.6
/u01/app/oracle/admin/PROD
/u01/app/oracle/admin/PROD/pfile
/u01/app/oracle/admin/PROD/adhoc
/u01/app/oracle/admin/PROD/bdump
/u01/app/oracle/admin/PROD/udump
/u01/app/oracle/admin/PROD/adump
/u01/app/oracle/admin/PROD/cdump
/u01/app/oracle/admin/PROD/create
/u02/oradata/PROD
/u03/oradata/PROD
/u04/oradata/PROD
etc..
groupadd dba
groupadd oinstall
groupadd oper
mkdir /opt/u01
mkdir /opt/u02
mkdir /opt/u03
mkdir /opt/u04
Now give ownership of these mount points to user oracle and group oinstall:
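(The corresponding command, assuming the mount points created above:)
chown -R oracle:oinstall /opt/u01 /opt/u02 /opt/u03 /opt/u04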
chmod 644 *
chmod u+x filename
chmod ug+x filename
Linux:
startx
cd /usr/local/src/Oracle8iR3
./runInstaller
or
The installer may ask you to run scripts such as:
orainstRoot.sh and root.sh
To run these:
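(The steps themselves are missing here; typically you open a second terminal, become root, and run the script from the path the installer displays, for example:)
# cd /u01/app/oracle/product/8.1.7
# ./root.sh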
5.2.1 oratab:
-------------
Example:
# $ORACLE_SID:$ORACLE_HOME:[N|Y]
#
ORCL:/u01/app/oracle/product/8.0.5:Y
#
The dbstart script will read oratab, and will also do tests to determine the Oracle version.
Furthermore, the core consists of:
Contents of S99oracle:
The dbstart script is a standard Oracle script. It checks in oratab which SIDs are set to 'Y',
and will start those databases.
Startdb [ORACLE_SID]
--------------------
This script is part of the S99Oracle script. It takes one parameter, ORACLE_SID.
. $ORACLE_ADMIN/env/profile
ORACLE_SID=$1
echo $ORACLE_SID
This script is part of the K10Oracle script. It takes one parameter, ORACLE_SID.
ORACLE_SID=$1
export ORACLE_SID
5.5 Batches:
------------
# Batches (Oracle)
=======================
7. INSTALLING SUNOS:
=======================
--------------------------------------------------------------------------------
Contents
Overview
Using Serial Console Connection
Starting the Installation
Answering the Screen Prompts
Post-Installation Tasks
--------------------------------------------------------------------------------
Overview
This article documents installing the 2/02 release of Solaris 8 from CD-ROM.
For the purpose of this example, I will be installing Solaris 8 on a Sun
Blade 150 with the following configuration:
From the Linux machine, you can use a program called minicom. Start it
up with the command "minicom".
Press "Ctrl-A Z" to get to the main menu. Press "o" to configure
minicom. Go to "Serial port setup"
and make sure that you are set to the correct "Serial Device" and that
the speed on line E matches the speed
of the serial console you are connecting to. (In most cases with Sun,
this is 9600.) Here are the settings
I made when using Serial A / COM1 port on the Linux machine:
+-----------------------------------------------------------------------+
| A - Serial Device         : /dev/ttyS0                                |
| B - Lockfile Location     : /var/lock                                 |
| C - Callin Program        :                                           |
| D - Callout Program       :                                           |
| E - Bps/Par/Bits          : 9600 8N1                                  |
| F - Hardware Flow Control : Yes                                       |
| G - Software Flow Control : No                                        |
|                                                                       |
| Change which setting?                                                 |
+-----------------------------------------------------------------------+
After making all necessary changes, hit the ESC key to go back to the
"configurations" menu.
Now go to "Modem and dialing". Change the "Init string" to "~^M~". Save
the settings (as dflt),
and then restart Minicom. You should now see a console login prompt.
Let's start the installation process! Put the SOLARIS 8 SOFTWARE (Disk 1
of 2) in the CDROM tray and boot to it:
ok boot cdrom
Resetting ...
The boot process may take several minutes to complete, but once done,
you will start answering a series of prompts.
The following section will walk you through many of the screen prompts
from the installation.
The first three prompts are from the command line interface (CLI) and
are used to specify the language,
locale and terminal. Use English for both Language and Locale. As for a
terminal setting, I commonly telnet
to a Linux server (that is connected from the serial port of the Linux
server to the serial port of the Sun machine).
From the Linux server, I use "minicom" to connect from the Linux server
to the Sun server.
The best terminal for this type of installation is "DEC VT100":
Language : English
Locale : English
What type of terminal are you using? : 3) DEC VT100
NOTE: You should be able to use a terminal type of "DEC VT100" or "X
Terminal Emulator (xterms)".
Many of the screens to follow will ask you about networking information.
When asked if the system will be connected
to a network, answer Yes.
NOTE: Many of the screens should be easy to complete except for the "Name Services" section. In almost all cases,
you will want to use DNS naming services, but if your machine is not currently configured within DNS, this section
will fail and no information entered about Name Services will be stored and configured.
If this is the case, you will need to select None under the Name Services section.
The network configuration will then need to be completed after the
installation process by updating certain
network files on the local hard drive. This will be documented in the
"Post Installation Procedures" of this document.
--------------------------------------------------------------------------------
This screen informs you about how you will need to identify the computer
as it applies to network connectivity.
Networked
---------
[X] Yes
[ ] No
Hit ESC - F2 to continue
Screen 4 : DHCP
Use DHCP
--------
[ ] Yes
[X] No
Hit ESC - F2 to continue
Screen 6 : IP Address
Screen 7 : Subnets
Screen 8 : Netmask
Netmask: 255.255.255.0
Hit ESC - F2 to continue
Screen 9 : IPv6
Enable IPv6
-----------
[ ] Yes
[X] No
Hit ESC - F2 to continue
Name service
------------
[ ] NIS+
[ ] NIS
[X] DNS
[ ] LDAP
[ ] None
Hit ESC - F2 to continue
Search domain:
Search domain:
Search domain:
Search domain:
Search domain:
Search domain:
Hit ESC - F2 to continue
Regions
-------
[ ] Asia, Western
[ ] Australia / New Zealand
[ ] Canada
[ ] Europe
[ ] Mexico
[ ] South America
[X] United States
[ ] other - offset from GMT
[ ] other - specify time zone file
Hit ESC - F2 to continue
Time zones
----------
[X] Eastern
[ ] Central
[ ] Mountain
[ ] Pacific
[ ] East-Indiana
[ ] Arizona
[ ] Michigan
[ ] Samoa
[ ] Alaska
[ ] Aleutian
[ ] Hawaii
Hit ESC - F2 to continue
You must select the disks for installing Solaris software. If there are
several disks available,
I always install the Solaris software on the boot disk c0t0d0.
----------------------------------------------------------
Disk Device (Size) Available Space
=============================================
[X] c0t0d0 (14592 MB) boot disk 14592 MB (F4 to edit)
I generally select ESC - F4 to edit the c0t0d0 disk to ensure that the
root directory is going
to be located on this disk.
----------------------------------------------------------
On this screen you can select the disk for installing the
root (/) file system of the Solaris software.
Disk
==============================
[X] c0t0d0 (F4 to select boot device)
--------------------------------------------------------------------------------
----------------------------------------------------------
On this screen you can select the specific slice for the root (/) file
system. If you choose Any of the Above, the Solaris installation program
will choose a slice for you.
[X] c0t0d0s0
[ ] c0t0d0s1
[ ] c0t0d0s2
[ ] c0t0d0s3
[ ] c0t0d0s4
[ ] c0t0d0s5
[ ] c0t0d0s6
[ ] c0t0d0s7
[ ] Any of the Above
Hit ESC - F2 after selecting the Disk Slice
--------------------------------------------------------------------------------
Do you want to preserve existing data? At least one of the disks you've
selected for installing Solaris software
has file systems or unnamed slices that you may want to save.
On this screen you must select all the file systems you want auto-layout
to create, or accept the
default file systems shown.
The summary below is your current file system and disk layout, based on
the information you've supplied.
--------------------------------------------------------------------------------
/ : I often get the sizes for the individual filesystems (/usr, /opt,
and /var) incorrect. This is one reason I typically create only one
partition as / that will be used for the entire system (minus swap
space). In most cases, I will be installing additional disks for large
applications like the Oracle RDBMS, Oracle Application Server, or
other J2EE application servers.
overlap : The overlap partition represents the entire disk and is
slice s2 of the disk.
swap : The swap partition size depends on the size of RAM in the
system. If you are not sure of its size, make it double the amount of
RAM in your system. I typically like to make swap 1GB.
------------------------------------------------
Boot Device: c0t0d0s0
=================================================
Slice Mount Point Size (MB)
0 / 37136
1 swap 1025
2 overlap 38162
3 0
4 0
5 0
6 0
7 0
=================================================
Capacity: 38162 MB
Allocated: 38161 MB
Rounding Error: 1 MB
Free: 0 MB
Hit ESC - F2 to continue
--------------------------------------------------------------------------------
This is what the File System and Disk Layout screen looks like now.
Do you want to mount software from a remote file server? This may be
necessary if you had to remove software
because of disk space problems.
==================================================================
(installation progress bar: 0 - 100%)
After the installation is complete it customizes system files, devices,
and logs.
The system then reboots or asks you to reboot depending upon the choice
selected earlier in the Reboot
After Installation? screen.
A root password can contain any number of characters, but only the first
eight characters in the password
are significant. (For example, if you create `a1b2c3d4e5f6' as your root
password, you can use `a1b2c3d4'
to gain root access.)
You will be prompted to type the root password twice; for security, the
password will not be displayed
on the screen as you type it.
Root password:
Enter Your root Password and Press Return to continue.
Please specify the media from which you will install Solaris 8 Software
2 of 2 (2/02 SPARC Platform Edition).
Alternatively, choose the selection for "Skip" to skip this disc and go
on to the next one.
Media:
1. CD/DVD
2. Network File System
3. Skip
Media [1]: 1
Installation details:
2. Done
Networking:
If you will be using networking database files for your TCP/IP
networking configuration, several files will need to be created and/or
modified by hand. I provided a step-by-step document on how to
configure TCP/IP networking manually using files:
Configuring TCP/IP on Solaris - TCP/IP Configuration Files - (Quick
Config Guide)
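As a minimal sketch of what that involves (assuming an hme0 interface;
the hostname, domain and addresses below are purely illustrative), the
files typically created/modified are:

# cat /etc/nodename
sol1
# cat /etc/hostname.hme0
sol1
# cat /etc/hosts
127.0.0.1     localhost
192.168.1.21  sol1    loghost
# cat /etc/defaultrouter
192.168.1.1
# cat /etc/netmasks
192.168.1.0   255.255.255.0
# cat /etc/resolv.conf
domain example.com
nameserver 192.168.1.2

followed by a reboot (or re-plumbing the interface with ifconfig), and
a matching "hosts: files dns" entry in /etc/nsswitch.conf.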
=======================
8. RAID Volumes on SUN:
=======================
8.1 SCSI, DISKS AND RAID:
=========================
8.1.1 General
-------------
LUN is short for logical unit number, a unique identifier used on a
SCSI bus to distinguish between devices that share the same bus. SCSI
is a parallel interface that allows up to 16 devices to be connected
along a single cable.
The cable and the host adapter form the SCSI bus, and this operates
independently of the rest of the computer.
Each device is given a unique address by the SCSI BIOS, ranging from 0
to 7 for an 8-bit bus or 0 to 15 for a 16-bit bus. Devices that
request I/O processes are called initiators. Targets are devices that
perform operations requested by initiators. Each target can
accommodate up to eight other devices, known as logical units, and
each is assigned an LUN. Commands that are sent to the SCSI controller
identify devices based on their LUNs.
So we might have a situation as in the diagram under 8.1.2 below:
8.1.2 single-initiator
----------------------
A single-initiator SCSI bus has only one node connected to it, and
provides host isolation and better
performance than a multi-initiator bus. Single-initiator buses ensure
that each node is protected
from disruptions due to the workload, initialization, or repair of the
other nodes.
Use the appropriate SCSI cable to connect each host bus adapter to the
storage enclosure.
Setting host bus adapter termination is done in the adapter BIOS utility
during system boot.
To set RAID controller termination, refer to the vendor documentation.
 ______________                                           ______________
| System 1     | SCSI  ___________    ___________   SCSI | System 2     |
|(SCSI Adapter)|------|SCSI Device|--|SCSI Device|-------|(SCSI Adapter)|
|______________|  Bus |___________|  |___________|  Bus  |______________|
-Install Solaris8
-Install required OS patches
(If you have an Ultra60, install 106455-09 or better - firmware patch -
before proceeding)
- Install Raid Manager 6.22 (RM 6.22) or better.
# pkgadd -d . SUNWosar SUNWosafw SUNWosamn SUNWosau
See also section 6.2
(contributed by Greg Whalin) Check /etc/osa/mnf and make sure that your
controller name does NOT contain any periods.
Change them to a _ instead. The RM software does not have any clue how
to deal with a period.
This kept me screwed up for quite a while.
Install patches 109571-02 (for Solaris8 FCS) and 108553-07 (or newer)
(for Solaris7/2.6 patch 108834-07 or newer) [ NOTE: 112125-01 and
112126-01 or better for RM 6.22.1]
# patchadd 109571-02
# patchadd 108553-07
Boot -r:
# touch /reconfigure
# reboot -- -r
# /usr/lib/osa/bin/raidutil -c c1t0d0 -i
Vendor ID Symbios
ProductID StorEDGE A1000
Product Revision 0205
Boot Level 02.05.01.00
Boot Level Date 12/02/97
Firmware Level 02.05.02.11
Firmware Date 04/09/98
raidutil succeeded!
Find the lowest-numbered firmware upgrade that is still greater than
the firmware that is installed on your A1000.
For the above example, with patch 108553, upgrade to 2.05.06.32 (do
this first, VERY IMPORTANT!)
# cd /usr/lib/osa/fw
# /usr/lib/osa/bin/fwutil 02050632.bwd c1t0d0
# /usr/lib/osa/bin/fwutil 02050632.apd c1t0d0
Upgrade to each next higher firmware revision in succession until you
get to the most recent version. It is recommended that you do the
upgrades in order. For this example,
Upgrade to 3.01.02.33/5
# /usr/lib/osa/bin/fwutil 03010233.bwd c1t0d0
# /usr/lib/osa/bin/fwutil 03010235.apd c1t0d0
Upgrade to 03.01.03.60 (or better)
# /usr/lib/osa/bin/fwutil 03010304.bwd c1t0d0
# /usr/lib/osa/bin/fwutil 03010360.apd c1t0d0
# /usr/lib/osa/bin/raidutil -c c1t0d0 -i
Vendor ID Symbios
ProductID StorEDGE A1000
Product Revision 0301
Boot Level 03.01.03.00
Boot Level Date 10/22/99
Firmware Level 03.01.03.54
Firmware Date 03/30/00
raidutil succeeded!
Check to make sure that the RAID is attached and looks good
# /usr/lib/osa/bin/drivutil -i c1t0d0
drivutil succeeded!
Example: Create 1 large 10-disk RAID 5 configuration (LUN 0) of max
size and then create 2 Hot Spare disks. The -D 0 here first deletes
the existing LUN 0:
# /usr/lib/osa/bin/raidutil -c c1t0d0 -D 0
raidutil succeeded!
raidutil succeeded!
raidutil succeeded!
# prtvtoc /dev/rdsk/c1t0d0s2
* /dev/rdsk/c1t0d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 75 sectors/track
* 64 tracks/cylinder
* 4800 sectors/cylinder
* 65535 cylinders
* 65533 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 0 314558400 314558399
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
2 5 01 0 314558400 314558399
Check to make sure that the new array is available via "df -lk"
# df -lk
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d0 2056211 43031 1951494 3% /
/dev/md/dsk/d6 4131866 1133180 2957368 28% /usr
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
/dev/md/dsk/d5 2056211 9092 1985433 1% /var
swap 1450208 8 1450200 1% /var/run
swap 1450208 8 1450200 1% /tmp
/dev/md/dsk/d7 8089425 182023 7826508 3% /export
/dev/dsk/c1t0d0s2 154872105 9 153323375 1% /raid
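For completeness, the new LUN typically ends up mounted as /raid with
a sequence along these lines (a sketch; slice and mount point taken
from the df output above):

# newfs /dev/rdsk/c1t0d0s2
# mkdir /raid
# mount /dev/dsk/c1t0d0s2 /raid

plus a matching line in /etc/vfstab to make the mount permanent.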
Hi.
Thanks for your kind responses. There were a few replies but tons of
out-of-office mail. And sorry for forgetting to state that the A1000
is not a brand new one but a used one. After some research I found
this; here's my summary.
Conclusion:
If the A1000 has previously defined LUNs and will be re-used as a new
array, you have to remove the old LUNs before defining new LUNs,
or rm6 complains that it cannot find raid modules.
---
If you can see more than 1 LUN in the boot prom via the command
"probe-scsi-all",
you have to insert as many disks into the slots as there are LUNs,
then reboot with boot -rs.
Then you can see the configured LUNs via /usr/lib/osa/bin/lad,
and /usr/lib/osa/bin/raidutil -c c#t#d# -X to delete all old LUNs.
Once you delete the old LUNs you can boot normally with just one disk
and can find the raid module.
Again, thanks for your help.
--
First, install the Raid Manager 6.22 (or 6.22.1) software on the
Solaris 8 system.
Depending upon your raid manager version and scsi/fibre card type you
will need to patch the system.
The following patches are recommended for Solaris 8.
Firmware
--------
The first thing to do is check the firmware of the A1000. This can be
done with the raidutil command.
(I assume the A1000 is on controller 1. If not, then change the
controller as appropriate.)
# raidutil -c c1t0d0 -i
If the returned values are less that those shown below you will have to
upgrade the firmware using fwutil.
# cd /usr/lib/osa/fw
# fwutil 02050632.bwd c1t0d0
# fwutil 02050632.apd c1t0d0
# fwutil 03010233.bwd c1t0d0
# fwutil 03010235.apd c1t0d0
# fwutil 03010304.bwd c1t0d0
# fwutil 03010360.apd c1t0d0
You can now run the "raidutil -c c1t0d0 -i" command again to
verify the firmware changes.
# raidutil -c c1t0d0 -X
The above command resets the array internals.
We can now remove any old LUNs. To do this run "raidutil -c c1t0d0 -i"
and note any LUNs that are configured.
Vendor ID Symbios
ProductID StorEDGE A1000
Product Revision 0301
Boot Level 03.01.03.04
Boot Level Date 07/06/00
Firmware Level 03.01.03.60
Firmware Date 06/30/00
raidutil succeeded!
# raidutil -c c1t0d0 -D 0
In the above example we are removing LUN 0. Repeat this command,
changing the LUN number as appropriate.
We can now give the array a name of our choice. (Do not use a .)
# storutil -c c1t0d0 -n "dragon_array"
Creating LUNs
The disks are labelled on the front of the A1000 as controller number
and disk number separated by a comma, e.g. 1,0 1,2 and 2,0 etc. We
refer to the disks without using the comma. So the first disk on
controller 1 is disk 10 and the 3rd disk on controller 2 is disk 23. We
will use disks on both controllers when creating the mirrors. I am
starting with the disks on each controller as viewed from the left. The
next stage is to create the LUNs we require. In the below example I will
configure a fully populated (12 disk) system which has 18GB drives into
the following sizes. Here we will use the raidutil command again.
This then leaves disk 25, i.e. disk 5 on the second controller, free as
a hot spare.
To set up this disk as a hot spare run
# raidutil -h 25
Finishing off
We are now ready to reboot the system performing a reconfigure. When
this is done we can format, partition, newfs
and mount the disks in the normal way.
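A sketch of that sequence (the c#t#d# device below is an assumption;
take the real one from the lad output):

# touch /reconfigure
# reboot -- -r
# format                          (partition and label the new LUN)
# newfs /dev/rdsk/c1t0d0s0
# mount /dev/dsk/c1t0d0s0 /mnt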
Other commands
The following is a list of possibly useful raid manager commands
Overview
The Sun StorEdge D1000 is a disk tray with hot-pluggable
- Power supplies
- Fans
- Disks (If SPARCstorage Volume Manager configured).
Disk Terminology
Before you can effectively use the information in this section, you
should be familiar with basic disk architecture.
In particular, you should be familiar with the following terms:
Track
Cylinder
Sector
Disk controller
Disk label
Device drivers
Disk Slices
Files stored on a disk are contained in file systems. Each file
system on a disk is assigned to a slice: a group of cylinders set
aside for use by that file system. Each disk slice appears to the
operating system (and to the system administrator) as though it were a
separate disk drive.
Slices are sometimes referred to as partitions.
Do not use the following areas of the disk for raw data slices, which
are sometimes created by third-party database applications: typically
this means cylinder 0 (which holds the disk label/VTOC) and slice 2
(which represents the entire disk).
For instance, a single disk might hold the root (/) file system, a swap
area, and the /usr file system, while a separate disk is provided for
the /export/home file system and other file systems containing user
data.
Slice Server
0 root
1 swap
2 -
3 /export
4 /export/swap
5 /opt
6 /usr
7 /export/home
Disk Labels
A special area of every disk is set aside for storing information about
the disk's controller, geometry, and slices. That information is called
the disk's label. Another term used to describe the disk label is the
VTOC (Volume Table of Contents). To label a disk means to write slice
information onto the disk. You usually label a disk after changing its
slices.
If you fail to label a disk after creating slices, the slices will be
unavailable because the operating system has no way of "knowing" about
the slices. The partition table identifies a disk's slices, the slice
boundaries (in cylinders), and total size of the slices. A disk's
partition table can be displayed using the format utility. Partition
flags and tags are assigned by convention and require no maintenance.
Cylinders The starting and ending cylinder number for the slice.
Size The slice size in Mbytes.
Blocks The total number of cylinders and the total number of sectors
per slice in the far right column.
The following example displays a disk label using the prtvtoc command.
# prtvtoc /dev/rdsk/c0t1d0s0
* /dev/rdsk/c0t1d0s0 partition map
*
* Dimensions:
* 512 bytes/sector
* 72 sectors/track
* 14 tracks/cylinder
* 1008 sectors/cylinder
* 2038 cylinders
* 2036 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 0 303408 303407 /
1 3 01 303408 225792 529199
2 5 00 0 2052288 2052287
6 4 00 529200 1523088 2052287 /usr
This temporary slice donates, or "frees," space when you expand a slice,
and receives, or "hogs," the discarded space when you shrink a slice.
For this reason, the donor slice is sometimes called the free hog.
The donor slice exists only during installation or when you run the
format utility. There is no permanent donor slice during day-to-day,
normal operations.
# format
The format utility displays a list of disks that it recognizes under
AVAILABLE DISK SELECTIONS.
Here is sample format output:
# format
Searching for disks...done
The format output associates a disk's physical and local device name to
the disk's marketing name which appears in angle brackets <>. This is an
easy way to identify which local device names represent the disks
connected to your system. The following example uses a wildcard to
display the disks connected to a second controller.
# format /dev/rdsk/c2*
AVAILABLE DISK SELECTIONS:
0. /dev/rdsk/c2t0d0s0
/io-unit@f,e0200000/sbi@0,0/QLGC,isp@2,10000/sd@0,0
1. /dev/rdsk/c2t1d0s0
/io-unit@f,e0200000/sbi@0,0/QLGC,isp@2,10000/sd@1,0
2. /dev/rdsk/c2t2d0s0
/io-unit@f,e0200000/sbi@0,0/QLGC,isp@2,10000/sd@2,0
3. /dev/rdsk/c2t3d0s0
/io-unit@f,e0200000/sbi@0,0/QLGC,isp@2,10000/sd@3,0
4. /dev/rdsk/c2t5d0s0
/io-unit@f,e0200000/sbi@0,0/QLGC,isp@2,10000/sd@5,0
Specify disk (enter its number):
The format output identifies that these disks (targets 0-5) are
connected to a SCSI host adapter (sbi@...), which in turn is connected
to the first SBus device (io-unit@).
--------------------------------------------------------------------------------
Displaying Disk Slices
You can use the format utility to check whether or not a disk has the
appropriate disk slices. If you determine
that a disk does not contain the slices you want to use, use the format
utility to re-create them and label the disk.
The format utility uses the term partition in place of slice.
Become superuser.
Identify the disk for which you want to display slice information by
selecting a disk listed
under AVAILABLE DISK SELECTIONS.
format> partition
Display the slice information for the current disk drive by typing print
at the partition> prompt.
partition> print
Exit the format utility by typing q at the partition> prompt and typing
q at the format> prompt.
partition> q
format> q
#
--------------------------------------------------------------------------------
Become superuser.
Enter the number of the disk that you want to label from the list
displayed on your screen.
Specify disk (enter its number):1
format> type
Format displays the Available Drive Types menu.
Label the disk. If the disk is not labeled, the following message is
displayed.
Use the verify command from the format main menu to verify the disk
label.
format> verify
partition> q
format> q
#
Example-Labeling a Disk
The following example automatically configures and labels a 1.05-Gbyte
disk.
# format
c1t0d0: configured with capacity of 1002.09MB
AVAILABLE DISK SELECTIONS:
0. c0t3d0
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/sd@1,0
1. c1t0d0
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/sd@1,0
Specify disk (enter its number): 1
Disk not labeled. Label it now? yes
format> verify
#
Become superuser.
# prtvtoc /dev/rdsk/device-name
In all cases, slice 6 (for the /usr file system) gets the remainder of
the space on the disk.
Become superuser.
Create the /reconfigure file that will be read when the system is
booted.
# touch /reconfigure
Turn off power to the system and all external peripheral devices.
Make sure the disk you are adding has a different target number than the
other devices on the system.
You will often find a small switch located at the back of the disk for
this purpose.
Connect the disk to the system and check the physical connections.
Login as superuser, invoke the format utility, and select the disk to be
configured automatically.
# format
Searching for disks...done
c1t0d0: configured with capacity of 1002.09MB
AVAILABLE DISK SELECTIONS:
0. c0t1d0
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/sd@1,0
1. c0t3d0
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/sd@3,0
Specify disk (enter its number): 1
Reply yes to the prompt to label the disk. Replying y will cause the
disk label to be generated and written
to the disk by the autoconfiguration feature.
format> verify
format> q
--------------------------------------------------------------------------------
Become superuser.
# format
A list of available disks is displayed.
Enter the number of the disk that you want to repartition from the list
displayed on your screen.
Go into the partition menu (which lets you set up the slices).
format> partition
partition> print
partition> modify
Identify the free hog partition (slice) and the sizes of the slices when
prompted. When adding a system disk,
you must set up slices for: root (slice 0) and swap (slice 1) and/or
/usr (slice 6) After you identify the slices,
the new partition table is displayed.
Label the disk with the new partition table when you have finished
allocating slices on the new disk.
partition> q
format> verify
format> q
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t1d0
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/sd@1,0
1. c0t3d0
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/sd@3,0
Specify disk (enter its number): 0
selecting c0t1d0
[disk formatted]
format> partition
partition> print
partition> modify
Select partitioning base:
0. Current partition table (original)
1. All Free Hog
Choose base (enter number) [0]? 1
Part Tag Flag Cylinders Size Blocks
0 root wm 0 0 (0/0/0) 0
1 swap wu 0 0 (0/0/0) 0
2 backup wu 0 - 2035 1002.09MB (2036/0/0) 2052288
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
Do you wish to continue creating a new partition
table based on above table[yes]? yes
Free Hog partition[6]? 6
Enter size of partition `0' [0b, 0c, 0.00mb]: 200mb
Enter size of partition `1' [0b, 0c, 0.00mb]: 200mb
Enter size of partition `3' [0b, 0c, 0.00mb]:
Enter size of partition `4' [0b, 0c, 0.00mb]:
Enter size of partition `6' [0b, 0c, 0.00mb]:
Enter size of partition `7' [0b, 0c, 0.00mb]:
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 406 200.32MB (407/0/0) 410256
1 swap wu 407 - 813 200.32MB (407/0/0) 410256
2 backup wu 0 - 2035 1002.09MB (2036/0/0) 2052288
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 814 - 2035 601.45MB (1222/0/0) 1231776
7 unassigned wm 0 0 (0/0/0) 0
Okay to make this the current partition table[yes]? yes
Enter table name (remember quotes): "disk0"
Ready to label disk, continue? yes
partition> quit
format> verify
format> quit
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t1d0
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/sd@1,0
1. c0t3d0
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/sd@3,0
Specify disk (enter its number): 0
selecting c0t1d0
[disk formatted]
format> partition
partition> print
partition> modify
Select partitioning base:
0. Current partition table (original)
1. All Free Hog
Choose base (enter number) [0]? 1
Part Tag Flag Cylinders Size Blocks
0 root wm 0 0 (0/0/0) 0
1 swap wu 0 0 (0/0/0) 0
2 backup wu 0 - 2035 1002.09MB (2036/0/0) 2052288
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
Do you wish to continue creating a new partition
table based on above table[yes]? y
Free Hog partition[6]? 7
Enter size of partition '0' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]:
Part Tag Flag Cylinders Size Blocks
0 root wm 0 0 (0/0/0) 0
1 swap wu 0 0 (0/0/0) 0
2 backup wu 0 - 2035 1002.09MB (2036/0/0) 2052288
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 0 0 (0/0/0) 0
7 unassigned wm 0 - 2035 1002.09MB (2036/0/0) 2052288
Okay to make this the current partition table[yes]? yes
Enter table name (remember quotes): "home"
Ready to label disk, continue? y
partition> q
format> verify
format> q
#
Become superuser.
Create a file system for each slice with the newfs(1M) command.
# newfs /dev/rdsk/cwtxdysz
Become superuser.
List all the processes that are accessing the file system, so you know
which processes you are going to stop.
# fuser -c [ -u ] mount-point
Stop all processes accessing the file system. You should not stop a
user's processes without warning.
# fuser -c -k mount-point
A SIGKILL is sent to each process using the file system.
# fuser -c mount-point
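A concrete run might look like this (mount point assumed for
illustration):

# fuser -c -u /export/home        (list processes and login names)
# fuser -c -k /export/home        (send SIGKILL to those processes)
# umount /export/home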
--------------------------------------------------------------------------------
Add Disk
Follow the steps below to add a new external/internal disk:
Bring the system down to the ok prompt.
# init 0
Find an available target setting. This command will show what you
currently have on your system.
ok> probe-scsi
ok> probe-scsi-all
Attach the new disk with the correct target setting. Run probe-scsi
again to make sure the system sees it. If it doesn't, the disk is either
not connected properly, has a target conflict, or is defective. Resolve
this issue before continuing.
In this example, we'll say:
# boot -rv
rv -> reconfigure in verbose mode.
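(Note: on Solaris 8 and later, devfsadm(1M) can usually build the
device links for a newly attached disk without a reconfiguration boot:
# devfsadm
The boot -r method shown here works on all releases.)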
# format
Searching for disks...done
1. c0t1d0
/iommu@0,10000000/sbus@0,10001000/espdma@5,8400000/esp@5,8800000/sd@1,0
2. c0t3d0
/iommu@0,10000000/sbus@0,10001000/espdma@5,8400000/esp@5,8800000/sd@3,0
Specify disk (enter its number): 1
selecting c0t1d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
quit
format> part
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
quit
partition> print
partition>
partition> 1
Part Tag Flag Cylinders Size Blocks
1 unassigned wu 163 - 423 128.46MB (261/0/0)
263088
partition> 4
Part Tag Flag Cylinders Size Blocks
4 unassigned wm 424 - 749 160.45MB (326/0/0)
328608
partition> 5
Part Tag Flag Cylinders Size
Blocks
5 unassigned wm 750 - 1109 177.19MB (360/0/0)
362880
partition> 6
Part Tag Flag Cylinders Size
Blocks
6 unassigned wm 1110 - 2035 455.77MB (926/0/0)
933408
partition>
NOTE: You will know for certain that your partitioning is correct if
you add all the cylinder values [the values enclosed in ( )]; in this
example, 163+261+326+360+926=2036, which is the same value as for
slice 2, i.e. the whole disk (Tag = backup).
Now label the disk. This is important as this is what saves the
partition table in your VTOC (Virtual Table Of Contents). It's also
always recommended to do the labeling part twice to be certain that the
VTOC gets saved.
partition> label
partition> q
format> q
c0t3d0 (running Solaris 2.6) being copied to c0t1d0 (which will have the
copied Solaris 2.6 slices/partitions)
c0t3d0s0 / -> c0t1d0s0 /
c0t3d0s4 /var -> c0t1d0s4 /var
c0t3d0s5 /opt -> c0t1d0s5 /opt
c0t3d0s6 /usr -> c0t1d0s6 /usr
For each of the partitions that you wish to mount, run newfs to
construct a UNIX filesystem.
So, newfs each partition.
# newfs -v /dev/rdsk/c0t1d0s0
# newfs -v /dev/rdsk/c0t1d0s4
# newfs -v /dev/rdsk/c0t1d0s5
# newfs -v /dev/rdsk/c0t1d0s6
To ensure that they are clean and mounted properly, run fsck on these
mounted partitions:
# fsck /dev/rdsk/c0t1d0s0
# fsck /dev/rdsk/c0t1d0s4
# fsck /dev/rdsk/c0t1d0s5
# fsck /dev/rdsk/c0t1d0s6
# mkdir /mount_point
# mkdir /root2
# mkdir /var2
# mkdir /opt2
# mkdir /usr2
# cd /
The gotcha here is that you can't really specify the directory name,
as ufsdump will interpret it as not being a block or character device.
To illustrate this error:
# cd /usr
# ufsdump 0f - /usr | (cd /usr2; ufsrestore xf - )
DUMP: Writing 32 Kilobyte records
DUMP: Date of this level 0 dump: Wed Dec 10 17:33:42 1997
DUMP: Date of last level 0 dump: the epoch
DUMP: Dumping /dev/rdsk/c0t3d0s0 (tmpdns:/usr) to standard output
DUMP: Mapping (Pass I) [regular files]
DUMP: Mapping (Pass II) [directories]
DUMP: Estimated 317202 blocks (154.88MB)
DUMP: Dumping (Pass III) [directories]
DUMP: Broken pipe
DUMP: The ENTIRE dump is aborted
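What does work is handing ufsdump the raw device itself; a sketch,
using the source slice from the DUMP output above:
# ufsdump 0f - /dev/rdsk/c0t3d0s0 | (cd /usr2; ufsrestore xf - )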
If you want to use the directory names to simplify your command line,
use the tar command instead of ufsdump as follows:
Example:
# cd /usr
# tar cvfp - . | (cd /usr2; tar xvfp - )
OPTIONAL (This may be redundant BUT ensures that the copied files are
once again clean and consistent). Checking the integrity of a filesystem
is always highly recommended even if it becomes redundant in nature.
Now, check and run fsck on the new partition/slices:
# fsck /dev/rdsk/c0t1d0s0
# fsck /dev/rdsk/c0t1d0s4
# fsck /dev/rdsk/c0t1d0s5
# fsck /dev/rdsk/c0t1d0s6
Edit your /mount_point/etc/vfstab file to have this disk bootup from the
correct disk/devices c0t1d0 as opposed to c0t3d0.
# cd /root2
# vi /root2/etc/vfstab
If you choose to get the bootblk from your current disk, the location
of the bootblk in Solaris 2.5 or higher is:
/usr/platform/`uname -i`/lib/fs/ufs/bootblk
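To actually write that boot block to the new disk's root slice, use
installboot(1M); a sketch, with the device taken from this example:
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0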
Now create an alias for the other disk (this may be existent if it's off
of the onboard/first scsi controller).
ok> probe-scsi
T3 original boot disk
T1 new disk with copied slices
ok> devalias
By default this will always boot from the new disk. If you want to boot
from the old disk you can manually tell it to boot to that alias, like
so:
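For example, assuming the default devalias disk3 exists on your
machine:
ok> boot disk3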
(This will boot off from any Target 3/scsi id 3 internal disk). Also see
INFODOC #'s 14046, 11855, 11854 for setting different boot devalias'es.
NOTE: If the new disk encounters a problem on booting, the most
likely cause would be inappropriate devlinks, so the course of action
to take here is the /etc/path_to_inst, /dev, /devices fix: The
following is a solution to solve problems with /dev, /devices, and/or
/etc/path_to_inst. This routine extracts the defaults (with links
intact) from the Solaris 2.x CD-ROM.
If you plan to move this new disk you copied the OS on, you MUST ensure
that it will be moved to a similar architecture and machine type as
hardware address paths are usually different from one machine to
another.
Each hardware platform has a hardware device tree which must match the
device tree information saved during installation in /devices and the
/dev directories.
For more details on why you can't move a Solaris 2.X boot disk
between machines, please see INFODOC 13911 and 13920.
# ls -l /dev/dsk/c1t4d0s*
The new disk drive is now available for use as a block or character
device. Refer to sd for more info.
Thank you:
Anand Chouthai
Roy Erickson
After making these two changes I was able to get the system
back to a sane state.
--
Not sure about this one but for now I will live w/ it.
Kevin Counts
--
Script #1:
#!/bin/sh
#-------------------------------------------------------------------------
# $Id: recover-egate2.sh,v 1.7 2004/03/01 19:36:06 countskm Exp $
#-------------------------------------------------------------------------
# Custom script to restore egate2 (run from jumpstart recovery image).
#-------------------------------------------------------------------------
#-------------------------------------------------------------------------
# Create pre-defined vtoc for 36GB FC Drive
#-------------------------------------------------------------------------
/usr/sbin/fmthard -s - /dev/rdsk/c1t0d0s2 <<EOF
0 2 00 0 8389656 8389655
1 3 01 8389656 8389656 16779311
2 5 00 0 71127180 71127179
3 7 00 16779312 16779312 33558623
4 0 00 33558624 37516554 71075177
6 0 00 71075178 26001 71101178
7 0 00 71101179 26001 71127179
EOF
/usr/sbin/fsck /dev/rdsk/c1t0d0s0
/usr/sbin/fsck /dev/rdsk/c1t0d0s3
/usr/sbin/fsck /dev/rdsk/c1t0d0s4
mount /dev/dsk/c1t0d0s0 /a
mkdir -p /a/var
mkdir -p /a/opt
mount /dev/dsk/c1t0d0s3 /a/var
mount /dev/dsk/c1t0d0s4 /a/opt
#-------------------------------------------------------------------------
server=veritas
log=/var/tmp/bprestore.log
rename=/var/tmp/bprestore.rename
filelist=/var/tmp/bprestore.filelist
cat <<EOF
--------------------------------------------------------------------
Running bprestore in foreground.
EOF
echo \
/usr/openv/netbackup/bin/bprestore -w \
-H \
-S ${server} \
-L ${log} \
-R ${rename} \
${extra_opt} \
-f ${filelist}
/usr/openv/netbackup/bin/bprestore -w \
-H \
-S ${server} \
-L ${log} \
-R ${rename} \
${extra_opt} \
-f ${filelist}
#-------------------------------------------------------------------------
# Make excluded /egate mountpoint
#-------------------------------------------------------------------------
mkdir -p /a/egate
#-------------------------------------------------------------------------
# Unconfigure disksuite mirror
#-------------------------------------------------------------------------
mv /a/etc/lvm/mddb.cf /a/etc/lvm/mddb.cf.bak
sed -e 's!md/!!g' \
-e 's!d10!c1t0d0s0!g' \
-e 's!d20!c1t0d0s1!g' \
-e 's!d30!c1t0d0s3!g' \
-e 's!d40!c1t0d0s4!g' \
/a/etc/vfstab > /a/etc/vfstab.tmp
cp /a/etc/vfstab /a/etc/vfstab.bak
cp /a/etc/vfstab.tmp /a/etc/vfstab
cp /a/etc/system /a/etc/system.bak
cp /a/etc/system.tmp /a/etc/system
#-------------------------------------------------------------------------
# Rebuild /dev and /devices and /etc/path_to_inst
# Typically we don't back up /dev, so check if it's even there.
#-------------------------------------------------------------------------
[ -d /a/dev ] && mv /a/dev /a/dev.bak
mv /a/devices /a/devices.bak
mkdir /a/dev
mkdir /a/devices
mv /a/etc/path_to_inst \
/a/etc/path_to_inst.bak
cp /tmp/root/etc/path_to_inst \
/a/etc/path_to_inst
#-------------------------------------------------------------------------
# Make mount points excluded from backup
#-------------------------------------------------------------------------
mkdir /a/tmp
chmod 1777 /a/tmp
chown root:sys /a/tmp
#-------------------------------------------------------------------------
# Umount the slices and install the ufs boot block
#-------------------------------------------------------------------------
umount /a/var
umount /a/opt
umount /a
echo "--------------------------------------------------------------------"
echo " Restore complete - type \"reboot -- -r\" to reboot the system."
echo "--------------------------------------------------------------------"
#-------------------------------------------------------------------------
# End.
#-------------------------------------------------------------------------
Script #2:
#!/bin/sh
#-------------------------------------------------------------------------
# Configuring Solaris 8 Boot Image
#-------------------------------------------------------------------------
root=/export/install/SOL8-RECOVER-TEST/Solaris_8/Tools/Boot/
noask=/export/depot/fileset/isconf/plat/sunos/5.8/etc/noask_pkgadd
depot=/export/depot/pkg/sunos/5.8
#-------------------------------------------------------------------------
perl -pi -e '/^root/ && s/NP/<your own hash>/' $root/etc/shadow
exit 0
pkgadd -d ${depot}/SMC/SMCncurs-5.3 -R $root \
-n -a ${noask} all
#-------------------------------------------------------------------------
perl -pi -e ' /^\s*install\)/ and print <<EOF
recover)
cat < /dev/null > /tmp/._recover_startup
shift
;;
EOF
' $root/sbin/rcS
#-------------------------------------------------------------------------
perl -pi -e ' m!#/usr/sbin/inetd -s! and print <<EOF
if [ -f /tmp/._recover_startup ] ; then
/usr/sbin/inetd -s
fi
EOF
' $root/sbin/sysconfig
#-------------------------------------------------------------------------
perl -pi -e ' m!exec /sbin/suninstall! and print <<EOF
if [ -f /tmp/._recover_startup ] ; then
exec /bin/ksh -o vi
fi
EOF
' $root/sbin/sysconfig
#-------------------------------------------------------------------------
cp -rp tmp_proto/openv $root/.tmp_proto/
ln -s /tmp/openv $root/usr/openv
#-------------------------------------------------------------------------
cat <<EOF >> $root/etc/services
#
# NetBackup services
#
bprd 13720/tcp bprd
bpcd 13782/tcp bpcd
vopied 13783/tcp vopied
bpjava-msvc 13722/tcp bpjava-msvc
EOF
#-------------------------------------------------------------------------
cat <<EOF >> $root/etc/inetd.conf
#
# netbackup services
#
bpcd        stream tcp nowait root /usr/openv/netbackup/bin/bpcd bpcd
vopied      stream tcp nowait root /usr/openv/bin/vopied vopied
bpjava-msvc stream tcp nowait root /usr/openv/netbackup/bin/bpjava-msvc bpjava-msvc -transient
EOF
_______________________________________________
- Physical devices:
A "physical device name" represents the full pathname of the device.
Physical device files are found in the /devices directory and have the
following
naming convention:
/devices/sbus@1,f8000000/esp@0,40000/sd@3,0:a
Each device has a unique name representing both the type of device and
the location of that device
in the system-addressing structure called the "device tree". The
OpenBoot firmware builds the
device tree for all devices from information gathered at POST. The
device tree is loaded in memory
and is used by the kernel during boot to identify all configured
devices.
A device pathname is a series of node names separated by slashes. Each
device has the following form:
driver-name@unit-address:device-arguments
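Reading the earlier example against that template:
/devices/sbus@1,f8000000/esp@0,40000/sd@3,0:a has driver name sd, unit
address 3,0 (SCSI target 3, LUN 0), and device argument a (the first
slice).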
/devices>ls -al
total 70
drwxr-xr-x 7 root sys 512 Aug 10 2004 .
drwxr-xr-x 25 root root 512 Aug 17 2004 ..
crw------- 1 root sys 201, 0 Aug 10 2004 memory-
controller@0,0:mc-us3i
drwxr-xr-x 4 root sys 512 Aug 10 2004 pci@1c,600000
crw------- 1 root sys 109,767 Aug 10 2004
pci@1c,600000:devctl
drwxr-xr-x 2 root sys 512 Aug 10 2004 pci@1d,700000
crw------- 1 root sys 109,1023 Aug 10 2004
pci@1d,700000:devctl
drwxr-xr-x 4 root sys 512 Aug 10 2004 pci@1e,600000
crw------- 1 root sys 109,511 Aug 10 2004
pci@1e,600000:devctl
drwxr-xr-x 2 root sys 512 Aug 10 2004 pci@1f,700000
crw------- 1 root sys 109,255 Aug 10 2004
pci@1f,700000:devctl
drwxr-xr-x 2 root sys 29696 Aug 11 2004 pseudo
- Instance name:
The "instance name" represents the kernel's abbreviated name for every
possible device
on the system. For example, sd0 and sd1 represents the instance names of
two SCSI disk devices.
Instance names are mapped in the /etc/path_to_inst file, an are
displayed by using the
commands dmesg, sysdef, and prtconf
/devices>cd /etc
/etc>more path_to_inst
#
# Caution! This file contains critical kernel state
#
"/options" 0 "options"
"/pci@1f,700000" 0 "pcisch"
"/pci@1f,700000/network@2" 0 "bge"
"/pci@1f,700000/network@2,1" 1 "bge"
"/pci@1e,600000" 1 "pcisch"
"/pci@1e,600000/ide@d" 0 "uata"
"/pci@1e,600000/ide@d/sd@0,0" 30 "sd"
"/pci@1e,600000/isa@7" 0 "ebus"
"/pci@1e,600000/isa@7/power@0,800" 0 "power"
"/pci@1e,600000/isa@7/rmc-comm@0,3e8" 0 "rmc_comm"
"/pci@1e,600000/isa@7/i2c@0,320" 0 "pcf8584"
"/pci@1e,600000/isa@7/i2c@0,320/motherboard-fru-prom@0,a2" 0 "seeprom"
"/pci@1e,600000/isa@7/i2c@0,320/chassis-fru-prom@0,a8" 1 "seeprom"
"/pci@1e,600000/isa@7/i2c@0,320/power-supply-fru-prom@0,b0" 2 "seeprom"
"/pci@1e,600000/isa@7/i2c@0,320/power-supply-fru-prom@0,a4" 3 "seeprom"
"/pci@1e,600000/isa@7/i2c@0,320/dimm-spd@0,b6" 4 "seeprom"
"/pci@1e,600000/isa@7/i2c@0,320/dimm-spd@0,b8" 5 "seeprom"
"/pci@1e,600000/isa@7/i2c@0,320/dimm-spd@0,c6" 6 "seeprom"
"/pci@1e,600000/isa@7/i2c@0,320/dimm-spd@0,c8" 7 "seeprom"
"/pci@1e,600000/isa@7/i2c@0,320/nvram@0,50" 8 "seeprom"
"/pci@1e,600000/isa@7/i2c@0,320/gpio@0,70" 0 "pca9556"
"/pci@1e,600000/isa@7/i2c@0,320/gpio@0,44" 1 "pca9556"
"/pci@1e,600000/isa@7/i2c@0,320/gpio@0,46" 2 "pca9556"
"/pci@1e,600000/isa@7/i2c@0,320/gpio@0,4a" 3 "pca9556"
"/pci@1e,600000/isa@7/i2c@0,320/gpio@0,68" 4 "pca9556"
"/pci@1e,600000/isa@7/i2c@0,320/gpio@0,88" 5 "pca9556"
"/pci@1e,600000/isa@7/serial@0,3f8" 0 "su"
"/pci@1e,600000/isa@7/serial@0,2e8" 1 "su"
"/pci@1e,600000/pmu@6" 0 "pmubus"
"/pci@1e,600000/pmu@6/gpio@8a" 0 "pmugpio"
"/pci@1e,600000/pmu@6/i2c@0" 0 "smbus"
"/pci@1e,600000/pmu@6/gpio@80000000" 1 "pmugpio"
"/pci@1e,600000/pmu@6/i2c@0,0" 1 "smbus"
"/pci@1e,600000/usb@a" 0 "ohci"
"/pci@1c,600000" 2 "pcisch"
"/pci@1c,600000/scsi@2" 0 "glm"
"/pci@1c,600000/scsi@2/sd@0,0" 0 "sd"
"/pci@1c,600000/scsi@2/sd@1,0" 1 "sd"
"/pci@1c,600000/scsi@2/sd@2,0" 2 "sd"
"/pci@1c,600000/scsi@2/sd@3,0" 3 "sd"
"/pci@1c,600000/scsi@2/sd@4,0" 4 "sd"
"/pci@1c,600000/scsi@2/sd@5,0" 5 "sd"
"/pci@1c,600000/scsi@2/sd@6,0" 6 "sd"
"/pci@1c,600000/scsi@2/sd@8,0" 7 "sd"
"/pci@1c,600000/scsi@2/sd@9,0" 8 "sd"
"/pci@1c,600000/scsi@2/sd@a,0" 9 "sd"
"/pci@1c,600000/scsi@2/sd@b,0" 10 "sd"
"/pci@1c,600000/scsi@2/sd@c,0" 11 "sd"
"/pci@1c,600000/scsi@2/sd@d,0" 12 "sd"
"/pci@1c,600000/scsi@2/sd@e,0" 13 "sd"
"/pci@1c,600000/scsi@2/sd@f,0" 14 "sd"
"/pci@1c,600000/scsi@2/st@0,0" 0 "st"
"/pci@1c,600000/scsi@2/st@1,0" 1 "st"
"/pci@1c,600000/scsi@2/st@2,0" 2 "st"
"/pci@1c,600000/scsi@2/st@3,0" 3 "st"
"/pci@1c,600000/scsi@2/st@4,0" 4 "st"
"/pci@1c,600000/scsi@2/st@5,0" 5 "st"
"/pci@1c,600000/scsi@2/st@6,0" 6 "st"
"/pci@1c,600000/scsi@2/ses@0,0" 0 "ses"
"/pci@1c,600000/scsi@2/ses@1,0" 1 "ses"
"/pci@1c,600000/scsi@2/ses@2,0" 2 "ses"
"/pci@1c,600000/scsi@2/ses@3,0" 3 "ses"
"/pci@1c,600000/scsi@2/ses@4,0" 4 "ses"
"/pci@1c,600000/scsi@2/ses@5,0" 5 "ses"
"/pci@1c,600000/scsi@2/ses@6,0" 6 "ses"
"/pci@1c,600000/scsi@2/ses@7,0" 7 "ses"
"/pci@1c,600000/scsi@2/ses@8,0" 8 "ses"
"/pci@1c,600000/scsi@2/ses@9,0" 9 "ses"
"/pci@1c,600000/scsi@2/ses@a,0" 10 "ses"
"/pci@1c,600000/scsi@2/ses@b,0" 11 "ses"
"/pci@1c,600000/scsi@2/ses@c,0" 12 "ses"
"/pci@1c,600000/scsi@2/ses@d,0" 13 "ses"
"/pci@1c,600000/scsi@2/ses@e,0" 14 "ses"
"/pci@1c,600000/scsi@2/ses@f,0" 15 "ses"
"/pci@1c,600000/scsi@2,1" 1 "glm"
"/pci@1c,600000/scsi@2,1/sd@0,0" 15 "sd"
"/pci@1c,600000/scsi@2,1/sd@1,0" 16 "sd"
"/pci@1c,600000/scsi@2,1/sd@2,0" 17 "sd"
"/pci@1c,600000/scsi@2,1/sd@3,0" 18 "sd"
"/pci@1c,600000/scsi@2,1/sd@4,0" 19 "sd"
"/pci@1c,600000/scsi@2,1/sd@5,0" 20 "sd"
"/pci@1c,600000/scsi@2,1/sd@6,0" 21 "sd"
"/pci@1c,600000/scsi@2,1/sd@8,0" 22 "sd"
"/pci@1c,600000/scsi@2,1/sd@9,0" 23 "sd"
"/pci@1c,600000/scsi@2,1/sd@a,0" 24 "sd"
"/pci@1c,600000/scsi@2,1/sd@b,0" 25 "sd"
"/pci@1c,600000/scsi@2,1/sd@c,0" 26 "sd"
"/pci@1c,600000/scsi@2,1/sd@d,0" 27 "sd"
"/pci@1c,600000/scsi@2,1/sd@e,0" 28 "sd"
"/pci@1c,600000/scsi@2,1/sd@f,0" 29 "sd"
"/pci@1c,600000/scsi@2,1/st@0,0" 7 "st"
"/pci@1c,600000/scsi@2,1/st@1,0" 8 "st"
"/pci@1c,600000/scsi@2,1/st@2,0" 9 "st"
"/pci@1c,600000/scsi@2,1/st@3,0" 10 "st"
"/pci@1c,600000/scsi@2,1/st@4,0" 11 "st"
"/pci@1c,600000/scsi@2,1/st@5,0" 12 "st"
"/pci@1c,600000/scsi@2,1/st@6,0" 13 "st"
"/pci@1c,600000/scsi@2,1/ses@0,0" 16 "ses"
"/pci@1c,600000/scsi@2,1/ses@1,0" 17 "ses"
"/pci@1c,600000/scsi@2,1/ses@2,0" 18 "ses"
"/pci@1c,600000/scsi@2,1/ses@3,0" 19 "ses"
"/pci@1c,600000/scsi@2,1/ses@4,0" 20 "ses"
"/pci@1c,600000/scsi@2,1/ses@5,0" 21 "ses"
"/pci@1c,600000/scsi@2,1/ses@6,0" 22 "ses"
"/pci@1c,600000/scsi@2,1/ses@7,0" 23 "ses"
"/pci@1c,600000/scsi@2,1/ses@8,0" 24 "ses"
"/pci@1c,600000/scsi@2,1/ses@9,0" 25 "ses"
"/pci@1c,600000/scsi@2,1/ses@a,0" 26 "ses"
"/pci@1c,600000/scsi@2,1/ses@b,0" 27 "ses"
"/pci@1c,600000/scsi@2,1/ses@c,0" 28 "ses"
"/pci@1c,600000/scsi@2,1/ses@d,0" 29 "ses"
"/pci@1c,600000/scsi@2,1/ses@e,0" 30 "ses"
"/pci@1c,600000/scsi@2,1/ses@f,0" 31 "ses"
"/pci@1d,700000" 3 "pcisch"
"/pci@1d,700000/network@2" 2 "bge"
"/pci@1d,700000/network@2,1" 3 "bge"
"/memory-controller@0,0" 0 "mc-us3i"
"/memory-controller@1,0" 1 "mc-us3i"
"/pseudo" 0 "pseudo"
"/scsi_vhci" 0 "scsi_vhci"
/etc>
Logical device files have a major and minor number that indicate
device drivers,
hardware addresses, and other characteristics.
Furthermore, a device filename must follow a specific naming
convention.
A logical device name for a disk drive has the following format:
/dev/[r]dsk/cxtxdxsx
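For example, c1t0d0s2 decodes as: controller 1 (c1), SCSI target 0
(t0), disk/LUN 0 (d0), slice 2 (s2); the /dev/rdsk form addresses the
raw (character) device, /dev/dsk the block device.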
/dev>ls -al
..
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd1a ->
rdsk/c1t1d0s0
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd1b ->
rdsk/c1t1d0s1
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd1c ->
rdsk/c1t1d0s2
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd1d ->
rdsk/c1t1d0s3
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd1e ->
rdsk/c1t1d0s4
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd1f ->
rdsk/c1t1d0s5
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd1g ->
rdsk/c1t1d0s6
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd1h ->
rdsk/c1t1d0s7
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd3a ->
rdsk/c1t0d0s0
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd3b ->
rdsk/c1t0d0s1
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd3c ->
rdsk/c1t0d0s2
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd3d ->
rdsk/c1t0d0s3
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd3e ->
rdsk/c1t0d0s4
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd3f ->
rdsk/c1t0d0s5
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd3g ->
rdsk/c1t0d0s6
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsd3h ->
rdsk/c1t0d0s7
lrwxrwxrwx 1 root root 27 Aug 10 2004 rsm ->
../devices/pseudo/rsm@0:rsm
lrwxrwxrwx 1 root root 13 Aug 10 2004 rsr0 ->
rdsk/c0t0d0s2
lrwxrwxrwx 1 root root 7 Aug 10 2004 rst12 -> rmt/0lb
lrwxrwxrwx 1 root root 7 Aug 10 2004 rst20 -> rmt/0mb
lrwxrwxrwx 1 root root 7 Aug 10 2004 rst28 -> rmt/0hb
lrwxrwxrwx 1 root root 7 Aug 10 2004 rst36 -> rmt/0cb
lrwxrwxrwx 1 root other 27 Aug 10 2004 rts ->
../devices/pseudo/rts@0:rts
drwxr-xr-x 2 root sys 512 Aug 10 2004 sad
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd1a ->
dsk/c1t1d0s0
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd1b ->
dsk/c1t1d0s1
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd1c ->
dsk/c1t1d0s2
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd1d ->
dsk/c1t1d0s3
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd1e ->
dsk/c1t1d0s4
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd1f ->
dsk/c1t1d0s5
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd1g ->
dsk/c1t1d0s6
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd1h ->
dsk/c1t1d0s7
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd3a ->
dsk/c1t0d0s0
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd3b ->
dsk/c1t0d0s1
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd3c ->
dsk/c1t0d0s2
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd3d ->
dsk/c1t0d0s3
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd3e ->
dsk/c1t0d0s4
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd3f ->
dsk/c1t0d0s5
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd3g ->
dsk/c1t0d0s6
lrwxrwxrwx 1 root root 12 Aug 10 2004 sd3h ->
dsk/c1t0d0s7
..
/dev>cd dsk
/dev/dsk>ls -al
total 58
drwxr-xr-x 2 root sys 512 Aug 10 2004 .
drwxr-xr-x 14 root sys 4096 Oct 4 14:15 ..
lrwxrwxrwx 1 root root 42 Aug 10 2004 c0t0d0s0
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:a
lrwxrwxrwx 1 root root 42 Aug 10 2004 c0t0d0s1
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:b
lrwxrwxrwx 1 root root 42 Aug 10 2004 c0t0d0s2
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:c
lrwxrwxrwx 1 root root 42 Aug 10 2004 c0t0d0s3
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:d
lrwxrwxrwx 1 root root 42 Aug 10 2004 c0t0d0s4
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:e
lrwxrwxrwx 1 root root 42 Aug 10 2004 c0t0d0s5
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:f
lrwxrwxrwx 1 root root 42 Aug 10 2004 c0t0d0s6
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:g
lrwxrwxrwx 1 root root 42 Aug 10 2004 c0t0d0s7
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:h
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t0d0s0
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:a
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t0d0s1
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:b
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t0d0s2
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:c
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t0d0s3
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:d
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t0d0s4
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:e
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t0d0s5
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:f
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t0d0s6
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:g
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t0d0s7
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:h
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t1d0s0
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:a
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t1d0s1
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:b
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t1d0s2
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:c
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t1d0s3
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:d
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t1d0s4
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:e
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t1d0s5
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:f
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t1d0s6
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:g
lrwxrwxrwx 1 root root 43 Aug 10 2004 c1t1d0s7
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:h
/dev/dsk>cd ..
/dev>cd rdsk
/dev/rdsk>ls -al
total 58
drwxr-xr-x 2 root sys 512 Aug 10 2004 .
drwxr-xr-x 14 root sys 4096 Oct 4 14:15 ..
lrwxrwxrwx 1 root root 46 Aug 10 2004 c0t0d0s0
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:a,raw
lrwxrwxrwx 1 root root 46 Aug 10 2004 c0t0d0s1
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:b,raw
lrwxrwxrwx 1 root root 46 Aug 10 2004 c0t0d0s2
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:c,raw
lrwxrwxrwx 1 root root 46 Aug 10 2004 c0t0d0s3
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:d,raw
lrwxrwxrwx 1 root root 46 Aug 10 2004 c0t0d0s4
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:e,raw
lrwxrwxrwx 1 root root 46 Aug 10 2004 c0t0d0s5
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:f,raw
lrwxrwxrwx 1 root root 46 Aug 10 2004 c0t0d0s6
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:g,raw
lrwxrwxrwx 1 root root 46 Aug 10 2004 c0t0d0s7
-> ../../devices/pci@1e,600000/ide@d/sd@0,0:h,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t0d0s0
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:a,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t0d0s1
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:b,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t0d0s2
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:c,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t0d0s3
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:d,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t0d0s4
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:e,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t0d0s5
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:f,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t0d0s6
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:g,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t0d0s7
-> ../../devices/pci@1c,600000/scsi@2/sd@0,0:h,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t1d0s0
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:a,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t1d0s1
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:b,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t1d0s2
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:c,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t1d0s3
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:d,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t1d0s4
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:e,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t1d0s5
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:f,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t1d0s6
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:g,raw
lrwxrwxrwx 1 root root 47 Aug 10 2004 c1t1d0s7
-> ../../devices/pci@1c,600000/scsi@2/sd@1,0:h,raw
# format
Searching for disks...done
# prtvtoc /dev/rdsk/c1t0d0s2
* /dev/rdsk/c1t0d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 424 sectors/track
* 24 tracks/cylinder
* 10176 sectors/cylinder
* 14089 cylinders
* 14087 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 9514560 8191680 17706239
1 3 01 0 8395200 8395199
2 5 00 0 143349312 143349311
3 7 00 8466432 1048128 9514559
4 0 00 51266688 33560448 84827135
5 0 00 17706240 33560448 51266687
6 8 00 84827136 58522176 143349311
7 0 00 8395200 71232 8466431
# prtvtoc /dev/rdsk/c1t1d0s2
* /dev/rdsk/c1t1d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 424 sectors/track
* 24 tracks/cylinder
* 10176 sectors/cylinder
* 14089 cylinders
* 14087 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 9514560 8191680 17706239
1 3 01 0 8395200 8395199
2 5 00 0 143349312 143349311
3 7 00 8466432 1048128 9514559
4 0 00 51266688 33560448 84827135
5 0 00 17706240 33560448 51266687
6 8 00 84827136 58522176 143349311
7 0 00 8395200 71232 8466431
#
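Note that both disks above carry identical layouts; the usual idiom to
clone a VTOC from one disk to another (e.g. when setting up a mirror)
is:
# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2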
# ls -al
..
..
crw------- 1 root system 17, 0 Aug 08 12:00 tty0
crw-rw-rw- 1 root system 22, 0 May 27 17:45 ttyp0
crw-rw-rw- 1 root system 22, 1 May 27 17:45 ttyp1
crw-rw-rw- 1 root system 22, 2 May 27 17:45 ttyp2
..
..
brw-rw---- 1 root system 10, 7 May 27 17:46 hd3
brw-rw---- 1 root system 10, 4 Jun 27 15:51 hd4
brw-rw---- 1 root system 10, 1 Aug 08 11:41 hd5
brw-rw---- 1 root system 10, 2 May 27 17:46 hd6
brw-rw---- 1 root system 10, 3 May 27 17:46 hd8
brw-rw---- 1 root system 10, 6 May 27 17:46 hd9var
brw------- 1 root system 16, 7 May 27 17:44 hdisk0
brw------- 1 root system 16, 2 May 27 17:44 hdisk1
brw------- 1 root system 16, 10 May 27 18:23 hdisk10
brw------- 1 root system 16, 12 May 27 18:23 hdisk11
brw------- 1 root system 16, 5 May 27 17:44 hdisk2
brw------- 1 root system 16, 20 May 27 18:23 hdisk20
brw------- 1 root system 16, 21 May 27 18:23 hdisk21
brw------- 1 root system 16, 22 May 27 18:23 hdisk22
..
..
=========================
8. Current machines 2005:
=========================
1. eServer p5 family:
---------------------
- pSeries 655 (Rack-mount)
- pSeries 670 (Rack-mount)
The 4- to 16-way p670 is packed with the same Capacity on Demand (CoD)
capabilities and innovative
technology as the flagship p690.
Processor POWER4+
Clock rates (Min/Max) 1.50GHz
System memory (Std/Max) 4GB / 256GB
Internal storage (Std/Max) 72.8GB / 7.0TB
Performance (rPerf range)*** 13.66 to 46.79
- pSeries 690
2. Sun servers:
---------------
The Sun Fire V-series, for example:
V125
V215
V245
V445
V210
V240
etc..
These servers can run many operating systems, including Solaris OS,
Linux, Windows or VMware.
There are also the Sun Blade machines.
3. HP servers:
--------------
- HP ProLiant servers
  - HP ProLiant DL
  - HP ProLiant ML
  - HP ProLiant BL blades
- HP Integrity servers
  - Entry-class
  - Mid-range
  - Superdome (high-end)
  - HP Integrity BL blades
- HP 9000 servers
  PA-RISC powered servers
- HP Integrity servers
  Industry standard Itanium 2 based servers
- HP Telco servers
  Specially designed for the telecom and service provider industries
=======================================
9. Most important pSeries LED Codes:
=======================================
reduced ODM from BLV copied into RAMFS: OK=510, NOT OK=LED 548:
LED 511: bootinfo -b is called to determine the last bootdevice
ipl_varyon of rootvg: OK=517,ELSE 551,552,554,556:
LED 555,557: mount /dev/hd4 on temporary mountpoint /mnt
LED 518: mount /usr, /var
LED 553: syncvg rootvg, or inittab problem
LED 549
LED 581: tcp/ip is being configured, and there is some problem
LED Code 888 right after boot: software problem 102, OR, hardware or
software problem 103
rc.boot LED codes:
------------------
rc.boot runs in phases (rc.boot 1, 2 and 3); the 5xx codes listed
above are raised during these phases.
0c5 The dump failed to start. An unexpected error occurred while the
system was attempting to write to the dump media.
0c6 A dump to the secondary dump device was requested. Make the
secondary dump device ready, then press CTRL-ALT-NUMPAD2.
0c7 Reserved.
0c8 The dump function is disabled. No primary dump device is configured.
0c9 A dump is in progress.
0cc Unknown dump failure
24c Attempting a service mode IPL from FDDI specified in NVRAM IPL
device list.
250 Attempting a Service mode restart from adapter feature ROM specified
in IPL ROM device list.
251 Attempting a Service mode restart from Ethernet specified in IPL ROM
device list.
252 Attempting a Service mode IPL from standard I/O planar attached
devices specified in ROM Default Device List.
253 Attempting a Service mode IPL from SCSI attached devices specified
in IPL ROM Default Device List.
254 Attempting a Service mode restart from 9333 subsystem device
specified in IPL ROM device list.
255 Attempting a Service mode IPL from IBM 7012 DBA disk-attached
devices specified in IPL ROM Default Devices List.
256 Attempting a Service mode restart from Ethernet specified in IPL ROM
default device list.
257 Attempting a Service mode restart from Token Ring specified in IPL
ROM default device list.
258 Attempting a Service mode restart from Token Ring specified by the
operator.
259 Attempting a Service mode restart from FDDI specified by the
operator.
25c Attempting a normal mode IPL from FDDI specified in IPL ROM device
list.
260 Information is being displayed on the display console.
261 Information will be displayed on the tty terminal when the "1" key
is pressed on the tty terminal keyboard.
262 A keyboard was not detected as being connected to the system's
keyboard port.
NOTE: Check for blown planar fuses or for a corrupted boot on disk
drive
263 Attempting a Normal mode restart from adapter feature ROM specified
in NVRAM device list.
269 Stalled state - the system is unable to IPL
271 Mouse port POST.
272 Tablet port POST.
277 Auto Token-Ring LANstreamer MC 32 Adapter
278 Video ROM Scan POST.
279 FDDI adapter POST.
280 3COM Ethernet POST.
281 Keyboard POST executing.
282 Parallel port POST executing
283 Serial port POST executing
284 POWER Gt1 graphics adapter POST executing
285 POWER Gt3 graphics adapter POST executing
286 Token Ring adapter POST executing.
287 Ethernet adapter POST executing.
288 Adapter card slots being queried.
289 GTO POST.
290 IOCC POST error (irrecoverable).
291 Standard I/O POST running.
292 SCSI POST running.
293 IBM 7012 DBA disk POST running.
294 IOCC bad TCW SIMM in slot location J being tested.
295 Graphics Display adapter POST, color or grayscale.
296 ROM scan POST.
297 System model number does not compare between OCS and ROS
(irrecoverable). Attempting a software IPL.
298 Attempting a software IPL (warm boot).
299 IPL ROM passed control to the loaded program code.
301 Flash Utility ROM failed or checkstop occurred (irrecoverable)
302 Flash Utility ROM: User prompt, move the key to the SERVICE
position in order to perform an optional Flash Update. This LED will
only appear if the key switch is in the SECURE position; it signals
the user that a Flash Update may be initiated by moving the key switch
to the SERVICE position. If the key is moved to the SERVICE position,
LED 303 will be displayed.
303 Flash Utility ROM: User prompt, press the reset button in order to
perform an optional Flash Update. This LED signals the user to press
the reset button and select the optional Flash Update.
304 Flash Utility ROM IOCC POST error (irrecoverable)
305 Flash Utility ROM standard I/O POST running.
306 Flash Utility ROM is attempting IPL from Flash Update Boot Image.
307 Flash Utility ROM system model number does not compare between OCS
and ROM (irrecoverable).
308 Flash Utility ROM: IOCC TCW memory is being tested.
309 Flash Utility ROM passed control to a Flash Update Boot Image.
311 Flash Utility ROM CRC comparison error (irrecoverable).
312 Flash Utility ROM RAM POST memory configuration error or no memory
found (irrecoverable).
313 Flash Utility ROM RAM POST failure (irrecoverable).
314 Flash Utility ROM Power status register failed (irrecoverable).
315 Flash Utility ROM detected a low voltage condition.
318 Flash Utility ROM RAM POST is looking for good memory.
319 Flash Utility ROM RAM POST bit map is being generated.
322 CRC error on media Flash Image. No Flash Update performed.
323 Current Flash Image is being erased.
324 CRC error on new Flash Image after Update was performed. (Flash
Image is corrupted).
325 Flash Image successful and complete.
===========================================
10. Diskless machines, NFS Implementations:
===========================================
--------------------------------------------------------------------------------
NetBSD
# mkdir -p /export/client/root/dev
# mkdir /export/client/usr
# mkdir /export/client/home
# touch /export/client/swap
# cd /export/client/root
# mknod /export/client/root/dev/console c 0 0
--------------------------------------------------------------------------------
FreeBSD
The setup for FreeBSD 4.x is similar to NetBSD, but mountd needs
different options and /etc/exports has a different format.
# mkdir -p /export/client/root/dev
# mkdir /export/client/usr
# mkdir /export/client/home
# touch /export/client/swap
# cd /export/client/root
# mknod /export/client/root/dev/console c 0 0
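The /etc/exports entries themselves are not shown here. A minimal
sketch, assuming FreeBSD 4.x exports(5) syntax and the paths used in
this example (the host name "client" is illustrative; verify against
your system's man pages):

/export/client/root /export/client/swap -maproot=root client
/export/client/usr /export/client/home -maproot=nobody client

Note that mountd typically needs the -r flag here, so that a plain file
(the swap file) can be served to mount requests.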
--------------------------------------------------------------------------------
Darwin / Mac OS X
# mkdir -p /export/client/root/dev
# mkdir /export/client/usr
# mkdir /export/client/home
# touch /export/client/swap
# cd /export/client/root
# mknod /export/client/root/dev/console c 0 0
Modify the NetInfo database to export your shares. Note that you must
escape the forward slashes in the path to your export twice: once for
the shell, and once for the NetInfo parser (since it uses forward
slashes to delimit NetInfo properties). Just to add to the confusion,
the NetInfo property we're adding to is called /exports.
# nicl . -create /exports/\\/export\\/client\\/root opts
maproot=root:wheel
# nicl . -create /exports/\\/export\\/client\\/root clients 192.168.0.10
# nicl . -create /exports/\\/export\\/client\\/swap opts
maproot=root:wheel
# nicl . -create /exports/\\/export\\/client\\/swap clients 192.168.0.10
# nicl . -create /exports/\\/export\\/client\\/usr opts
maproot=nobody:nobody
# nicl . -create /exports/\\/export\\/client\\/usr clients 192.168.0.10
# nicl . -create /exports/\\/export\\/client\\/home opts
maproot=nobody:nobody
# nicl . -create /exports/\\/export\\/client\\/home clients 192.168.0.10
To later add another client for the same export, you would append to
that property (as opposed to the initial create):
# nicl . -append /exports/\\/export\\/client\\/root clients 192.168.0.12
Your system will always start the NFS daemons after reboots if the
NetInfo /exports property is present. To remove all exports and prevent
your system from starting NFS in the future, run:
# nicl . -delete /exports
If the server isn't running the NFS daemons, the client will print:
--------------------------------------------------------------------------------
Linux
# mkdir -p /export/client/root/dev
# mkdir /export/client/usr
# mkdir /export/client/home
# touch /export/client/swap
# cd /export/client/root
# mknod /export/client/root/dev/console c 0 0
Most versions of Linux only implement NFS2, in which case NetBSD will
try NFS3 and then automatically fall back. Some versions (notably RedHat
6.0) will incorrectly answer both NFS2 and NFS3 mount requests, then
ignore any attempt to access the filesystem using NFS3. This causes
untold pain and hassle.
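The Linux export step is not shown here. A minimal sketch of
/etc/exports, assuming a Linux server with nfs-utils (the host name
"client" and the options are illustrative):

/export/client/root client(rw,no_root_squash)
/export/client/swap client(rw,no_root_squash)
/export/client/usr client(ro)
/export/client/home client(rw)

# exportfs -a

On older distributions without exportfs, restarting rpc.mountd and
rpc.nfsd makes the daemons re-read /etc/exports.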
SunOS
# mkdir -p /export/client/root/dev
# mkdir /export/client/usr
# mkdir /export/client/home
# touch /export/client/swap
# cd /export/client/root
# mknod /export/client/root/dev/console c 0 0
#/etc/exports
/export/client/root -root=client
/export/client/swap -root=client
/export/client/usr
/export/client/home
# rm -f /etc/xtab;touch /etc/xtab
# exportfs -a
--------------------------------------------------------------------------------
Solaris
# mkdir -p /export/client/root/dev
# mkdir /export/client/usr
# mkdir /export/client/home
# touch /export/client/swap
# cd /export/client/root
# mknod /export/client/root/dev/console c 0 0
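The Solaris export step is not shown here. A minimal sketch, assuming
the standard share(1M)/dfstab mechanism (the host name "client" is
illustrative):

# share -F nfs -o rw=client,root=client /export/client/root
# share -F nfs -o rw=client,root=client /export/client/swap
# share -F nfs -o ro=client /export/client/usr
# share -F nfs -o rw=client /export/client/home

Placing the same share commands in /etc/dfs/dfstab and running shareall
makes them permanent.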
If the server isn't running the NFS daemons, the client will print:
--------------------------------------------------------------------------------
NEWS-OS
# mkdir -p /export/client/root/dev
# mkdir /export/client/usr
# mkdir /export/client/home
# touch /export/client/swap
# cd /export/client/root
# mknod /export/client/root/dev/console c 0 0
#/etc/exports
/export/client/root -root=client
/export/client/swap -root=client
/export/client/usr
/export/client/home
# rm -f /etc/xtab;touch /etc/xtab
# /usr/etc/exportfs -av
--------------------------------------------------------------------------------
NEXTSTEP
Note, NEXTSTEP doesn't support exporting a file. This means that swap
will have to be a file on your root (nfs) filesystem, and not its own
nfs mounted file. Keep this in mind in later steps involving swap.
You may also wish to keep with NEXTSTEP convention and place all of your
client files in /private/export/client instead of /export/client.
# mkdir -p /export/client/root/dev
# mkdir /export/client/usr
# mkdir /export/client/home
# touch /export/client/root/swap
# cd /export/client/root
# tar [--numeric-owner] -xvpzf /export/client/NetBSD-release/binary/sets/kern.tgz
# mknod /export/client/root/dev/console c 0 0
Launch /NextAdmin/NFSManager.app
Type in your client's name under "Root Access" and click the "Add"
button.
If the server isn't running the NFS daemons, the client will print:
--------------------------------------------------------------------------------
HP-UX 7
I couldn't get the HP-UX 7 rpc.mountd to start. Here's what I tried, in
case it works for you. Let us know what we're doing wrong.
I don't think HP-UX 7's NFS server allows for restricting root
read/write access.
# mkdir -p /export/client/root/dev
# mkdir /export/client/usr
# mkdir /export/client/home
# touch /export/client/swap
# cd /export/client/root
# tar [--numeric-owner] -xvpzf /export/client/NetBSD-release/binary/sets/kern.tgz
# mknod /export/client/root/dev/console c 0 0
--------------------------------------------------------------------------------
HP-UX 9
# mkdir -p /export/client/root/dev
# mkdir /export/client/usr
# mkdir /export/client/home
# touch /export/client/swap
# cd /export/client/root
# mknod /export/client/root/dev/console c 0 0
Open SAM and make sure that the kernel has NFS support compiled in
(Kernel Configuration -> Subsystems, NFS/9000). If it is not compiled
in, adding it will require a reboot.
# /usr/etc/exportfs -a
If the server isn't running the NFS daemons, the client will print:
--------------------------------------------------------------------------------
HP-UX 10
# mkdir -p /export/client/root/dev
# mkdir /export/client/usr
# mkdir /export/client/home
# touch /export/client/swap
# cd /export/client/root
# mknod /export/client/root/dev/console c 0 0
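exportfs reads /etc/exports. A sketch of the entries, assuming the same
-root= syntax shown for SunOS above (check exports(4) on HP-UX 10):

#/etc/exports
/export/client/root -root=client
/export/client/swap -root=client
/export/client/usr
/export/client/home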
# /usr/sbin/exportfs -a
If the server isn't running the NFS daemons, the client will print:
#######################################################################
SOME MORE INFO ON ERROR CODES:
#######################################################################
====================================
SECTION 1: IBM lpar reference codes:
====================================
When the server posts these SRCs, you can find them in the Serviceable
Event View or the view that you use
to see informational logs (such as the Product Activity Log or ASM).
This error might occur when you shut down a partition that is set to
automatically IPL and then turn the managed system off
and back on. When the partition automatically IPLs, it uses the
resources specified in PHYP NVRAM, and this error
occurs when the server does not find the exact resources specified in
NVRAM. The solution is to activate the partition
by using the partition profile on the HMC. The HMC applies the values in
the profile to NVRAM. When the partition IPLs,
it uses the resources specified in the profile.
LPARCFG
LICCODE
An operating system MSD IPL was attempted with the IPL side on D-mode.
This is not a valid operating system IPL scenario,
and the IPL will be halted. This SRC is usually seen when a D-mode SLIC
install fails and attempts an MSD.
SVCDOCS
Have the customer configure an alternate IPL IOP for the partition. Then
retry the partition IPL.
SVCDOCS
Have the customer configure a load source IOP for the partition. Then
retry the partition IPL.
SVCDOCS
If you have a RAID enablement card (CCIN 5709) on your system, it will
disable an embedded SCSI adapter. If that embedded
slot is called out in the error, you can safely ignore this error.
SLOTUSE
2485 A problem occurred during the IPL of a partition.
The partition ID is characters 3 and 4 of the B2xx reference code in
word 1 of the SRC (in hexadecimal format).
If characters 3 and 4 are both zero, then the partition ID is in
extended word one as LP=xxx (in decimal format).
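A worked example (the SRC value is hypothetical): for an SRC of
B2053114, characters 3 and 4 of word 1 are "05", so the partition ID is
hexadecimal 0x05, that is, partition 5. From a shell you could extract
those characters with:

# echo B2053114 | cut -c3-4
05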
The platform LIC for this partition attempted an operation. There was a
failure. Contact your next level of support.
NEXTLVL
If present, look in the Serviceable Event View for a B7xx xxxx during
the partition's IPL. Correct that error and retry the partition IPL.
SVCDOCS
The B2xx xxxx SRC Format is Word 1: B2xx3114, Word 3: Bus, Word 4:
Board, Word 5: Card.
NEXTLVL
This is a platform LIC main store utilization problem. The platform LIC
could not obtain a segment of main storage
within the platform's main store to use for managing the creation of a
partition.
LICCODE
Look for B700 69xx errors in the Serviceable Event View and work those
errors.
NEXTLVL
Check the LPAR configuration if required, and ensure that the tagged I/O
for the partition is correct.
Look for an SRC in the Serviceable Event View logged at the time the
partition was performing an IPL. This error indicates a failure during
a search for the load source. A number of these failures may occur
before a valid load source is found; this is normal. If a B2xx3110
error is logged, a B2xx3200 may be posted to the control panel. Work
the B2xx3110 error in the Serviceable Event View. If the system IPL
hangs at B2xx3200 and you cannot check the SRC history, perform the
actions indicated for the B2xx3110 SRC.
Use the Main Storage Dump Manager to rename or copy the current main
storage dump.
SVCDOCS
An error occurred when writing the partition's main storage dump to the
partition's load source. No service action required.
This is a problem with the load source media being corrupt or not valid.
LSERROR
There was an error mapping memory for the partition's IPL. Call your
next level of support.
LICCODE
8114 A problem occurred during the IPL of a partition.
The partition ID is characters 3 and 4 of the B2xx reference code in
word 1 of the SRC (in hexadecimal format).
If characters 3 and 4 are both zero, then the partition ID is in
extended word one as LP=xxx (in decimal format).
A problem occurred on the path to the load source for the partition.
There was a failure verifying VPD for the partition's resources during
IPL. Call your next level of support.
LICCODE
8117, 8121, 8123, 8125, 8127, 8129 A problem occurred during the IPL of
a partition.
Partition did not IPL due to platform Licensed Internal Code error.
Work any error logs in the Serviceable Event View. If there are no
errors, contact your next level of support.
SVCDOCS
The platform will need to be re-IPLed before that partition can be used.
Call your next level of support.
NEXTLVL
C1F0 A problem occurred during a power off of a partition.
Internal platform Licensed Internal Code error occurred during partition
shutdown or re-IPL.
Ignore this error if there are other serviceable errors. Work those
error logs for this partition and for the platform
from the Serviceable Event View. If there are no errors, contact your
next level of support.
SVCDOCS
For all other partition types, a Partition Dump is not supported. If the
system is Hardware Management Console (HMC)
or Integrated Virtualization Manager (IVM) controlled, do an immediate
partition power off. If the system is not HMC
or IVM controlled, perform a Function 8 on the control panel. After the
partition has powered off, re-IPL the partition,
collect error logs and contact your next level of support.
NEXTLVL
Work any error logs for this partition in the Serviceable Event View. If
there are no errors, contact your next level of support.
SVCDOCS
Look for other errors. If this SRC is displayed in the operator panel,
then panel function 34 might be used to retry the current IPL while the
partition is still in the failed state.
########################################################################
==========================================================
SECTION 2: IBM Partition firmware reference (error) codes:
==========================================================
BA040040 Setting the machine type, model, and serial number failed.
FWFWPBL
BA040050 The h-call to switch off the boot watchdog timer failed.
FWFWPBL
BA040060 Setting the firmware boot side for the next boot failed.
FWFWPBL
BA050001 Rebooting a partition in logical partition mode failed. FWFWPBL
BA050004 Locating a service processor device tree node failed. FWFWPBL
BA05000A Failed to send boot failed message to the service processor
FWFWPBL
BA060003 IP parameter requires 3 period (.) characters
Enter a valid IP parameter. Example: 000.000.000.000
Power off the server and reboot from the permanent side. Reject the
firmware image on the temporary side.
If the problem persists, before replacing any components, refer to the
actions for BA090001.
BA090001 SCSI disk unit: test unit ready failed; hardware error FWSCSI1
BA090002 SCSI disk unit: test unit ready failed; sense data available
FWSCSI2
BA090003 SCSI disk unit: send diagnostic failed; sense data available
FWSCSI3
BA090004 SCSI disk unit: send diagnostic failed: devofl command FWSCSI3
BA100001 SCSI tape: test unit ready failed; hardware error FWSCSI1
BA100002 SCSI tape: test unit ready failed; sense data available FWSCSI4
BA100003 SCSI tape: send diagnostic failed; sense data available FWSCSI3
BA100004 SCSI tape: send diagnostic failed: devofl command FWSCSI3
BA110001 SCSI changer: test unit ready failed; hardware error FWSCSI1
BA110002 SCSI changer: test unit ready failed; sense data available
FWSCSI4
BA110003 SCSI changer: send diagnostic failed; sense data available
FWSCSI3
BA110004 SCSI changer: send diagnostic failed: devofl command FWSCSI3
BA120001 On an undetermined SCSI device, test unit ready failed;
hardware error FWSCSI5
BA120002 On an undetermined SCSI device, test unit ready failed; sense
data available FWSCSI4
BA120003 On an undetermined SCSI device, send diagnostic failed; sense
data available FWSCSI4
BA120004 On an undetermined SCSI device, send diagnostic failed; devofl
command FWSCSI4
BA130001 SCSI CD-ROM: test unit ready failed; hardware error FWSCSI1
BA130002 SCSI CD-ROM: test unit ready failed; sense data available
FWSCSI3
BA130003 SCSI CD-ROM: send diagnostic failed; sense data available
FWSCSI3
BA130004 SCSI CD-ROM: send diagnostic failed: devofl command FWSCSI3
BA130010 USB CD-ROM: device remained busy longer than the time-out
period
Retry the operation.
FWFWPBL
BA130011 USB CD-ROM: execution of ATA/ATAPI command was not completed
within the allowed time.
Retry the operation.
FWCD1
BA130012 USB CD-ROM: execution of ATA/ATAPI command failed.
Verify that the power and signal cables going to the USB CD-ROM are
properly connected and are not damaged.
If any problems are found, correct them, then retry the operation.
If the problem persists, the CD in the USB CD-ROM drive might not be
readable. Remove the CD and insert another CD.
NEXTLVL
BA130013 USB CD-ROM: bootable media is missing from the drive
Insert a bootable CD-ROM in the USB CD-ROM drive, then retry the
operation.
FWCD1
BA130014 USB CD-ROM: the media in the USB CD-ROM drive has been changed.
Retry the operation.
FWCD2
BA130015 USB CD-ROM: ATA/ATAPI packet command execution failed.
If the problem persists, the CD in the USB CD-ROM drive might not be
readable. Remove the CD and insert another CD.
FWCD2
BA131010 The USB keyboard was removed.
Plug in the USB keyboard and reboot the partition.
Check for system firmware updates and apply them, if available.
BA140001 SCSI read/write optical: test unit ready failed; hardware error
FWSCSI1
BA140002 SCSI read/write optical: test unit ready failed; sense data
available FWSCSI1
BA140003 SCSI read/write optical: send diagnostic failed; sense data
available FWSCSI3
BA140004 SCSI read/write optical: send diagnostic failed; devofl command
FWSCSI3
BA150001 PCI Ethernet BNC/RJ-45 or PCI Ethernet AUI/RJ-45 adapter:
internal wrap test failure
Replace the adapter specified by the location code.
BA151001 10/100 MBPS Ethernet PCI adapter: internal wrap test failure
Replace the adapter specified by the location code.
Note:
If the logical memory block size is already 256 MB, contact your next
level of support.
BA210100 The partition firmware was unable to log an error with the
server firmware. No reply was received
from the server firmware to an error log that was sent previously
NEXTLVL
BA210101 The partition firmware error log queue is full NEXTLVL
BA250010 dlpar error in open firmware FWLPAR
BA250020 dlpar error in open firmware due to an invalid dlpar entity.
This error may have been caused
by an errant or hung operating system process.
Check for operating system updates that resolve problems with dynamic
logical partitioning (dlpar) and apply them, if available.
Check for server firmware updates and apply them, if available.
BA278009 The server firmware update management tools for the version of
Linux that you are running are
incompatible with this system.
Go to Service and productivity tools for Linux on POWER™ and download
the latest service aids and productivity tools
for the version of Linux that you are running.
Note:
The maximum number of logical memory blocks per partition is 128 K
(131,072) under the new memory allocation scheme.
BA300020 Function call to isolate a logical memory block failed under
the standard memory allocation scheme.
Do the following:
Upgrade the operating system to one that supports the new memory
representation, or edit the profile
to have fewer than 8192 logical memory blocks.
Reboot the partition.
BA310010 The firmware could not obtain the SRC history FWFLASH
BA310020 The firmware received an invalid SRC history FWFLASH
BA310030 The firmware operation to write the MAC address to vital
product data (VPD) failed FWFLASH
########################################################################
=============================================
SECTION 3: IBM: Using system reference codes:
=============================================
SRC formats
For SRCs displayed on the control panel, the first four characters
designate the reference code type and the second four
characters designate the URC.
For SRCs displayed on software displays, characters 1 through 4 of word
1 designate the reference code type and characters
5 through 8 of word 1 designate the URC.
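A worked example (the SRC value is hypothetical): for word 1 = B2003110,
characters 1 through 4 give the reference code type B200, and
characters 5 through 8 give the URC 3110:

# echo B2003110 | cut -c1-4
B200
# echo B2003110 | cut -c5-8
3110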
Note:
For partition firmware SRCs (AAxx, BAxx, and DAxx) and service processor
SRCs (A1xx and B1xx), only the first two characters
of the SRC indicate the necessary action. For partition firmware SRCs
that begin with 2xxx, only the first character indicates
the necessary action. In these cases, the term URC does not apply.
A reference code that is 6 or 8 characters long and appears in either of
the following formats (xxxxxx or xxxxxxxx) is an SRC,
unless it fits one of the following conditions:
The Reference Code column contains numbers that represent the unit
reference code (URC).
The Description/Action column offers a brief description of the failure
that this SRC represents. It may also contain
instructions for continuing the problem analysis.
The Failing Item column represents functional areas of the system unit.
When available, the failing function code links
to the FRU that contains this function for each specific system unit.
To use the list of system reference codes, complete the following steps:
Click the item in the list of system reference codes that matches the
reference code type that you want to find.
Note:
The SRC tables support only 8-character reference code formats. If the
reference code provided contains only 4 or 6 characters,
contact your next level of support for assistance.
When the SRC table appears, select the appropriate URC from the first
column of the table. The tables list URCs
in hexadecimal sequence, with numeric characters listed before
alphabetic characters.
Perform the action indicated for the URC in the Description/Action
column of the table.
If the table entry does not indicate an action or if performing the
action does not correct the problem, exchange
the failing items or parts listed in the Failing Item column in the
order that they are listed. Use the following
instructions to exchange failing items:
Note:
Some failing items are required to be exchanged in groups until the
problem is solved. Other failing items are flagged
as mandatory exchange and must be exchanged before the service action is
complete, even if the problem appears
to have been repaired. For more information, see Block replacement of
FRUs.
########################################################################
==========================================================
SECTION 4: Some more SOLARIS (and GENERIC) Errors:
==========================================================
The AnswerBook navigator window comes up, but the document viewer
window does not. This message appears on the console, and the
message "Could not start new viewer" appears in the navigator
window. This situation indicates that you have an unknown client
or a problem with the network naming service.
For more information on the NIS hosts map, see the section on the
default search criteria in the NIS+ and FNS Administration Guide.
If you are using the AnswerBook, "NIS hosts map" is a good search
string.
This C shell error message indicates that there are too many
arguments after a command. For example, this can happen by
invoking rm * in a huge directory. The C shell cannot handle more
than 1706 arguments.
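When "rm *" fails this way, a common workaround is to let find(1) and
xargs(1) hand the names to rm in smaller batches. A sketch (the name
pattern is illustrative; plain filenames without embedded spaces are
assumed, and the -prune keeps find from descending into
subdirectories):

$ find . ! -name . -prune -name '*.log' -print | xargs rm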
The file specified after the first colon is not a valid mount
point because it is not a directory.
Bad address
===========
Check if the bad address resulted from supplying the wrong device
or option to a command. If that is not the problem, contact the
vendor or author of the program for an update.
This error could occur any time a function that takes a pointer
argument is passed an invalid address. Because processors differ
in their ability to detect bad addresses, on some architectures
passing bad addresses can result in undefined behaviors.
N BAD I=N
=========
This message from the memory management system often appears with
parity errors, and indicates a bad memory module or chip at the
position listed. Data loss is possible if the problem occurs
other than at boot time.
# newfs -N /dev/dsk/device
BAD TRAP
========
bad trap = N
============
See the message "BAD TRAP" for details.
For more information on bringing down the system, see the section
on halting the system in the System Administration Guide, Volume
I. If you are using the AnswerBook, "halting the system" is a
good search string.
Broken pipe
===========
Check the process at the end of the pipe to see why it exited.
Bus Error
=========
Exit the programs that make heavy use of the color map, then
restart the failed application and try again.
See the message "lp hang" for a procedure to clear the queue.
Boot the miniroot so you can replace init. Halt the machine by
typing Stop-A or by pressing the reset button. Reboot single-user
from CDROM, the net, or diskette. For example, type boot cdrom -s
at the ok prompt to boot from CDROM. After the system comes up
and gives you a # prompt, mount the device corresponding to the
original / partition somewhere, with a command similar to the
mount command below. Then copy the init program from the miniroot
to the original / partition, and reboot the system.
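A sketch of that mount-and-copy sequence (the disk device is
hypothetical; substitute the slice that holds your original /
partition):

# mount /dev/dsk/c0t3d0s0 /mnt
# cp /sbin/init /mnt/sbin/init
# reboot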
This message sometimes appears when using a modem that the system
regards as a "Hayes" type modem, which includes most modems
manufactured today. The message can be caused by incorrect switch
settings, by poor cable connections, or by not turning the modem
on.
Check that the modem is on and that the cables between the modem
and your system are securely connected. Check the internal and
external modem switch settings. Turn the modem off and then on
again, if necessary.
The C shell's cd(1) command takes only one argument. Either more
than one directory was specified, or a directory name containing
a space was specified. Directory names with spaces are easy to
create with File Manager.
The system has run out of stream devices. This error results when
a stream head attempts to open a minor device that does not exist
or that is currently in use.
Check that the stream device in question exists and was created
with an appropriate number of minor devices. Make sure that the
hardware corresponds to this configuration. If the stream device
configuration is correct, try again later when more system
resources might be available.
If you are specifying a numeric file mode, you can provide any
number of digits (although only the final one to four are
considered), but all digits must be between 0 and 7. If you are
specifying a symbolic file mode, use the syntax provided in the
chmod usage message to avoid the "invalid mode" error message:
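For example (the file name is illustrative), either form below is
valid, whereas something like "chmod 98 file" would draw the "invalid
mode" error:

$ chmod 755 file
$ chmod u=rwx,go=rx file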
Check the form and spelling of the command line. If that looks
correct, echo $path to see if the user's search path is correct.
When communications are garbled, it is possible to unset a search
path to such an extent that only built-in shell commands are
available. Here is a command to reset a basic search path:
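The command itself did not survive in this copy; for the C shell, a
basic reset looks like this (a sketch mirroring the Bourne shell
example later in this section):

% set path=(/usr/bin /usr/ccs/bin /usr/openwin/bin .)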
Connection closed.
==================
Just try again. If the other system has gone down, wait for it to
reboot first.
Connection refused
==================
Find another machine and remote login to this system, then run
this command:
$ /usr/openwin/bin/kbd_mode -a
This puts the console back into ASCII mode. Note that kbd_mode is
not a windowing program; it just fixes the console mode.
core dumped
===========
$ file core
core: ELF 32-bit MSB core file SPARC Version 1, from `dtmail'
If you have the source code for the program, you can try
compiling it with cc -g, and debugging it yourself using dbx or a
similar debugger. The where directive of dbx provides a stack
trace.
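A sketch of such a session (the program name is illustrative):

$ cc -g -o myprog myprog.c
$ dbx myprog core
(dbx) where
(dbx) quit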
Use the -k option to cpio to skip I/O errors and corrupted file
headers. This might permit you to extract other files from the
cpio archive. To extract files with corrupted headers, try
editing the archive with a binary editor such as emacs. Each cpio
file header contains a filename as a string.
Cross-device link
=================
Data fault
==========
This is a kind of bad trap that usually causes a system panic.
When this message appears after a bad trap message, a system text
or data access fault probably occurred. In the absence of a bad
trap message, this message might indicate a user text or data
access fault. Data loss is possible if the problem occurs other
than at boot time.
Make sure the machine can reboot, then check the log file
/var/adm/messages for hints about what went wrong.
This error usually relates to file and record locking, but can
also apply to mutexes, semaphores, condition variables, and
read/write locks.
Device busy
===========
Run fsck to clean the file system in question. See the message
"/dev/rdsk/N: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY" for
proper procedures.
First run fsck -n on the file system, to see how many and what
type of problems exist. Then run fsck again to repair the
file system. If you have a recent backup of the file system, you
can generally answer "y" to all the fsck questions. It's a good
idea to keep a record of all problematic files and inode numbers
for later reference. To run fsck yourself, specify options as
recommended by the boot script. For example:
# fsck /dev/rdsk/c0t4d0s0
Usually the files lost during fsck repair are those that were
created just before a crash or power outage, and they cannot be
recovered. If you lose important files, you can recover them from
backup tapes.
If you don't have a backup, ask an expert to run fsck for you.
The user can delete files to bring disk usage under the limit, or
the server administrator can use the edquota(1M) command to
increase the user's disk limit.
During file system backup, the dump program cannot open the tape
drive because some other process is holding it open.
Find the process that has the tape drive open, and either kill(1)
the process or wait for it to finish.
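On Solaris, fuser(1M) can identify that process (the tape device name
is an assumption; adjust it to your drive):

# fuser -u /dev/rmt/0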
N DUP I=N
=========
An attempt was made to sccs edit or sccs get a file that is not
yet under SCCS control.
Run sccs info to see who has the file checked out. If it is you,
go ahead and edit it. If it is somebody else, ask that person to
check in the file.
Make sure that the software matches the architecture and system
you're using. The file(1) command can help you determine the
target architecture. If you're using SunOS 4.1.x software on a
Solaris 2.x system, make sure that the Binary Compatibility
Package is installed. You can check for it using this command:
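The command itself did not survive in this copy; the standard check
uses pkginfo(1) with the package name of the Binary Compatibility
Package:

$ pkginfo SUNWbcp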
File exists
===========
Look at the names of files in the directory, then try again with
a different name or after renaming or removing the existing file.
All a user can do is restart the program and hope deadlock does
not reoccur.
Have the original owner change the mode (chmod(1)) of this file
back to 1777, its default creation mode. Rebooting the
workstation also resolves this problem.
The kernel file table is full because too many files are open on
the system. Temporarily, no more files can be opened. New data
created under this condition will probably be lost.
In the C shell, use the limit command to see or set the default
file size. In the Bourne or Korn shells, use the ulimit -a
command. Even when the shells claim that the file size is
unlimited, in fact the system limit is FCHR_MAX (usually 1
gigabyte).
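For example, to view and raise the soft file size limit (the values are
illustrative; ulimit -f counts in 512-byte blocks):

$ ulimit -a
$ ulimit -f 2097152

or, in the C shell:

% limit filesize 1024m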
Generally you can answer yes to this question without harming the
filesystem.
The fsck(1M) command cannot open the disk device, because the
specified filesystem does not exist.
giving up
=========
Check that all SCSI devices are connected and powered on. Make
sure that SCSI target numbers are correct and not in conflict.
Verify that all cables are no longer than six meters, total, and
that all SCSI connections are properly terminated.
Host is down
============
ie0: no carrier
===============
Illegal Instruction
===================
If you are booting from CDROM or from the net, check README files
to make sure you are using an image appropriate for your machine
architecture. Run df to make sure there is enough swap space on
the system; too little swap space can cause this error. If you
recently upgraded your CPU to a new architecture, replace your
operating system with one that supports the new architecture (an
operating system upgrade might be required).
Sometimes this condition results from programming error, such as
when a program attempts to execute data as instructions. This
condition can also indicate device file corruption on your
system.
If you are booting from the net, check README files to make sure
you are using a boot image for that architecture. If you are
booting from disk, make sure the system is looking at the right
disk, which is usually SCSI target 3. Failing these solutions,
connect a CD drive to the system and boot from CDROM.
Illegal seek
============
Rather than using a pipe on the command line, redirect the output
of the first program into a file and then run the second program
on that file.
Ask the program's author to fix this condition. The program needs
to be changed so it employs a device driver that can accept
special character device controls.
The ioctl() system call was given as an argument for a file that
is not a special character device. This message replaces the
traditional but puzzling "Not a typewriter" message.
The symbolic name for this error is ENOTTY, errno=25.
Generally you can answer yes to this question without harming the
filesystem.
The SUNWbnuu package must be installed before the machine can run
UUCP. Run pkgadd(1M) to install this package from the
distribution CDROM or over the network.
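A sketch of the install (the CD mount path is an assumption; adjust it
to wherever the Solaris Product directory is mounted):

# pkgadd -d /cdrom/cdrom0/s0/Solaris_2.6/Product SUNWbnuu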
For more information about NIS+, see the NIS+ and FNS
Administration Guide.
This message can appear when someone runs a command from the
shell or uses a third-party application. The sar(1M) command does
not indicate that the system-wide open file limit has been
exceeded.
The probable cause for this is that the shell limit has been
exceeded. The default open file limit is 64, but can be raised to
256.
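For example, in the Bourne or Korn shell (the value is illustrative):

$ ulimit -n 256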
# mount -o rw,remount /
# pkgadd -d .
If other devices on the system are not working correctly, the
system might have a corrupt /devices directory. Halt the system
and boot using the -r (reconfigure) option. The system will run
fsck(1M) if the /devices filesystem is corrupted, most likely
fixing the problem.
Invalid argument
================
This C shell message results from a command line with two pipes
(|) in a row or from a pipe without a command afterwards.
I/O error
=========
First find out which device is experiencing the I/O error. If the
device is a tape drive, make sure a tape is inserted into the
drive. When this error occurs with a tape in the drive, it is
likely that the tape contains an unrecoverable bad spot.
In some cases this error might occur on a call following the one
to which it actually applies.
Is a directory
==============
Look at the kernel error messages that preceded this one to try
to determine the cause of the problem. Error messages such as
"BAD TRAP" usually indicate faulty hardware. Until the problem
that caused the kernel panic is resolved, a kernel core image
cannot be saved for debugging.
Killed
======
This error does not necessarily occur when you first bring up an
application. It could take months to develop, if ordinary use of
the application seldom references the undefined symbol.
/usr/dt/lib:/usr/openwin/lib
If the system is busy with other processes, this error can occur
frequently. If possible, try to reduce the system load by
quitting applications or killing some processes.
The Lance Ethernet chip timed out while trying to acquire the bus
for a DVMA transfer. Most network applications wait for a
transfer to occur, so generally no data gets lost. However, data
transfer might fail after too many time-outs.
For more information about the Lance Ethernet chip, see the
le(7D) man page.
If the Ethernet cable is plugged in, find out whether or not the
Ethernet hub does a Link Integrity Test. Then become superuser to
check and possibly set the machine's NVRAM. If the hub's Link
Integrity Test is disabled, set this variable to false.
# eeprom | grep tpe
tpe-link-test?=true
# eeprom 'tpe-link-test?=false'
LINK COUNT FILE I=i OWNER=o MODE=m SIZE=s MTIME=t COUNT... ADJUST?
===================================================================
Generally you can answer yes to this question without harming the
filesystem.
For information on updating NIS data, see the section on NIS maps
in the NIS+ and FNS Administration Guide. If you are using the
AnswerBook, "hosts.byaddr" is a good search string.
Login incorrect
===============
Check the /etc/passwd file and the NIS or NIS+ passwd map on the
local system to see if an entry exists for this user. If a user
has simply forgotten the password, su and set a new one with the
passwd username command. This command automatically updates the
NIS+ passwd map, but with NIS you'll need to coordinate the
update with the passwd map.
The "Login incorrect" problem can also occur with older versions
of NIS when the user name has more than eight characters. If this
is the case, edit the NIS password file, change the user name to
have eight or fewer characters, and then remake the NIS passwd
map.
If that doesn't work, see the message "su: No shell" and follow
most of the instructions given there. Instead of changing the
default shell however, make the password field blank in
/etc/shadow.
lp hang
=======
The mailx program can usually recover from this error and
delineate mail message boundaries correctly. Pay close attention
to the message that might be truncated or combined with another
message, and to all messages after that one. If a mail file
becomes hopelessly corrupted, run it through a text editor to
eliminate all Content-Length lines, and ensure that each message
has a From (no colon) line for each message, preceded by a blank
line.
Replace the SPARCstation 2 CPU with one that is at the most
recent dash level.
memory leaks
============
An application uses up more and more memory, until all swap space
is exhausted.
The system was unable to mount the filesystem that was specified
because the super-block indicates that the filesystem might be
corrupted. This is not an impediment for read-only mounts.
Network is down
===============
Network is unreachable
======================
This message comes from the NFS server when it gets a request
with unrecognized or incorrect arguments. Typically, it means the
request could not be XDR decoded properly. This can result from
corruption of the packet over the network, or from an
implementation bug causing the NFS client to improperly encode
its arguments.
To add more swap area, use the swap -a command on the target
system. Alternatively, reconfigure the target system to have
more swap space. As a general rule, swap space should be two to
three times as large as physical memory.
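A sketch of adding a swap file on the fly (the path and size are
illustrative):

# mkfile 256m /export/swapfile
# swap -a /export/swapfile

To make the addition permanent, add a corresponding swap line to
/etc/vfstab.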
No child processes
==================
The login(1) program could not find the home directory listed in
the password file or NIS passwd map, so it deposited the user in
the root directory.
/home auto_home
+auto_home
It is also possible that the NFS server has not shared (exported)
this /home directory, or that the NFS daemons on the server have
disappeared.
No recipients specified
=======================
No route to host
================
Remove unneeded files from the hard disk or diskette until there
is space for all the data you are writing. It might be advisable
to move some directories onto another filesystem and create
symbolic links accordingly. When a tape is full, continue on
another one, use a higher density setting, or obtain a higher-
capacity tape.
Look in the /devices directory to see why this device does not
exist, or why the program expects it to exist. The similar "No
such device or address" message tends to indicate I/O problems
with an existing device, whereas this message tends to indicate a
device that does not exist at all.
This can occur when a tape drive is off-line or when a device has
been powered off or removed from the system.
For tape drives, make sure the device is connected, powered on,
and toggled on-line (if applicable). For disk and CDROM drives,
check that the device is connected and powered on.
With all SCSI devices, ensure that the target switch or dial is
set to the number where the system originally mounted it. To
inform the system of a change to the target device number, reboot
using the -r (reconfigure) option.
The specified file or directory does not exist. Either the file
name or path name was entered incorrectly.
Check the file name and path name for correctness and try again.
If the specified file or directory is a symbolic link, it
probably points to a nonexistent file or directory.
Make sure the NIS map name is spelled correctly. To see a list of
nicknames for the various NIS maps, run the ypcat -x command. To
see a full list of the various NIS maps (databases), run the
ypwhich -m command. If the NIS service were not running on the
current machine, these commands would result in a "can't
communicate with ypbind" message.
No such process
===============
To eliminate this message at boot time, remove the cron file for
the nonexistent user, or rename it if the user's login name has
changed. If this is a valid user, create an appropriate password
entry for this name.
Not a directory
===============
/usr/swap - - swap - no -
not found
=========
This message indicates that the Bourne shell could not find the
program name given as a command.
Check the form and spelling of the command line. If that looks
correct, echo $PATH to see if the user's search path is correct.
When communications are garbled, it is possible to unset a search
path to such an extent that only built-in shell commands are
available. Here is a command to reset a basic search path:
$ PATH=/usr/bin:/usr/ccs/bin:/usr/openwin/bin:.
For more general information on the login shell, see the section
on customizing your work environment in the Solaris Advanced
User's Guide.
Not owner
=========
Not supported
=============
out of memory
=============
See the message "Not enough space" for details. Any data written
during this condition will probably be lost.
To gain credentials for secure RPC, users can run keylogin (after
login) and type in their secret key. To stop this message from
appearing at login time, users can run the chkey -p command and
set their network password to be the same as their NIS+ password.
If a user doesn't remember the network password, the system
administrator should delete and re-create the user's credentials
table entry so the user can establish a new network password with
chkey.
Permission denied
=================
Check the ownership and protection mode of the file (with a long
listing from the ls -l command) to see who is allowed to access
the file. Then change the file or directory permissions as
needed.
Try to rlogin again, perhaps after waiting a few minutes for the
system to reboot.
rebooting...
============
The C shell sometimes issues this message when it clears away the
window process group after the user exits the window system. This
can happen when the window system doesn't clean up after itself.
This indicates that the fork(2) system call failed because the
system's process table is full, or that a system call failed
because of insufficient memory or swap space. It is also possible
that a user is not allowed to create any more processes.
Note that this message can indicate "Result too small" in the
case of floating point underflow.
rx framing error
================
Check that all SCSI devices on the bus are Sun approved hardware.
Then verify that all cables are no longer than six meters, total,
and that all SCSI connections are properly terminated. If power
fluctuations are occurring, invest in an uninterruptible power
supply.
This message indicates that the system sent data over the SCSI
bus, but the data never reached its destination because of a SCSI
bus reset. The most common cause of this condition is conflicting
SCSI targets. Data corruption is possible but unlikely to
occur, because this failure prevents data transfer.
Verify that all cables are no longer than six meters, total, and
that all SCSI connections are properly terminated. If power
surges are a problem, acquire a surge suppressor or
uninterruptible power supply.
Segmentation Fault
==================
To see which program produced a core file, run either the file(1)
command or the adb(1) command. The following examples show the
output of the file and adb commands on a core file from the
dtmail program.
$ file core
core: ELF 32-bit MSB core file SPARC Version 1, from `dtmail'
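The adb example did not survive in this copy; a sketch of the
equivalent session ($c prints the C stack backtrace, $q quits adb):

$ adb dtmail core
$c
$q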
Check the user's mail spool file to see if a message ends without
a newline character. If so, talk with the user and determine how
to prevent the problem from occurring again. If these messages
are the result of network problems, you could try moving the mail
spool directory to another machine with a faster network
interface.
This message from the SCSI tape drive appears when Exabyte or DAT
tapes generate too many soft (recoverable) errors. It is followed
by the advisory "Please, replace tape cartridge" message. Soft
errors are an indication that hard errors could soon occur,
causing data corruption.
This message from the SCSI tape drive appears when Archive tapes
generate too many soft (recoverable) errors. It is followed by
the advisory "Periodic head cleaning required and/or replace tape
cartridge" message. Soft errors are an indication that hard
errors could soon occur, causing data corruption.
The original vnode is no longer valid. The only way to get rid of
this error is to force the NFS server and client to renegotiate
file handles.
This message comes from the NFS status monitor daemon statd,
which provides crash recovery services for the NFS lock daemon
lockd. The message indicates that statd has left old references
in the /var/statmon/sm and /var/statmon/sm.bak directories. After
a user has removed or modified a host in the hosts database,
statd might not properly purge files in these directories, which
results in its trying to communicate with a nonexistent host.
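A sketch of the cleanup (the host name "badhost" is hypothetical; stop
the client NFS services first so statd does not rewrite the files):

# /etc/init.d/nfs.client stop
# rm /var/statmon/sm/badhost /var/statmon/sm.bak/badhost
# /etc/init.d/nfs.client start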
This message results when a user tries to remote copy with rcp(1)
or remote shell with rsh(1) from one machine to another, but has
an stty(1) command in the remote .cshrc file.
The solution is to move the stty command to the user's .login (or
equivalent) file. Alternatively, execute the stty command in
.cshrc only when the shell is interactive. Here is a test to do
just that:
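The test itself did not survive in this copy; the usual .cshrc idiom is
to run stty only when the shell is interactive (the particular stty
setting is illustrative):

if ( $?prompt ) then
    stty erase '^H'
endif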
su: No shell
============
After the system comes up and gives you a # prompt, mount the
device corresponding to the original / partition somewhere, such
as with a mount(1M) command similar to the one below. Then run an
editor on the newly-mounted system password file (use ed(1) if
terminal support is lacking):
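A sketch of those steps (the disk device is hypothetical; substitute
the slice holding the original / partition):

# mount /dev/dsk/c0t3d0s0 /mnt
# ed /mnt/etc/passwd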
Use the editor to change the password file's root entry to call
an existing shell, such as /usr/bin/csh or /usr/bin/ksh.
Reboot single user (for example with boot -s) and run ls -l
/dev/bd* to see if this is the problem. If so, remove
/dev/bd.off, then run bdconfig off or reboot with the -r
(reconfigure) option.
could specify one of them after the -f option of tar(1).
This error message from tar(1) indicates that the checksum of the
directory and the files it has read from tape does not match the
checksum advertised in the header block. Usually this indicates
the wrong blocking factor, although it could indicate corrupt
data on tape.
For more information on tar tapes, see the section on copying UFS
files in the System Administration Guide,Volume I.
Text is lost because the maximum edit log size has been exceeded.
=================================================================
To increase the maximum size of the Command Tool log file, use
cmdtool with the -M option, specifying more than 100,000 bytes.
First run fsck -n on the filesystem, to see how many and what
type of problems exist. Then run fsck again to repair the
filesystem. If you have a backup of the filesystem, you can
generally answer "y" to all the fsck questions. It's a good idea
to keep a record of all problematic files and inode numbers for
later reference. To run fsck yourself, specify options as
recommended by the boot script. For example:
# fsck /dev/rdsk/c0t4d0s0
Usually, files lost during fsck repair were created just before a
crash or power outage, and cannot be recovered. If important
files are lost, you can recover them from backup tapes.
If you don't have a backup, ask an expert to run fsck for you.
This message means the system is going down immediately and it's
too late to save any changes.
For more information on shutting down the system, see the System
Administration Guide, Volume I. If you are using the AnswerBook,
"halting the system" is a good search string.
Save all changes now or your work will be lost. Write out any
files you were changing, send any e-mail messages you were
composing, and close your files.
For more information on shutting down the system, see the System
Administration Guide, Volume I. If you are using the AnswerBook,
"halting the system" is a good search string.
If you choose "Save Changes" mailtool will request the other mail
reader to relinquish its lock and write out any changes it has
made to your inbox. If you choose "Ignore" mailtool will read
your inbox without locking it. If you choose "Cancel" mailtool
will exit.
This problem can occur while booting from the net, and indicates
a network connection problem.
Check to see why there are so many links to the same file. To get
more than the maximum number of hard links, use symbolic links
instead.
A process has too many files open at once. The system imposes a
per-process soft limit on open files, OPEN_MAX (usually 64),
which can be increased, and a per-process hard limit (usually
1024), which cannot be increased.
You can control the soft limit from the shell. In the C shell,
use the limit command to increase the number of descriptors. In
the Bourne or Korn shells, use the ulimit command with the -n
option to increase the number of file descriptors.
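For example, in the C shell (the value is illustrative):

% limit descriptors 256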
undefined control
=================
This message, prefaced by the file name and line number involved,
is from the C preprocessor /usr/ccs/lib/cpp, and indicates a line
starting with a sharp (#) but not followed by a valid keyword
such as define or include.
This message from the C shell csh(1) indicates that a user typed
a command containing a backquote symbol (`) without a closing
backquote. Similar messages result from an unmatched single quote
(') or an unmatched double quote ("). Other shells generally give
a continuation prompt when a command line contains an unmatched
quote symbol.
This means that the xinit(1) program, which sets up X11 resources
and starts a window manager, failed to locate the X server
process. Perhaps the user interrupted window system startup, or
exited abnormally from OpenWindows (for example, by killing
processes or by rebooting). It is possible that the X server
crashed. Data loss is possible in some cases. Depending on
process timing, this message might be normal when OpenWindows
exits during a system reboot.
In most cases, especially if the power has been off for less than
a month, the internal clock keeps the correct time, and you do
not have to reset the date. Use the date(1) command to check the
date and time on your system. If the date or time is wrong,
become superuser and use the date(1) command to reset them.
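For example, to set the clock to 12:00 on July 7, 2009 (a sketch; the
Solaris input format is mmddHHMM[[cc]yy]):

# date 070712002009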
To reduce the frequency of this message, add this line near the
bottom of the /etc/system file and reboot:
set esp:esp_use_poll_loop=0
You might also see this message repeatedly after manually
removing a CD when it was busy. Don't do this! To get the system
back to normal, reboot the system with the -r (reconfigure)
option.
The system swap area (virtual memory) has filled up. You need to
reduce swap space consumption by killing some processes or
possibly by rebooting the system.
WARNING: TOD clock not initialized-- CHECK AND RESET THE DATE!
==============================================================
This message indicates that the Time Of Day (TOD) clock reads
zero, so its time is the beginning of the UNIX epoch: midnight
1 January 1970 UTC (31 December 1969 in US time zones). On a
brand-new system, the manufacturer might have
neglected to initialize the system clock. On older systems it is
more likely that the rechargeable battery has run out and
requires replacement.
This message comes at boot time from the /etc/rcS script whenever
it gets a bad return code from fsck(1) after checking a
filesystem. The message recommends an fsck command line, and
instructs you to exit the shell when done to continue booting.
Then the script places the system in single-user mode so fsck can
be run effectively.
Watchdog Reset
==============
Look for some other message that might help diagnose the problem.
By itself, a watchdog reset doesn't provide enough information;
because traps are disabled, all information has been lost. If all
that appears on the console is an ok prompt, issue the PROM
command below to view the final messages that occurred just
before system failure:
ok f8002010 wector p
This message doesn't come from the kernel, but from the OpenBoot
PROM monitor, a piece of Forth software that gives you the ok
prompt before you boot UNIX. If the CPU detects a trap when traps
are disabled (an unrecoverable error), it signals a watchdog. The
OpenBoot PROM monitor detects the watchdog, issues this message,
and brings down the system.
Window Underflow
================
This means that the client has lost its connection to the X
server. The "0.0" represents the display device, which is usually
the console. This message can appear when a user is running an X
application on a remote system with the DISPLAY set back to the
original system and the remote system's X server disappears,
perhaps because someone exited X windows or rebooted the machine.
It sometimes appears locally when a user exits the window system.
Data loss is possible if applications were killed before saving
files.
This means that I/O with the X server has been broken. The "0.0"
represents the display device, which is usually the console. This
message can appear when a user is running Display PostScript
applications and the X server disappears or the client is shut
down. Data loss is possible if applications disappeared before
saving files.
This means that xterm(1) has lost its connection to the X server.
The "0.0" represents the display device, which is usually the
console. This message can appear when a user is running xterm and
the X server disappears or the client gets shut down. Data loss
is possible if applications were killed before saving files.
Try to run the terminal emulator again in a few minutes after the
system has rebooted and the window system is running.
This message from the XView library warns that a requested font
is not installed on the X server. Often multiple warnings appear
about the same font. The set of available fonts can vary from
release to release.
This message from the ypwhich(1) command indicates that the NIS
binder process ypbind(1M) is not running on the local machine.
# /usr/lib/netsvc/yp/ypbind -broadcast
zsN: silo overflow
==================
This message means that the Zilog 8530 character input silo (or
serial port FIFO) overflowed before it could be serviced. The
zs(4S) driver, which talks to a Zilog Z8530 chip, is reporting
that the FIFO (holding about two characters) has been overrun.
The number after zs shows which serial port experienced an
overflow:
#######################################################################
antapex.org
albert van der sel
#######################################################################