Daily Activity
3) top: The top command displays a dynamic, real-time view of the tasks being managed by the kernel on a Linux system. Its memory statistics include the live total, used, and free physical memory and swap, along with the buffer and cache sizes.
Usage:
$top
4) ps: The ps command reports a snapshot of the currently active processes, including the percentage of memory used by each one. With this command, the top memory-consuming processes can be identified.
Usage:
$ps aux
We can also use the top command and press M, which sorts the process list by memory usage.
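The ps output can also be sorted directly to list the top memory consumers; a minimal sketch (the column positions assume standard procps-style `ps aux` output):

```shell
# Top 5 memory consumers: sort ps output by %MEM (column 4), descending.
ps aux | sort -rn -k4 | head -5
```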
We have to make sure sufficient disk space is available on the mount points of each OS server where the database is up and running.
$ df -h
5) Filesystem space:
Check the filesystems on the OS side to confirm that sufficient space is available on all mount points and that usage is under the normal threshold.
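As a quick check, the df output can be filtered for mount points above a usage threshold; a minimal sketch (the 90% threshold is an assumption — adjust it to your environment):

```shell
# Print mount points whose usage is at or above THRESHOLD percent.
# df -P gives POSIX single-line output, so awk sees one filesystem per row.
THRESHOLD=90
df -P | awk -v t="$THRESHOLD" 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= t) print $6, $5 "%" }'
```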
DATABASE :
SELECT SEGMENT_NAME,TABLESPACE_NAME,EXTENTS,BLOCKS
FROM DBA_SEGMENTS WHERE TABLESPACE_NAME='STAR01D';
Checking the alert log file regularly is a vital task. In the alert log we have to look for ORA- error messages, deadlock reports, and archiver or checkpoint problems.
We can check wait-event details with the help of the queries below.
The following query provides clues about whether Oracle has been waiting on library cache activity:
The query below gives details of user sessions' wait time and state:
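The wait-event queries themselves are not reproduced in the post; a minimal sketch of what they typically look like, using the standard V$SESSION_WAIT and V$SYSTEM_EVENT views (the column choice is mine, not from the original):

```sql
-- Current session waits: state and time waited per session
SELECT sid, event, state, wait_time, seconds_in_wait
  FROM v$session_wait
 WHERE event NOT LIKE 'SQL*Net%'   -- skip common idle events
 ORDER BY seconds_in_wait DESC;

-- System-wide waits on library cache activity
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event LIKE 'library cache%';
```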
9) Max Sessions:
There should not be more than six sessions inactive for more than 8 hours in a database, in order to minimize the consumption of CPU and I/O resources.
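A sketch of a query to spot such sessions, using the standard V$SESSION view (LAST_CALL_ET is the number of seconds since the session's last activity):

```sql
-- Sessions that have been INACTIVE for more than 8 hours
SELECT sid, serial#, username,
       ROUND(last_call_et / 3600, 1) AS hours_inactive
  FROM v$session
 WHERE status = 'INACTIVE'
   AND username IS NOT NULL          -- ignore background processes
   AND last_call_et > 8 * 3600
 ORDER BY last_call_et DESC;
```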
b) Users and Sessions CPU and I/O consumption can be obtained by below query:
-- Shows day-wise, user-wise, server-process-wise CPU and I/O consumption
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI')
date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program )
program,
ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J'))
days,
round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,
(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid -- statistic# 12 assumed to be 'CPU used by this session'; verify in V$STATNAME
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;
We can find long-running jobs with the help of the query below:
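The query is not shown in the post; a minimal sketch using the standard V$SESSION_LONGOPS view:

```sql
-- Long-running operations still in progress, with % complete
SELECT sid, serial#, opname, target,
       ROUND(sofar / totalwork * 100, 2) AS pct_done,
       time_remaining
  FROM v$session_longops
 WHERE totalwork > 0
   AND sofar <> totalwork
 ORDER BY time_remaining DESC;
```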
We can check for invalid objects with the help of the query below:
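Again the query is not reproduced; a common form, using DBA_OBJECTS:

```sql
-- Invalid objects, summarized per owner and type
SELECT owner, object_type, COUNT(*) AS invalid_count
  FROM dba_objects
 WHERE status = 'INVALID'
 GROUP BY owner, object_type
 ORDER BY owner, object_type;
```

Invalid objects can usually be recompiled in bulk with the utlrp.sql script or the UTL_RECOMP package.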
As a golden rule, we need to analyze jobs that run once a week. The steps below can be considered for analyzing such jobs.
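A hedged sketch of the analyze step using DBMS_STATS (the schema name SCOTT is a placeholder, not from the original):

```sql
-- Gather optimizer statistics for one schema, including its indexes
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', cascade => TRUE);
```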
We can find details of failed and broken jobs with the help of the query below:
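A sketch of such queries, using the DBMS_JOB and (on 10g and later) Scheduler dictionary views:

```sql
-- DBMS_JOB queue: broken jobs and jobs with recorded failures
SELECT job, what, broken, failures, last_date, next_date
  FROM dba_jobs
 WHERE broken = 'Y' OR failures > 0;

-- Scheduler (10g+): recent failed runs
SELECT job_name, status, actual_start_date
  FROM dba_scheduler_job_run_details
 WHERE status = 'FAILED';
```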
We can get temporary tablespace usage details with the help of the query below:
Set lines 1000
SELECT b.tablespace,
ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
a.sid||','||a.serial# SID_SERIAL,
a.username,
a.program
FROM sys.v_$session a,
sys.v_$sort_usage b,
sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;
We can get undo tablespace usage details with the help of the query below:
set lines 1000
SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
NVL(s.username, 'None') orauser,
s.program,
r.name undoseg,
t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
FROM sys.v_$rollname r,
sys.v_$session s,
sys.v_$transaction t,
sys.v_$parameter x
WHERE s.taddr = t.addr
AND r.usn = t.xidusn(+)
AND x.name = 'db_block_size';
We can get the PGA usage details with the help of the below query:
select st.sid "SID", sn.name "TYPE",
ceil(st.value / 1024 / 1024/1024) "GB"
from v$sesstat st, v$statname sn
where st.statistic# = sn.statistic#
and sid in
(select sid from v$session where username like UPPER('&user'))
and upper(sn.name) like '%PGA%'
order by st.sid, st.value desc;
Enter value for user: STARTXNAPP
14) Validating the backup (hot backup / cold backup):
Validate the database backup. It should complete on time, with the data required for restore and recovery if needed.
We can check the log switch details with the help of the query below:
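The log switch query is not shown in the post; a minimal sketch using V$LOG_HISTORY:

```sql
-- Log switches per day, from the redo log history
SELECT TO_CHAR(first_time, 'DD-MON-YYYY') AS day, COUNT(*) AS switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'DD-MON-YYYY')
 ORDER BY 1;
```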
We can use the below queries for archive logs generation details:
select to_char(COMPLETION_TIME,'DD-MON-YYYY'),count(*)
from v$archived_log group by to_char(COMPLETION_TIME,'DD-MON-YYYY')
order by to_char(COMPLETION_TIME,'DD-MON-YYYY');
select count(*)
from v$archived_log
where trunc(completion_time)=trunc(sysdate);
16)I/O Generation:
We can find CPU and I/O generation details for all users in the database with the help of the query below:
-- Show IO per session,CPU in seconds, sessionIOS.
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI')
date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program )
program,
ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J'))
days,
round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,
(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;
To know what a session is doing and what kind of SQL it is using (e.g. SID=1853):
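The query itself is missing from the post; a sketch for 10g and later, where V$SESSION carries a SQL_ID column (the SID is supplied as a substitution variable):

```sql
-- SQL currently being executed by a given session
SELECT s.sid, s.serial#, s.username, s.status, q.sql_text
  FROM v$session s, v$sql q
 WHERE s.sql_id = q.sql_id
   AND s.sid = &sid;
```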
17) Sync arch:
In a Data Guard environment we have to check that the primary is in sync with the standby database. This can be checked as follows:
The V$MANAGED_STANDBY view on the standby database site shows the activities performed by both the redo transport and Redo Apply processes in a Data Guard environment.
SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;
In some situations, automatic gap recovery may not take place and you will need to
perform gap recovery manually. For example, you will need to perform gap recovery
manually if you are using logical standby databases and the primary database is not
available.
The following sections describe how to query the appropriate views to determine
which log files are missing and perform manual recovery.
On a physical standby database
To determine if there is an archive gap on your physical standby database, query
the V$ARCHIVE_GAP view as shown in the following example:
SQL> SELECT * FROM V$ARCHIVE_GAP;
If the query returns no rows, the primary database is in sync with the standby database. If it returns any rows, we have to apply the archive logs manually.
After you identify the gap, issue the following SQL statement on the primary
database to locate the archived redo log files on your primary database (assuming
the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):
Eg:
SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN
7 AND 10;
Copy these log files to your physical standby database and register them using the
ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For
example:
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_8.arc';
After you register these log files on the physical standby database, you can
restart Redo Apply. The V$ARCHIVE_GAP fixed view on a physical standby database
only returns the next gap that is currently blocking Redo Apply from continuing.
After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view
again on the physical standby database to determine the next gap sequence, if there
is one. Repeat this process until there are no more gaps.
On a logical standby database, copy the missing log files (with sequence numbers 7, 8, and 9 in this example) to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-
1292880008_10.arc';
After you register these log files on the logical standby database, you can restart
SQL Apply.
The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap
that is currently blocking SQL Apply from continuing. After resolving the
identified gap and starting SQL Apply, query the DBA_LOGSTDBY_LOG view again on the
logical standby database to determine the next gap sequence, if there is one.
Repeat this process until there are no more gaps.
Monitoring Log File Archival Information:
Step 1 Determine the current archived redo log file sequence numbers.
Enter the following query on the primary database to determine the current archived
redo log file sequence numbers:
SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG
WHERE STATUS='CURRENT';
Step 2 Determine the most recent archived redo log file.
Enter the following query at the primary database to determine which archived redo
log file contains the most recently transmitted redo data:
SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
Step 3 Determine the most recent archived redo log file at each destination.
Enter the following query at the primary database to determine which archived redo
log file was most recently transmitted to each of the archiving destinations:
SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
2> FROM V$ARCHIVE_DEST_STATUS
3> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';
Sample output:
THREAD# SEQUENCE#
--------- ---------
1 12
1 13
1 14
18) Purge arch:
We have to make sure that archive log files are purged safely, or moved to a tape drive or another location, in order to make space for new archive log files in the archive log destinations.
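A common way to do this is through RMAN; a sketch (the 7-day retention window is an assumption — align it with your backup policy before deleting anything):

```sql
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';
```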
19) Recovery status:
In order to recover, make sure you have the latest archive logs, so that you can restore and perform recovery if required.
I would like to add one more script, which reports the size of the database (used, occupied, and available), tablespace usage details, and the hit ratios of various SGA components, all of which can be very helpful for monitoring the performance of the databases.
Database_monitor.sql:
ttitle off
col val2 new_val dict noprint
select 100*(1-(SUM(Getmisses)/SUM(Gets))) val2
from V$ROWCACHE;
ttitle off
col val3 new_val phys_reads noprint
select Value val3
from V$SYSSTAT
where Name = 'physical reads';
ttitle off
col val4 new_val log1_reads noprint
select Value val4
from V$SYSSTAT
where Name = 'db block gets';
ttitle off
col val5 new_val log2_reads noprint
select Value val5
from V$SYSSTAT
where Name = 'consistent gets';
ttitle off
col val6 new_val chr noprint
select 100*(1-(&phys_reads / (&log1_reads + &log2_reads))) val6
from DUAL;
ttitle off
col val7 new_val avg_users_cursor noprint
col val8 new_val avg_stmts_exe noprint
select SUM(Users_Opening)/COUNT(*) val7,
SUM(Executions)/COUNT(*) val8
from V$SQLAREA;
ttitle off
set termout on
set heading off
ttitle -
center 'SGA Cache Hit Ratios' skip 2
Best regards,
Hari
Posted by Hari Thanneru at 06:57