BASISWORLD

Start, Learn and work on SAP Basis.

Monday, November 14, 2016






         
After an Oracle instance is started, a special process, the listener, allows the database clients and the instance to communicate with each other.
Note: The listener process is not part of an Oracle instance; it is part of networking processes that work with Oracle.
In SAP installations, dedicated servers are used. When a work process makes a request to connect to the database, the listener creates a dedicated server process and establishes an appropriate connection.
• The separate server process created on behalf of each work process
(generally, for each user process) is called a shadow process.
• To handle database requests from several SAP system users, a work process
communicates with its corresponding shadow process.
• When a work process loses its connection to the database system, it automatically reconnects once the database server is available again and a database request needs to be processed.
Oracle background processes perform various tasks required for the database to
function properly.
Databases are stored in data files on disks. To accelerate read and write access to
data, it is cached in the database buffer cache in the SGA.

The Oracle database management system holds the executable SQL statements
in the shared SQL area (also called the shared cursor cache), which is part of
the shared pool allocated in SGA. Another part of the shared pool called the row
cache caches Oracle data dictionary information.
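As a quick illustration (a sketch only; v$sgastat is a standard Oracle dynamic performance view, and the figures differ from system to system), the sizes of the SGA pools can be checked from SQL*Plus:

    -- Total size per SGA pool (shared pool, large pool, java pool, ...);
    -- the buffer cache and redo log buffer appear with an empty pool column
    SELECT pool, ROUND(SUM(bytes)/1024/1024) AS size_mb
    FROM v$sgastat
    GROUP BY pool;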

Caching of Data
Databases are stored in data files on hard disks. However, data is never processed
on the disks themselves. No matter whether a database client just needs to read
some data or wants to modify it, it is first copied by the associated shadow process
from the disk to the database buffer cache in the system global area (if it was
not already there).

Figure 3: Oracle Architecture: Caching of Data
Data is always read from disk into the cache when it is accessed for the first time after an Oracle
instance is started. Because all users concurrently connected to the instance share
access to the database buffer cache, copies of data read from data files into the
buffer cache can be reused by any user.
The smallest logical unit that Oracle uses for copying data between data files and
the buffer cache, as well as for managing data in the cache, is the data block.
• The size of an Oracle data block can generally be chosen during the creation
of a database. In SAP installations, however, the block size is always 8 KB.
• For performance reasons, the physical allocation unit size on disks where
you store Oracle files should also be 8 KB.
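As an illustration (a sketch; db_block_size is the standard Oracle instance parameter), the block size of an existing database can be checked from SQL*Plus:

    -- Data block size of the database in bytes
    SELECT value AS block_size_bytes
    FROM v$parameter
    WHERE name = 'db_block_size';
    -- In SAP installations this returns 8192 (8 KB).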

Oracle always tries to keep the most recently used data blocks in the buffer cache.
Depending on the size of the buffer cache, it may sometimes be necessary to
overwrite the least recently used data blocks in the buffer cache.
Writing of Modified Data

Figure 4: Oracle Architecture: Writing of Modified Data
Any changes to Oracle data (inserts, updates, and deletions) are always performed
in the buffer cache. An Oracle shadow process itself never copies modified data
blocks ('dirty blocks') from the buffer cache to the disk. This is the task of a
special Oracle background process called the database writer.
The database writer process writes dirty blocks to disk in the following situations:
• Buffers in the buffer cache that contain dirty blocks (dirty buffers) may not
be reused until these blocks have been copied back to disk. When a shadow
process needs to copy data from disk to the buffer cache, it first scans the
cache for non-modified, reusable buffers. If the number of scanned buffers
reaches a certain threshold, the shadow process signals the database writer to
start writing some of the modified blocks to disk. The database writer then
copies those dirty blocks that are on the list of least recently used blocks
(LRU list), thus making the buffers free again.
• At specific times, all modified buffers in the SGA are written to data files
by the database writer. This event is called a checkpoint, and the checkpoint
process (CKPT) signals the database writer to perform it.

Using the concept of deferred writes rather than immediate writes improves
efficiency because in many cases, several changes on the same block are
performed before the block is copied to disk. Also, the database writer performs
multiblock writes in a batch(ed) style to increase I/O efficiency.
Hint: Although one database writer process (DBW0) is sufficient for
most systems, additional database writer processes (DBW1 and so on)
may be configured in exceptional cases. See SAP Notes 124361 and
191463 for more information.
Logging of Modifications
Figure 5: Oracle Architecture: Logging of Modifications
Because of deferred writes, a mechanism is needed to prevent data loss and also
to avoid data inconsistencies in case of a crash of any system component (disk,
Oracle instance, or server).
Generally, each RDBMS logs all data changes in a log area, which is written to
disk at appropriate times. The log is generally written to disk after the commit of a
database transaction so that all data block changes are logged.
A database transaction is a logical unit of work for the database server (LUW -
logical unit of work), which is always treated as an “atomic” unit, meaning it must
either be processed completely or not at all.

In the following paragraphs, do not mix up undo entries/redo entries with the undo
tablespace/redo logs:
• Redo entries are stored in redo logs and are used to perform roll forward recovery
when necessary.
• Undo entries are stored in a rollback or undo tablespace and are used to undo changes that
have not been committed (rollback).
• The undo tablespace is a newer type of tablespace for storing undo entries, and it
replaces the rollback tablespace.
To achieve data consistency and read consistency, Oracle maintains redo entries
for roll forward or redo recovery for example after a crash, and undo entries
to roll back uncommitted transactions.

Redo entries
Redo entries contain the information necessary to reconstruct, redo, or
roll forward changes made to the database by SQL statements within a
committed transaction. Redo entries contain the new values of the modified
data, also called after images.
Parallel to changes made in data blocks, Oracle shadow processes write redo
entries into the redo log buffer. This is a circular buffer in the system global
area that temporarily records all changes (both uncommitted and committed)
made to the database. The Oracle background process log writer (LGWR)
then writes contiguous portions of the redo log buffer sequentially to an
online redo log file (or group of files) on disk.
The online redo log consists of four (or more) online redo log files. Redo entries in the online redo log are used for the database recovery if necessary (in order to redo changes).
The log writer writes entries from the redo log buffer to the online redo log:
• When any transaction commits (the LGWR also writes a special
commit record for the corresponding transaction in this case)
• Every three seconds
• When the redo log buffer is one-third full
• When the database writer is about to write modified buffers from the
block buffer to disk and some of the corresponding redo records have
not yet been written to the online redo log files (write-ahead logging).
This algorithm ensures that space is always available in the redo log buffer
for new redo records.
When a user (a work process) commits a transaction, the transaction is
assigned a system change number (SCN) by the database system. Oracle
records the SCN along with the transaction’s redo entries in the redo log.
Note: Because Oracle uses write-ahead logging, DBW0 does not
need to write data blocks when a transaction commits.
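To illustrate this (a sketch; v$log is a standard Oracle dynamic performance view), the online redo log groups and the one the LGWR is currently writing to can be listed from SQL*Plus:

    -- Online redo log groups with their log sequence numbers and status
    SELECT group#, sequence#, ROUND(bytes/1024/1024) AS size_mb, status
    FROM v$log
    ORDER BY group#;
    -- STATUS = CURRENT marks the group the LGWR is currently writing to.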
Undo entries
Undo entries contain the information necessary to undo, or roll back, any
changes to data blocks that have been performed by SQL statements, which
have not yet been committed. Undo entries contain the old values of the
modified data, also called before images.
Oracle stores this undo information (the old values of modified data, the before
images) in special undo segments, separate from the redo log.
The Oracle undo space consists either of an undo tablespace (this solution is
called automatic undo management — only the undo tablespaces must be
created) or of rollback segments (manual undo management — rollback
segments must be allocated in a tablespace and managed).
The undo information of a transaction is retained in the undo space at
least until the end of the transaction. It may be overwritten only after the
transaction has been committed.
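As a sketch (using the standard Oracle 10g instance parameters), whether automatic undo management is active and which undo tablespace is in use can be checked with:

    -- AUTO means automatic undo management with an undo tablespace
    SELECT name, value
    FROM v$parameter
    WHERE name IN ('undo_management', 'undo_tablespace');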
During database recovery, Oracle first applies all changes recorded in the redo log
(which includes the recovery of changes in the undo space), and then uses the
undo information to roll back any uncommitted transactions.
Moreover, Oracle can use the undo entries for other purposes, including reading
of snapshots of consistent data (accessing before images of blocks changed in
uncommitted transactions).
Log switch
Oracle redo log files do not dynamically grow when more space is needed for redo
entries; they have a fixed size (on SAP systems, typically 50 MB). When the
current online redo log file becomes full, the log writer process closes this file and
starts writing into the next one. This is called a log switch.
• The predefined collection of online redo log files (four in our example) is
used in a cycle.
• At every log switch, Oracle increases the log sequence number (LSN).
Through the LSN, Oracle automatically creates a sequential numbering
of redo logs.
• The online redo log file into which the LGWR is currently writing is called
the current online redo log file.
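A log switch can also be triggered manually, which is occasionally useful for testing (a sketch; both statements are standard Oracle SQL):

    -- Force a switch to the next online redo log file
    ALTER SYSTEM SWITCH LOGFILE;

    -- The log sequence numbers assigned at past switches are recorded in v$log_history
    SELECT MAX(sequence#) AS last_switched_lsn
    FROM v$log_history;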
Control files
Figure 7: Oracle Architecture: Control files
Every Oracle database has a control file, which is a small binary file necessary
for the database to start and operate successfully. A control file contains entries
that specify the physical structure and state of the database, such as tablespace
information, names and locations of data files and redo log files, or the current log
sequence number. If the physical structure of the database changes (for example,
when creating new data files or redo log files), the control file is automatically
updated by Oracle.
• Control files can be changed only by Oracle. No database administrator or
any other user can edit the control file directly.
• After opening the database, the control file must be available for writing.
If for some reason the control file is not accessible, the database cannot
function properly.
• Oracle control files can be mirrored for security reasons. Several copies can
be stored at different locations; Oracle updates these at the same time. In
SAP installations, the control file is stored in three copies, which must be
created on three physically separate disks.
Normally, control files are quite small and do not grow. When using RMAN for
backups (see unit 2, Backup, Restore and Recovery), control files can grow by a
factor of 10 because they contain information about RMAN backups.
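As a sketch (v$controlfile is a standard Oracle view), the names and locations of all control file copies can be displayed with:

    -- One row per control file copy; in SAP installations typically three copies on separate disks
    SELECT name FROM v$controlfile;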
Checkpoints
Figure 8: Oracle Architecture: Checkpoints
The term checkpoint has at least two different meanings.

The checkpoint is the point at which the database writer writes all changed buffers
in the buffer cache to the data files.
The database writer (DBW0) receives a signal at specific times from the
background checkpoint process (CKPT) to perform this action. DBW0 then
copies all buffers that were dirty at that moment to disk. Before DBW0 finishes
this job, other blocks in the buffer cache can become dirty.
After the checkpoint event has finished, the oldest dirty buffer in the buffer cache
determines a point in the redo log from which instance recovery must begin if a
crash occurs. This log position is also called a checkpoint.
The checkpoint process does more than just activate the database writer (it does
not itself write blocks to disk, because that is the task of DBW0):
• It also writes checkpoint information to the data file header.
• It writes information about the checkpoint position in the online redo log
into the control file.
The information about the checkpoint position in the online redo log in the control
file is needed for instance recovery. It tells Oracle that all redo entries recorded
before this point are not necessary for database recovery, as they were already
written to data files.
The frequency of checkpoints is one of the factors that influences the time required
for the instance to recover from a failure. The less frequent the checkpoints, the
longer the time the instance needs for recovery.
• A checkpoint always occurs at a log switch.
• The frequency of checkpoints can be specified with the help of Oracle parameters. In
SAP installations, these parameters are set so that they effectively do not apply and
checkpoints occur only at log switches.
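As an illustration (a sketch; both are standard Oracle dynamic performance views), the checkpoint position recorded in the control file can be compared with the checkpoint information in the data file headers:

    -- Checkpoint SCN recorded in the control file
    SELECT checkpoint_change# FROM v$database;

    -- Checkpoint SCN recorded in each data file header
    SELECT file#, checkpoint_change# FROM v$datafile_header;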

Database Recovery
Figure 9: Oracle Architecture: Database Recovery
Online redo logs play an important role when starting an Oracle instance and
opening the database, especially after a crash or when the instance was not shut
down “cleanly”. In this situation, Oracle recognizes that the database was not
properly shut down and automatically initiates database recovery (also called
instance recovery).
The automatic recovery at restart consists of two phases:
• Starting at the checkpoint position, redo entries are read from the online redo
log and transactions are “reprocessed” (roll forward). This includes the roll
forward of changes in the undo space.
• For every transaction that was either uncommitted at the time of the crash or
rolled back explicitly before the crash (so that there is no commit entry for it
in the redo log), a rollback is performed with the help of before images read
from the undo space. Oracle ensures that this is always possible because it
never deletes undo entries of open transactions from the undo space.
The result is a consistent database containing only changes committed before
the crash.

In the example in the above figure:
• Transaction T1 is not relevant for redo/undo, as it was committed at the time
of the last checkpoint. Changes to this transaction were written to disk at
the last checkpoint.
• Transactions T2, T3, T4, T5, and T6 are redone, as they caused changes in the
database after the last checkpoint. However, among these, only the changes
to T4 and T6 are committed, which means only these changes are persistent.
• Transactions T2, T3, and T5 are rolled back.
Redo Log Mirroring
From a data security point of view, the online redo logs are one of the most critical
areas in an Oracle server. If you lose them in a crash, a complete recovery of the
database is not possible, and the result is a loss of some data.
Caution: Online redo logs must always be mirrored, meaning that two or
more copies of each redo log must be maintained on different disks.
Figure 10: Oracle Architecture: Redo Log Mirroring
Oracle itself can mirror online redo logs. This feature is used in SAP installations
by default, so that there is no need for a software or hardware RAID
(redundant array of independent disks) solution. On the other hand, from the
data security point of view, it does not matter which solution you choose. Even a
combination of both Oracle and RAID mirroring is feasible to minimize the risk
of losing an online redo log.
Archiving
As the online redo log is limited in size and cannot grow automatically, Oracle
must overwrite old redo entries to write new ones.
Only the oldest redo entries up to the checkpoint position in the log, which
corresponds to data changes that have already been written to data files, can be
overwritten. This ensures that automatic instance recovery after a crash is always
possible.
Figure 11: Oracle Architecture: Archiving
However, in a situation where you must restore data files after a disk crash and
recover them manually (usually to the state the data had at the point of the crash),
you need both a database backup and all the redo information written after this
backup. In an SAP system, log switches occur every few minutes so that online
redo log files are reused very frequently. To prevent loss of redo information, it
must be copied from online redo log files to a safe location before overwriting.
This is the task of a special Oracle background process called archiver (ARC0).
Archiving must be explicitly activated by turning on the ARCHIVELOG
mode of the database and setting the Oracle instance parameter
LOG_ARCHIVE_START to TRUE.

When LOG_ARCHIVE_START is TRUE, the archiver process automatically
starts when an Oracle instance is started. When the ARCHIVELOG mode of the
database is turned on, the archiver process automatically copies a newly written
online redo log file – after a corresponding switch to the next online redo log file –
to an offline redo log file, and overwriting old redo log entries in online redo logs
is not allowed before the entries have been copied to offline redo logs. Once an
offline redo log file has been successfully created as a copy, the corresponding
online redo log file is released to be overwritten with new log information. The
directory where offline redo log files are created can be specified through an
Oracle parameter.
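Whether archiving is active can be verified quickly from SQL*Plus (a sketch; ARCHIVE LOG LIST is a standard SQL*Plus command):

    -- Archive mode of the database; should return ARCHIVELOG in a productive system
    SELECT log_mode FROM v$database;

    -- Shows the archive mode, the archive destination, and the current log sequence number
    ARCHIVE LOG LIST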
Caution: Archiving must be activated in productive systems. Moreover,
offline redo log files should be stored on a mirrored disk to prevent loss of
redo information. A RAID system can be used for this purpose.
In an SAP system, the activation of archiving is the default setting. A deactivation
of archiving by changing the database log mode to NOARCHIVELOG (required,
for example, during a system upgrade) is supported by the SAP tool BRSPACE.
Caution: If you lose a disk containing offline redo logs and data files
after a crash, complete recovery is no longer possible. Therefore, offline
redo logs and data files should be stored on different disks.
Other Background Processes
There are two more background processes that always run in an Oracle instance:
System Monitor (SMON)
• Performs recovery at instance startup, if necessary
• Writes alert log information if any other instance process fails
• Cleans up temporary segments that are no longer in use
Process Monitor (PMON)
• Monitors shadow processes
• In case a client process crashes, PMON rolls back its non-committed
transaction, stops the corresponding shadow process, and frees
resources that the process was using.


Oracle Directory Structure in SAP
Figure 12: Oracle Directory Structure on SAP Systems
The Oracle directory (folder) and file names are standardized in SAP
environments. Directories (folders) are always created with the same structure and
naming convention during the installation. This structure may not be changed, and
the naming conventions must be observed, as SAP tools for Oracle administration
rely on it.
Various parts of the Oracle directories and files must be physically separated from
each other for performance and data security reasons. On UNIX systems, the
Oracle directories appear as a tree structure because the file systems created on the
physical disks are mounted on directories. On Windows, however, you will have
several \oracle\<DBSID> folders on different disks with different drive letters.
Hint: On UNIX, the <Release> subdirectory under
/oracle/<DBSID> also contains information whether you use a 32-bit
or 64-bit Oracle version, for example, /oracle/<DBSID>/102_64
for a 64-bit Oracle 10.2.
Figure 13: Oracle Directories and Files on SAP Systems
dbs (on UNIX) or database (on Windows)
• The Oracle profile init<DBSID>.ora or spfile<DBSID>.ora
holds the Oracle instance configuration parameters
• The profile init<DBSID>.sap holds configuration parameters for
administration tools BR*Tools
sapdata<n>
Contains the data files of the tablespaces
origlogA/B, mirrlogA/B
Online redo log files reside in the origlog and mirrlog directories:
Log file numbers 1 and 3 and their mirrors in origlogA and mirrlogA,
log file numbers 2 and 4 and their mirrors in origlogB and mirrlogB,
respectively

oraarch
Offline redo log files are written to the oraarch directory; their names are
specified with the help of Oracle instance configuration parameters, so the name
<DBSID>arch1_<LSN>.dbf is just an example.
Note that this has changed: previously, offline redo logs were written to the
saparch directory. The reason for the new directory is that in the event of
an archiver stuck situation, in rare cases BRARCHIVE was not able to back up offline
redo logs to release space, because BRARCHIVE was not able to write into
its own log file. How to change from saparch to oraarch is described in the
lesson Housekeeping and Troubleshooting.
saptrace
Oracle dump files are written to the directory saptrace. The
Oracle alert log alert_<DBSID>.log is located in the directory
saptrace/background. Traces of Oracle shadow processes are written
to the directory saptrace/usertrace.
saparch
Stores the logs written by the SAP tool BRARCHIVE
sapbackup
Stores logs written by the SAP tools BRBACKUP, BRRESTORE, and
BRRECOVER
sapreorg
BRSPACE creates logs for its different functions here
sapcheck
BRCONNECT creates logs for its functions (for example, check and update statistics) here
Oracle Directories and Environment Variables
Figure 14: Oracle Directories and Environment Variables
On the database server, the environment variables ORACLE_SID,
ORACLE_HOME, and SAPDATA_HOME must always be set for the user
<sapsid>adm, as well as for the user ora<dbsid> on a UNIX platform.
ORACLE_SID
This is the system ID of the database instance (DB SID).
ORACLE_HOME
This is the home directory of the Oracle software. More precisely,
ORACLE_HOME points to the directory that contains subdirectories bin,
dbs (or database), and network. This means in particular that the
Oracle profile init<DBSID>.ora or spfile<DBSID>.ora is always
located in $ORACLE_HOME/dbs (in %ORACLE_HOME%\database on
Windows).


SAPDATA_HOME
Points to the directory in which the database files are stored.
Hint: The location of the control files and of the offline redo
logs is configured in the Oracle profile init<DBSID>.ora; the
location of all other files (data files, online redo logs, and so on) is
stored in the database itself. Therefore, SAPDATA_HOME is mainly
used by BR*Tools to offer suitable directories, for example, when
new tablespaces or data files need to be created.
Other variables
There are also other variables you can set if the corresponding directories
are not subdirectories of SAPDATA_HOME. This is often the case
on Windows systems owing to the different drive letters: SAPARCH,
SAPBACKUP, SAPCHECK, SAPREORG.
On an Oracle client (especially on every SAP application server), the variable
ORACLE_HOME must also be set so that connection information can be found in
$ORACLE_HOME/network/admin.
In a UNIX environment, the environment variable ORA_NLS10 is
also set for the user ora<dbsid>. The default value for ORA_NLS10 is
$ORACLE_HOME/nls/data. Since ORACLE_HOME is set, ORA_NLS10
does not need to be set.
For the user <sapsid>adm, the environment variable ORA_NLS10 is not set, and
must not be set. The Oracle Instant Client loads the NLS data from a dynamic
library (NLS library), which is stored in the Instant Client directory.
See SAP Note 830578 for information on how to set this variable correctly for
Oracle 10g. For earlier Oracle releases, see SAP Note 180430 and the other SAP
Notes referenced there.
Oracle Real Application Clusters (RAC)
To improve performance, increase throughput, and deliver high availability at
the same time, install your SAP system in a real application clusters (RAC)
environment. RAC overcomes the restrictions of normal failover solutions with:
• Concurrent processing
• Load balancing
• Fast and reliable detection of a node or network failure
• Fast recovery

ST01: - SYSTEM TRACE

  
ST01 is the system trace. Here we can trace the following components:

1) Authorization check: -If this switch is activated, all access authorizations that the system performs are recorded. The resulting report shows when the system checked what authorization.
Double-click on a trace record to get the following information:
  • Date, time
  • Work process number
  • User
  • Authorization object
  • Result (0, if authorization was given)
  • Program, line
  • Number of authorization values

The authorization values are then listed.

2) Kernel Functions: - Trace switch: Calls from C Modules

Calls of kernel function.
Some kernel functions write this trace.

3) General Kernel: - Trace switch: Unformatted entries from C programs

4) DB Access SQL Trace: - Trace switch: "Actual" database accesses
This trace switch concerns the interface between the platform-independent parts of the database access module and the platform-dependent parts.
Here you can see what RSDB could not achieve via SAP buffers.

5) Table Buffer Trace : - Trace switch: Buffered tables
6) RFC calls :- Trace switch: RFC calls
7) Lock operations: - Trace switch: Enqueue calls

Depending upon the requirement we need to switch on the trace and switch it off again after a specific time. If you leave the trace on, the trace files may grow without limit and a disk overflow may occur.


ST02: - BUFFER TUNING SUMMARY



ü  Buffers need to be monitored regularly; well-maintained buffers on the application servers improve the performance of the system.
ü  After a system restart the performance will be low, because the buffers are emptied during the restart. That is the reason system performance should be monitored only after the buffers have filled up again.
ü  The various types of buffers, like the name tab, program buffer, CUA buffer, calendar buffer, table buffers, and screen buffer, are each sized by certain instance profile parameters.
ü  There is also a column called Swaps, which is critical. If there is a high number of swaps, you need to identify the reason.


ü  The reasons for buffer swaps are:
o    no free space left in the buffer
o    no free directory entries left in the buffer
ü  Based on the allowable swap limit you need to change the size of the buffers. While changing the buffer sizes, keep in view that the buffer sizes must fit into the total available memory.
ü  You can see all existing parameters under Current parameters. There is a memory column that shows the amount of memory utilized in the system.
Various memories are
o    Heap memory
o    Extended memory
We have to ensure that the system does not use up the complete heap memory and extended memory.

By clicking Detail analysis we can see the following. It will display:

Ø  Also gives details of SAP memory: roll area, paging area, extended memory, heap memory
Ø  Types of buffering:
Ø  Full buffering
Ø  Single record buffering
Ø  Generic record buffering
Ø  No buffering
Ø  Improve buffer hit ratio (>94%)
Ø  Check buffer utilization.
Ø  Improve buffer utilization by increasing the buffer size (e.g. abap/buffersize) through RZ10.
Ø  The maximum allowable limit for swaps recommended by SAP is 5,000 to 10,000. If it is more than 20,000, increase the buffer size.

ST03: - WORKLOAD ANALYSIS

ü  The workload of the system is monitored in transaction ST03.
ü  Go to Expert mode and select the day, week, or month load.
ü  The amount of time consumed by individual components like roll time, wait time, CPU time, DB time, etc. is displayed for each user, transaction, program, and process. Averages are also displayed for the above components.
ü  A good response time for a dialog step is between 600 ms and 1,800 ms.
ü  Roll-in time and roll-out time should not be more than 10% of the response time.
ü  Load time should be small; CPU time and DB time should each be less than about 40% of (response time – wait time).
ü  If CPU time is high, look into the ABAP programs.

Response time per dialog step:
The response time of a dialog step is measured from the receipt of the dialog request by the dispatcher, through the processing of the dialog in a work process, up to the end of the dialog in the dispatcher and the transfer of the data to the presentation server.

This also includes the time used for "Roundtrips" to transfer data from the SAP R/3 front-end to the application server and back. This time is recorded as roll wait time on the application server, while it is displayed as GUI time on the front-end. For inefficient networks, such as WAN connections, this time can contribute significantly to the response time, although it uses no resources on the application server, as the context is rolled out and the work process is released.
The response time is usually split into wait time and "dispatched" time. The SAP response time is made up of the following components:
+ Load times for programs, screens, and GUI interfaces
+ Roll times for rolling in work data
+ ABAP processing times
+ Database time
+ Enqueue time for logical SAP locks
+ Roll wait time (not including task types RFC/CPIC/ALE).
The CPU time is not an additive component of the response time, but rather the sum of the CPU time used by the individual components. The CPU time is an independent additional piece of response time information.
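A worked example with illustrative numbers only: wait time 10 ms + load time 40 ms + roll-in time 20 ms + ABAP processing time 300 ms + database time 250 ms + enqueue time 5 ms + roll wait time 0 ms gives a response time of 625 ms, which lies within the 600 ms to 1,800 ms guideline mentioned above. A CPU time of, say, 280 ms would be reported separately and is not added on top of this sum.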

ST04 – DATABASE MONITORING


Here we can see Buffer busy waits, file system requests, wait events, sap client, Oracle session, SQL request, table access, exclusive lock waits, latch waits, Database message log, state on disk, parameter changes, performance database, summary report
Db cache should be greater than 98%
User/recursive calls less than 5
Reads/user calls greater than 40
Buffer busy waits.
Ø  D/b monitor.
Ø  Used to monitor the performance of database.
Ø  Database Buffer Hit Ratio should be >94%.
Ø  When it is less than 94%, more data has to be fetched directly from disk, which affects the performance of the system.
Ø  Hit ratio = [(Logical read-physical read)/Logical read]*100.
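The same hit ratio can be computed directly in SQL*Plus (a sketch; logical reads = db block gets + consistent gets, all taken from the standard v$sysstat counters):

    -- Database buffer cache hit ratio in percent since instance start
    SELECT ROUND((1 - phy.value / (cur.value + con.value)) * 100, 2) AS buffer_hit_ratio_pct
    FROM v$sysstat cur, v$sysstat con, v$sysstat phy
    WHERE cur.name = 'db block gets'
      AND con.name = 'consistent gets'
      AND phy.name = 'physical reads';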


ST05: - PERFORMANCE TRACE


-> Here we can switch on SQL, enqueue, RFC, and buffer traces.
-> With this transaction you can switch different traces on or off, output trace logs in a list, display further information about the logged statement (such as an SQL statement), and create the execution plan for an SQL statement.
-> You can also create, save, and study the performance of SQL statements. For information on other functions, see the documentation (see below). You can gain an overview of the available functions from the menu entries.
-> By choosing Help -> Help on statement you can display the documentation on the SAP R/3 System trace.
-> If you would like to receive messages during the evaluation of the performance trace (for example, a progress display for the trace record formatting), enter the command "MESSAGE" in the OK code field. You can terminate the performance trace after 5000 records have been formatted by entering "CHECK" in the OK code field. You can reset these settings by entering "RESET" in the OK code field.



ST06 – OPERATING SYSTEM MONITORING


Here we can see
Ø  -O/s monitoring.
§  Check CPU idle time memory, swap
§  Disk with highest response time.
§  Check CPU, Memory, Swap, DISK, LAN.
§  Used to compare with previous days or previous hours.
§  CPU idle time >= 30%.
§  If it is low it indicates that load on the system is high.
§  Analyze and reduce the load.
§  Gives statistical information from the O/S.
§  Detailed analysis can be performed.
§  Restrict processes that consume high CPU resources.

Ø  Number of CPUs in the system
Ø  CPU load average for the last 1 minute
Ø  CPU load average for the last 5 minutes
Ø  CPU load average for the last 15 minutes
Ø  Physical memory available
Ø  Physical memory free space
Ø  Number of pages, paged in and out per second.
Ø  Maximum swap size (by file system swap: freespace include)
Ø  Actual size of total swap in KB
Ø  Response time of the disk
Ø  Disk with highest response time
·               Name
·               Utilization
·               Avg wait time in ms
·               Kb transferred per sec
Ø  Here we can start and stop saposcol

The operating system collector SAPOSCOL is a stand-alone program that runs in the operating system background. It runs independently of SAP instances exactly once per monitored host. SAPOSCOL collects data about operating system resources, including:
Ø  Usage of virtual and physical memory
Ø  CPU utilization
Ø  Utilization of physical disks and file systems
Ø  Resource usage of running processes



ST07: - APPLICATION MONITORING


Ø  Number of users of an SAP application
Ø  Number of active users of an SAP application
Ø  Number of users connected to one work process
Ø  average number of sessions per user
Ø  Number of servers on which an SAP application is running
Ø  Here we can see all the above information for a particular client also.
By clicking on User distribution → Change client
Ø  And also for particular application server.
By clicking on User distribution → Choose app. server
Ø  Here we can see particular module of sap buffer, DB access, DB memory
Ø  We can view Response time of previous days, previous week, this month, previous month.



 ST10: - TABLE CALL STATISTICS


Here we can see table call statistics and whether tables are not buffered, single-key buffered, or generic-key buffered.


  
ST11- ERROR LOG FILES ( DEVELOPER TRACE )


Ø  Developer traces contain technical information for use in the event of problems with your system. Using the entries in the developer traces requires a sophisticated knowledge of the host systems in which your SAP system is running and of the SAP system itself.
Ø  The traces can be useful in diagnosing host system and SAP-internal problems that are affecting your SAP system.
Ø  Developer traces are written in files in the work directory of the SAP application server that generated the trace.



Component                                         File Name
Dispatcher                                        dev_disp
Work process                                      dev_w0 to dev_w<n>
Gateway                                           dev_rd
R3trans and tp transport programs                 dev_tp
Monitoring infrastructure (test mode only)        dev_moni
Activating / Deactivating Developer Traces from within SAP System:
Enter transaction code SM50.
Choose the work process in which you wish to increase the trace level.
To trace all work processes of a server, use the system profile method shown below.
Choose Process → Trace → Active components.
The system presents a dialog screen that shows the current status of the developer trace.
Turn developer tracing on and off for different server components by selecting the appropriate checkboxes.
Set the degree of detail by entering a number in the Level field. Possible trace levels are as follows:
– 0: no trace.
– 1: write error messages in the trace file.
– 2: full trace. The trace entries that are actually written can vary with the SAP program that is being traced.
- 3: additionally, trace data blocks.

You can also set trace options instance-wide with the rdisp/TRACE=<n> option. The trace values are the same as those in the list above.


ST22:- MONITORING ABAP DUMPS


The runtime errors of the system are logged in the dump analysis. Whenever there is a programming error, a report exceeds the maximum execution time, a tablespace overflows, or max extents are reached, a dump is logged and displayed in this screen.
(1)   What happened?
(2)   What can we do?
(3)   Error analysis?
(4)   How to correct the error?
(5)   System environment.
(6)   Information on where terminated
Analyze the requirement to run that program/report and run the program again.

Try to debug the problem with the help of the error message and get assistance from seniors. Search the SAP Service Marketplace; if the problem still persists, log a message to SAP, work with SAP to get the solution, and implement it. If necessary we follow the transport landscape for applying the recommended notes.


DB01: - EXCLUSIVE LOCK WAITS



 Oracle: Waiting for exclusive database locks (exclusive lock waits)

An exclusive database lock is set when a user locks a table record. If a
second user tries to process this record, he or she must wait until the
first user releases it. This situation is an 'exclusive lock wait'. This
monitor shows such wait situations. You can display an overview of all
database locks (V$Lock) by choosing 'V$Lock'.

The fields are:
  Object                        : Name of the locked table
              for lock holder and lock waiter:
  SID     : Oracle session ID
  SPID  : Oracle shadow process ID (at operating system level)
  Client host: Name of the host on which the R/3 work process is running
  PID     : R/3 work process ID (at operating system level)
  Level  : Level > 1: Lock holder is waiting for another lock
  Time   :  Time (in seconds) since a lock was set or wait time
  Row ID: Oracle row ID of the locked record

Level = 1 : Lock Situation detected
Level > 1 : Lock Holder is waiting on one or more Locks itself
Level = -1: Deadlocks detected
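The same lock-wait information can also be fetched directly from the database (a sketch based on the standard V$LOCK view that this monitor uses):

    -- Sessions holding a lock that others are waiting for (BLOCK = 1)
    -- and sessions waiting for a lock (REQUEST > 0)
    SELECT sid, type, id1, id2, lmode, request, block
    FROM v$lock
    WHERE block = 1 OR request > 0
    ORDER BY id1, id2;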

DB02: - DATABASE PERFORMANCE (TABLES AND INDEXES)


In this transaction we can see

Database and Tablespaces
Ø  Date and time of the last analysis
Ø  Total number of tablespaces in the database
Ø  Total size of all tablespaces in KB
Ø  Free space in all tablespaces in KB
Ø  Used space of all tablespaces as a percentage
Ø  Free size of the tablespace with the lowest amount of free space in KB
Ø  Used space of the tablespace with the highest fill ratio as a percentage
Ø  Current size
Ø  Space statistics
Ø  Free space statistics
Ø  D/b used space should not be above 90%

Tables and Indexes
Ø  Date and time of the last analysis
Ø  Total number of tables defined in the database
Ø  Total amount of used space of all tables defined in the database
Ø  Total amount of used space of all indexes defined in the database
Ø  Date and time of the oldest execution of RUNSTATS using program dmdb6srp
Ø  Date and time of the latest execution of RUNSTATS using program dmdb6srp
Ø  Detailed analysis on tables and indexes
Ø  Missing Indexes
Ø  Space critical objects
Ø  Space statistics
DB12: - BACKUP LOGS

We can see in this transaction

DATABASE BACKUPS

Ø  Last unsuccessful backup ( time, date, and with return code)
Ø  Last successful backup
Ø  Overview of database backups

REDO LOG BACKUPS

Ø  Archiving directory statistics ( free space)
Ø  Overview of redo log files
Ø  Overview of redo log backups

Report: RSORA850
You can use this program to display BRBACKUP and BRARCHIVE logs (Database backup and archive of redo log files).



DB13: - DBA PLANNING CALENDAR


Ø  You can use the DBA Planning Calendar to automate Oracle database administration. This includes implementing, executing, and checking actions
Ø  You can use the DBA Planning Calendar for almost all regular database administration actions. This includes tasks for which the Oracle database system must be stopped, such as offline backups.
Ø  You can only use the DBA Planning Calendar to start actions if the R/3 System is up and running. You can use BR*Tools or SAPDBA to execute tasks such as recovery, for which the SAP System must be inactive.
Ø  All DBA activities that stop the database, such as offline backups, will terminate active SAP transactions. Schedule such activities for night runs, and warn users of the interruption using utilities – system messages
Ø  In all such tasks, the DBA Planning Calendar stops the database and starts the action automatically. The SAP System is not available as long as the database is stopped. However, the SAP System itself is not stopped. Once the database is available again, the SAP System is automatically reconnected.


DB14: -   Display DBA Operation log for Database


 In this transaction we can see

DBA

      SAPDBA(sapdba operations)
Ø  SAPDBA Detail log for DATABASE

BRCONNECT (brconnect operations)
Ø  Number of Tables Without Statistics:
Ø  Number of Indexes Without Statistics:
Ø  Number of Tables Whose Statistics Were Deleted:
Ø  Number of Indexes Whose Statistics Were Deleted:
Ø  Number of Tables Whose Statistics Were Checked:
Ø  Number of Tables Selected After Check:
Ø  Number of Tables for Which Statistics Were Collected:
Ø  Number of Indexes for Which Statistics Were Collected:
Ø  Number of Indexes Whose Structure Was Checked:

BACKUPS

       BRBACKUP(Database backups)
Ø  BRBACKUP action log of database instance
Ø  BRBACKUP detail log of database instance
BRARCHIVE(Redo log Backups)
Ø  BRARCHIVE action log of database instance
Ø  BRARCHIVE detail log of database instance

OTHERS

Ø  Other Operations
Ø  Non-SAP Data Archiving
Ø  All Operations
Ø  All Function IDs:


DB20: - EDIT TABLE STATISTICS



Here we can
Ø  Create statistics of table                       
Ø  Delete statistics of table
Ø  Check structure of table
Ø  DBSTATC Maintenance
Ø  DBA Operations(transaction DB14)
Ø  Start Global Statistics Operation:
  Update: Step 1: Delete harmful statistics. Step 2: Check and update statistics and generate initial statistics. All steps are executed in accordance with the settings in table DBSTATC.

  Update (DBSTATC): Check and update statistics of DBSTATC objects. If the TODO indicator is set, the statistics for the table are updated without a check.

  Create missing statistics: Creates statistics for tables that should have statistics, but do not.

  Delete harmful statistics: Deletes statistics for tables that should not have statistics, but which do have some, such as pool and cluster tables.

DB24:- Logs for Administrative Database Operations


Here we can do
Ø  All complete logs or jobs.
Ø  Displays optimizing statistics.
Ø  Update statistics.
Ø  Backup logs.
Ø  Reorganization
Ø  Consistency check


*SM51:-  Application overview


                        -It displays
Server name
Host name
Type
Status
            -Status may be

(a) Starting: The application server is being started; it cannot process any requests yet.
(b) Initial: The server has logged on to the message server but cannot be addressed yet.
                        (c) Active: The server is processing requests.
(d) Passive: The application server is being deactivated; it will complete the processing of its current tasks but
     will not accept any new tasks.
                        (e) Shutdown: The server is being shut down.
                        (f) Stop: The server has no connection with the message server.

What we should do?     
  • Check the status
  • Check which client, user, report, and action the user worked on in which work process
  • Check the reasons for the status of a work process: hold, stopped, running, waiting; restart (yes or no)
  • Check the number of servers
  • If a server is stopped, click on Release notes to find the operating system, database, and kernel version.
  • Double-click on a server; it will display SM50

*SM50:- Workprocess overview

-It displays

S.no
Type of w/p
PID
status
Reason start
Err
CPU time
Report
Client
User
action
Table
           
W/p status may be:
-Running, Waiting, Holding, Terminated

W/p Type may be
 DIA, UPD, UPD2, ENQ, BTC, SPO

What we should do?
§  Check the restart setting (Yes/No), i.e. whether the work process is restarted or not after a termination
§  Check the Err column for how many times a W/p has terminated abnormally
§  If all the W/p are in running state, identify the W/p that is consuming more time (monitor the work processes at O/s level using dpmon)
§  Go to the task manager and kill the process using the PID.
§  Identify the work process that is consuming more heap memory in SM66 and inform the user before terminating the work process. Every action is done through a CRF (change request form) or after notifying the user.




SM66 : - Global Active Workprocess Overview


-It displays

Server name
Type
PID
Status
Reason start
ERR
CPU time
Clnt
User
Report
Act
Reason
Waiting

What we should do?
                 
                              -Double-click on an entry to find the following details:
(a) Process start time
(b) Report/program/memory used
- Same as SM50, but it displays the global active W/p overview across all servers.


SM04:-  Active User List

                                   
-It displays

Client
User
First name
Transaction
Time
Session
Type
Server


                                    Type may be
                                                1. Communication User.
                                                2. System user  .
                                                3. Service user
                                                4. Dialog user
                                                5. Reference user
                          
                           What we should do?
                                                -It shows the active users
                                                -We can kill a user (click the user, then System → End user)
                                                -We can kill a single user session or the complete user session



*AL08: - Global Active User Overview


-It displays
Active instance
No. of active users
No. of interactive users
No. of RFC users
                                   
                                    Then

Server name
Mand
User
Terminal
TCode


                           What we should do?
                                                -Check the active users
                                                -You cannot kill user sessions from here



*SM21: - System logs

                                   
-It displays
Time
Ty
Nr
Cl
User
TCode
Mno
Text
Date

                                                -Select the date and time
                                                -Runtime problems are recorded here
                           What we should do?
Ø  Check the text for the errors and problems
Ø  Double-click on an entry for the detailed message
Ø  It shows programming errors
Ø  User locking, tablespace overflow
Ø  Max extents reached
Ø  Update inactive, enqueue cannot issue lock
Ø  W/p in PRIV mode
Ø  Most of the dumps, buffer overflow
Ø  All the d/b-related errors, e.g. ORA-600, ORA-1578, ORA-1631, ORA-1632, ORA-1653, ORA-1654
Ø  We can see local and remote system logs as well
Ø  All system messages are recorded here: time, type of message, client, user, and all other information is available
Ø  A detailed list is shown by double-clicking on a message



SM37: - Monitoring the Background Jobs

                                                               
                                                                Select job name, user name and execute           

            -It displays

Job
Clnt
Job created by
Status
Start date
Start time
Duration
 Delay

-          Status may be

1. Scheduled
2. Released
3. Ready
4. Active
5. Finished
6. Cancelled

                     What we should do?

-Select a job and click Job log
-It will show the date, time, message text, message class and number, type, the error that occurred, and the reason
-Our job is to monitor the jobs that are in cancelled status and jobs that finished unsuccessfully.

Reasons why a background job does not run:

1.     Authorizations are not assigned to the user to run the particular job
2.     The file that is to be uploaded is not found
3.     When running a BDC program that loads a flat file into the database, the job is terminated if the file structure is different
4.     Tablespace overflow
5.     Maximum extents reached.













SM13: - Monitoring terminated update records


            -It displays
Clnt
User
Date
Time
Shared Table
Lock argument
Status

Status may be

(1) init: the record is waiting to be updated
(2) auto: when the update task is reactivated from an inactive state, the record will be updated automatically
(3) run: the record is being processed
(4) error: an error occurred that caused the request to be cancelled

            What we should do?

Check the records that are terminated.
Check the types of updates and the update modes:
V1 - critical update, V2 - non-critical update
Synchronous: updates the d/b directly
Asynchronous: updates are stored temporarily and then applied to the d/b

The following are the reasons for terminations:
  • Too many locks are held by users, so a user does not get the lock needed to update his request
  • The maximum size of a table is reached, which can be identified by analyzing the system log and DB locks. Once the table size issue is fixed, you need to restart the update process manually
  • If it is frequently caused by a programming error, this can be identified in SM13; inform the respective developer and the user.



SM14: - Update Administration



-it shows

Update/server/server group/parameter
and
Active/deactive  switch
            What we should do ?
v  Check whether the update is active or deactivated
v  If it is inactive, explore the reasons for it
v  We can switch between the two states when an error has occurred in the update process, to keep the d/b consistent
v  Check the syslog (SM21) to get information about update process errors and inconsistencies, then deactivate the update process by using the parameter



RZ04: - Configuring Operation modes

RZ04 shows the productive instances and their work process distribution:
host name, server name, instance profile, operation mode, and the number of Dia, BP, BPA, Spo, Upd, Up2, and Enq work processes (with a sum).

Maintain operation modes (peak hours, off-peak hours) for all the instances.

            What we should do?

Ø  Check whether the switching of operation modes is working or not
Ø  If it is not working, check SM21
Ø  We can manually switch the w/p types (dialog and b/g) depending upon peak and non-peak hours
Ø  Go to SM63 and assign the time intervals for the operation modes

RZ20:- CCMS Monitoring


            -COMPUTING CENTRE MONITORING SYSTEM
            -BUSINESS PROCESS MONITORING (PERFORMED)
                                                                       
-          Used to configure alerts in the system
-          Monitor set: a group of related monitors grouped together for monitoring
-          Monitor element: an element under a monitor set that needs to be monitored
-          Each element is assigned a certain threshold value
-          When the value reaches the threshold value, an alert is raised
-          It shows the alert text
-          O/s monitoring, R/3 monitoring, and D/b monitoring are performed here.



SPOOL MANAGEMENT


SAP provides a print mechanism using the spool work process. Whenever a user requests the printing of a document, the dialog work process handles the request and stores the data in TemSe (temporary sequential objects). TemSe is located either in the database or in the operating system (global directory), which is determined by the profile parameter rspo/store_location; its values are db or G.

Access methods:
            The access method defines the type of printing method used by the R/3 system. There are three types:
  • Local printing
  • Remote printing
  • Front-end printing
Local printing: the spool work process transfers the data directly to the host spool system (the print manager on Windows, lp/lpr on UNIX). This is the fastest printing method; access methods C (Windows) and L (UNIX) are used.
Remote printing: the spool work process formats the spool request data and the spool request information (author, number of copies) and sends the output request to a spooler on a different host. Access methods U (UNIX) and S (Windows NT) are used.
Front-end printing: the user can print directly to the printer attached to his or her front end.

            Spool servers are the servers on which spool work processes are configured. A spool server can be defined as either a real spool server or a logical spool server:
                               1. Real spool server
                               2. Logical spool server
A real spool server contains at least one R/3 spool work process.
Logical spool servers do not physically exist; instead they point to a real spool server. They are used to switch between real spool servers for load balancing and for failover.
There are various statuses of spool requests, like waiting, in process, printing, problems, error, and archived.
Various problems with spool requests:
1. Device not reachable
2. Network failure
3. Printer problems
4. A long queue at the printer if sequential processing is specified, or the maximum number of spool requests in the system (32,000 or 99,000) has been reached.