After an Oracle instance is started, a special process, the listener, allows database clients and the instance to communicate with each other.
Note: The listener process is not part of an Oracle instance; it is part of networking processes that work with Oracle.
In SAP installations, dedicated servers are used. When a work process makes a request to connect to the database, the listener creates a dedicated server process and establishes an appropriate connection.
The separate server process created on behalf of each work process
(generally, for each user process) is called a shadow process.
To handle database requests from several SAP system users, a work process
communicates with its corresponding shadow process.
When a work process has lost its connection to the database system, it automatically reconnects after the database server is available again and a database request is to be processed.
Oracle background processes perform various tasks required for the database to
function properly.
Databases are stored in data files on disks. To accelerate read and write access to
data, it is cached in the database buffer cache in the SGA.
The Oracle database management system holds the executable SQL statements
in the shared SQL area (also called the shared cursor cache), which is part of
the shared pool allocated in SGA. Another part of the shared pool called the row
cache caches Oracle data dictionary information.
Caching of Data
Databases are stored in data files on hard disks. However, data is never processed
on the disks themselves. No matter whether a database client just needs to read
some data or wants to modify it, it is first copied by the associated shadow process
from the disk to the database buffer cache in the system global area (if it was
not already there).
Figure 3: Oracle Architecture: Caching of Data
Data is always read from disk into the cache when it is accessed for the first
time after an Oracle instance is started. Because all users concurrently connected
to the instance share access to the database buffer cache, copies of data read from
data files into the buffer cache can be reused by any user.
The smallest logical unit that Oracle uses for copying data between data files and
the buffer cache, as well as for managing data in the cache, is the data block.
The size of an Oracle data block can generally be chosen during the creation
of a database. In SAP installations, however, the block size is always 8 KB.
For performance reasons, the physical allocation unit size on disks where
you store Oracle files should also be 8 KB.
Oracle always tries to keep the most recently used data blocks in the buffer cache.
Depending on the size of the buffer cache, it may sometimes be necessary to
overwrite the least recently used data blocks in the buffer cache.
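The least-recently-used behavior described above can be illustrated with a minimal Python sketch. This is a conceptual toy, not Oracle's actual cache implementation; the capacity and block identifiers are invented for illustration:

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU buffer cache: keeps the most recently used blocks and
    evicts the least recently used block when capacity is reached."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()  # block_id -> block contents

    def read_block(self, block_id, disk):
        if block_id in self.buffers:
            # Cache hit: any concurrently connected user can reuse the copy.
            self.buffers.move_to_end(block_id)
        else:
            # Cache miss: copy the block from the data file into the cache.
            if len(self.buffers) >= self.capacity:
                self.buffers.popitem(last=False)  # evict least recently used
            self.buffers[block_id] = disk[block_id]
        return self.buffers[block_id]

disk = {n: f"block-{n}" for n in range(10)}   # stand-in for data files
cache = BufferCache(capacity=3)
for bid in [1, 2, 3, 1, 4]:   # re-reading block 1 makes block 2 the LRU
    cache.read_block(bid, disk)
print(list(cache.buffers))    # cached blocks, least recently used first
```

After the access sequence above, block 2 has been evicted to make room for block 4, while the re-read of block 1 kept it cached.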
Writing of Modified Data
Figure 4: Oracle Architecture: Writing of Modified Data
Any changes to Oracle data (inserts, updates, and deletions) are always performed
in the buffer cache. An Oracle shadow process itself never copies modified data
blocks (dirty blocks) from the buffer cache to the disk. This is the task of a
special Oracle background process called the database writer.
The database writer process writes dirty blocks to disk in the following situations:
Buffers in the buffer cache that contain dirty blocks (dirty buffers) may not
be reused until these blocks have been copied back to disk. When a shadow
process needs to copy data from disk to the buffer cache, it first scans the
cache for non-modified, reusable buffers. If the number of scanned buffers
reaches a certain threshold, the shadow process signals the database writer to
start writing some of the modified blocks to disk. The database writer then
copies those dirty blocks that are on the list of least recently used blocks
(LRU list), thus making them free.
At specific times, all modified buffers in the SGA are written to data files
by the database writer. This event is called a checkpoint; the checkpoint
process (CKPT) triggers the database writer to perform it.
Using the concept of deferred writes rather than immediate writes improves
efficiency because in many cases, several changes on the same block are
performed before the block is copied to disk. Also, the database writer performs
multiblock writes in batches to increase I/O efficiency.
Hint: Although one database writer process (DBW0) is sufficient for
most systems, additional database writer processes (DBW1 and so on)
may be configured in exceptional cases. See SAP Notes 124361 and
191463 for more information.
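The deferred, batched write behavior can be sketched as follows. The threshold, block numbers, and batch handling are invented for illustration; Oracle's real DBW0 is far more sophisticated:

```python
# Toy model of the database writer's deferred, batched writes:
# several changes to the same block result in only one eventual write,
# and dirty blocks are flushed in one multiblock batch.

class ToyDBWriter:
    def __init__(self):
        self.dirty = []        # dirty block ids, least recently used first
        self.disk_writes = []  # batches flushed to "disk"

    def mark_dirty(self, block_id):
        if block_id in self.dirty:
            self.dirty.remove(block_id)  # repeated change, still one write
        self.dirty.append(block_id)

    def signal(self, batch_size):
        """A shadow process (or CKPT) signals the writer: flush a batch of
        the least recently used dirty blocks in one multiblock I/O."""
        batch, self.dirty = self.dirty[:batch_size], self.dirty[batch_size:]
        if batch:
            self.disk_writes.append(batch)

    def checkpoint(self):
        # At a checkpoint, all currently dirty buffers are written.
        self.signal(batch_size=len(self.dirty))

dbw = ToyDBWriter()
for blk in [7, 8, 7, 9]:   # block 7 is changed twice but written once
    dbw.mark_dirty(blk)
dbw.checkpoint()
print(dbw.disk_writes)     # one batch containing blocks 8, 7, 9
```

Note how the double change to block 7 produces a single entry in the write batch: this is the efficiency gain of deferred writes mentioned above.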
Logging of Modifications
Figure 5: Oracle Architecture: Logging of Modifications
Because of deferred writes, a mechanism is needed to prevent data loss and also
to avoid data inconsistencies in case of a crash of any system component (disk,
Oracle instance, or server).
Generally, each RDBMS logs all data changes in a log area, which is written to
disk at appropriate times. The log is generally written to disk after the commit of a
database transaction so that all data block changes are logged.
A database transaction is a logical unit of work for the database server (LUW -
logical unit of work), which is always treated as an atomic unit, meaning it must
either be processed completely or not at all.
In the following paragraphs, do not mix up undo entries/redo entries with the
undo tablespace/redo logs:
Redo entries are stored in redo logs to perform roll forward recovery when
necessary.
Undo entries are stored in a rollback or undo tablespace to undo changes that
have not been committed (rollback).
The undo tablespace is a newer type of tablespace for storing undo entries and
replaces the rollback tablespace.
To achieve data consistency and read consistency, Oracle maintains redo entries
for roll forward (redo) recovery, for example after a crash, and undo entries
to roll back uncommitted transactions.
Redo entries
Redo entries contain the information necessary to reconstruct, redo, or
roll forward changes made to the database by SQL statements within a
committed transaction. Redo entries contain the new values of the modified
data, also called after images.
Parallel to changes made in data blocks, Oracle shadow processes write redo
entries into the redo log buffer. This is a circular buffer in the system global
area that temporarily records all changes (both uncommitted and committed)
made to the database. The Oracle background process log writer (LGWR)
then writes contiguous portions of the redo log buffer sequentially to an
online redo log file (or group of files) on disk.
The online redo log consists of four (or more) online redo log files. Redo entries in the online redo log are used for database recovery if necessary (to redo changes).
The log writer writes entries from the redo log buffer to the online redo log:
When any transaction commits (the LGWR also writes a special
commit record for the corresponding transaction in this case)
Every three seconds
When the redo log buffer is one-third full
When the database writer is about to write modified buffers from the
buffer cache to disk and some of the corresponding redo records have
not yet been written to the online redo log files (write-ahead logging).
This algorithm ensures that space is always available in the redo log buffer
for new redo records.
When a user (a work process) commits a transaction, the transaction is
assigned a system change number (SCN) by the database system. Oracle
records the SCN along with the transaction's redo entries in the redo log.
Note: Because Oracle uses write-ahead logging, DBW0 does not
need to write data blocks when a transaction commits.
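The flush conditions listed above can be sketched in Python. The buffer capacity, entry format, and flush granularity are invented for illustration; real LGWR works on bytes in the SGA, not on Python lists:

```python
# Illustrative sketch of when LGWR writes the redo log buffer to the
# online redo log: on commit, on a timeout, or when the buffer is
# one-third full (write-ahead logging is not modeled here).

BUFFER_CAPACITY = 9  # entries; flush when one-third (3 entries) is used

class ToyLogWriter:
    def __init__(self):
        self.buffer = []       # redo log buffer in the SGA (simplified)
        self.online_log = []   # redo entries persisted on "disk"

    def add_redo(self, entry):
        self.buffer.append(entry)
        if len(self.buffer) >= BUFFER_CAPACITY // 3:
            self.flush()       # trigger: buffer one-third full

    def commit(self, txn):
        # On commit, LGWR also writes a special commit record.
        self.buffer.append(("commit", txn))
        self.flush()           # trigger: transaction commit

    def timeout(self):
        self.flush()           # trigger: three-second timeout

    def flush(self):
        if self.buffer:
            self.online_log.extend(self.buffer)
            self.buffer.clear()

lgwr = ToyLogWriter()
lgwr.add_redo(("T1", "after image 1"))
lgwr.add_redo(("T1", "after image 2"))
lgwr.add_redo(("T1", "after image 3"))  # reaches one-third: flushed
lgwr.commit("T1")                        # commit record flushed as well
print(len(lgwr.online_log))              # 4 entries persisted
```

Because the buffer is flushed whenever one of the triggers fires, space is always available for new redo records, which is the point the text makes above.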
Undo entries
Undo entries contain the information necessary to undo, or roll back, any
changes to data blocks that have been performed by SQL statements, which
have not yet been committed. Undo entries contain the old values of the
modified data, also called before images.
Oracle stores undo information (old values of modified data, called before
images) in special undo segments, separate from the redo log.
The Oracle undo space consists either of an undo tablespace (this solution is
called automatic undo management: only the undo tablespace must be
created) or of rollback segments (manual undo management: rollback
segments must be allocated in a tablespace and managed).
The undo information of a transaction is retained in the undo space at
least until the end of the transaction. It may be overwritten only after the
transaction has been committed.
During database recovery, Oracle first applies all changes recorded in the redo log
(which includes the recovery of changes in the undo space), and then uses the
undo information to roll back any uncommitted transactions.
Moreover, Oracle can use the undo entries for other purposes, including reading
of snapshots of consistent data (accessing before images of blocks changed in
uncommitted transactions).
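The interplay of before images (undo) and after images (redo) can be sketched as follows. The data structures are invented for illustration and ignore blocks, segments, and SCNs:

```python
# Sketch of before images (undo) and after images (redo) for one value.
# A change records both; rollback and consistent reads use the undo side.

database = {"blk1": 100}
undo, redo = [], []   # undo segment and redo log, radically simplified

def change(block, new_value, txn):
    undo.append((txn, block, database[block]))  # before image -> undo
    redo.append((txn, block, new_value))        # after image  -> redo
    database[block] = new_value                 # change made in the cache

def rollback(txn):
    # Apply the transaction's before images in reverse order.
    for t, block, old in reversed([u for u in undo if u[0] == txn]):
        database[block] = old

def consistent_read(block, open_txns):
    # A reader sees the before image of a block changed by an open
    # (uncommitted) transaction: a consistent snapshot.
    for t, b, old in undo:
        if b == block and t in open_txns:
            return old
    return database[block]

change("blk1", 150, txn="T2")
print(consistent_read("blk1", open_txns={"T2"}))  # 100: snapshot before T2
rollback("T2")
print(database["blk1"])                           # 100: change undone
```

The `consistent_read` function corresponds to the "snapshots of consistent data" mentioned above: other sessions never see the uncommitted value 150.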
Log switch
Oracle redo log files do not dynamically grow when more space is needed for redo
entries; they have a fixed size (on SAP systems, typically 50 MB). When the
current online redo log file becomes full, the log writer process closes this file and
starts writing into the next one. This is called a log switch.
The predefined collection of online redo log files (four in our example) is
used in a cycle.
At every log switch, Oracle increases the log sequence number (LSN).
Through the LSN, Oracle automatically creates a sequential numbering
of redo logs.
The online redo log file into which the LGWR is currently writing is called
the current online redo log file.
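The cyclic reuse of the online redo log files and the LSN numbering can be sketched as follows. The file capacity (in entries) is invented; as stated above, real SAP systems typically use fixed 50 MB files:

```python
# Toy circular online redo log: four fixed-size files used in a cycle,
# with the log sequence number (LSN) increased at every log switch.

FILE_CAPACITY = 2                    # entries per file, for illustration
log_files = [[] for _ in range(4)]   # four online redo log files
current, lsn = 0, 1                  # current file index, sequence number

def write_redo(entry):
    global current, lsn
    if len(log_files[current]) >= FILE_CAPACITY:
        # Log switch: close the full file and move to the next in the cycle.
        current = (current + 1) % len(log_files)
        lsn += 1
        log_files[current] = []      # the old contents are overwritten
    log_files[current].append((lsn, entry))

for i in range(9):                   # nine entries force four log switches
    write_redo(f"redo-{i}")
print(current, lsn)                  # back in file 0, now with LSN 5
```

After the fourth switch, the cycle wraps around to the first file again, but the LSN keeps increasing, giving redo logs their sequential numbering.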
Control files
Figure 7: Oracle Architecture: Control files
Every Oracle database has a control file, which is a small binary file necessary
for the database to start and operate successfully. A control file contains entries
that specify the physical structure and state of the database, such as tablespace
information, names and locations of data files and redo log files, or the current log
sequence number. If the physical structure of the database changes (for example,
when creating new data files or redo log files), the control file is automatically
updated by Oracle.
(TADM51, Unit 1: Database Overview. © 2008 SAP AG. All rights reserved.)
Control files can be changed only by Oracle. No database administrator or
any other user can edit the control file directly.
After opening the database, the control file must be available for writing.
If for some reason the control file is not accessible, the database cannot
function properly.
Oracle control files can be mirrored for security reasons. Several copies can
be stored at different locations; Oracle updates these at the same time. In
SAP installations, the control file is stored in three copies, which must be
created on three physically separate disks.
Normally, control files are quite small and do not grow. When using RMAN for
backups (see unit 2, Backup, Restore and Recovery), control files can grow by a
factor of 10 because they contain information about RMAN backups.
Checkpoints
Figure 8: Oracle Architecture: Checkpoints
The term checkpoint has at least two different meanings.
The checkpoint is the point at which the database writer writes all changed buffers
in the buffer cache to the data files.
The database writer (DBW0) receives a signal at specific times from the
background checkpoint process (CKPT) to perform this action. DBW0 then
copies all buffers that were dirty at that moment to disk. Before DBW0 finishes
this job, other blocks in the buffer cache can become dirty.
After the checkpoint event has finished, the oldest dirty buffer in the buffer cache
determines a point in the redo log from which instance recovery must begin if a
crash occurs. This log position is also called a checkpoint.
The task of the checkpoint process is not only to activate the database writer;
note that CKPT itself does not write blocks to disk (that is the task of DBW0):
It also writes checkpoint information to the data file header.
It writes information about the checkpoint position in the online redo log
into the control file.
The information about the checkpoint position in the online redo log in the control
file is needed for instance recovery. It tells Oracle that all redo entries recorded
before this point are not necessary for database recovery, as they were already
written to data files.
The frequency of checkpoints is one of the factors that influences the time required
for the instance to recover from a failure. The less frequent the checkpoints, the
longer the time the instance needs for recovery.
A checkpoint always occurs at a log switch.
The frequency of checkpoints can be specified with the help of Oracle parameters. In
SAP installations, these parameters have values such that they are effectively
not used, and checkpoints occur only at log switches.
Database Recovery
Figure 9: Oracle Architecture: Database Recovery
Online redo logs play an important role when starting an Oracle instance and
opening the database, especially after a crash or when the instance was not shut
down cleanly. In this situation, Oracle recognizes that the database was not
properly shut down and automatically initiates database recovery (also called
instance recovery).
The automatic recovery at restart consists of two phases:
Starting at the checkpoint position, redo entries are read from the online redo
log and transactions are reprocessed (roll forward). This includes the roll
forward of changes in the undo space.
For every transaction that was either uncommitted at the time of the crash or
rolled back explicitly before the crash (so that there is no commit entry for it
in the redo log), a rollback is performed with the help of before images read
from the undo space. Oracle ensures that this is always possible because it
never deletes undo entries of open transactions from the undo space.
The result is a consistent database containing only changes committed before
the crash.
In the example in the above figure:
Transaction T1 is not relevant for redo or undo, as it was committed at the time
of the last checkpoint. Its changes were written to disk at
the last checkpoint.
Transactions T2, T3, T4, T5, and T6 are redone, as they caused changes in the
database after the last checkpoint. However, among these, only the changes
to T4 and T6 are committed, which means only these changes are persistent.
Transactions T2, T3, and T5 are rolled back.
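The two recovery phases and the T1..T6 example can be sketched as follows. The log format is invented for illustration; in reality, before images come from the undo space (itself recovered via the redo log), not from the redo log directly:

```python
# Sketch of automatic instance recovery, mirroring the T1..T6 example:
# roll forward all redo recorded after the checkpoint, then roll back
# every transaction without a commit record.

# (txn, block, before_image, after_image) entries after the checkpoint:
redo_log = [
    ("T2", "a", 0, 1), ("T3", "b", 0, 2), ("T4", "c", 0, 3),
    ("T5", "d", 0, 4), ("T6", "e", 0, 5),
]
committed = {"T4", "T6"}   # only these have commit records in the log

# State of the data files at the last checkpoint (T1 already on disk):
database = {"a": 0, "b": 0, "c": 0, "d": 0, "e": 0}

# Phase 1: roll forward every change recorded after the checkpoint.
for txn, block, before, after in redo_log:
    database[block] = after

# Phase 2: roll back uncommitted transactions using before images.
for txn, block, before, after in reversed(redo_log):
    if txn not in committed:
        database[block] = before

print(database)   # only the changes of T4 and T6 survive
```

The result is the consistent state described above: the blocks touched by T2, T3, and T5 are back at their before images, while the committed changes of T4 and T6 persist.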
Redo Log Mirroring
From a data security point of view, the online redo logs are one of the most critical
areas in an Oracle server. If you lose them in a crash, a complete recovery of the
database is not possible, and the result is a loss of some data.
Caution: Online redo logs must always be mirrored, meaning that two or
more copies of each redo log must be maintained on different disks.
Figure 10: Oracle Architecture: Redo Log Mirroring
Oracle itself can mirror online redo logs. This feature is used in SAP installations
by default, so that there is no need for a software or hardware RAID (redundant
array of independent disks) solution. On the other hand, from the
data security point of view, it does not matter which solution you choose. Even a
combination of both Oracle and RAID mirroring is feasible to minimize the risk
of losing an online redo log.
Archiving
As the online redo log is limited in size and cannot grow automatically, Oracle
must overwrite old redo entries to write new ones.
Only the oldest redo entries up to the checkpoint position in the log, which
corresponds to data changes that have already been written to data files, can be
overwritten. This ensures that automatic instance recovery after a crash is always
possible.
Figure 11: Oracle Architecture: Archiving
However, in a situation where you must restore data files after a disk crash and
recover them manually (usually to the state the data had at the point of the crash),
you need both a database backup and all the redo information written after this
backup. In an SAP system, log switches occur every few minutes so that online
redo log files are reused very frequently. To prevent loss of redo information, it
must be copied from online redo log files to a safe location before overwriting.
This is the task of a special Oracle background process called archiver (ARC0).
Archiving must be explicitly activated by turning on the ARCHIVELOG
mode of the database and setting the Oracle instance parameter
LOG_ARCHIVE_START to TRUE.
When LOG_ARCHIVE_START is TRUE, the archiver process starts automatically
when the Oracle instance is started. When the ARCHIVELOG mode of the
database is turned on, the archiver process copies each newly written online redo
log file, after the switch to the next online redo log file, to an offline redo log file.
Overwriting old redo log entries in online redo logs is not allowed before the
entries have been copied to offline redo logs. Once an offline redo log file has
been successfully created as a copy, the corresponding online redo log file is
released to be overwritten with new log information. The directory where offline
redo log files are created can be specified through an Oracle parameter.
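The rule that an online redo log file may only be reused after archiving can be sketched as follows. The DBSID C11 and the file names are hypothetical examples following the <DBSID>arch1_<LSN>.dbf pattern mentioned later in this lesson:

```python
# Sketch of the archiver's role in ARCHIVELOG mode: an online redo log
# file may only be overwritten after its offline copy exists.

archived = {}   # offline redo log files: name -> contents
online = {1: "redo of LSN 1", 2: "redo of LSN 2"}   # completed online logs

def archive(lsn):
    """ARC0 copies a newly completed online redo log file to an offline
    redo log file (the name pattern is illustrative)."""
    archived[f"C11arch1_{lsn}.dbf"] = online[lsn]

def reuse(lsn):
    """Reuse is only allowed once the offline copy exists."""
    if f"C11arch1_{lsn}.dbf" not in archived:
        raise RuntimeError(f"archiver stuck: LSN {lsn} not yet archived")
    online[lsn] = ""   # released for new redo entries

archive(1)
reuse(1)         # fine: the offline copy exists
try:
    reuse(2)     # not archived yet: overwriting must wait
except RuntimeError as err:
    print(err)
```

The blocked `reuse(2)` call models the well-known "archiver stuck" situation: if offline redo logs cannot be written, the database eventually stops accepting changes rather than lose redo information.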
Caution: Archiving must be activated in productive systems. Moreover,
offline redo log files should be stored on a mirrored disk to prevent loss of
redo information. A RAID system can be used for this purpose.
In an SAP system, the activation of archiving is the default setting. A deactivation
of archiving by changing the database log mode to NOARCHIVELOG (required,
for example, during a system upgrade) is supported by the SAP tool BRSPACE.
Caution: If you lose a disk containing offline redo logs and data files
after a crash, complete recovery is no longer possible. Therefore, offline
redo logs and data files should be stored on different disks.
Other Background Processes
There are two more background processes that always run in an Oracle instance:
System Monitor (SMON)
Performs recovery at instance startup, if necessary
Writes alert log information if any other instance process fails
Cleans up temporary segments that are no longer in use
Process Monitor (PMON)
Monitors shadow processes
In case a client process crashes, PMON rolls back its non-committed
transaction, stops the corresponding shadow process, and frees
resources that the process was using.
Oracle Directory Structure in SAP
Figure 12: Oracle Directory Structure on SAP Systems
The Oracle directory (folder) and file names are standardized in SAP
environments. Directories (folders) are always created with the same structure and
naming convention during the installation. This structure may not be changed, and
the naming conventions must be observed, as SAP tools for Oracle administration
rely on them.
Various parts of the Oracle directories and files must be physically separated from
each other for performance and data security reasons. On UNIX systems, the
Oracle directories appear as a tree structure because the file systems created on the
physical disks are mounted on directories. On Windows, however, you will have
several \oracle\<DBSID> folders on different disks with different drive letters.
Hint: On UNIX, the <Release> subdirectory under
/oracle/<DBSID> also indicates whether you use a 32-bit
or 64-bit Oracle version, for example, /oracle/<DBSID>/102_64
for a 64-bit Oracle 10.2.
Figure 13: Oracle Directories and Files on SAP Systems
dbs (on UNIX) or database (on Windows)
The Oracle profile init<DBSID>.ora or spfile<DBSID>.ora
holds the Oracle instance configuration parameters
The profile init<DBSID>.sap holds configuration parameters for
the SAP administration tools (BR*Tools)
sapdata<n>
Contains the data files of the tablespaces
origlogA/B, mirrlogA/B
Online redo log files reside in the origlog and mirrlog directories:
Log file numbers 1 and 3 and their mirrors in origlogA and mirrlogA,
log file numbers 2 and 4 and their mirrors in origlogB and mirrlogB,
respectively
oraarch
Offline redo log files are written to the oraarch directory; their names are
specified with the help of Oracle instance configuration parameters, so the name
<DBSID>arch1_<LSN>.dbf is just an example.
Note that this has changed: previously, offline redo logs were written to the
saparch directory. The reason for the new directory is that in the event of
an archiver stuck, in rare cases BRARCHIVE was not able to back up offline
redo logs to release space, because it could not write its own
log file. How to change from saparch to oraarch is described in
the lesson Housekeeping and Troubleshooting.
saptrace
Oracle dump files are written to the directory saptrace. The
Oracle alert log alert_<DBSID>.log resides in the directory
saptrace/background. Traces of Oracle shadow processes are written
to the directory saptrace/usertrace.
saparch
Stores the logs written by the SAP tool BRARCHIVE
sapbackup
Stores logs written by the SAP tools BRBACKUP, BRRESTORE, and
BRRECOVER
sapreorg
BRSPACE creates logs for its different functions here
sapcheck
BRCONNECT creates logs for its check functions here
Oracle Directories and Environment Variables
Figure 14: Oracle Directories and Environment Variables
On the database server, the environment variables ORACLE_SID,
ORACLE_HOME, and SAPDATA_HOME must always be set for the user
<sapsid>adm, as well as for the user ora<dbsid> on a UNIX platform.
ORACLE_SID
This is the system ID of the database instance (DB SID).
ORACLE_HOME
This is the home directory of the Oracle software. More precisely,
ORACLE_HOME points to the directory that contains subdirectories bin,
dbs (or database), and network. This means in particular that the
Oracle profile init<DBSID>.ora or spfile<DBSID>.ora is always
located in $ORACLE_HOME/dbs (in %ORACLE_HOME%\database on
Windows).
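As a small illustration of these conventions, the following Python sketch derives the expected profile location from ORACLE_HOME. The DBSID C11 and the paths are hypothetical examples; the "/" separator is kept throughout for simplicity:

```python
# Sketch: where the Oracle instance profile is expected, derived from
# ORACLE_HOME according to the SAP naming conventions described above.

def profile_path(oracle_home, dbsid, windows=False):
    # dbs on UNIX, database on Windows
    subdir = "database" if windows else "dbs"
    return f"{oracle_home}/{subdir}/init{dbsid}.ora"

print(profile_path("/oracle/C11/102_64", "C11"))
# /oracle/C11/102_64/dbs/initC11.ora
print(profile_path("C:/oracle/C11/102_64", "C11", windows=True))
# C:/oracle/C11/102_64/database/initC11.ora
```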
SAPDATA_HOME
Points to the directory in which the database files are stored.
Hint: The location of the control files and of the offline redo
logs is configured in the Oracle profile init<DBSID>.ora; the
location of all other files (data files, online redo logs, and so on) is
stored in the database itself. Therefore, SAPDATA_HOME is mainly
used by BR*Tools to offer suitable directories, for example, when
new tablespaces or data files need to be created.
Other variables
There are also other variables you can set if the corresponding directories
are not subdirectories of SAPDATA_HOME. This is often the case
on Windows systems owing to the different drive letters: SAPARCH,
SAPBACKUP, SAPCHECK, SAPREORG.
On an Oracle client (especially on every SAP application server), the variable
ORACLE_HOME must also be set so that connection information can be found in
$ORACLE_HOME/network/admin.
In a UNIX environment, the environment variable ORA_NLS10 is also
set for the user ora<dbsid>. The default value for ORA_NLS10 is
$ORACLE_HOME/nls/data. Since ORACLE_HOME is set, ORA_NLS10
does not need to be set.
For the user <sapsid>adm, the environment variable ORA_NLS10 is not set and
must not be set. The Oracle Instant Client loads the NLS data from a dynamic
library (NLS library), which is stored in the Instant Client directory.
See SAP Note 830578 for information on how to set this variable correctly for
Oracle 10g. For earlier Oracle releases, see SAP Note 180430 and the other SAP
Notes referenced there.
Oracle Real Application Clusters (RAC)
To improve performance, increase throughput, and deliver high availability at
the same time, you can install your SAP system in a Real Application Clusters
(RAC) environment. RAC overcomes the restrictions of normal failover solutions with:
Concurrent processing
Load balancing
Fast and reliable detection of a node or network failure
Fast recovery