Synchronize values of APPLPTMP with UTL_FILE_DIR in 12.2 and 19c

  • Synchronize values of APPLPTMP with UTL_FILE_DIR for PL/SQL-based Concurrent Requests.

For Oracle Database 19c, you can retrieve the value of UTL_FILE_DIR by using the following command as the APPS user:

SQL> select value from v$parameter where name='utl_file_dir';

See My Oracle Support Knowledge Document 2525754.1, Using UTL_FILE_DIR or Database Directories for PL/SQL File I/O in Oracle E-Business Suite Releases 12.1 and 12.2, for more information.

3. Set the target UTL_FILE_DIR values in the Oracle Database

Starting with Oracle Database 19c, UTL_FILE_DIR is not a supported database initialization parameter.

Set the target UTL_FILE_DIR values in the database.

Source the environment file.

cd <ORACLE_HOME>
source <CONTEXT_NAME>.env

  1. Obtain the existing value of UTL_FILE_DIR using the following command:

perl <ORACLE_HOME>/appsutil/bin/txkCfgUtlfileDir.pl -contextfile=<DB Context File> \
-oraclehome=<ORACLE_HOME> -outdir=<ORACLE_HOME>/appsutil/log -mode=getUtlFileDir

This will create a text file <DB_NAME>_utlfiledir.txt under the <ORACLE_HOME>/dbs directory with references to the target Oracle home.

  2. Review the <DB_NAME>_utlfiledir.txt file and edit the values, if required.

Note: Before proceeding to the next step, ensure that you create the physical directories for all directory paths specified in the <DB_NAME>_utlfiledir.txt file.

  3. Run the following command to store the updated values for UTL_FILE_DIR in the database:

cd <ORACLE_HOME>/appsutil/bin
perl <ORACLE_HOME>/appsutil/bin/txkCfgUtlfileDir.pl -contextfile=<DB Context File> \
-oraclehome=<ORACLE_HOME> -outdir=<ORACLE_HOME>/appsutil/log \
-mode=setUtlFileDir

This command validates that the directory paths provided in <DB_NAME>_utlfiledir.txt exist and also creates directory objects for all the physical directory paths.

  4. Run the following command to create the directory object for the outbound directory (pointed to by the s_outbound_dir context variable in the database tier context file).

perl <ORACLE_HOME>/appsutil/bin/txkCfgUtlfileDir.pl -contextfile=<DB Context File> \
-oraclehome=<ORACLE_HOME> -outdir=<ORACLE_HOME>/appsutil/log -mode=createDirObject

When prompted for the OS path for the directory object to be created, enter the value of the s_outbound_dir context variable in the database tier context file.
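The directory object the script creates is an ordinary database directory object; a hand-run equivalent would look something like this (the directory name and path here are illustrative, not the actual values):

```sql
-- Illustrative only: use the actual s_outbound_dir value from the context file
CREATE OR REPLACE DIRECTORY EBS_OUTBOUND AS '/u01/oracle/EBSDB/outbound';
GRANT READ, WRITE ON DIRECTORY EBS_OUTBOUND TO apps;
```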

  5. Synchronize the value of UTL_FILE_DIR in the database tier context file by running the following command:

perl <ORACLE_HOME>/appsutil/bin/txkCfgUtlfileDir.pl -contextfile=<DB Context File> \
-oraclehome=<ORACLE_HOME> -outdir=<ORACLE_HOME>/appsutil/log -mode=syncUtlFileDir \
-skipautoconfig=yes

  6. Run AutoConfig on the database tier nodes:

adconfig.sh contextfile=/u01/app/oracle/product/db12102/etest/appsutil/etest_etestdb.xml

11g Database OCA Cheat Sheet Notes

My notes for teaching the OCA 11g DBAI class, for students preparing to take the OCA exam. 

Database Writer writes when:

  1. No free buffers
  2. Too many dirty buffers
  3. 3-second timeout
  4. Checkpoint

Log Writer writes when:

  1. session COMMIT
  2. buffer 1/3 full
  3. DBWR is about to write

UNDO

UNDO is used for:

  1. Reversing incomplete transactions
  2. Read consistency (temporary)
  3. Transaction isolation
  4. Flashback (9i temporary)

UNDO errors:

Run out of space: ORA-30036 "unable to extend segment in undo tablespace". The DML statement fails, but queries are unaffected and succeed.

Fails on consistent read: the undo needed for CR has been overwritten: ORA-01555 "snapshot too old". The query fails, but the transaction that overwrote the undo succeeds.

FAILURE TYPES

  1. Statement failure
  2. User process failure- Network(listener, network card, route), User error
  3. Media Failure
  4. Instance Failure

RECOVERY

  1. Loss of control file
  2. Loss of online redo log
  3. Loss of datafile in NOARCHIVELOG mode
  4. Loss of datafile in ARCHIVELOG mode

Init parameters

FAST_START_MTTR_TARGET

Views

V$INSTANCE_RECOVERY

Commands

alter system checkpoint

alter system switch logfile

FLASHBACK

db_recovery_file_dest

db_recovery_file_dest_size

V$RECOVERY_FILE_DEST

V$FLASH_RECOVERY_AREA_USAGE

How to Set Archive Log Mode

  1. ARCHIVE LOG LIST (check the current mode)
  2. SHUTDOWN IMMEDIATE
  3. STARTUP MOUNT
  4. ALTER DATABASE ARCHIVELOG
  5. ALTER DATABASE OPEN
  6. ALTER SYSTEM ARCHIVE LOG CURRENT, or ALTER SYSTEM SWITCH LOGFILE, to test archiving
  7. SELECT name, is_recovery_dest_file FROM v$archived_log
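The steps above as a SQL*Plus session sketch (run as SYSDBA):

```sql
ARCHIVE LOG LIST                  -- check the current mode

SHUTDOWN IMMEDIATE                -- the mode can only be changed while mounted
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

ALTER SYSTEM ARCHIVE LOG CURRENT; -- force an archive to verify it works
SELECT name, is_recovery_dest_file FROM v$archived_log;
```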

Commands to Perform a Hot Backup

  1. ALTER TABLESPACE <name> BEGIN BACKUP
  2. Copy the datafiles at the OS level
  3. ALTER TABLESPACE <name> END BACKUP
  4. ALTER SYSTEM ARCHIVE LOG CURRENT

RMAN commands for reviewing backups:

  1. LIST BACKUP OF DATABASE;
  2. LIST BACKUP OF ARCHIVELOG ALL;
  3. REPORT NEED BACKUP;
  4. REPORT OBSOLETE;
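As a sketch, a user-managed hot backup of a single tablespace looks like this (paths are illustrative; the database must be in ARCHIVELOG mode):

```sql
ALTER TABLESPACE users BEGIN BACKUP;
HOST cp /u01/oradata/users01.dbf /backup/users01.dbf
ALTER TABLESPACE users END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;
```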

RMAN commands

All the RMAN commands are listed at the bottom.

  1. CROSSCHECK
  2. DELETE EXPIRED
  3. DELETE OBSOLETE
  4. CATALOG

Copy the file(s) to the new location.

RMAN> COPY DATAFILE 8 TO '/u01/app/oracle/oradata/users.dbf';

Take the tablespace offline. We could have taken it offline before the copy, removing the need for a recovery, but the tablespace would have been offline longer using that method.

RMAN> SQL 'ALTER TABLESPACE USERS OFFLINE';

Switch to the new datafile copy(s) and recover the tablespace.

RMAN> SWITCH DATAFILE 8 TO COPY;
RMAN> RECOVER TABLESPACE users;

Turn the tablespace online again.

RMAN> SQL 'ALTER TABLESPACE users ONLINE';

Remove the old datafile(s).

RMAN> HOST 'rm /u01/app/oracle/product/11.2.0.3/dbs/users.dbf';

AUDITING

DBA_AUDIT_TRAIL <– standard auditing (AUDIT_TRAIL=DB)

DBA_FGA_AUDIT_TRAIL <– fine-grained auditing (DBMS_FGA, all rows)

These two views combine to form DBA_COMMON_AUDIT_TRAIL.

Value-based auditing uses triggers.

ALERTS – Threshold Alerts – raised by MMON – stateful

DBMS_SERVER_ALERT – package for setting thresholds

DBA_OUTSTANDING_ALERTS – active stateful alerts

DBA_ALERT_HISTORY – cleared alerts; "snapshot too old" is stateless

 

STATISTICS

TABLES – DBA_TABLES

COLUMNS – DBA_TAB_COLUMNS

INDEXES – DBA_INDEXES, INDEX_STATS

 

STATISTICS_LEVEL = TYPICAL, ALL, BASIC

 

OPATCH

opatch apply

opatch rollback

 

To move or rename a datafile do the following.

  • Shutdown the database.
  • Rename the physical file on the OS.
  • Start the database in mount mode.
  • Issue the ALTER DATABASE RENAME FILE command to rename the file within the Oracle dictionary.
  • Open the database.
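A sketch of those steps in SQL*Plus (paths are illustrative):

```sql
SHUTDOWN IMMEDIATE
-- at the OS prompt: mv /u01/oradata/users01.dbf /u02/oradata/users01.dbf
STARTUP MOUNT
ALTER DATABASE RENAME FILE '/u01/oradata/users01.dbf'
                        TO '/u02/oradata/users01.dbf';
ALTER DATABASE OPEN;
```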

To move or rename a controlfile do the following.

  • Alter the control_files parameter using the ALTER SYSTEM command.
  • Shutdown the database.
  • Rename the physical file on the OS.
  • Start the database.
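A sketch of the controlfile steps (paths are illustrative; assumes an spfile):

```sql
ALTER SYSTEM SET control_files =
  '/u02/oradata/control01.ctl', '/u02/oradata/control02.ctl'
  SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
-- at the OS prompt: mv the control files to the new locations
STARTUP
```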

RMAN Commands

Command Purpose
“@” Run a command file.
“@@” Run a command file in the same directory as another command file that is currently running. The @@ command differs from the @ command only when run from within a command file.
“ALLOCATE CHANNEL” Establish a channel, which is a connection between RMAN and a database instance.
“ALLOCATE CHANNEL FOR MAINTENANCE” Allocate a channel in preparation for issuing maintenance commands such as DELETE.
“allocOperandList” A subclause that specifies channel control options such as PARMS and FORMAT.
“ALTER DATABASE” Mount or open a database.
“archivelogRecordSpecifier” Specify a range of archived redo log files.
“BACKUP” Back up database files, copies of database files, archived logs, or backup sets.
“BLOCKRECOVER” Recover an individual data block or set of data blocks within one or more datafiles.
“CATALOG” Add information about a datafile copy, archived redo log, or control file copy to the repository.
“CHANGE” Mark a backup piece, image copy, or archived redo log as having the status UNAVAILABLE or AVAILABLE; remove the repository record for a backup or copy; override the retention policy for a backup or copy.
“completedTimeSpec” Specify a time range during which the backup or copy completed.
“CONFIGURE” Configure persistent RMAN settings. These settings apply to all RMAN sessions until explicitly changed or disabled.
“CONNECT” Establish a connection between RMAN and a target, auxiliary, or recovery catalog database.
“connectStringSpec” Specify the username, password, and net service name for connecting to a target, recovery catalog, or auxiliary database. The connection is necessary to authenticate the user and identify the database.
“CONVERT” Converts datafile formats for transporting tablespaces across platforms.
“CREATE CATALOG” Create the schema for the recovery catalog.
“CREATE SCRIPT” Create a stored script and store it in the recovery catalog.
“CROSSCHECK” Determine whether files managed by RMAN, such as archived logs, datafile copies, and backup pieces, still exist on disk or tape.
“datafileSpec” Specify a datafile by filename or absolute file number.
“DELETE” Delete backups and copies, remove references to them from the recovery catalog, and update their control file records to status DELETED.
“DELETE SCRIPT” Delete a stored script from the recovery catalog.
“deviceSpecifier” Specify the type of storage device for a backup or copy.
“DROP CATALOG” Remove the schema from the recovery catalog.
“DROP DATABASE” Deletes the target database from disk and unregisters it.
“DUPLICATE” Use backups of the target database to create a duplicate database that you can use for testing purposes or to create a standby database.
“EXECUTE SCRIPT” Run an RMAN stored script.
“EXIT” Quit the RMAN executable.
“fileNameConversionSpec” Specify patterns to transform source to target filenames during BACKUP AS COPY, CONVERT and DUPLICATE.
“FLASHBACK” Returns the database to its state at a previous time or SCN.
“formatSpec” Specify a filename format for a backup or copy.
“HOST” Invoke an operating system command-line subshell from within RMAN or run a specific operating system command.
“keepOption” Specify that a backup or copy should or should not be exempt from the current retention policy.
“LIST” Produce a detailed listing of backup sets or copies.
“listObjList” A subclause used to specify which items will be displayed by the LIST command.
“maintQualifier” A subclause used to specify additional options for maintenance commands such as DELETE and CHANGE.
“maintSpec” A subclause used to specify the files operated on by maintenance commands such as CHANGE, CROSSCHECK, and DELETE.
“obsOperandList” A subclause used to determine which backups and copies are obsolete.
“PRINT SCRIPT” Display a stored script.
“QUIT” Exit the RMAN executable.
“recordSpec” A subclause used to specify which objects the maintenance commands should operate on.
“RECOVER” Apply redo logs and incremental backups to datafiles restored from backup or datafile copies, in order to update them to a specified time.
“REGISTER” Register the target database in the recovery catalog.
“RELEASE CHANNEL” Release a channel that was allocated with an ALLOCATE CHANNEL command.
“releaseForMaint” Release a channel allocated with an ALLOCATE CHANNEL FOR MAINTENANCE command.
“REPLACE SCRIPT” Replace an existing script stored in the recovery catalog. If the script does not exist, then REPLACE SCRIPT creates it.
“REPORT” Perform detailed analyses of the content of the recovery catalog.
“RESET DATABASE” Inform RMAN that the SQL statement ALTER DATABASE OPEN RESETLOGS has been executed and that a new incarnation of the target database has been created, or reset the target database to a prior incarnation.
“RESTORE” Restore files from backup sets or from disk copies to the default or a new location.
“RESYNC” Perform a full resynchronization, which creates a snapshot control file and then copies any new or changed information from that snapshot control file to the recovery catalog.
“RUN” Execute a sequence of one or more RMAN commands, which are one or more statements executed within the braces of RUN.
“SEND” Send a vendor-specific quoted string to one or more specific channels.
“SET” Sets the value of various attributes that affect RMAN behavior for the duration of a RUN block or a session.
“SHOW” Displays the current CONFIGURE settings.
“SHUTDOWN” Shut down the target database. This command is equivalent to the SQL*Plus SHUTDOWN command.
“SPOOL” Write RMAN output to a log file.
“SQL” Execute a SQL statement from within Recovery Manager.
“STARTUP” Start up the target database. This command is equivalent to the SQL*Plus STARTUP command.
“SWITCH” Specify that a datafile copy is now the current datafile, that is, the datafile pointed to by the control file. This command is equivalent to the SQL statement ALTER DATABASE RENAME FILE as it applies to datafiles.
“UNREGISTER DATABASE” Unregisters a database from the recovery catalog.
“untilClause” A subclause specifying an upper limit by time, SCN, or log sequence number. This clause is usually used to specify the desired point in time for an incomplete recovery.
“UPGRADE CATALOG” Upgrade the recovery catalog schema from an older version to the version required by the RMAN executable.
“VALIDATE” Examine a backup set and report whether its data is intact. RMAN scans all of the backup pieces in the specified backup sets and looks at the checksums to verify that the contents can be successfully restored.

Performance tuning the R12.1.3 Upgrade

Question from EBS DBA:

We are up to our sixth pass and are on our seventh and perhaps last one. The past two times have taken longer than the previous ones (I’m talking like 8-12 hours each on the 12.1 and 12.1.3 patches). Doesn’t seem like a lot, but when you only have 72 hours it’s a lot!

I think I may know why but wanted to run it by you. We avoided running the statistics steps in order to save some time up front. Thinking about it now, could this end up adding hours to the two main patches?

I think yes, it could be huge. We are also looking at the SAN as well but thought I would start here.

Answer from Mike:

Yes, statistics can be very helpful. You should run these up front with adstats.sql.

Second, take a look at V$SYSTEM_EVENT; it will tell you what your wait events are. These statistics are cumulative, so bounce the database just before you start the upgrade and the view will start fresh.
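For example, a quick query for the top non-idle waits:

```sql
SELECT *
  FROM (SELECT event, total_waits, time_waited
          FROM v$system_event
         WHERE wait_class <> 'Idle'
         ORDER BY time_waited DESC)
 WHERE ROWNUM <= 10;
```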

Your SAN is probably the biggest problem. Get everyone off the SAN during the upgrade; you are sharing bandwidth. Also, ask your SAs if they can increase the stripe width for the upgrade: stripe across 8-12 disks during the upgrade and then reduce that after the upgrade. The reason is concurrency. When you run the upgrade with the stripe going across 12 disks, you’ll light up the whole SAN with the upgrade. Also, spread your mount points across the SAN so that you are using more disks and aren’t using just one or two mount points with a very limited number of disks. If you can light up the whole SAN you’ll have better throughput. However, reduce this after the upgrade, because with multiple users, everyone will be waiting for each other’s transactions to complete. Transactions will be very fast with lots of mount points and more disks, but everyone will be waiting. As long as it’s just the upgrade running, more disks in each stripe will improve your IOPS.

Also, check your network. You may be waiting on your network. If you have multiple DNS servers, test how long it takes to return an address from each of them. If you have a bad DNS address, SQL*Net will wait one minute before trying the next DNS address. For a 3-second SQL transaction, this can be devastating.

Use fewer workers if you have poor IO and more workers if you have fast IO. For example, with just a few disks, use about the same number of workers (3-4) as you have disks. With a large number of disks, use 16 workers. However, this depends on the number of CPUs you have. More workers will help the compile stage, so use about the same number of workers as you have CPUs if you have lots of disks. If you have only a few disks, more CPUs and more workers will make the problem worse.

RMOUG – Graham Wood 25+ years at Oracle

Graham’s presentation on the “Hidden” Features of the Oracle Database:

Since these presentations aren’t generally published, I took notes; they are not guaranteed to be correct.

Graham Wood Hidden Features

Free Features

dbms_monitor.session_trace_enable  10g

plan_stat   11g argument added to dbms_monitor.session_trace_enable 

dump plan when cursor is closed

plan_stat values: first_execution, all_executions, never; they control when row source statistics are dumped

cursor mapping broken in 11gR1, statement text printed on every execute

long cursor numbers in 11gR2, not so human readable

TKPROF enhancements

                Prints sql ID and plan hash for each statement

                Reports First, Avg, Max from multiple row source dumps

                But not CLOSE … DOH!

Enable trace for individual statements

                Alter session set events 'sql_trace [sql:…]'

Getting SQL info: V$SQLSTAT (10g) doesn’t use any latches

                Using V$SQL can make latch problems worse

DBMS_XPLAN.DISPLAY_CURSOR

EXPLAIN PLAN doesn’t work correctly, so use DBMS_XPLAN

Select * from table(dbms_xplan.display_cursor('atnoianf82398nhfa', NULL, 'ADVANCED'));

Select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'));

Use the CARDINALITY HINT to check the actual plan

Select /*+ gather_plan_statistics */
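A complete round trip with the hint (the table name is illustrative):

```sql
SELECT /*+ gather_plan_statistics */ COUNT(*) FROM emp;

-- NULL, NULL means "the last cursor executed in this session"
SELECT * FROM table(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```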

Set a parameter for running an individual statement

SELECT /*+ OPT_PARAM('star_transformation'…

_optimizer_ignore_hints=TRUE, useful for testing

Optimizer Statistics

Pending Statistics

                Separates stats collection from stats publishing

                Capture stats as pending

                Test using pending stats

                Publish once verified

Controlled by parameter – optimizer_use_pending_statistics

                Determines if the optimizer will use pending stats

Set to FALSE by default in 11g

                Use dbms_stats.set_*_stats

                Very useful for min max values

Setting Statistics

Dbms_stats.datearray(to_date('01/01/1999', 'MM/DD/YYYY'), sysdate)

Dbms_stats.set_column_stats('scott', 'emp', 'hiredate', srec=>sr);
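The two calls above need dbms_stats.prepare_column_values in between; a sketch of the full sequence (schema, table, and dates are illustrative):

```sql
DECLARE
  sr       dbms_stats.statrec;
  datevals dbms_stats.datearray;
BEGIN
  -- low and high values for the column
  datevals := dbms_stats.datearray(TO_DATE('01/01/1999', 'MM/DD/YYYY'), SYSDATE);
  dbms_stats.prepare_column_values(sr, datevals);
  dbms_stats.set_column_stats('SCOTT', 'EMP', 'HIREDATE', srec => sr);
END;
/
```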

 

 

Gathering incremental stats in 11g uses a “synopsis”. Make sure to clear all the stats and use one pass to create the synopsis.

LISTAGG – Analytic Query

FIRST_VALUE, NTH_VALUE, LAST_VALUE
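A LISTAGG example, against a SCOTT.EMP-style table:

```sql
SELECT deptno,
       LISTAGG(ename, ', ') WITHIN GROUP (ORDER BY ename) AS emps
  FROM emp
 GROUP BY deptno;
```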

External Tables

External Table Preprocessor

Stop using SQL*Loader – use External Tables – parallel loads with a degree of parallelism

                The use of a “preprocessor” allows for uncompressing zip files

                External tables allow us to filter the data before loading the data.
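A sketch of an external table that uncompresses its source on the fly (11.2+; the data_dir and exec_dir directory objects, columns, and file name are illustrative):

```sql
CREATE TABLE emp_ext (
  empno NUMBER,
  ename VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR exec_dir:'zcat'   -- uncompress before loading
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.csv.gz')
)
PARALLEL 4;
```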

 

sqlplus -prelim / as sysdba gives access to in-memory structures with no SQL access; no latches or SQL allowed.

Aren’t Free

AWR captures top 5

dbms_workload_repository.modify_snapshot_settings (topnsql=>200)

dbms_workload_repository.add_colored_sql('sql_id') marks the SQL so that it is always captured

 

Reporting

AWR SQL report

AWR Diff Report

AWR RAC report

ASH report any time period, instance or filtered, filtering at session level is like SQL trace

                Use ADDM report!

Check cardinality using SQL Monitor; it is key to working out why the execution plan may not be optimal.

Six Easy Pieces: Essentials of Database Tuning for Packaged Applications

December 7th at 14:25, Track 1

Six Easy Pieces: Essentials of Database Tuning for Packaged Applications

Packaged Applications such as E-Business Suite and Demantra may require significant hardware and performance tuning resources. Large investments in hardware will benefit from periodic tuning of the application and underlying database. Tuning the database for these packaged applications can utilize similar techniques; however, the hardware resources required may be vastly different. We are faced with the constraint that we can’t tune the sql in these packaged applications, but we can make the database run faster and thereby “fix” the offending sql. In addition to the generally used techniques of reorg-ing tables or gathering better statistics, there are six more tuning techniques that can be used to help improve the performance of your packaged application.

This is a practical presentation for those packaged application users that need better performance but without the purist’s focus on tuning sql.


Meetings with Gustavo and Max at Oracle

The two Senior Development Managers I met this morning were Max and Gustavo. We talked about 12.1.3, 12.2, Fusion Applications and Fusion Middleware. This is my recollection of our discussion.

First, topics from 12.1.3:

OATM – Running OATM after the upgrade will remove fragmentation, since it effectively runs “ALTER TABLE … MOVE TABLESPACE” on all the tables in each tablespace. However, in the interest of minimizing downtime during the upgrade, TruTek recommends running OATM before the upgrade, and then running it again after the upgrade to remove any fragmentation, if present.

Parallel Concurrent Processing and Dead Connection Detection using 11g – with database versions before 11gR1, dead connection detection was dependent on TCP keepalive settings. Now, with 11gR1 and above, the dia0 database process eliminates dead connections within 10 seconds, sometimes almost immediately. Special setup of TCP parameters is no longer necessary to perform PCP failover.

Edition-Based Redefinition (EBR) was introduced in 11gR2. Online patching in 12.2 uses this database feature, with the introduction of “adop”, AD Online Patching, the new admin tool for patching that keeps a copy of the Apps tier for online patching.

TruTek suggested an enhancement to adpatch: to allow a different number of workers for different phases. For example, normal patch jobs run better with fewer workers, in part because of less contention for deferred jobs, while the ad parallel compile runs fastest using all available CPUs.

Topics from 12.2:

In 12.0 and 12.1 the application server is 10gAS and the configuration parameters are incorporated into AutoConfig, leaving out the AS Control management console. In 12.2 the application server is WebLogic and while some parameters, such as DBC configuration parameters for AOL/J are still managed by AutoConfig, the WebLogic Server (WLS) is managed through the WLS Console. This allows admins to configure domains, managed servers,  server clusters, machines, node managers, virtual hosts, work managers and security realms.

All the AOL/J components are essentially the same, including calls to FND_GLOBAL.APPS_INITIALIZE, which sets the user ID, function security, NLS and Multi-Org context. The call to the NLS setting is especially expensive, which is why the JDBC connection pool saves this session info and tries to reuse it if possible.

In 12.2 the OHS connects to WebLogic as a reverse proxy using the Apache plugin that supports the APG14 protocol.

12.2 supports EBR and needs the functionality of 11gR2 for EBR. Therefore, 12.2 will not be supported on 10gR2 or 11gR1. Steven Chan’s blog mentions that 11.2.0.3 is on the horizon.

Fusion Applications:

Get ready for Fusion by upgrading to 12.1.3

Introduce SOA to begin to replace existing non-Fusion compatible customizations

Co-Exist with Fusion to take advantage of new functionality in Fusion and use SOA interface with EBS.

ADF does not support flexfields, but OAF still supports flexfields. These pictures are from Nadia’s presentation at Open World. You can launch ADF pages from OAF.

ADF objects can be embedded in OAF pages.


Begin using ADF extensions for EBS to help protect your investment, better enable co-existence with Fusion Applications and ensure your upgrade path to Fusion in the future.

 

Fusion Middleware:

Prepare for Fusion by implementing Identity Management and Access Manager.

Prepare for Fusion by implementing OBIEE.

Prepare for Fusion by implementing BI Publisher.

Dead Connection Detection works in 11.1.0.7

A couple of years ago I wrote a paper on how to adjust the TCP settings so that TNS would detect a dead connection. I also demonstrated how Oracle cleans up processes but may leave sessions connected to the database. In 11g, ALTER SYSTEM DISCONNECT SESSION is the command for cleaning up the session and the process at the same time.
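The syntax, with illustrative sid and serial# values:

```sql
SELECT sid, serial# FROM v$session WHERE username = 'SCOTT';

ALTER SYSTEM DISCONNECT SESSION '23,47' IMMEDIATE;
```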

In the white paper, “Improve Performance with Dead Connection Detection”, I showed with 10gR2 how blocking locks are not released when the user session was abnormally terminated. The same test in 11.1.0.7 shows the blocking locks released almost immediately after the first session was abnormally terminated.

I believe this is due to the new process, dia0, dia(zero), in 11g.

Monday at Oracle Open World

Awesome presentations, including some of my favorite presenters/authors: Tim Gorman, Cary Millsap, Debra Lilley, Dennis Horton and Deep Ram, Tanel Poder, Nadia Bendjedou and Cliff Godwin.

My favorite presentation was Cary Millsap’s on “Skew”. Skew is everywhere. Did you know the average number of legs per person is 1.99? Why is it not 2? Because there are no three-legged humans, but there are people with fewer than two legs.

The James Bond joke: if you don’t know what a Thermos is and someone tells you it keeps hot things hot and cold things cold, then why didn’t it work when I put my coffee and my popsicle in my Thermos? Understand your data.

Skew exists everywhere, especially in systems, characterized by CPU, memory, disks and networks. In 11.2.0.2 there are 1118 system calls, 6 dba calls and 2 pseudo calls. The system processes are dominated by system calls, with a few critical DBA calls.

This system basically makes calls to the system and to the database.

The main idea was to illustrate the common misconception that idle wait events can be ignored. The answer is, you have to drill down to the exact cause of the wait and not look at summary data that represents median values or averages.

Cary summarized his Method R process: Identify the Important Task, Measure the Response Time, Optimize Response, Repeat until satisfied.

Check out Simpson’s paradox, a baseball statistical conundrum. Bobby Bragan, the 1966 manager of the Atlanta Braves, was quoted as saying, “If you have one foot in the oven and one foot in the icebox, the percentages would say you’re fine”.

Drill down on each issue and remove the skew from each case by understanding the details of each wait.