10 Ways to Leave your Upgrade, Faster

This blog is based on a RadiOAUG broadcast on April 9, 2013 at Collaborate in Denver.

10 Ways to Speed up your Upgrade.

1) Fast IO – Solid State Drive for the database files

2) Don’t do an in-place upgrade. Upgrading onto new hardware allows you to measure the upgrade on the actual machine you’ll use for the production upgrade.

3) Lots of disk space to stage files. Stage the 11i files so you don’t have to clone them before the upgrade. Stage the R12.1.1 files and the post-R12.1.3 files, with all the customizations.

4) Try increasing parallel_max_servers to see if this allows greater parallelization.
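As a sketch of this step, assuming SYSDBA access; the value 64 is purely illustrative, not a recommendation from the broadcast:

```sql
-- Check the current setting
SHOW PARAMETER parallel_max_servers

-- Raise it for the upgrade window (dynamic; no restart needed).
-- 64 is an illustrative value: size it to your CPU count and test each pass.
ALTER SYSTEM SET parallel_max_servers = 64 SCOPE = BOTH;
```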

5) Don’t use a threaded CPU architecture for the database server.

6) If you do use a threaded CPU server, use Automatic Degree of Parallelism (Auto DOP) to increase the parallelism. The parallel_degree_policy parameter has three modes: MANUAL, LIMITED and AUTO. We tested MANUAL and AUTO, and had the best results with AUTO.

7) XDOLoader – entropy issue. Create a link to /dev/urandom called /dev/random. See MOS Note 1065393.1.
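The workaround can be rehearsed safely before touching the real /dev; a sketch, with a scratch directory standing in for /dev:

```shell
#!/bin/sh
# Rehearsal of the /dev/random workaround in a scratch directory.
# On the real system you would run the same mv and ln -s against /dev, as root.
DEV_DIR=$(mktemp -d)                        # stand-in for /dev
touch "$DEV_DIR/random" "$DEV_DIR/urandom"  # stand-ins for the real devices

mv "$DEV_DIR/random" "$DEV_DIR/random.orig" # keep the original so the change is reversible
ln -s "$DEV_DIR/urandom" "$DEV_DIR/random"  # point random at the non-blocking source

ls -l "$DEV_DIR/random"                     # shows: random -> .../urandom
```

On the real system, reverse the change after the upgrade by removing the link and restoring /dev/random.orig.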

8) Disable ARCHIVELOG Mode and Auditing
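A sketch of this step, run as SYSDBA during the downtime window only; re-enable both before go-live:

```sql
-- Disable archive logging for the upgrade window (requires a clean shutdown)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;

-- Disable auditing; takes effect after the next restart
ALTER SYSTEM SET audit_trail = NONE SCOPE = SPFILE;
```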

9) Use trace to diagnose performance issues:

oradebug setospid OSPID-number
oradebug unlimit
oradebug event 10046 trace name context forever, level 12
oradebug tracefile_name

Run tkprof on the tracefile and examine the tkprof output.
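To find the OS process id to pass to oradebug setospid, join V$SESSION to V$PROCESS; a generic sketch (the username filter is an illustrative assumption):

```sql
-- Map database sessions to their server process OS pids
SELECT s.sid, s.serial#, p.spid AS ospid, s.program
  FROM v$session s
  JOIN v$process p ON p.addr = s.paddr
 WHERE s.username = 'APPS';  -- hypothetical filter; adjust to the session you want
```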

10) There are lots of functional scripts and datafixes that can speed your upgrade: facpupg.sql and glrsgup2.sql are two examples that allow you to run depreciation in advance and prepare posted journals before the upgrade.

Read more about it in my book, The Little R12.1.3 Tuning and Troubleshooting Guide for Oracle E-Business Suite.



Three XLA invalid objects if you don’t apply the SLA patch 5233248 to 11i before the R12.1.3 upgrade

When upgrading to 12.1.1 and then 12.1.3, if you don’t apply patch 5233248 while applying your 11i patches to prepare your database objects for R12, then after the upgrade and all the R12.1.3 patches have been applied, three invalid objects remain that can only be resolved by applying this SLA patch to the 11i instance, in advance of the upgrade to R12.1.3. This patch is recommended if you are using “Upgrade by Request”. There are other methods to implement “Upgrade by Request”, but this is the method we recommend. If you don’t apply this patch and set the start and end dates with the SLA pre-upgrade program shown below, then only the default number of periods will be upgraded: six, unless you set the periods in the SLA pre-upgrade program. We highly recommend setting the starting period to the first period in your instance and the ending period to the last period in your instance.

First, enable the SLA pre-upgrade program:

[Figure: enable SLA pre-upgrade program]

If you plan to upgrade all your data during the upgrade, and I highly recommend upgrading all your data during the upgrade downtime window, you will need to set the start and end dates for all the GL periods in your instance. Set these dates using the “SLA pre-upgrade” concurrent program as follows:

[Figure: SLA pre-upgrade program with parameters]



If you decide to upgrade fewer than all the GL periods, the amount of time this saves has been about one to two hours during our customers’ production upgrades. The post-upgrade concurrent request can upgrade any remaining data in a few hours, if you choose not to upgrade all your GL periods during the original production downtime window. But a downtime window should be requested for this post-upgrade concurrent program because of the amount of archive logging it creates. If you have a downtime window to process the remaining GL periods, archive logging can be disabled. If you do not have a downtime window during the post-SLA processing, the archive log mount point can fill up and hang your database. For this reason we strongly recommend disabling archive logging during any post-processing of SLA data.

This XLA patch needs to be run with the 11i patches in order to avoid these three invalid objects after the 12.1.1 / 12.1.3 upgrade.


adpatch restart file ‘END OF COMPLETED JOBS’ line is missing

When applying the patch from the apps tier machine, in this case the R12.1.1 patch 6678700, it died in the middle of the patch and said:

Reading completed jobs from restart file (if any).

AutoPatch error:

unable to find job when restarting

Wrong number of fields in restart file when restarting at line number =

This may indicate that the ‘END OF COMPLETED JOBS’ line is missing.

Notice the patch lists the number of jobs from the beginning of the patch, phase A0, not the current job; in this case, 48177 jobs at phase A144, as shown in the figure at the bottom of this post.

[Figure: endofcompletedjobs]






Other OTN threads suggest restarting the patch from the beginning. This requires a restore of the database, and while this solution works, it’s not the fastest solution.

Edit the $APPL_TOP/admin/SID/restart/adpmain.rf9 file and remove the last incomplete line. In the figure below, the last line is incomplete compared to the line above it. Make a backup of the restart directory before you edit the file and remove the last line.

[Figure: adfmain_rf9]




Remove the line that says: Done job: prod= gmd un=GMD file=GMDSTNXT.sql ……

Then, restart the patch and use adctrl to set the jobs to restart.

Thanks, Mike

Missing Responsibilities after Upgrade to R12.1.3

From: How To Run The Workflow Directory Services Concurrent Program From The SQLplus Prompt [ID 1213304.1]

1. Sync responsibility role data into the Workflow table:


2. Synchronize WF LOCAL tables:

exec WF_LOCAL_SYNCH.BulkSynchronization('ALL')

The Synchronize WF LOCAL tables step above gave the following error:

ERROR at line 1:

ORA-12801: error signaled in parallel query server P000

ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

ORA-06512: at “APPS.WF_LOCAL_SYNCH”, line 2726

ORA-06512: at line 1

To resolve the duplicate keys error, run affurgol.sql:

sqlplus apps/apps $FND_TOP/patch/115/sql/affurgol.sql

Then run step three from MOS Note 1213304.1:

3. Workflow Directory Services User/Role Validation:

exec wf_maintenance.ValidateUserRoles(p_BatchSize => null, p_check_dangling => TRUE, p_check_missing_ura => TRUE, p_UpdateWho => FALSE);

Finally, run adstpall.sh to stop the apps tier and adstrtal.sh to start it.


Recent Issues with the R12.1.3 Upgrade

Issues with the R12.1.1 Upgrade

After applying the latest R12.1.1 preinstall=y patches, I found these new issues:

To fix the GMIG3MIB.sql issue:

Run sqlplus apps/apps  …/apps/apps_st/appl/gmi/12.0.0/patch/115/sql/GMIG3MIB.sql

Enter value for 2: GMI

Enter value for 1: 1000

Enter value for 3: 1

Enter value for 4: 1


I skipped okemd2fx.sql with option 8, and also skipped wmsrules.sql.

Download the EBS Software for R12.1.1 Install for Rapidwiz

First, go to edelivery.oracle.com and select the E-Business Suite and the appropriate OS: Linux, Solaris, or whatever applies. Then, select the set of disks to install. For instance, I’ve selected Oracle Solaris 64-bit. You also have the choice of whether you want the NLS packs for additional languages:

[Figure: EBS download 1]



Notice the next figure shows the first line is the Rapid Install RDBMS Disk 2. This is the last disk of the ORACLE_HOME software disks. You will need to download it, and rapidwiz will ask for these disks even though you have selected the rapidwiz option to “Upgrade Oracle Applications”. In my case, I already have an ORACLE_HOME, so I don’t need the ORACLE_HOME that gets installed with these disks. However, there is no option other than installing all the software. If you remove this ORACLE_HOME before rapidwiz finishes its post-install checks, the install will fail and the oraInventory will not be updated. Therefore, wait for rapidwiz to completely finish and then delete the ORACLE_HOME.

In the figure below, you don’t need to download the Rapid Install Databases disks. These are used only if you want to install the VISION database with data pre-populated.

[Figure: EBS download 3]






You will need the Tools disks, the APPL_TOP disks and the startCD disks:

[Figure: EBS download 4]





After you unzip these zip files, you should have the following directories:





If you want to download the software for Linux, notice the part numbers are different in the figure below:

[Figure: EBS download 5]




If you are using Linux, the set of disks to install is different:

[Figure: EBS download 6]

Performance tuning the R12.1.3 Upgrade

Question from EBS DBA:

We are up to our sixth pass and are on our seventh and perhaps last one. The past two times have taken longer than the previous ones (I’m talking like 8-12 hours each on the 12.1.1 and 12.1.3 patches). Doesn’t seem like a lot but when you only have 72 hours it’s a lot!

I think I may know why but wanted to run it by you. We avoided running the statistics steps in order to save some time up front. Thinking about it now, could this end up adding hours to the two main patches?

I think yes, it could be huge. We are also looking at the SAN as well but thought I would start here.

Answer from Mike:

Yes, statistics can be very helpful. You should gather them up front with adstats.sql.

Second, take a look at V$SYSTEM_EVENT; it will tell you what your wait events are. The counters are cumulative since instance startup, so restart your database before you start the upgrade and the view will start fresh.
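A generic query (not from the original post) that lists the top waits from V$SYSTEM_EVENT:

```sql
-- Top 10 non-idle wait events since instance startup
SELECT *
  FROM (SELECT event, total_waits, time_waited_micro / 1e6 AS seconds_waited
          FROM v$system_event
         WHERE wait_class <> 'Idle'
         ORDER BY time_waited_micro DESC)
 WHERE ROWNUM <= 10;
```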

Your SAN is probably the biggest problem. Get everyone else off the SAN during the upgrade; you are sharing bandwidth. Also, ask your SAs if they can increase the stripe width for the upgrade. If you can, stripe across 8-12 disks during the upgrade and then reduce that afterwards. The reason is concurrency: when you run the upgrade with the stripe going across 12 disks, you’ll light up the whole SAN with the upgrade. Also, spread your mount points across the SAN so that you are using more disks, rather than just one or two mount points with a very limited number of disks. If you can light up the whole SAN you’ll have better throughput. However, reduce this after the upgrade, because multiple users will be waiting for each other’s transactions to complete. Transactions will be very fast with lots of mount points and more disks, but everyone will be waiting. As long as it’s just the upgrade running, more disks in each stripe will improve your IOPS.

Also, check your network. You may be waiting on your network. If you have multiple DNS servers, test how long it takes to return an address from each DNS server. If you have a bad DNS address, SQL*Net will wait one minute before trying the next DNS address. For a 3 second SQL transaction, this can be devastating.
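A rough way to spot a slow resolver from the shell; a generic sketch (getent and the hostname list are assumptions; substitute the hosts your tiers actually resolve):

```shell
#!/bin/sh
# Time a name lookup; a dead or slow resolver shows up as a lookup taking
# orders of magnitude longer than the rest.
for host in localhost; do                    # substitute your db and apps tier hostnames
  start=$(date +%s%N)
  getent hosts "$host" > /dev/null
  end=$(date +%s%N)
  echo "$host: $(( (end - start) / 1000000 )) ms"
done
```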

Use fewer workers if you have poor IO and more workers if you have fast IO. For example, with just a few disks, use the same number of workers (3-4) as you have disks. With a large number of disks use 16 workers. However, this depends on the number of CPUs you have. More workers will help the compile stage. So, use about the same number of workers as you have CPUs, if you have lots of disks. If you have only a few disks, more CPUs and more workers will make the problem worse.

Obsolete Columns after the R12.1.3 Upgrade – alter table … drop unused columns

Hi Mike,

When a column is marked unused, it is considered for all effective purposes as dropped; so much so that you could follow up that command by adding a new column to the table with the same column name. The action is irreversible: there is no command to mark it “used” again. The only action left to do is a space management operation to reclaim the space. So from this perspective, knowing which columns existed before and were marked unused is irrelevant. All one needs to know is that table x has some columns that were marked unused (essentially dropped, minus the space reclaim), and that these tables are the ones for which space can be reclaimed via the alter table … drop unused columns command.
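The lifecycle described above, as a generic sketch (some_table and old_col are illustrative names, not from this post):

```sql
-- Mark a column unused: a quick metadata-only operation, effectively a drop
ALTER TABLE some_table SET UNUSED (old_col);

-- Later, reclaim the space; this is the expensive part
ALTER TABLE some_table DROP UNUSED COLUMNS;

-- Tables that still have unused columns awaiting space reclaim
SELECT * FROM dba_unused_col_tabs;
```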

If you need to see which columns were marked unused, you can try one of the following approaches:

1 — View the 12.1.3 data model comparison report from MOS Doc. 1290886.1

2 — Try something like this in a test environment, which I found poking around on the web: a view on dictionary objects that supposedly does the column lookup even for unused columns (I have not tried it): http://stackoverflow.com/questions/2949226/how-can-you-tell-which-columns-are-unused-in-all-tab-cols


Stopping and Starting adpatch Causes ‘Unable to find job when reading existing jobs’

I have comments turned off on this blog, but please email me at mike@trutek.com, if you have questions or comments.

Sometimes stopping and restarting a patch that seems to be hanging with adctrl will cause the patch to finish. However, twice now this process has scared the living bejebers out of me. In the example below, job GMIG3MIB.sql was running for a very long time, so I stopped the patch with adctrl and restarted the patch, using adpatch.






After restarting the patch, it always scares me, because the first thing it says is that it’s starting over; then, after reading the completed jobs from the restart file, it determines the correct number of jobs remaining:

[Figure: patch issue 1]





Then, I got a message that says it’s “Unable to find job when reading existing jobs”. Actually, the job GMIG3MIB.sql had already finished, so I tried setting the status with adctrl to quit and failed, and then option 8, skip and restart on the next job.








So now, since the status was Skip & restart, I thought the jobs would be skipped, but I still kept getting the “Unable to find job when reading existing jobs” error from above.







Boy did I think I was in serious ____, but with a little research on MOS, I found: Inadvertently Closing a Session of Adpatch Causes ‘Unable to find job ‘ Error [ID 257505.1]

This MOS note says, to resolve this issue:

1. Determine the last job that was supposed to run. (zxexemptjur.sql)

2. Grep all the adworker log files to confirm that the last job has been completed successfully by one of the adworkers. (GMIG3MIB.sql)

3. Update fnd_install_processes with control code and status both = ‘W’ for the worker that this driver is assigned.

4. Restart adpatch.

This worked and saved me from restoring the database files, restarting the R12.1.1 MP 6678700 with its 125402 jobs, and running the patch for another 16 hours. Whew.

update applsys.fnd_install_processes set status='W' where worker_id=1;

update applsys.fnd_install_processes set status='W' where worker_id=2;

update applsys.fnd_install_processes set control_code='W' where worker_id=1;

update applsys.fnd_install_processes set control_code='W' where worker_id=2;

To fix the GMIG3MIB.sql issue:

Run sqlplus apps/apps  …/apps/apps_st/appl/gmi/12.0.0/patch/115/sql/GMIG3MIB.sql

Enter value for 2: GMI

Enter value for 1: 1000

Enter value for 3: 1

Enter value for 4: 1

I skipped the okemd2fx.sql, with option 8, and skipped the wmsrules.sql.

Missing R12.1.1 preinstall patches after starting the R12.1.1 Upgrade

I started the most recent pass of the R12.1.1 upgrade and then realized I had forgotten some patches. Some of the patches had not been downloaded from MOS, because I tried to download too many at a time. I think there is a limit of about 100 patches per download, but I don’t know the exact number. Therefore, I downloaded only about 100 of the 120 patches I intended to.

I started the upgrade using the u6678700.drv, in $AU_TOP/patch/115/driver, and then I double-checked my patch directory. I was surprised and worried when I realized I did not have all the patches. I was missing some very important patches, included in an earlier post and summarized below, including the R12.1 CUP1 patch 7303029 and others.

The list from earlier for the R12.1 preinstall=y patches:

7303029, 7648869, 8230656, 8351855, 8429275, 8495719, 8509517, 8517880, 8615142, 8712047, 8731432, 8752951, 8764069, 8781600, 8850587, 8865466, 8871012, 8942413, 8967918, 9062910, 9082021, 9202645, 9257954, 9290222, 9491856, 9504903, 9560244, 9586498, 9726737, 9799876, 9868229, 9903933, 9918101, 9947835, 10011700, 10029457, 10041471, 10096115, 10096191, 10144929, 10163624, 10163753, 10170555, 10198811, 10221534, 10235226, 10275581, 10358280, 10359715, 10393730, 10420245, 11071399, 11653739, 11767692, 11847569, 11928146, 12344218, 12347791, 12372035, 12382774, 12387835, 12422840, 12553436, 12578648, 12648752, 12651338, 12682665, 12877002, 12877611, 12912302, 12930626, 12944782, 12990345, 12992125, 13016846, 13019385, 13040251, 13243594, 13248921, 13335019, 13356426, 13390676, 13397299, 13403786, 13451293, 13509912, 13544907, 13545598, 13557244, 13577549, 13598940, 13684949, 13695167, 13774403, 13793279, 13799611, 13820304, 13833174, 13837821, 13856969, 13864126, 13872862, 13917170, 13931180, 13978746, 14026963
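One way to catch missing downloads earlier is to compare the expected patch list against the stage directory. A hypothetical sketch (the stage directory, the expected.txt file, and the demo patch file names are all illustrative):

```shell
#!/bin/sh
# Print every expected patch number that has no matching zip in the stage area.
STAGE_DIR=$(mktemp -d)                       # stand-in for your patch stage directory
PATCH_LIST="$STAGE_DIR/expected.txt"         # one patch number per line

printf '7303029\n7648869\n' > "$PATCH_LIST"  # demo list; use the full list above
touch "$STAGE_DIR/p7303029_R12_GENERIC.zip"  # demo download; 7648869 is "missing"

while read -r p; do
  ls "$STAGE_DIR"/p"$p"_* >/dev/null 2>&1 || echo "missing: $p"
done < "$PATCH_LIST"                         # prints: missing: 7648869
```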

I got a little excited when I realized I had jeopardized all my earlier work by not double-checking that the patches had been accurately uploaded to the server. However, I know how to apply a patch in the middle of a maintenance pack: make a copy of the $APPL_TOP/admin/SID/restart directory and a copy of the FND_INSTALL_PROCESSES table (create table applsys.fnd_install_processes_backup as select * from applsys.fnd_install_processes).
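A sketch of those backup steps; the rehearsal below defaults to a scratch directory so it can be dry-run, but on a real system APPL_TOP and SID come from the applications environment file:

```shell
#!/bin/sh
# Rehearsal of backing up the adpatch restart directory before stopping the patch.
# A scratch directory stands in for the real APPL_TOP here.
APPL_TOP=$(mktemp -d)
SID=DEMO
mkdir -p "$APPL_TOP/admin/$SID/restart"      # already exists on a real system

# The actual backup step: preserve permissions and timestamps
cp -rp "$APPL_TOP/admin/$SID/restart" "$APPL_TOP/admin/$SID/restart.bak"

# The table backup is done in SQL*Plus as apps:
#   create table applsys.fnd_install_processes_backup
#     as select * from applsys.fnd_install_processes;
```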

After making a backup of the table and directory, I stopped the patch using adctrl and applied the missing R12.1.1 preinstall patches. After copying the restart directory and fnd_install_processes table back from the backups, I restarted the 6678700 R12.1.1 Maintenance Pack. To my surprise, it never asked me if I wanted to continue with the previous patch, and started the 6678700 driver from the beginning. I was mortified. I was sure the patch would fail, as this had happened to me before, with failure as the only option and recovery of the database necessary.

However, to my surprise, the patch worked just fine. After further investigation, I found the patch had previously reached only phase A25 when I stopped it to apply the missing patches. These phases contain only odf and xdf files. These object descriptor files and XML descriptor files prepare the instance and can safely be re-run. The odf and xdf files continue through phase A35. Therefore, any failure before the end of phase A35 can be restarted without using the copies of the restart directory and the backup of the FND_INSTALL_PROCESSES table.