lab-time: upgrading grid infrastructure (gi) from 12.1 to 18c – the final version


In an earlier blogpost, I was playing around in an unsupported way to upgrade my lab Grid Infrastructure from 12.1 to 18c. The problem there was that the 18c software was not yet officially available for on-premises installations. Now it is! During my holidays I had a little time to play around with it, and this is how I upgraded my cluster.

Reading the documentation, it seems very easy (and it really is): unzip the software and run the installer. But I wouldn't write a blogpost if I didn't encounter something, would I?


Software staging

Create the new directories

And this has to be done on all the nodes.
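For reference, creating the new home could look like this; the paths and the grid:oinstall ownership are from my lab setup, so adjust them to your own standards:

```shell
# Run as root on every node of the cluster (paths are assumptions).
mkdir -p /u01/app/18.0.0/grid
chown grid:oinstall /u01/app/18.0.0/grid
```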

Unzipping the software has to be done as the owner of the Grid Infrastructure, on the first node only:
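A sketch of this staging step, assuming the zip was downloaded to /tmp (the filename matches the 18c on-premises download, but verify it against what you actually downloaded):

```shell
# As the grid user, on the first node only.
# The 18c image unzips directly into the new grid home.
cd /u01/app/18.0.0/grid
unzip -q /tmp/LINUX.X64_180000_grid_home.zip
```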



Perform some very basic quick checks to ensure the cluster is healthy.
Check your current release and version:

Determine your active version:
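Both checks can be done with crsctl; the -f flag also reports the patch level:

```shell
# Release version of the local GI installation:
crsctl query crs releaseversion
# Active version of the cluster, including the patch level:
crsctl query crs activeversion -f
```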

Remark: my active patch level is 3544584551. This is important, because the installer checks whether patch 21255373 is installed on your software. It's a full rolling patch which is applied using opatchauto. I did not have any issues with it in my environment, so I won't cover that here.

And a quick check if we can talk to the CRS:
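For example:

```shell
# Check the CRS stack on the local node:
crsctl check crs
# Or across all nodes at once:
crsctl check cluster -all
```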

And then I usually gather some evidence.

That way, I can always refer back to "what was the output again?".


As the Grid Infrastructure owner, run cluvfy in pre-install mode.
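The pre-upgrade check is run from the new (18c) home. The source and destination homes below are my lab paths, so adjust them to your environment:

```shell
# As the grid user, from the freshly staged 18c home:
/u01/app/18.0.0/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling \
  -src_crshome /u01/app/12.1.0/grid \
  -dest_crshome /u01/app/18.0.0/grid \
  -dest_version 18.0.0.0.0 -fixup -verbose
```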

The outcome in my case:

This looks good to me, so we're good to go.


As written earlier, doing the upgrade is simple: unzip the software and run the installer. When you have a response file, you can use it to do a silent upgrade; otherwise just use the GUI.

I have an X server available and a stable network, so this time I did it interactively:
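Starting the installer interactively is nothing more than this; the DISPLAY value is obviously specific to my setup:

```shell
# As the grid user on the first node:
export DISPLAY=my-workstation:0.0   # hypothetical X server address
cd /u01/app/18.0.0/grid
./gridSetup.sh
```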

Then the installer window pops up:

OUI reads my mind and selected the right radio button, in my case "Upgrade Oracle Grid Infrastructure".

You need to verify that all your nodes are listed. My 4 nodes are correctly listed, and I have SSH key equivalence already set up from my previous installation.

I’m fine with the default Oracle Base and my Software Location (GI home) matches the directory in which the software has been unzipped previously.

Let's be a little lazy and check if it works, so I entered my root password so Oracle can run the root and other configuration scripts for me.

I actually like this option for bigger clusters. You can define batches in which the root and configuration scripts will be executed. As you will see later in the wizard, the OUI gives you the choice to execute the scripts now or at a later moment in time. I can think of some use cases in which this comes in handy. So let's try it: I created 2 batches.

You won't escape it: the OUI will do some pre-checks as well.

There we go. After unpacking the 18c software on my first node, it thinks /u01 is too small. It actually is not, but this is my lab: I will remove the 12.1 software afterwards anyhow, and I have the necessary space available. So in this particular case it is safe to ignore. In a production environment I would not continue; it would be better to extend the /u01 filesystem. But again, it's a lab and I know it fits, so I could ignore this one safely.

Mandatory confirmation

And off we go. I usually save my response file for later use.

And the installer takes off. Of course I forgot to take the initial screenshot, but the next one is interesting.

How nice: the installer asks permission before using the root password.

This kicks off the root scripts.

And here it asks if you want to run the scripts on the second batch as well. I do, so "Execute now".

And finally my upgrade succeeded! Yay!

Post tasks

First things first: verify that all went well.
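The first thing I check is the active version, which should now report 18.0.0.0.0:

```shell
crsctl query crs activeversion -f
```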

That looks good to me.

The easiest way to see if everything is running again:
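A single resource-status overview does the trick:

```shell
# Tabular status of all cluster resources on all nodes:
crsctl stat res -t
```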

The next steps were to:

  • enable the GHCHKPT volume and its acfs filesystem.
  • enable and start rhp (rapid home provisioning).

In my case they were not enabled by default. You have a choice: either click around in the brand-new fancy asmca, or use the command line; it's up to you. In the asmca settings box you can enter the root password, which makes life a little easier.

Right-click the GHCHKPT volume and click "Enable on all nodes".

In my case it threw an error:

“CRS-2501: Resource ‘ora.DATA.GHCHKPT.advm’ is disabled”

now what …

OK… back to the CLI then, because I didn't find a quick way to do it in the asmca interface:

First, check the volumes.

Now we know the device name and we can enable it.
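As a sketch, assuming the volume lives in the +DATA diskgroup as in my lab:

```shell
# As the grid user: show the volume and its device name...
asmcmd volinfo -G DATA GHCHKPT
# ...then enable and start it cluster-wide:
srvctl enable volume -volume GHCHKPT -diskgroup DATA
srvctl start volume -volume GHCHKPT -diskgroup DATA
```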

When we now retry the same operation in the asmca GUI, it succeeds.

Then you can ask the interface to show the acfs mount command:

and it tells you exactly what to do

So basically … you should mount the filesystem yourself. That’s ok for me.

So back to the CLI.

*sigh* OK, OK… this error is simple: enable it and then start it.
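With srvctl that boils down to the following; the device name is whatever asmcmd volinfo reported, and the numeric suffix below is hypothetical:

```shell
# Enable and start the acfs filesystem resource on its ADVM device:
srvctl enable filesystem -device /dev/asm/ghchkpt-123
srvctl start filesystem -device /dev/asm/ghchkpt-123
```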

And then it works.

Then the rhp still needs to be done.

For rhp, you must use the CLI as the grid user.
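Enabling and starting the Rapid Home Provisioning server:

```shell
# As the grid user:
srvctl enable rhpserver
srvctl start rhpserver
srvctl status rhpserver
```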

After doing all this, everything is online.



The biggest gotchas during this upgrade were the advm and acfs volumes, which weren't enabled by default. Is this a problem? Not really. It is just something to take into account and to check/verify. It also depends on whether you want this or not.

Something else I noticed.

I did not document this here as such, but in order to perform the upgrade (coming from 12.1), you need 23.5 GB of usable free space in the diskgroup for the cluster, in my case the +DATA diskgroup. To free this up (on 12.1), I had moved the GIMR (MGMTDB) out of ASM and put it into an acfs filesystem:

So far so good. But why does the upgrade need 23 GB then? You would be surprised (or not)… it's the GIMR.

You see who's back?

GIMR is playing Houdini. But I had freed up my "old" GIMR location, hadn't I?

What did you think… I wonder if the same behaviour appears when you keep the GIMR in its original location (in ASM), but in case you played around with it: take this into account.


A common thing to forget: getting rid of the old 12.1 home. But I will cover that in another blogpost.


As always, questions, remarks? find me on twitter @vanpupi
