A warm welcome to Exadata SL6-2

At last year's Oracle OpenWorld, Uncle Larry announced the SPARC-based Exadata SL6-2, so this means we have to give the SPARC chips a warm welcome to the Exadata family.
During the conference I wrote two blog posts. You can find them here and here.

To recap, a little picture of the new one in the family:

Exadata SL6-2

Nowadays we're used to the big X in the Exadata names, which refers to the x86 infrastructure they run on. SL stands for "SPARC Linux". You should follow the Oracle guys on Twitter as well; you'll see that this product (Linux for SPARC) is growing very rapidly. One of the questions that pops into mind immediately: which endianness does it use? Well, Linux on SPARC is big endian, as the SPARC chip itself is big endian.
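
If you want to verify that on a Linux box yourself, a quick check (assuming lscpu from util-linux is available, as it is on most modern distributions):

lscpu | grep "Byte Order"

On an x86 machine this reports Little Endian; on the SL6-2 it should report Big Endian.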

In my blog posts I was eagerly looking forward to the spec sheet, and here it is! http://www.oracle.com/technetwork/database/exadata/exadata-sl6-ds-3547824.pdf

A shameless copy out of the datasheet:
“The Exadata SL6 Database Machine uses powerful database servers, each with two 32-core SPARC M7 processors and 256 GB of memory (expandable up to 1TB)”

According to Gurmeet Goindi's blog (@exadatapm), it comes at the same cost as the Intel-based variant. You can read his blog here: https://blogs.oracle.com/exadata/entry/exadata_sl6_a_new_era

Exadata SL6-2 hardware specifications

Look at that! Instead of 2 QDR ports, we now have 4, and the elastic configurations remain. Also remarkable is that the storage cells stay on Intel-based architecture.
This looks interesting as well (same as the X6-2 trusted partitions):

Exadata SL6-2 mgmt features

At this moment (or maybe I read over it) I can't yet see how virtualisation will be done, so if someone has info about this, I'd be happy to hear it. I've heard several rumours, but I'm eager to find out what it's going to be!

One question remains … when will I find a customer who buys one and lets me explore it to the bottom 🙂

As always, questions or remarks? Find me on Twitter: @vanpupi

The first performance-related impressions of the new ODA X6-2M

New toys are always fun! When Oracle announced their "small" ODAs in the X6-2 generation, we were excited to test them. We were not the only ones, so it took a while before we got one, but in the first week of January it was playtime. An ODA X6-2M was delivered to our demo room and testing could begin.

Normally I would start with a "how to install" post, but installation is actually very simple and very well documented. If you want me to blog about it as well, just let me know.

The nice thing about the Database Appliance is that in the X6-2 generation it is now possible to run single instances hosting Standard Edition. This is a good thing. One of the reasons to consider it is that step-in costs can be reduced: smaller companies get a database in a box that just works. Nice, isn't it?

So how does it perform?
Well … first things first: SLOB, the wonderful tool by Kevin Closson (you can find him at http://kevinclosson.net). SLOB stresses the storage so you can find out how your system behaves. It's always one of the first things I run on a new system.
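
For reference, a typical SLOB run boils down to two commands (a sketch; the tablespace name IOPS and the 64 workers are just example values):

# load the SLOB schemas into a tablespace called IOPS
./setup.sh IOPS 64
# stress the storage with 64 concurrent workers
./runit.sh 64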

Marco Mischke (@dbamarco) was also playing with the X6, and he discovered an important performance difference between running your database on ASM and on ACFS. It has been classified as a bug and "fixed" in the latest ODA image. Guess which version I installed on the ODA? Right, the latest one. So we got in touch, and the first SLOB test looked good. It reached far higher numbers, so the problem appeared to be fixed.

But looking a bit further, I wanted to test on ASM as well. You know what? I'll just give you the results you're looking for 🙂

OK, first: ACFS. Here we go.

ACFS IOPS Read

So with a limited set of workers we reach about 325,000 IOPS. Given that the system has 20 cores available, that comes down to 16,250 IOPS per core.
If we translate that into MB/s, we get this:

ACFS throughput MB

I left out the latencies here to make it a bit clearer, but it peaks at about 2.5 GB/s. Here are the latencies over the tests:

ACFS read latencies.

I put it into Excel as well:

max read latency		2587.22	us	2.58722	ms
max write latency		2094.74	us	2.09474	ms

These are the maximum latencies during the test, occurring mostly at the end. In my opinion, this is good.
If more details are needed, drop me a message and I will provide more information.

Let's move on to ASM: exactly the same database, parameters, etc. I love 12c! You can move datafiles online, so that's how it was done.
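
For those who haven't played with it yet, such an online move is a single statement; a sketch, with a made-up datafile path and ASM target:

sqlplus / as sysdba <<EOF
-- hypothetical source datafile; +DATA is the ASM disk group target
ALTER DATABASE MOVE DATAFILE '/u02/oradata/SLOB/slob01.dbf' TO '+DATA';
EOF
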
ASM, your results please.

Oops, what's that? 800,000 read IOPS! The write numbers, though, are only slightly better than on ACFS.

Then we move on to the throughput:

So ASM is faster than ACFS. I was expecting it to be a bit faster, but not this much.
For completeness, the latencies:

And then the figures:

max read latency		2508.65	us	2.50865	ms
max write latency		2893.87	us	2.89387	ms

This looks as expected. Good.

I talked to my team lead and performance tuning expert Geert De Paep about this behaviour. You could see the lights in his eyes; he wants to test it as well, so I'm looking forward to his blog post too. I can already tell you that by running the queries manually against the swingbench schema, Geert was able to see the same behaviour. So we still have to figure out what exactly happens when using ACFS. If it remains strange, we'll contact Oracle as well. We will see.

If you run Swingbench with the preconfigured runbooks, the first bottleneck you hit is the CPU. This is due to all the PL/SQL in Swingbench. Knowing that … the next tests will be Logical IO.

As always, questions or remarks? Find me on Twitter: @vanpupi

Oracle DB in the Azure cloud – Pt2

During the BI-in-the-cloud project, one of the aspects we had to test was the network. Here is how we figured out how the network performs and, most of all, whether it is stable.

One of the most important things in a cloud environment is the network. It connects devices to each other and makes communication between them possible. Sounds obvious, right?

Some of the tests we ran relied very heavily on the network (NFS, SMB, …), and in the beginning we didn't manage to get them stable. At some point you have an "I should find some time to do this" moment; this was one of them: find a quick and easy way to check whether the network stays OK. So I came up with the most basic network test there is: ping! Ping? Pong. Yes, a simple ping. I know firewalls can give lower priority to ping, but in this case they are configured well, so this was good to go.

The test consists of a tiny script which does 10 pings, plus some CLI magic to grep the times out and record them in a file; a sketch of it follows below. It's quick and dirty, and it would be a lot better to store the results in a database, but hey, we just needed an idea of whether the network is stable or not. The script went into the crontab, running every 5 minutes on each of the 3 servers. This generates data, and I harvested it after a couple of days. I would like to mention (oh oh, comment storm coming up) that regarding the network in this Microsoft Azure subscription, Windows and Linux servers perform the same. Prerequisite is that you configure them well, and we did 🙂
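
The original script was nothing fancy; a minimal sketch of something equivalent (target host, log location and paths are made up for the example):

#!/bin/bash
# quick-and-dirty latency probe: 10 pings, keep only the round-trip times
TARGET=server2.example.local          # hypothetical target host
LOG=/tmp/pingtest_$(hostname).log     # hypothetical output file

echo "=== $(date '+%Y-%m-%d %H:%M:%S') ===" >> "$LOG"
ping -c 10 "$TARGET" | grep -o 'time=[0-9.]*' >> "$LOG"

And the crontab entry that runs it every 5 minutes:

*/5 * * * * /home/azureuser/pingtest.sh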

The first test was done on 2 servers, one Linux and one Windows, each placed in a different availability set (AS).

PTest1svg

This is no Excel graph. I would like to thank my team lead Geert De Paep for letting me put my data into Pandora, a tool which turns database data into any kind of SVG graph you like. For those interested, I can share the Excel graph as well, but there were high peaks; to keep the detail I needed the exponential scale, and Pandora is the ideal choice for that.

It looks to me like, for every series of ping packets, the first one takes some time and then it gets pretty stable.

The second test was also done on 2 servers, one Linux and one Windows, this time placed in the same availability set (AS). But there's another difference. The network throughput we got on other machines was a bit disappointing. Hey Microsoft, can you do something about it? The answer was very easy: use the preview of accelerated networking. So that is what we did.

PTest2svg

Strange behaviour in the beginning, but as it is a preview, I assume something was still going on. Timings are a bit lower, which is good, and we see the same pattern: one "slower" ping and then good results. Although between 18h and 20h we do see some higher times daily; I should gather more data to see whether that is a recurring trend.

That brings us to the third and final test. The same setup as the second one, except that it runs between 2 Linux boxes. Azure, your results please!

PTest3svg

The graph looks different, but note the time span: the Windows boxes were shut down between Christmas and New Year. No no no, it's not because Windows crashed; they were simply shut down so the resources could be reused for other things.
But I do like the consistency. Still the same behaviour: one longer ping and then the rest lower but consistent.

As always, questions or remarks? Find me on Twitter: @vanpupi

Oracle DB in the Azure cloud – Pt1

A few months ago (around October) we were contacted with a simple question: can you run an Oracle database in the cloud, the Azure cloud? Well … it depends. The little detail was that the database is about 34 TB, there are a few other multi-TB databases, AND there are a lot of copies of them. And … the go-live deadline was … end of 2016. Well, we accepted the challenge.

The deadline was strict, which is also why I had less time to blog and why this Azure cloud series won't be completely chronological … but (and this is a spoiler alert) I'm keen to share what we ended up with.

This post will focus on how the database tests with SLOB were done. Credits to @kevinclosson for the SLOB tool and @flashdba for his SLOB testing harness. Combining the two provides a very quick way of running consistent tests, and we needed such a quick testing framework, as we were changing just about everything to see whether it impacted disk throughput/IOPS or not.

Why we chose these machines is for another post, but we opted for the DS15_v2 VM (details here). The description of the machine I borrowed from the Microsoft website: “Dv2-series, a follow-on to the original D-series, features a more powerful CPU. The Dv2-series CPU is about 35% faster than the D-series CPU. It is based on the latest generation 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, and with the Intel Turbo Boost Technology 2.0, can go up to 3.1 GHz. The Dv2-series has the same memory and disk configurations as the D-series.”
Looks good, right? And we can attach up to 40 TB to the machine, which makes it a candidate for the future database servers.
It gets better: this family of servers can also use Microsoft premium storage, which is basically SSD, and disk caching is possible if needed.
As the databases are on the big side, the only way to go was the P30 disks (more details about them here), so a per-disk limit of 5,000 IOPS and 200 MB/s. Should be OK as a first test.

The first test was done using iozone; those results will come in a different blog post, as I still need to run the second tests to cross-check them. But let's continue, though not before asking: if there are remarks, questions or suggestions for improvement, I'll be happy to test them.
The VM was created with 1 storage account, which was completely filled up with 35 premium storage SSDs. (At 5,000 IOPS and 200 MB/s per disk, that is a theoretical ceiling of 175,000 IOPS and 7,000 MB/s, although the VM itself imposes its own limits first.)
Those disks were presented to the virtual machine, added into one big volume group, and a striped XFS filesystem was created on a logical volume to host the SLOB database.
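
That layout boils down to something like this (a sketch with made-up device names, showing only 4 of the 35 disks):

# hypothetical device names; the real VM had 35 data disks, not 4
pvcreate /dev/sd{c,d,e,f}
vgcreate vgslob /dev/sd{c,d,e,f}
# stripe the logical volume across all disks in the volume group (1 MB stripe)
lvcreate -n lvslob -l 100%FREE -i 4 -I 1024 vgslob
mkfs.xfs /dev/vgslob/lvslob
mount /dev/vgslob/lvslob /u02/oradata
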
The database was created using cr_db.sql from the SLOB create-database kit, after enabling it for 4k redo logs. After finishing all the steps to make it a physical IO test, we were good to launch the testing harness. It ran for a while, and eventually our top load profile looked like this during all the tests:

AWR_example_cloud

I think that's OK? After that it was time to run slob2-analyze.sh to generate a CSV file. That CSV was loaded into Excel, and this was the result:

1SA40disks_cloud

First I split the write and read IOPS, but then I decided to use the total IOPS, as the graph follows the same trend. My understanding (please correct me if I'm wrong) is that around 30,000 IOPS of 8k database blocks is around 234 MB/s (30,000 × 8 KB = 240,000 KB/s, which is 234 MiB/s). These tests were done without disk caching.

Then we decided to do the whole test again, but this time, instead of using 1 storage account with a bunch of disks, we used a bunch of storage accounts with only one disk in each. The rest of the setup was exactly the same (a new VM of the same size, same volume group, same striping, …) and the database was created using the same scripts again. Here are the results:

40SA1Disk_cloud

I think it is remarkable that even in the cloud, the way you provision the disks to the machine really does matter. Take the 32-worker run, for example: with one storage account, remarkably less work was done.

More to come, of course. Feedback is welcome about what the next blog post should be. Let's make it interactive 🙂

As always, questions or remarks? Find me on Twitter: @vanpupi

OTN Appreciation Day: Dataguard

Thanks to Tim Hall for the idea of OTN Appreciation Day. The feature I like most in Oracle is a rather "old" one, but it can be extremely useful: Dataguard. Why Dataguard? I find it extremely easy to set up and maintain, and it can save you a lot of "trouble". Especially on big(ger) databases it brings the time to recover after a failure down from hours to seconds.

The concept is simple:

Dataguard configuration

(Image borrowed from the Oracle documentation.) It consists of a primary (usually the live) database, and all redo is replicated to a target/standby database, either in real time or, if needed, with delayed apply.

One of the nice things about it: however you mess it up, you get it up and running again every time, so it's virtually unbreakable. Is it? Maybe not, but even when it lags behind, it's fairly easy to bring your standby database up to date with incremental backups and get on with your daily tasks.

Nowadays I (and lots of my colleagues) use it a lot for hardware migrations. Almost everything can be done beforehand; at the moment of the big switch you just switch over the database, adapt the connection strings, and you're done. You can even test your migration easily by breaking the redo stream (or, in current versions, using a snapshot standby) and trying the applications on the new platform.
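
If the configuration is managed by the Data Guard broker, that big switch can be as simple as this (a sketch; the connect string and database names are made up):

# hypothetical connect string and database names
dgmgrl sys/welcome1@prim <<EOF
switchover to 'stby';
EOF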

One of the nicest things I ever used it for was an Exadata-to-Exadata migration from Germany to the Netherlands. That client decided to switch datacenters, and all the equipment had to move from Germany to the Netherlands with as little downtime as possible. Switching over 45 databases (not too big, only a couple of TB in total) took only minutes, and the end users weren't even aware that a complete DC move had been done.

All those nice features (Active Dataguard, snapshot standby, …) make our lives a bit easier. So thanks for this nice feature which makes my life a bit easier.

First POUG conference

I could write a very short blog post summarising this first POUG conference: AMAZING! But as this marvellous event deserves more, I'm going to write a little post about it as a way of saying thank you!

POUG: it's a word with a double meaning. Officially it stands for Poland Oracle User Group, but thanks to Kamil (@ora600pl) and his team it was turned into Pint with the Oracle User Group.
I first met Kamil at UKOUG Ireland this year, where he revealed his plan. Philippe Fierens (@pfierens) and I were enthusiastic immediately: count us in. Some weeks later … "Hi guys, we made it, we are expecting you". Great news!

While composing this blog post, I realised that I should take more pictures during conferences. Memo to self! Here we go.
We arrived rather late on Thursday, so unfortunately we missed the speakers' dinner. Anyhow … on Friday morning the conference started.

Kamil_opening_poug2016

Two Kamils opened the conference by "explaining" the rules, meaning that we had to adapt: water is meant for bacteria, so we should go for the beers. It's the first conference at which we, as speakers, were actually encouraged to drink beer! This looked like a good start.
A good number of people showed up; if I had to guess, around 150 people travelled to Warsaw for this nice event. In summary: a great start, with more great things to come.

The first session I attended was by Jim Czuprynski (@JimTheWhyGuy), titled "DBA, Heal Thyself: Five Diseases of IT Organizations and How to Cure Them". Such an interesting talk with lots of truth in it. Jim speaks with so much passion and enthusiasm; a session never to forget.

Heli Helskyaho from Finland (her name is easier to spell than to pronounce 😉) had an extremely interesting session about how and why you should use SQL Developer and showed us a huge amount of useful tips & tricks. Lots of things I didn't even know were possible with this product. If you ever have the chance to attend one of her sessions, please go! You won't be disappointed.

poug_beers_and_ivica
The next topic I chose was the session about parallelism by Ivica Arsov. I was very interested, as parallelism in 12c is a rather complex mechanism. Notice the nice selection of Polish beers we had the opportunity to choose from.

After lunch it was our turn; we talked about the OVM implementation on Exadata. It was nice to have such an interested audience, really a joy to speak for. Thank you, POUG and attendees.
After us came Kiran Tailor with a very interesting session about Exalytics. As we have a customer with Exalytics, it was EXTREMELY interesting to hear.

poug_only_one_drink
We closed the evening with really "only one drink" each. But it was an amazing party. It was called a party, but a far better name would be appreciation event. Check also this link from Robin.

After a long short night (mmm, you get the picture 🙂) a lot of people turned up for Neil Chandler's session. OK, it started at 10:50, but it was a full room, all attentive to Neil's talk on why the optimizer from time to time decides to go its own wrong way. Useful hints, tips and tricks. A presentation every DBA should have seen!

poug_multitier
Joze Senegacnik (@joc1954) took over, opening the CBO's black box, and then you understand why it can sometimes be more like Pandora's box instead. Fortunately the parallels to "normal life" weren't far away and were pretty well illustrated. No further comments on this one.

"You should use SQL" (or at least PL/SQL) were the words of a man small in size but great in every other respect. A marvellous speaker. Ladies and gentlemen: Martin Widlake! He gave an EX-CEL-LENT talk on how to use (PL/)SQL for loading data into a database. Everybody agreed on "row-by-row, slow-by-slow", and there were very good tips which can be implemented very easily and give you lots of advantages. Thanks Martin!

This block of text I'm borrowing from my friend Kiran's blog entry. I can't write it better, so here is his take on the afternoon:

Once we had consumed our lunch we had our final session of the day ‘#DBADEV,Bridging the gap between development and operation table’ This was a panel session with Sabine @oraesque, Martin @MDWidlake, Philippe @pfierens, Neil @ChandlerDBA, Piet @pdevisser and Erik @evrocs_nl. I think most people would be able to say something in this area, I am always fighting with developers :-). What a great way to end the conference.

I should actually continue copy/pasting about the lunches, dinners and breakfasts Kiran and I enjoyed together. It was just epic!

This conference was

  • extremely well organized
  • held at a fabulous location
  • stocked with plenty of excellent equipment, food, drinks, beers, …
  • full of fantastic attendees

It was so nice to meet such nice people in such a positive and stimulating environment.
Dear organizers, a special thank you for having us. It was the first edition, but it was FANTASTIC! Thank you so, so, so much.

poug_thank_you

My trip to Oracle Openworld 2016 #OOW16

It all started a couple of months ago. I submitted a presentation, and my friend Philippe Fierens submitted a panel session together with Adam Bolinski (Ora600.pl), but unfortunately I joined #teamRejected. Lots of cool and nice people had to join it as well, so it's not too bad.
Then great news came: my boss asked if I was interested in visiting Oracle Openworld. I didn't have to think twice; my answer was yes, very quickly.

The trip started early. Very early … on Saturday at 04:30 in the morning we met at the company. A little coach picked us up there, and after a 2-hour drive we arrived at Schiphol.

img_0865

A nice group of friendly and interesting people traveling together. This will be fun!

After a short check-in, baggage drop-off and the other security checks, we had breakfast, and after that, off we went!

img_0868
Immediately it became very clear what the main topic of this Oracle Openworld would be: CLOUD! 🙂

Personally I'm very sceptical about this new "hype", but the only way to know whether I'm right or wrong is to go in with an open mind and experience it.

Ten and a half hours of flying later, we landed safely in San Francisco. Then the necessary checks, and we took a cab to the hotel. We checked in, and then off we went to the Moscone Center for registration.

img_0874

It IS impressive. People had told me it was big, but when you stand there yourself, you know they're right. This was going to be cool.

After registration we went for a walk, doing something a little bit active in order not to suffer from the jet lag. Then I received a message from Philippe: he was already at Jillian's, so we went to have some drinks together and catch up. Always nice to see friends again! And friends do have good ideas; that night's dinner would be sushi. A very typical sushi bar, but so good! If I could find it again, I would recommend it!

The next day, Sunday: OK, there were already some sessions, but for my first time in San Francisco I really wanted to see the bridge. So we went biking with the Oracle Belgium people. After riding across the Golden Gate Bridge and taking the ferry back, it was time to attend some sessions.

I tried to create a varied schedule, mixing sessions with visits to the demo grounds. My personal feeling was that you really had to search hard for technical sessions, whereas after walking around the demo grounds for a moment you could talk to lots of technical people, and some of the questions I had were answered very quickly. It quickly became clear that this was the place to be.

Sunday evening: the glorious moment for every OOW virgin, Larry's keynote. To be honest, he let me down; after hearing so much about the keynotes, maybe my expectations were a bit high. Still, it is an impressive event. We closed the evening with some drinks and then … early to bed!

Monday: a day full of sessions/demo grounds/… but most of all trying to learn as much as possible about the newly announced Exadata, the Exadata SL6-2: a SPARC-based engineered system which I think Oracle is going to push pretty hard. Engineers I talked to said this machine is to be used in the Oracle cloud. I'm still trying to figure out how to easily confirm this.

On Tuesday it became very clear that Oracle is focusing on cloud … but … "there will be a coexistence of about 10 to 15 years". I like this, but in a way it feels like they aren't very confident in their own product; time will tell. Larry held his second keynote, and this was really what I had expected. If you're curious, you can see it here. That evening we had a blast at the Benelux party, so we went to bed early, literally 🙂

Wednesday, the last full day. I spent most of my time on the demo grounds, taking a picture of the famous America's Cup and attending an interesting Exadata technical deep dive and internals session. Congratulations also to Oren Nakdimon (@dboriented on Twitter); he gave a nice presentation about upgrading his PL/SQL code without any downtime on Standard Edition.

The last full day apparently also means the appreciation event. It was announced to be Billy Joel, but he cancelled and was replaced by Gwen Stefani and Sting. Personally I'm not really a Gwen Stefani fan … but Sting was fantastic, and he got the atmosphere going!

Thursday was a day of buying some presents for home and rushing to the airport.

A common topic throughout many presentations and talks at the geek theatre on the demo grounds was encryption. It's almost always named in one sentence with cloud, which they immediately link to SPARC. So my guess is SPARC will be pushed more and more, because of the on-chip encryption and the Software in Silicon features which can help with the In-Memory option of the Oracle database. This screams for some tests.

In summary, my first Oracle Openworld experience was fantastic. I advise everybody to experience it at least once: such nice, friendly people, and lots of knowledge transfer. Thanks, Exitas, for letting me do this!

Welcome Exadata SL6-2!

EDIT: a new entry about this Exadata SL6-2 launch can be found here!

During a session by Juan Loaiza with the lovely title "Oracle Exadata: what's new and what's coming", the following slide suddenly popped up.

img_2209

So it looks like this is really going to happen. No exact timings were given in the session, but I'm waiting for a system to play with.

Basically I think this will be useful for migrating current Solaris/SPARC users to Exadata. I'm a bit sceptical about these extreme performance numbers, but hey … once it's launched, I'll be happy to test it.

To be continued!

Database PSU patch July 2016: cannot find -lodbcinst

It's patching time again, so on one of our test systems we gave the July 2016 bundle a go. It's a simple system: just a plain 12.1.0.2 database with the April bundle applied. No Grid Infrastructure, no ASM, just a plain, simple database system.

The sequence is the same as always: unzipping, conflict check, nothing new. In our case no conflicts were found, so the next step was opatch apply.

All went well until, at a certain point, the make command failed with:

Make failed to invoke "/usr/bin/make -f ins_odbc.mk isqora 
   ORACLE_HOME=/u01/app/oracle/product/12.1.0.2"....'/usr/bin/ld: cannot find -lodbcinst
collect2: error: ld returned 1 exit status
make: *** [/u01/app/oracle/product/12.1.0.2/odbc/lib/libsqora.so.12.1] Error 1
The following make actions have failed :
Re-link fails on target "isqora".

This doesn't look good.
The good thing is that opatch creates a backup, and it's pretty easy to restore; it tells you exactly how to do it.

In the meantime there is already a MOS note: Make Failed To Invoke "/usr/bin/make -f ins_odbc.mk isqora" '/usr/bin/ld: cannot find -lodbcinst' for libsqora.so.12.1 While Applying Patch 23054246 (Doc ID 2163593.1)

To save you some reading time, the summary is simple: the error comes from the unixODBC package not being installed on the system.
After installing the x86_64 and i686 versions, opatch completed successfully.
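
On a yum-based system that boils down to something like:

# install both architectures of unixODBC, then re-run opatch apply
yum install -y unixODBC.x86_64 unixODBC.i686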

What's odd is that if we check the prerequisite packages for e.g. RHEL in the following MOS note: Requirements for Installing Oracle Database 12.1 on RHEL6 or OL6 64-bit (x86-64) (Doc ID 1529864.1), there is no indication that this package is mandatory.

For general information about the install notes, this MOS note is useful as well: Master Note of Linux OS Requirements for Database Server (Doc ID 851598.1)

I checked the 12c database prerequisite packages for several Linux versions, but no luck: no reference to unixODBC being mandatory now.

I decided to install a clean OEL version (an older OEL 6.7 image, the first one I had available) and then used the preinstall RPM from Oracle's public-yum repository.
It's stunning that the official RPM does not include the newly mandatory packages.

[root@oel67 ~]# rpm -aq |grep -i rdbms
oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64
[root@oel67 ~]# rpm -qa |grep -i unixodbc
[root@oel67 ~]#

So, in conclusion: if you are about to install a new system, don't forget to install the unixODBC binaries as well.

As always, questions or remarks? Find me on Twitter: @vanpupi

ACFS: it's all about permissions

It all started with the creation of a database on a Database Appliance, which failed with this error:

Validation of server pool succeeded.
Registering database with Oracle Restart
PRCR-1006 : Failed to add resource ora.demodb.db for demodb
PRCR-1071 : Failed to register or update resource ora.demodb.db
CRS-2566: User 'oracle' does not have sufficient permissions to operate on resource 'ora.redo.datastore.acfs', which is part of the dependency specification.
DBCA_PROGRESS : DBCA Operation failed.

One of the first questions: is this due to running on the ODA, or is it a general cluster issue?
It was easy to verify, as this customer had another ODA on which everything just works smoothly. So we started comparing the environments. One tiny little thing turned out to be different: the ACL.

On a working ODA:

[grid@ODA_A-1 ~]$ crsctl status resource ora.redo.datastore.acfs -p |grep ACL
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,user:oracle:r-x
[grid@ODA_A-1 ~]$

On this one:

[grid@ODA_B-1 ~]$ crsctl status resource ora.redo.datastore.acfs -p|grep ACL
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
[grid@ODA_B-1 ~]$ 

Sooo, there we have it.
The first instinct is to do a crsctl modify or a crsctl setperm.
Let's switch to a demo system, as this is ACFS related and not ODA specific.

So it's playtime!
On the demo environment we have an ACFS volume:

[root@demo-rac12-01 ~]# crsctl status resource ora.dg_advm.advmvol01.acfs
NAME=ora.dg_advm.advmvol01.acfs
TYPE=ora.acfs.type
TARGET=ONLINE , ONLINE , ONLINE
STATE=ONLINE on demo-rac12-01, ONLINE on demo-rac12-02, ONLINE on demo-rac12-03

[root@demo-rac12-01 ~]#

If we verify the ACL, we see the same configuration as on the problematic ODA:

[root@demo-rac12-01 ~]# crsctl status resource ora.dg_advm.advmvol01.acfs -p |grep ACL
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
[root@demo-rac12-01 ~]#

Yes, I know, I did this as root; you could get this information as grid as well.
So let's do the instinctive thing and try to modify the resource:

[root@demo-rac12-01 ~]# crsctl modify resource ora.dg_advm.advmvol01.acfs -attr "ACL='owner:root:rwx,pgrp:root:r-x,other::r--,user:oracle:r-x'"
CRS-4995:  The command 'Modify  resource' is invalid in crsctl. Use srvctl for this command.
[root@demo-rac12-01 ~]#

And now we have to be careful with googling. If you google this error, you will find several pages suggesting the -unsupported flag, but there is no reason to do so 🙂
By the way, the same error is thrown at you if you try crsctl setperm.

Let's assume the cluster is right (it usually is); then a srvctl modify must exist, and indeed it does!

[root@demo-rac12-01 ~]# srvctl modify filesystem -h

Modifies the configuration for the file system.

Usage: srvctl modify filesystem -device <volume_device> [-user {[/+ | /-]<user> | <user_list>}] [-path <mountpoint_path>] [-node <node_list> | -serverpool <serverpool_list>] [-fsoptions <options>] [-description <description>] [-autostart {ALWAYS|NEVER|RESTORE}] [-force]
-device <volume_device> Volume device path
-user <user>|<user_list> Add (/+) or remove (/-) a single user, or replace the entire set of users (with a comma-separated list) authorized to mount and unmount the file system
-path <mountpoint_path> Mountpoint path
-node <node_list> Comma separated node names
-serverpool <serverpool_list> Comma separated list of server pool names
-fsoptions <fs_options> Comma separated list of file system mount options
-description <description> File system description
-autostart {ALWAYS|NEVER|RESTORE} File system autostart policy
-force Force modification (ignore dependencies)
-help Print usage
[root@demo-rac12-01 ~]#

So it seems we need to find out which device we’re using. This is simple:

[root@demo-rac12-01 ~]# crsctl status resource ora.dg_advm.advmvol01.acfs -p |grep VOLUME_DEVICE
CANONICAL_VOLUME_DEVICE=/dev/asm/advmvol01-438
VOLUME_DEVICE=/dev/asm/advmvol01-438
[root@demo-rac12-01 ~]#

There we have it. Now it's just syntax. Remember the difference in the ACLs: we need to add user:oracle:r-x. Sometimes we're lucky, and it's not too hard.

[root@demo-rac12-01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl status resource ora.dg_advm.advmvol01.acfs -p |grep -i acl
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
[root@demo-rac12-01 ~]# /u01/app/12.1.0.2/grid/bin/srvctl modify filesystem -device /dev/asm/advmvol01-438 -user /+oracle
[root@demo-rac12-01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl status resource ora.dg_advm.advmvol01.acfs -p |grep -i acl
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,user:oracle:r-x
[root@demo-rac12-01 ~]# 
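
With the ACL extended, the oracle user can now operate on the resource. For example, starting the filesystem as oracle (using this demo's device path) should now simply work:

srvctl start filesystem -device /dev/asm/advmvol01-438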

Removing it isn't too hard either:


[root@demo-rac12-01 ~]# /u01/app/12.1.0.2/grid/bin/srvctl modify filesystem -device /dev/asm/advmvol01-438 -user /-oracle
[root@demo-rac12-01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl status resource ora.dg_advm.advmvol01.acfs -p |grep -i acl
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
[root@demo-rac12-01 ~]#

As always, questions or remarks? Find me on Twitter: @vanpupi