Category: Oracle

A warm welcome to Exadata SL6-2


At last year's Oracle OpenWorld, uncle Larry announced the SPARC-based Exadata SL6-2, so it's time to give the SPARC chips a warm welcome to the Exadata family.
During the conference I wrote two blog posts. You can find them here and here.

To recap, a little picture of the new one in the family:

Exadata SL6-2

Nowadays we're used to the big X in the Exadata names, which refers to the x86 infrastructure they run on. SL stands for "SPARC Linux". If you follow the Oracle folks on Twitter, you'll see that this product (Linux for SPARC) is growing very rapidly. One of the questions that pops into mind immediately: which endianness does it use? Well, Linux on SPARC is big endian, as the SPARC chip itself is big endian.
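If you ever want to verify this on a running box, a quick check is possible from Python itself (this is just a generic sanity check of mine, nothing Exadata-specific):

import sys
import struct

# Python reports the byte order of the platform it is running on.
print(sys.byteorder)          # 'big' on Linux for SPARC, 'little' on x86

# The same fact, shown by packing a 32-bit integer in native byte order.
print(struct.pack('=I', 1))   # b'\x00\x00\x00\x01' on big endian, b'\x01\x00\x00\x00' on little endian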

So in my blog posts I was eagerly looking forward to the spec sheet, and here it is! http://www.oracle.com/technetwork/database/exadata/exadata-sl6-ds-3547824.pdf

A shameless copy out of the datasheet:
“The Exadata SL6 Database Machine uses powerful database servers, each with two 32-core SPARC M7 processors and 256 GB of memory (expandable up to 1TB)”

According to Gurmeet Goindi's blog (@exadatapm), it comes at the same cost as the Intel-based variant. You can read his blog here: https://blogs.oracle.com/exadata/entry/exadata_sl6_a_new_era

Exadata SL6-2 hardware specifications

Look what's there! Instead of 2 QDR ports, we now have 4. The elastic configurations remain as well. Also remarkable: the storage cells remain on an Intel-based architecture.
This looks interesting as well (same as the X6-2 trusted partitions):

Exadata SL6-2 mgmt features

 

At this moment (unless I read over it) I can't yet see how virtualisation will be done, so if someone has info about this, I'll be happy to hear it. I've heard several rumours, but I'm eager to find out what it's going to be!

One question remains … when will I find a customer who buys one and lets me explore it to the bottom 🙂

 

As always, questions, remarks? find me on twitter @vanpupi

The first performance related impressions of the new ODA X6-2M


New toys are always fun! When Oracle announced their "small" ODAs in the X6-2 generation, we were excited to test them. We were not the only ones, so it took a while before we got one, but in the first week of January it was playtime. An ODA X6-2M was delivered to our demo room and testing could begin.

Normally I would start a blog post with "how to install it". Actually, this is very simple and very well documented; if you want me to blog about it as well, just let me know.

The nice thing about the Database Appliance in the X6-2 generation is that it is now possible to have single-instance machines which can host Standard Edition. This is a good thing: one of the reasons to consider it is that the entry cost can be reduced. For smaller companies, you get a database in a box which just works. Nice, isn't it?

So how does it perform?
Well … first things first: SLOB, the wonderful tool by Kevin Closson (you can find him at http://kevinclosson.net). SLOB stresses the storage so that you can find out how your system behaves. It is always one of the first things I run on a new system.

Marco Mischke (@dbamarco) was also playing with the X6 and he discovered an important performance difference between running your database on ASM and on ACFS. It has been classified as a bug and "fixed" in the latest ODA image. Guess which version I installed on the ODA? Right, the latest one. So we got in touch, and the first SLOB test was good: it reached far higher numbers, so the problem looked to be fixed.

But looking a bit further, I wanted to test on ASM as well.
You know what? I'll just give you the results you're looking for right away 🙂

OK, first: ACFS. Here we go.

ACFS IOPS Read

So with a limited set of workers we reach up to about 325,000 IOPS. Given that the system has 20 cores available, that comes down to 16,250 IOPS per core.
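As a quick back-of-the-envelope check (assuming the default 8 KiB database block size, which is my assumption here):

# Rough sanity check of the ACFS read peak, 8 KiB blocks assumed.
iops = 325000
cores = 20
block_bytes = 8 * 1024

print(iops / cores)                    # 16250.0 IOPS per core
print(iops * block_bytes / 1024.0**3)  # ~2.48 GiB/s, in line with the ~2.5 GB/s peak below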
If we translate that into MB/s, we get this:

ACFS throughput MB

I left out the latencies here to keep it a bit clearer, but it peaks at about 2.5 GB/s. So here are the latencies over the tests:

ACFS read latencies.

I put it into Excel as well:

max read latency		2587.22	us	2.58722	ms
max write latency		2094.74	us	2.09474	ms

These are the maximum latencies during the test, so mainly towards the end. In my opinion, this is good.
If more details are needed, drop me a message and I will provide more information.

Let's move on to ASM: exactly the same database, parameters, etc. I love 12c! You can move the datafiles online, so that's how it was done.
ASM, your results please.

Oops, what's that? 800,000 read IOPS! And the write figures are only slightly better.

Then we go to the throughput:

So ASM is faster than ACFS. I was expecting it to be a bit faster, but not this much.
For completeness, the latencies:

And then the figures:

max read latency		2508.65	us	2.50865	ms
max write latency		2893.87	us	2.89387	ms

This looks as expected. Good.

I talked to my team lead and performance tuning expert, Geert De Paep, about this behaviour. You could see the lights in his eyes; he wants to test it as well, so I'm looking forward to his blog post too. I can already tell you that, by running the queries manually on the Swingbench schema, Geert was also able to see this behaviour. So we should figure out what exactly happens when using ACFS, and if it still looks strange, we should contact Oracle. We will see.

If you run Swingbench with the preconfigured runbooks, the first bottleneck you hit is the CPU. This is due to all the PL/SQL in Swingbench. So, knowing that … the next tests will be logical I/O.

As always, questions, remarks? find me on twitter @vanpupi

 

Oracle DB in the Azure cloud – Pt1


A few months ago (around October) we were contacted with a simple question: can you run an Oracle database in the cloud, the Azure cloud? Well … it depends. The little detail was that the database is about 34 TB, there are a few other multi-TB databases AND there are a lot of copies of them. And … the go-live deadline is … end of 2016. Well, we accepted the challenge.

The deadline was strict, which is also the reason I had less time to blog, and this Azure cloud series won't be completely chronological … but (and this is a spoiler alert) I'm keen to share what we ended up with.

This post will focus on how the database tests using SLOB were done. Credits to @kevinclosson for the SLOB tool and to @flashdba for his SLOB testing harness. Combining these two provides a very quick way of running consistent tests. We needed such a quick testing framework, as we were changing just about everything to see whether it impacted disk throughput / IOPS or not.

Why we chose this machine is for another post, but we opted for the DS15_v2 VM (details here). The description of the machine I borrowed from the Microsoft website: "Dv2-series, a follow-on to the original D-series, features a more powerful CPU. The Dv2-series CPU is about 35% faster than the D-series CPU. It is based on the latest generation 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, and with the Intel Turbo Boost Technology 2.0, can go up to 3.1 GHz. The Dv2-series has the same memory and disk configurations as the D-series."
Looks good, right? And we can attach up to 40 TB to the machine, which makes it a candidate for the future database servers.
It gets better: this family of servers can also use Microsoft Premium Storage, which is basically SSDs, and disk caching is possible if needed.
As the databases are a bit bigger, the only way to go was to use P30 disks (more details about them here), so a per-disk limit of 5,000 IOPS and 200 MB/s. Should be OK for a first test.

The first test was done using iozone. The results of that will be in a different blog post, as I still need to do a second run to cross-check them. But let's continue; not, however, before asking: if there are remarks, questions or suggestions for improvement, I'll be happy to test them.
The VM was created, one storage account was used, and that storage account was completely filled up with 35 premium storage SSDs.
Those disks were presented to the virtual machine, added into one big volume group, and a striped XFS filesystem was created on a logical volume, which hosts the SLOB database.
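To put those numbers in perspective, here is a quick, purely theoretical calculation of the aggregate ceiling of that disk set; the real limit will be lower, since the VM size imposes its own caps on uncached disk throughput:

# Theoretical aggregate limits of 35 Azure P30 disks (5,000 IOPS / 200 MB/s each).
disks = 35
iops_per_disk = 5000
mbps_per_disk = 200

print(disks * iops_per_disk)   # 175000 IOPS
print(disks * mbps_per_disk)   # 7000 MB/s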
The DB was created using cr_db.sql from the SLOB create-database kit, after enabling it for 4k redo logs. After finishing all the steps to make it a physical I/O test, we were good to launch the testing harness. It ran for a while, and our top load profile looked like this during all the tests:

AWR_example_cloud

I think that's OK? After that, it's time to run slob2-analyze.sh to generate a CSV file. That CSV was loaded into Excel and this was the result.

1SA40disks_cloud

At first I split the write and read IOPS, but then I decided to use the total IOPS, as the graph follows the same trend. My understanding (please correct me if I'm wrong) is that around 30,000 IOPS of an 8k database block is around 234 MB/s? These tests were done without disk caching.
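That back-of-the-envelope figure checks out, assuming 8 KiB blocks and binary megabytes:

# 30,000 IOPS of 8 KiB blocks expressed as MB/s.
iops = 30000
block_bytes = 8 * 1024

print(iops * block_bytes / 1024.0**2)   # 234.375 MB/s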

Then we decided to do the whole test again, but this time, instead of using one storage account with a bunch of disks, we used a bunch of storage accounts with only one disk in each. The rest of the setup was exactly the same (a new VM of the same size, the same volume group, the same striping, …) and the database was created using the same scripts again. Here are the results:

40SA1Disk_cloud

I think it is remarkable that, even in the cloud, the way you present the disks to the machine really does matter. Take, for example, the run with 32 workers: with one storage account, remarkably less work was done.

More to come of course. Feedback is welcome about what might be the next blogpost. Let’s make it interactive 🙂

As always, questions, remarks? find me on twitter @vanpupi

Documentation bug in OVM for Exadata


A while ago a customer gave me a heads-up about the "bug" concerning the default passwords for root and celladmin. Thinking a bit further, I wondered whether the "documentation bug" I found while adding a new OVM cluster on a virtualised Exadata has been solved. The official documentation can be found here.

documentation_screenshot

Then "Managing Oracle VM Domains on Oracle Exadata Database Machine" and then "Creating Oracle RAC VM Clusters" brings you to the point I want to warn you about.

All steps are correct, but the last one “Run all steps except for the Configure Cell Alerting step using the XML file for the new cluster. For most installations, the Configure Cell Alerting step is step 7. For example, to execute step 1, run the following command” might be a bit tricky. Why? I will show you.

When deploying the Exadata, if you list the steps you get this output:

$ ./install.sh -cf anonymous_customer.xml -l
 Initializing

1. Validate Configuration File
2. Create Virtual Machine
3. Create Users
4. Setup Cell Connectivity
5. Calibrate Cells
6. Create Cell Disks
7. Create Grid Disks
8. Configure Alerting
9. Install Cluster Software
10. Initialize Cluster Software
11. Install Database Software
12. Relink Database with RDS
13. Create ASM Diskgroups
14. Create Databases
15. Apply Security Fixes
16. Install Exachk
17. Create Installation Summary
18. Resecure Machine
$

But if you take the newly created XML for the new cluster:

$ ./install.sh -cf anonymous_customer_new_clu.xml -l
 Initializing

1. Validate Configuration File
2. Create Virtual Machine
3. Create Users
4. Setup Cell Connectivity
5. Calibrate Cells
6. Create Cell Disks
7. Create Grid Disks
8. Configure Alerting
9. Install Cluster Software
10. Initialize Cluster Software
11. Install Database Software
12. Relink Database with RDS
13. Create ASM Diskgroups
14. Create Databases
15. Apply Security Fixes
16. Install Exachk
17. Create Installation Summary
18. Resecure Machine
$

Do you spot the difference? I don’t.
I just want to say: if you create a new cluster, be careful with "Create Cell Disks". I should recheck the log files, but the last time I checked, it was performing a drop of the cell disks before recreating them. So you can imagine what would happen to your other virtual machines. If you have an Exadata on which I can try it, please let me know; I'm happy to check it out further 🙂

Exadata add a new vm


Today a customer highlighted a nice-to-know to me. When adding a new virtual machine to an Exadata OVM cluster, he experienced something odd. It had been tested on a "new installation", where it worked fine. The basic steps are:

  • Run over OEDA and add the cluster
  • Move the XML files to the dom0, in the same spot as the original one
  • Run install.sh with this config

As this is a good customer, he had followed the advice to change all the passwords. The bad thing is … while running install.sh, lots of errors were thrown on different components.
The most remarkable one, and also the first one thrown, was:

OCMD-02624: Error while executing command {0}.java.lang.reflect.InvocationTargetException

After digging around for a while, it turned out to be caused by the "non-default" passwords for root and celladmin.
After changing the root and celladmin passwords back to the well-known defaults, install.sh was happy and gave the expected success message.

Successfully completed execution of step Validate Configuration File [elapsed Time [Elapsed...

The IB switches suffer from this as well, but you only run into that when upgrading the IB software. So in order to patch them easily, just temporarily reset the passwords to the defaults and change them back afterwards.

Python … no not the snake – my very first script


Do you know the feeling? "I should do <fill in something cool here>". Well, I'd had that feeling for a while about learning Python. I knew you could do some cool things with it, but I never pushed myself to actually do it. Until now! OTN Appreciation Day! Thanks to Mr Oracle-Base, Tim Hall. Some time ago he launched the idea of OTN Appreciation Day and of course I added my entry as well. You can find it here.

On Tuesday 11 October 2016 my entry was scheduled at 08:30 CEST and, seeing all the other blog posts, I soon realised this would be a very nice bunch of information. I started copy-pasting the blogs and links, but … as soon as I started doing something else (work!!!) I missed some. That gave me the idea of creating a script. The idea was simple: log in to Twitter, fetch all tweets hashtagged #ThanksOTN and then filter out the retweets. Simple, huh? Then … how to do it? Mmm … let's take the challenge: I'll do it in Python.

The result is here (improvement needed, Christian! But that will come in time).
I'm happy to have an OmniOS (Solaris derivative) server at home where I have Python available. So let's go.

I broke it down into a few steps. In order to read a Twitter feed you need a Twitter application. To create one, surf to https://apps.twitter.com and, after logging in, click the "Create application" button.

twitterapp creation

 

I only left the callback URL blank; for this purpose we don't need it, I think. If we do, please let me know.

Then all is done. The next screen gives you an overview of the application you have just created, and in the "Keys and Access Tokens" tab you only have to click one more button: "Create my access token". So that's it folks, nothing more to be done on the Twitter side.
Just record the following fields:

  • Consumer Key (API Key)
  • Consumer Secret (API Secret)
  • Access Token
  • Access Token Secret

We need these in order to establish a connection to Twitter with Tweepy.

Then it's time to write some code! I assume you already have Python set up; if not, drop me a mail, comment or tweet and I'll help you out. I had never done any Python scripting before, so it took a bit of googling.
It turned out I'd need Tweepy for this task, and it was easy to install: pip install tweepy was all I needed to enter, and confidence was growing. If it's going to be as easy as this, I'll be fine!

First we need to import some things:

import tweepy
from tweepy import OAuthHandler
import json

Then (at this point) I only needed a main procedure. I call it "main"; maybe obvious, but OK 🙂 If you need more procedures, they go right after the import statements.

So the main I created looks like this. The comments serve as my commentary for the blog as well:

def main():

    # Variables that contain the user credentials to access the Twitter API
    access_token = "<fill in your own>"
    access_token_secret = "<fill in your own>"
    consumer_key = "<fill in your own>"
    consumer_secret = "<fill in your own>"

    # OAuth process, using the keys and tokens.
    # Here we create an auth object using the tweepy OAuthHandler; we pass it the consumer key and secret
    # and then set the access tokens on the auth object.
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)

    # Creation of the actual interface, using authentication.
    # Here we create the actual connection to Twitter and we call it api. It's just a name.
    api = tweepy.API(auth)

    # Search the tweets and display them after removing the RT @'s.
    # Some variables I picked:
    query = '#thanksOTN'
    max_tweets = 1000

    # Here I gather the tweets into a list by iterating over a cursor.
    searched_tweets = [status for status in tweepy.Cursor(api.search, q=query).items(max_tweets)]

    # This is just quick 'n dirty HTML I figured out in the meanwhile. A better way would be to
    # use templates and fill them in, but hey... this works :-)
    print('<html><body>')

    # Now I have a result set (searched_tweets) and I run over it in a for loop.
    for tweet in searched_tweets:
        # I'm filtering out the retweets in the following line.
        if "RT @" not in tweet.text:
            # The next line is commented out. I first added an extra if-clause to only list tweets
            # which contained "OTN Appreciation Day:", but it turned out that not everyone put
            # that in, so I commented it out.
            # if "OTN Appreciation Day:" in tweet.text:

            # Print the user who sent the tweet.
            print('<p>Twitter user:', tweet.user.screen_name, '<br />')

            # And eventually what was tweeted. This is also a bit quick 'n dirty: the emoji weren't
            # displayed correctly in Python 3.4, so this is a way to have them encoded as UTF-8.
            # There will be more efficient ways of doing this, so feel free to comment on it.
            tweet_text = str(tweet.text.encode("utf-8") if tweet.text else tweet.text).lstrip('b\'')

            # And finally print the tweet.
            print(tweet_text, '</p>')

    # And close the webpage.
    print('</body></html>')


# Finally, call main.
if __name__ == '__main__':
    main()

And finally, this script was scheduled in crontab every 30 minutes, with its output redirected to an HTML file.

I should still create a kind of tokeniser to manipulate the tweet_text in order to make hyperlinks from the links. But hey … that’s something for the future 🙂
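If you are curious what such a tokeniser could look like, here is a minimal sketch (just a quick idea of mine, not part of the script above): a regular expression that wraps everything starting with http(s):// in an anchor tag.

import re

# Hypothetical helper: wrap plain URLs in the tweet text in <a href="..."> tags.
URL_PATTERN = re.compile(r'(https?://\S+)')

def linkify(text):
    # Replace every URL in the text with an HTML hyperlink pointing to itself.
    return URL_PATTERN.sub(r'<a href="\1">\1</a>', text)

# Example: print(linkify('OTN Appreciation Day: https://example.com/post'))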

So this was my very, very, very first Python script. I think it's a fun language, and I'll be using it more and more.

Comments, advice,… are always welcome!

 

OTN Appreciation Day: Dataguard


Thanks, Tim Hall, for the idea of OTN Appreciation Day. The feature I like most in Oracle is a rather "old" one, but it can be extremely useful: Data Guard. Why Data Guard? I find it extremely easy to set up and maintain, and it can save you a lot of "trouble". Especially on big(ger) databases, it brings the recovery time in case of a failure down to seconds instead of hours.

The concept is simple:

Dataguard configuration

(image borrowed from the Oracle documentation) It consists of a primary (usually the live) database, and we replicate all the redo to a target / standby database, either in real time or, if needed, with delayed apply.

One of the nice things about it: however badly you mess it up, you can get it up and running again every time, so it's virtually unbreakable. Is it? Maybe not, but even if it lags behind, it's fairly easy to bring your standby database up to date with incremental backups and carry on with your daily tasks.

Nowadays I (and lots of my colleagues) use it a lot for hardware migrations. Almost everything can be done beforehand; at the moment of the big switch, you just switch over the database, adapt the connection strings, and you're done. You can even test your migration easily by breaking the redo stream (or, in current versions, using a snapshot standby) and testing the applications on the new platform.

One of the nicest things I ever used it for was an Exadata-to-Exadata migration from Germany to the Netherlands. That client decided to switch datacenters, and all the equipment had to be moved from Germany to the Netherlands with as little downtime as possible. Switching over 45 databases (not too big, only a couple of TB in total) took only minutes, and the end users weren't even aware that a complete DC move had been done.

All those nice features (Active Data Guard, snapshot standby, …) make our lives a bit easier. So thanks for this nice feature which makes my life a bit easier.

 

My trip to Oracle Openworld 2016 #OOW16


It all started a couple of months ago. I submitted a presentation, and my friend Philippe Fierens submitted a panel session together with Adam Bolinski (Ora600.pl), but unfortunately I joined #teamRejected. Lots of cool and nice people had to join that team as well, so it's not too bad.
Then great news came: my boss asked if I was interested in visiting Oracle OpenWorld. I didn't have to think twice, so my answer was yes, very quickly.

The trip started early. Very early … on Saturday at 04:30 in the morning we met at the company. A little coach picked us up there and, after a two-hour drive, we arrived at Schiphol.

img_0865

A nice group of friendly and interesting people traveling together. This will be fun!

After a short check-in, baggage drop-off and all the other security checks, we had breakfast, and after that: off we went!

 

img_0868 Immediately it became very clear what the main topic of this Oracle OpenWorld would be: CLOUD! 🙂

Personally, I'm very sceptical about this new "hype", but the only way to know whether I'm right or wrong is to go with an open mind and experience it.

Ten and a half hours of flying later, we safely landed in San Francisco. Then the necessary checks, and we took a cab to the hotel. We checked in and then off we went to the Moscone Center for registration.

img_0874

It IS impressive. People told me it's big, but when you stand there yourself, you know they're right. This is going to be cool.

After registration we went for a walk, doing something a little bit active in order not to suffer from the jet lag. Then I received a message from Philippe: he was already at Jillian's, so we went to have some drinks together and catch up. Always nice to see friends again! And friends do have some good ideas: that night's dinner would be sushi. A very typical sushi bar, but so good! If I could find it again, I would recommend it!

The next day, Sunday: OK, there were already some sessions, but for my first time in San Francisco I really wanted to see the bridge. So with the Oracle Belgium people we went biking. After riding across the Golden Gate Bridge and taking the ferry back, it was time to follow some sessions.

I tried to create a varied schedule, mixing sessions with visits to the demo grounds. My personal feeling was that you really had to search hard for technical sessions, whereas when walking around the demo grounds for a moment, you could talk to lots of technical people, and some of the questions I had were answered very quickly. So it became clear very quickly that this was the place to be.

Sunday evening, the glorious moment for every OOW virgin: Larry's keynote. To be honest, he let me down. After hearing so much about the keynotes, maybe my expectations were a bit high, but it is an impressive event nonetheless. We closed the evening with some drinks and then … early to bed!

Monday was a day full of sessions, demo grounds, … but most of all trying to learn as much as possible about the newly announced Exadata, the Exadata SL6-2: a SPARC-based engineered system which I think Oracle is going to push pretty hard. Engineers I talked to said that this machine is to be used in the Oracle cloud. I'm still trying to figure out how to verify this easily.

On Tuesday it became very clear that Oracle is focusing on the cloud … but … "there will be a coexistence of about 10 to 15 years". I like this, but in some way it feels like they aren't very confident in their own product; time will tell. Larry held his second keynote and this was really what I had expected. If you're curious, you can see it here. And that evening we had a blast during the Benelux party. So we went to bed early, literally 🙂

Wednesday, the last full day. I spent most of my time on the demo grounds that day, taking a picture of the famous America's Cup and attending an interesting technical deep-dive session about Exadata internals. Congratulations also to Oren Nakdimon (@dboriented on Twitter); he gave a nice presentation about upgrading his PL/SQL code without any downtime on Standard Edition.

The last full day apparently also means the appreciation event. It was announced to be Billy Joel, but he cancelled and was replaced by Gwen Stefani and Sting. Personally I'm not really a Gwen Stefani fan … but Sting was really fantastic and he got the atmosphere going!

Thursday was a day of buying some presents for home and rushing to the airport.

A common topic throughout many presentations and talks at the geek theatre on the demo grounds was encryption. It's almost always named in the same sentence as cloud, which they then immediately link to SPARC. So my guess is that SPARC will be pushed more and more, because of the on-chip encryption and the Software in Silicon features which can help with the In-Memory option of the Oracle database. This screams for some tests.

In summary, my first Oracle OpenWorld experience was fantastic. I can advise everybody to experience it at least once: such nice, friendly people, and lots of knowledge transfer. Thanks, Exitas, for letting me do this!

Welcome Exadata SL6-2!


EDIT: a new entry about this Exadata SL6-2 launch can be found here!

During a session by Juan Loaiza with the lovely title "Oracle Exadata: what's new and what's coming", suddenly the following slide popped up.

img_2209

 

So it looks like this is really going to happen. No exact timings were given in the session, but I'm waiting for this system to arrive so I can play with it.

Basically I think this will be useful for migrating current Solaris / SPARC users to Exadata. I'm a bit sceptical about these extreme performance numbers, but hey … if it's launched, I'll be happy to test it.

To be continued!

Exadata on SPARC: SL6-2


EDIT: a new entry about this Exadata SL6-2 launch can be found here!

Yesterday, while walking to Larry's keynote at Oracle OpenWorld 2016, everyone had to pass the engineered systems.

image

Curious as I am, and fed by lots of rumours, I wanted to have a look at them. And indeed, there was a "new" one standing there.

The nice thing is that you can just talk to the guys, and one extremely friendly guy told me "this one is being announced on Tuesday". It's all about the Exadata SL6-2. You'd think, OK, another Exadata, but this one is a bit special. The compute nodes are SPARC T7 based and they run Linux on SPARC. The nice thing is that they have 2 IB cards and thus potentially 160 Gb/s available. Sounds like something nice to test 🙂

Storage cells are still Intel based, but who knows what's to come. I had the chance to play with it for a very short while and it's really running Linux on SPARC. Of course it was just exploring; it should be available around December, and the virtualised version is expected around May 2017. Anyhow, I'll be attending the keynote (so that's why Larry is doing a second one 🙂) and then we'll hopefully know more.

image

What I know now is that it will come in the same elastic configurations as the current Intel-based Exadatas.

Anyhow … To be continued!