Month: February 2017

AWS S3 outage


Today our AWS friends suffered an outage. This raises some questions.

Suppose your entire organization is built on cloud products, all of them closely related. An outage like this then means a total outage for the organization.

What about costs? Personally I am not an AWS user, but does this outage come down to “sorry dear customer, but you’re going to pay even though we had this unplanned outage”?

If you’re mission critical, let’s say your end users are depending on it, would it still make sense to run production in the cloud?

Curious for opinions!

Why the sun shines for Oracle and it’s Cloudy for others


First of all, I’d like to mention that this draft has been pushed back in my schedule over and over again, while I asked myself: should I really publish this? But then again … this is a blog, a very personal, humble opinion, and you don’t have to agree with me. I can be wrong, I can be right; the truth is probably somewhere in between. So, the title “Why the sun shines for Oracle and it’s cloudy for others” is kind of a metaphor for Oracle (until now) having missed the cloud train.

Recently I came across the website of the Synergy Research Group and found a nice article. When you see the graph, you immediately get why uncle Larry is doing all this stuff to beat AWS.

[Image: Synergy_cloudy_graph]

You see? Try to find Oracle … it’s in the “Others” group. If this were the RDBMS resource manager, I’d not like to be there, and I think Oracle was thinking the same 🙂 If you have a look at AWS, there’s virtually no change. Personally I expected a little growth, but apparently not. Microsoft Azure, Google and IBM are taking up the share of the “others”.

If you dig around on my blog, you’ll see that I recently worked on a project on the Microsoft Azure cloud. Even though I never really liked Microsoft and I’m not a fan of Oracle on Windows, I have to admit that doing this Azure project (apart from some other problems) was a BLAST! Full support from Microsoft, a stable cloud environment, easy to configure and maintain … a very positive experience.

Then I had a look at the Oracle Cloud, a bit sceptical. The interface is fantastic! But then you dig a bit deeper and I hit limits I wasn’t expecting. A very simple example: Oracle wants to position itself as the #1 cloud provider, and to do so, they want to migrate full datacenters to their cloud. GREAT! Wonderful idea. I mean this.

One story from the Azure project. Due to a miscalculation (if you want to hear all about it, find me at a conference for my presentation about the journey of a BI stack to the cloud), we needed far more powerful servers to cope with the load. That wasn’t a problem, but they are expensive. So expensive that we redid the financial calculation and decided to have a look at the other two players as well. AWS was easy and competitive, but at about the same price, so there was no reason to change. Then we had a look at the Oracle cloud.

Remember the demo Larry Ellison gave at OpenWorld: he wants to lift and shift datacenters to the Oracle Cloud. I love that concept. So we went to the Oracle marketplace (I love this term!) looking for our Windows Server version. No worries, our DBs are running on Linux 🙂 But err … no decent Windows servers available in the marketplace 🙁

[Image: Cloudy_limited_choice]

I also find that the interface is slow … very slow … and sometimes even unstable.

[Image: Cloudy_failure]

Some friends even had difficulties cancelling their trial subscription. I can go on like this for a while, but one of the other “no-go’s” for this customer was this entry in the FAQ:

“I have hardware VPN appliance in my datacenter. Will Corente VPN work with my existing appliance?

Currently, third-party VPN appliances will not work with the Corente service. VPN endpoint locations will need to install a Corente Services Gateway.”

This customer wanted another choice and that was impossible. That’s a pity.
EDIT (24/02/2017): The Oracle cloud, just like the others, is evolving very rapidly. Thanks to Philip Brown (@pbedba) for pointing me to these links about a Third-Party Gateway to an IP Network in Oracle Cloud and a Third-Party Gateway On-Premises to the Shared Network.
So it seems that it is currently possible, which is good news! Hopefully the FAQ will be updated quickly.

OK, let’s do Database as a Service then. Oracle is the #1 database company (and yes, I’m an Oracle fanboy), so that should work for a decent price. Right?

I’ll take the anonymised example I use in my presentation as well: 3 prod DBs of 35TB, 15TB and 6TB plus their Data Guard instances, and then for each DB 8 non-live versions (Dev, Dev New Release, Test, Test New Release, Int, Int New Release, Uat, Uat New Release). That’s some 30 databases all told, and then you immediately spot WHY the cloud is an option. Treat these databases as cattle, not as pets, so automation and provisioning would be key. But for production it should be feasible, right?
Let’s explore the options … In summary: not too much, except the full-blown Exadata option, which was (compared to the Azure solution we had figured out) extremely expensive. And even then we had left out the mechanisms for cloning those databases to the non-prod systems in an automated way.

It’s a bit of a frustrating blog post and I feel sad writing and reading it. For Oracle, in my opinion, the sun is still shining on premises, and I do hope for them the clouds will come, but the way it is now, I’m afraid they’ll miss this train. I believe more in data on premises, but the cloud will definitely take its place and we should definitely embrace it. I totally agree with the statement “there will be a co-existence for the next 5 to 10 years”. Of course some other hype will be there by then, but that’s another story.

But Oracle … you still can win this battle!

  • Think about the past, think “back to the future”! How did you win ground in the past? By making it EASY TO USE. So, for the trial subscription: make it really free to subscribe and unsubscribe, without having to provide credit card details. Have a look at your colleagues of APEX, they are doing a GREAT job!
  • Support us. Support is key. If we choose to be dependent on a cloud provider, offer good support. Resolve (I don’t say respond, but really resolve) SRs really quickly (< 0,5d in the local timezone), as speed in the cloud is key.
  • No unplanned outages please! Make it stable, no suddenly disappearing machines. Outages are acceptable, but communicate them and be very transparent.
  • Invest in a good, extensive marketplace. Currently, you’re at the point where Microsoft Azure was 2 years ago. You have the experience, the knowledge, the social network, … it must be feasible to fill this marketplace really quickly with recent and decent software. Vendors are asking for it … hear them. Make the marketplace a shopping mall or a candy store.
  • Engage your partners! It’s lonely at the top, and from high up you can fall very low. If the product is mature, and if partners get easy access to features-to-come (compare it to the private/public previews with Azure), customers will start to trust you and dare to make the move.
  • Don’t push the “cloud-on-premise” too hard. It’s no cloud at all, it’s just an interface, and people don’t get the idea: keeping the costs of their own datacenter and paying extra for this service is difficult to understand. I do believe in this mechanism as a “step to the cloud”, but make it free (or very, very cheap), so that people can use the engineered systems to put their environment on and, once done, call DHL/FedEx or some other partner and move it all to the Oracle D.C. Done.
  • Don’t change the rules if you can’t win, and don’t get aggressive. Yes, I’m referring to the core-factor story regarding AWS and Microsoft Azure. I heard some customers making the comparison with children: “if they can’t win, they change the rules”. I couldn’t think of any response at the time … it felt like they were right.
  • Provide a clear cloud advantage. This could be, for instance, that if you add a compute layer to host your DB yourself, the EE licenses are included. Or change the license model (in the Oracle cloud) so that e.g. all the options are included “for free” in the EE license. If you make that cheaper than the on-premises licenses, you will certainly win ground without putting the customers of other certified cloud providers in a strange position.
  • Provide an easy mechanism so that customers can go back/away very easily without extra cost. This sounds very strange, but people don’t like to be in prison, so they are very scared of “losing their data to someone else” or of going through a lengthy process to get it out of the cloud again (if needed for one reason or another).

Basically, it comes down to one sentence: listen to your customers, listen to what they want, don’t shove things down their throats. It’s not too late yet. People are interested, so engage them, don’t scare them.

Once again, this is a very personal opinion and I might be right, but I might be wrong as well. I think that by discussing this, more beautiful and usable clouds can be created.

[Image: Cloudy_but_sunny]

And remember, when it’s cloudy, it doesn’t necessarily mean that it will rain 🙂

As always, questions, remarks? find me on twitter @vanpupi

Memo to Self: Recap cellsrvstat


Sometimes I ask myself “how did that work again?”, so I decided to document it every time I have this feeling, with some links to the documentation, easy commands, … you get the picture.

First one today: new customer, new environment, and to get some feeling for the cells, I used cellsrvstat.

Documentation reference (here). Cellsrvstat is also part of ExaWatcher on the cells.

A basic overview of the command: if you log on to the cells as root, it is in your $PATH, but in case you’re looking for it, it’s stored in /opt/oracle/cell<version>/cellsrv/bin/.

So basics first, what can it do:

# cellsrvstat -h
LRM-00101: Message 101 not found; No message file for product=ORACORE, facility=LRM
Usage:
cellsrvstat [-stat_group=<group name>,<group name>,]
[-offload_group_name=<offload_group_name>,]
[-database_name=<database_name>,]
[-stat=<stat name>,<stat name>,] [-interval=<interval>]
[-count=<count>] [-table] [-short] [-list]

stat A comma separated list of short strings representing
the stats. Default is all. (unless -stat is specified).
The -list option displays all stats.
Example: -stat=io_nbiorr_hdd,io_nbiowr_hdd
stat_group A comma separated list of short strings representing
stat groups. Default: all except database
(unless -stat_group is specified).
The -list option displays all stat groups.
The valid groups are: io, mem, exec, net,
smartio, flashcache, offload, database.
Example: -stat_group=io,mem
offload_group_name
A comma separated list of short strings representing
offload group names.
Default: cellsrvstat -stat_group=offload
(all offload groups unless -offload_group_name is specified).
Example: -offload_group_name=SYS_121111_130502
database_name A comma separated list of short strings representing
database group names.
Default: cellsrvstat -stat_group=database
(all databases unless -database_name is specified).
Example: -database_name=testdb,proddb
interval At what interval the stats should be obtained and
printed (in seconds). Default is 1 second.
count How many times the stats should be printed.
Default is once.
list List all metric abbreviations and their descriptions.
All other options are ignored.
table Use a tabular format for output. This option will be
ignored if all metrics specified are not integer
based metrics.
short Use abbreviated metric name instead of
descriptive ones.
error_out An output file to print error messages to, mostly for
debugging.

In non-tabular mode, The output has three columns. The first column
is the name of the metric, the second one is the difference between the
last and the current value(delta), and the third column is the absolute value.
In Tabular mode absolute values are printed as is without delta.
cellsrvstat -list command points out the statistics that are absolute values


[root@dm06celadm01 ~]#

So it can display all kinds of information about your cell status, which can be helpful to see what’s going on. Let’s do the list (warning: an awful lot of info! I’ll cut out some of the rows, but if you execute it, be prepared for a long list):

[root@dm06celadm01 ~]# cellsrvstat -list
Statistic Groups:
io Input/Output related stats
mem Memory related stats
exec Execution related stats
net Network related stats
smartio SmartIO related stats
flashcache FlashCache related stats
health Cellsrv health/events related stats
offload Offload server related stats
database Database related stats
ffi FFI related stats
lio LinuxBlockIO related stats
mpp Reverse Offload related stats
Sparse Sparse stats

Statistics:
[ * - Absolute values. Indicates no delta computation in tabular format]

io_nbiorr_hdd Number of hard disk block IO read requests
io_nbiowr_hdd Number of hard disk block IO write requests
io_nbiorb_hdd Hard disk block IO reads (KB)
io_nbiowb_hdd Hard disk block IO writes (KB)
io_nbiorr_flash Number of flash disk block IO read requests
io_nbiowr_flash Number of flash disk block IO write requests
io_nbiorb_flash Flash disk block IO reads (KB)
io_nbiowb_flash Flash disk block IO writes (KB)
io_ndioerr Number of disk IO errors
io_ltow Number of latency threshold warnings during job
io_ltcw Number of latency threshold warnings by checker
io_ltsiow Number of latency threshold warnings for smart IO
io_ltrlw Number of latency threshold warnings for redolog writes
...
mpp_nr_blcc Num of reqs not pushed due to low cell cpu (C)
mpp_nr_bhcon Num of reqs not pushed due to high cell outnet (C)
mpp_nr_bhrnin Num of reqs not pushed due to high db node innet (C)
mpp_nincr_mb Num rate increase by reverse offload info from db (C)
mpp_ndecr_mb Num rate decrease by reverse offload info from db (C)
mpp_nincr_rn Num rate increases from db node cpu information (C)
mpp_ndecr_rn Num rate decreases from db node cpu information (C)
mpp_ndecr_ccpu Num rate decreases from low cell cpu utilization (C)
mpp_ndecr_con Num rate decreases from high cell outnet util (C)
mpp_ndecr_rn_in Num rate decreases from high db node innet util (C)
sparse_ncb num buckets compacted by sparse HT background scan
sparse_ios num IOs with sparse regions
sparse_ios_kb Total sparse IOs (KB)
sparse_smartio Total redirected smart ios (KB)
[root@dm06celadm01 ~]#

Let’s say you’re only interested in the IO-related things, then you could use a stat_group:

[root@dm06celadm01 ~]# cellsrvstat -stat_group io
===Current Time=== Tue Feb 21 11:29:39 2017

== Input/Output related stats ==
Number of hard disk block IO read requests 0 2226820445
Number of hard disk block IO write requests 0 1033312850
Hard disk block IO reads (KB) 0 1909110664882
Hard disk block IO writes (KB) 0 199121447989
Number of flash disk block IO read requests 0 14301322886
Number of flash disk block IO write requests 0 1008668696
Flash disk block IO reads (KB) 0 789129901568
Flash disk block IO writes (KB) 0 52097067586
Number of disk IO errors 0 0
Number of latency threshold warnings during job 0 1081
Number of latency threshold warnings by checker 0 0
Number of latency threshold warnings for smart IO 0 0
Number of latency threshold warnings for redolog writes 0 0
Current read block IO to be issued (KB) 0 0
Total read block IO to be issued (KB) 0 599867955384
Current write block IO to be issued (KB) 0 0
Total write block IO to be issued (KB) 0 197822797002
Current read blocks in IO (KB) 0 0
Total read block IO issued (KB) 0 599867955384
Current write blocks in IO (KB) 0 0
Total write block IO issued (KB) 0 197822797002
Current read block IO in network send (KB) 0 0
Total read block IO in network send (KB) 0 599867955384
Current write block IO in network send (KB) 0 0
Total write block IO in network send (KB) 0 197822797002
Current block IO being populated in flash (KB) 0 2765920
Total block IO KB populated in flash (KB) 0 32844047616
I/Os queued in IORM for hard disks 0 0
I/Os queued in IORM for flash disks 0 0

[root@dm06celadm01 ~]#

The last 2 lines are also very interesting: they tell you whether IORM is kicking in or not. Might be useful in some cases. Just saying.
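If you only want to keep an eye on a few counters instead of a whole group, the -stat, -interval and -count switches from the help output above can be combined. A hedged sketch (the abbreviations below come straight from the -list output; the IORM queue statistics have their own abbreviations, which you can look up the same way, so double-check everything on your cell version first):

# find the abbreviations of the stats you care about, e.g. anything IORM related
cellsrvstat -list | grep -i iorm
# then sample a handful of block IO counters every 5 seconds, 12 times, in tabular format
cellsrvstat -stat=io_nbiorr_hdd,io_nbiowr_hdd,io_nbiorr_flash,io_nbiowr_flash -interval=5 -count=12 -table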

The exec group is also nice. Once again I will cut out some rows, but the last lines are very interesting as well:

[root@dm06celadm01 ~]# cellsrvstat -stat_group exec
===Current Time=== Tue Feb 21 11:30:17 2017

== Execution related stats ==
Incarnation number 0 3
Number of module version failures 0 0
Number of threads working 0 2
Number of threads waiting for network 0 23
Number of threads waiting for resource 0 9
Number of threads waiting for a mutex 0 112
Number of Jobs executed for each job type
CacheGet 0 3123536972
CachePut 0 1031998876
CloseDisk 0 15376502
OpenDisk 0 20379160
ProcessIoctl 0 304858117
PredicateDiskRead 0 7462707
PredicateDiskWrite 0 36539
PredicateFilter 0 24054836
PredicateCacheGet 0 140219901
PredicateCachePut 0 16917010
FlashCacheMetadataWrite 0 0
RemoteListenerJob 0 0
CacheBackground 0 0
RemoteCellMgrService 0 0
CopyFromRemote 0 30925
...
sparse_bootstrap 0 0
sparse_free_region 0 0
DelegateIO 0 62678
NetworkPoll 0 0
CopySIFromRemote 0 550
SIGetJob 0 720
NetworkDirectoryGC 0 0

SQL ids consuming the most CPU
INT99 dxpwsgys5za27 3
END SQL ids consuming the most CPU

[root@dm06celadm01 ~]#

This tells me which database is consuming the most CPU, and for which query. Might be useful in some cases. Remember … in an idle environment, if you do something, you’re automatically the “top”. But if you suspect something, it’s worth having a look, it might help.
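To see which database is generating load over a period of time rather than at a single moment, the database stat group from the help output can be sampled at an interval. Again a hedged sketch; the database name is just the example name from the help text, so replace it with your own:

# per-database statistics, one sample every 10 seconds, 6 samples
cellsrvstat -stat_group=database -interval=10 -count=6
# or narrow it down to a single database
cellsrvstat -stat_group=database -database_name=proddb -interval=10 -count=6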

As always, questions, remarks? find me on twitter @vanpupi

UKOUG Ireland 2017


Once more I’d like to thank my colleague and friend Philippe Fierens (@pfierens and http://pfierens.blogspot.be) for convincing me to speak at conferences about the things I do for customers. Last year, 2016, it all started at UKOUG Ireland, and I am lucky to have been selected this year again! I’m speaking at this year’s OUG Ireland event! (agenda)

My first talk is on the first day at 14:15 and will tell you all you’d like to know about OVM. It’s called: OVM on Exadata: Living in a virtual world. You can find the abstract here. One thing I’d like to mention: normally this is a duo presentation with Philippe, but due to circumstances he can’t join, so I’d like to credit him for his part in the presentation. Thanks Philippe, I will try to do it as well as you do!

On the second day I’ll be speaking at 15:25 about a very recent project I did; the title of the presentation says it all: The Journey of a BI-stack to the Cloud. You can find the abstract here.

So folks, register for the conference and see you there!

As always, questions, remarks? find me on twitter @vanpupi

A warm welcome to Exadata SL6-2


At last year’s Oracle OpenWorld, uncle Larry announced the SPARC-based Exadata SL6-2, which means we have to give the SPARC chips a warm welcome to the Exadata family.
During the conference I wrote 2 blog posts. You can find them here and here.

To recap, a little picture of the new one in the family:

[Image: Exadata SL6-2]

Nowadays we’re used to the big X for the Exadatas, which stands for the x86 infrastructure they are running on. SL stands for “SPARC Linux”. You should follow the Oracle guys on Twitter as well; then you’ll see this product (Linux for SPARC) is growing very rapidly. One of the questions which pops into the mind directly: which endianness does this use? Well, Linux on SPARC uses big endian, as the SPARC chip itself is big endian.
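If you ever want to verify the byte order of a box yourself, here is a quick sketch from a Linux shell, assuming lscpu and Python are available; it should report big endian on SPARC and little endian on x86:

# ask the CPU info tooling
lscpu | grep -i "byte order"
# or ask Python
python -c 'import sys; print(sys.byteorder)'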

In my blog posts I mentioned I was eagerly looking forward to the spec sheet, and here it is! http://www.oracle.com/technetwork/database/exadata/exadata-sl6-ds-3547824.pdf

A shameless copy out of the datasheet:
“The Exadata SL6 Database Machine uses powerful database servers, each with two 32-core SPARC M7 processors and 256 GB of memory (expandable up to 1TB)”

According to Gurmeet Goindi’s blog (@exadatapm) it comes at the same cost as the Intel-based variant. You can read his blog here: https://blogs.oracle.com/exadata/entry/exadata_sl6_a_new_era

[Image: Exadata SL6-2 hardware specifications]

Look what’s there! Instead of 2 QDR ports, we now have 4, and the elastic configs remain. Also remarkable is that the storage cells remain on Intel-based architecture.
This looks interesting as well (the same as the X6-2 trusted partitions):

[Image: Exadata SL6-2 mgmt features]


At this moment (or maybe I read over it) I can’t yet see how virtualisation will be done, so if someone has info about this, I will be happy to hear it. I’ve heard several rumours, but I am eager to find out what it’s going to be!

One question remains … when will I find a customer who buys one and lets me explore it to the bottom 🙂


As always, questions, remarks? find me on twitter @vanpupi