Planning For RAC on 11g2 [message #458711] Tue, 01 June 2010 07:29
sailesh
Messages: 72
Registered: September 2007
Location: MAURITIUS
Member
Hi all,


I have a database server on a Sun V890 running Oracle 10g, version 10.2.0.1.0. The server must be available 24*7. Recently I raised the PROCESSES parameter from 2500 to 5000, because on peak days the process count was hitting 2500 and the database simply hung. After the increase, the database would not start until I had also raised three OS semaphore parameters in /etc/system: SEMMNS, SEMMSL, and SEMMNI.
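
For reference, this is roughly what the change involved - a sketch with illustrative values rather than my exact settings:

# 1) Raise PROCESSES in the spfile, then bounce the instance:
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET processes=5000 SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
EOF

# 2) Semaphore limits in /etc/system (reboot required); SEMMSL must
#    comfortably exceed PROCESSES:
#      set semsys:seminfo_semmsl=5010
#      set semsys:seminfo_semmns=10240
#      set semsys:seminfo_semmni=512
# On Solaris 10 these tunables are superseded by project resource
# controls on the oracle user's project, e.g.:
projmod -sK "process.max-sem-nsems=(priv,5010,deny)" user.oracle
projmod -sK "project.max-sem-ids=(priv,512,deny)" user.oracle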

During peaks, CPU utilisation climbs to about 90%.

Now I am considering RAC, so that requests from the application servers can be load-balanced between two nodes.

Our application servers are WebLogic and GlassFish; eventually all applications will be migrated to GlassFish.

My question: is load-balancing the requests from the application servers a good reason to consider RAC?

Right now I have about 1500 concurrent users on the database; obviously this is an estimate based on the number of processes.

Please advise.


Sailesh
Re: Planning For RAC on 11g2 [message #458759 is a reply to message #458711] Tue, 01 June 2010 14:16
mkounalis
Messages: 147
Registered: October 2009
Location: Dallas, TX
Senior Member
RAC is not the answer to everything - it is primarily the answer for building highly available systems that host Oracle databases. You could simply buy a bigger box to run your expected workload, but a single box is still a single point of failure; since you say the system must be available 24*7, RAC is indicated here.

You don't mention what platform you are thinking of implementing RAC on. I would guess Linux, but Solaris works just as well - and not only because Oracle owns Sun now. RAC lets you add horsepower to your Oracle database by growing the number of nodes in the cluster fairly easily, which is why commodity x86_64 servers are what people usually use - but RAC runs on Oracle/Sun SPARC, HP-UX Itanium, and AIX as well.

If you want to load-balance the workload, RAC is your best (and maybe only) answer. RAC lets every machine in the cluster see the same data, so an update on one node is immediately visible on all the others. One thing I am not sure of, though, is GlassFish's ability to take advantage of a RAC-enabled database - that is something you will need to verify. WebLogic (assuming you run a recent enough version) does let you exploit the features that RAC offers.
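
To give a feel for the client side: a RAC-aware connect string simply lists both node VIPs and lets Oracle Net spread (and fail over) connections between them. A sketch - the alias, host names, and service name below are hypothetical:

# Hypothetical tnsnames.ora entry for client-side load balancing:
cat <<'EOF' >> $ORACLE_HOME/network/admin/tnsnames.ora
RACDB =
  (DESCRIPTION =
    (LOAD_BALANCE = yes)
    (FAILOVER = yes)
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = racdb))
  )
EOF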

You don't mention whether you are using multi-threaded server - what is now usually called shared server - connections to shrink the memory footprint of all those concurrent database connections. Each dedicated connection on Solaris takes a minimum of 11 megabytes or so, so 1500 connections consume more than 16 gigabytes of RAM at a minimum. How much RAM does your existing Sun V890 have configured? That memory is on top of what the OS needs, and on top of the SGA/PGA requirements of the Oracle kernel itself. If your box doesn't have the RAM to support dedicated connections, MTS/shared server connections can be a good alternative.

Also, having less RAM than you are actually using drives up CPU utilisation, because the OS is busy swapping pages out to the swap device to make room for active processes. Swapping can have a very negative effect on performance, especially in a large system like the one you are describing. And remember that a connection grows with the complexity of the SQL it runs - if your users are all running complex queries, expect those connections to reach 30 megabytes or more.
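
If you want hard numbers instead of my 11-megabyte rule of thumb, measure the per-connection memory directly; and switching to shared server is only a two-parameter change. A sketch - the shared_servers and dispatchers values are illustrative starting points, not a recommendation:

sqlplus / as sysdba <<'EOF'
-- How much PGA each dedicated server process has allocated:
SELECT ROUND(pga_alloc_mem/1024/1024) AS alloc_mb, COUNT(*) AS processes
FROM   v$process
GROUP  BY ROUND(pga_alloc_mem/1024/1024)
ORDER  BY alloc_mb;

-- Enable shared server connections:
ALTER SYSTEM SET shared_servers = 50;
ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=4)';
EOF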

The number of connections is one aspect; what each of those connections is doing is another. You don't mention transaction load - that would be another key piece of information.

One thing to keep in mind when designing a RAC cluster: the nodes should be as close to identical in performance as possible. RAC will work across nodes that merely share the same CPU type and OS version, but disparately configured hardware makes the cluster (IMHO needlessly) more complex to manage. You also want the cluster to survive at least a single node failure. In a two-node cluster, for example, that means neither node should run much above 45% at peak load; if one node fails, the survivor can then absorb the failed node's workload with no apparent impact on user response times. The more nodes you add, the harder you can load them - with three nodes I would suggest no more than 60% at peak, with four nodes about 70%, and so on. You never want to run at 100% even at peak, so that you can absorb spikes. Obviously I am stating what you should do, not necessarily what is realistically feasible in your environment.
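
That rule of thumb generalizes: cap each node at roughly (N-1)/N of its capacity, then shave about 10% off for spikes. A quick sketch of the arithmetic:

# Per-node peak cap so an N-node cluster survives one node failure
# with ~10% spike headroom (rule of thumb, not a sizing tool):
for n in 2 3 4 8; do
  cap=$(( (n - 1) * 90 / n ))
  echo "$n nodes: keep each node under ~${cap}% at peak"
done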

Understanding your true tolerance for planned and unplanned downtime is also critical to the design. Remember that planned downtime does NOT usually figure into the five-nines (99.999%) uptime calculation most people quote - five nines allows only about five minutes of unplanned downtime per year. Planned downtime does NOT (at least it shouldn't) count against you; unplanned downtime does. Building a system that genuinely meets five nines takes a lot of planning, and a lot of money to do correctly. RAC is a key ingredient in such a system, but not the only one. Any system's operating schedule should include some planned downtime for upgrades, equipment moves, and the like.

Aside from choosing the server hardware for your RAC cluster, you also need to choose an appropriate SAN or other shared-disk technology for RAC to use. And keep in mind that the RAC private interconnect really wants a dedicated LAN on its own switches if at all possible. I can't tell you how many times I have seen a VLAN or shared switches turn out to be the culprit behind both poor performance and spurious node failures in a RAC cluster.
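
Once the cluster is up, it is worth confirming which NIC Clusterware actually registered as the interconnect; the oifcfg tool shows and sets this. The interface name and subnet below are hypothetical:

# List the interfaces Clusterware knows about and their roles:
oifcfg getif

# Register a dedicated private NIC as the interconnect:
oifcfg setif -global e1000g1/192.168.10.0:cluster_interconnect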

I am sure that an Oracle sales consultant would be glad to help you get more information. RAC is a bit more complex than a single-instance database, but it is also fairly easy to manage and add to once you know how.

One last comment: 10.2.0.1 is an extremely old and buggy release of 10gR2. Moving to 10.2.0.4 alone might yield some benefit.

Good luck!
Re: Planning For RAC on 11g2 [message #458826 is a reply to message #458759] Wed, 02 June 2010 01:13
sailesh
Messages: 72
Registered: September 2007
Location: MAURITIUS
Member
Hi,

Thanks for the information you have provided.

I am planning to use Solaris 10 on Sun T5240 (CoolThreads) servers. For reference, my current V890 has 32 GB of RAM, with 6 GB configured as SGA and 2 GB as PGA.

In EM, the Database Limits page under All Metrics shows a "current logons count". What does it mean? Is it the number of users concurrently logged in, or something else? I ask because the database actually hangs when it reaches 2500 processes.

My CPU usage per transaction today is 604. The peak normally comes at the end of each month.

On a normal day the number of processes is around 1000, but it is gradually increasing; this month it went beyond 2500.


Thanks,

Sailesh
Re: Planning For RAC on 11g2 [message #458900 is a reply to message #458711] Wed, 02 June 2010 08:07
mkounalis
Messages: 147
Registered: October 2009
Location: Dallas, TX
Senior Member
I don't know what you mean by "the cpu usage per transaction is 604" - 604 what? Seconds?

The database locking up at 2500 connections is odd - and again, you are running 10.2.0.1, which is a buggy release; I really think you are best served by moving to 10.2.0.4 and seeing what happens. Still, on Solaris you need all your kernel memory parameters adjusted to handle the resources Oracle will demand for the number of processes you want to host. I don't believe the metric you are seeing in EM is a limit - rather, it is the high-water mark for logons. I will double-check which metric you are looking at when I get into the office.
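
In the meantime, you don't need EM to see how close you are to the ceiling - the database tracks it itself:

sqlplus -s / as sysdba <<'EOF'
-- Current use vs. high-water mark vs. configured limit:
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');
EOF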

I really think you are moving in the wrong direction with CoolThreads. CoolThreads is hyperthreading on steroids: you have eight or sixteen actual cores, and then several hardware threads per core (four or eight, depending on the chip). That makes the OS see far more CPUs than you have real cores. For web servers this makes a lot of sense; for database servers it makes much less. People have implemented Oracle - and RAC - on this platform, but in the end they usually move to Linux on x86_64 boxes, because those run large database loads far more efficiently than CoolThreads can. I have implemented two large RAC clusters on CoolThreads; both have since moved to other platforms.

CoolThreads looks great on paper, and it is an awesome technology for hosting applications whose processes spend their time waiting on humans. In that case there is a good chance of a process or thread sitting idle while it waits for something to happen. On a database server the likelihood of a thread being idle is greatly reduced - especially when you drive it from a middleware tier that pools its connections. The human wait time is absorbed at the middleware tier: it draws on the database connection pool only when it actually needs work done. So essentially all your database connections are always busy, and that is where CoolThreads works against you - a single core tries to execute several hardware threads at once, and you become artificially and needlessly CPU-bound. You really don't have the CPU resources you think you do.

For the same money (or less) you can go with x86_64 servers, which I think would serve you much better. At the very least, get Sun to set up a test to see how CoolThreads performs for your workload - a test is worth more than any forum post. And be very conservative about how much work you put on each CoolThreads server; you may need more of them than you would with x86_64 (or even conventional SPARC) servers to handle the same workload.
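
You can see the core-versus-thread gap for yourself on any CoolThreads box, because Solaris presents every hardware thread to the scheduler as a CPU:

# Physical processors and cores vs. the virtual CPUs Solaris schedules:
psrinfo -pv

# One line per virtual CPU - on a T5240 this is far more than the
# number of real cores:
psrinfo | wc -l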

Good luck!