From: vsevolod afanassiev on
The amount of memory provisioned in new servers keeps growing; we
have several servers where a single OS image (Solaris, AIX, HP-UX)
has access to more than 32 GB of RAM. The question arises of how to
use this memory most efficiently to run Oracle databases, and in
particular what problems one may experience when running Oracle 9iR2
or 10gR2 with an SGA > 20 GB.
I am particularly interested in systems with a mixed workload, i.e.
where we simultaneously see highly efficient index-based queries,
very inefficient queries joining multiple tables (consuming in
excess of 800 million buffer gets per execution), large updates,
dynamic SQL, ad-hoc queries, etc.
What would you do on a server with 100 - 200 GB RAM that runs one
Oracle instance?
- would you configure direct I/O (bypassing the UNIX filesystem
  buffer cache)?
- would you configure the SGA at around 50 - 70% of physical RAM?
- would you use an 8K block size or bigger blocks?
- would you allow automatic resizing of SGA components in 10g (ASMM)
  or use fixed sizes? (see the sketch after this list)
- would you use OS features like "large pages" on AIX in combination
  with LOCK_SGA?
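
To make the knobs concrete, here is a minimal sketch of the settings
behind these questions on a 10gR2 instance. The sizes are purely
illustrative assumptions, not recommendations:

-- direct I/O through the filesystem, bypassing the OS buffer cache
ALTER SYSTEM SET filesystemio_options = setall SCOPE=SPFILE;

-- style 1: ASMM -- one target, Oracle carves up the pools itself
ALTER SYSTEM SET sga_max_size = 64G SCOPE=SPFILE;
ALTER SYSTEM SET sga_target   = 60G SCOPE=SPFILE;

-- style 2: fixed components -- sga_target = 0 disables ASMM
ALTER SYSTEM SET sga_target       = 0   SCOPE=SPFILE;
ALTER SYSTEM SET db_cache_size    = 48G SCOPE=SPFILE;
ALTER SYSTEM SET shared_pool_size = 4G  SCOPE=SPFILE;

-- pin the SGA in physical memory (pairs with OS large/huge pages)
ALTER SYSTEM SET lock_sga = TRUE SCOPE=SPFILE;

Note that filesystemio_options, sga_max_size and lock_sga are static
parameters, so they only take effect after an instance restart.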

From: vsevolod afanassiev on
Thanks for responding.

Let's say there is no RAC, only standalone instances.

Of course "start from something and then iterate" is the best
approach available. However, it is not very practical, because:
1. Changing SGA_MAX_SIZE requires an outage.
2. We found that on a heavily loaded system dynamic changes of
DB_CACHE_SIZE don't work very well. For example, when we tried to
resize DB_CACHE_SIZE from 20 GB to 30 GB it took almost an hour.
During this time the database experienced heavy waits on 'log file
sync', even though there were no log switches and few transactions.
When we tried to resize back to 20 GB the instance crashed. This was
9.2.0.8. (A sketch of this resize, and how to watch it, follows the
list.)
3. We would rather avoid significant changes (e.g. increasing
DB_CACHE_SIZE from 20 GB to 40 GB), which means we need to go in
small steps (20, 24, 28, 32, etc.). But this is time-consuming: many
systems are on a weekly/monthly cycle (i.e. certain batch jobs run
once per month), so we'd have to wait a month to see the results,
and by then the data may have changed.
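
For reference, a minimal sketch of the operations described in
points 1 and 2, using 10g syntax; the dynamic-SGA views also exist
in 9iR2. The sizes are the ones from the account above:

-- sga_max_size is static: changing it means a bounce (point 1)
ALTER SYSTEM SET sga_max_size = 40G SCOPE=SPFILE;

-- the dynamic resize from point 2
ALTER SYSTEM SET db_cache_size = 30G;

-- watch the resize work through the cache granule by granule
SELECT component, oper_type, initial_size, target_size, current_size
FROM   v$sga_current_resize_ops;
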
From: Mladen Gogala on
On Mon, 01 Feb 2010 14:15:33 -0800, vsevolod afanassiev wrote:

> [snip]
> - would you configure direct I/O (bypassing the UNIX filesystem
>   buffer cache)?
> - would you configure the SGA at around 50 - 70% of physical RAM?
> - would you use an 8K block size or bigger blocks?
> - would you allow automatic resizing of SGA components in 10g (ASMM)
>   or use fixed sizes?
> - would you use OS features like "large pages" on AIX in combination
>   with LOCK_SGA?

I have 64-bit Linux, and here is one of my instances:


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> show sga

Total System Global Area 1.7180E+10 bytes
Fixed Size                  2310752 bytes
Variable Size            8578973088 bytes
Database Buffers         8589934592 bytes
Redo Buffers                8650752 bytes
SQL>
$ grep -i huge /proc/meminfo
HugePages_Total:    8192
HugePages_Free:      961
Hugepagesize:       2048 kB


The answer to your question is: yes, I do use huge pages, because I
don't want to waste memory on page tables. Also, huge pages are never
swapped or paged out, so handling them is much simpler. You're
reducing your OS overhead by using huge pages.
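
Roughly the OS setup behind that output, for anyone who wants to
reproduce it; the page count matches the meminfo above, while the
memlock limit is just a plausible choice (here 16 GB, expressed in
KB):

# /etc/sysctl.conf -- reserve 8192 x 2 MB huge pages (16 GB)
vm.nr_hugepages = 8192

# /etc/security/limits.conf -- let oracle lock that much memory
oracle   soft   memlock   16777216
oracle   hard   memlock   16777216

# apply the sysctl change without a reboot
$ sysctl -p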

On an IBM server with between 100 and 200 GB of RAM, I would urge
management to license the VLM ("Very Large Memory") option, which
would allow me to have 32K blocks and allocate at least half of the
memory to the SGA. I would then call Connor McDonald to try getting a
bad BCHR on that machine.
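
Assuming the database keeps an 8K default block size, a 32K cache for
selected tablespaces would look roughly like this (the cache size,
tablespace name and file path are made-up examples):

-- a separate buffer cache serving 32K blocks (dynamic parameter)
ALTER SYSTEM SET db_32k_cache_size = 8G;

-- a hypothetical tablespace on the bigger block size
CREATE TABLESPACE big_blocks
  DATAFILE '/u01/oradata/big_blocks01.dbf' SIZE 10G
  BLOCKSIZE 32K;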


--
http://mgogala.freehostia.com
From: joel garry on
On Feb 1, 5:19 pm, vsevolod afanassiev <vsevolod.afanass...(a)gmail.com>
wrote:
> Thanks for responding.

You're welcome. Just noticed this:
http://hoopercharles.wordpress.com/2010/01/31/faulty-quotes-5-block-sizes/

> [snip]

Thanks for the real data points. Practicality usually wins.

jg
--
@home.com is bogus.
http://www.cleveland.com/living/index.ssf/2010/02/bill_watterson_creator_of_belo.html
From: vsevolod afanassiev on
Linux is a bit different, as there is no direct I/O (as far as I
know) unless you use raw devices or ASM. So memory that doesn't get
allocated to the SGA will be used for the filesystem buffer cache.

You have a 17 GB SGA with db_cache_size = 8.5 GB. That leaves about
8.5 GB for the shared pool, large pool, and Java pool. Seems too
much? Is ASMM enabled? I think 1 GB should be enough for the shared
pool in most cases.
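
A quick way to check where that non-cache memory actually goes, and
whether ASMM is carving it up, using standard views (nothing here is
instance-specific):

SELECT pool, ROUND(SUM(bytes)/1024/1024) AS mb
FROM   v$sgastat
GROUP  BY pool;

-- a non-zero sga_target means ASMM is deciding the split
SHOW PARAMETER sga_target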