From: Ryan_Hoffman on
On Feb 4, 2:47 pm, Tim X <t...(a)nospam.dev.null> wrote:
> joel garry <joel-ga...(a)home.com> writes:
> > On Feb 3, 2:28 pm, vsevolod afanassiev <vsevolod.afanass...(a)gmail.com>
> > wrote:
> >> Requirements vary from application to application. Well-written
> >> applications cache static data.
> >> There are several network-related statistics; this is from a
> >> Statspack report for a 9.2.0.8 database:
>
> >> Statistic                                      Total     per Second    per Trans
> >> --------------------------------- ------------------ -------------- ------------
> >> SQL*Net roundtrips to/from client         15,714,040          436.5         22.2
> >> bytes received via SQL*Net from c      5,056,985,913      140,475.7      7,144.4
> >> bytes sent via SQL*Net to client       3,173,523,378       88,155.9      4,483.5
>
> > A single application is the wrong level to determine this.  The DBA
> > presumably is responsible for allocating all Oracle-related network
> > usage.  So, at a minimum, it would need to be statistics as above for
> > all applications current and contemplated, projected forward, plus the
> > likelihood of disaster or distributed usage.  The latter means
> > archiving transport, and rman usage over the net (among other
> > possibilities).  For example, to build a standby database you have to
> > figure how much data there is uncompressed (in case there happens to
> > be some bug that disallows the compression option), and how fast you
> > will need to rebuild such a beast from scratch should the archive gap
> > become untenable.  If management's view is "no, we will never lose our
> > computer room due to the upstairs crapper overflowing," get an
> > experienced consultant in there.
>
> This is very critical. Growth is almost always underestimated. Come up
> with a figure, then multiply by pi and you may be getting closer to
> reality. I've frequently seen analysis that focused on application
> performance and completely overlooked the network requirements for
> backups. If you expect to have, let's say, one full backup per week and
> incremental backups each day, then you need to be able to complete them
> within a 24-hour period - and that means all your databases AND other
> critical data, such as file servers, mail

<snip>

> hehehe! Tell them they need not just video conferencing, but the full
> video grid stuff (aka VC inspired by Star Trek's holodeck).
>
> --
> tcross (at) rapttech dot com dot au

Thanks for the response. Agreed - the latency you can tolerate depends
on how large a pipe you've purchased, specifically with respect to
getting your backup completed in the window you have allotted. The
higher the latency (typically a function of the distance between source
and backup site), the more bandwidth you'll need to compensate. Loss
plays a factor here too: TCP can recover from a lossy connection, but
only up to a point before throughput degrades significantly.
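
For anyone following along, the back-of-envelope arithmetic looks
roughly like this (a quick sketch - the 5 TB database, 24-hour window,
64 KB TCP window and 50 ms round trip are made-up illustrative numbers,
not figures from this thread):

    # Back-of-envelope sizing for a backup window and a single TCP stream.
    # All numbers are illustrative.

    def mbit_per_sec_for_window(data_tb, window_hours):
        """Sustained throughput needed to move data_tb within window_hours."""
        megabits = data_tb * 1024 * 1024 * 8
        return megabits / (window_hours * 3600)

    def single_stream_ceiling_mbit(window_kb, rtt_ms):
        """Classic TCP limit: one stream cannot exceed window size / RTT."""
        return (window_kb * 1024 * 8) / (rtt_ms / 1000.0) / 1e6

    print(round(mbit_per_sec_for_window(5, 24)))         # ~485 Mbit/s sustained
    print(round(single_stream_ceiling_mbit(64, 50), 1))  # ~10.5 Mbit/s per stream

The second figure is why latency and loss hurt long before the raw size
of the pipe becomes the limit: a single stream with an un-tuned window
stalls far below the purchased bandwidth, and every retransmission on a
lossy link pushes it lower.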

Can you comment on SYNC backup (realtime)? Is there a particular
latency or loss threshold you wouldn't want to exceed, since these
directly impact application response for the client?

I've found a variety of marketing names the db vendors use for SYNC
and ASYNC backups. I'm curious whether there are generic names DBAs
use for these functions? I was thinking 'hot standby' for SYNC and
perhaps 'WAN replication' for ASYNC.

Thanks!
From: hpuxrac on
On Feb 6, 12:31 pm, Ryan_Hoffman <hoffman.r...(a)gmail.com> wrote:

snip

> Can you comment on SYNC backup (realtime)?  Is there a particular
> latency and loss threshold you wouldn't want to exceed; since these
> directly impact the application response for the client?
>
> I've found a variety of marketing names the db vendors use for SYNC
> and ASYNC backups.  I'm curious if there are generic names DBA's use
> for these functions?  I was thinking 'hot standby' for SYNC and
> perhaps 'WAN replication' for ASYNC.

You lost me completely here.

You back up an Oracle database using RMAN. The best backups are done to
disk, since they run much faster than backups to tape, and if you ever
have to use the backup, recovery from disk is much faster than recovery
from tape.

Are you talking about replicating a database?

Synchronous replication can carry a severe performance penalty,
especially for commit-happy applications. Asynchronous replication
carries the risk of data loss but reduces the performance hit. 'log
file sync' is the Oracle wait event that is especially useful when
looking at the relative performance hit of synchronous versus
asynchronous replication.
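
To put a rough number on that penalty, here's a small sketch (the 2 ms
local commit wait and 20 ms WAN round trip are invented for
illustration):

    # Cost of synchronous replication for a commit-happy workload.
    # Each commit has to wait for the remote acknowledgement, so it picks
    # up roughly one WAN round trip on top of the local 'log file sync'
    # time. All numbers are invented for illustration.

    local_log_file_sync_ms = 2.0   # assumed local commit wait
    wan_round_trip_ms = 20.0       # assumed latency to the remote site

    sync_commit_ms = local_log_file_sync_ms + wan_round_trip_ms
    max_commits_per_session = 1000.0 / sync_commit_ms

    print(f"commit wait grows from {local_log_file_sync_ms} ms to {sync_commit_ms} ms")
    print(f"one session now tops out near {max_commits_per_session:.0f} commits/sec")

Asynchronous replication avoids that per-commit round trip, which is why
the trade becomes performance versus how much committed data you can
afford to lose.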

From: Tim X on
hpuxrac <johnbhurley(a)sbcglobal.net> writes:

> On Feb 6, 12:31 pm, Ryan_Hoffman <hoffman.r...(a)gmail.com> wrote:
>
> snip
>
>> Can you comment on SYNC backup (realtime)?  Is there a particular
>> latency and loss threshold you wouldn't want to exceed; since these
>> directly impact the application response for the client?
>>
>> I've found a variety of marketing names the db vendors use for SYNC
>> and ASYNC backups.  I'm curious if there are generic names DBA's use
>> for these functions?  I was thinking 'hot standby' for SYNC and
>> perhaps 'WAN replication' for ASYNC.
>
> You lost me completely here.
>
> You backup an Oracle database using rman. Best backups are done to
> disk since they go much faster than backups to tape and if you have to
> use the backup recovery from disk versus tape is much faster.
>

RMAN is ONE way of backing up an Oracle database, but it's not the only
way and not necessarily the best way, depending on the Oracle version.
It would, however, be the preferred way on nearly all current versions.

I think what the OP was asking about was database-agnostic terminology
rather than Oracle-specific terms. If you need to work with a bunch of
network engineers, talking about RMAN is more than likely going to be
meaningless to them.

With respect to the OP's other questions, I'm not sure I can give an
answer. That's partly because much of what you're asking depends on
site-specific variables. What latency is acceptable where you are may
not be acceptable where I am. The size of the databases, the type of
application and how fast your data changes, the costs of your local and
remote storage, the types of storage available and their support for
tiered storage, snapshots etc. all have a bearing on what you can do,
what you should do and what you can afford to do. There is no standard
formula that can be provided. It's a difficult question, and one where
I know enough to know I don't know enough!

Based on experience where I am at present, we have been seeing around a
60% increase in data storage requirements per year (not just Oracle
DBs). We have had to implement a tiered storage solution and vary
backup frequency by storage tier, i.e. more frequent backups for the
very dynamic data and less frequent backups for the data on the slower,
more static tiers (actually, it's not that simple - the formula that
determines backup frequency also has to take factors like risk impact,
business impact and data value into account).
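
Purely as an illustration of the sort of weighting involved (the
weights, thresholds and intervals here are invented, not the formula we
actually use):

    # Toy backup-frequency scoring. Each input is a 0-1 score; higher
    # means more volatile or more critical. Weights and thresholds are
    # invented for illustration only.

    def backup_interval_hours(change_rate, risk_impact, business_impact, data_value):
        score = (0.4 * change_rate + 0.3 * risk_impact
                 + 0.2 * business_impact + 0.1 * data_value)
        if score > 0.7:
            return 4      # very hot data: several backups a day
        if score > 0.4:
            return 24     # daily
        return 168        # weekly for the slow, static tiers

    print(backup_interval_hours(0.9, 0.8, 0.7, 0.6))  # -> 4
    print(backup_interval_hours(0.2, 0.3, 0.2, 0.4))  # -> 168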
The backup system we implemented in 2004 only lasted half its projected
lifetime because it wasn't fast enough to handle the amount of data
being backed up. We could have extended it slightly by using more
incremental backups and compressed backups, but of course the downside
of doing this is longer restores, which you need to consider from a DR
and business continuity perspective. Our failure was in not adequately
predicting data growth: we estimated about 30% growth per year, which
turned out to be about half of what it really was. In our defence, much
of the growth was due to changes in business processes and new services
that we were not aware of at the time of the estimate. The real failure
was that the projects which implemented those new services and changes
did not incorporate backup into their planning.
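
The compounding is what catches you out. A quick sketch (the starting
size is illustrative; the 30% and 60% rates are the ones mentioned
above):

    # How a 30% vs 60% annual growth assumption diverges over a backup
    # system's planned life. Starting size is illustrative.

    start_tb = 10.0
    for year in range(1, 7):
        planned = start_tb * 1.3 ** year   # what was budgeted for
        actual = start_tb * 1.6 ** year    # roughly what was seen
        print(f"year {year}: planned {planned:6.1f} TB, actual {actual:6.1f} TB")

By year five or six the actual figure is roughly three times the planned
one, which is exactly how a system sized for its projected lifetime ends
up saturated at the halfway mark.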


To deal with things like latency, temporary network outages etc. and to
handle both backup and restore requirements, we run RMAN backups to
fairly fast SAN storage and take snapshots of that backup onto slower
local storage. The snapshots are then moved over a 10 Gbit network
connection to our remote backup system, onto local storage attached to
a tape backup system with a jukebox holding 8 high-speed tape drives.
Each week, tapes are moved to a third off-site location with a
fireproof safe. The jukebox can be increased in size and, given current
data growth, I expect it will need to be before the system is replaced
entirely with whatever the better technology is at the time.

More than likely, your best solution will not be found working just
with DBAs or just with network engineers. You need to find a solution
working across all three areas: the DBAs, the network engineers and
someone whose area of expertise is the low-level management and
administration of backups. It is a rare individual who can be up to
speed on all three if the requirements are at all demanding or complex.
The two things I'd keep in mind are, first, that backup is only half
the story - a backup solution is only as good as the restoration
process it supports, and there is no point backing up if you cannot
restore. The second, and the one that is ignored or overlooked more
often than any other, is that you cannot be certain about your backups
unless you do regular bare-metal restores. This is rarely done because
of the time it takes and because you need duplicate environments to do
it without impacting core business, but it has to be the single most
common point of failure in backup systems.

Tim

--
tcross (at) rapttech dot com dot au