From: Steve Howard on 29 Jun 2010 12:21
On Jun 28, 1:15 pm, joel garry <joel-ga...(a)home.com> wrote:
> On Jun 28, 6:37 am, Steve Howard <stevedhow...(a)gmail.com> wrote:
> > On Jun 26, 2:05 am, "Jonathan Lewis" <jonat...(a)jlcomp.demon.co.uk>
> > wrote:
> > This doesn't seem to help, though. My pre-scan job never has an
> > issue, but I run one-hour windows for the range to scan.
> > A little more background. This is a transaction history table of
> > sorts. It is partitioned by month, and records are only added, never
> > updated.
> > SQL> desc big_table
> >  Name                                      Null?    Type
> >  ----------------------------------------- -------- ----------------------------
> >  PK                                        NOT NULL NUMBER
> >  FK                                        NOT NULL NUMBER
> >  COL3                                      NOT NULL NUMBER(3)
> >  CREATE_TIME                                        TIMESTAMP(6)
> >  COL5                                      NOT NULL VARCHAR2(50)
> >  COL6                                               VARCHAR2(50)
> >  COL7                                               XMLTYPE
> > SQL>
> > We query as follows:
> > SELECT concatenated_xml_string_of_columns_from_big_table,
> > a.xml_col.getClobVal()
> > FROM big_table a
> > WHERE create_time between trunc(sysdate) + (:1 / 1440) and
> > trunc(sysdate) + (:2 / 1440)
> > where the window is three hours. This does a range scan on the
> > create_time column, which is good as it is by far the most selective
> > filter.
> > The selected records are retrieved in PL/SQL (no bulk collect), and
> > run through a few more XML tagging operations and written to a file.
> > They are then propagated to a mainframe for additional business usage
> > to which I am not privy.
> > If the query runs fast enough (less than 30 minutes or so), we don't
> > see the issue. If it starts to get slow for whatever reason, we
> > start reading tons of undo.
> Something old but new to me I learned today (from Lob retention not
> changing when undo_retention is changed [ID 563470.1]):
> "...It is assumed that when UNDO_RETENTION is changed the lobs
> connected to that retention are also changed which is not the case .
> If a lob is modified from RETENTION to PCTVERSION and back to
> RETENTION again then the lob retention is updated. ..."
> Of course I have no idea if it is related to your problem, unless you
> say something like you've changed your undo retention from 30 minutes
> or so and didn't know about this...
> A bit more of a reach, maybe Bug 2931779 - False ORA-1555 accessing
> "cache read" LOBs in RAC [ID 2931779.8] or related has reanimated in
> some form.
> I guess you need to start tracing and digging deep to figure this one
> out. Those mysterious xml packages may be doing something strange...
> (I've run into 3rd party app code at times that does stuff like update
> and rollback, unexpectedly).
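The quoted query bounds its range scan with `trunc(sysdate) + (:n / 1440)`, i.e. the bind values are minutes past midnight (a day is 1440 minutes). A minimal sketch of that arithmetic (editor's illustration in Python; the bind values 360/540 and the fixed "now" are hypothetical):

```python
from datetime import datetime, timedelta

def window_bounds(minutes_start, minutes_end, now=None):
    """Mimic trunc(sysdate) + (:n / 1440): trunc(sysdate) is midnight
    today, and :n / 1440 is a fraction of a day, i.e. :n minutes."""
    now = now or datetime.now()
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)  # trunc(sysdate)
    return (midnight + timedelta(minutes=minutes_start),
            midnight + timedelta(minutes=minutes_end))

# A three-hour window from 06:00 to 09:00 would be bound as :1=360, :2=540
lo, hi = window_bounds(360, 540, now=datetime(2010, 6, 29, 12, 21))
print(lo, hi)  # 2010-06-29 06:00:00 2010-06-29 09:00:00
```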
I have often wondered about the black magic of XMLTYPEs as well. I can
say that by using LogMiner as well as dba_hist_sqltext I have verified
there are zero updates to these rows after they are inserted.
I am watching it happen as I type this, as I fend off arguments of
Oracle being "expensive, slow, and bloated"...yeah, a real fun day.
The session I am watching is selecting records inserted between 6AM
and 9AM this morning. It has been running for about 80 minutes, and
has fetched a total of about 250,000 rows. For the last 30 minutes it
has fetched fewer than 2,000 while incrementing "transaction tables
consistent reads - undo records applied" by almost 2 million. The
session also shows 256 "transaction tables consistent read rollbacks".
Once again, this is a range scan of the create_time index. It will
eventually (if it doesn't ORA-01555) fetch about 300,000 rows in total.
I am at a loss.
From: Steve Howard on 29 Jun 2010 12:53
On Jun 25, 12:37 pm, "Jonathan Lewis" <jonat...(a)jlcomp.demon.co.uk>
wrote:
> This happens when your query hits a block that was updated
> "a long time" in the past by a transaction that has committed
> but not been cleaned out.
> Your query can tell that it is a committed transaction because
> the ITL entry for the transaction points to a transaction table slot
> (in an undo segment header block) that has been re-used for
> a newer transaction. (Part of the transaction id is the "transaction
> sequence number", which is counting the number of times a transaction
> slot has been used).
> Your query therefore needs to know WHEN the transaction committed,
> so that it can decide whether or not it's supposed to see the new version
> or the old version of the row. (If the transaction committed before the
> query started then the query doesn't need to know exactly when the
> transaction committed; if it committed after the query started then the
> change has to be rolled back - and it's possible that the "snapshot too
> old" is the result of the data rollback rather than the transaction
> table rollback.)
> To find out when the transaction committed, your query copies the undo
> segment header block and starts rolling it back. The number of times this
> happens is recorded as:
> "transaction tables consistent read rollbacks"
> To perform the rollback, your query will read the transaction control block
> (another part of the undo segment header) which contains a number of
> details - including the first undo block address of the most recent
> transaction to use that undo segment header. This undo block address will
> hold the undo record of that transaction *** - which includes information
> about the state of the transaction control block. By using this undo
> record your query can take the undo segment header block backwards in time
> by one step - at which point it reads the older version of the transaction
> control block and repeats the process until it reaches the point where the
> transaction slot it is interested in has been taken back to the correct
> sequence number (or a step has taken the undo segment header block back to
> a point in time before the start of the query). Each record it reads in
> this process is counted in:
> "transaction tables consistent reads - undo records applied"
> (*** This is why the block you dumped had nothing to do with your table.)
> The trouble with your requirement is that we really need to do a backwards
> tablescan - because it's probably the data near the end of the table that
> is changing while you are "wasting" time reading all the data from the
> start of the table.
> Unfortunately there is no such hint - but if it's really critical, you
> could write some code to scan the table one extent at a time in reverse
> order.
> Jonathan Lewis
> http://jonathanlewis.wordpress.com
> "Steve Howard" <stevedhow...(a)gmail.com> wrote in message
> > Hi All,
> > 10.2.0.4 three node cluster EE on SLES 10
> > Can someone give me a good definition of *exactly* what this means and
> > what causes it (mainly the latter). The documentation is not very
> > descriptive, with "Number of undo records applied to transaction
> > tables that have been rolled back for consistent read purposes".
> > It sounds like undo on undo, but we don't have any larger number for
> > rollbacks (or commits) when this happens than we do at any other time.
> > We have been plagued by this for over a year, and after multiple SRs
> > where the support analyst just reads us the documentation, I am at my
> > wits' end.
> > We have a fairly large table (almost 1TB with about 300 million rows)
> > with a large XMLTYPE column. Once a day, a job scans this table for
> > records added that day for propagation to an external system. The
> > longer the query runs, the more we see the session doing single block
> > reads against the undo tablespace, with the stat in the subject
> > climbing into the millions. Eventually, after several hours, an
> > ORA-01555 is thrown.
> > I even grabbed one of the P1/P2 parameters for the session querying
> > and dumped the undo block in the P2 value. While it was a second or
> > two after the event was posted, the block itself didn't even contain
> > any references to the table being queried!
> > Can anyone shed some light?
> > Thanks,
> > Steve
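The transaction-table rollback Jonathan describes above can be modeled as a loop: each undo record applied takes the undo segment header back one step until the slot of interest is back at the sequence number the ITL entry points to. A toy model (editor's sketch, not Oracle internals) of why a heavily re-used slot drives the statistic into the millions:

```python
# Toy model of the rollback loop described above: one "undo record applied"
# per step taking the transaction table back toward the sequence number
# recorded in the ITL entry. The counter mirrors the statistic
# "transaction tables consistent reads - undo records applied".

def rollback_txn_table(current_seq, wanted_seq):
    """Count the undo records applied to take a transaction table slot
    from its current sequence number back to the one the query needs."""
    undo_records_applied = 0
    seq = current_seq
    while seq > wanted_seq:      # slot was re-used after our transaction committed
        seq -= 1                 # one undo record steps the header back once
        undo_records_applied += 1
    return undo_records_applied

# A slot re-used ~2 million times since the transaction of interest would
# explain a statistic in the millions for a single consistent get.
print(rollback_txn_table(current_seq=2_000_123, wanted_seq=123))  # 2000000
```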
I am just re-reading your post.
<<The trouble with your requirement is that we really need to do a
backwards tablescan - because it's probably the data near the end of
the table that is changing>>
This is true.
<<while you are "wasting" time reading all the data from the start of
the table. Unfortunately there is no such hint - but if it's really
critical, you could write some code to scan the table one extent at a
time in reverse order.>>
Are you suggesting that if I were to scan the *newest* rows prior to
the "real" job reading older rows, that may help? If so, that would
be fairly easy, as I can identify them by create_time.
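Jonathan's "one extent at a time in reverse order" idea could be driven by reading the table's extent list (e.g. from dba_extents) and visiting the newest extents first via ROWID ranges. An editor's sketch in Python that only generates the SQL text: DBMS_ROWID.ROWID_CREATE and the dba_extents columns are real Oracle interfaces, but the extent numbers, data_object_id, and the query itself are hypothetical.

```python
# Sketch of a reverse-extent driver: given the table's extents in
# allocation order (file_id, block_id, blocks) as read from dba_extents,
# emit them newest-first and build a ROWID-range predicate for each,
# so the most recently allocated (hottest) data is scanned first.

def reverse_extent_order(extents):
    """extents: list of (file_id, block_id, blocks) in allocation order.
    Returns them most-recently-allocated first."""
    return list(reversed(extents))

def rowid_range_predicate(file_id, block_id, blocks, data_object_id):
    # DBMS_ROWID.ROWID_CREATE(rowid_type=1, object#, relative_fno, block#, row#)
    lo = f"dbms_rowid.rowid_create(1, {data_object_id}, {file_id}, {block_id}, 0)"
    hi = f"dbms_rowid.rowid_create(1, {data_object_id}, {file_id}, {block_id + blocks - 1}, 32767)"
    return f"rowid BETWEEN {lo} AND {hi}"

# Hypothetical extent list and object id, oldest..newest
extents = [(4, 128, 1024), (4, 1152, 1024), (5, 8, 1024)]
for fid, bid, nblocks in reverse_extent_order(extents):
    print(f"SELECT ... FROM big_table WHERE {rowid_range_predicate(fid, bid, nblocks, 51234)}")
```

A pre-scan built this way (or simply one ordered by create_time descending) would also perform the delayed block cleanout on the newest blocks before the "real" job reads them, which is the point of the question above.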