From: Robert Klemme on

Hi all,

we have an Oracle 10 database (version details below) on Linux (RHEL 4). When
connecting via JDBC we get intermittent ORA-12519 errors (reflected as
TNS-12519 in listener.log). sqlldr also has a problem, although at the moment
I can't determine whether it's the same one (I'm guessing it is, because they
happen at about the same time).

Research on the web revealed that too low a value for "processes" might be
the reason. (The other possible cause I found was mismatched versions of DB
and client, but that is not the case here.) So we increased the DB parameter
"parallel_max_servers" to 200. Since the error still showed up, we went up
to 400. It had been quiet since the last change of this parameter on Tuesday,
but a few minutes ago I got an email notification that the error occurred
again.

I'd rather not increase the value by trial and error, since we have only 36
sessions on the database right now and there seems to be a discrepancy
between the parameter "processes" (at 150 now; the value is derived from
"parallel_max_servers") and the actual number of processes. Also, the system
is not heavily utilized and there are enough free resources (CPU-wise and
memory-wise). So I'd like to find out what is causing this error before I
take further measures.
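To pin down whether we are really hitting the "processes" ceiling, I could
query the instance's resource high-water marks; this is just a sketch, run
as a DBA user in sqlplus:

```sql
-- v$resource_limit records current and peak usage against each limit;
-- if max_utilization reaches limit_value for "processes", the instance
-- tells the listener it is full and new connections fail with TNS-12519.
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');
```

If max_utilization is well below limit_value even around the failure times,
the cause is probably elsewhere.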

I checked the alert log, but there were no significant entries. I checked
the job scheduling to see whether there might be a job that eats up
connections, but no. I guess switching on some trace might be helpful, but
at the moment I have no idea which one would be appropriate. Any ideas?
Thanks for any insights!

Kind regards

robert



PRODUCT
----------------------------------------------------------------
VERSION
----------------------------------------------------------------
NLSRTL
10.2.0.1.0

Oracle Database 10g Enterprise Edition
10.2.0.1.0

PL/SQL
10.2.0.1.0

TNS for Linux:
10.2.0.1.0


[root] r1:/usr/local/oracle/product/10.2.0: uname -a
Linux r1.webwasher.com 2.6.9-22.0.1.ELsmp #1 SMP Tue Oct 18 18:39:27 EDT
2005 i686 i686 i386 GNU/Linux

From: Frank van Bortel on
Robert Klemme wrote:
> Hi all,
>
> we have an Oracle 10 database (version details below) on Linux (RHEL 4).
> When connecting via JDBC we get intermittent ORA-12519 errors (reflected
> as TNS-12519 in listener.log). sqlldr also has a problem, although at the
> moment I can't determine whether it's the same one (I'm guessing it is,
> because they happen at about the same time).
>
> Research on the web revealed that too low a value for "processes" might be
> the reason. (The other possible cause I found was mismatched versions of
> DB and client, but that is not the case here.) So we increased the DB
> parameter "parallel_max_servers" to 200. Since the error still showed up,
> we went up to 400. It had been quiet since the last change of this
> parameter on Tuesday, but a few minutes ago I got an email notification
> that the error occurred again.
>
[snip]
12519, 00000, "TNS:no appropriate service handler found"
// *Cause: The listener could not find any available service handlers that
// are appropriate for the client connection.
// *Action: Run "lsnrctl services" to ensure that the instance(s) have
// registered with the listener, and are accepting connections.
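The suggested action is quick to check from the database server (assuming
the default listener):

```shell
# Each service should show a DEDICATED handler in state "ready"; a handler
# in state "blocked" means the instance told the listener it is full.
lsnrctl services
lsnrctl status
```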

And what does the manual say about parallel_max_servers?!?

PARALLEL_MAX_SERVERS

Parameter type Integer

Default value Derived from the values of CPU_COUNT,
PARALLEL_AUTOMATIC_TUNING, and PARALLEL_ADAPTIVE_MULTI_USER

Parameter class Static

Range of values 0 to 3599

Real Application Multiple instances must have
Clusters the same value.

Note:
This parameter applies to parallel execution in exclusive mode as well
as in the Oracle9i Real Application Clusters environment.

PARALLEL_MAX_SERVERS specifies the maximum number of parallel execution
processes and parallel recovery processes for an instance. As demand
increases, Oracle increases the number of processes from the number
created at instance startup up to this value.

If you set this parameter too low, some queries may not have a parallel
execution process available to them during query processing. If you set
it too high, memory resource shortages may occur during peak periods,
which can degrade performance.

Do you really think one has *anything* to do with the other?!?

Instead of searching the web, you should read the manual.
All manuals can be found at http://tahiti.oracle.com
--
Regards,
Frank van Bortel

Top-posting is one way to shut me up...
From: shortcutter@googlemail.com on
Frank van Bortel wrote:
> Robert Klemme wrote:
> > [snip]
> 12519, 00000, "TNS:no appropriate service handler found"
> // *Cause: The listener could not find any available service handlers that
> // are appropriate for the client connection.
> // *Action: Run "lsnrctl services" to ensure that the instance(s) have
> // registered with the listener, and are accepting connections.
>
> And what does the manual say about parallel_max_servers?!?
>
> [snip]
>
> Do you really think one has *anything* to do with the other?!?

Yes. First of all, the setup works normally most of the time, so the
registration has actually taken place (this is true even after the
intermittent failures). Also, the listener is local and the firewall rules
are ok (=> no network problems). Second, it was explained that the error
will occur if "processes" is set too low. Third, the parameter "processes"
is derived from "parallel_max_servers" (indicated by the online docs, and
you cannot change it directly via EM). Fourth, we actually saw an
improvement: before we changed the value, we saw the error more frequently.
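For what it's worth, here is how I checked which parameters were set
explicitly and which are derived defaults (again just a sqlplus sketch):

```sql
-- isdefault = 'FALSE' means the parameter was set explicitly in the
-- spfile/pfile rather than derived by the instance at startup.
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name IN ('processes', 'sessions', 'parallel_max_servers');
```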

> Instead of searching the web, you should read the manual.

How do you know I didn't?

> All manuals can be found at http://tahiti.oracle.com

Yes, I know.

robert

From: bdbafh on
Robert,

How many physical CPUs (and logical, if hyperthreading is enabled) are
in that server?
What type of storage subsystem is supporting the database?
Can you reasonably expect a degree of parallelism greater than 4 on a
dual-CPU box?

If you don't need to support parallelism, I'd suggest turning it off.

Are you sure you didn't hit some resource restriction other than
"processes"? Might you have hit a process limit (memory consumption) or
exhausted a kernel resource?

Check your OS kernel settings (sysctl) and memory allocations.
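For example (keys as on a stock RHEL 4 box; adjust to your setup):

```shell
# Kernel limits that commonly bite Oracle on Linux: semaphore arrays,
# shared memory, and the system-wide file handle limit.
sysctl kernel.sem kernel.shmmax kernel.shmall fs.file-max

# Per-process limits of the account running the database; "max user
# processes" and "open files" are the usual suspects.
ulimit -a
```

Compare the reported values against the limits recommended in the Oracle
installation guide for your platform.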

hth.

-bdbafh

From: Frank van Bortel on
shortcutter(a)googlemail.com wrote:
> Yes. First of all, the setup works normally most of the time, so the
> registration has actually taken place (this is true even after the
> intermittent failures). Also, the listener is local and the firewall
> rules are ok (=> no network problems). Second, it was explained that the
> error will occur if "processes" is set too low. Third, the parameter
> "processes" is derived from "parallel_max_servers" (indicated by the
> online docs, and you cannot change it directly via EM). Fourth, we
> actually saw an improvement: before we changed the value, we saw the
> error more frequently.
>

"Most of the time" ... ok, you investigate then.
Listener been down? DB been down?
Registration takes time, you know.

And if it's local, then why bother with a listener *at all*?!?
And if it's local, what would a firewall have to do with it?
You're local; you shouldn't even see your packets on the network.
Switch to bequeath; it's faster, too.
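A bequeath connect is simply a local connect without a net alias, e.g.
(SID and credentials here are placeholders):

```shell
# No @alias and no listener involved: sqlplus spawns the dedicated
# server process directly via the bequeath adapter.
export ORACLE_SID=ORCL        # placeholder SID
sqlplus scott/tiger           # placeholder credentials
```

Note that the thin JDBC driver always goes through the listener, so this
only helps OCI-based clients and local tools such as sqlplus and sqlldr.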

And if you (still) believe processes could be too low,
then increase that, not your parallel_max_servers.

And your improvement is coincidence, nothing else.

--
Regards,
Frank van Bortel

Top-posting is one way to shut me up...