From: Peter Olcott on

"Stefan Monnier" <monnier(a)iro.umontreal.ca> wrote in message
news:jwv1vehs3lr.fsf-monnier+comp.unix.programmer(a)gnu.org...
>> The first process is a web server that has been adapted so that it can
>> directly interface with four OCR processes or one OCR process with four
>> threads.
>
> From your description, I just can't figure out how you get to
> a conclusion that you need:
>
>   Ultimately what I am looking for is a way to provide absolute
>   priority to one kind of job over three other kinds of jobs.
>
> or that
>
>   The remaining three will have equal priority to each other. I want
>   the high priority process to get about 80% of the CPU time available
>   to the four processes, and the remaining three to share the
>   remaining 20%.
>
> I don't mean to say that the end behavior shouldn't be how you describe
> it, but that these aren't the real constraints but their consequence.
> If you think of the actual constraints that you're trying to solve
> you'll probably find it easier to get to a solution. Among other
> things, the kind of directives you need to give to the OS might be
> closer to the higher-level constraints than to the lower-level
> consequence described in terms of CPU percentage.
>

I want one class of jobs to be completed in 500 ms from a
web application including all the HTTP connection and
transport time. This use of the technology must, as much as
possible, act exactly as if the software were installed
directly on the user's machine. There is a certain set of
customers for whom > 500 ms makes use of this technology
infeasible. There are many aspects of this goal that are
completely out of my control. These jobs are from paying
customers and are within a size threshold.

Another set of jobs is provided for publicity purposes only
and at no cost to the user. These jobs are, as much as
possible, to take zero resources from the above high priority
jobs. I want nothing at all on my end to even slightly
impact the 500 ms goal.

I have another set of jobs that take a very long time; they
too are only to be completed without taking any resources
from the 500 ms high priority jobs.

The last set of jobs consists of large jobs from paying
customers. They are also to be provided without taking any
time at all from the first jobs.

To sum it all up: one set of jobs is to be provided (as much
as possible) with a 500 ms response time, and for the
remaining sets of jobs a 24 hour turn-around is good enough.
I do want to provide these jobs with the fastest response
time that can be provided without impacting the high
priority jobs at all.

> Also none of this sounds like any kind of strong real-time constraints:
> you may think of it as real-time, but really all you want to do is
> probably to minimize response latency. So I'd attack the problem in
> a very pragmatic manner: first try it out without any tweaking, look at
> the result and if you don't like it then try to improve it by tweaking
> scheduler options such as nice settings (always a good start since
> they're very easy to set).
>
>
> Stefan
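
A minimal sketch of the nice-settings suggestion (illustration only; the
file name and the nice value 19 are assumptions, not anything specified
in this thread): each low-priority OCR process could simply lower its
own priority when it starts, leaving the 500 ms process at the default
priority of 0.

    /* low_priority_worker.cc -- hypothetical startup code for one of
       the three low-priority OCR processes.  Lowering one's own
       priority needs no special privileges; only raising it (a
       negative nice value) requires root. */
    #include <sys/resource.h>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        /* 19 is the weakest scheduling priority; the high-priority
           OCR process just keeps the default of 0. */
        if (setpriority(PRIO_PROCESS, 0, 19) == -1) {
            std::fprintf(stderr, "setpriority: %s\n",
                         std::strerror(errno));
            return 1;
        }
        /* ... run the normal OCR job loop here ... */
        return 0;
    }

Note that nice only biases the scheduler; it gives neither a strict
80%/20% split nor absolute priority, which is exactly Stefan's point
about stating the real constraints rather than the CPU percentages.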

My goal of making an online application provide the response
time of an application installed directly on the user's
computer is already close enough to impossible that I know
for sure that even if my system is infallibly as fast as
possible, it will at best only be marginally good enough.


From: Keith Thompson on
"Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:
> "Ian Collins" <ian-news(a)hotmail.com> wrote in message
> news:82moteF126U7(a)mid.individual.net...
> > On 04/15/10 08:04 AM, Peter Olcott wrote:
> >> "Keith Thompson"<kst-u(a)mib.org> wrote in message
> >> news:lnpr21q3rc.fsf(a)nuthaus.mib.org...
[...]
> >>> Or when you post a followup you can copy the initial article
> >>> into a decent text editor, compose it there (adding proper "> "
> >>> prefixes and so forth if necessary), and then copy it back to OE.
> >>> Yes, it's some extra work, and no, ideally you shouldn't have to
> >>> do it, but the alternative is to continue posting as you have been
> >>> and imposing that cost on the rest of us.
> >>
> >> Exactly what cost is imposed on anyone else here?
> >
> > Read the truncated mess you have just posted! No client
> > worth using mucks up lines like that.
>
> So how does Thunderbird (or whatever you use) do it?

I don't know. Try it and find out, or ask in news.software.readers.

Personally I use Gnus under Emacs. It doesn't wrap long lines unless
I tell it to, and it has a simple command that wraps a paragraph
while maintaining the quoting characters. Or sometimes I filter
text through "fmt" with various options if I want more control
over the layout.
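
(For example, GNU fmt's "fmt -w 72" rewraps its input at 72 columns;
the exact options differ a little between GNU and BSD versions.)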

> Personally I use Gnus under Emacs. It doesn't wrap long lines
> unless I tell it to, and it has a simple command that wraps a
> paragraph while maintaining the quoting characters. Or sometimes
> I filter text through "fmt" with various options.

> Personally I use Gnus under Emacs.
> It doesn't wrap long lines unless
> I tell it to, and it has a simple
> command that wraps a paragraph while
> maintaining the quoting characters.
> Or sometimes I filter text through
> "fmt" with various options.

--
Keith Thompson (The_Other_Keith) kst-u(a)mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
From: David Given on
On 15/04/10 14:13, Peter Olcott wrote:
[...]
> I want one class of jobs to be completed in 500 ms from a
> web application including all the HTTP connection and
> transport time.

This is impossible. You will *never* be able to achieve this, for the
simple reason that you have no control over the network connection. What
if the customer only has a low-bandwidth network connection? What if a
router fails? What if random simple network congestion causes packet
loss somewhere in the backbone?

It's important to remember that TCP/IP is *not* real time, and therefore
neither is the web. It works on a best effort basis only. As such any
attempt to do real time work using HTTP is fundamentally doomed.

In addition, you've said earlier that you expect to be processing about
100 requests per second. If you're going to handle them serially, this
means you can spend no more than 10ms per request! And if you're going
to handle them concurrently, taking 500ms of wall-clock time for each
request, then you're going to have 50 OCR sessions running at once! Is
your software capable of doing this? If not, then what you're asking for
is simply impossible, and you're going to have to go back to the people
setting the requirements and say so.
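
(That 50 is just arrival rate times time in the system: 100 requests/s
× 0.5 s per request = 50 requests in flight at any moment, which is
Little's law.)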

Ignoring these requirements, the way this sort of thing is done in real
life is that you decide ahead of time how many concurrent requests you
can handle at once; you start that many daemon processes, each reading
requests from the same queue; then when requests come in, you queue
them, and the next available daemon will process each one. You handle
different priorities by having multiple queues, each with its own pool
of daemon processes. Naturally, when requests come in faster than the
hardware can handle them, you have to wait --- this setup is not real
time, but then nothing web based ever is.
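
For what it's worth, a minimal sketch of that layout, using C++ threads
as a stand-in for the daemon processes (the names, the pool sizes, and
the two job classes are assumptions made up for the example, not
anything specified in this thread):

    // Illustration only: two queues, each drained by its own fixed
    // pool of workers, as described above.
    #include <chrono>
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    struct JobQueue {
        std::mutex m;
        std::condition_variable cv;
        std::queue<std::function<void()>> jobs;

        void push(std::function<void()> job) {
            {
                std::lock_guard<std::mutex> lock(m);
                jobs.push(std::move(job));
            }
            cv.notify_one();
        }

        std::function<void()> pop() {   // blocks until a job is available
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return !jobs.empty(); });
            std::function<void()> job = std::move(jobs.front());
            jobs.pop();
            return job;
        }
    };

    // One pool per priority class; each worker loops forever taking
    // the next job from its own queue, so a long low-priority job can
    // never delay the high-priority pool.
    void start_pool(JobQueue& q, int workers) {
        for (int i = 0; i < workers; ++i)
            std::thread([&q] { for (;;) q.pop()(); }).detach();
    }

    int main() {
        static JobQueue fast, slow;  // the 500 ms class and everything else
        start_pool(fast, 4);         // pool sizes decided ahead of time
        start_pool(slow, 1);

        // The web front end would push each incoming request onto the
        // queue matching its class; requests arriving faster than the
        // pools can drain them simply wait in the queue.
        fast.push([] { /* run one high-priority OCR request */ });
        slow.push([] { /* run one low-priority / free request */ });

        std::this_thread::sleep_for(std::chrono::seconds(1));  // let the demo jobs run
    }

A real deployment would more likely use separate processes and an
existing queueing mechanism, but the shape is the same: the pool size
bounds concurrency, and everything beyond it waits.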

--
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────

│ "In the beginning was the word.
│ And the word was: Content-type: text/plain" --- Unknown sage
From: Peter Olcott on

"David Given" <dg(a)cowlark.com> wrote in message
news:hq87h6$3b1$1(a)news.eternal-september.org...
> On 15/04/10 14:13, Peter Olcott wrote:
> [...]
>> I want one class of jobs to be completed in 500 ms from a
>> web application including all the HTTP connection and
>> transport time.
>
> This is impossible. You will *never* be able to achieve this, for the

Not true. It is impossible to consistently achieve this all
the time because of many things beyond my control, but it
is quite possible to often achieve this under the right set
of conditions, many of which I do control.

> simple reason that you have no control over the network connection.
> What if the customer only has a low-bandwidth network connection?
> What if a router fails? What if random simple network congestion
> causes packet loss somewhere in the backbone?
>
> It's important to remember that TCP/IP is *not* real time, and
> therefore neither is the web. It works on a best effort basis only.
> As such any attempt to do real time work using HTTP is fundamentally
> doomed.
>
> In addition, you've said earlier that you expect to be processing
> about 100 requests per second. If you're going to handle them
> serially, this means you can spend no more than 10ms per request! And
> if you're going to handle them concurrently, taking 500ms of
> wall-clock time for each request, then you're going to have 50 OCR
> sessions running at once! Is your software capable of doing this? If
> not, then what you're asking for is simply impossible, and you're
> going to have to go back to the people setting the requirements and
> say so.

It takes a maximum of 10 ms for one of these jobs, with
another 90 ms of padding, that gives me 400 ms to receive
one or two 1024 byte packets and return a 1024 byte packet.
It is absolutely positively feasible to send and receive
these tiny quantities of data within 400 ms, under the right
conditions.
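
(The budget breaks down as 10 ms of OCR + 90 ms of padding + 400 ms for
the HTTP round trip = the 500 ms target.)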

>
> Ignoring these requirements, the way this sort of thing is done in
> real life is that you decide ahead of time how many concurrent
> requests you can handle at once; you start that many daemon
> processes, each reading requests from the same queue; then when
> requests come in, you queue them, and the next available daemon will
> process each one. You handle different priorities by having multiple
> queues, each with its own pool of daemon processes. Naturally, when
> requests come in faster than the hardware can handle them, you have
> to wait --- this setup is not real time, but then nothing web based
> ever is.

The reason that I did not want to get into this level of
detail was to avoid endlessly arguing about an answer that I
already have, so that I could focus on the answers that I do
not have.


>
> --
> ┌─── dg(a)cowlark.com ───── http://www.cowlark.com ─────
> │
> │ "In the beginning was the word.
> │ And the word was: Content-type: text/plain" --- Unknown sage


From: David Given on
On 16/04/10 00:59, Peter Olcott wrote:
[...]
> Not true. It is impossible to consistently achieve this all
> the time because of many things beyond my control, but it
> is quite possible to often achieve this under the right set
> of conditions, many of which I do control.

If you cannot achieve it *all the time* then it's not real time. That's
what 'real time' *means*.

What you're describing here is not real time, it's a best-effort system.
Best-effort systems *will* fail to meet their deadlines, and you have to
plan for this. How do you intend to handle things if your system finds
itself in this situation?

It sounds like you have nonsensical requirements, which is a common
symptom of technical requirements being set by non-technical
people --- they'll throw in phrases like 'real time' without any concept
of what they actually mean. I'm afraid this probably means another round
of negotiation with the customer to try and figure out what they
*actually* want, which is a pretty grim job.

[...]
> It takes a maximum of 10 ms for one of these jobs, with
> another 90 ms of padding, that gives me 400 ms to receive
> one or two 1024 byte packets and return a 1024 byte packet.
> It is absolutely positively feasible to send and receive
> these tiny quantities of data within 400 ms, under the right
> conditions.

Of course it is --- but that's irrelevant.

If you want to process 100 requests per second, then you have to spend
*at most* 10ms of CPU time on each request, because a single-core
machine only gets 1000ms of CPU time per second! Is your library
actually *capable* of handling a request in 10ms?
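
(100 requests/s × 10 ms/request = 1000 ms of CPU per second, i.e. all
of one core, with nothing left over for the web server, the kernel, or
the other job classes.)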

--
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────

│ "In the beginning was the word.
│ And the word was: Content-type: text/plain" --- Unknown sage