From: Peter Olcott on

"David Given" <dg(a)cowlark.com> wrote in message
news:hq8cb6$u5u$1(a)news.eternal-september.org...
> On 16/04/10 00:59, Peter Olcott wrote:
> [...]
>> Not true. It is impossible to consistently achieve this all the
>> time because of many things beyond my control, but, it is quite
>> possible to often achieve this under the right set of conditions,
>> many of which I do control.
>
> If you cannot achieve it *all the time* then it's not real time.
> That's what 'real time' *means*.
>
> What you're describing here is not real time, it's a best-effort
> system. Best-effort systems *will* fail to meet their deadlines, and
> you have to plan for this. How do you intend to handle things if your
> system finds itself in this situation?

If I can achieve an average response time <= 500 ms, then my
system is still feasible for this specialized use.

>
> It sounds like you have nonsensical requirements, which is a common
> symptom of having technical requirements being set by non-technical
> people --- they'll throw in phrases like 'real time' without any
> concept of what they actually mean. I'm afraid this probably means
> another round of negotiation with the customer to try and figure out
> what they *actually* want, which is a pretty grim job.

It was a simple short-hand to express my needs without
having to get into an endless debate over subtle nuances of
meaning. By failing to provide these details previously I
was able to avoid wasting my time arguing things that I
already know.

>
> [...]
>> It takes a maximum of 10 ms for one of these jobs; with another
>> 90 ms of padding, that gives me 400 ms to receive one or two
>> 1024-byte packets and return a 1024-byte packet. It is absolutely
>> positively feasible to send and receive these tiny quantities of
>> data within 400 ms, under the right conditions.
>
> Of course it is --- but that's irrelevant.

Sure it is; it shows that my plan may work: a web
application that provides desktop response time. I could use
this same model to provide much faster service on an intranet
web server.

> If you want to process 100 requests per second, then you have to
> spend *at most* 10ms of CPU time on each request, because a
> single-core machine only gets 1000ms of CPU time per second! Is your
> library actually *capable* of handling a request in 10ms?

Actually, because of a nuance of queuing theory that I was
unaware of until recently, I can only process about 80% of
that many. This is the whole [as lambda approaches mu, queue
length approaches infinity] thing. I only need to process seven
paying transactions per minute to produce my most recent
salary. I want to design the system with as much capacity as
possible, anyway. The free jobs may take up as much as 98%
of the total workload.
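The queuing effect alluded to here (backlog blowing up as the arrival rate lambda approaches the service rate mu) can be made concrete with the standard M/M/1 formulas. A minimal sketch, assuming a server with a 10 ms service time (mu = 100 jobs/second); the numbers are illustrative, not from the thread:

```python
def mm1_stats(arrival_rate, service_rate):
    """Return (utilization, mean jobs in system, mean response time in s)
    for an M/M/1 queue, given arrival and service rates in jobs/second."""
    rho = arrival_rate / service_rate              # utilization: lambda / mu
    if rho >= 1.0:
        raise ValueError("queue is unstable: lambda must be < mu")
    n_mean = rho / (1.0 - rho)                     # mean jobs in the system
    t_mean = 1.0 / (service_rate - arrival_rate)   # mean response time (s)
    return rho, n_mean, t_mean

# A server taking 10 ms per job can service mu = 100 jobs/second.
for offered in (50, 80, 95, 99):
    rho, n, t = mm1_stats(offered, 100)
    print(f"lambda={offered:3d}/s  rho={rho:.2f}  "
          f"jobs-in-system={n:5.1f}  mean response={t * 1000:6.1f} ms")
```

At 80% utilization the mean backlog is already 4 jobs (50 ms mean response); at 99% it is 99 jobs and a full second of latency, which is why capacity planning stops well short of mu.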

>
> --
> ┌─── dg(a)cowlark.com ───── http://www.cowlark.com ─────
> │
> │ "In the beginning was the word.
> │ And the word was: Content-type: text/plain" --- Unknown sage


From: David Given on
On 16/04/10 02:13, Peter Olcott wrote:
[...]
> It was a simple short-hand to express my needs without
> having to get into an endless debate over subtle nuances of
> meaning. By failing to provide these details previously I
> was able to avoid wasting my time arguing things that I
> already know.

They're not subtle. Project requirements are at the absolute core of
software development. Project requirements tell you what you're trying
to achieve, and more importantly, they tell you when you've achieved it.
Without a win condition, you can't tell whether you've finished or not.

This is *vitally important*. If you go into a project without an
achievable goal, then no matter how much work you put into it, no matter
how clever your code, *you will fail*.

[...]
> I only need to process seven
> paying transactions per minute to produce my most recent
> salary. I want to design the system with as much capacity as
> possible, anyway.

But earlier you said you wanted 100 transactions per second. This is not
the same as 'as much capacity as possible'. Unless you tell me what your
requirements actually are, I cannot give you sensible advice.

If you want to process 100 transactions per second, the laws of physics
decree that you cannot take more than 10ms per transaction. You still
haven't told me whether your library can actually do this. 10ms is not a
lot of time. If it cannot, then you're not going to get 100 transactions
per second, you're going to get less. If you really do have a
requirement to process 100 transactions per second, then you will not be
able to meet your requirements with your current setup, and therefore
you will fail.
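The single-core budget argument above is plain arithmetic, but it is worth writing down as a sanity check. A hypothetical helper (the function names are illustrative, not from the thread):

```python
def cpu_budget_ms(target_rate_per_s, cores=1):
    """Maximum CPU time (ms) each request may use to sustain the target
    rate, given that each core supplies 1000 ms of CPU time per second."""
    return 1000.0 * cores / target_rate_per_s

def max_rate_per_s(service_time_ms, cores=1):
    """Highest request rate a given per-request CPU cost can sustain."""
    return 1000.0 * cores / service_time_ms

print(cpu_budget_ms(100))   # 10.0 ms per request at 100 req/s
print(max_rate_per_s(10))   # 100.0 req/s if each request costs 10 ms
print(max_rate_per_s(25))   # 40.0 req/s if each request costs 25 ms
```

If the OCR library actually needs, say, 25 ms per request, the same core tops out at 40 requests/second: exactly the "you're going to get less" outcome described above.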

It sounds very much as if you need to go back to whoever set you these
requirements and get them to produce some *proper* requirements. Do they
really want to specify a minimum transaction rate? (Most likely not.) If
so, is this rate genuinely realistic given the amount of projected
demand for your web app? Is your hardware genuinely capable of handling
this demand?

Remember that 100 transactions per second is pushing the limit of what
your average webserver and CGI can do at the best of times, disregarding
the OCR backend. See www.acme.com/software/thttpd/benchmarks.html.

--
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────

│ "In the beginning was the word.
│ And the word was: Content-type: text/plain" --- Unknown sage
From: Stefan Monnier on
> To sum it all up. One set of jobs is to be provided (as much
> as possible) with a 500 ms response time and the remaining
> sets of jobs a 24 hour turn-around is good enough. I do want
> to provide these jobs with the very fastest response time
> that can be provided without impacting the high priority
> jobs at all.

I see, 3 classes of jobs, one of which (the "1st class") should really
suffer as little disruption as possible. Handling the 2 "best effort"
(or "slow") ones will be the easy part in the sense that it's what Unix
schedulers do all the time, so you may need to twiddle with the niceness
to favor the paying slow ones over the free ones but that's about it.

OTOH for the 1st class of jobs, you'll have to work harder. There are
several problems to solve:
- CPU scheduling: you may want to try one of the soft-realtime
scheduling policies suggested by someone else for that. This will
ensure those jobs always get the CPU in preference to the other jobs.
But beware: this kind of scheduling has *very* high priority, i.e. not
just higher than your other jobs but also higher than most of the OS's
system processes, so you may get into trouble at high loads.
- Scheduling within the web-server: if you have only one web-server, it
may prove difficult to make sure that the many "slow" clients don't
slow down the "1st class" clients. I have no experience there, so I'll
just stop (maybe you can just run 2 web servers).
- Scheduling other resources: if you get many many "slow" requests, even
if they don't get much CPU time, they may eat up your RAM and cause
the machine to swap/thrash, so make sure your web-server is configured
to limit the number of "slow" requests that are being serviced at any
given time.
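The last point, capping how many "slow" requests are serviced at once so they cannot eat RAM and thrash the machine, can be sketched as a simple admission-control gate. This is an illustrative sketch, not code from the thread; the names and the limit of 4 are assumptions, and niceness for the slow classes would be set separately (e.g. via nice/renice on the worker processes):

```python
import threading

# At most 4 "slow" (best-effort) jobs may be in flight at once; the
# limit would be tuned to how much RAM each job consumes.
SLOW_SLOTS = threading.BoundedSemaphore(4)

def handle_request(job, is_first_class):
    """Run a job. 1st-class (paying, fast) jobs are always admitted;
    slow jobs are shed when all slots are taken, instead of piling up."""
    if is_first_class:
        return job()
    if not SLOW_SLOTS.acquire(blocking=False):
        return "busy: retry later"   # shed load rather than swap/thrash
    try:
        return job()
    finally:
        SLOW_SLOTS.release()
```

Rejected slow requests need not be dropped: they can be parked on the 24-hour batch queue the thread describes, preserving the "fastest response that doesn't impact the high-priority jobs" goal.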

> My goal of making an online application provide the response time of
> an application installed directly on the user's computer is already
> close enough to impossible that I know for sure that even if my
> system is infallibly perfectly as fast as possible, that this will at
> best only be marginally good enough.

Computer scientists are accustomed to doing the impossible on
a daily basis.


Stefan
From: Stefan Monnier on
>> Not true. It is impossible to consistently achieve this all
>> the time because of many things beyond my control, but, it
>> is quite possible to often achieve this under the right set
>> of conditions, many of which I do control.
> If you cannot achieve it *all the time* then it's not real time. That's
> what 'real time' *means*.

Who cares. He didn't say "real-time" in his description of his problem.
His users won't crash&burn if the answer comes a bit later: They'll just
find the service occasionally slow.


Stefan
From: Peter Olcott on

"David Given" <dg(a)cowlark.com> wrote in message
news:hqaapd$g73$1(a)news.eternal-september.org...
> On 16/04/10 02:13, Peter Olcott wrote:
> [...]
>> It was a simple short-hand to express my needs without having to
>> get into an endless debate over subtle nuances of meaning. By
>> failing to provide these details previously I was able to avoid
>> wasting my time arguing things that I already know.
>
> They're not subtle. Project requirements are at the absolute core of
> software development. Project requirements tell you what you're
> trying to achieve, and more importantly, they tell you when you've
> achieved it. Without a win condition, you can't tell whether you've
> finished or not.
>
> This is *vitally important*. If you go into a project without an
> achievable goal, then no matter how much work you put into it, no
> matter how clever your code, *you will fail*.
>
> [...]
>> I only need to process seven paying transactions per minute to
>> produce my most recent salary. I want to design the system with as
>> much capacity as possible, anyway.
>
> But earlier you said you wanted 100 transactions per second. This is
> not the same as 'as much capacity as possible'. Unless you tell me
> what your requirements actually are, I cannot give you sensible
> advice.
>
> If you want to process 100 transactions per second, the laws of
> physics decree that you cannot take more than 10ms per transaction.
> You still

Actually, Jerry Coffin enlightened me about queuing theory,
so the limit is closer to 80 per second.

> haven't told me whether your library can actually do this. 10ms is
> not a lot of time. If it cannot, then you're not going to get 100
> transactions per second, you're going to get less. If you really do
> have a requirement to process 100 transactions per second, then you
> will not be

My goal is as many as possible; I originally estimated 100
per second.

> able to meet your requirements with your current setup, and
> therefore you will fail.
>
> It sounds very much as if you need to go back to whoever set you
> these requirements and get them to produce some *proper*
> requirements. Do they really want to specify a minimum transaction
> rate? (Most likely not.) If so, is this rate genuinely realistic
> given the amount of projected demand for your web app? Is your
> hardware genuinely capable of handling this demand?
>
> Remember that 100 transactions per second is pushing the limit of
> what your average webserver and CGI can do at the best of times,
> disregarding the OCR backend. See
> www.acme.com/software/thttpd/benchmarks.html.

That is one of the reasons that I abandoned CGI, and am
instead going to modify the source code of a web server.

>
> --
> ┌─── dg(a)cowlark.com ───── http://www.cowlark.com ─────
> │
> │ "In the beginning was the word.
> │ And the word was: Content-type: text/plain" --- Unknown sage