From: Per Jessen on
Tommy Pham wrote:

> Let's go back to my 1st e-commerce example. The manufacturers list is
> about 3,700. The categories list is about 2,400. The products list
> is right now at 500,000 and expected to be around 750,000. The site
> is only in English. The store owner wants to expand and be I18n:
> Chinese, French, German, Korean, Spanish. You see how big and complex
> that database gets?

No, not really. So you want to add five languages - if your application
is just halfway prepared for multiple languages, that's no big deal
(apart from the pure translation effort), and a database with only 5
million rows is also no big deal. If that is causing you a performance
problem, it is definitely solvable by 1) hardware and 2) database
optimization.
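
To put numbers on it: 750,000 products times six languages is roughly
4.5 million translation rows. A minimal sketch of one way to lay that
out - the table and column names here are only illustrative, not taken
from the actual schema:

  CREATE TABLE product_translations (
      product_id  INT UNSIGNED     NOT NULL,
      language_id TINYINT UNSIGNED NOT NULL,
      name        VARCHAR(255)     NOT NULL,
      description TEXT,
      PRIMARY KEY (product_id, language_id)
  );

With the per-language text kept in a narrow side table like that,
adding a sixth or seventh language is just more rows, not a schema
change.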

> * from the moment the shopper clicks on a link, the response time
> (when the web browser says "Done" in the status bar) is 5 seconds or
> less. Preferably 2-3 seconds. Will be using a stopwatch for the timer.

Yes, 3 seconds used to be the maximum response time for an interactive
application. The web might have moved the goalposts a bit :-)

> Now show me a website that meets those requirements and uses PHP, and
> I'll be glad to support your argument about PHP w/o threads :)

Tommy, you neglected to say anything about the number of concurrent
users, but if you have e.g. 10,000, you will need enough hardware to
run the webserver and the database. A webserver serving 10,000
concurrent clients I would run on multiple boxes with an LVS load
distribution mechanism in front. The 5 million row database is not a
lot storage-wise, but running 10,000 concurrent queries will be a
significant challenge. MySQL Cluster comes to mind. Apart from that,
Apache and MySQL will do all the threading you need.
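
For illustration, the LVS piece really is just a couple of ipvsadm
commands - the addresses and scheduler below are placeholders, not a
recommendation:

  # one virtual HTTP service, weighted least-connection scheduling
  ipvsadm -A -t 192.0.2.10:80 -s wlc
  # two real webservers behind it, NAT forwarding
  ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11:80 -m
  ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.12:80 -m

Adding a third or fourth box is just another -a line.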


/Per

--
Per Jessen, Zürich (7.8°C)

From: Per Jessen on
Teus Benschop wrote:

> On Tue, 2010-03-23 at 19:08 -0700, Tommy Pham wrote:
>> The response time, max 5 seconds, will be tested on a local gigabit LAN
>> to ensure adequate response (optimized DB & code & proper
>> hardware) without worrying about users' connection limits and the
>> site's upload bandwidth limit (which can easily be rectified). Then
>> thereafter I will be doing a stress test of about 10 concurrent users.
>> As for the major queries, that's where threads come in, IMO, because
>> those queries depend on 1 primary parameter (category ID) and 1
>> secondary parameter (language ID). This particular site started with
>> 500 products in about 15 categories, without many of those mentioned
>> filters, and later grew to its current state.
>>
> The bottleneck, looking at speed, in this example seems to be the
> database backend, not PHP. What would be needed is a fast database,
> and SQL queries optimized for speed. Teus.
>

+1.
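
For what it's worth, a query keyed on one category ID and one language
ID is bread-and-butter for the database; a rough sketch in PHP/PDO,
where the table and column names are assumptions (a translations table
like the one sketched earlier), not the site's real schema:

  <?php
  // Sketch: fetch one category's products in one language.
  $categoryId = 42;   // example values
  $languageId = 1;
  $pdo  = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
  $stmt = $pdo->prepare(
      'SELECT p.id, t.name, t.description
         FROM products p
         JOIN product_translations t ON t.product_id = p.id
        WHERE p.category_id = :cat AND t.language_id = :lang'
  );
  $stmt->execute(array(':cat' => $categoryId, ':lang' => $languageId));
  $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

With an index on products(category_id) and the (product_id, language_id)
primary key on the translations table, that is index lookups rather than
a table scan - the sort of "SQL queries optimized for speed" Teus is
talking about.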



--
Per Jessen, Zürich (7.8°C)

From: Per Jessen on
Robert Cummings wrote:

> Yes, I do. There's nothing in your requirements above that sounds
> particularly difficult for PHP to handle with a good design and lots
> of caching... and of course the right hardware. I think you're hung up
> on the numbers a bit... those aren't very big numbers for a database.

Yeah. Given a decent database machine, those 5 million rows can be
kept in-core all the time.
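
(For InnoDB that mostly comes down to the buffer pool; a minimal my.cnf
sketch, where the size is illustrative and assumes a dedicated box with
RAM to spare:

  [mysqld]
  innodb_buffer_pool_size = 8G

The idea is simply to size it so the whole product/translation dataset
plus its indexes stays in memory.)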



--
Per Jessen, Zürich (7.8°C)

From: Rene Veerman on
Look Per, I for one build systems designed to scale to popular levels.

That means that whatever I can squeeze out of a single machine will
save me money - quite a lot, because as you know dedicated hosting gets
very expensive when you have to buy fast machines.

Threading features and persistent shared memory _would_ decrease cost,
a lot, for any PHP app that wants to scale to popular levels.

I'd like to keep coding in PHP, and not have to switch languages
because my app got popular.
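
Today the closest thing on the shared-memory side is APC's user cache,
which does persist per server across requests, but there is still no
real threading. A minimal sketch, assuming the apc extension is loaded
(the loader function is hypothetical):

  <?php
  // Sketch: keep an expensive lookup in shared memory for 5 minutes.
  $key  = 'category_tree';
  $tree = apc_fetch($key, $hit);
  if (!$hit) {
      $tree = load_category_tree_from_db(); // hypothetical helper
      apc_store($key, $tree, 300);          // TTL in seconds
  }

That helps on a single box, but it is per machine and only a cache,
which is exactly why I'd like more from the language itself.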

On Wed, Mar 24, 2010 at 9:46 AM, Per Jessen <per(a)computer.org> wrote:
> Robert Cummings wrote:
>
>> Yes, I do. There's nothing in your requirements above that sounds
>> particularly difficult for PHP to handle with a good design and lots
>> of caching... and of course the right hardware. I think you're hung up
>> on the numbers a bit... those aren't very big numbers for a database.
>
> Yeah.  Given a decent database machine, those 5 million rows can be
> kept in-core all the time.
>
>
>
> --
> Per Jessen, Zürich (7.8°C)
>
>
From: Per Jessen on
Tommy Pham wrote:

> # of requests / second can be solved by load balancers/clusters. What
> about the multiple answers for a simple request per user as in my
> example? How would you solve that if not by threading?

Ah, you're worried that running multiple SQL queries sequentially will
cause the response time to be too long? Now we're getting to the crux
of the matter. My immediate thought is - if you can't optimize the
database any further, run multiple HTTP requests in parallel; there are
ways of doing that.
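
From PHP the usual way is the curl_multi API, assuming the curl
extension is available; a minimal sketch that fires two requests in
parallel (the URLs are placeholders):

  <?php
  // Sketch: run two HTTP requests concurrently with curl_multi.
  $urls = array('http://www.example.com/part1', 'http://www.example.com/part2');
  $mh = curl_multi_init();
  $handles = array();
  foreach ($urls as $url) {
      $ch = curl_init($url);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      curl_multi_add_handle($mh, $ch);
      $handles[] = $ch;
  }
  $running = null;
  do {
      curl_multi_exec($mh, $running);   // drive all transfers
      curl_multi_select($mh);           // wait for activity, don't busy-loop
  } while ($running > 0);
  $results = array();
  foreach ($handles as $ch) {
      $results[] = curl_multi_getcontent($ch);
      curl_multi_remove_handle($mh, $ch);
      curl_close($ch);
  }
  curl_multi_close($mh);

Each sub-request can run one of the independent queries, so the total
wall-clock time is roughly that of the slowest query rather than the
sum of all of them.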


--
Per Jessen, Zürich (7.9°C)