From: "workerholic on
hmm, the infrastructure is good, it's just this query....
so to solve my problem I could run MySQL on the application server,
store just this table there,
and read the query from it. That could solve my problem a little, I hope!



Daniel Brown wrote:
> On Fri, Jul 10, 2009 at 13:07,
> workerholic(a)studysite.eu<workerholic(a)studysite.eu> wrote:
>
>> Hi Andrew, I think you understand my problem a little,
>> but if 100 users load this query at the same time, the two MySQL servers
>> have a lot to do!
>> So I think caching this query as XML locally on the application server
>> would make things faster,
>> but I would like to have the same performance reading this XML document
>> as reading the query from the MySQL server...
>> I don't know why PHP is so slow to read the XML file...
>>
>
> It will be slower to read a file than data from an SQL database by
> sheer design --- regardless of whether it's XML, CSV, plain text, etc.
> And MySQL is faster still because it's run as a server with its own
> processing engine, completely independent of the PHP engine and
> spawned process. Other factors involved are disk seek time, memory
> capabilities, et cetera, but the SQL-vs-file point is the biggest.
>
> For PHP to locate something within the file, it must load the
> entire file into memory or read it byte-by-byte, line-by-line, from an
> exact offset (given explicitly). SQL databases such as MySQL work
> similarly, but don't catalog all data in quite the same linear
> fashion. Further, MySQL is capable of indexing, allowing it to return
> the data far faster.
>
> There's a time and a place for each, but it sounds as though what
> you're attempting to do would not be best served by caching it in an
> XML file.
>
> Also, something to keep in mind (with no offense intended by any
> means): if you have two database servers (using replication) for
> load-balancing and they - combined - cannot handle 100 simultaneous
> connections and queries, you may want to re-evaluate your
> infrastructure and architecture.
>
>
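
To put Daniel's file-versus-database point in concrete terms: with a cached
XML file, PHP either parses the whole document into memory or scans it
sequentially. A minimal sketch, assuming a hypothetical cache file
result.xml with one <row> element per record:

<?php
// Naive: the entire 4K-row document is parsed into memory before a
// single row can be used.
$xml = simplexml_load_file('result.xml');        // hypothetical cache file
foreach ($xml->row as $row) {
    // ... render $row ...
}

// Streaming: XMLReader pulls one node at a time, which keeps memory flat,
// but the file still has to be scanned sequentially - there is no index,
// so finding one specific row means reading past all the rows before it.
$reader = new XMLReader();
$reader->open('result.xml');
while ($reader->read()) {
    if ($reader->nodeType === XMLReader::ELEMENT && $reader->name === 'row') {
        $row = simplexml_load_string($reader->readOuterXml());
        // ... render $row ...
    }
}
$reader->close();
?>

An indexed MySQL lookup skips that sequential scan entirely, which is the
gap Daniel describes.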

From: Daniel Brown on
On Fri, Jul 10, 2009 at 13:23,
workerholic(a)studysite.eu<workerholic(a)studysite.eu> wrote:
> hmm, the infrastructure is good, it's just this query....
> so to solve my problem I could run MySQL on the application server,
> store just this table there,
> and read the query from it. That could solve my problem a little, I hope!

You may also want to look into SQLite --- it's perfectly designed
for this kind of situation.
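
A minimal sketch of how that could look with PDO's SQLite driver; the file
path, table, and column names are only placeholders:

<?php
// Local, file-based copy of the hot table on the application server.
$lite = new PDO('sqlite:/var/cache/app/results.sqlite');
$lite->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Built (and periodically refreshed) from the master MySQL server:
$lite->exec('CREATE TABLE IF NOT EXISTS products (
                 id    INTEGER PRIMARY KEY,
                 name  TEXT,
                 price REAL
             )');
$lite->exec('CREATE INDEX IF NOT EXISTS idx_products_name ON products (name)');

// Every page request then reads locally instead of hitting MySQL:
$stmt = $lite->prepare('SELECT id, name, price FROM products WHERE name = ?');
$stmt->execute(array('example'));
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // ... render $row ...
}
?>

The refresh from MySQL would run from cron (or whenever the source table
changes), so the 100 simultaneous readers never touch the MySQL servers.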

--
</Daniel P. Brown>
daniel.brown(a)parasane.net || danbrown(a)php.net
http://www.parasane.net/ || http://www.pilotpig.net/
Check out our great hosting and dedicated server deals at
http://twitter.com/pilotpig
From: Bastien Koert on
On Fri, Jul 10, 2009 at 1:23 PM,
workerholic(a)studysite.eu<workerholic(a)studysite.eu> wrote:
> hmm, the infrastructure is good, it's just this query....
> so to solve my problem I could run MySQL on the application server,
> store just this table there,
> and read the query from it. That could solve my problem a little, I hope!

Is all the data from the query the same for each user, i.e. do they all
get the same 4K rows of data for that query? How is that query built?
Are there date parameters or other fields that would allow table
partitioning of the data? Could you use a temp table to store that
data, or a more fixed table that stores just that query's dataset (see
the sketch after these questions)?

Also how large is the main table?
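
A minimal sketch of the "fixed table" variant: a small script run from cron
that rebuilds an indexed cache table from the expensive query, so page
requests only ever read the small table. Host, credentials, table and
column names below are placeholders, and error handling is omitted:

<?php
// Assumes a cache table `query_cache` already exists with the same
// columns and indexes as the result set we want to serve.
$db = new mysqli('db-master.example.com', 'user', 'pass', 'shop');

// Build the new snapshot next to the live one.
$db->query('DROP TABLE IF EXISTS query_cache_new');
$db->query('CREATE TABLE query_cache_new LIKE query_cache');

// The expensive query runs once here instead of once per visitor.
$db->query('INSERT INTO query_cache_new (id, name, price)
            SELECT p.id, p.name, p.price
            FROM   products p
            WHERE  p.active = 1');

// Atomic swap so readers never see a half-built table.
$db->query('RENAME TABLE query_cache     TO query_cache_old,
                         query_cache_new TO query_cache');
$db->query('DROP TABLE query_cache_old');
?>

Page requests then do a plain indexed SELECT against query_cache, which
stays cheap even with 100 simultaneous users.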


--

Bastien

Cat, the other other white meat
From: "workerholic on
Yes, I think I should do this....

Daniel Brown wrote:
> On Fri, Jul 10, 2009 at 13:23,
> workerholic(a)studysite.eu<workerholic(a)studysite.eu> wrote:
>
>> hmm, the infrastructure is good, it's just this query....
>> so to solve my problem I could run MySQL on the application server,
>> store just this table there,
>> and read the query from it. That could solve my problem a little, I hope!
>>
>
> You may also want to look into SQLite --- it's perfectly designed
> for this kind of situation.
>
>

From: Jon Tamayo on
On Fri, 10 Jul 2009 13:29:31 -0400
Bastien Koert <phpster(a)gmail.com> wrote:

> Is all the data from the query the same for each user, i.e. do they all
> get the same 4K rows of data for that query? How is that query built?
> Are there date parameters or other fields that would allow table
> partitioning of the data? Could you use a temp table to store that
> data, or a more fixed table that stores just that query's dataset?
>
> Also how large is the main table?
>
>

I don't know much about MySQL, as I've only been using it for some
basic things, but couldn't you just set a bigger query_cache? The
first user would still pay for that 0.3 s query, but the other 99 would
get the results in... 0.001 s? 0.002 s?
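
A minimal sketch of checking and sizing the query cache from PHP.
query_cache_size and the Qcache status counters are real MySQL 5.x server
variables, but the host, credentials, and the 64 MB figure below are only
examples (and SET GLOBAL needs the SUPER privilege):

<?php
$db = new mysqli('db-master.example.com', 'user', 'pass');

// See whether the cache is enabled and how it is performing.
$status = $db->query("SHOW STATUS LIKE 'Qcache%'");
while ($row = $status->fetch_assoc()) {
    echo $row['Variable_name'], ' = ', $row['Value'], "\n";
}

// Grow the cache at runtime (64 MB here); make it permanent in my.cnf with
//   query_cache_type = 1
//   query_cache_size = 64M
$db->query('SET GLOBAL query_cache_size = 67108864');
?>

One caveat: the query cache is flushed for a table every time that table
is written to, so it only helps if the data changes far less often than it
is read.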