From: Ashley Sheridan on
On Wed, 2010-01-20 at 10:30 -0800, Michael A. Peters wrote:

> Rene Veerman wrote:
> > Michael, while i respect your choices, i think you should know that
> > jquery.com is pretty good at minimizing browser-incompatibility
> > headaches (and keeping js apps small), and the quirks that are left
> > are easy enough to learn about.
> >
> > for things whereby
> > - the server needs to generate tons of HTML for a small(ish) dataset, or
> > - the client generates data (also to be translated to html) that the
> > server doesn't really need to know about (yet)
> >
> > js can really take some stress off the server.
>
> I also like to run any content that has user-contributed data through a
> server-side filter that enforces Content Security Policy -
>
> http://www.clfsrpm.net/xss/
>
> That filter makes sure the content sent to the browser does not include
> anything that violates the defined CSP, and thus greatly reduces the risk
> that malicious content missed by input filtering ever reaches the end user.
>
> Furthermore, when it does catch a violation, it reports it to
> a log file, notifying me of the problem.
>
> The only way that works for content generated client side would be if
> the user were running a CSP-aware browser, and right now they just
> don't exist. Firefox has an experimental add-on for CSP, but
> virtually no one uses it.
>
> Doing dynamic content server side allows me to run that content through
> the enforcement filter, thus catching policy-violating content before it
> is ever sent to the user.
>
> That itself, btw, is probably the biggest stress on the server.
>
> I understand Prototype etc. is the "web 2.0" way, but I really don't have
> a high opinion of "Web 2.0". JavaScript, Flash, etc. have all been used
> far too often to do bad things.
>
> Right now, if I don't block scripts except on whitelisted web sites, I end
> up with advertisements I don't care about expanding and covering the
> content I do care about. Unfortunately the web is full of jerks who do
> rude things with scripts, and people who do malicious things with scripts.
>
> You wouldn't execute code that someone you don't know sent you in an
> e-mail, would you? I wouldn't, nor do I execute code that someone I don't
> know embeds in a web page.
>
> I surf with script blockers (NoScript to be specific), and when I come
> upon web sites that don't properly function, I'm a lot likelier to head
> elsewhere than to enable scripting for that site. Since I surf that way,
> I expect others do as well. Doing server side the things that can be done
> server side allows users like me who block scripting to access the
> content without compromising the security of our systems.
>


It's for users like you that I keep telling people that JavaScript
shouldn't define functionality but enhance it. People will turn their
noses up at that attitude, but, like you said, some people get downright
nasty with the things they do online with their scripts.

I've not used that CSP tool you linked to before; it looks like it's
not for validating user input going *to* your server, but for sanitising
the downstream content from your server to the user's browser. Validating
user input (and escaping it again on output, as in the short sketch
further down) should prevent this as far as possible, as it shouldn't let
output from your script be used as part of an XSS attack. However,
the real worry is with ISPs that modify your pages before they are
output to the browser, as Canadian ISP Rogers has been found doing:

http://www.mattcutts.com/blog/confirmed-isp-modifies-google-home-page/

No PHP class will be able to prevent that, as the ISP is modifying the
content en route to its destination.
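On the escaping side, something as simple as the sketch below is usually
the first line of defence; the helper name and form field here are made up
for the example, not taken from anyone's actual code:

<?php
// Minimal sketch of output escaping: anything user-supplied is
// HTML-encoded on the way out, so stray <script> tags become plain text.
// The 'comment' field name is hypothetical.
function e($text)
{
    return htmlspecialchars($text, ENT_QUOTES, 'UTF-8');
}

$comment = isset($_POST['comment']) ? $_POST['comment'] : '';

echo '<div class="comment">' . e($comment) . '</div>';
?>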

Thanks,
Ash
http://www.ashleysheridan.co.uk


From: Nathan Rixham on
Daevid Vincent wrote:
> BTW, I want to use GET so that the page can be bookmarked for future
> searches of the same data (or modified easily with different dates, etc.),
> so that's why I don't use POST.
>

to do as you say on the clientside you'd probably be best to write a
short js script to build the get url from the form data; and on the
serverside just take the klunky approach you mentioned.

worth thinking about scenarios where a field is empty on the initial
search though; a user may want to modify it later by entering a value
into a previously blank field (which would at this point be stripped); so
maybe removal isn't the best option.

possibly worth considering having a GET URL which (p)re-populates the
form (rather than going direct to the search results) so the search can be
easily modified before submitting it?
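a rough sketch of that pre-population; the script name and field names are
just placeholders:

<?php
// Re-populate the search form from the GET URL. Blank fields simply come
// back blank, so the user can fill one in and re-submit without losing it.
// Field names ('q', 'date_from', 'date_to') are hypothetical.
$fields = array('q', 'date_from', 'date_to');
$values = array();
foreach ($fields as $f) {
    $values[$f] = isset($_GET[$f]) ? $_GET[$f] : '';
}
?>
<form action="search.php" method="get">
<?php foreach ($fields as $f): ?>
    <input type="text" name="<?php echo $f; ?>"
           value="<?php echo htmlspecialchars($values[$f], ENT_QUOTES, 'UTF-8'); ?>">
<?php endforeach; ?>
    <input type="submit" value="Search">
</form>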

also you could just pass the URL through to a URL shrinker; if you use
the API of bit.ly or suchlike you could do this serverside, and reap the
benefits of stats for each search too.
From: "Daevid Vincent" on
Comments inline below...

> -----Original Message-----
> From: Ashley Sheridan
>
> GET has a limit on the amount of data it may carry, which is
> reduced the longer the base URL is.

True, but for search parameters, it's IMHO best to use GET rather than POST
so the page can be bookmarked.

This used to be a concern "back in the day", when the limit was around 255 bytes:

http://classicasp.aspfaq.com/forms/what-is-the-limit-on-querystring/get/url-parameters.html

Not so much anymore with most browsers supporting > 2000 characters:

http://www.boutell.com/newfaq/misc/urllength.html
http://support.microsoft.com/kb/208427
http://stackoverflow.com/questions/417142/what-is-the-maximum-length-of-an-url

Re-writing it to handle $_REQUEST doesn't seem to solve much, as the user
would still need to know the form element names, and the actual form would
still be either POST or GET. GET is the problem I have now; POST is a whole
other problem of not being able to bookmark.

> -----Original Message-----
> From: Nathan Rixham
>
> to do as you say on the clientside you'd probably be best to write a
> short js script to build the get url from the form data; and on the
> serverside just take the klunky approach you mentioned.
>
> worth thinking about scenarios where a field is empty on the initial
> search though; a user may want to modify it later by entering a value
> into a previously blank field (which would at this point be
> stripped); so maybe removal isn't the best option.

Also a very valid point that I hadn't fully considered.

> also you could just pass the URL through to a URL shrinker;
> if you use
> the API of bit.ly or suchlike you could do this serverside,
> and reap the benefits of stats for each search too.

This is for an internal intranet site so I can't use an outside shrinker;
however, I suspect the code to create my own shrinker isn't so difficult, and
this is a pretty interesting idea. Given your above observation, perhaps
this *is* the solution to pursue further...

Way to think outside the box Nathan! :)
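Something like this rough first pass would probably be enough for an
internal shrinker; the storage path, script name and host are all just
placeholders, not a finished implementation:

<?php
// Rough sketch of an internal URL shrinker: the full query string is saved
// in a small file named after a short hash, so identical searches always
// map to the same short code. The storage directory is a placeholder.
$store = '/var/www/data/short_urls';

if (isset($_GET['s'])) {
    // Expand a short code back into the original search.
    $file = $store . '/' . basename($_GET['s']);
    if (is_file($file)) {
        header('Location: /search.php?' . file_get_contents($file));
        exit;
    }
} else {
    // Shrink the current search parameters into a short, bookmarkable code.
    $qs   = http_build_query($_GET);
    $code = substr(sha1($qs), 0, 8);
    file_put_contents($store . '/' . $code, $qs);
    echo 'Short link: http://intranet/s.php?s=' . $code;
}
?>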

ÐÆ5ÏÐ
http://daevid.com

From: Ashley Sheridan on
On Wed, 2010-01-20 at 13:06 -0800, Daevid Vincent wrote:

> True, but for search parameters, it's IMHO best to use GET rather than POST
> so the page can be bookmarked.
>
> [snip]
>
> GET is the problem I have now; POST is a whole other problem of not being
> able to bookmark.
>


What about sending the form via POST, but to a URL that includes some
sort of session identifier, for example:

<form action="page.php?sid=gsslgugckjeglktjb" method="post">

Then your server script can reproduce the search from saved parameters
stored under the sid value.

This way, you retain the ability to bookmark the search, and use as many
search parameters as you need, without worrying about going over the GET
data limit no matter what browser the user has.
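A rough sketch of how that might look, assuming the saved parameters live
in a database table (the table, column and connection details below are all
placeholders, not a definitive implementation):

<?php
// The POSTed search parameters are saved under a random sid, and any later
// GET with that sid replays the same search, so the URL can be bookmarked.
// Table 'saved_searches' and the mysqli credentials are placeholders.
$db = new mysqli('localhost', 'user', 'pass', 'intranet');

if (!empty($_POST)) {
    // First submission: store the parameters, then redirect to a clean URL.
    $sid    = substr(md5(uniqid('', true)), 0, 16);
    $params = serialize($_POST);
    $stmt = $db->prepare('INSERT INTO saved_searches (sid, params) VALUES (?, ?)');
    $stmt->bind_param('ss', $sid, $params);
    $stmt->execute();
    header('Location: page.php?sid=' . $sid);
    exit;
}

if (isset($_GET['sid'])) {
    // Bookmarked visit: pull the saved parameters back out and run the search.
    $stmt = $db->prepare('SELECT params FROM saved_searches WHERE sid = ?');
    $stmt->bind_param('s', $_GET['sid']);
    $stmt->execute();
    $stmt->bind_result($params);
    if ($stmt->fetch()) {
        $search = unserialize($params);
        // ... run the search using $search ...
    }
}
?>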



Thanks,
Ash
http://www.ashleysheridan.co.uk