From: Peter Michaux on
On Feb 26, 3:37 am, Jorge <jo...(a)jorgechamorro.com> wrote:
> Hi,
>
> Let's say a page does an XHR to theSameDomain, and the response is a
> redirect to another resource in another domain. Is that legal? Will
> such an XHR succeed?

http://www.w3.org/TR/XMLHttpRequest/#infrastructure-for-the-send-method

---------
If the response is an HTTP redirect

If the redirect does not violate security (it is same origin for
instance), infinite loop precautions, and the scheme is supported,
transparently follow the redirect while observing the same-origin
request event rules.

Note: HTTP places requirements on the user agent regarding the
preservation of the request method and request entity body during
redirects, and also requires end users to be notified of certain kinds
of automatic redirections.

Otherwise, this is a network error.
---------
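A rough sketch (the function name and shape are mine, not the draft's) of the origin comparison that "it is same origin for instance" alludes to: two URLs share an origin when scheme, host, and port all match.

```javascript
// Illustrative only -- this is not code from the draft. Uses the URL
// parser to compare scheme, host, and port of two absolute URLs.
function sameOrigin(urlA, urlB) {
  const a = new URL(urlA);
  const b = new URL(urlB);
  return a.protocol === b.protocol &&
         a.hostname === b.hostname &&
         a.port === b.port; // default ports normalize to ""
}
```

A redirect that fails this test falls through to the "Otherwise, this is a network error" branch.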

Peter
From: Thomas 'PointedEars' Lahn on
Peter Michaux wrote:

> Jorge wrote:
>> Let's say a page does an XHR to theSameDomain, and the response is a
>> redirect to another resource in another domain. Is that legal? Will
>> such an XHR succeed?
>
> http://www.w3.org/TR/XMLHttpRequest/#infrastructure-for-the-send-method
> [...]

| Publication as a Working Draft does not imply endorsement by the W3C
| Membership. This is a draft document and may be updated, replaced or
| obsoleted by other documents at any time. It is inappropriate to cite
| this document as other than work in progress.


PointedEars
--
Use any version of Microsoft Frontpage to create your site.
(This won't prevent people from viewing your source, but no one
will want to steal it.)
-- from <http://www.vortex-webdesign.com/help/hidesource.htm> (404-comp.)
From: Richard Cornford on
On Feb 26, 6:49 pm, Jorge wrote:
> On Feb 26, 6:59 pm, Richard Cornford wrote:
>> On Feb 26, 5:26 pm, Jorge wrote:
>>> On Feb 26, 6:06 pm, Richard Cornford wrote:
>><snip>
>>>> If an XML HTTP request object was going to refuse to
>>>> automatically redirect then it should present the status
>>>> 30X response to the calling code, and let it work out what
>>>> to do next.
>
>>> ISTM (looking at the spec on w3.org) that it will throw either
>>> a security error or a network error:
>
>> As I said, attempting a cross-domain redirect is asking for
>> trouble.
>
> You said as well:
>
> <quote>
> I suspect that you mean; will the XML HTTP request system
> automatically act on the redirection and return the response from
> that
> alternative source. To which the answer is that mostly they will.
> </quote>

Redirecting will mostly not be attempted cross-domain.
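
A caller can at least recognise when a redirect would cross origins before acting on it. A minimal sketch (names are mine, not from any XHR implementation), assuming the Location header is resolved against the request URL as HTTP requires:

```javascript
// Illustrative helper: given the original request URL and a Location
// header value, decide whether following the redirect would leave
// the requesting origin.
function redirectCrossesOrigin(requestUrl, locationHeader) {
  const from = new URL(requestUrl);
  // Location may be relative; resolve it against the request URL.
  const to = new URL(locationHeader, requestUrl);
  return from.origin !== to.origin;
}
```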

Richard.
From: Richard Cornford on
On Feb 26, 6:53 pm, Jorge wrote:
> On Feb 26, 6:59 pm, Richard Cornford wrote:
>> On Feb 26, 5:26 pm, Jorge wrote:
>>> On Feb 26, 6:06 pm, Richard Cornford wrote:
>
>>>>>> Cookies should follow the rules for cookies. Which cookies
>>>>>> go with which requests depends on their (actual or implied)
>>>>>> Path and Domain parameters.
>
>>>>> <HERE>But you know that there are circumstances under which
>>>>> existing cookies are *not* sent.</HERE>
>
>>>> That is what the rules for cookies say is possible. So your
>>>> point is?
>
>>> That it might have been that this was another of those
>>> circumstances.
>
>> That what might be "another of these circumstances"?
>
> See the <HERE> element, ??? above ???

When you are not being understood, repeating yourself slightly louder
is a waste of everyone's time.

Richard.
From: Richard Cornford on
On Feb 26, 7:37 pm, Scott Sauyet wrote:
> On Feb 26, 1:15 pm, Richard Cornford wrote:
>> On Feb 26, 5:58 pm, Scott Sauyet wrote:
>>> On Feb 26, 12:40 pm, Richard Cornford wrote:
>>>> On Feb 26, 5:31 pm, Stefan Weiss wrote:
>>>>> I didn't try any other browsers, but I would be very surprised
>>>>> if any of them (the more recent ones, at least) could be tricked
>>>>> into sending an XHR which violates the browser's security
>>>>> policies by something as simple as an HTTP redirect.
>
>>>> Why not? For a very long time it has been possible to 'trick' a
>>>> browser into making a request to another domain by setting
>>>> the - src - of a - new Image(); -. Making the request or not
>>>> is not that important so long as access to the result is denied.
>
>>> ... and if the request is actually idempotent.
>
>> Alright, what if the request is actually idempotent?
>
> I meant to qualify your statement further. I mean that making
> the request or not is not that important so long as both (1)
> access to the result is denied and (2) the request is actually
> idempotent. A GET request is supposed to be idempotent, but
> if it's not, then having that request made on redirect could
> cause problems.

You mean that if people create systems that depend on HTTP without any
regard for how HTTP is supposed to work the results may cause someone
"problems"? Well, yes, but who is responsible for that? Is it
reasonable/realistic to expect a User Agent to anticipate and/or
mitigate all possible manifestations of incompetence in web
developers?
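
For reference, the contract in question: HTTP (RFC 2616, section 9.1) designates GET and HEAD as "safe" (retrieval only, no side effects) and a superset of methods as idempotent. A rough sketch of the assumption a prefetcher relies on (the sets and function name here are illustrative, not from the RFC):

```javascript
// Per RFC 2616 9.1: safe methods should have no side effects;
// idempotent methods may be repeated with the same effect as a
// single request. A prefetcher trusts the "safe" set -- a
// "delete row" GET link breaks that trust.
const SAFE = new Set(['GET', 'HEAD']);
const IDEMPOTENT = new Set([...SAFE, 'PUT', 'DELETE']);

function safeToPrefetch(method) {
  return SAFE.has(method.toUpperCase());
}
```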

>>> I know GET and HEAD requests are supposed to be, but we all
>>> remember the havoc caused with many sites when some
>>> prefetching was released (was it Google Web Accelerator?)
>
>> I have absolutely no idea what you are talking about.
>
> At some point a few years back a browser plug-in was released;
> I think it might have been Google Web Accelerator. [1] This
> tool was supposed to speed up browsing by pre-fetching and
> caching links it thought you might visit off the current page.
> It makes perfect sense, except that a number of web
> applications out there had non-idempotent GET requests,
> especially hyperlinked "delete row" actions. People
> started unintentionally altering all sorts of data using
> this tool. Granted, it was the fault of people not smart
> enough to develop properly with HTTP, but it was pretty easy
> to blame Google. The plug-in is long gone now.

While web site/application developers should be held responsible for
their own mistakes, the developers of such an "accelerator" should
have been able to anticipate the consequences of their actions from
the simple observation that most web developers are more or less
technically ignorant and/or incompetent (and so will be acting in
ignorance of applicable standards, or disregarding them as unimportant
in the 'real world').

Of course Google have a problem in making that judgment for
themselves, as presumably they believe their own web developers to be
'above average', 'cutting edge', etc., which would skew their
perception of the general quality of web developers upwards.

Richard.