From: Thomas Jollans on
On 07/14/2010 07:49 PM, David wrote:
>
> urlretrieve works fine. However, when the file size gets very large, it
> goes on forever, and sometimes even fails.
>
> For instance, one of the .zip files I download is 363,096 KB.
>
> In particular, when trying to get a very large zipped folder with
> urlretrieve, it can take 20 to 30 minutes. It often fails, though there
> is never any problem with smaller folders. Any solution for this?

Does this only occur with urllib, or is this the case for other
software, like a web browser, or a downloader like wget?

The more data you try to copy, the longer it takes. The longer it takes,
the more likely something is to go wrong mid-transfer. Depending on the
network speed, 20 to 30 minutes might well be a reasonable timeframe for
360 MB. And what does "often" mean here?

In other words: Are you sure urlretrieve is to blame for your problems,
why are you sure, and have you tried using urllib2 or the other urllib
functions?
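
For what it's worth, here is a minimal sketch of how you could stream the
download with urllib2 instead, reading and writing in small chunks with an
explicit timeout, so a stalled connection raises an exception instead of
hanging. The URL and chunk size are placeholders; it assumes Python 2.6+.

    import urllib2

    url = "http://example.com/big_archive.zip"   # placeholder URL
    chunk_size = 64 * 1024                       # read 64 KB at a time

    response = urllib2.urlopen(url, timeout=60)  # timeout= needs Python 2.6+
    out = open("big_archive.zip", "wb")
    try:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:        # empty string means the download finished
                break
            out.write(chunk)
    finally:
        out.close()
        response.close()

At least that way you can see where it stops, and wrap the loop in your own
retry logic if the connection drops.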

From: MRAB on
David wrote:
> urlretrieve works fine. However, when the file size gets very large, it
> goes on forever, and sometimes even fails.
>
> For instance, one of the .zip files I download is 363,096 KB.
>
> In particular, when trying to get a very large zipped folder with
> urlretrieve, it can take 20 to 30 minutes. It often fails, though there
> is never any problem with smaller folders. Any solution for this?
>
Do you have any control over the other end? If so, you could try
transferring the file in multiple chunks, as sketched below.
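
Even without control of the server, if it honours HTTP Range requests (most
static-file servers do), you can fetch the file in pieces from the client
side. This is only a sketch, with a placeholder URL and filename, assuming
Python 2.6+ and a cooperative server; it also doubles as a crude resume
after a failed attempt.

    import os
    import urllib2

    url = "http://example.com/big_archive.zip"  # placeholder URL
    filename = "big_archive.zip"                # placeholder local name
    chunk_size = 1024 * 1024                    # ask for 1 MB per request

    # Resume from whatever has already been written to disk.
    offset = os.path.getsize(filename) if os.path.exists(filename) else 0

    while True:
        request = urllib2.Request(url)
        request.add_header("Range",
                           "bytes=%d-%d" % (offset, offset + chunk_size - 1))
        try:
            response = urllib2.urlopen(request)
        except urllib2.HTTPError as e:
            if e.code == 416:    # requested range is past the end: done
                break
            raise
        if response.getcode() != 206:
            # Server ignored the Range header and sent the whole file,
            # so this chunking approach won't work against it.
            raise RuntimeError("server does not support Range requests")
        data = response.read()
        if not data:
            break
        out = open(filename, "ab")
        try:
            out.write(data)
        finally:
            out.close()
        offset += len(data)

If you do control the server, splitting the archive into several smaller
files and fetching them one by one is simpler still.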