Age | Commit message | Author
In order to obtain correct resume_len on next iteration

download state

(#13731)

* Simplify code and split into separate routines to facilitate maintenance
* Make retry mechanism work on errors during the actual download, not only during the connection establishment phase (see the sketch after this list)
* Retry on ECONNRESET and ETIMEDOUT during reading data from network
* Retry on content too short and various timeout errors
* Show error description on retry
* Closes #506, closes #809, closes #2849, closes #4240, closes #6023, closes #8625, closes #9483
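The list above describes extending retries from connection setup to the read loop itself. A minimal sketch of that idea in Python, assuming a hypothetical download_with_retries helper and a plain urllib transfer rather than the project's actual downloader code:

```python
import errno
import http.client
import socket
import time
import urllib.request


def is_retryable(exc):
    # Transient conditions worth retrying: resets/timeouts during the
    # read phase, and short reads (similar in spirit to "content too short").
    if isinstance(exc, socket.timeout):
        return True
    if isinstance(exc, http.client.IncompleteRead):
        return True
    if isinstance(exc, OSError):
        return exc.errno in (errno.ECONNRESET, errno.ETIMEDOUT)
    return False


def download_with_retries(url, dest, retries=10, chunk_size=64 * 1024):
    """Hypothetical helper: retry the transfer on errors raised while the
    body is being read, resuming from the bytes already written."""
    resume_len = 0
    for attempt in range(1, retries + 1):
        try:
            req = urllib.request.Request(url)
            if resume_len:
                # Real code would also verify the server honoured the Range
                # request (206 status) before appending.
                req.add_header('Range', 'bytes=%d-' % resume_len)
            with urllib.request.urlopen(req, timeout=20) as resp, \
                    open(dest, 'ab' if resume_len else 'wb') as out:
                while True:
                    chunk = resp.read(chunk_size)  # errors here are retried too
                    if not chunk:
                        return
                    out.write(chunk)
                    resume_len += len(chunk)
        except Exception as exc:
            if attempt == retries or not is_retryable(exc):
                raise
            # Show the error description on retry, as the commit describes.
            print('Retry %d/%d after error: %s' % (attempt, retries, exc))
            time.sleep(2)
```

The key point is that the try block wraps the read loop, not just the connection attempt, so a reset or timeout mid-transfer is retried with an adjusted resume_len instead of aborting the download.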
10x reduced JSON size
refs #13810

(closes #8932)

This may be incorrect due to some header (e.g. flv header in f4m downloader)
fields

- resume immediately (see the sketch after this list)
- no need to concatenate segments and decrypt them on every resume
- no need to save temp files for segments
and for hls downloader:
- no need to download keys for segments that are already downloaded
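The items above boil down to appending each finished fragment to a single output file and keeping a small resume marker on disk, so an interrupted download picks up at the next fragment instead of redoing (or re-decrypting) the earlier ones. A minimal sketch of that idea, assuming a hypothetical fetch_fragment callable and a '.state' sidecar file rather than youtube-dl's actual fragment state handling:

```python
import json
import os


def load_state(state_path):
    # Resume marker: index of the next fragment to download.
    if os.path.exists(state_path):
        with open(state_path) as f:
            return json.load(f).get('fragment_index', 0)
    return 0


def save_state(state_path, fragment_index):
    with open(state_path, 'w') as f:
        json.dump({'fragment_index': fragment_index}, f)


def download_fragments(fragment_urls, dest, fetch_fragment):
    """Hypothetical driver: append each finished fragment to `dest` and
    persist progress, so a restarted run resumes at the next fragment
    instead of re-downloading or re-decrypting the earlier ones."""
    state_path = dest + '.state'
    start = load_state(state_path)
    with open(dest, 'ab' if start else 'wb') as out:
        for index in range(start, len(fragment_urls)):
            data = fetch_fragment(fragment_urls[index])  # caller-supplied HTTP/decrypt step
            out.write(data)
            out.flush()
            save_state(state_path, index + 1)  # record progress after each fragment
    if os.path.exists(state_path):
        os.remove(state_path)  # finished: the marker is no longer needed
```

Keeping the marker in a separate small file means only the index is rewritten after each fragment, not the (potentially large) output file, and the output never needs to be reassembled from per-segment temp files.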
Otherwise, if you screw up a playlist test by including a playlist dictionary key, you'll be there for eons while it downloads all the files before erroring out.

interval message

For example, https://www.oppetarkiv.se/video/1196142/natten-ar-dagens-mor

(closes #11358)(closes #11373)(closes #11800)