author | Aakash Gajjar <skyqutip@gmail.com> | 2020-08-25 20:23:34 +0530
committer | GitHub <noreply@github.com> | 2020-08-25 20:23:34 +0530
commit | b827ee921fe510a8730a9fab070148ed2b8279b5 (patch)
tree | 4db8b0a8d954cea6fd86188a633850d7139f79c7 /youtube_dl/extractor/soundcloud.py
parent | 89cee32ce9b504bd2b892f6fd2bcda46ae33a10c (diff)
pull changes from remote master (#190)
* [scrippsnetworks] Add new extractor(closes #19857)(closes #22981)
* [teachable] Improve locked lessons detection (#23528)
* [teachable] Fail with error message if no video URL found
* [extractors] add missing import for ScrippsNetworksIE
* [brightcove] cache brightcove player policy keys
* [prosiebensat1] improve geo restriction handling(closes #23571)
* [soundcloud] automatically update client id on failing requests
* [spankbang] Fix extraction (closes #23307, closes #23423, closes #23444)
* [spankbang] Improve removed video detection (#23423)
* [brightcove] update policy key on failing requests
* [pornhub] Fix extraction and add support for m3u8 formats (closes #22749, closes #23082)
* [pornhub] Improve locked videos detection (closes #22449, closes #22780)
* [brightcove] invalidate policy key cache on failing requests
* [soundcloud] fix client id extraction for non fatal requests
* [ChangeLog] Actualize
[ci skip]
* [devscripts/create-github-release] Switch to using PAT for authentication
Basic authentication will be deprecated soon
* release 2020.01.01
* [redtube] Detect private videos (#23518)
* [vice] improve extraction(closes #23631)
* [devscripts/create-github-release] Remove unused import
* [wistia] improve format extraction and extract subtitles(closes #22590)
* [nrktv:seriebase] Fix extraction (closes #23625) (#23537)
* [discovery] fix anonymous token extraction(closes #23650)
* [scrippsnetworks] add support for www.discovery.com videos
* [scrippsnetworks] correct test case URL
* [dctp] fix format extraction(closes #23656)
* [pandatv] Remove extractor (#23630)
* [naver] improve extraction
- improve geo-restriction handling
- extract automatic captions
- extract uploader metadata
- extract VLive HLS formats
* [naver] improve metadata extraction
* [cloudflarestream] improve extraction
- add support for bytehighway.net domain
- add support for signed URLs
- extract thumbnail
* [cloudflarestream] import embed URL extraction
* [lego] fix extraction and extract subtitle(closes #23687)
* [safari] Fix kaltura session extraction (closes #23679) (#23670)
* [orf:fm4] Fix extraction (#23599)
* [orf:radio] Clean description and improve extraction
* [twitter] add support for promo_video_website cards(closes #23711)
* [vodplatform] add support for embed.kwikmotion.com domain
* [ndr:base:embed] Improve thumbnails extraction (closes #23731)
* [canvas] Add support for new API endpoint and update tests (closes #17680, closes #18629)
* [travis] Add flake8 job (#23720)
* [yourporn] Fix extraction (closes #21645, closes #22255, closes #23459)
* [ChangeLog] Actualize
[ci skip]
* release 2020.01.15
* [soundcloud] Restore previews extraction (closes #23739)
* [orf:tvthek] Improve geo restricted videos detection (closes #23741)
* [zype] improve extraction
- extract subtitles(closes #21258)
- support URLs with alternative keys/tokens(#21258)
- extract more metadata
* [americastestkitchen] fix extraction
* [nbc] add support for nbc multi network URLs(closes #23049)
* [ard] improve extraction(closes #23761)
- simplify extraction
- extract age limit and series
- bypass geo-restriction
* [ivi:compilation] Fix entries extraction (closes #23770)
* [24video] Add support for 24video.vip (closes #23753)
* [businessinsider] Fix jwplatform id extraction (closes #22929) (#22954)
* [ard] add a missing condition
* [azmedien] fix extraction(closes #23783)
* [voicerepublic] fix extraction
* [stretchinternet] fix extraction(closes #4319)
* [youtube] Fix sigfunc name extraction (closes #23819)
* [ChangeLog] Actualize
[ci skip]
* release 2020.01.24
* [soundcloud] improve private playlist/set tracks extraction
https://github.com/ytdl-org/youtube-dl/issues/3707#issuecomment-577873539
* [svt] fix article extraction(closes #22897)(closes #22919)
* [svt] fix series extraction(closes #22297)
* [viewlift] improve extraction
- fix extraction(closes #23851)
- add support for authentication
- add support for more domains
* [vimeo] fix album extraction(closes #23864)
* [tva] Relax _VALID_URL (closes #23903)
* [tv5mondeplus] Fix extraction (closes #23907, closes #23911)
* [twitch:stream] Lowercase channel id for stream request (closes #23917)
* [sportdeutschland] Update to new sportdeutschland API
They switched to SSL, but under a different host AND path...
Remove the old test cases because these videos have become unavailable.
* [popcorntimes] Add extractor (closes #23949)
* [thisoldhouse] fix extraction(closes #23951)
* [toggle] Add support for mewatch.sg (closes #23895) (#23930)
* [compat] Introduce compat_realpath (refs #23991)
* [update] Fix updating via symlinks (closes #23991)
* [nytimes] improve format sorting(closes #24010)
* [abc:iview] Support 720p (#22907) (#22921)
* [nova:embed] Fix extraction (closes #23672)
* [nova:embed] Improve (closes #23690)
* [nova] Improve extraction (refs #23690)
* [jpopsuki] Remove extractor (closes #23858)
* [YoutubeDL] Fix playlist entry indexing with --playlist-items (closes #10591, closes #10622)
* [test_YoutubeDL] Fix get_ids
* [test_YoutubeDL] Add tests for #10591 (closes #23873)
* [24video] Add support for porn.24video.net (closes #23779, closes #23784)
* [npr] Add support for streams (closes #24042)
* [ChangeLog] Actualize
[ci skip]
* release 2020.02.16
* [tv2dk:bornholm:play] Fix extraction (#24076)
* [imdb] Fix extraction (closes #23443)
* [wistia] Add support for multiple generic embeds (closes #8347, closes #11385)
* [teachable] Add support for multiple videos per lecture (closes #24101)
* [pornhd] Fix extraction (closes #24128)
* [options] Remove duplicate short option -v for --version (#24162)
* [extractor/common] Convert ISM manifest to unicode before processing on python 2 (#24152)
* [YoutubeDL] Force redirect URL to unicode on python 2
* Remove no longer needed compat_str around geturl
* [youjizz] Fix extraction (closes #24181)
* [test_subtitles] Remove obsolete test
* [zdf:channel] Fix tests
* [zapiks] Fix test
* [xtube] Fix metadata extraction (closes #21073, closes #22455)
* [xtube:user] Fix test
* [telecinco] Fix extraction (refs #24195)
* [telecinco] Add support for article opening videos
* [franceculture] Fix extraction (closes #24204)
* [xhamster] Fix extraction (closes #24205)
* [ChangeLog] Actualize
[ci skip]
* release 2020.03.01
* [vimeo] Fix subtitles URLs (#24209)
* [servus] Add support for new URL schema (closes #23475, closes #23583, closes #24142)
* [youtube:playlist] Fix tests (closes #23872) (#23885)
* [peertube] Improve extraction
* [peertube] Fix issues and improve extraction (closes #23657)
* [pornhub] Improve title extraction (closes #24184)
* [vimeo] fix showcase password protected video extraction(closes #24224)
* [youtube] Fix age-gated videos support without login (closes #24248)
* [youtube] Fix tests
* [ChangeLog] Actualize
[ci skip]
* release 2020.03.06
* [nhk] update API version(closes #24270)
* [youtube] Improve extraction in 429 error conditions (closes #24283)
* [youtube] Improve age-gated videos extraction in 429 error conditions (refs #24283)
* [youtube] Remove outdated code
Additional get_video_info requests don't seem to provide any extra itags any longer
* [README.md] Clarify 429 error
* [pornhub] Add support for pornhubpremium.com (#24288)
* [utils] Add support for cookies with spaces used instead of tabs
* [ChangeLog] Actualize
[ci skip]
* release 2020.03.08
* Revert "[utils] Add support for cookies with spaces used instead of tabs"
According to [1] TABs must be used as separators between fields.
Files produced by some tools with spaces as separators are considered
malformed (an illustrative cookie file example follows the change list below).
1. https://curl.haxx.se/docs/http-cookies.html
This reverts commit cff99c91d150df2a4e21962a3ca8d4ae94533b8c.
* [utils] Add reference to cookie file format
* Revert "[vimeo] fix showcase password protected video extraction(closes #24224)"
This reverts commit 12ee431676bb655f04c7dd416a73c1f142ed368d.
* [nhk] Relax _VALID_URL (#24329)
* [nhk] Remove obsolete rtmp formats (closes #24329)
* [nhk] Update m3u8 URL and use native hls (#24329)
* [ndr] Fix extraction (closes #24326)
* [xtube] Fix formats extraction (closes #24348)
* [xtube] Fix typo
* [hellporno] Fix extraction (closes #24399)
* [cbc:watch] Add support for authentication
* [cbc:watch] Fix authenticated device token caching (closes #19160)
* [soundcloud] fix download url extraction(closes #24394)
* [limelight] remove disabled API requests(closes #24255)
* [bilibili] Add support for new URL schema with BV ids (closes #24439, closes #24442)
* [bilibili] Add support for player.bilibili.com (closes #24402)
* [teachable] Extract chapter metadata (closes #24421)
* [generic] Look for teachable embeds before wistia
* [teachable] Update upskillcourses domain
New version does not use teachable platform any longer
* [teachable] Update gns3 domain
* [teachable] Update test
* [ChangeLog] Actualize
[ci skip]
* [ChangeLog] Actualize
[ci skip]
* release 2020.03.24
* [spankwire] Fix extraction (closes #18924, closes #20648)
* [spankwire] Add support for generic embeds (refs #24633)
* [youporn] Add support for generic embeds
* [mofosex] Add support for generic embeds (closes #24633)
* [tele5] Fix extraction (closes #24553)
* [extractor/common] Skip malformed ISM manifest XMLs while extracting ISM formats (#24667)
* [tv4] Fix ISM formats extraction (closes #24667)
* [twitch:clips] Extend _VALID_URL (closes #24290) (#24642)
* [motherless] Fix extraction (closes #24699)
* [nova:embed] Fix extraction (closes #24700)
* [youtube] Skip broken multifeed videos (closes #24711)
* [soundcloud] Extract AAC format
* [soundcloud] Improve AAC format extraction (closes #19173, closes #24708)
* [thisoldhouse] Fix video id extraction (closes #24548)
Added support for URLs with or without "www." and with either the ".chorus.build"
or ".com" domain. The video id is now extracted correctly from older URLs such as
```
<iframe src="https://thisoldhouse.chorus.build/videos/zype/5e33baec27d2e50001d5f52f
```
and newer ones
```
<iframe src="https://www.thisoldhouse.com/videos/zype/5e2b70e95216cc0001615120
```
* [thisoldhouse] Improve video id extraction (closes #24549)
* [youtube] Fix DRM videos detection (refs #24736)
* [options] Clarify doc on --exec command (closes #19087) (#24883)
* [prosiebensat1] Improve extraction and remove 7tv.de support (#24948)
* [prosiebensat1] Extract series metadata
* [tenplay] Relax _VALID_URL (closes #25001)
* [tvplay] fix Viafree extraction(closes #15189)(closes #24473)(closes #24789)
* [yahoo] fix GYAO Player extraction and relax title URL regex(closes #24178)(closes #24778)
* [youtube] Use redirected video id if any (closes #25063)
* [youtube] Improve player id extraction and add tests
* [extractor/common] Extract multiple JSON-LD entries
* [crunchyroll] Fix and improve extraction (closes #25096, closes #25060)
* [ChangeLog] Actualize
[ci skip]
* release 2020.05.03
* [puhutv] Remove no longer available HTTP formats (closes #25124)
* [utils] Improve cookie files support
+ Add support for UTF-8 in cookie files
* Skip malformed cookie file entries instead of crashing (invalid entry len, invalid expires at)
* [dailymotion] Fix typo
* [compat] Introduce compat_cookiejar_Cookie
* [extractor/common] Use compat_cookiejar_Cookie for _set_cookie (closes #23256, closes #24776)
To always ensure cookie name and value are bytestrings on python 2.
* [orf] Add support for more radio stations (closes #24938) (#24968)
* [uol] fix extraction(closes #22007)
* [downloader/http] Finish downloading once received data length matches expected
Always do this when possible, i.e. when Content-Length or the expected length is known, not only in tests.
This saves an unnecessary final loop iteration that tries to read 0 bytes.
* [downloader/http] Request last data block of exact remaining size
Always request the last data block with exactly the remaining number of bytes, when known, rather than with the current block size (a rough sketch of this download loop follows the change list below).
* [iprima] Improve extraction (closes #25138)
* [youtube] Improve signature cipher extraction (closes #25188)
* [ChangeLog] Actualize
[ci skip]
* release 2020.05.08
* [spike] fix Bellator mgid extraction(closes #25195)
* [bbccouk] PEP8
* [mailru] Fix extraction (closes #24530) (#25239)
* [README.md] flake8 HTTPS URL (#25230)
* [youtube] Add support for yewtu.be (#25226)
* [soundcloud] reduce API playlist page limit(closes #25274)
* [vimeo] improve format extraction and sorting(closes #25285)
* [redtube] Improve title extraction (#25208)
* [indavideo] Switch to HTTPS for API request (#25191)
* [utils] Fix file permissions in write_json_file (closes #12471) (#25122)
* [redtube] Improve formats extraction and extract m3u8 formats (closes #25311, closes #25321)
* [ard] Improve _VALID_URL (closes #25134) (#25198)
* [giantbomb] Extend _VALID_URL (#25222)
* [postprocessor/ffmpeg] Embed series metadata with --add-metadata
* [youtube] Add support for more invidious instances (#25417)
* [ard:beta] Extend _VALID_URL (closes #25405)
* [ChangeLog] Actualize
[ci skip]
* release 2020.05.29
* [jwplatform] Improve embeds extraction (closes #25467)
* [periscope] Fix untitled broadcasts (#25482)
* [twitter:broadcast] Add untitled periscope broadcast test
* [malltv] Add support for sk.mall.tv (#25445)
* [brightcove] Fix subtitles extraction (closes #25540)
* [brightcove] Sort imports
* [twitch] Pass v5 accept header and fix thumbnails extraction (closes #25531)
* [twitch:stream] Fix extraction (closes #25528)
* [twitch:stream] Expect 400 and 410 HTTP errors from API
* [tele5] Prefer jwplatform over nexx (closes #25533)
* [jwplatform] Add support for bypassing geo restriction
* [tele5] Bypass geo restriction
* [ChangeLog] Actualize
[ci skip]
* release 2020.06.06
* [kaltura] Add support for multiple embeds on a webpage (closes #25523)
* [youtube] Extract chapters from JSON (closes #24819)
* [facebook] Support single-video ID links
I stumbled upon this at https://www.facebook.com/bwfbadminton/posts/10157127020046316 . No idea how prevalent it is yet.
* [youtube] Fix playlist and feed extraction (closes #25675)
* [youtube] Fix thumbnails extraction and remove uploader id extraction warning (closes #25676)
* [youtube] Fix upload date extraction
* [youtube] Improve view count extraction
* [youtube] Fix uploader id and uploader URL extraction
* [ChangeLog] Actualize
[ci skip]
* release 2020.06.16
* [youtube] Fix categories and improve tags extraction
* [youtube] Force old layout (closes #25682, closes #25683, closes #25680, closes #25686)
* [ChangeLog] Actualize
[ci skip]
* release 2020.06.16.1
* [brightcove] Improve embed detection (closes #25674)
* [bellmedia] add support for cp24.com clip URLs(closes #25764)
* [youtube:playlists] Extend _VALID_URL (closes #25810)
* [youtube] Prevent excess HTTP 301 (#25786)
* [wistia] Restrict embed regex (closes #25969)
* [youtube] Improve description extraction (closes #25937) (#25980)
* [youtube] Fix sigfunc name extraction (closes #26134, closes #26135, closes #26136, closes #26137)
* [ChangeLog] Actualize
[ci skip]
* release 2020.07.28
* [xhamster] Extend _VALID_URL (closes #25789) (#25804)
* [xhamster] Fix extraction (closes #26157) (#26254)
* [xhamster] Extend _VALID_URL (closes #25927)
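For reference, here is a minimal, illustrative sketch (not youtube-dl code) of the TAB-separated Netscape cookie file format that the cookie-related entries above refer to; the cookie values are made up and only the Python standard library is used:
```python
# Each line of a Netscape-format cookies.txt has seven TAB-separated fields;
# using spaces instead of TABs makes the file malformed.
import http.cookiejar
import tempfile
import time

cookie_line = '\t'.join([
    '.example.com',                  # domain
    'TRUE',                          # apply to subdomains
    '/',                             # path
    'FALSE',                         # secure-only
    str(int(time.time()) + 86400),   # expiry as a unix timestamp
    'session_id',                    # cookie name
    'abc123',                        # cookie value
])

with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('# Netscape HTTP Cookie File\n' + cookie_line + '\n')
    path = f.name

jar = http.cookiejar.MozillaCookieJar(path)
jar.load()                      # parses only because the separators are TABs
print([c.name for c in jar])    # ['session_id']
```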
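Likewise, a rough sketch of the download-loop behaviour described in the two [downloader/http] entries above (the `read_block` callable and the sizes are hypothetical, not youtube-dl's actual downloader):
```python
def download(read_block, expected_length, block_size=1024 * 1024):
    # Request the final block with exactly the number of bytes still missing,
    # and stop as soon as the expected length has been received, so no extra
    # zero-byte read is attempted at the end.
    received = b''
    while len(received) < expected_length:
        want = min(block_size, expected_length - len(received))
        chunk = read_block(want)
        if not chunk:  # connection closed early
            break
        received += chunk
    return received
```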
Co-authored-by: Remita Amine <remitamine@gmail.com>
Co-authored-by: Sergey M․ <dstftw@gmail.com>
Co-authored-by: nmeum <soeren+github@soeren-tempel.net>
Co-authored-by: Roxedus <me@roxedus.dev>
Co-authored-by: Singwai Chan <c.singwai@gmail.com>
Co-authored-by: cdarlint <cdarlint@users.noreply.github.com>
Co-authored-by: Johannes N <31795504+jonolt@users.noreply.github.com>
Co-authored-by: jnozsc <jnozsc@gmail.com>
Co-authored-by: Moritz Patelscheck <moritz.patelscheck@campus.tu-berlin.de>
Co-authored-by: PB <3854688+uno20001@users.noreply.github.com>
Co-authored-by: Philipp Hagemeister <phihag@phihag.de>
Co-authored-by: Xaver Hellauer <software@hellauer.bayern>
Co-authored-by: d2au <d2au.dev@gmail.com>
Co-authored-by: Jan 'Yenda' Trmal <jtrmal@gmail.com>
Co-authored-by: jxu <7989982+jxu@users.noreply.github.com>
Co-authored-by: Martin Ström <name@my-domain.se>
Co-authored-by: The Hatsune Daishi <nao20010128@gmail.com>
Co-authored-by: tsia <github@tsia.de>
Co-authored-by: 3risian <59593325+3risian@users.noreply.github.com>
Co-authored-by: Tristan Waddington <tristan.waddington@gmail.com>
Co-authored-by: Devon Meunier <devon.meunier@gmail.com>
Co-authored-by: Felix Stupp <felix.stupp@outlook.com>
Co-authored-by: tom <tomster954@gmail.com>
Co-authored-by: AndrewMBL <62922222+AndrewMBL@users.noreply.github.com>
Co-authored-by: willbeaufoy <will@willbeaufoy.net>
Co-authored-by: Philipp Stehle <anderschwiedu@googlemail.com>
Co-authored-by: hh0rva1h <61889859+hh0rva1h@users.noreply.github.com>
Co-authored-by: comsomisha <shmelev1996@mail.ru>
Co-authored-by: TotalCaesar659 <14265316+TotalCaesar659@users.noreply.github.com>
Co-authored-by: Juan Francisco Cantero Hurtado <iam@juanfra.info>
Co-authored-by: Dave Loyall <dave@the-good-guys.net>
Co-authored-by: tlsssl <63866177+tlsssl@users.noreply.github.com>
Co-authored-by: Rob <ankenyr@gmail.com>
Co-authored-by: Michael Klein <github@a98shuttle.de>
Co-authored-by: JordanWeatherby <47519158+JordanWeatherby@users.noreply.github.com>
Co-authored-by: striker.sh <19488257+strikersh@users.noreply.github.com>
Co-authored-by: Matej Dujava <mdujava@gmail.com>
Co-authored-by: Glenn Slayden <5589855+glenn-slayden@users.noreply.github.com>
Co-authored-by: MRWITEK <mrvvitek@gmail.com>
Co-authored-by: JChris246 <43832407+JChris246@users.noreply.github.com>
Co-authored-by: TheRealDude2 <the.real.dude@gmx.de>
Diffstat (limited to 'youtube_dl/extractor/soundcloud.py')
-rw-r--r-- | youtube_dl/extractor/soundcloud.py | 211
1 file changed, 114 insertions, 97 deletions
diff --git a/youtube_dl/extractor/soundcloud.py b/youtube_dl/extractor/soundcloud.py
index c2ee54457..d37c52543 100644
--- a/youtube_dl/extractor/soundcloud.py
+++ b/youtube_dl/extractor/soundcloud.py
@@ -9,10 +9,13 @@ from .common import (
     SearchInfoExtractor
 )
 from ..compat import (
+    compat_HTTPError,
+    compat_kwargs,
     compat_str,
     compat_urlparse,
 )
 from ..utils import (
+    error_to_compat_str,
     ExtractorError,
     float_or_none,
     HEADRequest,
@@ -24,6 +27,7 @@ from ..utils import (
     unified_timestamp,
     update_url_query,
     url_or_none,
+    urlhandle_detect_ext,
 )
 
 
@@ -93,7 +97,7 @@ class SoundcloudIE(InfoExtractor):
                 'repost_count': int,
             }
         },
-        # not streamable song
+        # geo-restricted
         {
             'url': 'https://soundcloud.com/the-concept-band/goldrushed-mastered?in=the-concept-band/sets/the-royal-concept-ep',
             'info_dict': {
@@ -105,18 +109,13 @@ class SoundcloudIE(InfoExtractor):
                 'uploader_id': '9615865',
                 'timestamp': 1337635207,
                 'upload_date': '20120521',
-                'duration': 30,
+                'duration': 227.155,
                 'license': 'all-rights-reserved',
                 'view_count': int,
                 'like_count': int,
                 'comment_count': int,
                 'repost_count': int,
             },
-            'params': {
-                # rtmp
-                'skip_download': True,
-            },
-            'skip': 'Preview',
         },
         # private link
         {
@@ -227,7 +226,6 @@ class SoundcloudIE(InfoExtractor):
                 'skip_download': True,
             },
         },
-        # not available via api.soundcloud.com/i1/tracks/id/streams
         {
             'url': 'https://soundcloud.com/giovannisarani/mezzo-valzer',
             'md5': 'e22aecd2bc88e0e4e432d7dcc0a1abf7',
@@ -236,7 +234,7 @@ class SoundcloudIE(InfoExtractor):
                 'ext': 'mp3',
                 'title': 'Mezzo Valzer',
                 'description': 'md5:4138d582f81866a530317bae316e8b61',
-                'uploader': 'Giovanni Sarani',
+                'uploader': 'Micronie',
                 'uploader_id': '3352531',
                 'timestamp': 1551394171,
                 'upload_date': '20190228',
@@ -248,14 +246,16 @@ class SoundcloudIE(InfoExtractor):
                 'comment_count': int,
                 'repost_count': int,
             },
-            'expected_warnings': ['Unable to download JSON metadata'],
-        }
+        },
+        {
+            # with AAC HQ format available via OAuth token
+            'url': 'https://soundcloud.com/wandw/the-chainsmokers-ft-daya-dont-let-me-down-ww-remix-1',
+            'only_matching': True,
+        },
     ]
 
-    _API_BASE = 'https://api.soundcloud.com/'
     _API_V2_BASE = 'https://api-v2.soundcloud.com/'
     _BASE_URL = 'https://soundcloud.com/'
-    _CLIENT_ID = 'UW9ajvMgVdMMW3cdeBi8lPfN6dvOVGji'
     _IMAGE_REPL_RE = r'-([0-9a-z]+)\.jpg'
 
     _ARTWORK_MAP = {
@@ -271,14 +271,53 @@ class SoundcloudIE(InfoExtractor):
         'original': 0,
     }
 
+    def _store_client_id(self, client_id):
+        self._downloader.cache.store('soundcloud', 'client_id', client_id)
+
+    def _update_client_id(self):
+        webpage = self._download_webpage('https://soundcloud.com/', None)
+        for src in reversed(re.findall(r'<script[^>]+src="([^"]+)"', webpage)):
+            script = self._download_webpage(src, None, fatal=False)
+            if script:
+                client_id = self._search_regex(
+                    r'client_id\s*:\s*"([0-9a-zA-Z]{32})"',
+                    script, 'client id', default=None)
+                if client_id:
+                    self._CLIENT_ID = client_id
+                    self._store_client_id(client_id)
+                    return
+        raise ExtractorError('Unable to extract client id')
+
+    def _download_json(self, *args, **kwargs):
+        non_fatal = kwargs.get('fatal') is False
+        if non_fatal:
+            del kwargs['fatal']
+        query = kwargs.get('query', {}).copy()
+        for _ in range(2):
+            query['client_id'] = self._CLIENT_ID
+            kwargs['query'] = query
+            try:
+                return super(SoundcloudIE, self)._download_json(*args, **compat_kwargs(kwargs))
+            except ExtractorError as e:
+                if isinstance(e.cause, compat_HTTPError) and e.cause.code == 401:
+                    self._store_client_id(None)
+                    self._update_client_id()
+                    continue
+                elif non_fatal:
+                    self._downloader.report_warning(error_to_compat_str(e))
+                    return False
+                raise
+
+    def _real_initialize(self):
+        self._CLIENT_ID = self._downloader.cache.load('soundcloud', 'client_id') or 'YUKXoArFcqrlQn9tfNHvvyfnDISj04zk'
+
     @classmethod
     def _resolv_url(cls, url):
-        return SoundcloudIE._API_V2_BASE + 'resolve?url=' + url + '&client_id=' + cls._CLIENT_ID
+        return SoundcloudIE._API_V2_BASE + 'resolve?url=' + url
 
-    def _extract_info_dict(self, info, full_title=None, secret_token=None, version=2):
+    def _extract_info_dict(self, info, full_title=None, secret_token=None):
         track_id = compat_str(info['id'])
         title = info['title']
-        track_base_url = self._API_BASE + 'tracks/%s' % track_id
 
         format_urls = set()
         formats = []
@@ -287,26 +326,27 @@ class SoundcloudIE(InfoExtractor):
             query['secret_token'] = secret_token
 
         if info.get('downloadable') and info.get('has_downloads_left'):
-            format_url = update_url_query(
-                info.get('download_url') or track_base_url + '/download', query)
-            format_urls.add(format_url)
-            if version == 2:
-                v1_info = self._download_json(
-                    track_base_url, track_id, query=query, fatal=False) or {}
-            else:
-                v1_info = info
-            formats.append({
-                'format_id': 'download',
-                'ext': v1_info.get('original_format') or 'mp3',
-                'filesize': int_or_none(v1_info.get('original_content_size')),
-                'url': format_url,
-                'preference': 10,
-            })
+            download_url = update_url_query(
+                self._API_V2_BASE + 'tracks/' + track_id + '/download', query)
+            redirect_url = (self._download_json(download_url, track_id, fatal=False) or {}).get('redirectUri')
+            if redirect_url:
+                urlh = self._request_webpage(
+                    HEADRequest(redirect_url), track_id, fatal=False)
+                if urlh:
+                    format_url = urlh.geturl()
+                    format_urls.add(format_url)
+                    formats.append({
+                        'format_id': 'download',
+                        'ext': urlhandle_detect_ext(urlh) or 'mp3',
+                        'filesize': int_or_none(urlh.headers.get('Content-Length')),
+                        'url': format_url,
+                        'preference': 10,
+                    })
 
         def invalid_url(url):
-            return not url or url in format_urls or re.search(r'/(?:preview|playlist)/0/30/', url)
+            return not url or url in format_urls
 
-        def add_format(f, protocol):
+        def add_format(f, protocol, is_preview=False):
             mobj = re.search(r'\.(?P<abr>\d+)\.(?P<ext>[0-9a-z]{3,4})(?=[/?])', stream_url)
             if mobj:
                 for k, v in mobj.groupdict().items():
@@ -315,16 +355,27 @@ class SoundcloudIE(InfoExtractor):
             format_id_list = []
             if protocol:
                 format_id_list.append(protocol)
+            ext = f.get('ext')
+            if ext == 'aac':
+                f['abr'] = '256'
             for k in ('ext', 'abr'):
                 v = f.get(k)
                 if v:
                     format_id_list.append(v)
+            preview = is_preview or re.search(r'/(?:preview|playlist)/0/30/', f['url'])
+            if preview:
+                format_id_list.append('preview')
             abr = f.get('abr')
             if abr:
                 f['abr'] = int(abr)
+            if protocol == 'hls':
+                protocol = 'm3u8' if ext == 'aac' else 'm3u8_native'
+            else:
+                protocol = 'http'
             f.update({
                 'format_id': '_'.join(format_id_list),
-                'protocol': 'm3u8_native' if protocol == 'hls' else 'http',
+                'protocol': protocol,
+                'preference': -10 if preview else None,
             })
             formats.append(f)
 
@@ -335,7 +386,7 @@ class SoundcloudIE(InfoExtractor):
             if not isinstance(t, dict):
                 continue
             format_url = url_or_none(t.get('url'))
-            if not format_url or t.get('snipped') or '/preview/' in format_url:
+            if not format_url:
                 continue
             stream = self._download_json(
                 format_url, track_id, query=query, fatal=False)
@@ -358,44 +409,14 @@ class SoundcloudIE(InfoExtractor):
             add_format({
                 'url': stream_url,
                 'ext': ext,
-            }, 'http' if protocol == 'progressive' else protocol)
-
-        if not formats:
-            # Old API, does not work for some tracks (e.g.
-            # https://soundcloud.com/giovannisarani/mezzo-valzer)
-            # and might serve preview URLs (e.g.
-            # http://www.soundcloud.com/snbrn/ele)
-            format_dict = self._download_json(
-                track_base_url + '/streams', track_id,
-                'Downloading track url', query=query, fatal=False) or {}
-
-            for key, stream_url in format_dict.items():
-                if invalid_url(stream_url):
-                    continue
-                format_urls.add(stream_url)
-                mobj = re.search(r'(http|hls)_([^_]+)_(\d+)_url', key)
-                if mobj:
-                    protocol, ext, abr = mobj.groups()
-                    add_format({
-                        'abr': abr,
-                        'ext': ext,
-                        'url': stream_url,
-                    }, protocol)
-
-        if not formats:
-            # We fallback to the stream_url in the original info, this
-            # cannot be always used, sometimes it can give an HTTP 404 error
-            urlh = self._request_webpage(
-                HEADRequest(info.get('stream_url') or track_base_url + '/stream'),
-                track_id, query=query, fatal=False)
-            if urlh:
-                stream_url = urlh.geturl()
-                if not invalid_url(stream_url):
-                    add_format({'url': stream_url}, 'http')
+            }, 'http' if protocol == 'progressive' else protocol,
+                t.get('snipped') or '/preview/' in format_url)
 
         for f in formats:
             f['vcodec'] = 'none'
 
+        if not formats and info.get('policy') == 'BLOCK':
+            self.raise_geo_restricted()
         self._sort_formats(formats)
 
         user = info.get('user') or {}
@@ -451,9 +472,7 @@ class SoundcloudIE(InfoExtractor):
 
         track_id = mobj.group('track_id')
 
-        query = {
-            'client_id': self._CLIENT_ID,
-        }
+        query = {}
         if track_id:
             info_json_url = self._API_V2_BASE + 'tracks/' + track_id
             full_title = track_id
@@ -467,20 +486,24 @@ class SoundcloudIE(InfoExtractor):
                 resolve_title += '/%s' % token
             info_json_url = self._resolv_url(self._BASE_URL + resolve_title)
 
-        version = 2
         info = self._download_json(
-            info_json_url, full_title, 'Downloading info JSON', query=query, fatal=False)
-        if not info:
-            info = self._download_json(
-                info_json_url.replace(self._API_V2_BASE, self._API_BASE),
-                full_title, 'Downloading info JSON', query=query)
-            version = 1
+            info_json_url, full_title, 'Downloading info JSON', query=query)
 
-        return self._extract_info_dict(info, full_title, token, version)
+        return self._extract_info_dict(info, full_title, token)
 
 
 class SoundcloudPlaylistBaseIE(SoundcloudIE):
-    def _extract_track_entries(self, tracks, token=None):
+    def _extract_set(self, playlist, token=None):
+        playlist_id = compat_str(playlist['id'])
+        tracks = playlist.get('tracks') or []
+        if not all([t.get('permalink_url') for t in tracks]) and token:
+            tracks = self._download_json(
+                self._API_V2_BASE + 'tracks', playlist_id,
+                'Downloading tracks', query={
+                    'ids': ','.join([compat_str(t['id']) for t in tracks]),
+                    'playlistId': playlist_id,
+                    'playlistSecretToken': token,
+                })
         entries = []
         for track in tracks:
             track_id = str_or_none(track.get('id'))
@@ -493,7 +516,10 @@ class SoundcloudPlaylistBaseIE(SoundcloudIE):
                     url += '?secret_token=' + token
             entries.append(self.url_result(
                 url, SoundcloudIE.ie_key(), track_id))
-        return entries
+        return self.playlist_result(
+            entries, playlist_id,
+            playlist.get('title'),
+            playlist.get('description'))
 
 
 class SoundcloudSetIE(SoundcloudPlaylistBaseIE):
@@ -504,6 +530,7 @@
         'info_dict': {
             'id': '2284613',
             'title': 'The Royal Concept EP',
+            'description': 'md5:71d07087c7a449e8941a70a29e34671e',
         },
         'playlist_mincount': 5,
     }, {
@@ -526,17 +553,13 @@ class SoundcloudSetIE(SoundcloudPlaylistBaseIE):
             msgs = (compat_str(err['error_message']) for err in info['errors'])
             raise ExtractorError('unable to download video webpage: %s' % ','.join(msgs))
 
-        entries = self._extract_track_entries(info['tracks'], token)
-
-        return self.playlist_result(
-            entries, str_or_none(info.get('id')), info.get('title'))
+        return self._extract_set(info, token)
 
 
-class SoundcloudPagedPlaylistBaseIE(SoundcloudPlaylistBaseIE):
+class SoundcloudPagedPlaylistBaseIE(SoundcloudIE):
     def _extract_playlist(self, base_url, playlist_id, playlist_title):
         COMMON_QUERY = {
-            'limit': 2000000000,
-            'client_id': self._CLIENT_ID,
+            'limit': 80000,
             'linked_partitioning': '1',
         }
 
@@ -722,9 +745,7 @@ class SoundcloudPlaylistIE(SoundcloudPlaylistBaseIE):
         mobj = re.match(self._VALID_URL, url)
         playlist_id = mobj.group('id')
 
-        query = {
-            'client_id': self._CLIENT_ID,
-        }
+        query = {}
         token = mobj.group('token')
         if token:
             query['secret_token'] = token
@@ -733,10 +754,7 @@ class SoundcloudPlaylistIE(SoundcloudPlaylistBaseIE):
             self._API_V2_BASE + 'playlists/' + playlist_id,
             playlist_id, 'Downloading playlist', query=query)
 
-        entries = self._extract_track_entries(data['tracks'], token)
-
-        return self.playlist_result(
-            entries, playlist_id, data.get('title'), data.get('description'))
+        return self._extract_set(data, token)
 
 
 class SoundcloudSearchIE(SearchInfoExtractor, SoundcloudIE):
@@ -761,7 +779,6 @@ class SoundcloudSearchIE(SearchInfoExtractor, SoundcloudIE):
             self._MAX_RESULTS_PER_PAGE)
         query.update({
             'limit': limit,
-            'client_id': self._CLIENT_ID,
             'linked_partitioning': 1,
             'offset': 0,
         })
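The core of the soundcloud.py change above is a cached client_id that is refreshed whenever the API answers HTTP 401. A condensed, standalone sketch of that pattern (standard library only; the helper names `_fetch`, `_update_client_id` and `api_json` are illustrative, not youtube-dl's API) could look like this:
```python
# Minimal sketch of the refresh-on-401 flow implemented by the _download_json
# override in the diff above; error handling and caching are simplified.
import json
import re
import urllib.error
import urllib.parse
import urllib.request

API_V2 = 'https://api-v2.soundcloud.com/'
_client_id = None  # stands in for the value cached in _real_initialize


def _fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode('utf-8', 'replace')


def _update_client_id():
    # Scrape the web player's script bundles for a fresh 32-character
    # client_id, mirroring the regexes used in the extractor (script URLs
    # are assumed to be absolute).
    global _client_id
    homepage = _fetch('https://soundcloud.com/')
    for src in reversed(re.findall(r'<script[^>]+src="([^"]+)"', homepage)):
        m = re.search(r'client_id\s*:\s*"([0-9a-zA-Z]{32})"', _fetch(src))
        if m:
            _client_id = m.group(1)
            return
    raise RuntimeError('Unable to extract client id')


def api_json(path, **params):
    # Try the request with the current client_id; on HTTP 401 refresh it once.
    for attempt in range(2):
        params['client_id'] = _client_id or ''
        url = API_V2 + path + '?' + urllib.parse.urlencode(params)
        try:
            return json.loads(_fetch(url))
        except urllib.error.HTTPError as e:
            if e.code == 401 and attempt == 0:
                _update_client_id()
                continue
            raise
```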