bulk project name change, expect bugssssss

merge-requests/5/head
Dominika 2020-11-01 04:39:17 +01:00
parent 7488e5a683
commit eacab4c6a3
864 changed files with 801 additions and 2768 deletions

View File

@ -6,7 +6,7 @@ default:
py2.7-core:
image: python:2.7
variables:
YTDL_TEST_SET: core
HDL_TEST_SET: core
before_script:
- pip install nose
script:
@ -15,7 +15,7 @@ py2.7-core:
py2.7-download:
image: python:2.7
variables:
YTDL_TEST_SET: download
HDL_TEST_SET: download
allow_failure: true
before_script:
- pip install nose
@ -25,7 +25,7 @@ py2.7-download:
py3.2-core:
image: python:3.2
variables:
YTDL_TEST_SET: core
HDL_TEST_SET: core
before_script:
- pip install nose
script:
@ -34,7 +34,7 @@ py3.2-core:
py3.2-download:
image: python:3.2
variables:
YTDL_TEST_SET: download
HDL_TEST_SET: download
allow_failure: true
before_script:
- pip install nose
@ -44,7 +44,7 @@ py3.2-download:
py3.3-core:
image: python:3.3-alpine
variables:
YTDL_TEST_SET: core
HDL_TEST_SET: core
before_script:
- apk update
- apk add bash
@ -55,7 +55,7 @@ py3.3-core:
py3.3-download:
image: python:3.3-alpine
variables:
YTDL_TEST_SET: download
HDL_TEST_SET: download
before_script:
- apk update
- apk add bash
@ -67,14 +67,14 @@ py3.3-download:
py3.4-core:
image: python:3.4-alpine
variables:
YTDL_TEST_SET: core
HDL_TEST_SET: core
script:
- ./devscripts/run_tests.sh
py3.4-download:
image: python:3.4-alpine
variables:
YTDL_TEST_SET: download
HDL_TEST_SET: download
allow_failure: true
script:
- ./devscripts/run_tests.sh
@ -82,14 +82,14 @@ py3.4-download:
py3.5-core:
image: python:3.5-alpine
variables:
YTDL_TEST_SET: core
HDL_TEST_SET: core
script:
- ./devscripts/run_tests.sh
py3.5-download:
image: python:3.5-alpine
variables:
YTDL_TEST_SET: download
HDL_TEST_SET: download
allow_failure: true
script:
- ./devscripts/run_tests.sh
@ -97,14 +97,14 @@ py3.5-download:
py3.6-core:
image: python:3.6-alpine
variables:
YTDL_TEST_SET: core
HDL_TEST_SET: core
script:
- ./devscripts/run_tests.sh
py3.6-download:
image: python:3.6-alpine
variables:
YTDL_TEST_SET: download
HDL_TEST_SET: download
allow_failure: true
script:
- ./devscripts/run_tests.sh
@ -112,14 +112,14 @@ py3.6-download:
py3.7-core:
image: python:3.7-alpine
variables:
YTDL_TEST_SET: core
HDL_TEST_SET: core
script:
- ./devscripts/run_tests.sh
py3.7-download:
image: python:3.7-alpine
variables:
YTDL_TEST_SET: download
HDL_TEST_SET: download
allow_failure: true
script:
- ./devscripts/run_tests.sh
@ -127,14 +127,14 @@ py3.7-download:
py3.8-core:
image: python:3.8-alpine
variables:
YTDL_TEST_SET: core
HDL_TEST_SET: core
script:
- ./devscripts/run_tests.sh
py3.8-download:
image: python:3.8-alpine
variables:
YTDL_TEST_SET: download
HDL_TEST_SET: download
allow_failure: true
script:
- ./devscripts/run_tests.sh
@ -142,14 +142,14 @@ py3.8-download:
py3.9-core:
image: python:3.9-alpine
variables:
YTDL_TEST_SET: core
HDL_TEST_SET: core
script:
- ./devscripts/run_tests.sh
py3.9-download:
image: python:3.9-alpine
variables:
YTDL_TEST_SET: download
HDL_TEST_SET: download
allow_failure: true
script:
- ./devscripts/run_tests.sh

View File

@ -1,50 +0,0 @@
language: python
python:
- "2.6"
- "2.7"
- "3.2"
- "3.3"
- "3.4"
- "3.5"
- "3.6"
- "pypy"
- "pypy3"
dist: trusty
env:
- YTDL_TEST_SET=core
- YTDL_TEST_SET=download
jobs:
include:
- python: 3.7
dist: xenial
env: YTDL_TEST_SET=core
- python: 3.7
dist: xenial
env: YTDL_TEST_SET=download
- python: 3.8
dist: xenial
env: YTDL_TEST_SET=core
- python: 3.8
dist: xenial
env: YTDL_TEST_SET=download
- python: 3.8-dev
dist: xenial
env: YTDL_TEST_SET=core
- python: 3.8-dev
dist: xenial
env: YTDL_TEST_SET=download
- env: JYTHON=true; YTDL_TEST_SET=core
- env: JYTHON=true; YTDL_TEST_SET=download
- name: flake8
python: 3.8
dist: xenial
install: pip install flake8
script: flake8 .
fast_finish: true
allow_failures:
- env: YTDL_TEST_SET=download
- env: JYTHON=true; YTDL_TEST_SET=core
- env: JYTHON=true; YTDL_TEST_SET=download
before_install:
- if [ "$JYTHON" == "true" ]; then ./devscripts/install_jython.sh; export PATH="$HOME/jython/bin:$PATH"; fi
script: ./devscripts/run_tests.sh

View File

@ -246,3 +246,5 @@ Enes Solak
Nathan Rossi
Thomas van der Berg
Luca Cherubin
Dominika Liberda
Laura Liberda

View File

@ -1,434 +0,0 @@
**Please include the full output of youtube-dl when run with `-v`**, i.e. **add** `-v` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this:
```
$ youtube-dl -v <your command line>
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'https://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2015.12.06
[debug] Git HEAD: 135392e
[debug] Python version 2.6.6 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
...
```
**Do not post screenshots of verbose logs; only plain text is acceptable.**
The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.
Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist):
### Is the description of the issue itself sufficient?
We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources. Many contributors, including myself, are also not native speakers, so we may misread some parts.
So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious
- What the problem is
- How it could be fixed
- What your proposed solution would look like
If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over.
For bug reports, this means that your report should contain the *complete* output of youtube-dl when called with the `-v` flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.
If your server has multiple IPs or you suspect censorship, adding `--call-home` may be a good idea to get more diagnostics. If the error is `ERROR: Unable to extract ...` and you cannot reproduce it from multiple countries, add `--dump-pages` (warning: this will yield a rather large output, redirect it to the file `log.txt` by adding `>log.txt 2>&1` to your command-line) or upload the `.dump` files you get when you add `--write-pages` [somewhere](https://gist.github.com/).
**Site support requests must contain an example URL**. An example URL is a URL you might want to download, like `https://www.youtube.com/watch?v=BaW_jenozKc`. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. `https://www.youtube.com/`) is *not* an example URL.
### Are you using the latest version?
Before reporting any issue, type `youtube-dl -U`. This should report that you're up-to-date. About 20% of the reports we receive concern issues that are already fixed but were reported from outdated versions. This goes for feature requests as well.
### Is the issue already documented?
Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/ytdl-org/youtube-dl/search?type=Issues) of this repository. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.
### Why are existing options not enough?
Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/ytdl-org/youtube-dl/blob/master/README.md#options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
### Is there enough context in your bug report?
People want to solve problems, and often think they do us a favor by breaking down their larger problems (e.g. wanting to skip already downloaded files) to a specific request (e.g. requesting us to look whether the file exists before downloading the info page). However, what often happens is that they break down the problem into two steps: One simple, and one impossible (or extremely complicated one).
We are then presented with a very complicated request when the original problem could be solved far easier, e.g. by recording the downloaded video IDs in a separate file. To avoid this, you must include the greater context where it is non-obvious. In particular, every feature request that does not consist of adding support for a new site should contain a use case scenario that explains in what situation the missing feature would be useful.
### Does the issue involve one problem, and one problem only?
Some of our users seem to think there is a limit of issues they can or should open. There is no limit of issues they can or should open. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.
In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, White house podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of youtube-dl that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.
### Is anyone going to need the feature?
Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.
### Is your question about youtube-dl?
It may sound strange, but some bug reports we receive are completely unrelated to youtube-dl and relate to a different, or even the reporter's own, application. Please make sure that you are actually using youtube-dl. If you are using a UI for youtube-dl, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for youtube-dl fails in some way you believe is related to youtube-dl, by all means, go ahead and report the bug.
# DEVELOPER INSTRUCTIONS
Most users do not need to build youtube-dl and can [download the builds](https://ytdl-org.github.io/youtube-dl/download.html) or get them from their distribution.
To run youtube-dl as a developer, you don't need to build anything either. Simply execute
python -m youtube_dl
To run the tests, simply invoke your favorite test runner, or execute a test file directly; any of the following work:
python -m unittest discover
python test/test_download.py
nosetests
See item 6 of [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor specific test cases.
If you want to create a build of youtube-dl yourself, you'll need
* python
* make (only GNU make is supported)
* pandoc
* zip
* nosetests
### Adding support for a new site
If you want to add support for a new site, first of all **make sure** this site is **not dedicated to [copyright infringement](README.md#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. youtube-dl does **not support** such sites thus pull requests adding support for them **will be rejected**.
After you have ensured this site is distributing its content legally, you can follow this quick list (assuming your service is called `yourextractor`):
1. [Fork this repository](https://github.com/ytdl-org/youtube-dl/fork)
2. Check out the source code with:
git clone git@github.com:YOUR_GITHUB_USERNAME/youtube-dl.git
3. Start a new git branch with
cd youtube-dl
git checkout -b yourextractor
4. Start with this simple template and save it to `youtube_dl/extractor/yourextractor.py`:
```python
# coding: utf-8
from __future__ import unicode_literals

from .common import InfoExtractor


class YourExtractorIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
    _TEST = {
        'url': 'https://yourextractor.com/watch/42',
        'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
        'info_dict': {
            'id': '42',
            'ext': 'mp4',
            'title': 'Video title goes here',
            'thumbnail': r're:^https?://.*\.jpg$',
            # TODO more properties, either as:
            # * A value
            # * MD5 checksum; start the string with md5:
            # * A regular expression; start the string with re:
            # * Any Python type (for example int or float)
        }
    }

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)

        # TODO more code goes here, for example ...
        title = self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title')

        return {
            'id': video_id,
            'title': title,
            'description': self._og_search_description(webpage),
            'uploader': self._search_regex(r'<div[^>]+id="uploader"[^>]*>([^<]+)<', webpage, 'uploader', fatal=False),
            # TODO more properties (see youtube_dl/extractor/common.py)
        }
```
5. Add an import in [`youtube_dl/extractor/extractors.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/extractors.py).
6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with an `only_matching` key in the test's dict are not counted in.
7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303). Add tests and code for as many as you want.
8. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):
$ flake8 youtube_dl/extractor/yourextractor.py
9. Make sure your code works under all [Python](https://www.python.org/) versions claimed supported by youtube-dl, namely 2.6, 2.7, and 3.2+.
10. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files and [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:
$ git add youtube_dl/extractor/extractors.py
$ git add youtube_dl/extractor/yourextractor.py
$ git commit -m '[yourextractor] Add new extractor'
$ git push origin yourextractor
11. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
In any case, thank you very much for your contributions!
## youtube-dl coding conventions
This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.
Extractors are very fragile by nature since they depend on the layout of the source data provided by third-party media hosters, which is out of your control and tends to change. As an extractor implementer, your task is not only to write code that extracts media links and metadata correctly, but also to minimize dependency on the source's layout and even to make the code anticipate potential future changes. This is important because it allows the extractor to survive minor layout changes, keeping old youtube-dl versions working. Even though such breakage is easily fixed by releasing a new version of youtube-dl with a fix incorporated, all the previous versions remain broken in all repositories and distros' packages, which may not be so prompt in fetching the update from us. Needless to say, some non-rolling-release distros may never receive an update at all.
### Mandatory and optional metafields
For extraction to work, youtube-dl relies on the metadata your extractor extracts and provides, expressed as an [information dictionary](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303), or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction process by youtube-dl:
- `id` (media identifier)
- `title` (media title)
- `url` (media download URL) or `formats`
In fact only the last option is technically mandatory (i.e. if you can't figure out the download location of the media, the extraction does not make any sense). But by convention youtube-dl also treats `id` and `title` as mandatory. Thus the aforementioned metafields are the critical data without which extraction does not make sense; if any of them fails to be extracted, the extractor is considered completely broken.
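For illustration, a minimal sketch of an info dict that satisfies only the mandatory fields, using `formats` rather than a top-level `url` (the URL and format values below are hypothetical):
```python
# Minimal info dict sketch: only `id`, `title` and `url`/`formats` are mandatory.
info_dict = {
    'id': '42',
    'title': 'Video title goes here',
    'formats': [{
        'url': 'https://example.com/media/42_360p.mp4',  # hypothetical media URL
        'format_id': '360p',
        'ext': 'mp4',
        'height': 360,
    }],
}
```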
[Any field](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L188-L303) apart from the aforementioned ones is considered **optional**. That means that extraction should be **tolerant** of situations in which sources for these fields can potentially be unavailable (even if they are always available at the moment) and **future-proof** so as not to break the extraction of general purpose mandatory fields.
#### Example
Say you have some source dictionary `meta` that you've fetched as JSON with an HTTP request and it has a key `summary`:
```python
meta = self._download_json(url, video_id)
```
Assume at this point `meta`'s layout is:
```python
{
    ...
    "summary": "some fancy summary text",
    ...
}
```
Assume you want to extract `summary` and put it into the resulting info dict as `description`. Since `description` is an optional meta field, you should be prepared for this key to be missing from the `meta` dict, so you should extract it like:
```python
description = meta.get('summary') # correct
```
and not like:
```python
description = meta['summary'] # incorrect
```
The latter will break the extraction process with a `KeyError` if `summary` disappears from `meta` at some later time, while with the former approach extraction will just go ahead with `description` set to `None`, which is perfectly fine (remember that `None` is equivalent to the absence of data).
Similarly, you should pass `fatal=False` when extracting optional data from a webpage with `_search_regex`, `_html_search_regex` or similar methods, for instance:
```python
description = self._search_regex(
    r'<span[^>]+id="title"[^>]*>([^<]+)<',
    webpage, 'description', fatal=False)
```
With `fatal` set to `False`, if `_search_regex` fails to extract `description`, it will emit a warning and continue extraction.
You can also pass `default=<some fallback value>`, for example:
```python
description = self._search_regex(
    r'<span[^>]+id="title"[^>]*>([^<]+)<',
    webpage, 'description', default=None)
```
On failure this code will silently continue the extraction with `description` set to `None`. That is useful for metafields that may or may not be present.
### Provide fallbacks
When extracting metadata, try to do so from multiple sources. For example, if `title` is present in several places, try extracting it from at least some of them. This makes the extractor more future-proof in case some of the sources become unavailable.
#### Example
Say `meta` from the previous example has a `title` and you are about to extract it. Since `title` is a mandatory meta field you should end up with something like:
```python
title = meta['title']
```
If `title` disappears from `meta` in the future due to some changes on the hoster's side, the extraction would fail since `title` is mandatory. That's expected.
Assume that you have another source you can extract `title` from, for example the `og:title` HTML meta tag of the `webpage`. In this case you can provide a fallback scenario:
```python
title = meta.get('title') or self._og_search_title(webpage)
```
This code will try to extract from `meta` first, and if that fails, it will try extracting `og:title` from the `webpage`.
### Regular expressions
#### Don't capture groups you don't use
A capturing group must be an indication that it's used somewhere in the code. Any group that is not used must be non-capturing.
##### Example
Don't capture the `id` attribute name here since you can't use it for anything anyway.
Correct:
```python
r'(?:id|ID)=(?P<id>\d+)'
```
Incorrect:
```python
r'(id|ID)=(?P<id>\d+)'
```
#### Make regular expressions relaxed and flexible
When using regular expressions, try to write them in a fuzzy, relaxed and flexible way, skipping insignificant parts that are more likely to change, allowing both single and double quotes for quoted values, and so on.
##### Example
Say you need to extract `title` from the following HTML code:
```html
<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">some fancy title</span>
```
The code for that task should look similar to:
```python
title = self._search_regex(
    r'<span[^>]+class="title"[^>]*>([^<]+)', webpage, 'title')
```
Or even better:
```python
title = self._search_regex(
    r'<span[^>]+class=(["\'])title\1[^>]*>(?P<title>[^<]+)',
    webpage, 'title', group='title')
```
Note how this tolerates potential changes in the `style` attribute's value or a switch from double quotes to single quotes for the `class` attribute.
The code definitely should not look like:
```python
title = self._search_regex(
    r'<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">(.*?)</span>',
    webpage, 'title', group='title')
```
### Long lines policy
There is a soft limit to keep lines of code under 80 characters long. This limit should be respected if possible, as long as doing so does not make readability or code maintenance worse.
For example, you should **never** split long string literals like URLs or some other often copied entities over multiple lines to fit this limit:
Correct:
```python
'https://www.youtube.com/watch?v=FqZTN594JQw&list=PLMYEtVRpaqY00V9W81Cwmzp6N6vZqfUKD4'
```
Incorrect:
```python
'https://www.youtube.com/watch?v=FqZTN594JQw&list='
'PLMYEtVRpaqY00V9W81Cwmzp6N6vZqfUKD4'
```
### Inline values
Extracting variables is acceptable for reducing code duplication and improving readability of complex expressions. However, you should avoid extracting variables used only once and moving them to opposite parts of the extractor file, which makes reading the linear flow difficult.
#### Example
Correct:
```python
title = self._html_search_regex(r'<title>([^<]+)</title>', webpage, 'title')
```
Incorrect:
```python
TITLE_RE = r'<title>([^<]+)</title>'
# ...some lines of code...
title = self._html_search_regex(TITLE_RE, webpage, 'title')
```
### Collapse fallbacks
Multiple fallback values can quickly become unwieldy. Collapse them into a single expression via a list of patterns where possible.
#### Example
Good:
```python
description = self._html_search_meta(
    ['og:description', 'description', 'twitter:description'],
    webpage, 'description', default=None)
```
Unwieldy:
```python
description = (
    self._og_search_description(webpage, default=None)
    or self._html_search_meta('description', webpage, default=None)
    or self._html_search_meta('twitter:description', webpage, default=None))
```
Methods that support a list of patterns are: `_search_regex`, `_html_search_regex`, `_og_search_property`, `_html_search_meta`.
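For instance, a sketch of `_search_regex` trying several patterns in order; the patterns themselves are illustrative and assume a hypothetical page layout:
```python
title = self._search_regex(
    (r'<h1[^>]+class="title"[^>]*>([^<]+)',  # preferred source
     r'<title>([^<]+)</title>'),             # fallback source
    webpage, 'title')
```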
### Trailing parentheses
Always move trailing parentheses after the last argument.
#### Example
Correct:
```python
lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
list)
```
Incorrect:
```python
lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
list,
)
```
### Use convenience conversion and parsing functions
Wrap all extracted numeric data into safe functions from [`youtube_dl/utils.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/utils.py): `int_or_none`, `float_or_none`. Use them for string-to-number conversions as well.
Use `url_or_none` for safe URL processing.
Use `try_get` for safe metadata extraction from parsed JSON.
Use `unified_strdate` for uniform `upload_date` or any `YYYYMMDD` meta field extraction, `unified_timestamp` for uniform `timestamp` extraction, `parse_filesize` for `filesize` extraction, `parse_count` for count meta field extraction, `parse_resolution` for `resolution` extraction, `parse_duration` for `duration` extraction, and `parse_age_limit` for `age_limit` extraction.
Explore [`youtube_dl/utils.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/utils.py) for more useful convenience functions.
#### More examples
##### Safely extract optional description from parsed JSON
```python
description = try_get(response, lambda x: x['result']['video'][0]['summary'], compat_str)
```
##### Safely extract more optional metadata
```python
video = try_get(response, lambda x: x['result']['video'][0], dict) or {}
description = video.get('summary')
duration = float_or_none(video.get('durationMs'), scale=1000)
view_count = int_or_none(video.get('views'))
```
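##### Parse dates, durations, counts and URLs (illustrative sketch)
A sketch using the convenience helpers mentioned above; the `video` dict and its keys are hypothetical:
```python
upload_date = unified_strdate(video.get('publishedAt'))  # e.g. '2020-11-01' -> '20201101'
timestamp = unified_timestamp(video.get('publishedAt'))
duration = parse_duration(video.get('length'))           # e.g. '4:05' -> 245.0
view_count = parse_count(video.get('views'))             # e.g. '1.2M' -> 1200000
thumbnail = url_or_none(video.get('thumbnailUrl'))       # None if not a valid URL
```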

View File

@ -1,3 +1,15 @@
version 2020.11.01
Core
* Changed naming to HaruhiDL
Extractors
* [youtube] Fixed extracting videos
* [youtube] Removed JSInterpreter and SWFInterpreter, as they would be a
hassle to maintain in the future
--- fork line ---
version 2020.09.20
Core
@ -234,7 +246,7 @@ Extractors
version 2020.03.01
Core
* [YoutubeDL] Force redirect URL to unicode on python 2
* [HaruhiDL] Force redirect URL to unicode on python 2
- [options] Remove duplicate short option -v for --version (#24162)
Extractors
@ -255,7 +267,7 @@ Extractors
version 2020.02.16
Core
* [YoutubeDL] Fix playlist entry indexing with --playlist-items (#10591,
* [HaruhiDL] Fix playlist entry indexing with --playlist-items (#10591,
#10622)
* [update] Fix updating via symlinks (#23991)
+ [compat] Introduce compat_realpath (#23991)
@ -649,7 +661,7 @@ Extractors
version 2019.09.28
Core
* [YoutubeDL] Honour all --get-* options with --flat-playlist (#22493)
* [HaruhiDL] Honour all --get-* options with --flat-playlist (#22493)
Extractors
* [vk] Fix extraction (#22522)
@ -715,7 +727,7 @@ version 2019.08.13
Core
* [downloader/fragment] Fix ETA calculation of resumed download (#21992)
* [YoutubeDL] Check annotations availability (#18582)
* [HaruhiDL] Check annotations availability (#18582)
Extractors
* [youtube:playlist] Improve flat extraction (#21927)
@ -1178,7 +1190,7 @@ version 2019.02.08
Core
* [utils] Improve JSON-LD regular expression (#18058)
* [YoutubeDL] Fallback to ie_key of matching extractor while making
* [HaruhiDL] Fallback to ie_key of matching extractor while making
download archive id when no explicit ie_key is provided (#19022)
Extractors
@ -1249,7 +1261,7 @@ Extractors
version 2019.01.24
Core
* [YoutubeDL] Fix negation for string operators in format selection (#18961)
* [HaruhiDL] Fix negation for string operators in format selection (#18961)
version 2019.01.23
@ -1257,7 +1269,7 @@ version 2019.01.23
Core
* [utils] Fix urljoin for paths with non-http(s) schemes
* [extractor/common] Improve jwplayer relative URL handling (#18892)
+ [YoutubeDL] Add negation support for string comparisons in format selection
+ [HaruhiDL] Add negation support for string comparisons in format selection
expressions (#18600, #18805)
* [extractor/common] Improve HLS video-only format detection (#18923)
@ -1388,8 +1400,8 @@ Extractors
version 2018.12.09
Core
* [YoutubeDL] Keep session cookies in cookie file between runs
* [YoutubeDL] Recognize session cookies with expired set to 0 (#12929)
* [HaruhiDL] Keep session cookies in cookie file between runs
* [HaruhiDL] Recognize session cookies with expired set to 0 (#12929)
Extractors
+ [teachable] Add support for teachable platform sites (#5451, #18150, #18272)
@ -1931,7 +1943,7 @@ Extractors
version 2018.05.09
Core
* [YoutubeDL] Ensure ext exists for automatic captions
* [HaruhiDL] Ensure ext exists for automatic captions
* Introduce --geo-bypass-ip-block
Extractors
@ -1977,7 +1989,7 @@ version 2018.04.25
Core
* [utils] Fix match_str for boolean meta fields
+ [Makefile] Add support for pandoc 2 and disable smart extension (#16251)
* [YoutubeDL] Fix typo in media extension compatibility checker (#16215)
* [HaruhiDL] Fix typo in media extension compatibility checker (#16215)
Extractors
+ [openload] Recognize IPv6 stream URLs (#16136, #16137, #16205, #16246,
@ -2017,7 +2029,7 @@ Extractors
version 2018.04.09
Core
* [YoutubeDL] Do not save/restore console title while simulate (#16103)
* [HaruhiDL] Do not save/restore console title while simulate (#16103)
* [extractor/common] Relax JSON-LD context check (#16006)
Extractors
@ -2193,7 +2205,7 @@ Extractors
version 2018.02.11
Core
+ [YoutubeDL] Add support for filesize_approx in format selector (#15550)
+ [HaruhiDL] Add support for filesize_approx in format selector (#15550)
Extractors
+ [francetv] Add support for live streams (#13689)
@ -2333,8 +2345,8 @@ Extractors
version 2018.01.07
Core
* [utils] Fix youtube-dl under PyPy3 on Windows
* [YoutubeDL] Output python implementation in debug header
* [utils] Fix haruhi-dl under PyPy3 on Windows
* [HaruhiDL] Output python implementation in debug header
Extractors
+ [jwplatform] Add support for multiple embeds (#15192)
@ -2397,7 +2409,7 @@ version 2017.12.23
Core
* [extractor/common] Move X-Forwarded-For setup code into _request_webpage
+ [YoutubeDL] Add support for playlist_uploader and playlist_uploader_id in
+ [HaruhiDL] Add support for playlist_uploader and playlist_uploader_id in
output template (#11427, #15018)
+ [extractor/common] Introduce uploader, uploader_id and uploader_url
meta fields for playlists (#11427, #15018)
@ -2525,7 +2537,7 @@ version 2017.11.15
Core
* [common] Skip Apple FairPlay m3u8 manifests (#14741)
* [YoutubeDL] Fix playlist range optimization for --playlist-items (#14740)
* [HaruhiDL] Fix playlist range optimization for --playlist-items (#14740)
Extractors
* [vshare] Capture and output error message
@ -2641,7 +2653,7 @@ Extractors
version 2017.10.12
Core
* [YoutubeDL] Improve _default_format_spec (#14461)
* [HaruhiDL] Improve _default_format_spec (#14461)
Extractors
* [steam] Fix extraction (#14067)
@ -2667,8 +2679,8 @@ Extractors
version 2017.10.07
Core
* [YoutubeDL] Ignore duplicates in --playlist-items
* [YoutubeDL] Fix out of range --playlist-items for iterable playlists and
* [HaruhiDL] Ignore duplicates in --playlist-items
* [HaruhiDL] Fix out of range --playlist-items for iterable playlists and
reduce code duplication (#14425)
+ [utils] Use cache in OnDemandPagedList by default
* [postprocessor/ffmpeg] Convert to opus using libopus (#14381)
@ -2691,7 +2703,7 @@ Extractors
version 2017.10.01
Core
* [YoutubeDL] Document youtube_include_dash_manifest
* [HaruhiDL] Document youtube_include_dash_manifest
Extractors
+ [tvp] Add support for new URL schema (#14368)
@ -2742,7 +2754,7 @@ version 2017.09.15
Core
* [downloader/fragment] Restart inconsistent incomplete fragment downloads
(#13731)
* [YoutubeDL] Download raw subtitles files (#12909, #14191)
* [HaruhiDL] Download raw subtitles files (#12909, #14191)
Extractors
* [condenast] Fix extraction (#14196, #14207)
@ -2762,7 +2774,7 @@ version 2017.09.10
Core
+ [utils] Introduce bool_or_none
* [YoutubeDL] Ensure dir existence for each requested format (#14116)
* [HaruhiDL] Ensure dir existence for each requested format (#14116)
Extractors
* [fox] Fix extraction (#14147)
@ -2852,7 +2864,7 @@ Extractors
version 2017.08.18
Core
* [YoutubeDL] Sanitize byte string format URLs (#13951)
* [HaruhiDL] Sanitize byte string format URLs (#13951)
+ [extractor/common] Add support for float durations in _parse_mpd_formats
(#13919)
@ -2870,7 +2882,7 @@ Extractors
version 2017.08.13
Core
* [YoutubeDL] Make sure format id is not empty
* [HaruhiDL] Make sure format id is not empty
* [extractor/common] Make _family_friendly_search optional
* [extractor/common] Respect source's type attribute for HTML5 media (#13892)
@ -2952,8 +2964,8 @@ Extractors
version 2017.07.23
Core
* [YoutubeDL] Improve default format specification (#13704)
* [YoutubeDL] Do not override id, extractor and extractor_key for
* [HaruhiDL] Improve default format specification (#13704)
* [HaruhiDL] Do not override id, extractor and extractor_key for
url_transparent entities
* [extractor/common] Fix playlist_from_matches
@ -2979,7 +2991,7 @@ Extractors
version 2017.07.15
Core
* [YoutubeDL] Don't expand environment variables in meta fields (#13637)
* [HaruhiDL] Don't expand environment variables in meta fields (#13637)
Extractors
* [spiegeltv] Delegate extraction to nexx extractor (#13159)
@ -3052,7 +3064,7 @@ version 2017.06.25
Core
+ [adobepass] Add support for DIRECTV NOW (mso ATTOTT) (#13472)
* [YoutubeDL] Skip malformed formats for better extraction robustness
* [HaruhiDL] Skip malformed formats for better extraction robustness
Extractors
+ [wsj] Add support for barrons.com (#13470)
@ -3118,7 +3130,7 @@ Core
* [utils] Improve unified_timestamp
* [extractor/generic] Ensure format id is unicode string
* [extractor/common] Return unicode string from _match_id
+ [YoutubeDL] Sanitize more fields (#13313)
+ [HaruhiDL] Sanitize more fields (#13313)
Extractors
+ [xfileshare] Add support for rapidvideo.tv (#13348)
@ -3146,7 +3158,7 @@ Extractors
version 2017.06.05
Core
* [YoutubeDL] Don't emit ANSI escape codes on Windows (#13270)
* [HaruhiDL] Don't emit ANSI escape codes on Windows (#13270)
Extractors
+ [bandcamp:weekly] Add support for bandcamp weekly (#12758)
@ -3262,7 +3274,7 @@ Extractors
version 2017.05.09
Core
* [YoutubeDL] Force --restrict-filenames when no locale is set on all python
* [HaruhiDL] Force --restrict-filenames when no locale is set on all python
versions (#13027)
Extractors
@ -3364,7 +3376,7 @@ version 2017.04.26
Core
* Introduce --keep-fragments for keeping fragments of fragmented download
on disk after download is finished
* [YoutubeDL] Fix output template for missing timestamp (#12796)
* [HaruhiDL] Fix output template for missing timestamp (#12796)
* [socks] Handle cases where credentials are required but missing
* [extractor/common] Improve HLS extraction (#12211)
* Extract m3u8 parsing to separate method
@ -3419,8 +3431,8 @@ Extractors
version 2017.04.16
Core
* [YoutubeDL] Apply expand_path after output template substitution
+ [YoutubeDL] Propagate overridden meta fields to extraction results of type
* [HaruhiDL] Apply expand_path after output template substitution
+ [HaruhiDL] Propagate overridden meta fields to extraction results of type
url (#11163)
Extractors
@ -3508,7 +3520,7 @@ Extractors
version 2017.04.02
Core
* [YoutubeDL] Return early when extraction of url_transparent fails
* [HaruhiDL] Return early when extraction of url_transparent fails
Extractors
* [rai] Fix and improve extraction (#11790)
@ -3573,7 +3585,7 @@ Extractors
version 2017.03.20
Core
+ [YoutubeDL] Allow multiple input URLs to be used with stdout (-) as
+ [HaruhiDL] Allow multiple input URLs to be used with stdout (-) as
output template
+ [adobepass] Detect and output error on authz token extraction (#12472)
@ -3669,7 +3681,7 @@ version 2017.03.02
Core
+ [adobepass] Add support for Charter Spectrum (#11465)
* [YoutubeDL] Don't sanitize identifiers in output template (#12317)
* [HaruhiDL] Don't sanitize identifiers in output template (#12317)
Extractors
* [facebook] Fix extraction (#12323, #12330)
@ -3736,7 +3748,7 @@ version 2017.02.24
Core
* [options] Hide deprecated options from --help
* [options] Deprecate --autonumber-size
+ [YoutubeDL] Add support for string formatting operations in output template
+ [HaruhiDL] Add support for string formatting operations in output template
(#5185, #5748, #6841, #9929, #9966 #9978, #12189)
Extractors
@ -3782,7 +3794,7 @@ Core
+ Add experimental geo restriction bypass mechanism based on faking
X-Forwarded-For HTTP header
+ [utils] Introduce GeoRestrictedError for geo restricted videos
+ [utils] Introduce YoutubeDLError base class for all youtube-dl exceptions
+ [utils] Introduce HaruhiDLError base class for all haruhi-dl exceptions
Extractors
+ [ninecninemedia] Use geo bypass mechanism
@ -4071,7 +4083,7 @@ version 2017.01.16
Core
* [options] Apply custom config to final composite configuration (#11741)
* [YoutubeDL] Improve protocol auto determining (#11720)
* [HaruhiDL] Improve protocol auto determining (#11720)
Extractors
* [xiami] Relax URL regular expressions
@ -4422,7 +4434,7 @@ Extractors
version 2016.10.25
Core
* Running youtube-dl in the background is fixed (#10996, #10706, #955)
* Running haruhi-dl in the background is fixed (#10996, #10706, #955)
Extractors
+ [jamendo] Add support for jamendo.com (#10132, #10736)

LICENSE
View File

@ -1,24 +1,165 @@
This is free and unencumbered software released into the public domain.
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
For more information, please refer to <http://unlicense.org/>
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.

View File

@ -2,8 +2,8 @@ include README.md
include LICENSE
include AUTHORS
include ChangeLog
include youtube-dl.bash-completion
include youtube-dl.fish
include youtube-dl.1
include haruhi-dl.bash-completion
include haruhi-dl.fish
include haruhi-dl.1
recursive-include docs Makefile conf.py *.rst
recursive-include test *

View File

@ -1,7 +1,7 @@
all: youtube-dl README.md CONTRIBUTING.md README.txt youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish supportedsites
all: haruhi-dl haruhi-dl.1 haruhi-dl.bash-completion haruhi-dl.zsh haruhi-dl.fish supportedsites
clean:
rm -rf youtube-dl.1.temp.md youtube-dl.1 youtube-dl.bash-completion README.txt MANIFEST build/ dist/ .coverage cover/ youtube-dl.tar.gz youtube-dl.zsh youtube-dl.fish youtube_dl/extractor/lazy_extractors.py *.dump *.part* *.ytdl *.info.json *.mp4 *.m4a *.flv *.mp3 *.avi *.mkv *.webm *.3gp *.wav *.ape *.swf *.jpg *.png CONTRIBUTING.md.tmp youtube-dl youtube-dl.exe
rm -rf haruhi-dl.1.temp.md haruhi-dl.1 haruhi-dl.bash-completion MANIFEST build/ dist/ .coverage cover/ haruhi-dl.tar.gz haruhi-dl.zsh haruhi-dl.fish haruhi_dl/extractor/lazy_extractors.py *.dump *.part* *.ytdl *.info.json *.mp4 *.m4a *.flv *.mp3 *.avi *.mkv *.webm *.3gp *.wav *.ape *.swf *.jpg *.png CONTRIBUTING.md.tmp haruhi-dl haruhi-dl.exe
find . -name "*.pyc" -delete
find . -name "*.class" -delete
@ -17,23 +17,23 @@ SYSCONFDIR = $(shell if [ $(PREFIX) = /usr -o $(PREFIX) = /usr/local ]; then ech
# set markdown input format to "markdown-smart" for pandoc version 2 and to "markdown" for pandoc prior to version 2
MARKDOWN = $(shell if [ `pandoc -v | head -n1 | cut -d" " -f2 | head -c1` = "2" ]; then echo markdown-smart; else echo markdown; fi)
install: youtube-dl youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish
install: haruhi-dl haruhi-dl.1 haruhi-dl.bash-completion haruhi-dl.zsh haruhi-dl.fish
install -d $(DESTDIR)$(BINDIR)
install -m 755 youtube-dl $(DESTDIR)$(BINDIR)
install -m 755 haruhi-dl $(DESTDIR)$(BINDIR)
install -d $(DESTDIR)$(MANDIR)/man1
install -m 644 youtube-dl.1 $(DESTDIR)$(MANDIR)/man1
install -m 644 haruhi-dl.1 $(DESTDIR)$(MANDIR)/man1
install -d $(DESTDIR)$(SYSCONFDIR)/bash_completion.d
install -m 644 youtube-dl.bash-completion $(DESTDIR)$(SYSCONFDIR)/bash_completion.d/youtube-dl
install -m 644 haruhi-dl.bash-completion $(DESTDIR)$(SYSCONFDIR)/bash_completion.d/haruhi-dl
install -d $(DESTDIR)$(SHAREDIR)/zsh/site-functions
install -m 644 youtube-dl.zsh $(DESTDIR)$(SHAREDIR)/zsh/site-functions/_youtube-dl
install -m 644 haruhi-dl.zsh $(DESTDIR)$(SHAREDIR)/zsh/site-functions/_haruhi-dl
install -d $(DESTDIR)$(SYSCONFDIR)/fish/completions
install -m 644 youtube-dl.fish $(DESTDIR)$(SYSCONFDIR)/fish/completions/youtube-dl.fish
install -m 644 haruhi-dl.fish $(DESTDIR)$(SYSCONFDIR)/fish/completions/haruhi-dl.fish
codetest:
flake8 .
test:
#nosetests --with-coverage --cover-package=youtube_dl --cover-html --verbose --processes 4 test
#nosetests --with-coverage --cover-package=haruhi_dl --cover-html --verbose --processes 4 test
nosetests --verbose test
$(MAKE) codetest
@ -51,34 +51,34 @@ offlinetest: codetest
--exclude test_youtube_lists.py \
--exclude test_youtube_signature.py
tar: youtube-dl.tar.gz
tar: haruhi-dl.tar.gz
.PHONY: all clean install test tar bash-completion pypi-files zsh-completion fish-completion ot offlinetest codetest supportedsites
pypi-files: youtube-dl.bash-completion README.txt youtube-dl.1 youtube-dl.fish
pypi-files: haruhi-dl.bash-completion haruhi-dl.1 haruhi-dl.fish
youtube-dl: youtube_dl/*.py youtube_dl/*/*.py
haruhi-dl: haruhi_dl/*.py haruhi_dl/*/*.py
mkdir -p zip
for d in youtube_dl youtube_dl/downloader youtube_dl/extractor youtube_dl/postprocessor ; do \
for d in haruhi_dl haruhi_dl/downloader haruhi_dl/extractor haruhi_dl/postprocessor ; do \
mkdir -p zip/$$d ;\
cp -pPR $$d/*.py zip/$$d/ ;\
done
touch -t 200001010101 zip/youtube_dl/*.py zip/youtube_dl/*/*.py
mv zip/youtube_dl/__main__.py zip/
cd zip ; zip -q ../youtube-dl youtube_dl/*.py youtube_dl/*/*.py __main__.py
touch -t 200001010101 zip/haruhi_dl/*.py zip/haruhi_dl/*/*.py
mv zip/haruhi_dl/__main__.py zip/
cd zip ; zip -q ../haruhi-dl haruhi_dl/*.py haruhi_dl/*/*.py __main__.py
rm -rf zip
echo '#!$(PYTHON)' > youtube-dl
cat youtube-dl.zip >> youtube-dl
rm youtube-dl.zip
chmod a+x youtube-dl
echo '#!$(PYTHON)' > haruhi-dl
cat haruhi-dl.zip >> haruhi-dl
rm haruhi-dl.zip
chmod a+x haruhi-dl
README.md: youtube_dl/*.py youtube_dl/*/*.py
COLUMNS=80 $(PYTHON) youtube_dl/__main__.py --help | $(PYTHON) devscripts/make_readme.py
#README.md: haruhi_dl/*.py haruhi_dl/*/*.py
# COLUMNS=80 $(PYTHON) haruhi_dl/__main__.py --help | $(PYTHON) devscripts/make_readme.py
CONTRIBUTING.md: README.md
$(PYTHON) devscripts/make_contributing.py README.md CONTRIBUTING.md
#CONTRIBUTING.md: README.md
# $(PYTHON) devscripts/make_contributing.py README.md CONTRIBUTING.md
issuetemplates: devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.md .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md .github/ISSUE_TEMPLATE_tmpl/4_bug_report.md .github/ISSUE_TEMPLATE_tmpl/5_feature_request.md youtube_dl/version.py
issuetemplates: devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.md .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md .github/ISSUE_TEMPLATE_tmpl/4_bug_report.md .github/ISSUE_TEMPLATE_tmpl/5_feature_request.md haruhi_dl/version.py
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.md .github/ISSUE_TEMPLATE/1_broken_site.md
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md .github/ISSUE_TEMPLATE/2_site_support_request.md
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md .github/ISSUE_TEMPLATE/3_site_feature_request.md
@ -88,37 +88,37 @@ issuetemplates: devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_
supportedsites:
$(PYTHON) devscripts/make_supportedsites.py docs/supportedsites.md
README.txt: README.md
pandoc -f $(MARKDOWN) -t plain README.md -o README.txt
#README.txt: README.md
# pandoc -f $(MARKDOWN) -t plain README.md -o README.txt
youtube-dl.1: README.md
$(PYTHON) devscripts/prepare_manpage.py youtube-dl.1.temp.md
pandoc -s -f $(MARKDOWN) -t man youtube-dl.1.temp.md -o youtube-dl.1
rm -f youtube-dl.1.temp.md
haruhi-dl.1: README.md
$(PYTHON) devscripts/prepare_manpage.py haruhi-dl.1.temp.md
pandoc -s -f $(MARKDOWN) -t man haruhi-dl.1.temp.md -o haruhi-dl.1
rm -f haruhi-dl.1.temp.md
youtube-dl.bash-completion: youtube_dl/*.py youtube_dl/*/*.py devscripts/bash-completion.in
haruhi-dl.bash-completion: haruhi_dl/*.py haruhi_dl/*/*.py devscripts/bash-completion.in
$(PYTHON) devscripts/bash-completion.py
bash-completion: youtube-dl.bash-completion
bash-completion: haruhi-dl.bash-completion
youtube-dl.zsh: youtube_dl/*.py youtube_dl/*/*.py devscripts/zsh-completion.in
haruhi-dl.zsh: haruhi_dl/*.py haruhi_dl/*/*.py devscripts/zsh-completion.in
$(PYTHON) devscripts/zsh-completion.py
zsh-completion: youtube-dl.zsh
zsh-completion: haruhi-dl.zsh
youtube-dl.fish: youtube_dl/*.py youtube_dl/*/*.py devscripts/fish-completion.in
haruhi-dl.fish: haruhi_dl/*.py haruhi_dl/*/*.py devscripts/fish-completion.in
$(PYTHON) devscripts/fish-completion.py
fish-completion: youtube-dl.fish
fish-completion: haruhi-dl.fish
lazy-extractors: youtube_dl/extractor/lazy_extractors.py
lazy-extractors: haruhi_dl/extractor/lazy_extractors.py
_EXTRACTOR_FILES = $(shell find youtube_dl/extractor -iname '*.py' -and -not -iname 'lazy_extractors.py')
youtube_dl/extractor/lazy_extractors.py: devscripts/make_lazy_extractors.py devscripts/lazy_load_template.py $(_EXTRACTOR_FILES)
_EXTRACTOR_FILES = $(shell find haruhi_dl/extractor -iname '*.py' -and -not -iname 'lazy_extractors.py')
haruhi_dl/extractor/lazy_extractors.py: devscripts/make_lazy_extractors.py devscripts/lazy_load_template.py $(_EXTRACTOR_FILES)
$(PYTHON) devscripts/make_lazy_extractors.py $@
youtube-dl.tar.gz: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish ChangeLog AUTHORS
@tar -czf youtube-dl.tar.gz --transform "s|^|youtube-dl/|" --owner 0 --group 0 \
haruhi-dl.tar.gz: haruhi-dl haruhi-dl.1 haruhi-dl.bash-completion haruhi-dl.zsh haruhi-dl.fish ChangeLog AUTHORS
@tar -czf haruhi-dl.tar.gz --transform "s|^|haruhi-dl/|" --owner 0 --group 0 \
--exclude '*.DS_Store' \
--exclude '*.kate-swp' \
--exclude '*.pyc' \
@ -128,8 +128,8 @@ youtube-dl.tar.gz: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash-
--exclude '.git' \
--exclude 'docs/_build' \
-- \
bin devscripts test youtube_dl docs \
ChangeLog AUTHORS LICENSE README.md README.txt \
Makefile MANIFEST.in youtube-dl.1 youtube-dl.bash-completion \
youtube-dl.zsh youtube-dl.fish setup.py setup.cfg \
youtube-dl
bin devscripts test haruhi_dl docs \
ChangeLog AUTHORS LICENSE \
Makefile MANIFEST.in haruhi-dl.1 haruhi-dl.bash-completion \
haruhi-dl.zsh haruhi-dl.fish setup.py setup.cfg \
haruhi-dl

README.md

File diff suppressed because it is too large

View File

@ -1,6 +1,6 @@
#!/usr/bin/env python
import youtube_dl
import haruhi_dl
if __name__ == '__main__':
youtube_dl.main()
haruhi_dl.main()

View File

@ -1,4 +1,4 @@
__youtube_dl()
__haruhi_dl()
{
local cur prev opts fileopts diropts keywords
COMPREPLY=()
@ -26,4 +26,4 @@ __youtube_dl()
fi
}
complete -F __youtube_dl youtube-dl
complete -F __haruhi_dl haruhi-dl

View File

@ -6,9 +6,9 @@ from os.path import dirname as dirn
import sys
sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
import youtube_dl
import haruhi_dl
BASH_COMPLETION_FILE = "youtube-dl.bash-completion"
BASH_COMPLETION_FILE = "haruhi-dl.bash-completion"
BASH_COMPLETION_TEMPLATE = "devscripts/bash-completion.in"
@ -26,5 +26,5 @@ def build_completion(opt_parser):
f.write(filled_template)
parser = youtube_dl.parseOpts()[0]
parser = haruhi_dl.parseOpts()[0]
build_completion(parser)

View File

@ -12,7 +12,7 @@ import traceback
import os.path
sys.path.insert(0, os.path.dirname(os.path.dirname((os.path.abspath(__file__)))))
from youtube_dl.compat import (
from haruhi_dl.compat import (
compat_input,
compat_http_server,
compat_str,
@ -321,16 +321,16 @@ class GITBuilder(GITInfoBuilder):
super(GITBuilder, self).build()
class YoutubeDLBuilder(object):
class HaruhiDLBuilder(object):
authorizedUsers = ['fraca7', 'phihag', 'rg3', 'FiloSottile', 'ytdl-org']
def __init__(self, **kwargs):
if self.repoName != 'youtube-dl':
if self.repoName != 'haruhi-dl':
raise BuildError('Invalid repository "%s"' % self.repoName)
if self.user not in self.authorizedUsers:
raise HTTPError('Unauthorized user "%s"' % self.user, 401)
super(YoutubeDLBuilder, self).__init__(**kwargs)
super(HaruhiDLBuilder, self).__init__(**kwargs)
def build(self):
try:
@ -341,7 +341,7 @@ class YoutubeDLBuilder(object):
except subprocess.CalledProcessError as e:
raise BuildError(e.output)
super(YoutubeDLBuilder, self).build()
super(HaruhiDLBuilder, self).build()
class DownloadBuilder(object):
@ -396,7 +396,7 @@ class Null(object):
pass
class Builder(PythonBuilder, GITBuilder, YoutubeDLBuilder, DownloadBuilder, CleanupTempDir, Null):
class Builder(PythonBuilder, GITBuilder, HaruhiDLBuilder, DownloadBuilder, CleanupTempDir, Null):
pass


@ -15,8 +15,8 @@ import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import gettestcases
from youtube_dl.utils import compat_urllib_parse_urlparse
from youtube_dl.utils import compat_urllib_request
from haruhi_dl.utils import compat_urllib_parse_urlparse
from haruhi_dl.utils import compat_urllib_request
if len(sys.argv) > 1:
METHOD = 'LIST'


@ -12,21 +12,21 @@ import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from youtube_dl.compat import (
from haruhi_dl.compat import (
compat_basestring,
compat_getpass,
compat_print,
compat_urllib_request,
)
from youtube_dl.utils import (
from haruhi_dl.utils import (
make_HTTPS_handler,
sanitized_Request,
)
class GitHubReleaser(object):
_API_URL = 'https://api.github.com/repos/ytdl-org/youtube-dl/releases'
_UPLOADS_URL = 'https://uploads.github.com/repos/ytdl-org/youtube-dl/releases/%s/assets?name=%s'
_API_URL = 'https://api.github.com/repos/ytdl-org/haruhi-dl/releases'
_UPLOADS_URL = 'https://uploads.github.com/repos/ytdl-org/haruhi-dl/releases/%s/assets?name=%s'
_NETRC_MACHINE = 'github.com'
def __init__(self, debuglevel=0):
@ -98,7 +98,7 @@ def main():
releaser = GitHubReleaser()
new_release = releaser.create_release(
version, name='youtube-dl %s' % version, body=body)
version, name='haruhi-dl %s' % version, body=body)
release_id = new_release['id']
for asset in os.listdir(build_path):


@ -2,4 +2,4 @@
{{commands}}
complete --command youtube-dl --arguments ":ytfavorites :ytrecommended :ytsubscriptions :ytwatchlater :ythistory"
complete --command haruhi-dl --arguments ":ytfavorites :ytrecommended :ytsubscriptions :ytwatchlater :ythistory"


@ -7,10 +7,10 @@ from os.path import dirname as dirn
import sys
sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
import youtube_dl
from youtube_dl.utils import shell_quote
import haruhi_dl
from haruhi_dl.utils import shell_quote
FISH_COMPLETION_FILE = 'youtube-dl.fish'
FISH_COMPLETION_FILE = 'haruhi-dl.fish'
FISH_COMPLETION_TEMPLATE = 'devscripts/fish-completion.in'
EXTRA_ARGS = {
@ -30,7 +30,7 @@ def build_completion(opt_parser):
for group in opt_parser.option_groups:
for option in group.option_list:
long_option = option.get_opt_string().strip('-')
complete_cmd = ['complete', '--command', 'youtube-dl', '--long-option', long_option]
complete_cmd = ['complete', '--command', 'haruhi-dl', '--long-option', long_option]
if option._short_opts:
complete_cmd += ['--short-option', option._short_opts[0].strip('-')]
if option.help != optparse.SUPPRESS_HELP:
@ -45,5 +45,5 @@ def build_completion(opt_parser):
f.write(filled_template)
parser = youtube_dl.parseOpts()[0]
parser = haruhi_dl.parseOpts()[0]
build_completion(parser)
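For the fish generator, the interesting part is how each option becomes a complete command and how the result lands in the {{commands}} placeholder visible in devscripts/fish-completion.in above. A rough sketch of that assembly step (EXTRA_ARGS handling omitted), assuming shell_quote joins and escapes an argument list:

    import optparse

    from haruhi_dl.utils import shell_quote  # same helper the script imports above

    def fill_template(template, opt_parser):
        commands = []
        for group in opt_parser.option_groups:
            for option in group.option_list:
                long_option = option.get_opt_string().strip('-')
                cmd = ['complete', '--command', 'haruhi-dl', '--long-option', long_option]
                if option._short_opts:
                    cmd += ['--short-option', option._short_opts[0].strip('-')]
                if option.help != optparse.SUPPRESS_HELP:
                    cmd += ['--description', option.help]
                commands.append(shell_quote(cmd))
        # '{{commands}}' is the placeholder shown in fish-completion.in above
        return template.replace('{{commands}}', '\n'.join(commands))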


@ -7,8 +7,8 @@ import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from youtube_dl.utils import intlist_to_bytes
from youtube_dl.aes import aes_encrypt, key_expansion
from haruhi_dl.utils import intlist_to_bytes
from haruhi_dl.aes import aes_encrypt, key_expansion
secret_msg = b'Secret message goes here'


@ -22,9 +22,9 @@ if 'signature' in versions_info:
new_version = {}
filenames = {
'bin': 'youtube-dl',
'exe': 'youtube-dl.exe',
'tar': 'youtube-dl-%s.tar.gz' % version}
'bin': 'haruhi-dl',
'exe': 'haruhi-dl.exe',
'tar': 'haruhi-dl-%s.tar.gz' % version}
build_dir = os.path.join('..', '..', 'build', version)
for key, filename in filenames.items():
url = 'https://yt-dl.org/downloads/%s/%s' % (version, filename)


@ -10,25 +10,25 @@ import textwrap
atom_template = textwrap.dedent("""\
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<link rel="self" href="http://ytdl-org.github.io/youtube-dl/update/releases.atom" />
<title>youtube-dl releases</title>
<id>https://yt-dl.org/feed/youtube-dl-updates-feed</id>
<link rel="self" href="http://ytdl-org.github.io/haruhi-dl/update/releases.atom" />
<title>haruhi-dl releases</title>
<id>https://yt-dl.org/feed/haruhi-dl-updates-feed</id>
<updated>@TIMESTAMP@</updated>
@ENTRIES@
</feed>""")
entry_template = textwrap.dedent("""
<entry>
<id>https://yt-dl.org/feed/youtube-dl-updates-feed/youtube-dl-@VERSION@</id>
<id>https://yt-dl.org/feed/haruhi-dl-updates-feed/haruhi-dl-@VERSION@</id>
<title>New version @VERSION@</title>
<link href="http://ytdl-org.github.io/youtube-dl" />
<link href="http://ytdl-org.github.io/haruhi-dl" />
<content type="xhtml">
<div xmlns="http://www.w3.org/1999/xhtml">
Downloads available at <a href="https://yt-dl.org/downloads/@VERSION@/">https://yt-dl.org/downloads/@VERSION@/</a>
</div>
</content>
<author>
<name>The youtube-dl maintainers</name>
<name>The haruhi-dl maintainers</name>
</author>
<updated>@TIMESTAMP@</updated>
</entry>
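The feed generator fills these templates by plain string substitution on the @VERSION@, @TIMESTAMP@ and @ENTRIES@ markers. A hypothetical rendering helper (not taken from the repository) showing the idea, using the atom_template and entry_template defined above:

    import datetime

    def render_feed(atom_template, entry_template, versions):
        now = datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')
        entries = ''.join(
            entry_template.replace('@VERSION@', v).replace('@TIMESTAMP@', now)
            for v in versions)
        return (atom_template
                .replace('@ENTRIES@', entries)
                .replace('@TIMESTAMP@', now))

    # e.g. render_feed(atom_template, entry_template, ['2020.11.01'])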


@ -5,10 +5,10 @@ import sys
import os
import textwrap
# We must be able to import youtube_dl
# We must be able to import haruhi_dl
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
import youtube_dl
import haruhi_dl
def main():
@ -16,7 +16,7 @@ def main():
template = tmplf.read()
ie_htmls = []
for ie in youtube_dl.list_extractors(age_limit=None):
for ie in haruhi_dl.list_extractors(age_limit=None):
ie_html = '<b>{}</b>'.format(ie.IE_NAME)
ie_desc = getattr(ie, 'IE_DESC', None)
if ie_desc is False:


@ -16,9 +16,9 @@ def main():
with io.open(infile, encoding='utf-8') as inf:
issue_template_tmpl = inf.read()
# Get the version from youtube_dl/version.py without importing the package
exec(compile(open('youtube_dl/version.py').read(),
'youtube_dl/version.py', 'exec'))
# Get the version from haruhi_dl/version.py without importing the package
exec(compile(open('haruhi_dl/version.py').read(),
'haruhi_dl/version.py', 'exec'))
out = issue_template_tmpl % {'version': locals()['__version__']}
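The exec/compile pair above is a standard trick for reading __version__ without importing the package (which would pull in all the extractors). In isolation, and with an explicit namespace instead of locals(), the same idea looks like this, assuming haruhi_dl/version.py only assigns __version__:

    # haruhi_dl/version.py is assumed to contain a single assignment such as
    # __version__ = '2020.11.01'
    namespace = {}
    exec(compile(open('haruhi_dl/version.py').read(),
                 'haruhi_dl/version.py', 'exec'), namespace)
    print(namespace['__version__'])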


@ -14,8 +14,8 @@ lazy_extractors_filename = sys.argv[1]
if os.path.exists(lazy_extractors_filename):
os.remove(lazy_extractors_filename)
from youtube_dl.extractor import _ALL_CLASSES
from youtube_dl.extractor.common import InfoExtractor, SearchInfoExtractor
from haruhi_dl.extractor import _ALL_CLASSES
from haruhi_dl.extractor.common import InfoExtractor, SearchInfoExtractor
with open('devscripts/lazy_load_template.py', 'rt') as f:
module_template = f.read()


@ -23,4 +23,4 @@ options = '# OPTIONS\n' + options + '\n'
with io.open(README_FILE, 'w', encoding='utf-8') as f:
f.write(header)
f.write(options)
f.write(footer)
f.write(footer)


@ -7,10 +7,10 @@ import os
import sys
# Import youtube_dl
# Import haruhi_dl
ROOT_DIR = os.path.join(os.path.dirname(__file__), '..')
sys.path.insert(0, ROOT_DIR)
import youtube_dl
import haruhi_dl
def main():
@ -33,7 +33,7 @@ def main():
ie_md += ' (Currently broken)'
yield ie_md
ies = sorted(youtube_dl.gen_extractors(), key=lambda i: i.IE_NAME.lower())
ies = sorted(haruhi_dl.gen_extractors(), key=lambda i: i.IE_NAME.lower())
out = '# Supported sites\n' + ''.join(
' - ' + md + '\n'
for md in gen_ies_md(ies))


@ -16,7 +16,7 @@ youtube\-dl \- download videos from youtube.com or other video platforms
# SYNOPSIS
**youtube-dl** \[OPTIONS\] URL [URL...]
**haruhi-dl** \[OPTIONS\] URL [URL...]
'''
@ -33,7 +33,7 @@ def main():
readme = f.read()
readme = re.sub(r'(?s)^.*?(?=# DESCRIPTION)', '', readme)
readme = re.sub(r'\s+youtube-dl \[OPTIONS\] URL \[URL\.\.\.\]', '', readme)
readme = re.sub(r'\s+haruhi-dl \[OPTIONS\] URL \[URL\.\.\.\]', '', readme)
readme = PREFIX + readme
readme = filter_options(readme)


@ -53,8 +53,8 @@ fi
if [ ! -z "`git tag | grep "$version"`" ]; then echo 'ERROR: version already present'; exit 1; fi
if [ ! -z "`git status --porcelain | grep -v CHANGELOG`" ]; then echo 'ERROR: the working directory is not clean; commit or stash changes'; exit 1; fi
useless_files=$(find youtube_dl -type f -not -name '*.py')
if [ ! -z "$useless_files" ]; then echo "ERROR: Non-.py files in youtube_dl: $useless_files"; exit 1; fi
useless_files=$(find haruhi_dl -type f -not -name '*.py')
if [ ! -z "$useless_files" ]; then echo "ERROR: Non-.py files in haruhi_dl: $useless_files"; exit 1; fi
if [ ! -f "updates_key.pem" ]; then echo 'ERROR: updates_key.pem missing'; exit 1; fi
if ! type pandoc >/dev/null 2>/dev/null; then echo 'ERROR: pandoc is missing'; exit 1; fi
if ! python3 -c 'import rsa' 2>/dev/null; then echo 'ERROR: python3-rsa is missing'; exit 1; fi
@ -68,18 +68,18 @@ make clean
if $skip_tests ; then
echo 'SKIPPING TESTS'
else
nosetests --verbose --with-coverage --cover-package=youtube_dl --cover-html test --stop || exit 1
nosetests --verbose --with-coverage --cover-package=haruhi_dl --cover-html test --stop || exit 1
fi
/bin/echo -e "\n### Changing version in version.py..."
sed -i "s/__version__ = '.*'/__version__ = '$version'/" youtube_dl/version.py
sed -i "s/__version__ = '.*'/__version__ = '$version'/" haruhi_dl/version.py
/bin/echo -e "\n### Changing version in ChangeLog..."
sed -i "s/<unreleased>/$version/" ChangeLog
/bin/echo -e "\n### Committing documentation, templates and youtube_dl/version.py..."
/bin/echo -e "\n### Committing documentation, templates and haruhi_dl/version.py..."
make README.md CONTRIBUTING.md issuetemplates supportedsites
git add README.md CONTRIBUTING.md .github/ISSUE_TEMPLATE/1_broken_site.md .github/ISSUE_TEMPLATE/2_site_support_request.md .github/ISSUE_TEMPLATE/3_site_feature_request.md .github/ISSUE_TEMPLATE/4_bug_report.md .github/ISSUE_TEMPLATE/5_feature_request.md .github/ISSUE_TEMPLATE/6_question.md docs/supportedsites.md youtube_dl/version.py ChangeLog
git add README.md CONTRIBUTING.md .github/ISSUE_TEMPLATE/1_broken_site.md .github/ISSUE_TEMPLATE/2_site_support_request.md .github/ISSUE_TEMPLATE/3_site_feature_request.md .github/ISSUE_TEMPLATE/4_bug_report.md .github/ISSUE_TEMPLATE/5_feature_request.md .github/ISSUE_TEMPLATE/6_question.md docs/supportedsites.md haruhi_dl/version.py ChangeLog
git commit $gpg_sign_commits -m "release $version"
/bin/echo -e "\n### Now tagging, signing and pushing..."
@ -94,13 +94,13 @@ git push origin "$version"
/bin/echo -e "\n### OK, now it is time to build the binaries..."
REV=$(git rev-parse HEAD)
make youtube-dl youtube-dl.tar.gz
make haruhi-dl haruhi-dl.tar.gz
read -p "VM running? (y/n) " -n 1
wget "http://$buildserver/build/ytdl-org/youtube-dl/youtube-dl.exe?rev=$REV" -O youtube-dl.exe
wget "http://$buildserver/build/ytdl-org/haruhi-dl/haruhi-dl.exe?rev=$REV" -O haruhi-dl.exe
mkdir -p "build/$version"
mv youtube-dl youtube-dl.exe "build/$version"
mv youtube-dl.tar.gz "build/$version/youtube-dl-$version.tar.gz"
RELEASE_FILES="youtube-dl youtube-dl.exe youtube-dl-$version.tar.gz"
mv haruhi-dl haruhi-dl.exe "build/$version"
mv haruhi-dl.tar.gz "build/$version/haruhi-dl-$version.tar.gz"
RELEASE_FILES="haruhi-dl haruhi-dl.exe haruhi-dl-$version.tar.gz"
(cd build/$version/ && md5sum $RELEASE_FILES > MD5SUMS)
(cd build/$version/ && sha1sum $RELEASE_FILES > SHA1SUMS)
(cd build/$version/ && sha256sum $RELEASE_FILES > SHA2-256SUMS)


@ -6,7 +6,7 @@ DOWNLOAD_TESTS="age_restriction|download|iqiyi_sdk_interpreter|socks|subtitles|w
test_set=""
multiprocess_args=""
case "$YTDL_TEST_SET" in
case "$HDL_TEST_SET" in
core)
test_set="-I test_($DOWNLOAD_TESTS)\.py"
;;
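The test driver now keys off HDL_TEST_SET, matching the CI variable rename. A quick way to exercise it outside CI, sketched in Python (assuming a POSIX shell and a repository checkout as the working directory):

    import os
    import subprocess

    # 'core' ignores the download tests (the -I pattern above); 'download' selects them
    env = dict(os.environ, HDL_TEST_SET='core')
    subprocess.check_call(['bash', './devscripts/run_tests.sh'], env=env)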


@ -9,11 +9,11 @@ import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from youtube_dl.compat import (
from haruhi_dl.compat import (
compat_print,
compat_urllib_request,
)
from youtube_dl.utils import format_bytes
from haruhi_dl.utils import format_bytes
def format_size(bytes):
@ -24,7 +24,7 @@ total_bytes = 0
for page in itertools.count(1):
releases = json.loads(compat_urllib_request.urlopen(
'https://api.github.com/repos/ytdl-org/youtube-dl/releases?page=%s' % page
'https://api.github.com/repos/ytdl-org/haruhi-dl/releases?page=%s' % page
).read().decode('utf-8'))
if not releases:
@ -36,9 +36,9 @@ for page in itertools.count(1):
asset_name = asset['name']
total_bytes += asset['download_count'] * asset['size']
if all(not re.match(p, asset_name) for p in (
r'^youtube-dl$',
r'^youtube-dl-\d{4}\.\d{2}\.\d{2}(?:\.\d+)?\.tar\.gz$',
r'^youtube-dl\.exe$')):
r'^haruhi-dl$',
r'^haruhi-dl-\d{4}\.\d{2}\.\d{2}(?:\.\d+)?\.tar\.gz$',
r'^haruhi-dl\.exe$')):
continue
compat_print(
' %s size: %s downloads: %d'
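The statistics script only counts the three canonical release artifacts; any other asset attached to a release is skipped. A small check of the renamed patterns against illustrative asset names (the sample names are made up for the demonstration):

    import re

    patterns = (
        r'^haruhi-dl$',
        r'^haruhi-dl-\d{4}\.\d{2}\.\d{2}(?:\.\d+)?\.tar\.gz$',
        r'^haruhi-dl\.exe$',
    )

    for name in ('haruhi-dl', 'haruhi-dl.exe',
                 'haruhi-dl-2020.11.01.tar.gz', 'haruhi-dl.sig'):
        counted = any(re.match(p, name) for p in patterns)
        print('%-30s %s' % (name, 'counted' if counted else 'skipped'))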


@ -1,6 +1,6 @@
#compdef youtube-dl
#compdef haruhi-dl
__youtube_dl() {
__haruhi_dl() {
local curcontext="$curcontext" fileopts diropts cur prev
typeset -A opt_args
fileopts="{{fileopts}}"
@ -25,4 +25,4 @@ __youtube_dl() {
esac
}
__youtube_dl
__haruhi_dl


@ -6,9 +6,9 @@ from os.path import dirname as dirn
import sys
sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
import youtube_dl
import haruhi_dl
ZSH_COMPLETION_FILE = "youtube-dl.zsh"
ZSH_COMPLETION_FILE = "haruhi-dl.zsh"
ZSH_COMPLETION_TEMPLATE = "devscripts/zsh-completion.in"
@ -45,5 +45,5 @@ def build_completion(opt_parser):
f.write(template)
parser = youtube_dl.parseOpts()[0]
parser = haruhi_dl.parseOpts()[0]
build_completion(parser)


@ -85,17 +85,17 @@ qthelp:
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/youtube-dl.qhcp"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/haruhi-dl.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/youtube-dl.qhc"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/haruhi-dl.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/youtube-dl"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/youtube-dl"
@echo "# mkdir -p $$HOME/.local/share/devhelp/haruhi-dl"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/haruhi-dl"
@echo "# devhelp"
epub:


@ -1,6 +1,6 @@
# coding: utf-8
#
# youtube-dl documentation build configuration file, created by
# haruhi-dl documentation build configuration file, created by
# sphinx-quickstart on Fri Mar 14 21:05:43 2014.
#
# This file is execfile()d with the current directory set to its
@ -14,7 +14,7 @@
import sys
import os
# Allows to import youtube_dl
# Allows to import haruhi_dl
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# -- General configuration ------------------------------------------------
@ -36,7 +36,7 @@ source_suffix = '.rst'
master_doc = 'index'
# General information about the project.
project = u'youtube-dl'
project = u'haruhi-dl'
copyright = u'2014, Ricardo Garcia Gonzalez'
# The version info for the project you're documenting, acts as replacement for
@ -44,7 +44,7 @@ copyright = u'2014, Ricardo Garcia Gonzalez'
# built documents.
#
# The short X.Y version.
from youtube_dl.version import __version__
from haruhi_dl.version import __version__
version = __version__
# The full version, including alpha/beta/rc tags.
release = version
@ -68,4 +68,4 @@ html_theme = 'default'
html_static_path = ['_static']
# Output file base name for HTML help builder.
htmlhelp_basename = 'youtube-dldoc'
htmlhelp_basename = 'haruhi-dldoc'


@ -1,13 +1,13 @@
Welcome to youtube-dl's documentation!
Welcome to haruhi-dl's documentation!
======================================
*youtube-dl* is a command-line program to download videos from YouTube.com and more sites.
*haruhi-dl* is a command-line program to download videos from YouTube.com and more sites.
It can also be used in Python code.
Developer guide
---------------
This section contains information for using *youtube-dl* from Python programs.
This section contains information for using *haruhi-dl* from Python programs.
.. toctree::
:maxdepth: 2


@ -1,18 +1,18 @@
Using the ``youtube_dl`` module
Using the ``haruhi_dl`` module
===============================
When using the ``youtube_dl`` module, you start by creating an instance of :class:`YoutubeDL` and adding all the available extractors:
When using the ``haruhi_dl`` module, you start by creating an instance of :class:`HaruhiDL` and adding all the available extractors:
.. code-block:: python
>>> from youtube_dl import YoutubeDL
>>> ydl = YoutubeDL()
>>> from haruhi_dl import HaruhiDL
>>> ydl = HaruhiDL()
>>> ydl.add_default_info_extractors()
Extracting video information
----------------------------
You use the :meth:`YoutubeDL.extract_info` method for getting the video information, which returns a dictionary:
You use the :meth:`HaruhiDL.extract_info` method for getting the video information, which returns a dictionary:
.. code-block:: python
@ -22,7 +22,7 @@ You use the :meth:`YoutubeDL.extract_info` method for getting the video informat
[youtube] BaW_jenozKc: Downloading video info webpage
[youtube] BaW_jenozKc: Extracting video information
>>> info['title']
'youtube-dl test video "\'/\\ä↭𝕐'
'haruhi-dl test video "\'/\\ä↭𝕐'
>>> info['height'], info['width']
(720, 1280)
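Tying the renamed module guide together, a minimal end-to-end sketch that mirrors the doctest above (metadata only, no download):

    from haruhi_dl import HaruhiDL

    ydl = HaruhiDL({'quiet': True})   # options dict; see the HaruhiDL docstring
    ydl.add_default_info_extractors()

    info = ydl.extract_info('https://www.youtube.com/watch?v=BaW_jenozKc',
                            download=False)
    print(info['title'], info['height'], info['width'])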


@ -1,4 +1,4 @@
# This allows the youtube-dl command to be installed in ZSH using antigen.
# This allows the haruhi-dl command to be installed in ZSH using antigen.
# Antigen is a bundle manager. It allows you to enhance the functionality of
# your zsh session by installing bundles and themes easily.
@ -6,11 +6,11 @@
# http://antigen.sharats.me/
# https://github.com/zsh-users/antigen
# Install youtube-dl:
# antigen bundle ytdl-org/youtube-dl
# Install haruhi-dl:
# antigen bundle ytdl-org/haruhi-dl
# Bundles installed by antigen are available for use immediately.
# Update youtube-dl (and all other antigen bundles):
# Update haruhi-dl (and all other antigen bundles):
# antigen update
# The antigen command will download the git repository to a folder and then
@ -18,7 +18,7 @@
# code is documented here:
# https://github.com/zsh-users/antigen#notes-on-writing-plugins
# This specific script just aliases youtube-dl to the python script that this
# This specific script just aliases haruhi-dl to the python script that this
# library provides. This requires updating the PYTHONPATH to ensure that the
# full set of code can be located.
alias youtube-dl="PYTHONPATH=$(dirname $0) $(dirname $0)/bin/youtube-dl"
alias haruhi-dl="PYTHONPATH=$(dirname $0) $(dirname $0)/bin/haruhi-dl"


@ -89,10 +89,10 @@ from .utils import (
version_tuple,
write_json_file,
write_string,
YoutubeDLCookieJar,
YoutubeDLCookieProcessor,
YoutubeDLHandler,
YoutubeDLRedirectHandler,
HaruhiDLCookieJar,
HaruhiDLCookieProcessor,
HaruhiDLHandler,
HaruhiDLRedirectHandler,
)
from .cache import Cache
from .extractor import get_info_extractor, gen_extractor_classes, _LAZY_LOADER
@ -113,28 +113,28 @@ if compat_os_name == 'nt':
import ctypes
class YoutubeDL(object):
"""YoutubeDL class.
class HaruhiDL(object):
"""HaruhiDL class.
YoutubeDL objects are the ones responsible of downloading the
HaruhiDL objects are the ones responsible for downloading the
actual video file and writing it to disk if the user has requested
it, among some other tasks. In most cases there should be one per
program. Since, given a video URL, the downloader doesn't know how to
extract all the needed information (a task that InfoExtractors handle), it
has to pass the URL to one of them.
For this, YoutubeDL objects have a method that allows
For this, HaruhiDL objects have a method that allows
InfoExtractors to be registered in a given order. When it is passed
a URL, the YoutubeDL object handles it to the first InfoExtractor it
a URL, the HaruhiDL object hands it to the first InfoExtractor it
finds that reports being able to handle it. The InfoExtractor extracts
all the information about the video or videos the URL refers to, and
YoutubeDL process the extracted information, possibly using a File
HaruhiDL processes the extracted information, possibly using a File
Downloader to download the video.
YoutubeDL objects accept a lot of parameters. In order not to saturate
HaruhiDL objects accept a lot of parameters. In order not to saturate
the object constructor with arguments, it receives a dictionary of
options instead. These options are available through the params
attribute for the InfoExtractors to use. The YoutubeDL also
attribute for the InfoExtractors to use. The HaruhiDL also
registers itself as the downloader in charge for the InfoExtractors
that are added to it, so this is a "mutual registration".
@ -228,7 +228,7 @@ class YoutubeDL(object):
playlist items.
postprocessors: A list of dictionaries, each with an entry
* key: The name of the postprocessor. See
youtube_dl/postprocessor/__init__.py for a list.
haruhi_dl/postprocessor/__init__.py for a list.
as well as any further keyword arguments for the
postprocessor.
progress_hooks: A list of functions that get called on download
@ -264,7 +264,7 @@ class YoutubeDL(object):
about it, warn otherwise (default)
source_address: Client-side IP address to bind to.
call_home: Boolean, true iff we are allowed to contact the
youtube-dl servers for debugging.
haruhi-dl servers for debugging.
sleep_interval: Number of seconds to sleep before each download when
used alone or a lower bound of a range for randomized
sleep before each download (minimum possible number
@ -300,8 +300,8 @@ class YoutubeDL(object):
if True, otherwise use ffmpeg/avconv if False, otherwise
use downloader suggested by extractor if None.
The following parameters are not used by YoutubeDL itself, they are used by
the downloader (see youtube_dl/downloader/common.py):
The following parameters are not used by HaruhiDL itself; they are used by
the downloader (see haruhi_dl/downloader/common.py):
nopart, updatetime, buffersize, ratelimit, min_filesize, max_filesize, test,
noresizebuffer, retries, continuedl, noprogress, consoletitle,
xattr_set_filesize, external_downloader_args, hls_use_mpegts,
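To make the parameter descriptions above concrete, a hypothetical construction that exercises a few of them; the postprocessor key 'FFmpegMetadata' is assumed from haruhi_dl/postprocessor/__init__.py rather than shown in this diff:

    from haruhi_dl import HaruhiDL

    def hook(d):
        # progress_hooks receive a status dictionary for each download step
        if d.get('status') == 'finished':
            print('finished:', d.get('filename'))

    ydl_opts = {
        'call_home': False,                 # never contact the haruhi-dl servers
        'sleep_interval': 2,                # seconds to sleep before each download
        'progress_hooks': [hook],
        'postprocessors': [{'key': 'FFmpegMetadata'}],  # key name assumed
    }

    with HaruhiDL(ydl_opts) as ydl:
        ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])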
@ -441,7 +441,7 @@ class YoutubeDL(object):
if re.match(r'^-[0-9A-Za-z_-]{10}$', a)]
if idxs:
correct_argv = (
['youtube-dl']
['haruhi-dl']
+ [a for i, a in enumerate(argv) if i not in idxs]
+ ['--'] + [argv[i] for i in idxs]
)
@ -893,7 +893,7 @@ class YoutubeDL(object):
# url_transparent. In such cases outer metadata (from ie_result)
# should be propagated to inner one (info). For this to happen
# _type of info should be overridden with url_transparent. This
# fixes issue from https://github.com/ytdl-org/youtube-dl/pull/11163.
# fixes issue from https://github.com/ytdl-org/haruhi-dl/pull/11163.
if new_result.get('_type') == 'url':
new_result['_type'] = 'url_transparent'
@ -1610,7 +1610,7 @@ class YoutubeDL(object):
# by extractor are incomplete or not (i.e. whether extractor provides only
# video-only or audio-only formats) for proper formats selection for
# extractors with such incomplete formats (see
# https://github.com/ytdl-org/youtube-dl/pull/5556).
# https://github.com/ytdl-org/haruhi-dl/pull/5556).
# Since formats may be filtered during format selection and may not match
# the original formats the results may be incorrect. Thus original formats
# or pre-calculated metrics should be passed to format selection routines
@ -1618,7 +1618,7 @@ class YoutubeDL(object):
# We will pass a context object containing all necessary additional data
# instead of just formats.
# This fixes incorrect format selection issue (see
# https://github.com/ytdl-org/youtube-dl/issues/10083).
# https://github.com/ytdl-org/haruhi-dl/issues/10083).
incomplete_formats = (
# All formats are video-only or
all(f.get('vcodec') != 'none' and f.get('acodec') == 'none' for f in formats)
@ -1823,7 +1823,7 @@ class YoutubeDL(object):
if sub_info.get('data') is not None:
try:
# Use newline='' to prevent conversion of newline characters
# See https://github.com/ytdl-org/youtube-dl/issues/10268
# See https://github.com/ytdl-org/haruhi-dl/issues/10268
with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8', newline='') as subfile:
subfile.write(sub_info['data'])
except (OSError, IOError):
@ -2242,7 +2242,7 @@ class YoutubeDL(object):
return
if type('') is not compat_str:
# Python 2.6 on SLES11 SP1 (https://github.com/ytdl-org/youtube-dl/issues/3326)
# Python 2.6 on SLES11 SP1 (https://github.com/ytdl-org/haruhi-dl/issues/3326)
self.report_warning(
'Your Python is broken! Update to a newer and supported version')
@ -2256,7 +2256,7 @@ class YoutubeDL(object):
self.get_encoding()))
write_string(encoding_str, encoding=None)
self._write_string('[debug] youtube-dl version ' + __version__ + '\n')
self._write_string('[debug] haruhi-dl version ' + __version__ + '\n')
if _LAZY_LOADER:
self._write_string('[debug] Lazy loading extractors enabled' + '\n')
try:
@ -2324,11 +2324,11 @@ class YoutubeDL(object):
self.cookiejar = compat_cookiejar.CookieJar()
else:
opts_cookiefile = expand_path(opts_cookiefile)
self.cookiejar = YoutubeDLCookieJar(opts_cookiefile)
self.cookiejar = HaruhiDLCookieJar(opts_cookiefile)
if os.access(opts_cookiefile, os.R_OK):
self.cookiejar.load(ignore_discard=True, ignore_expires=True)
cookie_processor = YoutubeDLCookieProcessor(self.cookiejar)
cookie_processor = HaruhiDLCookieProcessor(self.cookiejar)
if opts_proxy is not None:
if opts_proxy == '':
proxies = {}
@ -2336,25 +2336,25 @@ class YoutubeDL(object):
proxies = {'http': opts_proxy, 'https': opts_proxy}
else:
proxies = compat_urllib_request.getproxies()
# Set HTTPS proxy to HTTP one if given (https://github.com/ytdl-org/youtube-dl/issues/805)
# Set HTTPS proxy to HTTP one if given (https://github.com/ytdl-org/haruhi-dl/issues/805)
if 'http' in proxies and 'https' not in proxies:
proxies['https'] = proxies['http']
proxy_handler = PerRequestProxyHandler(proxies)
debuglevel = 1 if self.params.get('debug_printtraffic') else 0
https_handler = make_HTTPS_handler(self.params, debuglevel=debuglevel)
ydlh = YoutubeDLHandler(self.params, debuglevel=debuglevel)
redirect_handler = YoutubeDLRedirectHandler()
ydlh = HaruhiDLHandler(self.params, debuglevel=debuglevel)
redirect_handler = HaruhiDLRedirectHandler()
data_handler = compat_urllib_request_DataHandler()
# When passing our own FileHandler instance, build_opener won't add the
# default FileHandler and allows us to disable the file protocol, which
# can be used for malicious purposes (see
# https://github.com/ytdl-org/youtube-dl/issues/8227)
# https://github.com/ytdl-org/haruhi-dl/issues/8227)
file_handler = compat_urllib_request.FileHandler()
def file_open(*args, **kwargs):
raise compat_urllib_error.URLError('file:// scheme is explicitly disabled in youtube-dl for security reasons')
raise compat_urllib_error.URLError('file:// scheme is explicitly disabled in haruhi-dl for security reasons')
file_handler.file_open = file_open
opener = compat_urllib_request.build_opener(
@ -2362,7 +2362,7 @@ class YoutubeDL(object):
# Delete the default user-agent header, which would otherwise apply in
# cases where our custom HTTP handler doesn't come into play
# (See https://github.com/ytdl-org/youtube-dl/issues/1309 for details)
# (See https://github.com/ytdl-org/haruhi-dl/issues/1309 for details)
opener.addheaders = []
self._opener = opener


@ -42,18 +42,18 @@ from .downloader import (
)
from .extractor import gen_extractors, list_extractors
from .extractor.adobepass import MSO_INFO
from .YoutubeDL import YoutubeDL
from .HaruhiDL import HaruhiDL
def _real_main(argv=None):
# Compatibility fixes for Windows
if sys.platform == 'win32':
# https://github.com/ytdl-org/youtube-dl/issues/820
# https://github.com/ytdl-org/haruhi-dl/issues/820
codecs.register(lambda name: codecs.lookup('utf-8') if name == 'cp65001' else None)
workaround_optparse_bug9161()
setproctitle('youtube-dl')
setproctitle('haruhi-dl')
parser, opts, args = parseOpts(argv)
@ -438,7 +438,7 @@ def _real_main(argv=None):
'usetitle': opts.usetitle if opts.usetitle is True else None,
}
with YoutubeDL(ydl_opts) as ydl:
with HaruhiDL(ydl_opts) as ydl:
# Update version
if opts.update_self:
update_self(ydl.to_screen, opts.verbose, ydl._opener)
@ -455,7 +455,7 @@ def _real_main(argv=None):
ydl.warn_if_short_id(sys.argv[1:] if argv is None else argv)
parser.error(
'You must provide at least one URL.\n'
'Type youtube-dl --help to see a list of all options.')
'Type haruhi-dl --help to see a list of all options.')
try:
if opts.load_info_filename is not None:
@ -480,4 +480,4 @@ def main(argv=None):
sys.exit('\nERROR: Interrupted by user')
__all__ = ['main', 'YoutubeDL', 'gen_extractors', 'list_extractors']
__all__ = ['main', 'HaruhiDL', 'gen_extractors', 'list_extractors']


@ -2,8 +2,8 @@
from __future__ import unicode_literals
# Execute with
# $ python youtube_dl/__main__.py (2.6+)
# $ python -m youtube_dl (2.7+)
# $ python haruhi_dl/__main__.py (2.6+)
# $ python -m haruhi_dl (2.7+)
import sys
@ -13,7 +13,7 @@ if __package__ is None and not hasattr(sys, 'frozen'):
path = os.path.realpath(os.path.abspath(__file__))
sys.path.insert(0, os.path.dirname(os.path.dirname(path)))
import youtube_dl
import haruhi_dl
if __name__ == '__main__':
youtube_dl.main()
haruhi_dl.main()


@ -23,7 +23,7 @@ class Cache(object):
res = self._ydl.params.get('cachedir')
if res is None:
cache_root = compat_getenv('XDG_CACHE_HOME', '~/.cache')
res = os.path.join(cache_root, 'youtube-dl')
res = os.path.join(cache_root, 'haruhi-dl')
return expand_path(res)
def _get_cache_fn(self, section, key, dtype):
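Spelled out with the standard library (compat_getenv and expand_path are assumed to behave like os.environ.get plus user and variable expansion), the default cache directory resolution above amounts to:

    import os

    def default_cache_dir(cachedir=None):
        res = cachedir
        if res is None:
            cache_root = os.environ.get('XDG_CACHE_HOME', '~/.cache')
            res = os.path.join(cache_root, 'haruhi-dl')
        return os.path.expandvars(os.path.expanduser(res))

    print(default_cache_dir())   # typically something like /home/user/.cache/haruhi-dl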


@ -2375,7 +2375,7 @@ except ImportError: # Python 2
# HACK: The following are the correct unquote_to_bytes, unquote and unquote_plus
# implementations from cpython 3.4.3's stdlib. Python 2's version
# is apparently broken (see https://github.com/ytdl-org/youtube-dl/pull/6244)
# is apparently broken (see https://github.com/ytdl-org/haruhi-dl/pull/6244)
def compat_urllib_parse_unquote_to_bytes(string):
"""unquote_to_bytes('abc%20def') -> b'abc def'."""
@ -2850,7 +2850,7 @@ else:
compat_socket_create_connection = socket.create_connection
# Fix https://github.com/ytdl-org/youtube-dl/issues/4223
# Fix https://github.com/ytdl-org/haruhi-dl/issues/4223
# See http://bugs.python.org/issue9161 for what is broken
def workaround_optparse_bug9161():
op = optparse.OptionParser()
@ -2973,9 +2973,9 @@ else:
if platform.python_implementation() == 'PyPy' and sys.pypy_version_info < (5, 4, 0):
# PyPy2 prior to version 5.4.0 expects byte strings as Windows function
# names, see the original PyPy issue [1] and the youtube-dl one [2].
# names, see the original PyPy issue [1] and the haruhi-dl one [2].
# 1. https://bitbucket.org/pypy/pypy/issues/2360/windows-ctypescdll-typeerror-function-name
# 2. https://github.com/ytdl-org/youtube-dl/pull/4392
# 2. https://github.com/ytdl-org/haruhi-dl/pull/4392
def compat_ctypes_WINFUNCTYPE(*args, **kwargs):
real = ctypes.WINFUNCTYPE(*args, **kwargs)


@ -44,7 +44,7 @@ class FileDownloader(object):
test: Download only first bytes to test the downloader.
min_filesize: Skip files smaller than this size
max_filesize: Skip files larger than this size
xattr_set_filesize: Set ytdl.filesize user xattribute with expected size.
xattr_set_filesize: Set hdl.filesize user xattribute with expected size.
external_downloader_args: A list of additional command-line arguments for the
external downloader.
hls_use_mpegts: Use the mpegts container for HLS videos.
@ -192,7 +192,7 @@ class FileDownloader(object):
return filename[:-len('.part')]
return filename
def ytdl_filename(self, filename):
def hdl_filename(self, filename):
return filename + '.ytdl'
def try_rename(self, old_filename, new_filename):
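Note that only the method is renamed here; the bookkeeping suffix it appends is still '.ytdl'. Trivially:

    def hdl_filename(filename):
        # mirrors FileDownloader.hdl_filename() in the hunk above
        return filename + '.ytdl'

    assert hdl_filename('video.mp4') == 'video.mp4.ytdl'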
@ -243,7 +243,7 @@ class FileDownloader(object):
else:
clear_line = ('\r\x1b[K' if sys.stderr.isatty() else '\r')
self.to_screen(clear_line + fullmsg, skip_eol=not is_last_line)
self.to_console_title('youtube-dl ' + msg)
self.to_console_title('haruhi-dl ' + msg)
def report_progress(self, s):
if s['status'] == 'finished':


@ -240,7 +240,7 @@ class FFmpegFD(ExternalFD):
# setting -seekable prevents ffmpeg from guessing if the server
# supports seeking(by adding the header `Range: bytes=0-`), which
# can cause problems in some cases
# https://github.com/ytdl-org/youtube-dl/issues/11800#issuecomment-275037127
# https://github.com/ytdl-org/haruhi-dl/issues/11800#issuecomment-275037127
# http://trac.ffmpeg.org/ticket/6125#comment:10
args += ['-seekable', '1' if seekable else '0']
@ -341,7 +341,7 @@ class FFmpegFD(ExternalFD):
# mp4 file couldn't be played, but if we ask ffmpeg to quit it
# produces a file that is playable (this is mostly useful for live
# streams). Note that Windows is not affected and produces playable
# files (see https://github.com/ytdl-org/youtube-dl/issues/8300).
# files (see https://github.com/ytdl-org/haruhi-dl/issues/8300).
if sys.platform != 'win32':
proc.communicate(b'q')
raise


@ -324,8 +324,8 @@ class F4mFD(FragmentFD):
urlh = self.ydl.urlopen(self._prepare_url(info_dict, man_url))
man_url = urlh.geturl()
# Some manifests may be malformed, e.g. prosiebensat1 generated manifests
# (see https://github.com/ytdl-org/youtube-dl/issues/6215#issuecomment-121704244
# and https://github.com/ytdl-org/youtube-dl/issues/7823)
# (see https://github.com/ytdl-org/haruhi-dl/issues/6215#issuecomment-121704244
# and https://github.com/ytdl-org/haruhi-dl/issues/7823)
manifest = fix_xml_ampersands(urlh.read().decode('utf-8', 'ignore')).strip()
doc = compat_etree_fromstring(manifest)
@ -409,7 +409,7 @@ class F4mFD(FragmentFD):
# In tests, segments may be truncated, and thus
# FlvReader may not be able to parse the whole
# chunk. If so, write the segment as is
# See https://github.com/ytdl-org/youtube-dl/issues/9214
# See https://github.com/ytdl-org/haruhi-dl/issues/9214
dest_stream.write(down_data)
break
raise


@ -32,9 +32,9 @@ class FragmentFD(FileDownloader):
keep_fragments: Keep downloaded fragments on disk after downloading is
finished
For each incomplete fragment download youtube-dl keeps on disk a special
For each incomplete fragment download haruhi-dl keeps on disk a special
bookkeeping file with download state and metadata (in future such files will
be used for any incomplete download handled by youtube-dl). This file is
be used for any incomplete download handled by haruhi-dl). This file is
used to properly handle resuming, check download file consistency and detect
potential errors. The file has a .ytdl extension and represents a standard
JSON file of the following format:
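The docstring's sample of that JSON format is cut off by the hunk boundary. Judging from _read_hdl_file and _write_hdl_file below, which only touch ['downloader']['current_fragment']['index'], the .ytdl file presumably looks roughly like this (reconstructed, not copied from the repository):

    import json

    state = {
        'downloader': {
            'current_fragment': {
                'index': 3,   # fragment index persisted across resumes
            },
        },
    }

    with open('video.mp4.ytdl', 'w') as f:
        json.dump(state, f)

    with open('video.mp4.ytdl') as f:
        print(json.load(f)['downloader']['current_fragment']['index'])   # -> 3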
@ -70,21 +70,21 @@ class FragmentFD(FileDownloader):
self._start_frag_download(ctx)
@staticmethod
def __do_ytdl_file(ctx):
def __do_hdl_file(ctx):
return not ctx['live'] and not ctx['tmpfilename'] == '-'
def _read_ytdl_file(self, ctx):
assert 'ytdl_corrupt' not in ctx
stream, _ = sanitize_open(self.ytdl_filename(ctx['filename']), 'r')
def _read_hdl_file(self, ctx):
assert 'hdl_corrupt' not in ctx
stream, _ = sanitize_open(self.hdl_filename(ctx['filename']), 'r')
try:
ctx['fragment_index'] = json.loads(stream.read())['downloader']['current_fragment']['index']
except Exception:
ctx['ytdl_corrupt'] = True
ctx['hdl_corrupt'] = True
finally:
stream.close()
def _write_ytdl_file(self, ctx):
frag_index_stream, _ = sanitize_open(self.ytdl_filename(ctx['filename']), 'w')
def _write_hdl_file(self, ctx):
frag_index_stream, _ = sanitize_open(self.hdl_filename(ctx['filename']), 'w')
downloader = {
'current_fragment': {
'index': ctx['fragment_index'],
@ -114,8 +114,8 @@ class FragmentFD(FileDownloader):
ctx['dest_stream'].write(frag_content)
ctx['dest_stream'].flush()
finally:
if self.__do_ytdl_file(ctx):
self._write_ytdl_file(ctx)
if self.__do_hdl_file(ctx):
self._write_hdl_file(ctx)
if not self.params.get('keep_fragments', False):
os.remove(encodeFilename(ctx['fragment_filename_sanitized']))
del ctx['fragment_filename_sanitized']
@ -160,10 +160,10 @@ class FragmentFD(FileDownloader):
'fragment_index': 0,
})
if self.__do_ytdl_file(ctx):
if os.path.isfile(encodeFilename(self.ytdl_filename(ctx['filename']))):
self._read_ytdl_file(ctx)
is_corrupt = ctx.get('ytdl_corrupt') is True
if self.__do_hdl_file(ctx):
if os.path.isfile(encodeFilename(self.hdl_filename(ctx['filename']))):
self._read_hdl_file(ctx)
is_corrupt = ctx.get('hdl_corrupt') is True
is_inconsistent = ctx['fragment_index'] > 0 and resume_len == 0
if is_corrupt or is_inconsistent:
message = (
@ -172,11 +172,11 @@ class FragmentFD(FileDownloader):
self.report_warning(
'%s. Restarting from the beginning...' % message)
ctx['fragment_index'] = resume_len = 0
if 'ytdl_corrupt' in ctx:
del ctx['ytdl_corrupt']
self._write_ytdl_file(ctx)
if 'hdl_corrupt' in ctx:
del ctx['hdl_corrupt']
self._write_hdl_file(ctx)
else:
self._write_ytdl_file(ctx)
self._write_hdl_file(ctx)
assert ctx['fragment_index'] == 0
dest_stream, tmpfilename = sanitize_open(tmpfilename, open_mode)
@ -248,10 +248,10 @@ class FragmentFD(FileDownloader):
def _finish_frag_download(self, ctx):
ctx['dest_stream'].close()
if self.__do_ytdl_file(ctx):
ytdl_filename = encodeFilename(self.ytdl_filename(ctx['filename']))
if os.path.isfile(ytdl_filename):
os.remove(ytdl_filename)
if self.__do_hdl_file(ctx):
hdl_filename = encodeFilename(self.hdl_filename(ctx['filename']))
if os.path.isfile(hdl_filename):
os.remove(hdl_filename)
elapsed = time.time() - ctx['started']
if ctx['tmpfilename'] == '-':


@ -152,8 +152,8 @@ class HlsFD(FragmentFD):
except compat_urllib_error.HTTPError as err:
# Unavailable (possibly temporary) fragments may be served.
# First we try to retry then either skip or abort.
# See https://github.com/ytdl-org/youtube-dl/issues/10165,
# https://github.com/ytdl-org/youtube-dl/issues/10448).
# See https://github.com/ytdl-org/haruhi-dl/issues/10165,
# https://github.com/ytdl-org/haruhi-dl/issues/10448).
count += 1
if count <= fragment_retries:
self.report_retry_fragment(err, frag_index, count, fragment_retries)


@ -116,7 +116,7 @@ class HttpFD(FileDownloader):
# to match the value of requested Range HTTP header. This is due to a webservers
# that don't support resuming and serve a whole file with no Content-Range
# set in response despite of requested Range (see
# https://github.com/ytdl-org/youtube-dl/issues/6057#issuecomment-126129799)
# https://github.com/ytdl-org/haruhi-dl/issues/6057#issuecomment-126129799)
if has_range:
content_range = ctx.data.headers.get('Content-Range')
if content_range:
@ -265,7 +265,7 @@ class HttpFD(FileDownloader):
if self.params.get('xattr_set_filesize', False) and data_len is not None:
try:
write_xattr(ctx.tmpfilename, 'user.ytdl.filesize', str(data_len).encode('utf-8'))
write_xattr(ctx.tmpfilename, 'user.hdl.filesize', str(data_len).encode('utf-8'))
except (XAttrUnavailableError, XAttrMetadataError) as err:
self.report_error('unable to set filesize xattr: %s' % str(err))


@ -199,7 +199,7 @@ class AppleTrailersIE(InfoExtractor):
'upload_date': upload_date,
'uploader_id': uploader_id,
'http_headers': {
'User-Agent': 'QuickTime compatible (youtube-dl)',
'User-Agent': 'QuickTime compatible (haruhi-dl)',
},
})


@ -103,7 +103,7 @@ class ArkenaIE(InfoExtractor):
f_url, video_id, mpd_id=kind, fatal=False))
elif kind == 'silverlight':
# TODO: process when ism is supported (see
# https://github.com/ytdl-org/youtube-dl/issues/8118)
# https://github.com/ytdl-org/haruhi-dl/issues/8118)
continue
else:
tbr = float_or_none(f.get('Bitrate'), 1000)


@ -28,12 +28,12 @@ from ..utils import (
class BandcampIE(InfoExtractor):
_VALID_URL = r'https?://[^/]+\.bandcamp\.com/track/(?P<title>[^/?#&]+)'
_TESTS = [{
'url': 'http://youtube-dl.bandcamp.com/track/youtube-dl-test-song',
'url': 'http://haruhi-dl.bandcamp.com/track/haruhi-dl-test-song',
'md5': 'c557841d5e50261777a6585648adf439',
'info_dict': {
'id': '1812978515',
'ext': 'mp3',
'title': "youtube-dl \"'/\\\u00e4\u21ad - youtube-dl test song \"'/\\\u00e4\u21ad",
'title': "haruhi-dl \"'/\\\u00e4\u21ad - haruhi-dl test song \"'/\\\u00e4\u21ad",
'duration': 9.8485,
},
'_skip': 'There is a limit of 200 free downloads / month for the test song'


@ -209,7 +209,7 @@ class BBCCoUkIE(InfoExtractor):
},
'skip': 'Now it\'s really geo-restricted',
}, {
# compact player (https://github.com/ytdl-org/youtube-dl/issues/8147)
# compact player (https://github.com/ytdl-org/haruhi-dl/issues/8147)
'url': 'http://www.bbc.co.uk/programmes/p028bfkf/player',
'info_dict': {
'id': 'p028bfkj',
