
GALLERY-DL.CONF(5) gallery-dl Manual GALLERY-DL.CONF(5)

NAME

gallery-dl.conf - gallery-dl configuration file

DESCRIPTION

gallery-dl will search for configuration files in the following places every time it is started, unless --ignore-config is specified:

/etc/gallery-dl.conf
$HOME/.config/gallery-dl/config.json
$HOME/.gallery-dl.conf

It is also possible to specify additional configuration files with the -c/--config command-line option, or to add further option values with -o/--option as <key>=<value> pairs.

Configuration files are JSON-based and therefore don't allow ordinary comments, but, since unused keys are simply ignored, it is possible to use those as makeshift comments by setting their values to arbitrary strings.

EXAMPLE

{
    "base-directory": "/tmp/",
    "extractor": {
        "pixiv": {
            "directory": ["Pixiv", "Works", "{user[id]}"],
            "filename": "{id}{num}.{extension}",
            "username": "foo",
            "password": "bar"
        },
        "flickr": {
            "_comment": "OAuth keys for account 'foobar'",
            "access-token": "0123456789-0123456789abcdef",
            "access-token-secret": "fedcba9876543210"
        }
    },
    "downloader": {
        "retries": 3,
        "timeout": 2.5
    }
}

EXTRACTOR OPTIONS

extractor.*.filename

string or object


Examples:

"{manga}_c{chapter}_{page:>03}.{extension}"

{
    "extension == 'mp4'": "{id}_video.{extension}",
    "'nature' in title" : "{id}_{title}.{extension}",
    ""                  : "{id}_default.{extension}"
}

A format string to build filenames for downloaded files with.

If this is an object, it must contain Python expressions mapping to the filename format strings to use. These expressions are evaluated in the order in which they are specified on Python 3.6+ and in an undetermined order on Python 3.4 and 3.5.

The available replacement keys depend on the extractor used. A list of keys for a specific one can be acquired by calling *gallery-dl* with the -K/--list-keywords command-line option. For example:

$ gallery-dl -K http://seiga.nicovideo.jp/seiga/im5977527
Keywords for directory names:
-----------------------------
category
  seiga
subcategory
  image

Keywords for filenames:
-----------------------
category
  seiga
extension
  None
image-id
  5977527
subcategory
  image

Note: Even if the value of the extension key is missing or None, it will be filled in later when the file download is starting. This key is therefore always available to provide a valid filename extension.

extractor.*.directory

list of strings or object


Examples:

["{category}", "{manga}", "c{chapter} - {title}"]

{
    "'nature' in content": ["Nature Pictures"],
    "retweet_id != 0"    : ["{category}", "{user[name]}", "Retweets"],
    ""                   : ["{category}", "{user[name]}"]
}

A list of format strings to build target directory paths with.

If this is an object, it must contain Python expressions mapping to the list of format strings to use.

Each individual string in such a list represents a single path segment, which will be joined together and appended to the base-directory to form the complete target directory path.
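For illustration, a minimal sketch of how these two options combine (all values are hypothetical):

```json
{
    "base-directory": "/tmp/",
    "extractor": {
        "directory": ["{category}", "{user[name]}"]
    }
}
```

With a category of "twitter" and a user[name] of "foo", downloads would end up under /tmp/twitter/foo/. The keys actually available depend on the extractor (see -K/--list-keywords).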

extractor.*.base-directory

Path

"./gallery-dl/"

Directory path used as base for all download destinations.

extractor.*.parent-directory

bool

false

Use an extractor's current target directory as base-directory for any spawned child extractors.

extractor.*.parent-metadata

bool or string

false

If true, overwrite any metadata provided by a child extractor with its parent's.

If this is a string, add a parent's metadata to its children's
to a field named after said string. For example with "parent-metadata": "_p_":

{ "id": "child-id", "_p_": {"id": "parent-id"} }

extractor.*.parent-skip

bool

false

Share number of skipped downloads between parent and child extractors.

extractor.*.path-restrict

string or object

"auto"


* "/!? (){}"
* {" ": "_", "/": "-", "|": "-", ":": "-", "*": "+"}

A string of characters to be replaced with the value of
path-replace or an object mapping invalid/unwanted characters to their replacements
for generated path segment names.

Special values:

* "auto": Use characters from "unix" or "windows" depending on the local operating system
* "unix": "/"
* "windows": "\\\\|/<>:\"?*"
* "ascii": "^0-9A-Za-z_."

Note: In a string with 2 or more characters, []^-\ need to be escaped with backslashes, e.g. "\\[\\]"

extractor.*.path-replace

string

"_"

The replacement character(s) for path-restrict

extractor.*.path-remove

string

"\u0000-\u001f\u007f" (ASCII control characters)

Set of characters to remove from generated path names.

Note: In a string with 2 or more characters, []^-\ need to be escaped with backslashes, e.g. "\\[\\]"

extractor.*.path-strip

string

"auto"

Set of characters to remove from the end of generated path segment names using str.rstrip()

Special values:

* "auto": Use characters from "unix" or "windows" depending on the local operating system
* "unix": ""
* "windows": ". "

extractor.*.extension-map

object

{
    "jpeg": "jpg",
    "jpe" : "jpg",
    "jfif": "jpg",
    "jif" : "jpg",
    "jfi" : "jpg"
}

A JSON object mapping filename extensions to their replacements.

extractor.*.skip

bool or string

true

Controls the behavior when downloading files that have been downloaded before, i.e. a file with the same filename already exists or its ID is in a download archive.

* true: Skip downloads
* false: Overwrite already existing files

* "abort": Stop the current extractor run
* "abort:N": Skip downloads and stop the current extractor run after N consecutive skips

* "terminate": Stop the current extractor run, including parent extractors
* "terminate:N": Skip downloads and stop the current extractor run, including parent extractors, after N consecutive skips

* "exit": Exit the program altogether
* "exit:N": Skip downloads and exit the program after N consecutive skips

* "enumerate": Add an enumeration index to the beginning of the filename extension (file.1.ext, file.2.ext, etc.)

extractor.*.sleep

Duration

0

Number of seconds to sleep before each download.

extractor.*.sleep-extractor

Duration

0

Number of seconds to sleep before handling an input URL, i.e. before starting a new extractor.

extractor.*.sleep-request

Duration

0

Minimal time interval in seconds between each HTTP request during data extraction.
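The three sleep options can be combined; a sketch with illustrative values:

```json
{
    "extractor": {
        "sleep": 1.5,
        "sleep-request": 0.5,
        "sleep-extractor": 3.0
    }
}
```

This would wait 1.5 seconds before each download, keep at least 0.5 seconds between HTTP requests during data extraction, and sleep 3 seconds before each new extractor starts.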

extractor.*.username & .password

string

null

The username and password to use when attempting to log in to another site.

Specifying a username and password is required for

* nijie

and optional for

* aryion
* danbooru (*)
* e621 (*)
* exhentai
* idolcomplex
* imgbb
* inkbunny
* instagram
* kemonoparty
* mangadex
* mangoxo
* pillowfort
* sankaku
* seisoparty
* subscribestar
* tapas
* tsumino
* twitter
* zerochan

These values can also be specified via the -u/--username and -p/--password command-line options or by using a .netrc file. (see Authentication)

(*) The password value for danbooru and e621 should be the API key found in your user profile, not the actual account password.
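For example, credentials for danbooru might be configured as follows (placeholder values; per the note above, the password here is the API key, not the account password):

```json
{
    "extractor": {
        "danbooru": {
            "username": "your-username",
            "password": "your-api-key"
        }
    }
}
```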

extractor.*.netrc

bool

false

Enable the use of .netrc authentication data.

extractor.*.cookies

Path or object or list

null

Source to read additional cookies from. This can be

* The Path to a Mozilla/Netscape format cookies.txt file

"~/.local/share/cookies-instagram-com.txt"

* An object specifying cookies as name-value pairs

{
    "cookie-name": "cookie-value",
    "sessionid"  : "14313336321%3AsabDFvuASDnlpb%3A31",
    "isAdult"    : "1"
}

* A list with up to 3 entries specifying a browser profile.

* The first entry is the browser name
* The optional second entry is a profile name or an absolute path to a profile directory
* The optional third entry is the keyring to retrieve passwords for decrypting cookies from

["firefox"]

["chromium", "Private", "kwallet"]

extractor.*.cookies-update

bool

true

If extractor.*.cookies specifies the Path to a cookies.txt file and it can be opened and parsed without errors, update its contents with cookies received during data extraction.

extractor.*.proxy

string or object

null

Proxy (or proxies) to be used for remote connections.

* If this is a string, it is the proxy URL for all outgoing requests.
* If this is an object, it is a scheme-to-proxy mapping to specify different proxy URLs for each scheme. It is also possible to set a proxy for a specific host by using scheme://host as key. See Requests' proxy documentation for more details.

Example:

{
    "http" : "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:1080",
    "http://10.20.1.128": "http://10.10.1.10:5323"
}

Note: All proxy URLs should include a scheme, otherwise http:// is assumed.

extractor.*.source-address


* string
* list with 1 string and 1 integer as elements


* "192.168.178.20"
* ["192.168.178.20", 8080]

Client-side IP address to bind to.

Can be either a simple string with just the local IP address
or a list with IP and explicit port number as elements.

extractor.*.user-agent

string

"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0"

User-Agent header value to be used for HTTP requests.

Note: This option has no effect on pixiv extractors, as these need specific values to function correctly.

extractor.*.browser

string

"firefox" for patreon, null everywhere else


* "chrome:macos"

Try to emulate a real browser (firefox or chrome) by using their default HTTP headers and TLS ciphers for HTTP requests.

Optionally, the operating system used in the User-Agent header can be specified after a : (windows, linux, or macos).

Note: requests and urllib3 only support HTTP/1.1, while a real browser would use HTTP/2.

extractor.*.keywords

object

{"type": "Pixel Art", "type_id": 123}

Additional key-value pairs to be added to each metadata dictionary.

extractor.*.keywords-default

any

"None"

Default value used for missing or undefined keyword names in format strings.

extractor.*.url-metadata

string

null

Insert a file's download URL into its metadata dictionary as the given name.

For example, setting this option to "gdl_file_url" will cause a new metadata field with name gdl_file_url to appear, which contains the current file's download URL. This can then be used in filenames, with a metadata post processor, etc.
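A minimal sketch combining this option with a metadata post processor (the field name "gdl_file_url" is just an example):

```json
{
    "extractor": {
        "url-metadata": "gdl_file_url",
        "postprocessors": [
            {"name": "metadata", "mode": "json"}
        ]
    }
}
```

The JSON files written by the metadata post processor would then contain a gdl_file_url field holding each file's download URL.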

extractor.*.path-metadata

string

null

Insert a reference to the current PathFormat data structure into metadata dictionaries as the given name.

For example, setting this option to "gdl_path" would make it possible to access the current file's filename as "{gdl_path.filename}".

extractor.*.category-transfer

bool

Extractor-specific

Transfer an extractor's (sub)category values to all child extractors spawned by it, to let them inherit their parent's config options.

extractor.*.blacklist & .whitelist

list of strings

["oauth", "recursive", "test"] + current extractor category

["imgur", "gfycat:user", "*:image"]

A list of extractor identifiers to ignore (or allow) when spawning child extractors for unknown URLs, e.g. from reddit or plurk.

Each identifier can be

* A category or basecategory name ("imgur", "mastodon")
* A (base)category-subcategory pair, where both names are separated by a colon ("gfycat:user"). Both names can be a * or left empty, matching all possible names ("*:image", ":user").

Note: Any blacklist setting will automatically include "oauth", "recursive", and "test".

extractor.*.archive

Path

null

"$HOME/.archives/{category}.sqlite3"

File to store IDs of downloaded files in. Downloads of files already recorded in this archive file will be skipped.

The resulting archive file is not a plain text file but an SQLite3 database, as both lookup times and memory requirements are significantly lower when the number of stored IDs gets reasonably large.

Note: Archive files that do not already exist get generated automatically.

Note: Archive paths support regular format string replacements, but be aware that using external inputs for building local paths may pose a security risk.

extractor.*.archive-format

string

"{id}_{offset}"

An alternative format string to build archive IDs with.

extractor.*.archive-prefix

string

"{category}"

Prefix for archive IDs.
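Taken together, the three archive options might be configured like this (illustrative values; the format string shown is the documented default):

```json
{
    "extractor": {
        "archive": "$HOME/.archives/{category}.sqlite3",
        "archive-prefix": "{category}",
        "archive-format": "{id}_{offset}"
    }
}
```

A file's archive ID is the evaluated archive-prefix followed by the evaluated archive-format string.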

extractor.*.postprocessors

list of Postprocessor Configuration objects

[
    {
        "name": "zip",
        "compression": "store"
    },
    {
        "name": "exec",
        "command": ["/home/foobar/script", "{category}", "{image_id}"]
    }
]

A list of post processors to be applied to each downloaded file in the specified order.

Unlike other options, a postprocessors setting at a deeper level does not override a postprocessors setting at a higher level. Instead, all post processors from all applicable postprocessors settings get combined into a single list.

For example

* an mtime post processor at extractor.postprocessors,
* a zip post processor at extractor.pixiv.postprocessors,
* and using --exec

will run all three post processors - mtime, zip, exec - for each downloaded pixiv file.
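The config-file part of that example could be sketched as follows (only the mtime and zip post processors are shown; --exec is passed on the command line):

```json
{
    "extractor": {
        "postprocessors": [
            {"name": "mtime"}
        ],
        "pixiv": {
            "postprocessors": [
                {"name": "zip"}
            ]
        }
    }
}
```

With this configuration, pixiv downloads run both the mtime and zip post processors, while other extractors run only mtime.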

extractor.*.retries

integer

4

Maximum number of times a failed HTTP request is retried before giving up, or -1 for infinite retries.

extractor.*.timeout

float

30.0

Amount of time (in seconds) to wait for a successful connection and response from a remote server.

This value gets internally used as the timeout parameter for the requests.request() method.

extractor.*.verify

bool or string

true

Controls whether to verify SSL/TLS certificates for HTTPS requests.

If this is a string, it must be the path to a CA bundle to use instead of the default certificates.

This value gets internally used as the verify parameter for the requests.request() method.

extractor.*.download

bool

true

Controls whether to download media files.

Setting this to false won't download any files, but all other functions (postprocessors, download archive, etc.) will be executed as normal.

extractor.*.fallback

bool

true

Use fallback download URLs when a download fails.

extractor.*.image-range

string


* "10-20"
* "-5, 10, 30-50, 100-"

Index-range(s) specifying which images to download.

Note: The index of the first image is 1.

extractor.*.chapter-range

string

Like image-range, but applies to delegated URLs like manga-chapters, etc.

extractor.*.image-filter

string


* "width >= 1200 and width/height > 1.2"
* "re.search(r'foo(bar)+', description)"

Python expression controlling which files to download.

Files for which the expression evaluates to False are ignored.
Available keys are the filename-specific ones listed by -K or -j.

extractor.*.chapter-filter

string


* "lang == 'en'"
* "language == 'French' and 10 <= chapter < 20"

Like image-filter, but applies to delegated URLs like manga-chapters, etc.

extractor.*.image-unique

bool

false

Ignore image URLs that have been encountered before during the current extractor run.

extractor.*.chapter-unique

bool

false

Like image-unique, but applies to delegated URLs like manga-chapters, etc.

extractor.*.date-format

string

"%Y-%m-%dT%H:%M:%S"

Format string used to parse string values of date-min and date-max.

See strptime for a list of formatting directives.
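As a sketch, this format can be used to restrict reddit submissions by date (values are illustrative; date-min is documented under the reddit options below):

```json
{
    "extractor": {
        "reddit": {
            "date-format": "%Y-%m-%dT%H:%M:%S",
            "date-min": "2021-01-01T00:00:00"
        }
    }
}
```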

EXTRACTOR-SPECIFIC OPTIONS

extractor.artstation.external

bool

false

Try to follow external URLs of embedded players.

extractor.aryion.recursive

bool

true

Controls the post extraction strategy.

* true: Start on users' main gallery pages and recursively descend into subfolders
* false: Get posts from "Latest Updates" pages

extractor.bbc.width

int

1920

Specifies the requested image width.

This value must be divisible by 16 and gets rounded down otherwise. The maximum possible value appears to be 1920.

extractor.blogger.videos

bool

true

Download embedded videos hosted on https://www.blogger.com/

extractor.cyberdrop.domain

string

"auto"

"cyberdrop.to"

Specifies the domain used by cyberdrop regardless of input URL.

Setting this option to "auto" uses the same domain as a given input URL.

extractor.danbooru.external

bool

false

For unavailable or restricted posts, follow the source and download from there if possible.

extractor.danbooru.metadata

bool

false

Extract additional metadata (notes, artist commentary, parent, children)

Note: This requires 1 additional HTTP request for each post.

extractor.danbooru.ugoira

bool

false

Controls the download target for Ugoira posts.

* true: Original ZIP archives
* false: Converted video files

extractor.derpibooru.api-key

string

null

Your Derpibooru API Key, to use your account's browsing settings and filters.

extractor.derpibooru.filter

integer

56027 (Everything filter)

The content filter ID to use.

Setting an explicit filter ID overrides any default filters and can be used to access 18+ content without API Key.

See Filters for details.

extractor.deviantart.auto-watch

bool

false

Automatically watch users when encountering "Watchers-Only Deviations" (requires a refresh-token).

extractor.deviantart.auto-unwatch

bool

false

After watching a user through auto-watch, unwatch that user at the end of the current extractor run.

extractor.deviantart.comments

bool

false

Extract comments metadata.

extractor.deviantart.extra

bool

false

Download extra Sta.sh resources from description texts and journals.

Note: Enabling this option also enables deviantart.metadata.

extractor.deviantart.flat

bool

true

Select the directory structure created by the Gallery- and Favorite-Extractors.

* true: Use a flat directory structure.
* false: Collect a list of all gallery-folders or favorites-collections and transfer any further work to other extractors (folder or collection), which will then create individual subdirectories for each of them.

Note: Going through all gallery folders cannot fetch deviations which aren't in any folder.

extractor.deviantart.folders

bool

false

Provide a folders metadata field that contains the names of all folders a deviation is present in.

Note: Gathering this information requires a lot of API calls. Use with caution.

extractor.deviantart.include

string or list of strings

"gallery"

"favorite,journal,scraps" or ["favorite", "journal", "scraps"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "gallery", "scraps", "journal", "favorite".

You can use "all" instead of listing all values separately.

extractor.deviantart.journals

string

"html"

Selects the output format of journal entries.

* "html": HTML with (roughly) the same layout as on DeviantArt.
* "text": Plain text with image references and HTML tags removed.
* "none": Don't download journals.

extractor.deviantart.mature

bool

true

Enable mature content.

This option simply sets the mature_content parameter for API calls to either "true" or "false" and does not do any other form of content filtering.

extractor.deviantart.metadata

bool

false

Request extended metadata for deviation objects to additionally provide description, tags, license and is_watching fields.

extractor.deviantart.original

bool or string

true

Download original files if available.

Setting this option to "images" only downloads original files if they are images and falls back to preview versions for everything else (archives, etc.).

extractor.deviantart.pagination

string

"api"

Controls when to stop paginating over API results.

* "api": Trust the API and stop when has_more is false.
* "manual": Disregard has_more and only stop when a batch of results is empty.

extractor.deviantart.refresh-token

string

null

The refresh-token value you get from linking your DeviantArt account to gallery-dl.

Using a refresh-token allows you to access private or otherwise not publicly available deviations.

Note: The refresh-token becomes invalid after 3 months or whenever your cache file is deleted or cleared.

extractor.deviantart.wait-min

integer

0

Minimum wait time in seconds before API requests.

extractor.exhentai.domain

string

"auto"


* "auto": Use e-hentai.org or exhentai.org depending on the input URL
* "e-hentai.org": Use e-hentai.org for all URLs
* "exhentai.org": Use exhentai.org for all URLs

extractor.exhentai.limits

integer

null

Sets a custom image download limit and stops extraction when it gets exceeded.

extractor.exhentai.metadata

bool

false

Load extended gallery metadata from the API.

Adds archiver_key, posted, and torrents. Makes date and filesize more precise.

extractor.exhentai.original

bool

true

Download full-sized original images if available.

extractor.exhentai.source

string

"gallery"

Selects an alternative source to download files from.

* "hitomi": Download the corresponding gallery from hitomi.la

extractor.fanbox.embeds

bool or string

true

Control behavior on embedded content from external sites.

* true: Extract embed URLs and download them if supported (videos are not downloaded).
* "ytdl": Like true, but let youtube-dl handle video extraction and download for YouTube, Vimeo and SoundCloud embeds.
* false: Ignore embeds.

extractor.flickr.access-token & .access-token-secret

string

null

The access_token and access_token_secret values you get from linking your Flickr account to gallery-dl.

extractor.flickr.videos

bool

true

Extract and download videos.

extractor.flickr.size-max

integer or string

null

Sets the maximum allowed size for downloaded images.

* If this is an integer, it specifies the maximum image dimension (width and height) in pixels.
* If this is a string, it should be one of Flickr's format specifiers ("Original", "Large", ... or "o", "k", "h", "l", ...) to use as an upper limit.

extractor.furaffinity.descriptions

string

"text"

Controls the format of description metadata fields.

* "text": Plain text with HTML tags removed
* "html": Raw HTML content

extractor.furaffinity.external

bool

false

Follow external URLs linked in descriptions.

extractor.furaffinity.include

string or list of strings

"gallery"

"scraps,favorite" or ["scraps", "favorite"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "gallery", "scraps", "favorite".

You can use "all" instead of listing all values separately.

extractor.furaffinity.layout

string

"auto"

Selects which site layout to expect when parsing posts.

* "auto": Automatically differentiate between "old" and "new"
* "old": Expect the *old* site layout
* "new": Expect the *new* site layout

extractor.gelbooru.api-key & .user-id

string

null

Values from the API Access Credentials section found at the bottom of your Account Options page.

extractor.generic.enabled

bool

false

Match **all** URLs not otherwise supported by gallery-dl, even ones without a generic: prefix.

extractor.gfycat.format


* list of strings
* string

["mp4", "webm", "mobile", "gif"]

List of names of the preferred animation format, which can be "mp4", "webm", "mobile", "gif", or "webp".

If a selected format is not available, the next one in the list will be tried until an available format is found.

If the format is given as a string, it will be extended with ["mp4", "webm", "mobile", "gif"]. Use a list with one element to restrict it to only one possible format.
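For example, to download only the WebM version of each animation and skip posts where it is unavailable:

```json
{
    "extractor": {
        "gfycat": {
            "format": ["webm"]
        }
    }
}
```

By contrast, "format": "webm" (a plain string) would fall back to the other default formats when WebM is unavailable.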

extractor.gofile.api-token

string

null

API token value found at the bottom of your profile page.

If not set, a temporary guest token will be used.

extractor.gofile.website-token

string

"12345"

API token value used during API requests.

An outdated value will result in 401 Unauthorized errors.

Setting this value to null will do an extra HTTP request to fetch the current value used by gofile.

extractor.gofile.recursive

bool

false

Recursively download files from subfolders.

extractor.hentaifoundry.include

string or list of strings

"pictures"

"scraps,stories" or ["scraps", "stories"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "pictures", "scraps", "stories", "favorite".

You can use "all" instead of listing all values separately.

extractor.hitomi.format

string

"webp"

Selects which image format to download.

Available formats are "webp" and "avif".

"original" will try to download the original jpg or png versions, but is most likely going to fail with 403 Forbidden errors.

extractor.imgur.mp4

bool or string

true

Controls whether to choose the GIF or MP4 version of an animation.

* true: Follow Imgur's advice and choose MP4 if the prefer_video flag in an image's metadata is set.
* false: Always choose GIF.
* "always": Always choose MP4.

extractor.inkbunny.orderby

string

"create_datetime"

Value of the orderby parameter for submission searches.

(See API#Search for details)

extractor.instagram.include

string or list of strings

"posts"

"stories,highlights,posts" or ["stories", "highlights", "posts"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "posts", "reels", "channel", "tagged", "stories", "highlights".

You can use "all" instead of listing all values separately.

extractor.instagram.previews

bool

false

Download video previews.

extractor.instagram.videos

bool

true

Download video files.

extractor.itaku.videos

bool

true

Download video files.

extractor.kemonoparty.comments

bool

false

Extract comments metadata.

extractor.kemonoparty.duplicates

bool

false

Controls how to handle duplicate files in a post.

* true: Download duplicates
* false: Ignore duplicates

extractor.kemonoparty.dms

bool

false

Extract a user's direct messages as dms metadata.

extractor.kemonoparty.favorites

string

"artist"

Determines the type of favorites to be downloaded.

Available types are "artist" and "post".

extractor.kemonoparty.files

list of strings

["attachments", "file", "inline"]

Determines the type and order of files to be downloaded.

Available types are file, attachments, and inline.

extractor.kemonoparty.max-posts

integer

null

Limit the number of posts to download.

extractor.kemonoparty.metadata

bool

false

Extract username metadata

extractor.khinsider.format

string

"mp3"

The name of the preferred file format to download.

Use "all" to download all available formats, or a (comma-separated) list to select multiple formats.

If the selected format is not available, the first in the list gets chosen (usually mp3).

extractor.lolisafe.domain

string

"auto"

Specifies the domain used by a lolisafe extractor regardless of input URL.

Setting this option to "auto" uses the same domain as a given input URL.

extractor.luscious.gif

bool

false

Format in which to download animated images.

Use true to download animated images as gifs and false to download as mp4 videos.

extractor.mangadex.api-server

string

"https://api.mangadex.org"

The server to use for API requests.

extractor.mangadex.api-parameters

object

{"order[updatedAt]": "desc"}

Additional query parameters to send when fetching manga chapters.

(See /manga/{id}/feed and /user/follows/manga/feed)

extractor.mangadex.lang

string

"en"

ISO 639-1 language code to filter chapters by.

extractor.mangadex.ratings

list of strings

["safe", "suggestive", "erotica", "pornographic"]

List of acceptable content ratings for returned chapters.

extractor.mastodon.reblogs

bool

false

Fetch media from reblogged posts.

extractor.mastodon.replies

bool

true

Fetch media from replies to other posts.

extractor.mastodon.text-posts

bool

false

Also emit metadata for text-only posts without media content.

extractor.newgrounds.flash

bool

true

Download original Adobe Flash animations instead of pre-rendered videos.

extractor.newgrounds.format

string

"original"

"720p"

Selects the preferred format for video downloads.

If the selected format is not available, the next smaller one gets chosen.

extractor.newgrounds.include

string or list of strings

"art"

"movies,audio" or ["movies", "audio"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "art", "audio", "movies".

You can use "all" instead of listing all values separately.

extractor.nijie.include

string or list of strings

"illustration,doujin"

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "illustration", "doujin", "favorite", "nuita".

You can use "all" instead of listing all values separately.

extractor.oauth.browser

bool

true

Controls how a user is directed to an OAuth authorization page.

* true: Use Python's webbrowser.open() method to automatically open the URL in the user's default browser.
* false: Ask the user to copy & paste an URL from the terminal.

extractor.oauth.cache

bool

true

Store tokens received during OAuth authorizations in cache.

extractor.oauth.host

string

"localhost"

Host name / IP address to bind to during OAuth authorization.

extractor.oauth.port

integer

6414

Port number to listen on during OAuth authorization.

Note: All redirects will go to http://localhost:6414/, regardless of the port specified here. You'll have to manually adjust the port number in your browser's address bar when using a different port than the default.

extractor.paheal.metadata

bool

false

Extract additional metadata (source, uploader)

Note: This requires 1 additional HTTP request per post.

extractor.patreon.files

list of strings

["images", "image_large", "attachments", "postfile", "content"]

Determines the type and order of files to be downloaded.

Available types are postfile, images, image_large, attachments, and content.

extractor.photobucket.subalbums

bool

true

Download subalbums.

extractor.pillowfort.external

bool

false

Follow links to external sites, e.g. Twitter.

extractor.pillowfort.inline

bool

true

Extract inline images.

extractor.pillowfort.reblogs

bool

false

Extract media from reblogged posts.

extractor.pinterest.sections

bool

true

Include pins from board sections.

extractor.pinterest.videos

bool

true

Download from video pins.

extractor.pixiv.include


* string
* list of strings

"artworks"


* "avatar,background,artworks"
* ["avatar", "background", "artworks"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "artworks", "avatar", "background", "favorite".

It is possible to use "all" instead of listing all values separately.

extractor.pixiv.artworks.metadata

bool

false

Fetch extended user metadata.

extractor.pixiv.work.related

bool

false

Also download related artworks.

extractor.pixiv.tags

string

"japanese"

Controls the tags metadata field.

* "japanese": List of Japanese tags
* "translated": List of translated tags
* "original": Unmodified list with both Japanese and translated tags

extractor.pixiv.ugoira

bool

true

Download Pixiv's Ugoira animations or ignore them.

These animations come as a .zip file containing all animation frames in JPEG format.

Use an ugoira post processor to convert them to watchable videos.

extractor.pixiv.max-posts

integer

0

When downloading galleries, this sets the maximum number of posts to get. A value of 0 means no limit.

extractor.plurk.comments

bool

false

Also search Plurk comments for URLs.

extractor.reactor.gif

bool

false

Format in which to download animated images.

Use true to download animated images as gifs and false to download as mp4 videos.

extractor.readcomiconline.captcha

string

"stop"

Controls how to handle redirects to CAPTCHA pages.

* "stop": Stop the current extractor run.
* "wait": Ask the user to solve the CAPTCHA and wait.

extractor.readcomiconline.quality

string

"auto"

Sets the quality query parameter of issue pages. ("lq" or "hq")

"auto" uses the quality parameter of the input URL or "hq" if not present.

extractor.reddit.comments

integer

0

The value of the limit parameter when loading a submission and its comments. This number (roughly) specifies the total amount of comments being retrieved with the first API call.

Reddit's internal default and maximum values for this parameter appear to be 200 and 500 respectively.

The value 0 ignores all comments and significantly reduces the time required when scanning a subreddit.

extractor.reddit.morecomments

bool

false

Retrieve additional comments by resolving the more comment stubs in the base comment tree.

This requires 1 additional API call for every 100 extra comments.

extractor.reddit.date-min & .date-max

Date

0 and 253402210800 (timestamp of datetime.max)

Ignore all submissions posted before/after this date.

extractor.reddit.id-min & .id-max

string

"6kmzv2"

Ignore all submissions posted before/after the submission with this ID.

extractor.reddit.recursion

integer

0

Reddit extractors can recursively visit other submissions linked to in the initial set of submissions. This value sets the maximum recursion depth.

Special values:

* 0: Recursion is disabled
* -1: Infinite recursion (don't do this)

extractor.reddit.refresh-token

string

null

The refresh-token value you get from linking your Reddit account to gallery-dl.

Using a refresh-token allows you to access private or otherwise not publicly available subreddits, provided your account is authorized to do so, but requests to the Reddit API are rate limited at 600 requests every 10 minutes.

extractor.reddit.videos

bool or string

true

Control video download behavior.

* true: Download videos and use youtube-dl to handle HLS and DASH manifests
* "ytdl": Download videos and let youtube-dl handle all of video extraction and download
* false: Ignore videos
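As an illustrative sketch, a reddit section that limits comment retrieval while delegating video handling to youtube-dl could be written as:

```json
{
    "extractor": {
        "reddit": {
            "comments": 500,
            "morecomments": true,
            "videos": "ytdl"
        }
    }
}
```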

extractor.redgifs.format


* list of strings
* string

["hd", "sd", "gif"]

List of names of the preferred animation format, which can be "hd", "sd", "gif", "vthumbnail", "thumbnail", or "poster".

If a selected format is not available, the next one in the list will be tried until an available format is found.

If the format is given as a string, it will be extended with ["hd", "sd", "gif"]. Use a list with one element to restrict it to only one possible format.
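For example, to accept only the "hd" format and skip anything else, the list form with a single element can be used (a minimal sketch):

```json
{
    "extractor": {
        "redgifs": {
            "format": ["hd"]
        }
    }
}
```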

extractor.sankakucomplex.embeds

bool

false

Download video embeds from external sites.

extractor.sankakucomplex.videos

bool

true

Download videos.

extractor.skeb.article

bool

false

Download article images.

extractor.skeb.sent-requests

bool

false

Download sent requests.

extractor.skeb.thumbnails

bool

false

Download thumbnails.

extractor.smugmug.videos

bool

true

Download video files.

extractor.tumblr.avatar

bool

false

Download blog avatars.

extractor.tumblr.date-min & .date-max

Date

0 and null

Ignore all posts published before/after this date.

extractor.tumblr.external

bool

false

Follow external URLs (e.g. from "Link" posts) and try to extract images from them.

extractor.tumblr.inline

bool

true

Search posts for inline images and videos.

extractor.tumblr.original

bool

true

Download full-resolution photo and inline images.

For each photo with "maximum" resolution (width equal to 2048 or height equal to 3072) or each inline image, use an extra HTTP request to find the URL to its full-resolution version.

extractor.tumblr.ratelimit

string

"abort"

Selects how to handle exceeding the daily API rate limit.

* "abort": Raise an error and stop extraction
* "wait": Wait until rate limit reset

extractor.tumblr.reblogs

bool or string

true


* true: Extract media from reblogged posts
* false: Skip reblogged posts
* "same-blog": Skip reblogged posts unless the original post is from the same blog

extractor.tumblr.posts

string or list of strings

"all"

"video,audio,link" or ["video", "audio", "link"]

A (comma-separated) list of post types to extract images, etc. from.

Possible types are text, quote, link, answer, video, audio, photo, chat.

You can use "all" instead of listing all types separately.
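A hypothetical tumblr section combining this option with reblogs, restricting extraction to photo and video posts from the blog itself, might look like:

```json
{
    "extractor": {
        "tumblr": {
            "posts": ["photo", "video"],
            "reblogs": "same-blog"
        }
    }
}
```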

extractor.twibooru.api-key

string

null

Your Twibooru API Key, to use your account's browsing settings and filters.

extractor.twibooru.filter

integer

2 (Everything filter)

The content filter ID to use.

Setting an explicit filter ID overrides any default filters and can be used to access 18+ content without an API Key.

See Filters for details.

extractor.twitter.cards

bool or string

false

Controls how to handle Twitter Cards.

* false: Ignore cards
* true: Download image content from supported cards
* "ytdl": Additionally download video content from unsupported cards using youtube-dl

extractor.twitter.cards-blacklist

list of strings

["summary", "youtube.com", "player:twitch.tv"]

List of card types to ignore.

Possible values are

* card names
* card domains
* <card name>:<card domain>
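Enabling card downloads while ignoring a few card types could be sketched like this (the blacklist entries are the documented defaults):

```json
{
    "extractor": {
        "twitter": {
            "cards": true,
            "cards-blacklist": ["summary", "youtube.com", "player:twitch.tv"]
        }
    }
}
```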

extractor.twitter.conversations

bool

false

For input URLs pointing to a single Tweet, e.g. https://twitter.com/i/web/status/<TweetID>, fetch media from all Tweets and replies in this conversation <https://help.twitter.com/en/using-twitter/twitter-conversations> or thread.

extractor.twitter.csrf

string

"cookies"

Controls how to handle Cross Site Request Forgery (CSRF) tokens.

* "auto": Always auto-generate a token.
* "cookies": Use token given by the ct0 cookie if present.

extractor.twitter.expand

bool

false

For each Tweet, return *all* Tweets from that initial Tweet's conversation or thread, i.e. *expand* all Twitter threads.

Going through a timeline with this option enabled is essentially the same as running gallery-dl https://twitter.com/i/web/status/<TweetID> with the conversations option enabled for each Tweet in said timeline.

Note: This requires at least 1 additional API call per initial Tweet.

extractor.twitter.size

list of strings

["orig", "4096x4096", "large", "medium", "small"]

The image version to download. Any entries after the first one will be used for potential fallback URLs.

Known available sizes are 4096x4096, orig, large, medium, and small.

extractor.twitter.syndication

bool

false

Retrieve age-restricted content using Twitter's syndication API.

extractor.twitter.logout

bool

false

Logout and retry as guest when access to another user's Tweets is blocked.

extractor.twitter.pinned

bool

false

Fetch media from pinned Tweets.

extractor.twitter.quoted

bool

false

Fetch media from quoted Tweets.

extractor.twitter.replies

bool

true

Fetch media from replies to other Tweets.

If this value is "self", only consider replies where reply and original Tweet are from the same user.

extractor.twitter.retweets

bool

false

Fetch media from Retweets.

If this value is "original", metadata for these files will be taken from the original Tweets, not the Retweets.

extractor.twitter.timeline.strategy

string

"auto"

Controls the strategy / tweet source used for user URLs (https://twitter.com/USER).

* "tweets": /tweets timeline + search
* "media": /media timeline + search
* "with_replies": /with_replies timeline + search
* "auto": "tweets" or "media", depending on retweets and text-tweets settings

extractor.twitter.text-tweets

bool

false

Also emit metadata for text-only Tweets without media content.

This only has an effect with a metadata (or exec) post processor with "event": "post" and appropriate filename.

extractor.twitter.twitpic

bool

false

Extract TwitPic embeds.

extractor.twitter.unique

bool

true

Ignore previously seen Tweets.

extractor.twitter.users

string

"timeline"

"https://twitter.com/search?q=from:{legacy[screen_name]}"

Format string for user URLs generated from
following and list-members queries, whose replacement field values come from Twitter user objects
(Example)

Special values:

* "timeline": https://twitter.com/i/user/{rest_id}
* "tweets": https://twitter.com/id:{rest_id}/tweets
* "media": https://twitter.com/id:{rest_id}/media

Note: To allow gallery-dl to follow custom URL formats, set the blacklist for twitter to a non-default value, e.g. an empty string "".
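Combining a custom users format string with a cleared blacklist, as described in the note above, could be sketched as:

```json
{
    "extractor": {
        "twitter": {
            "users": "https://twitter.com/search?q=from:{legacy[screen_name]}",
            "blacklist": ""
        }
    }
}
```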

extractor.twitter.videos

bool or string

true

Control video download behavior.

* true: Download videos
* "ytdl": Download videos using youtube-dl
* false: Skip video Tweets

extractor.unsplash.format

string

"raw"

Name of the image format to download.

Available formats are "raw", "full", "regular", "small", and "thumb".

extractor.vsco.videos

bool

true

Download video files.

extractor.wallhaven.api-key

string

null

Your Wallhaven API Key, to use your account's browsing settings and default filters when searching.

See https://wallhaven.cc/help/api for more information.

extractor.wallhaven.metadata

bool

false

Extract additional metadata (tags, uploader)

Note: This requires 1 additional HTTP request for each post.

extractor.weasyl.api-key

string

null

Your Weasyl API Key, to use your account's browsing settings and filters.

extractor.weasyl.metadata

bool

false

Fetch extra submission metadata during gallery downloads.
(comments, description, favorites, folder_name,
tags, views)

Note: This requires 1 additional HTTP request per submission.

extractor.weibo.include


* string
* list of strings

"feed"

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "home", "feed", "videos", "newvideo", "article", "album".

It is possible to use "all" instead of listing all values separately.

extractor.weibo.livephoto

bool

true

Download livephoto files.

extractor.weibo.retweets

bool

true

Fetch media from retweeted posts.

If this value is "original", metadata for these files will be taken from the original posts, not the retweeted posts.

extractor.weibo.videos

bool

true

Download video files.

extractor.ytdl.enabled

bool

false

Match **all** URLs, even ones without a ytdl: prefix.

extractor.ytdl.format

string

youtube-dl's default, currently "bestvideo+bestaudio/best"

Video format selection <https://github.com/ytdl-org/youtube-dl#format-selection> directly passed to youtube-dl.

extractor.ytdl.generic

bool

true

Controls the use of youtube-dl's generic extractor.

Set this option to "force" for the same effect as youtube-dl's --force-generic-extractor.

extractor.ytdl.logging

bool

true

Route youtube-dl's output through gallery-dl's logging system. Otherwise youtube-dl will write its output directly to stdout/stderr.

Note: Set quiet and no_warnings in extractor.ytdl.raw-options to true to suppress all output.

extractor.ytdl.module

string

null

Name of the youtube-dl Python module to import.

Setting this to null will try to import "yt_dlp" followed by "youtube_dl" as fallback.

extractor.ytdl.raw-options

object

{
    "quiet": true,
    "writesubtitles": true,
    "merge_output_format": "mkv"
}

Additional options passed directly to the YoutubeDL constructor.

All available options can be found in youtube-dl's docstrings <https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L138-L318>.

extractor.ytdl.cmdline-args


* string
* list of strings


* "--quiet --write-sub --merge-output-format mkv"
* ["--quiet", "--write-sub", "--merge-output-format", "mkv"]

Additional options specified as youtube-dl command-line arguments.
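An illustrative ytdl section selecting the yt_dlp module and passing extra command-line arguments might look like:

```json
{
    "extractor": {
        "ytdl": {
            "module": "yt_dlp",
            "cmdline-args": ["--quiet", "--write-sub", "--merge-output-format", "mkv"]
        }
    }
}
```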

extractor.ytdl.config-file

Path

"~/.config/youtube-dl/config"

Location of a youtube-dl configuration file to load options from.

extractor.zerochan.metadata

bool

false

Extract additional metadata (date, md5, tags, ...)

Note: This requires 1-2 additional HTTP request for each post.

extractor.[booru].tags

bool

false

Categorize tags by their respective types and provide them as tags_<type> metadata fields.

Note: This requires 1 additional HTTP request for each post.

extractor.[booru].notes

bool

false

Extract overlay notes (position and text).

Note: This requires 1 additional HTTP request for each post.

extractor.[manga-extractor].chapter-reverse

bool

false

Reverse the order of chapter URLs extracted from manga pages.

* true: Start with the latest chapter
* false: Start with the first chapter

extractor.[manga-extractor].page-reverse

bool

false

Download manga chapter pages in reverse order.

DOWNLOADER OPTIONS

downloader.*.enabled

bool

true

Enable/Disable this downloader module.

downloader.*.filesize-min & .filesize-max

string

null

"32000", "500k", "2.5M"

Minimum/Maximum allowed file size in bytes. Any file smaller/larger than this limit will not be downloaded.

Possible values are valid integer or floating-point numbers optionally followed by one of k, m, g, t, or p. These suffixes are case-insensitive.
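For instance, skipping files smaller than 10 KB or larger than 2.5 MB for all downloaders could be sketched as:

```json
{
    "downloader": {
        "filesize-min": "10k",
        "filesize-max": "2.5M"
    }
}
```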

downloader.*.mtime

bool

true

Use Last-Modified HTTP response headers to set file modification times.

downloader.*.part

bool

true

Controls the use of .part files during file downloads.

* true: Write downloaded data into .part files and rename them upon download completion. This mode additionally supports resuming incomplete downloads.
* false: Do not use .part files and write data directly into the actual output files.

downloader.*.part-directory

Path

null

Alternate location for .part files.

Missing directories will be created as needed. If this value is null, .part files are going to be stored alongside the actual output files.

downloader.*.progress

float

3.0

Number of seconds until a download progress indicator for the current download is displayed.

Set this option to null to disable this indicator.

downloader.*.rate

string

null

"32000", "500k", "2.5M"

Maximum download rate in bytes per second.

Possible values are valid integer or floating-point numbers optionally followed by one of k, m, g, t, or p. These suffixes are case-insensitive.

downloader.*.retries

integer

extractor.*.retries

Maximum number of retries during file downloads, or -1 for infinite retries.

downloader.*.timeout

float or null

extractor.*.timeout

Connection timeout during file downloads.

downloader.*.verify

bool or string

extractor.*.verify

Certificate validation during file downloads.

downloader.*.proxy

string or object

extractor.*.proxy

Proxy server used for file downloads.
Disable the use of a proxy by explicitly setting this option to null.

downloader.http.adjust-extensions

bool

true

Check the file headers of jpg, png, and gif files and adjust their filename extensions if they do not match.

downloader.http.headers

object

{"Accept": "image/webp,*/*", "Referer": "https://example.org/"}

Additional HTTP headers to send when downloading files.

downloader.ytdl.format

string

youtube-dl's default, currently "bestvideo+bestaudio/best"

Video format selection <https://github.com/ytdl-org/youtube-dl#format-selection> directly passed to youtube-dl.

downloader.ytdl.forward-cookies

bool

false

Forward cookies to youtube-dl.

downloader.ytdl.logging

bool

true

Route youtube-dl's output through gallery-dl's logging system. Otherwise youtube-dl will write its output directly to stdout/stderr.

Note: Set quiet and no_warnings in downloader.ytdl.raw-options to true to suppress all output.

downloader.ytdl.module

string

null

Name of the youtube-dl Python module to import.

Setting this to null will first try to import "yt_dlp" and use "youtube_dl" as fallback.

downloader.ytdl.outtmpl

string

null

The Output Template used to generate filenames for files downloaded with youtube-dl.

Special values:

* null: generate filenames with extractor.*.filename
* "default": use youtube-dl's default, currently "%(title)s-%(id)s.%(ext)s"

Note: An output template other than null might cause unexpected results in combination with other options (e.g. "skip": "enumerate")

downloader.ytdl.raw-options

object

{
    "quiet": true,
    "writesubtitles": true,
    "merge_output_format": "mkv"
}

Additional options passed directly to the YoutubeDL constructor.

All available options can be found in youtube-dl's docstrings <https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L138-L318>.

downloader.ytdl.cmdline-args


* string
* list of strings


* "--quiet --write-sub --merge-output-format mkv"
* ["--quiet", "--write-sub", "--merge-output-format", "mkv"]

Additional options specified as youtube-dl command-line arguments.

downloader.ytdl.config-file

Path

"~/.config/youtube-dl/config"

Location of a youtube-dl configuration file to load options from.

OUTPUT OPTIONS

output.mode

string

"auto"

Controls the output string format and status indicators.

* "null": No output
* "pipe": Suitable for piping to other processes or files
* "terminal": Suitable for the standard Windows console
* "color": Suitable for terminals that understand ANSI escape codes and colors
* "auto": Automatically choose the best suitable output mode

output.shorten

bool

true

Controls whether the output strings should be shortened to fit on one console line.

Set this option to "eaw" to also work with east-asian characters with a display width greater than 1.

output.colors

object

{"success": "1;32", "skip": "2"}

Controls the ANSI colors used with mode: color for successfully downloaded or skipped files.

output.ansi

bool

false

On Windows, enable ANSI escape sequences and colored output
by setting the ENABLE_VIRTUAL_TERMINAL_PROCESSING flag for stdout and stderr.

output.skip

bool

true

Show skipped file downloads.

output.fallback

bool

true

Include fallback URLs in the output of -g/--get-urls.

output.private

bool

false

Include private fields, i.e. fields whose name starts with an underscore, in the output of -K/--list-keywords and -j/--dump-json.

output.progress

bool or string

true

Controls the progress indicator when *gallery-dl* is run with multiple URLs as arguments.

* true: Show the default progress indicator ("[{current}/{total}] {url}")
* false: Do not show any progress indicator
* Any string: Show the progress indicator using this as a custom format string. Possible replacement keys are current, total and url.
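A custom progress format string using these replacement keys might look like the following sketch:

```json
{
    "output": {
        "progress": "Downloading {current} of {total}: {url}"
    }
}
```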

output.log

string or Logging Configuration

"[{name}][{levelname}] {message}"

Configuration for standard logging output to stderr.

If this is a simple string, it specifies the format string for logging messages.

output.logfile

Path or Logging Configuration

null

File to write logging output to.

output.unsupportedfile

Path or Logging Configuration

null

File to write external URLs unsupported by *gallery-dl* to.

The default format string here is "{message}".

output.num-to-str

bool

false

Convert numeric values (integer or float) to string before outputting them as JSON.

POSTPROCESSOR OPTIONS

classify.mapping

object

{
    "Pictures": ["jpg", "jpeg", "png", "gif", "bmp", "svg", "webp"],
    "Video"   : ["flv", "ogv", "avi", "mp4", "mpg", "mpeg", "3gp", "mkv", "webm", "vob", "wmv"],
    "Music"   : ["mp3", "aac", "flac", "ogg", "wma", "m4a", "wav"],
    "Archives": ["zip", "rar", "7z", "tar", "gz", "bz2"]
}

A mapping from directory names to filename extensions that should be stored in them.

Files with an extension not listed will be ignored and stored in their default location.

compare.action

string

"replace"

The action to take when files do **not** compare as equal.

* "replace": Replace/Overwrite the old version with the new one

* "enumerate": Add an enumeration index to the filename of the new version like skip = "enumerate"

compare.equal

string

"null"

The action to take when files do compare as equal.

* "abort:N": Stop the current extractor run after N consecutive files compared as equal.

* "terminate:N": Stop the current extractor run, including parent extractors, after N consecutive files compared as equal.

* "exit:N": Exit the program after N consecutive files compared as equal.

compare.shallow

bool

false

Only compare file sizes. Do not read and compare their content.

exec.async

bool

false

Controls whether to wait for a subprocess to finish or to let it run asynchronously.

exec.command

string or list of strings


* "convert {} {}.png && rm {}"
* ["echo", "{user[account]}", "{id}"]

The command to run.

* If this is a string, it will be executed using the system's shell, e.g. /bin/sh. Any {} will be replaced with the full path of a file or target directory, depending on exec.event

* If this is a list, the first element specifies the program name and any further elements its arguments. Each element of this list is treated as a format string using the files' metadata as well as {_path}, {_directory}, and {_filename}.
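As an illustrative sketch, an exec post processor in list form, run once per downloaded file, could be configured as part of an extractor.*.postprocessors list:

```json
{
    "extractor": {
        "postprocessors": [
            {
                "name": "exec",
                "event": "after",
                "async": false,
                "command": ["echo", "{_path}"]
            }
        ]
    }
}
```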

exec.event

string

"after"

The event for which exec.command is run.

See metadata.event for a list of available events.

metadata.mode

string

"json"

Selects how to process metadata.

* "json": write metadata using json.dump() <https://docs.python.org/3/library/json.html#json.dump>
* "tags": write tags separated by newlines
* "custom": write the result of applying metadata.content-format to a file's metadata dictionary
* "modify": add or modify metadata entries
* "delete": remove metadata entries

metadata.filename

string

null

"{id}.data.json"

A format string to build the filenames for metadata files with. (see extractor.filename)

Using "-" as filename will write all output to stdout.

If this option is set, metadata.extension and metadata.extension-format will be ignored.
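Putting these options together, a metadata post processor writing one JSON file per downloaded file might be sketched as:

```json
{
    "extractor": {
        "postprocessors": [
            {
                "name": "metadata",
                "mode": "json",
                "filename": "{id}.data.json",
                "directory": "metadata"
            }
        ]
    }
}
```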

metadata.directory

string

"."

"metadata"

Directory in which metadata files are stored, relative to the current target location for file downloads.

metadata.extension

string

"json" or "txt"

Filename extension for metadata files that will be appended to the original file names.

metadata.extension-format

string


* "{extension}.json"
* "json"

Custom format string to build filename extensions for metadata files with, which will replace the original filename extensions.

Note: metadata.extension is ignored if this option is set.

metadata.event

string

"file"

The event for which metadata gets written to a file.

The available events are:

* init: After post processor initialization and before the first file download
* finalize: On extractor shutdown, e.g. after all files were downloaded
* prepare: Before a file download
* file: When completing a file download, but before it gets moved to its target location
* after: After a file got moved to its target location
* skip: When skipping a file download
* post: When starting to download all files of a post, e.g. a Tweet on Twitter or a post on Patreon

metadata.fields


* list of strings
* object (field name -> format string)


* .. code:: json

["blocked", "watching", "status[creator][name]"]

* .. code:: json

{
    "blocked"         : "***",
    "watching"        : "\fE 'yes' if watching else 'no'",
    "status[username]": "{status[creator][name]!l}"
}


* "mode": "delete": A list of metadata field names to remove.
* "mode": "modify": An object with metadata field names mapping to a format string whose result is assigned to said field name.

metadata.content-format

string or list of strings


* "tags:\n\n{tags:J\n}\n"
* ["tags:", "", "{tags:J\n}"]

Custom format string to build the content of metadata files with.

Note: Only applies for "mode": "custom".

metadata.archive

Path

File to store IDs of generated metadata files in, similar to extractor.*.archive.

archive-format and archive-prefix options, akin to extractor.*.archive-format and extractor.*.archive-prefix, are supported as well.

metadata.mtime

bool

false

Set modification times of generated metadata files according to the accompanying downloaded file.

Enabling this option will only have an effect *if* there is actual mtime metadata available, that is

* after a file download ("event": "file" (default), "event": "after")
* when running *after* an mtime post processor for the same event

For example, a metadata post processor for "event": "post" will *not* be able to set its file's modification time unless an mtime post processor with "event": "post" runs *before* it.

mtime.event

string

"file"

See metadata.event

mtime.key

string

"date"

Name of the metadata field whose value should be used.

This value must either be a UNIX timestamp or a datetime object.

Note: This option gets ignored if mtime.value is set.

mtime.value

string

null


* "{status[date]}"
* "{content[0:6]:R22/2022/D%Y%m%d/}"

A format string whose value should be used.

The resulting value must either be a UNIX timestamp or a datetime object.
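An mtime post processor using a format string value, as sketched below with the "{status[date]}" example from above:

```json
{
    "extractor": {
        "postprocessors": [
            {
                "name": "mtime",
                "event": "file",
                "value": "{status[date]}"
            }
        ]
    }
}
```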

ugoira.extension

string

"webm"

Filename extension for the resulting video files.

ugoira.ffmpeg-args

list of strings

null

["-c:v", "libvpx-vp9", "-an", "-b:v", "2M"]

Additional FFmpeg command-line arguments.

ugoira.ffmpeg-demuxer

string

"auto"

FFmpeg demuxer to read and process input files with. Possible values are

* "concat" (inaccurate frame timecodes for non-uniform frame delays)
* "image2" (accurate timecodes, requires nanosecond file timestamps, i.e. no Windows or macOS)
* "mkvmerge" (accurate timecodes, only WebM or MKV, requires mkvmerge)

"auto" will select mkvmerge if available and fall back to concat otherwise.

ugoira.ffmpeg-location

Path

"ffmpeg"

Location of the ffmpeg (or avconv) executable to use.

ugoira.mkvmerge-location

Path

"mkvmerge"

Location of the mkvmerge executable for use with the mkvmerge demuxer.

ugoira.ffmpeg-output

bool

true

Show FFmpeg output.

ugoira.ffmpeg-twopass

bool

false

Enable Two-Pass encoding.

ugoira.framerate

string

"auto"

Controls the frame rate argument (-r) for FFmpeg

* "auto": Automatically assign a fitting frame rate based on delays between frames.
* any other string: Use this value as argument for -r.
* null or an empty string: Don't set an explicit frame rate.
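An illustrative ugoira post processor for pixiv, combining several of the options above (the FFmpeg arguments are taken from the ffmpeg-args example):

```json
{
    "extractor": {
        "pixiv": {
            "postprocessors": [
                {
                    "name": "ugoira",
                    "extension": "webm",
                    "framerate": "auto",
                    "ffmpeg-args": ["-c:v", "libvpx-vp9", "-an", "-b:v", "2M"]
                }
            ]
        }
    }
}
```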

ugoira.keep-files

bool

false

Keep ZIP archives after conversion.

ugoira.libx264-prevent-odd

bool

true

Prevent "width/height not divisible by 2" errors when using libx264 or libx265 encoders by applying a simple cropping filter. See this Stack Overflow thread for more information.

This option, when libx264/5 is used, automatically adds ["-vf", "crop=iw-mod(iw\\,2):ih-mod(ih\\,2)"] to the list of FFmpeg command-line arguments to reduce an odd width/height by 1 pixel and make them even.

ugoira.mtime

bool

true

Set modification times of generated ugoira animations.

ugoira.repeat-last-frame

bool

true

Allow repeating the last frame when necessary to prevent it from only being displayed for a very short amount of time.

zip.extension

string

"zip"

Filename extension for the created ZIP archive.

zip.files

list of Path

["info.json"]

List of extra files to be added to a ZIP archive.

Note: Relative paths are relative to the current download directory.

zip.keep-files

bool

false

Keep the actual files after writing them to a ZIP archive.

zip.mode

string

"default"


* "default": Write the central directory file header once after everything is done or an exception is raised.

* "safe": Update the central directory file header each time a file is stored in a ZIP archive.

This greatly reduces the chance a ZIP archive gets corrupted in case the Python interpreter gets shut down unexpectedly (power outage, SIGKILL) but is also a lot slower.
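A hypothetical zip post processor using the safer write mode and a custom extension could be sketched as:

```json
{
    "extractor": {
        "postprocessors": [
            {
                "name": "zip",
                "mode": "safe",
                "extension": "cbz",
                "keep-files": false
            }
        ]
    }
}
```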

MISCELLANEOUS OPTIONS

extractor.modules

list of strings

The modules list in extractor/__init__.py

["reddit", "danbooru", "mangadex"]

The list of modules to load when searching for a suitable extractor class. Useful to reduce startup time and memory usage.

cache.file

Path


* (%APPDATA% or "~") + "/gallery-dl/cache.sqlite3" on Windows
* ($XDG_CACHE_HOME or "~/.cache") + "/gallery-dl/cache.sqlite3" on all other platforms

Path of the SQLite3 database used to cache login sessions, cookies and API tokens across gallery-dl invocations.

Set this option to null or an invalid path to disable this cache.

format-separator

string

"/"

Character(s) used as argument separator in format string format specifiers.

For example, setting this option to "#" would allow a replacement operation to be written as "Rold#new#" instead of the default "Rold/new/".
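Since this is a top-level option, it sits directly in the configuration root rather than under extractor (a minimal sketch):

```json
{
    "format-separator": "#"
}
```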

signals-ignore

list of strings

["SIGTTOU", "SIGTTIN", "SIGTERM"]

The list of signal names to ignore, i.e. set SIG_IGN as signal handler for.

warnings

string

"default"

The Warnings Filter action used for (urllib3) warnings.

pyopenssl

bool

false

Use pyOpenSSL-backed SSL-support.

API TOKENS & IDS

extractor.deviantart.client-id & .client-secret

string


* login and visit DeviantArt's Applications & Keys section
* click "Register Application"
* scroll to "OAuth2 Redirect URI Whitelist (Required)" and enter "https://mikf.github.io/gallery-dl/oauth-redirect.html"
* scroll to the bottom and agree to the API License Agreement, Submission Policy, and Terms of Service.
* click "Save"
* copy client_id and client_secret of your new application and put them in your configuration file as "client-id" and "client-secret"
* clear your cache to delete any remaining access-token entries. (gallery-dl --clear-cache deviantart)
* get a new refresh-token for the new client-id (gallery-dl oauth:deviantart)

extractor.flickr.api-key & .api-secret

string


* login and Create an App in Flickr's App Garden
* click "APPLY FOR A NON-COMMERCIAL KEY"
* fill out the form with a random name and description and click "SUBMIT"
* copy Key and Secret and put them in your configuration file

extractor.reddit.client-id & .user-agent

string


* login and visit the apps section of your account's preferences
* click the "are you a developer? create an app..." button
* fill out the form, choose "installed app", preferably set "http://localhost:6414/" as "redirect uri" and finally click "create app"
* copy the client id (third line, under your application's name and "installed app") and put it in your configuration file
* use "Python:<application name>:v1.0 (by /u/<username>)" as user-agent and replace <application name> and <username> accordingly (see Reddit's API access rules)

extractor.smugmug.api-key & .api-secret

string


* login and Apply for an API Key
* use a random name and description, set "Type" to "Application", "Platform" to "All", and "Use" to "Non-Commercial"
* fill out the two checkboxes at the bottom and click "Apply"
* copy API Key and API Secret and put them in your configuration file

extractor.tumblr.api-key & .api-secret

string


* login and visit Tumblr's Applications section
* click "Register application"
* fill out the form: use a random name and description, set https://example.org/ as "Application Website" and "Default callback URL"
* solve Google's "I'm not a robot" challenge and click "Register"
* click "Show secret key" (below "OAuth Consumer Key")
* copy your OAuth Consumer Key and Secret Key and put them in your configuration file

CUSTOM TYPES

Date


* string
* integer


* "2019-01-01T00:00:00"
* "2019" with "%Y" as date-format
* 1546297200

A Date value represents a specific point in time.

* If given as string, it is parsed according to date-format.
* If given as integer, it is interpreted as UTC timestamp.

Duration


* float
* list with 2 floats
* string


* 2.85
* [1.5, 3.0]
* "2.85", "1.5-3.0"

A Duration represents a span of time in seconds.

* If given as a single float, it will be used as that exact value.
* If given as a list with 2 floating-point numbers a & b, a value N will be randomly chosen with uniform distribution such that a <= N <= b. (see random.uniform())
* If given as a string, it can either represent a single float value ("2.85") or a range ("1.5-3.0").

Path


* string
* list of strings


* "file.ext"
* "~/path/to/file.ext"
* "$HOME/path/to/file.ext"
* ["$HOME", "path", "to", "file.ext"]

A Path is a string representing the location of a file or directory.

Simple tilde expansion and environment variable expansion is supported.

In Windows environments, backslashes ("\") can, in addition to forward slashes ("/"), be used as path separators. Because backslashes are JSON's escape character, they themselves have to be escaped. The path C:\path\to\file.ext has therefore to be written as "C:\\path\\to\\file.ext" if you want to use backslashes.

Logging Configuration

object

{
    "format"     : "{asctime} {name}: {message}",
    "format-date": "%H:%M:%S",
    "path"       : "~/log.txt",
    "encoding"   : "ascii"
}

{
    "level" : "debug",
    "format": {
        "debug"  : "debug: {message}",
        "info"   : "[{name}] {message}",
        "warning": "Warning: {message}",
        "error"  : "ERROR: {message}"
    }
}

Extended logging output configuration.

* format
* General format string for logging messages or a dictionary with format strings for each loglevel.

In addition to the default LogRecord attributes, it is also possible to access the current extractor, job, path, and keywords objects and their attributes, for example "{extractor.url}", "{path.filename}", "{keywords.title}"
* Default: "[{name}][{levelname}] {message}"
* format-date
* Format string for {asctime} fields in logging messages (see strftime() directives)
* Default: "%Y-%m-%d %H:%M:%S"
* level
* Minimum logging message level (one of "debug", "info", "warning", "error", "exception")
* Default: "info"
* path
* Path to the output file
* mode
* Mode in which the file is opened; use "w" to truncate or "a" to append (see open())
* Default: "w"
* encoding
* File encoding
* Default: "utf-8"

Note: path, mode, and encoding are only applied when configuring logging output to a file.

Postprocessor Configuration

object

{ "name": "mtime" }

{
    "name"       : "zip",
    "compression": "store",
    "extension"  : "cbz",
    "filter"     : "extension not in ('zip', 'rar')",
    "whitelist"  : ["mangadex", "exhentai", "nhentai"]
}

An object containing a "name" attribute specifying the post-processor type, as well as any of its options.

It is possible to set a "filter" expression similar to image-filter to only run a post-processor conditionally.

It is also possible set a "whitelist" or "blacklist" to only enable or disable a post-processor for the specified extractor categories.

The available post-processor types are

* classify: Categorize files by filename extension
* compare: Compare versions of the same file and replace/enumerate them on mismatch (requires downloader.*.part = true and extractor.*.skip = false)
* exec: Execute external commands
* metadata: Write metadata to separate files
* mtime: Set file modification time according to its metadata
* ugoira: Convert Pixiv Ugoira to WebM using FFmpeg
* zip: Store files in a ZIP archive

BUGS

https://github.com/mikf/gallery-dl/issues

AUTHORS

Mike Fährmann <mike_faehrmann@web.de>
and https://github.com/mikf/gallery-dl/graphs/contributors

SEE ALSO

gallery-dl(1)

2022-09-18 1.23.1