
GALLERY-DL.CONF(5) gallery-dl Manual GALLERY-DL.CONF(5)

NAME

gallery-dl.conf - gallery-dl configuration file

DESCRIPTION

gallery-dl will search for configuration files in the following places every time it is started, unless --ignore-config is specified:

/etc/gallery-dl.conf
$HOME/.config/gallery-dl/config.json
$HOME/.gallery-dl.conf

It is also possible to specify additional configuration files with the -c/--config command-line option or to add further option values with -o/--option as <key>=<value> pairs.

Configuration files are JSON-based and therefore don't allow any ordinary comments, but, since unused keys are simply ignored, it is possible to utilize those as makeshift comments by setting their values to arbitrary strings.

EXAMPLE

{
    "base-directory": "/tmp/",
    "extractor": {
        "pixiv": {
            "directory": ["Pixiv", "Works", "{user[id]}"],
            "filename": "{id}{num}.{extension}",
            "username": "foo",
            "password": "bar"
        },
        "flickr": {
            "_comment": "OAuth keys for account 'foobar'",
            "access-token": "0123456789-0123456789abcdef",
            "access-token-secret": "fedcba9876543210"
        }
    },
    "downloader": {
        "retries": 3,
        "timeout": 2.5
    }
}

EXTRACTOR OPTIONS

extractor.*.filename


* string
* object (condition -> format string)

"{manga}_c{chapter}_{page:>03}.{extension}"

{
    "extension == 'mp4'": "{id}_video.{extension}",
    "'nature' in title" : "{id}_{title}.{extension}",
    ""                  : "{id}_default.{extension}"
}

A format string to build filenames for downloaded files with.

If this is an object, it must contain Python expressions mapping to the filename format strings to use. These expressions are evaluated in the order they are specified under Python 3.6+, and in an undetermined order under Python 3.4 and 3.5.
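The selection logic can be sketched as follows. This is an illustration of the documented behavior, not gallery-dl's actual implementation: each condition is evaluated as a Python expression against the file's metadata, the first true condition wins, and an empty-string key acts as a catch-all default.

```python
# Sketch (not gallery-dl's actual code): pick the first format string
# whose condition evaluates to True; "" matches unconditionally.
def select_format(conditions, metadata):
    for expr, fmt in conditions.items():
        if not expr or eval(expr, {}, dict(metadata)):
            return fmt
    return None

conditions = {
    "extension == 'mp4'": "{id}_video.{extension}",
    "'nature' in title": "{id}_{title}.{extension}",
    "": "{id}_default.{extension}",
}
print(select_format(conditions, {"id": 1, "title": "cats", "extension": "mp4"}))
```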

The available replacement keys depend on the extractor used. A list of keys for a specific one can be acquired by calling *gallery-dl* with the -K/--list-keywords command-line option. For example:

$ gallery-dl -K http://seiga.nicovideo.jp/seiga/im5977527

Keywords for directory names:

category
  seiga
subcategory
  image

Keywords for filenames:

category
  seiga
extension
  None
image-id
  5977527
subcategory
  image

Note: Even if the value of the extension key is missing or None, it will be filled in later when the file download is starting. This key is therefore always available to provide a valid filename extension.

extractor.*.directory


* list of strings
* object (condition -> format strings)

["{category}", "{manga}", "c{chapter} - {title}"]

{
    "'nature' in content": ["Nature Pictures"],
    "retweet_id != 0"    : ["{category}", "{user[name]}", "Retweets"],
    ""                   : ["{category}", "{user[name]}"]
}

A list of format strings to build target directory paths with.

If this is an object, it must contain Python expressions mapping to the list of format strings to use.

Each individual string in such a list represents a single path segment, which will be joined together and appended to the base-directory to form the complete target directory path.
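A sketch of how the segments combine with base-directory, using the directory names from the example above (the metadata values here are made up for illustration):

```python
# Each format string becomes one path segment; segments are joined and
# appended to base-directory (a sketch of the documented behavior).
import os

base = "/tmp/gallery-dl"
segments = ["{category}", "{user[name]}", "Retweets"]
meta = {"category": "twitter", "user": {"name": "foo"}}
path = os.path.join(base, *(s.format(**meta) for s in segments))
print(path)
```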

extractor.*.base-directory

Path

"./gallery-dl/"

Directory path used as base for all download destinations.

extractor.*.parent-directory

bool

false

Use an extractor's current target directory as base-directory for any spawned child extractors.

extractor.*.metadata-parent


* bool
* string

false

If true, overwrite any metadata provided by a child extractor with its parent's.

If this is a string, add a parent's metadata to its children's under a field named after said string. For example with "parent-metadata": "_p_":

{ "id": "child-id", "_p_": {"id": "parent-id"} }

extractor.*.parent-skip

bool

false

Share number of skipped downloads between parent and child extractors.

extractor.*.path-restrict


* string
* object (character -> replacement character(s))

"auto"


* "/!? (){}"
* {" ": "_", "/": "-", "|": "-", ":": "_-_", "*": "_+_"}

A string of characters to be replaced with the value of path-replace, or an object mapping invalid/unwanted characters to their replacements, for generated path segment names.

Special values:

* "auto": Use characters from "unix" or "windows" depending on the local operating system
* "unix": "/"
* "windows": "\\\\|/<>:\"?*"
* "ascii": "^0-9A-Za-z_." (only ASCII digits, letters, underscores, and dots)
* "ascii+": "^0-9@-[\\]-{ #-)+-.;=!}~" (all ASCII characters except the ones not allowed by Windows)

Implementation Detail: For strings with length >= 2, this option uses a Regular Expression Character Set, meaning that:

* using a caret ^ as first character inverts the set
* character ranges are supported (0-9a-z)
* ], -, and \ need to be escaped as \\], \\-, and \\\\ respectively to use them as literal characters
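The character-set behavior can be demonstrated with Python's re module, which is essentially what a regex character class does with such a string:

```python
# A restrict string of length >= 2 is used as the body of a regex
# character class; a leading ^ inverts the set (as in "ascii").
import re

windows = r'\\|/<>:"?*'          # the "windows" set, backslash escaped
print(re.sub("[" + windows + "]", "_", 'a<b>:c?.txt'))

ascii_only = "^0-9A-Za-z_."      # the "ascii" set: replace everything else
print(re.sub("[" + ascii_only + "]", "_", "héllo wörld.jpg"))
```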

extractor.*.path-replace

string

"_"

The replacement character(s) for path-restrict.

extractor.*.path-remove

string

"\u0000-\u001f\u007f" (ASCII control characters)

Set of characters to remove from generated path names.

Note: In a string with 2 or more characters, []^-\ need to be escaped with backslashes, e.g. "\\[\\]"

extractor.*.path-strip

string

"auto"

Set of characters to remove from the end of generated path segment names using str.rstrip()

Special values:

* "auto": Use characters from "unix" or "windows" depending on the local operating system
* "unix": ""
* "windows": ". "

extractor.*.path-extended

bool

true

On Windows, use extended-length paths prefixed with \\?\ to work around the 260 characters path length limit.

extractor.*.extension-map

object (extension -> replacement)

{ "jpeg": "jpg", "jpe" : "jpg", "jfif": "jpg", "jif" : "jpg", "jfi" : "jpg" }

A JSON object mapping filename extensions to their replacements.

extractor.*.skip


* bool
* string

true

Controls the behavior when downloading files that have been downloaded before, i.e. a file with the same filename already exists or its ID is in a download archive.

* true: Skip downloads
* false: Overwrite already existing files

* "abort": Stop the current extractor run
* "abort:N": Skip downloads and stop the current extractor run after N consecutive skips

* "terminate": Stop the current extractor run, including parent extractors
* "terminate:N": Skip downloads and stop the current extractor run, including parent extractors, after N consecutive skips

* "exit": Exit the program altogether
* "exit:N": Skip downloads and exit the program after N consecutive skips

* "enumerate": Add an enumeration index to the beginning of the filename extension (file.1.ext, file.2.ext, etc.)
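The ":N" variants above can be sketched as a small decision function. This is an illustration of the documented semantics, not gallery-dl's actual code, and the "enumerate" value is omitted:

```python
# Decide what to do with an already-downloaded file, given the skip
# policy and the number of consecutive skips so far (a sketch).
def handle_skip(policy, consecutive_skips):
    if policy is True:
        return "skip"
    if policy is False:
        return "overwrite"
    action, _, n = policy.partition(":")
    if n and consecutive_skips < int(n):
        return "skip"          # threshold not reached yet
    return action              # "abort", "terminate", or "exit"

print(handle_skip("abort:3", 2), handle_skip("abort:3", 3))
```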

extractor.*.sleep

Duration

0

Number of seconds to sleep before each download.

extractor.*.sleep-extractor

Duration

0

Number of seconds to sleep before handling an input URL, i.e. before starting a new extractor.

extractor.*.sleep-request

Duration

0

Minimal time interval in seconds between each HTTP request during data extraction.

extractor.*.username & .password

string

null

The username and password to use when attempting to log in to another site.

Specifying username and password is required for

* nijie

and optional for

* aibooru (*)
* aryion
* atfbooru (*)
* bluesky
* danbooru (*)
* e621 (*)
* e926 (*)
* exhentai
* idolcomplex
* imgbb
* inkbunny
* kemonoparty
* mangadex
* mangoxo
* pillowfort
* sankaku
* seisoparty
* subscribestar
* tapas
* tsumino
* twitter
* vipergirls
* zerochan

These values can also be specified via the -u/--username and -p/--password command-line options or by using a .netrc file (see Authentication).

(*) The password value for these sites should be the API key found in your user profile, not the actual account password.

Note: Leave the password value empty or undefined to get prompted for a password when performing a login (see getpass()).

extractor.*.netrc

bool

false

Enable the use of .netrc authentication data.

extractor.*.cookies


* Path
* object (name -> value)
* list

Source to read additional cookies from. This can be

* The Path to a Mozilla/Netscape format cookies.txt file

"~/.local/share/cookies-instagram-com.txt"

* An object specifying cookies as name-value pairs

{
    "cookie-name": "cookie-value",
    "sessionid"  : "14313336321%3AsabDFvuASDnlpb%3A31",
    "isAdult"    : "1"
}

* A list with up to 5 entries specifying a browser profile.

* The first entry is the browser name
* The optional second entry is a profile name or an absolute path to a profile directory
* The optional third entry is the keyring to retrieve passwords for decrypting cookies from
* The optional fourth entry is a (Firefox) container name ("none" for only cookies with no container)
* The optional fifth entry is the domain to extract cookies for. Prefix it with a dot . to include cookies for subdomains. Has no effect when also specifying a container.

["firefox"]
["firefox", null, null, "Personal"]
["chromium", "Private", "kwallet", null, ".twitter.com"]
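The Mozilla/Netscape cookies.txt format accepted as a Path can be read with Python's standard library; the file written here is a made-up minimal example:

```python
# Write and read back a minimal Netscape-format cookies.txt file,
# the same file format gallery-dl accepts for the cookies option.
import os
import tempfile
from http.cookiejar import MozillaCookieJar

path = os.path.join(tempfile.gettempdir(), "cookies-example.txt")
with open(path, "w") as f:
    f.write("# Netscape HTTP Cookie File\n")
    # fields: domain, include-subdomains, path, secure, expiry, name, value
    f.write(".example.com\tTRUE\t/\tFALSE\t2147483647\tsessionid\tabc123\n")

jar = MozillaCookieJar()
jar.load(path)
print([cookie.name for cookie in jar])
```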

extractor.*.cookies-update


* bool
* Path

true

Export session cookies in cookies.txt format.

* If this is a Path, write cookies to the given file path.

* If this is true and extractor.*.cookies specifies the Path of a valid cookies.txt file, update its contents.

extractor.*.proxy


* string
* object (scheme -> proxy)

"http://10.10.1.10:3128"

{
    "http" : "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:1080",
    "http://10.20.1.128": "http://10.10.1.10:5323"
}

Proxy (or proxies) to be used for remote connections.

* If this is a string, it is the proxy URL for all outgoing requests.
* If this is an object, it is a scheme-to-proxy mapping to specify different proxy URLs for each scheme. It is also possible to set a proxy for a specific host by using scheme://host as key. See Requests' proxy documentation for more details.

Note: If a proxy URL does not include a scheme, http:// is assumed.

extractor.*.source-address


* string
* list with 1 string and 1 integer as elements


* "192.168.178.20"
* ["192.168.178.20", 8080]

Client-side IP address to bind to.

Can be either a simple string with just the local IP address or a list with IP and explicit port number as elements.

extractor.*.user-agent

string

"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0"

User-Agent header value to be used for HTTP requests.

Setting this value to "browser" will try to automatically detect and use the User-Agent used by the system's default browser.

Note: This option has no effect on pixiv, e621, and mangadex extractors, as these need specific values to function correctly.

extractor.*.browser

string


* "firefox" for patreon, mangapark, and mangasee
* null everywhere else


* "chrome:macos"

Try to emulate a real browser (firefox or chrome) by using their default HTTP headers and TLS ciphers for HTTP requests.

Optionally, the operating system used in the User-Agent header can be specified after a : (windows, linux, or macos).

Note: requests and urllib3 only support HTTP/1.1, while a real browser would use HTTP/2.

extractor.*.referer


* bool
* string

true

Send Referer headers with all outgoing HTTP requests.

If this is a string, send it as Referer instead of the extractor's root domain.

extractor.*.headers

object (name -> value)

{
    "User-Agent"     : "<extractor.*.user-agent>",
    "Accept"         : "*/*",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate",
    "Referer"        : "<extractor.*.referer>"
}

Additional HTTP headers to be sent with each HTTP request.

To disable sending a header, set its value to null.
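The merge-and-disable behavior can be sketched as a dictionary merge where a None (JSON null) value removes the header; the header names below are illustrative:

```python
# Overrides from extractor.*.headers are merged over the defaults;
# a null/None value drops the header entirely (a sketch).
defaults = {
    "User-Agent": "Mozilla/5.0 ...",
    "Accept": "*/*",
    "Referer": "https://example.org/",
}
overrides = {"Referer": None, "X-Custom": "1"}

headers = {k: v for k, v in {**defaults, **overrides}.items() if v is not None}
print(sorted(headers))
```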

extractor.*.ciphers

list of strings

["ECDHE-ECDSA-AES128-GCM-SHA256", "ECDHE-RSA-AES128-GCM-SHA256", "ECDHE-ECDSA-CHACHA20-POLY1305", "ECDHE-RSA-CHACHA20-POLY1305"]

List of TLS/SSL cipher suites in OpenSSL cipher list format to be passed to ssl.SSLContext.set_ciphers()

extractor.*.tls12

bool


* true
* false for patreon, pixiv:series

Allow selecting TLS 1.2 cipher suites.

Can be disabled to alter TLS fingerprints and potentially bypass Cloudflare blocks.

extractor.*.keywords

object (name -> value)

{"type": "Pixel Art", "type_id": 123}

Additional name-value pairs to be added to each metadata dictionary.

extractor.*.keywords-default

any

"None"

Default value used for missing or undefined keyword names in format strings.

extractor.*.url-metadata

string

Insert a file's download URL into its metadata dictionary as the given name.

For example, setting this option to "gdl_file_url" will cause a new metadata field with name gdl_file_url to appear, which contains the current file's download URL. This can then be used in filenames, with a metadata post processor, etc.

extractor.*.path-metadata

string

Insert a reference to the current PathFormat data structure into metadata dictionaries as the given name.

For example, setting this option to "gdl_path" would make it possible to access the current file's filename as "{gdl_path.filename}".

extractor.*.extractor-metadata

string

Insert a reference to the current Extractor object into metadata dictionaries as the given name.

extractor.*.http-metadata

string

Insert an object containing a file's HTTP headers and filename, extension, and date parsed from them into metadata dictionaries as the given name.

For example, setting this option to "gdl_http" would make it possible to access the current file's Last-Modified header as "{gdl_http[Last-Modified]}" and its parsed form as "{gdl_http[date]}".

extractor.*.version-metadata

string

Insert an object containing gallery-dl's version info into metadata dictionaries as the given name.

The content of the object is as follows:

{
    "version"         : "string",
    "is_executable"   : "bool",
    "current_git_head": "string or null"
}

extractor.*.category-transfer

bool

Extractor-specific

Transfer an extractor's (sub)category values to all child extractors spawned by it, to let them inherit their parent's config options.

extractor.*.blacklist & .whitelist

list of strings

["oauth", "recursive", "test"] + current extractor category

["imgur", "redgifs:user", "*:image"]

A list of extractor identifiers to ignore (or allow) when spawning child extractors for unknown URLs, e.g. from reddit or plurk.

Each identifier can be

* A category or basecategory name ("imgur", "mastodon")
* A (base)category-subcategory pair, where both names are separated by a colon ("redgifs:user"). Both names can be a * or left empty, matching all possible names ("*:image", ":user").

Note: Any blacklist setting will automatically include "oauth", "recursive", and "test".

extractor.*.archive

Path

null

"$HOME/.archives/{category}.sqlite3"

File to store IDs of downloaded files in. Downloads of files already recorded in this archive file will be skipped.

The resulting archive file is not a plain text file but an SQLite3 database, as lookup operations are significantly faster and memory requirements significantly lower once the number of stored IDs grows large.

Note: Archive files that do not already exist get generated automatically.

Note: Archive paths support regular format string replacements, but be aware that using external inputs for building local paths may pose a security risk.
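The archive idea can be sketched with Python's sqlite3 module. The table name and schema here are illustrative and not necessarily identical to gallery-dl's own:

```python
# A single-column SQLite table: membership of an archive ID means
# "already downloaded" (a sketch of the archive mechanism).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IF NOT EXISTS archive (entry TEXT PRIMARY KEY)")

def check_and_record(archive_id):
    """Return True if the ID was already present, else record it."""
    seen = con.execute(
        "SELECT 1 FROM archive WHERE entry = ?", (archive_id,)).fetchone()
    if seen is None:
        con.execute("INSERT INTO archive (entry) VALUES (?)", (archive_id,))
    return seen is not None

print(check_and_record("pixiv12345_0"))   # first encounter: recorded
print(check_and_record("pixiv12345_0"))   # second encounter: would be skipped
```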

extractor.*.archive-format

string

"{id}_{offset}"

An alternative format string to build archive IDs with.

extractor.*.archive-prefix

string

"{category}"

Prefix for archive IDs.

extractor.*.archive-pragma

list of strings

["journal_mode=WAL", "synchronous=NORMAL"]

A list of SQLite PRAGMA statements to run during archive initialization.

See <https://www.sqlite.org/pragma.html> for available PRAGMA statements and further details.

extractor.*.postprocessors

list of Postprocessor Configuration objects

[
    {"name": "zip", "compression": "store"},
    {"name": "exec", "command": ["/home/foobar/script", "{category}", "{image_id}"]}
]

A list of post processors to be applied to each downloaded file in the specified order.

Unlike other options, a postprocessors setting at a deeper level does not override any postprocessors setting at a lower level. Instead, all post processors from all applicable postprocessors settings get combined into a single list.

For example

* an mtime post processor at extractor.postprocessors,
* a zip post processor at extractor.pixiv.postprocessors,
* and using --exec

will run all three post processors - mtime, zip, exec - for each downloaded pixiv file.

extractor.*.postprocessor-options

object (name -> value)

{ "archive": null, "keep-files": true }

Additional Postprocessor Options that get added to each individual post processor object before initializing it and evaluating filters.

extractor.*.retries

integer

4

Maximum number of times a failed HTTP request is retried before giving up, or -1 for infinite retries.

extractor.*.retry-codes

list of integers

[404, 429, 430]

Additional HTTP response status codes to retry an HTTP request on.

2xx codes (success responses) and 3xx codes (redirection messages) will never be retried and always count as success, regardless of this option.

5xx codes (server error responses) will always be retried, regardless of this option.
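The resulting retry decision can be sketched as a small predicate (an illustration of the documented rules, not gallery-dl's exact logic):

```python
# Decide whether a response status code should trigger a retry.
def should_retry(status, retry_codes):
    if 200 <= status < 400:       # 2xx/3xx always count as success
        return False
    if status >= 500:             # 5xx is always retried
        return True
    return status in retry_codes  # 4xx only if explicitly listed

print(should_retry(503, []), should_retry(429, [429]), should_retry(404, []))
```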

extractor.*.timeout

float

30.0

Amount of time (in seconds) to wait for a successful connection and response from a remote server.

This value gets internally used as the timeout parameter for the requests.request() method.

extractor.*.verify


* bool
* string

true

Controls whether to verify SSL/TLS certificates for HTTPS requests.

If this is a string, it must be the path to a CA bundle to use instead of the default certificates.

This value gets internally used as the verify parameter for the requests.request() method.

extractor.*.download

bool

true

Controls whether to download media files.

Setting this to false won't download any files, but all other functions (postprocessors, download archive, etc.) will be executed as normal.

extractor.*.fallback

bool

true

Use fallback download URLs when a download fails.

extractor.*.image-range


* string
* list of strings


* "10-20"
* "-5, 10, 30-50, 100-"
* "10:21, 30:51:2, :5, 100:"
* ["-5", "10", "30-50", "100-"]

Index range(s) selecting which files to download.

These can be specified as

* index: 3 (file number 3)
* range: 2-4 (files 2, 3, and 4)
* slice: 3:8:2 (files 3, 5, and 7)

Arguments for range and slice notation are optional and will default to begin (1) or end (sys.maxsize) if omitted. For example 5-, 5:, and 5:: all mean "Start at file number 5".

Note: The index of the first file is 1.
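The notation above can be sketched as a small expander producing 1-based file indices; this follows the documented rules but is not gallery-dl's actual parser:

```python
# Expand index/range/slice notation into sorted 1-based indices,
# capped at the total number of files ("last").
def expand(spec, last):
    picked = set()
    for part in spec.split(","):
        part = part.strip()
        if ":" in part:                       # slice: stop is exclusive
            fields = (part.split(":") + ["", ""])[:3]
            start = int(fields[0]) if fields[0] else 1
            stop = int(fields[1]) if fields[1] else last + 1
            step = int(fields[2]) if fields[2] else 1
            picked.update(range(start, stop, step))
        elif "-" in part:                     # range: both ends inclusive
            a, _, b = part.partition("-")
            picked.update(range(int(a) if a else 1,
                                (int(b) if b else last) + 1))
        else:                                 # single index
            picked.add(int(part))
    return sorted(i for i in picked if 1 <= i <= last)

print(expand("-5, 10, 30-50, 100-", 40))
```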

extractor.*.chapter-range

string

Like image-range, but applies to delegated URLs like manga chapters, etc.

extractor.*.image-filter


* string
* list of strings


* "re.search(r'foo(bar)+', description)"
* ["width >= 1200", "width/height > 1.2"]

Python expression controlling which files to download.

A file only gets downloaded when *all* of the given expressions evaluate to True.

Available values are the filename-specific ones listed by -K or -j.
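The all-must-pass behavior can be sketched as follows; the re module is shown because the example above uses re.search in a filter expression, and the metadata values here are made up:

```python
# Every filter expression must evaluate to a truthy value against the
# file's metadata for the file to be downloaded (a sketch).
import re

def passes(filters, metadata):
    namespace = {"re": re, **metadata}
    return all(eval(expr, {}, namespace) for expr in filters)

meta = {"width": 1920, "height": 1080, "description": "foobarbar"}
print(passes(["width >= 1200", "width/height > 1.2"], meta))
print(passes([r"re.search(r'foo(bar)+', description)"], meta))
```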

extractor.*.chapter-filter


* string
* list of strings


* "lang == 'en'"
* ["language == 'French'", "10 <= chapter < 20"]

Like image-filter, but applies to delegated URLs like manga chapters, etc.

extractor.*.image-unique

bool

false

Ignore image URLs that have been encountered before during the current extractor run.

extractor.*.chapter-unique

bool

false

Like image-unique, but applies to delegated URLs like manga chapters, etc.

extractor.*.date-format

string

"%Y-%m-%dT%H:%M:%S"

Format string used to parse string values of date-min and date-max.

See strptime for a list of formatting directives.

Note: Despite its name, this option does **not** control how {date} metadata fields are formatted. To use a different formatting for those values other than the default %Y-%m-%d %H:%M:%S, put strptime formatting directives after a colon :, for example {date:%Y%m%d}.
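Both uses of strptime directives can be shown with the datetime module (the date string here is an arbitrary example):

```python
# Parsing a date-min/date-max string with the default date-format, and
# formatting a datetime metadata value as {date:%Y%m%d} would.
from datetime import datetime

dt = datetime.strptime("2023-05-01T12:30:00", "%Y-%m-%dT%H:%M:%S")
print(format(dt, "%Y%m%d"))             # what {date:%Y%m%d} produces
print(format(dt, "%Y-%m-%d %H:%M:%S"))  # the default {date} formatting
```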

extractor.*.write-pages


* bool
* string

false

During data extraction, write received HTTP request data to enumerated files in the current working directory.

Special values:

* "all": Include HTTP request and response headers. Hide Authorization, Cookie, and Set-Cookie values.
* "ALL": Include all HTTP request and response headers.

EXTRACTOR-SPECIFIC OPTIONS

extractor.artstation.external

bool

false

Try to follow external URLs of embedded players.

extractor.artstation.max-posts

integer

null

Limit the number of posts/projects to download.

extractor.artstation.previews

bool

false

Download video previews.

extractor.artstation.videos

bool

true

Download video clips.

extractor.artstation.search.pro-first

bool

true

Enable the "Show Studio and Pro member artwork first" checkbox when retrieving search results.

extractor.aryion.recursive

bool

true

Controls the post extraction strategy.

* true: Start on users' main gallery pages and recursively descend into subfolders
* false: Get posts from "Latest Updates" pages

extractor.bbc.width

integer

1920

Specifies the requested image width.

This value must be divisible by 16 and gets rounded down otherwise. The maximum possible value appears to be 1920.

extractor.behance.modules

list of strings

["image", "video", "mediacollection", "embed"]

Selects which gallery modules to download from.

Supported module types are image, video, mediacollection, embed, text.

extractor.blogger.videos

bool

true

Download embedded videos hosted on https://www.blogger.com/

extractor.bluesky.include


* string
* list of strings

"media"


* "avatar,background,posts"
* ["avatar", "background", "posts"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "avatar", "background", "posts", "replies", "media", "likes".

It is possible to use "all" instead of listing all values separately.

extractor.bluesky.metadata


* bool
* string
* list of strings

false


* "facets,user"
* ["facets", "user"]

Extract additional metadata.

* facets: hashtags, mentions, and uris
* user: detailed user metadata for the user referenced in the input URL (See app.bsky.actor.getProfile).

extractor.bluesky.post.depth

integer

0

Sets the maximum depth of returned reply posts.

(See depth parameter of app.bsky.feed.getPostThread)

extractor.bluesky.reposts

bool

false

Process reposts.

extractor.cyberdrop.domain

string

null

"cyberdrop.to"

Specifies the domain used by cyberdrop regardless of input URL.

Setting this option to "auto" uses the same domain as a given input URL.

extractor.danbooru.external

bool

false

For unavailable or restricted posts, follow the source and download from there if possible.

extractor.danbooru.ugoira

bool

false

Controls the download target for Ugoira posts.

* true: Original ZIP archives
* false: Converted video files

extractor.[Danbooru].metadata


* bool
* string
* list of strings

false


* "replacements,comments,ai_tags"
* ["replacements", "comments", "ai_tags"]

Extract additional metadata (notes, artist commentary, parent, children, uploader)

It is possible to specify a custom list of metadata includes. See available_includes for possible field names. aibooru also supports ai_metadata.

Note: This requires 1 additional HTTP request per 200-post batch.

extractor.[Danbooru].threshold


* string
* integer

"auto"

Stop paginating over API results if the length of a batch of returned posts is less than the specified number. Defaults to the per-page limit of the current instance, which is 200.

Note: Changing this setting is normally not necessary. When the value is greater than the per-page limit, gallery-dl will stop after the first batch. The value cannot be less than 1.

extractor.derpibooru.api-key

string

null

Your Derpibooru API Key, to use your account's browsing settings and filters.

extractor.derpibooru.filter

integer

56027 (Everything filter)

The content filter ID to use.

Setting an explicit filter ID overrides any default filters and can be used to access 18+ content without API Key.

See Filters for details.

extractor.deviantart.auto-watch

bool

false

Automatically watch users when encountering "Watchers-Only Deviations" (requires a refresh-token).

extractor.deviantart.auto-unwatch

bool

false

After watching a user through auto-watch, unwatch that user at the end of the current extractor run.

extractor.deviantart.comments

bool

false

Extract comments metadata.

extractor.deviantart.comments-avatars

bool

false

Download the avatar of each commenting user.

Note: Enabling this option also enables deviantart.comments.

extractor.deviantart.extra

bool

false

Download extra Sta.sh resources from description texts and journals.

Note: Enabling this option also enables deviantart.metadata.

extractor.deviantart.flat

bool

true

Select the directory structure created by the Gallery- and Favorite-Extractors.

* true: Use a flat directory structure.
* false: Collect a list of all gallery-folders or favorites-collections and transfer any further work to other extractors (folder or collection), which will then create individual subdirectories for each of them.

Note: Going through all gallery folders will not fetch deviations which aren't in any folder.

extractor.deviantart.folders

bool

false

Provide a folders metadata field that contains the names of all folders a deviation is present in.

Note: Gathering this information requires a lot of API calls. Use with caution.

extractor.deviantart.group


* bool
* string

true

Check whether the profile name in a given URL belongs to a group or a regular user.

When disabled, assume every given profile name belongs to a regular user.

Special values:

* "skip": Skip groups

extractor.deviantart.include


* string
* list of strings

"gallery"


* "favorite,journal,scraps"
* ["favorite", "journal", "scraps"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "avatar", "background", "gallery", "scraps", "journal", "favorite", "status".

It is possible to use "all" instead of listing all values separately.

extractor.deviantart.intermediary

bool

true

For older non-downloadable images, download a higher-quality /intermediary/ version.

extractor.deviantart.journals

string

"html"

Selects the output format for textual content. This includes journals, literature and status updates.

* "html": HTML with (roughly) the same layout as on DeviantArt.
* "text": Plain text with image references and HTML tags removed.
* "none": Don't download textual content.

extractor.deviantart.jwt

bool

false

Update JSON Web Tokens (the token URL parameter) of otherwise non-downloadable, low-resolution images to be able to download them in full resolution.

Note: No longer functional as of 2023-10-11

extractor.deviantart.mature

bool

true

Enable mature content.

This option simply sets the mature_content parameter for API calls to either "true" or "false" and does not do any other form of content filtering.

extractor.deviantart.metadata


* bool
* string
* list of strings

false


* "stats,submission"
* ["camera", "stats", "submission"]

Extract additional metadata for deviation objects.

Provides description, tags, license, and is_watching fields when enabled.

It is possible to request extended metadata by specifying a list of

* camera : EXIF information (if available)
* stats : deviation statistics
* submission : submission information
* collection : favourited folder information (requires a refresh token)
* gallery : gallery folder information (requires a refresh token)

Set this option to "all" to request all extended metadata categories.

See /deviation/metadata for official documentation.

extractor.deviantart.original


* bool
* string

true

Download original files if available.

Setting this option to "images" only downloads original files if they are images and falls back to preview versions for everything else (archives, etc.).

extractor.deviantart.pagination

string

"api"

Controls when to stop paginating over API results.

* "api": Trust the API and stop when has_more is false.
* "manual": Disregard has_more and only stop when a batch of results is empty.

extractor.deviantart.public

bool

true

Use a public access token for API requests.

Disable this option to *force* using a private token for all requests when a refresh token is provided.

extractor.deviantart.quality


* integer
* string

100

JPEG quality level of images for which an original file download is not available.

Set this to "png" to download a PNG version of these images instead.

extractor.deviantart.refresh-token

string

null

The refresh-token value you get from linking your DeviantArt account to gallery-dl.

Using a refresh-token allows you to access private or otherwise not publicly available deviations.

Note: The refresh-token becomes invalid after 3 months or whenever your cache file is deleted or cleared.

extractor.deviantart.wait-min

integer

0

Minimum wait time in seconds before API requests.

extractor.deviantart.avatar.formats

list of strings

["original.jpg", "big.jpg", "big.gif", ".png"]

Avatar URL formats to return.

Each format is parsed as SIZE.EXT.
Leave SIZE empty to download the regular, small avatar format.

extractor.[E621].metadata


* bool
* string
* list of strings

false


* "notes,pools"
* ["notes", "pools"]

Extract additional metadata (notes, pool metadata) if available.

Note: This requires 0-2 additional HTTP requests per post.

extractor.[E621].threshold


* string
* integer

"auto"

Stop paginating over API results if the length of a batch of returned posts is less than the specified number. Defaults to the per-page limit of the current instance, which is 320.

Note: Changing this setting is normally not necessary. When the value is greater than the per-page limit, gallery-dl will stop after the first batch. The value cannot be less than 1.

extractor.exhentai.domain

string

"auto"


* "auto": Use e-hentai.org or exhentai.org depending on the input URL
* "e-hentai.org": Use e-hentai.org for all URLs
* "exhentai.org": Use exhentai.org for all URLs

extractor.exhentai.fallback-retries

integer

2

Number of times a failed image gets retried or -1 for infinite retries.

extractor.exhentai.fav

string

"4"

After downloading a gallery, add it to your account's favorites as the given category number.

Note: Set this to "favdel" to remove galleries from your favorites.

Note: This will remove any Favorite Notes when applied to already favorited galleries.

extractor.exhentai.gp

string

"resized"

Selects how to handle "you do not have enough GP" errors.

* "resized": Continue downloading non-original images.
* "stop": Stop the current extractor run.
* "wait": Wait for user input before retrying the current image.

extractor.exhentai.limits

integer

null

Sets a custom image download limit and stops extraction when it gets exceeded.

extractor.exhentai.metadata

bool

false

Load extended gallery metadata from the API.

Adds archiver_key, posted, and torrents. Makes date and filesize more precise.

extractor.exhentai.original

bool

true

Download full-sized original images if available.

extractor.exhentai.source

string

"gallery"

Selects an alternative source to download files from.

* "hitomi": Download the corresponding gallery from hitomi.la

extractor.fanbox.embeds


* bool
* string

true

Control behavior on embedded content from external sites.

* true: Extract embed URLs and download them if supported (videos are not downloaded).
* "ytdl": Like true, but let youtube-dl handle video extraction and download for YouTube, Vimeo and SoundCloud embeds.
* false: Ignore embeds.

extractor.fanbox.metadata


* bool
* string
* list of strings

false


* "user,plan"
* ["user", "plan"]

Extract plan and extended user metadata.

extractor.flickr.access-token & .access-token-secret

string

null

The access_token and access_token_secret values you get from linking your Flickr account to gallery-dl.

extractor.flickr.contexts

bool

false

For each photo, return the albums and pools it belongs to as set and pool metadata.

Note: This requires 1 additional API call per photo. See flickr.photos.getAllContexts for details.

extractor.flickr.exif

bool

false

For each photo, return its EXIF/TIFF/GPS tags as exif and camera metadata.

Note: This requires 1 additional API call per photo. See flickr.photos.getExif for details.

extractor.flickr.metadata


* bool
* string
* list of strings

false


* license,last_update,machine_tags
* ["license", "last_update", "machine_tags"]

Extract additional metadata (license, date_taken, original_format, last_update, geo, machine_tags, o_dims)

It is possible to specify a custom list of metadata includes. See the extras parameter in Flickr's API docs for possible field names.
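For instance, a custom list of metadata includes (using field names from the examples above) might be configured as:

```json
{
    "extractor": {
        "flickr": {
            "metadata": ["license", "geo", "machine_tags"]
        }
    }
}
```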

extractor.flickr.videos

bool

true

Extract and download videos.

extractor.flickr.size-max


* integer
* string

null

Sets the maximum allowed size for downloaded images.

* If this is an integer, it specifies the maximum image dimension (width and height) in pixels.
* If this is a string, it should be one of Flickr's format specifiers ("Original", "Large", ... or "o", "k", "h", "l", ...) to use as an upper limit.

extractor.furaffinity.descriptions

string

"text"

Controls the format of description metadata fields.

* "text": Plain text with HTML tags removed
* "html": Raw HTML content

extractor.furaffinity.external

bool

false

Follow external URLs linked in descriptions.

extractor.furaffinity.include


* string
* list of strings

"gallery"


* "scraps,favorite"
* ["scraps", "favorite"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "gallery", "scraps", "favorite".

It is possible to use "all" instead of listing all values separately.

extractor.furaffinity.layout

string

"auto"

Selects which site layout to expect when parsing posts.

* "auto": Automatically differentiate between "old" and "new"
* "old": Expect the *old* site layout
* "new": Expect the *new* site layout

extractor.gelbooru.api-key & .user-id

string

null

Values from the API Access Credentials section found at the bottom of your Account Options page.

extractor.gelbooru.favorite.order-posts

string

"desc"

Controls the order in which favorited posts are returned.

* "asc": Ascending favorite date order (oldest first)
* "desc": Descending favorite date order (newest first)
* "reverse": Same as "asc"

extractor.generic.enabled

bool

false

Match **all** URLs not otherwise supported by gallery-dl, even ones without a generic: prefix.

extractor.gofile.api-token

string

null

API token value found at the bottom of your profile page.

If not set, a temporary guest token will be used.

extractor.gofile.website-token

string

API token value used during API requests.

An invalid or outdated value will result in 401 Unauthorized errors.

If this option is unset, gallery-dl will use an extra HTTP request to attempt to fetch the current value used by gofile.

extractor.gofile.recursive

bool

false

Recursively download files from subfolders.

extractor.hentaifoundry.include


* string
* list of strings

"pictures"


* "scraps,stories"
* ["scraps", "stories"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "pictures", "scraps", "stories", "favorite".

It is possible to use "all" instead of listing all values separately.

extractor.hitomi.format

string

"webp"

Selects which image format to download.

Available formats are "webp" and "avif".

"original" will try to download the original jpg or png versions, but is most likely going to fail with 403 Forbidden errors.

extractor.imagechest.access-token

string

Your personal Image Chest access token.

These tokens allow using the API instead of having to scrape HTML pages, providing more detailed metadata. (date, description, etc)

See https://imgchest.com/docs/api/1.0/general/authorization for instructions on how to generate such a token.

extractor.imgur.client-id

string

Custom Client ID value for API requests.

extractor.imgur.mp4


* bool
* string

true

Controls whether to choose the GIF or MP4 version of an animation.

* true: Follow Imgur's advice and choose MP4 if the prefer_video flag in an image's metadata is set.
* false: Always choose GIF.
* "always": Always choose MP4.

extractor.inkbunny.orderby

string

"create_datetime"

Value of the orderby parameter for submission searches.

(See API#Search for details)

extractor.instagram.api

string

"rest"

Selects which API endpoints to use.

* "rest": REST API - higher-resolution media
* "graphql": GraphQL API - lower-resolution media

extractor.instagram.include


* string
* list of strings

"posts"


* "stories,highlights,posts"
* ["stories", "highlights", "posts"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "posts", "reels", "tagged", "stories", "highlights", "avatar".

It is possible to use "all" instead of listing all values separately.

extractor.instagram.metadata

bool

false

Provide extended user metadata even when referring to a user by ID, e.g. instagram.com/id:12345678.

Note: This metadata is always available when referring to a user by name, e.g. instagram.com/USERNAME.

extractor.instagram.order-files

string

"asc"

Controls the order in which files of each post are returned.

* "asc": Same order as displayed in a post
* "desc": Reverse order as displayed in a post
* "reverse": Same as "desc"

Note: This option does *not* affect {num}. To enumerate files in reverse order, use count - num + 1.
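A sketch of the count - num + 1 technique mentioned in the note, used inside a filename format string (the surrounding filename layout is illustrative, assuming the format string accepts this expression as described):

```json
{
    "extractor": {
        "instagram": {
            "order-files": "desc",
            "filename": "{count - num + 1}_{id}.{extension}"
        }
    }
}
```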

extractor.instagram.order-posts

string

"asc"

Controls the order in which posts are returned.

* "asc": Same order as displayed
* "desc": Reverse order as displayed
* "id" or "id_asc": Ascending order by ID
* "id_desc": Descending order by ID
* "reverse": Same as "desc"

Note: This option only affects highlights.

extractor.instagram.previews

bool

false

Download video previews.

extractor.instagram.videos

bool

true

Download video files.

extractor.itaku.videos

bool

true

Download video files.

extractor.kemonoparty.comments

bool

false

Extract comments metadata.

Note: This requires 1 additional HTTP request per post.

extractor.kemonoparty.duplicates

bool

false

Controls how to handle duplicate files in a post.

* true: Download duplicates
* false: Ignore duplicates

extractor.kemonoparty.dms

bool

false

Extract a user's direct messages as dms metadata.

extractor.kemonoparty.favorites

string

artist

Determines the type of favorites to be downloaded.

Available types are artist and post.

extractor.kemonoparty.files

list of strings

["attachments", "file", "inline"]

Determines the type and order of files to be downloaded.

Available types are file, attachments, and inline.
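For example, to skip inline images and download attachments before the post file:

```json
{
    "extractor": {
        "kemonoparty": {
            "files": ["attachments", "file"]
        }
    }
}
```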

extractor.kemonoparty.max-posts

integer

null

Limit the number of posts to download.

extractor.kemonoparty.metadata

bool

false

Extract username metadata.

extractor.kemonoparty.revisions


* bool
* string

false

Extract post revisions.

Set this to "unique" to filter out duplicate revisions.

Note: This requires 1 additional HTTP request per post.

extractor.kemonoparty.order-revisions

string

"desc"

Controls the order in which revisions are returned.

* "asc": Ascending order (oldest first)
* "desc": Descending order (newest first)
* "reverse": Same as "asc"

extractor.khinsider.format

string

"mp3"

The name of the preferred file format to download.

Use "all" to download all available formats, or a (comma-separated) list to select multiple formats.

If the selected format is not available, the first in the list gets chosen (usually mp3).

extractor.lolisafe.domain

string

null

Specifies the domain used by a lolisafe extractor regardless of input URL.

Setting this option to "auto" uses the same domain as a given input URL.

extractor.luscious.gif

bool

false

Format in which to download animated images.

Use true to download animated images as gifs and false to download as mp4 videos.

extractor.mangadex.api-server

string

"https://api.mangadex.org"

The server to use for API requests.

extractor.mangadex.api-parameters

object (name -> value)

{"order[updatedAt]": "desc"}

Additional query parameters to send when fetching manga chapters.

(See /manga/{id}/feed and /user/follows/manga/feed)

extractor.mangadex.lang


* string
* list of strings


* "en"
* "fr,it"
* ["fr", "it"]

ISO 639-1 language codes to filter chapters by.
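As a sketch, restricting chapters to English and French would look like:

```json
{
    "extractor": {
        "mangadex": {
            "lang": ["en", "fr"]
        }
    }
}
```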

extractor.mangadex.ratings

list of strings

["safe", "suggestive", "erotica", "pornographic"]

List of acceptable content ratings for returned chapters.

extractor.mangapark.source


* string
* integer


* "koala:en"
* 15150116

Select chapter source and language for a manga.

The general syntax is "<source name>:<ISO 639-1 language code>".
Both are optional, meaning "koala", "koala:", ":en",
or even just ":" are possible as well.

Specifying the numeric ID of a source is also supported.

extractor.[mastodon].access-token

string

null

The access-token value you get from linking your account to gallery-dl.

Note: gallery-dl comes with built-in tokens for mastodon.social, pawoo and baraag. For other instances, you need to obtain an access-token in order to use usernames in place of numerical user IDs.

extractor.[mastodon].reblogs

bool

false

Fetch media from reblogged posts.

extractor.[mastodon].replies

bool

true

Fetch media from replies to other posts.

extractor.[mastodon].text-posts

bool

false

Also emit metadata for text-only posts without media content.

extractor.[misskey].access-token

string

Your access token, necessary to fetch favorited notes.

extractor.[misskey].renotes

bool

false

Fetch media from renoted notes.

extractor.[misskey].replies

bool

true

Fetch media from replies to other notes.

extractor.[moebooru].pool.metadata

bool

false

Extract extended pool metadata.

Note: Not supported by all moebooru instances.

extractor.newgrounds.flash

bool

true

Download original Adobe Flash animations instead of pre-rendered videos.

extractor.newgrounds.format

string

"original"

"720p"

Selects the preferred format for video downloads.

If the selected format is not available, the next smaller one gets chosen.

extractor.newgrounds.include


* string
* list of strings

"art"


* "movies,audio"
* ["movies", "audio"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "art", "audio", "games", "movies".

It is possible to use "all" instead of listing all values separately.

extractor.nijie.include


* string
* list of strings

"illustration,doujin"

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "illustration", "doujin", "favorite", "nuita".

It is possible to use "all" instead of listing all values separately.

extractor.nitter.quoted

bool

false

Fetch media from quoted Tweets.

extractor.nitter.retweets

bool

false

Fetch media from Retweets.

extractor.nitter.videos


* bool
* string

true

Control video download behavior.

* true: Download videos
* "ytdl": Download videos using youtube-dl
* false: Skip video Tweets

extractor.oauth.browser

bool

true

Controls how a user is directed to an OAuth authorization page.

* true: Use Python's webbrowser.open() method to automatically open the URL in the user's default browser.
* false: Ask the user to copy & paste a URL from the terminal.

extractor.oauth.cache

bool

true

Store tokens received during OAuth authorizations in cache.

extractor.oauth.host

string

"localhost"

Host name / IP address to bind to during OAuth authorization.

extractor.oauth.port

integer

6414

Port number to listen on during OAuth authorization.

Note: All redirects will go to port 6414, regardless of the port specified here. You'll have to manually adjust the port number in your browser's address bar when using a different port than the default.

extractor.paheal.metadata

bool

false

Extract additional metadata (source, uploader)

Note: This requires 1 additional HTTP request per post.

extractor.patreon.files

list of strings

["images", "image_large", "attachments", "postfile", "content"]

Determines the type and order of files to be downloaded.

Available types are postfile, images, image_large, attachments, and content.

extractor.photobucket.subalbums

bool

true

Download subalbums.

extractor.pillowfort.external

bool

false

Follow links to external sites, e.g. Twitter.

extractor.pillowfort.inline

bool

true

Extract inline images.

extractor.pillowfort.reblogs

bool

false

Extract media from reblogged posts.

extractor.pinterest.domain

string

"auto"

Specifies the domain used by pinterest extractors.

Setting this option to "auto" uses the same domain as a given input URL.

extractor.pinterest.sections

bool

true

Include pins from board sections.

extractor.pinterest.videos

bool

true

Download from video pins.

extractor.pixeldrain.api-key

string

Your account's API key

extractor.pixiv.include


* string
* list of strings

"artworks"


* "avatar,background,artworks"
* ["avatar", "background", "artworks"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "artworks", "avatar", "background", "favorite", "novel-user", "novel-bookmark".

It is possible to use "all" instead of listing all values separately.

extractor.pixiv.refresh-token

string

The refresh-token value you get from running gallery-dl oauth:pixiv (see OAuth) or by using a third-party tool like gppt.

extractor.pixiv.embeds

bool

false

Download images embedded in novels.

extractor.pixiv.novel.full-series

bool

false

When downloading a novel that is part of a series, download all novels of that series.

extractor.pixiv.metadata

bool

false

Fetch extended user metadata.

extractor.pixiv.metadata-bookmark

bool

false

For works bookmarked by your own account, fetch bookmark tags as tags_bookmark metadata.

Note: This requires 1 additional API call per bookmarked post.

extractor.pixiv.work.related

bool

false

Also download related artworks.

extractor.pixiv.tags

string

"japanese"

Controls the tags metadata field.

* "japanese": List of Japanese tags
* "translated": List of translated tags
* "original": Unmodified list with both Japanese and translated tags

extractor.pixiv.ugoira

bool

true

Download Pixiv's Ugoira animations or ignore them.

These animations come as a .zip file containing all animation frames in JPEG format.

Use an ugoira post processor to convert them to watchable videos.

extractor.pixiv.max-posts

integer

0

When downloading galleries, this sets the maximum number of posts to get. A value of 0 means no limit.

extractor.plurk.comments

bool

false

Also search Plurk comments for URLs.

extractor.[postmill].save-link-post-body

bool

false

Whether or not to save the body for link/image posts.

extractor.reactor.gif

bool

false

Format in which to download animated images.

Use true to download animated images as gifs and false to download as mp4 videos.

extractor.readcomiconline.captcha

string

"stop"

Controls how to handle redirects to CAPTCHA pages.

* "stop": Stop the current extractor run.
* "wait": Ask the user to solve the CAPTCHA and wait.

extractor.readcomiconline.quality

string

"auto"

Sets the quality query parameter of issue pages. ("lq" or "hq")

"auto" uses the quality parameter of the input URL or "hq" if not present.

extractor.reddit.comments

integer

0

The value of the limit parameter when loading a submission and its comments. This number (roughly) specifies the total amount of comments being retrieved with the first API call.

Reddit's internal default and maximum values for this parameter appear to be 200 and 500 respectively.

The value 0 ignores all comments and significantly reduces the time required when scanning a subreddit.

extractor.reddit.morecomments

bool

false

Retrieve additional comments by resolving the more comment stubs in the base comment tree.

Note: This requires 1 additional API call for every 100 extra comments.

extractor.reddit.date-min & .date-max

Date

0 and 253402210800 (timestamp of datetime.max)

Ignore all submissions posted before/after this date.

extractor.reddit.id-min & .id-max

string

"6kmzv2"

Ignore all submissions posted before/after the submission with this ID.

extractor.reddit.previews

bool

true

For failed downloads from external URLs / child extractors, download Reddit's preview image/video if available.

extractor.reddit.recursion

integer

0

Reddit extractors can recursively visit other submissions linked to in the initial set of submissions. This value sets the maximum recursion depth.

Special values:

* 0: Recursion is disabled
* -1: Infinite recursion (don't do this)

extractor.reddit.refresh-token

string

null

The refresh-token value you get from linking your Reddit account to gallery-dl.

Using a refresh-token allows you to access private or otherwise not publicly available subreddits, provided your account is authorized to do so, but requests to the Reddit API are rate limited to 600 requests every 10 minutes (600 seconds).

extractor.reddit.videos


* bool
* string

true

Control video download behavior.

* true: Download videos and use youtube-dl to handle HLS and DASH manifests
* "ytdl": Download videos and let youtube-dl handle all of video extraction and download
* "dash": Extract DASH manifest URLs and use youtube-dl to download and merge them. (*)
* false: Ignore videos

(*) This saves 1 HTTP request per video and may be able to download otherwise-deleted videos, but it will not always get the best available video quality.
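For instance, opting into the DASH-manifest behavior described above:

```json
{
    "extractor": {
        "reddit": {
            "videos": "dash"
        }
    }
}
```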

extractor.redgifs.format


* string
* list of strings

["hd", "sd", "gif"]

List of names of the preferred animation format, which can be "hd", "sd", "gif", "thumbnail", "vthumbnail", or "poster".

If a selected format is not available, the next one in the list will be tried until an available format is found.

If the format is given as string, it will be extended with ["hd", "sd", "gif"]. Use a list with one element to restrict it to only one possible format.

extractor.sankaku.id-format

string

"numeric"

Format of id metadata fields.

* "alphanumeric" or "alnum": 11-character alphanumeric IDs (y0abGlDOr2o)
* "numeric" or "legacy": numeric IDs (360451)

extractor.sankaku.refresh

bool

false

Refresh download URLs before they expire.

extractor.sankakucomplex.embeds

bool

false

Download video embeds from external sites.

extractor.sankakucomplex.videos

bool

true

Download videos.

extractor.skeb.article

bool

false

Download article images.

extractor.skeb.sent-requests

bool

false

Download sent requests.

extractor.skeb.thumbnails

bool

false

Download thumbnails.

extractor.skeb.search.filters


* string
* list of strings

["genre:art", "genre:voice", "genre:novel", "genre:video", "genre:music", "genre:correction"]

"genre:music OR genre:voice"

Filters used during searches.

extractor.smugmug.videos

bool

true

Download video files.

extractor.steamgriddb.animated

bool

true

Include animated assets when downloading from a list of assets.

extractor.steamgriddb.epilepsy

bool

true

Include assets tagged with epilepsy when downloading from a list of assets.

extractor.steamgriddb.dimensions


* string
* list of strings

"all"


* "1024x512,512x512"
* ["460x215", "920x430"]

Only include assets that are in the specified dimensions. all can be used to specify all dimensions. Valid values are:

* Grids: 460x215, 920x430, 600x900, 342x482, 660x930, 512x512, 1024x1024
* Heroes: 1920x620, 3840x1240, 1600x650
* Logos: N/A (will be ignored)
* Icons: 8x8, 10x10, 14x14, 16x16, 20x20, 24x24, 28x28, 32x32, 35x35, 40x40, 48x48, 54x54, 56x56, 57x57, 60x60, 64x64, 72x72, 76x76, 80x80, 90x90, 96x96, 100x100, 114x114, 120x120, 128x128, 144x144, 150x150, 152x152, 160x160, 180x180, 192x192, 194x194, 256x256, 310x310, 512x512, 768x768, 1024x1024

extractor.steamgriddb.file-types


* string
* list of strings

"all"


* "png,jpeg"
* ["jpeg", "webp"]

Only include assets that are in the specified file types. all can be used to specify all file types. Valid values are:

* Grids: png, jpeg, jpg, webp
* Heroes: png, jpeg, jpg, webp
* Logos: png, webp
* Icons: png, ico
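A sketch combining the dimensions and file-types filters (values taken from the lists above):

```json
{
    "extractor": {
        "steamgriddb": {
            "dimensions": ["460x215", "920x430"],
            "file-types": "png,jpeg"
        }
    }
}
```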

extractor.steamgriddb.download-fake-png

bool

true

Download fake PNGs alongside the real file.

extractor.steamgriddb.humor

bool

true

Include assets tagged with humor when downloading from a list of assets.

extractor.steamgriddb.languages


* string
* list of strings

"all"


* "en,km"
* ["fr", "it"]

Only include assets that are in the specified languages. all can be used to specify all languages. Valid values are ISO 639-1 language codes.

extractor.steamgriddb.nsfw

bool

true

Include assets tagged with adult content when downloading from a list of assets.

extractor.steamgriddb.sort

string

score_desc

Set the chosen sorting method when downloading from a list of assets. Can be one of:

* score_desc (Highest Score (Beta))
* score_asc (Lowest Score (Beta))
* score_old_desc (Highest Score (Old))
* score_old_asc (Lowest Score (Old))
* age_desc (Newest First)
* age_asc (Oldest First)

extractor.steamgriddb.static

bool

true

Include static assets when downloading from a list of assets.

extractor.steamgriddb.styles


* string
* list of strings

"all"


* "white,black"
* ["no_logo", "white_logo"]

Only include assets that are in the specified styles. all can be used to specify all styles. Valid values are:

* Grids: alternate, blurred, no_logo, material, white_logo
* Heroes: alternate, blurred, material
* Logos: official, white, black, custom
* Icons: official, custom

extractor.steamgriddb.untagged

bool

true

Include untagged assets when downloading from a list of assets.

extractor.[szurubooru].username & .token

string

Username and login token of your account to access private resources.

To generate a token, visit /user/USERNAME/list-tokens and click Create Token.

extractor.tumblr.avatar

bool

false

Download blog avatars.

extractor.tumblr.date-min & .date-max

Date

0 and null

Ignore all posts published before/after this date.

extractor.tumblr.external

bool

false

Follow external URLs (e.g. from "Link" posts) and try to extract images from them.

extractor.tumblr.inline

bool

true

Search posts for inline images and videos.

extractor.tumblr.offset

integer

0

Custom offset starting value when paginating over blog posts.

Allows skipping over posts without having to waste API calls.

extractor.tumblr.original

bool

true

Download full-resolution photo and inline images.

For each photo with "maximum" resolution (width equal to 2048 or height equal to 3072) or each inline image, use an extra HTTP request to find the URL to its full-resolution version.

extractor.tumblr.ratelimit

string

"abort"

Selects how to handle exceeding the daily API rate limit.

* "abort": Raise an error and stop extraction
* "wait": Wait until rate limit reset

extractor.tumblr.reblogs


* bool
* string

true


* true: Extract media from reblogged posts
* false: Skip reblogged posts
* "same-blog": Skip reblogged posts unless the original post is from the same blog
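For example, to keep reblogs only when they originate from the same blog:

```json
{
    "extractor": {
        "tumblr": {
            "reblogs": "same-blog"
        }
    }
}
```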

extractor.tumblr.posts


* string
* list of strings

"all"


* "video,audio,link"
* ["video", "audio", "link"]

A (comma-separated) list of post types to extract images and other media from.

Possible types are text, quote, link, answer, video, audio, photo, chat.

It is possible to use "all" instead of listing all types separately.

extractor.tumblr.fallback-delay

float

120.0

Number of seconds to wait between retries for fetching full-resolution images.

extractor.tumblr.fallback-retries

integer

2

Number of retries for fetching full-resolution images or -1 for infinite retries.

extractor.twibooru.api-key

string

null

Your Twibooru API Key, to use your account's browsing settings and filters.

extractor.twibooru.filter

integer

2 (Everything filter)

The content filter ID to use.

Setting an explicit filter ID overrides any default filters and can be used to access 18+ content without an API Key.

See Filters for details.

extractor.twitter.ads

bool

false

Fetch media from promoted Tweets.

extractor.twitter.cards


* bool
* string

false

Controls how to handle Twitter Cards.

* false: Ignore cards
* true: Download image content from supported cards
* "ytdl": Additionally download video content from unsupported cards using youtube-dl

extractor.twitter.cards-blacklist

list of strings

["summary", "youtube.com", "player:twitch.tv"]

List of card types to ignore.

Possible values are

* card names
* card domains
* <card name>:<card domain>

extractor.twitter.conversations


* bool
* string

false

For input URLs pointing to a single Tweet, e.g. https://twitter.com/i/web/status/<TweetID>, fetch media from all Tweets and replies in this conversation <https://help.twitter.com/en/using-twitter/twitter-conversations>.

If this option is equal to "accessible", only download from conversation Tweets if the given initial Tweet is accessible.

extractor.twitter.csrf

string

"cookies"

Controls how to handle Cross Site Request Forgery (CSRF) tokens.

* "auto": Always auto-generate a token.
* "cookies": Use token given by the ct0 cookie if present.

extractor.twitter.expand

bool

false

For each Tweet, return *all* Tweets from that initial Tweet's conversation or thread, i.e. *expand* all Twitter threads.

Going through a timeline with this option enabled is essentially the same as running gallery-dl https://twitter.com/i/web/status/<TweetID> with the conversations option enabled for each Tweet in said timeline.

Note: This requires at least 1 additional API call per initial Tweet.

extractor.twitter.include


* string
* list of strings

"timeline"


* "avatar,background,media"
* ["avatar", "background", "media"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "avatar", "background", "timeline", "tweets", "media", "replies", "likes".

It is possible to use "all" instead of listing all values separately.

extractor.twitter.transform

bool

true

Transform Tweet and User metadata into a simpler, uniform format.

extractor.twitter.tweet-endpoint

string

"auto"

Selects the API endpoint used to retrieve single Tweets.

* "restid": /TweetResultByRestId - accessible to guest users
* "detail": /TweetDetail - more stable
* "auto": "detail" when logged in, "restid" otherwise

extractor.twitter.size

list of strings

["orig", "4096x4096", "large", "medium", "small"]

The image version to download. Any entries after the first one will be used for potential fallback URLs.

Known available sizes are 4096x4096, orig, large, medium, and small.

extractor.twitter.logout

bool

false

Logout and retry as guest when access to another user's Tweets is blocked.

extractor.twitter.pinned

bool

false

Fetch media from pinned Tweets.

extractor.twitter.quoted

bool

false

Fetch media from quoted Tweets.

If this option is enabled, gallery-dl will try to fetch a quoted (original) Tweet when it sees the Tweet which quotes it.

extractor.twitter.ratelimit

string

"wait"

Selects how to handle exceeding the API rate limit.

* "abort": Raise an error and stop extraction
* "wait": Wait until rate limit reset

extractor.twitter.locked

string

"abort"

Selects how to handle "account is temporarily locked" errors.

* "abort": Raise an error and stop extraction
* "wait": Wait until the account is unlocked and retry

extractor.twitter.replies

bool

true

Fetch media from replies to other Tweets.

If this value is "self", only consider replies where reply and original Tweet are from the same user.

Note: Twitter will automatically expand conversations if you use the /with_replies timeline while logged in. For example, media from Tweets which the user replied to will also be downloaded.

It is possible to exclude unwanted Tweets using image-filter <extractor.*.image-filter>.

extractor.twitter.retweets

bool

false

Fetch media from Retweets.

If this value is "original", metadata for these files will be taken from the original Tweets, not the Retweets.

extractor.twitter.timeline.strategy

string

"auto"

Controls the strategy / tweet source used for timeline URLs (https://twitter.com/USER/timeline).

* "tweets": /tweets timeline + search
* "media": /media timeline + search
* "with_replies": /with_replies timeline + search
* "auto": "tweets" or "media", depending on retweets and text-tweets settings

extractor.twitter.text-tweets

bool

false

Also emit metadata for text-only Tweets without media content.

This only has an effect with a metadata (or exec) post processor with "event": "post" and appropriate filename.
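A sketch pairing this option with a metadata post processor as described above (the filename value here is illustrative):

```json
{
    "extractor": {
        "twitter": {
            "text-tweets": true,
            "postprocessors": [
                {
                    "name": "metadata",
                    "event": "post",
                    "filename": "{tweet_id}.json"
                }
            ]
        }
    }
}
```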

extractor.twitter.twitpic

bool

false

Extract TwitPic embeds.

extractor.twitter.unique

bool

true

Ignore previously seen Tweets.

extractor.twitter.users

string

"user"

"https://twitter.com/search?q=from:{legacy[screen_name]}"

Format string for user URLs generated from
following and list-members queries, whose replacement field values come from Twitter user objects
(Example)

Special values:

* "user": https://twitter.com/i/user/{rest_id}
* "timeline": https://twitter.com/id:{rest_id}/timeline
* "tweets": https://twitter.com/id:{rest_id}/tweets
* "media": https://twitter.com/id:{rest_id}/media

Note: To allow gallery-dl to follow custom URL formats, set the blacklist for twitter to a non-default value, e.g. an empty string "".
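A sketch of a custom users format string together with the blacklist adjustment mentioned in the note (the search-URL format string is taken from the example above):

```json
{
    "extractor": {
        "twitter": {
            "users": "https://twitter.com/search?q=from:{legacy[screen_name]}",
            "blacklist": ""
        }
    }
}
```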

extractor.twitter.videos


* bool
* string

true

Control video download behavior.

* true: Download videos
* "ytdl": Download videos using youtube-dl
* false: Skip video Tweets

extractor.unsplash.format

string

"raw"

Name of the image format to download.

Available formats are "raw", "full", "regular", "small", and "thumb".

extractor.vipergirls.domain

string

"vipergirls.to"

Specifies the domain used by vipergirls extractors.

For example, use "viper.click" if the main domain is blocked or to bypass Cloudflare.

extractor.vipergirls.like

bool

false

Automatically like posts after downloading their images.

Note: Requires login or cookies

extractor.vsco.videos

bool

true

Download video files.

extractor.wallhaven.api-key

string

null

Your Wallhaven API Key, to use your account's browsing settings and default filters when searching.

See https://wallhaven.cc/help/api for more information.

extractor.wallhaven.include


* string
* list of strings

"uploads"


* "uploads,collections"
* ["uploads", "collections"]

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "uploads", "collections".

It is possible to use "all" instead of listing all values separately.

extractor.wallhaven.metadata

bool

false

Extract additional metadata (tags, uploader)

Note: This requires 1 additional HTTP request per post.

extractor.weasyl.api-key

string

null

Your Weasyl API Key, to use your account's browsing settings and filters.

extractor.weasyl.metadata

bool

false

Fetch extra submission metadata during gallery downloads.
(comments, description, favorites, folder_name,
tags, views)

Note: This requires 1 additional HTTP request per submission.

extractor.weibo.gifs


* bool
* string

true

Download gif files.

Set this to "video" to download GIFs as video files.

extractor.weibo.include


* string
* list of strings

"feed"

A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "home", "feed", "videos", "newvideo", "article", "album".

It is possible to use "all" instead of listing all values separately.

extractor.weibo.livephoto

bool

true

Download livephoto files.

extractor.weibo.retweets

bool

false

Fetch media from retweeted posts.

If this value is "original", metadata for these files will be taken from the original posts, not the retweeted posts.

extractor.weibo.videos

bool

true

Download video files.

extractor.ytdl.enabled

bool

false

Match **all** URLs, even ones without a ytdl: prefix.

extractor.ytdl.format

string

youtube-dl's default, currently "bestvideo+bestaudio/best"

Video format selection <https://github.com/ytdl-org/youtube-dl#format-selection> directly passed to youtube-dl.

extractor.ytdl.generic

bool

true

Controls the use of youtube-dl's generic extractor.

Set this option to "force" for the same effect as youtube-dl's --force-generic-extractor.

extractor.ytdl.logging

bool

true

Route youtube-dl's output through gallery-dl's logging system. Otherwise youtube-dl will write its output directly to stdout/stderr.

Note: Set quiet and no_warnings in extractor.ytdl.raw-options to true to suppress all output.
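
For example, to keep ytdl output routed through gallery-dl's logging while suppressing it entirely, a configuration sketch could be:

```json
{
    "extractor": {
        "ytdl": {
            "logging": true,
            "raw-options": {
                "quiet": true,
                "no_warnings": true
            }
        }
    }
}
```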

extractor.ytdl.module

string

null

Name of the youtube-dl Python module to import.

Setting this to null will first try to import "yt_dlp" and fall back to "youtube_dl".

extractor.ytdl.raw-options

object (name -> value)

{ "quiet": true, "writesubtitles": true, "merge_output_format": "mkv" }

Additional options passed directly to the YoutubeDL constructor.

All available options can be found in youtube-dl's docstrings <https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L138-L318>.

extractor.ytdl.cmdline-args


* string
* list of strings


* "--quiet --write-sub --merge-output-format mkv"
* ["--quiet", "--write-sub", "--merge-output-format", "mkv"]

Additional options specified as youtube-dl command-line arguments.

extractor.ytdl.config-file

Path

"~/.config/youtube-dl/config"

Location of a youtube-dl configuration file to load options from.

extractor.zerochan.metadata

bool

false

Extract additional metadata (date, md5, tags, ...)

Note: This requires 1-2 additional HTTP requests per post.

extractor.zerochan.pagination

string

"api"

Controls how to paginate over tag search results.

* "api": Use the JSON API (no extension metadata)
* "html": Parse HTML pages (limited to 100 pages * 24 posts)

extractor.[booru].tags

bool

false

Categorize tags by their respective types and provide them as tags_<type> metadata fields.

Note: This requires 1 additional HTTP request per post.

extractor.[booru].notes

bool

false

Extract overlay notes (position and text).

Note: This requires 1 additional HTTP request per post.

extractor.[booru].url

string

"file_url"

"preview_url"

Alternate field name to retrieve download URLs from.

extractor.[manga-extractor].chapter-reverse

bool

false

Reverse the order of chapter URLs extracted from manga pages.

* true: Start with the latest chapter
* false: Start with the first chapter

extractor.[manga-extractor].page-reverse

bool

false

Download manga chapter pages in reverse order.

DOWNLOADER OPTIONS

downloader.*.enabled

bool

true

Enable/Disable this downloader module.

downloader.*.filesize-min & .filesize-max

string

null

"32000", "500k", "2.5M"

Minimum/Maximum allowed file size in bytes. Any file smaller/larger than this limit will not be downloaded.

Possible values are valid integer or floating-point numbers optionally followed by one of k, m, g, t, or p. These suffixes are case-insensitive.
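
For example, to skip thumbnails and oversized files for all HTTP downloads (the exact limits here are arbitrary):

```json
{
    "downloader": {
        "http": {
            "filesize-min": "10k",
            "filesize-max": "2.5M"
        }
    }
}
```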

downloader.*.mtime

bool

true

Use Last-Modified HTTP response headers to set file modification times.

downloader.*.part

bool

true

Controls the use of .part files during file downloads.

* true: Write downloaded data into .part files and rename them upon download completion. This mode additionally supports resuming incomplete downloads.
* false: Do not use .part files and write data directly into the actual output files.

downloader.*.part-directory

Path

null

Alternate location for .part files.

Missing directories will be created as needed. If this value is null, .part files will be stored alongside the actual output files.

downloader.*.progress

float

3.0

Number of seconds until a download progress indicator for the current download is displayed.

Set this option to null to disable this indicator.

downloader.*.rate

string

null

"32000", "500k", "2.5M"

Maximum download rate in bytes per second.

Possible values are valid integer or floating-point numbers optionally followed by one of k, m, g, t, or p. These suffixes are case-insensitive.

downloader.*.retries

integer

extractor.*.retries

Maximum number of retries during file downloads, or -1 for infinite retries.

downloader.*.timeout

float

extractor.*.timeout

Connection timeout during file downloads.

downloader.*.verify


* bool
* string

extractor.*.verify

Certificate validation during file downloads.

downloader.*.proxy


* string
* object (scheme -> proxy)

extractor.*.proxy

Proxy server used for file downloads.

Disable the use of a proxy for file downloads by explicitly setting this option to null.

downloader.http.adjust-extensions

bool

true

Check file headers of downloaded files and adjust their filename extensions if they do not match.

For example, this will change the filename extension ({extension}) of a file called example.png from png to jpg when said file contains JPEG/JFIF data.

downloader.http.consume-content

bool

false

Controls the behavior when an HTTP response is considered unsuccessful.

If the value is true, consume the response body. This avoids closing the connection and therefore improves connection reuse.

If the value is false, immediately close the connection without reading the response. This can be useful if the server is known to send large bodies for error responses.

downloader.http.chunk-size


* integer
* string

32768

"50k", "0.8M"

Number of bytes per downloaded chunk.

Possible values are integer numbers optionally followed by one of k, m, g, t, or p. These suffixes are case-insensitive.

downloader.http.headers

object (name -> value)

{"Accept": "image/webp,*/*", "Referer": "https://example.org/"}

Additional HTTP headers to send when downloading files.

downloader.http.retry-codes

list of integers

extractor.*.retry-codes

Additional HTTP response status codes to retry a download on.

Codes 200, 206, and 416 (when resuming a partial download) will never be retried and always count as success, regardless of this option.

5xx codes (server error responses) will always be retried, regardless of this option.
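
A sketch retrying downloads on some additional client-error codes (the codes chosen here are only an example):

```json
{
    "downloader": {
        "http": {
            "retry-codes": [404, 429]
        }
    }
}
```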

downloader.http.validate

bool

true

Check for invalid responses.

Fail a download when a file does not pass the validation check instead of downloading a potentially broken file.

downloader.ytdl.format

string

youtube-dl's default, currently "bestvideo+bestaudio/best"

Video format selection <https://github.com/ytdl-org/youtube-dl#format-selection> directly passed to youtube-dl.

downloader.ytdl.forward-cookies

bool

false

Forward cookies to youtube-dl.

downloader.ytdl.logging

bool

true

Route youtube-dl's output through gallery-dl's logging system. Otherwise youtube-dl will write its output directly to stdout/stderr.

Note: Set quiet and no_warnings in downloader.ytdl.raw-options to true to suppress all output.

downloader.ytdl.module

string

null

Name of the youtube-dl Python module to import.

Setting this to null will first try to import "yt_dlp" and use "youtube_dl" as fallback.

downloader.ytdl.outtmpl

string

null

The Output Template used to generate filenames for files downloaded with youtube-dl.

Special values:

* null: generate filenames with extractor.*.filename
* "default": use youtube-dl's default, currently "%(title)s-%(id)s.%(ext)s"

Note: An output template other than null might cause unexpected results in combination with other options (e.g. "skip": "enumerate")

downloader.ytdl.raw-options

object (name -> value)

{ "quiet": true, "writesubtitles": true, "merge_output_format": "mkv" }

Additional options passed directly to the YoutubeDL constructor.

All available options can be found in youtube-dl's docstrings <https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L138-L318>.

downloader.ytdl.cmdline-args


* string
* list of strings


* "--quiet --write-sub --merge-output-format mkv"
* ["--quiet", "--write-sub", "--merge-output-format", "mkv"]

Additional options specified as youtube-dl command-line arguments.

downloader.ytdl.config-file

Path

"~/.config/youtube-dl/config"

Location of a youtube-dl configuration file to load options from.

OUTPUT OPTIONS

output.mode


* string
* object (key -> format string)

"auto"

Controls the output string format and status indicators.

* "null": No output
* "pipe": Suitable for piping to other processes or files
* "terminal": Suitable for the standard Windows console
* "color": Suitable for terminals that understand ANSI escape codes and colors
* "auto": "terminal" on Windows with output.ansi disabled, "color" otherwise.

It is possible to use custom output format strings
by setting this option to an object and specifying start, success, skip, progress, and progress-total.

For example, the following will replicate the same output as mode: color:

{ "start" : "{}", "success": "\r\u001b[1;32m{}\u001b[0m\n", "skip" : "\u001b[2m{}\u001b[0m\n", "progress" : "\r{0:>7}B {1:>7}B/s ", "progress-total": "\r{3:>3}% {0:>7}B {1:>7}B/s " }

start, success, and skip are used to output the current filename, where {} or {0} is replaced with said filename. If a given format string contains printable characters besides that placeholder, their number needs to be specified as [<number>, <format string>] for output.shorten to produce correct results. For example

"start" : [12, "Downloading {}"]

progress and progress-total are used when displaying the
download progress indicator, progress when the total number of bytes to download is unknown,
progress-total otherwise.

For these format strings

* {0} is number of bytes downloaded
* {1} is number of downloaded bytes per second
* {2} is total number of bytes
* {3} is percent of bytes downloaded to total bytes

output.stdout & .stdin & .stderr


* string
* object

"utf-8"

{ "encoding": "utf-8", "errors": "replace", "line_buffering": true }

Reconfigure a standard stream.

Possible options are

* encoding
* errors
* newline
* line_buffering
* write_through

When this option is specified as a simple string, it is interpreted as {"encoding": "<string-value>", "errors": "replace"}

Note: errors always defaults to "replace"

output.shorten

bool

true

Controls whether the output strings should be shortened to fit on one console line.

Set this option to "eaw" to also work with east-asian characters with a display width greater than 1.

output.colors

object (key -> ANSI color)

{"success": "1;32", "skip": "2"}

Controls the ANSI colors used with mode: color for successfully downloaded or skipped files.

output.ansi

bool

false

On Windows, enable ANSI escape sequences and colored output
by setting the ENABLE_VIRTUAL_TERMINAL_PROCESSING flag for stdout and stderr.

output.skip

bool

true

Show skipped file downloads.

output.fallback

bool

true

Include fallback URLs in the output of -g/--get-urls.

output.private

bool

false

Include private fields, i.e. fields whose name starts with an underscore, in the output of -K/--list-keywords and -j/--dump-json.

output.progress


* bool
* string

true

Controls the progress indicator when *gallery-dl* is run with multiple URLs as arguments.

* true: Show the default progress indicator ("[{current}/{total}] {url}")
* false: Do not show any progress indicator
* Any string: Show the progress indicator using this as a custom format string. Possible replacement keys are current, total and url.
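
For example, a custom progress indicator using the three available replacement keys:

```json
{
    "output": {
        "progress": "[{current}/{total}] processing {url}"
    }
}
```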

output.log


* string
* Logging Configuration

"[{name}][{levelname}] {message}"

Configuration for logging output to stderr.

If this is a simple string, it specifies the format string for logging messages.

output.logfile


* Path
* Logging Configuration

File to write logging output to.

output.unsupportedfile


* Path
* Logging Configuration

File to write external URLs unsupported by *gallery-dl* to.

The default format string here is "{message}".

output.errorfile


* Path
* Logging Configuration

File to write input URLs which returned an error to.

The default format string here is also "{message}".

When combined with -I/--input-file-comment or -x/--input-file-delete, this option will cause *all* input URLs from these files to be commented/deleted after processing them and not just successful ones.

output.num-to-str

bool

false

Convert numeric values (integer or float) to string before outputting them as JSON.

POSTPROCESSOR OPTIONS

classify.mapping

object (directory -> extensions)

{ "Pictures": ["jpg", "jpeg", "png", "gif", "bmp", "svg", "webp"], "Video" : ["flv", "ogv", "avi", "mp4", "mpg", "mpeg", "3gp", "mkv", "webm", "vob", "wmv"], "Music" : ["mp3", "aac", "flac", "ogg", "wma", "m4a", "wav"], "Archives": ["zip", "rar", "7z", "tar", "gz", "bz2"] }

A mapping from directory names to filename extensions that should be stored in them.

Files with an extension not listed will be ignored and stored in their default location.

compare.action

string

"replace"

The action to take when files do **not** compare as equal.

* "replace": Replace/Overwrite the old version with the new one

* "enumerate": Add an enumeration index to the filename of the new version like skip = "enumerate"

compare.equal

string

"null"

The action to take when files do compare as equal.

* "abort:N": Stop the current extractor run after N consecutive files compared as equal.

* "terminate:N": Stop the current extractor run, including parent extractors, after N consecutive files compared as equal.

* "exit:N": Exit the program after N consecutive files compared as equal.

compare.shallow

bool

false

Only compare file sizes. Do not read and compare their content.

exec.archive

Path

File to store IDs of executed commands in, similar to extractor.*.archive.

archive-format, archive-prefix, and archive-pragma options, akin to extractor.*.archive-format, extractor.*.archive-prefix, and extractor.*.archive-pragma, are supported as well.

exec.async

bool

false

Controls whether to wait for a subprocess to finish or to let it run asynchronously.

exec.command


* string
* list of strings


* "convert {} {}.png && rm {}"
* ["echo", "{user[account]}", "{id}"]

The command to run.

* If this is a string, it will be executed using the system's shell, e.g. /bin/sh. Any {} will be replaced with the full path of a file or target directory, depending on exec.event

* If this is a list, the first element specifies the program name and any further elements its arguments. Each element of this list is treated as a format string using the files' metadata as well as {_path}, {_directory}, and {_filename}.

exec.event


* string
* list of strings

"after"

The event(s) for which exec.command is run.

See metadata.event for a list of available events.
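
Combining the options above into one post-processor entry, a sketch that prints each file's path after it is moved to its target location:

```json
{
    "name": "exec",
    "event": "after",
    "command": ["echo", "{_path}"]
}
```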

metadata.mode

string

"json"

Selects how to process metadata.

* "json": write metadata using json.dump()
* "jsonl": write metadata in JSON Lines <https://jsonlines.org/> format
* "tags": write tags separated by newlines
* "custom": write the result of applying metadata.content-format to a file's metadata dictionary
* "modify": add or modify metadata entries
* "delete": remove metadata entries

metadata.filename

string

null

"{id}.data.json"

A format string to build the filenames for metadata files with. (see extractor.filename)

Using "-" as filename will write all output to stdout.

If this option is set, metadata.extension and metadata.extension-format will be ignored.

metadata.directory

string

"."

"metadata"

Directory where metadata files are stored, relative to the current target location for file downloads.

metadata.extension

string

"json" or "txt"

Filename extension for metadata files. It will be appended to the original filenames.

metadata.extension-format

string


* "{extension}.json"
* "json"

Custom format string to build filename extensions for metadata files with, which will replace the original filename extensions.

Note: metadata.extension is ignored if this option is set.

metadata.event


* string
* list of strings

"file"


* "prepare,file,after"
* ["prepare-after", "skip"]

The event(s) for which metadata gets written to a file.

Available events are:

* init: After post processor initialization and before the first file download
* finalize: On extractor shutdown, e.g. after all files were downloaded
* finalize-success: On extractor shutdown when no error occurred
* finalize-error: On extractor shutdown when at least one error occurred
* prepare: Before a file download
* prepare-after: Before a file download, but after building and checking file paths
* file: When completing a file download, but before it gets moved to its target location
* after: After a file got moved to its target location
* skip: When skipping a file download
* post: When starting to download all files of a post, e.g. a Tweet on Twitter or a post on Patreon
* post-after: After downloading all files of a post

metadata.fields


* list of strings
* object (field name -> format string)

["blocked", "watching", "status[creator][name]"]

{ "blocked" : "***", "watching" : "\fE 'yes' if watching else 'no'", "status[username]": "{status[creator][name]!l}" }


* "mode": "delete": A list of metadata field names to remove.
* "mode": "modify": An object with metadata field names mapping to a format string whose result is assigned to said field name.

metadata.content-format


* string
* list of strings


* "tags:\n\n{tags:J\n}\n"
* ["tags:", "", "{tags:J\n}"]

Custom format string to build the content of metadata files with.

Note: Only applies for "mode": "custom".

metadata.ascii

bool

false

Escape all non-ASCII characters.

See the ensure_ascii argument of json.dump() for further details.

Note: Only applies for "mode": "json" and "jsonl".

metadata.indent


* integer
* string

4

Indentation level of JSON output.

See the indent argument of json.dump() for further details.

Note: Only applies for "mode": "json".

metadata.separators

list with two string elements

[", ", ": "]

<item separator> - <key separator> pair to separate JSON keys and values with.

See the separators argument of json.dump() for further details.

Note: Only applies for "mode": "json" and "jsonl".

metadata.sort

bool

false

Sort output by key.

See the sort_keys argument of json.dump() for further details.

Note: Only applies for "mode": "json" and "jsonl".

metadata.open

string

"w"

The mode in which metadata files get opened.

For example, use "a" to append to a file's content or "w" to truncate it.

See the mode argument of open() for further details.

metadata.encoding

string

"utf-8"

Name of the encoding used to encode a file's content.

See the encoding argument of open() for further details.

metadata.private

bool

false

Include private fields, i.e. fields whose name starts with an underscore.

metadata.skip

bool

false

Do not overwrite already existing files.

metadata.archive

Path

File to store IDs of generated metadata files in, similar to extractor.*.archive.

archive-format, archive-prefix, and archive-pragma options, akin to extractor.*.archive-format, extractor.*.archive-prefix, and extractor.*.archive-pragma, are supported as well.

metadata.mtime

bool

false

Set modification times of generated metadata files according to the accompanying downloaded file.

Enabling this option will only have an effect *if* there is actual mtime metadata available, that is

* after a file download ("event": "file" (default), "event": "after")
* when running *after* an mtime post processor for the same event

For example, a metadata post processor for "event": "post" will *not* be able to set its file's modification time unless an mtime post processor with "event": "post" runs *before* it.
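
A minimal post-processor list following that ordering rule — assuming post processors for the same event run in the order listed — with the mtime entry placed before the metadata entry:

```json
"postprocessors": [
    {"name": "mtime", "event": "post"},
    {"name": "metadata", "event": "post", "mtime": true}
]
```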

mtime.event


* string
* list of strings

"file"

The event(s) for which mtime.key or mtime.value get evaluated.

See metadata.event for a list of available events.

mtime.key

string

"date"

Name of the metadata field whose value should be used.

This value must be either a UNIX timestamp or a datetime object.

Note: This option gets ignored if mtime.value is set.

mtime.value

string

null


* "{status[date]}"
* "{content[0:6]:R22/2022/D%Y%m%d/}"

A format string whose value should be used.

The resulting value must be either a UNIX timestamp or a datetime object.

python.archive

Path

File to store IDs of called Python functions in, similar to extractor.*.archive.

archive-format, archive-prefix, and archive-pragma options, akin to extractor.*.archive-format, extractor.*.archive-prefix, and extractor.*.archive-pragma, are supported as well.

python.event


* string
* list of strings

"file"

The event(s) for which python.function gets called.

See metadata.event for a list of available events.

python.function

string


* "my_module:generate_text"
* "~/.local/share/gdl-utils.py:resize"

The Python function to call.

This function is specified as <module>:<function name> and gets called with the current metadata dict as argument.

module is either an importable Python module name or the Path to a .py file.
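
As a sketch, a hypothetical module usable with this option (the function and metadata field names below are assumptions for illustration):

```python
# gdl-utils.py -- hypothetical helper module for the python post processor.
# gallery-dl calls the named function with the current metadata dict as
# its single argument; changes made to the dict here are visible to
# format strings evaluated in later events.

def add_tag_string(metadata):
    # Join the (extractor-dependent) "tags" list into one sorted string
    # and store it under a new key, e.g. for use as {tag_string}.
    metadata["tag_string"] = " ".join(sorted(metadata.get("tags", [])))
    return metadata
```

It could then be referenced as "~/.local/share/gdl-utils.py:add_tag_string".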

ugoira.extension

string

"webm"

Filename extension for the resulting video files.

ugoira.ffmpeg-args

list of strings

null

["-c:v", "libvpx-vp9", "-an", "-b:v", "2M"]

Additional FFmpeg command-line arguments.

ugoira.ffmpeg-demuxer

string

"auto"

FFmpeg demuxer to read and process input files with. Possible values are

* "concat" (inaccurate frame timecodes for non-uniform frame delays)
* "image2" (accurate timecodes, requires nanosecond file timestamps, i.e. no Windows or macOS)
* "mkvmerge" (accurate timecodes, only WebM or MKV, requires mkvmerge)

"auto" will select mkvmerge if available and fall back to concat otherwise.

ugoira.ffmpeg-location

Path

"ffmpeg"

Location of the ffmpeg (or avconv) executable to use.

ugoira.mkvmerge-location

Path

"mkvmerge"

Location of the mkvmerge executable for use with the mkvmerge demuxer.

ugoira.ffmpeg-output


* bool
* string

"error"

Controls FFmpeg output.

* true: Enable FFmpeg output
* false: Disable all FFmpeg output
* any string: Pass -hide_banner and -loglevel with this value as argument to FFmpeg

ugoira.ffmpeg-twopass

bool

false

Enable Two-Pass encoding.

ugoira.framerate

string

"auto"

Controls the frame rate argument (-r) for FFmpeg

* "auto": Automatically assign a fitting frame rate based on delays between frames.
* "uniform": Like auto, but assign an explicit frame rate only to Ugoira with uniform frame delays.
* any other string: Use this value as argument for -r.
* null or an empty string: Don't set an explicit frame rate.

ugoira.keep-files

bool

false

Keep ZIP archives after conversion.

ugoira.libx264-prevent-odd

bool

true

Prevent "width/height not divisible by 2" errors when using libx264 or libx265 encoders by applying a simple cropping filter. See this Stack Overflow thread for more information.

This option, when libx264/5 is used, automatically adds ["-vf", "crop=iw-mod(iw\\,2):ih-mod(ih\\,2)"] to the list of FFmpeg command-line arguments to reduce an odd width/height by 1 pixel and make them even.

ugoira.mtime

bool

true

Set modification times of generated ugoira animations.

ugoira.repeat-last-frame

bool

true

Allow repeating the last frame when necessary to prevent it from only being displayed for a very short amount of time.

zip.extension

string

"zip"

Filename extension for the created ZIP archive.

zip.files

list of Path

["info.json"]

List of extra files to be added to a ZIP archive.

Note: Relative paths are relative to the current download directory.

zip.keep-files

bool

false

Keep the actual files after writing them to a ZIP archive.

zip.mode

string

"default"


* "default": Write the central directory file header once after everything is done or an exception is raised.

* "safe": Update the central directory file header each time a file is stored in a ZIP archive.

This greatly reduces the chance a ZIP archive gets corrupted in case the Python interpreter gets shut down unexpectedly (power outage, SIGKILL) but is also a lot slower.

MISCELLANEOUS OPTIONS

extractor.modules

list of strings

The modules list in extractor/__init__.py

["reddit", "danbooru", "mangadex"]

List of internal modules to load when searching for a suitable extractor class. Useful to reduce startup time and memory usage.

extractor.module-sources

list of Path instances

["~/.config/gallery-dl/modules", null]

List of directories to load external extractor modules from.

Any file in a specified directory with a .py filename extension gets imported and searched for potential extractors, i.e. classes with a pattern attribute.

Note: null references internal extractors defined in extractor/__init__.py or by extractor.modules.

globals


* Path
* string


* "~/.local/share/gdl-globals.py"
* "gdl-globals"

Path to or name of an
importable Python module, whose namespace,
in addition to the GLOBALS dict in util.py, gets used as globals parameter for compiled Python expressions.

cache.file

Path


* (%APPDATA% or "~") + "/gallery-dl/cache.sqlite3" on Windows
* ($XDG_CACHE_HOME or "~/.cache") + "/gallery-dl/cache.sqlite3" on all other platforms

Path of the SQLite3 database used to cache login sessions, cookies and API tokens across gallery-dl invocations.

Set this option to null or an invalid path to disable this cache.

format-separator

string

"/"

Character(s) used as argument separator in format string format specifiers.

For example, setting this option to "#" would allow a replacement operation to be Rold#new# instead of the default Rold/new/

signals-ignore

list of strings

["SIGTTOU", "SIGTTIN", "SIGTERM"]

The list of signal names to ignore, i.e. set SIG_IGN as signal handler for.

subconfigs

list of Path

["~/cfg-twitter.json", "~/cfg-reddit.json"]

Additional configuration files to load.

warnings

string

"default"

The Warnings Filter action used for (urllib3) warnings.

API TOKENS & IDS

extractor.deviantart.client-id & .client-secret

string


* login and visit DeviantArt's Applications & Keys section
* click "Register Application"
* scroll to "OAuth2 Redirect URI Whitelist (Required)" and enter "https://mikf.github.io/gallery-dl/oauth-redirect.html"
* scroll to the bottom and agree to the API License Agreement, Submission Policy, and Terms of Service.
* click "Save"
* copy client_id and client_secret of your new application and put them in your configuration file as "client-id" and "client-secret"
* clear your cache to delete any remaining access-token entries. (gallery-dl --clear-cache deviantart)
* get a new refresh-token for the new client-id (gallery-dl oauth:deviantart)
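
Once obtained, the values go into the configuration file like this (shown with placeholder values):

```json
{
    "extractor": {
        "deviantart": {
            "client-id": "12345",
            "client-secret": "0123456789abcdef0123456789abcdef"
        }
    }
}
```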

extractor.flickr.api-key & .api-secret

string


* login and Create an App in Flickr's App Garden
* click "APPLY FOR A NON-COMMERCIAL KEY"
* fill out the form with a random name and description and click "SUBMIT"
* copy Key and Secret and put them in your configuration file as "api-key" and "api-secret"

extractor.reddit.client-id & .user-agent

string


* login and visit the apps section of your account's preferences
* click the "are you a developer? create an app..." button
* fill out the form:

* choose a name
* select "installed app"
* set http://localhost:6414/ as "redirect uri"
* solve the "I'm not a robot" reCAPTCHA if needed
* click "create app"

* copy the client id (third line, under your application's name and "installed app") and put it in your configuration file as "client-id"
* use "Python:<application name>:v1.0 (by /u/<username>)" as user-agent and replace <application name> and <username> accordingly (see Reddit's API access rules)
* clear your cache to delete any remaining access-token entries. (gallery-dl --clear-cache reddit)
* get a refresh-token for the new client-id (gallery-dl oauth:reddit)

extractor.smugmug.api-key & .api-secret

string


* login and Apply for an API Key
* use a random name and description, set "Type" to "Application", "Platform" to "All", and "Use" to "Non-Commercial"
* fill out the two checkboxes at the bottom and click "Apply"
* copy API Key and API Secret and put them in your configuration file as "api-key" and "api-secret"

extractor.tumblr.api-key & .api-secret

string


* login and visit Tumblr's Applications section
* click "Register application"
* fill out the form: use a random name and description, set https://example.org/ as "Application Website" and "Default callback URL"
* solve Google's "I'm not a robot" challenge and click "Register"
* click "Show secret key" (below "OAuth Consumer Key")
* copy your OAuth Consumer Key and Secret Key and put them in your configuration file as "api-key" and "api-secret"

CUSTOM TYPES

Date


* string
* integer


* "2019-01-01T00:00:00"
* "2019" with "%Y" as date-format
* 1546297200

A Date value represents a specific point in time.

* If given as string, it is parsed according to date-format.
* If given as integer, it is interpreted as UTC timestamp.

Duration


* float
* list with 2 floats
* string


* 2.85
* [1.5, 3.0]
* "2.85", "1.5-3.0"

A Duration represents a span of time in seconds.

* If given as a single float, it will be used as that exact value.
* If given as a list with 2 floating-point numbers a & b, a random value N will be chosen with uniform distribution such that a <= N <= b. (see random.uniform())
* If given as a string, it can either represent a single float value ("2.85") or a range ("1.5-3.0").

Path


* string
* list of strings


* "file.ext"
* "~/path/to/file.ext"
* "$HOME/path/to/file.ext"
* ["$HOME", "path", "to", "file.ext"]

A Path is a string representing the location of a file or directory.

Simple tilde expansion and environment variable expansion is supported.

In Windows environments, backslashes ("\") can, in addition to forward slashes ("/"), be used as path separators. Because backslashes are JSON's escape character, they themselves have to be escaped. The path C:\path\to\file.ext has therefore to be written as "C:\\path\\to\\file.ext" if you want to use backslashes.

Logging Configuration

object

{ "format" : "{asctime} {name}: {message}", "format-date": "%H:%M:%S", "path" : "~/log.txt", "encoding" : "ascii" }

{ "level" : "debug", "format": { "debug" : "debug: {message}", "info" : "[{name}] {message}", "warning": "Warning: {message}", "error" : "ERROR: {message}" } }

Extended logging output configuration.

* format
* General format string for logging messages or a dictionary with format strings for each loglevel.

In addition to the default LogRecord attributes, it is also possible to access the current extractor, job, path, and keywords objects and their attributes, for example "{extractor.url}", "{path.filename}", "{keywords.title}"
* Default: "[{name}][{levelname}] {message}"
* format-date
* Format string for {asctime} fields in logging messages (see strftime() directives)
* Default: "%Y-%m-%d %H:%M:%S"
* level
* Minimum logging message level (one of "debug", "info", "warning", "error", "exception")
* Default: "info"
* path
* Path to the output file
* mode
* Mode in which the file is opened; use "w" to truncate or "a" to append (see open())
* Default: "w"
* encoding
* File encoding
* Default: "utf-8"

Note: path, mode, and encoding are only applied when configuring logging output to a file.

Postprocessor Configuration

object

{ "name": "mtime" }

{ "name" : "zip", "compression": "store", "extension" : "cbz", "filter" : "extension not in ('zip', 'rar')", "whitelist" : ["mangadex", "exhentai", "nhentai"] }

An object containing a "name" attribute specifying the post-processor type, as well as any of its options.

It is possible to set a "filter" expression similar to image-filter to only run a post-processor conditionally.

It is also possible to set a "whitelist" or "blacklist" to only enable or disable a post-processor for the specified extractor categories.

The available post-processor types are

* classify: Categorize files by filename extension
* compare: Compare versions of the same file and replace/enumerate them on mismatch (requires downloader.*.part = true and extractor.*.skip = false)
* exec: Execute external commands
* metadata: Write metadata to separate files
* mtime: Set file modification time according to its metadata
* python: Call Python functions
* ugoira: Convert Pixiv Ugoira to WebM using FFmpeg
* zip: Store files in a ZIP archive

BUGS

https://github.com/mikf/gallery-dl/issues

AUTHORS

Mike Fährmann <mike_faehrmann@web.de>
and https://github.com/mikf/gallery-dl/graphs/contributors

SEE ALSO

gallery-dl(1)

2024-03-23 1.26.9