author     ng0 <ng0@n0.is>  2019-09-12 15:09:29 +0000
committer  ng0 <ng0@n0.is>  2019-09-12 15:09:29 +0000
commit     5d5a61dc56228532927a7786375a13d7ae749180 (patch)
tree       1b4b73a0016f005655aaa18982df8383f790527f /docs
parent     bc555b4f37422efffcc9969f645f9dbf3cb444bd (diff)
parent     9cd755e1d768bbf228e7c9faf223b7459f7e0105 (diff)
download   gnurl-5d5a61dc56228532927a7786375a13d7ae749180.tar.gz
           gnurl-5d5a61dc56228532927a7786375a13d7ae749180.tar.bz2
           gnurl-5d5a61dc56228532927a7786375a13d7ae749180.zip
Merge tag 'curl-7_66_0'
7.66.0
Diffstat (limited to 'docs')
-rw-r--r--  docs/ALTSVC.md | 64
-rw-r--r--  docs/DEPRECATE.md | 15
-rw-r--r--  docs/EXPERIMENTAL.md | 22
-rw-r--r--  docs/HTTP3.md | 121
-rw-r--r--  docs/INTERNALS.md | 2
-rw-r--r--  docs/KNOWN_BUGS | 68
-rw-r--r--  docs/MANUAL | 1058
-rw-r--r--  docs/MANUAL.md | 1011
-rw-r--r--  docs/Makefile.am | 3
-rw-r--r--  docs/PARALLEL-TRANSFERS.md | 58
-rw-r--r--  docs/ROADMAP.md | 65
-rw-r--r--  docs/THANKS | 25
-rw-r--r--  docs/THANKS-filter | 1
-rw-r--r--  docs/TODO | 238
-rw-r--r--  docs/cmdline-opts/Makefile.inc | 7
-rw-r--r--  docs/cmdline-opts/config.d | 2
-rw-r--r--  docs/cmdline-opts/http0.9.d | 3
-rw-r--r--  docs/cmdline-opts/http2.d | 1
-rw-r--r--  docs/cmdline-opts/http3.d | 19
-rw-r--r--  docs/cmdline-opts/parallel-max.d | 9
-rw-r--r--  docs/cmdline-opts/parallel.d | 7
-rw-r--r--  docs/cmdline-opts/retry.d | 3
-rw-r--r--  docs/cmdline-opts/sasl-authzid.d | 11
-rw-r--r--  docs/examples/Makefile.inc | 5
-rw-r--r--  docs/examples/altsvc.c | 56
-rw-r--r--  docs/examples/curlx.c | 6
-rw-r--r--  docs/examples/ephiperfifo.c | 42
-rw-r--r--  docs/examples/hiperfifo.c | 52
-rw-r--r--  docs/examples/http3-present.c | 47
-rw-r--r--  docs/examples/http3.c | 54
-rw-r--r--  docs/examples/imap-authzid.c | 71
-rw-r--r--  docs/examples/pop3-authzid.c | 70
-rw-r--r--  docs/examples/smtp-authzid.c | 161
-rw-r--r--  docs/libcurl/Makefile.inc | 1
-rw-r--r--  docs/libcurl/curl_multi_poll.3 | 110
-rw-r--r--  docs/libcurl/gnurl_easy_getinfo.3 | 3
-rw-r--r--  docs/libcurl/gnurl_easy_setopt.3 | 2
-rw-r--r--  docs/libcurl/gnurl_global_init_mem.3 | 4
-rw-r--r--  docs/libcurl/gnurl_version_info.3 | 103
-rw-r--r--  docs/libcurl/libgnurl-errors.3 | 2
-rw-r--r--  docs/libcurl/opts/CURLINFO_RETRY_AFTER.3 | 63
-rw-r--r--  docs/libcurl/opts/CURLOPT_SASL_AUTHZID.3 | 64
-rw-r--r--  docs/libcurl/opts/GNURLINFO_APPCONNECT_TIME.3 | 4
-rw-r--r--  docs/libcurl/opts/GNURLINFO_APPCONNECT_TIME_T.3 | 4
-rw-r--r--  docs/libcurl/opts/GNURLINFO_CONNECT_TIME.3 | 4
-rw-r--r--  docs/libcurl/opts/GNURLINFO_CONNECT_TIME_T.3 | 5
-rw-r--r--  docs/libcurl/opts/GNURLINFO_HTTP_VERSION.3 | 9
-rw-r--r--  docs/libcurl/opts/GNURLINFO_NAMELOOKUP_TIME.3 | 4
-rw-r--r--  docs/libcurl/opts/GNURLINFO_NAMELOOKUP_TIME_T.3 | 4
-rw-r--r--  docs/libcurl/opts/GNURLINFO_PRETRANSFER_TIME.3 | 4
-rw-r--r--  docs/libcurl/opts/GNURLINFO_PRETRANSFER_TIME_T.3 | 4
-rw-r--r--  docs/libcurl/opts/GNURLINFO_STARTTRANSFER_TIME.3 | 4
-rw-r--r--  docs/libcurl/opts/GNURLINFO_STARTTRANSFER_TIME_T.3 | 4
-rw-r--r--  docs/libcurl/opts/GNURLINFO_TOTAL_TIME.3 | 4
-rw-r--r--  docs/libcurl/opts/GNURLINFO_TOTAL_TIME_T.3 | 4
-rw-r--r--  docs/libcurl/opts/GNURLOPT_ALTSVC.3 | 2
-rw-r--r--  docs/libcurl/opts/GNURLOPT_ALTSVC_CTRL.3 | 7
-rw-r--r--  docs/libcurl/opts/GNURLOPT_HEADERFUNCTION.3 | 5
-rw-r--r--  docs/libcurl/opts/GNURLOPT_HTTP09_ALLOWED.3 | 10
-rw-r--r--  docs/libcurl/opts/GNURLOPT_HTTP_VERSION.3 | 12
-rw-r--r--  docs/libcurl/opts/GNURLOPT_POST.3 | 5
-rw-r--r--  docs/libcurl/opts/GNURLOPT_PROXY_SSL_VERIFYHOST.3 | 13
-rw-r--r--  docs/libcurl/opts/GNURLOPT_READFUNCTION.3 | 33
-rw-r--r--  docs/libcurl/opts/GNURLOPT_SSL_VERIFYHOST.3 | 16
-rw-r--r--  docs/libcurl/opts/Makefile.inc | 2
-rw-r--r--  docs/libcurl/symbols-in-versions | 7
66 files changed, 2335 insertions(+), 1564 deletions(-)
diff --git a/docs/ALTSVC.md b/docs/ALTSVC.md
index 5aca1c950..48401415b 100644
--- a/docs/ALTSVC.md
+++ b/docs/ALTSVC.md
@@ -2,21 +2,6 @@
curl features **EXPERIMENTAL** support for the Alt-Svc: HTTP header.
-## Experimental
-
-Experimental support in curl means:
-
-1. Experimental features are provided to allow users to try them out and
- provide feedback on functionality and API etc before they ship and get
- "carved in stone".
-2. You must enable the feature when invoking configure as otherwise curl will
- not be built with the feature present.
-3. We strongly advice against using this feature in production.
-4. **We reserve the right to change behavior** of the feature without sticking
- to our API/ABI rules as we do for regular features, as long as it is marked
- experimental.
-5. Experimental features are clearly marked so in documentation. Beware.
-
## Enable Alt-Svc in build
`./configure --enable-alt-svc`
@@ -25,35 +10,30 @@ Experimental support in curl means:
[RFC 7838](https://tools.ietf.org/html/rfc7838)
-## What works
-
-- read alt-svc file from disk
-- write alt-svc file from disk
-- parse `Alt-Svc:` response headers, including `ma`, `clear` and `persist`.
-- replaces old entries when new alternatives are received
-- unit tests to verify most of this functionality (test 1654)
-- act on `Alt-Svc:` response headers
-- build conditionally on `configure --enable-alt-svc` only, feature marked as
- **EXPERIMENTAL**
-- implement `CURLOPT_ALTSVC_CTRL`
-- implement `CURLOPT_ALTSVC`
-- document `CURLOPT_ALTSVC_CTRL`
-- document `CURLOPT_ALTSVC`
-- document `--alt-svc`
-- add `CURL_VERSION_ALTSVC`
-- make `curl -V` show 'alt-svc' as a feature if built-in
-- support `curl --alt-svc [file]` to enable caching, using that file
-- make `tests/runtests.pl` able to filter tests on the feature `alt-svc`
-- actually use the existing in-memory alt-svc cache for outgoing connections
-- alt-svc cache expiry
-- test 355 and 356 verify curl acting on Alt-Svc, received from header and
- loaded from cache. The latter needs a debug build since it enables Alt-Svc
- for plain HTTP.
-
-## What is left
+# Alt-Svc cache file format
+
+This is a text-based file with one line per entry, and each line consists of
+nine space-separated fields.
+
+## Example
+
+ h2 quic.tech 8443 h3-22 quic.tech 8443 "20190808 06:18:37" 0 0
+
+## Fields
+
+1. The ALPN id for the source origin
+2. The host name for the source origin
+3. The port number for the source origin
+4. The ALPN id for the destination host
+5. The host name for the destination host
+6. The port number for the destination host
+7. The expiration date and time of this entry within double quotes. The date format is "YYYYMMDD HH:MM:SS" and the time zone is GMT.
+8. Boolean (1 or 0) if "persist" was set for this entry
+9. Integer priority value (not currently used)
+
+# TODO
- handle multiple response headers, when one of them says `clear` (should
override them all)
- using `Age:` value for caching age as per spec
- `CURLALTSVC_IMMEDIATELY` support
-- `CURLALTSVC_ALTUSED` support
diff --git a/docs/DEPRECATE.md b/docs/DEPRECATE.md
index f04f0eeaa..4f4ef8ab6 100644
--- a/docs/DEPRECATE.md
+++ b/docs/DEPRECATE.md
@@ -5,21 +5,6 @@ email the curl-library mailing list as soon as possible and explain to us why
this is a problem for you and how your use case can't be satisfied properly
using a work around.
-## HTTP/0.9
-
-Supporting this is non-obvious and might even come as a surprise to some
-users. Potentially even being a security risk in some cases.
-
-### State
-
-curl 7.64.0 introduces options to disable/enable support for this protocol
-version. The default remains supported for now.
-
-### Removal
-
-The support for HTTP/0.9 will be switched to disabled by default in 6 months,
-in the September 2019 release (possibly called curl 7.68.0).
-
## PolarSSL
The polarssl TLS library has not had an update in over three years. The last
diff --git a/docs/EXPERIMENTAL.md b/docs/EXPERIMENTAL.md
new file mode 100644
index 000000000..6c33bcf53
--- /dev/null
+++ b/docs/EXPERIMENTAL.md
@@ -0,0 +1,22 @@
+# Experimental
+
+Some features and functionality in curl and libcurl are considered
+**EXPERIMENTAL**.
+
+Experimental support in curl means:
+
+1. Experimental features are provided to allow users to try them out and
+ provide feedback on functionality and API etc before they ship and get
+ "carved in stone".
+2. You must enable the feature when invoking configure as otherwise curl will
+ not be built with the feature present.
+3. We strongly advise against using this feature in production.
+4. **We reserve the right to change behavior** of the feature without sticking
+   to our API/ABI rules as we do for regular features, as long as it is marked
+   experimental.
+5. Experimental features are clearly marked as such in documentation. Beware.
+
+## Experimental features right now
+
+ - HTTP/3 support and options
+ - alt-svc support and options
diff --git a/docs/HTTP3.md b/docs/HTTP3.md
new file mode 100644
index 000000000..1e9b183c4
--- /dev/null
+++ b/docs/HTTP3.md
@@ -0,0 +1,121 @@
+# HTTP3 (and QUIC)
+
+## Resources
+
+[HTTP/3 Explained](https://daniel.haxx.se/http3-explained/) - the online free
+book describing the protocols involved.
+
+[QUIC implementation](https://github.com/curl/curl/wiki/QUIC-implementation) -
+the wiki page describing the plan for how to support QUIC and HTTP/3 in curl
+and libcurl.
+
+[quicwg.org](https://quicwg.org/) - home of the official protocol drafts
+
+## QUIC libraries
+
+QUIC libraries we're experimenting with:
+
+[ngtcp2](https://github.com/ngtcp2/ngtcp2)
+
+[quiche](https://github.com/cloudflare/quiche)
+
+## Experimental!
+
+HTTP/3 and QUIC support in curl is considered **EXPERIMENTAL** until further
+notice. It needs to be enabled at build-time.
+
+Further development and tweaking of the HTTP/3 support in curl will happen
+in the master branch using pull-requests, just like ordinary changes.
+
+# ngtcp2 version
+
+## Build
+
+Build (patched) OpenSSL
+
+ % git clone --depth 1 -b openssl-quic-draft-22 https://github.com/tatsuhiro-t/openssl
+ % cd openssl
+ % ./config enable-tls1_3 --prefix=<somewhere1>
+ % make
+ % make install_sw
+
+Build nghttp3
+
+ % cd ..
+ % git clone https://github.com/ngtcp2/nghttp3
+ % cd nghttp3
+ % autoreconf -i
+ % ./configure --prefix=<somewhere2> --enable-lib-only
+ % make
+ % make install
+
+Build ngtcp2
+
+ % cd ..
+ % git clone -b draft-22 https://github.com/ngtcp2/ngtcp2
+ % cd ngtcp2
+ % autoreconf -i
+ % ./configure PKG_CONFIG_PATH=<somewhere1>/lib/pkgconfig:<somewhere2>/lib/pkgconfig LDFLAGS="-Wl,-rpath,<somewhere1>/lib" --prefix=<somewhere3>
+ % make
+ % make install
+
+Build curl
+
+ % cd ..
+ % git clone https://github.com/curl/curl
+ % cd curl
+ % ./buildconf
+ % LDFLAGS="-Wl,-rpath,<somewhere1>/lib" ./configure --with-ssl=<somewhere1> --with-nghttp3=<somewhere2> --with-ngtcp2=<somewhere3>
+ % make
+
+## Running
+
+Make sure the custom OpenSSL library is the one used at run-time, as otherwise
+you'll just get ld.so linker errors.
+
+## Invoke from command line
+
+ curl --http3 https://nghttp2.org:8443/
+
+# quiche version
+
+## Build
+
+Clone quiche and BoringSSL:
+
+ % git clone --recursive https://github.com/cloudflare/quiche
+
+Build BoringSSL (it needs to be built manually so it can be reused with curl):
+
+ % cd quiche/deps/boringssl
+ % mkdir build
+ % cd build
+ % cmake -DCMAKE_POSITION_INDEPENDENT_CODE=on ..
+ % make -j`nproc`
+ % cd ..
+ % mkdir -p .openssl/lib
+ % cp build/crypto/libcrypto.a build/ssl/libssl.a .openssl/lib
+ % ln -s $PWD/include .openssl
+
+Build quiche:
+
+ % cd ../..
+ % QUICHE_BSSL_PATH=$PWD/deps/boringssl cargo build --release --features pkg-config-meta
+
+Clone and build curl:
+
+ % cd ..
+ % git clone https://github.com/curl/curl
+ % cd curl
+ % ./buildconf
+ % ./configure LDFLAGS="-Wl,-rpath,$PWD/../quiche/target/release" --with-ssl=$PWD/../quiche/deps/boringssl/.openssl --with-quiche=$PWD/../quiche/target/release
+ % make -j`nproc`
+
+## Running
+
+Make an HTTP/3 request.
+
+ % src/curl --http3 https://cloudflare-quic.com/
+ % src/curl --http3 https://facebook.com/
+ % src/curl --http3 https://quic.aiortc.org:4433/
+ % src/curl --http3 https://quic.rocks:4433/
diff --git a/docs/INTERNALS.md b/docs/INTERNALS.md
index cd004e8f4..9ae722898 100644
--- a/docs/INTERNALS.md
+++ b/docs/INTERNALS.md
@@ -773,7 +773,7 @@ Track Down Memory Leaks
Add a line in your application code:
- `curl_memdebug("dump");`
+ `curl_dbg_memdebug("dump");`
This will make the malloc debug system output a full trace of all resource
using functions to the given file name. Make sure you rebuild your program
diff --git a/docs/KNOWN_BUGS b/docs/KNOWN_BUGS
index e385ef597..5850f7fbd 100644
--- a/docs/KNOWN_BUGS
+++ b/docs/KNOWN_BUGS
@@ -13,7 +13,6 @@ problems may have been fixed or changed somewhat since this was written!
1. HTTP
1.1 CURLFORM_CONTENTLEN in an array
- 1.2 Disabling HTTP Pipelining
1.3 STARTTRANSFER time is wrong for HTTP POSTs
1.4 multipart formposts file name encoding
1.5 Expect-100 meets 417
@@ -21,7 +20,6 @@ problems may have been fixed or changed somewhat since this was written!
1.7 Deflate error after all content was received
1.8 DoH isn't used for all name resolves when enabled
1.9 HTTP/2 frames while in the connection pool kill reuse
- 1.10 Strips trailing dot from host name
1.11 CURLOPT_SEEKFUNCTION not called with CURLFORM_STREAM
2. TLS
@@ -48,6 +46,7 @@ problems may have been fixed or changed somewhat since this was written!
4.5 Improve --data-urlencode space encoding
5. Build and portability issues
+ 5.1 USE_UNIX_SOCKETS on Windows
5.2 curl-config --libs contains private details
5.3 curl compiled on OSX 10.13 failed to run on OSX 10.10
5.4 Cannot compile against a static build of OpenLDAP
@@ -98,6 +97,7 @@ problems may have been fixed or changed somewhat since this was written!
11.4 HTTP test server 'connection-monitor' problems
11.5 Connection information when using TCP Fast Open
11.6 slow connect to localhost on Windows
+ 11.7 signal-based resolver timeouts
12. LDAP and OpenLDAP
12.1 OpenLDAP hangs after returning results
@@ -121,14 +121,6 @@ problems may have been fixed or changed somewhat since this was written!
see the now closed related issue:
https://github.com/curl/curl/issues/608
-1.2 Disabling HTTP Pipelining
-
- Disabling HTTP Pipelining when there are ongoing transfers can lead to
- heap corruption and crash. https://curl.haxx.se/bug/view.cgi?id=1411
-
- Similarly, removing a handle when pipelining corrupts data:
- https://github.com/curl/curl/issues/2101
-
1.3 STARTTRANSFER time is wrong for HTTP POSTs
Wrong STARTTRANSFER timer accounting for POST requests Timer works fine with
@@ -189,42 +181,6 @@ problems may have been fixed or changed somewhat since this was written!
This is *best* fixed by adding monitoring to connections while they are kept
in the pool so that pings can be responded to appropriately.
-1.10 Strips trailing dot from host name
-
- When given a URL with a trailing dot for the host name part:
- "https://example.com./", libcurl will strip off the dot and use the name
- without a dot internally and send it dot-less in HTTP Host: headers and in
- the TLS SNI field. For the purpose of resolving the name to an address
- the hostname is used as is without any change.
-
- The HTTP part violates RFC 7230 section 5.4 but the SNI part is accordance
- with RFC 6066 section 3.
-
- URLs using these trailing dots are very rare in the wild and we have not seen
- or gotten any real-world problems with such URLs reported. The popular
- browsers seem to have stayed with not stripping the dot for both uses (thus
- they violate RFC 6066 instead of RFC 7230).
-
- Daniel took the discussion to the HTTPbis mailing list in March 2016:
- https://lists.w3.org/Archives/Public/ietf-http-wg/2016JanMar/0430.html but
- there was not major rush or interest to fix this. The impression I get is
- that most HTTP people rather not rock the boat now and instead prioritize web
- compatibility rather than to strictly adhere to these RFCs.
-
- Our current approach allows a knowing client to send a custom HTTP header
- with the dot added.
-
- In a few cases there is a difference in name resolving to IP addresses with
- a trailing dot, but it can be noted that many HTTP servers will not happily
- accept the trailing dot there unless that has been specifically configured
- to be a fine virtual host.
-
- If URLs with trailing dots for host names become more popular or even just
- used more than for just plain fun experiments, I'm sure we will have reason
- to go back and reconsider.
-
- See https://github.com/curl/curl/issues/716 for the discussion.
-
1.11 CURLOPT_SEEKFUNCTION not called with CURLFORM_STREAM
I'm using libcurl to POST form data using a FILE* with the CURLFORM_STREAM
@@ -389,6 +345,13 @@ problems may have been fixed or changed somewhat since this was written!
5. Build and portability issues
+5.1 USE_UNIX_SOCKETS on Windows
+
+ Due to incorrect CMake checks for the presence of the feature, it will never
+ be enabled for Windows in a CMake build.
+
+ See https://github.com/curl/curl/issues/4040
+
5.2 curl-config --libs contains private details
"curl-config --libs" will include details set in LDFLAGS when configure is
@@ -728,6 +691,19 @@ problems may have been fixed or changed somewhat since this was written!
https://github.com/curl/curl/issues/2281
+11.7 signal-based resolver timeouts
+
+ libcurl built without an asynchronous resolver library uses alarm() to time
+ out DNS lookups. When a timeout occurs, this causes libcurl to jump from the
+ signal handler back into the library with a sigsetjmp, which effectively
+ causes libcurl to continue running within the signal handler. This is
+ non-portable and could cause problems on some platforms. A discussion on the
+ problem is available at https://curl.haxx.se/mail/lib-2008-09/0197.html
+
+ Also, alarm() provides timeout resolution only to the nearest second. alarm
+ ought to be replaced by setitimer on systems that support it.
+
+
12. LDAP and OpenLDAP
12.1 OpenLDAP hangs after returning results
diff --git a/docs/MANUAL b/docs/MANUAL
deleted file mode 100644
index 59b97427c..000000000
--- a/docs/MANUAL
+++ /dev/null
@@ -1,1058 +0,0 @@
-LATEST VERSION
-
- You always find news about what's going on as well as the latest versions
- from the curl web pages, located at:
-
- https://curl.haxx.se
-
-SIMPLE USAGE
-
- Get the main page from Netscape's web-server:
-
- curl http://www.netscape.com/
-
- Get the README file the user's home directory at funet's ftp-server:
-
- curl ftp://ftp.funet.fi/README
-
- Get a web page from a server using port 8000:
-
- curl http://www.weirdserver.com:8000/
-
- Get a directory listing of an FTP site:
-
- curl ftp://cool.haxx.se/
-
- Get the definition of curl from a dictionary:
-
- curl dict://dict.org/m:curl
-
- Fetch two documents at once:
-
- curl ftp://cool.haxx.se/ http://www.weirdserver.com:8000/
-
- Get a file off an FTPS server:
-
- curl ftps://files.are.secure.com/secrets.txt
-
- or use the more appropriate FTPS way to get the same file:
-
- curl --ftp-ssl ftp://files.are.secure.com/secrets.txt
-
- Get a file from an SSH server using SFTP:
-
- curl -u username sftp://example.com/etc/issue
-
- Get a file from an SSH server using SCP using a private key
- (not password-protected) to authenticate:
-
- curl -u username: --key ~/.ssh/id_rsa \
- scp://example.com/~/file.txt
-
- Get a file from an SSH server using SCP using a private key
- (password-protected) to authenticate:
-
- curl -u username: --key ~/.ssh/id_rsa --pass private_key_password \
- scp://example.com/~/file.txt
-
- Get the main page from an IPv6 web server:
-
- curl "http://[2001:1890:1112:1::20]/"
-
- Get a file from an SMB server:
-
- curl -u "domain\username:passwd" smb://server.example.com/share/file.txt
-
-DOWNLOAD TO A FILE
-
- Get a web page and store in a local file with a specific name:
-
- curl -o thatpage.html http://www.netscape.com/
-
- Get a web page and store in a local file, make the local file get the name
- of the remote document (if no file name part is specified in the URL, this
- will fail):
-
- curl -O http://www.netscape.com/index.html
-
- Fetch two files and store them with their remote names:
-
- curl -O www.haxx.se/index.html -O curl.haxx.se/download.html
-
-USING PASSWORDS
-
- FTP
-
- To ftp files using name+passwd, include them in the URL like:
-
- curl ftp://name:passwd@machine.domain:port/full/path/to/file
-
- or specify them with the -u flag like
-
- curl -u name:passwd ftp://machine.domain:port/full/path/to/file
-
- FTPS
-
- It is just like for FTP, but you may also want to specify and use
- SSL-specific options for certificates etc.
-
- Note that using FTPS:// as prefix is the "implicit" way as described in the
- standards while the recommended "explicit" way is done by using FTP:// and
- the --ftp-ssl option.
-
- SFTP / SCP
-
- This is similar to FTP, but you can use the --key option to specify a
- private key to use instead of a password. Note that the private key may
- itself be protected by a password that is unrelated to the login password
- of the remote system; this password is specified using the --pass option.
- Typically, curl will automatically extract the public key from the private
- key file, but in cases where curl does not have the proper library support,
- a matching public key file must be specified using the --pubkey option.
-
- HTTP
-
- Curl also supports user and password in HTTP URLs, thus you can pick a file
- like:
-
- curl http://name:passwd@machine.domain/full/path/to/file
-
- or specify user and password separately like in
-
- curl -u name:passwd http://machine.domain/full/path/to/file
-
- HTTP offers many different methods of authentication and curl supports
- several: Basic, Digest, NTLM and Negotiate (SPNEGO). Without telling which
- method to use, curl defaults to Basic. You can also ask curl to pick the
- most secure ones out of the ones that the server accepts for the given URL,
- by using --anyauth.
-
- NOTE! According to the URL specification, HTTP URLs can not contain a user
- and password, so that style will not work when using curl via a proxy, even
- though curl allows it at other times. When using a proxy, you _must_ use
- the -u style for user and password.
-
- HTTPS
-
- Probably most commonly used with private certificates, as explained below.
-
-PROXY
-
- curl supports both HTTP and SOCKS proxy servers, with optional authentication.
- It does not have special support for FTP proxy servers since there are no
- standards for those, but it can still be made to work with many of them. You
- can also use both HTTP and SOCKS proxies to transfer files to and from FTP
- servers.
-
- Get an ftp file using an HTTP proxy named my-proxy that uses port 888:
-
- curl -x my-proxy:888 ftp://ftp.leachsite.com/README
-
- Get a file from an HTTP server that requires user and password, using the
- same proxy as above:
-
- curl -u user:passwd -x my-proxy:888 http://www.get.this/
-
- Some proxies require special authentication. Specify by using -U as above:
-
- curl -U user:passwd -x my-proxy:888 http://www.get.this/
-
- A comma-separated list of hosts and domains which do not use the proxy can
- be specified as:
-
- curl --noproxy localhost,get.this -x my-proxy:888 http://www.get.this/
-
- If the proxy is specified with --proxy1.0 instead of --proxy or -x, then
- curl will use HTTP/1.0 instead of HTTP/1.1 for any CONNECT attempts.
-
- curl also supports SOCKS4 and SOCKS5 proxies with --socks4 and --socks5.
-
- See also the environment variables Curl supports that offer further proxy
- control.
-
- Most FTP proxy servers are set up to appear as a normal FTP server from the
- client's perspective, with special commands to select the remote FTP server.
- curl supports the -u, -Q and --ftp-account options that can be used to
- set up transfers through many FTP proxies. For example, a file can be
- uploaded to a remote FTP server using a Blue Coat FTP proxy with the
- options:
-
- curl -u "Remote-FTP-Username@remote.ftp.server Proxy-Username:Remote-Pass" \
- --ftp-account Proxy-Password --upload-file local-file \
- ftp://my-ftp.proxy.server:21/remote/upload/path/
-
- See the manual for your FTP proxy to determine the form it expects to set up
- transfers, and curl's -v option to see exactly what curl is sending.
-
-RANGES
-
- HTTP 1.1 introduced byte-ranges. Using this, a client can request
- to get only one or more subparts of a specified document. Curl supports
- this with the -r flag.
-
- Get the first 100 bytes of a document:
-
- curl -r 0-99 http://www.get.this/
-
- Get the last 500 bytes of a document:
-
- curl -r -500 http://www.get.this/
-
- Curl also supports simple ranges for FTP files as well. Then you can only
- specify start and stop position.
-
- Get the first 100 bytes of a document using FTP:
-
- curl -r 0-99 ftp://www.get.this/README
-
-UPLOADING
-
- FTP / FTPS / SFTP / SCP
-
- Upload all data on stdin to a specified server:
-
- curl -T - ftp://ftp.upload.com/myfile
-
- Upload data from a specified file, login with user and password:
-
- curl -T uploadfile -u user:passwd ftp://ftp.upload.com/myfile
-
- Upload a local file to the remote site, and use the local file name at the remote
- site too:
-
- curl -T uploadfile -u user:passwd ftp://ftp.upload.com/
-
- Upload a local file to get appended to the remote file:
-
- curl -T localfile -a ftp://ftp.upload.com/remotefile
-
- Curl also supports ftp upload through a proxy, but only if the proxy is
- configured to allow that kind of tunneling. If it does, you can run curl in
- a fashion similar to:
-
- curl --proxytunnel -x proxy:port -T localfile ftp.upload.com
-
-SMB / SMBS
-
- curl -T file.txt -u "domain\username:passwd" \
- smb://server.example.com/share/
-
- HTTP
-
- Upload all data on stdin to a specified HTTP site:
-
- curl -T - http://www.upload.com/myfile
-
- Note that the HTTP server must have been configured to accept PUT before
- this can be done successfully.
-
- For other ways to do HTTP data upload, see the POST section below.
-
-VERBOSE / DEBUG
-
- If curl fails where it isn't supposed to, if the servers don't let you in,
- if you can't understand the responses: use the -v flag to get verbose
- fetching. Curl will output lots of info and what it sends and receives in
- order to let the user see all client-server interaction (but it won't show
- you the actual data).
-
- curl -v ftp://ftp.upload.com/
-
- To get even more details and information on what curl does, try using the
- --trace or --trace-ascii options with a given file name to log to, like
- this:
-
- curl --trace trace.txt www.haxx.se
-
-
-DETAILED INFORMATION
-
- Different protocols provide different ways of getting detailed information
- about specific files/documents. To get curl to show detailed information
- about a single file, you should use -I/--head option. It displays all
- available info on a single file for HTTP and FTP. The HTTP information is a
- lot more extensive.
-
- For HTTP, you can get the header information (the same as -I would show)
- shown before the data by using -i/--include. Curl understands the
- -D/--dump-header option when getting files from both FTP and HTTP, and it
- will then store the headers in the specified file.
-
- Store the HTTP headers in a separate file (headers.txt in the example):
-
- curl --dump-header headers.txt curl.haxx.se
-
- Note that headers stored in a separate file can be very useful at a later
- time if you want curl to use cookies sent by the server. More about that in
- the cookies section.
-
-POST (HTTP)
-
- It's easy to post data using curl. This is done using the -d <data>
- option. The post data must be urlencoded.
-
- Post a simple "name" and "phone" guestbook.
-
- curl -d "name=Rafael%20Sagula&phone=3320780" \
- http://www.where.com/guest.cgi
-
- How to post a form with curl, lesson #1:
-
- Dig out all the <input> tags in the form that you want to fill in.
-
- If there's a "normal" post, you use -d to post. -d takes a full "post
- string", which is in the format
-
- <variable1>=<data1>&<variable2>=<data2>&...
-
- The 'variable' names are the names set with "name=" in the <input> tags, and
- the data is the contents you want to fill in for the inputs. The data *must*
- be properly URL encoded. That means you replace space with + and that you
- replace weird letters with %XX where XX is the hexadecimal representation of
- the letter's ASCII code.
-
- Example:
-
- (page located at http://www.formpost.com/getthis/
-
- <form action="post.cgi" method="post">
- <input name=user size=10>
- <input name=pass type=password size=10>
- <input name=id type=hidden value="blablabla">
- <input name=ding value="submit">
- </form>
-
- We want to enter user 'foobar' with password '12345'.
-
- To post to this, you enter a curl command line like:
-
- curl -d "user=foobar&pass=12345&id=blablabla&ding=submit" \
- http://www.formpost.com/getthis/post.cgi
-
-
- While -d uses the application/x-www-form-urlencoded mime-type, generally
- understood by CGI's and similar, curl also supports the more capable
- multipart/form-data type. This latter type supports things like file upload.
-
- -F accepts parameters like -F "name=contents". If you want the contents to
- be read from a file, use <@filename> as contents. When specifying a file,
- you can also specify the file content type by appending ';type=<mime type>'
- to the file name. You can also post the contents of several files in one
- field. For example, the field name 'coolfiles' is used to send three files,
- with different content types using the following syntax:
-
- curl -F "coolfiles=@fil1.gif;type=image/gif,fil2.txt,fil3.html" \
- http://www.post.com/postit.cgi
-
- If the content-type is not specified, curl will try to guess from the file
- extension (it only knows a few), or use the previously specified type (from
- an earlier file if several files are specified in a list) or else it will
- use the default type 'application/octet-stream'.
-
- Emulate a fill-in form with -F. Let's say you fill in three fields in a
- form. One field is a file name which to post, one field is your name and one
- field is a file description. We want to post the file we have written named
- "cooltext.txt". To let curl do the posting of this data instead of your
- favourite browser, you have to read the HTML source of the form page and
- find the names of the input fields. In our example, the input field names
- are 'file', 'yourname' and 'filedescription'.
-
- curl -F "file=@cooltext.txt" -F "yourname=Daniel" \
- -F "filedescription=Cool text file with cool text inside" \
- http://www.post.com/postit.cgi
-
- To send two files in one post you can do it in two ways:
-
- 1. Send multiple files in a single "field" with a single field name:
-
- curl -F "pictures=@dog.gif,cat.gif"
-
- 2. Send two fields with two field names:
-
- curl -F "docpicture=@dog.gif" -F "catpicture=@cat.gif"
-
- To send a field value literally without interpreting a leading '@'
- or '<', or an embedded ';type=', use --form-string instead of
- -F. This is recommended when the value is obtained from a user or
- some other unpredictable source. Under these circumstances, using
- -F instead of --form-string would allow a user to trick curl into
- uploading a file.
-
-REFERRER
-
- An HTTP request has the option to include information about which address
- referred it to the actual page. Curl allows you to specify the
- referrer to be used on the command line. It is especially useful to
- fool or trick stupid servers or CGI scripts that rely on that information
- being available or contain certain data.
-
- curl -e www.coolsite.com http://www.showme.com/
-
- NOTE: The Referer: [sic] field is defined in the HTTP spec to be a full URL.
-
-USER AGENT
-
- An HTTP request has the option to include information about the browser
- that generated the request. Curl allows it to be specified on the command
- line. It is especially useful to fool or trick stupid servers or CGI
- scripts that only accept certain browsers.
-
- Example:
-
- curl -A 'Mozilla/3.0 (Win95; I)' http://www.nationsbank.com/
-
- Other common strings:
- 'Mozilla/3.0 (Win95; I)' Netscape Version 3 for Windows 95
- 'Mozilla/3.04 (Win95; U)' Netscape Version 3 for Windows 95
- 'Mozilla/2.02 (OS/2; U)' Netscape Version 2 for OS/2
- 'Mozilla/4.04 [en] (X11; U; AIX 4.2; Nav)' NS for AIX
- 'Mozilla/4.05 [en] (X11; U; Linux 2.0.32 i586)' NS for Linux
-
- Note that Internet Explorer tries hard to be compatible in every way:
- 'Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)' MSIE for W95
-
- Mozilla is not the only possible User-Agent name:
- 'Konqueror/1.0' KDE File Manager desktop client
- 'Lynx/2.7.1 libwww-FM/2.14' Lynx command line browser
-
-COOKIES
-
- Cookies are generally used by web servers to keep state information at the
- client's side. The server sets cookies by sending a response line in the
- headers that looks like 'Set-Cookie: <data>' where the data part then
- typically contains a set of NAME=VALUE pairs (separated by semicolons ';'
- like "NAME1=VALUE1; NAME2=VALUE2;"). The server can also specify for what
- path the "cookie" should be used (by specifying "path=value"), when the
- cookie should expire ("expires=DATE"), for what domain to use it
- ("domain=NAME") and if it should be used on secure connections only
- ("secure").
-
- If you've received a page from a server that contains a header like:
- Set-Cookie: sessionid=boo123; path="/foo";
-
- it means the server wants that first pair passed on when we get anything in
- a path beginning with "/foo".
-
- Example, get a page that wants my name passed in a cookie:
-
- curl -b "name=Daniel" www.sillypage.com
-
- Curl also has the ability to use previously received cookies in subsequent
- sessions. If you get cookies from a server and store them in a file in a
- manner similar to:
-
- curl --dump-header headers www.example.com
-
- ... you can then in a second connection to that (or another) site, use the
- cookies from the 'headers' file like:
-
- curl -b headers www.example.com
-
- While saving headers to a file is a working way to store cookies, it is
- however error-prone and not the preferred way to do this. Instead, make curl
- save the incoming cookies using the well-known netscape cookie format like
- this:
-
- curl -c cookies.txt www.example.com
-
- Note that by specifying -b you enable "cookie awareness" and with -L you
- can make curl follow a Location: header (which is often used in combination
- with cookies). So if a site sends cookies and a location, you can use a
- non-existent file to trigger the cookie awareness like:
-
- curl -L -b empty.txt www.example.com
-
- The file to read cookies from must be formatted using plain HTTP headers OR
- as netscape's cookie file. Curl will determine what kind it is based on the
- file contents. In the above command, curl will parse the header and store
- the cookies received from www.example.com. curl will send to the server the
- stored cookies which match the request as it follows the location. The
- file "empty.txt" may be a nonexistent file.
-
- To read and write cookies from a netscape cookie file, you can set both -b
- and -c to use the same file:
-
- curl -b cookies.txt -c cookies.txt www.example.com
-
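- As a sketch (hypothetical site, form fields and credentials), a login that
- stores the session cookies, followed by a request that sends them back,
- might look like:
-
-   curl -c cookies.txt -d "user=daniel&pass=secret" http://www.example.com/login
-   curl -b cookies.txt -c cookies.txt http://www.example.com/members/
-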
-PROGRESS METER
-
- The progress meter exists to show a user that something actually is
- happening. The different fields in the output have the following meaning:
-
- % Total % Received % Xferd Average Speed Time Curr.
- Dload Upload Total Current Left Speed
- 0 151M 0 38608 0 0 9406 0 4:41:43 0:00:04 4:41:39 9287
-
- From left-to-right:
- % - percentage completed of the whole transfer
- Total - total size of the whole expected transfer
- % - percentage completed of the download
- Received - currently downloaded amount of bytes
- % - percentage completed of the upload
- Xferd - currently uploaded amount of bytes
- Average Speed
- Dload - the average transfer speed of the download
- Average Speed
- Upload - the average transfer speed of the upload
- Time Total - expected time to complete the operation
- Time Current - time passed since the invocation
- Time Left - expected time left to completion
- Curr.Speed - the average transfer speed the last 5 seconds (the first
- 5 seconds of a transfer is based on less time of course.)
-
- The -# option will display a totally different progress bar that doesn't
- need much explanation!
-
-SPEED LIMIT
-
- Curl allows the user to set the transfer speed conditions that must be met
- to let the transfer keep going. By using the switches -y and -Y you
- can make curl abort transfers if the transfer speed is below the specified
- lowest limit for a specified time.
-
- To have curl abort the download if the speed is slower than 3000 bytes per
- second for 1 minute, run:
-
- curl -Y 3000 -y 60 www.far-away-site.com
-
- This can very well be used in combination with the overall time limit, so
- that the above operation must be completed in whole within 30 minutes:
-
- curl -m 1800 -Y 3000 -y 60 www.far-away-site.com
-
- Forcing curl not to transfer data faster than a given rate is also possible,
- which might be useful if you're using a limited bandwidth connection and you
- don't want your transfer to use all of it (sometimes referred to as
- "bandwidth throttle").
-
- Make curl transfer data no faster than 10 kilobytes per second:
-
- curl --limit-rate 10K www.far-away-site.com
-
- or
-
- curl --limit-rate 10240 www.far-away-site.com
-
- Or prevent curl from uploading data faster than 1 megabyte per second:
-
- curl -T upload --limit-rate 1M ftp://uploadshereplease.com
-
- When using the --limit-rate option, the transfer rate is regulated on a
- per-second basis, which will cause the total transfer speed to become lower
- than the given number, sometimes substantially lower if the transfer
- stalls for periods of time.
-
-CONFIG FILE
-
- Curl automatically tries to read the .curlrc file (or _curlrc file on win32
- systems) from the user's home directory on startup.
-
- The config file can be made up of normal command line switches, but you
- can also specify the long options without the dashes to make it more
- readable. You can separate the options and the parameter with spaces, or
- with = or :. Comments can be used within the file. If the first letter on a
- line is a '#'-symbol the rest of the line is treated as a comment.
-
- If you want the parameter to contain spaces, you must enclose the entire
- parameter within double quotes ("). Within those quotes, you specify a
- quote as \".
-
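- For example, a parameter whose value contains spaces and an embedded quote
- (a hypothetical referer value) would be written as:
-
-   # the value contains spaces and an embedded quote:
-   referer = "this is a \"quoted\" value"
-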
- NOTE: You must specify options and their arguments on the same line.
-
- Example, set default time out and proxy in a config file:
-
- # We want a 30 minute timeout:
- -m 1800
- # ... and we use a proxy for all accesses:
- proxy = proxy.our.domain.com:8080
-
- White spaces ARE significant at the end of lines, but all white spaces
- leading up to the first characters of each line are ignored.
-
- Prevent curl from reading the default file by using -q as the first command
- line parameter, like:
-
- curl -q www.thatsite.com
-
- Force curl to get and display a local help page in case it is invoked
- without a URL by making a config file similar to:
-
- # default url to get
- url = "http://help.with.curl.com/curlhelp.html"
-
- You can specify another config file to be read by using the -K/--config
- flag. If you set the config file name to "-" it'll read the config from stdin,
- which can be handy if you want to hide options from being visible in process
- tables etc:
-
- echo "user = user:passwd" | curl -K - http://that.secret.site.com
-
-EXTRA HEADERS
-
- When using curl in your own very special programs, you may end up needing
- to pass on your own custom headers when getting a web page. You can do
- this by using the -H flag.
-
- Example, send the header "X-you-and-me: yes" to the server when getting a
- page:
-
- curl -H "X-you-and-me: yes" www.love.com
-
- This can also be useful in case you want curl to send a different text in a
- header than it normally does. The -H header you specify then replaces the
- header curl would normally send. If you replace an internal header with an
- empty one, you prevent that header from being sent. To prevent the Host:
- header from being used:
-
- curl -H "Host:" www.server.com
-
-FTP and PATH NAMES
-
- Do note that when getting files with the ftp:// URL, the given path is
- relative to the directory you enter. To get the file 'README' from your home
- directory at your ftp site, do:
-
- curl ftp://user:passwd@my.site.com/README
-
- But if you want the README file from the root directory of that very same
- site, you need to specify the absolute file name:
-
- curl ftp://user:passwd@my.site.com//README
-
- (I.e. with an extra slash in front of the file name.)
-
-SFTP and SCP and PATH NAMES
-
- With sftp: and scp: URLs, the path name given is the absolute name on the
- server. To access a file relative to the remote user's home directory,
- prefix the file with /~/, such as:
-
- curl -u $USER sftp://home.example.com/~/.bashrc
-
-FTP and firewalls
-
- The FTP protocol requires one of the involved parties to open a second
- connection as soon as data is about to get transferred. There are two ways to
- do this.
-
- The default way for curl is to issue the PASV command which causes the
- server to open another port and await another connection performed by the
- client. This is good if the client is behind a firewall that doesn't allow
- incoming connections.
-
- curl ftp.download.com
-
- If the server, for example, is behind a firewall that doesn't allow connections
- on ports other than 21 (or if it just doesn't support the PASV command), the
- other way to do it is to use the PORT command and instruct the server to
- connect to the client on the given IP number and port (as parameters to the
- PORT command).
-
- The -P flag to curl supports a few different options. Your machine may have
- several IP-addresses and/or network interfaces and curl allows you to select
- which of them to use. The default address can also be used:
-
- curl -P - ftp.download.com
-
- Download with PORT but use the IP address of our 'le0' interface (this does
- not work on Windows):
-
- curl -P le0 ftp.download.com
-
- Download with PORT but use 192.168.0.10 as our IP address to use:
-
- curl -P 192.168.0.10 ftp.download.com
-
-NETWORK INTERFACE
-
- Get a web page from a server using a specified network interface (or
- address):
-
- curl --interface eth0:1 http://www.netscape.com/
-
- or
-
- curl --interface 192.168.1.10 http://www.netscape.com/
-
-HTTPS
-
- Secure HTTP requires SSL libraries to be installed and used when curl is
- built. If that is done, curl is capable of retrieving and posting documents
- using the HTTPS protocol.
-
- Example:
-
- curl https://www.secure-site.com
-
- Curl is also capable of using your personal certificates to get/post files
- from sites that require valid certificates. The only drawback is that the
- certificate needs to be in PEM-format. PEM is a standard and open format to
- store certificates with, but it is not used by the most commonly used
- browsers (Netscape and MSIE both use the so called PKCS#12 format). If you
- want curl to use the certificates you use with your (favourite) browser, you
- may need to download/compile a converter that can convert your browser's
- formatted certificates to PEM formatted ones. This kind of converter is
- included in recent versions of OpenSSL, and for older versions Dr Stephen
- N. Henson has written a patch for SSLeay that adds this functionality. You
- can get his patch (that requires an SSLeay installation) from his site at:
- https://web.archive.org/web/20170715155512/www.drh-consultancy.demon.co.uk/
-
- Example on how to automatically retrieve a document using a certificate with
- a personal password:
-
- curl -E /path/to/cert.pem:password https://secure.site.com/
-
- If you neglect to specify the password on the command line, you will be
- prompted for the correct password before any data can be received.
-
- Many older SSL-servers have problems with SSLv3 or TLS, which newer versions
- of OpenSSL etc use, therefore it is sometimes useful to specify what
- SSL-version curl should use. Use -3, -2 or -1 to specify that exact SSL
- version to use (for SSLv3, SSLv2 or TLSv1 respectively):
-
- curl -2 https://secure.site.com/
-
- Otherwise, curl will first attempt to use v3 and then v2.
-
- To use OpenSSL to convert your favourite browser's certificate into a PEM
- formatted one that curl can use, do something like this:
-
- In Netscape, you start with hitting the 'Security' menu button.
-
- Select 'certificates->yours' and then pick a certificate in the list
-
- Press the 'Export' button
-
- enter your PIN code for the certs
-
- select a proper place to save it
-
- Run the 'openssl' application to convert the certificate. If you cd to the
- openssl installation, you can do it like:
-
- # ./apps/openssl pkcs12 -in [file you saved] -clcerts -out [PEMfile]
-
- In Firefox, select Options, then Advanced, then the Encryption tab,
- View Certificates. This opens the Certificate Manager, where you can
- Export. Be sure to select PEM for the Save as type.
-
- In Internet Explorer, select Internet Options, then the Content tab, then
- Certificates. Then you can Export, and depending on the format you may
- need to convert to PEM.
-
- In Chrome, select Settings, then Show Advanced Settings. Under HTTPS/SSL
- select Manage Certificates.
-
-RESUMING FILE TRANSFERS
-
- To continue a file transfer where it was previously aborted, curl supports
- resume on HTTP(S) downloads as well as FTP uploads and downloads.
-
- Continue downloading a document:
-
- curl -C - -o file ftp://ftp.server.com/path/file
-
- Continue uploading a document(*1):
-
- curl -C - -T file ftp://ftp.server.com/path/file
-
- Continue downloading a document from a web server(*2):
-
- curl -C - -o file http://www.server.com/
-
- (*1) = This requires that the FTP server supports the non-standard command
- SIZE. If it doesn't, curl will say so.
-
- (*2) = This requires that the web server supports at least HTTP/1.1. If it
- doesn't, curl will say so.
-
-TIME CONDITIONS
-
- HTTP allows a client to specify a time condition for the document it
- requests: If-Modified-Since or If-Unmodified-Since. Curl allows you to
- specify them with the -z/--time-cond flag.
-
- For example, you can easily make a download that only gets performed if the
- remote file is newer than a local copy. It would be made like:
-
- curl -z local.html http://remote.server.com/remote.html
-
- Or you can download a file only if the local file is newer than the remote
- one. Do this by prepending the date string with a '-', as in:
-
- curl -z -local.html http://remote.server.com/remote.html
-
- You can specify a "free text" date as condition. Tell curl to only download
- the file if it was updated since January 12, 2012:
-
- curl -z "Jan 12 2012" http://remote.server.com/remote.html
-
- Curl will then accept a wide range of date formats. You always make the date
- check the other way around by prepending it with a dash '-'.
-
-DICT
-
- For fun try
-
- curl dict://dict.org/m:curl
- curl dict://dict.org/d:heisenbug:jargon
- curl dict://dict.org/d:daniel:web1913
-
- Aliases for 'm' are 'match' and 'find', and aliases for 'd' are 'define'
- and 'lookup'. For example,
-
- curl dict://dict.org/find:curl
-
- Commands that break the URL description of the RFC (but not the DICT
- protocol) are
-
- curl dict://dict.org/show:db
- curl dict://dict.org/show:strat
-
- Authentication is still missing (but this is not required by the RFC)
-
-LDAP
-
- If you have installed the OpenLDAP library, curl can take advantage of it
- and offer ldap:// support.
- On Windows, curl will use WinLDAP from Platform SDK by default.
-
- The default protocol version used by curl is LDAPv3. LDAPv2 is used as a
- fallback mechanism if LDAPv3 fails to connect.
-
- LDAP is a complex thing and writing an LDAP query is not an easy task. I do
- advise you to dig up the syntax description for that elsewhere. One such
- place might be:
-
- RFC 2255, "The LDAP URL Format" https://curl.haxx.se/rfc/rfc2255.txt
-
- To show you an example, this is how I can get all people from my local LDAP
- server that have a certain sub-domain in their email address:
-
- curl -B "ldap://ldap.frontec.se/o=frontec??sub?mail=*sth.frontec.se"
-
- If I want the same info in HTML format, I can get it by not using the -B
- (enforce ASCII) flag.
-
- You can also use authentication when accessing an LDAP catalog:
-
- curl -u user:passwd "ldap://ldap.frontec.se/o=frontec??sub?mail=*"
- curl "ldap://user:passwd@ldap.frontec.se/o=frontec??sub?mail=*"
-
- By default, if a user and password are provided, OpenLDAP/WinLDAP will use
- basic authentication. On Windows you can control this behavior by providing
- one of the --basic, --ntlm or --digest options on the curl command line:
-
- curl --ntlm "ldap://user:passwd@ldap.frontec.se/o=frontec??sub?mail=*"
-
- On Windows, if no user/password is specified, an auto-negotiation mechanism
- will be used with the current logon credentials (SSPI/SPNEGO).
-
-ENVIRONMENT VARIABLES
-
- Curl reads and understands the following environment variables:
-
- http_proxy, HTTPS_PROXY, FTP_PROXY
-
- They should be set for protocol-specific proxies. A general proxy can be
- set with
-
- ALL_PROXY
-
- A comma-separated list of host names that shouldn't go through any proxy
- can be set in (a single asterisk, '*', matches all hosts)
-
- NO_PROXY
-
- If the host name matches one of these strings, or the host is within the
- domain of one of these strings, transactions with that node will not be
- proxied. When a domain is used, it needs to start with a period. A user can
- specify that both www.example.com and foo.example.com should not use a
- proxy by setting NO_PROXY to ".example.com". By including the full name you
- can exclude specific host names, so to make www.example.com not use a proxy
- but still have foo.example.com do it, set NO_PROXY to "www.example.com"
-
- The usage of the -x/--proxy flag overrides the environment variables.
-
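The matching rule above can be sketched in shell. This is only an illustration of the described rule, not curl's actual implementation:

```shell
#!/bin/sh
# Illustration of the NO_PROXY matching rule described above (not curl's
# own code). A leading dot matches any host within that domain, a full
# host name matches only that exact host, and '*' matches every host.
matches_no_proxy() {
  host=$1 pattern=$2
  case $pattern in
    '*') return 0 ;;                                     # wildcard: all hosts
    .*)  case $host in *"$pattern") return 0 ;; esac ;;  # domain suffix match
    *)   [ "$host" = "$pattern" ] && return 0 ;;         # exact host name
  esac
  return 1
}

matches_no_proxy www.example.com .example.com && echo "www bypasses the proxy"
matches_no_proxy foo.example.com www.example.com || echo "foo goes through the proxy"
```

With NO_PROXY=".example.com", both www.example.com and foo.example.com bypass the proxy; with NO_PROXY="www.example.com", only that exact host does.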
-NETRC
-
- Unix introduced the .netrc concept a long time ago. It is a way for a user
- to specify name and password for commonly visited FTP sites in a file so
- that you don't have to type them in each time you visit those sites. You
- realize this is a big security risk if someone else gets hold of your
- passwords, so therefore most unix programs won't read this file unless it is
- only readable by yourself (curl doesn't care though).
-
- Curl supports .netrc files if told to (using the -n/--netrc and
- --netrc-optional options). This is not restricted to just FTP,
- so curl can use it for all protocols where authentication is used.
-
- A very simple .netrc file could look something like:
-
- machine curl.haxx.se login iamdaniel password mysecret
-
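As a sketch with hypothetical credentials, this is one way to create the file with owner-only permissions (most programs refuse a .netrc readable by others, even though curl itself doesn't care). It writes to a scratch file here; use $HOME/.netrc for real. The --netrc-file option points curl at a non-default location:

```shell
#!/bin/sh
# Sketch with hypothetical credentials; written to a scratch file here,
# use $HOME/.netrc for real use.
netrc=$(mktemp)
chmod 600 "$netrc"   # owner-only read/write
printf 'machine example.com login myname password mysecret\n' > "$netrc"

# curl can then pick the credentials up automatically:
#   curl --netrc-file "$netrc" ftp://example.com/README
# or, with the default ~/.netrc location:
#   curl -n ftp://example.com/README

ls -l "$netrc" | cut -c1-10
```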
-CUSTOM OUTPUT
-
- To better allow script programmers to learn about the outcome of a curl
- transfer, the -w/--write-out option was introduced. Using this, you can specify
- what information from the previous transfer you want to extract.
-
- To display the number of bytes downloaded together with some text and an
- ending newline:
-
- curl -w 'We downloaded %{size_download} bytes\n' www.download.com
-
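- Several variables can be combined in one format string. As a sketch
- (hypothetical host), print the HTTP response code and the total transfer
- time while hiding the body and the progress meter:
-
-   curl -w 'code: %{http_code} time: %{time_total}s\n' -o /dev/null -s http://www.example.com/
-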
-KERBEROS FTP TRANSFER
-
- Curl supports kerberos4 and kerberos5/GSSAPI for FTP transfers. You need
- the kerberos package installed and used at curl build time for it to be
- available.
-
- First, get the krb-ticket the normal way, like with the kinit/kauth tool.
- Then use curl in way similar to:
-
- curl --krb private ftp://krb4site.com -u username:fakepwd
-
- The password given on the -u switch is not actually used, since you already
- entered the real password to kinit/kauth, but a blank one will make curl
- ask for one.
-
-TELNET
-
- The curl telnet support is basic and very easy to use. Curl passes all
- data it reads on stdin to the remote server. Connect to a remote telnet
- server using a command line similar to:
-
- curl telnet://remote.server.com
-
- And enter the data to pass to the server on stdin. The result will be sent
- to stdout or to the file you specify with -o.
-
- You might want the -N/--no-buffer option to switch off the buffered output
- for slow connections or similar.
-
- Pass options to the telnet protocol negotiation by using the -t option. To
- tell the server we use a vt100 terminal, try something like:
-
- curl -tTTYPE=vt100 telnet://remote.server.com
-
- Other interesting options for -t include:
-
- - XDISPLOC=<X display> Sets the X display location.
-
- - NEW_ENV=<var,val> Sets an environment variable.
-
- NOTE: The telnet protocol does not specify any way to login with a specified
- user and password so curl can't do that automatically. To do that, you need
- to track when the login prompt is received and send the username and
- password accordingly.
-
-PERSISTENT CONNECTIONS
-
- Specifying multiple files on a single command line will make curl transfer
- all of them, one after the other in the specified order.
-
- libcurl will attempt to use persistent connections for the transfers so that
- the second transfer to the same host can use the same connection that was
- already initiated and was left open in the previous transfer. This greatly
- decreases connection time for all but the first transfer and it makes a far
- better use of the network.
-
- Note that curl cannot use persistent connections for transfers done in
- separate curl invocations. Try to stuff as many URLs as possible on the
- same command line if they are using the same host, as that'll make the
- transfers faster. If you use an HTTP proxy for file transfers, practically
- all transfers will be persistent.
-
-MULTIPLE TRANSFERS WITH A SINGLE COMMAND LINE
-
- As is mentioned above, you can download multiple files with one command line
- by simply adding more URLs. If you want those to get saved to a local file
- instead of just printed to stdout, you need to add one save option for each
- URL you specify. Note that this also goes for the -O option (but not
- --remote-name-all).
-
- For example: get two files and use -O for the first and a custom file
- name for the second:
-
- curl -O http://url.com/file.txt ftp://ftp.com/moo.exe -o moo.jpg
-
- You can also upload multiple files in a similar fashion:
-
- curl -T local1 ftp://ftp.com/moo.exe -T local2 ftp://ftp.com/moo2.txt
-
-IPv6
-
- curl will connect to a server with IPv6 when a host lookup returns an IPv6
- address and fall back to IPv4 if the connection fails. The --ipv4 and --ipv6
- options can specify which address to use when both are available. IPv6
- addresses can also be specified directly in URLs using the syntax:
-
- http://[2001:1890:1112:1::20]/overview.html
-
- When this style is used, the -g option must be given to stop curl from
- interpreting the square brackets as special globbing characters. Link local
- and site local addresses including a scope identifier, such as fe80::1234%1,
- may also be used, but the scope portion must be numeric or match an existing
- network interface on Linux and the percent character must be URL escaped. The
- previous example in an SFTP URL might look like:
-
- sftp://[fe80::1234%251]/
-
- IPv6 addresses provided other than in URLs (e.g. to the --proxy, --interface
- or --ftp-port options) should not be URL encoded.
-
-METALINK
-
- Curl supports Metalink (both versions 3 and 4, RFC 5854), a way to list
- multiple URIs and hashes for a file. Curl will make use of the mirrors
- listed within for failover if there are errors (such as the file or server not
- being available). It will also verify the hash of the file after the download
- completes. The Metalink file itself is downloaded and processed in memory and
- not stored in the local file system.
-
- Example to use a remote Metalink file:
-
- curl --metalink http://www.example.com/example.metalink
-
- To use a Metalink file in the local file system, use FILE protocol (file://):
-
- curl --metalink file://example.metalink
-
- Please note that if FILE protocol is disabled, there is no way to use a local
- Metalink file at the time of this writing. Also note that if --metalink and
- --include are used together, --include will be ignored. This is because
- including headers in the response would break the Metalink parser, and if
- the headers were included in the file described in the Metalink file, the
- hash check would fail.
-
-MAILING LISTS
-
- For your convenience, we have several open mailing lists to discuss curl,
- its development and things relevant to this. Get all info at
- https://curl.haxx.se/mail/. Some of the lists available are:
-
- curl-users
-
- Users of the command line tool. How to use it, what doesn't work, new
- features, related tools, questions, news, installations, compilations,
- running, porting etc.
-
- curl-library
-
- Developers using or developing libcurl. Bugs, extensions, improvements.
-
- curl-announce
-
- Low-traffic. Only receives announcements of new public versions. At worst,
- that makes something like one or two mails per month, but usually only one
- mail every second month.
-
- curl-and-php
-
- Using the curl functions in PHP. Everything curl with a PHP angle. Or PHP
- with a curl angle.
-
- curl-and-python
-
- Python hackers using curl with or without the python binding pycurl.
-
- Please direct curl questions, feature requests and trouble reports to one of
- these mailing lists instead of mailing any individual.
diff --git a/docs/MANUAL.md b/docs/MANUAL.md
new file mode 100644
index 000000000..80ab92a63
--- /dev/null
+++ b/docs/MANUAL.md
@@ -0,0 +1,1011 @@
+# curl tutorial
+
+## Simple Usage
+
+Get the main page from Netscape's web-server:
+
+ curl http://www.netscape.com/
+
+Get the README file from the user's home directory at funet's ftp-server:
+
+ curl ftp://ftp.funet.fi/README
+
+Get a web page from a server using port 8000:
+
+ curl http://www.weirdserver.com:8000/
+
+Get a directory listing of an FTP site:
+
+ curl ftp://cool.haxx.se/
+
+Get the definition of curl from a dictionary:
+
+ curl dict://dict.org/m:curl
+
+Fetch two documents at once:
+
+ curl ftp://cool.haxx.se/ http://www.weirdserver.com:8000/
+
+Get a file off an FTPS server:
+
+ curl ftps://files.are.secure.com/secrets.txt
+
+or use the more appropriate FTPS way to get the same file:
+
+ curl --ftp-ssl ftp://files.are.secure.com/secrets.txt
+
+Get a file from an SSH server using SFTP:
+
+ curl -u username sftp://example.com/etc/issue
+
+Get a file from an SSH server using SCP using a private key (not
+password-protected) to authenticate:
+
+ curl -u username: --key ~/.ssh/id_rsa scp://example.com/~/file.txt
+
+Get a file from an SSH server using SCP using a private key
+(password-protected) to authenticate:
+
+ curl -u username: --key ~/.ssh/id_rsa --pass private_key_password
+ scp://example.com/~/file.txt
+
+Get the main page from an IPv6 web server:
+
+ curl "http://[2001:1890:1112:1::20]/"
+
+Get a file from an SMB server:
+
+ curl -u "domain\username:passwd" smb://server.example.com/share/file.txt
+
+## Download to a File
+
+Get a web page and store in a local file with a specific name:
+
+ curl -o thatpage.html http://www.netscape.com/
+
+Get a web page and store in a local file, make the local file get the name of
+the remote document (if no file name part is specified in the URL, this will
+fail):
+
+ curl -O http://www.netscape.com/index.html
+
+Fetch two files and store them with their remote names:
+
+ curl -O www.haxx.se/index.html -O curl.haxx.se/download.html
+
+## Using Passwords
+
+### FTP
+
+To ftp files using name+passwd, include them in the URL like:
+
+ curl ftp://name:passwd@machine.domain:port/full/path/to/file
+
+or specify them with the -u flag like
+
+ curl -u name:passwd ftp://machine.domain:port/full/path/to/file
+
+### FTPS
+
+It is just like for FTP, but you may also want to specify and use SSL-specific
+options for certificates etc.
+
+Note that using `FTPS://` as prefix is the "implicit" way as described in the
+standards while the recommended "explicit" way is done by using FTP:// and the
+`--ftp-ssl` option.
+
+### SFTP / SCP
+
+This is similar to FTP, but you can use the `--key` option to specify a
+private key to use instead of a password. Note that the private key may itself
+be protected by a password that is unrelated to the login password of the
+remote system; this password is specified using the `--pass` option.
+Typically, curl will automatically extract the public key from the private key
+file, but in cases where curl does not have the proper library support, a
+matching public key file must be specified using the `--pubkey` option.
+
+### HTTP
+
+Curl also supports user and password in HTTP URLs, thus you can pick a file
+like:
+
+ curl http://name:passwd@machine.domain/full/path/to/file
+
+or specify user and password separately like in
+
+ curl -u name:passwd http://machine.domain/full/path/to/file
+
+HTTP offers many different methods of authentication and curl supports
+several: Basic, Digest, NTLM and Negotiate (SPNEGO). Without telling which
+method to use, curl defaults to Basic. You can also ask curl to pick the most
+secure ones out of the ones that the server accepts for the given URL, by
+using `--anyauth`.
+
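+As a sketch (hypothetical host), forcing a specific method such as Digest
+looks like:
+
+ curl --digest -u name:passwd http://machine.domain/full/path/to/file
+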
+**Note**! According to the URL specification, HTTP URLs cannot contain a user
+and password, so that style will not work when using curl via a proxy, even
+though curl allows it at other times. When using a proxy, you _must_ use the
+`-u` style for user and password.
+
+### HTTPS
+
+Probably most commonly used with private certificates, as explained below.
+
+## Proxy
+
+curl supports both HTTP and SOCKS proxy servers, with optional authentication.
+It does not have special support for FTP proxy servers since there are no
+standards for those, but it can still be made to work with many of them. You
+can also use both HTTP and SOCKS proxies to transfer files to and from FTP
+servers.
+
+Get an ftp file using an HTTP proxy named my-proxy that uses port 888:
+
+ curl -x my-proxy:888 ftp://ftp.leachsite.com/README
+
+Get a file from an HTTP server that requires user and password, using the
+same proxy as above:
+
+ curl -u user:passwd -x my-proxy:888 http://www.get.this/
+
+Some proxies require special authentication. Specify by using -U as above:
+
+ curl -U user:passwd -x my-proxy:888 http://www.get.this/
+
+A comma-separated list of hosts and domains which do not use the proxy can be
+specified as:
+
+ curl --noproxy localhost,get.this -x my-proxy:888 http://www.get.this/
+
+If the proxy is specified with `--proxy1.0` instead of `--proxy` or `-x`, then
+curl will use HTTP/1.0 instead of HTTP/1.1 for any `CONNECT` attempts.
+
+curl also supports SOCKS4 and SOCKS5 proxies with `--socks4` and `--socks5`.
+
+See also the environment variables Curl supports that offer further proxy
+control.
+
+Most FTP proxy servers are set up to appear as a normal FTP server from the
+client's perspective, with special commands to select the remote FTP server.
+curl supports the `-u`, `-Q` and `--ftp-account` options that can be used to
+set up transfers through many FTP proxies. For example, a file can be uploaded
+to a remote FTP server using a Blue Coat FTP proxy with the options:
+
+ curl -u "username@ftp.server Proxy-Username:Remote-Pass"
+ --ftp-account Proxy-Password --upload-file local-file
+ ftp://my-ftp.proxy.server:21/remote/upload/path/
+
+See the manual for your FTP proxy to determine the form it expects to set up
+transfers, and curl's `-v` option to see exactly what curl is sending.
+
+## Ranges
+
+HTTP 1.1 introduced byte-ranges. Using this, a client can request to get only
+one or more subparts of a specified document. Curl supports this with the `-r`
+flag.
+
+Get the first 100 bytes of a document:
+
+ curl -r 0-99 http://www.get.this/
+
+Get the last 500 bytes of a document:
+
+ curl -r -500 http://www.get.this/
+
+Curl also supports simple ranges for FTP files. There you can only specify
+the start and stop positions.
+
+Get the first 100 bytes of a document using FTP:
+
+ curl -r 0-99 ftp://www.get.this/README
+
+## Uploading
+
+### FTP / FTPS / SFTP / SCP
+
+Upload all data on stdin to a specified server:
+
+ curl -T - ftp://ftp.upload.com/myfile
+
+Upload data from a specified file, login with user and password:
+
+ curl -T uploadfile -u user:passwd ftp://ftp.upload.com/myfile
+
+Upload a local file to the remote site, and use the local file name at the
+remote site too:
+
+ curl -T uploadfile -u user:passwd ftp://ftp.upload.com/
+
+Upload a local file to get appended to the remote file:
+
+ curl -T localfile -a ftp://ftp.upload.com/remotefile
+
+Curl also supports ftp upload through a proxy, but only if the proxy is
+configured to allow that kind of tunneling. If it does, you can run curl in a
+fashion similar to:
+
+ curl --proxytunnel -x proxy:port -T localfile ftp.upload.com
+
+### SMB / SMBS
+
+ curl -T file.txt -u "domain\username:passwd"
+ smb://server.example.com/share/
+
+### HTTP
+
+Upload all data on stdin to a specified HTTP site:
+
+ curl -T - http://www.upload.com/myfile
+
+Note that the HTTP server must have been configured to accept PUT before this
+can be done successfully.
+
+For other ways to do HTTP data upload, see the POST section below.
+
+## Verbose / Debug
+
+If curl fails where it isn't supposed to, if the servers don't let you in, if
+you can't understand the responses: use the `-v` flag to get verbose
+fetching. Curl will output lots of information about what it sends and
+receives in order to let the user see all client-server interaction (but it
+won't show you the actual data).
+
+ curl -v ftp://ftp.upload.com/
+
+To get even more details and information on what curl does, try using the
+`--trace` or `--trace-ascii` options with a given file name to log to, like
+this:
+
+ curl --trace trace.txt www.haxx.se
+
+
+## Detailed Information
+
+Different protocols provide different ways of getting detailed information
+about specific files/documents. To get curl to show detailed information about
+a single file, you should use `-I`/`--head` option. It displays all available
+info on a single file for HTTP and FTP. The HTTP information is a lot more
+extensive.
+
+For HTTP, you can get the header information (the same as `-I` would show)
+shown before the data by using `-i`/`--include`. Curl understands the
+`-D`/`--dump-header` option when getting files from both FTP and HTTP, and it
+will then store the headers in the specified file.
+
+Store the HTTP headers in a separate file (headers.txt in the example):
+
+ curl --dump-header headers.txt curl.haxx.se
+
+Note that headers stored in a separate file can be very useful at a later time
+if you want curl to use cookies sent by the server. More about that in the
+cookies section.
+
+## POST (HTTP)
+
+It's easy to post data using curl. This is done using the `-d <data>` option.
+The post data must be URL-encoded.
+
+Post a simple "name" and "phone" guestbook.
+
+ curl -d "name=Rafael%20Sagula&phone=3320780" http://www.where.com/guest.cgi
+
+How to post a form with curl, lesson #1:
+
+Dig out all the `<input>` tags in the form that you want to fill in.
+
+If there's a "normal" post, you use `-d` to post. `-d` takes a full "post
+string", which is in the format
+
+ <variable1>=<data1>&<variable2>=<data2>&...
+
+The 'variable' names are the names set with `"name="` in the `<input>` tags,
+and the data is the contents you want to fill in for the inputs. The data
+*must* be properly URL encoded. That means you replace space with + and that
+you replace weird letters with %XX where XX is the hexadecimal representation
+of the letter's ASCII code.
+
+Example:
+
+(page located at `http://www.formpost.com/getthis/`)
+
+ <form action="post.cgi" method="post">
+ <input name=user size=10>
+ <input name=pass type=password size=10>
+ <input name=id type=hidden value="blablabla">
+ <input name=ding value="submit">
+ </form>
+
+We want to enter user 'foobar' with password '12345'.
+
+To post to this, you enter a curl command line like:
+
+ curl -d "user=foobar&pass=12345&id=blablabla&ding=submit"
+ http://www.formpost.com/getthis/post.cgi
+
+While `-d` uses the application/x-www-form-urlencoded mime-type, generally
+understood by CGIs and similar, curl also supports the more capable
+multipart/form-data type. This latter type supports things like file upload.
+
+`-F` accepts parameters like `-F "name=contents"`. If you want the contents to
+be read from a file, use `@filename` as contents. When specifying a file, you
+can also specify the file content type by appending `;type=<mime type>` to the
+file name. You can also post the contents of several files in one field. For
+example, the field name 'coolfiles' is used to send three files, with
+different content types using the following syntax:
+
+ curl -F "coolfiles=@fil1.gif;type=image/gif,fil2.txt,fil3.html"
+ http://www.post.com/postit.cgi
+
+If the content-type is not specified, curl will try to guess from the file
+extension (it only knows a few), or use the previously specified type (from an
+earlier file if several files are specified in a list) or else it will use the
+default type 'application/octet-stream'.
+
+Emulate a fill-in form with `-F`. Let's say you fill in three fields in a
+form. One field is a file name which to post, one field is your name and one
+field is a file description. We want to post the file we have written named
+"cooltext.txt". To let curl do the posting of this data instead of your
+favourite browser, you have to read the HTML source of the form page and find
+the names of the input fields. In our example, the input field names are
+'file', 'yourname' and 'filedescription'.
+
+ curl -F "file=@cooltext.txt" -F "yourname=Daniel"
+ -F "filedescription=Cool text file with cool text inside"
+ http://www.post.com/postit.cgi
+
+To send two files in one post you can do it in two ways:
+
+Send multiple files in a single "field" with a single field name:
+
+ curl -F "pictures=@dog.gif,cat.gif" $URL
+
+Send two fields with two field names
+
+ curl -F "docpicture=@dog.gif" -F "catpicture=@cat.gif" $URL
+
+To send a field value literally without interpreting a leading `@` or `<`, or
+an embedded `;type=`, use `--form-string` instead of `-F`. This is recommended
+when the value is obtained from a user or some other unpredictable
+source. Under these circumstances, using `-F` instead of `--form-string` could
+allow a user to trick curl into uploading a file.
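+
+For example, to send a field value that begins with `@` literally, rather
+than have curl read a file of that name (the value here is illustrative):
+
+    curl --form-string "comment=@this is not a file" $URL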
+
+## Referrer
+
+An HTTP request has the option to include information about which address
+referred it to the actual page. Curl allows you to specify the referrer to be
+used on the command line. It is especially useful to fool or trick stupid
+servers or CGI scripts that rely on that information being available or
+containing certain data.
+
+ curl -e www.coolsite.com http://www.showme.com/
+
+## User Agent
+
+An HTTP request has the option to include information about the browser that
+generated the request. Curl allows it to be specified on the command line. It
+is especially useful to fool or trick stupid servers or CGI scripts that only
+accept certain browsers.
+
+Example:
+
+ curl -A 'Mozilla/3.0 (Win95; I)' http://www.nationsbank.com/
+
+Other common strings:
+
+- `Mozilla/3.0 (Win95; I)` - Netscape Version 3 for Windows 95
+- `Mozilla/3.04 (Win95; U)` - Netscape Version 3 for Windows 95
+- `Mozilla/2.02 (OS/2; U)` - Netscape Version 2 for OS/2
+- `Mozilla/4.04 [en] (X11; U; AIX 4.2; Nav)` - Netscape for AIX
+- `Mozilla/4.05 [en] (X11; U; Linux 2.0.32 i586)` - Netscape for Linux
+
+Note that Internet Explorer tries hard to be compatible in every way:
+
+- `Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)` - MSIE for W95
+
+Mozilla is not the only possible User-Agent name:
+
+- `Konqueror/1.0` - KDE File Manager desktop client
+- `Lynx/2.7.1 libwww-FM/2.14` - Lynx command line browser
+
+## Cookies
+
+Cookies are generally used by web servers to keep state information at the
+client's side. The server sets cookies by sending a response line in the
+headers that looks like `Set-Cookie: <data>` where the data part then
+typically contains a set of `NAME=VALUE` pairs (separated by semicolons `;`
+like `NAME1=VALUE1; NAME2=VALUE2;`). The server can also specify for what path
+the "cookie" should be used for (by specifying `path=value`), when the cookie
+should expire (`expire=DATE`), for what domain to use it (`domain=NAME`) and
+if it should be used on secure connections only (`secure`).
+
+If you've received a page from a server that contains a header like:
+
+ Set-Cookie: sessionid=boo123; path="/foo";
+
+it means the server wants that first pair passed on when we get anything in a
+path beginning with "/foo".
+
+Example, get a page that wants my name passed in a cookie:
+
+ curl -b "name=Daniel" www.sillypage.com
+
+Curl also has the ability to use previously received cookies in following
+sessions. If you get cookies from a server and store them in a file in a
+manner similar to:
+
+ curl --dump-header headers www.example.com
+
+... you can then in a second connect to that (or another) site, use the
+cookies from the 'headers' file like:
+
+ curl -b headers www.example.com
+
+While saving headers to a file is a working way to store cookies, it is
+however error-prone and not the preferred way to do this. Instead, make curl
+save the incoming cookies using the well-known netscape cookie format like
+this:
+
+ curl -c cookies.txt www.example.com
+
+Note that by specifying `-b` you enable the "cookie awareness" and with `-L`
+you can make curl follow a location: (which often is used in combination with
+cookies). So, if a site sends cookies and a location, you can use a
+non-existing file to trigger the cookie awareness like:
+
+ curl -L -b empty.txt www.example.com
+
+The file to read cookies from must be formatted using plain HTTP headers OR as
+netscape's cookie file. Curl will determine what kind it is based on the file
+contents. In the above command, curl will parse the header and store the
+cookies received from www.example.com. curl will send to the server the
+stored cookies which match the request as it follows the location. The file
+"empty.txt" may be a nonexistent file.
+
+To read and write cookies from a netscape cookie file, you can set both `-b`
+and `-c` to use the same file:
+
+ curl -b cookies.txt -c cookies.txt www.example.com
+
+## Progress Meter
+
+The progress meter exists to show a user that something actually is
+happening. The different fields in the output have the following meaning:
+
+ % Total % Received % Xferd Average Speed Time Curr.
+ Dload Upload Total Current Left Speed
+ 0 151M 0 38608 0 0 9406 0 4:41:43 0:00:04 4:41:39 9287
+
+From left-to-right:
+
+ - % - percentage completed of the whole transfer
+ - Total - total size of the whole expected transfer
+ - % - percentage completed of the download
+ - Received - currently downloaded amount of bytes
+ - % - percentage completed of the upload
+ - Xferd - currently uploaded amount of bytes
+ - Average Speed Dload - the average transfer speed of the download
+ - Average Speed Upload - the average transfer speed of the upload
+ - Time Total - expected time to complete the operation
+ - Time Current - time passed since the invocation
+ - Time Left - expected time left to completion
+ - Curr.Speed - the average transfer speed over the last 5 seconds (the
+   first 5 seconds of a transfer are of course based on less time)
+
+The `-#` option will display a totally different progress bar that doesn't
+need much explanation!
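+
+For example, to show the simpler progress bar while saving a remote file (the
+URL is just a placeholder):
+
+    curl -# -O http://www.get.this/file.txt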
+
+## Speed Limit
+
+Curl allows the user to set the transfer speed conditions that must be met to
+let the transfer keep going. By using the switches `-y` and `-Y` you can make
+curl abort transfers if the transfer speed is below the specified lowest limit
+for a specified time.
+
+To have curl abort the download if the speed is slower than 3000 bytes per
+second for 1 minute, run:
+
+ curl -Y 3000 -y 60 www.far-away-site.com
+
+This can very well be used in combination with the overall time limit, so
+that the above operation must be completed in whole within 30 minutes:
+
+ curl -m 1800 -Y 3000 -y 60 www.far-away-site.com
+
+Forcing curl not to transfer data faster than a given rate is also possible,
+which might be useful if you're using a limited bandwidth connection and you
+don't want your transfer to use all of it (sometimes referred to as
+"bandwidth throttle").
+
+Make curl transfer data no faster than 10 kilobytes per second:
+
+ curl --limit-rate 10K www.far-away-site.com
+
+or
+
+ curl --limit-rate 10240 www.far-away-site.com
+
+Or prevent curl from uploading data faster than 1 megabyte per second:
+
+ curl -T upload --limit-rate 1M ftp://uploadshereplease.com
+
+When using the `--limit-rate` option, the transfer rate is regulated on a
+per-second basis, which will cause the total transfer speed to become lower
+than the given number, sometimes substantially lower if the transfer stalls
+for periods of time.
+
+## Config File
+
+Curl automatically tries to read the `.curlrc` file (or `_curlrc` file on
+Microsoft Windows systems) from the user's home dir on startup.
+
+The config file could be made up with normal command line switches, but you
+can also specify the long options without the dashes to make it more
+readable. You can separate the options and the parameter with spaces, or with
+`=` or `:`. Comments can be used within the file. If the first letter on a
+line is a `#`-symbol the rest of the line is treated as a comment.
+
+If you want the parameter to contain spaces, you must enclose the entire
+parameter within double quotes (`"`). Within those quotes, you specify a quote
+as `\"`.
+
+NOTE: You must specify options and their arguments on the same line.
+
+Example, set default time out and proxy in a config file:
+
+ # We want a 30 minute timeout:
+ -m 1800
+ # ... and we use a proxy for all accesses:
+ proxy = proxy.our.domain.com:8080
+
+White spaces ARE significant at the end of lines, but all white spaces leading
+up to the first characters of each line are ignored.
+
+Prevent curl from reading the default file by using -q as the first command
+line parameter, like:
+
+ curl -q www.thatsite.com
+
+Force curl to get and display a local help page in case it is invoked without
+a URL by making a config file similar to:
+
+ # default url to get
+ url = "http://help.with.curl.com/curlhelp.html"
+
+You can specify another config file to be read by using the `-K`/`--config`
+flag. If you set the config file name to `-`, it'll read the config from
+stdin, which can be handy if you want to hide options from being visible in
+process tables etc:
+
+ echo "user = user:passwd" | curl -K - http://that.secret.site.com
+
+## Extra Headers
+
+When using curl in your own very special programs, you may end up needing
+to pass on your own custom headers when getting a web page. You can do
+this by using the `-H` flag.
+
+Example, send the header `X-you-and-me: yes` to the server when getting a
+page:
+
+ curl -H "X-you-and-me: yes" www.love.com
+
+This can also be useful in case you want curl to send a different text in a
+header than it normally does. The `-H` header you specify then replaces the
+header curl would normally send. If you replace an internal header with an
+empty one, you prevent that header from being sent. To prevent the `Host:`
+header from being used:
+
+ curl -H "Host:" www.server.com
+
+## FTP and Path Names
+
+Do note that when getting files with a `ftp://` URL, the given path is
+relative to the directory you enter. To get the file `README` from your home
+directory at your ftp site, do:
+
+ curl ftp://user:passwd@my.site.com/README
+
+But if you want the README file from the root directory of that very same
+site, you need to specify the absolute file name:
+
+ curl ftp://user:passwd@my.site.com//README
+
+(I.e. with an extra slash in front of the file name.)
+
+## SFTP and SCP and Path Names
+
+With sftp: and scp: URLs, the path name given is the absolute name on the
+server. To access a file relative to the remote user's home directory, prefix
+the file with `/~/`, such as:
+
+ curl -u $USER sftp://home.example.com/~/.bashrc
+
+## FTP and Firewalls
+
+The FTP protocol requires one of the involved parties to open a second
+connection as soon as data is about to get transferred. There are two ways to
+do this.
+
+The default way for curl is to issue the PASV command which causes the server
+to open another port and await another connection performed by the
+client. This is good if the client is behind a firewall that doesn't allow
+incoming connections.
+
+ curl ftp.download.com
+
+If the server, for example, is behind a firewall that doesn't allow
+connections on ports other than 21 (or if it just doesn't support the `PASV`
+command), the other way to do it is to use the `PORT` command and instruct the
+server to connect to the client on the given IP number and port (as parameters
+to the PORT command).
+
+The `-P` flag to curl supports a few different options. Your machine may have
+several IP-addresses and/or network interfaces and curl allows you to select
+which of them to use. The default address can also be used:
+
+ curl -P - ftp.download.com
+
+Download with `PORT` but use the IP address of our `le0` interface (this does
+not work on Windows):
+
+ curl -P le0 ftp.download.com
+
+Download with `PORT` but use 192.168.0.10 as our IP address to use:
+
+ curl -P 192.168.0.10 ftp.download.com
+
+## Network Interface
+
+Get a web page from a server using a specified port for the interface:
+
+ curl --interface eth0:1 http://www.netscape.com/
+
+or
+
+ curl --interface 192.168.1.10 http://www.netscape.com/
+
+## HTTPS
+
+Secure HTTP requires a TLS library to be installed and used when curl is
+built. If that is done, curl is capable of retrieving and posting documents
+using the HTTPS protocol.
+
+Example:
+
+ curl https://www.secure-site.com
+
+curl is also capable of using client certificates to get/post files from sites
+that require valid certificates. The only drawback is that the certificate
+needs to be in PEM-format. PEM is a standard and open format to store
+certificates with, but it is not used by the most commonly used browsers. If
+you want curl to use the certificates you use with your (favourite) browser,
+you may need to download/compile a converter that can convert your browser's
+formatted certificates to PEM formatted ones.
+
+Example on how to automatically retrieve a document using a certificate with a
+personal password:
+
+ curl -E /path/to/cert.pem:password https://secure.site.com/
+
+If you neglect to specify the password on the command line, you will be
+prompted for the correct password before any data can be received.
+
+Many older HTTPS servers have problems with specific SSL or TLS versions,
+which newer versions of OpenSSL etc use, therefore it is sometimes useful to
+specify what SSL-version curl should use. Use -3, -2 or -1 to specify that
+exact SSL version to use (for SSLv3, SSLv2 or TLSv1 respectively):
+
+ curl -2 https://secure.site.com/
+
+Otherwise, curl will attempt to use a sensible TLS default version.
+
+## Resuming File Transfers
+
+To continue a file transfer where it was previously aborted, curl supports
+resume on HTTP(S) downloads as well as FTP uploads and downloads.
+
+Continue downloading a document:
+
+ curl -C - -o file ftp://ftp.server.com/path/file
+
+Continue uploading a document:
+
+ curl -C - -T file ftp://ftp.server.com/path/file
+
+Continue downloading a document from a web server:
+
+ curl -C - -o file http://www.server.com/
+
+## Time Conditions
+
+HTTP allows a client to specify a time condition for the document it requests.
+It is `If-Modified-Since` or `If-Unmodified-Since`. curl allows you to specify
+them with the `-z`/`--time-cond` flag.
+
+For example, you can easily make a download that only gets performed if the
+remote file is newer than a local copy. It would be made like:
+
+ curl -z local.html http://remote.server.com/remote.html
+
+Or you can download a file only if the local file is newer than the remote
+one. Do this by prepending the date string with a `-`, as in:
+
+ curl -z -local.html http://remote.server.com/remote.html
+
+You can specify a "free text" date as condition. Tell curl to only download
+the file if it was updated since January 12, 2012:
+
+ curl -z "Jan 12 2012" http://remote.server.com/remote.html
+
+Curl will then accept a wide range of date formats. You can always make the
+date check the other way around by prepending the date with a dash (`-`).
+
+## DICT
+
+For fun try
+
+ curl dict://dict.org/m:curl
+ curl dict://dict.org/d:heisenbug:jargon
+ curl dict://dict.org/d:daniel:web1913
+
+Aliases for 'm' are 'match' and 'find', and aliases for 'd' are 'define' and
+'lookup'. For example,
+
+ curl dict://dict.org/find:curl
+
+Commands that break the URL description of the RFC (but not the DICT
+protocol) are
+
+ curl dict://dict.org/show:db
+ curl dict://dict.org/show:strat
+
+Authentication support is still missing.
+
+## LDAP
+
+If you have installed the OpenLDAP library, curl can take advantage of it and
+offer `ldap://` support. On Windows, curl will use WinLDAP from Platform SDK
+by default.
+
+The default protocol version used by curl is LDAPv3. LDAPv2 is used as a
+fallback mechanism if LDAPv3 fails to connect.
+
+LDAP is a complex thing and writing an LDAP query is not an easy task. I do
+advise you to dig up the syntax description for that elsewhere. One such place
+might be: [RFC 2255, The LDAP URL
+Format](https://curl.haxx.se/rfc/rfc2255.txt)
+
+To show you an example, this is how I can get all people from my local LDAP
+server that have a certain sub-domain in their email address:
+
+ curl -B "ldap://ldap.frontec.se/o=frontec??sub?mail=*sth.frontec.se"
+
+If I want the same info in HTML format, I can get it by not using the `-B`
+(enforce ASCII) flag.
+
+You can also use authentication when accessing an LDAP catalog:
+
+ curl -u user:passwd "ldap://ldap.frontec.se/o=frontec??sub?mail=*"
+ curl "ldap://user:passwd@ldap.frontec.se/o=frontec??sub?mail=*"
+
+By default, if a user and password are provided, OpenLDAP/WinLDAP will use
+basic authentication. On Windows you can control this behavior by providing
+one of the `--basic`, `--ntlm` or `--digest` options on the curl command
+line:
+
+ curl --ntlm "ldap://user:passwd@ldap.frontec.se/o=frontec??sub?mail=*"
+
+On Windows, if no user/password is specified, an auto-negotiation mechanism
+will be used with the current logon credentials (SSPI/SPNEGO).
+
+## Environment Variables
+
+Curl reads and understands the following environment variables:
+
+ http_proxy, HTTPS_PROXY, FTP_PROXY
+
+They should be set for protocol-specific proxies. A general proxy should be
+set with
+
+ ALL_PROXY
+
+A comma-separated list of host names that shouldn't go through any proxy is
+set in (only an asterisk, `*`, matches all hosts)
+
+ NO_PROXY
+
+If the host name matches one of these strings, or the host is within the
+domain of one of these strings, transactions with that node will not be
+proxied. When a domain is used, it needs to start with a period. A user can
+specify that both www.example.com and foo.example.com should not use a proxy
+by setting `NO_PROXY` to `.example.com`. By including the full name you can
+exclude specific host names, so to make `www.example.com` not use a proxy but
+still have `foo.example.com` do it, set `NO_PROXY` to `www.example.com`.
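+
+In a POSIX shell, the variable can also be set just for a single invocation
+(using an example domain):
+
+    NO_PROXY=.example.com curl http://www.example.com/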
+
+The usage of the `-x`/`--proxy` flag overrides the environment variables.
+
+## Netrc
+
+Unix introduced the `.netrc` concept a long time ago. It is a way for a user
+to specify name and password for commonly visited FTP sites in a file so that
+you don't have to type them in each time you visit those sites. You realize
+this is a big security risk if someone else gets hold of your passwords, so
+therefore most unix programs won't read this file unless it is only readable
+by yourself (curl doesn't care though).
+
+Curl supports `.netrc` files if told to (using the `-n`/`--netrc` and
+`--netrc-optional` options). This is not restricted to just FTP, so curl can
+use it for all protocols where authentication is used.
+
+A very simple `.netrc` file could look something like:
+
+ machine curl.haxx.se login iamdaniel password mysecret
+
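+With such a file in place, tell curl to use it with `-n` (the entry above
+would then supply the credentials for this host):
+
+    curl -n ftp://curl.haxx.se/
+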
+## Custom Output
+
+To better allow script programmers to get to know about the progress of curl,
+the `-w`/`--write-out` option was introduced. Using this, you can specify what
+information from the previous transfer you want to extract.
+
+To display the amount of bytes downloaded together with some text and an
+ending newline:
+
+ curl -w 'We downloaded %{size_download} bytes\n' www.download.com
+
+## Kerberos FTP Transfer
+
+Curl supports kerberos4 and kerberos5/GSSAPI for FTP transfers. You need the
+kerberos package installed and used at curl build time for it to be available.
+
+First, get the krb-ticket the normal way, like with the kinit/kauth tool.
+Then use curl in a way similar to:
+
+ curl --krb private ftp://krb4site.com -u username:fakepwd
+
+There's no use for a password on the `-u` switch, but a blank one will make
+curl ask for one and you already entered the real password to kinit/kauth.
+
+## TELNET
+
+The curl telnet support is basic and very easy to use. Curl sends all data it
+reads on stdin to the remote server. Connect to a remote telnet server using a
+command line similar to:
+
+ curl telnet://remote.server.com
+
+And enter the data to pass to the server on stdin. The result will be sent to
+stdout or to the file you specify with `-o`.
+
+You might want the `-N`/`--no-buffer` option to switch off the buffered output
+for slow connections or similar.
+
+Pass options to the telnet protocol negotiation, by using the `-t` option. To
+tell the server we use a vt100 terminal, try something like:
+
+ curl -tTTYPE=vt100 telnet://remote.server.com
+
+Other interesting options for `-t` include:
+
+ - `XDISPLOC=<X display>` Sets the X display location.
+ - `NEW_ENV=<var,val>` Sets an environment variable.
+
+NOTE: The telnet protocol does not specify any way to login with a specified
+user and password so curl can't do that automatically. To do that, you need to
+track when the login prompt is received and send the username and password
+accordingly.
+
+## Persistent Connections
+
+Specifying multiple files on a single command line will make curl transfer all
+of them, one after the other in the specified order.
+
+libcurl will attempt to use persistent connections for the transfers so that
+the second transfer to the same host can use the same connection that was
+already initiated and was left open in the previous transfer. This greatly
+decreases connection time for all but the first transfer and it makes a far
+better use of the network.
+
+Note that curl cannot use persistent connections across separate curl
+invocations. Try to stuff as many URLs as possible on the same
+command line if they are using the same host, as that'll make the transfers
+faster. If you use an HTTP proxy for file transfers, practically all transfers
+will be persistent.
+
+## Multiple Transfers With A Single Command Line
+
+As is mentioned above, you can download multiple files with one command line
+by simply adding more URLs. If you want those to get saved to a local file
+instead of just printed to stdout, you need to add one save option for each
+URL you specify. Note that this also goes for the `-O` option (but not
+`--remote-name-all`).
+
+For example: get two files and use `-O` for the first and a custom file
+name for the second:
+
+ curl -O http://url.com/file.txt ftp://ftp.com/moo.exe -o moo.jpg
+
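+With `--remote-name-all`, a single option makes every given URL get saved
+under its remote file name:
+
+    curl --remote-name-all http://url.com/file1.txt http://url.com/file2.txt
+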
+You can also upload multiple files in a similar fashion:
+
+ curl -T local1 ftp://ftp.com/moo.exe -T local2 ftp://ftp.com/moo2.txt
+
+## IPv6
+
+curl will connect to a server with IPv6 when a host lookup returns an IPv6
+address and fall back to IPv4 if the connection fails. The `--ipv4` and
+`--ipv6` options can specify which address to use when both are
+available. IPv6 addresses can also be specified directly in URLs using the
+syntax:
+
+ http://[2001:1890:1112:1::20]/overview.html
+
+When this style is used, the `-g` option must be given to stop curl from
+interpreting the square brackets as special globbing characters. Link local
+and site local addresses including a scope identifier, such as `fe80::1234%1`,
+may also be used, but the scope portion must be numeric or match an existing
+network interface on Linux and the percent character must be URL escaped. The
+previous example in an SFTP URL might look like:
+
+ sftp://[fe80::1234%251]/
+
+IPv6 addresses provided other than in URLs (e.g. to the `--proxy`,
+`--interface` or `--ftp-port` options) should not be URL encoded.
+
+## Metalink
+
+Curl supports Metalink (both version 3 and version 4, per RFC 5854), a way
+to list multiple URIs and hashes for a file. Curl will make use of the mirrors
+listed within for failover if there are errors (such as the file or server not
+being available). It will also verify the hash of the file after the download
+completes. The Metalink file itself is downloaded and processed in memory and
+not stored in the local file system.
+
+Example to use a remote Metalink file:
+
+ curl --metalink http://www.example.com/example.metalink
+
+To use a Metalink file in the local file system, use FILE protocol
+(`file://`):
+
+ curl --metalink file://example.metalink
+
+Please note that if FILE protocol is disabled, there is no way to use a local
+Metalink file at the time of this writing. Also note that if `--metalink` and
+`--include` are used together, `--include` will be ignored. This is because
+including headers in the response will break the Metalink parser, and if the
+headers are included in the file described in the Metalink file, the hash
+check will fail.
+
+## Mailing Lists
+
+For your convenience, we have several open mailing lists to discuss curl, its
+development and things relevant to this. Get all info at
+https://curl.haxx.se/mail/.
+
+Please direct curl questions, feature requests and trouble reports to one of
+these mailing lists instead of mailing any individual.
+
+Available lists include:
+
+### curl-users
+
+Users of the command line tool. How to use it, what doesn't work, new
+features, related tools, questions, news, installations, compilations,
+running, porting etc.
+
+### curl-library
+
+Developers using or developing libcurl. Bugs, extensions, improvements.
+
+### curl-announce
+
+Low-traffic. Only receives announcements of new public versions. At worst,
+that makes something like one or two mails per month, but usually only one
+mail every second month.
+
+### curl-and-php
+
+Using the curl functions in PHP. Everything curl with a PHP angle. Or PHP with
+a curl angle.
+
+### curl-and-python
+
+Python hackers using curl with or without the python binding pycurl.
+
diff --git a/docs/Makefile.am b/docs/Makefile.am
index 28e947742..de3487010 100644
--- a/docs/Makefile.am
+++ b/docs/Makefile.am
@@ -49,6 +49,7 @@ EXTRA_DIST = \
CODE_STYLE.md \
CONTRIBUTE.md \
DEPRECATE.md \
+ EXPERIMENTAL.md \
FAQ \
FEATURES \
GOVERNANCE.md \
@@ -56,6 +57,7 @@ EXTRA_DIST = \
HISTORY.md \
HTTP-COOKIES.md \
HTTP2.md \
+ HTTP3.md \
INSTALL \
INSTALL.cmake \
INSTALL.md \
@@ -63,6 +65,7 @@ EXTRA_DIST = \
KNOWN_BUGS \
LICENSE-MIXING.md \
MAIL-ETIQUETTE \
+ PARALLEL-TRANSFERS.md \
README.cmake \
README.md \
README.netware \
diff --git a/docs/PARALLEL-TRANSFERS.md b/docs/PARALLEL-TRANSFERS.md
new file mode 100644
index 000000000..d3b38aee1
--- /dev/null
+++ b/docs/PARALLEL-TRANSFERS.md
@@ -0,0 +1,58 @@
+# Parallel transfers
+
+curl 7.66.0 introduces support for doing multiple transfers simultaneously,
+in parallel.
+
+## -Z, --parallel
+
+When this command line option is used, curl will perform the transfers given
+to it at the same time. It will do up to `--parallel-max` concurrent
+transfers, with a default value of 50.
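+
+For example, to fetch three files in parallel but cap the concurrency at 2
+(the URLs are placeholders):
+
+    curl -Z --parallel-max 2 -O $URL1 -O $URL2 -O $URL3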
+
+## Progress meter
+
+The progress meter that is displayed when doing parallel transfers is
+completely different than the regular one used for each single transfer.
+
+It shows:
+
+ o percent download (if known, which means *all* transfers need to have a
+ known size)
+ o percent upload (if known, with the same caveat as for download)
+ o total amount of downloaded data
+ o total amount of uploaded data
+ o number of transfers to perform
+ o number of concurrent transfers being transferred right now
+ o number of transfers queued up waiting to start
+ o total time all transfers are expected to take (if sizes are known)
+ o current time the transfers have spent so far
+ o estimated time left (if sizes are known)
+ o current transfer speed (the faster of UL/DL speeds measured over the last
+ few seconds)
+
+Example:
+
+ DL% UL% Dled Uled Xfers Live Qd Total Current Left Speed
+ 72 -- 37.9G 0 101 30 23 0:00:55 0:00:34 0:00:22 2752M
+
+## Behavior differences
+
+Connections are shared fine between different easy handles, but the
+"authentication contexts" are not. So if you for example do HTTP Digest auth
+with one handle for a particular transfer and then continue with another
+handle that reuses the same connection, the second handle can't send the
+necessary Authorization header at once, since the context is only kept in the
+original easy handle.
+
+To fix this, the authorization state could be made shareable with the share
+API as well, basically as a context per origin + path (realm?).
+
+Visible in tests 153, 1412 and more.
+
+## Feedback!
+
+This is early days for parallel transfer support. Keep your eyes open for
+unintended side effects or downright bugs.
+
+Tell us what you think and how you think we could improve this feature!
+
diff --git a/docs/ROADMAP.md b/docs/ROADMAP.md
index 10e7effee..1d47682bf 100644
--- a/docs/ROADMAP.md
+++ b/docs/ROADMAP.md
@@ -5,10 +5,19 @@ Roadmap of things Daniel Stenberg wants to work on next. It is intended to
serve as a guideline for others for information, feedback and possible
participation.
-HTTP/3
-------
+HSTS
+----
+
+ Complete and merge [the existing PR](https://github.com/curl/curl/pull/2682).
+
+ Loading a huge preload file is probably not too interesting to most people,
+ but using a custom file and reacting to HSTS response header probably are
+ good features.
- See the [QUIC and HTTP/3 wiki page](https://github.com/curl/curl/wiki/QUIC).
+DNS-over-TLS
+------------
+
+ Similar to DNS-over-HTTPS. Could share quite a lot of generic code.
ESNI (Encrypted SNI)
--------------------
@@ -16,44 +25,32 @@ ESNI (Encrypted SNI)
See Daniel's post on [Support of Encrypted
SNI](https://curl.haxx.se/mail/lib-2019-03/0000.html) on the mailing list.
-HSTS
-----
+ Initial work exists in https://github.com/curl/curl/pull/4011
-Complete and merge [the existing PR](https://github.com/curl/curl/pull/2682).
+tiny-curl
+---------
-Parallel transfers for the curl tool
-------------------------------------
+ There's no immediate action for this, but users seem keen on being able to
+ build custom minimized versions of libcurl for their products. Make sure
+ new features that are "niche" can still be disabled at build-time.
-This will require several new command line options to enable and control.
-
- 1. switch to creating a list of all the transfers first before any transfer
- is done
- 2. make the transfers using the multi interface
- 3. optionally fire up more transfers before the previous has completed
-
-Option to refuse HTTPS => HTTP redirects
-----------------------------------------
-
-Possibly as a new bit to `CURLOPT_FOLLOWLOCATION` ?
-
-Option to let CURLOPT_CUSTOMREQUEST be overridden on redirect
--------------------------------------------------------------
-
-(This is a common problem for people using `-X` and `-L` together.)
+MQTT
+----
-Possibly as a new bit to `CURLOPT_FOLLOWLOCATION` ?
+ Support receiving and sending MQTT messages. Initial work exists in
+ https://github.com/curl/curl/pull/3514
Hardcode “localhost”
--------------------
-No need to resolve it. Avoid a risk where this is resolved over the network
-and actually responds with something else than a local address. Some operating
-systems already do this. Also:
-https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-02
+ No need to resolve it. Avoid a risk where this is resolved over the network
+ and actually responds with something else than a local address. Some
+ operating systems already do this. Also:
+ https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-02
-Consider "menu config"-style build feature selection
-----------------------------------------------------
+"menu config"-style build feature selection
+-------------------------------------------
-Allow easier building of custom libcurl versions with only a selected feature
-where the available features are easily browsable and toggle-able ON/OFF or
-similar.
+ Allow easier building of custom libcurl versions with only a selected feature
+ where the available features are easily browsable and toggle-able ON/OFF or
+ similar.
diff --git a/docs/THANKS b/docs/THANKS
index 385ecd851..73b84cfdb 100644
--- a/docs/THANKS
+++ b/docs/THANKS
@@ -52,6 +52,7 @@ Alex Fishman
Alex Grebenschikov
Alex Gruz
Alex Malinovich
+Alex Mayorga
Alex McLellan
Alex Neblett
Alex Nichols
@@ -84,6 +85,7 @@ Alfonso Martone
Alfred Gebert
Allen Pulsifer
Alona Rossen
+Amit Katyal
Amol Pattekar
Amr Shahin
Anatol Belski
@@ -172,6 +174,7 @@ Ayoub Boudhar
Balaji Parasuram
Balaji S Rao
Balaji Salunke
+Balazs Kovacsics
Balint Szilakszi
Barry Abrahamson
Bart Whiteley
@@ -230,6 +233,7 @@ Brad King
Brad Spencer
Bradford Bruce
Brandon Casey
+Brandon Dong
Brandon Wang
Brendan Jurd
Brent Beardsley
@@ -261,6 +265,7 @@ Camille Moncelier
Caolan McNamara
Carie Pointer
Carlo Cannas
+Carlo Marcelo Arenas Belón
Carlo Teubner
Carlo Wood
Carlos ORyan
@@ -315,6 +320,7 @@ Clemens Gruber
Cliff Crosland
Clifford Wolf
Clint Clayton
+Clément Notin
Cody Jones
Cody Mack
Colby Ranger
@@ -714,6 +720,7 @@ Ian Wilkes
Ignacio Vazquez-Abrams
Igor Franchuk
Igor Khristophorov
+Igor Makarov
Igor Novoseltsev
Igor Polyakov
Ihor Karpenko
@@ -726,6 +733,7 @@ Ingmar Runge
Ingo Ralf Blum
Ingo Wilken
Irfan Adilovic
+Ironbars13 on github
Irving Wolfe
Isaac Boukris
Isaiah Norton
@@ -775,6 +783,7 @@ Jari Sundell
Jason Baietto
Jason Glasgow
Jason Juang
+Jason Lee
Jason Liu
Jason McDonald
Jason S. Priebe
@@ -809,6 +818,7 @@ Jens Schleusener
Jeremie Rapin
Jeremy Friesner
Jeremy Huddleston
+Jeremy Lainé
Jeremy Lin
Jeremy Pearson
Jeremy Tan
@@ -929,6 +939,7 @@ Julien Chaffraix
Julien Nabet
Julien Royer
Jun-ichiro itojun Hagino
+Junho Choi
Jurij Smakov
Juro Bystricky
Justin Clift
@@ -996,13 +1007,16 @@ Kristiyan Tsaklev
Kristoffer Gleditsch
Kunal Ekawde
Kurt Fankhauser
+Kyle Abramowitz
Kyle Edwards
Kyle J. McKay
Kyle L. Huff
Kyle Sallee
+Kyohei Kadota
Kyselgov E.N
Lachlan O'Dea
Ladar Levison
+Lance Ware
Larry Campbell
Larry Fahnoe
Larry Lin
@@ -1207,6 +1221,7 @@ Michael Kaufmann
Michael Kilburn
Michael Kujawa
Michael König
+Michael Lee
Michael Maltese
Michael Mealling
Michael Mueller
@@ -1220,6 +1235,7 @@ Michael Wallner
Michal Bonino
Michal Marek
Michal Trybus
+Michal Čaplygin
Michał Antoniak
Michał Fita
Michał Górny
@@ -1549,6 +1565,7 @@ Roger Leigh
Roland Blom
Roland Krikava
Roland Zimmermann
+Rolf Eike Beer
Rolland Dudemaine
Romain Coltel
Romain Fliedel
@@ -1682,7 +1699,6 @@ Stephen Kick
Stephen More
Stephen Toub
Sterling Hughes
-Steve Brokenshire
Steve Green
Steve H Truong
Steve Havelka
@@ -1723,6 +1739,7 @@ Teemu Yli-Elsila
Temprimus
Terri Oda
Terry Wu
+The Infinnovation team
TheAssassin on github
Theodore Dubois
Thomas Braun
@@ -1736,6 +1753,7 @@ Thomas Petazzoni
Thomas Ruecker
Thomas Schwinge
Thomas Tonino
+Thomas Vegas
Thomas van Hesteren
Thorsten Schöning
Tiit Pikma
@@ -1921,6 +1939,7 @@ cbartl on github
cclauss on github
clbr on github
cmfrolick on github
+codesniffer13 on github
d912e3 on github
daboul on github
dasimx on github
@@ -1956,20 +1975,24 @@ madblobfish on github
marc-groundctl on github
masbug on github
mccormickt12 on github
+migueljcrum on github
mkzero on github
moohoorama on github
nedres on github
neex on github
neheb on github
nevv on HackerOne/curl
+niallor on github
nianxuejie on github
niner on github
nk
nopjmp on github
olesteban on github
omau on github
+osabc on github
ovidiu-benea on github
patelvivekv1993 on github
+patnyb on github
pendrek at hackerone
pszemus on github
silveja1 on github
diff --git a/docs/THANKS-filter b/docs/THANKS-filter
index 29dc24c8a..d2adda578 100644
--- a/docs/THANKS-filter
+++ b/docs/THANKS-filter
@@ -98,3 +98,4 @@ s/Jason Priebe/Jason S. Priebe/
s/Ale Vesely/Alessandro Vesely/
s/Yamada Yasuharu/Yasuharu Yamada/
s/Jim Gallagher/James Gallagher/
+s/Steve Brokenshire/Stephen Brokenshire/
diff --git a/docs/TODO b/docs/TODO
index 5e1fcefae..6d30d26a4 100644
--- a/docs/TODO
+++ b/docs/TODO
@@ -18,11 +18,8 @@
1. libcurl
1.1 TFO support on Windows
- 1.2 More data sharing
1.3 struct lifreq
- 1.4 signal-based resolver timeouts
1.5 get rid of PATH_MAX
- 1.6 Modified buffer size approach
1.7 Support HTTP/2 for HTTP(S) proxies
1.8 CURLOPT_RESOLVE for any port number
1.9 Cache negative name resolves
@@ -36,12 +33,10 @@
1.17 Add support for IRIs
1.18 try next proxy if one doesn't work
1.20 SRV and URI DNS records
- 1.21 Have the URL API offer IDN decoding
1.22 CURLINFO_PAUSE_STATE
1.23 Offer API to flush the connection pool
1.24 TCP Fast Open for windows
1.25 Expose tried IP addresses that failed
- 1.26 CURL_REFUSE_CLEARTEXT
1.27 hardcode the "localhost" addresses
1.28 FD_CLOEXEC
1.29 Upgrade to websockets
@@ -62,7 +57,6 @@
4.1 HOST
4.2 Alter passive/active on failure and retry
4.3 Earlier bad letter detection
- 4.4 REST for large files
4.5 ASCII support
4.6 GSSAPI via Windows SSPI
4.7 STAT for LIST without data connection
@@ -70,12 +64,9 @@
5. HTTP
5.1 Better persistency for HTTP 1.0
- 5.2 support FF3 sqlite cookie files
5.3 Rearrange request header order
5.4 Allow SAN names in HTTP/2 server push
5.5 auth= in URLs
- 5.6 Refuse "downgrade" redirects
- 5.7 QUIC
6. TELNET
6.1 ditch stdin
@@ -83,12 +74,10 @@
6.3 feature negotiation debug data
7. SMTP
- 7.1 Pipelining
7.2 Enhanced capability support
7.3 Add CURLOPT_MAIL_CLIENT option
8. POP3
- 8.1 Pipelining
8.2 Enhanced capability support
9. IMAP
@@ -104,10 +93,8 @@
11.4 Create remote directories
12. New protocols
- 12.1 RSYNC
13. SSL
- 13.1 Disable specific versions
13.2 Provide mutex locking API
13.3 Support in-memory certs/ca certs/keys
13.4 Cache/share OpenSSL contexts
@@ -115,15 +102,12 @@
13.6 Provide callback for cert verification
13.7 improve configure --with-ssl
13.8 Support DANE
- 13.9 Configurable loading of OpenSSL configuration file
13.10 Support Authority Information Access certificate extension (AIA)
13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
13.12 Support HSTS
- 13.13 Support HPKP
13.14 Support the clienthello extension
14. GnuTLS
- 14.1 SSL engine stuff
14.2 check connection
15. WinSSL/SChannel
@@ -138,7 +122,6 @@
17. SSH protocols
17.1 Multiplexing
- 17.2 SFTP performance
17.3 Support better than MD5 hostkey hash
17.4 Support CURLOPT_PREQUOTE
@@ -146,16 +129,12 @@
18.1 sync
18.2 glob posts
18.3 prevent file overwriting
- 18.4 simultaneous parallel transfers
18.5 UTF-8 filenames in Content-Disposition
- 18.6 warning when setting an option
18.7 at least N milliseconds between requests
18.9 Choose the name of file in braces for complex URLs
18.10 improve how curl works in a windows console window
18.11 Windows: set attribute 'archive' for completed downloads
18.12 keep running, read instructions from pipe/socket
- 18.13 support metalink in http headers
- 18.14 --fail without --location should treat 3xx as a failure
18.15 --retry should resume
18.16 send only part of --data
18.17 consider file name from the redirected URL with -O ?
@@ -202,58 +181,20 @@
See https://github.com/curl/curl/pull/3378
-1.2 More data sharing
-
- curl_share_* functions already exist and work, and they can be extended to
- share more. For example, enable sharing of the ares channel.
-
1.3 struct lifreq
Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete.
To support IPv6 interface addresses for network interfaces properly.
-1.4 signal-based resolver timeouts
-
- libcurl built without an asynchronous resolver library uses alarm() to time
- out DNS lookups. When a timeout occurs, this causes libcurl to jump from the
- signal handler back into the library with a sigsetjmp, which effectively
- causes libcurl to continue running within the signal handler. This is
- non-portable and could cause problems on some platforms. A discussion on the
- problem is available at https://curl.haxx.se/mail/lib-2008-09/0197.html
-
- Also, alarm() provides timeout resolution only to the nearest second. alarm
- ought to be replaced by setitimer on systems that support it.
-
1.5 get rid of PATH_MAX
Having code use and rely on PATH_MAX is not nice:
https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html
- Currently the SSH based code uses it a bit, but to remove PATH_MAX from there
- we need libssh2 to properly tell us when we pass in a too small buffer and
- its current API (as of libssh2 1.2.7) doesn't.
-
-1.6 Modified buffer size approach
-
- Current libcurl allocates a fixed 16K size buffer for download and an
- additional 16K for upload. They are always unconditionally part of the easy
- handle. If CRLF translations are requested, an additional 32K "scratch
- buffer" is allocated. A total of 64K transfer buffers in the worst case.
-
- First, while the handles are not actually in use these buffers could be freed
- so that lingering handles just kept in queues or whatever waste less memory.
-
- Secondly, SFTP is a protocol that needs to handle many ~30K blocks at once
- since each need to be individually acked and therefore libssh2 must be
- allowed to send (or receive) many separate ones in parallel to achieve high
- transfer speeds. A current libcurl build with a 16K buffer makes that
- impossible, but one with a 512K buffer will reach MUCH faster transfers. But
- allocating 512K unconditionally for all buffers just in case they would like
- to do fast SFTP transfers at some point is not a good solution either.
-
- Dynamically allocate buffer size depending on protocol in use in combination
- with freeing it after each individual transfer? Other suggestions?
+ Currently the libssh2 SSH based code uses it, but to remove PATH_MAX from
+ there we need libssh2 to properly tell us when we pass in a too small buffer
+ and its current API (as of libssh2 1.2.7) doesn't.
1.7 Support HTTP/2 for HTTP(S) proxies
@@ -377,12 +318,6 @@
Offer support for resolving SRV and URI DNS records for libcurl to know which
server to connect to for various protocols (including HTTP!).
-1.21 Have the URL API offer IDN decoding
-
- Similar to how URL decoding/encoding is done, we could have URL functions to
- convert IDN host names to punycode (probably not the reverse).
- https://github.com/curl/curl/issues/3232
-
1.22 CURLINFO_PAUSE_STATE
Return information about the transfer's current pause state, in both
@@ -407,21 +342,6 @@
https://github.com/curl/curl/issues/2126
-1.26 CURL_REFUSE_CLEARTEXT
-
- An environment variable that when set will make libcurl refuse to use any
- cleartext network protocol. That's all non-encrypted ones (FTP, HTTP, Gopher,
- etc). By adding the check to libcurl and not just curl, this environment
- variable can then help users to block all libcurl-using programs from
- accessing the network using unsafe protocols.
-
- The variable could be given some sort of syntax or different levels and be
- used to also allow for example users to refuse libcurl to do transfers with
- HTTPS certificate checks disabled.
-
- It could also automatically refuse usernames in URLs when set
- (see CURLOPT_DISALLOW_USERNAME_IN_URL)
-
1.27 hardcode the "localhost" addresses
There's this new spec getting adopted that says "localhost" should always and
@@ -539,12 +459,6 @@
Make the detection of (bad) %0d and %0a codes in FTP URL parts earlier in the
process to avoid doing a resolve and connect in vain.
-4.4 REST for large files
-
- REST fix for servers not behaving well on >2GB requests. This should fail if
- the server doesn't set the pointer to the requested index. The tricky
- (impossible?) part is to figure out if the server did the right thing or not.
-
4.5 ASCII support
FTP ASCII transfers do not follow RFC959. They don't convert the data
@@ -577,12 +491,6 @@
"Better" support for persistent connections over HTTP 1.0
https://curl.haxx.se/bug/feature.cgi?id=1089001
-5.2 support FF3 sqlite cookie files
-
- Firefox 3 is changing from its former format to a a sqlite database instead.
- We should consider how (lib)curl can/should support this.
- https://curl.haxx.se/bug/feature.cgi?id=1871388
-
5.3 Rearrange request header order
Server implementors often make an effort to detect browser and to reject
@@ -611,36 +519,19 @@
For example:
- http://test:pass;auth=NTLM@example.com would be equivalent to specifying --user
- test:pass;auth=NTLM or --user test:pass --ntlm from the command line.
+ http://test:pass;auth=NTLM@example.com would be equivalent to specifying
+ --user test:pass;auth=NTLM or --user test:pass --ntlm from the command line.
Additionally this should be implemented for proxy base URLs as well.
-5.6 Refuse "downgrade" redirects
-
- See https://github.com/curl/curl/issues/226
-
- Consider a way to tell curl to refuse to "downgrade" protocol with a redirect
- and/or possibly a bit that refuses redirect to change protocol completely.
-
-5.7 QUIC
-
- The standardization process of QUIC has been taken to the IETF and can be
- followed on the [IETF QUIC Mailing
- list](https://www.ietf.org/mailman/listinfo/quic). I'd like us to get on the
- bandwagon. Ideally, this would be done with a separate library/project to
- handle the binary/framing layer in a similar fashion to how HTTP/2 is
- implemented. This, to allow other projects to benefit from the work and to
- thus broaden the interest and chance of others to participate.
-
6. TELNET
6.1 ditch stdin
-Reading input (to send to the remote server) on stdin is a crappy solution for
-library purposes. We need to invent a good way for the application to be able
-to provide the data to send.
+ Reading input (to send to the remote server) on stdin is a crappy solution
+ for library purposes. We need to invent a good way for the application to be
+ able to provide the data to send.
6.2 ditch telnet-specific select
@@ -650,15 +541,11 @@ to provide the data to send.
6.3 feature negotiation debug data
- Add telnet feature negotiation data to the debug callback as header data.
+ Add telnet feature negotiation data to the debug callback as header data.
7. SMTP
-7.1 Pipelining
-
- Add support for pipelining emails.
-
7.2 Enhanced capability support
Add the ability, for an application that uses libcurl, to obtain the list of
@@ -677,10 +564,6 @@ to provide the data to send.
8. POP3
-8.1 Pipelining
-
- Add support for pipelining commands.
-
8.2 Enhanced capability support
Add the ability, for an application that uses libcurl, to obtain the list of
@@ -725,18 +608,8 @@ that doesn't exist on the server, just like --ftp-create-dirs.
12. New protocols
-12.1 RSYNC
-
- There's no RFC for the protocol or an URI/URL format. An implementation
- should most probably use an existing rsync library, such as librsync.
-
13. SSL
-13.1 Disable specific versions
-
- Provide an option that allows for disabling specific SSL versions, such as
- SSLv2 https://curl.haxx.se/bug/feature.cgi?id=1767276
-
13.2 Provide mutex locking API
Provide a libcurl API for setting mutex callbacks in the underlying SSL
@@ -801,17 +674,6 @@ that doesn't exist on the server, just like --ftp-create-dirs.
Björn Stenberg wrote a separate initial take on DANE that was never
completed.
-13.9 Configurable loading of OpenSSL configuration file
-
- libcurl calls the OpenSSL function CONF_modules_load_file() in openssl.c,
- Curl_ossl_init(). "We regard any changes in the OpenSSL configuration as a
- security risk or at least as unnecessary."
-
- Please add a configuration switch or something similar to disable the
- CONF_modules_load_file() call.
-
- See https://github.com/curl/curl/issues/2724
-
13.10 Support Authority Information Access certificate extension (AIA)
AIA can provide various things like CRLs but more importantly information
@@ -844,21 +706,6 @@ that doesn't exist on the server, just like --ftp-create-dirs.
Doc: https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
RFC 6797: https://tools.ietf.org/html/rfc6797
-13.13 Support HPKP
-
- "HTTP Public Key Pinning" is TOFU (trust on first use), time-based
- features indicated by a HTTP header send by the webserver. It's purpose is
- to prevent Man-in-the-middle attacks by trusted CAs by allowing webadmins
- to specify which CAs/certificates/public keys to trust when connection to
- their websites.
-
- It can be build based on PINNEDPUBLICKEY.
-
- Wikipedia: https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning
- OWASP: https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning
- Doc: https://developer.mozilla.org/de/docs/Web/Security/Public_Key_Pinning
- RFC: https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21
-
13.14 Support the clienthello extension
Certain stupid networks and middle boxes have a problem with SSL handshake
@@ -871,10 +718,6 @@ that doesn't exist on the server, just like --ftp-create-dirs.
14. GnuTLS
-14.1 SSL engine stuff
-
- Is this even possible?
-
14.2 check connection
Add a way to check if the connection seems to be alive, to correspond to the
@@ -949,11 +792,6 @@ that doesn't exist on the server, just like --ftp-create-dirs.
To fix this, libcurl would have to detect an existing connection and "attach"
the new transfer to the existing one.
-17.2 SFTP performance
-
- libcurl's SFTP transfer performance is sub par and can be improved, mostly by
- the approach mentioned in "1.6 Modified buffer size approach".
-
17.3 Support better than MD5 hostkey hash
libcurl offers the CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 option for verifying the
@@ -992,16 +830,6 @@ that doesn't exist on the server, just like --ftp-create-dirs.
existing). So that index.html becomes first index.html.1 and then
index.html.2 etc.
-18.4 simultaneous parallel transfers
-
- The client could be told to use maximum N simultaneous parallel transfers and
- then just make sure that happens. It should of course not make more than one
- connection to the same remote host. This would require the client to use the
- multi interface. https://curl.haxx.se/bug/feature.cgi?id=1558595
-
- Using the multi interface would also allow properly using parallel transfers
- with HTTP/2 and supporting HTTP/2 server push from the command line.
-
18.5 UTF-8 filenames in Content-Disposition
RFC 6266 documents how UTF-8 names can be passed to a client in the
@@ -1009,12 +837,6 @@ that doesn't exist on the server, just like --ftp-create-dirs.
https://github.com/curl/curl/issues/1888
-18.6 warning when setting an option
-
- Display a warning when libcurl returns an error when setting an option.
- This can be useful to tell when support for a particular feature hasn't been
- compiled into the library.
-
18.7 at least N milliseconds between requests
Allow curl command lines issue a lot of request against services that limit
@@ -1063,30 +885,6 @@ that doesn't exist on the server, just like --ftp-create-dirs.
invoke can talk to the still running instance and ask for transfers to get
done, and thus maintain its connection pool, DNS cache and more.
-18.13 support metalink in http headers
-
- Curl has support for downloading a metalink xml file, processing it, and then
- downloading the target of the metalink. This is done via the --metalink option.
- It would be nice if metalink also supported downloading via metalink
- information that is stored in HTTP headers (RFC 6249). Theoretically this could
- also be supported with the --metalink option.
-
- See https://tools.ietf.org/html/rfc6249
-
- See also https://lists.gnu.org/archive/html/bug-wget/2015-06/msg00034.html for
- an implematation of this in wget.
-
-18.14 --fail without --location should treat 3xx as a failure
-
- To allow a command line like this to detect a redirect and consider it a
- failure:
-
- curl -v --fail -O https://example.com/curl-7.48.0.tar.gz
-
- ... --fail must treat 3xx responses as failures too. The least problematic
- way to implement this is probably to add that new logic in the command line
- tool only and not in the underlying CURLOPT_FAILONERROR logic.
-
18.15 --retry should resume
When --retry is used and curl actually retries transfer, it should use the
@@ -1202,17 +1000,17 @@ that doesn't exist on the server, just like --ftp-create-dirs.
20.5 Add support for concurrent connections
- Tests 836, 882 and 938 were designed to verify that separate connections aren't
- used when using different login credentials in protocols that shouldn't re-use
- a connection under such circumstances.
+ Tests 836, 882 and 938 were designed to verify that separate connections
+ aren't used when using different login credentials in protocols that
+ shouldn't re-use a connection under such circumstances.
Unfortunately, ftpserver.pl doesn't appear to support multiple concurrent
- connections. The read while() loop seems to loop until it receives a disconnect
- from the client, where it then enters the waiting for connections loop. When
- the client opens a second connection to the server, the first connection hasn't
- been dropped (unless it has been forced - which we shouldn't do in these tests)
- and thus the wait for connections loop is never entered to receive the second
- connection.
+ connections. The read while() loop seems to loop until it receives a
+ disconnect from the client, where it then enters the waiting for connections
+ loop. When the client opens a second connection to the server, the first
+ connection hasn't been dropped (unless it has been forced - which we
+ shouldn't do in these tests) and thus the wait for connections loop is never
+ entered to receive the second connection.
20.6 Use the RFC6265 test suite
diff --git a/docs/cmdline-opts/Makefile.inc b/docs/cmdline-opts/Makefile.inc
index 7a8af6f9e..6b4387475 100644
--- a/docs/cmdline-opts/Makefile.inc
+++ b/docs/cmdline-opts/Makefile.inc
@@ -65,6 +65,7 @@ DPAGES = \
http1.0.d \
http1.1.d http2.d \
http2-prior-knowledge.d \
+ http3.d \
ignore-content-length.d \
include.d \
insecure.d \
@@ -100,7 +101,10 @@ DPAGES = \
noproxy.d \
ntlm.d ntlm-wb.d \
oauth2-bearer.d \
- output.d pass.d \
+ output.d \
+ pass.d \
+ parallel.d \
+ parallel-max.d \
path-as-is.d \
pinnedpubkey.d \
post301.d \
@@ -154,6 +158,7 @@ DPAGES = \
retry-delay.d \
retry-max-time.d \
retry.d \
+ sasl-authzid.d \
sasl-ir.d \
service-name.d \
show-error.d \
diff --git a/docs/cmdline-opts/config.d b/docs/cmdline-opts/config.d
index ef9894b8e..df3d39220 100644
--- a/docs/cmdline-opts/config.d
+++ b/docs/cmdline-opts/config.d
@@ -40,7 +40,7 @@ Unix-like systems (which returns the home dir given the current user in your
system). On Windows, it then checks for the APPDATA variable, or as a last
resort the '%USERPROFILE%\\Application Data'.
-2) On windows, if there is no _curlrc file in the home dir, it checks for one
+2) On Windows, if there is no .curlrc file in the home dir, it checks for one
in the same dir the curl executable is placed. On Unix-like systems, it will
simply try to load .curlrc from the determined home dir.
diff --git a/docs/cmdline-opts/http0.9.d b/docs/cmdline-opts/http0.9.d
index 33fe72d18..7e783f696 100644
--- a/docs/cmdline-opts/http0.9.d
+++ b/docs/cmdline-opts/http0.9.d
@@ -10,5 +10,4 @@ HTTP/0.9 is a completely headerless response and therefore you can also
connect with this to non-HTTP servers and still get a response since curl will
simply transparently downgrade - if allowed.
-A future curl version will deny continuing if the response isn't at least
-HTTP/1.0 unless this option is used.
+Since curl 7.66.0, HTTP/0.9 is disabled by default.
diff --git a/docs/cmdline-opts/http2.d b/docs/cmdline-opts/http2.d
index 04cff00a4..cf8f2988e 100644
--- a/docs/cmdline-opts/http2.d
+++ b/docs/cmdline-opts/http2.d
@@ -6,5 +6,6 @@ Mutexed: http1.1 http1.0 http2-prior-knowledge
Requires: HTTP/2
See-also: no-alpn
Help: Use HTTP 2
+See-also: http1.1 http3
---
Tells curl to use HTTP version 2.
diff --git a/docs/cmdline-opts/http3.d b/docs/cmdline-opts/http3.d
new file mode 100644
index 000000000..ca85e3a64
--- /dev/null
+++ b/docs/cmdline-opts/http3.d
@@ -0,0 +1,19 @@
+Long: http3
+Tags: Versions
+Protocols: HTTP
+Added: 7.66.0
+Mutexed: http1.1 http1.0 http2 http2-prior-knowledge
+Requires: HTTP/3
+Help: Use HTTP v3
+See-also: http1.1 http2
+---
+
+WARNING: this option is experimental. Do not use in production.
+
+Tells curl to use HTTP version 3 directly to the host and port number used in
+the URL. A normal HTTP/3 transaction will be done to a host and then get
+redirected via Alt-Svc, but this option allows you to circumvent that when
+you know that the target speaks HTTP/3 on the given host and port.
+
+This option will make curl fail if a QUIC connection cannot be established;
+it cannot fall back to a lower HTTP version on its own.
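
A hedged usage sketch (example.com is a placeholder host; HTTP/3 is an
optional, experimental build feature, so the sketch first checks whether this
curl build lists HTTP3 among its features):

```shell
# Only attempt HTTP/3 when this curl build advertises the feature.
if curl --version | grep -q HTTP3; then
  # Talk HTTP/3 directly, skipping Alt-Svc discovery; curl fails rather
  # than falling back if a QUIC connection cannot be established.
  httpver=$(curl --http3 --max-time 10 -so /dev/null \
              -w '%{http_version}' https://example.com/ || echo "no-quic")
else
  httpver="no-http3-in-this-build"
fi
echo "$httpver"
```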
diff --git a/docs/cmdline-opts/parallel-max.d b/docs/cmdline-opts/parallel-max.d
new file mode 100644
index 000000000..a8c79c743
--- /dev/null
+++ b/docs/cmdline-opts/parallel-max.d
@@ -0,0 +1,9 @@
+Long: parallel-max
+Help: Maximum concurrency for parallel transfers
+Added: 7.66.0
+See-also: parallel
+---
+When asked to do parallel transfers, using --parallel, this option controls
+the maximum number of transfers to do simultaneously.
+
+The default is 50.
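
For instance (again with a local temp file and file:// URLs as stand-ins for
real remote URLs, assuming a curl 7.66.0 or later build):

```shell
# Request the same local file three times, but cap the number of
# simultaneous transfers at 2; the third waits in the queue.
f=$(mktemp)
printf 'data' > "$f"
curl -sZ --parallel-max 2 \
  -o /dev/null -o /dev/null -o /dev/null \
  "file://$f" "file://$f" "file://$f"
rc=$?
```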
diff --git a/docs/cmdline-opts/parallel.d b/docs/cmdline-opts/parallel.d
new file mode 100644
index 000000000..fac84e624
--- /dev/null
+++ b/docs/cmdline-opts/parallel.d
@@ -0,0 +1,7 @@
+Short: Z
+Long: parallel
+Help: Perform transfers in parallel
+Added: 7.66.0
+---
+Makes curl perform its transfers in parallel as compared to the regular serial
+manner.
diff --git a/docs/cmdline-opts/retry.d b/docs/cmdline-opts/retry.d
index 32d1c799b..3db89b71c 100644
--- a/docs/cmdline-opts/retry.d
+++ b/docs/cmdline-opts/retry.d
@@ -14,4 +14,7 @@ for all forthcoming retries it will double the waiting time until it reaches
using --retry-delay you disable this exponential backoff algorithm. See also
--retry-max-time to limit the total time allowed for retries.
+Since curl 7.66.0, curl will comply with the Retry-After: response header if
+one is present, to know when to issue the next retry.
+
If this option is used several times, the last one will be used.
diff --git a/docs/cmdline-opts/sasl-authzid.d b/docs/cmdline-opts/sasl-authzid.d
new file mode 100644
index 000000000..b34db97fc
--- /dev/null
+++ b/docs/cmdline-opts/sasl-authzid.d
@@ -0,0 +1,11 @@
+Long: sasl-authzid
+Help: Identity to act as during SASL PLAIN authentication
+Added: 7.66.0
+---
+Use this authorisation identity (authzid), during SASL PLAIN authentication,
+in addition to the authentication identity (authcid) as specified by --user.
+
+If the option isn't specified, the server will derive the authzid from the
+authcid, but if specified, and depending on the server implementation, it may
+be used to access another user's inbox that the user has been granted access
+to, or for example a shared mailbox.
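
A sketch of the syntax (the server name, credentials and the shared identity
are all placeholders, so no real mailbox is reached here):

```shell
# Authenticate as 'kurt' (authcid) while acting as the shared identity
# 'shared' (authzid). imap.example.com is a placeholder and will not
# answer; --max-time bounds the attempt.
out=$(curl --sasl-authzid shared --user kurt:secret --max-time 5 \
        "imap://imap.example.com/INBOX" 2>&1 ||
      echo "placeholder host unreachable (expected)")
echo "$out"
```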
diff --git a/docs/examples/Makefile.inc b/docs/examples/Makefile.inc
index 8dd55b9df..6fd8ecd76 100644
--- a/docs/examples/Makefile.inc
+++ b/docs/examples/Makefile.inc
@@ -5,7 +5,7 @@
# | (__| |_| | _ <| |___
# \___|\___/|_| \_\_____|
#
-# Copyright (C) 1998 - 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+# Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
@@ -35,7 +35,8 @@ check_PROGRAMS = 10-at-a-time anyauthput cookie_interface debug fileupload \
http2-upload http2-serverpush getredirect ftpuploadfrommem \
ftpuploadresume sslbackend postit2-formadd multi-formadd \
shared-connection-cache sftpuploadresume http2-pushinmemory parseurl \
- urlapi
+ urlapi imap-authzid pop3-authzid smtp-authzid http3 altsvc \
+ http3-present
# These examples require external dependencies that may not be commonly
# available on POSIX systems, so don't bother attempting to compile them here.
diff --git a/docs/examples/altsvc.c b/docs/examples/altsvc.c
new file mode 100644
index 000000000..24ef42585
--- /dev/null
+++ b/docs/examples/altsvc.c
@@ -0,0 +1,56 @@
+/***************************************************************************
+ * _ _ ____ _
+ * Project ___| | | | _ \| |
+ * / __| | | | |_) | |
+ * | (__| |_| | _ <| |___
+ * \___|\___/|_| \_\_____|
+ *
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+ *
+ * This software is licensed as described in the file COPYING, which
+ * you should have received as part of this distribution. The terms
+ * are also available at https://curl.haxx.se/docs/copyright.html.
+ *
+ * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+ * copies of the Software, and permit persons to whom the Software is
+ * furnished to do so, under the terms of the COPYING file.
+ *
+ * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+ * KIND, either express or implied.
+ *
+ ***************************************************************************/
+/* <DESC>
+ * HTTP with Alt-Svc support
+ * </DESC>
+ */
+#include <stdio.h>
+#include <curl/curl.h>
+
+int main(void)
+{
+ CURL *curl;
+ CURLcode res;
+
+ curl = curl_easy_init();
+ if(curl) {
+ curl_easy_setopt(curl, CURLOPT_URL, "https://example.com");
+
+ /* cache the alternatives in this file */
+ curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc.txt");
+
+ /* restrict which HTTP versions to use alternative services for */
+ curl_easy_setopt(curl, CURLOPT_ALTSVC_CTRL, (long)
+ CURLALTSVC_H1|CURLALTSVC_H2|CURLALTSVC_H3);
+
+ /* Perform the request, res will get the return code */
+ res = curl_easy_perform(curl);
+ /* Check for errors */
+ if(res != CURLE_OK)
+ fprintf(stderr, "curl_easy_perform() failed: %s\n",
+ curl_easy_strerror(res));
+
+ /* always cleanup */
+ curl_easy_cleanup(curl);
+ }
+ return 0;
+}
diff --git a/docs/examples/curlx.c b/docs/examples/curlx.c
index eb37a6a72..a4d59427a 100644
--- a/docs/examples/curlx.c
+++ b/docs/examples/curlx.c
@@ -277,7 +277,7 @@ int main(int argc, char **argv)
int tabLength = 100;
char *binaryptr;
- char *mimetype;
+ char *mimetype = NULL;
char *mimetypeaccept = NULL;
char *contenttype;
const char **pp;
@@ -294,7 +294,7 @@ int main(int argc, char **argv)
binaryptr = malloc(tabLength);
- p.verbose = 0;
+ memset(&p, '\0', sizeof(p));
p.errorbio = BIO_new_fp(stderr, BIO_NOCLOSE);
curl_global_init(CURL_GLOBAL_DEFAULT);
@@ -372,7 +372,7 @@ int main(int argc, char **argv)
args++;
}
- if(mimetype == NULL || mimetypeaccept == NULL)
+ if(mimetype == NULL || mimetypeaccept == NULL || p.p12file == NULL)
badarg = 1;
if(badarg) {
diff --git a/docs/examples/ephiperfifo.c b/docs/examples/ephiperfifo.c
index 4668c6ca3..9f89125a9 100644
--- a/docs/examples/ephiperfifo.c
+++ b/docs/examples/ephiperfifo.c
@@ -73,12 +73,6 @@ callback.
#include <gnurl/curl.h>
-#ifdef __GNUC__
-#define _Unused __attribute__((unused))
-#else
-#define _Unused
-#endif
-
#define MSG_OUT stdout /* Send info to stdout, change to stderr if you want */
@@ -114,7 +108,7 @@ typedef struct _SockInfo
GlobalInfo *global;
} SockInfo;
-#define __case(code) \
+#define mycase(code) \
case code: s = __STRING(code)
/* Die if we get a bad CURLMcode somewhere */
@@ -123,14 +117,14 @@ static void mcode_or_die(const char *where, CURLMcode code)
if(CURLM_OK != code) {
const char *s;
switch(code) {
- __case(CURLM_BAD_HANDLE); break;
- __case(CURLM_BAD_EASY_HANDLE); break;
- __case(CURLM_OUT_OF_MEMORY); break;
- __case(CURLM_INTERNAL_ERROR); break;
- __case(CURLM_UNKNOWN_OPTION); break;
- __case(CURLM_LAST); break;
+ mycase(CURLM_BAD_HANDLE); break;
+ mycase(CURLM_BAD_EASY_HANDLE); break;
+ mycase(CURLM_OUT_OF_MEMORY); break;
+ mycase(CURLM_INTERNAL_ERROR); break;
+ mycase(CURLM_UNKNOWN_OPTION); break;
+ mycase(CURLM_LAST); break;
default: s = "CURLM_unknown"; break;
- __case(CURLM_BAD_SOCKET);
+ mycase(CURLM_BAD_SOCKET);
fprintf(MSG_OUT, "ERROR: %s returns %s\n", where, s);
/* ignore this error */
return;
@@ -336,22 +330,21 @@ static int sock_cb(CURL *e, curl_socket_t s, int what, void *cbp, void *sockp)
/* CURLOPT_WRITEFUNCTION */
-static size_t write_cb(void *ptr _Unused, size_t size, size_t nmemb,
- void *data)
+static size_t write_cb(void *ptr, size_t size, size_t nmemb, void *data)
{
- size_t realsize = size * nmemb;
- (void)_Unused;
+ (void)ptr;
(void)data;
-
- return realsize;
+ return size * nmemb;
}
/* CURLOPT_PROGRESSFUNCTION */
-static int prog_cb(void *p, double dltotal, double dlnow, double ult _Unused,
- double uln _Unused)
+static int prog_cb(void *p, double dltotal, double dlnow, double ult,
+ double uln)
{
ConnInfo *conn = (ConnInfo *)p;
+ (void)ult;
+ (void)uln;
fprintf(MSG_OUT, "Progress: %s (%g/%g)\n", conn->url, dlnow, dltotal);
return 0;
@@ -469,12 +462,14 @@ void SignalHandler(int signo)
}
}
-int main(int argc _Unused, char **argv _Unused)
+int main(int argc, char **argv)
{
GlobalInfo g;
struct itimerspec its;
struct epoll_event ev;
struct epoll_event events[10];
+ (void)argc;
+ (void)argv;
g_should_exit_ = 0;
signal(SIGINT, SignalHandler);
@@ -547,5 +542,6 @@ int main(int argc _Unused, char **argv _Unused)
fflush(MSG_OUT);
curl_multi_cleanup(g.multi);
+ clean_fifo(&g);
return 0;
}
diff --git a/docs/examples/hiperfifo.c b/docs/examples/hiperfifo.c
index fb25259c2..a7f71125a 100644
--- a/docs/examples/hiperfifo.c
+++ b/docs/examples/hiperfifo.c
@@ -72,12 +72,6 @@ callback.
#include <errno.h>
#include <sys/cdefs.h>
-#ifdef __GNUC__
-#define _Unused __attribute__((unused))
-#else
-#define _Unused
-#endif
-
#define MSG_OUT stdout /* Send info to stdout, change to stderr if you want */
@@ -115,7 +109,7 @@ typedef struct _SockInfo
GlobalInfo *global;
} SockInfo;
-#define __case(code) \
+#define mycase(code) \
case code: s = __STRING(code)
/* Die if we get a bad CURLMcode somewhere */
@@ -124,14 +118,14 @@ static void mcode_or_die(const char *where, CURLMcode code)
if(CURLM_OK != code) {
const char *s;
switch(code) {
- __case(CURLM_BAD_HANDLE); break;
- __case(CURLM_BAD_EASY_HANDLE); break;
- __case(CURLM_OUT_OF_MEMORY); break;
- __case(CURLM_INTERNAL_ERROR); break;
- __case(CURLM_UNKNOWN_OPTION); break;
- __case(CURLM_LAST); break;
+ mycase(CURLM_BAD_HANDLE); break;
+ mycase(CURLM_BAD_EASY_HANDLE); break;
+ mycase(CURLM_OUT_OF_MEMORY); break;
+ mycase(CURLM_INTERNAL_ERROR); break;
+ mycase(CURLM_UNKNOWN_OPTION); break;
+ mycase(CURLM_LAST); break;
default: s = "CURLM_unknown"; break;
- __case(CURLM_BAD_SOCKET);
+ mycase(CURLM_BAD_SOCKET);
fprintf(MSG_OUT, "ERROR: %s returns %s\n", where, s);
/* ignore this error */
return;
@@ -143,9 +137,10 @@ static void mcode_or_die(const char *where, CURLMcode code)
/* Update the event timer after curl_multi library calls */
-static int multi_timer_cb(CURLM *multi _Unused, long timeout_ms, GlobalInfo *g)
+static int multi_timer_cb(CURLM *multi, long timeout_ms, GlobalInfo *g)
{
struct timeval timeout;
+ (void)multi;
timeout.tv_sec = timeout_ms/1000;
timeout.tv_usec = (timeout_ms%1000)*1000;
@@ -220,10 +215,12 @@ static void event_cb(int fd, short kind, void *userp)
/* Called by libevent when our timeout expires */
-static void timer_cb(int fd _Unused, short kind _Unused, void *userp)
+static void timer_cb(int fd, short kind, void *userp)
{
GlobalInfo *g = (GlobalInfo *)userp;
CURLMcode rc;
+ (void)fd;
+ (void)kind;
rc = curl_multi_socket_action(g->multi,
CURL_SOCKET_TIMEOUT, 0, &g->still_running);
@@ -303,22 +300,21 @@ static int sock_cb(CURL *e, curl_socket_t s, int what, void *cbp, void *sockp)
/* CURLOPT_WRITEFUNCTION */
-static size_t write_cb(void *ptr _Unused, size_t size, size_t nmemb,
- void *data)
+static size_t write_cb(void *ptr, size_t size, size_t nmemb, void *data)
{
- size_t realsize = size * nmemb;
- (void)_Unused;
+ (void)ptr;
(void)data;
-
- return realsize;
+ return size * nmemb;
}
/* CURLOPT_PROGRESSFUNCTION */
-static int prog_cb(void *p, double dltotal, double dlnow, double ult _Unused,
- double uln _Unused)
+static int prog_cb(void *p, double dltotal, double dlnow, double ult,
+ double uln)
{
ConnInfo *conn = (ConnInfo *)p;
+ (void)ult;
+ (void)uln;
fprintf(MSG_OUT, "Progress: %s (%g/%g)\n", conn->url, dlnow, dltotal);
return 0;
@@ -361,12 +357,14 @@ static void new_conn(char *url, GlobalInfo *g)
}
/* This gets called whenever data is received from the fifo */
-static void fifo_cb(int fd _Unused, short event _Unused, void *arg)
+static void fifo_cb(int fd, short event, void *arg)
{
char s[1024];
long int rv = 0;
int n = 0;
GlobalInfo *g = (GlobalInfo *)arg;
+ (void)fd;
+ (void)event;
do {
s[0]='\0';
@@ -427,9 +425,11 @@ static void clean_fifo(GlobalInfo *g)
unlink(fifo);
}
-int main(int argc _Unused, char **argv _Unused)
+int main(int argc, char **argv)
{
GlobalInfo g;
+ (void)argc;
+ (void)argv;
memset(&g, 0, sizeof(GlobalInfo));
g.evbase = event_base_new();
diff --git a/docs/examples/http3-present.c b/docs/examples/http3-present.c
new file mode 100644
index 000000000..857952dc7
--- /dev/null
+++ b/docs/examples/http3-present.c
@@ -0,0 +1,47 @@
+/***************************************************************************
+ * _ _ ____ _
+ * Project ___| | | | _ \| |
+ * / __| | | | |_) | |
+ * | (__| |_| | _ <| |___
+ * \___|\___/|_| \_\_____|
+ *
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+ *
+ * This software is licensed as described in the file COPYING, which
+ * you should have received as part of this distribution. The terms
+ * are also available at https://curl.haxx.se/docs/copyright.html.
+ *
+ * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+ * copies of the Software, and permit persons to whom the Software is
+ * furnished to do so, under the terms of the COPYING file.
+ *
+ * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+ * KIND, either express or implied.
+ *
+ ***************************************************************************/
+/* <DESC>
+ * Checks if HTTP/3 support is present in libcurl.
+ * </DESC>
+ */
+#include <stdio.h>
+#include <curl/curl.h>
+
+int main(void)
+{
+ curl_version_info_data *ver;
+
+ curl_global_init(CURL_GLOBAL_ALL);
+
+ ver = curl_version_info(CURLVERSION_NOW);
+ if(ver->features & CURL_VERSION_HTTP2)
+ printf("HTTP/2 support is present\n");
+
+ if(ver->features & CURL_VERSION_HTTP3)
+ printf("HTTP/3 support is present\n");
+
+ if(ver->features & CURL_VERSION_ALTSVC)
+ printf("Alt-svc support is present\n");
+
+ curl_global_cleanup();
+ return 0;
+}
diff --git a/docs/examples/http3.c b/docs/examples/http3.c
new file mode 100644
index 000000000..240a7edd4
--- /dev/null
+++ b/docs/examples/http3.c
@@ -0,0 +1,54 @@
+/***************************************************************************
+ * _ _ ____ _
+ * Project ___| | | | _ \| |
+ * / __| | | | |_) | |
+ * | (__| |_| | _ <| |___
+ * \___|\___/|_| \_\_____|
+ *
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+ *
+ * This software is licensed as described in the file COPYING, which
+ * you should have received as part of this distribution. The terms
+ * are also available at https://curl.haxx.se/docs/copyright.html.
+ *
+ * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+ * copies of the Software, and permit persons to whom the Software is
+ * furnished to do so, under the terms of the COPYING file.
+ *
+ * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+ * KIND, either express or implied.
+ *
+ ***************************************************************************/
+/* <DESC>
+ * Very simple HTTP/3 GET
+ * </DESC>
+ */
+#include <stdio.h>
+#include <curl/curl.h>
+
+int main(void)
+{
+ CURL *curl;
+ CURLcode res;
+
+ curl = curl_easy_init();
+ if(curl) {
+ curl_easy_setopt(curl, CURLOPT_URL, "https://example.com");
+
+ /* Forcing HTTP/3 will make the connection fail if the server isn't
+ accessible over QUIC + HTTP/3 on the given host and port.
+ Consider using CURLOPT_ALTSVC instead! */
+ curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);
+
+ /* Perform the request, res will get the return code */
+ res = curl_easy_perform(curl);
+ /* Check for errors */
+ if(res != CURLE_OK)
+ fprintf(stderr, "curl_easy_perform() failed: %s\n",
+ curl_easy_strerror(res));
+
+ /* always cleanup */
+ curl_easy_cleanup(curl);
+ }
+ return 0;
+}
diff --git a/docs/examples/imap-authzid.c b/docs/examples/imap-authzid.c
new file mode 100644
index 000000000..bfe7d71d7
--- /dev/null
+++ b/docs/examples/imap-authzid.c
@@ -0,0 +1,71 @@
+/***************************************************************************
+ * _ _ ____ _
+ * Project ___| | | | _ \| |
+ * / __| | | | |_) | |
+ * | (__| |_| | _ <| |___
+ * \___|\___/|_| \_\_____|
+ *
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+ *
+ * This software is licensed as described in the file COPYING, which
+ * you should have received as part of this distribution. The terms
+ * are also available at https://curl.haxx.se/docs/copyright.html.
+ *
+ * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+ * copies of the Software, and permit persons to whom the Software is
+ * furnished to do so, under the terms of the COPYING file.
+ *
+ * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+ * KIND, either express or implied.
+ *
+ ***************************************************************************/
+
+/* <DESC>
+ * IMAP example showing how to retrieve e-mails from a shared mailbox
+ * </DESC>
+ */
+
+#include <stdio.h>
+#include <curl/curl.h>
+
+/* This is a simple example showing how to fetch mail using libcurl's IMAP
+ * capabilities.
+ *
+ * Note that this example requires libcurl 7.66.0 or above.
+ */
+
+int main(void)
+{
+ CURL *curl;
+ CURLcode res = CURLE_OK;
+
+ curl = curl_easy_init();
+ if(curl) {
+ /* Set the username and password */
+ curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
+ curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");
+
+ /* Set the authorisation identity (identity to act as) */
+ curl_easy_setopt(curl, CURLOPT_SASL_AUTHZID, "shared-mailbox");
+
+ /* Force PLAIN authentication */
+ curl_easy_setopt(curl, CURLOPT_LOGIN_OPTIONS, "AUTH=PLAIN");
+
+ /* This will fetch message 1 from the user's inbox */
+ curl_easy_setopt(curl, CURLOPT_URL,
+ "imap://imap.example.com/INBOX/;UID=1");
+
+ /* Perform the fetch */
+ res = curl_easy_perform(curl);
+
+ /* Check for errors */
+ if(res != CURLE_OK)
+ fprintf(stderr, "curl_easy_perform() failed: %s\n",
+ curl_easy_strerror(res));
+
+ /* Always cleanup */
+ curl_easy_cleanup(curl);
+ }
+
+ return (int)res;
+}
diff --git a/docs/examples/pop3-authzid.c b/docs/examples/pop3-authzid.c
new file mode 100644
index 000000000..57363579a
--- /dev/null
+++ b/docs/examples/pop3-authzid.c
@@ -0,0 +1,70 @@
+/***************************************************************************
+ * _ _ ____ _
+ * Project ___| | | | _ \| |
+ * / __| | | | |_) | |
+ * | (__| |_| | _ <| |___
+ * \___|\___/|_| \_\_____|
+ *
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+ *
+ * This software is licensed as described in the file COPYING, which
+ * you should have received as part of this distribution. The terms
+ * are also available at https://curl.haxx.se/docs/copyright.html.
+ *
+ * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+ * copies of the Software, and permit persons to whom the Software is
+ * furnished to do so, under the terms of the COPYING file.
+ *
+ * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+ * KIND, either express or implied.
+ *
+ ***************************************************************************/
+
+/* <DESC>
+ * POP3 example showing how to retrieve e-mails from a shared mailbox
+ * </DESC>
+ */
+
+#include <stdio.h>
+#include <curl/curl.h>
+
+/* This is a simple example showing how to retrieve mail using libcurl's POP3
+ * capabilities.
+ *
+ * Note that this example requires libcurl 7.66.0 or above.
+ */
+
+int main(void)
+{
+ CURL *curl;
+ CURLcode res = CURLE_OK;
+
+ curl = curl_easy_init();
+ if(curl) {
+ /* Set the username and password */
+ curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
+ curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");
+
+ /* Set the authorisation identity (identity to act as) */
+ curl_easy_setopt(curl, CURLOPT_SASL_AUTHZID, "shared-mailbox");
+
+ /* Force PLAIN authentication */
+ curl_easy_setopt(curl, CURLOPT_LOGIN_OPTIONS, "AUTH=PLAIN");
+
+ /* This will retrieve message 1 from the user's mailbox */
+ curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com/1");
+
+ /* Perform the retrieval */
+ res = curl_easy_perform(curl);
+
+ /* Check for errors */
+ if(res != CURLE_OK)
+ fprintf(stderr, "curl_easy_perform() failed: %s\n",
+ curl_easy_strerror(res));
+
+ /* Always cleanup */
+ curl_easy_cleanup(curl);
+ }
+
+ return (int)res;
+}
diff --git a/docs/examples/smtp-authzid.c b/docs/examples/smtp-authzid.c
new file mode 100644
index 000000000..decdb719d
--- /dev/null
+++ b/docs/examples/smtp-authzid.c
@@ -0,0 +1,161 @@
+/***************************************************************************
+ * _ _ ____ _
+ * Project ___| | | | _ \| |
+ * / __| | | | |_) | |
+ * | (__| |_| | _ <| |___
+ * \___|\___/|_| \_\_____|
+ *
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+ *
+ * This software is licensed as described in the file COPYING, which
+ * you should have received as part of this distribution. The terms
+ * are also available at https://curl.haxx.se/docs/copyright.html.
+ *
+ * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+ * copies of the Software, and permit persons to whom the Software is
+ * furnished to do so, under the terms of the COPYING file.
+ *
+ * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+ * KIND, either express or implied.
+ *
+ ***************************************************************************/
+
+/* <DESC>
+ * Send e-mail on behalf of another user with SMTP
+ * </DESC>
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <curl/curl.h>
+
+/*
+ * This is a simple example showing how to send an email using libcurl's SMTP
+ * capabilities.
+ *
+ * Note that this example requires libcurl 7.66.0 or above.
+ */
+
+/* The libcurl options want plain addresses; the viewable headers in the
+ * mail may very well include a full name as well.
+ */
+#define FROM_ADDR "<ursel@example.org>"
+#define SENDER_ADDR "<kurt@example.org>"
+#define TO_ADDR "<addressee@example.net>"
+
+#define FROM_MAIL "Ursel " FROM_ADDR
+#define SENDER_MAIL "Kurt " SENDER_ADDR
+#define TO_MAIL "A Receiver " TO_ADDR
+
+static const char *payload_text[] = {
+ "Date: Mon, 29 Nov 2010 21:54:29 +1100\r\n",
+ "To: " TO_MAIL "\r\n",
+ "From: " FROM_MAIL "\r\n",
+ "Sender: " SENDER_MAIL "\r\n",
+ "Message-ID: <dcd7cb36-11db-487a-9f3a-e652a9458efd@"
+ "rfcpedant.example.org>\r\n",
+ "Subject: SMTP example message\r\n",
+ "\r\n", /* empty line to divide headers from body, see RFC5322 */
+ "The body of the message starts here.\r\n",
+ "\r\n",
+ "It could be a lot of lines, could be MIME encoded, whatever.\r\n",
+ "Check RFC5322.\r\n",
+ NULL
+};
+
+struct upload_status {
+ int lines_read;
+};
+
+static size_t payload_source(void *ptr, size_t size, size_t nmemb, void *userp)
+{
+ struct upload_status *upload_ctx = (struct upload_status *)userp;
+ const char *data;
+
+ if((size == 0) || (nmemb == 0) || ((size*nmemb) < 1)) {
+ return 0;
+ }
+
+ data = payload_text[upload_ctx->lines_read];
+
+ if(data) {
+ size_t len = strlen(data);
+ memcpy(ptr, data, len);
+ upload_ctx->lines_read++;
+
+ return len;
+ }
+
+ return 0;
+}
+
+int main(void)
+{
+ CURL *curl;
+ CURLcode res = CURLE_OK;
+ struct curl_slist *recipients = NULL;
+ struct upload_status upload_ctx;
+
+ upload_ctx.lines_read = 0;
+
+ curl = curl_easy_init();
+ if(curl) {
+ /* This is the URL for your mailserver. In this example we connect to the
+ smtp-submission port as we require an authenticated connection. */
+ curl_easy_setopt(curl, CURLOPT_URL, "smtp://mail.example.com:587");
+
+ /* Set the username and password */
+ curl_easy_setopt(curl, CURLOPT_USERNAME, "kurt");
+ curl_easy_setopt(curl, CURLOPT_PASSWORD, "xipj3plmq");
+
+ /* Set the authorisation identity (identity to act as) */
+ curl_easy_setopt(curl, CURLOPT_SASL_AUTHZID, "ursel");
+
+ /* Force PLAIN authentication */
+ curl_easy_setopt(curl, CURLOPT_LOGIN_OPTIONS, "AUTH=PLAIN");
+
+ /* Note that this option isn't strictly required, omitting it will result
+ * in libcurl sending the MAIL FROM command with empty sender data. All
+ * autoresponses should have an empty reverse-path, and should be directed
+ * to the address in the reverse-path which triggered them. Otherwise,
+ * they could cause an endless loop. See RFC 5321 Section 4.5.5 for more
+ * details.
+ */
+ curl_easy_setopt(curl, CURLOPT_MAIL_FROM, FROM_ADDR);
+
+ /* Add a recipient, in this particular case it corresponds to the
+ * To: addressee in the header. */
+ recipients = curl_slist_append(recipients, TO_ADDR);
+ curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients);
+
+ /* We're using a callback function to specify the payload (the headers and
+ * body of the message). You could just use the CURLOPT_READDATA option to
+ * specify a FILE pointer to read from. */
+ curl_easy_setopt(curl, CURLOPT_READFUNCTION, payload_source);
+ curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx);
+ curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
+
+ /* Send the message */
+ res = curl_easy_perform(curl);
+
+ /* Check for errors */
+ if(res != CURLE_OK)
+ fprintf(stderr, "curl_easy_perform() failed: %s\n",
+ curl_easy_strerror(res));
+
+ /* Free the list of recipients */
+ curl_slist_free_all(recipients);
+
+ /* curl won't send the QUIT command until you call cleanup, so you should
+ * be able to re-use this connection for additional messages (setting
+ * CURLOPT_MAIL_FROM and CURLOPT_MAIL_RCPT as required, and calling
+ * curl_easy_perform() again). It may not be a good idea to keep the
+ * connection open for a very long time though (more than a few minutes
+ * may result in the server timing out the connection), and you do want to
+ * clean up in the end.
+ */
+ curl_easy_cleanup(curl);
+ }
+
+ return (int)res;
+}
diff --git a/docs/libcurl/Makefile.inc b/docs/libcurl/Makefile.inc
index e472ea37b..380c153b8 100644
--- a/docs/libcurl/Makefile.inc
+++ b/docs/libcurl/Makefile.inc
@@ -46,6 +46,7 @@ man_MANS = \
gnurl_multi_info_read.3 \
gnurl_multi_init.3 \
gnurl_multi_perform.3 \
+ gnurl_multi_poll.3 \
gnurl_multi_remove_handle.3 \
gnurl_multi_setopt.3 \
gnurl_multi_socket.3 \
diff --git a/docs/libcurl/curl_multi_poll.3 b/docs/libcurl/curl_multi_poll.3
new file mode 100644
index 000000000..9fc72c55d
--- /dev/null
+++ b/docs/libcurl/curl_multi_poll.3
@@ -0,0 +1,110 @@
+.\" **************************************************************************
+.\" * _ _ ____ _
+.\" * Project ___| | | | _ \| |
+.\" * / __| | | | |_) | |
+.\" * | (__| |_| | _ <| |___
+.\" * \___|\___/|_| \_\_____|
+.\" *
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" *
+.\" * This software is licensed as described in the file COPYING, which
+.\" * you should have received as part of this distribution. The terms
+.\" * are also available at https://curl.haxx.se/docs/copyright.html.
+.\" *
+.\" * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+.\" * copies of the Software, and permit persons to whom the Software is
+.\" * furnished to do so, under the terms of the COPYING file.
+.\" *
+.\" * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+.\" * KIND, either express or implied.
+.\" *
+.\" **************************************************************************
+.TH curl_multi_poll 3 "29 Jul 2019" "libcurl 7.66.0" "libcurl Manual"
+.SH NAME
+curl_multi_poll - polls on all easy handles in a multi handle
+.SH SYNOPSIS
+.nf
+#include <curl/curl.h>
+
+CURLMcode curl_multi_poll(CURLM *multi_handle,
+ struct curl_waitfd extra_fds[],
+ unsigned int extra_nfds,
+ int timeout_ms,
+ int *numfds);
+.fi
+.SH DESCRIPTION
+\fIcurl_multi_poll(3)\fP polls all file descriptors used by the curl easy
+handles contained in the given multi handle set. It will block until activity
+is detected on at least one of the handles or \fItimeout_ms\fP has passed.
+Alternatively, if the multi handle has a pending internal timeout that has a
+shorter expiry time than \fItimeout_ms\fP, that shorter time will be used
+instead to make sure timeout accuracy is reasonably kept.
+
+The calling application may pass additional curl_waitfd structures which are
+similar to \fIpoll(2)\fP's pollfd structure to be waited on in the same call.
+
+On completion, if \fInumfds\fP is non-NULL, it will be populated with the
+total number of file descriptors on which interesting events occurred. This
+number can include both libcurl internal descriptors as well as descriptors
+provided in \fIextra_fds\fP.
+
+If no extra file descriptors are provided and libcurl has no file descriptor
+to offer to wait for, this function will instead wait for \fItimeout_ms\fP
+milliseconds (or less, if an internal timer expires sooner). This is the
+detail that makes this function different from \fIcurl_multi_wait(3)\fP.
+
+Using this function instead of select(3) with the multi interface is
+encouraged, as it lets applications more easily avoid the common limit of
+1024 file descriptors.
+.SH curl_waitfd
+.nf
+struct curl_waitfd {
+ curl_socket_t fd;
+ short events;
+ short revents;
+};
+.fi
+.IP CURL_WAIT_POLLIN
+Bit flag to curl_waitfd.events indicating the socket should poll on read
+events such as new data received.
+.IP CURL_WAIT_POLLPRI
+Bit flag to curl_waitfd.events indicating the socket should poll on high
+priority read events such as out of band data.
+.IP CURL_WAIT_POLLOUT
+Bit flag to curl_waitfd.events indicating the socket should poll on write
+events such as the socket being clear to write without blocking.
+.SH EXAMPLE
+.nf
+CURL *easy_handle;
+CURLM *multi_handle;
+int still_running = 1;
+
+/* add the individual easy handle */
+curl_multi_add_handle(multi_handle, easy_handle);
+
+do {
+ CURLMcode mc;
+ int numfds;
+
+ mc = curl_multi_perform(multi_handle, &still_running);
+
+ if(mc == CURLM_OK) {
+ /* wait for activity or timeout */
+ mc = curl_multi_poll(multi_handle, NULL, 0, 1000, &numfds);
+ }
+
+ if(mc != CURLM_OK) {
+ fprintf(stderr, "curl_multi failed, code %d.\\n", mc);
+ break;
+ }
+
+} while(still_running);
+
+curl_multi_remove_handle(multi_handle, easy_handle);
+.fi
+.SH RETURN VALUE
+CURLMcode type, general libcurl multi interface error code. See
+\fIlibcurl-errors(3)\fP.
+.SH AVAILABILITY
+This function was added in libcurl 7.66.0.
+.SH "SEE ALSO"
+.BR curl_multi_fdset "(3), " curl_multi_perform "(3), " curl_multi_wait "(3)"
diff --git a/docs/libcurl/gnurl_easy_getinfo.3 b/docs/libcurl/gnurl_easy_getinfo.3
index 3133c4f90..3c3d4779e 100644
--- a/docs/libcurl/gnurl_easy_getinfo.3
+++ b/docs/libcurl/gnurl_easy_getinfo.3
@@ -157,6 +157,9 @@ Upload size. See \fICURLINFO_CONTENT_LENGTH_UPLOAD_T(3)\fP
.IP CURLINFO_CONTENT_TYPE
Content type from the Content-Type header.
See \fICURLINFO_CONTENT_TYPE(3)\fP
+.IP CURLINFO_RETRY_AFTER
+The value from the Retry-After header.
+See \fICURLINFO_RETRY_AFTER(3)\fP
.IP CURLINFO_PRIVATE
User's private data pointer.
See \fICURLINFO_PRIVATE(3)\fP
diff --git a/docs/libcurl/gnurl_easy_setopt.3 b/docs/libcurl/gnurl_easy_setopt.3
index e8250b0e5..8e622fc17 100644
--- a/docs/libcurl/gnurl_easy_setopt.3
+++ b/docs/libcurl/gnurl_easy_setopt.3
@@ -256,6 +256,8 @@ TLS authentication methods. See \fICURLOPT_TLSAUTH_TYPE(3)\fP
Proxy TLS authentication methods. See \fICURLOPT_PROXY_TLSAUTH_TYPE(3)\fP
.IP CURLOPT_PROXYAUTH
HTTP proxy authentication methods. See \fICURLOPT_PROXYAUTH(3)\fP
+.IP CURLOPT_SASL_AUTHZID
+SASL authorisation identity (identity to act as). See \fICURLOPT_SASL_AUTHZID(3)\fP
.IP CURLOPT_SASL_IR
Enable SASL initial response. See \fICURLOPT_SASL_IR(3)\fP
.IP CURLOPT_XOAUTH2_BEARER
diff --git a/docs/libcurl/gnurl_global_init_mem.3 b/docs/libcurl/gnurl_global_init_mem.3
index d49d4b13d..ddd64db7f 100644
--- a/docs/libcurl/gnurl_global_init_mem.3
+++ b/docs/libcurl/gnurl_global_init_mem.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2011, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -59,6 +59,8 @@ to that man page for documentation.
.SH "CAUTION"
Manipulating these gives considerable powers to the application to severely
screw things up for libcurl. Take care!
+.SH AVAILABILITY
+Added in 7.12.0
.SH "SEE ALSO"
.BR curl_global_init "(3), "
.BR curl_global_cleanup "(3), "
diff --git a/docs/libcurl/gnurl_version_info.3 b/docs/libcurl/gnurl_version_info.3
index 09e0fe535..1ba8d8401 100644
--- a/docs/libcurl/gnurl_version_info.3
+++ b/docs/libcurl/gnurl_version_info.3
@@ -78,6 +78,15 @@ typedef struct {
(MAJOR << 24) | (MINOR << 12) | PATCH */
const char *brotli_version; /* human readable string. */
+ /* when 'age' is CURLVERSION_SIXTH or later (7.66.0 or later), these fields
+ also exist */
+ unsigned int nghttp2_ver_num; /* Numeric nghttp2 version
+ (MAJOR << 16) | (MINOR << 8) | PATCH */
+ const char *nghttp2_version; /* human readable string. */
+
+ const char *quic_version; /* human readable QUIC (+ HTTP/3) library
+ name and version, or NULL */
+
} curl_version_info_data;
.fi
@@ -99,6 +108,40 @@ environment.
\fIfeatures\fP can have none, one or more bits set, and the currently defined
bits are:
.RS
+.IP CURL_VERSION_ALTSVC
+HTTP Alt-Svc parsing and the associated options (Added in 7.64.1)
+.IP CURL_VERSION_ASYNCHDNS
+libcurl was built with support for asynchronous name lookups, which allows
+more exact timeouts (even on Windows) and less blocking when using the multi
+interface. (added in 7.10.7)
+.IP CURL_VERSION_BROTLI
+supports HTTP Brotli content encoding using libbrotlidec (Added in 7.57.0)
+.IP CURL_VERSION_CONV
+libcurl was built with support for character conversions, as provided by the
+CURLOPT_CONV_* callbacks. (Added in 7.15.4)
+.IP CURL_VERSION_CURLDEBUG
+libcurl was built with memory tracking debug capabilities. This is mainly of
+interest for libcurl hackers. (added in 7.19.6)
+.IP CURL_VERSION_DEBUG
+libcurl was built with debug capabilities (added in 7.10.6)
+.IP CURL_VERSION_GSSAPI
+libcurl was built with support for GSS-API. This makes libcurl use provided
+functions for Kerberos and SPNEGO authentication. It also allows libcurl
+to use the current user credentials without the app having to pass them on.
+(Added in 7.38.0)
+.IP CURL_VERSION_GSSNEGOTIATE
+supports HTTP GSS-Negotiate (added in 7.10.6)
+.IP CURL_VERSION_HTTPS_PROXY
+libcurl was built with support for HTTPS-proxy.
+(Added in 7.52.0)
+.IP CURL_VERSION_HTTP2
+libcurl was built with support for HTTP/2.
+(Added in 7.33.0)
+.IP CURL_VERSION_HTTP3
+HTTP/3 and QUIC support are built-in (Added in 7.66.0)
+.IP CURL_VERSION_IDN
+libcurl was built with support for IDNA, domain names with international
+letters. (Added in 7.12.0)
.IP CURL_VERSION_IPV6
supports IPv6
.IP CURL_VERSION_KERBEROS4
@@ -106,68 +149,38 @@ supports Kerberos V4 (when using FTP)
.IP CURL_VERSION_KERBEROS5
supports Kerberos V5 authentication for FTP, IMAP, POP3, SMTP and SOCKSv5 proxy
(Added in 7.40.0)
-.IP CURL_VERSION_SSL
-supports SSL (HTTPS/FTPS) (Added in 7.10)
+.IP CURL_VERSION_LARGEFILE
+libcurl was built with support for large files. (Added in 7.11.1)
.IP CURL_VERSION_LIBZ
supports HTTP deflate using libz (Added in 7.10)
+.IP CURL_VERSION_MULTI_SSL
+libcurl was built with multiple SSL backends. For details, see
+\fIcurl_global_sslset(3)\fP.
+(Added in 7.56.0)
.IP CURL_VERSION_NTLM
supports HTTP NTLM (added in 7.10.6)
-.IP CURL_VERSION_GSSNEGOTIATE
-supports HTTP GSS-Negotiate (added in 7.10.6)
-.IP CURL_VERSION_DEBUG
-libcurl was built with debug capabilities (added in 7.10.6)
-.IP CURL_VERSION_CURLDEBUG
-libcurl was built with memory tracking debug capabilities. This is mainly of
-interest for libcurl hackers. (added in 7.19.6)
-.IP CURL_VERSION_ASYNCHDNS
-libcurl was built with support for asynchronous name lookups, which allows
-more exact timeouts (even on Windows) and less blocking when using the multi
-interface. (added in 7.10.7)
+.IP CURL_VERSION_NTLM_WB
+libcurl was built with support for NTLM delegation to a winbind helper.
+(Added in 7.22.0)
+.IP CURL_VERSION_PSL
+libcurl was built with support for Mozilla's Public Suffix List. This makes
+libcurl ignore cookies with a domain that's on the list.
+(Added in 7.47.0)
.IP CURL_VERSION_SPNEGO
libcurl was built with support for SPNEGO authentication (Simple and Protected
GSS-API Negotiation Mechanism, defined in RFC 2478.) (added in 7.10.8)
-.IP CURL_VERSION_LARGEFILE
-libcurl was built with support for large files. (Added in 7.11.1)
-.IP CURL_VERSION_IDN
-libcurl was built with support for IDNA, domain names with international
-letters. (Added in 7.12.0)
+.IP CURL_VERSION_SSL
+supports SSL (HTTPS/FTPS) (Added in 7.10)
.IP CURL_VERSION_SSPI
libcurl was built with support for SSPI. This is only available on Windows and
makes libcurl use Windows-provided functions for Kerberos, NTLM, SPNEGO and
Digest authentication. It also allows libcurl to use the current user
credentials without the app having to pass them on. (Added in 7.13.2)
-.IP CURL_VERSION_GSSAPI
-libcurl was built with support for GSS-API. This makes libcurl use provided
-functions for Kerberos and SPNEGO authentication. It also allows libcurl
-to use the current user credentials without the app having to pass them on.
-(Added in 7.38.0)
-.IP CURL_VERSION_CONV
-libcurl was built with support for character conversions, as provided by the
-CURLOPT_CONV_* callbacks. (Added in 7.15.4)
.IP CURL_VERSION_TLSAUTH_SRP
libcurl was built with support for TLS-SRP. (Added in 7.21.4)
-.IP CURL_VERSION_NTLM_WB
-libcurl was built with support for NTLM delegation to a winbind helper.
-(Added in 7.22.0)
-.IP CURL_VERSION_HTTP2
-libcurl was built with support for HTTP2.
-(Added in 7.33.0)
.IP CURL_VERSION_UNIX_SOCKETS
libcurl was built with support for Unix domain sockets.
(Added in 7.40.0)
-.IP CURL_VERSION_PSL
-libcurl was built with support for Mozilla's Public Suffix List. This makes
-libcurl ignore cookies with a domain that's on the list.
-(Added in 7.47.0)
-.IP CURL_VERSION_HTTPS_PROXY
-libcurl was built with support for HTTPS-proxy.
-(Added in 7.52.0)
-.IP CURL_VERSION_MULTI_SSL
-libcurl was built with multiple SSL backends. For details, see
-\fIcurl_global_sslset(3)\fP.
-(Added in 7.56.0)
-.IP CURL_VERSION_BROTLI
-supports HTTP Brotli content encoding using libbrotlidec (Added in 7.57.0)
.RE
\fIssl_version\fP is an ASCII string for the TLS library name + version
used. If libcurl has no SSL support, this is NULL. For example "Schannel",
diff --git a/docs/libcurl/libgnurl-errors.3 b/docs/libcurl/libgnurl-errors.3
index 26def4fec..2697efd5b 100644
--- a/docs/libcurl/libgnurl-errors.3
+++ b/docs/libcurl/libgnurl-errors.3
@@ -254,6 +254,8 @@ Status returned failure when asked with \fICURLOPT_SSL_VERIFYSTATUS(3)\fP.
Stream error in the HTTP/2 framing layer.
.IP "CURLE_RECURSIVE_API_CALL (93)"
An API function was called from inside a callback.
+.IP "CURLE_AUTH_ERROR (94)"
+An authentication function returned an error.
.IP "CURLE_OBSOLETE*"
These error codes will never be returned. They were used in an old libcurl
version and are currently unused.
diff --git a/docs/libcurl/opts/CURLINFO_RETRY_AFTER.3 b/docs/libcurl/opts/CURLINFO_RETRY_AFTER.3
new file mode 100644
index 000000000..9e58ca62d
--- /dev/null
+++ b/docs/libcurl/opts/CURLINFO_RETRY_AFTER.3
@@ -0,0 +1,63 @@
+.\" **************************************************************************
+.\" * _ _ ____ _
+.\" * Project ___| | | | _ \| |
+.\" * / __| | | | |_) | |
+.\" * | (__| |_| | _ <| |___
+.\" * \___|\___/|_| \_\_____|
+.\" *
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" *
+.\" * This software is licensed as described in the file COPYING, which
+.\" * you should have received as part of this distribution. The terms
+.\" * are also available at https://curl.haxx.se/docs/copyright.html.
+.\" *
+.\" * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+.\" * copies of the Software, and permit persons to whom the Software is
+.\" * furnished to do so, under the terms of the COPYING file.
+.\" *
+.\" * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+.\" * KIND, either express or implied.
+.\" *
+.\" **************************************************************************
+.\"
+.TH CURLINFO_RETRY_AFTER 3 "6 Aug 2019" "libcurl 7.66.0" "curl_easy_getinfo options"
+.SH NAME
+CURLINFO_RETRY_AFTER \- returns the Retry-After retry delay
+.SH SYNOPSIS
+#include <curl/curl.h>
+
+CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_RETRY_AFTER, curl_off_t *retry);
+.SH DESCRIPTION
+Pass a pointer to a curl_off_t variable to receive the number of seconds the
+HTTP server suggests the client should wait until the next request is
+issued. This value comes from the "Retry-After:" response header.
+
+While the HTTP header might contain a fixed date string,
+\fICURLINFO_RETRY_AFTER(3)\fP always returns the number of seconds to wait -
+or zero if there was no header or the header couldn't be parsed.
+.SH DEFAULT
+Returns zero delay if there was no header.
+.SH PROTOCOLS
+HTTP(S)
+.SH EXAMPLE
+.nf
+CURL *curl = curl_easy_init();
+if(curl) {
+ CURLcode res;
+ curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
+ res = curl_easy_perform(curl);
+ if(res == CURLE_OK) {
+ curl_off_t wait = 0;
+ curl_easy_getinfo(curl, CURLINFO_RETRY_AFTER, &wait);
+ if(wait)
+ printf("Wait for %" CURL_FORMAT_CURL_OFF_T " seconds\\n", wait);
+ }
+ curl_easy_cleanup(curl);
+}
+.fi
+.SH AVAILABILITY
+Added in curl 7.66.0
+.SH RETURN VALUE
+Returns CURLE_OK if the option is supported, and CURLE_UNKNOWN_OPTION if not.
+.SH "SEE ALSO"
+.BR CURLOPT_STDERR "(3), " CURLOPT_HEADERFUNCTION "(3), "
diff --git a/docs/libcurl/opts/CURLOPT_SASL_AUTHZID.3 b/docs/libcurl/opts/CURLOPT_SASL_AUTHZID.3
new file mode 100644
index 000000000..65445475d
--- /dev/null
+++ b/docs/libcurl/opts/CURLOPT_SASL_AUTHZID.3
@@ -0,0 +1,64 @@
+.\" **************************************************************************
+.\" * _ _ ____ _
+.\" * Project ___| | | | _ \| |
+.\" * / __| | | | |_) | |
+.\" * | (__| |_| | _ <| |___
+.\" * \___|\___/|_| \_\_____|
+.\" *
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" *
+.\" * This software is licensed as described in the file COPYING, which
+.\" * you should have received as part of this distribution. The terms
+.\" * are also available at https://curl.haxx.se/docs/copyright.html.
+.\" *
+.\" * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+.\" * copies of the Software, and permit persons to whom the Software is
+.\" * furnished to do so, under the terms of the COPYING file.
+.\" *
+.\" * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+.\" * KIND, either express or implied.
+.\" *
+.\" **************************************************************************
+.\"
+.TH CURLOPT_SASL_AUTHZID 3 "11 Sep 2019" "libcurl 7.66.0" "curl_easy_setopt options"
+.SH NAME
+CURLOPT_SASL_AUTHZID \- authorisation identity (identity to act as)
+.SH SYNOPSIS
+#include <curl/curl.h>
+
+CURLcode curl_easy_setopt(CURL *handle, CURLOPT_SASL_AUTHZID, char *authzid);
+.SH DESCRIPTION
+Pass a char * as parameter, which should be pointing to the zero terminated
+authorisation identity (authzid) for the transfer. Only applicable to the PLAIN
+SASL authentication mechanism where it is optional.
+
+When not specified, only the authentication identity (authcid) as specified by
+the username will be sent to the server, along with the password. The server
+will derive an authzid from the authcid when it is not provided, which it will
+then use internally.
+
+When the authzid is specified, its use is server dependent. It can be used to
+access another user's inbox that the user has been granted access to, or a
+shared mailbox, for example.
+.SH DEFAULT
+blank
+.SH PROTOCOLS
+IMAP, POP3 and SMTP
+.SH EXAMPLE
+.nf
+CURL *curl = curl_easy_init();
+if(curl) {
+  CURLcode ret;
+ curl_easy_setopt(curl, CURLOPT_URL, "imap://example.com/");
+ curl_easy_setopt(curl, CURLOPT_USERNAME, "Kurt");
+ curl_easy_setopt(curl, CURLOPT_PASSWORD, "xipj3plmq");
+ curl_easy_setopt(curl, CURLOPT_SASL_AUTHZID, "Ursel");
+ ret = curl_easy_perform(curl);
+ curl_easy_cleanup(curl);
+}
+.fi
+.SH AVAILABILITY
+Added in 7.66.0
+.SH RETURN VALUE
+Returns CURLE_OK if the option is supported, and CURLE_UNKNOWN_OPTION if not.
+.SH "SEE ALSO"
+.BR CURLOPT_USERNAME "(3), " CURLOPT_PASSWORD "(3), " CURLOPT_USERPWD "(3)"
diff --git a/docs/libcurl/opts/GNURLINFO_APPCONNECT_TIME.3 b/docs/libcurl/opts/GNURLINFO_APPCONNECT_TIME.3
index ce04fa6f4..03ade4a3c 100644
--- a/docs/libcurl/opts/GNURLINFO_APPCONNECT_TIME.3
+++ b/docs/libcurl/opts/GNURLINFO_APPCONNECT_TIME.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -34,6 +34,8 @@ This time is most often very near to the \fICURLINFO_PRETRANSFER_TIME(3)\fP
time, except for cases such as HTTP pipelining where the pretransfer time can
be delayed due to waits in line for the pipeline and more.
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLINFO_APPCONNECT_TIME_T.3 b/docs/libcurl/opts/GNURLINFO_APPCONNECT_TIME_T.3
index e218fcc37..1ae9e6c9e 100644
--- a/docs/libcurl/opts/GNURLINFO_APPCONNECT_TIME_T.3
+++ b/docs/libcurl/opts/GNURLINFO_APPCONNECT_TIME_T.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 2018 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -35,6 +35,8 @@ This time is most often very near to the \fICURLINFO_PRETRANSFER_TIME_T(3)\fP
time, except for cases such as HTTP pipelining where the pretransfer time can
be delayed due to waits in line for the pipeline and more.
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLINFO_CONNECT_TIME.3 b/docs/libcurl/opts/GNURLINFO_CONNECT_TIME.3
index 3767cd84d..651850cde 100644
--- a/docs/libcurl/opts/GNURLINFO_CONNECT_TIME.3
+++ b/docs/libcurl/opts/GNURLINFO_CONNECT_TIME.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -31,6 +31,8 @@ CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_CONNECT_TIME, double *timep);
Pass a pointer to a double to receive the total time in seconds from the start
until the connection to the remote host (or proxy) was completed.
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLINFO_CONNECT_TIME_T.3 b/docs/libcurl/opts/GNURLINFO_CONNECT_TIME_T.3
index eaa6f551e..3479cc851 100644
--- a/docs/libcurl/opts/GNURLINFO_CONNECT_TIME_T.3
+++ b/docs/libcurl/opts/GNURLINFO_CONNECT_TIME_T.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 2018 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -30,6 +30,9 @@ CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_CONNECT_TIME_T, curl_off_t *ti
.SH DESCRIPTION
Pass a pointer to a curl_off_t to receive the total time in microseconds
from the start until the connection to the remote host (or proxy) was completed.
+
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLINFO_HTTP_VERSION.3 b/docs/libcurl/opts/GNURLINFO_HTTP_VERSION.3
index 3af6665f0..caa43c725 100644
--- a/docs/libcurl/opts/GNURLINFO_HTTP_VERSION.3
+++ b/docs/libcurl/opts/GNURLINFO_HTTP_VERSION.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2016, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -28,9 +28,10 @@ CURLINFO_HTTP_VERSION \- get the http version used in the connection
CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_HTTP_VERSION, long *p);
.SH DESCRIPTION
-Pass a pointer to a long to receive the version used in the last http connection.
-The returned value will be CURL_HTTP_VERSION_1_0, CURL_HTTP_VERSION_1_1, or
-CURL_HTTP_VERSION_2_0, or 0 if the version can't be determined.
+Pass a pointer to a long to receive the version used in the last http
+connection. The returned value will be CURL_HTTP_VERSION_1_0,
+CURL_HTTP_VERSION_1_1, CURL_HTTP_VERSION_2_0, CURL_HTTP_VERSION_3 or 0 if the
+version can't be determined.
.SH PROTOCOLS
HTTP
.SH EXAMPLE
diff --git a/docs/libcurl/opts/GNURLINFO_NAMELOOKUP_TIME.3 b/docs/libcurl/opts/GNURLINFO_NAMELOOKUP_TIME.3
index ecb93050c..52d374d1f 100644
--- a/docs/libcurl/opts/GNURLINFO_NAMELOOKUP_TIME.3
+++ b/docs/libcurl/opts/GNURLINFO_NAMELOOKUP_TIME.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -31,6 +31,8 @@ CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_NAMELOOKUP_TIME, double *timep
Pass a pointer to a double to receive the total time in seconds from the start
until the name resolving was completed.
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLINFO_NAMELOOKUP_TIME_T.3 b/docs/libcurl/opts/GNURLINFO_NAMELOOKUP_TIME_T.3
index 012cd7343..542df9736 100644
--- a/docs/libcurl/opts/GNURLINFO_NAMELOOKUP_TIME_T.3
+++ b/docs/libcurl/opts/GNURLINFO_NAMELOOKUP_TIME_T.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 2018 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -31,6 +31,8 @@ CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_NAMELOOKUP_TIME_T, curl_off_t
Pass a pointer to a curl_off_t to receive the total time in microseconds
from the start until the name resolving was completed.
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLINFO_PRETRANSFER_TIME.3 b/docs/libcurl/opts/GNURLINFO_PRETRANSFER_TIME.3
index 8026f82e2..515293439 100644
--- a/docs/libcurl/opts/GNURLINFO_PRETRANSFER_TIME.3
+++ b/docs/libcurl/opts/GNURLINFO_PRETRANSFER_TIME.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -34,6 +34,8 @@ pre-transfer commands and negotiations that are specific to the particular
protocol(s) involved. It does \fInot\fP involve the sending of the protocol-
specific request that triggers a transfer.
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLINFO_PRETRANSFER_TIME_T.3 b/docs/libcurl/opts/GNURLINFO_PRETRANSFER_TIME_T.3
index e67fab94b..1cccdef70 100644
--- a/docs/libcurl/opts/GNURLINFO_PRETRANSFER_TIME_T.3
+++ b/docs/libcurl/opts/GNURLINFO_PRETRANSFER_TIME_T.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 2018 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -35,6 +35,8 @@ pre-transfer commands and negotiations that are specific to the particular
protocol(s) involved. It does \fInot\fP involve the sending of the protocol-
specific request that triggers a transfer.
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLINFO_STARTTRANSFER_TIME.3 b/docs/libcurl/opts/GNURLINFO_STARTTRANSFER_TIME.3
index af7fc5dd9..6ac4707c8 100644
--- a/docs/libcurl/opts/GNURLINFO_STARTTRANSFER_TIME.3
+++ b/docs/libcurl/opts/GNURLINFO_STARTTRANSFER_TIME.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -33,6 +33,8 @@ start until the first byte is received by libcurl. This includes
\fICURLINFO_PRETRANSFER_TIME(3)\fP and also the time the server needs to
calculate the result.
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLINFO_STARTTRANSFER_TIME_T.3 b/docs/libcurl/opts/GNURLINFO_STARTTRANSFER_TIME_T.3
index 5e19ab590..db71fd8e2 100644
--- a/docs/libcurl/opts/GNURLINFO_STARTTRANSFER_TIME_T.3
+++ b/docs/libcurl/opts/GNURLINFO_STARTTRANSFER_TIME_T.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 2018 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -34,6 +34,8 @@ start until the first byte is received by libcurl. This includes
\fICURLINFO_PRETRANSFER_TIME_T(3)\fP and also the time the server needs to
calculate the result.
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLINFO_TOTAL_TIME.3 b/docs/libcurl/opts/GNURLINFO_TOTAL_TIME.3
index da1ae6465..bab982cdc 100644
--- a/docs/libcurl/opts/GNURLINFO_TOTAL_TIME.3
+++ b/docs/libcurl/opts/GNURLINFO_TOTAL_TIME.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -32,6 +32,8 @@ Pass a pointer to a double to receive the total time in seconds for the
previous transfer, including name resolving, TCP connect etc. The double
represents the time in seconds, including fractions.
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLINFO_TOTAL_TIME_T.3 b/docs/libcurl/opts/GNURLINFO_TOTAL_TIME_T.3
index 3796e8fc7..70cd7e567 100644
--- a/docs/libcurl/opts/GNURLINFO_TOTAL_TIME_T.3
+++ b/docs/libcurl/opts/GNURLINFO_TOTAL_TIME_T.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 2018 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -32,6 +32,8 @@ Pass a pointer to a curl_off_t to receive the total time in microseconds
for the previous transfer, including name resolving, TCP connect etc.
The curl_off_t represents the time in microseconds.
+When a redirect is followed, the time from each request is added together.
+
See also the TIMES overview in the \fIcurl_easy_getinfo(3)\fP man page.
.SH PROTOCOLS
All
diff --git a/docs/libcurl/opts/GNURLOPT_ALTSVC.3 b/docs/libcurl/opts/GNURLOPT_ALTSVC.3
index 156f4e979..d1d44629e 100644
--- a/docs/libcurl/opts/GNURLOPT_ALTSVC.3
+++ b/docs/libcurl/opts/GNURLOPT_ALTSVC.3
@@ -39,6 +39,8 @@ Pass in a pointer to a \fIfilename\fP to instruct libcurl to use that file as
the Alt-Svc cache to read existing cache contents from and possibly also write
it back to after a transfer, unless \fBCURLALTSVC_READONLYFILE\fP is set in
\fICURLOPT_ALTSVC_CTRL(3)\fP.
+
+Specify a blank file name ("") to make libcurl not load from a file at all.
.SH DEFAULT
NULL. The alt-svc cache is not read nor written to file.
.SH PROTOCOLS
diff --git a/docs/libcurl/opts/GNURLOPT_ALTSVC_CTRL.3 b/docs/libcurl/opts/GNURLOPT_ALTSVC_CTRL.3
index aed1253dd..fa8e88967 100644
--- a/docs/libcurl/opts/GNURLOPT_ALTSVC_CTRL.3
+++ b/docs/libcurl/opts/GNURLOPT_ALTSVC_CTRL.3
@@ -28,7 +28,6 @@ CURLOPT_ALTSVC_CTRL \- control alt-svc behavior
#include <gnurl/curl.h>
#define CURLALTSVC_IMMEDIATELY (1<<0)
-#define CURLALTSVC_ALTUSED (1<<1)
#define CURLALTSVC_READONLYFILE (1<<2)
#define CURLALTSVC_H1 (1<<3)
#define CURLALTSVC_H2 (1<<4)
@@ -53,10 +52,8 @@ sure both the source and the destination are legitimate.
Setting any bit will enable the alt-svc engine.
.IP "CURLALTSVC_IMMEDIATELY"
If an Alt-Svc: header is received, this instructs libcurl to switch to one of
-those alternatives asap rather than to save it and use for the next request.
-.IP "CURLALTSVC_ALTUSED"
-Issue the Alt-Used: header in all requests that have been redirected by
-alt-svc.
+those alternatives asap rather than to save it and use it for the next
+request. (Not currently supported.)
.IP "CURLALTSVC_READONLYFILE"
Do not write the alt-svc cache back to the file specified with
\fICURLOPT_ALTSVC(3)\fP even if it gets updated. By default a file specified
diff --git a/docs/libcurl/opts/GNURLOPT_HEADERFUNCTION.3 b/docs/libcurl/opts/GNURLOPT_HEADERFUNCTION.3
index 48bdbdaaf..5a569fef9 100644
--- a/docs/libcurl/opts/GNURLOPT_HEADERFUNCTION.3
+++ b/docs/libcurl/opts/GNURLOPT_HEADERFUNCTION.3
@@ -52,7 +52,7 @@ an error to the library. This will cause the transfer to get aborted and the
libcurl function in progress will return \fICURLE_WRITE_ERROR\fP.
A complete HTTP header that is passed to this function can be up to
-\fICURL_MAX_HTTP_HEADER\fP (100K) bytes.
+\fICURL_MAX_HTTP_HEADER\fP (100K) bytes and includes the final line terminator.
If this option is not set, or if it is set to NULL, but
\fICURLOPT_HEADERDATA(3)\fP is set to anything but NULL, the function used to
@@ -67,6 +67,9 @@ negotiation. If you need to operate on only the headers from the final
response, you will need to collect headers in the callback yourself and use
HTTP status lines, for example, to delimit response boundaries.
+For an HTTP transfer, the status line and the blank line preceding the response
+body are both included as headers and passed to this function.
+
When a server sends a chunked encoded transfer, it may contain a trailer. That
trailer is identical to an HTTP header and if such a trailer is received it is
passed to the application using this callback as well. There are several ways
diff --git a/docs/libcurl/opts/GNURLOPT_HTTP09_ALLOWED.3 b/docs/libcurl/opts/GNURLOPT_HTTP09_ALLOWED.3
index e1a658072..8856c1a19 100644
--- a/docs/libcurl/opts/GNURLOPT_HTTP09_ALLOWED.3
+++ b/docs/libcurl/opts/GNURLOPT_HTTP09_ALLOWED.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -31,12 +31,12 @@ CURLcode curl_easy_setopt(CURL *handle, CURLOPT_HTTP09_ALLOWED, long allowed);
Pass the long argument \fIallowed\fP set to 1L to allow HTTP/0.9 responses.
A HTTP/0.9 response is a server response entirely without headers and only a
-body, while you can connect to lots of random TCP services and still get a
-response that curl might consider to be HTTP/0.9.
+body. You can connect to lots of random TCP services and still get a response
+that curl might consider to be HTTP/0.9!
.SH DEFAULT
-curl allows HTTP/0.9 responses by default.
+curl allowed HTTP/0.9 responses by default before 7.66.0.
-A future curl version will require this option to be set to allow HTTP/0.9
+Since 7.66.0, libcurl requires this option set to 1L to allow HTTP/0.9
responses.
.SH PROTOCOLS
HTTP
diff --git a/docs/libcurl/opts/GNURLOPT_HTTP_VERSION.3 b/docs/libcurl/opts/GNURLOPT_HTTP_VERSION.3
index 3716ff933..260363e16 100644
--- a/docs/libcurl/opts/GNURLOPT_HTTP_VERSION.3
+++ b/docs/libcurl/opts/GNURLOPT_HTTP_VERSION.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -57,6 +57,14 @@ Issue non-TLS HTTP requests using HTTP/2 without HTTP/1.1 Upgrade. It requires
prior knowledge that the server supports HTTP/2 straight away. HTTPS requests
will still do HTTP/2 the standard way with negotiated protocol version in the
TLS handshake. (Added in 7.49.0)
+.IP CURL_HTTP_VERSION_3
+(Added in 7.66.0) Setting this value will make libcurl attempt to use HTTP/3
+directly to the server given in the URL. Note that this cannot gracefully
+downgrade to an earlier HTTP version if the server doesn't support HTTP/3.
+
+To upgrade to HTTP/3 more reliably, set the preferred version to something
+lower and let the server announce its HTTP/3 support via Alt-Svc:. See
+\fICURLOPT_ALTSVC(3)\fP.
.SH DEFAULT
Since curl 7.62.0: CURL_HTTP_VERSION_2TLS
@@ -82,4 +90,4 @@ Along with HTTP
Returns CURLE_OK if HTTP is supported, and CURLE_UNKNOWN_OPTION if not.
.SH "SEE ALSO"
.BR CURLOPT_SSLVERSION "(3), " CURLOPT_HTTP200ALIASES "(3), "
-.BR CURLOPT_HTTP09_ALLOWED "(3), "
+.BR CURLOPT_HTTP09_ALLOWED "(3), " CURLOPT_ALTSVC "(3) "
diff --git a/docs/libcurl/opts/GNURLOPT_POST.3 b/docs/libcurl/opts/GNURLOPT_POST.3
index 70f3da8db..0b3080e0d 100644
--- a/docs/libcurl/opts/GNURLOPT_POST.3
+++ b/docs/libcurl/opts/GNURLOPT_POST.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -55,7 +55,8 @@ If you use POST to an HTTP 1.1 server, you can send data without knowing the
size before starting the POST if you use chunked encoding. You enable this by
adding a header like "Transfer-Encoding: chunked" with
\fICURLOPT_HTTPHEADER(3)\fP. With HTTP 1.0 or without chunked transfer, you
-must specify the size in the request.
+must specify the size in the request. (Since 7.66.0, libcurl will
+automatically use chunked encoding for POSTs if the size is unknown.)
When setting \fICURLOPT_POST(3)\fP to 1, libcurl will automatically set
\fICURLOPT_NOBODY(3)\fP and \fICURLOPT_HTTPGET(3)\fP to 0.
diff --git a/docs/libcurl/opts/GNURLOPT_PROXY_SSL_VERIFYHOST.3 b/docs/libcurl/opts/GNURLOPT_PROXY_SSL_VERIFYHOST.3
index b68aea1ca..afcc51413 100644
--- a/docs/libcurl/opts/GNURLOPT_PROXY_SSL_VERIFYHOST.3
+++ b/docs/libcurl/opts/GNURLOPT_PROXY_SSL_VERIFYHOST.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2016, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -42,8 +42,15 @@ Curl considers the proxy the intended one when the Common Name field or a
Subject Alternate Name field in the certificate matches the host name in the
proxy string which you told curl to use.
-When the \fIverify\fP value is 1L, \fIcurl_easy_setopt\fP will return an error
-and the option value will not be changed due to old legacy reasons.
+If the \fIverify\fP value is set to 1:
+
+In 7.28.0 and earlier: treated as a debug option of sorts. It is not supported
+anymore because it frequently led to programmer mistakes.
+
+From 7.28.1 to 7.65.3: setting it to 1 made curl_easy_setopt() return an error
+and leave the flag untouched.
+
+From 7.66.0: 1 and 2 are treated the same.
When the \fIverify\fP value is 0L, the connection succeeds regardless of the
names used in the certificate. Use that ability with caution!
diff --git a/docs/libcurl/opts/GNURLOPT_READFUNCTION.3 b/docs/libcurl/opts/GNURLOPT_READFUNCTION.3
index 412a9cd70..3bd7fc2ce 100644
--- a/docs/libcurl/opts/GNURLOPT_READFUNCTION.3
+++ b/docs/libcurl/opts/GNURLOPT_READFUNCTION.3
@@ -69,8 +69,37 @@ The default internal read callback is fread().
.SH PROTOCOLS
This is used for all protocols when doing uploads.
.SH EXAMPLE
-Here's an example setting a read callback for reading that to upload to an FTP
-site: https://curl.haxx.se/libcurl/c/ftpupload.html
+.nf
+size_t read_callback(void *ptr, size_t size, size_t nmemb, void *userdata)
+{
+ FILE *readhere = (FILE *)userdata;
+ curl_off_t nread;
+
+ /* copy as much data as possible into the 'ptr' buffer, but no more than
+ 'size' * 'nmemb' bytes! */
+ size_t retcode = fread(ptr, size, nmemb, readhere);
+
+ nread = (curl_off_t)retcode;
+
+ fprintf(stderr, "*** We read %" CURL_FORMAT_CURL_OFF_T
+ " bytes from file\\n", nread);
+ return retcode;
+}
+
+void setup(char *uploadthis)
+{
+  FILE *file = fopen(uploadthis, "rb");
+ CURLcode result;
+
+ /* set callback to use */
+ curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_callback);
+
+  /* pass the FILE * to the callback */
+  curl_easy_setopt(curl, CURLOPT_READDATA, file);
+
+ result = curl_easy_perform(curl);
+}
+.fi
.SH AVAILABILITY
CURL_READFUNC_PAUSE return code was added in 7.18.0 and CURL_READFUNC_ABORT
was added in 7.12.1.
diff --git a/docs/libcurl/opts/GNURLOPT_SSL_VERIFYHOST.3 b/docs/libcurl/opts/GNURLOPT_SSL_VERIFYHOST.3
index c680d2e37..9513c65b1 100644
--- a/docs/libcurl/opts/GNURLOPT_SSL_VERIFYHOST.3
+++ b/docs/libcurl/opts/GNURLOPT_SSL_VERIFYHOST.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2015, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -45,11 +45,15 @@ Curl considers the server the intended one when the Common Name field or a
Subject Alternate Name field in the certificate matches the host name in the
URL to which you told Curl to connect.
-When the \fIverify\fP value is 1, \fIcurl_easy_setopt\fP will return an error
-and the option value will not be changed. It was previously (in 7.28.0 and
-earlier) a debug option of some sorts, but it is no longer supported due to
-frequently leading to programmer mistakes. Future versions will stop returning
-an error for 1 and just treat 1 and 2 the same.
+If the \fIverify\fP value is set to 1:
+
+In 7.28.0 and earlier: treated as a debug option of sorts; this use was later
+dropped because it frequently led to programmer mistakes.
+
+From 7.28.1 to 7.65.3: setting it to 1 made curl_easy_setopt() return an
+error and left the flag untouched.
+
+From 7.66.0: 1 and 2 are treated the same.
When the \fIverify\fP value is 0, the connection succeeds regardless of the
names in the certificate. Use that ability with caution!
diff --git a/docs/libcurl/opts/Makefile.inc b/docs/libcurl/opts/Makefile.inc
index a0b8bd95e..58b58dc41 100644
--- a/docs/libcurl/opts/Makefile.inc
+++ b/docs/libcurl/opts/Makefile.inc
@@ -43,6 +43,7 @@ man_MANS = \
GNURLINFO_REDIRECT_URL.3 \
GNURLINFO_REQUEST_SIZE.3 \
GNURLINFO_RESPONSE_CODE.3 \
+ GNURLINFO_RETRY_AFTER.3 \
GNURLINFO_RTSP_CLIENT_CSEQ.3 \
GNURLINFO_RTSP_CSEQ_RECV.3 \
GNURLINFO_RTSP_SERVER_CSEQ.3 \
@@ -272,6 +273,7 @@ man_MANS = \
GNURLOPT_RTSP_SESSION_ID.3 \
GNURLOPT_RTSP_STREAM_URI.3 \
GNURLOPT_RTSP_TRANSPORT.3 \
+ GNURLOPT_SASL_AUTHZID.3 \
GNURLOPT_SASL_IR.3 \
GNURLOPT_SEEKDATA.3 \
GNURLOPT_SEEKFUNCTION.3 \
diff --git a/docs/libcurl/symbols-in-versions b/docs/libcurl/symbols-in-versions
index 5244a7cdb..9daad949f 100644
--- a/docs/libcurl/symbols-in-versions
+++ b/docs/libcurl/symbols-in-versions
@@ -12,7 +12,6 @@
Name Introduced Deprecated Removed
-CURLALTSVC_ALTUSED 7.64.1
CURLALTSVC_H1 7.64.1
CURLALTSVC_H2 7.64.1
CURLALTSVC_H3 7.64.1
@@ -40,6 +39,7 @@ CURLCLOSEPOLICY_SLOWEST 7.7
CURLE_ABORTED_BY_CALLBACK 7.1
CURLE_AGAIN 7.18.2
CURLE_ALREADY_COMPLETE 7.7.2
+CURLE_AUTH_ERROR 7.66.0
CURLE_BAD_CALLING_ORDER 7.1 7.17.0
CURLE_BAD_CONTENT_ENCODING 7.10
CURLE_BAD_DOWNLOAD_RESUME 7.10
@@ -266,6 +266,7 @@ CURLINFO_REDIRECT_TIME_T 7.61.0
CURLINFO_REDIRECT_URL 7.18.2
CURLINFO_REQUEST_SIZE 7.4.1
CURLINFO_RESPONSE_CODE 7.10.8
+CURLINFO_RETRY_AFTER 7.66.0
CURLINFO_RTSP_CLIENT_CSEQ 7.20.0
CURLINFO_RTSP_CSEQ_RECV 7.20.0
CURLINFO_RTSP_SERVER_CSEQ 7.20.0
@@ -554,6 +555,7 @@ CURLOPT_RTSP_SERVER_CSEQ 7.20.0
CURLOPT_RTSP_SESSION_ID 7.20.0
CURLOPT_RTSP_STREAM_URI 7.20.0
CURLOPT_RTSP_TRANSPORT 7.20.0
+CURLOPT_SASL_AUTHZID 7.66.0
CURLOPT_SASL_IR 7.31.0
CURLOPT_SEEKDATA 7.18.0
CURLOPT_SEEKFUNCTION 7.18.0
@@ -786,6 +788,7 @@ CURLVERSION_FOURTH 7.16.1
CURLVERSION_NOW 7.10
CURLVERSION_SECOND 7.11.1
CURLVERSION_THIRD 7.12.0
+CURLVERSION_SIXTH 7.66.0
CURL_CHUNK_BGN_FUNC_FAIL 7.21.0
CURL_CHUNK_BGN_FUNC_OK 7.21.0
CURL_CHUNK_BGN_FUNC_SKIP 7.21.0
@@ -830,6 +833,7 @@ CURL_HTTP_VERSION_2 7.43.0
CURL_HTTP_VERSION_2TLS 7.47.0
CURL_HTTP_VERSION_2_0 7.33.0
CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE 7.49.0
+CURL_HTTP_VERSION_3 7.66.0
CURL_HTTP_VERSION_NONE 7.9.1
CURL_IPRESOLVE_V4 7.10.8
CURL_IPRESOLVE_V6 7.10.8
@@ -924,6 +928,7 @@ CURL_VERSION_DEBUG 7.10.6
CURL_VERSION_GSSAPI 7.38.0
CURL_VERSION_GSSNEGOTIATE 7.10.6 7.38.0
CURL_VERSION_HTTP2 7.33.0
+CURL_VERSION_HTTP3 7.66.0
CURL_VERSION_HTTPS_PROXY 7.52.0
CURL_VERSION_IDN 7.12.0
CURL_VERSION_IPV6 7.10