Commit 2442015a authored by Georg Brandl

Create http package. #2883.

parent 744c2cd3
......@@ -230,7 +230,7 @@ Because the default handlers handle redirects (codes in the 300 range), and
codes in the 100-299 range indicate success, you will usually only see error
codes in the 400-599 range.
``BaseHTTPServer.BaseHTTPRequestHandler.responses`` is a useful dictionary of
:attr:`http.server.BaseHTTPRequestHandler.responses` is a useful dictionary of
response codes that shows all the response codes used by RFC 2616. The
dictionary is reproduced here for convenience::
......@@ -385,7 +385,7 @@ redirect. The URL of the page fetched may not be the same as the URL requested.
**info** - this returns a dictionary-like object that describes the page
fetched, particularly the headers sent by the server. It is currently an
``httplib.HTTPMessage`` instance.
``http.client.HTTPMessage`` instance.
Typical headers include 'Content-length', 'Content-type', and so on. See the
`Quick Reference to HTTP Headers <http://www.cs.tut.fi/~jkorpela/http.html>`_
......@@ -526,13 +526,13 @@ Sockets and Layers
==================
The Python support for fetching resources from the web is layered. urllib2 uses
the httplib library, which in turn uses the socket library.
the http.client library, which in turn uses the socket library.
As of Python 2.3 you can specify how long a socket should wait for a response
before timing out. This can be useful in applications which have to fetch web
pages. By default the socket module has *no timeout* and can hang. Currently,
the socket timeout is not exposed at the httplib or urllib2 levels. However,
you can set the default timeout globally for all sockets using ::
the socket timeout is not exposed at the http.client or urllib2 levels.
However, you can set the default timeout globally for all sockets using ::
import socket
import urllib2
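# The lines below are a sketch, not part of the original example: give
# every new socket a ten-second default timeout so that urllib2 calls
# cannot hang indefinitely on a slow or unresponsive server.
socket.setdefaulttimeout(10)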
......
:mod:`CGIHTTPServer` --- CGI-capable HTTP request handler
=========================================================
.. module:: CGIHTTPServer
:synopsis: This module provides a request handler for HTTP servers which can run CGI
scripts.
.. sectionauthor:: Moshe Zadka <moshez@zadka.site.co.il>
The :mod:`CGIHTTPServer` module defines a request-handler class that is
interface-compatible with :class:`BaseHTTPServer.BaseHTTPRequestHandler`,
inherits behavior from :class:`SimpleHTTPServer.SimpleHTTPRequestHandler`, and
can also run CGI scripts.
.. note::
This module can run CGI scripts on Unix and Windows systems; on Mac OS it will
only be able to run Python scripts within the same process as itself.
.. note::
CGI scripts run by the :class:`CGIHTTPRequestHandler` class cannot execute
redirects (HTTP code 302), because code 200 (script output follows) is sent
prior to execution of the CGI script. This pre-empts the status code.
The :mod:`CGIHTTPServer` module defines the following class:
.. class:: CGIHTTPRequestHandler(request, client_address, server)
This class is used to serve either files or output of CGI scripts from the
current directory and below. Note that mapping HTTP hierarchic structure to
local directory structure is exactly as in
:class:`SimpleHTTPServer.SimpleHTTPRequestHandler`.
The class will, however, run the CGI script instead of serving it as a file, if
it guesses it to be a CGI script. Only directory-based CGI is supported --- the
other common server configuration is to treat special extensions as denoting
CGI scripts.
The :func:`do_GET` and :func:`do_HEAD` functions are modified to run CGI scripts
and serve the output, instead of serving files, if the request leads to
somewhere below the ``cgi_directories`` path.
The :class:`CGIHTTPRequestHandler` defines the following data member:
.. attribute:: cgi_directories
This defaults to ``['/cgi-bin', '/htbin']`` and describes directories to
treat as containing CGI scripts.
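A hedged sketch (not part of the original documentation) of pointing the
handler at a different script directory; the directory name is only
illustrative::

   import CGIHTTPServer

   # Class-level assignment: every handler instance created by the server
   # now treats /scripts (relative to the current directory) as CGI.
   CGIHTTPServer.CGIHTTPRequestHandler.cgi_directories = ['/scripts']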
The :class:`CGIHTTPRequestHandler` defines the following methods:
.. method:: do_POST()
This method serves the ``'POST'`` request type, only allowed for CGI
scripts. Error 501, "Can only POST to CGI scripts", is output when trying
to POST to a non-CGI URL.
Note that CGI scripts will be run with the UID of user nobody, for security reasons.
Problems with the CGI script will be translated to error 403.
For example usage, see the implementation of the :func:`test` function.
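A rough sketch (not taken from the module itself) of how the handler is
typically combined with :class:`BaseHTTPServer.HTTPServer`; the port number is
arbitrary::

   import BaseHTTPServer
   import CGIHTTPServer

   # Files are served from the current directory; requests that map to
   # /cgi-bin or /htbin execute the target as a CGI script instead.
   httpd = BaseHTTPServer.HTTPServer(('', 8000),
                                     CGIHTTPServer.CGIHTTPRequestHandler)
   httpd.serve_forever()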
.. seealso::
Module :mod:`BaseHTTPServer`
Base class implementation for Web server and request handler.
......@@ -1159,8 +1159,8 @@ convert between Unicode and the ACE. Furthermore, the :mod:`socket` module
transparently converts Unicode host names to ACE, so that applications need not
be concerned about converting host names themselves when they pass them to the
socket module. On top of that, modules that have host names as function
parameters, such as :mod:`httplib` and :mod:`ftplib`, accept Unicode host names
(:mod:`httplib` then also transparently sends an IDNA hostname in the
parameters, such as :mod:`http.client` and :mod:`ftplib`, accept Unicode host
names (:mod:`http.client` then also transparently sends an IDNA hostname in the
:mailheader:`Host` field if it sends that field at all).
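As a brief illustrative sketch (not part of the original text), the conversion
these modules perform can be reproduced with the ``idna`` codec; the host name
is made up::

   # The 'idna' codec yields the ASCII-compatible ("xn--...") form that is
   # actually sent on the wire; socket and http.client do this transparently.
   'www.alliancefrançaise.example'.encode('idna')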
When receiving host names from the wire (such as in reverse name lookup), no
......
:mod:`http.client` --- HTTP protocol client
===========================================
:mod:`httplib` --- HTTP protocol client
=======================================
.. module:: httplib
.. module:: http.client
:synopsis: HTTP and HTTPS protocol client (requires sockets).
.. index::
pair: HTTP; protocol
single: HTTP; httplib (standard module)
single: HTTP; http.client (standard module)
.. index:: module: urllib
......@@ -39,10 +38,10 @@ The module provides the following classes:
For example, the following calls all create instances that connect to the server
at the same host and port::
>>> h1 = httplib.HTTPConnection('www.cwi.nl')
>>> h2 = httplib.HTTPConnection('www.cwi.nl:80')
>>> h3 = httplib.HTTPConnection('www.cwi.nl', 80)
>>> h3 = httplib.HTTPConnection('www.cwi.nl', 80, timeout=10)
>>> h1 = http.client.HTTPConnection('www.cwi.nl')
>>> h2 = http.client.HTTPConnection('www.cwi.nl:80')
>>> h3 = http.client.HTTPConnection('www.cwi.nl', 80)
>>> h4 = http.client.HTTPConnection('www.cwi.nl', 80, timeout=10)
.. class:: HTTPSConnection(host[, port[, key_file[, cert_file[, strict[, timeout]]]]])
......@@ -338,7 +337,7 @@ and also the following constants for integer status codes:
This dictionary maps the HTTP 1.1 status codes to the W3C names.
Example: ``httplib.responses[httplib.NOT_FOUND]`` is ``'Not Found'``.
Example: ``http.client.responses[http.client.NOT_FOUND]`` is ``'Not Found'``.
.. _httpconnection-objects:
......@@ -464,15 +463,13 @@ HTTPResponse Objects
Reason phrase returned by server.
.. _httplib-examples:
Examples
--------
Here is an example session that uses the ``GET`` method::
>>> import httplib
>>> conn = httplib.HTTPConnection("www.python.org")
>>> import http.client
>>> conn = http.client.HTTPConnection("www.python.org")
>>> conn.request("GET", "/index.html")
>>> r1 = conn.getresponse()
>>> print(r1.status, r1.reason)
......@@ -487,11 +484,11 @@ Here is an example session that uses the ``GET`` method::
Here is an example session that shows how to ``POST`` requests::
>>> import httplib, urllib
>>> import http.client, urllib
>>> params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
>>> headers = {"Content-type": "application/x-www-form-urlencoded",
... "Accept": "text/plain"}
>>> conn = httplib.HTTPConnection("musi-cal.mojam.com:80")
>>> conn = http.client.HTTPConnection("musi-cal.mojam.com:80")
>>> conn.request("POST", "/cgi-bin/query", params, headers)
>>> response = conn.getresponse()
>>> print(response.status, response.reason)
......
:mod:`http.cookiejar` --- Cookie handling for HTTP clients
==========================================================
:mod:`cookielib` --- Cookie handling for HTTP clients
=====================================================
.. module:: cookielib
.. module:: http.cookiejar
:synopsis: Classes for automatic handling of HTTP cookies.
.. moduleauthor:: John J. Lee <jjl@pobox.com>
.. sectionauthor:: John J. Lee <jjl@pobox.com>
The :mod:`cookielib` module defines classes for automatic handling of HTTP
The :mod:`http.cookiejar` module defines classes for automatic handling of HTTP
cookies. It is useful for accessing web sites that require small pieces of data
-- :dfn:`cookies` -- to be set on the client machine by an HTTP response from a
web server, and then returned to the server in later HTTP requests.
......@@ -18,7 +17,7 @@ Both the regular Netscape cookie protocol and the protocol defined by
:rfc:`2109` cookies are parsed as Netscape cookies and subsequently treated
either as Netscape or RFC 2965 cookies according to the 'policy' in effect.
Note that the great majority of cookies on the Internet are Netscape cookies.
:mod:`cookielib` attempts to follow the de-facto Netscape cookie protocol (which
:mod:`http.cookiejar` attempts to follow the de-facto Netscape cookie protocol (which
differs substantially from that set out in the original Netscape specification),
including taking note of the ``max-age`` and ``port`` cookie-attributes
introduced with RFC 2965.
......@@ -94,7 +93,7 @@ The following classes are provided:
.. class:: Cookie()
This class represents Netscape, RFC 2109 and RFC 2965 cookies. It is not
expected that users of :mod:`cookielib` construct their own :class:`Cookie`
expected that users of :mod:`http.cookiejar` construct their own :class:`Cookie`
instances. Instead, if necessary, call :meth:`make_cookies` on a
:class:`CookieJar` instance.
......@@ -104,9 +103,10 @@ The following classes are provided:
Module :mod:`urllib2`
URL opening with automatic cookie handling.
Module :mod:`Cookie`
Module :mod:`http.cookies`
HTTP cookie classes, principally useful for server-side code. The
:mod:`cookielib` and :mod:`Cookie` modules do not depend on each other.
:mod:`http.cookiejar` and :mod:`http.cookies` modules do not depend on each
other.
http://wwwsearch.sf.net/ClientCookie/
Extensions to this module, including a class for reading Microsoft Internet
......@@ -115,7 +115,7 @@ The following classes are provided:
http://wp.netscape.com/newsref/std/cookie_spec.html
The specification of the original Netscape cookie protocol. Though this is
still the dominant protocol, the 'Netscape cookie protocol' implemented by all
the major browsers (and :mod:`cookielib`) only bears a passing resemblance to
the major browsers (and :mod:`http.cookiejar`) only bears a passing resemblance to
the one sketched out in ``cookie_spec.html``.
:rfc:`2109` - HTTP State Management Mechanism
......@@ -339,7 +339,7 @@ methods:
Return boolean value indicating whether cookie should be accepted from server.
*cookie* is a :class:`cookielib.Cookie` instance. *request* is an object
*cookie* is a :class:`Cookie` instance. *request* is an object
implementing the interface defined by the documentation for
:meth:`CookieJar.extract_cookies`.
......@@ -348,7 +348,7 @@ methods:
Return boolean value indicating whether cookie should be returned to server.
*cookie* is a :class:`cookielib.Cookie` instance. *request* is an object
*cookie* is a :class:`Cookie` instance. *request* is an object
implementing the interface defined by the documentation for
:meth:`CookieJar.add_cookie_header`.
......@@ -424,10 +424,10 @@ The easiest way to provide your own policy is to override this class and call
its methods in your overridden implementations before adding your own additional
checks::
import cookielib
class MyCookiePolicy(cookielib.DefaultCookiePolicy):
import http.cookiejar
class MyCookiePolicy(http.cookiejar.DefaultCookiePolicy):
def set_ok(self, cookie, request):
if not cookielib.DefaultCookiePolicy.set_ok(self, cookie, request):
if not http.cookiejar.DefaultCookiePolicy.set_ok(self, cookie, request):
return False
if i_dont_want_to_store_this_cookie(cookie):
return False
......@@ -584,8 +584,6 @@ combinations of the above flags:
Equivalent to ``DomainStrictNoDots|DomainStrictNonDomain``.
.. _cookielib-cookie-objects:
Cookie Objects
--------------
......@@ -594,7 +592,7 @@ standard cookie-attributes specified in the various cookie standards. The
correspondence is not one-to-one, because there are complicated rules for
assigning default values, because the ``max-age`` and ``expires``
cookie-attributes contain equivalent information, and because RFC 2109 cookies
may be 'downgraded' by :mod:`cookielib` from version 1 to version 0 (Netscape)
may be 'downgraded' by :mod:`http.cookiejar` from version 1 to version 0 (Netscape)
cookies.
Assignment to these attributes should not be necessary other than in rare
......@@ -606,7 +604,7 @@ internal consistency, so you should know what you're doing if you do that.
Integer or :const:`None`. Netscape cookies have :attr:`version` 0. RFC 2965 and
RFC 2109 cookies have a ``version`` cookie-attribute of 1. However, note that
:mod:`cookielib` may 'downgrade' RFC 2109 cookies to Netscape cookies, in which
:mod:`http.cookiejar` may 'downgrade' RFC 2109 cookies to Netscape cookies, in which
case :attr:`version` is 0.
......@@ -664,7 +662,7 @@ internal consistency, so you should know what you're doing if you do that.
True if this cookie was received as an RFC 2109 cookie (ie. the cookie
arrived in a :mailheader:`Set-Cookie` header, and the value of the Version
cookie-attribute in that header was 1). This attribute is provided because
:mod:`cookielib` may 'downgrade' RFC 2109 cookies to Netscape cookies, in
:mod:`http.cookiejar` may 'downgrade' RFC 2109 cookies to Netscape cookies, in
which case :attr:`version` is 0.
......@@ -713,23 +711,21 @@ The :class:`Cookie` class also defines the following method:
cookie has expired at the specified time.
.. _cookielib-examples:
Examples
--------
The first example shows the most common usage of :mod:`cookielib`::
The first example shows the most common usage of :mod:`http.cookiejar`::
import cookielib, urllib2
cj = cookielib.CookieJar()
import http.cookiejar, urllib2
cj = http.cookiejar.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
r = opener.open("http://example.com/")
This example illustrates how to open a URL using your Netscape, Mozilla, or Lynx
cookies (assumes Unix/Netscape convention for location of the cookies file)::
import os, cookielib, urllib2
cj = cookielib.MozillaCookieJar()
import os, http.cookiejar, urllib2
cj = http.cookiejar.MozillaCookieJar()
cj.load(os.path.join(os.environ["HOME"], ".netscape/cookies.txt"))
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
r = opener.open("http://example.com/")
......@@ -740,7 +736,7 @@ Netscape cookies, and block some domains from setting cookies or having them
returned::
import urllib2
from cookielib import CookieJar, DefaultCookiePolicy
from http.cookiejar import CookieJar, DefaultCookiePolicy
policy = DefaultCookiePolicy(
rfc2965=True, strict_ns_domain=DefaultCookiePolicy.DomainStrict,
blocked_domains=["ads.net", ".ads.net"])
......
:mod:`http.cookies` --- HTTP state management
=============================================
:mod:`Cookie` --- HTTP state management
=======================================
.. module:: Cookie
.. module:: http.cookies
:synopsis: Support for HTTP state management (cookies).
.. moduleauthor:: Timothy O'Malley <timo@alum.mit.edu>
.. sectionauthor:: Moshe Zadka <moshez@zadka.site.co.il>
The :mod:`Cookie` module defines classes for abstracting the concept of
The :mod:`http.cookies` module defines classes for abstracting the concept of
cookies, an HTTP state management mechanism. It supports both simple string-only
cookies, and provides an abstraction for having any serializable data-type as
cookie value.
......@@ -63,7 +62,7 @@ result, the parsing rules used are a bit less strict.
The same security warning from :class:`SerialCookie` applies here.
A further security note is warranted. For backwards compatibility, the
:mod:`Cookie` module exports a class named :class:`Cookie` which is just an
:mod:`http.cookies` module exports a class named :class:`Cookie` which is just an
alias for :class:`SmartCookie`. This is probably a mistake and will likely be
removed in a future version. You should not use the :class:`Cookie` class in
your applications, for the same reason why you should not use the
......@@ -72,9 +71,9 @@ your applications, for the same reason why you should not use the
.. seealso::
Module :mod:`cookielib`
HTTP cookie handling for web *clients*. The :mod:`cookielib` and :mod:`Cookie`
modules do not depend on each other.
Module :mod:`http.cookiejar`
HTTP cookie handling for web *clients*. The :mod:`http.cookiejar` and
:mod:`http.cookies` modules do not depend on each other.
:rfc:`2109` - HTTP State Management Mechanism
This is the state management specification implemented by this module.
......@@ -206,15 +205,15 @@ Morsel Objects
Example
-------
The following example demonstrates how to use the :mod:`Cookie` module.
The following example demonstrates how to use the :mod:`http.cookies` module.
.. doctest::
:options: +NORMALIZE_WHITESPACE
>>> import Cookie
>>> C = Cookie.SimpleCookie()
>>> C = Cookie.SerialCookie()
>>> C = Cookie.SmartCookie()
>>> from http import cookies
>>> C = cookies.SimpleCookie()
>>> C = cookies.SerialCookie()
>>> C = cookies.SmartCookie()
>>> C["fig"] = "newton"
>>> C["sugar"] = "wafer"
>>> print(C) # generate HTTP headers
......@@ -223,32 +222,32 @@ The following example demonstrates how to use the :mod:`Cookie` module.
>>> print(C.output()) # same thing
Set-Cookie: fig=newton
Set-Cookie: sugar=wafer
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C["rocky"] = "road"
>>> C["rocky"]["path"] = "/cookie"
>>> print(C.output(header="Cookie:"))
Cookie: rocky=road; Path=/cookie
>>> print(C.output(attrs=[], header="Cookie:"))
Cookie: rocky=road
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C.load("chips=ahoy; vienna=finger") # load from a string (HTTP header)
>>> print(C)
Set-Cookie: chips=ahoy
Set-Cookie: vienna=finger
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C.load('keebler="E=everybody; L=\\"Loves\\"; fudge=\\012;";')
>>> print(C)
Set-Cookie: keebler="E=everybody; L=\"Loves\"; fudge=\012;"
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C["oreo"] = "doublestuff"
>>> C["oreo"]["path"] = "/"
>>> print(C)
Set-Cookie: oreo=doublestuff; Path=/
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C["twix"] = "none for you"
>>> C["twix"].value
'none for you'
>>> C = Cookie.SimpleCookie()
>>> C = cookies.SimpleCookie()
>>> C["number"] = 7 # equivalent to C["number"] = str(7)
>>> C["string"] = "seven"
>>> C["number"].value
......@@ -258,7 +257,7 @@ The following example demonstrates how to use the :mod:`Cookie` module.
>>> print(C)
Set-Cookie: number=7
Set-Cookie: string=seven
>>> C = Cookie.SerialCookie()
>>> C = cookies.SerialCookie()
>>> C["number"] = 7
>>> C["string"] = "seven"
>>> C["number"].value
......@@ -268,7 +267,7 @@ The following example demonstrates how to use the :mod:`Cookie` module.
>>> print(C)
Set-Cookie: number="I7\012."
Set-Cookie: string="S'seven'\012p1\012."
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C["number"] = 7
>>> C["string"] = "seven"
>>> C["number"].value
......
......@@ -26,7 +26,7 @@ is currently supported on most popular platforms. Here is an overview:
wsgiref.rst
urllib.rst
urllib2.rst
httplib.rst
http.client.rst
ftplib.rst
poplib.rst
imaplib.rst
......@@ -37,10 +37,8 @@ is currently supported on most popular platforms. Here is an overview:
uuid.rst
urlparse.rst
socketserver.rst
basehttpserver.rst
simplehttpserver.rst
cgihttpserver.rst
cookielib.rst
cookie.rst
http.server.rst
http.cookies.rst
http.cookiejar.rst
xmlrpc.client.rst
xmlrpc.server.rst
:mod:`SimpleHTTPServer` --- Simple HTTP request handler
=======================================================
.. module:: SimpleHTTPServer
:synopsis: This module provides a basic request handler for HTTP servers.
.. sectionauthor:: Moshe Zadka <moshez@zadka.site.co.il>
The :mod:`SimpleHTTPServer` module defines a single class,
:class:`SimpleHTTPRequestHandler`, which is interface-compatible with
:class:`BaseHTTPServer.BaseHTTPRequestHandler`.
The :mod:`SimpleHTTPServer` module defines the following class:
.. class:: SimpleHTTPRequestHandler(request, client_address, server)
This class serves files from the current directory and below, directly
mapping the directory structure to HTTP requests.
A lot of the work, such as parsing the request, is done by the base class
:class:`BaseHTTPServer.BaseHTTPRequestHandler`. This class implements the
:func:`do_GET` and :func:`do_HEAD` functions.
The following are defined as class-level attributes of
:class:`SimpleHTTPRequestHandler`:
.. attribute:: server_version
This will be ``"SimpleHTTP/" + __version__``, where ``__version__`` is
defined at the module level.
.. attribute:: extensions_map
A dictionary mapping suffixes into MIME types. The default is
signified by an empty string, and is considered to be
``application/octet-stream``. The mapping is used case-insensitively,
and so should contain only lower-cased keys.
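A small sketch (not from the original documentation) of extending the mapping
in a subclass so that the class-wide default is left untouched; the added
suffix is only an example::

   import SimpleHTTPServer

   class LogFileHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
       # Copy the class-level default, then register one extra suffix.
       extensions_map = dict(SimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map)
       extensions_map['.log'] = 'text/plain'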
The :class:`SimpleHTTPRequestHandler` class defines the following methods:
.. method:: do_HEAD()
This method serves the ``'HEAD'`` request type: it sends the headers it
would send for the equivalent ``GET`` request. See the :meth:`do_GET`
method for a more complete explanation of the possible headers.
.. method:: do_GET()
The request is mapped to a local file by interpreting the request as a
path relative to the current working directory.
If the request was mapped to a directory, the directory is checked for a
file named ``index.html`` or ``index.htm`` (in that order). If found, the
file's contents are returned; otherwise a directory listing is generated
by calling the :meth:`list_directory` method. This method uses
:func:`os.listdir` to scan the directory, and returns a ``404`` error
response if the call to :func:`os.listdir` fails.
If the request was mapped to a file, it is opened and the contents are
returned. Any :exc:`IOError` exception in opening the requested file is
mapped to a ``404``, ``'File not found'`` error. Otherwise, the content
type is guessed by calling the :meth:`guess_type` method, which in turn
uses the *extensions_map* variable.
A ``'Content-type:'`` header with the guessed content type is output,
followed by a ``'Content-Length:'`` header with the file's size and a
``'Last-Modified:'`` header with the file's modification time.
Then follows a blank line signifying the end of the headers, and then the
contents of the file are output. If the file's MIME type starts with
``text/`` the file is opened in text mode; otherwise binary mode is used.
For example usage, see the implementation of the :func:`test` function.
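A rough sketch (not taken from the module itself) of how the handler is usually
paired with :class:`BaseHTTPServer.HTTPServer`; the port number is arbitrary::

   import BaseHTTPServer
   import SimpleHTTPServer

   # Serve the current working directory over HTTP until interrupted.
   httpd = BaseHTTPServer.HTTPServer(('', 8000),
                                     SimpleHTTPServer.SimpleHTTPRequestHandler)
   httpd.serve_forever()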
.. seealso::
Module :mod:`BaseHTTPServer`
Base class implementation for Web server and request handler.
......@@ -489,7 +489,7 @@ sends some bytes, and reads part of the response::
pprint.pprint(ssl_sock.getpeercert())
print(pprint.pformat(ssl_sock.getpeercert()))
# Set a simple HTTP request -- use httplib in actual code.
# Set a simple HTTP request -- use http.client in actual code.
ssl_sock.write("""GET / HTTP/1.0\r
Host: www.verisign.com\r\n\r\n""")
......
......@@ -37,7 +37,7 @@ The :mod:`urllib2` module defines the following functions:
determine if a redirect was followed
* :meth:`info` --- return the meta-information of the page, such as headers, in
the form of an ``httplib.HTTPMessage`` instance
the form of an ``http.client.HTTPMessage`` instance
(see `Quick Reference to HTTP Headers <http://www.cs.tut.fi/~jkorpela/http.html>`_)
Raises :exc:`URLError` on errors.
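A compact, illustrative sketch (not part of the original reference text) of the
:meth:`geturl` and :meth:`info` methods described above; the URL is arbitrary::

   import urllib2

   response = urllib2.urlopen('http://www.example.com/')
   print(response.geturl())      # final URL, after any redirects
   headers = response.info()     # an http.client.HTTPMessage instance
   print(headers['Content-Type'])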
......@@ -100,7 +100,7 @@ The following exceptions are raised as appropriate:
An HTTP status code as defined in `RFC 2616 <http://www.faqs.org/rfcs/rfc2616.html>`_.
This numeric value corresponds to a value found in the dictionary of
codes as found in :attr:`BaseHTTPServer.BaseHTTPRequestHandler.responses`.
codes as found in :attr:`http.server.BaseHTTPRequestHandler.responses`.
......@@ -133,10 +133,11 @@ The following classes are provided:
HTTP cookies:
*origin_req_host* should be the request-host of the origin transaction, as
defined by :rfc:`2965`. It defaults to ``cookielib.request_host(self)``. This
is the host name or IP address of the original request that was initiated by the
user. For example, if the request is for an image in an HTML document, this
should be the request-host of the request for the page containing the image.
defined by :rfc:`2965`. It defaults to ``http.cookiejar.request_host(self)``.
This is the host name or IP address of the original request that was
initiated by the user. For example, if the request is for an image in an
HTML document, this should be the request-host of the request for the page
containing the image.
*unverifiable* should indicate whether the request is unverifiable, as defined
by RFC 2965. It defaults to False. An unverifiable request is one whose URL
......@@ -627,7 +628,7 @@ HTTPCookieProcessor Objects
.. attribute:: HTTPCookieProcessor.cookiejar
The :class:`cookielib.CookieJar` in which cookies are stored.
The :class:`http.cookiejar.CookieJar` in which cookies are stored.
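A short sketch (not part of the original text) of reaching that attribute after
a request has been made; the URL is illustrative::

   import urllib2

   processor = urllib2.HTTPCookieProcessor()   # builds its own CookieJar
   opener = urllib2.build_opener(processor)
   opener.open('http://www.example.com/')
   for cookie in processor.cookiejar:          # the jar referenced here
       print(cookie.name, cookie.value)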
.. _proxy-handler:
......
......@@ -260,7 +260,7 @@ manipulation of WSGI response headers using a mapping-like interface.
:synopsis: A simple WSGI HTTP server.
This module implements a simple HTTP server (based on :mod:`BaseHTTPServer`)
This module implements a simple HTTP server (based on :mod:`http.server`)
that serves WSGI applications. Each server instance serves a single WSGI
application on a given host and port. If you want to serve multiple
applications on a single host and port, you should create a WSGI application
......@@ -303,13 +303,13 @@ request. (E.g., using the :func:`shift_path_info` function from
Create a :class:`WSGIServer` instance. *server_address* should be a
``(host,port)`` tuple, and *RequestHandlerClass* should be the subclass of
:class:`BaseHTTPServer.BaseHTTPRequestHandler` that will be used to process
:class:`http.server.BaseHTTPRequestHandler` that will be used to process
requests.
You do not normally need to call this constructor, as the :func:`make_server`
function can handle all the details for you.
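A minimal sketch (not part of the original documentation), using the
:func:`demo_app` helper that :mod:`wsgiref.simple_server` also provides; host
and port are arbitrary::

   from wsgiref.simple_server import make_server, demo_app

   # make_server wires up WSGIServer and WSGIRequestHandler for you.
   httpd = make_server('', 8000, demo_app)
   httpd.handle_request()        # serve a single request, then return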
:class:`WSGIServer` is a subclass of :class:`BaseHTTPServer.HTTPServer`, so all
:class:`WSGIServer` is a subclass of :class:`http.server.HTTPServer`, so all
of its methods (such as :meth:`serve_forever` and :meth:`handle_request`) are
available. :class:`WSGIServer` also provides these WSGI-specific methods:
......
......@@ -506,14 +506,14 @@ transport. The following example shows how:
::
import xmlrpc.client, httplib
import xmlrpc.client, http.client
class ProxiedTransport(xmlrpc.client.Transport):
def set_proxy(self, proxy):
self.proxy = proxy
def make_connection(self, host):
self.realhost = host
h = httplib.HTTP(self.proxy)
h = http.client.HTTP(self.proxy)
return h
def send_request(self, connection, handler, request_body):
connection.putrequest("POST", 'http://%s%s' % (self.realhost, handler))
......
......@@ -449,7 +449,7 @@ The :mod:`asynchat` and :mod:`asyncore` modules contain the following notice::
Cookie management
-----------------
The :mod:`Cookie` module contains the following notice::
The :mod:`http.cookies` module contains the following notice::
Copyright 2000 by Timothy O'Malley <timo@alum.mit.edu>
......
"""Simple HTTP Server.
This module builds on BaseHTTPServer by implementing the standard GET
and HEAD requests in a fairly straightforward manner.
"""
__version__ = "0.6"
__all__ = ["SimpleHTTPRequestHandler"]
import os
import sys
import posixpath
import BaseHTTPServer
import urllib
import cgi
import shutil
import mimetypes
from io import BytesIO
class SimpleHTTPRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
"""Simple HTTP request handler with GET and HEAD commands.
This serves files from the current directory and any of its
subdirectories. The MIME type for files is determined by
calling the .guess_type() method.
The GET and HEAD requests are identical except that the HEAD
request omits the actual contents of the file.
"""
server_version = "SimpleHTTP/" + __version__
def do_GET(self):
"""Serve a GET request."""
f = self.send_head()
if f:
self.copyfile(f, self.wfile)
f.close()
def do_HEAD(self):
"""Serve a HEAD request."""
f = self.send_head()
if f:
f.close()
def send_head(self):
"""Common code for GET and HEAD commands.
This sends the response code and MIME headers.
Return value is either a file object (which has to be copied
to the outputfile by the caller unless the command was HEAD,
and must be closed by the caller under all circumstances), or
None, in which case the caller has nothing further to do.
"""
path = self.translate_path(self.path)
f = None
if os.path.isdir(path):
if not self.path.endswith('/'):
# redirect browser - doing basically what apache does
self.send_response(301)
self.send_header("Location", self.path + "/")
self.end_headers()
return None
for index in "index.html", "index.htm":
index = os.path.join(path, index)
if os.path.exists(index):
path = index
break
else:
return self.list_directory(path)
ctype = self.guess_type(path)
try:
f = open(path, 'rb')
except IOError:
self.send_error(404, "File not found")
return None
self.send_response(200)
self.send_header("Content-type", ctype)
fs = os.fstat(f.fileno())
self.send_header("Content-Length", str(fs[6]))
self.send_header("Last-Modified", self.date_time_string(fs.st_mtime))
self.end_headers()
return f
def list_directory(self, path):
"""Helper to produce a directory listing (absent index.html).
Return value is either a file object, or None (indicating an
error). In either case, the headers are sent, making the
interface the same as for send_head().
"""
try:
list = os.listdir(path)
except os.error:
self.send_error(404, "No permission to list directory")
return None
list.sort(key=lambda a: a.lower())
r = []
displaypath = cgi.escape(urllib.unquote(self.path))
r.append('<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">')
r.append("<html>\n<title>Directory listing for %s</title>\n" % displaypath)
r.append("<body>\n<h2>Directory listing for %s</h2>\n" % displaypath)
r.append("<hr>\n<ul>\n")
for name in list:
fullname = os.path.join(path, name)
displayname = linkname = name
# Append / for directories or @ for symbolic links
if os.path.isdir(fullname):
displayname = name + "/"
linkname = name + "/"
if os.path.islink(fullname):
displayname = name + "@"
# Note: a link to a directory displays with @ and links with /
r.append('<li><a href="%s">%s</a>\n'
% (urllib.quote(linkname), cgi.escape(displayname)))
r.append("</ul>\n<hr>\n</body>\n</html>\n")
enc = sys.getfilesystemencoding()
encoded = ''.join(r).encode(enc)
f = BytesIO()
f.write(encoded)
f.seek(0)
self.send_response(200)
self.send_header("Content-type", "text/html; charset=%s" % enc)
self.send_header("Content-Length", str(len(encoded)))
self.end_headers()
return f
def translate_path(self, path):
"""Translate a /-separated PATH to the local filename syntax.
Components that mean special things to the local file system
(e.g. drive or directory names) are ignored. (XXX They should
probably be diagnosed.)
"""
# abandon query parameters
path = path.split('?',1)[0]
path = path.split('#',1)[0]
path = posixpath.normpath(urllib.unquote(path))
words = path.split('/')
words = filter(None, words)
path = os.getcwd()
for word in words:
drive, word = os.path.splitdrive(word)
head, word = os.path.split(word)
if word in (os.curdir, os.pardir): continue
path = os.path.join(path, word)
return path
def copyfile(self, source, outputfile):
"""Copy all data between two file objects.
The SOURCE argument is a file object open for reading
(or anything with a read() method) and the DESTINATION
argument is a file object open for writing (or
anything with a write() method).
The only reason for overriding this would be to change
the block size or perhaps to replace newlines by CRLF
-- note however that the default server uses this
to copy binary data as well.
"""
shutil.copyfileobj(source, outputfile)
def guess_type(self, path):
"""Guess the type of a file.
Argument is a PATH (a filename).
Return value is a string of the form type/subtype,
usable for a MIME Content-type header.
The default implementation looks the file's extension
up in the table self.extensions_map, using application/octet-stream
as a default; however it would be permissible (if
slow) to look inside the data to make a better guess.
"""
base, ext = posixpath.splitext(path)
if ext in self.extensions_map:
return self.extensions_map[ext]
ext = ext.lower()
if ext in self.extensions_map:
return self.extensions_map[ext]
else:
return self.extensions_map['']
if not mimetypes.inited:
mimetypes.init() # try to read system mime.types
extensions_map = mimetypes.types_map.copy()
extensions_map.update({
'': 'application/octet-stream', # Default
'.py': 'text/plain',
'.c': 'text/plain',
'.h': 'text/plain',
})
def test(HandlerClass = SimpleHTTPRequestHandler,
ServerClass = BaseHTTPServer.HTTPServer):
BaseHTTPServer.test(HandlerClass, ServerClass)
if __name__ == '__main__':
test()
......@@ -12,7 +12,7 @@ libwww-perl, I hope.
"""
import time, re
from cookielib import (_warn_unhandled_exception, FileCookieJar, LoadError,
from http.cookiejar import (_warn_unhandled_exception, FileCookieJar, LoadError,
Cookie, MISSING_FILENAME_TEXT,
join_header_words, split_header_words,
iso2time, time2isoz)
......
......@@ -2,7 +2,7 @@
import re, time
from cookielib import (_warn_unhandled_exception, FileCookieJar, LoadError,
from http.cookiejar import (_warn_unhandled_exception, FileCookieJar, LoadError,
Cookie, MISSING_FILENAME_TEXT)
class MozillaCookieJar(FileCookieJar):
......@@ -73,7 +73,7 @@ class MozillaCookieJar(FileCookieJar):
domain_specified = (domain_specified == "TRUE")
if name == "":
# cookies.txt regards 'Set-Cookie: foo' as a cookie
# with no name, whereas cookielib regards it as a
# with no name, whereas http.cookiejar regards it as a
# cookie with no value.
name = value
value = None
......@@ -134,7 +134,7 @@ class MozillaCookieJar(FileCookieJar):
expires = ""
if cookie.value is None:
# cookies.txt regards 'Set-Cookie: foo' as a cookie
# with no name, whereas cookielib regards it as a
# with no name, whereas http.cookiejar regards it as a
# cookie with no value.
name = ""
value = cookie.name
......
......@@ -11,7 +11,7 @@ import os
import socket
import platform
import configparser
import httplib
import http.client
import base64
import urlparse
......@@ -151,9 +151,9 @@ class upload(PyPIRCCommand):
urlparse.urlparse(self.repository)
assert not params and not query and not fragments
if schema == 'http':
http = httplib.HTTPConnection(netloc)
http = http.client.HTTPConnection(netloc)
elif schema == 'https':
http = httplib.HTTPSConnection(netloc)
http = http.client.HTTPSConnection(netloc)
else:
raise AssertionError("unsupported schema "+schema)
......
# This directory is a Python package.
......@@ -33,7 +33,7 @@ try:
import threading as _threading
except ImportError:
import dummy_threading as _threading
import httplib # only for the default HTTP port
import http.client # only for the default HTTP port
from calendar import timegm
debug = False # set to True to enable debugging via the logging module
......@@ -45,11 +45,11 @@ def _debug(*args):
global logger
if not logger:
import logging
logger = logging.getLogger("cookielib")
logger = logging.getLogger("http.cookiejar")
return logger.debug(*args)
DEFAULT_HTTP_PORT = str(httplib.HTTP_PORT)
DEFAULT_HTTP_PORT = str(http.client.HTTP_PORT)
MISSING_FILENAME_TEXT = ("a filename was not supplied (nor was the CookieJar "
"instance initialised with one)")
......@@ -61,7 +61,7 @@ def _warn_unhandled_exception():
f = io.StringIO()
traceback.print_exc(None, f)
msg = f.getvalue()
warnings.warn("cookielib bug!\n%s" % msg, stacklevel=2)
warnings.warn("http.cookiejar bug!\n%s" % msg, stacklevel=2)
# Date/time conversion
......
......@@ -48,25 +48,25 @@ The Basics
Importing is easy..
>>> import Cookie
>>> from http import cookies
Most of the time you start by creating a cookie. Cookies come in
three flavors, each with slightly different encoding semantics, but
more on that later.
>>> C = Cookie.SimpleCookie()
>>> C = Cookie.SerialCookie()
>>> C = Cookie.SmartCookie()
>>> C = cookies.SimpleCookie()
>>> C = cookies.SerialCookie()
>>> C = cookies.SmartCookie()
[Note: Long-time users of Cookie.py will remember using
Cookie.Cookie() to create an Cookie object. Although deprecated, it
[Note: Long-time users of cookies.py will remember using
cookies.Cookie() to create a Cookie object. Although deprecated, it
is still supported by the code. See the Backward Compatibility notes
for more information.]
Once you've created your Cookie, you can add values just as if it were
a dictionary.
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C["fig"] = "newton"
>>> C["sugar"] = "wafer"
>>> C.output()
......@@ -77,7 +77,7 @@ appropriate format for a Set-Cookie: header. This is the
default behavior. You can change the header and printed
attributes by using the .output() function
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C["rocky"] = "road"
>>> C["rocky"]["path"] = "/cookie"
>>> print(C.output(header="Cookie:"))
......@@ -89,7 +89,7 @@ The load() method of a Cookie extracts cookies from a string. In a
CGI script, you would use this method to extract the cookies from the
HTTP_COOKIE environment variable.
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C.load("chips=ahoy; vienna=finger")
>>> C.output()
'Set-Cookie: chips=ahoy\r\nSet-Cookie: vienna=finger'
......@@ -98,7 +98,7 @@ The load() method is darn-tootin smart about identifying cookies
within a string. Escaped quotation marks, nested semicolons, and other
such trickeries do not confuse it.
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C.load('keebler="E=everybody; L=\\"Loves\\"; fudge=\\012;";')
>>> print(C)
Set-Cookie: keebler="E=everybody; L=\"Loves\"; fudge=\012;"
......@@ -107,7 +107,7 @@ Each element of the Cookie also supports all of the RFC 2109
Cookie attributes. Here's an example which sets the Path
attribute.
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C["oreo"] = "doublestuff"
>>> C["oreo"]["path"] = "/"
>>> print(C)
......@@ -116,7 +116,7 @@ attribute.
Each dictionary element has a 'value' attribute, which gives you
back the value associated with the key.
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C["twix"] = "none for you"
>>> C["twix"].value
'none for you'
......@@ -135,7 +135,7 @@ The SimpleCookie expects that all values should be standard strings.
Just to be sure, SimpleCookie invokes the str() builtin to convert
the value to a string, when the values are set dictionary-style.
>>> C = Cookie.SimpleCookie()
>>> C = cookies.SimpleCookie()
>>> C["number"] = 7
>>> C["string"] = "seven"
>>> C["number"].value
......@@ -154,7 +154,7 @@ Python object to a value, and recover the exact same object when the
cookie has been returned. (SerialCookie can yield some
strange-looking cookie values, however.)
>>> C = Cookie.SerialCookie()
>>> C = cookies.SerialCookie()
>>> C["number"] = 7
>>> C["string"] = "seven"
>>> C["number"].value
......@@ -178,7 +178,7 @@ when the load() method parses out values, it attempts to de-serialize
the value. If it fails, then it falls back to treating the value
as a string.
>>> C = Cookie.SmartCookie()
>>> C = cookies.SmartCookie()
>>> C["number"] = 7
>>> C["string"] = "seven"
>>> C["number"].value
......@@ -193,10 +193,10 @@ Backwards Compatibility
-----------------------
In order to keep compatibility with earlier versions of Cookie.py,
it is still possible to use Cookie.Cookie() to create a Cookie. In
it is still possible to use cookies.Cookie() to create a Cookie. In
fact, this simply returns a SmartCookie.
>>> C = Cookie.Cookie()
>>> C = cookies.Cookie()
>>> print(C.__class__.__name__)
SmartCookie
......@@ -721,8 +721,8 @@ Cookie = SmartCookie
###########################################################
def _test():
import doctest, Cookie
return doctest.testmod(Cookie)
import doctest, http.cookies
return doctest.testmod(http.cookies)
if __name__ == "__main__":
_test()
......
......@@ -1002,9 +1002,9 @@ class HTTPHandler(logging.Handler):
Send the record to the Web server as an URL-encoded dictionary
"""
try:
import httplib, urllib
import http.client, urllib
host = self.host
h = httplib.HTTP(host)
h = http.client.HTTP(host)
url = self.url
data = urllib.urlencode(self.mapLogRecord(record))
if self.method == "GET":
......
......@@ -1921,7 +1921,7 @@ def apropos(key):
# --------------------------------------------------- web browser interface
def serve(port, callback=None, completer=None):
import BaseHTTPServer, mimetools, select
import http.server, mimetools, select
# Patch up mimetools.Message so it doesn't break if rfc822 is reloaded.
class Message(mimetools.Message):
......@@ -1933,7 +1933,7 @@ def serve(port, callback=None, completer=None):
self.parsetype()
self.parseplist()
class DocHandler(BaseHTTPServer.BaseHTTPRequestHandler):
class DocHandler(http.server.BaseHTTPRequestHandler):
def send_document(self, title, contents):
try:
self.send_response(200)
......@@ -1978,7 +1978,7 @@ pydoc</strong> by Ka-Ping Yee &lt;ping@lfw.org&gt;</font>'''
def log_message(self, *args): pass
class DocServer(BaseHTTPServer.HTTPServer):
class DocServer(http.server.HTTPServer):
def __init__(self, port, callback):
host = (sys.platform == 'mac') and '127.0.0.1' or 'localhost'
self.address = ('', port)
......@@ -1997,7 +1997,7 @@ pydoc</strong> by Ka-Ping Yee &lt;ping@lfw.org&gt;</font>'''
self.base.server_activate(self)
if self.callback: self.callback(self)
DocServer.base = BaseHTTPServer.HTTPServer
DocServer.base = http.server.HTTPServer
DocServer.handler = DocHandler
DocHandler.MessageClass = Message
try:
......
......@@ -4,11 +4,11 @@ We don't want to require the 'network' resource.
"""
import os, unittest
from SimpleHTTPServer import SimpleHTTPRequestHandler
from http.server import SimpleHTTPRequestHandler
from test import support
class SocketlessRequestHandler (SimpleHTTPRequestHandler):
class SocketlessRequestHandler(SimpleHTTPRequestHandler):
def __init__(self):
pass
......
......@@ -33,12 +33,10 @@ class AllTest(unittest.TestCase):
# than an AttributeError somewhere deep in CGIHTTPServer.
import _socket
self.check_all("BaseHTTPServer")
self.check_all("CGIHTTPServer")
self.check_all("http.server")
self.check_all("configparser")
self.check_all("Cookie")
self.check_all("Queue")
self.check_all("SimpleHTTPServer")
self.check_all("http.cookies")
self.check_all("queue")
self.check_all("socketserver")
self.check_all("aifc")
self.check_all("base64")
......@@ -77,7 +75,7 @@ class AllTest(unittest.TestCase):
self.check_all("gzip")
self.check_all("heapq")
self.check_all("htmllib")
self.check_all("httplib")
self.check_all("http.client")
self.check_all("ihooks")
self.check_all("imaplib")
self.check_all("imghdr")
......
from xmlrpc.server import DocXMLRPCServer
import httplib
import http.client
from test import support
import threading
import time
......@@ -65,7 +65,7 @@ class DocXMLRPCHTTPGETServer(unittest.TestCase):
time.sleep(0.001)
n -= 1
self.client = httplib.HTTPConnection("localhost:%d" % PORT)
self.client = http.client.HTTPConnection("localhost:%d" % PORT)
def tearDown(self):
self.client.close()
......
# Simple test suite for Cookie.py
# Simple test suite for http/cookies.py
from test.support import run_unittest, run_doctest
import unittest
import Cookie
from http import cookies
import warnings
warnings.filterwarnings("ignore",
......@@ -34,7 +34,7 @@ class CookieTests(unittest.TestCase):
]
for case in cases:
C = Cookie.SimpleCookie()
C = cookies.SimpleCookie()
C.load(case['data'])
self.assertEqual(repr(C), case['repr'])
self.assertEqual(C.output(sep='\n'), case['output'])
......@@ -42,7 +42,7 @@ class CookieTests(unittest.TestCase):
self.assertEqual(C[k].value, v)
def test_load(self):
C = Cookie.SimpleCookie()
C = cookies.SimpleCookie()
C.load('Customer="WILE_E_COYOTE"; Version=1; Path=/acme')
self.assertEqual(C['Customer'].value, 'WILE_E_COYOTE')
......@@ -68,7 +68,7 @@ class CookieTests(unittest.TestCase):
def test_quoted_meta(self):
# Try cookie with quoted meta-data
C = Cookie.SimpleCookie()
C = cookies.SimpleCookie()
C.load('Customer="WILE_E_COYOTE"; Version="1"; Path="/acme"')
self.assertEqual(C['Customer'].value, 'WILE_E_COYOTE')
self.assertEqual(C['Customer']['version'], '1')
......@@ -76,7 +76,7 @@ class CookieTests(unittest.TestCase):
def test_main():
run_unittest(CookieTests)
run_doctest(Cookie)
run_doctest(cookies)
if __name__ == '__main__':
test_main()
import httplib
import http.client as httplib
import io
import socket
......@@ -48,8 +48,6 @@ class HeaderTests(TestCase):
# Some headers are added automatically, but should not be added by
# .request() if they are explicitly set.
import httplib
class HeaderCountingBuffer(list):
def __init__(self):
self.count = {}
......
......@@ -4,16 +4,15 @@ Written by Cody A.W. Somerville <cody-somerville@ubuntu.com>,
Josip Dzolonga, and Michael Otteneder for the 2007/08 GHOP contest.
"""
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from CGIHTTPServer import CGIHTTPRequestHandler
from http.server import BaseHTTPRequestHandler, HTTPServer, \
SimpleHTTPRequestHandler, CGIHTTPRequestHandler
import os
import sys
import base64
import shutil
import urllib
import httplib
import http.client
import tempfile
import threading
......@@ -59,7 +58,7 @@ class BaseTestCase(unittest.TestCase):
self.thread.stop()
def request(self, uri, method='GET', body=None, headers={}):
self.connection = httplib.HTTPConnection('localhost', self.PORT)
self.connection = http.client.HTTPConnection('localhost', self.PORT)
self.connection.request(method, uri, body, headers)
return self.connection.getresponse()
......@@ -92,7 +91,7 @@ class BaseHTTPServerTestCase(BaseTestCase):
def setUp(self):
BaseTestCase.setUp(self)
self.con = httplib.HTTPConnection('localhost', self.PORT)
self.con = http.client.HTTPConnection('localhost', self.PORT)
self.con.connect()
def test_command(self):
......@@ -343,7 +342,7 @@ class CGIHTTPServerTestCase(BaseTestCase):
def test_main(verbose=None):
try:
cwd = os.getcwd()
support.run_unittest(#BaseHTTPServerTestCase,
support.run_unittest(BaseHTTPServerTestCase,
SimpleHTTPServerTestCase,
CGIHTTPServerTestCase
)
......
......@@ -168,7 +168,6 @@ class PyclbrTest(TestCase):
'getproxies_internetconfig',)) # not on all platforms
cm('pickle')
cm('aifc', ignore=('openfp',)) # set with = in module
cm('Cookie', ignore=('Cookie',)) # Cookie is an alias for SmartCookie
cm('sre_parse', ignore=('dump',)) # from sre_constants import *
cm('pdb')
cm('pydoc')
......
......@@ -3,7 +3,7 @@ import shelve
import glob
from test import support
from collections import MutableMapping
from test.test_anydbm import dbm_iterator
from test.test_dbm import dbm_iterator
def L1(s):
return s.decode("latin-1")
......
......@@ -855,7 +855,7 @@ class UnbufferedFileObjectClassTestCase(FileObjectClassTestCase):
In this case (and in this case only), it should be possible to
create a file object, read a line from it, create another file
object, read another line from it, without loss of data in the
first file object's buffer. Note that httplib relies on this
first file object's buffer. Note that http.client relies on this
when reading multiple requests from the same socket."""
bufsize = 0 # Use unbuffered mode
......
......@@ -15,8 +15,7 @@ import shutil
import traceback
import asyncore
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from http.server import HTTPServer, SimpleHTTPRequestHandler
# Optionally test SSL support, if we have it in the tested platform
skip_expected = False
......
......@@ -8,7 +8,6 @@ import warnings
class TestUntestedModules(unittest.TestCase):
def test_at_least_import_untested_modules(self):
with support.catch_warning():
import CGIHTTPServer
import aifc
import bdb
import cgitb
......
"""Regresssion tests for urllib"""
import urllib
import httplib
import http.client
import io
import unittest
from test import support
......@@ -107,14 +107,14 @@ class urlopen_HttpTests(unittest.TestCase):
def readline(self, length=None):
if self.closed: return b""
return io.BytesIO.readline(self, length)
class FakeHTTPConnection(httplib.HTTPConnection):
class FakeHTTPConnection(http.client.HTTPConnection):
def connect(self):
self.sock = FakeSocket(fakedata)
self._connection_class = httplib.HTTPConnection
httplib.HTTPConnection = FakeHTTPConnection
self._connection_class = http.client.HTTPConnection
http.client.HTTPConnection = FakeHTTPConnection
def unfakehttp(self):
httplib.HTTPConnection = self._connection_class
http.client.HTTPConnection = self._connection_class
def test_read(self):
self.fakehttp(b"Hello!")
......
......@@ -77,7 +77,7 @@ def test_request_headers_methods():
Note the case normalization of header names here, to .capitalize()-case.
This should be preserved for backwards-compatibility. (In the HTTP case,
normalization to .title()-case is done by urllib2 before sending headers to
httplib).
http.client).
>>> url = "http://example.com"
>>> r = Request(url, headers={"Spam-eggs": "blah"})
......@@ -348,12 +348,12 @@ class MockHTTPHandler(urllib2.BaseHandler):
self._count = 0
self.requests = []
def http_open(self, req):
import mimetools, httplib, copy
import mimetools, http.client, copy
from io import StringIO
self.requests.append(copy.deepcopy(req))
if self._count == 0:
self._count = self._count + 1
name = httplib.responses[self.code]
name = http.client.responses[self.code]
msg = mimetools.Message(StringIO(self.headers))
return self.parent.error(
"http", req, MockFile(), self.code, name, msg)
......@@ -875,9 +875,8 @@ class HandlerTests(unittest.TestCase):
def test_cookie_redirect(self):
# cookies shouldn't leak into redirected requests
from cookielib import CookieJar
from test.test_cookielib import interact_netscape
from http.cookiejar import CookieJar
from test.test_http_cookiejar import interact_netscape
cj = CookieJar()
interact_netscape(cj, "http://www.example.com/", "spam=eggs")
......
......@@ -4,20 +4,20 @@ import mimetools
import threading
import urlparse
import urllib2
import BaseHTTPServer
import http.server
import unittest
import hashlib
from test import support
# Loopback http server infrastructure
class LoopbackHttpServer(BaseHTTPServer.HTTPServer):
class LoopbackHttpServer(http.server.HTTPServer):
"""HTTP server w/ a few modifications that make it useful for
loopback testing purposes.
"""
def __init__(self, server_address, RequestHandlerClass):
BaseHTTPServer.HTTPServer.__init__(self,
http.server.HTTPServer.__init__(self,
server_address,
RequestHandlerClass)
......@@ -26,7 +26,7 @@ class LoopbackHttpServer(BaseHTTPServer.HTTPServer):
self.socket.settimeout(1.0)
def get_request(self):
"""BaseHTTPServer method, overridden."""
"""HTTPServer method, overridden."""
request, client_address = self.socket.accept()
......@@ -188,7 +188,7 @@ class DigestAuthHandler:
# Proxy test infrastructure
class FakeProxyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
class FakeProxyHandler(http.server.BaseHTTPRequestHandler):
"""This is a 'fake proxy' that makes it look like the entire
internet has gone down due to a sudden zombie invasion. Its main
utility is in providing us with authentication support for
......@@ -283,7 +283,7 @@ class ProxyAuthTests(unittest.TestCase):
def GetRequestHandler(responses):
class FakeHTTPRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
class FakeHTTPRequestHandler(http.server.BaseHTTPRequestHandler):
server_version = "TestHTTP/"
requests = []
......
......@@ -33,7 +33,7 @@ class AuthTests(unittest.TestCase):
## could be used to HTTP authentication.
#
# def test_basic_auth(self):
# import httplib
# import http.client
#
# test_url = "http://www.python.org/test/test_urllib2/basic_auth"
# test_hostport = "www.python.org"
......@@ -61,14 +61,14 @@ class AuthTests(unittest.TestCase):
# # reasons, let's not implement it! (it's already implemented for proxy
# # specification strings (that is, URLs or authorities specifying a
# # proxy), so we must keep that)
# self.assertRaises(httplib.InvalidURL,
# self.assertRaises(http.client.InvalidURL,
# urllib2.urlopen, "http://evil:thing@example.com")
class CloseSocketTest(unittest.TestCase):
def test_close(self):
import socket, httplib, gc
import socket, http.client, gc
# calling .close() on urllib2's response objects should close the
# underlying socket
......@@ -77,7 +77,7 @@ class CloseSocketTest(unittest.TestCase):
response = _urlopen_with_retry("http://www.python.org/")
abused_fileobject = response.fp
httpresponse = abused_fileobject.raw
self.assert_(httpresponse.__class__ is httplib.HTTPResponse)
self.assert_(httpresponse.__class__ is http.client.HTTPResponse)
fileobject = httpresponse.fp
self.assert_(not fileobject.closed)
......
......@@ -7,7 +7,7 @@ import xmlrpc.client as xmlrpclib
import xmlrpc.server
import threading
import mimetools
import httplib
import http.client
import socket
import os
from test import support
......@@ -340,9 +340,9 @@ class SimpleServerTestCase(unittest.TestCase):
# [ch] The test 404 is causing lots of false alarms.
def XXXtest_404(self):
# send POST with httplib, it should return 404 header and
# send POST with http.client, it should return 404 header and
# 'Not Found' message.
conn = httplib.HTTPConnection('localhost', PORT)
conn = http.client.HTTPConnection('localhost', PORT)
conn.request('POST', '/this-is-not-valid')
response = conn.getresponse()
conn.close()
......
......@@ -6,7 +6,7 @@ import sys
import unittest
from test import support
import xmlrpclib.client as xmlrpclib
import xmlrpc.client as xmlrpclib
class CurrentTimeTest(unittest.TestCase):
......
......@@ -22,7 +22,7 @@ used to query various info about the object, if available.
(mimetools.Message objects are queried with the getheader() method.)
"""
import httplib
import http.client
import os
import socket
import sys
......@@ -352,7 +352,7 @@ class URLopener:
try:
response = http_conn.getresponse()
except httplib.BadStatusLine:
except http.client.BadStatusLine:
# something went wrong with the HTTP status line
raise IOError('http protocol error', 0,
'got a bad status line', None)
......@@ -369,7 +369,7 @@ class URLopener:
def open_http(self, url, data=None):
"""Use HTTP protocol."""
return self._open_generic_http(httplib.HTTPConnection, url, data)
return self._open_generic_http(http.client.HTTPConnection, url, data)
def http_error(self, url, fp, errcode, errmsg, headers, data=None):
"""Handle http errors.
......@@ -395,7 +395,7 @@ class URLopener:
if _have_ssl:
def _https_connection(self, host):
return httplib.HTTPSConnection(host,
return http.client.HTTPSConnection(host,
key_file=self.key_file,
cert_file=self.cert_file)
......
......@@ -89,7 +89,7 @@ f = urllib2.urlopen('http://www.python.org/')
import base64
import hashlib
import httplib
import http.client
import io
import mimetools
import os
......@@ -441,7 +441,7 @@ def build_opener(*handlers):
default_classes = [ProxyHandler, UnknownHandler, HTTPHandler,
HTTPDefaultErrorHandler, HTTPRedirectHandler,
FTPHandler, FileHandler, HTTPErrorProcessor]
if hasattr(httplib, 'HTTPS'):
if hasattr(http.client, 'HTTPS'):
default_classes.append(HTTPSHandler)
skip = set()
for klass in default_classes:
......@@ -1047,7 +1047,7 @@ class AbstractHTTPHandler(BaseHandler):
def do_open(self, http_class, req):
"""Return an addinfourl object for the request, using http_class.
http_class must implement the HTTPConnection API from httplib.
http_class must implement the HTTPConnection API from http.client.
The addinfourl return value is a file-like object. It also
has methods and attributes including:
- info(): return a mimetools.Message object for the headers
......@@ -1082,7 +1082,7 @@ class AbstractHTTPHandler(BaseHandler):
# object initialized properly.
# XXX Should an HTTPResponse object really be passed to
# BufferedReader? If so, we should change httplib to support
# BufferedReader? If so, we should change http.client to support
# this use directly.
# Add some fake methods to the reader to satisfy BufferedReader.
......@@ -1101,23 +1101,23 @@ class AbstractHTTPHandler(BaseHandler):
class HTTPHandler(AbstractHTTPHandler):
def http_open(self, req):
return self.do_open(httplib.HTTPConnection, req)
return self.do_open(http.client.HTTPConnection, req)
http_request = AbstractHTTPHandler.do_request_
if hasattr(httplib, 'HTTPS'):
if hasattr(http.client, 'HTTPS'):
class HTTPSHandler(AbstractHTTPHandler):
def https_open(self, req):
return self.do_open(httplib.HTTPSConnection, req)
return self.do_open(http.client.HTTPSConnection, req)
https_request = AbstractHTTPHandler.do_request_
class HTTPCookieProcessor(BaseHandler):
def __init__(self, cookiejar=None):
import cookielib
import http.cookiejar
if cookiejar is None:
cookiejar = cookielib.CookieJar()
cookiejar = http.cookiejar.CookieJar()
self.cookiejar = cookiejar
def http_request(self, request):
......
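A rough usage sketch of the swapped-in http.cookiejar module together with the opener machinery, assuming the transitional tree above where urllib2 is still importable; the URL is a placeholder ::

    import http.cookiejar
    import urllib2   # still present in this tree

    jar = http.cookiejar.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
    response = opener.open('http://www.example.com/')
    # any cookies the server set are now held in the jar
    for cookie in jar:
        print(cookie.name, cookie.value)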
......@@ -10,7 +10,7 @@ For example usage, see the 'if __name__=="__main__"' block at the end of the
module. See also the BaseHTTPServer module docs for other API information.
"""
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib, sys
from wsgiref.handlers import SimpleHandler
......
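The wsgiref demo server now sits on top of the renamed http.server classes; a minimal sketch of the single-request demo pattern its docstring refers to, with a placeholder port and assuming the eventual Python 3 convention that the response body is bytes ::

    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response('200 OK', [('Content-type', 'text/plain')])
        return [b'Hello from http.server via wsgiref\n']

    httpd = make_server('', 8000, app)
    httpd.handle_request()   # serve a single request, then return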
......@@ -135,7 +135,7 @@ Exported functions:
"""
import re, time, operator
import httplib
import http.client
# --------------------------------------------------------------------
# Internal stuff
......@@ -1196,7 +1196,7 @@ class Transport:
def send_request(self, host, handler, request_body, debug):
host, extra_headers, x509 = self.get_host_info(host)
connection = httplib.HTTPConnection(host)
connection = http.client.HTTPConnection(host)
if debug:
connection.set_debuglevel(1)
headers = {}
......@@ -1261,10 +1261,10 @@ class SafeTransport(Transport):
import socket
if not hasattr(socket, "ssl"):
raise NotImplementedError(
"your version of httplib doesn't support HTTPS")
"your version of http.client doesn't support HTTPS")
host, extra_headers, x509 = self.get_host_info(host)
connection = httplib.HTTPSConnection(host, None, **(x509 or {}))
connection = http.client.HTTPSConnection(host, None, **(x509 or {}))
if debug:
connection.set_debuglevel(1)
headers = {}
......
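From the caller's side, the Transport/SafeTransport split is chosen by the URL scheme; a hedged sketch with placeholder addresses (constructing the proxies does not open a connection) ::

    import xmlrpc.client

    # http:// goes through Transport and http.client.HTTPConnection ...
    proxy = xmlrpc.client.ServerProxy('http://localhost:8000/')

    # ... while https:// selects SafeTransport, which additionally requires
    # an interpreter built with SSL support.
    secure = xmlrpc.client.ServerProxy('https://localhost:8443/')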
......@@ -105,8 +105,9 @@ server.handle_request()
# Based on code written by Fredrik Lundh.
from xmlrpc.client import Fault, dumps, loads
from http.server import BaseHTTPRequestHandler
import http.server
import socketserver
import BaseHTTPServer
import sys
import os
import re
......@@ -408,7 +409,7 @@ class SimpleXMLRPCDispatcher:
else:
raise Exception('method "%s" is not supported' % method)
class SimpleXMLRPCRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
class SimpleXMLRPCRequestHandler(BaseHTTPRequestHandler):
"""Simple XML-RPC request handler class.
Handles all HTTP POST requests and attempts to decode them as
......@@ -500,7 +501,7 @@ class SimpleXMLRPCRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
"""Selectively log an accepted request."""
if self.server.logRequests:
BaseHTTPServer.BaseHTTPRequestHandler.log_request(self, code, size)
BaseHTTPRequestHandler.log_request(self, code, size)
class SimpleXMLRPCServer(socketserver.TCPServer,
SimpleXMLRPCDispatcher):
......@@ -560,10 +561,9 @@ class CGIXMLRPCRequestHandler(SimpleXMLRPCDispatcher):
"""
code = 400
message, explain = \
BaseHTTPServer.BaseHTTPRequestHandler.responses[code]
message, explain = BaseHTTPRequestHandler.responses[code]
response = BaseHTTPServer.DEFAULT_ERROR_MESSAGE % \
response = http.server.DEFAULT_ERROR_MESSAGE % \
{
'code' : code,
'message' : message,
......
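On the server side the handler class now derives directly from http.server's BaseHTTPRequestHandler; a minimal sketch assuming the module is still importable as SimpleXMLRPCServer in this tree, with a placeholder port ::

    from SimpleXMLRPCServer import SimpleXMLRPCServer

    def add(x, y):
        return x + y

    server = SimpleXMLRPCServer(('localhost', 8000), logRequests=True)
    server.register_function(add)
    server.handle_request()   # handle one XML-RPC call, then return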
......@@ -1795,14 +1795,10 @@ List of modules and packages in base distribution
Standard library modules
Operation Result
aifc Stuff to parse AIFF-C and AIFF files.
dbm Generic interface to all dbm clones. (dbm.bsd, dbm.gnu,
dbm.ndbm, dbm.dumb)
asynchat Support for 'chat' style protocols
asyncore Asynchronous File I/O (in select style)
atexit Register functions to be called at exit of Python interpreter.
base64 Conversions to/from base64 RFC-MIME transport encoding.
BaseHTTPServer Base class for http services.
Bastion "Bastionification" utility (control access to instance vars)
bdb A generic Python debugger base class.
binhex Macintosh binhex compression/decompression.
bisect List bisection algorithms.
......@@ -1810,7 +1806,6 @@ bz2 Support for bz2 compression/decompression.
calendar Calendar printing functions.
cgi Wraps the WWW Forms Common Gateway Interface (CGI).
cgitb Utility for handling CGI tracebacks.
CGIHTTPServer CGI http services.
cmd A generic class to build line-oriented command interpreters.
datetime Basic date and time types.
code Utilities needed to emulate Python's interactive interpreter
......@@ -1818,10 +1813,12 @@ codecs Lookup existing Unicode encodings and register new ones.
colorsys Conversion functions between RGB and other color systems.
commands Tools for executing UNIX commands.
compileall Force "compilation" of all .py files in a directory.
ConfigParser Configuration file parser (much like windows .ini files)
configparser Configuration file parser (much like windows .ini files)
copy Generic shallow and deep copying operations.
copy_reg Helper to provide extensibility for pickle/cPickle.
copyreg Helper to provide extensibility for pickle/cPickle.
csv Read and write files with comma separated values.
dbm Generic interface to all dbm clones (dbm.bsd, dbm.gnu,
dbm.ndbm, dbm.dumb).
dircache Sorted list of files in a dir, using a cache.
difflib Tool for creating delta between sequences.
dis Bytecode disassembler.
......@@ -1844,11 +1841,11 @@ getpass Utilities to get a password and/or the current user name.
glob filename globbing.
gzip Read & write gzipped files.
heapq Priority queue implemented using lists organized as heaps.
HMAC Keyed-Hashing for Message Authentication -- RFC 2104.
htmlentitydefs Proposed entity definitions for HTML.
htmllib HTML parsing utilities.
HTMLParser A parser for HTML and XHTML.
httplib HTTP client class.
hmac Keyed-Hashing for Message Authentication -- RFC 2104.
html.entities HTML entity definitions.
html.parser A parser for HTML and XHTML.
http.client HTTP client class.
http.server HTTP server services.
ihooks Hooks into the "import" mechanism.
imaplib IMAP4 client. Based on RFC 2060.
imghdr Recognizing image files based on their first few bytes.
......@@ -1864,7 +1861,6 @@ macurl2path Mac specific module for conversion between pathnames and URLs.
mailbox A class to handle a unix-style or mmdf-style mailbox.
mailcap Mailcap file handling (RFC 1524).
mhlib MH (mailbox) interface.
mimetools Various tools used by MIME-reading or MIME-writing programs.
mimetypes Guess the MIME type of a file.
mmap Interface to memory-mapped files - they behave like mutable
strings.
......@@ -1892,13 +1888,11 @@ pty Pseudo terminal utilities.
pyexpat Interface to the Expat XML parser.
py_compile Routine to "compile" a .py file to a .pyc file.
pyclbr Parse a Python file and retrieve classes and methods.
Queue A multi-producer, multi-consumer queue.
queue A multi-producer, multi-consumer queue.
quopri Conversions to/from quoted-printable transport encoding.
random Random variable generators
re Regular Expressions.
repr Redo repr() but with limits on most sizes.
rexec Restricted execution facilities ("safe" exec, eval, etc).
rfc822 RFC-822 message manipulation class.
reprlib Redo repr() but with limits on most sizes.
rlcompleter Word completion for GNU readline 2.0.
robotparser Parse robots.txt files, useful for web spiders.
sched A generally useful event scheduler class.
......@@ -1906,7 +1900,6 @@ sgmllib A parser for SGML.
shelve Manage shelves of pickled objects.
shlex Lexical analyzer class for simple shell-like syntaxes.
shutil Utility functions usable in a shell-like program.
SimpleHTTPServer Simple extension to base http class
site Append module search paths for third-party packages to
sys.path.
smtplib SMTP Client class (RFC 821)
......@@ -1916,8 +1909,6 @@ stat Constants and functions for interpreting stat/lstat struct.
statvfs Constants for interpreting statvfs struct as returned by
os.statvfs() and os.fstatvfs() (if they exist).
string A collection of string operations.
StringIO File-like objects that read/write a string buffer (a faster C
implementation exists in built-in module: cStringIO).
sunau Stuff to parse Sun and NeXT audio files.
sunaudio Interpret sun audio headers.
symbol Non-terminal symbols of Python grammar (from "graminit.h").
......@@ -1927,7 +1918,6 @@ telnetlib TELNET client class. Based on RFC 854.
tempfile Temporary file name allocation.
textwrap Object for wrapping and filling text.
threading Proposed new higher-level threading interfaces
threading_api (doc of the threading module)
token Tokens (from "token.h").
tokenize Compiles a regular expression that recognizes Python tokens.
traceback Format and print Python stack traces.
......@@ -1939,17 +1929,13 @@ unicodedata Interface to unicode properties.
urllib Open an arbitrary URL.
urlparse Parse URLs according to latest draft of standard.
user Hook to allow user-specified customization code to run.
UserDict A wrapper to allow subclassing of built-in dict class.
UserList A wrapper to allow subclassing of built-in list class.
UserString A wrapper to allow subclassing of built-in string class.
uu UUencode/UUdecode.
unittest Utilities for implementing unit testing.
wave Stuff to parse WAVE files.
weakref Tools for creating and managing weakly referenced objects.
webbrowser Platform independent URL launcher.
xdrlib Implements (a subset of) Sun XDR (eXternal Data
Representation)
xmllib A parser for XML, using the derived class as static DTD.
Representation).
xml.dom Classes for processing XML using the Document Object Model.
xml.sax Classes for processing XML using the SAX API.
xmlrpc.client Support for remote procedure calls using XML.
......
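As an import-level sanity check, a short sketch covering a few of the renames listed in the table above (new names only; nothing beyond the listed modules is assumed) ::

    import configparser    # was ConfigParser
    import copyreg         # was copy_reg
    import queue           # was Queue
    import reprlib         # was repr
    import http.client     # was httplib
    import http.server     # was BaseHTTPServer / SimpleHTTPServer / CGIHTTPServer
    import html.parser     # was HTMLParser
    import html.entities   # was htmlentitydefs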