Commit a0fcb938 authored by Lars Gustäbel

Added an errors argument to the TarFile class that allows the user to
specify an error handling scheme for character conversion. An additional
scheme "utf-8" is available in read mode. Unicode input filenames are now
supported by design. The values of the pax_headers dictionary are now
limited to unicode objects.

Fixed: The prefix field is no longer used in PAX_FORMAT (in
conformance with POSIX).
Fixed: In read mode, use the size field from a pax header if one is present.
Fixed: Strip trailing slashes from pax header name values.
Fixed: Give values in user-specified pax_headers precedence when
writing.

Added unicode tests. Added a pax/regtype4 member to testtar.tar that covers
all possible number fields in a pax header.

Added two chapters to the documentation about the different formats
tarfile.py supports and how unicode issues are handled.
parent 0ac60199
...@@ -133,24 +133,20 @@ Some facts and figures:
\versionadded{2.6}
\end{excdesc}

Each of the following constants defines a tar archive format that the
\module{tarfile} module is able to create. See section \ref{tar-formats} for
details.

\begin{datadesc}{USTAR_FORMAT}
\POSIX{}.1-1988 (ustar) format.
\end{datadesc}

\begin{datadesc}{GNU_FORMAT}
GNU tar format.
\end{datadesc}

\begin{datadesc}{PAX_FORMAT}
\POSIX{}.1-2001 (pax) format.
\end{datadesc}

\begin{datadesc}{DEFAULT_FORMAT}
...@@ -175,15 +171,15 @@ Some facts and figures:
The \class{TarFile} object provides an interface to a tar archive. A tar
archive is a sequence of blocks. An archive member (a stored file) is made up
of a header block followed by data blocks. It is possible to store a file in a
tar archive several times. Each archive member is represented by a
\class{TarInfo} object, see \citetitle{TarInfo Objects} (section
\ref{tarinfo-objects}) for details.

\begin{classdesc}{TarFile}{name=None, mode='r', fileobj=None,
format=DEFAULT_FORMAT, tarinfo=TarInfo, dereference=False,
ignore_zeros=False, encoding=None, errors=None, pax_headers=None,
debug=0, errorlevel=0}
All following arguments are optional and can be accessed as instance
attributes as well.
...@@ -231,18 +227,14 @@ tar archive several times. Each archive member is represented by a
If \code{2}, all \emph{non-fatal} errors are raised as \exception{TarError}
exceptions as well.

The \var{encoding} and \var{errors} arguments control the way strings are
converted to unicode objects and vice versa. The default settings will work
for most users. See section \ref{tar-unicode} for in-depth information.
\versionadded{2.6}

The \var{pax_headers} argument is an optional dictionary of unicode strings
which will be added as a pax global header if \var{format} is
\constant{PAX_FORMAT}.
\versionadded{2.6}
\end{classdesc}
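As a rough sketch of how these arguments fit together (the archive and member
names below are only placeholders):

\begin{verbatim}
import tarfile

# Write a pax archive with a user-defined global header.
# The pax_headers values must be unicode objects.
tar = tarfile.open("sample.tar", "w", format=tarfile.PAX_FORMAT,
                   pax_headers={u"comment": u"created with tarfile"})
tar.addfile(tarfile.TarInfo("empty.txt"))
tar.close()

# Read it back; in read mode errors defaults to 'utf-8'.
tar = tarfile.open("sample.tar", encoding="iso8859-1")
print tar.pax_headers
tar.close()
\end{verbatim}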
...@@ -287,7 +279,7 @@ tar archive several times. Each archive member is represented by a
Extract all members from the archive to the current working directory
or directory \var{path}. If optional \var{members} is given, it must be
a subset of the list returned by \method{getmembers()}.
Directory information like owner, modification time and permissions are
set after all members have been extracted. This is done to work around two
problems: A directory's modification time is reset each time a file is
created in it. And, if a directory's permissions do not allow writing,
...@@ -365,6 +357,11 @@ tar archive several times. Each archive member is represented by a
\deprecated{2.6}{Use the \member{format} attribute instead.}
\end{memberdesc}
\begin{memberdesc}{pax_headers}
A dictionary containing key-value pairs of pax global headers.
\versionadded{2.6}
\end{memberdesc}
%-----------------
% TarInfo Objects
%-----------------
...@@ -384,8 +381,8 @@ the file's data itself.
Create a \class{TarInfo} object.
\end{classdesc}

\begin{methoddesc}{frombuf}{buf}
Create and return a \class{TarInfo} object from string buffer \var{buf}.
\versionadded[Raises \exception{HeaderError} if the buffer is
invalid.]{2.6}
\end{methoddesc}
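A small sketch of reading the first header block of an archive this way (the
archive name is a placeholder):

\begin{verbatim}
import tarfile

fobj = open("sample.tar", "rb")
buf = fobj.read(512)                    # one raw header block
tarinfo = tarfile.TarInfo.frombuf(buf)
print tarinfo.name, tarinfo.size
fobj.close()
\end{verbatim}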
...@@ -396,10 +393,11 @@ the file's data itself.
\versionadded{2.6}
\end{methoddesc}

\begin{methoddesc}{tobuf}{\optional{format\optional{, encoding
\optional{, errors}}}}
Create a string buffer from a \class{TarInfo} object. For information
on the arguments see the constructor of the \class{TarFile} class.
\versionchanged[The arguments were added]{2.6}
\end{methoddesc}
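For example, a header created with \method{tobuf()} always consists of whole
512 byte blocks (assuming the default settings):

\begin{verbatim}
import tarfile

t = tarfile.TarInfo("example.txt")
buf = t.tobuf(tarfile.USTAR_FORMAT, "utf-8", "strict")
print len(buf)          # 512, a single ustar header block
\end{verbatim}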
A \code{TarInfo} object has the following public data attributes:
...@@ -452,6 +450,12 @@ A \code{TarInfo} object has the following public data attributes:
Group name.
\end{memberdesc}
\begin{memberdesc}{pax_headers}
A dictionary containing key-value pairs of an associated pax
extended header.
\versionadded{2.6}
\end{memberdesc}
A \class{TarInfo} object also provides some convenient query methods:

\begin{methoddesc}{isfile}{}
...@@ -554,3 +558,103 @@ for tarinfo in tar:
        tar.extract(tarinfo)
tar.close()
\end{verbatim}
%------------
% Tar format
%------------
\subsection{Supported tar formats \label{tar-formats}}
There are three tar formats that can be created with the \module{tarfile}
module:
\begin{itemize}
\item
The \POSIX{}.1-1988 ustar format (\constant{USTAR_FORMAT}). It supports
filenames up to a length of at best 256 characters and linknames up to 100
characters. The maximum file size is 8 gigabytes. This is an old and limited
but widely supported format.
\item
The GNU tar format (\constant{GNU_FORMAT}). It supports long filenames and
linknames, files bigger than 8 gigabytes and sparse files. It is the de facto
standard on GNU/Linux systems. \module{tarfile} fully supports the GNU tar
extensions for long names; sparse file support is read-only.
\item
The \POSIX{}.1-2001 pax format (\constant{PAX_FORMAT}). It is the most
flexible format with virtually no limits. It supports long filenames and
linknames, large files and stores pathnames in a portable way. However, not
all tar implementations today are able to handle pax archives properly.
The \emph{pax} format is an extension to the existing \emph{ustar} format. It
uses extra headers for information that cannot be stored otherwise. There are
two flavours of pax headers: extended headers only affect the subsequent file
header; global headers are valid for the complete archive and affect all
following files. All the data in a pax header is encoded in \emph{UTF-8} for
portability reasons.
\end{itemize}
There are some more variants of the tar format which can be read, but not
created:
\begin{itemize}
\item
The ancient V7 format. This is the first tar format from \UNIX{} Seventh
Edition, storing only regular files and directories. Names must not be longer
than 100 characters, and there is no user/group name information. Some archives
have miscalculated header checksums in case of fields with non-\ASCII{}
characters.
\item
The SunOS tar extended format. This format is a variant of the \POSIX{}.1-2001
pax format, but is not compatible.
\end{itemize}
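As a short illustration, the format is chosen when a new archive is created
(the names used here are placeholders):

\begin{verbatim}
import tarfile

# The same (empty) member written in each of the three creatable formats;
# every iteration overwrites sample.tar.
for fmt in (tarfile.USTAR_FORMAT, tarfile.GNU_FORMAT, tarfile.PAX_FORMAT):
    tar = tarfile.open("sample.tar", "w", format=fmt)
    tar.addfile(tarfile.TarInfo("empty.txt"))
    tar.close()
\end{verbatim}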
%----------------
% Unicode issues
%----------------
\subsection{Unicode issues \label{tar-unicode}}
The tar format was originally conceived to make backups on tape drives with the
main focus on preserving file system information. Nowadays tar archives are
commonly used for file distribution and exchanging archives over networks. One
problem of the original format (that all other formats are merely variants of)
is that there is no concept of supporting different character encodings.
For example, an ordinary tar archive created on a \emph{UTF-8} system cannot be
read correctly on a \emph{Latin-1} system if it contains non-\ASCII{}
characters. Names (i.e. filenames, linknames, user/group names) containing
these characters will appear damaged. Unfortunately, there is no way to
autodetect the encoding of an archive.
The pax format was designed to solve this problem. It stores non-\ASCII{} names
using the universal character encoding \emph{UTF-8}. When a pax archive is
read, these \emph{UTF-8} names are converted to the encoding of the local
file system.
The details of unicode conversion are controlled by the \var{encoding} and
\var{errors} keyword arguments of the \class{TarFile} class.
The default value for \var{encoding} is the local character encoding. It is
deduced from \function{sys.getfilesystemencoding()} and
\function{sys.getdefaultencoding()}. In read mode, \var{encoding} is used
exclusively to convert unicode names from a pax archive to strings in the local
character encoding. In write mode, the use of \var{encoding} depends on the
chosen archive format. In case of \constant{PAX_FORMAT}, input names that
contain non-\ASCII{} characters need to be decoded before being stored as
\emph{UTF-8} strings. The other formats do not make use of \var{encoding}
unless unicode objects are used as input names. These are converted to
8-bit character strings before they are added to the archive.
The \var{errors} argument defines how characters that cannot be converted to
or from \var{encoding} are treated. Possible values are listed in section
\ref{codec-base-classes}. In read mode, there is an additional scheme
\code{'utf-8'} which means that bad characters are replaced by their
\emph{UTF-8} representation. This is the default scheme. In write mode the
default value for \var{errors} is \code{'strict'} to ensure that name
information is not altered unnoticed.
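A rough sketch of the effect of these arguments (the member name merely stands
in for any name with non-\ASCII{} characters):

\begin{verbatim}
import tarfile

# Write a pax archive; non-ASCII characters in the name end up as
# UTF-8 in an extended header.
tar = tarfile.open("sample.tar", "w", format=tarfile.PAX_FORMAT,
                   encoding="iso8859-1", errors="strict")
tar.addfile(tarfile.TarInfo(u"gr\xfc\xdfe.txt"))
tar.close()

# Read it back on a system with a poorer local encoding; characters
# that cannot be converted are handled according to errors.
tar = tarfile.open("sample.tar", encoding="ascii", errors="replace")
print tar.getnames()    # ['gr??e.txt']
tar.close()
\end{verbatim}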
...@@ -125,6 +125,17 @@ GNU_TYPES = (GNUTYPE_LONGNAME, GNUTYPE_LONGLINK,
PAX_FIELDS = ("path", "linkpath", "size", "mtime",
              "uid", "gid", "uname", "gname")
# Fields in a pax header that are numbers, all other fields
# are treated as strings.
PAX_NUMBER_FIELDS = {
"atime": float,
"ctime": float,
"mtime": float,
"uid": int,
"gid": int,
"size": int
}
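
# A rough sketch (illustration only, not part of the module) of how these
# converters are applied to the raw text values parsed from a pax header:
#
#     >>> PAX_NUMBER_FIELDS["size"](u"7011")
#     7011
#     >>> PAX_NUMBER_FIELDS["mtime"](u"1041808783.0")
#     1041808783.0
#     >>> "path" in PAX_NUMBER_FIELDS    # non-number fields stay text
#     False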
#---------------------------------------------------------
# Bits used in the mode field, values in octal.
#---------------------------------------------------------
...@@ -154,7 +165,7 @@ TOEXEC = 0001 # execute/search by other
#---------------------------------------------------------
ENCODING = sys.getfilesystemencoding()
if ENCODING is None:
    ENCODING = sys.getdefaultencoding()
#---------------------------------------------------------
# Some useful functions
...@@ -218,6 +229,26 @@ def itn(n, digits=8, format=DEFAULT_FORMAT):
        s = chr(0200) + s
    return s
def uts(s, encoding, errors):
"""Convert a unicode object to a string.
"""
if errors == "utf-8":
# An extra error handler similar to the -o invalid=UTF-8 option
# in POSIX.1-2001. Replace untranslatable characters with their
# UTF-8 representation.
try:
return s.encode(encoding, "strict")
except UnicodeEncodeError:
x = []
for c in s:
try:
x.append(c.encode(encoding, "strict"))
except UnicodeEncodeError:
x.append(c.encode("utf8"))
return "".join(x)
else:
return s.encode(encoding, errors)
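
# A rough usage sketch for uts() (illustration only): with the special
# "utf-8" error handler, characters that cannot be encoded are kept in
# their UTF-8 form instead of raising UnicodeEncodeError.
#
#     >>> uts(u"gr\xfc\xdfe", "iso8859-1", "strict")
#     'gr\xfc\xdfe'
#     >>> uts(u"gr\xfc\xdfe", "ascii", "utf-8")
#     'gr\xc3\xbc\xc3\x9fe'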
def calc_chksums(buf):
    """Calculate the checksum for a member's header by summing up all
       characters except for the chksum field which is treated as if
...@@ -922,7 +953,7 @@ class TarInfo(object):
    def __repr__(self):
        return "<%s %r at %#x>" % (self.__class__.__name__,self.name,id(self))

    def get_info(self, encoding, errors):
        """Return the TarInfo's attributes as a dictionary.
        """
        info = {
...@@ -944,24 +975,29 @@ class TarInfo(object):
        if info["type"] == DIRTYPE and not info["name"].endswith("/"):
            info["name"] += "/"

        for key in ("name", "linkname", "uname", "gname"):
            if type(info[key]) is unicode:
                info[key] = info[key].encode(encoding, errors)

        return info
    def tobuf(self, format=DEFAULT_FORMAT, encoding=ENCODING, errors="strict"):
        """Return a tar header as a string of 512 byte blocks.
        """
        info = self.get_info(encoding, errors)

        if format == USTAR_FORMAT:
            return self.create_ustar_header(info)
        elif format == GNU_FORMAT:
            return self.create_gnu_header(info)
        elif format == PAX_FORMAT:
            return self.create_pax_header(info, encoding, errors)
        else:
            raise ValueError("invalid format")

    def create_ustar_header(self, info):
        """Return the object as a ustar header block.
        """
        info["magic"] = POSIX_MAGIC

        if len(info["linkname"]) > LENGTH_LINK:
...@@ -972,10 +1008,9 @@ class TarInfo(object):
        return self._create_header(info, USTAR_FORMAT)

    def create_gnu_header(self, info):
        """Return the object as a GNU header block sequence.
        """
        info["magic"] = GNU_MAGIC

        buf = ""
...@@ -987,12 +1022,11 @@ class TarInfo(object):
        return buf + self._create_header(info, GNU_FORMAT)

    def create_pax_header(self, info, encoding, errors):
        """Return the object as a ustar header block. If it cannot be
           represented this way, prepend a pax extended header sequence
           with supplement information.
        """
        info["magic"] = POSIX_MAGIC
        pax_headers = self.pax_headers.copy()
...@@ -1002,7 +1036,11 @@ class TarInfo(object):
                ("name", "path", LENGTH_NAME), ("linkname", "linkpath", LENGTH_LINK),
                ("uname", "uname", 32), ("gname", "gname", 32)):

            if hname in pax_headers:
                # The pax header has priority.
                continue

            val = info[name].decode(encoding, errors)

            # Try to encode the string as ASCII.
            try:
...@@ -1011,27 +1049,23 @@ class TarInfo(object):
                pax_headers[hname] = val
                continue

            if len(info[name]) > length:
                pax_headers[hname] = val

        # Test number fields for values that exceed the field limit or values
        # that like to be stored as float.
        for name, digits in (("uid", 8), ("gid", 8), ("size", 12), ("mtime", 12)):
            if name in pax_headers:
                # The pax header has priority. Avoid overflow.
                info[name] = 0
                continue

            val = info[name]
            if not 0 <= val < 8 ** (digits - 1) or isinstance(val, float):
                pax_headers[name] = unicode(val)
                info[name] = 0

        # Create a pax extended header if necessary.
        if pax_headers:
            buf = self._create_pax_generic_header(pax_headers)
        else:
...@@ -1040,26 +1074,10 @@ class TarInfo(object):
        return buf + self._create_header(info, USTAR_FORMAT)
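
    # A rough sketch (illustration only) of the effect of the code above: a
    # name that exceeds the 100 character ustar limit is put into a pax
    # extended header record instead of the old prefix/name split.
    #
    #     >>> t = TarInfo("x" * 150)
    #     >>> buf = t.tobuf(PAX_FORMAT)
    #     >>> len(buf)        # extended header + record block + ustar header
    #     1536
    #     >>> "path=" + "x" * 150 in buf
    #     True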
    @classmethod
    def create_pax_global_header(cls, pax_headers):
        """Return the object as a pax global header block sequence.
        """
        return cls._create_pax_generic_header(pax_headers, type=XGLTYPE)

    def _posix_split_name(self, name):
        """Split a name longer than 100 chars into a prefix
...@@ -1091,9 +1109,9 @@ class TarInfo(object):
            "        ", # checksum field
            info.get("type", REGTYPE),
            stn(info.get("linkname", ""), 100),
            stn(info.get("magic", POSIX_MAGIC), 8),
            stn(info.get("uname", "root"), 32),
            stn(info.get("gname", "root"), 32),
            itn(info.get("devmajor", 0), 8, format),
            itn(info.get("devminor", 0), 8, format),
            stn(info.get("prefix", ""), 155)
...@@ -1254,12 +1272,9 @@ class TarInfo(object):
            offset += self._block(self.size)
        tarfile.offset = offset

        # Patch the TarInfo object with saved global
        # header information.
        self._apply_pax_info(tarfile.pax_headers, tarfile.encoding, tarfile.errors)

        return self
...@@ -1270,18 +1285,17 @@ class TarInfo(object):
        buf = tarfile.fileobj.read(self._block(self.size))

        # Fetch the next header and process it.
        next = self.fromtarfile(tarfile)
        if next is None:
            raise HeaderError("missing subsequent header")

        # Patch the TarInfo object from the next header with
        # the longname information.
        next.offset = self.offset
        if self.type == GNUTYPE_LONGNAME:
            next.name = nts(buf)
        elif self.type == GNUTYPE_LONGLINK:
            next.linkname = nts(buf)

        return next
...@@ -1356,21 +1370,10 @@ class TarInfo(object):
        else:
            pax_headers = tarfile.pax_headers.copy()

        # Parse pax header information. A record looks like that:
        # "%d %s=%s\n" % (length, keyword, value). length is the size
        # of the complete record including the length field itself and
        # the newline. keyword and value are both UTF-8 encoded strings.
        regex = re.compile(r"(\d+) ([^=]+)=", re.U)
        pos = 0
        while True:
...@@ -1383,35 +1386,55 @@ class TarInfo(object):
            value = buf[match.end(2) + 1:match.start(1) + length - 1]

            keyword = keyword.decode("utf8")
            value = value.decode("utf8")

            pax_headers[keyword] = value
            pos += length
        # Fetch the next header.
        next = self.fromtarfile(tarfile)

        if self.type in (XHDTYPE, SOLARIS_XHDTYPE):
            if next is None:
                raise HeaderError("missing subsequent header")

            # Patch the TarInfo object with the extended header info.
next._apply_pax_info(pax_headers, tarfile.encoding, tarfile.errors)
next.offset = self.offset
if pax_headers.has_key("size"):
# If the extended header replaces the size field,
# we need to recalculate the offset where the next
# header starts.
offset = next.offset_data
if next.isreg() or next.type not in SUPPORTED_TYPES:
offset += next._block(next.size)
tarfile.offset = offset
return next
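
    # A rough sketch (illustration only) of parsing a single record of the
    # format processed above:
    #
    #     >>> buf = "11 uid=123\n"
    #     >>> match = re.match(r"(\d+) ([^=]+)=", buf)
    #     >>> length, keyword = int(match.group(1)), match.group(2)
    #     >>> buf[match.end(2) + 1:match.start(1) + length - 1]
    #     '123'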
def _apply_pax_info(self, pax_headers, encoding, errors):
"""Replace fields with supplemental information from a previous
pax extended or global header.
"""
for keyword, value in pax_headers.iteritems():
if keyword not in PAX_FIELDS:
continue
if keyword == "path":
value = value.rstrip("/")
if keyword in PAX_NUMBER_FIELDS:
try:
value = PAX_NUMBER_FIELDS[keyword](value)
except ValueError:
value = 0
else:
value = uts(value, encoding, errors)
setattr(self, keyword, value)
self.pax_headers = pax_headers.copy()
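
    # A rough usage sketch (illustration only): pax values are unicode
    # strings; number fields are converted and a "path" value loses its
    # trailing slash before the attributes are overwritten.
    #
    #     >>> t = TarInfo()
    #     >>> t._apply_pax_info({u"path": u"foo/bar/", u"size": u"7011"},
    #     ...                   "utf8", "strict")
    #     >>> t.name, t.size
    #     ('foo/bar', 7011)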
    def _block(self, count):
        """Round up a byte count by BLOCKSIZE and return it,
...@@ -1462,8 +1485,9 @@ class TarFile(object):
    format = DEFAULT_FORMAT     # The format to use when creating an archive.

    encoding = ENCODING         # Encoding for 8-bit character strings.

    errors = None               # Error handler for unicode conversion.

    tarinfo = TarInfo           # The default TarInfo class to use.
...@@ -1471,7 +1495,7 @@ class TarFile(object):
    def __init__(self, name=None, mode="r", fileobj=None, format=None,
            tarinfo=None, dereference=None, ignore_zeros=None, encoding=None,
            errors=None, pax_headers=None, debug=None, errorlevel=None):
        """Open an (uncompressed) tar archive `name'. `mode' is either 'r' to
           read from an existing archive, 'a' to append data to an existing
           file or 'w' to create a new file overwriting an existing one. `mode'
...@@ -1512,6 +1536,19 @@ class TarFile(object):
        self.ignore_zeros = ignore_zeros
        if encoding is not None:
            self.encoding = encoding
if errors is not None:
self.errors = errors
elif mode == "r":
self.errors = "utf-8"
else:
self.errors = "strict"
if pax_headers is not None and self.format == PAX_FORMAT:
self.pax_headers = pax_headers
else:
self.pax_headers = {}
        if debug is not None:
            self.debug = debug
        if errorlevel is not None:
...@@ -1524,7 +1561,6 @@ class TarFile(object):
        self.offset = 0L       # current position in the archive file
        self.inodes = {}       # dictionary caching the inodes of
                               # archive members already added
        if self.mode == "r":
            self.firstmember = None
...@@ -1543,9 +1579,8 @@ class TarFile(object):
        if self.mode in "aw":
            self._loaded = True

            if self.pax_headers:
                buf = self.tarinfo.create_pax_global_header(self.pax_headers.copy())
                self.fileobj.write(buf)
                self.offset += len(buf)
...@@ -1817,8 +1852,6 @@ class TarFile(object):
                self.inodes[inode] = arcname
        elif stat.S_ISDIR(stmd):
            type = DIRTYPE
        elif stat.S_ISFIFO(stmd):
            type = FIFOTYPE
        elif stat.S_ISLNK(stmd):
...@@ -1952,7 +1985,7 @@ class TarFile(object):
        tarinfo = copy.copy(tarinfo)

        buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
        self.fileobj.write(buf)
        self.offset += len(buf)
...
# -*- coding: iso-8859-15 -*-
import sys
import os
...@@ -372,9 +372,9 @@ class LongnameTest(ReadTest):
    def test_read_longname(self):
        # Test reading of longname (bug #1471427).
        longname = self.subdir + "/" + "123/" * 125 + "longname"
        try:
            tarinfo = self.tar.getmember(longname)
        except KeyError:
            self.fail("longname not found")
        self.assert_(tarinfo.type != tarfile.DIRTYPE, "read longname as dirtype")
...@@ -393,13 +393,24 @@ class LongnameTest(ReadTest):
        tarinfo = self.tar.getmember(longname)
        offset = tarinfo.offset
        self.tar.fileobj.seek(offset)
        fobj = StringIO.StringIO(self.tar.fileobj.read(3 * 512))
        self.assertRaises(tarfile.ReadError, tarfile.open, name="foo.tar", fileobj=fobj)
def test_header_offset(self):
# Test if the start offset of the TarInfo object includes
# the preceding extended header.
longname = self.subdir + "/" + "123/" * 125 + "longname"
offset = self.tar.getmember(longname).offset
fobj = open(tarname)
fobj.seek(offset)
tarinfo = tarfile.TarInfo.frombuf(fobj.read(512))
self.assertEqual(tarinfo.type, self.longnametype)
class GNUReadTest(LongnameTest):

    subdir = "gnu"
    longnametype = tarfile.GNUTYPE_LONGNAME

    def test_sparse_file(self):
        tarinfo1 = self.tar.getmember("ustar/sparse")
...@@ -410,26 +421,40 @@ class GNUReadTest(LongnameTest):
                         "sparse file extraction failed")
class PaxReadTest(LongnameTest):

    subdir = "pax"
    longnametype = tarfile.XHDTYPE

    def test_pax_global_headers(self):
        tar = tarfile.open(tarname, encoding="iso8859-1")

        tarinfo = tar.getmember("pax/regtype1")
        self.assertEqual(tarinfo.uname, "foo")
        self.assertEqual(tarinfo.gname, "bar")
        self.assertEqual(tarinfo.pax_headers.get("VENDOR.umlauts"), u"")

        tarinfo = tar.getmember("pax/regtype2")
        self.assertEqual(tarinfo.uname, "")
        self.assertEqual(tarinfo.gname, "bar")
        self.assertEqual(tarinfo.pax_headers.get("VENDOR.umlauts"), u"")

        tarinfo = tar.getmember("pax/regtype3")
        self.assertEqual(tarinfo.uname, "tarfile")
        self.assertEqual(tarinfo.gname, "tarfile")
        self.assertEqual(tarinfo.pax_headers.get("VENDOR.umlauts"), u"")
def test_pax_number_fields(self):
# All following number fields are read from the pax header.
tar = tarfile.open(tarname, encoding="iso8859-1")
tarinfo = tar.getmember("pax/regtype4")
self.assertEqual(tarinfo.size, 7011)
self.assertEqual(tarinfo.uid, 123)
self.assertEqual(tarinfo.gid, 123)
self.assertEqual(tarinfo.mtime, 1041808783.0)
self.assertEqual(type(tarinfo.mtime), float)
self.assertEqual(float(tarinfo.pax_headers["atime"]), 1041808783.0)
self.assertEqual(float(tarinfo.pax_headers["ctime"]), 1041808783.0)
class WriteTest(unittest.TestCase):
...@@ -700,68 +725,161 @@ class PaxWriteTest(GNUWriteTest):
        n = tar.getmembers()[0].name
        self.assert_(name == n, "PAX longname creation failed")
    def test_pax_global_header(self):
        pax_headers = {
u"foo": u"bar",
u"uid": u"0",
u"mtime": u"1.23",
u"test": u"",
u"": u"test"}
tar = tarfile.open(tmpname, "w", format=tarfile.PAX_FORMAT, \
pax_headers=pax_headers)
tar.addfile(tarfile.TarInfo("test"))
tar.close()
# Test if the global header was written correctly.
tar = tarfile.open(tmpname, encoding="iso8859-1")
self.assertEqual(tar.pax_headers, pax_headers)
self.assertEqual(tar.getmembers()[0].pax_headers, pax_headers)
# Test if all the fields are unicode.
for key, val in tar.pax_headers.iteritems():
self.assert_(type(key) is unicode)
self.assert_(type(val) is unicode)
if key in tarfile.PAX_NUMBER_FIELDS:
try:
tarfile.PAX_NUMBER_FIELDS[key](val)
except (TypeError, ValueError):
self.fail("unable to convert pax header field")
def test_pax_extended_header(self):
# The fields from the pax header have priority over the
# TarInfo.
pax_headers = {u"path": u"foo", u"uid": u"123"}
tar = tarfile.open(tmpname, "w", format=tarfile.PAX_FORMAT, encoding="iso8859-1")
t = tarfile.TarInfo()
t.name = u"" # non-ASCII
t.uid = 8**8 # too large
t.pax_headers = pax_headers
tar.addfile(t)
tar.close()
tar = tarfile.open(tmpname, encoding="iso8859-1")
t = tar.getmembers()[0]
self.assertEqual(t.pax_headers, pax_headers)
self.assertEqual(t.name, "foo")
self.assertEqual(t.uid, 123)
class UstarUnicodeTest(unittest.TestCase):
# All *UnicodeTests FIXME
format = tarfile.USTAR_FORMAT
def test_iso8859_1_filename(self):
self._test_unicode_filename("iso8859-1")
def test_utf7_filename(self):
self._test_unicode_filename("utf7")
    def test_utf8_filename(self):
        self._test_unicode_filename("utf8")
def test_utf16_filename(self):
self._test_unicode_filename("utf16")
    def _test_unicode_filename(self, encoding):
        tar = tarfile.open(tmpname, "w", format=self.format, encoding=encoding, errors="strict")
        name = u""
        tar.addfile(tarfile.TarInfo(name))
        tar.close()

        tar = tarfile.open(tmpname, encoding=encoding)
        self.assert_(type(tar.getnames()[0]) is not unicode)
        self.assertEqual(tar.getmembers()[0].name, name.encode(encoding))
        tar.close()
    def test_unicode_filename_error(self):
        tar = tarfile.open(tmpname, "w", format=self.format, encoding="ascii", errors="strict")

        tarinfo = tarfile.TarInfo()

        tarinfo.name = ""
if self.format == tarfile.PAX_FORMAT:
self.assertRaises(UnicodeError, tar.addfile, tarinfo)
else:
tar.addfile(tarinfo)
tarinfo.name = u""
self.assertRaises(UnicodeError, tar.addfile, tarinfo)
tarinfo.name = "foo"
tarinfo.uname = u""
self.assertRaises(UnicodeError, tar.addfile, tarinfo)
def test_unicode_argument(self):
tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict")
for t in tar:
self.assert_(type(t.name) is str)
self.assert_(type(t.linkname) is str)
self.assert_(type(t.uname) is str)
self.assert_(type(t.gname) is str)
        tar.close()

    def test_uname_unicode(self):
for name in (u"", ""):
t = tarfile.TarInfo("foo")
t.uname = name
t.gname = name
        fobj = StringIO.StringIO()
        tar = tarfile.open("foo.tar", mode="w", fileobj=fobj, format=self.format, encoding="iso8859-1")
tar.addfile(t)
tar.close()
fobj.seek(0)
        tar = tarfile.open("foo.tar", fileobj=fobj, encoding="iso8859-1")
t = tar.getmember("foo")
self.assertEqual(t.uname, "")
self.assertEqual(t.gname, "")
class GNUUnicodeTest(UstarUnicodeTest):

    format = tarfile.GNU_FORMAT


class PaxUnicodeTest(UstarUnicodeTest):

    format = tarfile.PAX_FORMAT

    def _create_unicode_name(self, name):
        tar = tarfile.open(tmpname, "w", format=self.format)
        t = tarfile.TarInfo()
        t.pax_headers["path"] = name
        tar.addfile(t)
        tar.close()

    def test_error_handlers(self):
        # Test if the unicode error handlers work correctly for characters
        # that cannot be expressed in a given encoding.
        self._create_unicode_name(u"")
for handler, name in (("utf-8", u"".encode("utf8")),
("replace", "???"), ("ignore", "")):
tar = tarfile.open(tmpname, format=self.format, encoding="ascii",
errors=handler)
self.assertEqual(tar.getnames()[0], name)
self.assertRaises(UnicodeError, tarfile.open, tmpname,
encoding="ascii", errors="strict")
def test_error_handler_utf8(self):
# Create a pathname that has one component representable using
# iso8859-1 and the other only in iso8859-15.
self._create_unicode_name(u"/")
tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1",
errors="utf-8")
self.assertEqual(tar.getnames()[0], "/" + u"".encode("utf8"))
class AppendTest(unittest.TestCase):
...@@ -836,63 +954,58 @@ class LimitsTest(unittest.TestCase):
    def test_ustar_limits(self):
        # 100 char name
        tarinfo = tarfile.TarInfo("0123456789" * 10)
        tarinfo.tobuf(tarfile.USTAR_FORMAT)

        # 101 char name that cannot be stored
        tarinfo = tarfile.TarInfo("0123456789" * 10 + "0")
        self.assertRaises(ValueError, tarinfo.tobuf, tarfile.USTAR_FORMAT)

        # 256 char name with a slash at pos 156
        tarinfo = tarfile.TarInfo("123/" * 62 + "longname")
        tarinfo.tobuf(tarfile.USTAR_FORMAT)

        # 256 char name that cannot be stored
        tarinfo = tarfile.TarInfo("1234567/" * 31 + "longname")
        self.assertRaises(ValueError, tarinfo.tobuf, tarfile.USTAR_FORMAT)

        # 512 char name
        tarinfo = tarfile.TarInfo("123/" * 126 + "longname")
        self.assertRaises(ValueError, tarinfo.tobuf, tarfile.USTAR_FORMAT)

        # 512 char linkname
        tarinfo = tarfile.TarInfo("longlink")
        tarinfo.linkname = "123/" * 126 + "longname"
        self.assertRaises(ValueError, tarinfo.tobuf, tarfile.USTAR_FORMAT)

        # uid > 8 digits
        tarinfo = tarfile.TarInfo("name")
        tarinfo.uid = 010000000
        self.assertRaises(ValueError, tarinfo.tobuf, tarfile.USTAR_FORMAT)

    def test_gnu_limits(self):
        tarinfo = tarfile.TarInfo("123/" * 126 + "longname")
        tarinfo.tobuf(tarfile.GNU_FORMAT)

        tarinfo = tarfile.TarInfo("longlink")
        tarinfo.linkname = "123/" * 126 + "longname"
        tarinfo.tobuf(tarfile.GNU_FORMAT)

        # uid >= 256 ** 7
        tarinfo = tarfile.TarInfo("name")
        tarinfo.uid = 04000000000000000000L
        self.assertRaises(ValueError, tarinfo.tobuf, tarfile.GNU_FORMAT)

    def test_pax_limits(self):
        tarinfo = tarfile.TarInfo("123/" * 126 + "longname")
        tarinfo.tobuf(tarfile.PAX_FORMAT)

        tarinfo = tarfile.TarInfo("longlink")
        tarinfo.linkname = "123/" * 126 + "longname"
        tarinfo.tobuf(tarfile.PAX_FORMAT)

        tarinfo = tarfile.TarInfo("name")
        tarinfo.uid = 04000000000000000000L
        tarinfo.tobuf(tarfile.PAX_FORMAT)
class GzipMiscReadTest(MiscReadTest):
...@@ -940,6 +1053,9 @@ def test_main():
        StreamWriteTest,
        GNUWriteTest,
        PaxWriteTest,
UstarUnicodeTest,
GNUUnicodeTest,
PaxUnicodeTest,
        AppendTest,
        LimitsTest,
    ]
...
This diff was suppressed by a .gitattributes entry.
...@@ -220,6 +220,9 @@ Core and builtins
Library
-------
- tarfile.py: Improved unicode support. Unicode input names are now
officially supported. Added "errors" argument to the TarFile class.
- urllib.ftpwrapper class now accepts an optional timeout.

- shlex.split() now has an optional "posix" parameter.
...