gzip
package standard library
Documentation

Overview
Package gzip implements reading and writing of gzip format compressed files, as specified in RFC 1952.
Example (CompressingReader)

package main

import (
	"compress/gzip"
	"io"
	"log"
	"net/http"
	"net/http/httptest"
	"os"
	"strings"
)

func main() {
	// This is an example of writing a compressing reader.
	// This can be useful for an HTTP client body, as shown.

	const testdata = "the data to be compressed"

	// This HTTP handler is just for testing purposes.
	handler := http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
		zr, err := gzip.NewReader(req.Body)
		if err != nil {
			log.Fatal(err)
		}

		// Just output the data for the example.
		if _, err := io.Copy(os.Stdout, zr); err != nil {
			log.Fatal(err)
		}
	})
	ts := httptest.NewServer(handler)
	defer ts.Close()

	// The remainder is the example code.

	// The data we want to compress, as an io.Reader
	dataReader := strings.NewReader(testdata)

	// bodyReader is the body of the HTTP request, as an io.Reader.
	// httpWriter is the body of the HTTP request, as an io.Writer.
	bodyReader, httpWriter := io.Pipe()

	// Make sure that bodyReader is always closed, so that the
	// goroutine below will always exit.
	defer bodyReader.Close()

	// gzipWriter compresses data to httpWriter.
	gzipWriter := gzip.NewWriter(httpWriter)

	// errch collects any errors from the writing goroutine.
	errch := make(chan error, 1)

	go func() {
		defer close(errch)
		sentErr := false
		sendErr := func(err error) {
			if !sentErr {
				errch <- err
				sentErr = true
			}
		}

		// Copy our data to gzipWriter, which compresses it to
		// httpWriter, which feeds it to bodyReader.
		if _, err := io.Copy(gzipWriter, dataReader); err != nil && err != io.ErrClosedPipe {
			sendErr(err)
		}
		if err := gzipWriter.Close(); err != nil && err != io.ErrClosedPipe {
			sendErr(err)
		}
		if err := httpWriter.Close(); err != nil && err != io.ErrClosedPipe {
			sendErr(err)
		}
	}()

	// Send an HTTP request to the test server.
	req, err := http.NewRequest("PUT", ts.URL, bodyReader)
	if err != nil {
		log.Fatal(err)
	}

	// Note that passing req to http.Client.Do promises that it
	// will close the body, in this case bodyReader.
	resp, err := ts.Client().Do(req)
	if err != nil {
		log.Fatal(err)
	}

	// Check whether there was an error compressing the data.
	if err := <-errch; err != nil {
		log.Fatal(err)
	}

	// For this example we don't care about the response.
	resp.Body.Close()
}

Output:

the data to be compressed
Example (WriterReader)

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"log"
	"os"
	"time"
)

func main() {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)

	// Setting the Header fields is optional.
	zw.Name = "a-new-hope.txt"
	zw.Comment = "an epic space opera by George Lucas"
	zw.ModTime = time.Date(1977, time.May, 25, 0, 0, 0, 0, time.UTC)

	_, err := zw.Write([]byte("A long time ago in a galaxy far, far away..."))
	if err != nil {
		log.Fatal(err)
	}

	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}

	zr, err := gzip.NewReader(&buf)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Name: %s\nComment: %s\nModTime: %s\n\n", zr.Name, zr.Comment, zr.ModTime.UTC())

	if _, err := io.Copy(os.Stdout, zr); err != nil {
		log.Fatal(err)
	}

	if err := zr.Close(); err != nil {
		log.Fatal(err)
	}
}

Output:

Name: a-new-hope.txt
Comment: an epic space opera by George Lucas
ModTime: 1977-05-25 00:00:00 +0000 UTC

A long time ago in a galaxy far, far away...
Constants

const (
	NoCompression      = flate.NoCompression
	BestSpeed          = flate.BestSpeed
	BestCompression    = flate.BestCompression
	DefaultCompression = flate.DefaultCompression
	HuffmanOnly        = flate.HuffmanOnly
)

These constants are copied from the flate package, so that code that imports compress/gzip does not also have to import compress/flate.
Variables

var (
	// ErrChecksum is returned when reading GZIP data that has an invalid checksum.
	ErrChecksum = errors.New("gzip: invalid checksum")
	// ErrHeader is returned when reading GZIP data that has an invalid header.
	ErrHeader = errors.New("gzip: invalid header")
)
Functions
This section is empty.
Types

type Header

type Header struct {
	Comment string    // comment
	Extra   []byte    // "extra data"
	ModTime time.Time // modification time
	Name    string    // file name
	OS      byte      // operating system type
}

The gzip file stores a header giving metadata about the compressed file. That header is exposed as the fields of the Writer and Reader structs.

Strings must be UTF-8 encoded and may only contain Unicode code points U+0001 through U+00FF, due to limitations of the GZIP file format.
type Reader

type Reader struct {
	Header // valid after NewReader or Reader.Reset
	// contains filtered or unexported fields
}

A Reader is an io.Reader that can be read to retrieve uncompressed data from a gzip-format compressed file.

In general, a gzip file can be a concatenation of gzip files, each with its own header. Reads from the Reader return the concatenation of the uncompressed data of each. Only the first header is recorded in the Reader fields.

Gzip files store a length and checksum of the uncompressed data. The Reader will return an ErrChecksum when Reader.Read reaches the end of the uncompressed data if it does not have the expected length or checksum. Clients should treat data returned by Reader.Read as tentative until they receive the io.EOF marking the end of the data.
func NewReader

func NewReader(r io.Reader) (*Reader, error)

NewReader creates a new Reader reading the given reader. If r does not also implement io.ByteReader, the decompressor may read more data than necessary from r.

It is the caller's responsibility to call Reader.Close when done.

The Reader.Header fields will be valid in the Reader returned.
func (*Reader) Close

func (z *Reader) Close() error

Close closes the Reader. It does not close the underlying reader. In order for the GZIP checksum to be verified, the reader must be fully consumed until the io.EOF.
func (*Reader) Multistream
added in go1.4

func (z *Reader) Multistream(ok bool)

Multistream controls whether the reader supports multistream files.

If enabled (the default), the Reader expects the input to be a sequence of individually gzipped data streams, each with its own header and trailer, ending at EOF. The effect is that the concatenation of a sequence of gzipped files is treated as equivalent to the gzip of the concatenation of the sequence. This is standard behavior for gzip readers.

Calling Multistream(false) disables this behavior; disabling the behavior can be useful when reading file formats that distinguish individual gzip data streams or mix gzip data streams with other data streams. In this mode, when the Reader reaches the end of the data stream, Reader.Read returns io.EOF. The underlying reader must implement io.ByteReader in order to be left positioned just after the gzip stream. To start the next stream, call z.Reset(r) followed by z.Multistream(false). If there is no next stream, z.Reset(r) will return io.EOF.
Example¶
package mainimport ("bytes""compress/gzip""fmt""io""log""os""time")func main() {var buf bytes.Bufferzw := gzip.NewWriter(&buf)var files = []struct {name stringcomment stringmodTime time.Timedata string}{{"file-1.txt", "file-header-1", time.Date(2006, time.February, 1, 3, 4, 5, 0, time.UTC), "Hello Gophers - 1"},{"file-2.txt", "file-header-2", time.Date(2007, time.March, 2, 4, 5, 6, 1, time.UTC), "Hello Gophers - 2"},}for _, file := range files {zw.Name = file.namezw.Comment = file.commentzw.ModTime = file.modTimeif _, err := zw.Write([]byte(file.data)); err != nil {log.Fatal(err)}if err := zw.Close(); err != nil {log.Fatal(err)}zw.Reset(&buf)}zr, err := gzip.NewReader(&buf)if err != nil {log.Fatal(err)}for {zr.Multistream(false)fmt.Printf("Name: %s\nComment: %s\nModTime: %s\n\n", zr.Name, zr.Comment, zr.ModTime.UTC())if _, err := io.Copy(os.Stdout, zr); err != nil {log.Fatal(err)}fmt.Print("\n\n")err = zr.Reset(&buf)if err == io.EOF {break}if err != nil {log.Fatal(err)}}if err := zr.Close(); err != nil {log.Fatal(err)}}
Output:Name: file-1.txtComment: file-header-1ModTime: 2006-02-01 03:04:05 +0000 UTCHello Gophers - 1Name: file-2.txtComment: file-header-2ModTime: 2007-03-02 04:05:06 +0000 UTCHello Gophers - 2
type Writer

type Writer struct {
	Header // written at first call to Write, Flush, or Close
	// contains filtered or unexported fields
}

A Writer is an io.WriteCloser. Writes to a Writer are compressed and written to w.
func NewWriter

func NewWriter(w io.Writer) *Writer

NewWriter returns a new Writer. Writes to the returned writer are compressed and written to w.

It is the caller's responsibility to call Close on the Writer when done. Writes may be buffered and not flushed until Close.

Callers that wish to set the fields in Writer.Header must do so before the first call to Write, Flush, or Close.
func NewWriterLevel

func NewWriterLevel(w io.Writer, level int) (*Writer, error)

NewWriterLevel is like NewWriter but specifies the compression level instead of assuming DefaultCompression.

The compression level can be DefaultCompression, NoCompression, HuffmanOnly or any integer value between BestSpeed and BestCompression inclusive. The error returned will be nil if the level is valid.
func (*Writer) Close

func (z *Writer) Close() error

Close closes the Writer by flushing any unwritten data to the underlying io.Writer and writing the GZIP footer. It does not close the underlying io.Writer.
func (*Writer) Flush
added in go1.1

func (z *Writer) Flush() error
Flush flushes any pending compressed data to the underlying writer.
It is useful mainly in compressed network protocols, to ensure thata remote reader has enough data to reconstruct a packet. Flush doesnot return until the data has been written. If the underlyingwriter returns an error, Flush returns that error.
In the terminology of the zlib library, Flush is equivalent to Z_SYNC_FLUSH.