pandas.Series.to_pickle
Series.to_pickle(path, *, compression='infer', protocol=5, storage_options=None)
Pickle (serialize) object to file.
- Parameters:
- path : str, path object, or file-like object
String, path object (implementing os.PathLike[str]), or file-like object implementing a binary write() function. File path where the pickled object will be stored.
- compression : str or dict, default 'infer'
For on-the-fly compression of the output data. If 'infer' and 'path' is path-like, then detect compression from the following extensions: '.gz', '.bz2', '.zip', '.xz', '.zst', '.tar', '.tar.gz', '.tar.xz' or '.tar.bz2' (otherwise no compression). Set to None for no compression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd', 'xz', 'tar'}; other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, zstandard.ZstdCompressor, lzma.LZMAFile or tarfile.TarFile, respectively. As an example, the following could be passed for faster compression and to create a reproducible gzip archive: compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
Added in version 1.5.0: Added support for .tar files.
- protocol : int
Int which indicates which protocol should be used by the pickler, default HIGHEST_PROTOCOL (see [1] paragraph 12.1.2). The possible values are 0, 1, 2, 3, 4, 5. A negative value for the protocol parameter is equivalent to setting its value to HIGHEST_PROTOCOL.
- storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib.request.Request as header options. For other URLs (e.g. starting with "s3://" and "gcs://") the key-value pairs are forwarded to fsspec.open. Please see fsspec and urllib for more details.
See also
read_pickle
Load pickled pandas object (or any object) from file.
DataFrame.to_hdf
Write DataFrame to an HDF5 file.
DataFrame.to_sql
Write DataFrame to a SQL database.
DataFrame.to_parquet
Write a DataFrame to the binary parquet format.
Examples
>>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
>>> original_df
   foo  bar
0    0    5
1    1    6
2    2    7
3    3    8
4    4    9
>>> original_df.to_pickle("./dummy.pkl")

>>> unpickled_df = pd.read_pickle("./dummy.pkl")
>>> unpickled_df
   foo  bar
0    0    5
1    1    6
2    2    7
3    3    8
4    4    9
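The compression inference described above can be sketched as follows (the file name is illustrative): a '.gz' suffix triggers gzip compression under the default compression='infer', and an explicit dict form forwards extra options to gzip.GzipFile:

```python
import pandas as pd

df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})

# '.gz' extension + compression='infer' (the default) -> gzip-compressed pickle
df.to_pickle("dummy.pkl.gz")
same = pd.read_pickle("dummy.pkl.gz")  # decompression is also inferred on read

# Equivalent explicit form; 'compresslevel' and 'mtime' are forwarded
# to gzip.GzipFile for faster, reproducible output
df.to_pickle("dummy.pkl.gz",
             compression={"method": "gzip", "compresslevel": 1, "mtime": 1})

print(same.equals(df))  # True
```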