Apache Parquet

From Wikipedia, the free encyclopedia
Column-oriented data storage format
Initial release: 13 March 2013 (2013-03-13)
Stable release: 2.10.0 / 20 November 2023[1]
Written in: Java (reference implementation)[2]
Operating system: Cross-platform
Type: Column-oriented DBMS
License: Apache License 2.0
Website: parquet.apache.org

Apache Parquet is a free and open-source column-oriented data storage format in the Apache Hadoop ecosystem. It is similar to RCFile and ORC, the other columnar-storage file formats in Hadoop, and is compatible with most of the data processing frameworks around Hadoop. It provides efficient data compression and encoding schemes that improve performance when handling complex data in bulk.

History


The open-source project to build Apache Parquet began as a joint effort between Twitter[3] and Cloudera.[4] Parquet was designed as an improvement on the Trevni columnar storage format created by Doug Cutting, the creator of Hadoop. The first version, Apache Parquet 1.0, was released in July 2013. Since April 27, 2015, Apache Parquet has been a top-level Apache Software Foundation (ASF)-sponsored project.[5][6]

Features


Apache Parquet is implemented using the record-shredding and assembly algorithm,[7] which accommodates the complex data structures that can be used to store data.[8] The values in each column are stored in contiguous memory locations, providing the following benefits:[9]

  • Column-wise compression is space-efficient
  • Encoding and compression techniques specific to the type of data in each column can be used
  • Queries that fetch specific column values need not read the entire row, which improves performance (see the sketch below)
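
As an illustration of the last point, here is a minimal sketch using the pyarrow library, one of several Parquet implementations; the file name and column names are hypothetical:

    import pyarrow.parquet as pq

    # Reading only the columns a query needs skips the rest of the file;
    # "events.parquet" and both column names are made-up examples.
    table = pq.read_table("events.parquet", columns=["user_id", "timestamp"])
    print(table.num_rows, table.column_names)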

Apache Parquet is implemented using the Apache Thrift framework, which increases its flexibility; it can work with a number of programming languages such as C++, Java, Python, and PHP.[10]

As of August 2015,[11] Parquet supports big-data-processing frameworks including Apache Hive, Apache Drill, Apache Impala, Apache Crunch, Apache Pig, Cascading, Presto and Apache Spark. It is one of the external data formats used by the pandas Python data manipulation and analysis library.
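
A minimal sketch of that pandas integration, assuming a pyarrow or fastparquet engine is installed; the DataFrame contents and file name are hypothetical:

    import pandas as pd

    df = pd.DataFrame({"city": ["Oslo", "Lima"], "temp_c": [4.5, 19.0]})
    df.to_parquet("weather.parquet")    # serialize the DataFrame to Parquet
    subset = pd.read_parquet("weather.parquet", columns=["temp_c"])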

Compression and encoding


In Parquet, compression is performed column by column, which enables different encoding schemes to be used for text and integer data. This strategy also allows newer and better encoding schemes to be adopted as they are invented.

Parquet supports various compression formats: Snappy, gzip, LZO, Brotli, zstd, and LZ4.[12]
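
Because the codec is chosen column by column, writer APIs can expose per-column settings. A hedged sketch with pyarrow, using hypothetical column names:

    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({"text": ["alpha", "beta"], "count": [1, 2]})
    # A different codec per column: zstd for the text, snappy for the integers.
    pq.write_table(table, "mixed.parquet",
                   compression={"text": "zstd", "count": "snappy"})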

Dictionary encoding


Parquet has automatic dictionary encoding, enabled dynamically for data with a small number of unique values (i.e. below 10⁵), which enables significant compression and boosts processing speed.[13]
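
Sketched with pyarrow, which enables dictionary encoding by default and also lets it be restricted to specific columns; the low-cardinality "country" column here is hypothetical:

    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({"country": ["NO", "NO", "PE", "NO"],
                      "reading": [1.0, 2.0, 3.0, 4.0]})
    # Dictionary-encode only the repetitive column; its values are stored
    # once in a dictionary and referenced by small integer indices.
    pq.write_table(table, "readings.parquet", use_dictionary=["country"])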

Bit packing


Integers are usually stored with a dedicated 32 or 64 bits each. For small integers, packing multiple integers into the same space makes storage more efficient.[13]
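
An illustrative toy encoder, not Parquet's actual wire format: values known to fit in 3 bits are packed into 3 bytes per 8 values rather than occupying 32 bits each.

    def pack_3bit(values):
        """Pack integers in the range [0, 7] into a compact byte string."""
        acc, bits, out = 0, 0, bytearray()
        for v in values:
            acc = (acc << 3) | v            # append 3 new bits
            bits += 3
            while bits >= 8:                # flush each completed byte
                bits -= 8
                out.append((acc >> bits) & 0xFF)
        if bits:                            # zero-pad the final partial byte
            out.append((acc << (8 - bits)) & 0xFF)
        return bytes(out)

    packed = pack_3bit([1, 5, 2, 7, 0, 3])  # 3 bytes instead of 24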

Run-length encoding (RLE)


To optimize the storage of multiple occurrences of the same value, run-length encoding is used, in which a single value is stored once together with the number of occurrences.[13]

Parquet implements a hybrid of bit packing and RLE, in which the encoding switches based on which produces the best compression results. This strategy works well for certain types of integer data and combines well with dictionary encoding.[13]
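
A toy run-length encoder conveys the idea (Parquet's actual hybrid encoding interleaves bit-packed and run-length runs in a single byte stream):

    def rle_encode(values):
        """Collapse repeated values into (value, count) pairs."""
        runs = []
        for v in values:
            if runs and runs[-1][0] == v:
                runs[-1][1] += 1            # extend the current run
            else:
                runs.append([v, 1])         # start a new run
        return runs

    assert rle_encode([7, 7, 7, 7, 2, 2]) == [[7, 4], [2, 2]]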

Cloud storage and data lakes


Parquet is widely used as the underlying file format in modern cloud-based data lake architectures. Cloud storage systems such as Amazon S3, Azure Data Lake Storage, and Google Cloud Storage commonly store data in Parquet format due to its efficient columnar representation and retrieval capabilities.[14] Data lakehouse frameworks, including Apache Iceberg,[15] Delta Lake,[16] and Apache Hudi,[17] build an additional metadata layer on top of Parquet files to support features such as schema evolution, time-travel queries, and ACID-compliant transactions. In these architectures, Parquet files serve as the immutable storage layer while the table formats manage data versioning and transactional integrity.
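
A hedged sketch of that storage layer: writing a partitioned Parquet dataset with pyarrow, much as lakehouse table formats do beneath their metadata layer. The path and columns are hypothetical, and a cloud URI such as an S3 bucket would additionally need credentials configured:

    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({"year": [2024, 2024, 2025],
                      "amount": [10.0, 7.5, 12.5]})
    # One directory per partition value, each holding immutable Parquet files.
    pq.write_to_dataset(table, root_path="sales_lake", partition_cols=["year"])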

Comparison


Apache Parquet is comparable to the RCFile and Optimized Row Columnar (ORC) file formats; all three are columnar data storage formats within the Hadoop ecosystem. All three offer better compression and encoding with improved read performance, at the cost of slower writes. In addition to these features, Apache Parquet supports limited schema evolution,[citation needed] i.e., the schema can be modified in response to changes in the data. It also provides the ability to add new columns and to merge schemas that do not conflict.

Apache Arrow is designed as an in-memory complement to on-disk columnar formats like Parquet and ORC. The Arrow and Parquet projects include libraries that allow for reading and writing between the two formats.[citation needed]
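
A minimal sketch of that bridge with pyarrow (file names hypothetical): reading yields an in-memory Arrow table, which can be written straight back to Parquet.

    import pyarrow.parquet as pq

    table = pq.read_table("data.parquet")   # on-disk Parquet -> Arrow Table
    pq.write_table(table, "copy.parquet")   # Arrow Table -> Parquet file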

Implementations


Known implementations of Parquet include:


References

  1. ^"Apache Parquet – Releases".Apache.org.Archived from the original on 22 February 2023. Retrieved22 February 2023.
  2. ^"Parquet-MR source code".GitHub.Archived from the original on 11 June 2018. Retrieved2 July 2019.
  3. ^"Release Date".Archived from the original on 2016-10-20. Retrieved2016-09-12.
  4. ^"Introducing Parquet: Efficient Columnar Storage for Apache Hadoop - Cloudera Engineering Blog". 2013-03-13. Archived fromthe original on 2013-05-04. Retrieved2018-10-22.
  5. ^"Apache Parquet paves the way for better Hadoop data storage". 28 April 2015.Archived from the original on 31 May 2017. Retrieved21 May 2017.
  6. ^"The Apache Software Foundation Announces Apache™ Parquet™ as a Top-Level Project : The Apache Software Foundation Blog". 27 April 2015.Archived from the original on 20 August 2017. Retrieved21 May 2017.
  7. ^"The striping and assembly algorithms from the Google-inspired Dremel paper".github.Archived from the original on 26 October 2020. Retrieved13 November 2017.
  8. ^"Apache Parquet Documentation". Archived fromthe original on 2016-09-05. Retrieved2016-09-12.
  9. ^"Apache Parquet Cloudera".Archived from the original on 2016-09-19. Retrieved2016-09-12.
  10. ^"Apache Thrift".Archived from the original on 2021-03-12. Retrieved2016-09-14.
  11. ^"Supported Frameworks".Archived from the original on 2015-02-02. Retrieved2016-09-12.
  12. ^"Parquet Compression".Apache Parquet Documentation. Apache Software Foundation. 11 March 2024. Retrieved2 December 2024.
  13. ^abcd"Announcing Parquet 1.0: Columnar Storage for Hadoop | Twitter Blogs".blog.twitter.com.Archived from the original on 2016-10-20. Retrieved2016-09-14.
  14. ^"Apache Parquet".Apache Parquet. Retrieved2025-03-13.
  15. ^"Apache Iceberg". Retrieved2025-03-13.
  16. ^"Delta Lake". Retrieved2025-03-13.
  17. ^"Apache Hudi". Retrieved2025-03-13.
