Apache Hadoop

The Apache® Hadoop® project develops open-source software for reliable, scalable, distributed computing.

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.

Learn more » Download » Getting started »

Latest news

Release 3.4.1 available

This is a release of the Apache Hadoop 3.4 line.

Users of Apache Hadoop 3.4.0 and earlier should upgrade to this release.

All users are encouraged to read the overview of major changes since release 3.4.0.

We have also introduced a lean tar: a smaller tarball that does not bundle the AWS SDK, since the AWS SDK alone is roughly 500 MB. This eases usage for non-AWS users; AWS users can still add the SDK jar explicitly if desired.

For details of bug fixes, improvements, and other enhancements since the previous 3.4.0 release, please check the release notes and changelog.

Release 3.4.0 available

This is the first release of the Apache Hadoop 3.4 line. It contains 2888 bug fixes, improvements, and enhancements since 3.3.

Users are encouraged to read the overview of major changes. For details, please check the release notes and changelog.

Release 3.3.6 available

This is a release of the Apache Hadoop 3.3 line.

It contains 117 bug fixes, improvements, and enhancements since 3.3.5. Users of Apache Hadoop 3.3.5 and earlier should upgrade to this release.

Feature highlights:

SBOM artifacts

Starting from this release, Hadoop publishes a Software Bill of Materials (SBOM) using the CycloneDX Maven plugin. For more information on SBOMs, please go to SBOM.

HDFS RBF: RDBMS-based token storage support

HDFS Router-Based Federation now supports storing delegation tokens in MySQL (HADOOP-18535), which improves token operation throughput over the original ZooKeeper-based implementation.

New File System APIs

HADOOP-18671 moved a number of HDFS-specific APIs to Hadoop Common to make it possible for certain applications that depend on HDFS semantics to run on other Hadoop-compatible file systems.

In particular, recoverLease() and isFileClosed() are exposed through the LeaseRecoverable interface, while setSafeMode() is exposed through the SafeMode interface.
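
As a rough, non-authoritative sketch, the snippet below probes a FileSystem for the LeaseRecoverable capability before calling recoverLease(); the path and configuration are placeholders, and the exact signatures should be checked against the 3.3.6 javadocs.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LeaseRecoverable;
    import org.apache.hadoop.fs.Path;

    public class LeaseRecoveryExample {
      public static void main(String[] args) throws IOException {
        // Placeholder path; any FileSystem that implements
        // LeaseRecoverable (such as HDFS) should behave the same way.
        Path file = new Path("/data/events.log");
        FileSystem fs = FileSystem.get(new Configuration());

        if (fs instanceof LeaseRecoverable) {
          LeaseRecoverable lr = (LeaseRecoverable) fs;
          // isFileClosed() reports whether a previous writer still holds
          // the lease; recoverLease() asks the filesystem to reclaim it.
          if (!lr.isFileClosed(file)) {
            boolean recovered = lr.recoverLease(file);
            System.out.println("lease recovered: " + recovered);
          }
        } else {
          System.out.println(fs.getScheme() + " does not support lease recovery");
        }
      }
    }

The instanceof probe lets the same code run on any Hadoop-compatible file system that chooses to implement the interface, which is the portability goal described above.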

Users are encouraged to read the overview of major changes since release 3.3.5. For details of the 117 bug fixes, improvements, and other enhancements since the previous 3.3.5 release, please check the release notes and changelog.

Release 3.3.5 available

This is a release of the Apache Hadoop 3.3 line.

Key changes include:

  • A big update of dependencies to try and keep reports of transitive CVEs under control, both genuine and false positives.
  • Critical fix to ABFS input stream prefetching for correct reading.
  • Vectored IO API for all FSDataInputStream implementations, with high-performance versions for the file:// and s3a:// filesystems: file:// through Java native IO, s3a:// through parallel GET requests (see the sketch after this list).
  • Arm64 binaries. Note that because the arm64 release was built on a different platform, the jar files may not match those of the x86 release, and therefore neither may the Maven artifacts.
  • Security fixes in Hadoop’s own code.
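
As a minimal sketch of the vectored IO API, the snippet below requests several non-contiguous ranges of one file in a single call; the s3a:// path and the range offsets are illustrative placeholders.

    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileRange;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class VectoredReadExample {
      public static void main(String[] args) throws Exception {
        // Placeholder path: the same call works on any FSDataInputStream,
        // with the high-performance implementations on file:// and s3a://.
        Path path = new Path("s3a://example-bucket/data/part-00000.parquet");
        FileSystem fs = path.getFileSystem(new Configuration());

        // Several non-contiguous ranges requested at once; the filesystem
        // may coalesce or parallelize the underlying reads.
        List<FileRange> ranges = Arrays.asList(
            FileRange.createFileRange(0, 4096),
            FileRange.createFileRange(1_048_576, 8_192),
            FileRange.createFileRange(8_388_608, 1_024));

        try (FSDataInputStream in = fs.open(path)) {
          in.readVectored(ranges, ByteBuffer::allocate);
          for (FileRange range : ranges) {
            ByteBuffer data = range.getData().get(); // wait for this range
            System.out.println("read " + data.remaining()
                + " bytes at offset " + range.getOffset());
          }
        }
      }
    }

Each FileRange completes independently through its CompletableFuture, so callers can overlap processing of one range with the remaining reads.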

Users of Apache Hadoop 3.3.4 and earlier should upgrade to this release.

All users are encouraged to read the overview of major changes since release 3.3.4.

For details of bug fixes, improvements, and other enhancements since the previous 3.3.4 release, please check the release notes and changelog.

Azure ABFS: Critical Stream Prefetch Fix

The ABFS connector has a critical bug fix, HADOOP-18546 (https://issues.apache.org/jira/browse/HADOOP-18546): disable purging the list of in-progress reads in ABFS stream close().

All users of the ABFS connector in Hadoop releases 3.3.2+ MUST either upgrade to this release or disable prefetching by setting fs.azure.readaheadqueue.depth to 0.
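
For users who cannot upgrade immediately, a minimal sketch of that mitigation, using the configuration key quoted above, might look like this (the same property can equally be set in core-site.xml):

    import org.apache.hadoop.conf.Configuration;

    public class DisableAbfsReadahead {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Mitigation from the advisory above: a read-ahead queue depth of 0
        // disables ABFS prefetching, so the affected code path is never used.
        conf.setInt("fs.azure.readaheadqueue.depth", 0);
        System.out.println("fs.azure.readaheadqueue.depth = "
            + conf.getInt("fs.azure.readaheadqueue.depth", -1));
      }
    }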

Release 3.3.4 available

This is a release of the Apache Hadoop 3.3 line.

It contains a small number of security and critical integration fixes since 3.3.3.

Users of Apache Hadoop 3.3.3 should upgrade to this release.

Users of Hadoop 2.x and Hadoop 3.2 should also upgrade to the 3.3.x line. As well as feature enhancements, this is the sole branch currently receiving fixes for anything other than critical security/data integrity issues.

Users are encouraged to read the overview of major changes since release 3.3.3. For details of bug fixes, improvements, and other enhancements since the previous 3.3.3 release, please check the release notes and changelog.

Release archive →

News archive →

Modules

The project includes these modules:

  • Hadoop Common: The common utilities that support the other Hadoop modules.
  • Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
  • Hadoop YARN: A framework for job scheduling and cluster resource management.
  • Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.

Who Uses Hadoop?

A wide variety of companies and organizations use Hadoop for both research and production. Users are encouraged to add themselves to the Hadoop PoweredBy wiki page.

Related projects

Other Hadoop-related projects at Apache include:

  • Ambari™: A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters, which includes support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig, and Sqoop. Ambari also provides a dashboard for viewing cluster health, such as heatmaps, and the ability to view MapReduce, Pig, and Hive applications visually, along with features to diagnose their performance characteristics in a user-friendly manner.
  • Avro™: A data serialization system.
  • Cassandra™: A scalable multi-master database with no single points of failure.
  • Chukwa™: A data collection system for managing large distributed systems.
  • HBase™: A scalable, distributed database that supports structured data storage for large tables.
  • Hive™: A data warehouse infrastructure that provides data summarization and ad hoc querying.
  • Mahout™: A scalable machine learning and data mining library.
  • Ozone™: A scalable, redundant, and distributed object store for Hadoop.
  • Pig™: A high-level data-flow language and execution framework for parallel computation.
  • Spark™: A fast and general compute engine for Hadoop data. Spark provides a simple and expressive programming model that supports a wide range of applications, including ETL, machine learning, stream processing, and graph computation.
  • Submarine: A unified AI platform which allows engineers and data scientists to run machine learning and deep learning workloads in a distributed cluster.
  • Tez™: A generalized data-flow programming framework, built on Hadoop YARN, which provides a powerful and flexible engine to execute an arbitrary DAG of tasks to process data for both batch and interactive use cases. Tez is being adopted by Hive™, Pig™, and other frameworks in the Hadoop ecosystem, and also by other commercial software (e.g. ETL tools), to replace Hadoop™ MapReduce as the underlying execution engine.
  • ZooKeeper™: A high-performance coordination service for distributed applications.

