Koalas: pandas API on Apache Spark


pandas API on Apache Spark
Explore Koalas docs »

Live notebook · Issues · Mailing list
Help Thirsty Koalas Devastated by Recent Fires

The Koalas project makes data scientists more productive when interacting with big data, by implementing the pandas DataFrame API on top of Apache Spark.

pandas is the de facto standard (single-node) DataFrame implementation in Python, while Spark is the de facto standard for big data processing. With this package, you can:

  • Be immediately productive with Spark, with no learning curve, if you are already familiar with pandas.
  • Have a single codebase that works both with pandas (tests, smaller datasets) and with Spark (distributed datasets).

We would love to have you try it and give us feedback, through our mailing lists or GitHub issues.

Try the Koalas 10-minute tutorial on a live Jupyter notebook here. The initial launch can take up to several minutes.


Getting Started

Koalas can be installed in many ways such as Conda and pip.

```bash
# Conda
conda install koalas -c conda-forge

# pip
pip install koalas
```

See Installation for more details.

If you are a Databricks Runtime user, you can install Koalas using the Libraries tab on the cluster UI, or using dbutils in a notebook as below for the regular Databricks Runtime:

```python
dbutils.library.installPyPI("koalas")
dbutils.library.restartPython()
```

For Databricks Runtime for Machine Learning 6.0 and above, you can install it as follows.

```bash
%sh
pip install koalas
```

Note that Koalas requires Databricks Runtime 5.x or above. In the future, we will package Koalas out-of-the-box in both the regular Databricks Runtime and Databricks Runtime for Machine Learning.

Lastly, if your PyArrow version is 0.15+ and your PySpark version is lower than 3.0, it is best to set the ARROW_PRE_0_15_IPC_FORMAT environment variable to 1 manually. Koalas will try its best to set it for you, but it cannot do so if a Spark context has already been launched.
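Since the variable must be in place before the Spark JVM starts, one way is to set it at the top of your script, before any PySpark or Koalas import. A minimal sketch (assuming no Spark context exists yet):

```python
import os

# Set the Arrow IPC compatibility flag before any Spark context is launched;
# once a context exists, changing this variable no longer takes effect.
os.environ["ARROW_PRE_0_15_IPC_FORMAT"] = "1"

# Only import pyspark / databricks.koalas after the variable is set.
```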

Now you can turn a pandas DataFrame into a Koalas DataFrame that is API-compliant with the former:

```python
import databricks.koalas as ks
import pandas as pd

pdf = pd.DataFrame({'x': range(3), 'y': ['a', 'b', 'b'], 'z': ['a', 'b', 'b']})

# Create a Koalas DataFrame from pandas DataFrame
df = ks.from_pandas(pdf)

# Rename the columns
df.columns = ['x', 'y', 'z1']

# Do some operations in place:
df['x2'] = df.x * df.x
```
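Because the Koalas DataFrame is API-compliant with pandas, the same sequence of operations runs unchanged on a plain pandas DataFrame. A minimal sketch (pandas only, no Spark cluster assumed), which is also how you can keep one codebase for tests on small data:

```python
import pandas as pd

pdf = pd.DataFrame({'x': range(3), 'y': ['a', 'b', 'b'], 'z': ['a', 'b', 'b']})

# The same column rename and derived column as above, directly on pandas
pdf.columns = ['x', 'y', 'z1']
pdf['x2'] = pdf.x * pdf.x

print(pdf['x2'].tolist())  # [0, 1, 4]
```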

For more details, see Getting Started and Dependencies in the official documentation.

Contributing Guide

See Contributing Guide and Design Principles in the official documentation.

FAQ

See FAQ in the official documentation.

Best Practices

See Best Practices in the official documentation.

Koalas Talks and Blogs

See Koalas Talks and Blogs in the official documentation.

