# Koalas: pandas API on Apache Spark

Explore Koalas docs »

Live notebook · Issues · Mailing list

Help Thirsty Koalas Devastated by Recent Fires
The Koalas project makes data scientists more productive when interacting with big data, by implementing the pandas DataFrame API on top of Apache Spark.
pandas is the de facto standard (single-node) DataFrame implementation in Python, while Spark is the de facto standard for big data processing. With this package, you can:
- Be immediately productive with Spark, with no learning curve, if you are already familiar with pandas.
- Have a single codebase that works both with pandas (tests, smaller datasets) and with Spark (distributed datasets).
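To illustrate the single-codebase point, here is a minimal sketch: a hypothetical helper (`mean_by_group` is our own name, not part of Koalas) written purely against the pandas DataFrame API. Because Koalas implements the same API on top of Spark, the claim is that the identical function also accepts a `databricks.koalas` DataFrame unchanged.

```python
import pandas as pd

def mean_by_group(df, key, value):
    """Return the mean of `value` within each group of `key`.

    Written against the pandas API only; per the Koalas design,
    a Koalas DataFrame can be passed in without modification.
    """
    return df.groupby(key)[value].mean()

# Small dataset for tests; swap in ks.from_pandas(pdf) for the Spark path.
pdf = pd.DataFrame({'y': ['a', 'b', 'b'], 'x': [1.0, 2.0, 4.0]})
print(mean_by_group(pdf, 'y', 'x'))  # 'a' -> 1.0, 'b' -> 3.0
```

This is the pattern the project is built around: test business logic on small pandas datasets, then run the same code on distributed data via Spark.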
We would love to have you try it and give us feedback through our mailing lists or GitHub issues.
Try the Koalas 10-minute tutorial on a live Jupyter notebook here. The initial launch can take up to several minutes.
Koalas can be installed in many ways, such as with Conda and pip.

```bash
# Conda
conda install koalas -c conda-forge

# pip
pip install koalas
```

See Installation for more details.
If you are a Databricks Runtime user, you can install Koalas using the Libraries tab on the cluster UI, or using `dbutils` in a notebook as below for the regular Databricks Runtime:

```python
dbutils.library.installPyPI("koalas")
dbutils.library.restartPython()
```
For Databricks Runtime for Machine Learning 6.0 and above, you can install it as follows.
```bash
%sh
pip install koalas
```
Note that Koalas requires Databricks Runtime 5.x or above. In the future, we will package Koalas out-of-the-box in both the regular Databricks Runtime and Databricks Runtime for Machine Learning.
Lastly, if your PyArrow version is 0.15+ and your PySpark version is lower than 3.0, it is best to set the `ARROW_PRE_0_15_IPC_FORMAT` environment variable to `1` manually. Koalas will try its best to set it for you, but it cannot do so if a Spark context has already been launched.
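A minimal way to set this yourself, assuming you do it before any Spark context is created:

```python
import os

# Work around the Arrow 0.15 IPC format change when running
# PyArrow 0.15+ against PySpark < 3.0. This must happen before
# a Spark context is launched, since Koalas cannot set it afterwards.
os.environ["ARROW_PRE_0_15_IPC_FORMAT"] = "1"
```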
Now you can turn a pandas DataFrame into a Koalas DataFrame that is API-compliant with pandas:

```python
import databricks.koalas as ks
import pandas as pd

pdf = pd.DataFrame({'x': range(3), 'y': ['a', 'b', 'b'], 'z': ['a', 'b', 'b']})

# Create a Koalas DataFrame from a pandas DataFrame
df = ks.from_pandas(pdf)

# Rename the columns
df.columns = ['x', 'y', 'z1']

# Do some operations in place:
df['x2'] = df.x * df.x
```
For more details, see Getting Started and Dependencies in the official documentation.
See Contributing Guide and Design Principles in the official documentation.
See FAQ in the official documentation.
See Best Practices in the official documentation.
See Koalas Talks and Blogs in the official documentation.