Pandas on AWS

Easy integration with Athena, Glue, Redshift, Timestream, OpenSearch, Neptune, QuickSight, Chime, CloudWatch Logs, DynamoDB, EMR, Secrets Manager, PostgreSQL, MySQL, SQL Server and S3 (Parquet, CSV, JSON and Excel).

AWS SDK for pandastracker

An AWS Professional Service open source initiative | aws-proserve-opensource@amazon.com


Source | Installation Command
PyPI   | pip install awswrangler
Conda  | conda install -c conda-forge awswrangler

⚠️ Starting with version 3.0, optional modules must be installed explicitly:
➡️ pip install 'awswrangler[redshift]'


Quick Start

Installation command: pip install awswrangler

⚠️ Starting with version 3.0, optional modules must be installed explicitly:
➡️ pip install 'awswrangler[redshift]'

import awswrangler as wr
import pandas as pd
from datetime import datetime

df = pd.DataFrame({"id": [1, 2], "value": ["foo", "boo"]})

# Storing data on Data Lake
wr.s3.to_parquet(
    df=df,
    path="s3://bucket/dataset/",
    dataset=True,
    database="my_db",
    table="my_table",
)

# Retrieving the data directly from Amazon S3
df = wr.s3.read_parquet("s3://bucket/dataset/", dataset=True)

# Retrieving the data from Amazon Athena
df = wr.athena.read_sql_query("SELECT * FROM my_table", database="my_db")

# Get a Redshift connection from the Glue Catalog and retrieve data from Redshift Spectrum
con = wr.redshift.connect("my-glue-connection")
df = wr.redshift.read_sql_query("SELECT * FROM external_schema.my_table", con=con)
con.close()

# Amazon Timestream Write
df = pd.DataFrame({
    "time": [datetime.now(), datetime.now()],
    "my_dimension": ["foo", "boo"],
    "measure": [1.0, 1.1],
})
rejected_records = wr.timestream.write(
    df,
    database="sampleDB",
    table="sampleTable",
    time_col="time",
    measure_col="measure",
    dimensions_cols=["my_dimension"],
)

# Amazon Timestream Query
wr.timestream.query("""
SELECT time, measure_value::double, my_dimension
FROM "sampleDB"."sampleTable" ORDER BY time DESC LIMIT 3
""")

At scale

AWS SDK for pandas can also run your workflows at scale by leveraging Modin and Ray. Both projects aim to speed up data workloads by distributing processing over a cluster of workers.

Read our docs or head to our latest tutorials to learn more.
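As an illustration, a minimal sketch of the distributed mode, assuming awswrangler was installed with the modin and ray extras (pip install 'awswrangler[modin,ray]') and valid AWS credentials are available; the bucket path is hypothetical, and wr.engine / wr.memory_format are the configuration entry points described in the 3.x distributed docs:

```python
# Sketch only: needs `pip install 'awswrangler[modin,ray]'` and AWS credentials.
import awswrangler as wr

# With the extras installed, the distributed engine is picked up automatically;
# it can also be selected explicitly:
wr.engine.set("ray")           # distribute work across Ray workers
wr.memory_format.set("modin")  # return Modin (not pandas) DataFrames

# Same API as the Quick Start: this read is now executed in parallel
# across the cluster and returns a Modin DataFrame.
df = wr.s3.read_parquet(path="s3://my-bucket/big-dataset/")  # hypothetical path
```

The point of the design is that no call sites change: the same wr.s3 / wr.athena functions dispatch to the distributed backend once it is configured.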

Getting Help

The best way to interact with our team is through GitHub. You can open an issue and choose from one of our templates for bug reports, feature requests, and more. You may also find help on our community resources.

Logging

Examples of enabling internal logging:

import logging

logging.basicConfig(level=logging.INFO, format="[%(name)s][%(funcName)s] %(message)s")
logging.getLogger("awswrangler").setLevel(logging.DEBUG)
logging.getLogger("botocore.credentials").setLevel(logging.CRITICAL)

Inside an AWS Lambda function:

import logging

logging.getLogger("awswrangler").setLevel(logging.DEBUG)
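The format string and logger name above can be sanity-checked locally with the standard library alone, no awswrangler install needed. This sketch wires the same "awswrangler" logger name and "[%(name)s][%(funcName)s] %(message)s" format to an in-memory stream; the fetch function and its message are hypothetical stand-ins:

```python
import logging
from io import StringIO

# Route records for the "awswrangler" logger name to an in-memory buffer,
# using the same format string as the README example.
stream = StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("[%(name)s][%(funcName)s] %(message)s"))

logger = logging.getLogger("awswrangler")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

def fetch():
    # With the level set to DEBUG, this record passes the filter
    logger.debug("resolving credentials")

fetch()
print(stream.getvalue().strip())  # [awswrangler][fetch] resolving credentials
```

Note that %(funcName)s resolves to the function that emitted the record, which is what makes this format useful when tracing calls through the library.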

