data load tool (dlt) is an open source Python library that makes data loading easy 🛠️
Be it a Google Colab notebook, AWS Lambda function, an Airflow DAG, your local laptop,
or a GPT-4 assisted development playground—dlt can be dropped in anywhere.
dlt supports Python 3.9+. Python 3.13 is supported but considered experimental at this time, as not all of dlt's extras support Python 3.13 yet. We additionally maintain a forked version of pendulum for 3.13 until a 3.13-compatible release is available.
```sh
pip install dlt
```
More options: Install via Conda or Pixi
Load chess game data from chess.com API and save it in DuckDB:
```python
import dlt
from dlt.sources.helpers import requests

# Create a dlt pipeline that will load
# chess player data to the DuckDB destination
pipeline = dlt.pipeline(
    pipeline_name='chess_pipeline',
    destination='duckdb',
    dataset_name='player_data'
)

# Grab some player data from Chess.com API
data = []
for player in ['magnuscarlsen', 'rpragchess']:
    response = requests.get(f'https://api.chess.com/pub/player/{player}')
    response.raise_for_status()
    data.append(response.json())

# Extract, normalize, and load the data
pipeline.run(data, table_name='player')
```
Try it out in our Colab Demo
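Once the pipeline has run, you can inspect the loaded rows directly in DuckDB. A minimal sketch, assuming the default DuckDB destination layout (a `chess_pipeline.duckdb` file in the working directory, with the dataset exposed as the `player_data` schema):

```python
import duckdb

# Assumes dlt's default DuckDB layout: <pipeline_name>.duckdb in the
# working directory, with dataset_name mapped to a schema of the same name.
conn = duckdb.connect('chess_pipeline.duckdb')

# Peek at the normalized player table created by pipeline.run(...)
print(conn.sql('SELECT * FROM player_data.player LIMIT 5'))

conn.close()
```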
- Automatic Schema: Data structure inspection and schema creation for the destination.
- Data Normalization: Consistent and verified data before loading.
- Seamless Integration: Colab, AWS Lambda, Airflow, and local environments.
- Scalable: Adapts to growing data needs in production.
- Easy Maintenance: Clear data pipeline structure for updates.
- Rapid Exploration: Quickly explore and gain insights from new data sources.
- Versatile Usage: Suitable for everything from ad-hoc exploration to advanced loading infrastructures.
- Start in Seconds with CLI: Powerful CLI for managing, deploying and inspecting local pipelines.
- Incremental Loading: Load only new or changed data and avoid reloading old records (see the sketch after this list).
- Open Source: Free and Apache 2.0 Licensed.
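For instance, incremental loading is driven by declaring a cursor field on a resource with dlt.sources.incremental. A minimal sketch, where the endpoint URL, the events resource name, and the updated_at field are hypothetical placeholders for your own source:

```python
import dlt
from dlt.sources.helpers import requests

@dlt.resource(table_name="events", write_disposition="append")
def events(
    # dlt tracks the highest seen "updated_at" value between runs
    updated_at=dlt.sources.incremental("updated_at", initial_value="1970-01-01T00:00:00Z")
):
    # Hypothetical API endpoint; only records newer than the stored
    # cursor value are requested on subsequent runs.
    response = requests.get(
        "https://api.example.com/events",
        params={"since": updated_at.last_value},
    )
    response.raise_for_status()
    yield response.json()

pipeline = dlt.pipeline(
    pipeline_name="events_pipeline",
    destination="duckdb",
    dataset_name="events_data",
)
pipeline.run(events())
```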
Explore ready-to-use sources (e.g. Google Sheets) in the Verified Sources docs and supported destinations (e.g. DuckDB) in the Destinations docs.
For detailed usage and configuration, please refer to the official documentation.
You can find examples for various use cases in the examples folder.
dlt follows semantic versioning with the MAJOR.MINOR.PATCH pattern:

- major: breaking changes and removed deprecations
- minor: new features, sometimes automatic migrations
- patch: bug fixes
We suggest that you allow only patch-level updates automatically:
- Using the Compatible Release Specifier. For example, dlt~=1.0.0 allows only versions >=1.0.0 and <1.1.
- Poetry caret requirements. Note that ^1.0 allows versions >=1.0 and <2.0 (minor updates included); to restrict to patch-level updates, use a tilde requirement such as ~1.0 (>=1.0 and <1.1).
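To see what a given specifier actually admits, the packaging library (used here purely for illustration, not as a dlt requirement) can evaluate candidate versions against it:

```python
from packaging.specifiers import SpecifierSet

# Compatible release specifier: >=1.0.0 and <1.1, i.e. patch updates only
patch_only = SpecifierSet("~=1.0.0")

for candidate in ["1.0.0", "1.0.7", "1.1.0", "2.0.0"]:
    print(candidate, candidate in patch_only)
# 1.0.0 True, 1.0.7 True, 1.1.0 False, 2.0.0 False
```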
The dlt project is quickly growing, and we're excited to have you join our community! Here's how you can get involved:
- Connect with the Community: Join other dlt users and contributors on our Slack
- Report issues and suggest features: Please use the GitHub Issues to report bugs or suggest new features. Before creating a new issue, make sure to search the tracker for possible duplicates and add a comment if you find one.
- Track progress of our work and our plans: Please check out our public GitHub project
- Contribute Verified Sources: Contribute your custom sources to the dlt-hub/verified-sources repository to help other folks handle their data tasks.
- Contribute code: Check out our contributing guidelines for information on how to make a pull request.
- Improve documentation: Help us enhance the dlt documentation.
dlt is released under the Apache 2.0 License.