# athena-sqlite

A SQLite driver for S3 and Amazon Athena 😳
Using Athena's new Query Federation functionality, read SQLite databases from S3.

Install it from the Serverless Application Repository: AthenaSQLiteConnector.
I occasionally like to put together fun side projects over the Thanksgiving and Christmas holidays.
I'd always joked it would be a crazy idea to be able to read SQLite using Athena, so... here we are!
- I decided to use Python as I'm most familiar with it and because of the next point
- Using APSW, we can implement a Virtual File System (VFS) for S3 (see the sketch after this list)
- Using the Athena query federation example, we can see what calls need to be implemented
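To make the VFS idea concrete, here's a minimal, read-only sketch of the approach, assuming boto3 and APSW are available. The class names `S3VFS`/`S3VFSFile` and all of the details are illustrative assumptions rather than the repo's actual `vfs.py`, and the sketch glosses over the journal handling, URI parameters, and error handling a real implementation needs.

```python
import apsw
import boto3

s3 = boto3.client("s3")


class S3VFSFile:
    """Read-only SQLite 'file' whose xRead calls become S3 ranged GETs (illustrative)."""

    def __init__(self, bucket, key):
        self.bucket = bucket
        self.key = key
        # One HEAD request up front so xFileSize doesn't hit S3 every time
        self.length = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]

    def xRead(self, amount, offset):
        # SQLite asks for specific byte ranges (the header, then individual pages)
        resp = s3.get_object(
            Bucket=self.bucket,
            Key=self.key,
            Range=f"bytes={offset}-{offset + amount - 1}",
        )
        return resp["Body"].read()

    def xFileSize(self):
        return self.length

    # Read-only use inside a single Lambda invocation: locking/syncing are no-ops
    def xLock(self, level):
        pass

    def xUnlock(self, level):
        pass

    def xCheckReservedLock(self):
        return False

    def xFileControl(self, op, ptr):
        return False

    def xSectorSize(self):
        return 4096

    def xDeviceCharacteristics(self):
        return 0

    def xSync(self, flags):
        pass

    def xClose(self):
        pass


class S3VFS(apsw.VFS):
    def __init__(self, bucket, name="s3vfs"):
        self.bucket = bucket
        apsw.VFS.__init__(self, name, "")  # register with SQLite under `name`

    def xOpen(self, name, flags):
        # APSW may hand us a plain str or an apsw.URIFilename
        key = name.filename() if hasattr(name, "filename") else name
        return S3VFSFile(self.bucket, key)

    def xAccess(self, pathname, flags):
        return False  # pretend journals/WAL files don't exist; we only serve the db

    def xFullPathname(self, pathname):
        return pathname  # keys are already "absolute" as far as we're concerned


# Usage sketch: register the VFS once, then open S3 keys as if they were files
# S3VFS(bucket="<TARGET_BUCKET>")
# conn = apsw.Connection("<TARGET_PREFIX>/sample_data.sqlite",
#                        flags=apsw.SQLITE_OPEN_READONLY, vfs="s3vfs")
```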
The PyArrow library unfortunately weighs in at over 250MB, so we have to use a custom compilation step to build a Lambda Layer.
Drop SQLite databases in a single prefix in S3, and Athena will list each file as a database and automatically detect tables and schemas. For example, the `sample_data.sqlite` file uploaded below shows up in Athena as a database named `sample_data`.
Currently, all data types are returned as strings. I'll fix this eventually. All good things in time.
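On the wire, Athena's federation protocol exchanges results as Apache Arrow record batches, which is why PyArrow has to be squeezed into the layer at all. As a hedged illustration of the all-strings limitation (this is not the repo's actual serialization code, and the column names are made up), declaring every column as a string looks roughly like this:

```python
import pyarrow as pa

# Every column is declared as a string for now, matching the current limitation
schema = pa.schema([("id", pa.string()), ("name", pa.string())])
batch = pa.RecordBatch.from_arrays(
    [pa.array(["1", "2"]), pa.array(["alice", "bob"])],
    schema=schema,
)
print(batch.num_rows, batch.schema)
```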
This project is under active development and very much in its infancy.
Many things are hard-coded or broken into various pieces as I experiment and figure out how everything works.
The documentation is a work in progress. It currently sits somewhere between me creating the resources manually and building the assets for the AWS SAR, and most of these docs will eventually be automated away.
- Docker
- Python 3.7
First you need to build the Lambda layer. There are two Dockerfiles and build scripts in the `lambda-layer/` directory.

We'll execute each of the build scripts and copy the results to the target directory. This is referenced by the SAR template, `athena-sqlite.yaml`.

```bash
cd lambda-layer
./build.sh
./build-pyarrow.sh
cp -R layer/ ../target/
```
For the purpose of this test, we just have a sample SQLite database you can upload.

```bash
aws s3 cp sample-data/sample_data.sqlite s3://<TARGET_BUCKET>/<TARGET_PREFIX>/
```
Feel free to upload your own SQLite databases as well!
There are three components to the Lambda code:

- `vfs.py` - A SQLite Virtual File System implementation for S3
- `s3qlite.py` - The actual Lambda function that handles Athena metadata/data requests
- `sqlite_db.py` - Helper functions for accessing SQLite databases on S3 (a hedged sketch of how these pieces fit together follows the list)
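To show how these pieces might fit together, here's a hedged sketch. The helper name `list_tables` and its details are assumptions for illustration, not lifted from `sqlite_db.py`: with the S3 VFS registered, the database is opened through it and `sqlite_master` answers Athena's table-listing metadata requests.

```python
import apsw

# Assumes the S3 VFS sketched earlier has been registered under the name "s3vfs"


def list_tables(db_key):
    """Open an S3-hosted SQLite file through the VFS and return its table names."""
    conn = apsw.Connection(db_key, flags=apsw.SQLITE_OPEN_READONLY, vfs="s3vfs")
    cursor = conn.cursor()
    # sqlite_master lists every table in the database; Athena's metadata
    # requests for tables and schemas can be answered from it
    return [row[0] for row in cursor.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    )]


# e.g. list_tables("<TARGET_PREFIX>/sample_data.sqlite") -> ["records", ...]
```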
Create a function with the code in `lambda-function/s3qlite.py` that uses the previously created layer. The handler will be `s3qlite.lambda_handler`.
Also include the `vfs.py` and `sqlite_db.py` files in your Lambda function.
Configure two environment variables for your Lambda function (a sketch of how they might be used follows the list):

- `TARGET_BUCKET` - The name of the S3 bucket where your SQLite files live
- `TARGET_PREFIX` - The prefix (e.g. `data/sqlite`) that you uploaded the sample SQLite database to
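As a hedged illustration of how these variables might be consumed (the function name and the assumption that databases end in `.sqlite` like the sample are mine, not the repo's), a single ListObjectsV2 call under the prefix yields one Athena database per file:

```python
import os

import boto3

s3 = boto3.client("s3")

TARGET_BUCKET = os.environ["TARGET_BUCKET"]
TARGET_PREFIX = os.environ["TARGET_PREFIX"]


def list_databases():
    """Return one database name per SQLite file under the configured prefix."""
    resp = s3.list_objects_v2(Bucket=TARGET_BUCKET, Prefix=TARGET_PREFIX)
    names = []
    for obj in resp.get("Contents", []):
        filename = obj["Key"].rsplit("/", 1)[-1]
        if filename.endswith(".sqlite"):
            # sample_data.sqlite -> Athena database "sample_data"
            names.append(filename[: -len(".sqlite")])
    return names
```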
Note that the IAM role you associate the function with will also need `s3:GetObject` and `s3:ListBucket` access to wherever your lovely SQLite databases are stored.
Follow the Athena documentation for Connecting to a data source. The primary thing to note here is that you need to create a workgroup named `AmazonAthenaPreviewFunctionality` and use that for your testing. Some functionality will work in the primary workgroup, but you'll get weird errors when you try to query data.

I named my function `s3qlite` :)
Here are a couple of basic queries that should work:

```sql
SELECT * FROM "s3qlite"."sample_data"."records" LIMIT 10;
SELECT COUNT(*) FROM "s3qlite"."sample_data"."records";
```
If you deploy the SAR app, the data catalog isn't registered automatically, but you can still run queries by using the special `lambda:` schema:

```sql
SELECT * FROM "lambda:s3qlite".sample_data.records LIMIT 10;
```

where `s3qlite` is the value you provided for the `AthenaCatalogName` parameter.
- Move these into issues :)
- Move `vfs.py` into its own module
- Maybe add write support to it someday 😱
- Publish to SAR
- Add tests...always tests
- struct types, probably
- Don't read the entire file every time :)
- Escape column names with invalid characters
- Implement recursive listing
These are mostly notes I made while figuring out how to get SAR working.
Need to grant SAR access to the bucket:

```bash
aws s3api put-bucket-policy --bucket <BUCKET> --region us-east-1 --policy '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "serverlessrepo.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<BUCKET>/*"
        }
    ]
}'
```
For publishing to the SAR, we just execute two commands:

```bash
sam package --template-file athena-sqlite.yaml --s3-bucket <BUCKET> --output-template-file target/out.yaml
sam publish --template target/out.yaml --region us-east-1
```
If you want to deploy using CloudFormation, use this command:
```bash
sam deploy --template-file ./target/out.yaml --stack-name athena-sqlite --capabilities CAPABILITY_IAM --parameter-overrides 'DataBucket=<BUCKET> DataPrefix=tmp/sqlite' --region us-east-1
```