Complete SeaDatabricksClient (Execution Phase) #580
Conversation
* Modified the .gitignore file to exclude the .idea directory
* [PECO-1803] Split the PySQL connector into core and non-core parts (#417)
* Implemented ColumnQueue to test fetchall without pyarrow; removed token
* Corrected the order of fields in row
* Changed the folder structure and tested the basic setup
* Refactored the code to make the connector work
* Basic setup of connector, core, and sqlalchemy is working
* Basic integration of core, connect, and sqlalchemy is working
* Set up working dynamic switch from ColumnQueue to ArrowQueue
* Refactored the test code and moved it to the respective folders
* Added the unit test for column_queue; fixed __version__
* Added venv_main to .gitignore
* Added code for merging columnar tables
* Fixed the retry_close session test issue with logging
* Fixed the databricks_sqlalchemy tests and introduced pytest.ini for the sqlalchemy testing
* Added the pyarrow_test mark on pytest
* Fixed databricks.sqlalchemy imports to databricks_sqlalchemy
* Added poetry.lock
* Added dist folder
* Changed pyproject.toml
* Minor fix
* Added the pyarrow skip tag on unit tests and verified they work
* Fixed the Decimal and timestamp conversion issue in the non-Arrow pipeline
* Removed files that were not required and reformatted
* Fixed test_retry error
* Changed the folder structure to src/databricks
* Moved the columnar non-Arrow flow to another PR
* Moved the README to the root
* Removed the ColumnQueue instance
* Removed the databricks_sqlalchemy dependency in core
* Changed the pysql_supports_arrow predicate; introduced changes in pyproject.toml
* Ran the Black formatter with the original version
* Removed the extra .py from all __init__.py file names
* Undid the formatting check
* Check (repeated CI check commits)
* Big update; refactored code
* Fixed versioning; minor refactoring
* Changed the folder structure so sqlalchemy is no longer referenced here
* Fixed README.md and CONTRIBUTING.md
* Added manual publish
* Added on-push trigger
* Manually set the publish step
* Changed versioning in pyproject.toml
* Bumped the version to 4.0.0b3 and made pyarrow optional
* Removed the sqlalchemy tests from the integration.yml file
* [PECO-1803] Print warning message if pyarrow is not installed (#468)
* [PECO-1803] Remove sqlalchemy and update README.md (#469)
* Removed all sqlalchemy-related code
* Generated the lock file
* Fixed failing tests
* Removed poetry.lock, then updated the lock file
* Fixed the poetry numpy 2.2.2 issue
* Workflow fixes

Signed-off-by: Jacky Hu <jacky.hu@databricks.com>
Co-authored-by: Jacky Hu <jacky.hu@databricks.com>
Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
* Removed Python 3.8 support
* Minor fix

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
Support for Python up to 3.12

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
Bumped up the version

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
* Remove upper caps on dependencies (#452)
* Removed upper caps on the numpy and pyarrow versions
* Added CI/CD up to Python 3.13
* Specified pandas 2.2.3 as the lower bound for Python 3.13
* Specified pyarrow 18.0.0 as the lower bound for Python 3.13
* Moved `numpy` to dev dependencies
* Updated the lockfile

Signed-off-by: David Black <dblack@atlassian.com>
Signed-off-by: Dave Hirschfeld <dave.hirschfeld@gmail.com>
Co-authored-by: David Black <dblack@atlassian.com>
Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
* Improved debugging and added a PR review template
* Fixed the case sensitivity of the PR template

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
* Base changes
* Black formatter
* Cache version fix
* Added the changed test_retry.py file
* retry_test_mixins changes

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
Updated the CODEOWNERS

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
Signed-off-by: Shivam Raj <shivam.raj@databricks.com>
Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
* Added a check for 2 formats
* Wrote unit tests
* Added more supported formats
* Added the "T" format datetime
* Added more timestamp formats
* Added the python-dateutil library

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
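The multi-format timestamp handling these commits describe can be sketched roughly as follows. The format list and function name here are illustrative, not the connector's actual code; the python-dateutil fallback is treated as optional:

```python
# Sketch (not the connector's real implementation): try a set of explicit
# timestamp formats first, then fall back to python-dateutil if available.
from datetime import datetime

try:
    from dateutil import parser as dateutil_parser  # optional fallback
except ImportError:
    dateutil_parser = None

# Hypothetical format list: space-separated and ISO 8601 "T" variants,
# with and without fractional seconds.
_FORMATS = [
    "%Y-%m-%d %H:%M:%S.%f",
    "%Y-%m-%d %H:%M:%S",
    "%Y-%m-%dT%H:%M:%S.%f",
    "%Y-%m-%dT%H:%M:%S",
]


def parse_timestamp(value: str) -> datetime:
    """Try the known formats in order, then dateutil's flexible parser."""
    for fmt in _FORMATS:
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    if dateutil_parser is not None:
        return dateutil_parser.parse(value)
    raise ValueError(f"Unrecognized timestamp format: {value!r}")
```

Trying explicit `strptime` formats first keeps the common paths fast and deterministic, while dateutil catches the long tail of formats.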
Signed-off-by: Shivam Raj <shivam.raj@databricks.com>
Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
Changed the bound for python-dateutil

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
Added examples and fixed the async execute not working without pyarrow

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
* Added a version check
* Removed the packaging dependency

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
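A minimal sketch of what a version check without the third-party `packaging` dependency might look like. The helper names are hypothetical, not the connector's real ones, and this handles only plain dotted versions (no pre-release suffixes):

```python
# Sketch, assuming the goal is tuple-based version comparison without
# depending on the `packaging` library. Only numeric dotted versions.
def version_tuple(version: str) -> tuple:
    """Convert a version string like '18.0.0' into (18, 0, 0)."""
    return tuple(int(part) for part in version.split("."))


def meets_minimum(installed: str, minimum: str) -> bool:
    """True when the installed version is at or above the minimum."""
    return version_tuple(installed) >= version_tuple(minimum)
```

Tuple comparison in Python is element-wise, so `(18, 0, 0) >= (4, 0, 0)` compares major versions numerically rather than lexically (where `"18" < "4"` would be wrong).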
Updated the version to 4.0.3

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
Ensure maintenance of the current APIs of Connection while delegating responsibility

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
… through Connection

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
as in CONTRIBUTING.md

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
In case openSession takes long, the initialisation of the session will not complete immediately, which can make the session attribute inaccessible. If the Connection is deleted during this window, the open() check will throw because the session attribute does not exist. We therefore default to treating the Connection as closed in this case. This was not an issue before because open was a direct attribute of the Connection class. Caught in the integration tests.

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
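The failure mode and the guard described above can be sketched like this. The class and attribute names are illustrative stand-ins, not the connector's exact ones; the point is that `open` must not assume `self.session` was ever assigned:

```python
# Sketch of the race: if Session construction is slow and the Connection
# is garbage-collected mid-initialisation, __del__ runs before
# self.session exists, so `open` must default to "closed".
class Session:
    def __init__(self, **kwargs):
        self.is_open = True  # real code would call the server's openSession

    def close(self):
        self.is_open = False


class Connection:
    def __init__(self, **kwargs):
        # If this line is slow (or raises), self.session may never be set.
        self.session = Session(**kwargs)

    @property
    def open(self) -> bool:
        # Guard: a half-constructed Connection has no `session` attribute,
        # so treat it as closed rather than raising AttributeError.
        return self.session.is_open if hasattr(self, "session") else False

    def close(self):
        self.session.close()

    def __del__(self):
        # Safe even on a half-constructed object, thanks to the guard.
        if self.open:
            self.close()
```

Before the refactor, `open` was a plain attribute set in `__init__`, so this window did not exist; once it became a property delegating to the session, the `hasattr` default is what keeps `__del__` from throwing.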
WARNING: incomplete - the result set is still aligned heavily with Thrift, and the necessity of some defined functions is yet to be validated

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
New CODEOWNERS

Signed-off-by: varun-edachali-dbx <varun.edachali@databricks.com>
Thanks for your contribution! To satisfy the DCO policy in our contributing guide, every commit message must include a sign-off message. One or more of your commits is missing this message. You can reword previous commit messages with an interactive rebase.
What type of PR is this?
Description
Implement execution relevant methods (as defined in the backend interface) for the SEA client.
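As a rough illustration of what "execution relevant methods on a backend interface" could look like, here is a sketch. All class names, method names, and signatures below are assumptions for discussion, not the actual `SeaDatabricksClient` API; the real SEA client would issue HTTP calls to the SQL Statement Execution API rather than return stub dicts:

```python
# Illustrative sketch only: a backend interface with execution methods and
# a stub SEA implementation. Names and signatures are hypothetical.
from abc import ABC, abstractmethod


class DatabricksClient(ABC):
    """Hypothetical backend interface shared by Thrift and SEA clients."""

    @abstractmethod
    def execute_command(self, session_id: str, operation: str) -> dict: ...

    @abstractmethod
    def cancel_command(self, command_id: str) -> dict: ...

    @abstractmethod
    def close_command(self, command_id: str) -> dict: ...


class SeaDatabricksClient(DatabricksClient):
    """Stub: the real client would POST to the SQL Execution REST API."""

    def execute_command(self, session_id: str, operation: str) -> dict:
        return {"session_id": session_id, "operation": operation,
                "status": "PENDING"}

    def cancel_command(self, command_id: str) -> dict:
        return {"command_id": command_id, "status": "CANCELED"}

    def close_command(self, command_id: str) -> dict:
        return {"command_id": command_id, "status": "CLOSED"}
```

Defining the execution surface as an abstract base class lets `Connection` and `Cursor` stay backend-agnostic: they call the interface, and either the Thrift or the SEA implementation is plugged in underneath.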
How is this tested?
Related Tickets & Documents
https://docs.google.com/document/d/1Y-eXLhNqqhrMVGnOlG8sdFrCxBTN1GdQvuKG4IfHmo0/edit?usp=sharing
https://databricks.atlassian.net/browse/PECOBLR-483?atlOrigin=eyJpIjoiODU3ODU0ZjEwN2VkNDI1Y2E4ODFiZDBlMDY4MmNjYjMiLCJwIjoiaiJ9