A drop-in replacement for dplyr, powered by DuckDB for speed.

dplyr is the grammar of data manipulation in the tidyverse. The duckplyr package will run all of your existing dplyr code with identical results, using DuckDB where possible to compute the results faster. In addition, you can analyze larger-than-memory datasets straight from files on your disk or from the web.

If you are new to dplyr, the best place to start is the data transformation chapter in R for Data Science.
Install duckplyr from CRAN with:
install.packages("duckplyr")
You can also install the development version of duckplyr from R-universe:
install.packages("duckplyr",repos= c("https://tidyverse.r-universe.dev","https://cloud.r-project.org"))
Or from GitHub with:
# install.packages("pak")pak::pak("tidyverse/duckplyr")
Calling `library(duckplyr)` overwrites dplyr methods, enabling duckplyr for the entire session.
```r
library(conflicted)
library(duckplyr)
#> Loading required package: dplyr
#> ✔ Overwriting dplyr methods with duckplyr methods.
#> ℹ Turn off with `duckplyr::methods_restore()`.
conflict_prefer("filter", "dplyr", quiet = TRUE)
```
The following code aggregates the inflight delay by year and month for the first half of the year. We use a variant of the `nycflights13::flights` dataset, where the timezone has been set to UTC to work around a current limitation of duckplyr, see `vignette("limits")`.
```r
flights_df()
#> # A tibble: 336,776 × 19
#>     year month   day dep_time sched_d…¹ dep_d…² arr_t…³ sched…⁴ arr_d…⁵
#>    <int> <int> <int>    <int>     <int>   <dbl>   <int>   <int>   <dbl>
#>  1  2013     1     1      517       515       2     830     819      11
#>  2  2013     1     1      533       529       4     850     830      20
#>  3  2013     1     1      542       540       2     923     850      33
#>  4  2013     1     1      544       545      -1    1004    1022     -18
#>  5  2013     1     1      554       600      -6     812     837     -25
#>  6  2013     1     1      554       558      -4     740     728      12
#>  7  2013     1     1      555       600      -5     913     854      19
#>  8  2013     1     1      557       600      -3     709     723     -14
#>  9  2013     1     1      557       600      -3     838     846      -8
#> 10  2013     1     1      558       600      -2     753     745       8
#> # ℹ 336,766 more rows
#> # ℹ abbreviated names: ¹sched_dep_time, ²dep_delay, ³arr_time,
#> #   ⁴sched_arr_time, ⁵arr_delay
#> # ℹ 10 more variables: carrier <chr>, flight <int>, tailnum <chr>,
#> #   origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
#> #   hour <dbl>, minute <dbl>, time_hour <dttm>

out <- flights_df() |>
  filter(!is.na(arr_delay), !is.na(dep_delay)) |>
  mutate(inflight_delay = arr_delay - dep_delay) |>
  summarize(
    .by = c(year, month),
    mean_inflight_delay = mean(inflight_delay),
    median_inflight_delay = median(inflight_delay),
  ) |>
  filter(month <= 6)
```
The result is a plain tibble:
```r
class(out)
#> [1] "tbl_df" "tbl" "data.frame"
```
Nothing has been computed yet. Querying the number of rows, or a column, starts the computation:
```r
out$month
#> [1] 1 2 3 4 5 6
```
Note that, unlike dplyr, the results are not ordered, see `?config` for details. However, once materialized, the results are stable:
```r
out
#> # A tibble: 6 × 4
#>    year month mean_inflight_delay median_inflight_delay
#>   <int> <int>               <dbl>                 <dbl>
#> 1  2013     1               -3.86                    -5
#> 2  2013     2               -5.15                    -6
#> 3  2013     3               -7.36                    -9
#> 4  2013     4               -2.67                    -5
#> 5  2013     5               -9.37                   -10
#> 6  2013     6               -4.24                    -7
```
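If downstream code relies on a specific row order, sort explicitly; a minimal sketch using the standard dplyr verb `arrange()` on the columns from the example above:

```r
# Sort explicitly to get a deterministic row order regardless of backend.
out |>
  arrange(year, month)
```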
If a computation is not supported by DuckDB, duckplyr will automatically fall back to dplyr.
```r
flights_df() |>
  summarize(
    .by = origin,
    dest = paste(sort(unique(dest)), collapse = " ")
  )
#> # A tibble: 3 × 2
#>   origin dest
#>   <chr>  <chr>
#> 1 EWR    ALB ANC ATL AUS AVL BDL BNA BOS BQN BTV BUF BWI BZN CAE CHS C…
#> 2 LGA    ATL AVL BGR BHM BNA BOS BTV BUF BWI CAE CAK CHO CHS CLE CLT C…
#> 3 JFK    ABQ ACK ATL AUS BHM BNA BOS BQN BTV BUF BUR BWI CHS CLE CLT C…
```
Restart R, or call `duckplyr::methods_restore()`, to revert to the default dplyr implementation.
```r
duckplyr::methods_restore()
#> ℹ Restoring dplyr methods.
```
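To switch duckplyr back on later in the same session, the methods can be overwritten again. A minimal sketch, assuming `duckplyr::methods_overwrite()` as the counterpart to `methods_restore()` (check the package reference for the exact name):

```r
# Re-enable duckplyr methods for the rest of the session
# (assumed counterpart to methods_restore()).
duckplyr::methods_overwrite()
```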
An extended variant of the `nycflights13::flights` dataset is also available for download as Parquet files.
```r
year <- 2022:2024
base_url <- "https://blobs.duckdb.org/flight-data-partitioned/"
files <- paste0("Year=", year, "/data_0.parquet")
urls <- paste0(base_url, files)
tibble(urls)
#> # A tibble: 3 × 1
#>   urls
#>   <chr>
#> 1 https://blobs.duckdb.org/flight-data-partitioned/Year=2022/data_0.pa…
#> 2 https://blobs.duckdb.org/flight-data-partitioned/Year=2023/data_0.pa…
#> 3 https://blobs.duckdb.org/flight-data-partitioned/Year=2024/data_0.pa…
```
Using the httpfs DuckDB extension, we can query these files directly from R, without even downloading them first.
db_exec("INSTALL httpfs")db_exec("LOAD httpfs")flights<- read_parquet_duckdb(urls)
Like with local data frames, queries on the remote data are executed lazily. Unlike with local data frames, the default is to disallow automatic materialization if the result is too large, in order to protect memory: the results are not materialized until explicitly requested, for instance with a `collect()` call.
```r
nrow(flights)
#> Error: Materialization would result in more than 9090 rows. Use collect() or as_tibble() to materialize.
```
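When you do want the full result in memory, materialize it explicitly. A minimal sketch using `collect()`, as suggested by the error message above; the hypothetical filter only narrows the data so the result fits comfortably in memory:

```r
# Explicitly materialize a (filtered) remote query into a local tibble.
# Only run this if the result is small enough for your machine's memory.
flights_jan_2023 <- flights |>
  filter(Year == 2023, Month == 1) |>
  collect()
```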
For printing, only the first few rows of the result are fetched.
```r
flights
#> # A duckplyr data frame: 110 variables
#>     Year Quarter Month DayofMonth DayOfWeek FlightDate Report…¹ DOT_I…²
#>    <dbl>   <dbl> <dbl>      <dbl>     <dbl> <date>     <chr>      <dbl>
#>  1  2022       1     1         14         5 2022-01-14 YX         20452
#>  2  2022       1     1         15         6 2022-01-15 YX         20452
#>  3  2022       1     1         16         7 2022-01-16 YX         20452
#>  4  2022       1     1         17         1 2022-01-17 YX         20452
#>  5  2022       1     1         18         2 2022-01-18 YX         20452
#>  6  2022       1     1         19         3 2022-01-19 YX         20452
#>  7  2022       1     1         20         4 2022-01-20 YX         20452
#>  8  2022       1     1         21         5 2022-01-21 YX         20452
#>  9  2022       1     1         22         6 2022-01-22 YX         20452
#> 10  2022       1     1         23         7 2022-01-23 YX         20452
#> # ℹ more rows
#> # ℹ abbreviated names: ¹Reporting_Airline, ²DOT_ID_Reporting_Airline
#> # ℹ 102 more variables: IATA_CODE_Reporting_Airline <chr>,
#> #   Tail_Number <chr>, Flight_Number_Reporting_Airline <dbl>,
#> #   OriginAirportID <dbl>, OriginAirportSeqID <dbl>,
#> #   OriginCityMarketID <dbl>, Origin <chr>, OriginCityName <chr>,
#> #   OriginState <chr>, OriginStateFips <chr>, OriginStateName <chr>,
#> #   OriginWac <dbl>, DestAirportID <dbl>, DestAirportSeqID <dbl>,
#> #   DestCityMarketID <dbl>, Dest <chr>, DestCityName <chr>,
#> #   DestState <chr>, DestStateFips <chr>, DestStateName <chr>,
#> #   DestWac <dbl>, CRSDepTime <chr>, DepTime <chr>, DepDelay <dbl>,
#> #   DepDelayMinutes <dbl>, DepDel15 <dbl>, …
```
```r
flights |>
  count(Year)
#> # A duckplyr data frame: 2 variables
#>    Year       n
#>   <dbl>   <int>
#> 1  2022 6729125
#> 2  2023 6847899
#> 3  2024 3461319
```
Complex queries can be executed on the remote data. Note how only the relevant columns are fetched and the 2024 data isn't even touched, as it's not needed for the result.
```r
out <- flights |>
  mutate(InFlightDelay = ArrDelay - DepDelay) |>
  summarize(
    .by = c(Year, Month),
    MeanInFlightDelay = mean(InFlightDelay, na.rm = TRUE),
    MedianInFlightDelay = median(InFlightDelay, na.rm = TRUE),
  ) |>
  filter(Year < 2024)

out |>
  explain()
#> ┌───────────────────────────┐
#> │       HASH_GROUP_BY       │
#> │    ────────────────────   │
#> │          Groups:          │
#> │             #0            │
#> │             #1            │
#> │                           │
#> │        Aggregates:        │
#> │          mean(#2)         │
#> │         median(#3)        │
#> │                           │
#> │       ~6729125 Rows       │
#> └─────────────┬─────────────┘
#> ┌─────────────┴─────────────┐
#> │         PROJECTION        │
#> │    ────────────────────   │
#> │            Year           │
#> │           Month           │
#> │       InFlightDelay       │
#> │       InFlightDelay       │
#> │                           │
#> │       ~13458250 Rows      │
#> └─────────────┬─────────────┘
#> ┌─────────────┴─────────────┐
#> │         PROJECTION        │
#> │    ────────────────────   │
#> │            Year           │
#> │           Month           │
#> │       InFlightDelay       │
#> │                           │
#> │       ~13458250 Rows      │
#> └─────────────┬─────────────┘
#> ┌─────────────┴─────────────┐
#> │        READ_PARQUET       │
#> │    ────────────────────   │
#> │         Function:         │
#> │        READ_PARQUET       │
#> │                           │
#> │        Projections:       │
#> │            Year           │
#> │           Month           │
#> │          DepDelay         │
#> │          ArrDelay         │
#> │                           │
#> │       File Filters:       │
#> │  (CAST(Year AS DOUBLE) <  │
#> │          2024.0)          │
#> │                           │
#> │    Scanning Files: 2/3    │
#> │                           │
#> │      ~13458250 Rows       │
#> └───────────────────────────┘

out |>
  print() |>
  system.time()
#> # A duckplyr data frame: 4 variables
#>     Year Month MeanInFlightDelay MedianInFlightDelay
#>    <dbl> <dbl>             <dbl>               <dbl>
#>  1  2022    11             -5.21                  -7
#>  2  2023    11             -7.10                  -8
#>  3  2022     8             -5.27                  -7
#>  4  2023     4             -4.54                  -6
#>  5  2022     7             -5.13                  -7
#>  6  2022     4             -4.88                  -6
#>  7  2023     8             -5.73                  -7
#>  8  2023     7             -4.47                  -7
#>  9  2022     2             -6.52                  -8
#> 10  2023     5             -6.17                  -7
#> # ℹ more rows
#>    user  system elapsed
#>   1.145   0.455   9.402
```
Over 10 million rows analyzed in about 10 seconds over the internet: not bad. Of course, working with Parquet, CSV, or JSON files downloaded locally is possible as well.
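A minimal sketch of the local-file workflow, assuming hypothetical local paths and the `read_parquet_duckdb()` / `read_csv_duckdb()` ingestion functions:

```r
# Hypothetical local files; replace the paths with files on your disk.
local_flights <- read_parquet_duckdb(c(
  "data/Year=2022/data_0.parquet",
  "data/Year=2023/data_0.parquet"
))

# CSV files can be ingested the same way:
# local_csv <- read_csv_duckdb("data/flights.csv")
```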
For full compatibility, `na.rm = FALSE` is the default in the aggregation functions:
```r
flights |>
  summarize(mean(ArrDelay - DepDelay))
#> # A duckplyr data frame: 1 variable
#>   `mean(ArrDelay - DepDelay)`
#>                         <dbl>
#> 1                          NA
```
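To ignore missing values, pass `na.rm = TRUE` explicitly, exactly as in the aggregation earlier; a minimal sketch (output not shown):

```r
# Drop missing values in the aggregate, as in plain dplyr.
flights |>
  summarize(mean(ArrDelay - DepDelay, na.rm = TRUE))
```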
vignette("large")
:Tools for working with large datavignette("prudence")
:How duckplyr can help protect memory when working with large datavignette("fallback")
:How the fallback to dplyr works internallyvignette("limits")
:Translation of dplyr employed by duckplyr, and current limitationsvignette("developers")
:Using duckplyr for individual data frames and in other packagesvignette("telemetry")
:Telemetry in duckplyr
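If you prefer not to overwrite the dplyr methods for the whole session, duckplyr can also be used on individual data frames, as described in `vignette("developers")`. A minimal sketch, assuming an `as_duckdb_tibble()` constructor is available (treat the exact name as an assumption and check the vignette):

```r
# Use duckplyr for a single data frame without attaching the package.
cars_duck <- duckplyr::as_duckdb_tibble(mtcars)

# dplyr verbs dispatch to the duckplyr methods for this one object.
cars_duck |>
  dplyr::summarize(.by = cyl, mean_mpg = mean(mpg))
```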
If you encounter a clear bug, please file an issue with a minimal reproducible example on GitHub. For questions and other discussion, please use forum.posit.co.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.