indexes
Usage
```
view: my_view {
  derived_table: {
    indexes: ["order_id"]
    ...
  }
}
```

| | |
|---|---|
| Hierarchy | indexes (under derived_table) - or - indexes (under aggregate_table) |
| Default Value | None |
| Accepts | The names of one or more columns in a PDT or an aggregate table |
| Special Rules | indexes is supported only on specific dialects |
Definition
The indexes parameter lets you apply indexes to the columns of a persistent derived table (PDT) or an aggregate table. When you add more than one column, Looker creates one index for each column that you specify; it does not create a single, multi-column index. If the indexes parameter is missing, Looker will warn you to add one to improve query performance. Learn more about indexing persistent derived tables on the Derived tables in Looker documentation page.
See the Dialect support for indexes section on this page for the list of dialects that support indexes.

The indexes parameter works only with tables that are persistent, such as PDTs and aggregate tables. indexes is not supported for derived tables without a persistence strategy. In addition, the indexes parameter is not supported for derived tables that are defined using create_process or sql_create.
If you use indexes with Redshift, you will create an interleaved sort key. You can also create regular sort keys using sortkeys, but you cannot use both at the same time. Distribution keys can be created with distribution.
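As a rough sketch of that alternative, a Redshift PDT could use sortkeys and distribution in place of indexes; the column choices here are illustrative, not prescribed by this page:

```
view: customer_day_facts {
  derived_table: {
    sql:
      SELECT customer_id, DATE(order_time) AS date, COUNT(*) AS num_orders
      FROM order
      GROUP BY customer_id, DATE(order_time) ;;
    persist_for: "24 hours"
    # Regular (compound) sort key; cannot be combined with indexes,
    # which would create an interleaved sort key instead.
    sortkeys: ["customer_id"]
    # Distribution key for the Redshift table
    distribution: "customer_id"
  }
}
```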
Generally speaking, indexes should be applied to primary keys and date or time columns.
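The examples in the next section all apply indexes inside a derived_table. For an aggregate table, here is a minimal sketch; it assumes an orders Explore with a created_date dimension and a count measure, an orders_datagroup datagroup, and that indexes sits in the materialization block alongside the other persistence settings:

```
explore: orders {
  aggregate_table: daily_order_counts {
    query: {
      dimensions: [orders.created_date]
      measures: [orders.count]
    }
    materialization: {
      datagroup_trigger: orders_datagroup
      # Assumed placement: one index on the aggregate table's created_date column
      # (column name is illustrative)
      indexes: ["created_date"]
    }
  }
}
```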
Examples
For a traditional database (for example, MySQL or Postgres), create a customer_order_facts persistent derived table. The PDT should rebuild when the order_datagroup datagroup is triggered and will have an index on customer_id:
```
view: customer_order_facts {
  derived_table: {
    explore_source: order {
      column: customer_id { field: order.customer_id }
      column: lifetime_orders { field: order.lifetime_orders }
    }
    datagroup_trigger: order_datagroup
    indexes: ["customer_id"]
  }
}
```

For a traditional database, create a customer_order_facts persistent derived table that is based on a SQL query and applies an index on customer_id:
```
view: customer_order_facts {
  derived_table: {
    sql:
      SELECT customer_id, COUNT(*) AS lifetime_orders
      FROM order
      GROUP BY customer_id ;;
    persist_for: "24 hours"
    indexes: ["customer_id"]
  }
}
```

For a traditional database, create a customer_day_facts derived table with indexes on both customer_id and date:
```
view: customer_day_facts {
  derived_table: {
    sql:
      SELECT customer_id, DATE(order_time) AS date, COUNT(*) AS num_orders
      FROM order
      GROUP BY customer_id, DATE(order_time) ;;
    persist_for: "24 hours"
    indexes: ["customer_id", "date"]
  }
}
```

For a Redshift database, create a customer_day_facts derived table with an interleaved sort key built from customer_id and date:
```
view: customer_day_facts {
  derived_table: {
    sql:
      SELECT customer_id, DATE(order_time) AS date, COUNT(*) AS num_orders
      FROM order
      GROUP BY customer_id, DATE(order_time) ;;
    persist_for: "24 hours"
    indexes: ["customer_id", "date"]
  }
}
```

Dialect support for indexes
The ability to use indexes depends on the database dialect your Looker connection is using. If you are working with something other than a traditional database (for example, MySQL or Postgres), your database may not support the indexes parameter. Looker will warn you if this is the case. You can swap out the indexes parameter for one that is appropriate for your database connection. Learn more about such parameters on the View parameters documentation page.
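For example, on a dialect such as Google BigQuery that does not use indexes, a PDT would typically swap indexes for partitioning and clustering parameters. The following is a hedged sketch, assuming the partition_keys and cluster_keys derived-table parameters and illustrative column names:

```
view: customer_day_facts {
  derived_table: {
    sql:
      SELECT customer_id, DATE(order_time) AS order_date, COUNT(*) AS num_orders
      FROM order
      GROUP BY customer_id, DATE(order_time) ;;
    datagroup_trigger: order_datagroup
    # Instead of indexes: partition the table on a date column
    # and cluster on a frequently filtered column.
    partition_keys: ["order_date"]
    cluster_keys: ["customer_id"]
  }
}
```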
In the latest release of Looker, the following dialects support indexes:
| Dialect | Supported? |
|---|---|
| Actian Avalanche | |
| Amazon Athena | |
| Amazon Aurora MySQL | |
| Amazon Redshift | |
| Amazon Redshift 2.1+ | |
| Amazon Redshift Serverless 2.1+ | |
| Apache Druid | |
| Apache Druid 0.13+ | |
| Apache Druid 0.18+ | |
| Apache Hive 2.3+ | |
| Apache Hive 3.1.2+ | |
| Apache Spark 3+ | |
| ClickHouse | |
| Cloudera Impala 3.1+ | |
| Cloudera Impala 3.1+ with Native Driver | |
| Cloudera Impala with Native Driver | |
| DataVirtuality | |
| Databricks | |
| Denodo 7 | |
| Denodo 8 & 9 | |
| Dremio | |
| Dremio 11+ | |
| Exasol | |
| Google BigQuery Legacy SQL | |
| Google BigQuery Standard SQL | |
| Google Cloud AlloyDB for PostgreSQL | |
| Google Cloud PostgreSQL | |
| Google Cloud SQL | |
| Google Spanner | |
| Greenplum | |
| HyperSQL | |
| IBM Netezza | |
| MariaDB | |
| Microsoft Azure PostgreSQL | |
| Microsoft Azure SQL Database | |
| Microsoft Azure Synapse Analytics | |
| Microsoft SQL Server 2008+ | |
| Microsoft SQL Server 2012+ | |
| Microsoft SQL Server 2016 | |
| Microsoft SQL Server 2017+ | |
| MongoBI | |
| MySQL | |
| MySQL 8.0.12+ | |
| Oracle | |
| Oracle ADWC | |
| PostgreSQL 9.5+ | |
| PostgreSQL pre-9.5 | |
| PrestoDB | |
| PrestoSQL | |
| SAP HANA | |
| SAP HANA 2+ | |
| SingleStore | |
| SingleStore 7+ | |
| Snowflake | |
| Teradata | |
| Trino | |
| Vector | |
| Vertica |