Creating Native Derived Tables

A derived table is a query whose results are used as if they were a physical table in the database. Native derived tables (NDTs) perform the same function as SQL-based derived tables but are defined in LookML rather than in SQL. They are also much easier to read, understand, and reason about as you model your data.

Both native and SQL-based derived tables are defined in LookML using the derived_table parameter at the view level. However, with NDTs, you do not need to create a SQL query. Instead, you use the explore_source parameter to specify the Explore on which to base the derived table, the desired columns, and other desired characteristics.

You can also have Looker create the derived table LookML from a SQL Runner query, as described on the Using SQL Runner to Create Derived Tables documentation page.

Using an Explore to Begin Defining Your NDTs

Starting with an Explore, Looker can generate LookML for all or most of your derived table. Just create an Explore and select all of the fields you want to include in your derived table. Then, to generate the NDT LookML:

  1. Click the Explore’s gear menu.

  2. Select Get LookML.

  3. Click the Derived Table tab.

    Looker displays the LookML to create the corresponding NDT.

  4. Copy the LookML.

Now that you have copied the generated LookML, paste it into a view file:

  1. Navigate to your project files.

  2. Click the + at the top of the project file list in the Looker IDE and select Create View. Or you can click a folder’s menu and select Create View from the menu to create the file inside the folder.

  3. Set the view name to something meaningful.

  4. Optionally, change column names, specify derived columns, and add filters.

When you use a measure of type: count in an Explore, the visualization labels the resulting values with the view name rather than the word Count. To avoid confusion, we recommend pluralizing your view name, selecting Show Full Field Name under Series in the visualization settings, or using a view_label with a pluralized version of your view name.
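As a hedged sketch of the view_label approach, you could set a pluralized view_label on the count measure itself (the view and label names here are illustrative):

```
view: user {
  measure: count {
    type: count
    # Pluralized label so the count reads as "Users Count"
    # rather than "User Count" in field lists and visualizations
    view_label: "Users"
  }
}
```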

Defining an NDT in LookML

Whether you use derived tables declared in SQL or native LookML, the output of a derived_table’s query is a table with a set of columns. When the derived table is expressed in SQL, the output column names are implied by the SQL query. For example, the SQL query below will have the output columns user_id, lifetime_number_of_orders, and lifetime_customer_value:

SELECT
  user_id,
  COUNT(DISTINCT order_id) as lifetime_number_of_orders,
  SUM(sale_price) as lifetime_customer_value
FROM order_items
GROUP BY 1

In Looker, a query is based on an Explore, includes measure and dimension fields, adds any applicable filters, and may also specify a sort order. An NDT contains all these elements plus the output names for the columns.

The simple example below produces a derived table with three columns: user_id, lifetime_customer_value, and lifetime_number_of_orders. You don’t need to manually write the query in SQL — instead, Looker creates the query for you by using the specified Explore order_items and some of that Explore’s fields (order_items.user_id, order_items.total_revenue, and order_items.order_count).

view: user_order_facts {
  derived_table: {
    explore_source: order_items {
      column: user_id { field: order_items.user_id }
      column: lifetime_number_of_orders { field: order_items.order_count }
      column: lifetime_customer_value { field: order_items.total_revenue }
    }
  }
  # Define the view's fields as desired
  dimension: user_id { hidden: yes }
  dimension: lifetime_number_of_orders { type: number }
  dimension: lifetime_customer_value { type: number }
}

Using include Statements to Enable Referencing Fields

In the NDT’s view file, you use the explore_source parameter to point to an Explore and to define the desired columns and other characteristics for the NDT. Because you are pointing to an Explore from within the NDT’s view file, you must also include the file containing the Explore’s definition. Explores are usually defined within a model file, but for NDTs it is cleaner to create a separate file for the Explore using the .explore.lkml file extension, as described in the documentation for Creating Explore Files. That way, your NDT view file can include a single Explore file rather than the entire model file. If you take this approach, note the following:

Explore files will listen to the connection of the model they are included in. Consider this fact when you include Explore files in models that are configured with a connection that is different from the Explore file’s parent model. If the schema for the including model’s connection differs from the schema for the parent model’s connection, it can cause query errors.
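For instance, assuming the Explore is saved in a file named order_items.explore.lkml inside an explores folder (both names are illustrative), the NDT’s view file might begin:

```
include: "/explores/order_items.explore.lkml"

view: user_order_facts {
  derived_table: {
    explore_source: order_items {
      column: user_id { field: order_items.user_id }
    }
  }
  dimension: user_id { hidden: yes }
}
```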

Defining NDT Columns

As shown in the example above, you use column to specify the output columns of the derived table.

Specifying the Column Names

For the user_id column, the column name matches the name of the specified field in the original Explore.

Frequently, you will want a different column name in the output table than the name of the fields in the original Explore. In the example above, we are producing a lifetime value calculation by user using the order_items Explore. In the output table, total_revenue is really a customer’s lifetime_customer_value.

The column declaration supports declaring an output name that is different from the input field. For example, the code below says, “make an output column named lifetime_value from field order_items.total_revenue”:

column: lifetime_value { field: order_items.total_revenue }

Implied Column Names

If the field parameter is left out of a column declaration, it is assumed to be <explore_name>.<field_name>. For example, if you have specified explore_source: order_items, then

column: user_id { field: order_items.user_id }

is equivalent to

column: user_id {}

Creating Derived Columns for Calculated Values

You can add derived_column parameters to specify columns that don’t exist in the explore_source parameter’s Explore. Each derived_column parameter has a sql parameter specifying how to construct the value.

Your sql calculation can use any columns that you have specified using column parameters. Derived columns cannot include aggregate functions, but they can include calculations that can be performed on a single row of the table.

The example below produces the same derived table as the earlier example, except that it adds a calculated average_customer_order column, which is calculated from the lifetime_customer_value and lifetime_number_of_orders columns in the NDT.

view: user_order_facts {
  derived_table: {
    explore_source: order_items {
      column: user_id { field: order_items.user_id }
      column: lifetime_number_of_orders { field: order_items.order_count }
      column: lifetime_customer_value { field: order_items.total_revenue }
      derived_column: average_customer_order {
        sql: lifetime_customer_value / lifetime_number_of_orders ;;
      }
    }
  }
  # Define the view's fields as desired
  dimension: user_id { hidden: yes }
  dimension: lifetime_number_of_orders { type: number }
  dimension: lifetime_customer_value { type: number }
  dimension: average_customer_order { type: number }
}

Using SQL Window Functions

Some database dialects support window functions, which are useful for creating sequence numbers, primary keys, running and cumulative totals, and other multi-row calculations. After the primary query has been executed, any derived_column declarations are executed in a separate pass.

If your database dialect supports window functions, you can use them in your native derived table. Create a derived_column parameter with a sql parameter that contains the desired window function. When referring to values, use the column names as defined in your NDT.

The example below creates an NDT that includes the user_id, order_id, and created_time columns. Then, using a derived column with a SQL ROW_NUMBER() window function, it calculates a column that contains the sequence number of a customer’s order.

view: user_order_sequences {
  derived_table: {
    explore_source: order_items {
      column: user_id { field: order_items.user_id }
      column: order_id { field: order_items.order_id }
      column: created_time { field: order_items.created_time }
      derived_column: user_sequence {
        sql: ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY created_time) ;;
      }
    }
  }
  dimension: order_id { hidden: yes }
  dimension: user_sequence { type: number }
}

Adding Filters to an NDT

Suppose we wanted to build a derived table of a customer’s value over the past 90 days. We want the same calculations as we performed above, but we only want to include purchases from the last 90 days.

To do this, add a filter to the derived_table that restricts results to transactions from the last 90 days. The filters parameter for a derived table uses the same syntax that you use to create a filtered measure.

view: user_90_day_facts {
  derived_table: {
    explore_source: order_items {
      column: user_id { field: order_items.user_id }
      column: number_of_orders_90_day { field: order_items.order_count }
      column: customer_value_90_day { field: order_items.total_revenue }
      filters: [order_items.created_date: "90 days"]
    }
  }
  # Define the view's fields as desired
  dimension: user_id { hidden: yes }
  dimension: number_of_orders_90_day { type: number }
  dimension: customer_value_90_day { type: number }
}

Filters will be added to the WHERE clause when Looker writes the SQL for the derived table.
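Conceptually, the SQL that Looker generates for the 90-day example resembles the following simplified sketch; the exact SQL depends on your dialect, your field definitions, and the column names involved (created_at and the date function here are illustrative):

```
SELECT
  user_id,
  COUNT(DISTINCT order_id) AS number_of_orders_90_day,
  SUM(sale_price) AS customer_value_90_day
FROM order_items
-- The filters parameter becomes a WHERE condition
WHERE created_at >= DATEADD(day, -90, CURRENT_DATE)
GROUP BY 1
```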

Using Templated Filters

You can use bind_filters to include templated filters:

bind_filters: {
  to_field: users.created_date
  from_field: filtered_lookml_dt.filter_date
}

This is essentially the same as using the following code in a sql block:

{% condition filtered_lookml_dt.filter_date %} users.created_date {% endcondition %}

The to_field is the field to which the filter is applied. The to_field must be a field from the underlying explore_source.

The from_field specifies the field from which to get the filter, if there is a filter at runtime.

In the bind_filters example above, Looker will take any filter applied to the filtered_lookml_dt.filter_date field and apply the filter to the users.created_date field.
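Putting this together, here is a hedged sketch of bind_filters in the context of a full NDT definition, assuming a users Explore and using illustrative view and field names; the filter_date field is a filter-only field that users can set at query time:

```
view: filtered_lookml_dt {
  derived_table: {
    explore_source: users {
      column: user_id { field: users.id }
      column: created_date { field: users.created_date }
      bind_filters: {
        # Apply whatever filter the user sets on filter_date
        # to users.created_date in the subquery
        to_field: users.created_date
        from_field: filtered_lookml_dt.filter_date
      }
    }
  }
  # Filter-only field referenced by from_field above
  filter: filter_date {
    type: date
  }
  dimension: user_id { type: number }
}
```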

You can also use the bind_all_filters subparameter of explore_source to pass all runtime filters from an Explore to an NDT subquery. See the explore_source documentation page for more information.

Sorting and Limiting NDTs

You can also sort and limit the derived tables, if desired:

sorts: [order_items.count: desc]
limit: 10

Remember, an Explore may display the rows in a different order than the underlying sort.
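In context, sorts and limit are subparameters of explore_source. For example, a sketch of an NDT that keeps only the ten users with the most orders (the view name is illustrative):

```
view: top_10_users {
  derived_table: {
    explore_source: order_items {
      column: user_id { field: order_items.user_id }
      column: order_count { field: order_items.count }
      # Sort by order count descending, then keep the top 10 rows
      sorts: [order_items.count: desc]
      limit: 10
    }
  }
  dimension: user_id { hidden: yes }
  dimension: order_count { type: number }
}
```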

Converting NDTs to Different Time Zones

You can specify the time zone for your NDT using the timezone subparameter:

timezone: "America/Los_Angeles"

When you use the timezone subparameter, all time-based data in the NDT will be converted to the time zone you specify. See the timezone values documentation page for a list of the supported time zones.

If you don’t specify a time zone in your NDT definition, the NDT will not perform any time zone conversion on time-based data, and instead time-based data will default to your database time zone.

If the NDT is not persistent, you can set the time zone value to "query_timezone" to automatically use the time zone of the currently running query.
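For example, a sketch of a non-persistent NDT that converts time-based data to the time zone of the running query (the view name is illustrative):

```
view: user_order_times {
  derived_table: {
    explore_source: order_items {
      column: user_id { field: order_items.user_id }
      column: created_time { field: order_items.created_time }
      # Convert time-based columns to the querying user's time zone;
      # valid only because this NDT is not persistent
      timezone: "query_timezone"
    }
  }
  dimension: user_id { hidden: yes }
  dimension: created_time { type: date_time }
}
```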