A derived table is a query whose results are used as if they were a physical table in the database. Native derived tables (NDTs) perform the same function as a SQL-based derived table but are defined in LookML, which makes them much easier to read, understand, and reason about as you model your data.
Both native and SQL-based derived tables are defined in LookML using the
derived_table parameter at the view level. However, with NDTs, you do not need to create a SQL query. Instead, you use the
explore_source parameter to specify the Explore on which to base the derived table, along with the desired columns and other characteristics.
Using an Explore to Begin Defining Your NDTs
Starting with an Explore, Looker can generate LookML for all or most of your derived table. Just create an Explore and select all of the fields you want to include in your derived table. Then, to generate the NDT LookML:
1. Click the Explore’s gear menu.
2. Select Get LookML.
3. Click the Derived Table tab. Looker displays the LookML to create the corresponding NDT.
4. Copy the LookML.
Now that you have copied the generated LookML, paste it into a view file:
1. Navigate to your project files.
2. Click the + at the top of the project file list in the Looker IDE and select Create View. If the project is enabled for IDE folders, you can click a folder’s menu and select Create View from the menu to create the file inside the folder.
3. Set the view name to something meaningful.
4. Optionally, change column names, specify derived columns, and add filters.
When you use a measure of type: count in an Explore, the visualization labels the resulting values with the view name rather than the word Count. To avoid confusion, we recommend pluralizing your view name, selecting Show Full Field Name under Series in the visualization settings, or using a view_label with a pluralized version of your view name.
Defining an NDT in LookML
Whether you use derived tables declared in SQL or in native LookML, the output of a derived_table’s query is a table with a set of columns. When the derived table is expressed in SQL, the output column names are implied by the SQL query.
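As an illustration, consider a hypothetical SQL-based derived table (the table and column names here are invented for this sketch); its output columns are customer_id, first_order_date, and total_amount, exactly as named in the SELECT list:

```lookml
view: customer_order_summary {
  derived_table: {
    sql:
      SELECT
        customer_id,
        MIN(DATE(time)) AS first_order_date,
        SUM(amount) AS total_amount
      FROM orders
      GROUP BY customer_id ;;
  }
}
```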
In Looker, a query is based on an Explore, includes measure and dimension fields, adds any applicable filters, and may also specify a sort order. An NDT contains all these elements plus the output names for the columns.
The simple example below produces a derived table with three columns: user_id, total_revenue, and lifetime_number_of_orders. You don’t need to manually write the query in SQL; instead, Looker creates the query for you by using the specified Explore, order_items, and some of that Explore’s fields.
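A sketch of that NDT definition (the lifetime_number_of_orders column is assumed here to come from the Explore’s count measure):

```lookml
view: user_order_facts {
  derived_table: {
    explore_source: order_items {
      column: user_id {}
      column: total_revenue {}
      column: lifetime_number_of_orders {
        field: order_items.count
      }
    }
  }
  dimension: user_id {
    type: number
  }
  dimension: total_revenue {
    type: number
  }
  dimension: lifetime_number_of_orders {
    type: number
  }
}
```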
include Statements to Enable Referencing Fields
In the NDT’s view file you use the
explore_source parameter to point to an Explore and to define the desired columns and other characteristics for the NDT. Because you are pointing to an Explore from within the NDT’s view file, you must also include the file containing the Explore’s definition. Explores are usually defined within a model file, but in the case of NDTs it’s cleaner to create a separate file for the Explore using the .explore.lkml file extension, as described in the documentation for Creating Explore Files. That way, in your NDT view file you can include a single Explore file rather than the entire model file. In that case:
- The native derived table’s view file should include the Explore’s file:
- The Explore’s file should include the view files that it needs:
- The model should include the Explore’s file:
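Assuming an Explore file named order_items.explore.lkml and view files named in the usual way (all names here are illustrative), the include statements might look like this:

```lookml
# In the NDT's view file:
include: "order_items.explore.lkml"

# In the Explore's file (order_items.explore.lkml):
include: "order_items.view.lkml"
include: "users.view.lkml"

# In the model file:
include: "order_items.explore.lkml"
```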
Explore files use the connection of the model into which they are included. Keep this in mind when you include an Explore file in a model whose connection differs from the connection of the Explore file’s parent model: if the schema for the including model’s connection differs from the schema for the parent model’s connection, queries can fail.
Defining NDT Columns
As shown in the example above, you use
column to specify the output columns of the derived table.
Specifying the Column Names
For the user_id column in the example above, the column name matches the name of the specified field in the original Explore.
Frequently, you will want a different column name in the output table than the name of the field in the original Explore. In the example above, we produce a lifetime value calculation by user using the order_items Explore. In the output table, total_revenue is really a customer’s lifetime value, so a more descriptive output name is useful. The column declaration supports declaring an output name that is different from the input field. For example, you can say, “make an output column named lifetime_value from the field order_items.total_revenue.”
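A sketch of that column declaration:

```lookml
column: lifetime_value {
  field: order_items.total_revenue
}
```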
Implied Column Names
If the field parameter is left out of a column declaration, it is assumed to be <explore_name>.<field_name>. For example, if you have specified explore_source: order_items, then column: user_id {} is equivalent to column: user_id { field: order_items.user_id }.
Creating Derived Columns for Calculated Values
You can add
derived_column parameters to specify columns that don’t exist in the
explore_source parameter’s Explore. Each
derived_column parameter has a
sql parameter specifying how to construct the value.
The sql calculation can use any columns that you have specified using column parameters. Derived columns cannot include aggregate functions, but they can include calculations that can be performed on a single row of the table.
The example below produces the same derived table as the earlier example, except that it adds a calculated average_customer_order column, which is computed from the total_revenue and lifetime_number_of_orders columns in the NDT.
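A sketch of that definition; the derived column divides one NDT column by another, which is a per-row calculation rather than an aggregate:

```lookml
view: user_order_facts {
  derived_table: {
    explore_source: order_items {
      column: user_id {}
      column: total_revenue {}
      column: lifetime_number_of_orders {
        field: order_items.count
      }
    }
    derived_column: average_customer_order {
      sql: total_revenue / lifetime_number_of_orders ;;
    }
  }
}
```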
Using SQL Window Functions
Some database dialects support window functions, especially to create sequence numbers, primary keys, running and cumulative totals, and other useful multi-row calculations. After the primary query has been executed, any
derived_column declarations are executed in a separate pass.
If your database dialect supports window functions, you can use them in your native derived table. Create a derived_column parameter with a sql parameter that contains the desired window function. When referring to values, use the column names as defined in your NDT.
The example below creates an NDT that includes the user ID, order ID, and created_time columns. Then, using a derived column with a SQL ROW_NUMBER() window function, it calculates a column that contains the sequence number of a customer’s order.
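A sketch of that NDT; the column names here mirror the order_items Explore used earlier and are assumptions:

```lookml
view: user_order_sequences {
  derived_table: {
    explore_source: order_items {
      column: user_id {}
      column: order_id {}
      column: created_time {}
    }
    derived_column: order_sequence_number {
      sql: ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY created_time) ;;
    }
  }
}
```

Because derived columns are evaluated in a separate pass after the primary query, the window function operates over the NDT’s own result rows.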
Adding Filters to an NDT
Suppose we wanted to build a derived table of a customer’s value over the past 90 days. We want the same calculations as we performed above, but we only want to include purchases from the last 90 days.
We just add a filter to the
derived_table that filters for transactions in the last 90 days. The
filters parameter for a derived table uses the same syntax as you use to create a filtered measure.
Filters will be added to the
WHERE clause when Looker writes the SQL for the derived table.
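A sketch of such a filter, assuming the order_items Explore has a created_date dimension:

```lookml
derived_table: {
  explore_source: order_items {
    column: user_id {}
    column: total_revenue {}
    column: lifetime_number_of_orders {
      field: order_items.count
    }
    filters: [order_items.created_date: "90 days"]
  }
}
```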
Using Templated Filters
You can use bind_filters to include templated filters.
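A sketch of a bind_filters declaration; the field names follow the filtered_lookml_dt example discussed below and are assumptions:

```lookml
explore_source: users {
  column: user_id {}
  bind_filters: {
    to_field: users.created_date
    from_field: filtered_lookml_dt.filter_date
  }
}
```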
This is essentially the same as using a {% condition %} templated filter block in a sql parameter.
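For example, a SQL-based derived table might embed the same filter like this sketch (the users table and created_date column are assumptions):

```lookml
sql:
  SELECT
    user_id,
    created_date
  FROM users
  WHERE {% condition filtered_lookml_dt.filter_date %} users.created_date {% endcondition %} ;;
```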
The to_field is the field to which the filter is applied; it must be a field from the underlying explore_source Explore. The from_field specifies the field from which to get the filter, if there is a filter at runtime.
In the bind_filters example above, Looker will take any filter applied at runtime to the filtered_lookml_dt.filter_date field and apply that filter to the to_field.
You can also use the
bind_all_filters subparameter of
explore_source to pass all runtime filters from an Explore to an NDT subquery. See the
explore_source documentation page for more information.
Sorting and Limiting NDTs
You can also sort and limit the derived table, if desired.
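A sketch using the sorts and limit subparameters of explore_source:

```lookml
derived_table: {
  explore_source: order_items {
    column: user_id {}
    column: total_revenue {}
    sorts: [order_items.total_revenue: desc]
    limit: 10
  }
}
```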
Remember, an Explore may display the rows in a different order than the underlying sort.
Converting NDTs to Different Time Zones
You can specify the time zone for your NDT using the timezone subparameter.
When you use the
timezone subparameter, all time-based data in the NDT will be converted to the time zone you specify. See the
timezone values documentation page for a list of the supported time zones.
If you don’t specify a time zone in your NDT definition, the NDT will not perform any time zone conversion on time-based data, and instead time-based data will default to your database time zone.
If the NDT is not persistent, you can set the time zone value to
"query_timezone" to automatically use the time zone of the currently running query.
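A sketch, assuming the timezone subparameter is set inside the explore_source declaration:

```lookml
derived_table: {
  explore_source: order_items {
    column: user_id {}
    column: total_revenue {}
    timezone: "America/Los_Angeles"
  }
}
```

For a non-persistent NDT, replacing the value with "query_timezone" would instead convert time-based data to the time zone of the currently running query.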