Author: CB

  • Multiple DAX Measures Creation methods

    There are different ways we can build multiple measures at once in Power BI, and we can ask AI to build the scripts to do the job for us. First, I will tell the AI which tables I have from the publicly available Adventure Works database and then ask it to generate some useful measures we can use.

    The methods we use are as follows:
    1. DAX query (as part of Power BI Desktop)
    2. Tabular Editor (third-party tool)
    3. TMDL


    DAX query multiple measure creation
    The first method is to create a script in the DAX query view:

    The tables that I am using are below:

    AI gives us the code we need to create 10 measures for the Reseller table.
    It creates a nice DAX query that we can use to add measures.

    -- Define 10 foundational measures for AdventureWorksDW Reseller model
    DEFINE
        MEASURE FactResellerSales[Sales] =
            SUM ( FactResellerSales[SalesAmount] )
    
        MEASURE FactResellerSales[Total Cost] =
            SUM ( FactResellerSales[TotalProductCost] )
    
        MEASURE FactResellerSales[Gross Profit] =
            [Sales] - [Total Cost]
    
        MEASURE FactResellerSales[Gross Margin %] =
            DIVIDE ( [Gross Profit], [Sales] )
    
        MEASURE FactResellerSales[Order Count] =
            DISTINCTCOUNT ( FactResellerSales[SalesOrderNumber] )
    
        MEASURE FactResellerSales[Units Sold] =
            SUM ( FactResellerSales[OrderQuantity] )
    
        MEASURE FactResellerSales[Avg Order Value] =
            DIVIDE ( [Sales], [Order Count] )
    
        MEASURE FactResellerSales[Sales YTD] =
            TOTALYTD ( [Sales], 'DimDate'[FullDateAlternateKey] )
    
        MEASURE FactResellerSales[Sales PY] =
            CALCULATE ( [Sales], DATEADD ( 'DimDate'[FullDateAlternateKey], -1, YEAR ) )
    
        MEASURE FactResellerSales[Sales YoY %] =
            DIVIDE ( [Sales] - [Sales PY], [Sales PY] )
    
    -- Example result set to validate measures (optional)
    EVALUATE
    SUMMARIZECOLUMNS(
        'DimDate'[CalendarYear],
        'DimProductCategory'[EnglishProductCategoryName],
        'DimProductSubcategory'[EnglishProductSubcategoryName],
        "Sales", [Sales],
        "Gross Profit", [Gross Profit],
        "GM %", [Gross Margin %],
        "Orders", [Order Count],
        "Units", [Units Sold],
        "AOV", [Avg Order Value],
        "Sales YTD", [Sales YTD],
        "Sales PY", [Sales PY],
        "YoY %", [Sales YoY %]
    )
    ORDER BY 'DimDate'[CalendarYear], 'DimProductCategory'[EnglishProductCategoryName], 'DimProductSubcategory'[EnglishProductSubcategoryName]
    

    I can then add them to my model by clicking ‘Update model: Add measure’ on each measure, so it does take a little effort.


    My measures are created, but disorganised, so I’d like them in display folders without having to do the work by hand. This is where Tabular Editor comes in.

    Tabular Editor C# script
    We can ask AI for a C# script that we can run in Tabular Editor (you need to enable unsupported Power BI features under File > Preferences for the script to run), and it does run. Note that I’m using the free Tabular Editor 2 here.

    // AdventureWorksDW 2022 - Starter measures (create or update)
    // Target table: FactResellerSales
    // Folders: Measures\Sales Performance, \Customer Insights, \Product Performance, \Time Intelligence
    
    var t = Model.Tables["FactResellerSales"];
    if (t == null) throw new Exception("Table 'FactResellerSales' not found.");
    
    // create-or-update helper
    int created = 0, updated = 0;
    System.Action<string,string,string,string> Make = (name, expr, folder, fmt) =>
    {
        Measure m = null;
        foreach (var mm in t.Measures) { if (mm.Name == name) { m = mm; break; } }
        if (m == null) { m = t.AddMeasure(name, expr); created++; } else { m.Expression = expr; updated++; }
        if (!string.IsNullOrWhiteSpace(folder)) m.DisplayFolder = folder;
        if (!string.IsNullOrWhiteSpace(fmt))    m.FormatString  = fmt;
    };
    
    // folder constants
    var F_Sales = "Measures\\Sales Performance";
    var F_Cust  = "Measures\\Customer Insights";
    var F_Prod  = "Measures\\Product Performance";
    var F_Time  = "Measures\\Time Intelligence";
    
    // SALES PERFORMANCE
    Make("Sales",               "SUM(FactResellerSales[SalesAmount])",                F_Sales, "#,0");
    Make("Total Cost",          "SUM(FactResellerSales[TotalProductCost])",           F_Sales, "#,0");
    Make("Gross Profit",        "[Sales] - [Total Cost]",                             F_Sales, "#,0");
    Make("Gross Margin %",      "DIVIDE([Gross Profit],[Sales])",                     F_Sales, "0.00%");
    Make("Order Count",         "DISTINCTCOUNT(FactResellerSales[SalesOrderNumber])", F_Sales, "#,0");
    Make("Units Sold",          "SUM(FactResellerSales[OrderQuantity])",              F_Sales, "#,0");
    Make("Avg Order Value",     "DIVIDE([Sales],[Order Count])",                      F_Sales, "#,0.00");
    Make("Avg Unit Price",      "DIVIDE([Sales],[Units Sold])",                       F_Sales, "#,0.00");
    Make("Avg Unit Cost",       "DIVIDE([Total Cost],[Units Sold])",                  F_Sales, "#,0.00");
    Make("Profit per Unit",     "DIVIDE([Gross Profit],[Units Sold])",                F_Sales, "#,0.00");
    Make("Profit per Order",    "DIVIDE([Gross Profit],[Order Count])",               F_Sales, "#,0.00");
    
    // CUSTOMER INSIGHTS
    Make("Distinct Customers",  "DISTINCTCOUNT(FactResellerSales[CustomerKey])",      F_Cust,  "#,0");
    Make("Sales per Customer",  "DIVIDE([Sales],[Distinct Customers])",               F_Cust,  "#,0.00");
    Make("Orders per Customer", "DIVIDE([Order Count],[Distinct Customers])",         F_Cust,  "#,0.00");
    Make("Units per Customer",  "DIVIDE([Units Sold],[Distinct Customers])",          F_Cust,  "#,0.00");
    Make("Profit per Customer", "DIVIDE([Gross Profit],[Distinct Customers])",        F_Cust,  "#,0.00");
    
    // PRODUCT PERFORMANCE
    Make("Distinct Products",   "DISTINCTCOUNT(FactResellerSales[ProductKey])",       F_Prod,  "#,0");
    Make("Sales per Product",   "DIVIDE([Sales],[Distinct Products])",                F_Prod,  "#,0.00");
    Make("Profit per Product",  "DIVIDE([Gross Profit],[Distinct Products])",         F_Prod,  "#,0.00");
    Make("Sales Category Share","DIVIDE([Sales], CALCULATE([Sales], ALL('DimProductCategory')))",          F_Prod, "0.00%");
    Make("Profit Category Share","DIVIDE([Gross Profit], CALCULATE([Gross Profit], ALL('DimProductCategory')))", F_Prod, "0.00%");
    
    // TIME INTELLIGENCE
    // Make sure DimDate is marked as Date table and FullDateAlternateKey is a Date type
    var dateCol = "'DimDate'[FullDateAlternateKey]";
    Make("Sales YTD",          "TOTALYTD([Sales], " + dateCol + ")",                  F_Time, "#,0");
    Make("Profit YTD",         "TOTALYTD([Gross Profit], " + dateCol + ")",           F_Time, "#,0");
    Make("Sales PY",           "CALCULATE([Sales], DATEADD(" + dateCol + ", -1, YEAR))",        F_Time, "#,0");
    Make("Profit PY",          "CALCULATE([Gross Profit], DATEADD(" + dateCol + ", -1, YEAR))", F_Time, "#,0");
    Make("Sales YoY %",        "DIVIDE([Sales]-[Sales PY],[Sales PY])",               F_Time, "0.00%");
    Make("Profit YoY %",       "DIVIDE([Gross Profit]-[Profit PY],[Profit PY])",      F_Time, "0.00%");
    Make("Sales MTD",          "TOTALMTD([Sales], " + dateCol + ")",                  F_Time, "#,0");
    Make("Profit MTD",         "TOTALMTD([Gross Profit], " + dateCol + ")",           F_Time, "#,0");
    Make("Sales QTD",          "TOTALQTD([Sales], " + dateCol + ")",                  F_Time, "#,0");
    Make("Profit QTD",         "TOTALQTD([Gross Profit], " + dateCol + ")",           F_Time, "#,0");
    Make("Sales Rolling 12M",  "CALCULATE([Sales], DATESINPERIOD(" + dateCol + ", MAX(" + dateCol + "), -12, MONTH))",         F_Time, "#,0");
    Make("Profit Rolling 12M", "CALCULATE([Gross Profit], DATESINPERIOD(" + dateCol + ", MAX(" + dateCol + "), -12, MONTH))",  F_Time, "#,0");
    
    // summary to Output window
    Console.WriteLine("Created: " + created + ", Updated: " + updated);
    

    This is better; the measures are neatly organised in folders. All I need to do now is review them and save the changes back to the model. The only thing that didn’t work for me was the formatting (the format strings weren’t applied).

    Now that it worked, we’ll ask for some more measures:

    // -----------------------------------------------------------------------------
    // AdventureWorksDW 2022 – Extended Measure Library
    // Creates ~35 measures for FactResellerSales, organised into display folders.
    // -----------------------------------------------------------------------------
    
    var table = Model.Tables["FactResellerSales"];
    if (table == null) throw new Exception("Table 'FactResellerSales' not found.");
    
    // helper delegate (unlike the first script, this one does not check whether a
    // measure already exists and does not set format strings)
    System.Action<string,string,string> Make = (name, expr, folderPath) =>
    {
        var m = table.AddMeasure(name, expr);  // create the measure with the given DAX expression
        m.DisplayFolder = folderPath;          // place it in the requested display folder
    };
    
    // -----------------------------------------------------------------------------
    // SALES PERFORMANCE
    // -----------------------------------------------------------------------------
    Make("Sales",            "SUM(FactResellerSales[SalesAmount])",              "Measures\\Sales Performance");
    Make("Total Cost",       "SUM(FactResellerSales[TotalProductCost])",         "Measures\\Sales Performance");
    Make("Gross Profit",     "[Sales] - [Total Cost]",                            "Measures\\Sales Performance");
    Make("Gross Margin %",   "DIVIDE([Gross Profit],[Sales])",                    "Measures\\Sales Performance");
    Make("Order Count",      "DISTINCTCOUNT(FactResellerSales[SalesOrderNumber])","Measures\\Sales Performance");
    Make("Units Sold",       "SUM(FactResellerSales[OrderQuantity])",             "Measures\\Sales Performance");
    Make("Avg Order Value",  "DIVIDE([Sales],[Order Count])",                     "Measures\\Sales Performance");
    Make("Avg Unit Price",   "DIVIDE([Sales],[Units Sold])",                      "Measures\\Sales Performance");
    Make("Avg Unit Cost",    "DIVIDE([Total Cost],[Units Sold])",                 "Measures\\Sales Performance");
    Make("Profit per Unit",  "DIVIDE([Gross Profit],[Units Sold])",               "Measures\\Sales Performance");
    Make("Profit per Order", "DIVIDE([Gross Profit],[Order Count])",              "Measures\\Sales Performance");
    
    // -----------------------------------------------------------------------------
    // CUSTOMER INSIGHTS
    // -----------------------------------------------------------------------------
    Make("Distinct Customers", "DISTINCTCOUNT(FactResellerSales[CustomerKey])",  "Measures\\Customer Insights");
    Make("Sales per Customer", "DIVIDE([Sales],[Distinct Customers])",           "Measures\\Customer Insights");
    Make("Orders per Customer","DIVIDE([Order Count],[Distinct Customers])",     "Measures\\Customer Insights");
    Make("Units per Customer", "DIVIDE([Units Sold],[Distinct Customers])",      "Measures\\Customer Insights");
    Make("Profit per Customer","DIVIDE([Gross Profit],[Distinct Customers])",    "Measures\\Customer Insights");
    
    // -----------------------------------------------------------------------------
    // PRODUCT PERFORMANCE
    // -----------------------------------------------------------------------------
    Make("Distinct Products",   "DISTINCTCOUNT(FactResellerSales[ProductKey])",  "Measures\\Product Performance");
    Make("Sales per Product",   "DIVIDE([Sales],[Distinct Products])",           "Measures\\Product Performance");
    Make("Profit per Product",  "DIVIDE([Gross Profit],[Distinct Products])",    "Measures\\Product Performance");
    Make("Top Product Sales",   "TOPN(1, VALUES('DimProductSubcategory'[EnglishProductSubcategoryName]), [Sales])", "Measures\\Product Performance");
    Make("Sales Category Share","DIVIDE([Sales], CALCULATE([Sales], ALL('DimProductCategory')))", "Measures\\Product Performance");
    Make("Profit Category Share","DIVIDE([Gross Profit], CALCULATE([Gross Profit], ALL('DimProductCategory')))", "Measures\\Product Performance");
    
    // -----------------------------------------------------------------------------
    // TIME INTELLIGENCE  (requires DimDate marked as Date Table)
    // -----------------------------------------------------------------------------
    Make("Sales YTD",        "TOTALYTD([Sales],'DimDate'[FullDateAlternateKey])", "Measures\\Time Intelligence");
    Make("Profit YTD",       "TOTALYTD([Gross Profit],'DimDate'[FullDateAlternateKey])", "Measures\\Time Intelligence");
    Make("Sales PY",         "CALCULATE([Sales],DATEADD('DimDate'[FullDateAlternateKey],-1,YEAR))", "Measures\\Time Intelligence");
    Make("Profit PY",        "CALCULATE([Gross Profit],DATEADD('DimDate'[FullDateAlternateKey],-1,YEAR))", "Measures\\Time Intelligence");
    Make("Sales YoY %",      "DIVIDE([Sales]-[Sales PY],[Sales PY])", "Measures\\Time Intelligence");
    Make("Profit YoY %",     "DIVIDE([Gross Profit]-[Profit PY],[Profit PY])", "Measures\\Time Intelligence");
    Make("Sales MTD",        "TOTALMTD([Sales],'DimDate'[FullDateAlternateKey])", "Measures\\Time Intelligence");
    Make("Profit MTD",       "TOTALMTD([Gross Profit],'DimDate'[FullDateAlternateKey])", "Measures\\Time Intelligence");
    Make("Sales QTD",        "TOTALQTD([Sales],'DimDate'[FullDateAlternateKey])", "Measures\\Time Intelligence");
    Make("Profit QTD",       "TOTALQTD([Gross Profit],'DimDate'[FullDateAlternateKey])", "Measures\\Time Intelligence");
    Make("Sales Rolling 12M","CALCULATE([Sales],DATESINPERIOD('DimDate'[FullDateAlternateKey],MAX('DimDate'[FullDateAlternateKey]),-12,MONTH))","Measures\\Time Intelligence");
    Make("Profit Rolling 12M","CALCULATE([Gross Profit],DATESINPERIOD('DimDate'[FullDateAlternateKey],MAX('DimDate'[FullDateAlternateKey]),-12,MONTH))","Measures\\Time Intelligence");
    

    Now we have four sets of measures neatly placed in four folders, and I can review the numbers.

    TMDL

    TMDL (Tabular Model Definition Language) is Microsoft’s human-readable, YAML-like text representation of a tabular model. It is fully editable, meaning you can define:
    1. Tables
    2. Columns
    3. Measures
    4. Relationships
    5. Calculation groups
    You can use it in Tabular Editor and in Power BI Desktop (via the TMDL view).

    Right-click on the Reseller table and select script TMDL to Script tab:

    Here we can add measures as follows, before the partition block.
    We can define the expression, the format string, and the display folder; the measures will live in the FactResellerSales table.
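    A minimal sketch of what this can look like in TMDL (the measure names, expressions, format strings and folder are taken from the earlier scripts; the exact indentation should follow the rest of the scripted table):

    measure Sales = SUM ( FactResellerSales[SalesAmount] )
        formatString: #,0
        displayFolder: Measures\Sales Performance

    measure 'Gross Profit' = [Sales] - [Total Cost]
        formatString: #,0
        displayFolder: Measures\Sales Performance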
    And it worked nicely.

  • DAX Server Timings in DAX Studio

    We can take a measure like the customers-with-one-order measure, add it to a card visual, and use the Performance Analyzer to capture the DAX query, which we can then paste into DAX Studio. However, we may see different results if we use the measure in a matrix, so we will test that too.

    The original DAX measure is as follows, and the test measures, which use different functions to create the virtual tables, are found here. The first measure is below; we will look at the server-timings output in more detail, using AI to help us analyze it.

    CustomersWithOneOrder = 
    VAR _CustomerOrders = SUMMARIZE('FactInternetSales', -- Table 
                                'FactInternetSales'[CustomerKey], -- Groupby  
                                 "Orders", DISTINCTCOUNT('FactInternetSales'[SalesOrderNumber]) 
                                    )
    VAR _OneOrder = FILTER(_CustomerOrders, [Orders] = 1)
    
    RETURN
    COUNTROWS(_OneOrder)
    
    

    The DAX query generated for the card visual (total only), copied from the Performance Analyzer, is as follows:

    // DAX Query
    EVALUATE
        ROW(
        "CustomersWithOneOrder", '_Measures'[CustomersWithOneOrder]
    )


    In DAX Studio, we can paste in the code, clear the cache first, and select Server Timings plus all the options to show us each step of the query plan.

    The first part of the server timings shows us that the query took 12 ms in the Formula Engine (FE) and 5 ms in the Storage Engine (SE), a total of 17 ms to run. In addition, we can see that a total of 2 SE queries were passed to the VertiPaq engine.

    This SQL-like language is called xmSQL, and here are the steps that we can see in DAX Studio.

    -- Line 1, Subclass Internal, Duration 2 rows 0
    
    SET DC_KIND="GEC32";   -- Selecting distinct count strategy DC_KIND
    SELECT
        'FactInternetSales'[CustomerKey],
        'FactInternetSales'[SalesOrderNumber]
    FROM 'FactInternetSales';
    
    -----------------------------------
    Line 2, Subclass: Internal, Duration 3, Rows 27,659   --- builds a data cache projecting the 2 columns it needs
    
    SET DC_KIND="GEC32";
    SELECT
        'FactInternetSales'[CustomerKey],
        'FactInternetSales'[SalesOrderNumber]
    FROM 'FactInternetSales';
    
    Estimated size: rows = 27,659  bytes = 110,636
    ---------------------------------
    Line 3, Subclass: Internal, Duration 3, Rows 0  --- uses the distinct count from the data cache built above.
    
    
    SET DC_KIND="C32";
    SELECT
        'FactInternetSales'[CustomerKey],
        COUNT () 
    FROM $DCOUNT_DATACACHE USING ( 'FactInternetSales' ) ;
    
    ----------------------------
    Line 4, Subclass: Scan, Duration 4, Rows 18,484   --- this is the real storage engine scan with a DCOUNT aggregation, grouped by CustomerKey
    
    SET DC_KIND="AUTO";
    SELECT
        'FactInternetSales'[CustomerKey],
        DCOUNT ( 'FactInternetSales'[SalesOrderNumber] )
    FROM 'FactInternetSales';
    
    
    Estimated size: rows = 18,484  bytes = 221,808
    
    ------------------------------------------
    
    Line 5, Subclass: Internal, Duration 1, Rows 0   --- auxiliary count to help the optimizer (density/cardinality hints).
    
    SET DC_KIND="DENSE";
    SELECT
        'FactInternetSales'[CustomerKey],
        COUNT () 
    FROM 'FactInternetSales';
    
    ------------------------------------------
    
    Line 6, Subclass: Scan, Duration 1, Rows 18,487  -- another quick scan of CustomerKey, likely to get the distinct keys
    
    SET DC_KIND="AUTO";
    SELECT
        'FactInternetSales'[CustomerKey]
    FROM 'FactInternetSales';
    
    
    Estimated size: rows = 18,487  bytes = 147,896
    
    ------------------------------------------
    
    Line 7, Duration 0 refers to the execution metrics
    

    Note that the actual result is 11,619, which is not shown in the server timings; the server timings show the intermediary steps.

    So that was all very interesting, but what do we do with it? Well, first, we can use it to compare measures as follows:

    Card Visual (Total) DAX query performance

    Table function used in measure   FE (ms)   SE (ms)   Total (ms)   SE Queries   Approx. Peak Mem Consumption (Bytes)
    SUMMARIZE()                      12        0         13           2            3,407
    SELECTCOLUMNS()                  10        7         17           2            2,379
    SUMMARIZECOLUMNS()               8         10        18           1            3,376
    FILTER()                         6         7         13           1            3,357

    In the first test, which was based on the DAX query total for a card visual, all the measures ran very fast, with the SUMMARIZE() and FILTER() versions performing fastest. Based on this, we might run off and decide to use one of those, but what if we want to use them in a different visual? Will the performance be the same?

    Matrix Visual DAX Query performance

    Table function used in measure   FE (ms)   SE (ms)   Total (ms)   SE Queries   Approx. Peak Mem Consumption (Bytes)
    SUMMARIZE()                      79        250       329          5            39,884
    SELECTCOLUMNS()                  41        49        90           6            11,968
    SUMMARIZECOLUMNS()               19        26        45           3            9,954
    FILTER()                         24        73        97           5            10,445

    So, as we can see in the DAX query taken from the matrix test, the SUMMARIZE() function performed very poorly, and the measure that used SUMMARIZECOLUMNS() performed best. As you can see, this adds a whole different level of complexity to testing.

    A solution for this is to use the ISINSCOPE() function to determine how the measure is being filtered and use it to switch between two different measures (sketched below), but of course that makes the model more complex and is a topic for another day.
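    Just to sketch the idea, a minimal example using the SUMMARIZE and SUMMARIZECOLUMNS variants of the measure from these tests; the column tested with ISINSCOPE would need to match whatever the matrix actually groups by:

    CustomersWithOneOrder Switch =
    IF (
        ISINSCOPE ( 'DimDate'[CalendarYear] ),  -- TRUE on detail rows grouped by year
        [CustomersWithOneOrder3],               -- SUMMARIZECOLUMNS variant: fastest in the matrix test
        [CustomersWithOneOrder]                 -- SUMMARIZE variant: fastest for the card / grand total
    )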


  • Testing DISTINCTCOUNT() in DAX

    The DISTINCTCOUNT() function can slow down reports, but if it’s required, then what are the alternatives?


    Test 1: Adventure Works

    -- Method 1: DISTINCTCOUNT()
    
    Customers = DISTINCTCOUNT('FactInternetSales'[CustomerKey])
    
    -- Method 2: COUNTROWS(VALUES())
    
    Customers2 = COUNTROWS(VALUES('FactInternetSales'[CustomerKey]))
    
    -- Method 3: SUMX()
    
    Customers3 = SUMX(VALUES('FactInternetSales'[CustomerKey]), 1)
    

    The measures all give the same results, but which one is faster?


    We can test these measures by opening the above matrix for all years to give us a bit more data. The results are too close to call, so we need more data to work with.


    Test 2: Contoso Database:
    The FactOnlineSales table in the Contoso data warehouse has over 12 million rows of data, which should give our unique count measures a better run.

    We use the same measure calculations but in the new table as follows:

    CTSO Customers = DISTINCTCOUNT('FactOnlineSales'[CustomerKey])
    
    CTSO Customers2 = COUNTROWS(VALUES('FactOnlineSales'[CustomerKey]))
    
    CTSO Customers3 = SUMX(VALUES('FactOnlineSales'[CustomerKey]), 1)

    We can then test the speed of the measure using the performance analyzer, and DISTINCTCOUNT distinctly loses!

    Why is DISTINCTCOUNT() slower in the test results?

    The brains at SQLBI have a good article about DISTINCTCOUNT() vs. SUMX(), and they refer to using DAX Studio to get into the details of what’s going on with the DAX query.
    We can start by using the measure in a simple card visual in Power BI to get the simplest DAX to analyze.

    We can copy the code from the Performance Analyzer into DAX Studio and select the options for getting our output:
    1. Clear on Run (clears the cache on each run). Note this is session-based, so it won’t clear the whole cache for your dataset.

    2. Query Plan, which relates to the two engines: the Formula Engine and the Storage Engine.

    There are two query plans generated from a DAX measure:

    Formula Engine plan — defines logical steps (filters, joins, iterators).
    Storage Engine plan(s) — defines physical scans (which columns, which filters, aggregations).

    If the DAX query is simple, it only needs to run the Formula Engine (FE) plan, but there is less information available in DAX Studio. Note that the storage engine is faster than the formula engine at retrieving results.

    Engine Memory

    Engine   Memory Type                                          Description
    SE       Persistent, compressed memory (the VertiPaq model)   The actual stored dataset — physical columns and segments
    FE       Ephemeral, uncompressed memory (working memory)      Virtual tables, intermediate results, cached expressions

    Engine Speed

    Engine                 Type of work                                                         Speed         Threads           Data format
    Storage Engine (SE)    Scanning, filtering, grouping, aggregating                           ⚡ Very fast   Multi-threaded    Compressed, columnar
    Formula Engine (FE)    Evaluating DAX expressions, context transitions, row-by-row logic    🐢 Slower      Single-threaded   Uncompressed, row-based virtual tables



    If we look at the server timings, we can see there is no time recorded for the SE, so the work for this simple query is being done by the Formula Engine.

  • Working with Virtual Tables in DAX Measures

    Testing DAX table functions

    We can test the speed of measures using the performance analyzer in Power BI to see which table function used in a formula performed best. Below are the results of 10 tests per measure. SUMMARIZECOLUMNS() and SELECTCOLUMNS() are neck and neck, with the old SUMMARIZE() function finishing last.

    Typically, when working with a large semantic model, you will not be able to see the actual data you are working with, which requires you to visualize the tables in your mind when creating virtual tables in DAX measures.

    It is good to learn the different structures for creating DAX measures, including assigning intermediate results to variables. Remember, a measure is always scalar and never a table, so it returns a value within whatever filter context it is in, e.g., the months of a matrix, with the measure being sales.

    You can still create a virtual table inside a DAX measure, but the returned result needs to be a scalar; this is typically done with functions such as COUNTROWS(), SUMX(), AVERAGEX(), MINX(), MAXX(), and COUNTX().

    We can use the FactInternetSales table and the DimDate table from the AdventureWorks Database.
    FactInternetSales is joined to DimDate using the OrderDate.

    Let’s say we wanted to get the number of customers with only one order and the average number of orders per customer. We can create the following measures:

    -- First Measure Count of unique Customers
    Customers = DISTINCTCOUNT('FactInternetSales'[CustomerKey])
    
    --Second Measure Customers with one order
    
    CustomersWithOneOrder = 
    VAR _CustomerOrders = SUMMARIZE('FactInternetSales', -- Table 
                                'FactInternetSales'[CustomerKey], -- Groupby  
                                 "Orders", DISTINCTCOUNT('FactInternetSales'[SalesOrderNumber]) 
                                    )
    VAR _OneOrder = FILTER(_CustomerOrders, [Orders] = 1)
    
    RETURN
    COUNTROWS(_OneOrder)
    
    -- Third measure % of customers that only have one order
    
    % One Order = DIVIDE([CustomersWithOneOrder], [Customers],0)
    
    --Average Orders per Customer
    Orders Per Customer = 
                    VAR _Customers = DISTINCTCOUNT(FactInternetSales[CustomerKey])
                    VAR _Orders = DISTINCTCOUNT(FactInternetSales[SalesOrderNumber])
                    
                    RETURN
                    DIVIDE(_Orders, _Customers, 0)

    We can put the measure into a matrix with dates, and now we have a useful metric that we can use as a KPI for the business.

    For the customers-with-one-order measure, we create a virtual table inside the measure to get the result we need, but there are different methods of creating virtual tables in measures, so it’s good to test them all.

    Method 1: Summarize()

    CustomersWithOneOrder = 
    VAR _CustomerOrders = SUMMARIZE('FactInternetSales', -- Table 
                                'FactInternetSales'[CustomerKey], -- Groupby  
                                 "Orders", DISTINCTCOUNT('FactInternetSales'[SalesOrderNumber])  -- unique count 
                                    )
    VAR _OneOrder = FILTER(_CustomerOrders, [Orders] = 1)
    
    RETURN
    COUNTROWS(_OneOrder)

    Method 2: SELECTCOLUMNS()
    Note that, as the SELECTCOLUMNS() function iterates row by row, we have to wrap DISTINCTCOUNT() in CALCULATE() to force a context transition, so the count is evaluated for the current CustomerKey rather than for the whole table.

    CustomersWithOneOrder2 = 
    VAR _CustomerOrders = SELECTCOLUMNS(VALUES('FactInternetSales'[CustomerKey]), -- unique customer keys to group by
                                "CustomerKey", 'FactInternetSales'[CustomerKey], -- keep the key column
                                 "Orders", CALCULATE(DISTINCTCOUNT('FactInternetSales'[SalesOrderNumber])) -- unique order count per customer (context transition)
                                    )
    VAR _OneOrder = FILTER(_CustomerOrders, [Orders] = 1) 
    
    RETURN
    COUNTROWS(_OneOrder)

    METHOD 3: SUMMARIZECOLUMNS()
    The measure using SUMMARIZECOLUMNS() is structured in much the same way as the SUMMARIZE() version.

    CustomersWithOneOrder3 = 
    VAR _CustomerOrders = SUMMARIZECOLUMNS( 'FactInternetSales'[Customerkey], -- group by 
                                 "Orders", CALCULATE(DISTINCTCOUNT('FactInternetSales'[SalesOrderNumber]))
                                    )
    VAR _OneOrder = FILTER(_CustomerOrders, [Orders] = 1) 
    
    RETURN
    COUNTROWS(_OneOrder)
    

    METHOD 4: FILTER()
    This is the simplest way and perhaps the fastest, but we should test it.

    CustomersWithOneOrder4 = 
    
    COUNTROWS(
        FILTER(
            VALUES('FactInternetSales'[CustomerKey]),
            CALCULATE(DISTINCTCOUNT('FactInternetSales'[SalesOrderNumber])) = 1
        )
    )
    

    The winners are at the top of the page! Scroll back up!



  • Power BI Model Performance Optimization

    1. Model Optimization

    a) Remove unnecessary columns and tables

    • Use the DAX Studio VertiPaq Analyzer to understand which are the largest tables and columns in your dataset.
    • Remove columns that are duplicated in other parts of your dataset.
    • Use the Measure Killer tool to find columns and tables not used in reports for deletion from the model.

    b) Use correct data types:

    • Use the smallest data types (e.g., fixed decimal); you can check column sizes with VertiPaq Analyzer.
    • Reduce precision (e.g., 0 decimal places instead of two).

    c) Use Star schemas

    • Use dimensional modelling (Fact-Dimension) instead of a snowflake or flat table.
    • The further a dimension table sits from the fact table (the more joins away), the slower filtering by that dimension becomes.
    • Use low cardinality fields in relationships to minimize search.

    d) Reduce cardinality

    • Avoid unique strings (e.g., GUIDs, transaction IDs) in visuals or joins (although sometimes unavoidable).
    • Split or group detailed columns (e.g., “Day” instead of full timestamp).

    e) Disable auto date/time tables. Turn this feature off in Options, as it bloats the model with hidden additional date tables. Use a central date table instead (see the sketch after this list).

    f) Aggregation tables

    • Create summary tables (Import mode) over detailed DirectQuery data. Try to avoid transaction-level tables in your model; if transaction-level data is required, consider using DirectQuery so as not to increase the size of your model (although reports will be slower).
    • Power BI automatically uses them when queries match the aggregation grain.


    g) Incremental refresh to reduce the dataset refresh time.

    • Set up incremental refresh on large tables where data does not change historically, or, if it does, make use of an updated-date field in your range parameters so that historical changes are also kept up to date (instead of only, e.g., the last day/week).
    • Although incremental refresh may spike memory at some point, the reduced refresh time will, in general, free up more memory for report consumption.

    h) Check model relationships are optimized.

    • As a rule, keep relationships one-to-many wherever possible.
    • Avoid bi-directional cross-filtering, which can destabilize the model; use it only where necessary.
    • Avoid many-to-many relationships, which will slow down the model.
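    As mentioned in point e) above, a central date table can be built as a DAX calculated table; a minimal sketch (the date range here is arbitrary and would need to match your data):

    Date =
    ADDCOLUMNS (
        CALENDAR ( DATE ( 2015, 1, 1 ), DATE ( 2025, 12, 31 ) ),  -- contiguous range of dates
        "Year", YEAR ( [Date] ),
        "Month Number", MONTH ( [Date] ),
        "Month", FORMAT ( [Date], "mmm" )
    )
    -- then mark it as a date table (Table tools > Mark as date table) and relate it to your fact tables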

    2. DAX measure optimization

    • Avoid row-by-row iteration unless necessary. SUMX and FILTER can be heavy; use SUM, AVERAGE, and COUNTROWS where possible.
    • Use variables in DAX measures so that expressions are evaluated once and reused (see the sketch after this list).
    • Consider using IF(ISINSCOPE(...), ...) to use different logic for totals than for detail rows. More complex, but worth testing.
    • Avoid nested IF, SWITCH, or FILTER chains on large tables.
    • Avoid DISTINCTCOUNT() on large tables; you can try SUMX(VALUES('Table'[Column]), 1) instead.
    • Pre-calculate in ETL (views, etc.) rather than putting the load in DAX measures.
    • Use KEEPFILTERS() and REMOVEFILTERS() carefully, as each adds cost.
    • Test multiple variations of complex measures, e.g., SELECTCOLUMNS() instead of SUMMARIZE() or GROUPBY().
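    A minimal sketch of the variables point above, reusing the YoY measure from earlier; both versions return the same result, but the VAR version guarantees the prior-year value is evaluated only once:

    -- without a variable, [Sales PY] appears twice in the expression
    Sales YoY % = DIVIDE ( [Sales] - [Sales PY], [Sales PY] )

    -- with a variable, the prior-year value is captured once and reused
    Sales YoY % (VAR) =
    VAR _SalesPY = [Sales PY]
    RETURN
        DIVIDE ( [Sales] - _SalesPY, _SalesPY )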

    3. Data Source / ETL Optimization

    • Filter early in Power Query (or table source)
    • Push filters to the source so the database does the work.
    • Materialize complex transformations in the source. Avoid heavy joins, grouping, and parsing in Power Query. Push it back to SQL views, staging tables, or earlier in the pipeline.
    • Compress categorical data – Replace text fields with dimension keys, e.g., ProductID, not ProductName
    • Optimize refresh frequency if possible. Find the best time of day before peak report usage times.
    • Use Incremental refresh to reduce refresh load and refresh times.

    4. Report Layer Optimization

    • Limit visuals per page. Each visual issues one or more DAX queries; try to reduce this to around 8 queries per page.
    • Use ‘Apply all filters’ to prevent visuals from refreshing on each slicer change. Wait for the user to finish slicing before applying the query, to prevent repetitive queries from being issued.
    • Turn off unnecessary interactions between visuals using ‘Edit Interactions’.
    • Pre-calculate totals in DAX or SQL. Cards with complex measures can be replaced by base measures or pre-aggregated values.
    • Use bookmarks or buttons to toggle between visuals, to reduce the number of charts showing at one time (reduces the number of DAX queries being issued).
    • Avoid visual-level calculations like Top N filters. Move the logic into measures or calculated tables (see the sketch below).
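    As a small sketch of moving Top N logic into a measure (reusing [Sales] and the product subcategory column from the earlier scripts; N = 5 is arbitrary):

    Top 5 Subcategory Sales =
    CALCULATE (
        [Sales],
        TOPN ( 5, ALL ( 'DimProductSubcategory'[EnglishProductSubcategoryName] ), [Sales] )  -- top 5 subcategories by sales, used as a filter
    )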

    5. Service Level – For Power BI Premium / Fabric capacities.

    • Enable a large dataset storage format. Reduces memory fragmentation and improves refresh concurrency.
    • Use Aggregations + DirectQuery to split heavy workloads. Import summaries, query details on demand.
    • Enable query caching in capacity settings. This caches query results across users for faster loading. Note the cache is cleared after a dataset refresh, and the service checks for changes roughly every hour.
    • Use the XMLA endpoint to connect tools such as SSMS or SQL Server Profiler to the model and analyze refreshes.
    • Consider upgrading to higher capacities if the model is optimized, but still slow.
    • Note that with Premium capacity, you can monitor CPU and memory usage using the Power BI Premium capacity metrics app.

  • Comparison of Full Data Pipelines from Data Ingestion to Data Science

    A comparison of three types of data pipelines.

    Technology data flow
    Code data flow

    Technology Data flow

    Each stage below lists the typical choices for Path 1 (Microsoft / Fabric), Path 2 (Snowflake + dbt, cloud-agnostic), and Path 3 (Google Cloud, GCP).

    Sources & Ingestion
        Microsoft / Fabric:  Azure Data Factory (ADF); Fabric Dataflows Gen2; Event Hubs / IoT Hub (stream); ADF Copy Activity, REST, ODBC/JDBC
        Snowflake + dbt:     Snowpipe (auto-ingest) + Stages; Fivetran / Stitch / Airbyte; Kafka / Kinesis via connectors; AWS Glue jobs (optional)
        Google Cloud (GCP):  Cloud Data Fusion (GUI ETL); Pub/Sub (stream); Dataflow (Beam) ingestion; Storage Transfer / Transfer Service

    Raw Landing / Data Lake
        Microsoft / Fabric:  Azure Data Lake Storage Gen2; OneLake (Fabric); Delta/Parquet zones: /raw /bronze
        Snowflake + dbt:     External Stages on S3/Azure/GCS; Internal Stages (Snowflake-managed); Raw files (CSV/JSON/Parquet)
        Google Cloud (GCP):  Google Cloud Storage (GCS); Raw buckets (landing); Formats: Avro/Parquet/JSON

    Orchestration
        Microsoft / Fabric:  ADF Pipelines & Triggers; Fabric Pipelines; Azure Functions (events); Azure DevOps/GitHub Actions (runs)
        Snowflake + dbt:     Airflow / Dagster / Prefect; Snowflake Tasks & Streams; dbt Cloud scheduler; CI via GitHub Actions
        Google Cloud (GCP):  Cloud Composer (Airflow); Workflows / Cloud Scheduler; Dataform (dbt-like) scheduling

    Transform (ELT / ETL)
        Microsoft / Fabric:  Fabric Data Engineering (Spark); Azure Databricks (Delta); T-SQL in Fabric Warehouse; Synapse SQL/Spark (legacy)
        Snowflake + dbt:     dbt models (SQL + Jinja); Snowflake SQL (MERGE/Tasks); Snowpark (Python/Scala); Streams for CDC
        Google Cloud (GCP):  BigQuery SQL (ELT); Dataflow (Beam) for heavy lift; Dataproc (Spark) when needed; Dataform/dbt for modeling

    Curated / Serving Warehouse
        Microsoft / Fabric:  Fabric Warehouse / Lakehouse; Dedicated SQL Pools (Synapse); Delta tables (silver/gold)
        Snowflake + dbt:     Snowflake (Databases/Schemas); Time Travel, Cloning; Materialized Views
        Google Cloud (GCP):  BigQuery Datasets; Partitioned & clustered tables; Materialized Views

    Semantic Layer / Modeling
        Microsoft / Fabric:  Power BI Datasets (Tabular); Calculation Groups (TE); Row-Level Security (RLS); Power BI Deployment Pipelines
        Snowflake + dbt:     dbt semantic models & metrics; Headless BI (Cube/Virt.); RLS via Snowflake roles/policies; DirectQuery/Live connections
        Google Cloud (GCP):  Looker (LookML semantic layer); Looker Explore/Views/Models; BigQuery Authorized Views; Row/column policy tags

    BI / Visualization & Analysis
        Microsoft / Fabric:  Power BI (Desktop/Service); Paginated Reports (RDL); Excel over Power BI
        Snowflake + dbt:     Power BI / Tableau / Looker Studio; Sigma / Mode (optional); Embedded analytics
        Google Cloud (GCP):  Looker (first-class); Looker Studio (lightweight); Data Catalog-linked exploration

    Data Science / ML
        Microsoft / Fabric:  Azure ML (AutoML, MLOps); Databricks ML + MLflow; SynapseML / ONNX
        Snowflake + dbt:     Snowpark ML / UDFs; External: SageMaker / Databricks; Feature Store via Snowflake/Feast
        Google Cloud (GCP):  Vertex AI (AutoML, pipelines); BigQuery ML (in-SQL models); Feature Store (Vertex)

    Data Quality / Governance
        Microsoft / Fabric:  Microsoft Purview (Catalog/Lineage); Power BI lineage & sensitivity; Great Expectations (optional)
        Snowflake + dbt:     Snowflake RBAC, Tags, Masking; dbt tests, Great Expectations; Monte Carlo/Bigeye (obs.)
        Google Cloud (GCP):  Dataplex (governance); Data Catalog (metadata); DQ via Dataform tests / GE

    DevOps / CI-CD & Infra
        Microsoft / Fabric:  Azure DevOps / GitHub Actions; Power BI Deployment Pipelines; IaC: Bicep / Terraform
        Snowflake + dbt:     GitHub Actions + dbt CI; schemachange / SnowChange; IaC: Terraform / Pulumi
        Google Cloud (GCP):  Cloud Build / Cloud Deploy; Dataform CI, dbt CI; IaC: Terraform

    Monitoring / Cost Control
        Microsoft / Fabric:  Azure Monitor / Log Analytics; Fabric Workspace metrics; Cost Mgmt + Budgets
        Snowflake + dbt:     Snowflake Resource Monitors; Query History, Access History; 3rd-party cost dashboards
        Google Cloud (GCP):  Cloud Monitoring & Logging; BigQuery INFORMATION_SCHEMA; Budgets + Alerts

    Code Data Flow

    Each stage below lists the typical code artifacts for Microsoft / Fabric, Snowflake + dbt, and Google Cloud (GCP).

    Ingestion Code
        Microsoft / Fabric:  Python ETL (requests, pyodbc); ADF / Fabric pipeline JSON; Dataflow Gen2 JSON
        Snowflake + dbt:     CREATE PIPE / CREATE STAGE; Airbyte / Fivetran configs (YAML); COPY OPTIONS
        Google Cloud (GCP):  Apache Beam (Py/Java); Cloud Data Fusion JSON; Pub/Sub schema JSON

    Raw Landing Config
        Microsoft / Fabric:  ADLS / OneLake folder layout; Parquet / Delta write options; Access policies (JSON)
        Snowflake + dbt:     Stages & File format DDL; CSV / JSON / Parquet; Grants & policies
        Google Cloud (GCP):  GCS bucket layout; Lifecycle rules JSON; BQ external table DDL

    Orchestration Code
        Microsoft / Fabric:  ADF pipeline JSON + triggers; Fabric Pipeline YAML; Azure Functions (Python)
        Snowflake + dbt:     Airflow DAGs (Python); Prefect flows (Python); Snowflake TASKS SQL
        Google Cloud (GCP):  Cloud Composer DAGs (Python); Cloud Scheduler jobs; Dataform schedules

    Transform / Modeling
        Microsoft / Fabric:  Databricks notebooks (Py/Spark); Delta Live Tables pipelines; T-SQL stored procs
        Snowflake + dbt:     dbt models (*.sql); dbt Jinja macros (*.sql); Snowpark (Python) UDFs
        Google Cloud (GCP):  BigQuery SQL models (*.sql); Dataform/dbt *.sqlx + yaml; Dataproc Spark notebooks

    CDC / Merge to Curated
        Microsoft / Fabric:  MERGE INTO (T-SQL); PySpark notebook jobs; Delta OPTIMIZE/VACUUM
        Snowflake + dbt:     MERGE INTO curated.* SQL; Streams for CDC; Materialized Views
        Google Cloud (GCP):  MERGE INTO USING staging; Partition / Cluster DDL; Stored procedures

    Semantic Layer
        Microsoft / Fabric:  Tabular model (TMDL); Calc groups (TE script); RLS DAX expressions
        Snowflake + dbt:     dbt semantic models (YAML); metrics.yaml / exposures; Masking policies (SQL)
        Google Cloud (GCP):  LookML view/model files; Explores & joins; Policy tags

    BI / Report Code
        Microsoft / Fabric:  Power BI PBIX / PBIT; Paginated RDL XML; Power Query M scripts
        Snowflake + dbt:     Tableau / Power BI; BI SQL views; Sigma workbooks
        Google Cloud (GCP):  Looker dashboards (lkml); Looker Studio reports; BQ UDFs (JS)

    Data Science Code
        Microsoft / Fabric:  Azure ML notebooks (Python); MLflow tracking code; ONNX export
        Snowflake + dbt:     Snowpark-ML notebooks (Py); UDF registration SQL; MLflow registry
        Google Cloud (GCP):  Vertex AI notebooks (Python); BQML CREATE MODEL SQL; Vertex pipelines (YAML)

    Tests & Data Quality
        Microsoft / Fabric:  Great Expectations suites; Power BI model tests (DAX); Custom pytest checks
        Snowflake + dbt:     dbt tests (schema.yml); Great Expectations suites; SQL anomaly checks
        Google Cloud (GCP):  Dataform tests (assertions); Great Expectations in Beam; INFORMATION_SCHEMA queries

    CI/CD Config
        Microsoft / Fabric:  GitHub Actions YAML; Power BI Deployment Pipelines; Bicep steps
        Snowflake + dbt:     dbt Cloud job YAML; GitHub Actions for dbt; Terraform scripts
        Google Cloud (GCP):  Cloud Build YAML; BQ deploy scripts; Terraform modules

    Infra as Code
        Microsoft / Fabric:  Bicep / Terraform templates; Azure DevOps variable groups
        Snowflake + dbt:     Terraform (Snowflake provider); SnowChange / schemachange
        Google Cloud (GCP):  Terraform (GCS, BQ, VPC); IAM/Secrets configs
  • SQL Code comparisons: SQL Server, Snowflake, BigQuery

    When jumping from one project to another, it can be useful to be able to compare common code structures.
    I couldn’t find anything like this out there, so here it is magically created.

    Contents:
    Common Code Structures
    Working with Dates
    Window Functions
    Error Handling
    Casting
    Joining Tables
    CTE (Common Table Expressions)

    Common Code Structures

    Topic (the snippets under each topic appear in order: SQL Server, Snowflake, BigQuery)
    Select & Filtering
    SELECT TOP 10 *
    FROM Sales
    WHERE Amount > 1000;
    SELECT *
    FROM Sales
    WHERE Amount > 1000
    LIMIT 10;
    SELECT *
    FROM `project.dataset.Sales`
    WHERE Amount > 1000
    LIMIT 10;
    String Functions
    SELECT LEFT(CustomerName, 5), LEN(CustomerName)
    FROM Customers;
    SELECT LEFT(CustomerName, 5), LENGTH(CustomerName)
    FROM Customers;
    SELECT SUBSTR(CustomerName, 1, 5), LENGTH(CustomerName)
    FROM `project.dataset.Customers`;
    Date Functions
    SELECT GETDATE() AS CurrentDate, DATEADD(DAY, 7, GETDATE());
    SELECT CURRENT_DATE, DATEADD(DAY, 7, CURRENT_DATE);
    SELECT CURRENT_DATE(), DATE_ADD(CURRENT_DATE(), INTERVAL 7 DAY);
    Joins
    SELECT c.CustomerName, o.OrderID
    FROM Customers c
    INNER JOIN Orders o ON c.CustomerID = o.CustomerID;
    SELECT c.CustomerName, o.OrderID
    FROM Customers c
    JOIN Orders o ON c.CustomerID = o.CustomerID;
    SELECT c.CustomerName, o.OrderID
    FROM `project.dataset.Customers` c
    JOIN `project.dataset.Orders` o
    ON c.CustomerID = o.CustomerID;
    Aggregations & Group By
    SELECT CustomerID, SUM(Amount) AS TotalAmount
    FROM Sales
    GROUP BY CustomerID
    HAVING SUM(Amount) > 1000;
    SELECT CustomerID, SUM(Amount) AS TotalAmount
    FROM Sales
    GROUP BY CustomerID
    HAVING SUM(Amount) > 1000;
    SELECT CustomerID, SUM(Amount) AS TotalAmount
    FROM `project.dataset.Sales`
    GROUP BY CustomerID
    HAVING SUM(Amount) > 1000;
    Window Functions
    SELECT CustomerID,
           SUM(Amount) OVER (PARTITION BY CustomerID) AS TotalByCustomer
    FROM Sales;
    SELECT CustomerID,
           SUM(Amount) OVER (PARTITION BY CustomerID) AS TotalByCustomer
    FROM Sales;
    SELECT CustomerID,
           SUM(Amount) OVER (PARTITION BY CustomerID) AS TotalByCustomer
    FROM `project.dataset.Sales`;
    Creating Tables
    CREATE TABLE Customers (
        CustomerID INT PRIMARY KEY,
        CustomerName NVARCHAR(100)
    );
    CREATE TABLE Customers (
        CustomerID INT PRIMARY KEY,
        CustomerName STRING
    );
    CREATE TABLE dataset.Customers (
        CustomerID INT64,
        CustomerName STRING
    );
    Insert
    INSERT INTO Customers (CustomerID, CustomerName)
    VALUES (1, 'John Doe');
    INSERT INTO Customers (CustomerID, CustomerName)
    VALUES (1, 'John Doe');
    INSERT INTO dataset.Customers (CustomerID, CustomerName)
    VALUES (1, 'John Doe');
    Update
    UPDATE Customers
    SET CustomerName = 'Jane Doe'
    WHERE CustomerID = 1;
    UPDATE Customers
    SET CustomerName = 'Jane Doe'
    WHERE CustomerID = 1;
    UPDATE dataset.Customers
    SET CustomerName = 'Jane Doe'
    WHERE CustomerID = 1;
    Delete
    DELETE FROM Customers WHERE CustomerID = 1;
    DELETE FROM Customers WHERE CustomerID = 1;
    DELETE FROM dataset.Customers WHERE CustomerID = 1;
    Case When Statements
    SELECT OrderID,
           CASE 
             WHEN Amount >= 1000 THEN 'High'
             WHEN Amount >= 500  THEN 'Medium'
             ELSE 'Low'
           END AS OrderCategory
    FROM Sales;
    SELECT OrderID,
           CASE 
             WHEN Amount >= 1000 THEN 'High'
             WHEN Amount >= 500  THEN 'Medium'
             ELSE 'Low'
           END AS OrderCategory
    FROM Sales;
    SELECT OrderID,
           CASE 
             WHEN Amount >= 1000 THEN 'High'
             WHEN Amount >= 500  THEN 'Medium'
             ELSE 'Low'
           END AS OrderCategory
    FROM `project.dataset.Sales`;
    Start of Month
    SELECT OrderDate,
           DATEADD(MONTH, DATEDIFF(MONTH, 0, OrderDate), 0) AS StartOfMonth
    FROM Orders;
    SELECT OrderDate,
           DATE_TRUNC('MONTH', OrderDate) AS StartOfMonth
    FROM Orders;
    SELECT OrderDate,
           DATE_TRUNC(OrderDate, MONTH) AS StartOfMonth
    FROM `project.dataset.Orders`;
    End of Month
    SELECT OrderDate,
           EOMONTH(OrderDate) AS EndOfMonth
    FROM Orders;
    SELECT OrderDate,
           LAST_DAY(OrderDate, 'MONTH') AS EndOfMonth
    FROM Orders;
    SELECT OrderDate,
           LAST_DAY(OrderDate, MONTH) AS EndOfMonth
    FROM `project.dataset.Orders`;
    Null Handling
    SELECT CustomerID,
           COALESCE(CustomerName, 'Unknown') AS SafeName
    FROM Customers;
    SELECT CustomerID,
           COALESCE(CustomerName, 'Unknown') AS SafeName
    FROM Customers;
    SELECT CustomerID,
           COALESCE(CustomerName, 'Unknown') AS SafeName
    FROM `project.dataset.Customers`;
    Conditional Aggregation
    SELECT CustomerID,
           SUM(CASE WHEN Amount > 1000 THEN Amount ELSE 0 END) AS HighValueSales
    FROM Sales
    GROUP BY CustomerID;
    SELECT CustomerID,
           SUM(CASE WHEN Amount > 1000 THEN Amount ELSE 0 END) AS HighValueSales
    FROM Sales
    GROUP BY CustomerID;
    SELECT CustomerID,
           SUM(CASE WHEN Amount > 1000 THEN Amount ELSE 0 END) AS HighValueSales
    FROM `project.dataset.Sales`
    GROUP BY CustomerID;
    Date Difference
    SELECT OrderID,
           DATEDIFF(DAY, OrderDate, ShippedDate) AS DaysToShip
    FROM Orders;
    SELECT OrderID,
           DATEDIFF(DAY, OrderDate, ShippedDate) AS DaysToShip
    FROM Orders;
    SELECT OrderID,
           DATE_DIFF(ShippedDate, OrderDate, DAY) AS DaysToShip
    FROM `project.dataset.Orders`;

    Working with Dates

    Date Topic (snippets in order: SQL Server, Snowflake, BigQuery)
    Build Date from Parts
    SELECT DATEFROMPARTS(2025, 10, 3) AS d;
    SELECT DATE_FROM_PARTS(2025, 10, 3) AS d;
    SELECT DATE(2025, 10, 3) AS d;
    Parse Date from String
    SELECT TRY_CONVERT(date, '03/10/2025', 103); -- DD/MM/YYYY
    SELECT CONVERT(date, '2025-10-03', 23);       -- ISO
    SELECT TO_DATE('03/10/2025', 'DD/MM/YYYY');
    SELECT TO_DATE('2025-10-03');
    SELECT PARSE_DATE('%d/%m/%Y','03/10/2025');
    SELECT DATE '2025-10-03';
    Current Date / Timestamp
    SELECT CAST(GETDATE() AS date) AS current_date,
           GETDATE() AS current_datetime;
    SELECT CURRENT_DATE AS current_date,
           CURRENT_TIMESTAMP AS current_ts;
    SELECT CURRENT_DATE() AS current_date,
           CURRENT_TIMESTAMP() AS current_ts;
    Start of Month
    SELECT DATEADD(MONTH, DATEDIFF(MONTH, 0, OrderDate), 0)
    FROM Orders;
    SELECT DATE_TRUNC('MONTH', OrderDate)
    FROM Orders;
    SELECT DATE_TRUNC(OrderDate, MONTH)
    FROM `project.dataset.Orders`;
    End of Month
    SELECT EOMONTH(OrderDate)
    FROM Orders;
    SELECT LAST_DAY(OrderDate, 'MONTH')
    FROM Orders;
    SELECT LAST_DAY(OrderDate, MONTH)
    FROM `project.dataset.Orders`;
    Add Days
    SELECT DATEADD(DAY, 7, OrderDate)
    FROM Orders;
    SELECT DATEADD(DAY, 7, OrderDate)
    FROM Orders;
    SELECT DATE_ADD(OrderDate, INTERVAL 7 DAY)
    FROM `project.dataset.Orders`;
    Add Months
    SELECT DATEADD(MONTH, 3, OrderDate)
    FROM Orders;
    SELECT DATEADD(MONTH, 3, OrderDate)
    FROM Orders;
    SELECT DATE_ADD(OrderDate, INTERVAL 3 MONTH)
    FROM `project.dataset.Orders`;
    Difference (Days)
    SELECT DATEDIFF(DAY, OrderDate, ShippedDate)
    FROM Orders;
    SELECT DATEDIFF('DAY', OrderDate, ShippedDate)
    FROM Orders;
    SELECT DATE_DIFF(ShippedDate, OrderDate, DAY)
    FROM `project.dataset.Orders`;
    Difference (Months)
    SELECT DATEDIFF(MONTH, StartDate, EndDate) AS months_between;
    SELECT DATEDIFF('MONTH', StartDate, EndDate) AS months_between;
    SELECT DATE_DIFF(EndDate, StartDate, MONTH) AS months_between;
    Truncate to Week (Week Start)
    -- Week starting Sunday
    SELECT DATEADD(WEEK, DATEDIFF(WEEK, 0, OrderDate), 0)
    FROM Orders;
    -- ISO week (Mon start)
    SELECT DATE_TRUNC('WEEK', OrderDate)
    FROM Orders;
    -- Default WEEK (Sun). Use WEEK(MONDAY) if needed
    SELECT DATE_TRUNC(OrderDate, WEEK)
    FROM `project.dataset.Orders`;
    Truncate to Quarter
    SELECT DATEADD(QUARTER, DATEDIFF(QUARTER, 0, OrderDate), 0)
    FROM Orders;
    SELECT DATE_TRUNC('QUARTER', OrderDate)
    FROM Orders;
    SELECT DATE_TRUNC(OrderDate, QUARTER)
    FROM `project.dataset.Orders`;
    First Day of Year
    SELECT DATEADD(YEAR, DATEDIFF(YEAR, 0, OrderDate), 0)
    FROM Orders;
    SELECT DATE_TRUNC('YEAR', OrderDate)
    FROM Orders;
    SELECT DATE_TRUNC(OrderDate, YEAR)
    FROM `project.dataset.Orders`;
    Last Day of Year
    SELECT EOMONTH(DATEFROMPARTS(YEAR(OrderDate), 12, 1))
    FROM Orders;
    SELECT LAST_DAY(OrderDate, 'YEAR')
    FROM Orders;
    SELECT LAST_DAY(OrderDate, YEAR)
    FROM `project.dataset.Orders`;
    Extract Year / Month / Day
    SELECT YEAR(OrderDate) AS y,
           MONTH(OrderDate) AS m,
           DAY(OrderDate) AS d
    FROM Orders;
    SELECT YEAR(OrderDate) AS y,
           MONTH(OrderDate) AS m,
           DAY(OrderDate) AS d
    FROM Orders;
    SELECT EXTRACT(YEAR FROM OrderDate) AS y,
           EXTRACT(MONTH FROM OrderDate) AS m,
           EXTRACT(DAY FROM OrderDate) AS d
    FROM `project.dataset.Orders`;
    Day Name / Weekday
    SELECT DATENAME(WEEKDAY, OrderDate) AS day_name,
           DATEPART(WEEKDAY, OrderDate) AS weekday_num
    FROM Orders;
    SELECT DAYNAME(OrderDate) AS day_name,
           DAYOFWEEK(OrderDate) AS weekday_num
    FROM Orders;
    SELECT FORMAT_DATE('%A', OrderDate) AS day_name,
           EXTRACT(DAYOFWEEK FROM OrderDate) AS weekday_num
    FROM `project.dataset.Orders`;
    Between Dates (Inclusive)
    SELECT *
    FROM Orders
    WHERE OrderDate BETWEEN '2025-10-01' AND '2025-10-31';
    SELECT *
    FROM Orders
    WHERE OrderDate BETWEEN '2025-10-01' AND '2025-10-31';
    SELECT *
    FROM `project.dataset.Orders`
    WHERE OrderDate BETWEEN DATE '2025-10-01' AND DATE '2025-10-31';
    Last 30 Days Filter
    SELECT *
    FROM Orders
    WHERE OrderDate >= DATEADD(DAY, -30, CAST(GETDATE() AS date));
    SELECT *
    FROM Orders
    WHERE OrderDate >= DATEADD(DAY, -30, CURRENT_DATE);
    SELECT *
    FROM `project.dataset.Orders`
    WHERE OrderDate >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY);

    Window Functions

    Window Topic (snippets in order: SQL Server, Snowflake, BigQuery)
    Running Total
    SELECT CustomerID, OrderDate, Amount,
      SUM(Amount) OVER (
        PARTITION BY CustomerID
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
      ) AS RunningTotal
    FROM Sales;
    SELECT CustomerID, OrderDate, Amount,
      SUM(Amount) OVER (
        PARTITION BY CustomerID
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
      ) AS RunningTotal
    FROM Sales;
    SELECT CustomerID, OrderDate, Amount,
      SUM(Amount) OVER (
        PARTITION BY CustomerID
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
      ) AS RunningTotal
    FROM `project.dataset.Sales`;
    Moving Avg (7 rows)
    SELECT OrderDate, Amount,
      AVG(Amount) OVER (
        ORDER BY OrderDate
        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
      ) AS MovAvg7
    FROM Sales;
    SELECT OrderDate, Amount,
      AVG(Amount) OVER (
        ORDER BY OrderDate
        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
      ) AS MovAvg7
    FROM Sales;
    SELECT OrderDate, Amount,
      AVG(Amount) OVER (
        ORDER BY OrderDate
        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
      ) AS MovAvg7
    FROM `project.dataset.Sales`;
    ROW_NUMBER
    SELECT *,
      ROW_NUMBER() OVER (
        PARTITION BY CustomerID ORDER BY OrderDate DESC
      ) AS rn
    FROM Sales;
    SELECT *,
      ROW_NUMBER() OVER (
        PARTITION BY CustomerID ORDER BY OrderDate DESC
      ) AS rn
    FROM Sales;
    SELECT *,
      ROW_NUMBER() OVER (
        PARTITION BY CustomerID ORDER BY OrderDate DESC
      ) AS rn
    FROM `project.dataset.Sales`;
    RANK
    SELECT CustomerID, Amount,
      RANK() OVER (PARTITION BY CustomerID ORDER BY Amount DESC) AS rnk
    FROM Sales;
    SELECT CustomerID, Amount,
      RANK() OVER (PARTITION BY CustomerID ORDER BY Amount DESC) AS rnk
    FROM Sales;
    SELECT CustomerID, Amount,
      RANK() OVER (PARTITION BY CustomerID ORDER BY Amount DESC) AS rnk
    FROM `project.dataset.Sales`;
    DENSE_RANK
    SELECT CustomerID, Amount,
      DENSE_RANK() OVER (PARTITION BY CustomerID ORDER BY Amount DESC) AS drnk
    FROM Sales;
    SELECT CustomerID, Amount,
      DENSE_RANK() OVER (PARTITION BY CustomerID ORDER BY Amount DESC) AS drnk
    FROM Sales;
    SELECT CustomerID, Amount,
      DENSE_RANK() OVER (PARTITION BY CustomerID ORDER BY Amount DESC) AS drnk
    FROM `project.dataset.Sales`;
    NTILE (Quartiles)
    SELECT CustomerID, Amount,
      NTILE(4) OVER (ORDER BY Amount DESC) AS quartile
    FROM Sales;
    SELECT CustomerID, Amount,
      NTILE(4) OVER (ORDER BY Amount DESC) AS quartile
    FROM Sales;
    SELECT CustomerID, Amount,
      NTILE(4) OVER (ORDER BY Amount DESC) AS quartile
    FROM `project.dataset.Sales`;
    LAG (Prev Row)
    SELECT OrderDate, Amount,
      LAG(Amount, 1) OVER (ORDER BY OrderDate) AS PrevAmt
    FROM Sales;
    SELECT OrderDate, Amount,
      LAG(Amount, 1) OVER (ORDER BY OrderDate) AS PrevAmt
    FROM Sales;
    SELECT OrderDate, Amount,
      LAG(Amount, 1) OVER (ORDER BY OrderDate) AS PrevAmt
    FROM `project.dataset.Sales`;
    LEAD (Next Row)
    SELECT OrderDate, Amount,
      LEAD(Amount, 1) OVER (ORDER BY OrderDate) AS NextAmt
    FROM Sales;
    SELECT OrderDate, Amount,
      LEAD(Amount, 1) OVER (ORDER BY OrderDate) AS NextAmt
    FROM Sales;
    SELECT OrderDate, Amount,
      LEAD(Amount, 1) OVER (ORDER BY OrderDate) AS NextAmt
    FROM `project.dataset.Sales`;
    FIRST_VALUE
    SELECT OrderDate, Amount,
      FIRST_VALUE(Amount) OVER (
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
      ) AS FirstAmt
    FROM Sales;
    SELECT OrderDate, Amount,
      FIRST_VALUE(Amount) OVER (
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
      ) AS FirstAmt
    FROM Sales;
    SELECT OrderDate, Amount,
      FIRST_VALUE(Amount) OVER (
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
      ) AS FirstAmt
    FROM `project.dataset.Sales`;
    LAST_VALUE (note: needs a frame ending at UNBOUNDED FOLLOWING to return the true last value)
    SELECT OrderDate, Amount,
      LAST_VALUE(Amount) OVER (
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
      ) AS LastAmt
    FROM Sales;
    SELECT OrderDate, Amount,
      LAST_VALUE(Amount) OVER (
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
      ) AS LastAmt
    FROM Sales;
    SELECT OrderDate, Amount,
      LAST_VALUE(Amount) OVER (
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
      ) AS LastAmt
    FROM `project.dataset.Sales`;
    PERCENT_RANK()
    SELECT Amount,
      PERCENT_RANK() OVER (ORDER BY Amount) AS pct_rank
    FROM Sales;
    SELECT Amount,
      PERCENT_RANK() OVER (ORDER BY Amount) AS pct_rank
    FROM Sales;
    SELECT Amount,
      PERCENT_RANK() OVER (ORDER BY Amount) AS pct_rank
    FROM `project.dataset.Sales`;
    CUME_DIST()
    SELECT Amount,
      CUME_DIST() OVER (ORDER BY Amount) AS cume
    FROM Sales;
    SELECT Amount,
      CUME_DIST() OVER (ORDER BY Amount) AS cume
    FROM Sales;
    SELECT Amount,
      CUME_DIST() OVER (ORDER BY Amount) AS cume
    FROM `project.dataset.Sales`;
    Share of Total
    SELECT CustomerID, Amount,
      Amount * 1.0 / SUM(Amount) OVER (
        PARTITION BY CustomerID
      ) AS ShareOfCust
    FROM Sales;
    SELECT CustomerID, Amount,
      Amount / SUM(Amount) OVER (
        PARTITION BY CustomerID
      ) AS ShareOfCust
    FROM Sales;
    SELECT CustomerID, Amount,
      Amount / SUM(Amount) OVER (
        PARTITION BY CustomerID
      ) AS ShareOfCust
    FROM `project.dataset.Sales`;
    COUNT Over Window
    SELECT CustomerID, OrderDate,
      COUNT(*) OVER (
        PARTITION BY CustomerID
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
      ) AS CntToDate
    FROM Sales;
    SELECT CustomerID, OrderDate,
      COUNT(*) OVER (
        PARTITION BY CustomerID
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
      ) AS CntToDate
    FROM Sales;
    SELECT CustomerID, OrderDate,
      COUNT(*) OVER (
        PARTITION BY CustomerID
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
      ) AS CntToDate
    FROM `project.dataset.Sales`;
    Distinct Count in Window
    -- COUNT(DISTINCT) OVER is not supported.
    -- Workaround: distinct per partition via DENSE_RANK.
    WITH x AS (
      SELECT CustomerID, ProductID,
             DENSE_RANK() OVER (
               PARTITION BY CustomerID ORDER BY ProductID
             ) AS r
      FROM Sales
    )
    SELECT CustomerID, MAX(r) AS DistinctProducts
    FROM x GROUP BY CustomerID;
    -- COUNT(DISTINCT) OVER not supported in Snowflake windows.
    -- Use DENSE_RANK or COUNT(DISTINCT) without OVER at final grouping.
    -- COUNT(DISTINCT) OVER not supported in BigQuery windows.
    -- Use DENSE_RANK or COUNT(DISTINCT) at GROUP BY level.
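    If the output only needs one row per customer anyway, the windowed workaround isn't required at all; a minimal sketch of the plain GROUP BY alternative (same hypothetical Sales table, valid in SQL Server, Snowflake, and BigQuery) is:
    -- Distinct products per customer via plain aggregation
    SELECT CustomerID,
           COUNT(DISTINCT ProductID) AS DistinctProducts
    FROM Sales
    GROUP BY CustomerID;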

    Error Handling

    Error Handling Topic | SQL Server | Snowflake | BigQuery (snippets appear in that order)
    Replace NULL with Default
    SELECT ISNULL(Amount, 0) AS SafeAmt,
           COALESCE(CustomerName, 'Unknown') AS SafeName
    FROM Sales;
    SELECT IFNULL(Amount, 0) AS SafeAmt,
           COALESCE(CustomerName, 'Unknown') AS SafeName
    FROM Sales;
    SELECT IFNULL(Amount, 0) AS SafeAmt,
           COALESCE(CustomerName, 'Unknown') AS SafeName
    FROM `project.dataset.Sales`;
    Safe Division by Zero
    SELECT CASE WHEN Denominator = 0
                THEN NULL
                ELSE Numerator * 1.0 / Denominator END AS Ratio
    FROM Data;
    SELECT DIV0(Numerator, Denominator) AS Ratio
    FROM Data;
    SELECT SAFE_DIVIDE(Numerator, Denominator) AS Ratio
    FROM `project.dataset.Data`;
    Safe Cast / Conversion
    SELECT TRY_CAST(Value AS INT) AS SafeInt
    FROM RawData;
    SELECT TRY_TO_NUMBER(Value) AS SafeInt
    FROM RawData;
    SELECT SAFE_CAST(Value AS INT64) AS SafeInt
    FROM `project.dataset.RawData`;
    Null If
    SELECT NULLIF(Amount, 0) AS NullIfZero
    FROM Sales;
    SELECT NULLIF(Amount, 0) AS NullIfZero
    FROM Sales;
    SELECT NULLIF(Amount, 0) AS NullIfZero
    FROM `project.dataset.Sales`;
    Error Catch / Try
    BEGIN TRY
      SELECT 1/0;
    END TRY
    BEGIN CATCH
      SELECT ERROR_MESSAGE();
    END CATCH;
    -- Snowflake doesn’t have TRY/CATCH in SQL.
    -- Use TRY_* functions (TRY_TO_NUMBER, TRY_CAST) to avoid errors.
    -- BigQuery doesn’t support TRY/CATCH in SQL.
    -- Use SAFE_CAST, SAFE_DIVIDE, or the SAFE. function prefix instead.
    IfError Style (Return fallback if error)
    -- Not native in T-SQL.
    -- Wrap in TRY/CATCH or use CASE + TRY_CAST.
    SELECT COALESCE(TRY_TO_NUMBER(Value), 0) AS SafeVal;
    SELECT SAFE_DIVIDE(1, 0) AS SafeVal;  -- returns NULL instead of raising an error

    Casting

    Casting Topic | SQL Server | Snowflake | BigQuery (snippets appear in that order)
    Basic CAST
    SELECT CAST(Amount AS DECIMAL(12,2)) AS amt_dec;
    SELECT CAST(Amount AS NUMBER(12,2)) AS amt_dec;
    SELECT CAST(Amount AS NUMERIC(12,2)) AS amt_dec;
    Alt Syntax
    SELECT CONVERT(DECIMAL(12,2), Amount) AS amt_dec;
    SELECT Amount::NUMBER(12,2) AS amt_dec;
    -- Standard CAST only (no ::)
    SELECT CAST(Amount AS NUMERIC(12,2));
    Safe Cast
    SELECT TRY_CAST(Value AS INT) AS safe_int;
    SELECT TRY_TO_NUMBER(Value) AS safe_num;
    SELECT SAFE_CAST(Value AS INT64) AS safe_int;
    String → INT
    SELECT TRY_CONVERT(INT, '123') AS i;
    SELECT TRY_TO_NUMBER('123')::INT AS i;
    SELECT SAFE_CAST('123' AS INT64) AS i;
    String → DECIMAL
    SELECT TRY_CAST('123.45' AS DECIMAL(10,2)) AS d;
    SELECT TRY_TO_DECIMAL('123.45',10,2) AS d;
    SELECT SAFE_CAST('123.45' AS NUMERIC(10,2)) AS d;
    String → DATE (format)
    -- DD/MM/YYYY
    SELECT TRY_CONVERT(date, '03/10/2025', 103);
    SELECT TO_DATE('03/10/2025','DD/MM/YYYY');
    SELECT PARSE_DATE('%d/%m/%Y', '03/10/2025');
    String → TIMESTAMP
    SELECT TRY_CONVERT(datetime2, '2025-10-03T14:05:00Z', 127);
    SELECT TO_TIMESTAMP_TZ('2025-10-03T14:05:00Z');
    SELECT TIMESTAMP('2025-10-03T14:05:00Z');
    Epoch Seconds → TS
    SELECT DATEADD(SECOND, 1696341900, '1970-01-01');
    SELECT TO_TIMESTAMP(1696341900); -- seconds
    SELECT TIMESTAMP_SECONDS(1696341900);
    Time Zone Convert
    -- SQL Server expects Windows time zone names (see sys.time_zone_info)
    SELECT (YourDT AT TIME ZONE 'UTC')
           AT TIME ZONE 'Central European Standard Time' AS local_dt;
    SELECT CONVERT_TIMEZONE('UTC','Europe/Malta', YourTS) AS local_ts;
    -- Convert UTC timestamp to Europe/Malta datetime
    SELECT DATETIME(TIMESTAMP(YourDT), 'Europe/Malta') AS local_dt;
    String → BOOLEAN
    -- No direct parse; map via CASE
    SELECT CASE WHEN LOWER(val) IN ('true','1','y','yes') THEN 1 ELSE 0 END AS bitval;
    SELECT TRY_TO_BOOLEAN(val) AS b;
    SELECT SAFE_CAST(val AS BOOL) AS b;
    To String (format)
    SELECT CONVERT(varchar(10), OrderDate, 23) AS iso_date; -- YYYY-MM-DD
    SELECT TO_VARCHAR(OrderDate, 'YYYY-MM-DD') AS iso_date;
    SELECT FORMAT_DATE('%F', OrderDate) AS iso_date;
    String → JSON
    -- No JSON type. Keep NVARCHAR, use OPENJSON to shred:
    SELECT * FROM OPENJSON(@json);
    SELECT PARSE_JSON('{"a":1,"b":"x"}') AS j;  -- VARIANT
    SELECT PARSE_JSON('{"a":1,"b":"x"}') AS j;  -- JSON
    -- Safe variant:
    SELECT SAFE.PARSE_JSON(json_str) AS j;
    Any → JSON String
    -- Build JSON via FOR JSON:
    SELECT * FROM T FOR JSON AUTO;
    SELECT TO_JSON(OBJECT_CONSTRUCT('a',1,'b','x')) AS json_s;
    SELECT TO_JSON(STRUCT(1 AS a, 'x' AS b)) AS json_s;
    Date/Time Families
    -- date, datetime, datetime2, time, smalldatetime
    -- DATE, TIME, TIMESTAMP_NTZ / TIMESTAMP_LTZ / TIMESTAMP_TZ
    -- DATE, DATETIME, TIME, TIMESTAMP

    Joining Tables

    Join Type | SQL Server | Snowflake | BigQuery (snippets appear in that order)
    INNER JOIN
    SELECT c.CustomerName, o.OrderID
    FROM Customers c
    INNER JOIN Orders o
      ON c.CustomerID = o.CustomerID;
    SELECT c.CustomerName, o.OrderID
    FROM Customers c
    JOIN Orders o
      ON c.CustomerID = o.CustomerID;
    SELECT c.CustomerName, o.OrderID
    FROM `project.dataset.Customers` c
    JOIN `project.dataset.Orders` o
      ON c.CustomerID = o.CustomerID;
    LEFT JOIN
    SELECT c.CustomerName, o.OrderID
    FROM Customers c
    LEFT JOIN Orders o
      ON c.CustomerID = o.CustomerID;
    SELECT c.CustomerName, o.OrderID
    FROM Customers c
    LEFT JOIN Orders o
      ON c.CustomerID = o.CustomerID;
    SELECT c.CustomerName, o.OrderID
    FROM `project.dataset.Customers` c
    LEFT JOIN `project.dataset.Orders` o
      ON c.CustomerID = o.CustomerID;
    RIGHT JOIN
    SELECT c.CustomerName, o.OrderID
    FROM Customers c
    RIGHT JOIN Orders o
      ON c.CustomerID = o.CustomerID;
    SELECT c.CustomerName, o.OrderID
    FROM Customers c
    RIGHT JOIN Orders o
      ON c.CustomerID = o.CustomerID;
    SELECT c.CustomerName, o.OrderID
    FROM `project.dataset.Customers` c
    RIGHT JOIN `project.dataset.Orders` o
      ON c.CustomerID = o.CustomerID;
    FULL OUTER JOIN
    SELECT c.CustomerName, o.OrderID
    FROM Customers c
    FULL OUTER JOIN Orders o
      ON c.CustomerID = o.CustomerID;
    SELECT c.CustomerName, o.OrderID
    FROM Customers c
    FULL OUTER JOIN Orders o
      ON c.CustomerID = o.CustomerID;
    SELECT c.CustomerName, o.OrderID
    FROM `project.dataset.Customers` c
    FULL OUTER JOIN `project.dataset.Orders` o
      ON c.CustomerID = o.CustomerID;
    CROSS JOIN
    SELECT c.CustomerName, p.ProductName
    FROM Customers c
    CROSS JOIN Products p;
    SELECT c.CustomerName, p.ProductName
    FROM Customers c
    CROSS JOIN Products p;
    SELECT c.CustomerName, p.ProductName
    FROM `project.dataset.Customers` c
    CROSS JOIN `project.dataset.Products` p;
    SELF JOIN
    SELECT e1.EmployeeName, e2.ManagerName
    FROM Employees e1
    JOIN Employees e2
      ON e1.ManagerID = e2.EmployeeID;
    SELECT e1.EmployeeName, e2.ManagerName
    FROM Employees e1
    JOIN Employees e2
      ON e1.ManagerID = e2.EmployeeID;
    SELECT e1.EmployeeName, e2.ManagerName
    FROM `project.dataset.Employees` e1
    JOIN `project.dataset.Employees` e2
      ON e1.ManagerID = e2.EmployeeID;
    Semi Join (Exists)
    SELECT c.CustomerName
    FROM Customers c
    WHERE EXISTS (
      SELECT 1 FROM Orders o
      WHERE o.CustomerID = c.CustomerID
    );
    SELECT c.CustomerName
    FROM Customers c
    WHERE EXISTS (
      SELECT 1 FROM Orders o
      WHERE o.CustomerID = c.CustomerID
    );
    SELECT c.CustomerName
    FROM `project.dataset.Customers` c
    WHERE EXISTS (
      SELECT 1 FROM `project.dataset.Orders` o
      WHERE o.CustomerID = c.CustomerID
    );
    Anti Join (Not Exists)
    SELECT c.CustomerName
    FROM Customers c
    WHERE NOT EXISTS (
      SELECT 1 FROM Orders o
      WHERE o.CustomerID = c.CustomerID
    );
    SELECT c.CustomerName
    FROM Customers c
    WHERE NOT EXISTS (
      SELECT 1 FROM Orders o
      WHERE o.CustomerID = c.CustomerID
    );
    SELECT c.CustomerName
    FROM `project.dataset.Customers` c
    WHERE NOT EXISTS (
      SELECT 1 FROM `project.dataset.Orders` o
      WHERE o.CustomerID = c.CustomerID
    );

    CTE

    CTE Topic | SQL Server | Snowflake | BigQuery (snippets appear in that order)
    Basic CTE
    WITH TopSales AS (
      SELECT CustomerID, Amount
      FROM Sales
      WHERE Amount > 1000
    )
    SELECT * FROM TopSales;
    WITH TopSales AS (
      SELECT CustomerID, Amount
      FROM Sales
      WHERE Amount > 1000
    )
    SELECT * FROM TopSales;
    WITH TopSales AS (
      SELECT CustomerID, Amount
      FROM `project.dataset.Sales`
      WHERE Amount > 1000
    )
    SELECT * FROM TopSales;
    Multiple CTEs
    WITH f AS (
      SELECT * FROM Sales WHERE Amount > 1000
    ), g AS (
      SELECT CustomerID, SUM(Amount) AS Total
      FROM f GROUP BY CustomerID
    )
    SELECT * FROM g;
    WITH f AS (
      SELECT * FROM Sales WHERE Amount > 1000
    ), g AS (
      SELECT CustomerID, SUM(Amount) AS Total
      FROM f GROUP BY CustomerID
    )
    SELECT * FROM g;
    WITH f AS (
      SELECT * FROM `project.dataset.Sales` WHERE Amount > 1000
    ), g AS (
      SELECT CustomerID, SUM(Amount) AS Total
      FROM f GROUP BY CustomerID
    )
    SELECT * FROM g;
    CTE with INSERT
    WITH agg AS (
      SELECT CustomerID, SUM(Amount) AS Total
      FROM Sales GROUP BY CustomerID
    )
    INSERT INTO CustTotals(CustomerID, Total)
    SELECT CustomerID, Total FROM agg;
    WITH agg AS (
      SELECT CustomerID, SUM(Amount) AS Total
      FROM Sales GROUP BY CustomerID
    )
    INSERT INTO CustTotals (CustomerID, Total)
    SELECT CustomerID, Total FROM agg;
    WITH agg AS (
      SELECT CustomerID, SUM(Amount) AS Total
      FROM `project.dataset.Sales` GROUP BY CustomerID
    )
    INSERT INTO `project.dataset.CustTotals` (CustomerID, Total)
    SELECT CustomerID, Total FROM agg;
    CTE with UPDATE/DELETE
    -- UPDATE via CTE
    WITH d AS (
      SELECT CustomerID, SUM(Amount) AS Total
      FROM Sales GROUP BY CustomerID
    )
    UPDATE c
    SET c.Total = d.Total
    FROM CustTotals c JOIN d ON c.CustomerID = d.CustomerID;
    
    -- DELETE via CTE
    WITH old AS (SELECT * FROM Logs WHERE CreatedAt < '2025-01-01')
    DELETE FROM old;
    -- UPDATE via CTE
    WITH d AS (
      SELECT CustomerID, SUM(Amount) AS Total
      FROM Sales GROUP BY CustomerID
    )
    UPDATE CustTotals c
    SET Total = d.Total
    FROM d
    WHERE c.CustomerID = d.CustomerID;
    
    -- DELETE via CTE
    WITH old AS (SELECT * FROM Logs WHERE CreatedAt < '2025-01-01')
    DELETE FROM Logs USING old WHERE Logs.id = old.id;
    -- UPDATE via CTE
    WITH d AS (
      SELECT CustomerID, SUM(Amount) AS Total
      FROM `project.dataset.Sales` GROUP BY CustomerID
    )
    UPDATE `project.dataset.CustTotals` c
    SET Total = d.Total
    FROM d
    WHERE c.CustomerID = d.CustomerID;
    
    -- DELETE via CTE
    WITH old AS (SELECT id FROM `project.dataset.Logs`
                 WHERE CreatedAt < DATE '2025-01-01')
    DELETE FROM `project.dataset.Logs` WHERE id IN (SELECT id FROM old);
    Recursive CTE (Hierarchy)
    WITH EmpCTE AS (
      SELECT EmployeeID, ManagerID, 0 AS lvl
      FROM Employees WHERE ManagerID IS NULL
      UNION ALL
      SELECT e.EmployeeID, e.ManagerID, c.lvl + 1
      FROM Employees e
      JOIN EmpCTE c ON e.ManagerID = c.EmployeeID
    )
    SELECT * FROM EmpCTE OPTION (MAXRECURSION 100);
    WITH RECURSIVE EmpCTE AS (
      SELECT EmployeeID, ManagerID, 0 AS lvl
      FROM Employees WHERE ManagerID IS NULL
      UNION ALL
      SELECT e.EmployeeID, e.ManagerID, c.lvl + 1
      FROM Employees e
      JOIN EmpCTE c ON e.ManagerID = c.EmployeeID
    )
    SELECT * FROM EmpCTE;
    WITH RECURSIVE EmpCTE AS (
      SELECT EmployeeID, ManagerID, 0 AS lvl
      FROM `project.dataset.Employees`
      WHERE ManagerID IS NULL
      UNION ALL
      SELECT e.EmployeeID, e.ManagerID, c.lvl + 1
      FROM `project.dataset.Employees` e
      JOIN EmpCTE c ON e.ManagerID = c.EmployeeID
    )
    SELECT * FROM EmpCTE;
    Recursive CTE (Date Series)
    WITH Dates AS (
      SELECT CAST('2025-01-01' AS date) AS d
      UNION ALL
      SELECT DATEADD(DAY, 1, d) FROM Dates
      WHERE d < '2025-01-31'
    )
    SELECT * FROM Dates OPTION (MAXRECURSION 0);
    WITH RECURSIVE Dates AS (
      SELECT TO_DATE('2025-01-01') AS d
      UNION ALL
      SELECT DATEADD(day, 1, d) FROM Dates
      WHERE d < TO_DATE('2025-01-31')
    )
    SELECT * FROM Dates;
    WITH RECURSIVE Dates AS (
      SELECT DATE '2025-01-01' AS d
      UNION ALL
      SELECT DATE_ADD(d, INTERVAL 1 DAY) FROM Dates
      WHERE d < DATE '2025-01-31'
    )
    SELECT * FROM Dates;
    CTE vs Materialization
    -- CTE is not materialized.
    -- Use #temp or @table for reuse:
    SELECT ... INTO #t FROM ...;
    -- or:
    WITH c AS (...) SELECT ... FROM c;
    -- CTE not materialized.
    -- Use TEMP TABLE or transient table:
    CREATE TEMP TABLE t AS SELECT ...;
    WITH c AS (...) SELECT ... FROM c;
    -- CTE not materialized; it may be re-evaluated
    -- each time it is referenced.
    -- For reuse inside a script, use a temp table:
    CREATE TEMP TABLE t AS SELECT ...;
    SELECT ... FROM t;
    Notes & Scope
    -- Scope: single statement.
    -- Name must be unique within WITH.
    -- Recursion depth defaults to 100; adjust with OPTION (MAXRECURSION n).
    -- Scope: single statement.
    -- Use WITH RECURSIVE for recursion.
    -- Can precede DML/DDL that supports SELECT.
    -- Scope: single statement.
    -- WITH RECURSIVE supported.
    -- For repeated reuse, a temp table can improve performance.
  • Testing Dual mode in Power BI and BigQuery

    In this test, we will try to see the performance improvement of using dual mode in a Power BI dimension table using a cloud database and a shared dataset, so we don’t get the performance issues experienced in the first dual mode test on a laptop.

    Test 1
    In the first test, we will use a single dimension table and a single fact table set to DirectQuery mode.

    I will use a public dataset from Google BigQuery.
    Get Data > Google BigQuery. I enter the project ID and am prompted for my Google login.

    BigQuery connection from Power BI

    I can then navigate to the dataset I want to work with.

    Navigate to BigQuery dataset
    Table Selection

    We choose a table from the New York taxi trips dataset, which has over 16 million rows.

    SQL row count.
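    The screenshot above shows the row count being checked in the BigQuery console; as a minimal sketch (assuming the same public tlc_green_trips_2016 table that is used later in this test), it is just:

    SELECT COUNT(*) AS row_count
    FROM `bigquery-public-data.new_york_taxi_trips.tlc_green_trips_2016`;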

    This is configured with a Direct Query connection.

    Selecting table import mode


    I will then add some simple measures for testing as follows:

    Measures

    Next, we’ll add the taxi_zone_geom table as a dimension, initially in DirectQuery mode. It has 263 rows.

    I find there are duplicates in my dimension, which is a pain, so I need a new view in BigQuery to fix this.
    I will create a simple view as follows (replace myproject_id with your own):

    CREATE OR REPLACE VIEW `customer-myprojectid.pbitesting.DimZone` AS
    SELECT DISTINCT Zone_id, Zone_name
    FROM `bigquery-public-data.new_york_taxi_trips.taxi_zone_geom`;

    I can then connect Power BI to BigQuery using the different ProjectID and select my de-duped dimension table.

    Table Selection DimZone


    DimZone Distinct Values


    Next, I change the zone_id and pickup_location_id fields to whole numbers to save space.

    I will then connect my DimZone table with my tlc_green_trips_2016 table using zone_id and pickup_location_id.

    Model relationships

    I then changed my DimZone dimension table to dual mode.

    Selecting Dual Mode



    Then I publish my dataset to the Power BI service and we can connect to it with a new report.

    Get Data > Power BI semantic models >

    Selecting the semantic dataset

    When I try to connect, I get the following BigQuery authentication error:

    DataSource.Error: ADBC: The service bigquery has thrown an exception. HttpStatusCode is Unauthorized. Request is missing the required authentication credentials. Expected OAuth 2 access token, login cookie, or other valid authentication credential

    So I need to add the credentials for accessing BigQuery to my dataset as follows in the dataset settings, and then I will need to sign in with my Google Account.

    Configuring dataset credentials

    My settings then look like this:

    dataset credentials for BigQuery

    I can then return to my Power BI Desktop report and connect to my semantic model.
    I can now view the tables in my shared dataset.

    Dataset tables



    I can then build a simple matrix with my measures from my tlc_green_trips_2016 fact table and the Zone_name from my dimension table, as well as adding a Zone_name slicer to my model.
    It takes about 40 seconds to load.

    Matrix for taxi data

    Initial testing results are interesting. Even though caching is switched off in the dataset settings, we still see caching when testing with the performance analyzer. Apparently this is due to session-level caching, which can’t be switched off.

    Checking report query setting in power bi

    We didn’t see the session-level caching in the previous test, so we will have to make different slicer selections each time for the dual mode vs. import mode test. Also, we can’t do the full-table test, as this is also impacted by session-level caching.

    In the first test, we will use a single dimension table and a fact table in DirectQuery mode. We will test the dimension table in dual mode and then in import mode, and see the impact on performance.

    Dual Mode Test 1 Results: 1 Dimension Table and 1 Fact Table in Direct Query Mode

    Dual mode results test 1

    After excluding outliers, which look more like anomalies caused by resource limitations than part of the norm, we can see that import mode is trending faster than dual mode, and the T-test shows a significant difference between the two sets of observations.



    Test 2

    In the next test, we will test dual mode and 2 different fact tables. One is in import mode and the other is in direct query mode. This is where we should see the performance improvement of using dual mode.
    We start by adding an additional fact table to the model, but this time in import mode.

    Adding another table

    This is the tlc_green_trips_2017 table, which has 12.7M rows.
    Then I rebuild the model: the 2016 table is in DirectQuery mode, the DimZone table is in dual mode, and the 2017 table is in import mode. I then add some additional measures for each table to use in the matrix test.

    Model 2 with dual mode


    My new test matrix now has measures from both fact tables: 2016 (DirectQuery) and 2017 (import).

    New Matrix design

    Test Results:
    The trend appears to favor import mode again, which was kind of disappointing, as I wanted to see an improvement similar to the one I saw using SQL Server locally.

    Running a Test using Python in Power BI
    We can use the Python visual to create some Python graphs and a T-Test.
    I use AI to generate the code for me, and I add it into the Python script editor.
    I then add the Zone, mode, and load time fields to the value wells of the visualisations pane.

    The results are shown below, above the box plot, with a p-value of 0.0001, i.e. a 1 in 10,000 chance of a difference this large occurring by chance. It seems that import mode is faster again. Perhaps dual mode just doesn’t deliver the same benefit with BigQuery as a source; testing is the only way of finding out, and models may be faster in some circumstances than in others.


  • Testing Dual Mode Impact using the performance analyzer


    Dual mode can potentially speed up queries in certain situations by letting the engine answer them from the in-memory (cached) copy of the table instead of sending a DirectQuery to the source.
    Microsoft’s description is here


    In the following example, we start with the V_FactOnlineSales view. It is a large table with over 12M rows and 19 columns. We could import it into our dataset, but it would increase the refresh time of our dataset and use valuable memory in our model that we may want to conserve.


    To get an idea of the size of the table in our model, we can use the Vertipaq analyzer in DAX Studio.
    I have imported 3 tables into our data model so far.


    1. V_FactOnlineSales in import mode
    2. V_DimProduct in import mode
    3. FactOnlineSales table in DirectQuery mode (which is the same size as V_FactOnlineSales)

    The total size of our dataset now is 233 MB, and we can see that the imported V_FactOnlineSales table is taking up nearly all of that space. When we have added another 50 tables to our model, we might decide to switch the V_FactOnlineSales table to DirectQuery mode to reduce refresh time and free up memory. You will also notice that the FactOnlineSales table in DirectQuery mode uses no memory.

    Vertipaq analyzer dataset summary
    Vertipaq analyzer table sizes


    After deleting the imported V_FactOnlineSales table, we can add it again, but this time in DirectQuery mode, so it does not take up any space in our dataset. We have a one-to-many relationship between V_DimProduct and V_FactOnlineSales, with a single cross-filter direction, so only V_DimProduct can filter V_FactOnlineSales and not the other way around. They are joined by the ProductKey.

    one to many relationship between dimension and fact table.

    We’ll start by creating some simple measures from the FactOnlineSales table as follows:

    Sales = SUM(V_FactOnlineSales[SalesAmount])
    Cost = SUM(V_FactOnlineSales[TotalCost])
    Profit = [Sales]-[Cost]
    Profit Margin = DIVIDE([Profit], [Sales], 0)

    We can organise them under the V_FactOnlineSales home table and give them a display folder to keep them together.

    We can then add our measures to a matrix visual, plus a slicer on product name using the product name field from the DimProduct table. We can then use the performance analyzer and refresh the visuals: it takes about 5 seconds for the matrix to load. If I clear the query results and refresh the performance analyzer, I still get about 5 seconds, so I have a nice benchmark to work with. The performance analyzer is designed not to cache the queries, so this seems reliable.

    Power BI matrix showing product sales and profit.
    Performance analyzer results.

    If I select a single product from the slicer, it takes about 1 second to load the matrix.

    Performance analyzer results sliced

    Now, if I remove a product from my slicer, it takes longer (about 6 seconds, consistently).

    Performance analyzer results one product removed

    Next, I could move on to testing the DimProduct table in dual mode. Note I had to recreate the table in Direct Query mode and switch it to dual mode, as I wasn’t allowed to switch from import mode to dual mode.

    A comparison of the tests between the two modes is in the table below.
    I found I had quite consistent results, although when the full table was selected I did run into performance problems on my laptop, and it was more of an issue in dual mode, which does use more memory and some additional CPU (it keeps an import-mode copy of the table in memory to execute the query). Therefore, I will need to do a cloud-based test to get a more reliable picture; in this test, I’m using Power BI locally with SQL Server on an aging laptop.

    The most obvious difference I found between the two modes was in individual selections in the slicer, with dual mode performing more than twice as fast as import mode, which is consistent with what should be expected from dual mode.

    Matrix loading speed using the performance analyzer

    Product Slicer Selection   | DimProduct: Import / FactOnlineSales: DirectQuery | DimProduct: Dual Mode / FactOnlineSales: DirectQuery
    Select All                 | 5-6 seconds                                       | 6-8 seconds (CPU maxing out)
    One product selected       | 1.5 seconds                                       | 0.5 seconds
    Exclude one product        | 6 seconds                                         | 1.5 seconds

    To summarize, performance improvements were apparent for slicing with a dimension using dual mode, but a cloud-based test is required, in particular, to see the speed of the full unfiltered matrix refresh.

  • Power BI Report Builder Report Page formatting basics


    While many people use Power BI but never try Report Builder, it is a handy tool for certain projects. If you have large volumes of detailed data that would take up too much space in your data model, or you find that DirectQuery is too slow, paginated reports can provide an alternative.

    In this example, we’ll grab some data from Kaggle. I want a big customer file that I can work with. In real life, you’re most likely to be importing this from a Data Warehouse, but the only difference here is the connection and import; all the methods of work will be the same once you’ve imported the data into your dataset.
    Here is the file I will use, entitled Bank Customer Segmentation 1M+ rows.
    https://www.kaggle.com/datasets/shivamb/bank-customer-segmentation

    The file is from a fictitious bank, with randomly generated data.
    On inspection in Excel, the file has 1,048,569 rows.

    report builder Transactions example


    You can download the report builder for free from here:

    https://www.microsoft.com/en-us/download/details.aspx?id=10594

    Open the report builder after you’ve installed it and select a blank report to get started:

    report builder creation

    Now the first thing we want to do is get our data, so right-click on the Datasets folder and select ‘Get Data’; you can log in to your Power BI account here. If you don’t have one, there is a free trial. I’ll let you figure that bit out.

    Once you’ve logged in, you’ll get the data source screen where you can select your data source. There is a whole host of data connectors here to choose from, including data warehouses such as Google BigQuery, SQL Server, and Snowflake, as well as other tools such as Google Analytics. In this case, because I’m lazy, I will just use the CSV file that I downloaded, but it doesn’t matter for learning Report Builder.

    report builder data source

    On the next screen, you need to enter your data connection information, but in this example, there is the option of uploading a file at the top below connection settings.

    The main difference here is that with a connection to a data warehouse or whatever, once you publish your report to the Power BI service, the report will refresh with the latest data when the report is reloaded, whereas I will need to upload my file again.

    report builder data source connection

    Once my file is uploaded, I get a preview of the data, and I can change a few things, like the delimiter, which is set to comma-delimited by default.

    importing data


    Next, I click on Transform Data, which takes me to a scaled-down version of Power Query compared with the one included in Power BI Desktop.

    report builder power query



    Here you can do things like add custom columns and format your data.
    Click Create to import your dataset from Power Query into the Report Builder design window.

    Your first dataset


    So let’s start by building a simple paginated report. I will use a simple table: Insert a table, and then I will drag my fields into the table.

    report builder adding a table

    Design and formatting the paginated report

    First, we need to turn on the properties window (View > Properties), where we can easily edit things like fonts and background colours.

    I got our mutual friend to create a logo for me and supply the background colours in RGB format.

    For the logo, I will add it to the header of the report (Insert > Header). I add a rectangle and move the image into the rectangle, so I can control its properties. The design is now coming to life (Roar).

    Report builder adding a header

    I can preview the report to test how it works by selecting Home > Run:
    It takes a while to load as it loads the whole dataset, not page by page.

    Below is a screenshot of the report in preview mode. There are a million rows, so it breaks the report across pages by default: it paginates the report. The logo in the header displays across pages, but the table headers do not, so I need to fix that.

    Report builder preview report

    Fixing Row Headers in place with Power BI Report Builder

    To fix the row headers in place, we need to do the following:
    Select the table (Tablix) and select advanced mode in the bottom right corner. The row and column groups will display at the bottom of the screen.

    manage rows in report builder


    In the row groups box (bottom left), I can control the positioning of the rows during pagination and scrolling.
    I set the following for the Title row and the headers row:

    FixedData = True. This will fix the row in position when I scroll.
    KeepWithGroup = After. This is to stop the row from breaking from the row below.
    RepeatOnNewPage = True. This is to repeat the row on each page.

    I set these on both the title row and the headers row (click on the static member for each, and the properties will show on the right). To turn on the Properties pane, click View > Properties on the main ribbon.

    Now, when I scroll, the header and title are fixed at the top of the page and also stay in place when I navigate pages.

    fixing the top row

    Now we have a basic report working, let’s publish it to the service. Remember, if we had a connection to a data warehouse table, the report would grab the latest data for us, but for now, we’re lazy and just using a static file that won’t change.

    To do so, select File > Publish and select the workspace you want to publish to:

    Once I’ve published my report, I can view it on the service and give others access:
    The report takes a while to load the 1M+ rows. Once it’s loaded, you can see some of the features available, such as export to Excel, which is a popular option. You can also set up a subscription so the report will run on a schedule and be emailed to you, which can be a useful option for some reports.

    exporting to excel
    subscribing to a report
  • SQL Window Functions


    Summary


    1. Aggregate Window Functions
    These functions perform calculations across a set of table rows that are somehow related to the current row.

    SUM() OVER(…) – Running total or sum per partition
    AVG() OVER(…) – Average per partition
    COUNT() OVER(…) – Count rows in partition
    MIN() OVER(…) – Minimum value in partition
    MAX() OVER(…) – Maximum value in partition

    2. Ranking Window Functions
    These assign a rank or row number to each row within a partition.

    ROW_NUMBER() OVER(…) – Unique sequential number per row in partition
    RANK() OVER(…) – Rank with gaps for ties
    DENSE_RANK() OVER(…) – Rank without gaps for ties
    NTILE(n) OVER(…) – Divides partition into n buckets and assigns a bucket number

    3. Value Navigation (Analytic) Functions
    These functions return values from other rows in the window frame relative to the current row.

    LAG(expression, offset, default) OVER(…) – Value from a previous row
    LEAD(expression, offset, default) OVER(…) – Value from a following row
    FIRST_VALUE(expression) OVER(…) – First value in the window frame
    LAST_VALUE(expression) OVER(…) – Last value in the window frame
    NTH_VALUE(expression, n) OVER(…) – The nth value in the window frame


    Examples


    1. Aggregate Window Functions

    These perform calculations over a window of rows but return a value for each row.

    Name   Sales  Date
    Alice  100    2025-01-01
    Alice  150    2025-01-03
    Bob    200    2025-01-01
    Alice  50     2025-01-05
    Bob    300    2025-01-04


    SUM() OVER()

    SELECT
      Name,
      Sales,
      SUM(Sales) OVER (PARTITION BY Name ORDER BY Date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
    FROM SalesData;


    The window function SUM(Sales) OVER (...) calculates a running total of sales for each Name.

    The window is partitioned by Name (so calculations are done separately for each person).

    The rows are ordered by Date within each partition.

    The frame ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW means the sum includes all rows from the first date up to the current row’s date.

    Output:

    Name   Sales  RunningTotal
    Alice  100    100
    Alice  150    250
    Alice  50     300
    Bob    200    200
    Bob    300    500


    AVG() OVER()

    customer_id  sales
    101          200
    102          150
    101          300
    103          400
    102          250
    101          100
    SELECT 
    customer_id, 
    sales, 
    AVG(sales) OVER (PARTITION BY customer_id) AS AvgSales 
    FROM orders;
    • The window function AVG(sales) OVER (PARTITION BY customer_id) calculates the average sales for each customer_id.
    • The average is computed over all rows with the same customer_id.
    • The result is shown on every row corresponding to that customer.
    customer_id  sales  AvgSales
    101          200    200
    101          300    200
    101          100    200
    102          150    200
    102          250    200
    103          400    400


    COUNT(*) OVER

    department  employee_id
    Sales       101
    Sales       102
    HR          201
    HR          202
    HR          203
    IT          301
    SELECT
      department,
      employee_id,
      COUNT(*) OVER (PARTITION BY department) AS DeptCount
    FROM employees;
    • The window function COUNT(*) OVER (PARTITION BY department) counts the total number of employees in each department.
    • This count is repeated on every row for that department.
    • No ordering is required here because the count is the same for all rows in the partition.

    department  employee_id  DeptCount
    Sales       101          2
    Sales       102          2
    HR          201          3
    HR          202          3
    HR          203          3
    IT          301          1


    2. Ranking Functions

    Assign ranks or row numbers within partitions.

    ROW_NUMBER() — Assigns unique row numbers

    employee_id  department  salary
    101          Sales       5000
    102          Sales       7000
    103          Sales       6000
    201          HR          4500
    202          HR          4800
    301          IT          8000

    SELECT
      employee_id,
      salary,
      ROW_NUMBER() OVER (PARTITION BY department ORDER BY salary DESC) AS RowNum
    FROM employees;

    employee_id  salary  RowNum
    102          7000    1
    103          6000    2
    101          5000    3
    202          4800    1
    201          4500    2
    301          8000    1

    The window function ROW_NUMBER() assigns a unique sequential number to each row within the partition defined by department.

    Rows are ordered by salary in descending order within each department.

    The highest salary in each department gets RowNum = 1, the next highest gets 2, and so on.

    RANK() — Assigns rank, with gaps for ties

    employee_id  department  salary
    101          Sales       7000
    102          Sales       7000
    103          Sales       6000
    201          HR          4800
    202          HR          4500
    301          IT          8000
    302          IT          8000
    303          IT          7500
    SELECT
      employee_id,
      salary,
      RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS Rank
    FROM employees;

    The RANK() function assigns a rank to each employee within their department, ordering by salary descending.

    Employees with the same salary get the same rank.

    The next rank after a tie skips the appropriate number of positions (i.e., gaps in ranking).

    employee_id  salary  Rank
    101          7000    1
    102          7000    1
    103          6000    3
    201          4800    1
    202          4500    2
    301          8000    1
    302          8000    1
    303          7500    3


    DENSE_RANK() — Like RANK but no gaps

    employee_id  department  salary
    101          Sales       7000
    102          Sales       7000
    103          Sales       6000
    201          HR          4800
    202          HR          4500
    301          IT          8000
    302          IT          8000
    303          IT          7500
    SELECT
      employee_id,
      salary,
      DENSE_RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS DenseRank
    FROM employees;
    • DENSE_RANK() assigns ranks within each department, ordered by salary descending.
    • Employees with the same salary get the same rank.
    • Unlike RANK(), DENSE_RANK() does not skip ranks after ties. The next distinct value gets the next consecutive rank.
    employee_id  salary  DenseRank
    101          7000    1
    102          7000    1
    103          6000    2
    201          4800    1
    202          4500    2
    301          8000    1
    302          8000    1
    303          7500    2



    3. Value Navigation Functions

    Access values from other rows relative to the current row.

    LAG() — Value from previous row

    employee_id  department  salary
    101          Sales       5000
    102          Sales       6000
    103          Sales       7000
    201          HR          4500
    202          HR          4800
    301          IT          8000
    SELECT
      employee_id,
      salary,
      LAG(salary, 1) OVER (PARTITION BY department ORDER BY salary) AS PrevSalary
    FROM employees;

    LAG(salary, 1) returns the salary value from the previous row within the same department, based on the ascending order of salary.

    For the first row in each department, there is no previous salary, so the result is NULL.

    employee_id  salary  PrevSalary
    101          5000    NULL
    102          6000    5000
    103          7000    6000
    201          4500    NULL
    202          4800    4500
    301          8000    NULL



    LEAD() — Value from next row

    employee_id  department  salary
    101          Sales       5000
    102          Sales       6000
    103          Sales       7000
    201          HR          4500
    202          HR          4800
    301          IT          8000

    SELECT
      employee_id,
      salary,
      LEAD(salary, 1) OVER (PARTITION BY department ORDER BY salary) AS NextSalary
    FROM employees;


    LEAD(salary, 1) returns the salary value from the next row within the same department, based on the ascending order of salary.

    For the last row in each department, there is no next salary, so the result is NULL


    employee_id  salary  NextSalary
    101          5000    6000
    102          6000    7000
    103          7000    NULL
    201          4500    4800
    202          4800    NULL
    301          8000    NULL


    LAST_VALUE() — Last value in the window frame

    employee_id  department  salary
    101          Sales       5000
    102          Sales       7000
    103          Sales       6000
    201          HR          4500
    202          HR          4800
    301          IT          8000

    SELECT
      employee_id,
      salary,
      LAST_VALUE(salary) OVER (PARTITION BY department ORDER BY salary ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS MaxSalary
    FROM employees;
    
    • LAST_VALUE(salary) returns the last salary value in the ordered partition.
    • The window frame ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ensures the entire partition is considered (not just up to the current row).
    • When ordered by salary ascending, the last value is the highest salary in the department.
    employee_id  salary  MaxSalary
    101          5000    7000
    103          6000    7000
    102          7000    7000
    201          4500    4800
    202          4800    4800
    301          8000    8000
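
    FIRST_VALUE() and NTH_VALUE() from the summary above work the same way. As a minimal sketch against the same example employees table (using the default frame, which starts at the first row of the partition), this returns the lowest salary in each department on every row:

    SELECT
      employee_id,
      salary,
      FIRST_VALUE(salary) OVER (PARTITION BY department ORDER BY salary) AS MinSalary
    FROM employees;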

  • Testing Power BI DAX Measures Speed with DAX Studio

    DAX Expression > DAX query > Vertipaq query.

    Test 1: Testing the Impact of USERELATIONSHIP() in a DAX Measure

    Does using USERELATIONSHIP() with an inactive relationship slow down a DAX measure? I’m not sure, so I thought I would test it out.

    To test this, I created a measure using the existing active relationship between my date table and my FactOnlineSales table, and then created a copy of the date table and joined it to the FactOnlineSales table in the same way, but with the relationship made inactive.

    I use DAX Studio to create two measures for calculating total sales, and then I throw in some filters for year and continent as follows:

    DEFINE
        MEASURE '_Measures'[test1] =
            CALCULATE (
                SUMX (
                    'FactOnlineSales',[Sales Amount]
                ),
                
                'DimDate'[CalendarYear] = 2009,
                'DimGeography'[ContinentName] = "Europe"
                ) 
    
        MEASURE '_Measures'[test2] =
            CALCULATE (
                SUMX (
                    'FactOnlineSales',[Sales Amount]
                ),
                'DimDate (2)'[CalendarYear] = 2009,
              	'DimGeography'[ContinentName] = "Europe",
                USERELATIONSHIP('DimDate (2)'[Datekey], FactOnlineSales[DateKey])
            )
    EVALUATE
    ROW (
        "test", [test2] 
    )

    Below is a screenshot of my DAX Studio settings. As you can see, I’ve got the ‘Clear cache on Run’ button selected, which helps ensure each test is equal. I also have the ‘Server Timings’ button selected. Note: be careful using ‘Clear cache on Run’ on a connection to a production model, as it will clear the cache (and hence slow down reports).

    For the original sales measure, which has an active relationship to the date table, I get a total time of 2,197 ms, broken down into 624 ms for the FE (formula engine) and 1,573 ms for the SE (storage engine, i.e. the Vertipaq engine). The FE turns the DAX expression into a DAX query and creates a query plan that it sends to the SE to retrieve the results of the query. The FE also caches the results of queries (this is why I have turned on ‘Clear cache on Run’; otherwise subsequent queries would return results very quickly).

    Comparing the 2 measures
    Here we run the query on each measure five times to get an average, and as you can see there was very little difference in performance. Indeed, the first three tests on the measure with the active relationship took longer (but this may have been because my laptop was busy doing something else). My results are below; I think they are pretty inconclusive, but they are interesting nonetheless, and the method would be worth repeating on a larger model.


    In the server timings you can view the Vertipaq query that is executed by the Analysis services:

    Each step of the execution is below:
    The first step selects the data from FactOnlineSales with the two filters, returning 1.4M rows of data (about 133 KB).
    The second step sums the sales amount and returns one row. The third step returns the start and end times as well as other information (if you mouse over the query).

    The Par. field stands for Parallelism, which represents the number of parallel threads used by the storage engine to retrieve the result at each step.

  • Analysing the Power BI Data Model to Improve Refresh Rates with DAX Studio


    I’ll start with some references:

    Definitive Guide to DAX Book by Marco Russo and Alberto Ferrari (the definitive experts).
    The Guy in a Cube video on slow model refresh.
    Chat GPT (I don’t know who that is!)
    My own testing and experience, plus a demo Power BI model using data from the Contoso Retail data warehouse.

    First things: Connecting DAX Studio to your data model in the Power BI service
    1. Get the XMLA endpoint URL from Power BI by selecting workspace settings > License info:
    You will need to log in with your Power BI login to connect.


    2. You can connect to your model in the service with DAX Studio as follows (you may need to download and install it first). You can open it either independently or from the External Tools ribbon within Power BI Desktop.



    File and Model Size
    Typical models use data imported into physical storage from a data warehouse and potentially other sources, such as Google Analytics. The Vertipaq engine stores the model in virtual memory. My Contoso Retail Power BI file is 193 MB (that’s not very big). It is compressed to reduce storage requirements.

    Model Size in DAX Studio (Advanced > View Metrics > Summary)
    When I look at the model summary from Advanced > View Metrics in DAX Studio, the model loaded into virtual memory by the Vertipaq engine is 253 MB. Also of interest are the 37 tables and 366 columns; my model only has 8 tables and 1 calculated table, so it will be interesting later to understand what these other tables are.



    So why is this different?

    In-memory Compression: Power BI compresses data in the .pbix file to reduce storage, so the file size on disk is often smaller than its in-memory model size. However, once loaded, Power BI expands it for efficient query performance. There is still compression (next section), it’s just not as much as in the stored file.

    Metadata Overhead: The model size shown in DAX Studio includes metadata, indexes, dictionaries, and structures needed for DAX calculations and relationships that Power BI uses while executing queries, which aren’t directly reflected in the file’s storage size.

    Cache and Temp Data: DAX Studio may include caches or other temporary data generated during analysis, which increases the apparent size of the model in memory.

    Unused Columns or Tables: Any tables or columns loaded but not used may also contribute to the model’s size in DAX Studio, while Power BI might not fully load them in the saved file.

    Note: According to the model settings, the model is not using caching (which is off by default), so I can rule out one of the four possibilities.

    Power BI License requirements based on model size

    The Power BI Pro license allows for models up to 1GB in size, 10 GB of native storage, and 8 refreshes a day
    The Power BI Premium license allows for models up to 100GB in size and 100 TB of native storage and up to 48 refreshes a day.

    Note: Beyond just loading the raw model, Power BI needs additional memory overhead for processing, caching, and temporary storage during data refreshes and calculations. This means that even if your model fits within the 1 GB limit, you’re likely to experience slow performance or even errors during complex operations, especially if near that cap.

    Semantic Model Refresh Speed (Vertipaq engine: imported tables)

    We can easily access the model refresh times in the Power BI workspace (select ‘Refresh history’ from the three-dots menu next to the model entry). This gives us our total model refresh benchmarks; we can average them, or remove outliers at known busy times as required.

    So what are the stages of the model refresh and which ones can we troubleshoot and optimize?

    1. Data import time: This one we can measure. Adam from the Guy in a Cube video gives us a simple suggestion for benchmarking our tables’ load times: simply do a SELECT * on each table in the data warehouse and record the time (repeat the process at different times of the day, perhaps remove outliers at busy times, and average). There is also network latency to consider with large volumes of data, but let’s just calculate: total refresh time – table SELECT * time:

    Here is an example from my Contoso data warehouse tables:
    1. DimCurrency: Time 0: Rows: 28
    2. DimCustomer: Time 0: Rows: 18,869
    3. DimDate: Time 0: Rows: 648
    4. DimGeography: Time 0: Rows: 274
    5. DimProduct: Time 0: Rows: 2,467
    6. DimProductCategory: Time 0: Rows: 8
    7. DimProductSubcategory: Time 0: Rows: 48
    8. DimStore: Time 0: Rows: 306
    9. FactOnlineSales: 2 mins 49 secs, Rows: 12.6M

    As you can see, only the FactOnlineSales table took any real time to SELECT.
    But then you might say: hey, why are you using tables in your models and not views? If a new column appears, the model refresh will fail.
    Fair point; in that case, you can run the same tests on your views.

    Now, I don’t have a gateway set up to connect to my laptop today, so I would need to set up something in the cloud to perform the test. But let’s say the model refreshed in 4 minutes and your SELECT * benchmarks totalled 2 minutes and 49 seconds; then you can say that for the remaining 1 minute and 11 seconds the Power BI service is doing something else.
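
    As a minimal sketch of how those SELECT * timings can be captured in SSMS (assuming the Contoso tables sit in the dbo schema), SET STATISTICS TIME reports the elapsed and CPU time for each statement in the Messages tab:

    SET STATISTICS TIME ON;
    SELECT * FROM dbo.FactOnlineSales;  -- elapsed time appears in the Messages tab
    SET STATISTICS TIME OFF;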

    So what happens once the data has been imported? Well, that’s when we need to get into the Vertipaq engine. Note that the engine is proprietary and so has secrets that we will never learn, but what we do know about the steps it performs is as follows:

    Vertipaq engine steps
    1. Loads the tables into columns
    2. Checks each column for the best sort order
    3. Applies compression to create the smallest, fastest model
    4. Loads the model into memory

    Note: the Vertipaq engine only deals with imported data. Direct Queries are processed by the Direct Query engine.

    DAX formula engine
    1. Creates calculated columns and tables (in memory) – these are not compressed as well as data imported through Vertipaq
    2. Creates relationships between columns

    So we need to learn more about what these two engines do.

    The Vertipaq Engine
    1. Stores imported data into a columnar database (for faster data retrieval).

    2. Sorts the columns into the best order for compression techniques (such as RLE)

    3. Compresses the columns using various (proprietary) techniques, such as:

    a) Value encoding, e.g. deducting a constant value from each column entry to create a value with fewer bits and hence less space.

    b) Hash encoding (dictionary encoding): creating a structure similar to normalization, where an ID and the value are stored in an in-memory dictionary (e.g. product sizes: 0: small, 1: medium, 2: large). The 0, 1, and 2 then replace the values small, medium, and large, resulting in fewer bits being stored in the column.

    c) Run Length Encoding (RLE): when values repeat in a column, e.g. entries 1-2000 are all Q1, this can be stored in a table as Q1: 2000; entries 2001 to 3500 are Q2, stored as Q2: 1500, and so on.
    Using this technique, the Vertipaq engine can compute results very quickly. Sorting is therefore very important for RLE-type compression. The engine does its own sorting, but you can potentially help it by sorting the data by the columns with the lowest cardinality first.
    Factors impacting compression are therefore as follows:
    1. Cardinality: the more unique values, the less compression can be done, hence increasing the size of the column in memory.
    2. The number of repeated values: the more repetition, the better the RLE compression possible.
    3. The number of rows matters, but cardinality and repeated values are more important for compression.
    4. The data type: e.g. automatic date/time should be turned off, as it creates a larger entry than the date field alone; the column will be smaller, and if removing the time element results in much lower cardinality, the column can be compressed much further using either RLE or hash encoding.

    Primary keys such as integer user IDs will most likely use value encoding.
    Numbers such as sales amounts that don’t have massive outliers will probably also be value encoded; otherwise, dictionary encoding may be used.

    To determine the best sort order, a sample of the first x rows (I think it’s a million) is taken, so it’s important that those first rows are of good quality. You can therefore help the Vertipaq engine by ensuring your initial data is representative of the whole dataset; otherwise the engine may have to re-encode data, e.g. if it finds outliers and its value encoding no longer works (hence slowing down the process).

    We can look at the type of compression used, by using DAX Studio (Advanced > View Metrics)


    Firstly, you’ll see an overview of the tables. Each column in a table is encoded differently, so you’ll see the encoding type as ‘Many’.
    You need to drill down to get to the column level (or you could just select the column tab on the left-hand side).

    Now, let’s have a look at the FactOnlineSales table.

    The table is sorted by default by the column size. This is the size in Bytes that the column is consuming in memory.

    The first column, OnlineSalesKey, has 12,627,608 rows and a cardinality of 12,627,608; as the name suggests it is a unique key, hence it has the same cardinality (unique values) as rows.
    It is also the largest column, consuming around 84 MB of memory and 31.77% of the database.
    It is encoded as VALUE type:

    Rows: 12,627,608
    Cardinality: 12,627,608
    Col Size: 84,156,128 (Data + Dictionary + Hier Size)
    Data: 33,645,560
    Dictionary: 128
    Hier Size: 50,510,440
    % Table: 33.07%

    Unique keys are generally required fields in most tables, so they’re not usually something you can save space on by optimizing (unless you can reduce the size of the key somehow).

    By applying the same analysis you can determine which columns and tables are having the biggest impact on your model size and hence slowing your model refresh time.

  • How to use the Window and OFFSET Functions in DAX Measures:


    The Window Function

    To use the Window function, we can start by creating a virtual table of customer sales by sales key using the CALCULATETABLE function and then creating a window on the table to return only the results we want by position. We can make use of DAX query to do this.

    
    DEFINE
    
    VAR CustomerSales =  CalculateTable(SUMMARIZECOLUMNS('V_FactOnlineSales'[CustomerKey], "Sales", SUMX('V_FactOnlineSales', V_FactOnlineSales[SalesAmount])), NOT(ISBLANK(V_FactOnlineSales[CustomerKey])))
    
    VAR CustomerSalesWindow = 
    
        WINDOW(
            1, ABS, 3, ABS,
            CustomerSales,
            ORDERBY([Sales], DESC)
        )
        
    EVALUATE
    CustomerSalesWindow


    As you can see from the code above, we start off by creating a table in a variable called CustomerSales, which has a column for CustomerKey and a summarized sales column. The table is filtered, so that the customer is not blank.

    We then create a second variable called ‘CustomerSalesWindow’ to filter our CustomerSales table using the Window function.
    The initial customer sales table is as follows

    We then create the window using the following code:

    
    VAR CustomerSalesWindow = 
    
        WINDOW(
            1, ABS, 3, ABS,
            CustomerSales,
            ORDERBY([Sales], DESC)
        )
    
    EVALUATE
    CustomerSalesWindow


    As you can see, the table is sorted by sales in descending order (we can also use PARTITIONBY, but not in this example).

    The syntax for the window function is as follows:
    WINDOW ( from[, from_type], to[, to_type][, <relation> or <axis>][, <orderBy>][, <blanks>][, <partitionBy>][, <matchBy>][, <reset>] )

    So you can see that we have:
    WINDOW(
    from: 1
    from-type: ABS
    to: 3
    to-type: ABS
    relation: CustomerSales
    ORDERBY([Sales], DESC)
    )


    As you can see, ORDERBY and PARTITIONBY are very similar to SQL’s ORDER BY and PARTITION BY; here it sorts the sales field in descending order.
    The positions from ABS: 1 to ABS: 3 give us the top 3 sales results.

    Of course, we could have used something like the TOPN() function to get the top 3 sales, but if we wanted a specific range of positions, e.g. the 3rd to 5th, then the WINDOW() function makes it much easier.

    The OFFSET() Function

    The OFFSET() function ‘Returns a single row that is positioned either before or after the current row within the same table, by a given offset’.

    In this example, we use OFFSET() to display the previous month’s sales in a table. We start off by creating a _SummarySales table variable and then add a second column, PreviousMonth, in the _SalesOffset variable; it uses -1 as the delta, which shows the previous month’s sales when the ORDERBY clause on CalendarMonth is set to ascending order (ASC). We can then add the final month-on-month column (in the _SalesMonthOnMonth variable) to the table to get the final summary.

    DEFINE
    	-- Syntax:
    	-- OFFSET ( <delta>[, <relation> or <axis>][, <orderBy>][, <blanks>][, <partitionBy>][, <matchBy>][, <reset>] )
    	
        -- Create Sales by month table
    	
    	VAR _SummarySales = SUMMARIZECOLUMNS(
    		DimDate[CalendarMonth],
    		"Sales", SUMX(
    			RELATEDTABLE(FactOnlineSales),
    			FactOnlineSales[SalesAmount]
    		)
    	)
    
    -- Add the offset column by one month
    
    	VAR _SalesOffset =
    	ADDCOLUMNS(
    		_SummarySales,
    		"PreviousMonth", SELECTCOLUMNS(
    			OFFSET(
    				-1,
    				_SummarySales,
    				ORDERBY(
    					[CalendarMonth],
    					ASC
    				)
    			),
    			[Sales]
    		)
    	)
    
    -- Add a month on month difference column
    
    	VAR _SalesMonthOnMonth =
    	ADDCOLUMNS(
    		_SalesOffset,
    		"Month on Month", [Sales] - [PreviousMonth]
    	)
    
    EVALUATE
    	_SalesMonthOnMonth

    After adding the previous month column, we can then add a month-on-month column.
    This gives us a table as follows:



  • How to filter DAX Measures In Power BI

    The application of various filter arguments is crucial to controlling the output of DAX measures when visual filters are present on a report page. When CALCULATE() is used with a filter argument, that argument overrides any existing filter on the column being filtered.
    e.g. CALCULATE(SUM(V_FactOnlineSales[SalesAmount]), V_DimProduct[BrandName]="Litware")
    Any filter on V_DimProduct[BrandName] is removed and the new filter is applied, so a slicer on that column would stop working.

    In this example, we create a number of measures in a matrix, using different filter functions. There are 3 slicers that can be applied to the matrix: Year, BrandName, and ProductCategoryName. The original measure is Sales Amount, a simple sum of the SalesAmount column from the FactOnlineSales table in the Contoso Retail Data Warehouse.



    Simple SUM Measure:
    The original measure has no filter arguments, so it is affected only by the current filter context applied by the slicers and by the matrix or table rows and columns.
    Sales Amount =
    SUM(V_FactOnlineSales[SalesAmount])

    CALCULATE(): Using the powerful CALCULATE() function, we can apply filters to the original measure to change the way the visual filters affect it.

    Sales Amount (Filter Brand Litware) = CALCULATE(SUM(V_FactOnlineSales[SalesAmount]), V_DimProduct[BrandName]="Litware")

    In the next example, we apply a filter using CALCULATE(), but on the year, which is also used on the matrix columns. As you can see, the filter on year coming from the columns is overridden, and the sum of sales for 2008 is repeated for each year from 2007 to 2009.

    Sales Amount (Filter Year 2008) = CALCULATE(SUM(V_FactOnlineSales[SalesAmount]), V_DimDate[CalendarYear]=2008)

    To fix this issue, we can use KEEPFILTERS().

    KEEPFILTERS(): The KEEPFILTERS() function lets us keep the Year filters coming from the matrix while still applying the new filter:

    Sales Amount (KeepFilters) = CALCULATE(SUM(V_FactOnlineSales[SalesAmount]), V_DimDate[CalendarYear]=2008, KEEPFILTERS(V_DimDate))

    REMOVEFILTERS(): By using the REMOVEFILTERS() function along with CALCULATE(), we can remove all the filters applied to the measure. As you can see in the table above, this removes the filters from the Year, Brand, and Product Category Name columns, essentially giving us the total sales for the company for all time.

    Sales Amount (RemoveFilters: all) = CALCULATE(SUMX(V_FactOnlineSales, V_FactOnlineSales[SalesAmount]), REMOVEFILTERS())

    REMOVEFILTERS('TableName'): We can also use the REMOVEFILTERS() function to remove filters only from a specific table (not all tables). In this example, we remove any filters on the V_DimDate table; filters on other tables will continue to filter the measure:

    Sales Amount (RemoveFilters: Table) = CALCULATE(SUMX(V_FactOnlineSales, V_FactOnlineSales[SalesAmount]), REMOVEFILTERS(V_DimDate))

    REMOVEFILTERS('TableName'[ColumnName]): A more granular method is to remove filters just on specific columns. In this example, we remove any filtering applied by 'V_DimProductCategory'[ProductCategoryName]:

    Sales Amount (RemoveFilters: Column) = CALCULATE(SUMX(V_FactOnlineSales, V_FactOnlineSales[SalesAmount]), REMOVEFILTERS('V_DimProductCategory'[ProductCategoryName]))

    ALL('TableName'): Removes all filters from a specified table. As a CALCULATE modifier this behaves much like REMOVEFILTERS('TableName'), but unlike REMOVEFILTERS(), ALL() can also be used as a table expression that returns all of a table's rows (see the sketch below).

    Sales Amount (All: DimProductCategory Table) = CALCULATE(SUMX(V_FactOnlineSales, V_FactOnlineSales[SalesAmount]), ALL(V_DimProductCategory))
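    To illustrate that difference, here is a minimal sketch (the measure name is just for illustration): ALL() can feed a table function directly, which REMOVEFILTERS() cannot, because REMOVEFILTERS() is only valid as a CALCULATE modifier.

    Product Category Count (All Rows) = COUNTROWS(ALL(V_DimProductCategory))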

    ALL('TableName'[ColumnName]): Removes all filters from the specified column or columns:
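    A minimal sketch following the same pattern as the other measures (the measure name is just for illustration):

    Sales Amount (All: ProductCategoryName column) = CALCULATE(SUMX(V_FactOnlineSales, V_FactOnlineSales[SalesAmount]), ALL('V_DimProductCategory'[ProductCategoryName]))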

    ALLEXCEPT(): Removes all filters except those on the specified column or columns. In this example, we remove all filters except those on the 'V_DimProductCategory'[ProductCategoryName] column.

    Sales Amount (AllExcept: ProductCategoryName) = CALCULATE(SUMX(V_FactOnlineSales, V_FactOnlineSales[SalesAmount]), ALLEXCEPT(V_DimProductCategory, 'V_DimProductCategory'[ProductCategoryName]))

    ALLNOBLANKROW(): If your data has blank rows in it, then this may be useful to you; if it doesn't, you don't need it. You can apply it to a table or to columns:

    SalesTableRows (non blank) = COUNTROWS(ALLNOBLANKROW('V_FactOnlineSales'))
    SalesOrderLines (non blank) = COUNTROWS(ALLNOBLANKROW('V_FactOnlineSales'[SalesOrderLineNumber]))

    ALLSELECTED('TableName'): Removes filters coming from within the visual. So in this example, ALLSELECTED() removes the filters on year within the matrix (i.e. the columns), but a slicer on year will still work. You can also apply it to specific columns (see the sketch below).

    Sales Amount (AllSelected DimDate) = CALCULATE(SUMX(V_FactOnlineSales, V_FactOnlineSales[SalesAmount]), ALLSELECTED(V_DimDate))
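    A minimal column-level sketch, following the same pattern (the measure name is just for illustration):

    Sales Amount (AllSelected: CalendarYear) = CALCULATE(SUMX(V_FactOnlineSales, V_FactOnlineSales[SalesAmount]), ALLSELECTED(V_DimDate[CalendarYear]))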

    Using multiple Tables and Columns in Filter Expressions


    The above use of filter functions affecting visuals can be further expanded to include multiple tables and multiple columns as in the examples below:

    Sales Amount (Filter Multiple Brands) = CALCULATE(SUM(V_FactOnlineSales[SalesAmount]), V_DimProduct[BrandName]="Litware" || V_DimProduct[BrandName]="Northwind Traders")

    Sales Amount (Filter Multiple Year) =

    CALCULATE(SUM(V_FactOnlineSales[SalesAmount]), V_DimDate[CalendarYear] IN {2008, 2009})

    Sales Amount (RemoveFilters multiple) = CALCULATE(SUMX(V_FactOnlineSales, V_FactOnlineSales[SalesAmount]), REMOVEFILTERS('V_DimProductCategory'[ProductCategoryName], 'V_DimProductCategory'[ProductCategoryDescription]))

    Sales Amount (All: multiple) = CALCULATE(SUMX(V_FactOnlineSales, V_FactOnlineSales[SalesAmount]), ALL(V_DimProductCategory), ALL(V_DimDate))

    Recommended Book: The Definitive Guide to DAX by Marco Russo and Alberto Ferrari













  • How to Create Multiple Measures With Power BI DAX Query Editor

    Creating multiple measures in DAX Query Editor

    It can be useful to create templates of commonly used DAX measures to reuse across models. Here we create a template of common sales measures, such as Sales YTD, Sales Last Year, Sales Year on Year, and Sales Year on Year %; we then apply the same logic for QTD and MTD.

    The full code for creating the measures is at the bottom of this section.

    For this example, I am using just 2 tables from the Contoso Retail Data Warehouse: DimDate and FactSales. They are joined on DateKey.

    We start with the Year sales measures as shown below in the DAX query editor. To add them to the model, we just click ‘Update model: Add new measure’, but first we want to format the code, using the Format Query button.

    Here is the code, with the DAX formatted.

    We can then click the 4 'Update model: Add new measure' links and it will add the 4 measures to the model.


    We can then create similar measures for QTD and MTD as follows:

    Here is the code for the Quarterly measures:

    The code for creating the Monthly measures is as follows:

    That gives me the 12 measures in record time!


    As promised, here is the full code that can be copied and pasted. Of course, you'll need to change the table names as required. Note that I have created an empty '_Measures' table to act as a container for the measures.

    // Learn more about DAX queries at https://aka.ms/dax-queries
    DEFINE
    //Year Measures
    	MEASURE '_Measures'[Sales_YTD] = CALCULATE(
    			SUM(FactSales[SalesAmount]),
    			DATESYTD('DimDate'[Datekey])
    		)
    	MEASURE '_Measures'[Sales_LY_YTD] = CALCULATE(
    			SUM(FactSales[SalesAmount]),
    			SAMEPERIODLASTYEAR(DATESYTD('DimDate'[Datekey]))
    		)
    	MEASURE '_Measures'[Sales_YOY] = '_Measures'[Sales_YTD] - '_Measures'[Sales_LY_YTD]
    	MEASURE '_Measures'[Sales_YOY%] = ('_Measures'[Sales_YTD] - '_Measures'[Sales_LY_YTD]) / '_Measures'[Sales_LY_YTD]
    	
    //QTD Measures
    	MEASURE '_Measures'[Sales_QTD] = CALCULATE(
    			SUM(FactSales[SalesAmount]),
    			DATESQTD('DimDate'[Datekey])
    		)
    	MEASURE '_Measures'[Sales_LY_QTD] = CALCULATE(
    			SUM(FactSales[SalesAmount]),
    			SAMEPERIODLASTYEAR(DATESQTD('DimDate'[Datekey]))
    		)
    	MEASURE '_Measures'[Sales_QTD_YOY] = '_Measures'[Sales_QTD] - '_Measures'[Sales_LY_QTD]
    	MEASURE '_Measures'[Sales_QTD_YOY%] = ('_Measures'[Sales_QTD] - '_Measures'[Sales_LY_QTD]) / '_Measures'[Sales_LY_QTD]
    	
    	//MTD Measures
    	MEASURE '_Measures'[Sales_MTD] = CALCULATE(
    			SUM(FactSales[SalesAmount]),
    			DATESMTD('DimDate'[Datekey])
    		)
    	MEASURE '_Measures'[Sales_LY_MTD] = CALCULATE(
    			SUM(FactSales[SalesAmount]),
    			SAMEPERIODLASTYEAR(DATESMTD('DimDate'[Datekey]))
    		)
    	MEASURE '_Measures'[Sales_MTD_YOY] = '_Measures'[Sales_MTD] - '_Measures'[Sales_LY_MTD]
    	MEASURE '_Measures'[Sales_MTD_YOY%] = ('_Measures'[Sales_MTD] - '_Measures'[Sales_LY_MTD]) / '_Measures'[Sales_LY_MTD]
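    If you want to guard the percentage measures against division-by-zero errors, a small variation (a sketch, not part of the script above) is to use DIVIDE(), which returns BLANK instead of an error when the denominator is zero or blank. For example, for the YTD version:

    	MEASURE '_Measures'[Sales_YOY% Safe] = DIVIDE('_Measures'[Sales_YTD] - '_Measures'[Sales_LY_YTD], '_Measures'[Sales_LY_YTD])

    The same pattern can be applied to the QTD and MTD percentage measures.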
  • How to use the Index Function in DAX Measures

    Demonstrating the use of the INDEX function with the Contoso Retail Data Warehouse. We can start off by building a virtual table in the DAX query editor, which we can use to apply the INDEX function. The table created is a list of the first 10 customers by CustomerKey from the DimCustomer table.

    DEFINE
    	-- Syntax
    	-- INDEX(<position>[, <relation> or <axis>][, <orderBy>][, <blanks>][, <partitionBy>][, <matchBy>][, <reset>] )
    
    	VAR CustomerSalesSample =
    	CALCULATETABLE(
    		SELECTCOLUMNS(
    			DimCustomer,
    			"CustomerKey", DimCustomer[CustomerKey],
    			"FirstName", DimCustomer[FirstName],
    			"LastName", DimCustomer[LastName],
    			"Total Sales", SUMX(
    				RELATEDTABLE(FactOnlineSales),
    				FactOnlineSales[SalesAmount]
    			)
    		),
    		TOPN(
    			10,
    			DimCustomer,
    			DimCustomer[CustomerKey],
    			ASC
    		)
    	)
    
    VAR Ind = INDEX(
    		1,
    		CustomerSalesSample,
    		ORDERBY(
    			[Total Sales],
    			DESC
    		)
    	)
    EVALUATE
    	Ind

    If we evaluate the CustomerSalesSample table first, we can see the table we are working with.

    The syntax for the INDEX function is as follows:

    — INDEX(<position>[, <relation> or <axis>][, <orderBy>][, <blanks>][, <partitionBy>][, <matchBy>][, <reset>] )
    In our example, we use a <position> of 1 to get the first sales amount, which is the highest sales amount because the Total Sales column is sorted in descending order (DESC).

    And then when we evaluate the Index expression (variable Ind), we get the following, which is the correct output we are looking for.

    To get the last position, we could either sort the data in ascending order (ASC) or use an index of -1, as in the following example:

    VAR Ind = INDEX(
    		-1,
    		CustomerSalesSample,
    		ORDERBY(
    			[Total Sales],
    			DESC
    		)
    	)

    When we use -1 as the index, we get the blank sales row returned, which isn't what we wanted, so we need to modify the code.

    One way of filtering out the blanks is to add an additional variable that filters the blank rows out of the table, as in the code below.

    DEFINE
    	-- Syntax
    	-- INDEX(<position>[, <relation> or <axis>][, <orderBy>][, <blanks>][, <partitionBy>][, <matchBy>][, <reset>] )
    
    	VAR CustomerSalesSample =
    	CALCULATETABLE(
    		SELECTCOLUMNS(
    			DimCustomer,
    			"CustomerKey", DimCustomer[CustomerKey],
    			"FirstName", DimCustomer[FirstName],
    			"LastName", DimCustomer[LastName],
    			"Total Sales", SUMX(
    				RELATEDTABLE(FactOnlineSales),
    				FactOnlineSales[SalesAmount]
    			)
    		),
    		TOPN(
    			10,
    			DimCustomer,
    			DimCustomer[CustomerKey],
    			ASC
    		) 
    
    	)
    	VAR CustomerSalesNotBlank = FILTER(CustomerSalesSample, NOT(ISBLANK([Total Sales])))	
    
    	VAR Ind = INDEX(
    		-1,
    		CustomerSalesNotBlank,
    		ORDERBY(
    			[Total Sales],
    			DESC
    		)
    	)
    EVALUATE
     Ind

    Evaluating this code now gives us the lowest non-blank total sales amount:

  • How to use the RankX Function in DAX Measures

    Using the RANK() Function in DAX

    To demonstrate the use of the RANK() function, we'll start by creating a simple sales table with the sales amount and the year. Then we'll add additional columns to create examples of using the RANK function.

    In the first example (SimpleSalesRank), we'll just create a simple ranking. By default, the function ranks the Sales column from the lowest amount to the highest, and you'll notice the blank sales value is included as rank one. Note that we can order by more than one column, and we can also partition by more than one column (see the multi-column sketch after the partition example below).

    SimpleSalesRank = RANK(ORDERBY(SalesRank[Sales]))
    

    The first thing we can do is move the blank value to the end of the ranking, using the LAST parameter.

    SimpleSalesRankBLANKLast = RANK(ORDERBY(SalesRank[Sales]), LAST)

    Then we can rank the sales values from the highest first, while still retaining blank in the last position.

    SimpleSalesRankDESC0 = RANK(ORDERBY(SalesRank[Sales], DESC), LAST)

    We can then partition the ranking by using the Year column.

    SimpleSalesRankPARTITION = RANK(DENSE, ORDERBY(SalesRank[Sales], DESC),  LAST, PARTITIONBY(SalesRank[Year]))
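    As a minimal sketch of ordering by more than one column (the [Quantity] column here is hypothetical, purely to illustrate the syntax):

    SalesRankMultiOrder = RANK(DENSE, ORDERBY(SalesRank[Sales], DESC, SalesRank[Quantity], ASC), LAST, PARTITIONBY(SalesRank[Year]))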

    Using RANKX() function in DAX

    In the next example, we create a similar Sales table with a Sales column and a Year column. We can then use the RANKX() function to create our rankings.

    First, we’ll create a simple ranking as before, but with RANKX(). Included here is the function syntax in the comments.
    As you can see from the table above, the RANKX function defaults to sorting the highest sales value first, whereas the RANK function sorts it last. The RANKX function also defaults to putting the blank last, whereas the RANK function ordered it first.

    RankXFunction = 
    --RANKX(<table>, <expression>[, <value>[, <order>[, <ties>]]])  
    RANKX(SalesRankX2, SalesRankX2[Sales])

    We can reverse the order with the following code, but you’ll notice we don’t use the ORDERBY clause in the code:

    RankXFunctionASC = RANKX(SalesRankX2, SalesRankX2[Sales],,  ASC)

    We can also apply the DENSE option for ties, as the default is to SKIP ranks after equal values. For example, there are 2 sales of value 94, which are both ranked 6; because the default treatment of ties is SKIP, the next rank value jumps to 8. With the DENSE option, the next rank does not jump.

    RankXFunctionDENSE = 
    --RANKX(<table>, <expression>[, <value>[, <order>[, <ties>]]])  
    --note: default ties is SKIP
    RANKX(SalesRankX2, SalesRankX2[Sales], ,  ASC, DENSE)

    In the next example we use RANKX() with RELATEDTABLE(). We start off by creating a sample of the DimCustomer table, joined to the FactOnlineSales table, then add a ranking column to the new customer table for total sales. We then check the ranking by adding a total sales check column.

    Step 1: Create a sample customer table. In the code below, we create a simple selection of CustomerKey, FirstName, and LastName, and then filter the table to the first 10 customers by CustomerKey. This gives us a table of the first 10 customers in the DimCustomer table. These data come from the Contoso Retail Data Warehouse.

    CustomerSimple = CALCULATETABLE(
        SELECTCOLUMNS(
            DimCustomer,
            "CustomerKey", DimCustomer[CustomerKey],
            "FirstName", DimCustomer[FirstName],
            "LastName", DimCustomer[LastName]
        ),
        TOPN( 10, DimCustomer, DimCustomer[CustomerKey], ASC )
    )

    The table below is created (first 3 columns); we then add the SalesRank and CustomerTotalSalesCheck columns to demonstrate the use of the RANKX function with the RELATEDTABLE function.

    The code for ranking sales is below. We then add the CustomerTotalSalesCheck column to confirm that our ranking is working correctly.
    As you can see, it is: rank 1 is allocated to total sales of £3,932, and the last value, a blank, is ranked 10. You will notice there is no rank 8, because the default treatment of ties is to SKIP ranks. We can change this by adding the DENSE option.

    SalesRank = RANKX(CustomerSimple, SUMX(RELATEDTABLE(FactOnlineSales), FactOnlineSales[SalesAmount]))

    Add the CustomerTotalSalesCheck column:

    CustomerTotalSalesCheck = SUMX(RELATEDTABLE(FactOnlineSales), FactOnlineSales[SalesAmount]) 

    Next, we add DENSE so the ranks don't jump after ties (optional):

    SalesRankDENSE = RANKX(CustomerSimple, SUMX(RELATEDTABLE(FactOnlineSales), FactOnlineSales[SalesAmount]),,,Dense)

    Creating a Measure using RANKX()
    Now that we have some good examples of using the RANK and RANKX functions, we can use them in a measure, which creates a temporary table in memory.

    To create the measure we can use the DAX query view in Power BI, Tabular Editor, or DAX Studio. My preference is Tabular Editor, but the code will work in all three, allowing us to see the data as we build up the measure. If you just try to write the measure directly, it's hard to be sure what's going on unless you build physical tables in Power BI, and those are slower to work with.

    Here we borrow the code from the previous example creating a physical table, but ensure the table is set to DimCustomer in the Rank function.

    DEFINE
    VAR CustomerSample = 
    CALCULATETABLE(
        SELECTCOLUMNS(
            DimCustomer,
            "CustomerKey", DimCustomer[CustomerKey],
            "FirstName", DimCustomer[FirstName],
            "LastName", DimCustomer[LastName], 
    		"Rank", RANKX(DimCustomer, SUMX(RELATEDTABLE(FactOnlineSales), FactOnlineSales[SalesAmount]),,,Dense) ,
    	    "TotalSalesCheck",  SUMX(RELATEDTABLE(FactOnlineSales), FactOnlineSales[SalesAmount]) 
    		    ),
        TOPN( 10, DimCustomer, DimCustomer[CustomerKey], ASC )
    )
    
    EVALUATE
    CustomerSample
    

    Here is the output of the code in DAX query view (inside Power BI; I haven't used Tabular Editor this time).

    As you can see, the top rank goes to total sales of £3,932, as before.

    I can now filter the ranking table; in this example, I filter for ranks greater than 1 and less than 4.

    DEFINE
    VAR CustomerSample = 
    CALCULATETABLE(
        SELECTCOLUMNS(
            DimCustomer,
            "CustomerKey", DimCustomer[CustomerKey],
            "FirstName", DimCustomer[FirstName],
            "LastName", DimCustomer[LastName], 
    		"RankNumber", RANKX(DimCustomer, SUMX(RELATEDTABLE(FactOnlineSales), FactOnlineSales[SalesAmount]),,,Dense) ,
    	    "TotalSalesCheck",  SUMX(RELATEDTABLE(FactOnlineSales), FactOnlineSales[SalesAmount]) 
    		    ),
        TOPN( 10, DimCustomer, DimCustomer[CustomerKey], ASC )
    )
    
    VAR TopThreeRankSales = FILTER(CustomerSample, AND([RankNumber] > 1, [RankNumber]< 4))
    
    EVALUATE
    TopThreeRankSales

    This gives me a new table:

    Next, we want the sum of the 2nd and 3rd-ranked sales. We can do this with the GROUPBY function, but without grouping on any columns.

    DEFINE
    VAR CustomerSample = 
    CALCULATETABLE(
        SELECTCOLUMNS(
            DimCustomer,
            "CustomerKey", DimCustomer[CustomerKey],
            "FirstName", DimCustomer[FirstName],
            "LastName", DimCustomer[LastName], 
    		"RankNumber", RANKX(DimCustomer, SUMX(RELATEDTABLE(FactOnlineSales), FactOnlineSales[SalesAmount]),,,Dense) ,
    	    "TotalSalesCheck",  SUMX(RELATEDTABLE(FactOnlineSales), FactOnlineSales[SalesAmount]) 
    		    ),
        TOPN( 10, DimCustomer, DimCustomer[CustomerKey], ASC )
    )
    
    VAR TopThreeRankSales = FILTER(CustomerSample, AND([RankNumber] > 1, [RankNumber]< 4))
    VAR SalesOfSecondAndThird = GROUPBY(TopThreeRankSales, "SalesOfSecondAndThird", SUMX(CURRENTGROUP(), [TotalSalesCheck]))
    
    EVALUATE
    SalesOfSecondAndThird

    This gives the following output:

    We should now be able to use the DAX in a measure, so we transfer the code into a measure as follows.
    The code using DEFINE and EVALUATE is a DAX query in the DAX query editor; when we create a measure, we are writing a DAX expression, which is converted into a query when it is evaluated. Here is the code for creating the measure:

    _SecondAndThirdTotalSales = 
    VAR CustomerSample = 
    CALCULATETABLE(
        SELECTCOLUMNS(
            DimCustomer,
            "CustomerKey", DimCustomer[CustomerKey],
            "FirstName", DimCustomer[FirstName],
            "LastName", DimCustomer[LastName], 
    		"RankNumber", RANKX(DimCustomer, SUMX(RELATEDTABLE(FactOnlineSales), FactOnlineSales[SalesAmount]),,,Dense) ,
    	    "TotalSalesCheck",  SUMX(RELATEDTABLE(FactOnlineSales), FactOnlineSales[SalesAmount]) 
    		    ),
        TOPN( 10, DimCustomer, DimCustomer[CustomerKey], ASC )
    )
    
    VAR TopThreeRankSales = FILTER(CustomerSample, AND([RankNumber] > 1, [RankNumber]< 4))
    VAR SalesOfSecondAndThird = GROUPBY(TopThreeRankSales, "SalesOfSecondAndThird", SUMX(CURRENTGROUP(), [TotalSalesCheck]))
    
    RETURN
    SalesOfSecondAndThird

    We can then use the measure in a visualization as follows:

  • An Example of Creating a Sales Report with SQL

    We can create a more interesting sales report using window functions and additional calculations in the query, including the gross margin %, total profit, and ranks for product price, sales quantity, and sales amount. The query uses a Common Table Expression (CTE) to group the data first (since we can't combine the GROUP BY aggregation and the window-function rankings in a single step).

    This is the kind of report decision-makers will be interested in and as more questions come up, additional fields can be added.

    USE [ContosoRetailDW];
    -- Note: Qty * UnitPrice - Discount = TotalSalesAmount (small discrepancy)
    -- CTE
    
    WITH sub1 AS
    (
    SELECT p.ProductName, s.UnitCost, s.UnitPrice, SUM(SalesQuantity) as TotalSalesQuantity, 
    (s.UnitCost * SUM(SalesQuantity)) as TotalCost, SUM(DiscountAmount) as TotalDiscount,
    SUM(SalesAmount) as TotalSalesAmount
    FROM [dbo].[FactOnlineSales] as s
    LEFT OUTER JOIN [dbo].[DimProduct] as p ON s.[ProductKey] = p.[ProductKey]
    LEFT OUTER JOIN [dbo].[DimDate] d on s.DateKey = d.DateKey
    LEFT OUTER JOIN [dbo].[DimStore] st ON s.StoreKey = st.StoreKey
    WHERE 
    d.CalendarYear = 2007 AND d.CalendarMonthLabel = 'January'
    AND
    st.StoreName = 'Contoso Europe Online Store'
    GROUP BY  p.ProductName, s.UnitCost, s.UnitPrice
    )
    
    --Main query referencing the CTE
    
    SELECT ProductName, UnitCost, UnitPrice, (UnitPrice-UnitCost)/UnitCost as Margin, 
    TotalSalesQuantity, format(TotalCost,'$0,,.0M') as TotalCost, format(TotalSalesAmount, '$0,,.0M') as TotalSales, 
    format(TotalSalesAmount-TotalCost, '$0,,.0M')  as TotalProfit,
    TotalDiscount,
    RANK() OVER(ORDER BY UnitPrice DESC) as PriceRank,
    RANK() OVER(ORDER BY TotalSalesQuantity DESC) as QtyRank,
    RANK() OVER(ORDER BY TotalSalesAmount DESC) as SalesRank
    FROM sub1
    ORDER BY SalesRank
  • Using Python for Multi-Linear Regression

    Creating a simple Linear regression model and preparing for multi-linear regression.

    In this example, we use a sample of marketing spend data vs. sales and inspect the correlation between radio spend and total sales. The regression line is fitted using the ols function from statsmodels.formula.api.

    You can download the original file from Kaggle here, then just update the path in the df = pd.read_csv() line to wherever you save the file.

    
    import pandas as pd
    import seaborn as sns
    from statsmodels.formula.api import ols
    import statsmodels.api as sm
    import matplotlib.pyplot as plt
    pd.set_option('display.max_columns', None)
    df = pd.read_csv("marketing_sales_data.csv")
    
    # Drop rows with any null values
    df.dropna(inplace=True)
    
    # Check and handle duplicates if needed
    if df.duplicated().sum() > 0:
        df.drop_duplicates(inplace=True)
    
    # Rename columns to snake_case
    df.rename(columns = {'Social Media': 'Social_Media'}, inplace = True)
    
    # Simple order encoding
    tv_dict = {'Low': 1, 'Medium': 2, 'High': 3}
    df['TV'] = df['TV'].replace(tv_dict)
    
    # One-hot encoding for non-ordinal variable
    df = pd.get_dummies(df, columns=['Influencer'], dtype=int)
    
    
    # Define and fit the model
    ols_formula = "Sales ~ Radio"
    OLS = ols(formula=ols_formula, data=df)
    model = OLS.fit()
    summary = model.summary()
    print(summary)  #Prints off the statistical summary including R squared and the Beta coefficients.
    
    # Calculate residuals and predicted values
    residuals = model.resid
    # Predict using a DataFrame so the formula interface can look up the 'Radio' column
    y_pred = model.predict(df[['Radio']])
    

    These results are returned from the model.summary() method of the fitted OLS (ordinary least squares) model from the statsmodels module. R squared is calculated as 0.757, meaning roughly 76% of the variability in y (sales) is accounted for by radio spend. However, if we look at the other media, we will see that another variable (TV) also has a strong correlation.

    The beta coefficient for radio spend is 8.17, which means that for every additional $1 million in radio spend, the model predicts an additional $8.17 million in sales.