Category: Data Wrangling

Merging Two Excel Files or Sheets Using Power Query (with the Merge Based on Multiple Columns)

Excel Power Query is a powerful, no-code/low-code tool that allows you to combine and transform data from multiple sources in a repeatable and refreshable way. One common use case is merging two Excel files or worksheets based on multiple matching columns, similar to a SQL join. Power Query is a major part of Power BI, but it can also be used directly in Excel, which is what we will do here.

When to Use Power Query for Merging

Power Query is ideal when:

  • You receive recurring Excel files with the same structure
  • You need a reliable, refreshable merge process
  • You want to avoid complex formulas like VLOOKUP or XLOOKUP across many columns

Step-by-Step Overview

1. Load Both Data Sources into Power Query

  • Open Excel and go to Data → Get Data
  • Choose From Workbook (for separate files) or From Table/Range (for sheets in the same file)

Tip: Ensure the columns you plan to merge on have the same data types (e.g., both text or both number).


  • Load each dataset into Power Query as a separate query

2. Start the Merge Operation

  • In Power Query, select the primary table
  • Go to Query → Merge Queries
  • Choose the secondary table from the dropdown

3. Select Multiple Matching Columns

  • Click the first matching column in the primary table
  • Hold Ctrl (or Cmd on Mac) and select additional matching columns
  • Repeat the same column selections in the secondary table, in the same order

For example, if you needed to perform the merge on CustomerID, OrderDate, and Region, you would click CustomerID, then hold the Ctrl key and click OrderDate, then (while still holding Ctrl) click Region.

Power Query treats this as a composite key, and all selected columns must match for rows from both tables to merge.
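
Behind the scenes, Power Query records the merge as a single step in M. Here is a minimal sketch of that step, assuming the two queries are named Orders and Shipments (both names are hypothetical) and using the three example columns above; passing a list of column names as the key is what makes it a composite-key merge:

MergedTable = Table.NestedJoin(
    Orders,
    {"CustomerID", "OrderDate", "Region"},
    Shipments,
    {"CustomerID", "OrderDate", "Region"},
    "Shipments",
    JoinKind.LeftOuter
)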


4. Choose the Join Type

Select the appropriate join kind:

  • Left Outer – Keep all rows from the first table (most common) and bring in the values from matching rows in the second table
  • Inner – Keep only the rows that match in both tables
  • Full Outer – Keep all rows from both tables, merging rows where there is a match and leaving the other side empty where there is none

Click OK to complete the merge.


5. Expand the Merged Data

  • A new column appears containing nested tables
  • Click the expand icon to select which columns to bring in
  • Remove unnecessary columns to keep the dataset clean
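
In M, the expand step from the list above is also a single function call. A sketch, continuing the earlier example, where "Shipments" is the merged column and the two kept columns are hypothetical:

Expanded = Table.ExpandTableColumn(
    MergedTable,
    "Shipments",
    {"ShipDate", "Carrier"}
)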

6. Load and Refresh

  • Click Close & Load
  • The merged dataset is now available in Excel
  • When source files change, simply click Refresh to update everything automatically

Key Benefits

  • Handles multi-column joins cleanly and reliably
  • Eliminates fragile lookup formulas
  • Fully refreshable and auditable
  • Scales well as data volume grows

In Summary

Using Power Query to merge Excel data on multiple columns brings database-style joins into Excel, making your workflows more robust, maintainable, and professional. Once set up, it saves time and reduces errors—especially for recurring reporting and analytics tasks.

Thanks for reading!

Understanding UNION, INTERSECT, and EXCEPT in Power BI DAX

When working with data in Power BI, it’s common to need to combine, compare, or filter tables based on their rows. DAX provides three powerful table / set functions for this: UNION, INTERSECT, and EXCEPT.

These functions are especially useful in advanced calculations, comparative analysis, and custom table creation in reports. If you have used these functions in SQL, the concepts here will be familiar.

Sample Dataset

We’ll use the following two tables throughout our examples:

Table: Sales_2024

The above table (Sales_2024) was created using the following DAX code utilizing the DATATABLE function (or you could enter the data directly using the Enter Data feature in Power BI):
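
A representative definition might look like the following; the rows shown are illustrative and chosen to be consistent with the UNION, INTERSECT, and EXCEPT results discussed below:

Sales_2024 =
DATATABLE(
    "Product", STRING,
    "Region", STRING,
    "Amount", INTEGER,
    {
        {"A", "North", 100},
        {"B", "South", 150},
        {"D", "West", 180}
    }
)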

Table: Sales_2025

The above table (Sales_2025) was created using the following DAX code utilizing the DATATABLE function (or you could enter the data directly using the Enter Data feature in Power BI):
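
Again, a representative definition with illustrative rows; note that {"D", "West", 180} appears twice, which matters for the INTERSECT example below:

Sales_2025 =
DATATABLE(
    "Product", STRING,
    "Region", STRING,
    "Amount", INTEGER,
    {
        {"B", "South", 150},
        {"D", "West", 180},
        {"D", "West", 180},
        {"E", "East", 200}
    }
)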

Now that we have our two test tables, we can use them to explore the 3 table / set functions – UNION, INTERSECT, and EXCEPT.

1. UNION – Combine Rows from Multiple Tables

The UNION function returns all rows from both tables, including duplicates. It requires the same number of columns, and compatible data types in corresponding columns, in the tables being UNION’ed. The column names do not have to match; only the number of columns and the data types must.

DAX Syntax:

UNION(<Table1>, <Table2>)

For our example, here is the syntax and resulting dataset:

UnionTable = UNION(Sales_2024, Sales_2025)

As you can see, the UNION returns all rows from both tables, including duplicates.

If you were to reverse the order of the tables (in the function call), the result remains the same (as shown below):
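
UnionTableReverse = UNION(Sales_2025, Sales_2024)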

To remove duplicates, you can wrap the UNION inside a DISTINCT() function call, as shown below:
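
DistinctUnionTable = DISTINCT(UNION(Sales_2024, Sales_2025))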

2. INTERSECT – Returns Rows Present in Both Tables

The INTERSECT function returns only the rows that appear in both tables (based on exact matches across all columns).

DAX Syntax:

INTERSECT(<Table1>, <Table2>)

For our example, here is the syntax and resulting dataset:

IntersectTable = INTERSECT(Sales_2024, Sales_2025)

Only the rows in Sales_2024 that are also found in Sales_2025 are returned.

If you were to reverse the order of the tables, you would get the following result:

IntersectTableReverse = INTERSECT(Sales_2025, Sales_2024)

In this case, it returns only the rows in Sales_2025 that are also found in Sales_2024. Since the record with “D – West – $180” exists twice in Sales_2025 and also exists in Sales_2024, both records are returned. So, while it might not be relevant for all datasets, order does matter when using INTERSECT.

3. EXCEPT – Returns Rows in One Table but Not the Other

The EXCEPT function returns rows from the first table that do not exist in the second.

DAX Syntax:

EXCEPT(<Table1>, <Table2>)

For our example, here is the syntax and resulting dataset:

ExceptTable = EXCEPT(Sales_2024, Sales_2025)

Only the rows in Sales_2024 that are not in Sales_2025 are returned.

If you were to reverse the order of the tables, you would get the following result:

ExceptTableReverse = EXCEPT(Sales_2025, Sales_2024)

Only the rows in Sales_2025 that are not in Sales_2024 are returned. As you have seen, since EXCEPT pulls rows from the first table that do not exist in the second, order does matter when using EXCEPT.

Comparison table summarizing the 3 functions:

UNION
  • Purpose & Output: Returns all rows from both tables
  • Match Criteria: Column position (number of columns) and data types must match
  • Order Sensitivity: Order does not matter
  • Duplicate Handling: Keeps duplicates; they can be removed by wrapping the result in DISTINCT()

INTERSECT
  • Purpose & Output: Returns rows that appear in both tables (i.e., rows that match across all columns)
  • Match Criteria: Column position (number of columns), data types, and values must match
  • Order Sensitivity: Order matters; rows (including duplicates) are drawn from the first table
  • Duplicate Handling: Returns duplicates only if they exist in the first table

EXCEPT
  • Purpose & Output: Returns rows from the first table that do not exist in the second
  • Match Criteria: Column position (number of columns) and data types must match; rows are returned when their values have no match in the second table
  • Order Sensitivity: Order matters
  • Duplicate Handling: Returns duplicates only if they exist in the first table

Additional Notes for your consideration:

  • Column Names: Only the column names from the first table are kept; the second table’s columns must match in count and data type.
  • Performance: On large datasets, these functions can be expensive, so you should consider filtering the data before using them.
  • Case Sensitivity: String comparisons are generally case-insensitive in DAX.
  • Real-World Use Cases:
    • UNION – Combining a historical dataset and a current dataset for analysis.
    • INTERSECT – Finding products sold in both years.
    • EXCEPT – Identifying products discontinued or newly introduced.

Thanks for reading!

SQL Tips: How to Generate INSERT Statements from the Output of a SELECT Statement Using Toad

If you need to build INSERT statements for many rows of data, whether to insert that data into another table or into the same table in a different environment, Toad offers a convenient way to do this. This is a common need when moving data between environments.

In Toad, execute the appropriate select statement on the source table …

Select * from [my_schema].[my_simple_table];

I used a simple “select *” above, but your SQL statement can be any valid SQL statement that returns the data you want to insert into the other table. You can select specific columns, add filters, joins, and any other valid SQL operations.

Let’s say you want to insert the output into another table in a different schema.

Right-click on the output result data, and click “Export Dataset…”

From the “Export format” drop-down menu, choose “Insert Statements”.

In the Output section of the Export Dataset dialog box, enter the location and name of the Insert Script file that will be generated.

There are several other parameters you can configure, but we won’t cover them all here.

If you only want to generate inserts for some selected rows, select “Export only selected rows”.

If you need to specify the schema of the target table, select “Include schema name”.

In the Table section, enter the name of the target schema and table.

Note that there are data masking options available that can be very useful if, for example, you are moving data from a Production environment to a Non-Production environment and do not want to expose sensitive data there.

After you have set the parameters relevant to your scenario, click “OK”.

The Insert Script file of all the data will be generated with the filename and at the location you specified. And the insert statements will include the name of the schema and table you specified.
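
For example, the generated script might look something like this (the column names and values are purely illustrative):

INSERT INTO target_schema.my_simple_table (ID, NAME, REGION) VALUES (1, 'Alice', 'North');
INSERT INTO target_schema.my_simple_table (ID, NAME, REGION) VALUES (2, 'Bob', 'South');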

Thanks for reading!

Data Cleaning Methods

Data cleaning is an essential step in the data preprocessing pipeline when preparing data for analytics or data science. It involves identifying and correcting or removing errors, inconsistencies, and inaccuracies in the dataset to improve its quality and reliability. Data should be cleaned before being used in analyses, reporting, development, or integration. Here are some common data cleaning methods:

Handling missing values:

  • Delete rows or columns with a high percentage of missing values if they don’t contribute significantly to the analysis.
  • Impute missing values by replacing them with a statistical measure such as mean, median, mode, or using more advanced techniques like regression imputation or k-nearest neighbors imputation.

Handling categorical variables:

  • Encode categorical variables into numerical representations using techniques like one-hot encoding, label encoding, or target encoding.

Removing duplicates:

  • Identify and remove duplicate records based on one or more key variables.
  • Be cautious when removing duplicates, as sometimes duplicated entries may be valid and intentional.

Handling outliers:

  • Identify outliers using statistical methods like z-scores, box plots, or domain knowledge.
  • Decide whether to remove outliers or transform them based on the nature of the data and the analysis goals.

Correcting inconsistent data:

  • Standardize data formats: Convert data into a consistent format (e.g., converting dates to a specific format).
  • Resolve inconsistencies: Identify and correct inconsistent values (e.g., correcting misspelled words, merging similar categories).

Dealing with irrelevant or redundant features:

  • Remove irrelevant features that do not contribute to the analysis or prediction task.
  • Identify and handle redundant features that provide similar information to avoid multicollinearity issues.

Data normalization or scaling:

  • Normalize numerical features to a common scale (e.g., min-max scaling or z-score normalization) to prevent certain features from dominating the analysis due to their larger magnitudes.
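
To make a few of these methods concrete, here is a minimal Python/pandas sketch covering imputation, duplicate removal, z-score outlier filtering, one-hot encoding, and min-max scaling; the DataFrame and column names are hypothetical:

import pandas as pd

# Hypothetical dataset; column names and values are illustrative
df = pd.DataFrame({
    "age": [25, None, 40, 40, 120],
    "city": ["NY", "LA", None, "LA", "NY"],
})

# Impute missing values: median for numeric, mode for categorical
df["age"] = df["age"].fillna(df["age"].median())
df["city"] = df["city"].fillna(df["city"].mode()[0])

# Remove exact duplicate rows
df = df.drop_duplicates()

# Filter out rows more than 3 standard deviations from the mean (z-score)
z = (df["age"] - df["age"].mean()) / df["age"].std()
df = df[z.abs() <= 3]

# One-hot encode the categorical column
df = pd.get_dummies(df, columns=["city"])

# Min-max scale the numeric column to [0, 1]
df["age"] = (df["age"] - df["age"].min()) / (df["age"].max() - df["age"].min())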

Data integrity issues:

  • Check for data integrity problems such as inconsistent data types, incorrect data ranges, or violations of business rules.
  • Resolve integrity issues by correcting or removing problematic data.

It’s important to note that the specific data cleaning methods that need to be applied to a dataset will vary depending on the nature of the dataset, the analysis goals, and domain knowledge. It’s recommended to thoroughly understand the data and consult with domain experts when preparing to perform data cleaning tasks.

Transpose vs. Pivot vs. Unpivot in Power BI

In this post, I will quickly show how to perform 3 key transformations in Power BI – Transpose, Pivot, and Unpivot – and what impact they have on the data they are applied to. These transformations are often needed to reshape the original data into the form required for the Power BI model and reports. We will use a simple example so that you can clearly see the effect of each transformation.

This is the original dataset – Student Grades:

Transpose

Transpose the data by going to the Transform ribbon and clicking “Transpose”.

This is what the data looks like after the Transpose transformation is applied. The rows are converted into columns, and the columns are converted into rows. Default headers / column names are applied.
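
If you check the formula bar, the Transpose step is a single M function call (Source being the name of the previous step):

= Table.Transpose(Source)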

Pivot Column

Starting with the original dataset, we will now Pivot the dataset. Go to Transform ribbon, click on “Pivot Column”.

The Pivot Column dialog opens. As noted in the dialog, it will use the values in the Name column to create new columns, and will use the Score column as the Values Column.

This is what the data looks like after Pivot is applied as described above. Pivot Column converts the selected Name column values into column headers.
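
For reference, the M step generated for this pivot looks roughly like the following, using the Name and Score columns from our example (Source is the name of the previous step):

= Table.Pivot(Source, List.Distinct(Source[Name]), "Name", "Score")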

Unpivot Columns

Starting with the original dataset, we will now Unpivot the dataset. First select all 4 name columns by using “Ctrl + Click”.

Then, go to the Transform ribbon and click “Unpivot Columns”. This is what the data looks like after the Unpivot Columns transformation is applied: the selected columns (the name columns in this case) are converted into rows of an Attribute column, and their values are stored in the Value column.
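
In M, this corresponds to a single unpivot call. A minimal sketch, assuming the step name Source and four hypothetical column names standing in for the selected name columns (the UI may instead generate the equivalent Table.UnpivotOtherColumns, which lists the columns to keep):

= Table.Unpivot(Source, {"Name1", "Name2", "Name3", "Name4"}, "Attribute", "Value")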

Thanks for reading!