Iferror In Power Query Using Try Otherwise

In Excel we can use IFERROR to check if a calculation results in an error, and if it does, tell Excel to produce a different result instead of the error.

Power Query doesn’t have IFERROR, but it does have a way of checking for errors and replacing them with a default answer: try otherwise.

In this post I’ll show you how to use try otherwise to handle errors when loading data, how to handle errors in your transformations and how to handle errors when your query can’t locate a data source.

First up, let’s load data from this table.

I’ve already generated a couple of errors in this table, and of course I can see them here and could fix them before loading the data into Power Query.

But when using Power Query this isn’t always the situation. Your query will be loading data without knowing what it contains, so how would it handle these errors?

Let’s load the data into Power Query and call it Errors from Sheet

Straight away you can see the errors in the column.

Now of course you could use Remove Errors but that would remove the rows with the errors and that’s not what I want.

Or I could use Replace Errors, but this doesn’t give me any idea what the cause of the error is.

I want to see what caused the error and to do this I’ll add a Custom Column and use try [End]

This creates a new column with a Record in each row

In this record are two fields. HasError states whether or not there’s an error in the [End] column.

If there is an error, then the 2nd field is another record containing information about that error.

If there isn’t an error, then the 2nd field is the value from the [End] column.

If I expand the new column I get 3 new columns: the HasError value, which is Boolean, and either an Error or a Value.
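As a standalone illustration of these records (a generic sketch you can paste into a blank query, not the query built from the sample workbook), try produces the following shapes:

let
    // Succeeds: try returns [HasError = false, Value = 42]
    GoodResult = try Number.FromText("42"),
    // Fails: try returns [HasError = true, Error = [Reason, Message, Detail]]
    BadResult = try Number.FromText("abc")
in
    [Good = GoodResult, Bad = BadResult]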

Checking what’s in the Error records, you can see the Reason for the error, DataFormat.Error, which comes from Power Query.

There’s the Message, which is the error from the Excel sheet, and some errors give extra Detail, but not in this case.

If I expand this Error column I can see all of these fields.

I’ve ended up with a lot of extra columns here and it’s a bit messy, so let’s tidy it up. In fact, I’ll duplicate the query and show you another way to get the same information in a neater form.

The new query is called Errors from Sheet (Compact) and I’ve deleted all steps except the first two.

What I want to do is check for an error in the Try_End column, and if there is one, I want to see the error message from Excel.

If there isn’t an error I want the value from the [End] column.

I can do all of this in a new column using an if then else

Add a new Custom Column called Error or Value and enter this code
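Based on the explanation that follows, the custom column formula looks along these lines (a sketch using the column names from this query):

= if [Try_End][HasError]
  then [Try_End][Error][Message]
  else [Try_End][Value]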

What this is saying is:

If the boolean value [HasError] in the [Try_End] column is true then

return the [Message] in the [Error] record of the [Try_End] column

else return the [Value] from the [Try_End] column

With that written I can remove both the End and Try_End columns so the final table looks like this

Checking for Errors and Replacing Them With Default Values

In this scenario I don’t care what the error is or what caused it, I just want to make sure my calculations don’t fail.

I duplicate the original query again, calling this one Error in Calculation, and remove every step except the Source step

I add a new Custom column called Result and what I’ll do here is divide [Start] by [End]

This gives me an error, as I knew it would, in rows 1 and 3.

So to avoid this, edit the step and use try ... otherwise.
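The edited formula is a one-liner; a sketch using the column names from this example:

= try [Start] / [End] otherwise 0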

Now the errors are replaced with 0.

Errors Loading Data from A Data Source

I’ll create a new query and load from an Excel workbook

Navigating to the file I want, I load it,

and select this table.

I’m not going to do any transformations because I just want to show you how to deal with errors finding this source file.

I don’t have an X: drive so I know this will cause the workbook loading to fail.

So that’s what happens when the file can’t be found. Now let’s say I have a backup or alternate file that I want to load if my main file can’t be found.

Open the Advanced Editor again and then use try otherwise to specify the backup file’s location
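The Source step then looks roughly like this. The file paths below are placeholders for illustration only, not the actual paths used in this example:

let
    // Try the primary workbook first; if it can't be loaded, fall back to the backup copy
    Source = try Excel.Workbook(File.Contents("X:\Data\MainFile.xlsx"), null, true)
        otherwise Excel.Workbook(File.Contents("C:\Backup\BackupFile.xlsx"), null, true)
in
    Source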

Close the editor and now my backup file is loaded.

Dense Ranking In Power Query

There are several ways to rank things. Dense ranking is when items that compare equally receive the same ranking number, with subsequent items receiving the next ranking number and no gaps between the numbers. For example, scores of 100, 95, 95 and 90 receive dense ranks of 1, 2, 2 and 3.

Download the Excel Workbook and Queries in This Post

The queries in this Excel file can be copied/pasted into the Power BI Desktop Advanced Editor and will work there too.

I have some data in an Excel table for students who are studying Spanish or English. They’ve just taken exams and I’m going to use Power Query to create a dense ranking for the scores they received in those exams.

I’m going to use two queries to load the table of data. The first query called Scores just gives us the same 20 row table, sorted first by Course (ascending) and then by Score (descending).

The 2nd query is called Ranks and this is where most of the work is done. After loading the same source table, the first thing to do is remove all columns except Course and Score.

Then remove duplicates in the Score column

Next, Group By the Course column

This gives us a table with two rows, one for each course. Each row in the Count column contains a table holding all the scores for that course.

Now the best part: by adding a Custom Column that adds an Index Column to each item in the Count column (each item being a table of scores for that course), you end up with another table in each row of the new custom column (called Rank) that assigns a ranking (index) to every score for each course.
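The custom column formula that does this is short; a sketch, assuming the grouped column is called Count and the index column inside each nested table is named Dense Rank, starting at 1:

= Table.AddIndexColumn([Count], "Dense Rank", 1, 1)

Because Table.AddIndexColumn runs against the nested table in each row, every course gets its own 1, 2, 3, … sequence.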

Now by expanding the tables in the Rank column you end up with this, a table with each score in each course ranked.

There are only 13 rows in this table, but we have 20 rows in our source data, so we need to merge (join) the Scores and Ranks tables together into a new query.

The result is a table in every row of our new query that includes the rank for that combination of Course and Score.

Expanding the column of tables gives this

Can you see the problem? All the ranks are wrong.

So what is going on? I’m not 100% sure. I’ve read several blog posts and articles where similar issues are described, and I’ve seen this same kind of problem occur with sorting and removing duplicates.

My understanding is that Power Query presents one view of how data is stored, as in the end result of the Ranks query above, but it actually stores it in another way/order.

This does seem odd but the explanation I’ve seen given is that PQ uses lazy evaluation – it only really evaluates something when it is actually needed. So as you are going through building a query with various steps, the data you see in the preview isn’t necessarily the data you’re going to get when you run the query for real.

I’m not convinced that this is desirable behaviour, but the solution appears to be to use Table.Buffer. Table.Buffer takes a table and stores it in memory after evaluating it. This seems to be the key point.

As you add steps to your query, and as that query is run, each step is evaluated and the data in the step may be evaluated many times. What does it mean to evaluate? It means PQ checks the data to see what it is. But there appears to be no guarantee that the data is stored in an expected, ordered state.

You could sort a list but in a subsequent step that sorting is lost. Or as we have here, we’ve created a ranking that isn’t applied correctly, even though when you examine the table in the Ranks column, it shows you the correct rank.

What is really puzzling is that as the query is doing a merge, it is matching up two columns, the Course and the Score, so shouldn’t it follow that the Dense Rank value in that row in the Ranks table should be correct?

The fact that it isn’t would imply that the join isn’t working properly. If the join can attach the correct Course and Score from the Ranks table to the Scores table, why is the Dense rank value wrong?

Anyway, the fix is to wrap the Ranks table in Table.Buffer inside the join step.
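A sketch of the merge step with the buffer in place, using the query names from this post (the exact step in the downloadable workbook may be written slightly differently):

= Table.NestedJoin(
    Scores, {"Course", "Score"},
    Table.Buffer(Ranks), {"Course", "Score"},
    "Ranks", JoinKind.LeftOuter
)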

Buffering the table like this means the table is held in computer memory in a known state. The query evaluates Ranks once and then does not evaluate it again. The order of the elements in the table won’t change.

With Table.Buffer in place, the result of the join is now correct.

We might have to ask Microsoft what is actually going on here.

Function Query And Operators In The Query Editor

This tutorial will discuss the Function Query feature in the Query Editor. You’ll learn how to use and maximize function queries to get the results and data you desire. You’ll also understand how they work with operators to generate specific outcomes.

Next, open the Advanced Editor window and delete all its contents. If you want to construct a custom function, you have to start with a set of parentheses. Then, define a comma-separated list of parameters inside those parentheses. After that, input the goes-to sign (=>), which is the combination of the equals sign and the greater-than sign, followed by the function body.

In this example, the parameters are a and b, and the function body is a + b. Name the query Add2Values.
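For reference, the entire Add2Values query is just the function expression itself:

// Add2Values: a custom function that returns the sum of its two parameters
(a, b) => a + b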

This is what the Function Query looks like.

Beside the query name in the Query Pane, you can see the fx icon which indicates that it is a function query.

To invoke the function, enter a value for each parameter and press Invoke.

Pressing Invoke will create a new query called Invoked Function, which contains the result of the set parameters. In the formula bar, you’ll also see that it references the function query by name and assigns the values of the parameters.

To add values from different columns, you may also use the same Function Query. Create a new query and open the Advanced Editor window. Next, input the following code to create a small table.
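A small table like the one used here can be built with #table; the column names match the example, but the values below are purely illustrative:

let
    Source = #table(
        type table [Column1 = number, Column2 = number],
        {{1, 4}, {2, 5}, {3, 6}}
    )
in
    Source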

To invoke a custom function on each row of the table, you can go to the Add Column tab and select Invoke Custom Function.

In this example, the values are in Columns 1 and 2.

You can see that a new column has been added to the table. The values inside the column are the sum of the row values of Columns 1 and 2.

If you remove one of the arguments inside the formula, the values inside the new column will yield an Error. In this example, Column2 is removed from the formula.

By default, the custom function’s parameters are required, but M also allows us to create optional function parameters.

If you go back to the SumExample Table Query, you’ll see that the Error values in the last column turn to null values. Applying the operator to values that include a null will always return a null.
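A sketch of the function with an optional second parameter; when the second argument is omitted, b is null, and a + null evaluates to null, which is the behaviour described above:

// Calling Add2Values(5) returns null because 5 + null is null
(a as number, optional b as number) => a + b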

Another thing to be aware of is that Function Query accepts arguments of any type. This could potentially cause problems because you could pass a text value and raise another error. The addition operator can’t be applied to operands of that data type.

In the Advanced Editor window, you can type functions by adding the keyword as. Aside from typing the parameters, you can also assign a return type to the function after the parentheses.
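A minimal sketch of a fully typed version, with typed parameters and a return type after the closing parenthesis:

// Both parameters and the return value are ascribed the type number
(a as number, b as number) as number => a + b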

Adding too many arguments will also produce error values. If you input Columns 2 and 3 in the formula bar, the last column will show error values.

There is an M function that helps deal with a situation like this: Function.From. First, create a new blank query and input Function.From in the formula bar. You’ll then see the documentation for the function.

To demonstrate, duplicate the Add2Values Function Query and open the Advanced Editor window. Then, input Function.From at the beginning of the syntax.

Next, go back to the SumExample Table Query and change the Function Query to AddValues. You’ll see that the AddValues column now contains the sum of the row values in each row.

Even though only two parameters were declared in the function type, you can invoke the function with as many arguments as you want. This is because all arguments are merged into a single list before being passed to the function.
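A sketch of what the duplicated query might look like once wrapped in Function.From; the name AddValues follows the text above, and the inner function simply sums the list of arguments it receives:

// The declared type exposes two number parameters, but Function.From hands
// every supplied argument to the inner function as a single list
Function.From(
    type function (a as number, b as number) as number,
    (arguments) => List.Sum(arguments)
)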

How you name your parameters doesn’t matter.

If you’re writing a custom function within the Function.From and you need to reference an item, you have to use the positional index operator to access the item in the list.

Unary functions are functions that you see all the time. Many of the standard library functions take functions as arguments and those parameter functions are often unary. It means that the function takes just one single argument.

As an example, add a filter example query by creating a new blank query. Next, open the Advanced Editor window and input the following syntax.

Once done, you’ll see a table with CustomerID and Name columns in the preview pane. Name the query FilterExample.

Next, input the Table.SelectRows function and its arguments in the formula bar. The first and second arguments must be a table and a condition expressed as a function, respectively. In this example, the first argument is ChType and the second argument is a custom function that keeps the rows where the customer ID is greater than 2.

Another way is to use the each keyword, which is a shorthand for a unary function. It takes a single nameless variable as an argument and is represented by the underscore ( _ ). To demonstrate, open the Advanced Editor window and change the custom function.

Once you press Done, you can see that it generates the same results.

To improve the readability of the formula, you can omit the underscore when accessing fields or columns.

If you go back to the Advanced Editor window and remove the underscore in the custom function, it will still return the same results.

All the expressions are equal to one another. But from a readability and writing standpoint, the last version is definitely easier to understand. When creating this step through the user interface, the M engine uses the shorthand notation.
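Side by side, the equivalent ways of writing that filter step (using the ChType step name from this example) are:

// Explicit unary function
= Table.SelectRows(ChType, (row) => row[CustomerID] > 2)

// each with the underscore written out
= Table.SelectRows(ChType, each _[CustomerID] > 2)

// each with the underscore omitted (the shorthand the UI generates)
= Table.SelectRows(ChType, each [CustomerID] > 2)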

A Function Query uses functions to obtain data. Functions help bring out or gather specific information from a table or source, and you can use them to create data reports effectively and improve your data development skills.

Melissa

How To Use Port Query Tool (Portqry.exe) In Windows 11/10

Port Query (PortQry.exe) is a command-line utility in the Windows operating system that you can use to help troubleshoot TCP/IP connectivity issues. The tool reports the port status of TCP and UDP ports on a computer that you select. In this post, we will show you how to use the Port Query tool for network reconnaissance or forensic activity.

Port Query (PortQry.exe) tool in Windows 11/10

Windows has many tools for diagnosing problems in TCP/IP networks (ping, telnet, pathping, etc.), but not all of them allow you to conveniently check the status of, or scan, open network ports on a server. The PortQry.exe utility is a convenient tool to check the response of TCP/UDP ports on hosts to diagnose issues related to the operation of various network services and firewalls in TCP/IP networks. Most often, the PortQry utility is used as a more functional replacement for the telnet command, and unlike telnet, it also allows you to check open UDP ports.

Computer systems use TCP and UDP for most of their communication, and all versions of Windows open many ports that provide useful functionality such as file sharing and remote procedure call (RPC). However, malicious programs such as Trojan horses can use ports nefariously to open a back door for attackers into your computer system. Whether you need to troubleshoot a necessary network service or detect unwanted programs, you need to be able to understand and manage the traffic between computers on your network. A basic step toward doing so is determining which programs are listening on your computer systems’ network ports.

How to use Port Query Tool (PortQry.exe)

You can use Port Query both locally and remotely on a server. To use PortQry.exe, you will need to download the tool. Once you download it, extract the archive, then open Command Prompt and run the command below to go to the directory with the utility:

cd c:\PortQryV2

Alternatively, you can navigate to the folder where you downloaded the tool to, and press Alt + D key combo, type CMD and hit Enter to launch command prompt within the directory.

You can now proceed to use the tool.

Remotely use Port Query (PortQry.exe) tool

Port Query can scan remote systems, but it’s slow and unsophisticated compared with other port scanners. For example, unlike Nmap, PortQry.exe doesn’t let you perform scans that use specified packet flags (e.g., SYN, FIN).

For example, to check the availability of a DNS server from a client, you need to check if 53 TCP and UDP ports are open on it. The syntax of the port check command is as follows:

Where:

-n is the name or IP address of the server whose availability you are checking;

-e is the port number to be checked (from 1 to 65535);

-r is the range of ports to be checked (for example, 1:80);

-p is the protocol used for checking. It may be TCP, UDP or BOTH (TCP is used by default).

In our example, the command looks like this:

PortQry.exe -n 10.0.25.6 -p both -e 53

PortQry.exe can query a single port, an ordered list of ports, or a sequential range of ports. PortQry.exe reports the status of a TCP/IP port in one of the following three ways:

Listening:

A process is listening on the port on the computer that you selected. PortQry.exe received a response from the port.

Not Listening:

No process is listening on the target port on the target system. PortQry.exe received an Internet Control Message Protocol (ICMP) “Destination Unreachable – Port Unreachable” message back from the target UDP port. Or, if the target port is a TCP port, PortQry received a TCP acknowledgment packet with the Reset flag set.

Filtered:

The port on the computer that you selected is being filtered. PortQry.exe did not receive a response from the port. A process may or may not be listening on the port. By default, TCP ports are queried three times, and UDP ports are queried one time before a report indicates that the port is filtered.

Locally use Port Query (PortQry.exe) tool

What PortQry lacks in remote scanning features it makes up for with its unique local-machine capabilities. To enable local mode, run PortQry with the -local switch. When -local is the only switch used, PortQry enumerates all local port usage and port-to-PID mapping. Instead of sorting the data by open port, PortQry lists it according to PID, letting you quickly see which applications have open network connections.

To watch port 80, you’d run the command below:

portqry -local -wport 80

Using PortQryUI

It’s worth mentioning that Microsoft also made available a graphical front end to PortQry, called PortQryUI.

PortQryUI includes a version of PortQry.exe and some predefined services, which consist simply of groups of ports to scan.

PortQryUI contains several predefined sets of queries to check the availability of popular Microsoft services:

Domain and trusts (checking ADDS services on an Active Directory domain controller)

Exchange Server

SQL Server

Networking

IP Sec

Web Server

Net Meeting

The possible return codes in PortQryUI are:

0 (0x00000000) – the connection has been established successfully and the port is available.

1 (0x00000001) – the specified port is unavailable or filtered.

2 (0x00000002) – a normal return code when checking the availability of a UDP connection, since ACK response is not returned.

Hope this helps.

Power PivotTables & Power PivotCharts

When your data sets are big, you can use Excel Power Pivot, which can handle hundreds of millions of rows of data. The data can be in external data sources, and Excel Power Pivot builds a Data Model that works in a memory-optimized mode. You can perform calculations, analyze the data and produce a report to draw conclusions and make decisions. The report can be either a Power PivotTable or a Power PivotChart, or a combination of both.

You can utilize Power Pivot as an ad hoc reporting and analytics solution. Thus, a person with hands-on Excel experience can perform high-end data analysis and decision making in a matter of minutes, and the results are a great asset to include in dashboards.

Uses of Power Pivot

You can use Power Pivot for the following −

To perform powerful data analysis and create sophisticated Data Models.

To mash-up large volumes of data from several different sources quickly.

To perform information analysis and share the insights interactively.

To create Key Performance Indicators (KPIs).

To create Power PivotTables.

To create Power PivotCharts.

Differences between PivotTable and Power PivotTable

Power PivotTable resembles PivotTable in its layout, with the following differences −

PivotTable is based on Excel tables, whereas Power PivotTable is based on data tables that are part of Data Model.

PivotTable is based on a single Excel table or data range, whereas Power PivotTable can be based on multiple data tables, provided they are added to Data Model.

PivotTable is created from Excel window, whereas Power PivotTable is created from PowerPivot window.

Creating a Power PivotTable

Suppose you have two data tables – Salesperson and Sales in the Data Model. To create a Power PivotTable from these two data tables, proceed as follows −

As you can observe, the layout of the Power PivotTable is similar to that of PivotTable.

The PivotTable Fields List appears on the right side of the worksheet. Here, you will find some differences from PivotTable. The Power PivotTable Fields list has two tabs − ACTIVE and ALL, that appear below the title and above the fields list. ALL tab is highlighted. The ALL tab displays all the data tables in the Data Model and ACTIVE tab displays all the data tables that are chosen for the Power PivotTable at hand.

The corresponding fields with check boxes will appear.

Each table name will have the symbol on the left side.

If you place the cursor on this symbol, the Data Source and the Model Table Name of that data table will be displayed.

Drag Salesperson from Salesperson table to ROWS area.

The field Salesperson appears in the Power PivotTable and the table Salesperson appears under ACTIVE tab.

Both the tables – Sales and Salesperson appear under the ACTIVE tab.

Drag Month to COLUMNS area.

Drag Region to FILTERS area.

Power PivotTable can be modified dynamically to explore and report data.

Creating a Power PivotChart

A Power PivotChart is a PivotChart that is based on Data Model and created from the Power Pivot window. Though it has some features similar to Excel PivotChart, there are other features that make it more powerful.

Suppose you want to create a Power PivotChart based on the following Data Model.

As you can observe, all the tables in the data model are displayed in the PivotChart Fields list.

Drag the fields – Salesperson and Region to AXIS area.

Two field buttons for the two selected fields appear on the PivotChart. These are the Axis field buttons. The use of field buttons is to filter data that is displayed on the PivotChart.

Drag TotalSalesAmount from each of the 4 tables – East_Sales, North_Sales, South_Sales and West_Sales to ∑ VALUES area.

As you can observe, the following appear on the worksheet −

In the PivotChart, column chart is displayed by default.

In the LEGEND area, ∑ VALUES gets added.

The Values appear in the Legend in the PivotChart, with title Values.

The Value Field Buttons appear on the PivotChart.

You can remove the legend and the value field buttons for a tidier look of the PivotChart.

Deselect Legend in the Chart Elements.

The value field buttons on the chart will be hidden.

Note that display of Field Buttons and/or Legend depends on the context of the PivotChart. You need to decide what is required to be displayed.

As in the case of Power PivotTable, Power PivotChart Fields list also contains two tabs − ACTIVE and ALL. Further, there are 4 areas −

AXIS (Categories)

LEGEND (Series)

∑ VALUES

FILTERS

Table and Chart Combinations

Power Pivot provides you with different combinations of Power PivotTable and Power PivotChart for data exploration, visualization and reporting.

Consider the following Data Model in Power Pivot that we will use for illustrations −

You can have the following Table and Chart Combinations in Power Pivot.

Chart and Table (Horizontal) – you can create a Power PivotChart and a Power PivotTable, one next to another horizontally in the same worksheet.

Chart and Table (Vertical) – you can create a Power PivotChart and a Power PivotTable, one below another vertically in the same worksheet.

Hierarchies in Power Pivot

You can use Hierarchies in Power Pivot to make calculations and to drill up and drill down the nested data.

Consider the following Data Model for illustrations in this chapter.

You can create Hierarchies in the diagram view of the Data Model, but based on a single data table only.

The hierarchy field with the three selected fields as the child levels gets created.

Type a meaningful name, say, EventHierarchy.

You can create a Power PivotTable using the hierarchy that you created in the Data Model.

Create a Power PivotTable.

As you can observe, in the PivotTable Fields list, EventHierarchy appears as a field in Medals table. The other fields in the Medals table are collapsed and shown as More Fields.

The fields under EventHierarchy will be displayed. All the fields in the Medals table will be displayed under More Fields.

Add fields to the Power PivotTable as follows –

Drag EventHierarchy to ROWS area.

Drag Medal to ∑ VALUES area.

As you can observe, the values of Sport field appear in the Power PivotTable with a + sign in front of them. The medal count for each sport is displayed.

As you can observe, medal count is given for the Events, that get summed up at the parent level – DisciplineID, that get further summed up at the parent level – Sport.

Calculations Using Hierarchy in Power PivotTables

You can create calculations using a hierarchy in a Power PivotTable. For example, in the EventHierarchy, you can display the number of medals at a child level as a percentage of the number of medals at its parent level as follows –

Value Field Settings dialog box appears.

As you can observe, the child levels are displayed as the percentage of the Parent Totals. You can verify this by summing up the percentage values of the child level of a parent. The sum would be 100%.

Drilling Up and Drilling Down a Hierarchy

You can quickly drill up and drill down across the levels in a hierarchy in a Power PivotTable using Quick Explore tool.

EXPLORE box with Drill Up option appears. This is because from Event you can only drill up as there are no child levels under it.

EXPLORE box appears with Drill Up and Drill Down options displayed. This is because from Discipline you can drill up to Sport or drill down to Event levels.

This way you can quickly move up and down the hierarchy in a Power PivotTable.

Using a Common Slicer

You can insert Slicers and share them across the Power PivotTables and Power PivotCharts.

Create a Power PivotChart and Power PivotTable next to each other horizontally.

Drag Discipline from Disciplines table to AXIS area.

Drag Medal from Medals table to ∑ VALUES area.

Drag Discipline from Disciplines table to ROWS area.

Drag Medal from Medals table to ∑ VALUES area.

Insert Slicers dialog box appears.

Two Slicers – NOC_CountryRegion and Sport appear.

Arrange and size them to align properly next to the Power PivotTable as shown below.

The Power PivotTable gets filtered to the selected values.

As you can observe, the Power PivotChart is not filtered. To filter Power PivotChart with the same filters, you can use the same Slicers that you have used for the Power PivotTable.

Report Connections dialog box appears for the NOC_CountryRegion Slicer.

As you can observe, all the Power PivotTables and Power PivotCharts in the workbook are listed in the dialog box.

Repeat for Sport Slicer.

The Power PivotChart also gets filtered to the values selected in the two Slicers.

Next, you can add more detail to the Power PivotChart and Power PivotTable.

Drag Gender to LEGEND area.

Select Stacked Column in the Change Chart Type dialog box.

Drag Event to ROWS area.

Aesthetic Reports for Dashboards

You can create aesthetic reports with Power PivotTables and Power PivotCharts and include them in dashboards. As you have seen in the previous section, you can use Report Layout options to choose the look and feel of the reports. For example with the option – Show in Outline Form and with Banded Rows selected, you will get the report as shown below.

As you can observe, the field names appear in place of Row Labels and Column Labels and the report looks self-explanatory.

You can select the objects that you want to display in the final report in the Selection pane. For example, if you do not want to display the Slicers that you created and used, you can just hide them by deselecting them in the Selection pane.

Tables In Power BI: Types & Distinctions

I want to spend some time today talking about tables in Power BI. We run into tables all the time with every problem, but we don’t really spend a lot of time thinking about them alone. You can watch the full video of this tutorial at the bottom of this blog.

There was a post recently from Enterprise DNA member, Ashton, who always comes up with good, thought-provoking questions. His query got me thinking about a pattern that I see pretty regularly and want to address in this tutorial.

It wasn’t a mistake he made, but it is a common one. I often see people say they’re having trouble with a virtual table, for example, when it’s not a virtual table in the first place.

There’s a real distinction between the types of tables in Power BI.

The analogy I would give is if somebody said, “Hey, I’ve recently adopted a dog and I’m having trouble with the dog and wonder if you can help” and it’s not a dog, but a wolf. They’re both canines, but there are some pretty big distinctions that you want to take into account.

So while they’re all kind of the same family, just like the tables in Power BI, they are important distinctions that really affect the solution and how you use those tables. And so what I wanted to do was revisit Ashton’s question, and then delve a bit into the differences between the tables you find in Power BI and show how that influences the way you handle them.

The question was pretty simple, but there’s a little more to it than there initially appears. He had a simple data set that just says clients, the type of fruit they purchased, the quantity, and then just an index number.

He wanted to have a slicer with a multi-select capability so that if you selected say orange, it would pull up all the clients who bought oranges, but it would also show what else they purchased.

So, if we turn this selection (orange) off, we can see the full data set. We can also see that these are the two clients (Joe and Mary) who purchased oranges, but they each also had an additional purchase.

And so, the first thing we know about is that the Fruit slicer has got to be a disconnected slicer table. If it were a regular slicer, you’d hit orange and it would basically take out everything but orange in the table. We also know, because this is dynamic, we’ve got a virtual table issue.

So let’s delve into this a bit, but before we do, let’s take a look at the different types of tables that we find in Power BI.

There are three types of tables that we find regularly. The first one is the most common and it’s a physical table, and this is really your primary data. Whenever you do Get Data or you Enter Data directly through this option or in Power Query, you do a Reference or Duplicate, or you load data through a Blank Query, like a Date table, that’s a physical table.

A physical table is not fully dynamic, but it does have all these other characteristics that increase file size because it is physical data. It’s accessible in Power Query. Typically the relationships in the data model are built through physical relationships. They don’t have to be, but they usually are. They’re used for primary data and you visualize it through either the data view or through Power Query.

The one that often gets confused with the virtual table is this calculated table or what we can also refer to as a DAX expression table. This has a direct analogy to calculated columns that are created through DAX expressions. This is done through the Modeling tab and the New table option, and then you enter the DAX expression.

As you can see from the comparison table below, the calculated table is quite different in profile from either the physical table or the virtual table. It’s not fully dynamic.

It needs to be refreshed in order to pick up the new information. It does increase physical file size. Unlike a straight-up physical table, it’s not accessible in Power Query. Just like a calculated column doesn’t show up in Power Query, a calculated table doesn’t either. But it does possess the other aspects of a physical table.

Typically, calculated tables are used for supporting tables, and more commonly for debugging. There are now better tools like DAX Studio and Tabular Editor, so they’re not used as frequently for that. You view this table through the data view.

And frankly, like calculated columns, it’s generally something to stay away from. There are better ways of doing things than through calculated tables.

And then the analogy to measures is the true virtual table. These are created in the context of measures through DAX measure code. They are fully dynamic, unlike the other two types of tables. They don’t increase file size because they’re on-demand. Just like measures, virtual tables are calculated on demand in memory and are used only in the contexts where they’re needed.

They’re not accessible in Power Query. You can’t put a physical relationship on those. If you relate them in your data model, you do so using virtual relationships, most commonly with TREATAS.

Virtual tables are used for intermediate calculations in measures because a measure can’t return a table value; it has to return a scalar value. So you typically use them as the interim calculation from which you generate your scalars, which are the resulting product of your measure.

You can visualize these tables in tools like DAX Studio, Tabular Editor, or through New Table, which is again probably the least desirable way because it clutters up your data model with a bunch of extra tables unless you delete them afterwards.

Going back to Ashton’s question, by looking at the table, we can tell that it is dynamic. We can also do multiple choices and this changes fully on the fly. And so because of that, we know that it’s a virtual table because the other two (physical and calculated tables) are not fully dynamic. We also know that it’s a disconnected table, as I mentioned in the beginning.

I used the Rubber Duck concept to solve this. In a previous tutorial, I’ve talked about this concept of rubber ducking, which is developing a conceptual solution to the problem before you start digging into the specific DAX.

And so, for the rubber duck solution here, what I did was basically put together this disconnected slicer table. Then, I harvested the value of those slicers. Next, I came up with a virtual table that filtered clients by those who had purchased the selections in the disconnected table.

Then, I took that client list and filtered our original data set by those clients to come up with the clients and the purchases made by those clients, in addition to the slicer value.

Let’s take a look at the specific measure to see how this played out and some of the considerations you have in working with virtual tables.

The first thing was to develop two harvest measures for our values. The first one being the value of the disconnected slicer.

Since we had the multi-select in the requirement, it meant we couldn’t just use SELECTEDVALUE. So what I did is use the VALUES function to capture potentially one or more selections in that disconnected slicer. For clients, since we’re going to have one client on each row that we’re evaluating, we can use SELECTEDVALUE.

The next thing I did was this virtual table variable (VAR Buyers) that starts with CALCULATETABLE. Then, it goes into filtering the DISTINCT clients by whether or not they made a fruit purchase that was in our values of the disconnected slicer.

We can check this out by doing a DAX query. We can do that in either DAX Studio or in Tabular Editor. So let’s take our virtual table measure here, copy it over into Tabular Editor. We’ll make sure that it’s giving us the values that we expect to see, which would be Joe and Mary.

DAX queries always return tables and they always start with EVALUATE, so we add EVALUATE here. However, we’re not getting anything in the result. This is because the selected disconnected fruit (SelDisconnFruit) harvest variable has no awareness of the slicer. It’s out of context.

So the way to test that is kind of artificially setting that slicer value. We’ll replace SelDisconnFruit with a small table here. Type in Orange (the value of that slicer) between squiggly brackets. And now, we get the result here, which is Joe and Mary. With that, the virtual table is working just exactly as we thought it would.

So we can go back now to our measure expression. One of the challenging things about virtual tables is that they’re easy to create virtual tables within a measure. But you can’t return a virtual table as the result of a measure. So we need to return a scalar, but that captures the relevant aspect of the virtual table.

So here, we have the virtual table that tells us which clients purchased the fruits that were in the value slicer. We can set up another variable (VAR BuyerIn) that says, if the selected client was in that buyer’s table, then it gets a one. And if not, it gets a zero.

Then we take the results of that and put that into the filter pane. The Key Buyers measure, we say, is one and that’s going to be the clients that purchased, in this case, oranges.

And so by doing that, we filter our original data table down to the correct records. You can see that it now creates that virtual table and it does that filtering properly based on each selection.

That’s the bulk of what I wanted to cover today. It’s a fairly straightforward virtual table example, but with the focus on the difference between physical tables, calculated tables, and virtual tables.

There’s a lot of interesting issues to revisit with regard to virtual tables, particularly with debugging virtual table measures. That’s something I’ll be coming back to within the next few weeks, but for now, that’s all of this tutorial.

Cheers!

Brian
