The analysis of a model generates several metrics used to prioritize the issues reported by DAX Optimizer.
This section describes each metric. However, it is important to consider that DAX Optimizer performs a static analysis of the DAX expressions, producing an estimate of the cost that may not match the actual cost of each element.
Therefore, comparisons between metrics make sense only in relative terms. Do not make any assumptions about the absolute values reported in each metric. Moreover, to display the metrics more intuitively, DAX Optimizer automatically rescales the numbers to a meaningful number of digits.
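The exact rescaling rule is not documented; the following sketch shows one common way to round a metric to a fixed number of significant digits (the default of three digits and the rounding rule are assumptions for illustration only):

```python
from math import floor, log10

def rescale(value: float, digits: int = 3) -> float:
    """Round a metric to `digits` significant digits (hypothetical rule)."""
    if value == 0:
        return 0.0
    magnitude = floor(log10(abs(value)))      # order of magnitude of the value
    factor = 10.0 ** (magnitude - digits + 1)  # scale so `digits` digits remain
    return round(value / factor) * factor

# e.g. rescale(987654) keeps three significant digits: 988000.0
```

Because only the leading digits survive, two rescaled metrics can be compared at a glance without being misread as precise measurements, which is consistent with the relative-only interpretation above.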
The weight is the estimated impact of the measure on the overall performance of the model.
The weight value is normalized using a normalization factor; the tooltip on the number shows the actual value computed in the analysis.
The CPU Opt is the potential optimization (%) of the CPU cost of the measure. The estimate is based on the weight of the detected issues compared to the overall CPU cost of the measure.
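Based on that description, CPU Opt can be read as the share of the measure's CPU cost attributable to the detected issues. A minimal sketch, assuming the costs are simply summed (the function name and inputs are hypothetical):

```python
def cpu_opt(issue_costs: list[float], total_cpu_cost: float) -> float:
    """Hypothetical CPU Opt: percentage of the measure's total CPU cost
    that the detected issues account for."""
    return 100.0 * sum(issue_costs) / total_cpu_cost

# e.g. issues weighing 30 and 10 against a total cost of 200 -> 20.0 (%)
```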
The CPU Cost is the estimated CPU cost of the measure inclusive of referenced measures.
The value is normalized using a normalization factor reported in the tooltip on the column header. However, the absolute value is not important; use CPU Cost only for relative comparison with other measures.
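Because the CPU Cost is inclusive of referenced measures, it can be pictured as a recursive sum over the measure dependency graph. The sketch below assumes each reference contributes its full cost, which matches repeated evaluation but is an assumption about how DAX Optimizer aggregates (the measure names and cost values are invented):

```python
def total_cpu_cost(measure: str, own_cost: dict, references: dict) -> float:
    """Hypothetical inclusive cost: a measure's own cost plus the
    inclusive cost of every measure it references."""
    cost = own_cost[measure]
    for ref in references.get(measure, []):
        cost += total_cpu_cost(ref, own_cost, references)
    return cost

own_cost = {"Sales Amount": 5, "Margin": 3, "Margin %": 2}
references = {"Margin": ["Sales Amount"], "Margin %": ["Margin", "Sales Amount"]}
# "Margin %": 2 + (3 + 5) + 5 = 15
```

Note that a shared dependency such as "Sales Amount" is counted once per reference path in this sketch; whatever the real aggregation is, the resulting number is only meaningful relative to other measures, as stated above.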
The RAM Opt is the possible optimization (%) of the RAM cost of the measure.
The estimate is based on the estimated materialization of the issues detected compared to the overall materialization required by the measure.
The RAM Cost is the estimated RAM cost of the measure based on the maximum materialization expected.
The value is normalized using a normalization factor reported in the tooltip on the column header. However, the absolute value is not important; use RAM Cost only for relative comparison with other measures.
The Exec metric shows how many times the measure is executed in the model.
This number is obtained by estimating multiple executions of the measure when it is evaluated in an iterator. The estimation is not necessarily accurate, but it provides a good relative basis for comparing measures with one another.
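One way to picture this estimate: each place the measure is referenced contributes the product of the estimated row counts of the iterators enclosing it. This is a hypothetical reconstruction; the row counts and the multiplicative rule are assumptions:

```python
from math import prod

def estimated_executions(call_sites: list[list[int]]) -> int:
    """Hypothetical Exec estimate: sum, over all call sites of the measure,
    of the product of the enclosing iterators' estimated row counts."""
    return sum(prod(iterator_rows) for iterator_rows in call_sites)

# A measure referenced once at top level (no enclosing iterator) and once
# inside a SUMX over a table assumed to have 1,000 rows: 1 + 1000 = 1001
sites = [[], [1000]]
```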
The Dir Ref metric reports the number of measures that have a direct reference to the measure.
The Ind Ref metric reports the number of measures that depend on this measure but do not have a direct reference to it.
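The split between Dir Ref and Ind Ref can be reconstructed from the measure dependency graph: direct referrers mention the measure in their own expression, while indirect-only referrers reach it solely through other measures. A sketch under that reading (measure names are invented; the traversal is an assumption, not DAX Optimizer's actual algorithm):

```python
def direct_and_indirect_referrers(target: str, references: dict):
    """Hypothetical Dir Ref / Ind Ref: split the measures depending on
    `target` into direct referrers and indirect-only referrers."""
    direct = {m for m, refs in references.items() if target in refs}
    # Walk the reverse dependency graph to collect every dependent measure.
    all_dependents, frontier = set(), {target}
    while frontier:
        frontier = {m for m, refs in references.items()
                    if frontier & set(refs)} - all_dependents
        all_dependents |= frontier
    return direct, all_dependents - direct

references = {
    "Margin": ["Sales Amount"],
    "Margin %": ["Margin"],  # depends on Sales Amount only through Margin
}
direct, indirect = direct_and_indirect_referrers("Sales Amount", references)
# direct == {"Margin"}, indirect == {"Margin %"}
```

In this example, Sales Amount would show Dir Ref = 1 (Margin) and Ind Ref = 1 (Margin %).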