Open Access

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## 1 Introduction

Multi-Criteria Decision Analysis (MCDA) is a well-known method used for decision support in many fields (Triantaphyllou, 2000; Hiete et al., 2010) and can be part of tools for informing discussions, debates and deliberations (TIDDDs) (see Dubreuil et al., 2010). It has low computational complexity in comparison to other methods and is very generic, which is why it can be applied not only in crisis management but in many other fields.

The goal of a main class of MCDA methods is to establish a ranking on a set of alternatives, thus providing decision support by determining a comparable value A1, …, An for each alternative through integrating a set of predefined criteria C1, …, Cm. Since the criteria typically have different units and scales, the values C1, …, Cm have to be normalized before being combined into an overall result. For this purpose, normalization functions N1, …, Nm, such as min-max normalization, have to be defined for every single criterion (Vafaei et al., 2016). Trade-offs between criteria are represented through weights: the relative importance of each criterion is reflected in a specific normalized weight w1, …, wm. Using the normalized values and weights of the criteria, a ranking value is determined by an aggregation method. One of the most popular aggregation methods is the weighted sum, which for each alternative requires the following computation:

$$A_i = \sum_{j=1}^{m} w_j \cdot N_j(C_{i,j}), \qquad i = 1, \ldots, n$$

where $C_{i,j}$ denotes the value of criterion $j$ for alternative $i$.

The results A1, …, An are sorted according to their value. The top alternative, having the highest value, is assumed to be the most preferable one (assuming all criteria have to be maximised, which can be handled as part of the normalization). Figure 1 shows the matrix-like structure of this formula.
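As an illustration, the weighted-sum evaluation described above can be sketched as follows. The criterion values, weights, and the choice of min-max normalization are hypothetical, not taken from the CONFIDENCE tool:

```python
import numpy as np

# Hypothetical criterion values: rows = alternatives A1, A2; columns = criteria C1..C3.
values = np.array([[120.0, 3.5, 0.8],
                   [ 90.0, 4.2, 0.6]])

# Min-max normalization per criterion (assuming all criteria are to be maximised).
mins, maxs = values.min(axis=0), values.max(axis=0)
normalized = (values - mins) / (maxs - mins)

# Normalized weights w1..w3 reflecting the relative importance of the criteria.
weights = np.array([0.5, 0.3, 0.2])

# Weighted sum: one ranking value per alternative.
scores = normalized @ weights

# Ranking: index of the best alternative first.
ranking = np.argsort(scores)[::-1]
```

With these numbers the first alternative wins on criteria C1 and C3, the second on C2; the weighted sum combines these trade-offs into a single comparable value per alternative.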

The results can be presented in multiple ways, e.g. as charts, graphs, or textual reports, depending on the specific requirements of the decision makers.

The MCDA as described above processes deterministic parameters, yet many (if not all) decision-making scenarios are affected by uncertainties, which boils down to processing probabilistic parameters. The following section describes how this limitation can be overcome.

Fig. 1. Example of the matrix-like structure of MCDA for 2 alternatives and 3 criteria.

## 2 Probabilistic multi-criteria decision analysis

The MCDA is based on two basic parameter types: the values of criteria for each alternative and the weights of each criterion. Both types are deterministic in the default MCDA. As both values and weights can be affected by uncertainty, both have to be defined as probabilistic.

### 2.1 Defining uncertainties

For a practical application there are two possibilities to define probabilistic values and weights. If the characteristics of a criterion are well known, it is possible to provide a distribution function with the corresponding parameters for each criterion-alternative pair, for example a normal distribution with mean and standard deviation as parameters. If such specific knowledge is not available but numerical measurements are on hand, the actual distribution can be estimated by means of a histogram. Both representations allow for random sampling of values, which is required for the ensemble evaluation (see example below). Figure 2 shows an example of the parametrization of a criterion value as a normal distribution function, as implemented in the MCDA tool for CONFIDENCE.
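Both representations can be sketched as follows. The function names and parameter values are illustrative, not those of the CONFIDENCE tool; the histogram sampler picks a bin proportionally to its count and then draws uniformly within it:

```python
import numpy as np

rng = np.random.default_rng(42)

# Case 1: the criterion's characteristics are well known -> parametric
# distribution, e.g. a normal distribution with mean and standard deviation.
def sample_normal(mean, std, size):
    return rng.normal(mean, std, size)

# Case 2: only numerical measurements are on hand -> estimate the
# distribution with a histogram and sample from it.
def sample_histogram(measurements, size, bins=10):
    counts, edges = np.histogram(measurements, bins=bins)
    # Choose a bin proportionally to its count, then draw uniformly within it.
    idx = rng.choice(len(counts), size=size, p=counts / counts.sum())
    return rng.uniform(edges[idx], edges[idx + 1])

samples = sample_normal(5.0, 1.0, 10_000)
measurements = rng.normal(5.0, 1.0, 500)       # stand-in for real measurements
hist_samples = sample_histogram(measurements, 10_000)
```

Either sampler yields the stream of random values that the ensemble evaluation in the next section consumes.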

Fig. 2. Defining a criterion value for an alternative as a normal distribution.

### 2.2 Ensembles

As the probabilistic MCDA cannot be evaluated directly, an ensemble of deterministic MCDA instances is generated by random sampling of the probabilistic MCDA. The members of the ensemble are evaluated one by one and the results are combined into a single result, which inherently contains uncertainty. As the number of criterion-alternative pairs together with weights can easily become large, the number of generated ensemble members should be chosen sufficiently large to account for this. Fortunately, the evaluation of one MCDA instance is very simple, so even evaluating hundreds of thousands of ensemble members is a matter of seconds.
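The ensemble generation and evaluation can be sketched as follows. The distribution parameters are hypothetical, the criterion values are assumed to be already normalized, and sampled weights are renormalized so each member's weights sum to one:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members = 100_000          # ensemble size
n_alt, n_crit = 2, 3

# Hypothetical uncertain inputs: a normal distribution per
# criterion-alternative pair and per criterion weight.
value_mean = np.array([[0.7, 0.4, 0.9],
                       [0.5, 0.8, 0.3]])
value_std = np.full((n_alt, n_crit), 0.1)
weight_mean = np.array([0.5, 0.3, 0.2])
weight_std = np.full(n_crit, 0.05)

# Random sampling: one deterministic MCDA instance per ensemble member.
values = rng.normal(value_mean, value_std, (n_members, n_alt, n_crit))
weights = rng.normal(weight_mean, weight_std, (n_members, n_crit))
weights /= weights.sum(axis=1, keepdims=True)   # renormalize sampled weights

# Evaluate all members at once: weighted sum per member and alternative.
scores = np.einsum('kij,kj->ki', values, weights)   # shape (n_members, n_alt)
```

Evaluating all members as one vectorized operation is what makes even hundreds of thousands of ensemble members a matter of seconds.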

### 2.3 Aggregating ensemble results

As MCDA provides a value for each alternative, which is used to establish a ranking on the set of alternatives, a natural approach to aggregating the ensemble results is to determine statistical attributes like mean, median, quantiles, minimum, maximum, etc. For practical reasons, as the ensemble may be huge, applying streaming methods like moving average, moving median, and moving quantiles is recommendable. Even if not always exact, these methods significantly reduce the amount of data that has to be stored during the evaluation.
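A minimal sketch of such streaming aggregation, updating running statistics member by member so the full ensemble never has to be stored (the class is illustrative; it uses Welford's algorithm for mean and variance, while exact moving medians and quantiles would need additional machinery such as the P² algorithm):

```python
# Running statistics for one alternative's ranking values, updated per member.
class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0              # sum of squared deviations (Welford)
        self.min = float('inf')
        self.max = float('-inf')

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)   # numerically stable update
        self.min = min(self.min, x)
        self.max = max(self.max, x)

    @property
    def std(self):
        return (self.m2 / self.n) ** 0.5 if self.n else 0.0

stats = RunningStats()
for x in [0.6, 0.7, 0.8]:          # stand-in for a stream of ensemble results
    stats.update(x)
```

The memory footprint stays constant regardless of ensemble size, at the cost of the mentioned inexactness for order statistics.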

However, displaying these attributes as a box plot shows that they are not very suitable for decision support, as the example in Figure 3 illustrates. While the means generally appear different, the boxes frequently overlap and it is not clear which alternative would be better. Instead of aggregating the ranking values, aggregating the actual ranking might be preferred, as shown in the following.

Several methods to aggregate on the ranking itself are possible, for example determining rank winners by counting how many times an alternative ranked first, how many times it ranked second, etc., or determining outranking by counting how often an alternative ranked better than another one. Bar charts or bubble charts are suitable for visualizing these results. Figures 4 and 5 show examples of such charts as implemented in the MCDA tool for CONFIDENCE. These charts show much more clearly than the box charts that the alternative “LW+Relocation” dominates the others. The shortcoming of this second approach is that it does not show the actual performance of the different alternatives. For this reason, the use of both types of information may be most helpful.
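Both counting schemes can be sketched as follows, assuming an array of ensemble scores where higher is better (the numbers are hypothetical):

```python
import numpy as np

# Hypothetical ensemble results: shape (members, alternatives), higher is better.
scores = np.array([[0.7, 0.3, 0.5],
                   [0.6, 0.4, 0.7],
                   [0.8, 0.2, 0.6]])
n_members, n_alt = scores.shape

# Rank winners: how many times each alternative took 1st, 2nd, ... place.
order = np.argsort(-scores, axis=1)             # per member, best first
rank_counts = np.zeros((n_alt, n_alt), dtype=int)
for place in range(n_alt):
    np.add.at(rank_counts, (order[:, place], place), 1)

# Outranking: how often alternative i scored better than alternative j.
outranking = (scores[:, :, None] > scores[:, None, :]).sum(axis=0)
```

`rank_counts[i, p]` counts how often alternative `i` finished in place `p`, which maps directly onto a bar chart; `outranking[i, j]` gives the pairwise counts shown in a bubble chart.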

Fig. 3. A box chart showing the statistical results of an ensemble evaluation.
Fig. 4. Chart showing the rank winning results of an ensemble evaluation.
Fig. 5. Chart showing the outranking results of an ensemble evaluation.

## 3 Evaluation

The enhanced MCDA tool, capable of dealing with uncertainties, was presented in several workshops during the CONFIDENCE project, where the stakeholders provided suggestions to further improve the visualization of inputs and results (Müller et al., 2019). The suggestions were followed and re-evaluated in the subsequent workshops. The evaluations were performed by means of moderated exercises using different scenarios and involving stakeholders with different backgrounds who may be involved in decision making during a real nuclear emergency. The scenarios, the data used for evaluation, and the evaluation results from the final workshop of CONFIDENCE are presented in other articles of this special issue (Charnock et al., 2020; Duranova et al., 2020a, 2020b).

## 4 Conclusion

Within the framework of the CONFIDENCE project an existing MCDA tool was enhanced to cope with uncertainties in the input parameters. For this purpose, probabilistic MCDA and ensemble evaluations were developed. The uncertainties were defined as distribution functions or histograms and graphically displayed to stakeholders. The results were communicated as charts and textual reports. The method was presented in stakeholder workshops, where the stakeholders assessed the MCDA tool to be helpful in decision making under some boundary conditions. These may imply, for example, that there should be enough time to discuss preferences (which should be the case in the transition phase), and that the tool be operated by trained personnel.

## Acknowledgement

The work described in this paper was conducted within the CONFIDENCE project which was part of the CONCERT project. This project has received funding from the Euratom research and training programme 2014–2018 under grant agreement No. 662287.

Disclaimer (Art. 29.5 GA). This publication reflects only the author’s view. Responsibility for the information and views expressed therein lies entirely with the authors. The European Commission is not responsible for any use that may be made of the information it contains.

## References

• Charnock TW, Andersson K, Trueba C, Montero M. 2020. Uncertainties confronting stakeholders and decision-makers in planning intervention in urban and agricultural scenarios in the transition phase of a radiological emergency. Radioprotection 55(HS1). https://doi.org/10.1051/radiopro/2020021. [Google Scholar]
• Dubreuil GH, Baudé S, Lochard J, Ollagnon H, Liland A. 2010. The EURANOS cooperative framework for preparedness and management strategies of the long-term consequences of a radiological event. Radioprotection 45(5): S199–S213. [Google Scholar]
• Duranova T, Raskob W, Beresford NA, Korsakissok I, Montero M, Müller T, Turcanu C, Woda C. 2020a. CONFIDENCE dissemination meeting: Summary on the scenario based workshop. Radioprotection 55(HS1). https://doi.org/10.1051/radiopro/2020009. [Google Scholar]
• Duranova T, van Asselt E, Müller T, Bohunova J, Twenhöfel CJW, Smetsers RCGM. 2020b. MCDA stakeholder workshops. Radioprotection 55(HS1). https://doi.org/10.1051/radiopro/2020032. [Google Scholar]
• Hiete M, Bertsch V, Comes T, Schultmann F, Raskob W. 2010. Evaluation strategies for nuclear and radiological emergency and post-accident management. Radioprotection 45(5): S133–S147. [CrossRef] [EDP Sciences] [Google Scholar]
• Müller T, Duranova T, van Asselt E, Twenhöfel CJW, French S, Andersson KG, Haywood S, Oughton D, Smith JQ, Turcanu C. 2019. Report from stakeholder panels and workshops related to the application of the methods and tools developed in ST9.1.6. CONCERT Deliverable D9.36. Available from https://www.concert-h2020.eu/en/Publications. [Google Scholar]
• Triantaphyllou E. 2000. Multi-criteria decision making: A comparative study. Dordrecht, The Netherlands: Kluwer Academic Publishers (now Springer), 320 p. ISBN 978-0-7923-6607-2. [CrossRef] [Google Scholar]
• Vafaei N, Ribeiro R, Camarinha-Matos L. 2016. Normalization techniques for multi-criteria decision making: Analytical hierarchy process case study, 470 p. https://doi.org/10.1007/978-3-319-31165-4_26. [Google Scholar]

