How do different evaluation methods affect outcomes in procurement?
Master thesis
Permanent link: https://hdl.handle.net/11250/2678795
Issue date: 2020
Abstract
This thesis uses simulation and regression analysis to investigate how different evaluation methods affect outcomes in procurement. To simulate the data, we wrote our own algorithm in R (developed in RStudio) to answer our research questions. The algorithm can easily be adapted by others who want to simulate similar data or run simulations under other assumptions and parameters.
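As a purely illustrative sketch (not the thesis's actual algorithm, and with made-up parameter values), a simulation of this kind might draw a random price and a panel quality score for each bidder:

```r
# Minimal sketch of a tender simulation; all parameter values are
# hypothetical. Each bidder gets a price drawn around an expected
# cost and a quality score on a 0-10 scale.
set.seed(42)

simulate_tenders <- function(n_bidders = 5,
                             mean_price = 100,
                             sd_price   = 10) {
  data.frame(
    bidder  = seq_len(n_bidders),
    price   = rnorm(n_bidders, mean = mean_price, sd = sd_price),
    quality = runif(n_bidders, min = 0, max = 10)
  )
}

tenders <- simulate_tenders()
tenders
```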
Most procurement in Norway involves evaluating tenders on both price and quality. Price is evaluated using scoring rules, while quality aspects are evaluated by expert panels and, in some cases, adjusted through normalisation. Investigating scoring rules first, we find that the relative scoring rules recommended by the Norwegian Digitalisation Agency (NDA), which are also the most commonly used in practice, have serious drawbacks, suggesting that they are not the most suitable. In addition, we know from previous literature that these rules are unpredictable for bidders. In this thesis, we therefore provide additional insight, showing that these relative scoring rules also give quality relatively less weight than price during evaluation.
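For concreteness, the sketch below implements one common relative (price-dependent) scoring rule, proportional scoring against the lowest bid; the exact formulas discussed in the thesis and recommended by the NDA may differ. Because each bid's price score depends on the other bids, the effective weight of quality can drift away from the stated weights:

```r
# Illustrative relative (price-dependent) scoring rule; the exact
# NDA formulas may differ in detail. The cheapest bid receives the
# maximum price score; other bids score in proportion to it.
relative_price_score <- function(prices, max_score = 10) {
  max_score * min(prices) / prices
}

prices  <- c(90, 100, 120)   # hypothetical bids
quality <- c(6, 8, 9)        # hypothetical panel scores on a 0-10 scale

# Weighted total with nominal 50/50 weights; since the price score
# depends on the other bids, quality's effective weight varies.
total <- 0.5 * relative_price_score(prices) + 0.5 * quality
total
```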
Finally, we show that normalisation has adverse effects on outcomes in procurement. The NDA recommends that procurers adjust, or normalise, the quality scores assigned by expert panels. In this thesis, we show that normalisation changes the relative weight of quality in a tender evaluation, leading to arbitrary and unpredictable outcomes. Normalisation can be avoided if expert panels are instead asked to evaluate quality aspects relatively.
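As a loose illustration of the normalisation effect described above (the NDA's actual adjustment procedure may differ), the sketch below stretches hypothetical panel scores so that the best tender receives the maximum score. The rescaling widens the gaps between tenders, and so changes quality's effective weight even though the stated weights are unchanged:

```r
# Sketch of one common normalisation: stretch panel scores so the
# best tender gets the maximum score. The NDA's exact adjustment
# may differ; the scores here are hypothetical.
panel_scores <- c(5, 6, 7)                      # raw panel scores (0-10)
normalised   <- panel_scores * 10 / max(panel_scores)

# Raw gaps are 1 point; normalised gaps grow to about 1.43 points,
# inflating quality's effective weight relative to price.
diff(panel_scores)   # 1 1
diff(normalised)     # 1.428571 1.428571
```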