
How do different evaluation methods affect outcomes in procurement?

Benonisen, Monica; Strand, Marianne
Master thesis
Open
masterthesis.pdf (1.515Mb)
Permanent link
https://hdl.handle.net/11250/2678795
Publication date
2020
Collections
  • Master Thesis [4656]
Abstract
This thesis uses simulation and regression analysis to investigate how different evaluation methods affect outcomes in procurement. To simulate the data, we have written our own algorithm in R Studio to answer our proposed questions. This algorithm can easily be adapted by others who want to simulate similar data or run simulations with other assumptions and parameters. Most procurement in Norway involves evaluating tenders on both price and quality aspects. Price is evaluated using scoring rules, while quality aspects are evaluated by expert panels and, in some cases, adjusted through normalisation. By first investigating scoring rules, we find that the relative scoring rules recommended by the Norwegian Digitalisation Agency (NDA), and the most commonly used in practice, have serious drawbacks, suggesting that they are not the most suitable. In addition, we know from previous literature that these rules are unpredictable for bidders. In this thesis, we therefore provide additional insights, showing that these relative scoring rules also weigh quality relatively less than price during evaluation. Finally, we prove that normalisation has adverse effects on procurement outcomes. The NDA recommends that procurers adjust, or normalise, the quality scores assigned by expert panels. In this thesis, we show that normalisation changes the relative weight of quality in a tender evaluation, leading to arbitrary and unpredictable outcomes. By instead recommending that expert panels evaluate quality aspects relatively, normalisation can be avoided.
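
The abstract does not reproduce the scoring formulas, but the mechanisms it refers to can be illustrated. The sketch below, in R to match the thesis's tooling, assumes one common relative price-scoring rule (points proportional to the lowest bid price) and a simple rescaling of expert-panel quality scores against the best score received; the bid data, weights, and variable names are illustrative only, and the actual rules and parameters analysed in the thesis may differ.

# Minimal sketch (assumptions): a proportional relative price-scoring rule and a
# rescaling ("normalisation") of quality scores. Illustrative only.
set.seed(1)

n_bidders <- 5
price   <- runif(n_bidders, min = 80, max = 120)   # simulated bid prices
quality <- runif(n_bidders, min = 4,  max = 9)     # expert-panel quality scores (0-10 scale)

max_points <- 10
w_price    <- 0.4                                  # stated weight on price
w_quality  <- 0.6                                  # stated weight on quality

# Relative price scoring: the lowest price gets full points, others are scored
# proportionally against it, so the achieved spread depends on the bids received.
price_score <- max_points * min(price) / price

# "Normalisation" of quality: rescale so the best bidder gets full points.
quality_norm <- max_points * quality / max(quality)

total_raw  <- w_price * price_score + w_quality * quality        # without normalisation
total_norm <- w_price * price_score + w_quality * quality_norm   # with normalisation

data.frame(price        = round(price, 1),
           price_score  = round(price_score, 2),
           quality      = round(quality, 2),
           quality_norm = round(quality_norm, 2),
           rank_raw     = rank(-total_raw),
           rank_norm    = rank(-total_norm))

Because both the relative price score and the rescaled quality score depend on the other bids received, the effective trade-off between quality and price is not fixed by the stated weights, and rankings can change once quality scores are normalised. This is the kind of unpredictability the abstract describes.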
