
How do different evaluation methods affect outcomes in procurement?

Benonisen, Monica; Strand, Marianne
Master thesis
View/Open: masterthesis.pdf (1.515Mb)
URI: https://hdl.handle.net/11250/2678795
Date: 2020
Collections: Master Thesis [4656]
Abstract
This thesis uses simulation and regression analysis to investigate how different evaluation methods affect outcomes in procurement. To simulate the data, we wrote our own algorithm in RStudio to answer our proposed questions. This algorithm can easily be adapted by others who want to simulate similar data or run simulations with other assumptions and parameters. Most procurement in Norway involves evaluating tenders on both price and quality. Price is evaluated using scoring rules, while quality aspects are evaluated by expert panels and, in some cases, adjusted through normalisation. By first investigating scoring rules, we find that the relative scoring rules recommended by the Norwegian Digitalisation Agency (NDA), which are also the most commonly used in practice, have serious drawbacks, suggesting that they are not the most suitable. In addition, we know from previous literature that these rules are unpredictable for bidders. In this thesis, we therefore provide additional insight, showing that these relative scoring rules also give quality relatively less weight than price during evaluation. Finally, we show that normalisation has adverse effects on procurement outcomes. The NDA recommends that procurers adjust, or normalise, the quality scores assigned by expert panels. We show that normalisation changes the relative weight of quality in a tender evaluation, leading to arbitrary and unpredictable outcomes. Normalisation can be avoided by instead recommending that expert panels evaluate quality aspects relatively.
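The abstract's point about relative scoring rules and normalisation can be made concrete with a small sketch. This is not the thesis's RStudio algorithm; it is a hypothetical Python illustration using a common relative (proportional) price rule and a simple rescaling normalisation, both chosen as assumptions for the example. It shows why a score that depends on the other bids is unpredictable, and how normalising quality scores rescales the gap between offers.

```python
# Hypothetical illustration, NOT the thesis's R code.
# A *relative* scoring rule makes each bid's score depend on the other bids.

def proportional_price_score(prices, max_points=10):
    """Proportional rule (one common relative rule): lowest bid gets full points,
    others get points in proportion to lowest_price / price."""
    lowest = min(prices)
    return [max_points * lowest / p for p in prices]

def normalise(scores, max_points=10):
    """Stretch quality scores so the best offer receives the full score."""
    best = max(scores)
    return [max_points * s / best for s in scores]

# Two bidders: their price scores depend only on each other.
print(proportional_price_score([100, 125]))      # [10.0, 8.0]

# A third, cheaper bid changes the scores of the first two bidders
# even though their own offers are unchanged -- the unpredictability
# of relative rules that the abstract refers to.
print(proportional_price_score([100, 125, 80]))  # [8.0, 6.4, 10.0]

# Normalising expert-panel quality scores rescales the gap between
# offers, which shifts the effective weight of quality in the total.
print(normalise([6, 4]))
```

In the two-bidder case the price gap is worth 2 points; once the third bid arrives, the same two offers are suddenly only 1.6 points apart, so the effective weight of price in the total evaluation has changed without either bidder doing anything.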

Contact Us | Send Feedback

Privacy policy
DSpace software copyright © 2002-2019 DuraSpace

Service from Unit