Recommender systems evaluation is usually based on predictive accuracy metrics, with better scores meaning recommendations of higher quality. However, the comparison of results is becoming increasingly difficult, since there are different recommendation frameworks and different settings in the design and implementation of the experiments. Furthermore, there might be minor differences in algorithm implementation among the different frameworks. In this paper, we compare well-known recommendation algorithms, using the same dataset, metrics and overall settings, and the results point to differences across frameworks even with the exact same settings. Hence, we propose the use of standards that should be followed as guidelines to ensure the replication of experiments and the reproducibility of the results.
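As a minimal sketch of the kind of predictive accuracy metric referred to above, the snippet below computes RMSE over held-out ratings. The data and helper name are hypothetical, not from the paper; frameworks may differ in exactly such implementation details (e.g. how items the model cannot score are handled), which is one source of the result differences discussed here.

```python
# Minimal sketch (hypothetical data): RMSE over held-out ratings.
# Small implementation choices, such as how unscorable items are treated,
# can already change the reported value across frameworks.
import numpy as np

def rmse(true_ratings, predicted_ratings):
    """Root mean squared error between true and predicted ratings."""
    true_ratings = np.asarray(true_ratings, dtype=float)
    predicted_ratings = np.asarray(predicted_ratings, dtype=float)
    return np.sqrt(np.mean((true_ratings - predicted_ratings) ** 2))

# Hypothetical held-out ratings and the predictions of some recommender.
y_true = [4.0, 3.5, 5.0, 2.0]
y_pred = [3.8, 3.0, 4.6, 2.5]
print(f"RMSE = {rmse(y_true, y_pred):.3f}")
```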
| Name | IFIP Advances in Information and Communication Technology |
| Conference | 14th International Conference on Artificial Intelligence Applications and Innovations |
| Period | 22/05/18 → … |
This is a post-peer-review, pre-copyedit version of an article published in IFIP Advances in Information and Communication Technology. The final authenticated version is available online at: http://dx.doi.org/10.1007/978-3-319-92007-8_34
- Recommender systems