
Official code for "DaisyRec 2.0: Benchmarking Recommendation for Rigorous Evaluation" (TPAMI 2022) and "Are We Evaluating Rigorously? Benchmarking Recommendation for Reproducible Evaluation and Fair Comparison" (RecSys 2020)


Overview

daisyRec is a Python toolkit developed for benchmarking the top-N recommendation task. The name DAISY stands for multi-Dimension fAir comparIson for recommender SYstem.

The figure below shows the overall framework of DaisyRec-v2.0.

This repository is used for publishing. If you are interested in the details of our experiments' ranking results, please refer to this repo file.

We really appreciate these repositories for helping us improve the code efficiency:

How to Run

Make sure you have a CUDA environment for acceleration, since the deep-learning models can make use of it.
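A quick, optional way to check whether such an environment is visible, assuming a PyTorch-based setup (the check itself does not touch daisy's code):

# Optional check: confirm that a CUDA device is visible to PyTorch.
# This assumes the deep-learning models run on PyTorch; adjust if your setup differs.
import torch

if torch.cuda.is_available():
    print(f"CUDA is available: {torch.cuda.get_device_name(0)}")
else:
    print("No CUDA device found; deep-learning models may run much more slowly on CPU.")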

1. Install from pip

pip install daisyRec
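An optional sanity check that the pip install succeeded; it only queries pip's package metadata, so it does not rely on any daisy API:

# Optional: verify that the daisyRec distribution is installed.
from importlib.metadata import PackageNotFoundError, version

try:
    print("daisyRec", version("daisyRec"))
except PackageNotFoundError:
    print("daisyRec is not installed; run `pip install daisyRec` first.")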

2. Clone from github

git clone https://github.com/AmazingDD/daisyRec.git && cd daisyRec
  • Example codes are listed in run_examples; refer to them to find out how to use daisy. You can also run these codes by moving them into daisyRec/.

  • The GUI Command Generator for test.py and tune.py, which can help you quickly write arguments and run the fair-comparison experiments, is now available here.

    The generated commands will look like this:

    python tune.py --param1=20 --param2=30 ...
    python test.py --param1=20 --param2=30 ...

    We highly recommend you run the code with our GUI first!
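If you prefer scripting over the GUI, the same tune-then-test workflow can be driven from Python with subprocess. The --param1/--param2 names below are just the placeholders shown above; substitute the real arguments the generator emits for you.

import subprocess

# Placeholder arguments copied from the generated commands above; replace them
# with the actual options produced by the GUI command generator.
args = ["--param1=20", "--param2=30"]

# Tune hyper-parameters first, then run the fair-comparison test,
# mirroring the two commands shown above.
subprocess.run(["python", "tune.py", *args], check=True)
subprocess.run(["python", "test.py", *args], check=True)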

Documentation

The documentation of DaisyRec is available here; it provides detailed explanations for all arguments.

Implemented Algorithms

Models in daisyRec only take triples <user, item, rating> into account, so FM-related models will be specialized accordingly. Below are the algorithms implemented in daisyRec; more baselines will be added later.

Model      | Publication
MostPop    | A re-visit of the popularity baseline in recommender systems
ItemKNN    | Item-based top-N recommendation algorithms
EASE       | Embarrassingly Shallow Autoencoders for Sparse Data
PureSVD    | Top-n recommender system via matrix completion
SLIM       | SLIM: Sparse Linear Methods for Top-N Recommender Systems
MF         | Matrix factorization techniques for recommender systems
FM         | Factorization Machines
NeuMF      | Neural Collaborative Filtering
NFM        | Neural Factorization Machines for Sparse Predictive Analytics
NGCF       | Neural Graph Collaborative Filtering
Multi-VAE  | Variational Autoencoders for Collaborative Filtering
Item2Vec   | Item2vec: neural item embedding for collaborative filtering
LightGCN   | LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation
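To make the triple format mentioned above concrete, here is a minimal, purely illustrative sketch of what such interaction records look like; daisy's own data loader may use different names and structures.

# Hypothetical illustration of the <user, item, rating> triple format;
# this is not daisy's internal representation.
interactions = [
    # (user_id, item_id, rating)
    (0, 42, 5.0),
    (0, 17, 3.0),
    (1, 42, 4.0),
]

# FM-related models, which normally consume richer feature vectors,
# are specialized to operate on just these three fields.
for user, item, rating in interactions:
    print(f"user={user} item={item} rating={rating}")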

Datasets

You can download the experiment data and put it into the data folder. All data are available via the links below:
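Once a downloaded file is in place under data/, it can be inspected with pandas, for example. The file name, separator, and column layout below are assumptions for illustration, not something fixed by daisy.

from pathlib import Path

import pandas as pd

# Hypothetical example: the actual file name and separator depend on which
# dataset you downloaded; adjust them to match.
ratings_path = Path("data") / "ratings.csv"  # assumed file name

if ratings_path.exists():
    df = pd.read_csv(ratings_path, names=["user", "item", "rating"])
    print(df.head())
else:
    print(f"Place the downloaded dataset under {ratings_path.parent}/ first.")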

Cite

Please cite both of the following papers if you use DaisyRec in a research paper in any way (e.g., code and ranking results):

@inproceedings{sun2020are,
  title={Are We Evaluating Rigorously? Benchmarking Recommendation for Reproducible Evaluation and Fair Comparison},
  author={Sun, Zhu and Yu, Di and Fang, Hui and Yang, Jie and Qu, Xinghua and Zhang, Jie and Geng, Cong},
  booktitle={Proceedings of the 14th ACM Conference on Recommender Systems},
  year={2020}
}

@article{sun2022daisyrec,
  title={DaisyRec 2.0: Benchmarking Recommendation for Rigorous Evaluation},
  author={Sun, Zhu and Fang, Hui and Yang, Jie and Qu, Xinghua and Liu, Hongyang and Yu, Di and Ong, Yew-Soon and Zhang, Jie},
  journal={arXiv preprint arXiv:2206.10848},
  year={2022}
}
