PyTorch (1.6+) implementation of https://github.com/kang205/SASRec

pmixer/SASRec.pytorch

update on 05/23/2025: thanks to Wentworth1028 and Tiny-Snow, we have a LayerNorm update for higher NDCG & HR, and here's the doc 👍.

update on 04/13/2025: in https://arxiv.org/html/2504.09596v1, I listed ideas worth trying that I haven't gotten to yet due to my limited bandwidth in spare time.

Please feel free to run these experiments and have fun, and please consider citing the article if it helps in your recsys exploration:

@article{huang2025revisiting_sasrec,
  title={Revisiting Self-Attentive Sequential Recommendation},
  author={Huang, Zan},
  journal={CoRR},
  volume={abs/2504.09596},
  url={https://arxiv.org/abs/2504.09596},
  eprinttype={arXiv},
  eprint={2504.09596},
  year={2025}
}

or this shorter bib:

@article{huang2025revisiting,
  title={Revisiting Self-Attentive Sequential Recommendation},
  author={Huang, Zan},
  journal={arXiv preprint arXiv:2504.09596},
  year={2025}
}

The paper source is in the latex folder.

For questions or collaborations, please create a new issue in this repo or drop me an email using the shared address.


Modified from the paper author's TensorFlow implementation, switched to PyTorch (v1.6) for simplicity, and fixed issues like positional embedding usage, making the model harder to overfit (that said, in recsys, personalization = overfitting sometimes).

The code is in the python folder.
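To illustrate the positional-embedding fix mentioned above, here is a minimal, hypothetical sketch of one SASRec-style block in PyTorch: learned positional embeddings are added to the item embeddings before causal self-attention. All names and hyperparameters below (`TinySASRecBlock`, `hidden=50`, etc.) are illustrative assumptions, not the repo's actual code; see the python folder for the real model.

```python
import torch
import torch.nn as nn

class TinySASRecBlock(nn.Module):
    """Hypothetical minimal sketch of one SASRec-style block:
    item embedding + learned positional embedding, causal
    self-attention, then a point-wise feed-forward layer."""
    def __init__(self, num_items, maxlen=200, hidden=50, heads=1, dropout=0.2):
        super().__init__()
        self.item_emb = nn.Embedding(num_items + 1, hidden, padding_idx=0)
        self.pos_emb = nn.Embedding(maxlen, hidden)  # learned positions
        self.attn = nn.MultiheadAttention(hidden, heads,
                                          dropout=dropout, batch_first=True)
        self.norm = nn.LayerNorm(hidden)
        self.ffn = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))

    def forward(self, seq):  # seq: (batch, maxlen) of item ids, 0 = padding
        L = seq.size(1)
        pos = torch.arange(L, device=seq.device)
        x = self.item_emb(seq) + self.pos_emb(pos)  # add positional embedding
        # causal mask: position i may only attend to positions <= i
        causal = torch.triu(torch.ones(L, L, dtype=torch.bool,
                                       device=seq.device), diagonal=1)
        h, _ = self.attn(x, x, x, attn_mask=causal)
        return self.ffn(self.norm(x + h))  # residual + LayerNorm + FFN

model = TinySASRecBlock(num_items=3416, maxlen=200)  # ml-1m-ish item count
out = model(torch.randint(1, 3417, (4, 200)))
print(out.shape)  # (batch, maxlen, hidden)
```

Note `batch_first=True` on `nn.MultiheadAttention` requires a newer PyTorch than 1.6; the sketch targets current releases.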

to train:

python main.py --dataset=ml-1m --train_dir=default --maxlen=200 --dropout_rate=0.2 --device=cuda

for inference only:

python main.py --device=cuda --dataset=ml-1m --train_dir=default --state_dict_path=[YOUR_CKPT_PATH] --inference_only=true --maxlen=200

Output varies slightly between runs because negative samples are drawn randomly; here's my output for two consecutive runs:

1st run - test (NDCG@10: 0.5897, HR@10: 0.8190)
2nd run - test (NDCG@10: 0.5918, HR@10: 0.8225)
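The run-to-run variance comes from the sampled-metrics protocol used in the SASRec paper: each held-out item is ranked against a fresh random draw of negatives, so HR@10 and NDCG@10 shift slightly per run. A minimal sketch of that protocol (the `eval_one_user` helper and the popularity scorer are hypothetical, for illustration only):

```python
import math
import random

def eval_one_user(true_item, all_items, score_fn, num_neg=100, k=10, rng=random):
    """Sampled-metrics protocol: rank the held-out item against
    num_neg randomly drawn negatives, then read off HR@k / NDCG@k."""
    negatives = rng.sample([i for i in all_items if i != true_item], num_neg)
    candidates = [true_item] + negatives
    ranked = sorted(candidates, key=score_fn, reverse=True)
    rank = ranked.index(true_item)  # 0-based rank of the true item
    hr = 1.0 if rank < k else 0.0
    ndcg = 1.0 / math.log2(rank + 2) if rank < k else 0.0
    return hr, ndcg

# toy scorer: hypothetical popularity-based scores, just for illustration
pop = {i: 1.0 / (i + 1) for i in range(1000)}
# two runs with different random negatives can yield different metrics
print(eval_one_user(5, range(1000), pop.get, rng=random.Random(1)))
print(eval_one_user(5, range(1000), pop.get, rng=random.Random(2)))
```

Averaging these per-user numbers over all test users gives the NDCG@10/HR@10 figures above; fixing the RNG seed would make runs reproducible at the cost of hiding this sampling noise.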

Please check the paper author's repo for a detailed intro and a more complete README, and here's the paper bib FYI :)

@inproceedings{kang2018self,
  title={Self-attentive sequential recommendation},
  author={Kang, Wang-Cheng and McAuley, Julian},
  booktitle={2018 IEEE International Conference on Data Mining (ICDM)},
  pages={197--206},
  year={2018},
  organization={IEEE}
}

I've seen a dozen citations of this repo 🫰; please use the bib below if needed.

@misc{Huang_SASRec_pytorch,
  author = {Huang, Zan},
  title = {{SASRec.pytorch}},
  url = {https://github.com/pmixer/SASRec.pytorch},
  howpublished = {\url{https://github.com/pmixer/SASRec.pytorch}},
  year = {2020}
}
