# nlg-evaluation
Here are 5 public repositories matching this topic...
[COLING22] An End-to-End Library for Evaluating Natural Language Generation
visualization · python · natural-language-processing · metrics · pytorch · language-models · natural-language-generation · nlg-evaluation
- Updated Dec 18, 2023 - Python
The implementation for the EMNLP 2023 paper "Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators"
- Updated Jan 22, 2024 - Python
PyTorch code for the ACL 2022 paper "RoMe: A Robust Metric for Evaluating Natural Language Generation" (https://aclanthology.org/2022.acl-long.387/)
nlp · tree · deep-learning · ted · edit · natural-language-generation · nlg · evaluation-metrics · distance-calculation · nlg-dataset · earth-movers-distance · tree-edit-distance · nlg-evaluation
- Updated Aug 13, 2023 - Python
💵 Code for "Less is More for Long Document Summary Evaluation by LLMs" (Wu*, Iso*, et al.; EACL 2024)
- Updated Feb 22, 2024 - Python
- Updated Apr 22, 2023 - Jupyter Notebook