An Open Toolkit for Knowledge Graph Extraction and Construction published at EMNLP2022 System Demonstrations

frozenuncle/DeepKE

English | 简体中文

A Deep Learning Based Knowledge Extraction Toolkit
for Knowledge Graph Construction

DeepKE is a knowledge extraction toolkit for knowledge graph construction, supporting cnSchema, low-resource, document-level and multimodal scenarios for entity, relation and attribute extraction. We provide documents, Google Colab tutorials, an online demo, a paper, slides and a poster for beginners.



What's New

Previous News

Prediction Demo

There is a demonstration of prediction below. The GIF file was created with Terminalizer. Get the code.


Model Framework

  • DeepKE contains a unified framework for named entity recognition, relation extraction and attribute extraction, the three knowledge extraction functions.
  • Each task can be implemented in different scenarios. For example, we can achieve relation extraction in standard, low-resource (few-shot), document-level and multimodal settings.
  • Each application scenario comprises three components: Data (including Tokenizer, Preprocessor and Loader), Model (including Module, Encoder and Forwarder), and Core (including Training, Evaluation and Prediction).
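The three-component split above can be sketched as a toy pipeline. All class and method names below are illustrative stand-ins, not DeepKE's actual API:

```python
# Toy pipeline mirroring DeepKE's Data / Model / Core split.
# Every name here is hypothetical; see the real source for the actual API.

class Data:
    """Stand-in for Tokenizer + Preprocessor + Loader."""
    def load(self, sentences):
        # naive whitespace "tokenization" in place of a real tokenizer
        return [s.split() for s in sentences]

class Model:
    """Stand-in for Module + Encoder + Forwarder."""
    def forward(self, batch):
        # fake "encoding": map each token to its character length
        return [[len(tok) for tok in sent] for sent in batch]

class Core:
    """Stand-in for the Training / Evaluation / Prediction driver."""
    def __init__(self, data, model):
        self.data, self.model = data, model
    def predict(self, sentences):
        return self.model.forward(self.data.load(sentences))

core = Core(Data(), Model())
print(core.predict(["DeepKE extracts knowledge"]))  # → [[6, 8, 9]]
```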

Quick Start

DeepKE-LLM

In the era of large models, DeepKE-LLM uses a completely new set of environment dependencies.

conda create -n deepke-llm python=3.9
conda activate deepke-llm
cd example/llm
pip install -r requirements.txt

Please note that the requirements.txt file is located in the example/llm folder.

DeepKE

DeepKE supports pip install deepke.
Take fully supervised relation extraction as an example.

Step1 Download the basic code

git clone --depth 1 https://github.com/zjunlp/DeepKE.git

Step2 Create a virtual environment using Anaconda and enter it.

conda create -n deepke python=3.8
conda activate deepke
  1. Install DeepKE with source code (recommended)

    python setup.py install
    python setup.py develop
  2. Install DeepKE with pip

    pip install deepke

Step3 Enter the task directory

cd DeepKE/example/re/standard

Step4 Download the dataset, or follow the annotation instructions to obtain data

wget 120.27.214.45/Data/re/standard/data.tar.gz
tar -xzvf data.tar.gz

Many data formats are supported; details are given in each part.

Step5 Training (parameters for training can be changed in the conf folder)

We support visual parameter tuning with wandb.

python run.py

Step6 Prediction (parameters for prediction can be changed in the conf folder)

Modify the path of the trained model in predict.yaml. The absolute path of the model must be used, such as xxx/checkpoints/2019-12-03_17-35-30/cnn_epoch21.pth.

python predict.py
  • ❗NOTE: if you encounter any errors, please refer to the Tips or submit a GitHub issue.

Requirements

DeepKE-LLM

python == 3.9

  • torch==1.13.0
  • accelerate==0.17.1
  • transformers==4.28.1
  • bitsandbytes==0.37.2
  • peft==0.2.0
  • gradio
  • datasets
  • sentencepiece
  • fire

DeepKE

python == 3.8

  • torch == 1.5
  • hydra-core == 1.0.6
  • tensorboard == 2.4.1
  • matplotlib == 3.4.1
  • transformers == 3.4.0
  • jieba == 0.42.1
  • scikit-learn == 0.24.1
  • seqeval == 1.2.2
  • tqdm == 4.60.0
  • opt-einsum==3.3.0
  • wandb==0.12.7
  • ujson
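Since exact pins matter (see Tips), a small helper can report missing or mismatched packages before running anything. This is a sketch, not part of DeepKE; extend PINNED with the full list above:

```python
# Report packages whose installed version differs from the pinned one.
from importlib.metadata import PackageNotFoundError, version

PINNED = {"torch": "1.5", "hydra-core": "1.0.6", "transformers": "3.4.0"}

def check_versions(pinned):
    """Return {package: installed_version_or_None} for every mismatch."""
    mismatches = {}
    for pkg, wanted in pinned.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            installed = None  # not installed at all
        if installed != wanted:
            mismatches[pkg] = installed
    return mismatches

# An empty dict means the environment matches the pins exactly.
print(check_versions(PINNED))
```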

Introduction of Three Functions

1. Named Entity Recognition

  • Named entity recognition seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, etc.

  • The data is stored in .txt files. Some instances are as follows (users can label data with the tools Doccano or MarkTool, or use Weak Supervision with DeepKE to obtain data automatically):

    | Sentence | Person | Location | Organization |
    | --- | --- | --- | --- |
    | 本报北京9月4日讯记者杨涌报道:部分省区人民日报宣传发行工作座谈会9月3日在4日在京举行。 | 杨涌 | 北京 | 人民日报 |
    | 《红楼梦》由王扶林导演,周汝昌、王蒙、周岭等多位专家参与制作。 | 王扶林,周汝昌,王蒙,周岭 | | |
    | 秦始皇兵马俑位于陕西省西安市,是世界八大奇迹之一。 | 秦始皇 | 陕西省,西安市 | |
  • Read the detailed process in the specific README

    • STANDARD (Fully Supervised)

      We support LLMs and provide the off-the-shelf model DeepKE-cnSchema-NER, which extracts entities in cnSchema without training.

      Step1 Enter DeepKE/example/ner/standard. Download the dataset.

      wget 120.27.214.45/Data/ner/standard/data.tar.gz
      tar -xzvf data.tar.gz

      Step2 Training

      The dataset and parameters can be customized in the data folder and the conf folder respectively.

      python run.py

      Step3 Prediction

      python predict.py
    • FEW-SHOT

      Step1 Enter DeepKE/example/ner/few-shot. Download the dataset.

      wget 120.27.214.45/Data/ner/few_shot/data.tar.gz
      tar -xzvf data.tar.gz

      Step2 Training in the low-resource setting

      The directory where the model is loaded and saved, as well as the configuration parameters, can be customized in the conf folder.

      python run.py +train=few_shot

      Users can modify load_path in conf/train/few_shot.yaml to use an existing trained model.

      Step3 Add - predict to conf/config.yaml, modify load_path to the model path and write_path to the path where the predicted results are saved in conf/predict.yaml, and then run

      python predict.py
    • MULTIMODAL

      Step1 Enter DeepKE/example/ner/multimodal. Download the dataset.

      wget 120.27.214.45/Data/ner/multimodal/data.tar.gz
      tar -xzvf data.tar.gz

      We use RCNN-detected objects and visual grounding objects from the original images as local visual information, where the RCNN is implemented via faster_rcnn and visual grounding via onestage_grounding.

      Step2 Training in the multimodal setting

      • The dataset and parameters can be customized in the data folder and the conf folder respectively.
      • To start from the model trained last time, modify load_path in conf/train.yaml to the path where that model was saved. The path for logs generated during training can be customized via log_dir.
      python run.py

      Step3 Prediction

      python predict.py
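As a closing sanity check for the NER data format shown earlier, every annotated entity string should actually occur in its sentence. A minimal sketch (not DeepKE's own loader):

```python
# Return the annotated entities that do NOT occur in the sentence.
def check_instance(sentence, entities):
    """entities maps an entity type to a list of entity strings."""
    return [e for ents in entities.values() for e in ents if e not in sentence]

inst = {
    "sentence": "秦始皇兵马俑位于陕西省西安市,是世界八大奇迹之一。",
    "entities": {"Person": ["秦始皇"], "Location": ["陕西省", "西安市"]},
}
print(check_instance(inst["sentence"], inst["entities"]))  # → [] (consistent)
```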

2. Relation Extraction

  • Relation extraction is the task of extracting semantic relations between entities from unstructured text.

  • The data is stored in .csv files. Some instances are as follows (users can label data with the tools Doccano or MarkTool, or use Weak Supervision with DeepKE to obtain data automatically):

    | Sentence | Relation | Head | Head_offset | Tail | Tail_offset |
    | --- | --- | --- | --- | --- | --- |
    | 《岳父也是爹》是王军执导的电视剧,由马恩然、范明主演。 | 导演 | 岳父也是爹 | 1 | 王军 | 8 |
    | 《九玄珠》是在纵横中文网连载的一部小说,作者是龙马。 | 连载网站 | 九玄珠 | 1 | 纵横中文网 | 7 |
    | 提起杭州的美景,西湖总是第一个映入脑海的词语。 | 所在城市 | 西湖 | 8 | 杭州 | 2 |
  • ❗NOTE: If there are multiple entity types for one relation, entity types can be prefixed to the relation as inputs.

  • Read the detailed process in the specific README

    • STANDARD (Fully Supervised)

      We support LLMs and provide the off-the-shelf model DeepKE-cnSchema-RE, which extracts relations in cnSchema without training.

      Step1 Enter the DeepKE/example/re/standard folder. Download the dataset.

      wget 120.27.214.45/Data/re/standard/data.tar.gz
      tar -xzvf data.tar.gz

      Step2 Training

      The dataset and parameters can be customized in the data folder and the conf folder respectively.

      python run.py

      Step3 Prediction

      python predict.py
    • FEW-SHOT

      Step1 Enter DeepKE/example/re/few-shot. Download the dataset.

      wget 120.27.214.45/Data/re/few_shot/data.tar.gz
      tar -xzvf data.tar.gz

      Step2 Training

      • The dataset and parameters can be customized in the data folder and the conf folder respectively.
      • To start from the model trained last time, modify train_from_saved_model in conf/train.yaml to the path where that model was saved. The path for logs generated during training can be customized via log_dir.
      python run.py

      Step3 Prediction

      python predict.py
    • DOCUMENT

      Step1 Enter DeepKE/example/re/document. Download the dataset.

      wget 120.27.214.45/Data/re/document/data.tar.gz
      tar -xzvf data.tar.gz

      Step2 Training

      • The dataset and parameters can be customized in the data folder and the conf folder respectively.
      • To start from the model trained last time, modify train_from_saved_model in conf/train.yaml to the path where that model was saved. The path for logs generated during training can be customized via log_dir.
      python run.py

      Step3 Prediction

      python predict.py
    • MULTIMODAL

      Step1 Enter DeepKE/example/re/multimodal. Download the dataset.

      wget 120.27.214.45/Data/re/multimodal/data.tar.gz
      tar -xzvf data.tar.gz

      We use RCNN-detected objects and visual grounding objects from the original images as local visual information, where the RCNN is implemented via faster_rcnn and visual grounding via onestage_grounding.

      Step2 Training

      • The dataset and parameters can be customized in the data folder and the conf folder respectively.
      • To start from the model trained last time, modify load_path in conf/train.yaml to the path where that model was saved. The path for logs generated during training can be customized via log_dir.
      python run.py

      Step3 Prediction

      python predict.py
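Head_offset and Tail_offset in the .csv instances above are character indices into the sentence, so rows can be validated with plain slicing. A sketch, not DeepKE's own preprocessing:

```python
# Check that head/tail substrings match their recorded character offsets.
def check_offsets(row):
    s = row["sentence"]
    head = s[row["head_offset"]:row["head_offset"] + len(row["head"])]
    tail = s[row["tail_offset"]:row["tail_offset"] + len(row["tail"])]
    return head == row["head"] and tail == row["tail"]

row = {
    "sentence": "《岳父也是爹》是王军执导的电视剧,由马恩然、范明主演。",
    "relation": "导演",
    "head": "岳父也是爹", "head_offset": 1,
    "tail": "王军", "tail_offset": 8,
}
print(check_offsets(row))  # → True
```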

3. Attribute Extraction

  • Attribute extraction extracts attributes for entities from unstructured text.

  • The data is stored in .csv files. Some instances are as follows:

    | Sentence | Att | Ent | Ent_offset | Val | Val_offset |
    | --- | --- | --- | --- | --- | --- |
    | 张冬梅,女,汉族,1968年2月生,河南淇县人 | 民族 | 张冬梅 | 0 | 汉族 | 6 |
    | 诸葛亮,字孔明,三国时期杰出的军事家、文学家、发明家。 | 朝代 | 诸葛亮 | 0 | 三国时期 | 8 |
    | 2014年10月1日许鞍华执导的电影《黄金时代》上映 | 上映时间 | 黄金时代 | 19 | 2014年10月1日 | 0 |
  • Read the detailed process in the specific README

    • STANDARD (Fully Supervised)

      Step1 Enter the DeepKE/example/ae/standard folder. Download the dataset.

      wget 120.27.214.45/Data/ae/standard/data.tar.gz
      tar -xzvf data.tar.gz

      Step2 Training

      The dataset and parameters can be customized in the data folder and the conf folder respectively.

      python run.py

      Step3 Prediction

      python predict.py
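The Ent_offset and Val_offset columns work the same way as the relation-extraction offsets: character indices into the sentence. An illustrative consistency check:

```python
# Check that entity/value substrings match their recorded character offsets.
def check_attr_row(row):
    s = row["sentence"]
    ent = s[row["ent_offset"]:row["ent_offset"] + len(row["ent"])]
    val = s[row["val_offset"]:row["val_offset"] + len(row["val"])]
    return ent == row["ent"] and val == row["val"]

row = {
    "sentence": "张冬梅,女,汉族,1968年2月生,河南淇县人",
    "att": "民族",
    "ent": "张冬梅", "ent_offset": 0,
    "val": "汉族", "val_offset": 6,
}
print(check_attr_row(row))  # → True
```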

4. Event Extraction

  • Event extraction is the task of extracting event types, trigger words and event arguments from unstructured text.
  • The data is stored in .tsv files. Some instances are as follows:

    | Sentence | Event type | Trigger | Role | Argument |
    | --- | --- | --- | --- | --- |
    | 据《欧洲时报》报道,当地时间27日,法国巴黎卢浮宫博物馆员工因不满工作条件恶化而罢工,导致该博物馆也因此闭门谢客一天。 | 组织行为-罢工 | 罢工 | 罢工人员 | 法国巴黎卢浮宫博物馆员工 |
    | | | | 时间 | 当地时间27日 |
    | | | | 所属组织 | 法国巴黎卢浮宫博物馆 |
    | 中国外运2019年上半年归母净利润增长17%:收购了少数股东股权 | 财经/交易-出售/收购 | 收购 | 出售方 | 少数股东 |
    | | | | 收购方 | 中国外运 |
    | | | | 交易物 | 股权 |
    | 美国亚特兰大航展13日发生一起表演机坠机事故,飞行员弹射出舱并安全着陆,事故没有造成人员伤亡。 | 灾害/意外-坠机 | 坠机 | 时间 | 13日 |
    | | | | 地点 | 美国亚特兰 |
  • Read the detailed process in the specific README

    • STANDARD (Fully Supervised)

      Step1 Enter the DeepKE/example/ee/standard folder. Download the dataset.

      wget 120.27.214.45/Data/ee/DuEE.zip
      unzip DuEE.zip

      Step2 Training

      The dataset and parameters can be customized in thedata folder andconf folder respectively.

      python run.py

      Step3 Prediction

      python predict.py
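In the .tsv instances above, one sentence can carry several role/argument rows for a single trigger; downstream it is convenient to group them into one nested event record. The structure below is hypothetical, not DeepKE's exact output schema:

```python
# Group flat (role, argument) pairs into one nested event record.
def group_event(sentence, event_type, trigger, role_args):
    return {
        "sentence": sentence,
        "event_type": event_type,
        "trigger": trigger,
        "arguments": [{"role": r, "argument": a} for r, a in role_args],
    }

evt = group_event(
    "中国外运2019年上半年归母净利润增长17%:收购了少数股东股权",
    "财经/交易-出售/收购",
    "收购",
    [("出售方", "少数股东"), ("收购方", "中国外运"), ("交易物", "股权")],
)
print(len(evt["arguments"]))  # → 3
```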

Notebook Tutorial

This toolkit provides many Jupyter Notebook and Google Colab tutorials. Users can study DeepKE with them.


Tips

1. Using the nearest mirror, THU in China, will speed up the installation of Anaconda; aliyun in China will speed up pip install XXX.

2. When encountering ModuleNotFoundError: No module named 'past', run pip install future.

3. Installing pretrained language models online is slow. We recommend downloading pretrained models before use and saving them in the pretrained folder. Read the README.md in every task directory to check the specific requirements for saving pretrained models.

4. The old version of DeepKE is in the deepke-v1.0 branch. Users can switch branches to use the old version. The old version has been fully migrated to the standard relation extraction (example/re/standard).

5. It's recommended to install DeepKE from source, because users may encounter problems with pip on Windows, and modifications to the source code will not take effect; see the issue.

6. More related low-resource knowledge extraction works can be found in Knowledge Extraction in Low-Resource Scenarios: Survey and Perspective.

7. Make sure to install the exact versions of the requirements in requirements.txt.

To do

In the next version, we plan to release a multimodal LLM for KE.

Meanwhile, we will offer long-term maintenance to fix bugs, solve issues and meet new requests. If you have any problems, please open an issue.

Reading Materials

Data-Efficient Knowledge Graph Construction, 高效知识图谱构建 (Tutorial on CCKS 2022) [slides]

Efficient and Robust Knowledge Graph Construction (Tutorial on AACL-IJCNLP 2022) [slides]

PromptKG Family: a Gallery of Prompt Learning & KG-related Research Works, Toolkits, and Paper-list [Resources]

Knowledge Extraction in Low-Resource Scenarios: Survey and Perspective [Survey][Paper-list]

Related Toolkit

Doccano, MarkTool, LabelStudio: Data Annotation Toolkits

LambdaKG: A library and benchmark for PLM-based KG embeddings

EasyInstruct: An easy-to-use framework to instruct Large Language Models


Citation

Please cite our paper if you use DeepKE in your work.

@inproceedings{EMNLP2022_Demo_DeepKE,
  author    = {Ningyu Zhang and
               Xin Xu and
               Liankuan Tao and
               Haiyang Yu and
               Hongbin Ye and
               Shuofei Qiao and
               Xin Xie and
               Xiang Chen and
               Zhoubo Li and
               Lei Li},
  editor    = {Wanxiang Che and
               Ekaterina Shutova},
  title     = {DeepKE: {A} Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population},
  booktitle = {{EMNLP} (Demos)},
  pages     = {98--108},
  publisher = {Association for Computational Linguistics},
  year      = {2022},
  url       = {https://aclanthology.org/2022.emnlp-demos.10}
}

Contributors (Determined by the roll of the dice)

Zhejiang University: Ningyu Zhang, Liankuan Tao, Xin Xu, Honghao Gui, Xiaohan Wang, Zekun Xi, Xinrong Li, Haiyang Yu, Hongbin Ye, Shuofei Qiao, Peng Wang, Yuqi Zhu, Xin Xie, Xiang Chen, Zhoubo Li, Lei Li, Xiaozhuan Liang, Yunzhi Yao, Jing Chen, Shumin Deng, Wen Zhang, Guozhou Zheng, Huajun Chen

Community Contributors: thredreams, eltociear

Alibaba Group: Feiyu Xiong, Qiang Chen

DAMO Academy: Zhenru Zhang, Chuanqi Tan, Fei Huang

Intern: Ziwen Xu, Rui Huang, Xiaolong Weng

Other Knowledge Extraction Open-Source Projects
