CN110473595A - A capsule network relation extraction model combining the shortest dependency path - Google Patents

A capsule network relation extraction model combining the shortest dependency path
Download PDF

Info

Publication number
CN110473595A
CN110473595A (application CN201910600327.5A)
Authority
CN
China
Prior art keywords: capsule, layer, vector, information, capsule network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910600327.5A
Other languages
Chinese (zh)
Inventor
琚生根
孙界平
刘宁宁
熊熙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN201910600327.5A
Publication of CN110473595A
Legal status: Pending

Abstract

The present invention relates to the technical field of drug research and development and discloses a capsule network relation extraction model combining the shortest dependency path, addressing the problems that existing drug relation extraction models extract poorly from long sentences and lose high-level feature information. The proposed model comprises an embedding layer, a BiLSTM layer, a capsule network layer and an output layer. The embedding layer contains an original-sentence embedding module and a shortest-dependency-path embedding module; the input of the original-sentence embedding module includes word information, position information and drug type information, while the input of the shortest-dependency-path embedding module is the shortest dependency path between two drug entities obtained with the Stanford parser. The BiLSTM layer consists of two LSTMs that capture the forward and backward information of the sentence respectively. The invention uses the dynamic routing mechanism of the capsule network to dynamically determine how much information low-level capsules transmit to high-level capsules, avoiding the loss of high-level feature information and thereby improving extraction performance.

Description

Translated from Chinese
A Capsule Network Relation Extraction Model Combining the Shortest Dependency Path

Technical Field

The present invention relates to the technical field of drug research and development, and in particular to a capsule network relation extraction model combining the shortest dependency path.

Background Art

Drug interaction refers to effects such as inhibition or promotion that drugs exert on one another. Current drug relation extraction models extract poorly from long sentences and lose high-level feature information; a capsule network relation extraction model combining the shortest dependency path is therefore needed to solve these problems.

Summary of the Invention

The present invention proposes a capsule network relation extraction model combining the shortest dependency path, which solves the problem that existing drug relation extraction models extract poorly from long sentences.

To achieve the above object, the present invention adopts the following technical solution:

A capsule network relation extraction model combining the shortest dependency path, comprising:

An embedding layer, which converts the input text sentence into vector form through an original-sentence embedding module and a shortest-dependency-path embedding module to obtain the sentence embedding vectors;

A BiLSTM layer, in which two BiLSTMs obtain the vector representation $h_1$ of the original sentence and the vector representation $h_{sdp}$ of the shortest dependency path respectively, and the two are concatenated into the final representation $h_{all}$;

A capsule network layer, divided into a low-level capsule layer, a dynamic routing layer and a high-level capsule layer; the capsule network uses the dynamic routing mechanism to transmit the information of the low-level capsules to the high-level capsules dynamically, and the length of each high-level capsule's output vector is converted into the range 0 to 1 by a nonlinear squashing function;

An output layer, which takes the category of the high-level capsule with the largest vector length as the model's final prediction.

Preferably, converting text sentences into vectors in the embedding layer specifically comprises the following step. S1: the input sentence is $S=\{w_1, w_2, \ldots, w_n\}$; combining word embedding, position embedding and type embedding yields $x_i \in \mathbb{R}^{d+2p+2t}$, where $d$ is the word-embedding dimension, $p$ the position-embedding dimension and $t$ the type-embedding dimension. The sentence embedding vector can then be expressed as $S=\{x_1, x_2, \ldots, x_n\}$.

Preferably, in the BiLSTM layer the original sentence is combined with its dependency information; $h_1$ denotes the low-level vector representation of the original sentence and $h_{sdp}$ the low-level vector representation of the shortest dependency path.

Preferably, in the capsule network layer, the low-level capsule prediction vector $\hat{u}_{j|i}$ is computed by formula (9), the high-level capsule input vector $s_j$ by formula (10), and the output $v_j$ of the high-level capsule by the squashing function of formula (11).

Preferably, the category predicted by the final model is the category corresponding to the high-level capsule with the largest output vector length.

The beneficial effects of the present invention are as follows: the shortest dependency path between two drugs is parsed from the original sentence; a bidirectional long short-term memory network then obtains the low-level vector representations of the original sentence and of the shortest dependency path; the two are combined and fed into the capsule network, whose dynamic routing mechanism dynamically determines how much information low-level capsules transmit to high-level capsules, avoiding the loss of high-level feature information and thereby improving extraction performance.

Brief Description of the Drawings

Figure 1 is a schematic structural diagram of the capsule network relation extraction model combining the shortest dependency path proposed by the present invention.

Figure 2 is a schematic diagram of the capsule network structure.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.

Referring to Figures 1-2, a capsule network relation extraction model combining the shortest dependency path comprises an embedding layer, a BiLSTM layer, a capsule network layer and an output layer.

The embedding layer converts the input text sentence into vector form. It contains two modules: original-sentence embedding and shortest-dependency-path embedding. The input of the original-sentence embedding module includes word information, position information and drug type information. The input of the shortest-dependency-path embedding module is the shortest dependency path between the two drug entities, obtained with the Stanford parser; a sketch of this step is given below.
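The patent specifies the Stanford parser for this step. The following illustrative Python sketch substitutes spaCy's dependency parser and networkx to show how a shortest dependency path between two drug mentions can be extracted; the library choice, model name and single-token-mention assumption are ours, not the patent's.

import spacy
import networkx as nx

# Illustrative sketch: the patent uses the Stanford parser; spaCy's
# dependency parse stands in here to show the same idea.
nlp = spacy.load("en_core_web_sm")

def shortest_dependency_path(sentence, drug1, drug2):
    doc = nlp(sentence)
    # Treat the dependency tree as an undirected graph over token indices.
    graph = nx.Graph((tok.i, child.i) for tok in doc for child in tok.children)
    # Locate the two drug mentions (single-token mentions assumed).
    i1 = next(tok.i for tok in doc if tok.text == drug1)
    i2 = next(tok.i for tok in doc if tok.text == drug2)
    return [doc[i].text for i in nx.shortest_path(graph, source=i1, target=i2)]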

Suppose the input sentence is $S=\{w_1, w_2, \ldots, w_n\}$, where $w_i$ is the $i$-th word. Two position vectors $p^1$ and $p^2$ encode the distance from each word to the two drug entities, and $t^1$ and $t^2$ denote the types of the two drugs. For the original-sentence embedding, this work represents the sentence by concatenating word, position and type embeddings, where the word embeddings are GloVe vectors trained on a large-scale corpus. A single embedding vector is then given by formula (1):

$x_i = [w_i;\, p_i^1;\, p_i^2;\, t_i^1;\, t_i^2]$ (1)

where $x_i \in \mathbb{R}^{d+2p+2t}$, $d$ is the word-embedding dimension, $p$ the position-embedding dimension and $t$ the type-embedding dimension. The sentence embedding vector can then be expressed as $S=\{x_1, x_2, \ldots, x_n\}$; a sketch of this construction follows.
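A minimal PyTorch sketch of formula (1); the vocabulary sizes, dimensions and the class name SentenceEmbedding are illustrative assumptions, not values fixed by the patent.

import torch
import torch.nn as nn

class SentenceEmbedding(nn.Module):
    def __init__(self, vocab=10000, max_dist=200, n_types=5, d=100, p=10, t=10):
        super().__init__()
        self.word = nn.Embedding(vocab, d)         # GloVe-initialised in the patent
        self.pos1 = nn.Embedding(2 * max_dist, p)  # distance to drug entity 1
        self.pos2 = nn.Embedding(2 * max_dist, p)  # distance to drug entity 2
        self.typ1 = nn.Embedding(n_types, t)       # type of drug entity 1
        self.typ2 = nn.Embedding(n_types, t)       # type of drug entity 2

    def forward(self, w, p1, p2, t1, t2):
        # Concatenation per formula (1): x_i lies in R^{d+2p+2t}.
        return torch.cat([self.word(w), self.pos1(p1), self.pos2(p2),
                          self.typ1(t1), self.typ2(t2)], dim=-1)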

The BiLSTM layer. The long short-term memory network (LSTM) is an RNN model with a special structure that effectively alleviates the vanishing-gradient problem faced by recurrent neural network models. An LSTM consists of an input gate, a forget gate, an output gate and a cell state, updated by formulas (2) to (7):

$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$ (2)

$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$ (3)

$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$ (4)

$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$ (5)

$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$ (6)

$h_t = o_t \odot \tanh(C_t)$ (7)

where $i_t$ denotes the input gate, $f_t$ the forget gate and $o_t$ the output gate; $h_{t-1}$ and $C_{t-1}$ are the hidden state and cell state at the previous time step $t-1$, and $h_t$ and $C_t$ are the hidden state and cell state at the current time step $t$.

BiLSTM is a variant of LSTM that uses two LSTMs to capture the forward and backward information of a sentence respectively and concatenates their output vectors as the BiLSTM output. A BiLSTM network therefore captures the global sequence information of a sentence well and suits the processing of long sentences, so the BiLSTM structure is used here to obtain low-dimensional representations of the original sentence and of the shortest dependency path. Denoting the forward hidden state by $\overrightarrow{h_t}$ and the backward hidden state by $\overleftarrow{h_t}$, the BiLSTM output is obtained by formula (8):

$h_t = [\overrightarrow{h_t};\, \overleftarrow{h_t}]$ (8)

This work accordingly uses two BiLSTMs to obtain the vector representation $h_1$ of the original sentence and the vector representation $h_{sdp}$ of the shortest dependency path, concatenates them into the final representation $h_{all}$, and feeds $h_{all}$ into the capsule network layer, as sketched below.
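A minimal sketch of this layer, assuming PyTorch; the hidden sizes are illustrative, and concatenating along the sequence axis is one plausible reading of the patent's description (it does not spell out the concatenation axis).

import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, emb_dim=120, hidden=100):
        super().__init__()
        self.sent_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.sdp_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, sent_emb, sdp_emb):
        h1, _ = self.sent_lstm(sent_emb)   # (B, n, 2*hidden): original sentence
        hsdp, _ = self.sdp_lstm(sdp_emb)   # (B, m, 2*hidden): shortest dependency path
        # h_all: the two low-level representations joined, fed to the capsule layer.
        return torch.cat([h1, hsdp], dim=1)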

The capsule network layer. A capsule is a group of neurons; each capsule is responsible for detecting a single part of an object, and all capsules jointly determine the object's overall structure. The structure of the capsule network is shown in Figure 2. It is divided into a low-level capsule layer, a dynamic routing layer and a high-level capsule layer. The capsule network uses the dynamic routing mechanism to transmit information from the low-level capsules to the high-level capsules dynamically, overcoming the information loss caused by pooling layers in CNNs.

Let $u_i$ be the output of the $i$-th low-level capsule; its prediction for high-level capsule $j$ is computed by formula (9):

$\hat{u}_{j|i} = W_{ij}\, u_i$ (9)

where $W_{ij}$ is the weight matrix between low-level capsule $i$ and high-level capsule $j$, learned during training.

In the high-level capsule layer, $s_j$ is the input and $v_j$ the output of high-level capsule $j$, where $s_j$ is obtained from the low-level prediction vectors $\hat{u}_{j|i}$ and the corresponding coupling coefficients $c_{ij}$, as in formula (10):

$s_j = \sum_i c_{ij}\, \hat{u}_{j|i}$ (10)

In a capsule network the length of a vector represents a probability, so the following nonlinear squashing function converts the length of the high-level capsule output vector into the range 0 to 1, as in formula (11):

$v_j = \dfrac{\|s_j\|^2}{1 + \|s_j\|^2} \cdot \dfrac{s_j}{\|s_j\|}$ (11)
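Formula (11) translates directly into a few lines of PyTorch; the eps term is a numerical-stability addition of ours, not part of the formula.

import torch

def squash(s, dim=-1, eps=1e-8):
    # Formula (11): scale the vector so its length lies between 0 and 1
    # while its direction is preserved.
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)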

The amount of information transmitted between the low-level and high-level capsule layers is determined by the coupling coefficients $c_{ij}$, computed as a softmax over the routing logits $b_{ij}$, as in formula (12):

$c_{ij} = \dfrac{\exp(b_{ij})}{\sum_k \exp(b_{ik})}$ (12)

As the formula shows, when a low-level capsule routes information to the correct high-level capsule, $c_{ij}$ grows; when it routes to a wrong high-level capsule, $c_{ij}$ shrinks; and $c_{ij}=0$ means no information passes between low-level capsule $i$ and high-level capsule $j$. The coefficients $c_{ij}$ are updated iteratively by the dynamic routing algorithm.

The pseudocode of the dynamic routing algorithm is as follows:
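The original pseudocode listing is not reproduced in the source text. The description above (formulas (10)-(12) plus the agreement-based update of the coefficients) matches the routing-by-agreement procedure of Sabour et al., so a minimal sketch of that standard algorithm is given here, reusing the squash function above; the three-iteration default is a conventional choice, not a value stated by the patent.

import torch
import torch.nn.functional as F

def dynamic_routing(u_hat, num_iters=3):
    # u_hat: (B, n_low, n_high, dim), the prediction vectors of formula (9).
    B, n_low, n_high, _ = u_hat.shape
    b = torch.zeros(B, n_low, n_high, device=u_hat.device)  # routing logits
    for _ in range(num_iters):
        c = F.softmax(b, dim=2)                   # formula (12): coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)  # formula (10): weighted sum
        v = squash(s)                             # formula (11), defined above
        # Agreement update: raise b_ij where a prediction aligns with the output.
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)
    return v  # (B, n_high, dim): high-level capsule outputs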

The loss function of the capsule network is given in formula (13). The number of capsules in the high-level layer equals the number of drug relations in the dataset, and each capsule represents one relation category; $T_k = 1$ if category $k$ is present and $T_k = 0$ otherwise, while $m^+$, $m^-$ and $\lambda$ are hyperparameters that must be specified in advance:

$L_k = T_k \max(0,\, m^+ - \|v_k\|)^2 + \lambda\,(1 - T_k) \max(0,\, \|v_k\| - m^-)^2$ (13)
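A sketch of formula (13) summed over categories; the defaults 0.9, 0.1 and 0.5 for $m^+$, $m^-$ and $\lambda$ are the conventional capsule-network choices, not values given by the patent.

import torch

def margin_loss(v, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    # v: (B, num_classes, dim) high-level capsule outputs.
    # targets: (B, num_classes) one-hot T_k per formula (13).
    lengths = v.norm(dim=-1)
    loss = (targets * torch.clamp(m_pos - lengths, min=0) ** 2
            + lam * (1.0 - targets) * torch.clamp(lengths - m_neg, min=0) ** 2)
    return loss.sum(dim=1).mean()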

The output layer. In this layer, the category of the high-level capsule with the largest output vector length is selected as the model's final prediction.

In the present invention, the shortest dependency path between two drugs is parsed from the original sentence; a bidirectional long short-term memory network obtains the low-level vector representations of the original sentence and of the shortest dependency path; the two are combined and fed into the capsule network, whose dynamic routing mechanism dynamically determines how much information low-level capsules transmit to high-level capsules, avoiding the loss of high-level feature information and thereby improving extraction performance.

In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or the number of the indicated technical features. Accordingly, a feature defined as "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present invention, "plurality" means two or more unless otherwise expressly and specifically defined.

The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent replacement or modification made by a person skilled in the art within the technical scope disclosed by the present invention, according to its technical solution and inventive concept, shall fall within the scope of protection of the present invention.

Claims (5)

CN201910600327.5A, filed 2019-07-04 (priority 2019-07-04): A capsule network relation extraction model combining the shortest dependency path. Pending. CN110473595A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910600327.5A | 2019-07-04 | 2019-07-04 | A capsule network relation extraction model combining the shortest dependency path

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910600327.5A | 2019-07-04 | 2019-07-04 | A capsule network relation extraction model combining the shortest dependency path

Publications (1)

Publication Number | Publication Date
CN110473595A (en) | 2019-11-19

Family

ID=68506788

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910600327.5A (Pending, published as CN110473595A (en)) | A capsule network relation extraction model combining the shortest dependency path | 2019-07-04 | 2019-07-04

Country Status (1)

Country | Link
CN | CN110473595A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN112270951A (en)* | 2020-11-10 | 2021-01-26 | Sichuan University | Brand-new molecule generation method based on multitask capsule self-encoder neural network
CN112883732A (en)* | 2020-11-26 | 2021-06-01 | China Electronics Technology Cyber Security Co., Ltd. | Method and device for identifying Chinese fine-grained named entities based on associative memory network
CN113158679A (en)* | 2021-05-20 | 2021-07-23 | Guangdong University of Technology | Marine industry entity identification method and device based on multi-feature superposition capsule network
CN114372138A (en)* | 2022-01-11 | 2022-04-19 | Information and Telecommunication Branch of State Grid Jiangsu Electric Power Co., Ltd. | Electric power field relation extraction method based on shortest dependency path and BERT


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
US20190065576A1 (en)* | 2017-08-23 | 2019-02-28 | Rsvp Technologies Inc. | Single-entity-single-relation question answering systems, and methods
CN109241283A (en)* | 2018-08-08 | 2019-01-18 | Guangdong University of Technology | A text classification method based on multi-angle capsule network
CN109543200A (en)* | 2018-11-30 | 2019-03-29 | Tencent Technology (Shenzhen) Co., Ltd. | Text translation method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Qi Wang: "Recurrent Capsule Network for Relations Extraction: A Practical Application to the Severity Classification of Coronary Artery Disease", arXiv:1807.06718v1 [cs.CL] *
Yatian Shen: "Attention-Based Convolutional Neural Network for Semantic Relation Extraction", Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers *
Yijia Zhang: "Drug-drug interaction extraction via hierarchical RNNs on sequence and shortest dependency paths", doi:10.1093/bioinformatics/btx659 *


Similar Documents

Publication | Title
CN110473595A (en) | A capsule network relation extraction model combining the shortest dependency path
CN110334219B (en) | Knowledge graph representation learning method based on attention mechanism integrated with text semantic features
CN109471895B (en) | Electronic medical record phenotype extraction and phenotype name normalization method and system
CN110597970B (en) | Multi-granularity medical entity joint identification method and device
CN108681539B (en) | A Mongolian-Chinese neural translation method based on convolutional neural network
CN108733742B (en) | Globally normalized reader system and method
WO2022057669A1 (en) | Method for pre-training knowledge graph on the basis of structured context information
CN113553440B (en) | Medical entity relationship extraction method based on hierarchical reasoning
CN112989835B (en) | A method for extracting complex medical entities
CN114092707A (en) | Image text visual question answering method, system and storage medium
CN111984772A (en) | Medical image question-answering method and system based on deep learning
CN113095415A (en) | Cross-modal hashing method and system based on multi-modal attention mechanism
CN110555083A (en) | Unsupervised entity relation extraction method based on zero-shot
CN108984525B (en) | A Chinese grammar error detection method based on word vectors with text information
CN110276396B (en) | Image description generation method based on object saliency and cross-modal fusion features
CN106570456A (en) | Handwritten Chinese character recognition method based on fully convolutional recursive network
CN109684449B (en) | Attention mechanism-based natural language semantic representation method
CN113779993B (en) | Medical entity identification method based on multi-granularity text embedding
CN116932722A (en) | Cross-modal data fusion-based medical visual question-answering method and system
CN114841122A (en) | Text extraction method combining entity identification and relation extraction, storage medium and terminal
CN112183085A (en) | Machine reading comprehension method and device, electronic equipment and computer storage medium
CN112966069A (en) | False news detection system and method based on general cognition and individual cognition
CN111881292A (en) | Text classification method and device
CN108959260A (en) | A Chinese grammar error detection method based on textual term vectors
CN113780350B (en) | ViLBERT and BiLSTM-based image description method

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication (application publication date: 2019-11-19)

