2017
DOI: 10.1186/s12859-017-1868-5

Long short-term memory RNN for biomedical named entity recognition

Abstract: Background: Biomedical named entity recognition (BNER) is a crucial initial step of information extraction in the biomedical domain. The task is typically modeled as a sequence labeling problem. Various machine learning algorithms, such as Conditional Random Fields (CRFs), have been successfully used for this task. However, these state-of-the-art BNER systems largely depend on hand-crafted features. Results: We present a recurrent neural network (RNN) framework based on word embeddings and character representation. On t…
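
The Results sentence above describes a tagger built from word embeddings plus a character-level representation. As a rough illustration only, the following minimal sketch (assuming PyTorch; the class name, vocabulary sizes, and dimensions are hypothetical, and it is not the authors' implementation) derives one vector per word from its characters with a character BiLSTM, concatenates it to the word embedding, and runs a word-level BiLSTM to score tags; the CRF-style output layer mentioned by citing papers below is omitted.

import torch
import torch.nn as nn

class CharWordBiLSTMTagger(nn.Module):
    # Hypothetical sketch: word embedding + character-BiLSTM word representation,
    # followed by a word-level BiLSTM and a per-token tag scorer.
    def __init__(self, word_vocab, char_vocab, n_tags,
                 word_dim=100, char_dim=25, char_hidden=25, hidden=100):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim, padding_idx=0)
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        self.char_lstm = nn.LSTM(char_dim, char_hidden, bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(word_dim + 2 * char_hidden, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
        b, s, c = char_ids.shape
        chars = self.char_emb(char_ids.view(b * s, c))
        _, (h, _) = self.char_lstm(chars)               # final states: (2, b*s, char_hidden)
        char_repr = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
        tokens = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)
        states, _ = self.word_lstm(tokens)
        return self.out(states)                         # (batch, seq_len, n_tags)

# Toy usage with random indices, checking output shapes only.
model = CharWordBiLSTMTagger(word_vocab=5000, char_vocab=80, n_tags=9)
scores = model(torch.randint(1, 5000, (2, 12)), torch.randint(1, 80, (2, 12, 15)))
print(scores.shape)  # torch.Size([2, 12, 9])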

Cited by 120 publications (72 citation statements). References 35 publications.
“…Lyu et al. introduce a variation of the recurrent neural network (RNN) called bidirectional long short-term memory units (BLSTM) to recognize biomedical named entities [20]. Lakhani et al. use a convolutional neural network (CNN) to extract ICD-O-3 topographic codes from pathology reports [7]. Gao et al. use hierarchical attention networks for information extraction from pathology reports.…”
Section: Introduction (mentioning)
confidence: 99%
“…Korvigo et al. [26] applied a CNN-RNN network to recognize spans of chemicals, and Luo et al. 2018 [28] proposed an attention-based bidirectional LSTM with CRF to detect spans of chemicals. Unanue et al. 2017 [29] used a bidirectional LSTM with CRF to detect spans of drug names and clinical concepts, while Lyu et al. 2017 [27] proposed a bidirectional LSTM-RNN model for detecting spans of a variety of biomedical concepts. However, none of these approaches also attempted the normalization step, so they did not identify which particular concept in an ontology was detected.…”
Section: Related Work (mentioning)
confidence: 99%
“…Features that capture word-internal characteristics have been shown to be effective for BNER tasks in CRF models (Klinger et al., 2008). Lyu et al. (2017) applied a BiLSTM-CRF model with LSTM-based character-level word embeddings to a gene and protein NER task, demonstrating state-of-the-art performance that outperformed traditional feature-based models. Luo et al. (2018) further improved on this result on a chemical NER task by adding an attention layer between the BiLSTM and CRF layers (Att-BiLSTM-CRF).…”
Section: Introduction (mentioning)
confidence: 99%
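
The last statement above describes inserting an attention layer between the BiLSTM and CRF layers (Att-BiLSTM-CRF). As a hypothetical sketch of that idea only (assuming PyTorch; names and dimensions are assumptions, not the cited authors' code, and a plain linear tagging layer stands in for the CRF), dot-product self-attention lets each token's tag scores depend on every other token in the sentence:

import torch
import torch.nn as nn

class AttentionOverBiLSTM(nn.Module):
    # Hypothetical sketch: self-attention over BiLSTM outputs, then per-token tag scores.
    def __init__(self, hidden=200, n_tags=9):
        super().__init__()
        self.scale = hidden ** 0.5
        self.tagger = nn.Linear(2 * hidden, n_tags)    # stand-in for the CRF layer

    def forward(self, states):
        # states: (batch, seq_len, hidden) BiLSTM outputs.
        scores = torch.bmm(states, states.transpose(1, 2)) / self.scale
        weights = torch.softmax(scores, dim=-1)        # (batch, seq_len, seq_len)
        context = torch.bmm(weights, states)           # attended summary per token
        # Concatenate the local and attended views before emitting tag scores.
        return self.tagger(torch.cat([states, context], dim=-1))

emissions = AttentionOverBiLSTM()(torch.randn(2, 12, 200))
print(emissions.shape)  # torch.Size([2, 12, 9])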