Article

A Comparative Analysis of Active Learning for Biomedical Text Mining

Usman Naseem, Matloob Khushi, Shah Khalid Khan, Kamran Shaukat and Mohammad Ali Moni

1 School of Computer Science, The University of Sydney, Sydney, NSW 2006, Australia
2 School of Engineering, RMIT University, Carlton, VIC 3053, Australia
3 School of Electrical Engineering and Computing, The University of Newcastle, Newcastle, NSW 2308, Australia
4 UNSW Digital Health, WHO Center for eHealth, Faculty of Medicine, The University of New South Wales, Sydney, NSW 2052, Australia
* Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2021, 4(1), 23; https://doi.org/10.3390/asi4010023
Submission received: 31 December 2020 / Revised: 24 February 2021 / Accepted: 8 March 2021 / Published: 15 March 2021
(This article belongs to the Special Issue Advanced Machine Learning Techniques, Applications and Developments)

Abstract

An enormous amount of clinical free-text information, such as pathology reports, progress reports, clinical notes, and discharge summaries, has been collected at hospitals and medical care clinics. These data provide an opportunity to develop many useful machine learning applications if they can be transformed into a learnable structure with appropriate labels for supervised learning. The annotation of these data must be performed by qualified clinical experts, which limits their use given the high cost of annotation. Active learning (AL), an underutilised machine learning technique for labelling new data, is a promising candidate for addressing this cost. AL has been applied successfully to speech recognition and text classification; however, there is a lack of literature investigating its use for clinical purposes. We performed a comparative investigation of various AL techniques using ML- and deep learning (DL)-based strategies on three unique biomedical datasets. We investigated the random sampling (RS), least confidence (LC), informative diversity and density (IDD), margin, and maximum representativeness-diversity (MRD) AL query strategies. Our experiments show that AL has the potential to significantly reduce the cost of manual labelling. Furthermore, pre-labelling performed using AL expedites the labelling process by reducing the time required for labelling.

1. Introduction

The widespread adoption of storage and digitisation technologies, in particular the digitisation of clinical records, presents numerous opportunities for data analysis. However, to reach their full potential, such analysis systems need to extract structured information from unstructured text reports. An increasing volume of unstructured clinical information about patients is stored electronically by hospitals and medical services. Structured data are fundamental for applications such as reporting, reasoning, and retrieval, for instance, cancer surveillance from medical reports and death certificates [1], checking radiology reports to prevent missed fractures [2], and clinical information retrieval [3]. Recent advances in natural language processing (NLP) and information extraction (IE) have faced fundamental difficulties in adequately capturing useful information from these free-text resources [4]. IE is a nontrivial process of extracting useful, structured information, such as patterns and relations, from unstructured input text.
One of the challenges is identifying instances of concepts that are referred to in ways not captured by current lexical resources, while handling ambiguity, polysemy, synonymy, and word-order variation. Moreover, the information presented in clinical narratives is frequently unstructured, ungrammatical, and fragmented. Consequently, standard NLP technologies and systems cannot be directly applied to the clinical domain [5].
ML-based, rule-based, and dictionary-based methods can be used to identify and extract concepts from raw text corpora in finance, medicine, and various other domains [6,7,8,9,10]. In the clinical domain, these methodologies have been applied in shared tasks such as the ShARe/CLEF 2013 eHealth Evaluation Lab and the i2b2/VA challenge [11,12,13]. The results demonstrated that ML-based algorithms are scalable and usually beat rule-based approaches.
A critical challenge is that clinical text contains domain-specific terminology, so domain experts are needed either to craft rule-based methods or to label huge corpora as training data for supervised ML. Rule-based approaches are expensive because they require domain experts, are error-prone to build [14], and are not adaptable or transferable to other tasks. The performance of supervised ML approaches increases with the amount of labelled data used for training. Crowdsourcing, which is used to label data in the general domain, is not suitable for clinical data, and manual labelling by experts is an expensive and labour-intensive task.
AL [15] and semi-supervised learning [16] are viable alternatives to standard supervised ML methods and can reduce labelling costs. AL can deliver an automated system with high effectiveness at a lower labelling cost. Training an ML model on a small, randomly selected subset of labelled data is less effective than training on the complete labelled set; the aim of AL is to reach high effectiveness while training on only a small, carefully chosen chunk of the data.
AL is a human-in-the-loop technique that can radically decrease human involvement compared with conventional supervised ML techniques, which require a massive amount of labelled data at the start. Figure 1 presents the general cycle of AL for extracting information from text. It is an iterative cycle in which informative samples are chosen from raw, unstructured text documents using a query strategy. A human annotator then labels these samples, and a supervised ML model is rebuilt at every iteration. The effectiveness of AL techniques has been convincingly demonstrated in numerous areas, for example, text classification, IE, and speech recognition [15].
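A minimal sketch of this pool-based loop is given below, assuming a scikit-learn-style classifier; the seed size, batch size, and the query_strategy callback are illustrative names and defaults rather than the exact implementation used in this study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_pool, y_oracle, query_strategy,
                         seed_size=50, batch_size=20, iterations=10):
    """Pool-based AL: train on a small seed set, then repeatedly query the
    most informative unlabelled samples for (simulated) annotation."""
    rng = np.random.default_rng(0)
    labelled = list(rng.choice(len(X_pool), size=seed_size, replace=False))
    unlabelled = [i for i in range(len(X_pool)) if i not in labelled]
    model = LogisticRegression(max_iter=1000)
    for _ in range(iterations):
        model.fit(X_pool[labelled], y_oracle[labelled])     # retrain on labelled data
        scores = query_strategy(model, X_pool[unlabelled])  # informativeness per sample
        picked = np.argsort(scores)[-batch_size:]           # top-scoring samples
        queried = [unlabelled[i] for i in picked]
        labelled.extend(queried)                            # the oracle supplies labels
        unlabelled = [i for i in unlabelled if i not in queried]
    return model
```

Each query strategy described in Section 3.2 can be plugged in as the query_strategy callback by returning one informativeness score per unlabelled sample.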
Despite encouraging findings in other tasks and domains, AL has not been thoroughly investigated for biomedical tasks. Our research is based on the following research questions. RQ1: How can AL be used to reduce the labelling cost while maintaining the quality of the extracted information? RQ2: Which existing AL techniques perform well, compared with other AL methods, in reducing labelling time? RQ3: How can other ML approaches (i.e., representation learning and unsupervised learning) produce effective information extraction while maintaining quality and minimising labelling effort?
The aim of our research is to provide the research community with a framework for extracting information from large amounts of unstructured biomedical documents: an AL-based framework that extracts high-quality concepts and reduces the burden of manual annotation.

2. Related Work

The expanding volume of clinical information that can now be digitised and stored in electronic medical records makes information extraction from clinical text increasingly important, particularly in the areas of NLP and ML. While numerous clinical resources and technologies are now available to facilitate the processing of clinical data, clinical information extraction remains challenging.
Recent studies focus either on IE from the biomedical literature, for example, books and scientific articles [17,18,19,20], or on IE from free-text clinical narratives produced by clinical staff, for example, radiology and pathology reports or discharge summaries. The latter group represents a more difficult task because of the unstructured nature of the free text and the informal language used [6]. IE is an essential first step in extracting key information from clinical records. The fundamental challenge is to create cost-effective approaches that support automatic concept extraction from clinical free-text resources while guaranteeing the high quality of the extracted concepts. Automatic processing of such volumes of data could greatly benefit clinical information systems.

2.1. Information Extraction from Biomedical Corpus

Extracting information from biomedical documents involves capturing the natural-language expressions in raw, unstructured documents that convey the significant information within a given domain [14]. General NLP techniques cannot be used directly to extract information from biomedical corpora because of their ubiquitous, raw, and unstructured nature. Current methods can be divided into the following categories.

2.1.1. Dictionary-Based Methods

Dictionary-based methods match a provided list of terms against a text and use patterns to extract structures such as entities and text strings from a pre-defined dictionary. Many domain-specific dictionaries that can be used to extract biomedical information are readily available, including SNOMED CT [21] and UMLS [22]. Bashyam et al. [23] presented a lexical lookup approach for detecting UMLS concepts in radiology reports and showed that their method identifies the same concepts seven times faster than MetaMap.
Dictionary-based techniques can help extract information from free text, and they can also normalise entities and support both the syntactic and semantic levels of processing by associating entities with terms in the dictionaries. These methods are useful but suffer from coverage issues, which limits their use in this domain.

2.1.2. Rule-Based Methods

Rule-based methods have been widely developed to extract entities in the biomedical domain [24]. They rely on manually created rules to extract biomedical information from a corpus, and various techniques are used to define rules that capture patterns within natural language [25].
Existing databases have coverage issues and do not include recently discovered entities, so some useful target entities and biomedical information hidden in seemingly unimportant contexts may be missed by a dictionary-based approach. Hamon and Grabar [26] presented a rule-based method to extract biomedical information using existing terms, rules, and shallow parsing. Mack et al. [27] proposed BioTeKS, a rule-based approach to capture biomedical information from a biomedical corpus. These methods are widely used in the biomedical domain; however, implementing them requires domain expertise, and they are neither adaptable nor transferable to other domains [28].

2.2. Machine Learning (ML)

Machine learning (ML)-based methods address the shortcomings of the abovementioned techniques by making the machine learn and improve its performance [29]. Biomedical/clinical extraction can be cast as a sequence-labelling task, a classification task in supervised ML. Support vector machines (SVMs) [30] and conditional random fields (CRFs) [31] are the methods most commonly used for such sequence-labelling classification. For more sophisticated tasks, a large amount of high-quality training data is required to train the models. Although huge amounts of data are available, labelling them is costly and cumbersome. AL was proposed to limit the volume of manual labelling required. AL's main idea is to query and label those samples that carry the most useful information for the learning model compared with the other available samples; it can thereby attain better performance with less annotated training data [15,32].
Semi-supervised learning is another approach to coping with limited annotated corpora [33] and has been applied effectively to some real-world applications. An abundance of unlabelled examples is easily accessible, while manually labelling them is an intensive and costly task. Self-training is a commonly used technique in which unannotated text is annotated in an iterative process: the updated labelled set is used to retrain and refresh the underlying classifier at each iteration. This research investigates how to augment the learning model at each step by incorporating self-training into the AL process.
Representation learning refers to learning data representations that facilitate information extraction; the learned representations are fed into the training of the machine learning model [34]. Mikolov et al. [35] introduced a novel word-embedding concept in which words are represented as continuous vectors that capture their various dimensions of difference.
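As a hedged illustration of this idea, the sketch below trains gensim's Word2Vec on toy sentences; the hyperparameters and corpus are illustrative, and gensim 4 or later is assumed (it names the dimensionality argument vector_size):

```python
from gensim.models import Word2Vec

# Toy corpus; in practice the sentences would come from the biomedical text.
sentences = [["aspirin", "inhibits", "platelet", "aggregation"],
             ["warfarin", "interacts", "with", "aspirin"]]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, epochs=50)
vector = model.wv["aspirin"]                         # continuous 100-d representation
neighbours = model.wv.most_similar("aspirin", topn=3)
```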

2.3. Natural Language Processing (NLP)

NLP lies at the intersection of computer science and linguistics and involves analysing and understanding natural human language from both speech and written text. Over the years, NLP has been used in various applications such as email filtering [36], irony and sarcasm detection [37], document organisation [38], sentiment and opinion mining [39,40,41], hate speech detection [42,43,44], question answering [45], content mining [46], biomedical text mining [47,48], and many more [8,49,50].
In biomedical named entity recognition (BioNER), Yao et al. [51] first created word embeddings from unlabelled texts on biological topics using neural networks and then built a multi-layer neural network to obtain state-of-the-art results. Li et al. [52] combined sentence vectors with twin word embeddings and applied a BiLSTM to biomedical texts to identify domain-relevant entities. To identify drug entities, Zeng et al. [53] developed a BiLSTM-CRF model: a CNN produced character-level feature representations, which were combined with word-level representations and fed to the BiLSTM-CRF for BioNER. In the biomedical literature, many words introduce redundancy while neural network models are trained for feature capture, which can prevent critical information from being obtained: the BioNER model may fail to focus on the crucial regions, and information loss can occur. Making neural network models attend to areas of importance is therefore a salient goal. In machine translation, Bahdanau et al. [54] proposed the attention mechanism, which lets the decoder focus on the essential parts of the source text as it decodes, reducing information loss. Luo et al. [55] used an attention-based BiLSTM-CRF model for document-level BioNER; they mitigated the tagging-inconsistency problem by applying attention mechanisms across sentences and obtained the best results on the CHEMDNER and CDR corpora with this approach.
Several other works have investigated the benefit of contextual models in biomedical and clinical areas. Researchers trained ELMo on biomedical corpora to produce BioELMo and found that BioELMo beats ELMo on BioNER tasks [56,57]; alongside this work, a pre-trained BioELMo model was published, enabling further clinical research. Beltagy et al. [58] released Scientific BERT (SciBERT), a BERT model trained on scientific texts. Compared with non-contextual embeddings, BERT has generally been superior to ELMo. Separately, innovative wireless connectivity techniques could enable the remote execution of these methods [59,60,61,62].
Si et al. [63] trained BERT on clinical-notes corpora and used complex task-specific models to improve on both traditional embeddings and ELMo embeddings for i2b2 2010 and 2012 BioNER. Similarly, in another study, the domain-specific language model BioBERT [64] was produced by training a BERT model on biomedical documents from PubMed abstracts and PMC articles, which improved BioNER results. Peng et al. [65] introduced the Biomedical Language Understanding Evaluation (BLUE) benchmark, a collection of resources for evaluating and analysing biomedical natural language representation models.

2.4. Active Learning (AL)

AL algorithms are beneficial in ML, especially when large amounts of unannotated data are available. AL techniques apply supervised ML methods in an iterative way: a human annotator is involved in the learning process, and overall human involvement can be drastically decreased, as demonstrated in Figure 2. Despite its strength, AL has not been fully explored for biomedical information extraction. AL's primary goal is to maximise the model's effectiveness while reducing the number of samples that need manual labelling. The main challenge is to find the most informative samples among those available to train a model, thereby achieving better performance and high effectiveness.

2.5. Active Learning in Clinical Domain

AL aims to reduce the costs and issues related to the manual annotation step in supervised ML methods. Decreasing the manual annotation burden is especially critical in the clinical domain because of the high cost of having qualified experts annotate clinical free text. AL has been used for various biomedical tasks [66], including de-identifying clinical records [67], clinical text classification [68], and clinical named entity recognition [69]. Random sampling (RS), where samples are chosen randomly, is a commonly used AL baseline.
Rosales et al. [70] presented an AL method for classifying biomedical information into two groups; it outperformed traditional methods. Chen et al. [66] presented a sampling technique based on the changes appearing in different learning models during AL. Another study, by Boström and Dalianis [67], treated the de-identification of Swedish biomedical samples as a classification task and compared the performance of two AL methods against RS baselines. Recently, Chen et al. [69] proposed new AL query strategies belonging to the uncertainty-based and diversity-based families. The authors presented a comprehensive evaluation of existing and new AL methods on biomedical tasks and found that uncertainty-based methods required less effort to label the corpus than diversity-based methods.
Considering the basic need for cost-effective AL approaches for biomedical tasks, the limitations highlighted above need to be addressed. Therefore, in this research, our aim is to reduce the cost of manual annotation using AL and representation learning.

3. Methodology

3.1. Dataset

In this study, we used the following datasets.
The DDI Extraction 2013 corpus is a collection of 792 texts selected from the DrugBank database and another 233 Medline abstracts [71]. The drug-drug interactions, including both pharmacokinetic and pharmacodynamic interactions, were annotated by two expert pharmacists with substantial pharmacovigilance backgrounds. In our benchmark, we use 624 training files and 191 test files to evaluate performance and report the micro-average F1-score over the four DDI types.
ChemProt consists of 1820 PubMed abstracts with chemical-protein interactions annotated by domain experts; it was used in the BioCreative VI chemical-protein interaction shared task [72]. We use the standard training and test sets from the ChemProt shared task and evaluate the same five classes: CPR:3, CPR:4, CPR:5, CPR:6, and CPR:9.
HoC (the Hallmarks of Cancer corpus) consists of 1580 PubMed abstracts annotated with ten currently known hallmarks of cancer [73]. Annotation was performed at the sentence level by an expert with more than 15 years of experience in cancer research. We used 315 (20%) abstracts for testing and the remaining abstracts for training. Table 1 lists each dataset used in this study along with its task, and Figure 3 depicts an analysis of the datasets.

3.2. Active Learning Query Strategies

3.2.1. Random Sampling (RS)

The key idea of random sampling in AL is to take a small random portion of the entire dataset to represent it, with each member having an equal probability of being selected. Among AL query strategies, random sampling is the most straightforward: it applies a random state and shuffling to select the training and testing pools.
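A minimal sketch compatible with the loop sketched in the Introduction assigns each sample a random score, which makes the top-scoring batch a uniform random draw from the pool (the function name and seed are illustrative):

```python
import numpy as np

def random_sampling(model, X_unlabelled, random_state=42):
    """Baseline query strategy: every unlabelled sample is equally likely."""
    rng = np.random.default_rng(random_state)
    return rng.random(len(X_unlabelled))  # random informativeness scores
```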

3.2.2. Least Confidence (LC)

Least confidence belongs to uncertainty sampling, a family of query strategies that estimate a sample's value by calculating how uncertain the model is about it.
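In other words, a sample's score is 1 − P(ŷ|x), the probability mass not assigned to the model's most likely label ŷ. A minimal sketch, assuming the classifier exposes predict_proba as scikit-learn estimators do:

```python
import numpy as np

def least_confidence(model, X_unlabelled):
    """Score 1 - P(y_hat | x): highest where the top prediction is least sure."""
    probs = model.predict_proba(X_unlabelled)
    return 1.0 - probs.max(axis=1)
```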

3.2.3. Informative Diversity and Density (IDD)

IDD calculates the information density of an instance x. Unlike plain uncertainty sampling, IDD takes the structure of the data into account.
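A common formulation weights a sample's uncertainty by its average similarity to the rest of the pool, so that dense, representative regions are preferred over isolated outliers. The sketch below follows that formulation; the cosine similarity measure and the beta weight are illustrative assumptions, not necessarily the exact variant used here:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def informative_density(model, X_unlabelled, beta=1.0):
    """Uncertainty weighted by average similarity to the unlabelled pool."""
    uncertainty = 1.0 - model.predict_proba(X_unlabelled).max(axis=1)
    density = cosine_similarity(X_unlabelled).mean(axis=1)
    return uncertainty * density ** beta
```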

3.2.4. Margin

Margin also belongs to uncertainty sampling; unlike LC, it measures the difference in probability between the first and second most likely predictions.
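A minimal sketch negates the margin so that ambiguous samples, whose top two class probabilities are close together, receive the highest query scores:

```python
import numpy as np

def margin_sampling(model, X_unlabelled):
    """Negative gap between the two most probable classes per sample."""
    probs = np.sort(model.predict_proba(X_unlabelled), axis=1)
    return -(probs[:, -1] - probs[:, -2])
```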

3.2.5. Maximum Representativeness-Diversity (MRD)

Maximum representativeness-diversity relies only on the similarity between each sample and all other samples in the unlabelled set: the most representative and diverse samples are marked in the current batch and then added to the training set. This approach can save experts from waiting for the learning model to finish retraining on the current set of labels before the next batch of samples is selected with one of the above query strategies.
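A minimal greedy sketch of this idea follows; the cosine similarity measure, the product scoring, and the batch size are illustrative assumptions rather than the exact formulation used here:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def mrd_batch(X_unlabelled, batch_size=20):
    """Greedily pick samples that are representative of the pool yet
    dissimilar to the samples already chosen for the batch."""
    sim = cosine_similarity(X_unlabelled)
    representativeness = sim.mean(axis=1)
    batch = [int(np.argmax(representativeness))]
    while len(batch) < batch_size:
        diversity = 1.0 - sim[:, batch].max(axis=1)  # distance to current batch
        score = representativeness * diversity
        score[batch] = -np.inf                       # never re-pick a sample
        batch.append(int(np.argmax(score)))
    return batch
```

Because the score depends only on the unlabelled pool and not on the model, the next batch can be computed while the model is still retraining, which is the waiting-time advantage described above.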

3.3. Selection of AL Query Strategies

Many query strategies exist for AL, but not all of them suit every situation. We chose LC and margin because they are the most popular query strategies in other areas, and RS because, unlike the other algorithms, it selects pools randomly. We then chose IDD because it uses a different measure from LC and margin and, for the same reason, MRD, increasing the variety of our query-strategy schemes to obtain a more reliable basis for analysis.

3.4. Feature Extraction Methods

For feature extraction, we chose TF-IDF, which is widely used across many areas. We added FastText for comparison because TF-IDF considers only the frequency of a word in a document, whereas FastText captures more than that, giving our analysis another perspective on performance. Finally, we added BERT and ELMo and their extensions: they are heavyweight methods compared with the others, but they perform especially well on NLP tasks in other areas, so we included them to analyse their performance on clinical datasets.
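As a brief illustration of the first of these, the sketch below builds TF-IDF features with scikit-learn; the n-gram range, vocabulary cap, and toy documents are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["aspirin may increase the anticoagulant effect of warfarin",
        "no interaction was observed between the two compounds"]
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=50000)
X = vectorizer.fit_transform(docs)  # sparse document-term matrix
```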

3.5. Machine Learning Methods

For ML methods, we first chose some widely used algorithms as baselines, which is why we picked SVM, KNN, and NB; they are applied to many different kinds of datasets. To compare against them, we then picked algorithms with different schemes: XGBoost and CatBoost, which are both gradient-boosting methods based on decision trees, and random forest (RF) and AdaBoost, which are both ensemble methods; each is among the most popular methods of its kind. We therefore settled on these seven ML methods, making our results more reliable by analysing the performance of different schemes.
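A sketch of how these seven classifiers might be instantiated is given below; default hyperparameters are shown for brevity, and xgboost and catboost are third-party packages:

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from xgboost import XGBClassifier        # pip install xgboost
from catboost import CatBoostClassifier  # pip install catboost

# probability=True lets the SVM expose predict_proba for uncertainty sampling.
classifiers = {
    "SVM": SVC(probability=True),
    "KNN": KNeighborsClassifier(),
    "NB": MultinomialNB(),
    "XGBoost": XGBClassifier(),
    "CatBoost": CatBoostClassifier(verbose=0),
    "Random forest": RandomForestClassifier(),
    "AdaBoost": AdaBoostClassifier(),
}
```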

4. Results and Discussion

The results (Tables 2–13) show that, on the DDI dataset, the best accuracy is achieved when BERT is used for feature extraction with an SVM classifier inside an AL framework built on the MRD query strategy.
Almost all ML methods perform well except the KNN algorithm. Furthermore, in general, AL algorithms perform slightly better than passive learning algorithms.
The HoC dataset shows a much clearer difference between the methods applied. In general, AL performs better than passive learning. Among the ML algorithms, XGBoost and CatBoost perform relatively better than the others, and among the query strategies, margin performs best overall.
The following are answers to our research questions.
  • Summarising all result tables, CatBoost performs better than the other classifiers in most situations.
  • In general, LC and margin perform better than the other query strategies.
  • Overall, AL performs better and is therefore recommended over passive learning.
In addition to the above results, we also notice that CatBoost performs stably in every situation, whereas the other classifiers sometimes perform poorly. Judging between LC and margin is challenging, since they perform similarly in almost all cases.
Regarding the first result: CatBoost, as described in the methodology, is a gradient-boosting method based on decision trees. This structure gives CatBoost more chances to recover from errors during execution, since each later tree corrects the errors made by the previous trees. At the same time, CatBoost's boosting scheme is modified to be more efficient than that of other gradient-boosting algorithms, such as XGBoost, which makes CatBoost stable under hyperparameter changes, especially with extensive data. All these advantages give CatBoost a better overall ability to perform well in our study. By contrast, algorithms such as AdaBoost and SVM are very sensitive to correlations within the data, which leads them to make more mistakes during training and prediction.
Regarding the second result: both LC and margin belong to uncertainty sampling, which measures the model's uncertainty about each sample to decide the query order, so the two can be considered similar schemes. Uncertainty sampling was invented to reduce classification error, which makes these strategies better at reducing classification error than the other query strategies, and that is also what our study aims for. IDD and MRD, by contrast, focus more on the individual sample when deciding its value. With efficient pre-computation they could outperform uncertainty sampling; however, we could not develop efficiently pre-computed IDD and MRD algorithms within the time available for implementation, so IDD and MRD did not perform better than LC and margin.
Regarding the third result: the main reason we can achieve better performance with less training data using AL than using passive learning is the imbalance of the dataset. More data does not mean more accuracy for text classification: even when all the valuable data are included, some training iterations will encounter uninformative data that immediately causes errors and reduces classification accuracy. The most important thing for classification is therefore not the size of the training pool but the ability to find the data that are valuable enough to train the classifier. AL algorithms were invented to achieve this goal by applying different query strategies, which is why AL can perform better than passive learning. A graphical representation of the results is shown in Figure 4.

5. Conclusions

We conducted a simulated study to compare different AL algorithms on clinical tasks. Our results showed that most AL algorithms outperformed the passive learning method when we assume an equal annotation cost for each sentence. However, the annotation savings from AL were reduced when the length of the sentences was considered. We suggest that the effectiveness of AL for clinical NER be further evaluated by developing AL-enabled annotation systems and conducting user studies.
We conclude that AL is preferable to passive learning for classifying clinical datasets with unlabelled data. Compared with the techniques currently used to generate healthcare outcomes, it provides at least the same accuracy with a smaller training dataset, which significantly decreases the cost of collecting and labelling data. We also observed that CatBoost performs very well when combined with an uncertainty-sampling AL framework, which gives practitioners more options when applying AL to text classification. Furthermore, the required domain knowledge is not hard to acquire: AL is still a part of ML, so the only prerequisite is ML itself, and once that is mastered, the rest is not hard to implement.

Author Contributions

Conceptualisation, U.N. and M.K.; methodology, U.N. and M.K.; writing—original draft preparation, U.N.; writing—review and editing, K.S., S.K.K. and M.A.M.; project administration, M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The code and data are available from https://github.com/usmaann (accessed on 14 March 2021).

Acknowledgments

The authors acknowledge and thank Junchi Li and Hengze Liu for their contributions to the project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nguyen, A.N.; Moore, J.; O’Dwyer, J.; Philpot, S. Automated cancer registry notifications: Validation of a medical text analytics system for identifying patients with cancer from a state-wide pathology repository. AMIA Annu. Symp. Proc. 2016, 2016, 964. [Google Scholar] [PubMed]
  2. Koopman, B.; Zuccon, G.; Wagholikar, A.; Chu, K.; O’Dwyer, J.; Nguyen, A.; Keijzers, G. Automated reconciliation of radiology reports and discharge summaries. AMIA Annu. Symp. Proc. 2015, 2015, 775. [Google Scholar]
  3. Zuccon, G.; Koopman, B.; Nguyen, A.; Vickers, D.; Butt, L. Exploiting medical hierarchies for concept-based information retrieval. In Proceedings of the Seventeenth Australasian Document Computing Symposium, Dunedin, New Zealand, 5–6 December 2012; pp. 111–114. [Google Scholar]
  4. Ohno-Machado, L.; Nadkarni, P.; Johnson, K. Natural language processing: Algorithms and tools to extract computable information from EHRs and from the biomedical literature. J. Am. Med. Inform. Assoc. 2013, 20, 805. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Nadkarni, P.M.; Ohno-Machado, L.; Chapman, W.W. Natural language processing: An introduction. J. Am. Med. Inform. Assoc. 2011, 18, 544–551. [Google Scholar] [CrossRef] [Green Version]
  6. Meystre, S.M.; Savova, G.K.; Kipper-Schuler, K.C.; Hurdle, J.F. Extracting information from textual documents in the electronic health record: A review of recent research. Yearb. Med. Inform. 2008, 17, 128–144. [Google Scholar]
  7. Hu, Z.; Zhao, Y.; Khushi, M. A Survey of Forex and Stock Price Prediction Using Deep Learning. Appl. Syst. Innov. 2021, 4, 9. [Google Scholar] [CrossRef]
  8. Jaggi, M.; Mandal, P.; Narang, S.; Naseem, U.; Khushi, M. Text Mining of Stocktwits Data for Predicting Stock Prices. Appl. Syst. Innov. 2021, 4, 13. [Google Scholar] [CrossRef]
  9. Singh, J.; Khushi, M. Feature Learning for Stock Price Prediction Shows a Significant Role of Analyst Rating. Appl. Syst. Innov. 2021, 4, 17. [Google Scholar]
  10. Mukherjee, M.; Khushi, M. SMOTE-ENC: A novel SMOTE-based method to generate synthetic data for nominal and continuous features. Appl. Syst. Innov. 2021, 4, 18. [Google Scholar]
  11. Uzuner, Ö.; Goldstein, I.; Luo, Y.; Kohane, I. Identifying patient smoking status from medical discharge records. J. Am. Med. Inform. Assoc. 2008, 15, 14–24. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Suominen, H.; Salanterä, S.; Velupillai, S.; Chapman, W.W.; Savova, G.; Elhadad, N.; Pradhan, S.; South, B.R.; Mowery, D.L.; Jones, G.J.; et al. Overview of the ShARe/CLEF eHealth evaluation lab 2013. In International Conference of the Cross-Language Evaluation Forum for European Languages; Springer: Berlin, Germany, 2013; pp. 212–231. [Google Scholar]
  13. Gurulingappa, H. Mining the Medical and Patent Literature to Support Healthcare and Pharmacovigilance. Ph.D. Thesis, Universitäts-und Landesbibliothek Bonn, Bonn, Germany, 2012. [Google Scholar]
  14. Settles, B. Active Learning; Synthesis Lectures on Artificial Intelligence and Machine Learning, Volume 6; Morgan & Claypool: San Rafael, CA, USA, 2012. [Google Scholar]
  15. Garla, V.; Taylor, C.; Brandt, C. Semi-supervised clinical text classification with Laplacian SVMs: An application to cancer case management. J. Biomed. Inform. 2013, 46, 869–875. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Kholghi, M. Active Learning for Concept Extraction from Clinical Free Text. Ph.D. Thesis, Queensland University of Technology, Brisbane, Australia, 2017. [Google Scholar]
  17. Leser, U.; Hakenberg, J. What makes a gene name? Named entity recognition in the biomedical literature. Briefings Bioinform. 2005, 6, 357–369. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Cho, H.; Lee, H. Biomedical named entity recognition using deep neural networks with contextual information. BMC Bioinform. 2019, 20, 1–11. [Google Scholar] [CrossRef]
  19. Kumar, P.; Gupta, A. Active learning query strategies for classification, regression, and clustering: A survey. J. Comput. Sci. Technol. 2020, 35, 913–945. [Google Scholar] [CrossRef]
  20. Carvallo, A.; Parra, D.; Lobel, H.; Soto, A. Automatic document screening of medical literature using word and text embeddings in an active learning setting. Scientometrics 2020, 125, 3047–3084. [Google Scholar] [CrossRef]
  21. Cote, R.A.; Robboy, S. Progress in medical information management: Systematized Nomenclature of Medicine (SNOMED). JAMA 1980, 243, 756–762. [Google Scholar] [CrossRef] [PubMed]
  22. Lindberg, D.A.; Humphreys, B.L.; McCray, A.T. The unified medical language system. Methods Inf. Med. 1993, 32, 281. [Google Scholar]
  23. Bashyam, V.; Divita, G.; Bennett, D.B.; Browne, A.C.; Taira, R.K. A normalized lexical lookup approach to identifying UMLS concepts in free text. Stud. Health Technol. Inform. 2007, 129, 545. [Google Scholar]
  24. Spasić, I.; Sarafraz, F.; Keane, J.A.; Nenadić, G. Medication information extraction with linguistic pattern matching and semantic rules. J. Am. Med. Inform. Assoc. 2010, 17, 532–535. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Thapa, S.; Adhikari, S.; Naseem, U.; Singh, P.; Bharathy, G.; Prasad, M. Detecting Alzheimer’s Disease by Exploiting Linguistic Information from Nepali Transcript. In Proceedings of the International Conference on Neural Information Processing, Bangkok, Thailand, 17 November 2020; Springer: Berlin, Germany, 2020; pp. 176–184. [Google Scholar]
  26. Hamon, T.; Grabar, N. Linguistic approach for identification of medication names and related information in clinical narratives. J. Am. Med. Inform. Assoc. 2010, 17, 549–554. [Google Scholar] [CrossRef] [Green Version]
  27. Mack, R.; Mukherjea, S.; Soffer, A.; Uramoto, N.; Brown, E.; Coden, A.; Cooper, J.; Inokuchi, A.; Iyer, B.; Mass, Y.; et al. Text analytics for life science using the unstructured information management architecture. IBM Syst. J. 2004, 43, 490–515. [Google Scholar] [CrossRef]
  28. Esuli, A.; Marcheggiani, D.; Sebastiani, F. An enhanced CRFs-based system for information extraction from radiology reports. J. Biomed. Inform. 2013, 46, 425–435. [Google Scholar] [CrossRef]
  29. Qazi, A.; Bhowmik, C.; Hussain, F.; Yang, S.; Naseem, U.; Adebayo, A.A.; Gumaei, A.; Al-Rakhami, M. Analyzing the Public Opinion as a Guide for Renewable-Energy Status in Malaysia: A Case Study. IEEE Trans. Eng. Manag. 2021, 1–15. [Google Scholar] [CrossRef]
  30. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 1995. [Google Scholar]
  31. Lafferty, J.; McCallum, A.; Pereira, F.C. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the 18th International Conference on Machine Learning 2001 (ICML 2001), San Francisco, CA, USA, 28 June–1 July 2001. [Google Scholar]
  32. Naseem, U.; Khushi, M.; Khan, S.K.; Waheed, N.; Mir, A.; Qazi, A.; Alshammari, B.; Poon, S.K. Diabetic Retinopathy Detection Using Multi-layer Neural Networks and Split Attention with Focal Loss. In Proceedings of the International Conference on Neural Information Processing, Bangkok, Thailand, 17 November 2020; Springer: Berlin, Germany, 2020; pp. 26–37. [Google Scholar]
  33. Gan, H.; Li, Z.; Wu, W.; Luo, Z.; Huang, R. Safety-aware graph-based semi-supervised learning. Expert Syst. Appl. 2018, 107, 243–254. [Google Scholar] [CrossRef]
  34. Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef]
  35. Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient estimation of word representations in vector space. arXiv 2013, arXiv:1301.3781. [Google Scholar]
  36. Carreras, X.; Màrquez, L. Boosting Trees for Anti-Spam Email Filtering. arXiv 2001, arXiv:cs/0109015. [Google Scholar]
  37. Naseem, U.; Razzak, I.; Eklund, P.; Musial, K. Towards Improved Deep Contextual Embedding for the identification of Irony and Sarcasm. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–7. [Google Scholar]
  38. Hammouda, K.M.; Kamel, M.S. Efficient Phrase-Based Document Indexing for Web Document Clustering. IEEE Trans. Knowl. Data Eng. 2004, 16, 1279–1296. [Google Scholar] [CrossRef]
  39. Naseem, U.; Khan, S.K.; Razzak, I.; Hameed, I.A. Hybrid Words Representation for Airlines Sentiment Analysis. In AI 2019: Advances in Artificial Intelligence; Liu, J., Bailey, J., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 381–392. [Google Scholar]
  40. Naseem, U.; Razzak, I.; Musial, K.; Imran, M. Transformer based deep intelligent contextual embedding for twitter sentiment analysis. Future Gener. Comput. Syst. 2020, 113, 58–69. [Google Scholar] [CrossRef]
  41. Naseem, U.; Razzak, I.; Khushi, M.; Eklund, P.W.; Kim, J. COVIDSenti: A Large-Scale Benchmark Twitter Data Set for COVID-19 Sentiment Analysis. IEEE Trans. Comput. Soc. Syst. 2021, 1–13. [Google Scholar] [CrossRef]
  42. Naseem, U.; Khan, S.K.; Farasat, M.; Ali, F. Abusive Language Detection: A Comprehensive Review. Indian J. Sci. Technol. 2019, 12, 1–13. [Google Scholar] [CrossRef]
  43. Naseem, U.; Razzak, I.; Hameed, I.A. Deep Context-Aware Embedding for Abusive and Hate Speech detection on Twitter. Aust. J. Intell. Inf. Process. Syst. 2019, 15, 69–76. [Google Scholar]
  44. Naseem, U.; Musial, K. Dice: Deep intelligent contextual embedding for twitter sentiment analysis. In Proceedings of the 2019 International Conference on Document Analysis and Recognition (ICDAR), Sydney, NSW, Australia, 20–25 September 2019; pp. 953–958. [Google Scholar]
  45. Gupta, V.; Lehal, G. A Survey of Text Mining Techniques and Applications. J. Emerg. Technol. Web Intell. 2009, 1. [Google Scholar] [CrossRef]
  46. Aggarwal, C.C.; Reddy, C.K. Data Clustering: Algorithms and Applications; CRC Prints: Boca Raton, FL, USA, 2013. [Google Scholar]
  47. Naseem, U.; Khushi, M.; Reddy, V.; Rajendran, S.; Razzak, I.; Kim, J. BioALBERT: A Simple and Effective Pre-trained Language Model for Biomedical Named Entity Recognition. arXiv 2020, arXiv:2009.09223. [Google Scholar]
  48. Naseem, U.; Musial, K.; Eklund, P.; Prasad, M. Biomedical Named-Entity Recognition by Hierarchically Fusing BioBERT Representations and Deep Contextual-Level Word-Embedding. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  49. Naseem, U.; Razzak, I.; Eklund, P.W. A survey of pre-processing techniques to improve short-text quality: A case study on hate speech detection on twitter. Multimed. Tools Appl. 2020, 1–28. [Google Scholar] [CrossRef]
  50. Naseem, U.; Razzak, I.; Khan, S.K.; Prasad, M. A Comprehensive Survey on Word Representation Models: From Classical to State-Of-The-Art Word Representation Language Models. arXiv 2020, arXiv:2010.15036. [Google Scholar]
  51. Yao, L.; Liu, H.; Liu, Y.; Li, X.; Anwar, M. Biomedical Named Entity Recognition based on Deep Neutral Network. Int. J. Hybrid Inf. Technol. 2015, 8, 279–288. [Google Scholar] [CrossRef]
  52. Li, L.; Jin, L.; Jiang, Y.; Huang, D. Recognizing Biomedical Named Entities Based on the Sentence Vector/Twin Word Embeddings Conditioned Bidirectional LSTM. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data; Springer: Cham, Switzerland, 2016. [Google Scholar]
  53. Zeng, D.; Sun, C.; Lin, L.; Liu, B. LSTM-CRF for Drug-Named Entity Recognition. Entropy 2017, 19, 283. [Google Scholar] [CrossRef] [Green Version]
  54. Bahdanau, D.; Cho, K.; Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv 2014, arXiv:1409.0473. [Google Scholar]
  55. Luo, L.; Yang, Z.; Yang, P.; Zhang, Y.; Wang, L.; Lin, H.; Wang, J. An attention-based BiLSTM-CRF approach to document-level chemical named entity recognition. Bioinformatics 2018, 34, 1381–1388. [Google Scholar] [CrossRef] [Green Version]
  56. Jin, Q.; Dhingra, B.; Cohen, W.W.; Lu, X. Probing Biomedical Embeddings from Language Models. arXiv 2019, arXiv:1904.02181. [Google Scholar]
  57. Zhu, H.; Paschalidis, I.C.; Tahmasebi, A.M. Clinical Concept Extraction with Contextual Word Embedding. arXiv 2018, arXiv:1810.10566. [Google Scholar]
  58. Beltagy, I.; Lo, K.; Cohan, A. SciBERT: A Pretrained Language Model for Scientific Text. arXiv 2019, arXiv:1903.10676. [Google Scholar]
  59. Khan, S.K.; Farasat, M.; Naseem, U.; Ali, F. Performance evaluation of next-generation wireless (5G) UAV relay. Wirel. Pers. Commun. 2020, 113, 945–960. [Google Scholar] [CrossRef]
  60. Khan, S.K.; Naseem, U.; Siraj, H.; Razzak, I.; Imran, M. The role of UAVs and mmWave in 5G: Recent advances, and Challenges. Trans. Emerg. Telecommun. Technol. 2020, e4241. [Google Scholar] [CrossRef]
  61. Khan, S.K.; Naseem, U.; Sattar, A.; Waheed, N.; Mir, A.; Qazi, A.; Ismail, M. UAV-aided 5G Network in Suburban, Urban, Dense Urban, and High-rise Urban Environments. In Proceedings of the 2020 IEEE 19th International Symposium on Network Computing and Applications (NCA), Cambridge, MA, USA, 24–27 November 2020; pp. 1–4. [Google Scholar]
  62. Khan, S.K.; Farasat, M.; Naseem, U.; Ali, F. Link-level Performance Modelling for Next-Generation UAV Relay with Millimetre-Wave Simultaneously in Access and Backhaul. Indian J. Sci. Technol. 2019, 12, 1–9. [Google Scholar]
  63. Si, Y.; Wang, J.; Xu, H.; Roberts, K. Enhancing clinical concept extraction with contextual embeddings. J. Am. Med. Inform. Assoc. 2019, 26, 1297–1304. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. Lee, J.; Yoon, W.; Kim, S.; Kim, D.; Kim, S.; So, C.H.; Kang, J. BioBERT: A pre-trained biomedical language representation model for biomedical text mining. arXiv 2019, arXiv:1901.08746. [Google Scholar] [CrossRef] [PubMed]
  65. Peng, Y.; Yan, S.; Lu, Z. Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets. arXiv 2019, arXiv:1906.05474. [Google Scholar]
  66. Chen, Y.; Mani, S.; Xu, H. Applying active learning to assertion classification of concepts in clinical text. J. Biomed. Inform. 2012, 45, 265–272. [Google Scholar] [CrossRef] [Green Version]
  67. Boström, H.; Dalianis, H. De-identifying health records by means of active learning. Recall (micro) 2012, 97, 90–97. [Google Scholar]
  68. Figueroa, R.L.; Zeng-Treitler, Q.; Ngo, L.H.; Goryachev, S.; Wiechmann, E.P. Active learning for clinical text classification: Is it better than random sampling? J. Am. Med. Inform. Assoc. 2012, 19, 809–816. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  69. Chen, Y.; Lasko, T.A.; Mei, Q.; Denny, J.C.; Xu, H. A study of active learning methods for named entity recognition in clinical text. J. Biomed. Inform. 2015, 58, 11–18. [Google Scholar] [CrossRef] [Green Version]
  70. Rosales, R.; Krishnamurthy, P.; Rao, R.B. Semi-supervised active learning for modeling medical concepts from free text. In Proceedings of the Sixth International Conference on Machine Learning and Applications (ICMLA 2007), Cincinnati, OH, USA, 13–15 December 2007; pp. 530–536. [Google Scholar]
  71. Herrero-Zazo, M.; Segura-Bedmar, I.; Martínez, P.; Declerck, T. The DDI corpus: An annotated corpus with pharmacological substances and drug–drug interactions. J. Biomed. Inform. 2013, 46, 914–920. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  72. Krallinger, M.; Rabal, O.; Akhondi, S.A.; Pérez, M.P.; Santamaría, J.; Rodríguez, G. Overview of the BioCreative VI chemical-protein interaction Track. In Proceedings of the Sixth BioCreative Challenge Evaluation Workshop, Bethesda, MD, USA, 18–20 October 2017; Volume 1, pp. 141–146. [Google Scholar]
  73. Baker, S.; Silins, I.; Guo, Y.; Ali, I.; Högberg, J.; Stenius, U.; Korhonen, A. Automatic semantic classification of scientific literature according to the hallmarks of cancer. Bioinformatics 2016, 32, 432–440. [Google Scholar] [CrossRef] [Green Version]
Figure 1. An overview of supervised machine learning as used in active learning (AL).
Figure 2. Schematic diagram of AL as an iterative process that helps label the raw data.
Figure 3. Data analysis of the datasets used in this study.
Figure 4. Graphical representation of results.
Table 1. Datasets used.

| Dataset | Task |
| --- | --- |
| DDI | Relation Extraction |
| ChemProt | Relation Extraction |
| HoC | Document Classification |
Table 2. Comparison of results with and without AL methods (TF-IDF) for the DDI dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | Accuracy | 82.17 | 82.92 | 81.88 | 81.44 | 81.79 | 82.49 |
| SVM | F1 | 87.08 | 89.75 | 86.9 | 87 | 86.71 | 88.73 |
| NB | Accuracy | 81.08 | 82.99 | 83.04 | 82.61 | 83.01 | 81.84 |
| NB | F1 | 86.34 | 90.69 | 90.7 | 89.25 | 90.71 | 88.78 |
| KNN | Accuracy | 64.92 | 74.99 | 68.84 | 67.45 | 69.87 | 70.6 |
| KNN | F1 | 61.42 | 75.71 | 66.78 | 64.91 | 68.72 | 69.63 |
| XGBoost | Accuracy | 82.87 | 82.62 | 82.85 | 79.93 | 83.06 | 82.07 |
| XGBoost | F1 | 89.82 | 88.56 | 88.63 | 84.58 | 89.03 | 88.05 |
| Random forest | Accuracy | 81.46 | 81.03 | 82.87 | 81.6 | 81.27 | 81.44 |
| Random forest | F1 | 84.94 | 85.65 | 87.45 | 86.77 | 85.86 | 86.73 |
| AdaBoost | Accuracy | 78.11 | 83.01 | 82.83 | 80.56 | 82.9 | 82.38 |
| AdaBoost | F1 | 81.31 | 82.1 | 90.31 | 86.32 | 90.53 | 89.78 |
| CatBoost | Accuracy | 81.18 | 90.71 | 91.43 | 89.93 | 90.5 | 89 |
| CatBoost | F1 | 86.01 | 87.4 | 90.8 | 89 | 90.1 | 90.21 |
Table 3. Comparison of results with and without AL methods (TF-IDF) for the ChemProt dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | Accuracy | 77.40 | 79.30 | 81.88 | 81.44 | 81.79 | 79.12 |
| SVM | F1 | 80.83 | 86.21 | 86.9 | 87 | 86.71 | 85.45 |
| NB | Accuracy | 79.57 | 79.59 | 83.04 | 82.16 | 83.01 | 79.61 |
| NB | F1 | 87.7 | 88.64 | 90.7 | 89.25 | 90.71 | 88.56 |
| KNN | Accuracy | 60.68 | 65.68 | 68.84 | 67.45 | 69.87 | 58.57 |
| KNN | F1 | 57.31 | 64.42 | 66.78 | 64.91 | 68.72 | 54.43 |
| XGBoost | Accuracy | 78.92 | 78.46 | 78.99 | 78.81 | 79.11 | 78.91 |
| XGBoost | F1 | 84.36 | 83.38 | 83.82 | 84.26 | 83.86 | 84.64 |
| Random forest | Accuracy | 78.85 | 78.32 | 78.55 | 78.5 | 78.6 | 78.58 |
| Random forest | F1 | 83.81 | 84.28 | 83.04 | 84.42 | 83.49 | 84.39 |
| AdaBoost | Accuracy | 76.46 | 79.38 | 77.69 | 77.83 | 75.51 | 78.46 |
| AdaBoost | F1 | 82.77 | 86.63 | 82.79 | 85.41 | 81.22 | 86.62 |
| CatBoost | Accuracy | 78.92 | 78.81 | 80.5 | 84.89 | 83.5 | 82.10 |
| CatBoost | F1 | 84.36 | 83.63 | 82.80 | 82.00 | 83.10 | 82.90 |
Table 4. Comparison of results with and without AL methods (TF-IDF) for the HoC dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | Accuracy | 93.64 | 91.39 | 93.20 | 93.12 | 93.16 | 92.93 |
| SVM | F1 | 91.26 | 90.39 | 91.08 | 90.83 | 91.08 | 90.74 |
| KNN | Accuracy | 86.50 | 88.12 | 86.05 | 86.35 | 86.05 | 89.08 |
| KNN | F1 | 93.51 | 91.88 | 93.36 | 93.42 | 93.36 | 93.28 |
| Random forest | Accuracy | 93.51 | 90.35 | 92.99 | 92.79 | 92.99 | 93.12 |
| Random forest | F1 | 81.82 | 86.81 | 81.63 | 81.39 | 81.63 | 81.07 |
| CatBoost | Accuracy | 94.32 | 92.09 | 94.90 | 94.10 | 93.40 | 92.10 |
| CatBoost | F1 | 84.36 | 83.38 | 83.82 | 84.26 | 83.86 | 84.64 |
| Random forest | Accuracy | 78.85 | 78.32 | 78.55 | 78.50 | 78.60 | 78.58 |
| Random forest | F1 | 83.81 | 84.28 | 83.04 | 84.42 | 83.49 | 84.39 |
| AdaBoost | Accuracy | 76.46 | 79.38 | 77.69 | 77.83 | 75.51 | 78.46 |
| AdaBoost | F1 | 82.77 | 86.63 | 82.79 | 85.41 | 81.22 | 86.62 |
| CatBoost | Accuracy | 78.92 | 78.81 | 79.80 | 78.80 | 79.10 | 79.20 |
| CatBoost | F1 | 84.36 | 83.63 | 85.80 | 84.90 | 84.20 | 84.70 |
Table 5. Comparison of results with and without AL methods (FastText) for the DDI dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | Accuracy | 83.13 | 82.59 | 81.79 | 81.51 | 81.54 | 72.97 |
| SVM | F1 | 90.28 | 88.21 | 85.22 | 86.12 | 85.24 | 71.44 |
| NB | Accuracy | 83.01 | 82.93 | 83.15 | 83.01 | 82.99 | 83.02 |
| NB | F1 | 90.71 | 90.48 | 90.13 | 90.71 | 90.21 | 90.27 |
| KNN | Accuracy | 76.46 | 73.75 | 73.35 | 78.31 | 73.35 | 73.12 |
| KNN | F1 | 77.53 | 74.81 | 73.59 | 81.54 | 73.59 | 73.09 |
| XGBoost | Accuracy | 83.41 | 82.72 | 83.54 | 82.55 | 83.47 | 83.35 |
| XGBoost | F1 | 89.99 | 89.67 | 89.83 | 89.11 | 90.30 | 89.54 |
| Random forest | Accuracy | 77.45 | 81.53 | 81.21 | 77.69 | 79.97 | 80.66 |
| Random forest | F1 | 79.60 | 86.63 | 85.91 | 80.34 | 83.89 | 84.96 |
| AdaBoost | Accuracy | 70.58 | 66.08 | 82.47 | 90.61 | 78.07 | 78.84 |
| AdaBoost | F1 | 70.25 | 63.24 | 89.71 | 82.94 | 82.09 | 83.43 |
| CatBoost | Accuracy | 81.17 | 81.27 | 82.48 | 81.46 | 82.17 | 82.57 |
| CatBoost | F1 | 85.24 | 86.40 | 88.36 | 87.35 | 87.95 | 88.69 |
Table 6. Comparison of results with and without AL methods (FastText) for the HoC dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | F1 | 43.01 | 57.97 | 33.25 | 34.34 | 36.48 | 40.52 |
| SVM | F1 | 20.86 | 36.45 | 31.06 | 29.41 | 41.37 | 27.19 |
| KNN | F1 | 43.43 | 40.04 | 36.89 | 39.72 | 39.09 | 32.64 |
| KNN | F1 | 41.04 | 45.61 | 41.75 | 40.09 | 49.05 | 27.77 |
| Random forest | F1 | 23.47 | 28.08 | 22.87 | 26.68 | 23.95 | 22.04 |
| Random forest | F1 | 36.92 | 32.71 | 37.75 | 40.23 | 45.01 | 30.39 |
| CatBoost | F1 | 35.82 | 32.70 | 35.47 | 40.27 | 37.23 | 28.30 |
| CatBoost | F1 | 89.99 | 89.67 | 89.83 | 89.11 | 90.30 | 89.54 |
| Random forest | Accuracy | 77.45 | 81.53 | 81.21 | 77.69 | 79.97 | 80.66 |
| Random forest | F1 | 79.60 | 86.63 | 85.91 | 80.34 | 83.89 | 84.96 |
| AdaBoost | Accuracy | 70.58 | 66.08 | 82.47 | 90.61 | 78.07 | 78.84 |
| AdaBoost | F1 | 70.25 | 63.24 | 89.71 | 82.94 | 82.09 | 83.43 |
| CatBoost | Accuracy | 81.17 | 81.27 | 82.48 | 81.46 | 82.17 | 82.57 |
| CatBoost | F1 | 85.24 | 86.40 | 88.36 | 87.35 | 87.95 | 88.69 |
Table 7. Comparison of results with and without AL methods (FastText) for the ChemProt dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | Accuracy | 79.51 | 79.37 | 77.64 | 78.99 | 79.34 | 79.06 |
| SVM | F1 | 88.35 | 87.82 | 84.07 | 86.85 | 87.97 | 85.73 |
| NB | Accuracy | 79.58 | 79.55 | 79.46 | 79.58 | 79.53 | 79.54 |
| NB | F1 | 88.61 | 88.56 | 88.27 | 88.63 | 88.44 | 88.45 |
| KNN | Accuracy | 74.64 | 75.86 | 75.52 | 74.92 | 69.05 | 75.85 |
| KNN | F1 | 78.44 | 81.21 | 80.94 | 79.87 | 70.67 | 81.11 |
| XGBoost | Accuracy | 79.61 | 79.57 | 79.62 | 79.65 | 79.51 | 79.63 |
| XGBoost | F1 | 88.29 | 88.41 | 88.37 | 88.47 | 88.43 | 88.54 |
| Random forest | Accuracy | 67.89 | 68.41 | 76.20 | 74.21 | 76.39 | 77.73 |
| Random forest | F1 | 68.68 | 69.73 | 82.43 | 79.06 | 82.49 | 84.66 |
| AdaBoost | Accuracy | 73.69 | 68.49 | 77.13 | 74.30 | 71.35 | 76.33 |
| AdaBoost | F1 | 78.62 | 69.93 | 84.57 | 80.24 | 75.87 | 82.92 |
| CatBoost | Accuracy | 79.57 | 79.53 | 79.62 | 79.58 | 78.46 | 79.59 |
| CatBoost | F1 | 88.33 | 88.16 | 88.37 | 88.32 | 86.13 | 88.42 |
Table 8. Comparison of results with and without AL methods (BERT) for the DDI dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | Accuracy | 82.24 | 83.37 | 82.95 | 81.64 | 79.78 | 83.72 |
| SVM | F1 | 86.59 | 89.57 | 85.58 | 85.53 | 81.94 | 89.48 |
| NB | Accuracy | 62.59 | 62.70 | 67.87 | 65.51 | 66.22 | 42.15 |
| NB | F1 | 58.53 | 58.72 | 66.18 | 62.75 | 64.05 | 32.30 |
| KNN | Accuracy | 73.77 | 75.51 | 73.55 | 74.90 | 73.44 | 73.81 |
| KNN | F1 | 74.20 | 76.92 | 74.12 | 76.04 | 73.94 | 74.54 |
| XGBoost | Accuracy | 83.09 | 82.59 | 82.78 | 82.97 | 83.09 | 82.56 |
| XGBoost | F1 | 88.85 | 88.73 | 88.11 | 88.99 | 88.41 | 88.21 |
| Random forest | Accuracy | 75.89 | 78.93 | 79.43 | 78.25 | 79.90 | 79.43 |
| Random forest | F1 | 77.80 | 82.30 | 83.64 | 81.87 | 84.37 | 82.78 |
| AdaBoost | Accuracy | 82.59 | 74.85 | 81.74 | 81.93 | 81.53 | 79.76 |
| AdaBoost | F1 | 89.80 | 76.37 | 87.41 | 88.39 | 87.91 | 84.87 |
| CatBoost | Accuracy | 81.11 | 81.03 | 81.88 | 82.16 | 82.45 | 81.91 |
| CatBoost | F1 | 85.85 | 85.14 | 86.59 | 87.80 | 87.79 | 86.75 |
Table 9. Comparison of results with and without AL methods (BERT) for the HoC dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | F1 | 83.60 | 83.46 | 89.26 | 89.33 | 89.26 | 89.63 |
| SVM | F1 | 85.96 | 82.87 | 84.20 | 84.51 | 84.20 | 78.80 |
| KNN | F1 | 82.81 | 82.22 | 81.96 | 81.56 | 81.96 | 81.86 |
| KNN | F1 | 86.80 | 85.24 | 86.40 | 85.94 | 86.40 | 86.29 |
| Random forest | F1 | 83.69 | 83.43 | 83.69 | 82.40 | 83.69 | 84.17 |
| Random forest | F1 | 94.65 | 86.64 | 95.67 | 91.50 | 95.67 | 91.87 |
| CatBoost | F1 | 85.72 | 85.24 | 86.28 | 85.95 | 86.28 | 86.68 |
| CatBoost | F1 | 88.85 | 88.73 | 88.11 | 88.99 | 88.41 | 88.21 |
| Random forest | Accuracy | 75.89 | 78.93 | 79.43 | 78.25 | 79.90 | 79.43 |
| Random forest | F1 | 77.80 | 82.30 | 83.64 | 81.87 | 84.37 | 82.78 |
| AdaBoost | Accuracy | 82.59 | 74.85 | 81.74 | 81.93 | 81.53 | 79.76 |
| AdaBoost | F1 | 89.80 | 76.37 | 87.41 | 88.39 | 87.91 | 84.87 |
| CatBoost | Accuracy | 81.11 | 81.03 | 81.88 | 82.16 | 82.45 | 81.91 |
| CatBoost | F1 | 85.85 | 85.14 | 86.59 | 87.80 | 87.79 | 86.75 |
Table 10. Comparison of results with and without AL methods (BERT) for the ChemProt dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | Accuracy | 79.59 | 79.58 | 78.68 | 79.20 | 79.44 | 79.50 |
| SVM | F1 | 88.63 | 88.63 | 84.80 | 87.00 | 87.41 | 88.01 |
| NB | Accuracy | 69.71 | 67.61 | 67.45 | 67.00 | 60.44 | 55.39 |
| NB | F1 | 73.09 | 70.10 | 69.60 | 69.15 | 59.19 | 51.79 |
| KNN | Accuracy | 65.92 | 68.42 | 64.70 | 64.12 | 64.73 | 64.69 |
| KNN | F1 | 65.87 | 69.73 | 64.36 | 63.58 | 64.36 | 64.33 |
| XGBoost | Accuracy | 79.57 | 79.44 | 79.44 | 79.38 | 79.40 | 79.35 |
| XGBoost | F1 | 88.49 | 88.19 | 88.01 | 87.61 | 87.84 | 87.68 |
| Random forest | Accuracy | 76.62 | 76.11 | 76.73 | 76.18 | 77.01 | 76.26 |
| Random forest | F1 | 82.88 | 82.31 | 83.21 | 82.18 | 83.62 | 82.27 |
| AdaBoost | Accuracy | 79.17 | 68.12 | 73.86 | 71.79 | 76.62 | 74.45 |
| AdaBoost | F1 | 87.93 | 70.18 | 79.46 | 76.09 | 83.04 | 80.28 |
| CatBoost | Accuracy | 78.19 | 77.81 | 77.77 | 77.67 | 77.31 | 76.61 |
| CatBoost | F1 | 85.53 | 84.72 | 84.82 | 84.31 | 84.22 | 82.76 |
Table 11. Comparison of results with and without AL methods (ELMo) for the DDI dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | Accuracy | 83.01 | 83.01 | 83.01 | 83.01 | 83.01 | 83.01 |
| SVM | F1 | 90.71 | 90.71 | 90.71 | 90.71 | 90.71 | 90.71 |
| NB | Accuracy | 38.21 | 44.61 | 50.56 | 55.06 | 56.22 | 56.29 |
| NB | F1 | 28.45 | 36.23 | 48.32 | 47.97 | 50.28 | 49.46 |
| KNN | Accuracy | 78.43 | 79.60 | 79.33 | 80.56 | 79.12 | 80.78 |
| KNN | F1 | 79.11 | 82.54 | 81.26 | 84.26 | 82.77 | 84.38 |
| XGBoost | Accuracy | 82.38 | 83.13 | 82.17 | 82.79 | 83.16 | 82.87 |
| XGBoost | F1 | 88.23 | 90.20 | 88.65 | 90.62 | 90.66 | 89.94 |
| Random forest | Accuracy | 80.82 | 79.92 | 80.43 | 80.32 | 81.41 | 80.85 |
| Random forest | F1 | 85.31 | 84.44 | 86.21 | 85.10 | 87.06 | 86.15 |
| AdaBoost | Accuracy | 77.56 | 78.01 | 79.76 | 78.15 | 80.99 | 80.16 |
| AdaBoost | F1 | 81.32 | 82.40 | 83.53 | 82.45 | 86.53 | 85.32 |
| CatBoost | Accuracy | 80.45 | 82.42 | 81.43 | 82.97 | 83.02 | 82.90 |
| CatBoost | F1 | 87.34 | 88.71 | 89.54 | 90.35 | 90.42 | 89.34 |
Table 12. Comparison of results with and without AL methods (ELMo) for the HoC dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | F1 | 85.76 | 84.51 | 90.51 | 90.78 | 90.62 | 90.73 |
| SVM | F1 | 84.75 | 82.58 | 84.84 | 84.21 | 84.84 | 82.73 |
| KNN | F1 | 84.06 | 82.95 | 81.86 | 82.19 | 83.91 | 81.64 |
| KNN | F1 | 88.31 | 87.10 | 88.08 | 87.94 | 88.12 | 87.95 |
| Random forest | F1 | 78.45 | 78.56 | 79.94 | 74.27 | 79.94 | 79.92 |
| Random forest | F1 | 98.10 | 90.95 | 91.49 | 94.81 | 93.94 | 93.09 |
| CatBoost | F1 | 86.61 | 87.16 | 87.16 | 86.69 | 87.39 | 88.01 |
| CatBoost | F1 | 88.23 | 90.20 | 88.65 | 90.62 | 90.66 | 89.94 |
| Random forest | Accuracy | 80.82 | 79.92 | 80.43 | 80.32 | 81.41 | 80.85 |
| Random forest | F1 | 85.31 | 84.44 | 86.21 | 85.10 | 87.06 | 86.15 |
| AdaBoost | Accuracy | 77.56 | 78.01 | 79.76 | 78.15 | 80.99 | 80.16 |
| AdaBoost | F1 | 81.32 | 82.40 | 83.53 | 82.45 | 86.53 | 85.32 |
| CatBoost | Accuracy | 80.45 | 82.42 | 81.43 | 82.97 | 83.02 | 82.90 |
| CatBoost | F1 | 87.34 | 88.71 | 89.54 | 90.35 | 90.42 | 89.34 |
Table 13. Comparison of results with and without AL methods (ELMo) for the ChemProt dataset.

| Classifier | Metric | Without AL | RS | LC | IDD | Margin | MRD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | Accuracy | 79.59 | 79.59 | 79.59 | 79.59 | 79.59 | 79.59 |
| SVM | F1 | 88.63 | 88.64 | 88.64 | 88.64 | 88.64 | 88.64 |
| NB | Accuracy | 37.77 | 38.59 | 48.91 | 50.18 | 50.46 | 45.55 |
| NB | F1 | 28.23 | 29.20 | 41.98 | 42.98 | 43.42 | 37.91 |
| KNN | Accuracy | 62.52 | 64.89 | 63.57 | 63.76 | 62.34 | 67.18 |
| KNN | F1 | 60.38 | 63.78 | 62.08 | 62.64 | 60.08 | 67.35 |
| XGBoost | Accuracy | 79.83 | 79.72 | 79.96 | 79.65 | 79.81 | 79.75 |
| XGBoost | F1 | 88.00 | 87.56 | 87.26 | 87.47 | 87.27 | 87.45 |
| Random forest | Accuracy | 74.82 | 75.38 | 77.42 | 73.49 | 77.60 | 76.97 |
| Random forest | F1 | 80.22 | 80.87 | 83.68 | 77.86 | 84.28 | 84.00 |
| AdaBoost | Accuracy | 72.88 | 68.11 | 77.02 | 75.58 | 76.72 | 75.50 |
| AdaBoost | F1 | 76.40 | 69.51 | 83.92 | 81.45 | 82.68 | 76.88 |
| CatBoost | Accuracy | 76.78 | 76.23 | 78.45 | 78.34 | 78.79 | 76.94 |
| CatBoost | F1 | 82.01 | 81.92 | 84.61 | 84.92 | 85.17 | 83.05 |