

2018 IEEE 4th International Conference on Computer and Communications

Text Features Extraction based on TF-IDF Associating Semantic

Qing Liu, Jing Wang, Dehai Zhang*, Yun Yang, NaiYao Wang
Software College, Yunnan University, Kunming 650091, China
e-mail: [email protected]

Abstract—The TF-IDF (term frequency–inverse document frequency) algorithm extracts text features on the basis of word statistics. It considers only the surface forms of words that are identical across texts (for example, their ASCII representations), without considering that a word could also be represented by its synonyms. Treating words with the same or similar meanings as unrelated loses part of the information when text features are extracted. Representing words properly requires capturing the similarity among them, and this similarity has to be obtained from the meanings of the words in texts. In order to improve the accuracy of text feature extraction, this paper uses the word2vec model to train word vectors on the corpus and obtain their semantic features. After excluding words with low TF-IDF values, a density clustering algorithm is used to cluster the remaining words according to word vector similarity. As a result, similar words are grouped together and can represent one another. Experiments show that applying the TF-IDF algorithm again and constructing a VSM (vector space model) with these clusters as feature units can effectively improve the accuracy of text feature extraction.

Keywords-Text feature; TF-IDF; Word vector; Semantic features; Clustering

I. INTRODUCTION

Text feature extraction is an important step in data mining and information retrieval. It quantifies the feature words extracted from the text to represent the text information, converting an unstructured original text into structured information that a computer can recognize and process [1]; that is, it describes and replaces the text by reducing the dimensionality of the text word space and building a mathematical model of it. In the process of extracting text features, irrelevant or redundant features are deleted, and finally the important features (sentences, words, characters, etc.) are combined with their weights to reflect the information contained in the text [2].

Text characterization based on word statistics is often used to extract text features. The BOW (bag of words) [3] and TF-IDF (term frequency–inverse document frequency) [4] are the most typical models. They simplify the extraction process and are easy to understand. However, when words are extracted as text features, each word in the text is treated as a separate unit, so the semantic features of the text cannot be captured effectively.

For example, "kid" and "child" refer to the same concept in everyday life, but traditional text feature extraction treats the two words as different concepts, which loses part of the text information carried by the text features.

Extracting the semantic features of each word in a text requires measuring the context in which the word appears. Word2Vector, which trains a word vector for each word based on its context in the text, plays an important role in extracting word semantics. Word2Vector is a technique for transforming word representations into space vectors. It uses ideas from deep learning to train on the corpus, associating the context of each word and mapping the words into N-dimensional vectors [5]. In this way, the semantic features of each word can be expressed and recognized by the computer.
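As a rough illustration of this idea, the sketch below trains word vectors with the gensim implementation of word2vec and queries the similarity between two words. The toy corpus, the parameter values, and the gensim 4.x API (Word2Vec, wv.similarity) are assumptions made for illustration, not details taken from this paper.

    # Minimal sketch (assumed gensim 4.x API): train word vectors on a tiny corpus
    # and check that words used in similar contexts end up with similar vectors.
    from gensim.models import Word2Vec

    # Toy corpus: each document is already tokenized into a list of words.
    sentences = [
        ["the", "kid", "plays", "in", "the", "park"],
        ["the", "child", "plays", "in", "the", "garden"],
        ["stock", "prices", "fell", "sharply", "today"],
    ]

    model = Word2Vec(
        sentences,
        vector_size=50,   # dimensionality N of the word vectors
        window=3,         # context window used to associate each word with its neighbors
        min_count=1,      # keep every word, since the toy corpus is tiny
        epochs=200,
    )

    # Cosine similarity between two word vectors; with a real corpus,
    # synonyms such as "kid" and "child" should score close to each other.
    print(model.wv.similarity("kid", "child"))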



For data that already carries semantic features, representing words requires calculating semantic similarity and storing synonyms in a unified way. Using a density clustering algorithm [6] from machine learning, words with similar meanings can be clustered. This algorithm can flexibly control the minimum word-vector distance (word similarity) within each cluster and does not require the number of clusters to be set in advance, which is very practical when the number of clusters is neither known nor strict.

II. RELATED WORKS

In 1969, Gerard Salton and McGill proposed the VSM (Vector Space Model), which converts the processing of text into operations on space vectors by mapping each document to a feature vector. The similarity among texts is then reflected by the similarity in the vector space, which makes text processing simpler [7]. Many text characterization methods based on the VSM have been proposed; popular ones include the TF-IDF (term frequency–inverse document frequency) method, the DF (document frequency) method, the MI (mutual information) method, and the IG (information gain) method [8].

At present, there is a large body of research on text feature extraction. Yonghe Lu et al. [9] used a feature pool to preselect feature words, introduced a genetic algorithm to encode candidate feature words, and extracted the best feature vector. With a small feature dimension this gives good extraction results, but the semantic association among words is not considered in the block-coding process. LiHong Wang [10] proposed a feature extraction method that improves the weight calculation based on word co-occurrence; compared with the traditional TF-IDF method, it performs better on short text clustering.

The above methods achieve text feature extraction based on word statistics and do not consider the semantic association between words. Hong Liang et al. [11] proposed a text feature extraction method based on deep learning. Zhou Shunxian et al. [12] proposed a context-based text characterization method for textual similarity problems. These methods consider the semantics of words in the text to a certain degree, but the former has high requirements on data quality and is limited for larger data sets, while the latter uses a "centroid" as the text feature unit: because of the limitation of its algorithm, the number of "centroids" must be set in advance, and the minimum word-meaning distance cannot be flexibly set to an ideal range.

A good text feature extraction method therefore needs not only the necessary word statistics on the text and a word statistical model such as TF-IDF, but also the semantic features of the words in the text.

III. CONTRIBUTIONS

In this paper, we combine the TF-IDF statistical model with the word2vec model and a density clustering algorithm to create a text feature extraction method that takes into account both word statistics and the semantic features of the text. The word2vec model is used to train word vectors on the text, and a new set of feature units suitable for the VSM is generated by clustering those word vectors and applying the TF-IDF algorithm, which finally reflects the text features better. Example 1 illustrates the application of the proposed method to text recognition.

Example 1: The texts "monkeys like to eat fruits" and "gorillas are interested in plants" should be very similar. When TF-IDF is used for text characterization over the word space, however, their nonzero feature words do not intersect at all, so no similarity can be found. With the word-vector processing and cluster substitution of our method, after word2vec training on the corpus, the meanings of "monkey" and "gorilla", "like" and "interest", "fruit" and "plant" turn out to be very similar, so these near-synonyms are grouped into unified clusters. Each word is then replaced by the cluster it belongs to, so that "monkey" in text A and "gorilla" in text B can stand in for each other, which solves the above problem.
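To make Example 1 concrete, the short sketch below compares the two texts once on raw words and once after each word has been mapped to the cluster it belongs to. The hand-written cluster dictionary and the Jaccard similarity used here are illustrative assumptions, not part of the paper's algorithm (in the paper the clusters come from density clustering of word2vec vectors).

    # Illustrative sketch: word-level features of the two example texts do not
    # overlap, but cluster-level features do, once near-synonyms share a cluster.
    text_a = ["monkey", "like", "eat", "fruit"]
    text_b = ["gorilla", "interest", "plant"]

    # Hypothetical clustering result standing in for the density clustering step.
    word_to_cluster = {
        "monkey": "c1", "gorilla": "c1",
        "like": "c2", "interest": "c2",
        "fruit": "c3", "plant": "c3",
        "eat": "c4",
    }

    def jaccard(x, y):
        """Jaccard similarity between two sets of feature units."""
        x, y = set(x), set(y)
        return len(x & y) / len(x | y)

    print(jaccard(text_a, text_b))                        # 0.0: no shared words
    print(jaccard([word_to_cluster[w] for w in text_a],
                  [word_to_cluster[w] for w in text_b]))  # 0.75: shared clusters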
Returning to text feature extraction, our job is to find, through basic word statistics, the words in the text that are worth exploring further, and then, by applying various algorithms to the corpus, to uncover the other text features implied by these words. The source of these features is mainly the contextual relationship among the words in the corpus.

IV. TEXT FEATURE EXTRACTION BASED ON WORD VECTOR CLUSTERING

The method of extracting text features based on word vector clustering mainly includes the following steps: managing the data and training word vectors; excluding words whose TF-IDF in the text is too low; using a density clustering algorithm to partition the remaining words by word vector distance; obtaining the virtual words that can replace any element in their cluster, together with their TF-IDF; and using the TF-IDF of each virtual word as the feature unit to establish the vector space model. These steps are described in detail below.

In this paper, w denotes a word, W denotes a collection of words, v denotes a word vector, V denotes a collection of word vectors, w^t denotes the word w in the text t, and v^t denotes the word vector corresponding to w^t.

A. Managing Data and Training Word Vectors

There are two main types of data in this paper: the target texts whose TF-IDF features need to be obtained, and the corpus texts that assist in calculating the TF-IDF values. After all texts are cleaned, the words in all texts are separated according to the specified delimiters (for an English corpus no extra word segmentation is needed), and the word2vec tool is used to train the word vectors. In order to improve the accuracy of the training result, all the contents of the corpus are placed in the same text for training. The same word occurring in different texts is therefore given the same initial word vector during training, which avoids the case where one word ends up with multiple word vectors.
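A minimal sketch of this step, assuming the gensim library and its LineSentence helper: all corpus content is concatenated into one file so that every occurrence of a word contributes to a single vector. The file names and parameter values are placeholders, not values reported by the paper.

    # Sketch (assumed gensim 4.x API): train one word2vec model over the merged
    # corpus so that each distinct word gets exactly one vector.
    from gensim.models import Word2Vec
    from gensim.models.word2vec import LineSentence

    # "merged_corpus.txt" is a placeholder: one big file, one whitespace-separated
    # sentence per line, produced by the cleaning/segmentation step above.
    sentences = LineSentence("merged_corpus.txt")

    model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)
    model.wv.save("word_vectors.kv")  # keep only the vectors for the later steps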
B. Excluding Words with Low TF-IDF

Clustering the vectors of all words would include many words that have no effect on text feature extraction. Although this does not hurt the accuracy of feature extraction, it is computationally expensive. Therefore, after obtaining the word vectors of all words, we calculate the TF-IDF of each word in the target text, set a threshold ε, and exclude all words whose TF-IDF is less than ε.

In equation (1), TF(w) denotes the frequency of the word w in the text, count(w) and count(w_j^t) respectively denote the number of occurrences of w in the target text and in the corpus sample t_j, and m denotes the number of corpus samples that contain the word w.

    TF(w) = \frac{count(w)}{\sum_{j=1}^{m} count(w_j^t)}    (1)

In equation (2), IDF(w) denotes the inverse document frequency of the word w, m denotes the number of corpus samples that contain the word w, and n denotes the total number of corpus texts.

    IDF(w) = \ln\left(\frac{n}{m+1}\right)    (2)

Equation (3) calculates the TF-IDF value of the word w according to equations (1) and (2).

    TFIDF(w) = \ln\left(\frac{n}{m+1}\right) \cdot \frac{count(w)}{\sum_{j=1}^{m} count(w_j^t)}    (3)

Equation (4) gives the set W_final of all words remaining after the low TF-IDF words have been excluded, where W_s denotes the set of words w_s whose TF-IDF is smaller than ε.

    W_{final} = \bigcup_{j=1}^{m} w_j^t \setminus \{\, w_s \mid TFIDF(w_s) < \varepsilon \,\}    (4)

A question may arise here: why not first exclude the words whose TF-IDF is lower than ε and then train the word vectors; would that not reduce the amount of computation even further? According to the mechanism of the word2vec algorithm, it must consider all words and their contexts in order to train the word vectors accurately, so excluding seemingly unrelated words would affect the training results.
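The sketch below implements equations (1)-(4) directly on tokenized documents. The function and variable names, and the use of plain Python dictionaries rather than any particular library, are assumptions made for illustration.

    import math

    def tfidf(word, target_tokens, corpus_docs):
        """TF-IDF of `word` as defined by equations (1)-(3):
        TF uses the corpus samples that contain the word, IDF uses ln(n / (m + 1))."""
        containing = [doc for doc in corpus_docs if word in doc]   # samples containing w
        m = len(containing)
        n = len(corpus_docs)                                       # total number of corpus texts
        if m == 0:
            return 0.0
        tf = target_tokens.count(word) / sum(doc.count(word) for doc in containing)
        idf = math.log(n / (m + 1))
        return tf * idf

    def filter_low_tfidf(target_tokens, corpus_docs, eps):
        """Equation (4): keep only the words whose TF-IDF is at least eps."""
        scores = {w: tfidf(w, target_tokens, corpus_docs) for w in set(target_tokens)}
        return {w for w, s in scores.items() if s >= eps}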
C. Clustering Similar Words

The preceding steps yield the words whose TF-IDF is not too low, together with their word vectors. Since the word vectors represent the semantic features of the words in the corpus, we cluster the words based on word vector distance, so that words with similar semantics end up in the same cluster.

Because a text may contain many words while each cluster often contains only a few of them (the semantic similarity within each cluster must be kept high), the number of clusters is hard to estimate before clustering. Considering the advantages of density clustering for such problems, this paper uses a density clustering algorithm to cluster the words. The density clustering algorithm is based on a pair of "neighborhood" parameters (ε, MinPts), which specify how dense the sample distribution must be. In addition, word vector density (distance) is measured by the Euclidean distance between two vectors.

According to the idea of the density clustering algorithm, we cluster the elements of the word vector set V_final, where V_final = {v_1, v_2, ..., v_n} corresponds to W_final in equation (4). First, the ε-neighborhood of each vector is found and the set of core objects Ω is determined; then a core object v_r1 is randomly selected from Ω as a seed, and all vectors density-reachable from it form a cluster c_1. The core objects contained in c_1 are then removed from Ω, that is, Ω = Ω − c_1. Another seed v_r2 is randomly selected from the updated set Ω to generate the next cluster, and this process is repeated until Ω is empty [13]. The process of clustering words is shown in "Fig. 1".

Figure 1. Process of clustering words
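This is essentially the DBSCAN procedure; a minimal sketch using scikit-learn's implementation on the retained word vectors is shown below. The parameter values mirror the ones reported later in the experiments (eps = 0.034, min_samples = 1), but the array names, the random stand-in vectors, and the scikit-learn API are assumptions for illustration.

    # Sketch: density clustering of the remaining word vectors with scikit-learn's
    # DBSCAN, using Euclidean distance as in the paper.
    import numpy as np
    from sklearn.cluster import DBSCAN

    # `words` and `vectors` are placeholders: the words kept after the TF-IDF
    # filter and their word2vec vectors, in the same order.
    words = ["monkey", "gorilla", "fruit", "plant"]
    vectors = np.random.rand(len(words), 100)  # stand-in for real word2vec vectors

    labels = DBSCAN(eps=0.034, min_samples=1, metric="euclidean").fit_predict(vectors)

    # Group words by cluster label; each cluster later becomes one "virtual word".
    clusters = {}
    for word, label in zip(words, labels):
        clusters.setdefault(label, []).append(word)
    print(clusters)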
According to the final clustering result, the average of all word vectors in each cluster can be obtained, denoted by v̄, and its corresponding word is denoted by the virtual word w̄. The result is shown in "Fig. 2".

Figure 2. Clustering results and representation

D. Calculating the TF-IDF of the Virtual Word w̄

For the feature extraction of a specified text, the TF-IDF of any individual word in the text is no longer considered; instead, the TF-IDF of the virtual word w̄ is used to represent the TF-IDF feature of every word in its cluster.

In this paper, TF'(w̄), IDF'(w̄) and TFIDF'(w̄) denote the term frequency, the inverse document frequency and the term frequency–inverse document frequency of the virtual word w̄. This part assumes that the word w is included both in the target text and in the cluster c.

Equation (5) calculates the term frequency of the virtual word w̄ corresponding to cluster c, where k is the number of words in cluster c and w_j^c denotes the j-th word in cluster c.

    TF'(\bar{w}^c) = \sum_{j=1}^{k} TF(w_j^c)    (5)

Equation (6) calculates the inverse document frequency of the virtual word w̄ corresponding to cluster c, where n is the same as in equation (2), that is, the total number of corpus texts, and m' is the total number of samples to which the words in cluster c belong.

    IDF'(\bar{w}^c) = \ln\left(\frac{n}{m'+1}\right)    (6)

Equation (7) calculates the TF-IDF of the virtual word w̄ according to equations (5) and (6).

    TFIDF'(\bar{w}^c) = \ln\left(\frac{n}{m'+1}\right) \cdot \sum_{j=1}^{k} TF(w_j^c)    (7)
E. Constructing the Vector Space Model Based on the Clustering Results

Similar to the TF-IDF algorithm, this paper uses the vector space model for text characterization. The former's feature unit is the TF-IDF of each word, as shown in "Fig. 3", while the latter's feature unit is the TF-IDF of each virtual word w̄, as shown in "Fig. 4". For every word whose text features are extracted later, we no longer consider its individual TF-IDF, but instead use the TF-IDF of the virtual word w̄ corresponding to the cluster c to which it belongs, which is expressed as follows:

    TFIDF(w_j^c) = TFIDF'(\bar{w}^c)    (8)

This solves the problem of considering only those word expressions (such as ASCII codes) that are identical in all texts, while ignoring that words have other expressions, namely synonyms. So, for Example 1, it becomes possible to associate "monkey" with "gorilla" based on the meaning of the words.

    Text    w_1                w_2                ...    w_x
    T_1     TFIDF_T1(w_1)      TFIDF_T1(w_2)      ...    TFIDF_T1(w_x)
    T_2     TFIDF_T2(w_1)      TFIDF_T2(w_2)      ...    TFIDF_T2(w_x)
    ...     ...                ...                ...    ...
    T_y     TFIDF_Ty(w_1)      TFIDF_Ty(w_2)      ...    TFIDF_Ty(w_x)

Figure 3. Vector space model constructed by TF-IDF

    Text    w̄^c1 (w_1^c1 ~ w_k1^c1)    w̄^c2 (w_1^c2 ~ w_k2^c2)    ...    w̄^cx (w_1^cx ~ w_kx^cx)
    T_1     TFIDF'_T1(w̄^c1)            TFIDF'_T1(w̄^c2)            ...    TFIDF'_T1(w̄^cx)
    T_2     TFIDF'_T2(w̄^c1)            TFIDF'_T2(w̄^c2)            ...    TFIDF'_T2(w̄^cx)
    ...     ...                         ...                         ...    ...
    T_y     TFIDF'_Ty(w̄^c1)            TFIDF'_Ty(w̄^c2)            ...    TFIDF'_Ty(w̄^cx)

Figure 4. Vector space model constructed after clustering
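A compact sketch of this construction: each document becomes a vector indexed by cluster (virtual word) rather than by individual word, with the cluster-level TF-IDF of equation (7) as the entry. The cluster_tfidf helper from the previous sketch and the dictionary-based layout are assumptions.

    def build_cluster_vsm(documents, corpus_docs, clusters):
        """Build the document-by-cluster matrix of Fig. 4.

        documents: list of tokenized target texts
        corpus_docs: list of tokenized auxiliary corpus texts
        clusters: dict mapping a cluster id to its list of member words
        returns: the fixed column order and one row of cluster TF-IDF values per document
        """
        cluster_ids = sorted(clusters)  # fixed column order for the feature vector
        matrix = []
        for tokens in documents:
            row = [cluster_tfidf(clusters[cid], tokens, corpus_docs) for cid in cluster_ids]
            matrix.append(row)
        return cluster_ids, matrix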
V. EXPERIMENT AND RESULTS

A. Experimental Data

The experiment uses the Wikipedia corpus and paper abstracts crawled from CNKI as experimental data. From the CNKI website, 5,000 paper abstracts about "computer artificial intelligence" and "chemical pharmacy" were crawled, 2,500 texts for each topic. For each topic, 2,000 texts are used as target texts for calculating TF-IDF and 500 texts are used for testing. The Wikipedia corpus contains 3,271,863 texts, which are separated into different paragraphs by "\n" and stored in a single file that serves as the auxiliary corpus for the experiment.

B. Experimental Process

1) For the corpus:
a) We first segment the words in the entire corpus, keeping the character "\n" in the process.
b) We use the word2vec tool to train on the corpus with words segmented but texts not yet separated.
c) After obtaining the word vectors, we split the corpus by "\n" and save each part as a separate text.

2) For each target text:
a) Using all the independent texts in the corpus as the auxiliary corpus, we calculate the TF-IDF of all words in the target text according to equation (3), set the threshold to 0.0005, and delete the word vectors whose TF-IDF is lower than this value.
b) The neighborhood parameters (ε, MinPts) are set to ε = 0.034 and MinPts = 1, and the remaining word vectors are clustered with DBSCAN (Density-Based Spatial Clustering of Applications with Noise). MinPts is set to 1 essentially because the purpose of using this algorithm here is to obtain the closest possible words for each word, so that every word can be a core object.
c) We calculate the TF-IDF of each cluster according to equation (7).

d) We use the TF-IDF of each cluster as a text feature unit, combine these feature units into a text feature vector in a specified order, and finally construct a vector space model that represents the text features.

C. Analysis of Results

This paper measures the effect of text characterization through the classification results of the texts. We classify the texts with KNN (k-nearest neighbor) [14] during the experiment. The KNN algorithm takes new, unlabeled data as input, compares each of its features with the features of the labeled data, and then assigns the classification tag of the most similar data [15].

When classifying a test text, we retrieve the 5 most similar texts among the 5,000 texts whose features have been obtained. We stipulate that when at least three of these similar texts belong to the same class A, the test text is also classified into A, as Fig. 5 shows. In addition, when the average distance between the test text and the 5 most similar texts is greater than 1.36, the test text is considered not to belong to any class (that is, to belong to "other classes"), regardless of which category it would otherwise be assigned, as Fig. 6 shows.
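A rough sketch of this decision rule, assuming the document vectors from the VSM above and a simple Euclidean nearest-neighbor search: the 5-neighbor vote, the threshold of at least three agreeing neighbors, and the 1.36 distance cutoff follow the text, while the function names and the use of NumPy are illustrative.

    import numpy as np
    from collections import Counter

    def classify(test_vec, train_vecs, train_labels, k=5, min_votes=3, max_avg_dist=1.36):
        """KNN rule used in the paper: a majority of at least `min_votes` among the
        k nearest texts decides the class; if the average distance to the k nearest
        texts exceeds `max_avg_dist`, the text is assigned to "other"."""
        dists = np.linalg.norm(np.asarray(train_vecs) - np.asarray(test_vec), axis=1)
        nearest = np.argsort(dists)[:k]
        if dists[nearest].mean() > max_avg_dist:
            return "other"
        label, votes = Counter(train_labels[i] for i in nearest).most_common(1)[0]
        return label if votes >= min_votes else "other"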
We divided the 1,000 test texts into 5 tests, taking 200 samples each time (100 "computer artificial intelligence" texts and 100 "chemical pharmaceutical" texts). Finally, the classification results of the 5 tests are compared with the actual labels. The classification accuracy rates of the two methods are shown in "Fig. 5" and "Fig. 6".

"Fig. 5" shows the results when the tested texts are classified only into the "computer artificial intelligence" class or the "chemical pharmaceutical" class.

[Figure 5: grouped bar chart of classification accuracy per test (test 1 to test 5) for TF-IDF and the method of this paper in the two-class setting; accuracies range from about 76.5% to 87.5%.]

Figure 5. Comparison of the two test results with only two classes

"Fig. 6" shows the test results when the "other classes" option is added for texts whose average distance from the 5 most similar texts is greater than 1.36.

[Figure 6: grouped bar chart of classification accuracy per test (test 1 to test 5) for TF-IDF and the method of this paper with the "other classes" option; accuracies range from about 67.5% to 76.5%.]

Figure 6. Comparison of the two test results with the "other classes" added

Based on the second classification method above, we use the Macro-F1 value of the F1-measure [16] to evaluate the classification performance. The corresponding average values are shown in "Tab. 1" and "Tab. 2".
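For reference, the standard definitions behind this evaluation are the usual ones for per-class precision, recall and F1, averaged over the classes; they are not reproduced from the paper but are consistent with its tables (for example, the per-class Macro-F1 values in Tab. 1 average to (76.1% + 68.4%) / 2 = 72.25%, and each per-class value satisfies F1 = 2PR / (P + R)).

    P_i = \frac{TP_i}{TP_i + FP_i}, \qquad
    R_i = \frac{TP_i}{TP_i + FN_i}, \qquad
    F1_i = \frac{2 P_i R_i}{P_i + R_i}, \qquad
    \text{Macro-}F1 = \frac{1}{|C|} \sum_{i \in C} F1_i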
"Tab. 1" shows the results obtained using only the vector space model constructed by the TF-IDF algorithm.

TABLE I. TEST RESULTS FOR TF-IDF

                 computer AI    chemical pharmacy    average
    Num          98.2           82.0                 -
    Recall       75.4%          62.2%                68.8%
    Precision    76.8%          75.9%                76.4%
    Macro-F1     76.1%          68.4%                72.25%

"Tab. 2" shows the results obtained using the vector space model constructed by the method of this paper.

TABLE II. TEST RESULTS FOR THE METHOD OF THIS PAPER

                 computer AI    chemical pharmacy    average
    Num          101.0          82.2                 -
    Recall       81.0%          66.8%                73.9%
    Precision    80.2%          81.3%                80.75%
    Macro-F1     80.6%          73.3%                77.0%

For these five experiments, "Tab. 1" shows that when only the TF-IDF algorithm is used, an average of 98.2 articles are classified into "computer artificial intelligence", of which 75.4 are classified correctly, and an average of 82.0 articles are classified into "chemical pharmaceuticals", of which 62.2 are classified correctly. After applying the method of this paper, an average of 101.0 articles are classified into the "computer artificial intelligence" class, of which 81.0 are classified correctly, and an average of 82.2 articles are classified into the "chemical pharmaceuticals" class, of which 66.8 are classified correctly. The former has an average Macro-F1 value of 72.25%, while the latter reaches 77.0%.

With reference to "Fig. 5" and "Fig. 6", we can then easily see that the method proposed in this paper does improve the accuracy of text feature extraction.

VI. CONCLUSION

In this paper, text features are expressed by constructing a vector space model. The construction idea focuses on introducing the semantic features of the text. The construction method makes full use of the advantages of the word2vec tool for extracting word features and, considering that the words in a text are numerous and scattered, appropriately adopts a density clustering algorithm. The construction process uses the TF-IDF algorithm twice. The first time, the words with low TF-IDF in the text are excluded, which effectively reduces the amount of computation in the construction process.

The second time, the TF-IDF of each cluster is used as the feature unit to generate the final required feature model. Compared with the traditional TF-IDF algorithm, the accuracy of text feature extraction is higher. Compared with feature extraction methods based on domain knowledge engineering, this paper uses the ready-made word2vec model, based on deep learning ideas, to extract text features from different fields, so there is no need to spend a great deal of time constructing domain knowledge graphs to extract semantic features. Nevertheless, for words that are not included in the corpus or whose context information is blurred, this method cannot effectively extract the semantic features; extracting them still requires semantic rules and specialized domain knowledge graphs. We will improve the related work on this basis in future research.

ACKNOWLEDGEMENT

This work is supported by: (i) the National Natural Science Foundation of China (NSFC) under Grants No. 61402397, 61263043 and 61663046; (ii) the Yunnan Applied Fundamental Research Project under Grant No. 2016FB104; (iii) Yunnan Provincial Young Academic and Technical Leaders Reserve Talents under Grant No. 2017HB005; (iv) the Yunnan Provincial Innovation Team under Grant No. 2017HC012; (v) the National Nature Science Fund Project (61562093) and the Key Project of the Applied Basic Research Program of Yunnan Province (2016FA024); (vi) the MOE Key Laboratory of Educational Informatization for Nationalities (YNNU) Open Funding Project (EIN2017001).

REFERENCES

[1] V. Singh, B. Kumar, and T. Patnaik, "Feature Extraction Techniques for Handwritten Text in Various Scripts: a Survey," International Journal of Soft Computing & Engineering, vol. 3, pp. 238-241, 2013.
[2] X. Chen, S. F. Li, and Y. F. Wang, "The feature extraction of the text based on the deep learning," Proceedings of the 2014 International Conference on Network Security and Communication Engineering (NSCE 2014), Advanced Science and Industry Research Center, 2014, p. 5.
[3] Y. Tian and J. Zhang, "Improvement of Linked Data Fusion Algorithm Based on Bag of Words," Library Journal, vol. 35, pp. 17-22, 2016.
[4] W. Zhang, T. Yoshida, and X. Tang, "A comparative study of TF*IDF, LSI and multi-words for text classification," Expert Systems with Applications, vol. 38, pp. 2758-2765, 2010.
[5] M. Y. Jiang, R. Liu, and F. Wang, "Word Network Topic Model Based on Word2Vector," IEEE Xplore, pp. 241-247.
[6] Y. Kim, et al., "DBCURE-MR: An efficient density-based clustering algorithm for large data using MapReduce," Information Systems, vol. 42, pp. 162-166, 2013.
[7] S. Q. Xue and Y. J. Niu, "Research on Chinese text similarity based on vector space model," Electronic Design Engineering, 2016.
[8] L. H. Patil and M. Atique, "A novel feature selection based on information gain using WordNet," Science and Information Conference, IEEE, pp. 625-629, 2013.
[9] Y. Lu and M. Liang, "Improvement of Text Feature Extraction with Genetic Algorithm," New Technology of Library & Information Service, vol. 38, pp. 523-525, 2014.
[10] L. H. Wang, "An Improved Method of Short Text Feature Extraction Based on Words Co-Occurrence," Applied Mechanics & Materials, vols. 519-520, pp. 842-845, 2014.
[11] H. Liang, et al., "Text feature extraction based on deep learning: a review," EURASIP Journal on Wireless Communications & Networking, vol. 2017, p. 211, 2017.
[12] S. Zhou, et al., "Characteristic representation method of document based on Word2vector," Journal of Chongqing University of Posts & Telecommunications, vol. 30, pp. 272-279, 2018.
[13] Y. Chen, et al., "A fast clustering algorithm based on pruning unnecessary distance computations in DBSCAN for high-dimensional data," Pattern Recognition, vol. 83, 2018.
[14] S. Tan, "Neighbor-weighted K-nearest neighbor for unbalanced text corpus," Expert Systems with Applications, vol. 28, pp. 667-671, 2005.
[15] T. Denoeux, "A k-nearest neighbor classification rule based on Dempster-Shafer theory," IEEE Transactions on Systems, Man and Cybernetics, vol. 25, no. 5, pp. 804-813, 1995.
[16] S. He, et al., "The Capability Analysis on the Characteristic Selection Algorithm of Text Categorization Based on F1 Measure Value," IEEE Xplore, pp. 742-746.


