A Soft Computing Model for Knowledge Mining And Trifle Management


Krishnan Nallaperumal*, M. Karthikeyan**, B. Bensujin***
This work was supported in part by the CITE Department of Manonmaniam Sundaranar University, Tirunelveli. * Professor & Head, CITE, MS University, Tirunelveli. ** Research Scholar, CITE, MS University, Tirunelveli. *** Final-year M.Tech Student, CITE, MS University, Tirunelveli.

Abstract: In the aftermath of September 11, experts concluded that data mining could help prevent future terrorist attacks. Experts are also concerned that, in its zeal to apply technology to antiterrorism, the government could disrupt the crime-fighting processes of the agencies charged with finding and stopping terrorists before they act. In this paper, such a novel idea is proposed. The information, or evidence, about a terrorist and the inclined behavior of certain personalities is stored in Interactive XML sheets (I-XML); these pieces of information are called trifles. The trifles play a vital role in training the soft computing model and in pattern detection. The trifles, in the form of I-XML sheets, are given to the network for pattern detection, which will be able to justify the hypothesis, negate the hypothesis, or create a new hypothesis. The soft computing model used here is the Competitive Neural Tree (CNeT). The CNeT is a type of decision tree in which a comparison is made at each node and a decision is taken to move to the next node. In the CNeT, each node competes to match its pattern with the input arriving at the node. If any node matches the input, the pattern is recognized and the hypothesis is justified. At each stage, pattern recognition is performed against the contents of the I-XML nodes.

Keywords: CNeT, decision tree, hypothesis, I-XML, pattern recognition, soft computing, trifles.

I. INTRODUCTION

The key aspiration of this paper is to develop techniques for organizing intelligence information to sculpt terrorism threats using data mining techniques [10]. Our goal can be stated quite clearly: how well we organize existing evidence influences how well we humans are able to engender new hypotheses, as well as new evidential tests of the hypotheses we are considering. The process of organizing evidence is a decisive step in the process of discovery or investigation. A data-centered approach to organizing evidence [9] [11], which allows us to create, justify, or negate hypotheses, is adopted. This will be accomplished through the creation of intelligent agents that act as conceptual magnets that attract trifles (or atomic pieces) of evidence. This attraction is triggered in one of three ways: 1) the evidence justifies an existing hypothesis, 2) the evidence negates an existing hypothesis, or 3) the evidence suggests that a new hypothesis be formed, which in turn becomes a new conceptual magnet. We propose a novel use of data mining, information retrieval, software agent, and distributed computing technologies to enable this innovation.

In this paper, we describe an architecture that facilitates the organizing of the enormous volume of evidence that an intelligence analyst has available. An essential point in our design is that of automating the process of hypothesis generation. Humans have an amazing capacity for adaptation to new situations and are capable of thinking of scenarios that have not occurred before. We are, however, not capable of organizing such a huge amount of evidence to corroborate or negate our hypotheses, and that is where the strength of our system resides, through artificial neural networks. The network used here is the Competitive Neural Tree (CNeT) [1] [2] [3] [21]. The CNeT is also a tree network, in that well-defined search techniques for traversing the tree are available. Neural-network-based pattern detection in an N-dimensional data space consists of generating decision boundaries that can successfully distinguish the various classes in the feature space. The classification rules are extracted from these models in the form of If-Then rules, and the extracted rules are then validated for their correctness. The central piece of our design is the support for queries, both ad-hoc and long-standing, that act as magnets attracting the relevant evidence a human needs to estimate the validity of a hypothesis.

The evidence, or trifles, are enclosed in an interactive XML file and then passed to the soft computing model. The soft computing model is designed in such a way that it accepts the hypothesis, negates the hypothesis, or creates a new hypothesis. The data can be entered in any format, since it can be converted into I-XML files and accessed in any mode. The proposed soft computing model, the Competitive Neural Tree, is designed so that it accepts an I-XML file for training the network. The input to the network is also an I-XML file, which can be read by the individual nodes so that patterns are matched.

Neural tree architectures were recently introduced for pattern classification [1, 2, 4, 21] in an attempt to combine the advantages of neural networks and decision trees. By applying the decision tree methodology, one difficult decision is split into a sequence of less difficult decisions. The first decision determines which decision has to be made next, and so on. Of all the questions arranged in the tree-like structure, only some are asked in the process of determining the final answer. Neural tree architectures are decision trees with a neural network in each node. These networks perform either feature extraction from the input or make a decision. A

decision must be made at internal nodes regarding the child-node to be visited next. Terminal nodes give the final answer. Decision trees have been used extensively to perform decision making in pattern recognition [6] [7] [8]. By applying the decision tree methodology, one difficult decision can be split into a sequence of less difficult decisions. The first decision determines which decision has to be made next by indicating which node of the tree should be visited. Because of the tree structure, only some of all possible questions are asked in the process of making the final decision. In fact, the final decision is made at a terminal node of the tree, which is reached by traversing the tree from the root as indicated by the decisions made at internal nodes.

The design of decision trees is frequently performed in a top-down fashion [12]. The nodes are split during the design process according to some criterion. Existing splitting criteria include the impurity measure used in classification and regression trees [13], [18] and the mutual information measure employed by the average mutual information gain algorithm [16]. The terminal nodes are determined during the construction of the tree by freezing some of the nodes according to a stopping criterion, or by growing a large tree and performing selective backward pruning. After the final tree structure is determined, the terminal nodes are frequently assigned class labels by using a majority rule.

II TRIFLE ILLUSTRATION

Before going into the details of our technical approach, we need to define a trifle, as it forms the fundamental piece of evidence that will be used in hypothesis processing. Examples of trifles, or pieces of evidence, with which the intelligence community is currently inundated are: 1) an object identified in an image, 2) information in an intelligence report, 3) information in an open-source document (e.g., an online newspaper article), or 4) a video clip (e.g., from Doordarshan, Al-Jazeera, or CNN). These trifles could be stored in a structured database [14] [17] or found in some unstructured location (e.g., the web, Intelink). These trifles must be organized to support an existing query (e.g., Is chemical plant X producing weapons of mass destruction?) or to direct the analyst to perhaps consider a new hypothesis.

The key innovation of this project will result in a new way for intelligence analysts to interact with the intelligence data stream, and can be described as follows. Intelligence analysts are currently overwhelmed with the amount of data that they must analyze. Most of this data is never analyzed at all, potentially leaving key pieces of information out of the analysis process. Our ability to collect data will only grow, further accentuating the problem. Our approach allows the analyst to interact with the intelligence stream in a novel way. This interaction can be viewed as a triangle with the Human, Hypotheses, and Trifles at each vertex. The interaction is not isolated at each vertex of the triangle, however; the interaction with all three is the key to the analyst making informed analyses that support the intelligence decision-making process. The human also interacts with the trifles, not

arbitrarily, but as a result of the hypothesis. From this processing, trifles accumulate, forming a critical mass that may require the attention of the analyst. Perhaps a new hypothesis should be considered as a result.

III ARCHITECTURE

The Competitive Neural Tree has a structured architecture. A hierarchy of identical nodes forms an m-ary tree, as shown in Figure 1(a). Figure 1(b) shows a node in detail. Each node contains m slots s1, s2, ..., sm and a counter age that is incremented each time an example is presented to that node. The behavior of the node changes as the counter age increases. Each slot si stores a prototype pi, a counter count, and a pointer to a node. The prototypes pi ∈ P have the same length as the input vectors x. They are trained to match the patterns arriving at each node. The slot counter count is incremented each time the prototype of that slot is updated to match an example. Finally, the pointer contained in each slot may point to a child-node assigned to that slot. A NULL pointer indicates that no node has been created as a child so far; in this case, the slot is called a terminal slot, or leaf. Internal slots are slots with an assigned child-node.
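To make the architecture concrete, the node and slot structures can be sketched as follows. This is a minimal illustration in Python, assuming NumPy feature vectors; the class and field names are ours, since the paper gives no implementation.

    from dataclasses import dataclass
    from typing import List, Optional

    import numpy as np

    @dataclass
    class Slot:
        prototype: np.ndarray            # p_i, same length as the input vectors x
        count: int = 0                   # incremented each time the prototype is updated
        child: Optional["Node"] = None   # NULL pointer means terminal slot (leaf)

    @dataclass
    class Node:
        slots: List["Slot"]              # the m slots s_1, ..., s_m
        age: int = 0                     # incremented each time an example reaches the node
        frozen: bool = False             # set once a child-node has been assigned

    def make_node(m: int, dim: int, rng: np.random.Generator) -> Node:
        # Create a node with m randomly initialized prototypes of length dim.
        return Node(slots=[Slot(prototype=rng.normal(size=dim)) for _ in range(m)])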

Figure 1: (a) Tree structure; (b) detail of an individual node.

Learning at the Node-Level


In the learning phase [5] [19] [20], the tree grows starting from a single node, the root. The prototypes of each node form a minuscule competitive network. All prototypes in a node compete to attract the examples arriving at this node. These networks are trained by competitive learning. When an example x arrives at a node, all of its prototypes p1, p2, ..., pm compete to match it. The prototype closest to x is the winner: if d(x, pj) denotes the distance between x and pj, the prototype pk is the winner if d(x, pk) < d(x, pj) for all j ≠ k. The distance measure used in this paper is the squared Euclidean norm, defined as

d(x, pj) = ||x - pj||²    (1)
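For illustration, the winner search of equation (1) can be written in a few lines of NumPy; the function name is ours.

    import numpy as np

    def find_winner(x: np.ndarray, prototypes: np.ndarray) -> int:
        # Return the index k of the winning prototype, i.e. the prototype
        # minimizing the squared Euclidean distance d(x, p_j) = ||x - p_j||^2.
        distances = np.sum((prototypes - x) ** 2, axis=1)
        return int(np.argmin(distances))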

The competitive learning scheme used at the node level resembles an unsupervised learning algorithm proposed to generate crisp c-partitions of a set of unlabeled data vectors [4], [5]. According to this scheme, the winner pk is the only prototype that is attracted by the input x arriving at the node. More specifically, the winner pk is updated according to the equation

pk_new = pk_old + η (x - pk_old)    (2)

where η is the learning rate. The learning rate decreases exponentially with the age of a node according to the equation

η = η0 exp(-d · age)    (3)

where η0 is the initial value of the learning rate and d determines how fast η decreases. The update equation (2) moves the winner pk closer to the example x and therefore decreases the distance between the two. After a sequence of example presentations and updates, each prototype will respond to examples from a particular region of the input space. Each prototype pj attracts a cluster of examples Rj. The prototypes split the region of the input space that the node sees into subregions. The examples located in a subregion constitute the input for a node on the next level of the tree that may be created after the node is mature. A new node will be created only if a splitting criterion is TRUE.

Life Cycle of Nodes

Each node goes through a life cycle. The node is created and ages with the exposure to examples. When a node is mature, new nodes can be assigned as children to it. A child-node is created by copying properties of the slot that is split to the slots of the new node. More specifically, the child inherits the prototype of the parent slot. Right after the creation of a node, all its slots are identical. As soon as a child is assigned to a node, that node is frozen. Its prototypes are no longer updated, in order to keep the partition of the input space for the child-nodes constant. A node may be destroyed only after all of its children have been destroyed.

IV TRAINING PROCEDURE

The generic training procedure is described below:

Do while stopping criterion is FALSE:
    Select an I-XML file x at random.
    Traverse the tree starting from the root to find a terminal prototype pk that is close to x.
    Let nl and sk be the node and the slot that pk belongs to, respectively.
    If the node nl is not frozen, update the prototype pk according to equation (2).
    If a splitting criterion for slot sk is TRUE, assign a new node as child to sk and freeze node nl.
    Increment the counter count in slot sk and the counter age in node nl.

Depending on how the search in the second step is implemented, various learning algorithms can be developed. The search method is the only operation in the learning algorithm that depends on the size of the tree; hence, the computational complexity of the search method determines the speed of the learning process.

Sample pseudo code used for training the network:

/* Assigning inputs to each neuron */
Set i = 0
For each neuron in InputLayer
    neuron.Output = Inputs(i)
    i = i + 1
End For

/* Calculating the net input of each neuron */
Set netValue = 0
For each inputNeuron connected to thisNeuron
    netValue = netValue + (weight associated with inputNeuron * output of inputNeuron)
End For

/* Calculating the error value */
Delta = neuron.Output * (1 - neuron.Output) * ErrorFactor

/* Calculating the output */
For each layer in InputLayers
    neuron.Update(Input * Weight)
End For

/* Calculating the bias value */
Set netValue = bias
For each inputNeuron connected to thisNeuron
    netValue = netValue + (weight associated with inputNeuron * output of inputNeuron)
End For
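Putting equations (1) through (3) together, a single node-level learning step could be sketched as follows. This assumes the Node and Slot structures sketched earlier; the values of η0 and d are illustrative, not taken from the paper.

    import numpy as np

    def node_learning_step(node, x, eta0=0.5, d=0.01):
        # Find the winner among the node's prototypes (equation (1)).
        protos = np.stack([slot.prototype for slot in node.slots])
        k = int(np.argmin(np.sum((protos - x) ** 2, axis=1)))
        # The learning rate decays exponentially with the node's age (equation (3)).
        eta = eta0 * np.exp(-d * node.age)
        if not node.frozen:
            slot = node.slots[k]
            # Move the winner toward the example (equation (2)).
            slot.prototype = slot.prototype + eta * (x - slot.prototype)
            slot.count += 1
        node.age += 1
        return k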


Full Search Method

The full search method, illustrated in Figure 2(a), is based on conservative exhaustive search. To guarantee that the prototype pk with the minimum distance to a given feature vector x is returned, it is necessary to compute the distances d(x, pj) between the input vector x and each of the terminal prototypes pj ∈ P. The prototype pk with the minimum distance is returned. The time required by the full search method is of the order O(n); however, the full search method guarantees the return of the prototype closest to the input vector.
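A recursive sketch of this exhaustive search, under the Node and Slot structures assumed earlier, might look as follows; it visits every terminal slot and keeps the closest prototype.

    import numpy as np

    def full_search(node, x):
        # Return (slot, node, distance) for the terminal prototype closest to x.
        # O(n) in the number of terminal prototypes, but guaranteed to be exact.
        best = None
        for slot in node.slots:
            if slot.child is None:                 # terminal slot (leaf)
                dist = float(np.sum((slot.prototype - x) ** 2))
                if best is None or dist < best[2]:
                    best = (slot, node, dist)
            else:                                  # internal slot: recurse
                candidate = full_search(slot.child, x)
                if best is None or candidate[2] < best[2]:
                    best = candidate
        return best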

Figure 2: (a) Diagrammatic view of the full search method.

Pattern Detection

All the networks are trained and stored in the form of XML files so that they can be used later. During the detection process, the Interactive XML sheets are uploaded to the network. Following the learning done on the neural tree, the tree recognizes each node and performs a full search on the tree. While searching is performed, the tree does not grow; it remains constant and can be compared against the nodes. The algorithm for pattern detection is given below.

Do while terminal node is TRUE:
    Read the root node of the I-XML file.
    Compare the node with the existing network, which was already trained with effective data.
    If the root node indicates availability, move to the child node to dig out the information and store it in the nodes of the new network.
    If the delta value provides an output greater than 0.8, the conclusion is reached by justifying the hypothesis; otherwise the input is negated.
    If the hypothesis is neither justified nor negated, a new hypothesis is generated, and the tree is trained again with the new hypothesis.

The above algorithm receives an input in the form of an I-XML file, which is fed into the network. The network matches the pattern with previously trained elements. If the input matches the pattern, a justification of the hypothesis is made; otherwise the hypothesis is negated, or else a new hypothesis is generated.

V GENERALISATION OF NETWORK

The CNeT is generalized to classify vectors with the same features. The generalization ability is measured by the accuracy with which it makes these classifications. The network accepts a complete set of nonlinear input. One of the major advantages of the CNeT is its ability to generalize: a trained net can classify data, from the same class as the learning data, that it has never seen before. To reach the best generalization, the dataset should be split into three parts:
The training set is used to train the CNeT; the error on this dataset is minimized during training.
The validation set is used to determine the performance of the neural network on patterns that are not trained during learning.
A test set is used for finally checking the overall performance of the neural net.
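As a rough illustration, the three-way split described above could be performed as follows; the 60/20/20 proportions and the function name are our assumptions.

    import numpy as np

    def split_dataset(X, rng, train_frac=0.6, val_frac=0.2):
        # Partition the available vectors into training, validation and test sets.
        idx = rng.permutation(len(X))
        n_train = int(train_frac * len(X))
        n_val = int(val_frac * len(X))
        return (X[idx[:n_train]],                  # training set
                X[idx[n_train:n_train + n_val]],   # validation set
                X[idx[n_train + n_val:]])          # test set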

The following figure shows a typical error development of a training set (lower curve) and a validation set (upper curve).


Figure 3: Error curve (error % on the training and validation sets versus the number of training epochs).

The learning should be stopped at the minimum of the validation-set error; at this point the net generalizes best. When learning is not stopped, overtraining occurs and the performance of the net on the whole data decreases, despite the fact that the error on the training data still gets smaller. After finishing the learning phase, the net should be finally checked with the third dataset, the test set. Sometimes generalization is not achieved even when the training-set error is low.
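Since learning should stop at the minimum of the validation-set error, an early-stopping loop might be sketched as follows. The net interface (train_epoch, error) and the patience value are our assumptions, purely for illustration.

    def train_with_early_stopping(net, train_set, val_set, max_epochs=100):
        # Track the epoch with the lowest validation error and stop once the
        # error has not improved for several epochs (overtraining sets in).
        best_error, best_epoch = float("inf"), 0
        for epoch in range(max_epochs):
            net.train_epoch(train_set)
            val_error = net.error(val_set)
            if val_error < best_error:
                best_error, best_epoch = val_error, epoch
            elif epoch - best_epoch > 5:           # illustrative patience
                break
        return best_epoch, best_error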


Two different estimates have been conducted for calculating the error rate on inputs which are not in the training set but are drawn from the same underlying distribution as the training set; we call this the out-of-sample-set error rate. Perhaps the simplest technique we adopted is to divide the input vectors available for training into two disjoint sets, considering one as the training set and the other as the validation set. Based on the two sets of data, the error rate is calculated. If the number of vectors in both sets is large, the error rate on the validation set is a reasonable estimate of generalization accuracy.

The second method adopted to estimate the error rate, i.e., the generalization accuracy, is cross validation. The available training vectors are divided into k disjoint subsets, called folds. A single fold is selected as the validation set and the remaining k-1 folds are selected as the training set. The procedure is repeated k times, each time selecting a different fold as the validation set and its complement as the training set. The error rate is computed for each validation set, and the average of these error rates is taken as the estimate of the out-of-sample error.
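A minimal sketch of this k-fold estimate follows; train_fn and error_fn stand in for the CNeT training and evaluation routines, which the paper does not spell out.

    import numpy as np

    def k_fold_error(X, y, train_fn, error_fn, k=5, seed=0):
        # Each fold serves once as the validation set; its complement is the
        # training set. The k error rates are averaged into the out-of-sample
        # error estimate.
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(X)), k)
        errors = []
        for i in range(k):
            val = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            model = train_fn(X[train], y[train])
            errors.append(error_fn(model, X[val], y[val]))
        return float(np.mean(errors))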
The following figure illustrates the estimate of generalization error versus the training done on the CNeT.

[Figure 4 plots the validation-set error and training-set error (% of error, 0 to 200) against the training rate.]

Figure 4: Generalisation error.

VI TRAINING PATTERNS

The following sample I-XML trifles illustrate the training patterns. Thousands of such trifles are used for training the network, and the training set and validation set are drawn from them. The training pattern may be in any format, which is then converted into an I-XML file by the system itself.

Sample I-XML File / Trifles

<?xml version="1.0" ?>
<doc>
  <assembly>
    <name>Ixml</name>
    <version>1.0.2346.43108</version>
    <fullname>nxml, Version=1.0.2346.43108, Culture=neutral, PublicKeyToken=null</fullname>
  </assembly>
  <!-- Trifle describing the nature of a particular terrorist -->
  <Terrorist>
    <Name>Name of the terrorist</Name>
    <Age>Age of the terrorist</Age>
    <Gender>Sex of the terrorist</Gender>
    <PlaceOfBirth>Birth place</PlaceOfBirth>
    <PreviousActivities>Type of attack done by the terrorist</PreviousActivities>
    <Location>Place where the terrorist attack happened</Location>
    <NumberOfParticipants>Total number of people involved in the activity</NumberOfParticipants>
  </Terrorist>
</doc>

<?xml version="1.0" ?>
<doc>
  <assembly>
    <name>Ixml</name>
    <version>1.0.2346.43108</version>
    <fullname>nxml, Version=1.0.2346.43108, Culture=neutral, PublicKeyToken=null</fullname>
  </assembly>
  <!-- Trifle describing a terrorist attack -->
  <TerroristAttack>
    <Location>Place in which the terrorist attack took place</Location>
    <TypeOfAttack>The type of attack conducted by the terrorist</TypeOfAttack>
    <MembersInTheGroup>The members who participated in the activity</MembersInTheGroup>
    <Loss>Number of people who died in the attack</Loss>
  </TerroristAttack>
</doc>

All these different trifles are obtained from a very large database; government agencies also provide such databases of terrorist events. No matter the format in which the data are presented, the system is able to convert them into I-XML files.
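For illustration, one way a trifle could be read out of an I-XML file is with Python's standard ElementTree; the tag names follow the normalized samples above (spaces removed to keep the XML valid), and the encoding of the resulting fields into a feature vector is left open, as the paper does not specify one.

    import xml.etree.ElementTree as ET

    def read_trifle(path: str) -> dict:
        # Parse one trifle file and return its fields as a plain dictionary.
        root = ET.parse(path).getroot()
        record = root.find("Terrorist")
        if record is None:
            record = root.find("TerroristAttack")
        return {child.tag: (child.text or "").strip() for child in record}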

Sample Hypothesis

The system accepts any type of hypothesis given to it. The CNeT tries to match the hypothesis and produce all the trifles relating to it. If the probability over the matching trifles is low, it negates the hypothesis; otherwise it may create a new hypothesis. Such a hypothesis is given below:

BOMB BLAST IN HYDERABAD

When the hypothesis is fed into a trained network, it is matched against all the trifles available in it. When a node represents any part of the hypothesis, the search continues into that node, meaning that the child nodes are traversed. If many nodes represent the hypothesis with the same pattern, a high probability, say over 80 percent, is obtained; the hypothesis is then justified, and all related information about the hypothesis is provided to the investigator. If the probability is less than 40 percent, the hypothesis is negated, stating that such information is not available. If the probability lies between 40 percent and 80 percent, it provides a way to create a new hypothesis.
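The three-way decision described above amounts to a simple thresholding rule, sketched below with the 0.8 and 0.4 probability thresholds given in the text.

    def judge_hypothesis(match_probability: float) -> str:
        # Over 80 percent: justified; under 40 percent: negated;
        # in between: a new hypothesis is generated.
        if match_probability > 0.8:
            return "justified"
        if match_probability < 0.4:
            return "negated"
        return "new hypothesis"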
[Chart I: Training vs. Success Range. Success rate (0 to 900) plotted against the number of training cycles (50, 100, 300, 500, 1000).]

VII EXPERIMENTAL RESULTS

Table I shows the number of cycles run on the network with each I-XML file and the resulting numbers of hypotheses justified, negated, and newly created.

TABLE I

XML file | No. of cycles | Hypothesis justified | Hypothesis negated | New hypothesis created
A        | 1000          | 780                  | 210                | 10
B        | 500           | 200                  | 295                | 5
C        | 300           | 130                  | 178                | 2
D        | 100           | 20                   | 80                 | 0
E        | 50            | 12                   | 38                 | 0

[Chart II: Training vs. Fault Range. Failure rate (0 to 400) plotted against the number of training cycles (50, 100, 300, 500, 1000).]

TABLE II
Some theoretical values

XML file | No. of cycles | Hypothesis justified | Hypothesis negated | New hypothesis created
naive    | any           | t - k                | k                  | k
perfect  | 500           | t                    | 0                  | 0
k-wrong  | 300           | 0                    | t                  | 0


[Chart III: Training vs. New Hypothesis Creation. Number of new hypotheses created (0 to 12) plotted against the number of training cycles (50, 100, 300, 500, 1000).]

[Chart IV: Success vs. Failure. Success and failure counts plotted against the number of training cycles (50, 100, 300, 500, 1000).]

VIII CONCLUSION

This paper introduces an innovative methodology to analyze terrorism with the help of data mining. The idea proposed here is valuable because the model accepts an interactive XML file as input for both training and detection. The soft computing model presented here also provides a solid grounding for both training the network and performing detection.

IX REFERENCES

[1] S. Behnke and N. B. Karayiannis, "CNeT: Competitive neural trees for pattern classification," in Proc. IEEE Int. Conf. Neural Networks, Washington, D.C., June 3-6, 1996, pp. 1439-1444.
[2] Sivanandam, Shanmugam, and Sumathi, "Development of soft computing models for data mining," IE(I) Journal-CP, pp. 22-31.
[3] S. Behnke and N. B. Karayiannis, "Competitive neural trees for pattern classification."
[4] L. Atlas, R. Cole, Y. Muthusamy, A. Lippman, J. Connor, D. Park, M. El-Sharkawi, and R. J. Marks II, "A performance comparison of trained multilayer perceptrons and trained classification trees," Proc. IEEE, vol. 78, no. 10, pp. 1614-1619, 1990.
[5] L. Fang, A. Jennings, W. X. Wen, K. Q.-Q. Li, and T. Li, "Unsupervised learning for neural trees," in Proc. Int. Joint Conf. Neural Networks, vol. 3, pp. 2709-2715, 1991.
[6] S. R. Safavian and D. Landgrebe, "A survey of decision tree classifier methodology."
[7] S. R. Safavian and D. Landgrebe, "A survey of decision tree classifier methodology," IEEE Trans. Systems, Man, and Cybernetics, vol. 21, no. 3.
[8] X. Li and N. Ye, "Decision tree classifiers for computer intrusion detection," Nova Science Publishers, Commack, NY, USA.
[9] D. B. Cousins, D. J. Weishar, and J. B. Sharkey, "Intelligence collection for counter terrorism in massive information content," in Proc. 2004 IEEE Aerospace Conference, vol. 5, March 6-13, 2004, pp. 3273-3282.
[10] http://www.cio.in/govern
[11] www.homelandsecurity.org
[12] R. Nowak, "Decision trees," IEEE Trans. Systems, Man, and Cybernetics, May 1991.
[13] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees. Belmont, CA: Wadsworth, 1984.
[14] http://www.tkb.org
[15] M. J. Andrews, "An Information Theoretic Hierarchical Classifier for Machine Vision," Worcester Polytechnic Institute, May 1999.
[16] I. K. Sethi and G. P. R. Sarvarayudu, "Hierarchical classifier design using mutual information," IEEE Trans. Pattern Anal. Machine Intell., vol. 4, pp. 441-445, 1982.
[17] http://www.fbi.gov


[18] P. A. Chou, "Optimal partitioning for classification and regression trees," IEEE Trans. Pattern Anal. Machine Intell., vol. 13, pp. 340-354, 1991.
[19] S. Behnke and N. B. Karayiannis, "Competitive neural trees for vector quantization," Neural Network World, vol. 6, no. 3, pp. 263-277, 1996.
[20] S. Behnke and N. B. Karayiannis, "CNeT: Competitive neural trees for pattern classification," in Proc. IEEE Int. Conf. Neural Networks, Washington, D.C., June 3-6, 1996, pp. 1439-1444.
[21] S. Behnke and N. B. Karayiannis, "Competitive neural trees for vector quantization," Neural Network World, vol. 6, no. 3, pp. 263-277, 1996.
