1
ESMOTE: an overproduce-and-choose synthetic examples generation strategy based on evolutionary computation. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-08004-8
2
Kordos M, Blachnik M, Scherer R. Fuzzy clustering decomposition of genetic algorithm-based instance selection for regression problems. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2021.12.016
4
Kordos M, Arnaiz-González Á, García-Osorio C. Evolutionary prototype selection for multi-output regression. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.05.055
5
EEkNN: k-Nearest Neighbor Classifier with an Evidential Editing Procedure for Training Samples. Electronics 2019. DOI: 10.3390/electronics8050592
Abstract
The k-nearest neighbor (kNN) rule is one of the most popular classification algorithms applied in many fields because it is very simple to understand and easy to design. However, one of the major problems encountered in using the kNN rule is that all of the training samples are considered equally important in the assignment of the class label to the query pattern. In this paper, an evidential editing version of the kNN rule is developed within the framework of belief function theory. The proposal is composed of two procedures. An evidential editing procedure is first proposed to reassign the original training samples with new labels represented by an evidential membership structure, which provides a general representation model regarding the class membership of the training samples. After editing, a classification procedure specifically designed for evidently edited training samples is developed in the belief function framework to handle the more general situation in which the edited training samples are assigned dependent evidential labels. Three synthetic datasets and six real datasets collected from various fields were used to evaluate the performance of the proposed method. The reported results show that the proposal achieves better performance than other considered kNN-based methods, especially for datasets with high imprecision ratios.
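To make the two-stage idea above concrete, here is a minimal Python sketch, not the paper's belief-function machinery: training labels are first softened into class-membership vectors estimated from each sample's neighbours (a crude stand-in for evidential editing), and queries are then classified by accumulating those memberships over the k nearest training samples. All function names and parameter choices are illustrative assumptions.

```python
import numpy as np

def soft_edit_labels(X, y, n_classes, k=5):
    """Replace each training label with a soft class-membership vector
    estimated from its k nearest neighbours (a simplified stand-in for
    the evidential editing step)."""
    n = len(X)
    memberships = np.zeros((n, n_classes))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the sample itself
        for j in np.argsort(d)[:k]:
            memberships[i, y[j]] += 1.0
        memberships[i] /= memberships[i].sum()
    return memberships

def soft_knn_predict(X_train, memberships, x_query, k=5):
    """Classify a query by accumulating the edited membership vectors of
    its k nearest training samples and taking the largest total."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    vote = memberships[np.argsort(d)[:k]].sum(axis=0)
    return int(np.argmax(vote))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    M = soft_edit_labels(X, y, n_classes=2)
    print(soft_knn_predict(X, M, np.array([1.8, 1.9])))
```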
6
A multi-objective evolutionary approach to training set selection for support vector machine. Knowl Based Syst 2018. DOI: 10.1016/j.knosys.2018.02.022
7
Salama KM, Abdelbar AM, Helal AM, Freitas AA. Instance-based classification with Ant Colony Optimization. Intell Data Anal 2017. DOI: 10.3233/ida-160031
Affiliation(s)
- Ashraf M. Abdelbar, Department of Mathematics & Computer Science, Brandon University, Brandon, MB, Canada
- Ayah M. Helal, School of Computing, University of Kent, Chatham Maritime, UK
9
Verbiest N, Vluymans S, Cornelis C, García-Pedrajas N, Saeys Y. Improving nearest neighbor classification using Ensembles of Evolutionary Generated Prototype Subsets. Appl Soft Comput 2016. DOI: 10.1016/j.asoc.2016.03.015
11
Verbiest N, Derrac J, Cornelis C, García S, Herrera F. Evolutionary wrapper approaches for training set selection as preprocessing mechanism for support vector machines: Experimental evaluation and support vector analysis. Appl Soft Comput 2016. DOI: 10.1016/j.asoc.2015.09.006
13
Hamidzadeh J, Monsefi R, Sadoghi Yazdi H. Large symmetric margin instance selection algorithm. Int J Mach Learn Cybern 2014. DOI: 10.1007/s13042-014-0239-z
14
Nikolaidis K, Mu T, Goulermas J. Prototype reduction based on Direct Weighted Pruning. Pattern Recognit Lett 2014. DOI: 10.1016/j.patrec.2013.08.022
15
García-Pedrajas N, de Haro-García A, Pérez-Rodríguez J. A scalable memetic algorithm for simultaneous instance and feature selection. Evol Comput 2013;22:1-45. PMID: 23544367. DOI: 10.1162/evco_a_00102
Abstract
Instance selection is becoming increasingly relevant due to the huge amount of data that is constantly produced in many fields of research. At the same time, most of the recent pattern recognition problems involve highly complex datasets with a large number of possible explanatory variables. For many reasons, this abundance of variables significantly harms classification or recognition tasks. There are efficiency issues, too, because the speed of many classification algorithms is largely improved when the complexity of the data is reduced. One of the approaches to address problems that have too many features or instances is feature or instance selection, respectively. Although most methods address instance and feature selection separately, both problems are interwoven, and benefits are expected from facing these two tasks jointly. This paper proposes a new memetic algorithm for dealing with many instances and many features simultaneously by performing joint instance and feature selection. The proposed method performs four different local search procedures with the aim of obtaining the most relevant subsets of instances and features to perform an accurate classification. A new fitness function is also proposed that enforces instance selection but avoids putting too much pressure on removing features. We prove experimentally that this fitness function improves the results in terms of testing error. Regarding the scalability of the method, an extension of the stratification approach is developed for simultaneous instance and feature selection. This extension allows the application of the proposed algorithm to large datasets. An extensive comparison using 55 medium to large datasets from the UCI Machine Learning Repository shows the usefulness of our method. Additionally, the method is applied to 30 large problems, with very good results. The accuracy of the method for class-imbalanced problems in a set of 40 datasets is shown. The usefulness of the method is also tested using decision trees and support vector machines as classification methods.
Affiliation(s)
- Nicolás García-Pedrajas, Department of Computing and Numerical Analysis, University of Cordoba, Córdoba, 14014, Spain
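The core ingredients described in the abstract above, a chromosome that concatenates an instance mask with a feature mask, a fitness that enforces instance selection while putting little pressure on removing features, and a local refinement step, can be sketched as follows. This is an illustrative sketch under assumed weights and a single bit-flip local search, not the paper's four local search procedures or its stratified extension.

```python
import numpy as np

def one_nn_accuracy(X_ref, y_ref, X_eval, y_eval):
    """Accuracy of a 1-NN rule that uses only the selected instances/features."""
    if len(X_ref) == 0 or X_ref.shape[1] == 0:
        return 0.0
    correct = 0
    for x, t in zip(X_eval, y_eval):
        d = np.linalg.norm(X_ref - x, axis=1)
        correct += int(y_ref[np.argmin(d)] == t)
    return correct / len(X_eval)

def joint_fitness(chrom, X, y, alpha=0.6, beta=0.35, gamma=0.05):
    """Chromosome = instance mask ++ feature mask. Rewards accuracy and
    instance reduction strongly, but puts only light pressure on removing
    features (weights here are illustrative, not the paper's)."""
    n = len(X)
    inst, feat = chrom[:n].astype(bool), chrom[n:].astype(bool)
    if not inst.any() or not feat.any():
        return 0.0
    acc = one_nn_accuracy(X[inst][:, feat], y[inst], X[:, feat], y)
    return alpha * acc + beta * (1 - inst.mean()) + gamma * (1 - feat.mean())

def local_search(chrom, X, y, flips=30, seed=0):
    """Memetic-style refinement: try random single-bit flips, keep improvements."""
    rng = np.random.default_rng(seed)
    best, best_fit = chrom.copy(), joint_fitness(chrom, X, y)
    for pos in rng.integers(len(chrom), size=flips):
        cand = best.copy()
        cand[pos] = ~cand[pos]
        f = joint_fitness(cand, X, y)
        if f > best_fit:
            best, best_fit = cand, f
    return best, best_fit

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = np.hstack([rng.normal(0, 1, (80, 3)), rng.normal(0, 1, (80, 3))])
    X[:40, :3] += 2.0                        # first three features carry the signal
    y = np.array([0] * 40 + [1] * 40)
    chrom = rng.random(len(X) + X.shape[1]) < 0.5
    refined, fit = local_search(chrom, X, y)
    print(f"fitness {fit:.3f}, instances kept {refined[:len(X)].sum()}, "
          f"features kept {refined[len(X):].sum()}")
```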
16
García-Pedrajas N, Perez-Rodríguez J, de Haro-García A. OligoIS: Scalable Instance Selection for Class-Imbalanced Data Sets. IEEE Trans Cybern 2013;43:332-346. PMID: 22868583. DOI: 10.1109/tsmcb.2012.2206381
Abstract
In current research, an enormous amount of information is constantly being produced, which poses a challenge for data mining algorithms. Many of the problems in extremely active research areas, such as bioinformatics, security and intrusion detection, or text mining, share the following two features: large data sets and class-imbalanced distribution of samples. Although many methods have been proposed for dealing with class-imbalanced data sets, most of these methods are not scalable to the very large data sets common to those research fields. In this paper, we propose a new approach to dealing with the class-imbalance problem that is scalable to data sets with many millions of instances and hundreds of features. This proposal is based on the divide-and-conquer principle combined with application of the selection process to balanced subsets of the whole data set. This divide-and-conquer principle allows the execution of the algorithm in linear time. Furthermore, the proposed method is easy to implement using a parallel environment and can work without loading the whole data set into memory. Using 40 class-imbalanced medium-sized data sets, we will demonstrate our method's ability to improve the results of state-of-the-art instance selection methods for class-imbalanced data sets. Using three very large data sets, we will show the scalability of our proposal to millions of instances and hundreds of features.
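A rough sketch of the divide-and-conquer skeleton described above: each round partitions the data into small chunks, applies a cheap base filter (Wilson-style editing here) inside every chunk, and lets the kept instances accumulate votes; instances reaching a vote threshold are selected, so the cost per round stays linear in the number of instances. The balanced-subset construction and the exact voting scheme of OligoIS are not reproduced; the chunking, the base filter, and the threshold below are assumptions for illustration.

```python
import numpy as np

def enn_keep_mask(X, y, k=3):
    """Wilson-style editing inside one chunk: keep instances whose label
    agrees with the majority label of their k nearest neighbours."""
    keep = np.zeros(len(X), dtype=bool)
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        nn = np.argsort(d)[:k]
        votes = np.bincount(y[nn], minlength=int(y.max()) + 1)
        keep[i] = votes.argmax() == y[i]
    return keep

def divide_and_vote(X, y, rounds=5, chunk_size=100, min_votes=3, seed=0):
    """Each round, partition the data into random chunks, run the base filter
    on every chunk, and give one vote to each instance the filter keeps.
    Instances gathering at least `min_votes` votes are selected."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X), dtype=int)
    for _ in range(rounds):
        order = rng.permutation(len(X))
        for start in range(0, len(X), chunk_size):
            idx = order[start:start + chunk_size]
            keep = enn_keep_mask(X[idx], y[idx])
            votes[idx[keep]] += 1
    return np.flatnonzero(votes >= min_votes)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (450, 2)), rng.normal(2.5, 1, (50, 2))])
    y = np.array([0] * 450 + [1] * 50)
    selected = divide_and_vote(X, y)
    print(len(selected), "of", len(X), "instances kept")
```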
18
Lausser L, Müssel C, Melkozerov A, Kestler HA. Identifying predictive hubs to condense the training set of k-nearest neighbour classifiers. Comput Stat 2012. DOI: 10.1007/s00180-012-0379-0
19
Derrac J, Triguero I, García S, Herrera F. Integrating Instance Selection, Instance Weighting, and Feature Weighting for Nearest Neighbor Classifiers by Coevolutionary Algorithms. IEEE Trans Syst Man Cybern B Cybern 2012;42:1383-97. DOI: 10.1109/tsmcb.2012.2191953
20
Derrac J, Verbiest N, García S, Cornelis C, Herrera F. On the use of evolutionary feature selection for improving fuzzy rough set based prototype selection. Soft Comput 2012. DOI: 10.1007/s00500-012-0888-3
21
Enhancing evolutionary instance selection algorithms by means of fuzzy rough set based feature selection. Inf Sci (N Y) 2012. DOI: 10.1016/j.ins.2011.09.027
22
García S, Derrac J, Cano JR, Herrera F. Prototype selection for nearest neighbor classification: taxonomy and empirical study. IEEE Trans Pattern Anal Mach Intell 2012;34:417-35. PMID: 21768651. DOI: 10.1109/tpami.2011.142
Abstract
The nearest neighbor classifier is one of the most used and well-known techniques for performing recognition tasks. It has also demonstrated itself to be one of the most useful algorithms in data mining in spite of its simplicity. However, the nearest neighbor classifier suffers from several drawbacks such as high storage requirements, low efficiency in classification response, and low noise tolerance. These weaknesses have been the subject of study for many researchers and many solutions have been proposed. Among them, one of the most promising solutions consists of reducing the data used for establishing a classification rule (training data) by means of selecting relevant prototypes. Many prototype selection methods exist in the literature and the research in this area is still advancing. Different properties can be observed in their definitions, but no formal categorization has been established yet. This paper provides a survey of the prototype selection methods proposed in the literature from a theoretical and empirical point of view. From a theoretical point of view, we propose a taxonomy based on the main characteristics presented in prototype selection and we analyze their advantages and drawbacks. Empirically, we conduct an experimental study involving different sizes of data sets for measuring their performance in terms of accuracy, reduction capabilities, and runtime. The results obtained by all the methods studied have been verified by nonparametric statistical tests. Several remarks, guidelines, and recommendations are made for the use of prototype selection for nearest neighbor classification.
23
García S, Cano JR, Bernadó-Mansilla E, Herrera F. Diagnose effective evolutionary prototype selection using an overlapping measure. Int J Pattern Recogn 2011. DOI: 10.1142/s0218001409007727
Abstract
Evolutionary prototype selection has shown its effectiveness in the prototype selection domain, improving in most cases on the results offered by classical prototype selection algorithms, but its computational cost is high. In this paper, we analyze the behavior of the evolutionary prototype selection strategy using a complexity measure for classification problems based on overlapping. In addition, we analyze different k values for the nearest neighbour classifier in this domain of study to see their influence on the results of prototype selection methods. The objective is to predict, based on this overlapping measure, when evolutionary prototype selection is effective for a particular problem.
Affiliation(s)
- Salvador García, Department of Computer Science and Artificial Intelligence, University of Granada, Granada 18071, Spain
- José-Ramón Cano, Department of Computer Science, University of Jaén, Higher Polytechnic Center of Linares, Alfonso X El Sabio street, Linares 23700, Spain
- Francisco Herrera, Department of Computer Science and Artificial Intelligence, University of Granada, Granada 18071, Spain
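The diagnosis proposed above hinges on an overlap-based complexity measure computed before running any prototype selection. As an illustration of that kind of measure (not necessarily the exact one used in the paper), the sketch below computes Fisher's discriminant ratio per feature for a two-class problem; low maxima indicate heavy class overlap, the regime where evolutionary prototype selection is expected to be less effective.

```python
import numpy as np

def fisher_discriminant_ratio(X, y):
    """Fisher's discriminant ratio (the F1 complexity measure) for a
    two-class problem: per feature, the squared difference of class means
    over the sum of class variances; the maximum over features is returned.
    Low values indicate strong class overlap."""
    c0, c1 = np.unique(y)
    X0, X1 = X[y == c0], X[y == c1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12   # guard against zero variance
    return float((num / den).max())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.array([0] * 100 + [1] * 100)
    X_easy = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))])
    X_hard = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(0.5, 1, (100, 3))])
    print("well separated:", round(fisher_discriminant_ratio(X_easy, y), 2))
    print("overlapping:   ", round(fisher_discriminant_ratio(X_hard, y), 2))
```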
24
Caises Y, González A, Leyva E, Pérez R. Combining instance selection methods based on data characterization: An approach to increase their effectiveness. Inf Sci (N Y) 2011. DOI: 10.1016/j.ins.2011.06.013
26
Olvera-López JA, Carrasco-Ochoa JA, Martínez-Trinidad JF, Kittler J. A review of instance selection methods. Artif Intell Rev 2010. DOI: 10.1007/s10462-010-9165-y
27
García-Osorio C, de Haro-García A, García-Pedrajas N. Democratic instance selection: A linear complexity instance selection algorithm based on classifier ensemble concepts. Artif Intell 2010. DOI: 10.1016/j.artint.2010.01.001
28
Derrac J, García S, Herrera F. A Survey on Evolutionary Instance Selection and Generation. Int J Appl Metaheuristic Comput 2010. DOI: 10.4018/jamc.2010102604
Abstract
The use of Evolutionary Algorithms to perform data reduction tasks has become an effective approach to improving the performance of data mining algorithms. Many proposals in the literature have shown that Evolutionary Algorithms obtain excellent results when applied as Instance Selection and Instance Generation procedures. The purpose of this paper is to present a survey on the application of Evolutionary Algorithms to the Instance Selection and Generation processes. It covers approaches applied to the enhancement of the nearest neighbor rule, as well as approaches focused on improving the models extracted by some well-known data mining algorithms. Furthermore, some proposals developed to tackle two emerging problems in data mining, Scaling Up and Imbalanced Data Sets, are also reviewed.
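For readers new to the area surveyed here, the following sketch shows the canonical evolutionary instance selection setup that most of the reviewed proposals build on: a binary chromosome with one bit per training instance and a fitness that trades 1-NN accuracy against reduction, optimized by a plain generational GA. The operators, weights, and the full-training-set evaluation (real proposals usually use leave-one-out) are simplifying assumptions, not any specific published algorithm.

```python
import numpy as np

def one_nn_accuracy(X_ref, y_ref, X_eval, y_eval):
    """Accuracy of the 1-NN rule using X_ref as the reference (selected) set."""
    if len(X_ref) == 0:
        return 0.0
    correct = 0
    for x, t in zip(X_eval, y_eval):
        d = np.linalg.norm(X_ref - x, axis=1)
        correct += int(y_ref[np.argmin(d)] == t)
    return correct / len(X_eval)

def fitness(mask, X, y, alpha=0.5):
    """Classic evolutionary IS fitness: trade-off between 1-NN accuracy over
    the training set and the reduction rate of the selected subset."""
    acc = one_nn_accuracy(X[mask], y[mask], X, y)
    reduction = 1.0 - mask.mean()
    return alpha * acc + (1.0 - alpha) * reduction

def ga_instance_selection(X, y, pop_size=20, generations=30, p_mut=0.02, seed=0):
    """Minimal generational GA over binary chromosomes (one bit per instance)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    pop = rng.random((pop_size, n)) < 0.5
    fits = np.array([fitness(ind, X, y) for ind in pop])
    for _ in range(generations):
        new_pop = [pop[fits.argmax()].copy()]          # elitism
        while len(new_pop) < pop_size:
            a, b = rng.integers(pop_size, size=2)      # binary tournaments
            p1 = pop[a] if fits[a] >= fits[b] else pop[b]
            a, b = rng.integers(pop_size, size=2)
            p2 = pop[a] if fits[a] >= fits[b] else pop[b]
            cut = rng.integers(1, n)                   # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            child ^= rng.random(n) < p_mut             # bit-flip mutation
            new_pop.append(child)
        pop = np.array(new_pop)
        fits = np.array([fitness(ind, X, y) for ind in pop])
    return pop[fits.argmax()]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(2, 1, (60, 2))])
    y = np.array([0] * 60 + [1] * 60)
    best = ga_instance_selection(X, y)
    print(f"kept {best.sum()} of {len(X)} instances, "
          f"fitness {fitness(best, X, y):.3f}")
```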
30
García-Pedrajas N. Constructing Ensembles of Classifiers by Means of Weighted Instance Selection. IEEE Trans Neural Netw 2009;20:258-77. DOI: 10.1109/tnn.2008.2005496
31
A divide-and-conquer recursive approach for scaling up instance selection algorithms. Data Min Knowl Discov 2008. DOI: 10.1007/s10618-008-0121-2
32
Cano JR, García S, Herrera F. Subgroup discover in large size data sets preprocessed using stratified instance selection for increasing the presence of minority classes. Pattern Recognit Lett 2008. DOI: 10.1016/j.patrec.2008.08.001
33
Lee MC, Nelson SJ. Supervised pattern recognition for the prediction of contrast-enhancement appearance in brain tumors from multivariate magnetic resonance imaging and spectroscopy. Artif Intell Med 2008;43:61-74. PMID: 18448318. DOI: 10.1016/j.artmed.2008.03.002
Abstract
Objective: The purpose of this study was to develop a pattern classification algorithm for use in predicting the location of new contrast-enhancement in brain tumor patients using data obtained via multivariate magnetic resonance (MR) imaging from a prior scan. We also explore the use of feature selection or weighting in improving the accuracy of the pattern classifier.
Methods and materials: Contrast-enhanced MR images, perfusion images, diffusion images, and proton spectroscopic imaging data were obtained from 26 patients with glioblastoma multiforme brain tumors, divided into a design set and an unseen test set for verification of results. A k-NN algorithm was implemented to classify unknown data based on a set of training data with ground truth derived from post-treatment contrast-enhanced images; the quality of the k-NN results was evaluated using a leave-one-out cross-validation method. A genetic algorithm was implemented to select optimal features and feature weights for the k-NN algorithm. The binary representation of the weights was varied from 1 to 4 bits. Each individual parameter was thresholded as a simple classification technique, and the results compared with the k-NN.
Results: The feature selection k-NN was able to achieve a sensitivity of 0.78±0.18 and specificity of 0.79±0.06 on the holdout test data using only 7 of the 38 original features. Similar results were obtained with non-binary weights, but using a larger number of features. Overfitting was also observed in the higher bit representations. The best single-variable classifier, based on a choline-to-NAA abnormality index computed from spectroscopic data, achieved a sensitivity of 0.79±0.20 and specificity of 0.71±0.11. The k-NN results had lower variation across patients than the single-variable classifiers.
Conclusions: We have demonstrated that the optimized k-NN rule could be used for quantitative analysis of multivariate images, and be applied to a specific clinical research question. Selecting features was found to be useful in improving the accuracy of feature weighting algorithms and improving the comprehensibility of the results. We believe that in addition to lending insight into parameter relevance, such algorithms may be useful in aiding radiological interpretation of complex multimodality datasets.
Affiliation(s)
- Michael C Lee, Surbeck Laboratory of Advanced Imaging, Department of Radiology, University of California, UCSF Radiology Box 2532, 1700 4th Street, San Francisco, CA 94143-2532, USA.
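The evaluation loop that a GA of the kind described above would optimize can be sketched independently of the genetic machinery: a kNN rule with per-feature weights scored by leave-one-out accuracy, with weights decoded from a few bits per feature, mirroring the 1 to 4 bit encodings mentioned in the abstract. The dataset, bit width, and helper names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def loo_weighted_knn_accuracy(X, y, weights, k=3):
    """Leave-one-out accuracy of a kNN rule whose distance is weighted
    per feature; a weight of zero removes the feature entirely."""
    Xw = X * np.sqrt(weights)          # weighted Euclidean distance
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(Xw - Xw[i], axis=1)
        d[i] = np.inf
        nn = np.argsort(d)[:k]
        votes = np.bincount(y[nn], minlength=int(y.max()) + 1)
        correct += int(votes.argmax() == y[i])
    return correct / len(X)

def decode_weights(bits, n_bits=2):
    """Map a flat bit string to per-feature weights on a grid of
    2**n_bits levels in [0, 1], as in few-bit weight encodings."""
    groups = np.asarray(bits).reshape(-1, n_bits)
    levels = groups @ (2 ** np.arange(n_bits)[::-1])
    return levels / (2 ** n_bits - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = np.hstack([rng.normal(0, 1, (80, 2)), rng.normal(0, 5, (80, 4))])
    X[:40, 0] += 3.0                                   # only feature 0 is informative
    y = np.array([0] * 40 + [1] * 40)
    all_on = np.ones(X.shape[1])
    informative = decode_weights([1, 1] + [0, 0] * 5)  # full weight on feature 0 only
    print("uniform weights :", loo_weighted_knn_accuracy(X, y, all_on))
    print("selected feature:", loo_weighted_knn_accuracy(X, y, informative))
```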
34
Cano JR, Herrera F, Lozano M. Evolutionary stratified training set selection for extracting classification rules with trade off precision-interpretability. Data Knowl Eng 2007. DOI: 10.1016/j.datak.2006.01.008
35
Paterlini S, Krink T. Differential evolution and particle swarm optimisation in partitional clustering. Comput Stat Data Anal 2006. DOI: 10.1016/j.csda.2004.12.004
36
Editing prototypes in the finite sample size case using alternative neighborhoods. Springer; 2005. DOI: 10.1007/bfb0033286
38
Mollineda RA, Sánchez JS, Sotoca JM. Data Characterization for Effective Prototype Selection. Pattern Recognition and Image Analysis 2005. DOI: 10.1007/11492542_4
39
Ho SY, Chen JH, Huang MH. Inheritable Genetic Algorithm for Biobjective 0/1 Combinatorial Optimization Problems and its Applications. IEEE Trans Syst Man Cybern B Cybern 2004;34:609-20. PMID: 15369097. DOI: 10.1109/tsmcb.2003.817090
Abstract
In this paper, we formulate a special type of multiobjective optimization problems, named biobjective 0/1 combinatorial optimization problem BOCOP, and propose an inheritable genetic algorithm IGA with orthogonal array crossover (OAX) to efficiently find a complete set of nondominated solutions to BOCOP. BOCOP with n binary variables has two incommensurable and often competing objectives: minimizing the sum r of values of all binary variables and optimizing the system performance. BOCOP is NP-hard having a finite number C(n, r) of feasible solutions for a limited number r. The merits of IGA are threefold as follows: 1) OAX with the systematic reasoning ability based on orthogonal experimental design can efficiently explore the search space of C(n, r); 2) IGA can efficiently search the space of C(n, r+/-1) by inheriting a good solution in the space of C(n, r); and 3) The single-objective IGA can economically obtain a complete set of high-quality nondominated solutions in a single run. Two applications of BOCOP are used to illustrate the effectiveness of the proposed algorithm: polygonal approximation problem (PAP) and the problem of editing a minimum reference set for nearest neighbor classification (MRSP). It is shown empirically that IGA is efficient in finding complete sets of nondominated solutions to PAP and MRSP, compared with some existing methods.
Affiliation(s)
- Shinn-Ying Ho, Department of Information Engineering and Computer Science, Feng Chia University, Taichung, Taiwan 407, ROC.
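Independently of the inheritance mechanism and the orthogonal array crossover, the bookkeeping behind a complete set of nondominated solutions for the reference-set application reduces to Pareto filtering over (reference-set size r, error) pairs, as in this small sketch with made-up candidate values.

```python
import numpy as np

def pareto_front(points):
    """Return indices of nondominated points when both objectives are to be
    minimized (here: reference-set size r and 1-NN error rate)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

if __name__ == "__main__":
    # (size r, error) pairs such as those produced by candidate reference sets
    candidates = [(40, 0.08), (25, 0.10), (25, 0.12), (60, 0.07),
                  (15, 0.18), (15, 0.15), (80, 0.07)]
    front = pareto_front(candidates)
    for i in sorted(front, key=lambda i: candidates[i][0]):
        print(candidates[i])
```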
42
Bezdek JC, Kuncheva LI. Nearest prototype classifier designs: An experimental study. Int J Intell Syst 2001. DOI: 10.1002/int.1068
43
Kuncheva LI, Jain LC. Nearest neighbor classifier: Simultaneous editing and feature selection. Pattern Recognit Lett 1999. DOI: 10.1016/s0167-8655(99)00082-3
44
Evolution of Reference Sets in Nearest Neighbor Classification. Springer; 1999. DOI: 10.1007/3-540-48873-1_12
45
Ferri F, Albert J, Vidal E. Considerations about sample-size sensitivity of a family of edited nearest-neighbor rules. IEEE Trans Syst Man Cybern B Cybern 1999;29:667-72. DOI: 10.1109/3477.790454
46
Romer C, Kandel A. Comments on "Constraints on belief functions imposed by fuzzy random variables": some technical remarks on Romer/Kandel. IEEE Trans Syst Man Cybern B Cybern 1999;29:672. PMID: 18252347. DOI: 10.1109/3477.790455
Abstract
First, we would like to thank V. Kratschmer for his validation of our results in the paper regarding the belief measure by using a topological approach. Though assertions (1) and (3) are presented in a weakened fashion, our results still remain valid, as he claims. It is true that assertion (2) has been proved by us only for Borel sets B that have at most countably many components. We were not able to prove the same result for Borel sets with uncountably many components (such as the irrational numbers, for example) using our line of reasoning. We therefore applaud the proof presented by V. Kratschmer for the more general Borel sets, which makes interesting use of some topological properties induced by the Hausdorff metric defined on the space of closed intervals of the real numbers. This certainly makes our original approach to fuzzy data analysis, which combines fuzzy set theory and Dempster-Shafer theory, even more useful.
Affiliation(s)
- C Romer, Dept. of Comput. Sci. & Eng., Univ. of South Florida, Tampa, FL
47
Kuncheva L, Bezdek J. Nearest prototype classification: clustering, genetic algorithms, or random search? IEEE Trans Syst Man Cybern C Appl Rev 1998. DOI: 10.1109/5326.661099
48
Sánchez J, Pla F, Ferri F. Prototype selection for the nearest neighbour rule through proximity graphs. Pattern Recognit Lett 1997. DOI: 10.1016/s0167-8655(97)00035-4