1. Liu S, Jiang C. A novel prediction approach based on three-way decision for cloud datacenters. Appl Intell 2023. DOI: 10.1007/s10489-023-04505-8.
2. Zhu C, Ma X, Zhang C, Ding W, Zhan J. Information granules-based long-term forecasting of time series via BPNN under three-way decision framework. Inf Sci (N Y) 2023. DOI: 10.1016/j.ins.2023.03.133.
3. Software defect prediction ensemble learning algorithm based on adaptive variable sparrow search algorithm. Int J Mach Learn Cyb 2023. DOI: 10.1007/s13042-022-01740-2.
4. Luo J, Hu M. A bipolar three-way decision model and its application in analyzing incomplete data. Int J Approx Reason 2023. DOI: 10.1016/j.ijar.2022.10.011.
5. A three-way decision method under probabilistic linguistic term sets and its application to Air Quality Index. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2022.10.108.
6. Liu F, Hu YK, Wang SS. Cyclic sequential process of pairwise comparisons with application to multi-criteria decision making. Int J Mach Learn Cyb 2022. DOI: 10.1007/s13042-022-01705-5.
7. Intuitionistic fuzzy-based three-way label enhancement for multi-label classification. Mathematics 2022. DOI: 10.3390/math10111847.
Abstract
Multi-label classification deals with determining instance-label associations for unseen instances. Although many margin-based approaches have been delicately developed, classification remains uncertain for instances with smaller separation margins. The intuitionistic fuzzy set is an effective tool for characterizing uncertainty, yet it has not been examined for multi-label cases. This paper proposes a novel model called intuitionistic fuzzy three-way label enhancement (IFTWLE) for multi-label classification. IFTWLE combines label enhancement with an intuitionistic fuzzy set under the framework of three-way decisions. For unseen instances, a pseudo-label for label uncertainty evaluation is generated from a logical label-based model. An intuitionistic fuzzy set-based instance selection principle seamlessly bridges logical label learning and numerical label learning. The principle is developed hierarchically. At the label level, membership and non-membership functions are defined pairwise to measure local uncertainty and generate candidate uncertain instances. At the instance level, instances are selected from the candidates for label enhancement, while the remaining instances are left unchanged. To the best of our knowledge, this is the first attempt to combine logical label learning and numerical label learning into a unified framework for minimizing classification uncertainty. Extensive experiments demonstrate that, with the selectively reconstructed label importance, IFTWLE is statistically superior to state-of-the-art multi-label classification algorithms in terms of classification accuracy. The computational complexity of the algorithm is O(n²mk), where n, m, and k denote the unseen-instance count, the label count, and the average label-specific feature size, respectively.
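The instance-selection idea sketched in the abstract can be illustrated with standard intuitionistic fuzzy set theory: each label carries a membership degree μ and a non-membership degree ν with μ + ν ≤ 1, and the hesitancy π = 1 − μ − ν quantifies residual uncertainty. The following minimal Python sketch selects instances whose per-label hesitancy exceeds a threshold; the function names and the threshold value are illustrative assumptions, not the paper's actual implementation.

```python
def hesitancy(mu, nu):
    """Hesitancy degree of an intuitionistic fuzzy value.

    In an intuitionistic fuzzy set, mu + nu <= 1 must hold;
    pi = 1 - mu - nu measures how undecided the assignment is.
    """
    assert 0.0 <= mu and 0.0 <= nu and mu + nu <= 1.0
    return 1.0 - mu - nu


def select_uncertain(instances, threshold=0.3):
    """Return indices of candidate uncertain instances.

    instances: list of instances, each a list of (membership,
    non-membership) pairs, one pair per label.  An instance is a
    candidate for label enhancement when any of its labels has a
    hesitancy above the threshold (threshold is an assumed value).
    """
    selected = []
    for i, labels in enumerate(instances):
        if max(hesitancy(mu, nu) for mu, nu in labels) > threshold:
            selected.append(i)
    return selected
```

For example, an instance with a confident label assignment like (0.9, 0.05) has hesitancy 0.05 and is left unchanged, whereas one with (0.4, 0.2) has hesitancy 0.4 and would be routed to label enhancement.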