Liu S, Feng Y, Wu K, Cheng G, Huang J, Liu Z. Graph-Attention-Based Causal Discovery With Trust Region-Navigated Clipping Policy Optimization. IEEE Transactions on Cybernetics 2023;53:2311-2324. [PMID: 34665751 DOI: 10.1109/tcyb.2021.3116762]
Abstract
In many domains of the empirical sciences, discovering the causal structure among variables remains an indispensable task. Recently, to overcome the unoriented edges or violated latent assumptions that afflict conventional methods, researchers formulated causal discovery as a reinforcement learning (RL) procedure and employed a REINFORCE algorithm to search for the best-rewarded directed acyclic graph. The two keys to the overall performance of this procedure are the robustness of the RL method and the efficient encoding of variables. However, on the one hand, REINFORCE is prone to local convergence and unstable performance during training, and neither trust region policy optimization, which is computationally expensive, nor proximal policy optimization (PPO), which suffers from aggregate constraint deviation, is a suitable alternative for combinatorial optimization problems with a large number of individual subactions. We propose a trust region-navigated clipping policy optimization method for causal discovery that guarantees both better search efficiency and greater stability in policy optimization, compared with REINFORCE, PPO, and our prioritized sampling-guided REINFORCE implementation. On the other hand, to improve the encoding of variables, we propose a refined graph attention encoder called SDGAT that can capture more feature information without prior neighborhood information. With these improvements, the proposed method outperforms the former RL method on both synthetic and benchmark datasets in terms of output results and optimization robustness.
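For context on the PPO weakness the abstract cites, the sketch below shows standard PPO's clipped surrogate loss and why a single clip can deviate in aggregate when an action is composed of many subactions. This is a minimal illustration of well-known PPO behavior, not the paper's proposed trust region-navigated clipping method; the function names are ours.

```python
def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate loss for one action (to be minimized).

    The probability ratio pi_new / pi_old is clipped into [1 - eps, 1 + eps],
    so updates that move the policy too far get no extra gradient signal.
    """
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return -min(ratio * advantage, clipped * advantage)


def joint_ratio(subaction_ratios):
    """For a combinatorial action built from many subactions, the joint
    probability ratio is the product of the per-subaction ratios; clipping
    only this product constrains the deviation in aggregate, not per
    subaction -- the issue the abstract attributes to PPO in this setting.
    """
    prod = 1.0
    for r in subaction_ratios:
        prod *= r
    return prod
```

For example, 20 subactions that each drift by a modest ratio of 1.05 compound to a joint ratio of about 2.65, far outside a nominal clip range of [0.8, 1.2], even though every individual subaction looks nearly on-policy.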