Version 1: Received: 11 July 2024 / Approved: 11 July 2024 / Online: 11 July 2024 (13:03:43 CEST)
How to cite:
Garayev, G.; Alili, A. On Renormalization Group Based Deep Q-Network. Preprints 2024, 2024070953. https://doi.org/10.20944/preprints202407.0953.v1
APA Style
Garayev, G., & Alili, A. (2024). On Renormalization Group Based Deep Q-Network. Preprints. https://doi.org/10.20944/preprints202407.0953.v1
Chicago/Turabian Style
Garayev, G., and Azar Alili. 2024. "On Renormalization Group Based Deep Q-Network." Preprints. https://doi.org/10.20944/preprints202407.0953.v1
Abstract
In this paper, we introduce the integration of Renormalization Group (RG) methods with Deep Q-Networks (DQNs) to improve reinforcement learning in high-dimensional state spaces. RG methods provide multi-scale analysis, enhancing state representation, learning stability, and exploration. The proposed RG-DQN algorithm uses hierarchical Q-value estimation and multi-scale representations, demonstrating superior performance on synthetic genomic data compared to traditional DQNs.
Keywords
DQN; renormalization group; AI; loss functions
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.