Asynchronous Deep Reinforcement Learning: A Shared Experience Replay Framework

https://doi.org/10.48185/jaai.v6i2.1797

Authors

Khaidem, L., & Xi, K.

Keywords:

Asynchronous reinforcement learning, Shared experience replay, Multi-agent learning, Off-policy algorithms, Deep reinforcement learning

Abstract

Off-policy reinforcement learning (RL) algorithms with experience replay have achieved strong performance across a range of decision-making tasks. However, traditional implementations typically rely on a single agent interacting with one environment instance, which can limit exploration diversity and slow convergence. In this paper, we propose an asynchronous multi-agent RL framework that leverages a shared experience replay buffer. Each agent interacts independently with its own environment instance and contributes to a centralized buffer that aggregates diverse trajectories. This setup enhances sample diversity, accelerates learning, and scales efficiently with modern hardware. Our framework is compatible with standard off-policy algorithms such as Double DQN (DDQN) and Deep Deterministic Policy Gradient (DDPG), and we demonstrate its effectiveness on discrete and continuous control benchmarks. Experimental results show that our approach significantly improves convergence speed and learning stability compared to single-agent baselines. We discuss the theoretical implications of sharing experiences across agents and highlight real-world applications and future extensions, including hierarchical coordination, prioritized sampling, and deployment in real-time control systems.
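
To make the shared-buffer idea concrete, the following Python sketch illustrates the general pattern the abstract describes: several asynchronous workers, each standing in for an agent with its own environment instance, push transitions into one thread-safe centralized buffer while a learner draws mini-batches from it. This is only a rough illustration under assumed names (SharedReplayBuffer, worker) with stubbed environment dynamics and no gradient updates; it is not the authors' implementation.

import random
import threading
from collections import deque


class SharedReplayBuffer:
    """Centralized, thread-safe buffer that aggregates transitions from all agents."""

    def __init__(self, capacity=100_000):
        self.transitions = deque(maxlen=capacity)
        self.lock = threading.Lock()

    def add(self, transition):
        with self.lock:
            self.transitions.append(transition)

    def sample(self, batch_size):
        with self.lock:
            if len(self.transitions) < batch_size:
                return None
            # Copying to a list keeps the sketch simple; a real buffer would sample in place.
            return random.sample(list(self.transitions), batch_size)


def worker(buffer, steps=1_000):
    """Stand-in for one agent interacting with its own environment instance."""
    state = 0.0
    for _ in range(steps):
        action = random.choice([0, 1])                   # placeholder policy
        next_state = state + random.uniform(-1.0, 1.0)   # placeholder dynamics
        reward = -abs(next_state)                        # placeholder reward
        buffer.add((state, action, reward, next_state))
        state = next_state


if __name__ == "__main__":
    buffer = SharedReplayBuffer()
    agents = [threading.Thread(target=worker, args=(buffer,)) for _ in range(4)]
    for t in agents:
        t.start()
    # Learner loop: any off-policy update (e.g. a DDQN or DDPG gradient step)
    # could consume these mini-batches; here we only draw them.
    for _ in range(100):
        batch = buffer.sample(64)
    for t in agents:
        t.join()
    print(f"Collected {len(buffer.transitions)} transitions from 4 asynchronous agents.")

In a full system the learner loop would perform the off-policy update on each sampled mini-batch, and the workers would act with the current policy rather than at random; the sketch only shows how asynchronous data collection decouples from learning through the shared buffer.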


Published

2025-12-31

How to Cite

Khaidem, L., & Xi, K. (2025). Asynchronous Deep Reinforcement Learning: A Shared Experience Replay Framework. Journal of Applied Artificial Intelligence, 6(2), 44–59. https://doi.org/10.48185/jaai.v6i2.1797

Section

Articles