Optimized Control Strategy for Building Distributed Energy Systems Based on the Combination of Multi-Agent Reinforcement Learning and Imitation Learning
Keywords:
Energy Management, Multi-Agent Reinforcement Learning, Imitation Learning
Abstract
Global energy consumption and the decarbonization of buildings have become major concerns, and distributed energy systems in buildings are recognized for their potential to reduce carbon emissions and improve energy efficiency. However, applying multi-agent reinforcement learning (MARL) to control such systems faces challenges, including high computational complexity and slow convergence. To address these issues, this paper presents a novel control strategy that combines multi-agent reinforcement learning with imitation learning. In the proposed approach, the individual components of the building's distributed energy system are modeled as autonomous agents. By integrating reinforcement learning with imitation learning, the agents learn effective control policies with faster convergence and reduced computational demands. The method was tested in a simulation environment and compared against traditional control strategies. The results show that the combined approach offers significant advantages over the baseline methods: computational complexity is notably reduced, yielding faster convergence and more efficient energy management. These improvements provide a theoretical foundation for future applications of energy control at the building and district level. In conclusion, the combination of multi-agent reinforcement learning and imitation learning is a promising solution for optimizing building distributed energy systems, with potential for scalable and efficient energy management that contributes to decarbonization efforts in the building sector.
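The abstract's core idea, using imitation learning to warm-start an agent before reinforcement-learning fine-tuning, can be illustrated with a minimal behaviour-cloning sketch. The example below is an assumption-laden illustration, not the paper's actual method: the rule-based expert, the linear policy, and all names (`expert_policy`, `cloned_policy`, the 21 °C setpoint) are hypothetical choices standing in for one agent of a building distributed energy system.

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_policy(state):
    """Hypothetical rule-based expert for one heating agent:
    heat in proportion to how far the indoor temperature (state[0])
    falls below a 21 C setpoint, clipped to the valve range [0, 1]."""
    return np.clip(0.2 * (21.0 - state[0]), 0.0, 1.0)

# Collect expert demonstrations; state = [indoor temp, outdoor temp].
states = np.column_stack([rng.uniform(15, 25, 500),
                          rng.uniform(-5, 15, 500)])
actions = np.array([expert_policy(s) for s in states])

# Imitation (behaviour cloning) step: fit a linear policy a = w . [x, 1]
# to the demonstrations by least squares.
X = np.column_stack([states, np.ones(len(states))])
w, *_ = np.linalg.lstsq(X, actions, rcond=None)

def cloned_policy(state):
    """Imitation-learned policy; in the combined strategy this would be
    the starting point for multi-agent RL fine-tuning rather than
    learning from a random initialization."""
    return float(np.array([*state, 1.0]) @ w)

# Mean squared imitation error on the demonstration set.
err = float(np.mean((X @ w - actions) ** 2))
```

In the combined strategy described above, each component agent would start RL training from such a cloned policy, which is what speeds up convergence relative to learning from scratch; the RL fine-tuning itself (e.g. a MARL actor-critic loop) is omitted here for brevity.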