The evolution towards decarbonized energy systems entails profound changes in both generation and demand. The grid structure is becoming more decentralized, with many small, interconnected generation units and bidirectional power flows. Moreover, the emergence of prosumers (through self-generation, vehicle-to-grid capabilities, home automation, etc.) is increasing the uncertainty behind the meter for utilities.
These changes make electric networks more complex and more uncertain, which complicates their planning and operation. This work focuses on the optimal real-time operation and planning of electric grids with high shares of distributed energy resources, such as rooftop solar generation, energy storage, and controllable demand. Leading control and planning approaches rely heavily on significant modelling effort to perform well. Because distribution systems are intrinsically dynamic (for instance, most behind-the-meter installations evade the control and scrutiny of grid planners and operators), model-based approaches face significant limitations. To overcome these limitations, model-free and data-driven approaches have been proposed in the literature.
In this project, we explore reinforcement learning as a means to reach adequate control decisions without the need to maintain large model and parameter databases. Several learning-based control architectures have been designed and tested in simulations of neighbourhoods of apartment buildings with shared solar generation and energy storage assets. Results are encouraging: several learning-based approaches match the performance of full model-based control. The next challenge is adapting learning-based control to multi-objective power and energy control.
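To make the idea concrete, the following is a minimal, purely illustrative sketch of model-free control in this spirit: a tabular Q-learning agent dispatching a shared battery against a time-of-use price, learning to charge in cheap hours and discharge during the evening peak without any explicit grid model. All names, dynamics, and parameters (state-of-charge discretization, price profile, efficiency, demand) are assumptions for illustration, not the architectures actually tested in the project.

```python
# Illustrative sketch only: tabular Q-learning for shared-battery dispatch.
# All parameters and dynamics below are hypothetical assumptions.
import random

random.seed(0)

N_SOC = 5            # discretized state-of-charge levels: 0 (empty) .. 4 (full)
N_HOURS = 24         # state includes the hour of day
ACTIONS = [-1, 0, 1] # discharge one step, idle, charge one step
DEMAND = 1.0         # assumed constant hourly demand (energy units)
EFF = 0.9            # assumed discharge efficiency

# Assumed time-of-use price: cheap at night, expensive in the evening peak.
PRICE = [0.1] * 8 + [0.2] * 9 + [0.4] * 4 + [0.1] * 3

Q = {}  # Q[(hour, soc)] -> list of action values

def step(hour, soc, a):
    """Apply (clamped) action a; return next state and reward (negative cost)."""
    a = max(-soc, min(a, N_SOC - 1 - soc))      # keep SoC within bounds
    grid_energy = DEMAND + max(a, 0) - EFF * max(-a, 0)
    reward = -PRICE[hour] * grid_energy          # cost of energy bought
    return (hour + 1) % N_HOURS, soc + a, reward

def q_learning(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1):
    for _ in range(episodes):
        hour, soc = 0, N_SOC // 2
        for _ in range(N_HOURS):
            qs = Q.setdefault((hour, soc), [0.0] * len(ACTIONS))
            # epsilon-greedy action selection
            if random.random() < eps:
                ai = random.randrange(len(ACTIONS))
            else:
                ai = max(range(len(ACTIONS)), key=lambda i: qs[i])
            nh, ns, r = step(hour, soc, ACTIONS[ai])
            nqs = Q.setdefault((nh, ns), [0.0] * len(ACTIONS))
            # standard temporal-difference update
            qs[ai] += alpha * (r + gamma * max(nqs) - qs[ai])
            hour, soc = nh, ns

q_learning()
# Inspect the greedy action learned for a cheap night hour with an empty battery.
best = max(range(len(ACTIONS)), key=lambda i: Q[(2, 0)][i])
print("greedy action at (hour=2, soc=0):", ACTIONS[best])
```

The point of the sketch is that the agent never sees a network model or parameter database; it learns dispatch decisions from reward feedback alone, which is the property that motivates learning-based control here. Extending such a reward to multiple objectives (e.g., cost plus peak shaving) is exactly where the multi-objective challenge mentioned above arises.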