BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20260514T212913EDT-2482RA2kRC@132.216.98.100
DTSTAMP:20260515T012913Z
DESCRIPTION:\n	Dynamic Games and Applications Seminar\n\n	Speaker: Kaiqing Zh
 ang – University of Illinois at Urbana-Champaign\, United States\n\n	Webina
 r link\n		Webinar ID: 962 7774 9870\n		Passcode: 285404\n\n	Abstract: Recent ye
 ars have witnessed both tremendous empirical successes and fast-growing th
 eoretical development of reinforcement learning (RL) in solving many sequ
 ential decision-making and control tasks. However\, many RL algorithms ar
 e still far from being applicable to practical autonomous systems\, whic
 h usually involve more complicated scenarios with multiple decision-make
 rs and safety-critical concerns. In this talk\, I will introduce our wor
 k on developing RL algorithms with provable guarantees\, with a focus o
 n multi-agent and safety-critical settings. I will first show that poli
 cy optimization\, one of the main drivers of the empirical successes of
  RL\, enjoys global convergence and sample complexity guarantees for a c
 lass of robust control problems. More importantly\, we show that certai
 n policy optimization approaches automatically preserve some 'robustnes
 s' during the iterations\, a property we term 'implicit regularization'
 . Interestingly\, this setting naturally unifies other important benchm
 ark settings in control and game theory: risk-sensitive control design
  and linear quadratic zero-sum dynamic games\, where the latter is the
  benchmark multi-agent RL (MARL) setting that mirrors the role played b
 y the linear quadratic regulator (LQR) in single-agent RL. Despite the
  nonconvexity and the fundamental challenges in the optimization landsc
 ape\, our theory shows that policy optimization enjoys global convergen
 ce guarantees in these problems as well. These results provide theoreti
 cal justification for several basic robust RL and MARL settings that ar
 e popular in the empirical RL literature. In addition\, I will introduc
 e several other works along this line of provable MARL and robust RL\,
  including decentralized MARL with networked agents\, the sample comple
 xity of model-based MARL\, etc. Time permitting\, I will also share sev
 eral future directions based on these results\, towards large-scale an
 d reliable autonomy.\n\n
DTSTART:20210218T160000Z
DTEND:20210218T170000Z
LOCATION:CA\, ZOOM
SUMMARY:Provable reinforcement learning for multi-agent and robust control 
 systems
URL:https://www.mcgill.ca/cim/channels/event/provable-reinforcement-learnin
 g-multi-agent-and-robust-control-systems-328630
END:VEVENT
END:VCALENDAR
