Title: An implicit tour of regularization
Abstract: Regularization is a key ingredient in the design of learning algorithms. Classically, it amounts to defining a constrained or penalized empirical objective to be minimized; optimization aspects are then considered separately. In practice, this separation is much more blurred. Indeed, it is a classical observation that an optimization process can have a self-regularizing effect by implicitly enforcing an inductive bias. This observation has recently become popular in machine learning. On the one hand, it helps to explain learning curves in deep learning. On the other hand, controlling regularization through optimization can make learning more efficient. In this talk, I will provide an overview of classical and recent results on the topic.
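The self-regularizing effect mentioned in the abstract can be illustrated on a toy problem (this example is an illustration of the general phenomenon, not taken from the talk): plain gradient descent on an underdetermined least-squares problem, initialized at zero, converges to the minimum-ℓ2-norm interpolating solution rather than an arbitrary one, even though no explicit penalty is present. A minimal NumPy sketch, with arbitrary problem sizes and step size:

```python
import numpy as np

rng = np.random.default_rng(0)
n_eqs, n_vars = 5, 20          # underdetermined: many solutions to A x = b
A = rng.standard_normal((n_eqs, n_vars))
b = rng.standard_normal(n_eqs)

# Gradient descent on f(x) = 0.5 * ||A x - b||^2, starting from zero.
x = np.zeros(n_vars)
lr = 0.01                      # step size below 2 / lambda_max(A^T A)
for _ in range(20000):
    x -= lr * A.T @ (A @ x - b)

# The iterates stay in the row space of A, so the limit is the
# minimum-norm interpolating solution, i.e. the pseudoinverse solution.
x_min_norm = np.linalg.pinv(A) @ b
print(np.linalg.norm(x - x_min_norm))  # should be close to zero
```

The implicit bias here comes from the initialization and the update direction: every gradient step lies in the row space of A, which singles out the minimum-norm solution among all interpolating ones.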
Seminar MTL Machine Learning and Optimization (MTL MLOpt)
Please subscribe to the mailing list: https://mtl-mlopt.github.io/