Lessons and experiences of continual learning in deep learning for foundation models and LLMs

When we talk about continual learning, we may well mean different things, much as with AGI. It can refer to the post-training of LLMs; self-evolving agents; experience replay in DQN (Mnih et al., 2013); sequential learning of MNIST digits; the optimal order in which to learn tasks; and more. For a better conversation, this post brings back the context of continual learning in deep learning: positioning it among other learning regimes, and presenting its definition, evaluation, and solutions without technical details. It also discusses the connections to, and inspirations for, LLMs and foundation models. This post is inspired by Mundt et al. (2023). ...

January 28, 2026 · 9 min