Speaker
Natabara Máté Gyöngyössy
(ELTE ITK)
Description
Large Language Models (LLMs) have been with us for a few years now. Their generalization capabilities are outstanding thanks to their sheer size; however, they still lack the benefits of information processing grounded in multimodality. In this review, we explore how early forms of such grounding could be achieved by constructing Large World Models (LWMs). We formulate method-agnostic, general templates for both LLMs and LWMs, and highlight several alternative and distinct methods that were designed for these large, general-purpose models but could also be applied to Deep Learning research in other areas.
Primary author
Natabara Máté Gyöngyössy
(ELTE ITK)