modellrl
modellrl is an open-source software library designed to support model-based reinforcement learning workflows. It provides a framework for building, training, and evaluating agents that use explicit models of their environment, including learned dynamics and reward models. The library is intended for research and applied use: it supports both simulated environments and real-world data, and enables rapid experimentation with different modeling and planning approaches.
The project emphasizes modularity and interoperability, allowing users to combine neural dynamics models with planning and control components, and to swap individual modules without rewriting an entire pipeline.
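As an illustration of that kind of modularity, the sketch below defines a toy dynamics model and a random-shooting planner that works with any model exposing a `predict(state, action)` method. All names here are illustrative assumptions, not modellrl's actual API.

```python
# Illustrative sketch only: class and method names are assumptions,
# not modellrl's actual API.
import random

class LinearModel:
    """Toy 1-D dynamics model: next_state = state + action."""
    def predict(self, state, action):
        return state + action

class RandomShootingPlanner:
    """Planner usable with any model that exposes predict(state, action)."""
    def __init__(self, model, candidates=64, horizon=5, seed=0):
        self.model = model
        self.candidates = candidates
        self.horizon = horizon
        self.rng = random.Random(seed)

    def plan(self, state, reward_fn):
        """Sample random action sequences, return the first action of the best one."""
        best_action, best_return = None, float("-inf")
        for _ in range(self.candidates):
            seq = [self.rng.uniform(-1, 1) for _ in range(self.horizon)]
            s, total = state, 0.0
            for a in seq:
                s = self.model.predict(s, a)
                total += reward_fn(s)
            if total > best_return:
                best_action, best_return = seq[0], total
        return best_action

planner = RandomShootingPlanner(LinearModel())
# Reward favors states near 0, so from state 2.0 the chosen
# first action is typically negative.
action = planner.plan(2.0, reward_fn=lambda s: -abs(s))
print(action)
```

Because the planner only relies on the `predict` interface, the toy model could be replaced by any learned dynamics model without changing the planning code.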
Key components include dynamics models, reward models, and uncertainty estimators; ensembles for robust predictions; and planners that select actions by searching over model-predicted rollouts.
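One common way ensembles support robust prediction is to use member disagreement as an uncertainty signal. The sketch below is a hypothetical, self-contained illustration of that idea (it is not modellrl code): each member is a toy dynamics model with a slightly different slope, and the spread of their predictions serves as an epistemic-uncertainty proxy.

```python
# Hypothetical sketch, not modellrl's API: ensemble disagreement
# as an uncertainty estimate.
import statistics

class NoisyLinearModel:
    """Toy member: next_state = slope * state + action; slope varies per member."""
    def __init__(self, slope):
        self.slope = slope

    def predict(self, state, action):
        return self.slope * state + action

class Ensemble:
    def __init__(self, members):
        self.members = members

    def predict(self, state, action):
        preds = [m.predict(state, action) for m in self.members]
        mean = statistics.fmean(preds)
        std = statistics.pstdev(preds)  # disagreement across members
        return mean, std

ensemble = Ensemble([NoisyLinearModel(s) for s in (0.9, 1.0, 1.1)])
mean, std = ensemble.predict(state=2.0, action=0.5)
print(mean, std)  # members disagree more as |state| grows
```

A planner can penalize rollouts with high `std`, steering the agent away from regions where the model is unreliable.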
The library supports a range of model-based algorithms, from world-model-based planning and offline model-based RL to Dyna-style methods that augment model-free learners with imagined experience.
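To make the Dyna-style idea concrete, here is a minimal self-contained sketch (generic Dyna-Q on a toy chain environment, not modellrl code): a tabular agent memorizes observed transitions as its model, then replays imagined transitions from that model to speed up Q-learning.

```python
# Generic Dyna-Q sketch on a toy chain (states 0..N, reward 1 at N).
# Not modellrl code; environment and hyperparameters are illustrative.
import random

random.seed(0)

N = 5
ACTIONS = (-1, 1)
Q = {(s, a): 0.0 for s in range(N + 1) for a in ACTIONS}
model = {}  # learned deterministic model: (state, action) -> (reward, next_state)

s = 0
for _ in range(300):
    a = random.choice(ACTIONS)          # random behavior policy for simplicity
    s2 = min(max(s + a, 0), N)
    r = 1.0 if s2 == N else 0.0
    model[(s, a)] = (r, s2)             # "learn" the model by memorization
    # one real update plus a few imagined (planning) updates from the model
    updates = [((s, a), (r, s2))]
    updates += random.sample(list(model.items()), min(5, len(model)))
    for (us, ua), (ur, us2) in updates:
        target = ur + 0.9 * max(Q[(us2, b)] for b in ACTIONS)
        Q[(us, ua)] += 0.5 * (target - Q[(us, ua)])
    s = 0 if s2 == N else s2            # reset after reaching the goal

# Near the goal, moving toward it should be valued higher than moving away.
print(round(Q[(N - 1, 1)], 2), round(Q[(N - 1, -1)], 2))
```

The imagined updates let value information propagate along the chain far faster than real experience alone would.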
Typical workflows involve collecting data from an environment or simulator, training predictive models of dynamics and reward on that data, and then using the learned models for planning, policy optimization, or offline evaluation.
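That collect-fit-use loop can be sketched end to end in a few lines. The example below is a hypothetical illustration (not modellrl's API): it collects transitions from a toy 1-D environment, fits a one-parameter dynamics model by least squares, and uses the fitted model for prediction.

```python
# Hypothetical end-to-end sketch of the collect -> train -> use workflow.
# The toy environment and model form are illustrative assumptions.
import random

random.seed(1)

TRUE_DECAY = 0.8

def env_step(s, a):
    """True (unknown) dynamics: decayed state plus action plus small noise."""
    return TRUE_DECAY * s + a + random.gauss(0, 0.01)

# 1) Collect a small dataset of (state, action, next_state) transitions.
data = []
s = 1.0
for _ in range(100):
    a = random.uniform(-1, 1)
    s2 = env_step(s, a)
    data.append((s, a, s2))
    s = s2

# 2) Fit the model next_state ~ k * state + action by least squares
#    on the residual (next_state - action).
num = sum(s * (s2 - a) for s, a, s2 in data)
den = sum(s * s for s, a, s2 in data)
k = num / den  # k should land close to the true decay of 0.8

# 3) Use the learned model to predict an outcome for a new state/action.
print(round(k, 2), round(k * 2.0 + 0.5, 3))
```

In a real workflow the linear fit would be replaced by a neural dynamics model and the single prediction by planning or policy optimization, but the loop structure is the same.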
modellrl is developed openly by a community of researchers and practitioners and is released under a permissive open-source license.
Related topics include model-based reinforcement learning, planning in RL, and comparable RL libraries such as RLlib.