I am interested in the intersection of learning and model-based planning from a theoretical viewpoint. More specifically, I am interested in leveraging learning to scale up planning in a domain-independent fashion. The field is still in its infancy and is beginning to receive increasing attention, but most results to date are empirical, and learning-based planners still lag behind classical non-learning planners. I strongly believe that achieving significant results and impact in this field requires stronger theoretical foundations.

Planning focuses on long-horizon reasoning and complex problem solving with guarantees. However, planning models suffer from high computational complexity. The tradeoff in planning is between expressivity and scalability: we want planning models that are expressive enough to capture practical, real-life problems, yet solvable with reasonable computational resources so that they are actually useful.

Learning focuses on pattern matching and data compression. However, learned models generally come with no guarantees on their outputs. The tradeoff in learning is between computation, expressivity and generalisability: we want learning models that are expressive enough to capture the relevant patterns, yet tractable to train and able to generalise to unseen inputs.

The planning problems I focus on involve symbolic models, as model-based planning methods have consistently been shown to scale significantly better than model-free methods such as reinforcement learning. There are many open problems in learning for model-based planning that I am interested in tackling, including finding expressive representations of learned knowledge, developing a generalisation theory for symbolic planning, and designing learning algorithms better suited to planning tasks. I am also interested in problem methodologies that make learning-for-planning approaches applicable in practice.
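To make "symbolic model" concrete, the following is a minimal illustrative sketch, not drawn from any particular planner: a STRIPS-style model where states are sets of ground facts, actions have preconditions and add/delete effects, and plans are found by uninformed search. The toy two-room "gripper" domain and all names in it are invented for this example.

```python
from collections import deque

# Each action: (name, preconditions, add effects, delete effects).
# Facts are plain strings; a state is a frozenset of facts.
ACTIONS = [
    ("move-a-b", {"at-a"}, {"at-b"}, {"at-a"}),
    ("move-b-a", {"at-b"}, {"at-a"}, {"at-b"}),
    ("pick-ball-a", {"at-a", "ball-a", "hand-empty"}, {"holding"}, {"ball-a", "hand-empty"}),
    ("drop-ball-b", {"at-b", "holding"}, {"ball-b", "hand-empty"}, {"holding"}),
]

def successors(state):
    """Yield (action name, next state) for each applicable action."""
    for name, pre, add, delete in ACTIONS:
        if pre <= state:
            yield name, frozenset((state - delete) | add)

def plan(init, goal):
    """Breadth-first search over the symbolic state space."""
    start = frozenset(init)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

print(plan({"at-a", "ball-a", "hand-empty"}, {"ball-b"}))
# → ['pick-ball-a', 'move-a-b', 'drop-ball-b']
```

Blind search like this is exactly where the scalability problem bites: the state space grows exponentially with the number of facts, which is why learned heuristics and other domain-independent learned knowledge are attractive.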