Exploration and self-directed learning are valuable components of early childhood development, but they often come with an unacceptable safety trade-off: infants and toddlers are especially vulnerable to environmental hazards, a vulnerability that can fundamentally limit their ability to interact with and explore their surroundings. In this work we address this risk by introducing a caregiver robot, and we present a model that allows it to autonomously adapt its environment to minimize danger to other (novice) agents in its vicinity. Using an action-prediction strategy for agents with unknown goals, our model learns typical behaviors for a multitude of tasks from expert demonstrations. We then apply the model to predict likely agent behaviors and to identify regions of risk within this action space. Our contribution uses this information to prioritize and execute risk-mitigating behaviors, manipulating and adapting the environment to minimize the harm the novice is likely to encounter. We conclude with an evaluation using multiple agents of varying goal-directedness, comparing each agent's self-interested performance in scenarios with and without the assistance of a caregiver running our model. Our experiments yield promising results: assisted agents incur less damage, interact longer, and explore their environments more completely than unassisted agents.