We present a framework for autonomously learning a portable symbolic representation that describes a collection of low-level continuous environments. We show that abstract representations can be learned in a task-independent, agent-centric space and, when combined with problem-specific information, used for planning. We demonstrate knowledge transfer in a video game domain where an agent learns portable, task-independent symbolic rules and then learns instantiations of these rules on a per-task basis, reducing the number of samples required to learn a representation of a new task.