An agent continually performing different tasks in the same domain has the opportunity to learn, over the course of its operational lifetime, about the behavioural regularities afforded by that domain. This paper addresses the problem of learning a task-independent behaviour model from the underlying structure of a domain that is common across the multiple tasks presented to an autonomous agent. Our approach involves learning action priors: a behavioural model that encodes a notion of local common-sense behaviour in the domain, conditioned on either the state or the observations of the agent. This knowledge is accumulated across tasks and transferred as an exploration behaviour whenever a new task is presented to the agent. As a result, the more tasks the agent encounters, the faster it learns each new one and the greater its overall performance. We illustrate this approach with experiments in a simulated extended navigation domain.
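As a rough illustration of the idea (a minimal sketch, not the paper's exact formulation), the code below pools per-state action counts from the policies learned on previous tasks and samples exploration actions in a new task from the resulting smoothed prior; all names and the pseudo-count smoothing are illustrative assumptions.

```python
import random
from collections import defaultdict


class ActionPrior:
    """Pooled per-state action counts across tasks (illustrative sketch).

    `alpha` is a Dirichlet-style pseudo-count, so actions never used
    in a state still retain non-zero exploration probability.
    """

    def __init__(self, actions, alpha=1.0):
        self.actions = list(actions)
        self.alpha = alpha
        # counts[state][action] = how often the action was chosen by
        # learned policies in this state, accumulated over past tasks.
        self.counts = defaultdict(lambda: defaultdict(float))

    def update(self, policy_trace):
        """After solving a task, record the (state, action) choices of
        the learned policy to accumulate the prior."""
        for state, action in policy_trace:
            self.counts[state][action] += 1.0

    def sample(self, state):
        """Sample an exploration action for a new task, biased towards
        actions that proved useful in this state on earlier tasks."""
        weights = [self.counts[state][a] + self.alpha for a in self.actions]
        return random.choices(self.actions, weights=weights)[0]
```

In use, such a prior would replace the uniform random action in, say, an epsilon-greedy learner with `prior.sample(state)`, so exploration on a new task is concentrated on behaviours that the domain's structure has historically rewarded.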