We develop an end-to-end model for learning to follow language instructions with compositional policies. Our model combines large language models with pretrained compositional value functions [Nangue Tasse et al., 2020] to generate policies for goal-reaching tasks specified in natural language. We evaluate our method in the BabyAI [Chevalier-Boisvert et al., 2019] environment and demonstrate compositional generalization to novel combinations of task attributes. Notably, our method generalizes to held-out attribute combinations, in some cases accomplishing those tasks with no additional learning samples.
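To make the composition step concrete, the sketch below illustrates the kind of Boolean value-function composition introduced by Nangue Tasse et al. [2020] that our approach builds on: pretrained goal-oriented value functions for individual attributes (e.g. "red", "ball") are combined element-wise, with minimum acting as conjunction and maximum as disjunction, and a greedy policy is read off the composed function. This is a simplified, hedged illustration under our own naming assumptions (Q_red, Q_ball, compose_and, and the small tabular state-action space are hypothetical), not the paper's actual interface or the exact formulation of the cited work.

import numpy as np

# Illustrative sketch only: stand-ins for pretrained per-attribute
# value functions over a small discrete state-action space.
rng = np.random.default_rng(0)
n_states, n_actions = 16, 4
Q_red = rng.random((n_states, n_actions))   # hypothetical "red" value function
Q_ball = rng.random((n_states, n_actions))  # hypothetical "ball" value function

def compose_and(*qs):
    # Conjunction of tasks: element-wise minimum of the value functions.
    return np.minimum.reduce(qs)

def compose_or(*qs):
    # Disjunction of tasks: element-wise maximum of the value functions.
    return np.maximum.reduce(qs)

def greedy_policy(q):
    # Read the greedy action for each state off a composed value function.
    return q.argmax(axis=1)

# An instruction such as "pick up the red ball" would map to the
# conjunction of the "red" and "ball" value functions.
Q_red_ball = compose_and(Q_red, Q_ball)
policy = greedy_policy(Q_red_ball)
print(policy.shape)  # (16,)

In the full system, a language model would be responsible for mapping the natural-language instruction to such a composition of attribute value functions; the mapping and the value functions themselves are learned, whereas the random arrays above are placeholders.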