A major concern in reinforcement learning, especially as it is applied to real-world and robotics problems, is sample efficiency: problems are increasingly complex, and data acquisition is difficult in many domains. To that end, many approaches incorporate external advice into the learning process to increase the rate at which an agent learns to solve a given problem. However, these approaches typically rely on a single reliable information source; learning from multiple, potentially unreliable sources remains an open question in assisted reinforcement learning. We present CLUE (Cautiously Learning with Unreliable Experts), a framework for learning single-stage decision problems with policy advice from multiple, potentially unreliable experts. We compare CLUE against an unassisted agent and an agent that naïvely follows all advice. Our results show that CLUE converges faster than an unassisted agent when advised by reliable experts, yet remains robust to incorrect advice from unreliable experts.