We tackle the problem of open-ended learning by introducing a method that simultaneously evolves agents and increasingly challenging environments. Unlike previous open-ended approaches that optimize agents with a fixed neural network topology, we hypothesize that generalization can be improved by allowing agents’ controllers to become more complex as they encounter more difficult environments. Our method, Augmentative Topology EPOET (ATEP), extends the Enhanced Paired Open-Ended Trailblazer (EPOET) algorithm by allowing agents to evolve their own neural network structures over time, adding complexity and capacity as necessary. Our empirical results demonstrate that ATEP produces general agents capable of solving more environments than fixed-topology baselines. We also investigate mechanisms for transferring agents between environments and find that a species-based approach further improves the performance and generalization of agents.