Jeremy du Plessis

University of Cape Town

Having begun his academic career with a BMus at UCT (and picking up a few extra courses in psychology along the way), Jeremy has always been guided by a deep interest in the cognitive aspects of human behaviour, learning, and creativity. A few years later, driven by a new interest in statistical learning and a fascination with the idea of training general-purpose predictive models from data, Jeremy returned to UCT to complete a BSc in Applied Mathematics & Computer Science, with the ultimate goal of pursuing a career in Machine Learning.

During his undergraduate and honours years, Jeremy developed a research interest in reinforcement learning under the supervision of Associate Professor Jonathan Shock, conducting small research projects on curiosity-driven reinforcement learning and multi-agent reinforcement learning. The latter included a minor contribution to the paper “A game-theoretic analysis of networked system control for common-pool resource management using multi-agent reinforcement learning”, published in the NeurIPS 2020 proceedings.

Following his honours year at UCT, Jeremy moved to London, where he presently works as a Machine Learning Engineer at a consumer tech startup. He leads the company's ML efforts, designing and implementing scalable, performant systems centred on generative ML for content (text, image, video, audio) organisation and generation.

Additionally, Jeremy is pursuing his MSc part-time under the co-supervision of Associate Professor Jonathan Shock and Professor Benjamin Rosman, researching the application of attention-based policy architectures for learning under conditions of sparse information. In many real-world applications (e.g. consumer healthcare tracking and intervention, or manufacturing plant optimisation), the stream of available information admits observations that are often only partially complete (e.g. measurements missing due to broken sensors), and it is impossible to know ahead of time which variables will be missing. Any learning system applied in such environments must therefore be robust to inconsistently incomplete observations. The working hypothesis is that attention-based policy architectures are better suited to such problems than conventional fixed-input architectures, and the preliminary experimental results seem promising!
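To make the idea concrete, here is a minimal, hypothetical sketch (in PyTorch, and not the actual MSc implementation) of how an attention-based policy can consume inconsistently incomplete observations: each observed variable is embedded as a (variable-id, value) token, missing variables are masked out of the attention rather than imputed, and the pooled representation feeds a policy head. The class and parameter names below (e.g. `SetAttentionPolicy`) are illustrative assumptions.

```python
# Illustrative sketch only: an attention-based policy that tolerates
# arbitrarily missing observation variables. Each observation is a fixed-size
# vector plus a boolean mask saying which entries were actually measured.
import torch
import torch.nn as nn


class SetAttentionPolicy(nn.Module):
    def __init__(self, n_vars: int, n_actions: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.var_embed = nn.Embedding(n_vars, d_model)  # learned embedding per variable id
        self.val_proj = nn.Linear(1, d_model)           # projects each scalar measurement
        self.attn = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=2 * d_model, batch_first=True
        )
        self.policy_head = nn.Linear(d_model, n_actions)

    def forward(self, values: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
        """
        values:  (batch, n_vars) raw measurements; entries for missing variables are ignored.
        present: (batch, n_vars) boolean mask, True where the variable was observed.
        returns: (batch, n_actions) action logits.
        """
        batch, n_vars = values.shape
        ids = torch.arange(n_vars, device=values.device).expand(batch, n_vars)
        tokens = self.var_embed(ids) + self.val_proj(values.unsqueeze(-1))
        # Missing variables are masked out of attention rather than imputed.
        out = self.attn(tokens, src_key_padding_mask=~present)
        # Mean-pool over the observed tokens only, then map to action logits.
        mask = present.unsqueeze(-1).float()
        pooled = (out * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
        return self.policy_head(pooled)


if __name__ == "__main__":
    policy = SetAttentionPolicy(n_vars=8, n_actions=4)
    values = torch.randn(2, 8)
    present = torch.rand(2, 8) > 0.3  # a different random subset of sensors missing per step
    print(policy(values, present).shape)  # torch.Size([2, 4])
```

Because the set of tokens (and the attention mask) can differ at every step, the same policy network handles whichever subset of sensors happens to be available, which is the property a conventional fixed-input architecture lacks.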

Interests
  • Reinforcement Learning
  • Deep Learning
  • Cognitive Science