An important aim of robotics is to design and build machines that can recognize and exploit opportunities afforded by new situations. Traditionally in artificial intelligence, this task has fallen to abstract representations, but that leaves open the problem of how to ground those representations in sensorimotor activity. In this talk, I propose a computational architecture whereby a mobile robot internalizes representations based on its own experience. I first examine a fast on-line learning algorithm that allows the robot to build up a mapping of how its motor signals transform sensory data. Then I propose a way of categorizing object affordances according to their internal effects. Based on these effects, wavelet analysis is applied to sensory data to uncover invariances from which a representation of goals can be developed. Finally, I’ll consider heuristics for projecting a learned sensorimotor mapping into the future to attain these goals.
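
The abstract does not say which on-line learning algorithm is used, so the following is only a minimal sketch, assuming a linear forward model trained with an LMS-style delta rule: a weight matrix maps each motor command to the sensory change it produces, and is nudged toward the observed outcome after every step. The class name `SensorimotorMap` and the learning rate `eta` are illustrative, not from the talk.

```python
import numpy as np

class SensorimotorMap:
    """On-line linear forward model: predicts the change in sensory state
    produced by a motor command, s_{t+1} ~ s_t + W @ m_t.
    A hypothetical sketch; the talk's actual algorithm is unspecified."""

    def __init__(self, n_sensors, n_motors, eta=0.05):
        self.W = np.zeros((n_sensors, n_motors))  # motor -> sensory-change map
        self.eta = eta                            # LMS learning rate

    def predict(self, s, m):
        return s + self.W @ m

    def update(self, s, m, s_next):
        # Delta rule: nudge W to reduce the one-step prediction error.
        err = s_next - self.predict(s, m)
        self.W += self.eta * np.outer(err, m)
        return err

# Usage: feed each (state, command, next state) triple as it arrives.
model = SensorimotorMap(n_sensors=4, n_motors=2)
rng = np.random.default_rng(0)
true_W = rng.normal(size=(4, 2))
s = np.zeros(4)
for _ in range(2000):
    m = rng.normal(size=2)
    s_next = s + true_W @ m                       # simulated world
    model.update(s, m, s_next)
    s = s_next
print(np.allclose(model.W, true_W, atol=1e-2))    # True: mapping recovered
```

Because each update touches only one outer product, the cost per step is linear in the number of weights, which is what makes this style of learning fast enough to run on-line.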
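The criterion for grouping affordances by "internal effects" is not spelled out either; one assumed, illustrative reading is to cluster interaction episodes by the sensory-effect vector each produced, and to treat the resulting clusters as affordance categories. The plain k-means below, and the "push"/"lift" prototypes in the usage, are my own stand-ins.

```python
import numpy as np

def categorize_by_effect(effects, k=3, iters=50, rng=None):
    """Plain k-means over the internal (sensory) effect each interaction
    produced; clusters of similar effect stand in for affordance classes."""
    rng = rng or np.random.default_rng()
    E = np.asarray(effects, dtype=float)
    centers = E[rng.choice(len(E), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(E[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.stack([E[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# Toy usage: effects clustered around three prototypes ("push", "lift", none).
rng = np.random.default_rng(3)
protos = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
effects = np.concatenate([p + 0.1 * rng.normal(size=(30, 2)) for p in protos])
labels, _ = categorize_by_effect(effects, rng=rng)
print(np.bincount(labels))    # episode counts per discovered category
```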
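As a hedged illustration of using wavelet analysis to uncover invariance, the sketch below decomposes each sensory trace with an unnormalized Haar (average/difference) pyramid, so that coarse coefficients average out fine-scale variation, and keeps the coefficients whose variance across episodes falls below a threshold. Those low-variance coefficients would serve as a candidate goal signature; the Haar transform, the variance criterion, and the threshold `tol` are assumptions, not details from the talk.

```python
import numpy as np

def haar_pyramid(x):
    """Unnormalized Haar (average/difference) decomposition of a 1-D signal
    whose length is a power of two. Coarse coefficients come first in the
    returned array; averaging lets them smooth out fine-scale variation."""
    x = np.asarray(x, dtype=float)
    levels = []
    while len(x) > 1:
        pairs = x.reshape(-1, 2)
        levels.append((pairs[:, 0] - pairs[:, 1]) / 2)   # detail
        x = (pairs[:, 0] + pairs[:, 1]) / 2              # approximation
    levels.append(x)                                     # overall average
    return np.concatenate(levels[::-1])

def invariant_coefficients(traces, tol=0.05):
    """Indices of coefficients that barely vary across traces --
    a candidate invariant signature for a goal representation."""
    C = np.stack([haar_pyramid(t) for t in traces])
    return np.flatnonzero(C.std(axis=0) < tol)

# Toy usage: sensory traces sharing a coarse shape, differing in fine noise.
rng = np.random.default_rng(1)
base = np.sin(np.linspace(0.0, np.pi, 64))
traces = [base + 0.1 * rng.normal(size=64) for _ in range(20)]
print(invariant_coefficients(traces))    # mostly coarse-scale (low) indices
```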
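Finally, the projection heuristics themselves are not specified beyond "projecting a learned sensorimotor mapping into the future," so here is one hypothetical instance: a greedy rollout that, at each step, samples candidate motor commands, predicts their outcomes with the learned linear mapping, and commits to the candidate whose predicted state lies closest to the goal. The sampling scheme, horizon, and candidate count are all assumptions.

```python
import numpy as np

def greedy_rollout(W, s0, goal, horizon=20, n_samples=128, rng=None):
    """Roll the learned mapping forward: sample motor commands, predict
    their outcomes as s + W @ m, and take the best one at each step."""
    rng = rng or np.random.default_rng()
    s, plan = np.asarray(s0, dtype=float), []
    for _ in range(horizon):
        candidates = rng.normal(size=(n_samples, W.shape[1]))
        preds = s + candidates @ W.T                 # predicted next states
        best = int(np.argmin(np.linalg.norm(preds - goal, axis=1)))
        plan.append(candidates[best])
        s = preds[best]
    return plan, s

# Usage: a goal chosen to be reachable through the mapping's motor space.
rng = np.random.default_rng(2)
W = rng.normal(size=(4, 2))
goal = W @ np.array([1.5, -0.5])
plan, s_final = greedy_rollout(W, np.zeros(4), goal, rng=rng)
print(np.linalg.norm(s_final - goal))   # small residual, limited by sampling
```

Because every prediction reuses the same mapping learned on-line, the quality of the plan degrades gracefully with the quality of the mapping, which is the appeal of planning through a grounded forward model rather than an abstract one.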