Recently, Deep Neural Networks (DNNs) have become a major tool and model in vision science. However, DNNs often fail unexpectedly. For example, they are highly vulnerable to noise and struggle to transfer their performance from the lab to the real world. In experimental psychology, unexpected failures are often the consequence of unintended cue learning. Rats trained on a colour discrimination task, for instance, may appear to have learned it but fail unexpectedly once the odour of the coloured paint is controlled for, revealing that they exploited an unintended cue (smell) to solve what was intended to be a vision experiment. Here we ask whether the unexpected failures of DNNs, too, may be caused by unintended cue learning. We demonstrate that DNNs are indeed highly prone to picking up on subtle unintended cues: neural networks love to cheat. For instance, in a simple classification …