We present Reflective Decoding, a novel unsupervised method that extends language model decoding to new domains. Given an input, Reflective Decoding first samples contexts that capture the input's meaning, then uses reverse-direction language models to reflect back from those contexts into text. The method is simple and intuitive to apply, requiring only a forward and a backward language model -- no fine-tuning is needed.