How do we perceive the three-dimensional (3D) structure of the world when our eyes sense only 2D projections, like a movie on a screen? Estimating the 3D structure of our environment from a pair of 2D images (like those on our retinae) is mathematically an ill-posed inverse problem, plagued by ambiguities and noise and involving highly nonlinear constraints imposed by multi-view geometry. Given these complexities, it is quite impressive that the visual system constructs 3D representations accurate enough for us to successfully interact with our surroundings. A major area of research in the lab is devoted to understanding how the brain achieves accurate and reliable 3D representations of the world.

A critical aspect of 3D vision is the encoding of 3D object orientation (e.g., the slant and tilt of a planar surface). By adapting mathematical tools used to analyze geomagnetic data (Bingham functions), we developed the first methods for quantifying the selectivity of visual neurons for 3D object orientation. Our work on this topic employs a synergistic, multifaceted approach combining computational modeling, neurophysiological studies, and human psychophysical experiments.
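To make the idea concrete, here is a minimal sketch of how a Bingham-type function can serve as a tuning model over slant and tilt. A plane's orientation is summarized by its unit surface normal, and the Bingham density is antipodally symmetric (it assigns the same value to a normal and its negation), which suits surface orientation since n and -n describe the same plane. The parameterization, names, and parameter values below are illustrative assumptions, not the lab's actual fitting code.

```python
import numpy as np

def normal_from_slant_tilt(slant, tilt):
    """Unit surface normal of a plane with the given slant and tilt (radians).

    Slant is the angle between the normal and the line of sight (z-axis);
    tilt is the direction of the normal's projection in the image plane.
    """
    return np.array([
        np.sin(slant) * np.cos(tilt),
        np.sin(slant) * np.sin(tilt),
        np.cos(slant),
    ])

def bingham_unnormalized(n, A):
    """Unnormalized Bingham density exp(n^T A n) for a unit vector n.

    A is a symmetric 3x3 parameter matrix. Because the quadratic form is
    even in n, the function satisfies f(n) == f(-n) automatically.
    """
    return np.exp(n @ A @ n)

# Illustrative tuning: a rank-1 A = kappa * mu mu^T (a Watson-type special
# case of the Bingham family) peaks at the preferred orientation axis +/-mu.
mu = normal_from_slant_tilt(np.pi / 4, 0.0)   # hypothetical preferred orientation
A = 3.0 * np.outer(mu, mu)                    # kappa = 3.0 sets tuning sharpness

preferred = bingham_unnormalized(mu, A)                              # exp(3)
orthogonal = bingham_unnormalized(normal_from_slant_tilt(3 * np.pi / 4, 0.0), A)  # exp(0) = 1
```

In a fitting context, one would estimate A (or its eigen-decomposition) from a neuron's measured responses across many slant/tilt combinations; here it is simply constructed by hand to show the shape of the model.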