This article provides evidence for a robust “brainprint” of cinematic shot-scales that generalizes across movies, genres, and viewers. We applied a machine-learning method to a dataset of 234 fMRI scans acquired during the viewing of a movie excerpt. Based on a manual annotation of shot-scales in five movies, we trained a computational model that predicts the time series of this feature from brain activity. The model was then applied to fMRI data obtained from new participants who watched either excerpts from the same movies or clips from new movies. In all nine cases, the shot-scale time series predicted by the model correlated significantly with the original annotation. The spatial structure of the model indicates that the experience of cinematic close-ups correlates with activation of the ventral visual stream, the centromedial amygdala, and components of the mentalization network, whereas the experience of long shots correlates with activation of the dorsal visual pathway and the parahippocampus. The shot-scale brainprint is also consistent with the notion that this feature is informed, among other factors, by perceived apparent distance. Based on related theoretical and empirical findings, we suggest that the experience of close and far shots engages different mental models: concrete, contextualized perception dominated by recognition and by visual and semantic memory on the one hand, and action-related processing supporting orientation and movement monitoring on the other.
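The abstract does not specify the study's actual pipeline, so the following is only an illustrative sketch of the train-then-correlate evaluation it describes: a regression model is fit to predict an annotated shot-scale time series from voxel time series, then applied to held-out "new movie" data, and the predicted series is correlated with the annotation. All data here are synthetic and all names (voxel counts, ridge penalty, shot-scale coding) are hypothetical assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for fMRI data: synthetic "voxel" time series
# linearly related to an annotated shot-scale code
# (e.g., 1 = extreme close-up ... 5 = long shot).
n_train, n_test, n_voxels = 300, 120, 50
w_true = rng.normal(size=n_voxels)

def simulate(n):
    """Return (voxels x time) data and its shot-scale annotation."""
    scale = rng.integers(1, 6, size=n).astype(float)
    bold = np.outer(scale, w_true) + rng.normal(scale=2.0, size=(n, n_voxels))
    return bold, scale

X_train, y_train = simulate(n_train)   # annotated training movies
X_test, y_test = simulate(n_test)      # held-out "new movie" data

# Ridge regression, closed form: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_voxels),
                    X_train.T @ y_train)

# Predict the shot-scale time series for unseen data and correlate
# the prediction with the annotation, mirroring the evaluation logic.
y_pred = X_test @ w
r = np.corrcoef(y_pred, y_test)[0, 1]
print(f"correlation with annotation: {r:.2f}")
```

With signal this clean the correlation is high by construction; the point is only the structure of the evaluation (fit on annotated movies, predict on unseen data, correlate), not any claim about effect sizes in real fMRI.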