We discuss how to obtain an accurate and globally consistent self-calibration of a distributed camera network, in which cameras and processing nodes may be spread over a wide geographical area, with no centralized processor and limited ability to communicate a large amount of information over long distances. First, we describe how to estimate the vision graph for the network, in which each camera is represented by a node, and an edge appears between two nodes if the two cameras jointly image a sufficiently large part of the environment. We propose an algorithm in which each camera independently composes a fixed-length message that is a lossy representation of a subset of its detected features, and broadcasts this “feature digest” to the rest of the network. Each receiving camera decompresses the feature digest to recover approximate feature descriptors, robustly estimates the epipolar geometry to reject outliers and grow additional matches, and decides whether sufficient evidence exists to form a vision graph edge. Second, we present a distributed camera calibration algorithm based on belief propagation, in which each camera node communicates only with its neighbors in the vision graph. The natural geometry of the system and the formulation of the estimation problem give rise to statistical dependencies that can be exploited efficiently in a probabilistic framework. The camera calibration problem poses several challenges to information fusion, including missing data, overdetermined parameterizations, and non-aligned coordinate systems. We demonstrate the accurate and consistent performance of the vision graph generation and camera calibration algorithms on a simulated 60-node outdoor camera network.
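To make the vision graph edge decision concrete, the following is a minimal sketch (not the authors' implementation) of the per-receiver step described above: given approximate descriptors and keypoint locations recovered from a sender's feature digest, the receiver matches them against its own features, robustly estimates the epipolar geometry with RANSAC to reject outliers, and declares an edge if enough geometrically consistent matches survive. The thresholds MIN_INLIERS and RATIO_TEST, the function name, and the use of OpenCV's fundamental-matrix routine are all assumptions for illustration.

```python
import numpy as np
import cv2

MIN_INLIERS = 30   # hypothetical threshold on supporting matches
RATIO_TEST = 0.8   # Lowe-style ratio test for putative matches


def decide_vision_graph_edge(sender_desc, sender_pts, recv_desc, recv_pts):
    """Return (has_edge, fundamental_matrix, inlier_mask)."""
    # 1. Putative matches by nearest-neighbor descriptor distance on the
    #    (lossily decompressed) sender descriptors vs. the receiver's own.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(sender_desc.astype(np.float32),
                           recv_desc.astype(np.float32), k=2)
    good = [m for m, n in knn if m.distance < RATIO_TEST * n.distance]
    if len(good) < 8:   # need at least 8 correspondences to estimate F
        return False, None, None

    p1 = np.float32([sender_pts[m.queryIdx] for m in good])
    p2 = np.float32([recv_pts[m.trainIdx] for m in good])

    # 2. Robust epipolar geometry (fundamental matrix) via RANSAC rejects
    #    outlier matches caused by the lossy descriptor compression.
    F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        return False, None, None

    # 3. Edge decision: enough geometrically consistent matches?
    n_inliers = int(mask.sum())
    return n_inliers >= MIN_INLIERS, F, mask
```

In this sketch the edge decision is a simple inlier-count test; the surviving fundamental matrix could also seed the growth of additional matches along epipolar lines, as the text describes.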