Revisiting Neural Processes via Fourier Transform and Volterra Series
International Conference on Machine Learning (ICML) 2026
P. Mohseni, N. Duffield and R. K. W. Wong

Abstract

Many phenomena in science and engineering involve unknown latent functions observed through finite, irregularly sampled measurements. Neural processes (NPs) offer a powerful framework for probabilistic functional inference by bridging stochastic processes with deep learning. In many domains, these functions exhibit symmetries—most notably translation equivariance—that can be exploited to improve sample efficiency and generalization. Existing translation-equivariant NPs, however, have two key limitations: (i) they are constructed by stacking generic components with nonlinearities, obscuring the induced function class and thus limiting interpretability; and (ii) convolutional designs are based on localized receptive fields and require dense discretization, while attention-based methods avoid these issues but scale quadratically with the number of observations. We address these challenges through two contributions. First, we characterize continuous translation-equivariant operators through their Volterra expansions, representing them as sums of higher-order convolutions. This yields analytical transparency while remaining amenable to efficient approximation by first-order convolution operators. Second, we introduce set Fourier convolutions (SFConvs), a frequency-domain parameterization that operates directly on irregularly sampled sets. SFConvs achieve global receptive fields without spatial discretization and scale linearly in the number of observations. Building on these ideas, we propose two families of conditional NPs (CNPs): SFConvCNPs, constructed by stacking SFConv blocks with nonlinearities, and SFVConvCNPs, which integrate the Volterra formulation. Experiments on synthetic and real-world datasets demonstrate the efficacy of our methods compared to state-of-the-art baselines.
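To make the core idea concrete, the following minimal sketch (not the paper's actual SFConv layer; all function names, the fixed frequency grid, and the low-pass filter are illustrative assumptions) shows a frequency-domain convolution applied directly to an irregularly sampled set: the observations are projected onto a fixed set of frequencies, multiplied pointwise by a spectral filter, and evaluated back at arbitrary target locations, giving a global receptive field at cost linear in the number of observations.

```python
import numpy as np

def set_fourier_conv(x_ctx, y_ctx, x_tgt, freqs, filt):
    """Illustrative frequency-domain convolution over an irregularly sampled set.

    x_ctx : (N,) irregular context locations
    y_ctx : (N,) context values
    x_tgt : (M,) target locations at which to evaluate the output
    freqs : (K,) fixed frequency grid
    filt  : (K,) spectral filter (learnable in practice; fixed here for illustration)

    Cost is O((N + M) * K): linear in the number of observations.
    """
    # "Analysis" step: project the observed set onto the chosen frequencies
    # (a nonuniform discrete Fourier transform of the context set).
    coeffs = np.exp(-2j * np.pi * np.outer(freqs, x_ctx)) @ y_ctx          # (K,)
    # Pointwise multiplication in the frequency domain = global convolution in space.
    coeffs = filt * coeffs                                                  # (K,)
    # "Synthesis" step: evaluate the filtered signal at arbitrary target locations,
    # with no spatial discretization of the input domain.
    return np.real(np.exp(2j * np.pi * np.outer(x_tgt, freqs)) @ coeffs) / len(freqs)

# Toy usage (assumed setup): smooth 20 irregular observations with a low-pass filter.
rng = np.random.default_rng(0)
x_ctx = np.sort(rng.uniform(0.0, 1.0, 20))
y_ctx = np.sin(2 * np.pi * x_ctx) + 0.1 * rng.standard_normal(20)
freqs = np.arange(-8, 9)                      # K = 17 frequencies
filt = np.exp(-0.05 * freqs**2)               # illustrative low-pass spectral filter
x_tgt = np.linspace(0.0, 1.0, 100)
y_tgt = set_fourier_conv(x_ctx, y_ctx, x_tgt, freqs, filt)
```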