Thursday, November 30, 2023 3pm to 4pm
About this Event
The manifold hypothesis states that many real-world high-dimensional data sets actually lie along low-dimensional manifolds inside the high-dimensional space. In neuroscience and deep learning, the neural manifold hypothesis postulates that the activity of a neural population forms a low-dimensional manifold whose structure reflects that of the encoded task variables. In this talk, we discuss how to exploit extrinsic Riemannian geometry to quantify the structure of (neural) manifolds and shed light on the inner workings of natural and artificial neural networks. First, we introduce a numerical implementation of the building blocks of differential geometry provided by the Python package Geomstats. Second, we apply this differential geometric approach to the study of simulated and real neural recordings, and recover geometric structures expected to exist in hippocampal place cells. We hope to open new avenues of research that reveal the geometric foundations of natural and artificial intelligence.
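As a minimal illustration of the extrinsic viewpoint mentioned above (a NumPy sketch, not the Geomstats API itself), the geodesic distance between two points on the unit sphere can be computed directly from their coordinates in the ambient Euclidean space:

```python
import numpy as np

def sphere_geodesic_dist(x, y):
    """Geodesic (great-circle) distance between two unit vectors,
    computed extrinsically from the embedding in R^3."""
    # Clip guards against floating-point values slightly outside [-1, 1].
    cos_angle = np.clip(np.dot(x, y), -1.0, 1.0)
    return np.arccos(cos_angle)

# Two points on the unit 2-sphere: the north pole and a point on the equator.
north = np.array([0.0, 0.0, 1.0])
equator = np.array([1.0, 0.0, 0.0])

d = sphere_geodesic_dist(north, equator)
print(d)  # pi/2: a quarter of a great circle
```

Geomstats packages this kind of computation (exponential and logarithm maps, geodesic distances, parallel transport) for many manifolds behind a common interface; the sphere example here only sketches the underlying idea.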