Facilitating spatial integration of whole brain imaging data with deep learning
S. Bednarek, M. Stefaniuk, M. Pawłowska, K. Nowicka, L. Kaczmarek, D.K. Wójcik, P. Majka
Presenting author:
S. Bednarek
Growing interest in combining optical tissue clearing techniques with light-sheet fluorescence microscopy (LSFM) imaging in neuroanatomical contexts necessitates the development of specialized computational methods. Among the challenges that remain largely unaddressed is the robust registration of images of diverse contrasts to brain atlases, e.g., the Allen Institute mouse brain atlas.

We demonstrate that the accuracy of cross-modal image registration based on traditional image similarity metrics, such as Mattes mutual information, is inherently limited. To rectify this, we introduce a deep learning-based solution that converts the cross-modal registration problem into a much more robust unimodal one.
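
For context, a conventional cross-modal registration of this kind, driven by the Mattes mutual information metric, can be set up along the following lines in SimpleITK. This is a minimal sketch of the baseline approach only; the file names and parameter values are illustrative assumptions, not the configuration evaluated in this work.

import SimpleITK as sitk

# Hypothetical inputs: an atlas template and an LSFM autofluorescence volume.
fixed = sitk.ReadImage("atlas_template.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("lsfm_autofluorescence.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
# Mattes mutual information: the traditional metric for cross-modal contrast.
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.01)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
# Start from a geometry-centred affine initialization.
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY),
    inPlace=False)

transform = reg.Execute(fixed, moving)
resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)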

Our solution comprises a U-Net-like network for detecting individual neurons in high-resolution LSFM images. The obtained neuronal density maps facilitate semantic segmentation of the autofluorescence image into distinct anatomical structures. The resulting segmentation is then used to perform highly accurate label-guided registration to the atlas. By applying this procedure to over thirty murine brains, we generated two new, population-based, synthetic modalities of the Allen Institute mouse brain atlas.
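
A minimal sketch of a U-Net-like detection network of the kind described, written in PyTorch, is given below. The depth, channel widths, and single-channel 2D input are assumptions made for illustration; the network used in this work may differ in architecture and training details.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        # 1x1 convolution produces a per-pixel neuron detection map.
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Decoder path with skip connections from the encoder.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))

# Example: one 256x256 LSFM image patch in, a detection probability map out.
net = UNet()
patch = torch.randn(1, 1, 256, 256)
prob_map = net(patch)  # shape (1, 1, 256, 256)

Detection maps of this kind can then be aggregated into neuronal density maps, from which the anatomical segmentation driving the label-guided registration is derived.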

The advantages of this approach are demonstrated in studies of c-Fos-mediated neuroplasticity employing iDISCO optical tissue clearing and LSFM imaging. Additionally, the new atlas modalities enabled improved segmentation of the amygdalar complex and the hypothalamus, among other regions.