ED Sciences de la Vie et de la Santé
Molecular mechanisms underlying the surface organization of the NMDA receptors during development
by Nathan BENAC (Institut Interdisciplinaire de Neurosciences)
The defense will take place at 14h00 - Amphithéâtre, Centre Broca Nouvelle-Aquitaine, 146 rue Léo Saignat, 33700 Bordeaux
in front of the jury composed of
- Laurent GROC - Research Director - Université de Bordeaux - Thesis supervisor
- Alexander DITYATEV - Professor - University of Magdeburg - Examiner
- Cyril HERRY - Research Director - Université de Bordeaux - Examiner
- Cécile CHARRIER - Research Director - ENS Paris - Reviewer
- Sabine LEVI - Research Director - ESPCI Paris - Reviewer
Understanding how neurons develop to form organized patterns of synaptic connections remains a central question in neuroscience. The vast majority of excitatory synapses are formed early in development, during a synaptogenesis window. N-methyl-D-aspartate receptors (NMDARs) have long been strong candidates to drive synaptogenesis, as both in vivo and in vitro data show a key role for NMDARs during that phase. Furthermore, the facts that NMDARs are found in developmentally immature "silent" synapses and are among the first receptors to accumulate at nascent synapses together support the assumption that NMDAR clustering acts as a nucleation point. Yet, the mechanisms underpinning the early clustering of NMDARs into synaptogenic assemblies remain enigmatic. Evidence that NMDARs can directly interact with other surface proteins, including receptors, has raised the possibility that surface protein-protein interactions (PPIs) represent a potent way to cluster receptors. Using a combination of live imaging and super-resolution microscopy, we observed that the interaction between D1Rs and GluN1-NMDARs was promoted in immature neurons, during the synaptogenesis phase. We showed that the D1R-GluN1-NMDAR interaction directly shapes the organization of NMDARs, allowing their functional clustering and synaptogenesis. Indeed, preventing the interaction in immature neurons, but not in mature neurons, altered the formation of excitatory post-synapses. We then focused on the intracellular and extracellular mechanisms regulating the interaction. We demonstrated a role of metabotropic glutamate receptors (mGluRs) and casein kinase 1 (CK1) in promoting the interaction between D1Rs and GluN1-NMDARs.
In parallel, the fact that hyaluronic acid (HA), one of the main components of the extracellular matrix (ECM), is enriched early in the immature brain and regulates the surface diffusion of macromolecules raises the hypothesis that the ECM regulates the ability of NMDARs to interact with other surface macromolecules, including D1Rs. Yet, classical approaches have mainly focused on degrading the ECM. Herein, we aimed at increasing the HA content of the ECM by over-expressing either the wild-type form of the rat hyaluronan synthase 2 (HAS2) or one bearing the two point mutations present in the naked mole-rat (NMR; N178S and N301S), which produces very high molecular weight HA (vHMW-HA). We observed that increasing the matrix impaired neuronal development and modified both the surface organization and the trafficking of NMDARs. These findings validate our strategy and open new paths for investigating the role of the ECM in neuronal development.
ED Sociétés, Politique, Santé Publique
Biomimetic movement-based prosthesis control: dataset of natural movements and reference frame transformation for real-world settings
by Bianca LENTO (Institut de neurosciences cognitives et intégratives d'Aquitaine)
The defense will take place at h00 - Amphithéâtre, 2 Rue Dr Hoffmann Martinot, Bâtiment Bordeaux Biologie Santé, 33000 Bordeaux
in front of the jury composed of
- Aymar DE RUGY - Research Director - INCIA UMR 5287, CNRS, Université de Bordeaux - Thesis supervisor
- Peter Ford DOMINEY - Research Director - INSERM UMR 1093, CAPS, Université Bourgogne Franche-Comté - Reviewer
- Christine AZEVEDO COSTE - Research Director - INRIA, Université de Montpellier - Reviewer
- Pauline MAURICE - Research Associate - CNRS - Examiner
- Jean-Louis VERCHER - Emeritus Research Director - ISM UMR 7287, CNRS, Aix-Marseille Université - Examiner
Myoelectric controls for transhumeral prostheses often lead to high rates of device abandonment due to their unsatisfactory performance. Building on advances in movement-based prosthesis control, we refined an alternative approach using an artificial neural network trained on natural arm movements to predict the configuration of distal joints based on proximal joint motion and movement goals. Previous studies have shown that this control strategy enabled individuals with transhumeral limb loss to control a prosthesis avatar in a virtual reality environment as well as they could with their valid arm. Yet, deploying this control system in real-world settings requires further development. A head-mounted camera and computer vision algorithms need to be integrated into the system for real-time object pose estimation. In this setup, object information might only be available in a head-centered reference frame, whereas our control relies on the object being expressed in a shoulder-centered reference frame. Taking inspiration from how the brain executes coordinate transformations, we developed and tested solutions to perform the required head-to-shoulder transformation from orientation-only data, possibly available in real-life settings. To develop these algorithms, we gathered a dataset reflecting the relationship between these reference frames by involving twenty intact-limbed participants in picking and placing objects at various positions and orientations in a virtual environment. This dataset included head and gaze motion, along with movements of the trunk, shoulders, and arm joints, capturing the entire kinematic chain between the movement goal and the hand moved to reach it. Following data collection, we implemented two methods to transform target information from the head to the shoulder reference frame.
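The quantity both methods must supply can be illustrated with a minimal rigid-transform sketch: a point known in the head frame is mapped into the shoulder frame through a rotation (available from orientation sensors) and a translation (the head origin in the shoulder frame, which is the unknown to be estimated). The yaw-only rotation and all numbers below are illustrative assumptions, not the thesis implementation:

```python
import math

def rot_z(yaw):
    """3x3 rotation matrix about the vertical axis (a yaw-only
    simplification of the full 3-D head orientation)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def head_to_shoulder(p_head, head_yaw, head_origin_in_shoulder):
    """Express a point given in the head frame in the shoulder frame:
    p_shoulder = R(head_yaw) @ p_head + t.
    R comes from orientation data; the translation t (head origin in
    the shoulder frame) is what the two methods must estimate."""
    rotated = mat_vec(rot_z(head_yaw), p_head)
    return [r + t for r, t in zip(rotated, head_origin_in_shoulder)]

# Hypothetical example: an object 0.5 m straight ahead of the head,
# head yawed 90 degrees, head origin 0.2 m above the shoulder.
p = head_to_shoulder([0.5, 0.0, 0.0], math.pi / 2, [0.0, 0.0, 0.2])
```

The sketch makes the problem concrete: with orientation-only data, the translation term is missing, and the two methods below are two ways of recovering it.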
The first is an artificial neural network trained offline on the dataset to predict the head position in the shoulder frame given the ongoing shoulder and head orientations and the participant's height. The second method draws inspiration from multisensory integration in the brain. It derives the head position in the shoulder frame by comparing data about the prosthetic hand obtained in the shoulder frame through forward kinematics and, simultaneously, in the head frame through computer vision. Inspired by the brain's mechanisms for peripersonal space coding, we encoded this difference in a spatial map by adapting online the weights of a single-layer network of spatially tuned neurons. Experimental results on twelve intact-limbed participants controlling a prosthesis avatar in virtual reality demonstrated persistent errors with the first method, which failed to adequately account for the specificities of each user's morphology, resulting in significant prediction errors and ineffective prosthesis control. In contrast, the second method yielded much better results and effectively encoded the head-to-shoulder transformation associated with different targets in space. Despite requiring an adaptation period, subsequent performances on already explored targets were comparable to the ideal scenario. The effectiveness of the second method was also tested on six participants with transhumeral limb loss in virtual reality, and a physical proof of concept was implemented on a teleoperated robotic platform with simple computer vision to assess feasibility in real-life settings. One intact-limbed participant controlled the robotic platform REACHY 2 to grasp cylinders on a board. ArUco markers on the robot's end effector and on the cylinders, coupled with a gaze-guided computer vision algorithm, enabled precise object pose estimation.
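The second method's online spatial map can be sketched as a small single-layer network of Gaussian (spatially tuned) units: each unit's weights store an offset estimate for its region of the workspace, adapted by a delta rule from the discrepancy between the forward-kinematics and vision-based hand positions. The unit centers, tuning width, learning rate, and the delta rule itself are all illustrative assumptions, not the thesis code:

```python
import math

class SpatialMap:
    """Single-layer network of spatially tuned units. Each unit has a
    Gaussian receptive field over target positions; its weights hold a
    3-D offset estimate (head origin in the shoulder frame) for that
    region of space, adapted online (illustrative delta rule)."""

    def __init__(self, centers, sigma=0.15, lr=0.5):
        self.centers = centers                       # preferred targets (m)
        self.sigma = sigma                           # receptive-field width (m)
        self.lr = lr                                 # learning rate
        self.w = [[0.0, 0.0, 0.0] for _ in centers]  # per-unit offset weights

    def _activity(self, target):
        """Normalized Gaussian population activity at a target position."""
        a = [math.exp(-sum((t - c) ** 2 for t, c in zip(target, ctr))
                      / (2.0 * self.sigma ** 2)) for ctr in self.centers]
        s = sum(a)
        return [x / s for x in a]

    def predict(self, target):
        """Population-weighted offset estimate at this target."""
        a = self._activity(target)
        return [sum(a[i] * self.w[i][d] for i in range(len(a)))
                for d in range(3)]

    def update(self, target, observed_offset):
        """Delta rule: observed_offset is the discrepancy between the hand
        position from forward kinematics (shoulder frame) and the one from
        computer vision (head frame, rotated into shoulder orientation)."""
        a = self._activity(target)
        pred = self.predict(target)
        for i, ai in enumerate(a):
            for d in range(3):
                self.w[i][d] += self.lr * ai * (observed_offset[d] - pred[d])

# Hypothetical usage: repeated reaches near one target let the map
# converge on a 0.2 m vertical offset for that region of space.
m = SpatialMap(centers=[[0.3, 0.0, 0.0], [0.3, 0.3, 0.0]])
for _ in range(50):
    m.update([0.3, 0.1, 0.0], [0.0, 0.0, 0.2])
est = m.predict([0.3, 0.1, 0.0])
```

Because the units are spatially tuned, the correction learned at one target generalizes to nearby targets while leaving distant regions of the map untouched, which is consistent with the adaptation period and the subsequent good performance on already explored targets described above.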
The results of this proof of concept suggest that, despite challenges in object detection, our bio-inspired spatial map operates effectively in real-world scenarios. This method also shows promise for handling complex scenarios involving errors in position and orientation, such as a moving camera or perturbed environments.