My research goal is to uncover how hippocampal circuits produce memory and how they are disrupted in neurological diseases. My background in both theoretical physics and clinical medicine enables me to apply innovative yet rigorous approaches to this crucial problem. Outside the lab, I'm looking for opportunities to play piano and beach volleyball in Tokyo.
I am a nonlinear dynamics researcher with a focus on coupled oscillators and excitable systems. Currently, I am studying seizure dynamics in neuronal models to deepen our understanding of epileptic states. My research combines mathematical modeling, numerical simulation, and theoretical analysis.
My PhD research focused on using machine learning to predict quantum-mechanical forces in chemical systems. My current research explores how spiking neural networks can help us better understand epilepsy. I have broad interests and enjoy coming up with creative solutions to problems. I usually have multiple projects on the go and look forward to collaborating with others here at RIKEN!
I am a third year undergraduate student at Harvard studying computational neuroscience and English. My current research involves using recurrent neural networks to investigate early-stage Alzheimer's disease.
I have recently completed my Master’s in Neuroscience at UCL and will be starting my PhD there in September. My current research involves grid cell continuous attractor networks. I will be modeling Alzheimer’s disease and its associated loss of navigational capabilities.
Despite rapid progress in the development of anticonvulsant medications, about a third of epilepsy patients suffer from drug-resistant forms of the disease and require alternative, non-pharmacological therapies. One such alternative is the ketogenic diet, a dietary treatment that replaces carbohydrates with fat, thereby switching the body's main source of ATP production. Although the diet has been practiced since the 1920s, the precise metabolic pathways underlying its efficacy remain poorly understood. Our model addresses a metabolic feedback mechanism induced by the ketogenic diet: the change in the main source of ATP production activates ATP-dependent potassium channels, which generate an additional outward potassium flux that hyperpolarizes the membrane. This hyperpolarization decreases neuronal excitability and firing frequency, altering the seizure-related onset of synchrony and providing a new means of switching neuronal populations to healthy states with asynchronous, sparse spiking activity.
To model the metabolic feedback associated with the ketogenic diet, we coupled a neuronal population that includes ATP-dependent potassium channels to an additional equation describing the dynamics of the global ATP concentration. We derived a corresponding mean-field model based on the formalism of next-generation neural mass models, for which we performed a detailed bifurcation analysis revealing two distinct scenarios connecting normal asynchronous and seizure-like synchronous activity. We found supercritical (continuous) as well as subcritical (hysteretic) transitions involving multistability, from which the synchronous seizure-like states emerge. Building on this analysis, we demonstrated three potential control mechanisms for switching between asynchronous and synchronous states: parametric perturbation of the ATP production rate, external stimulation currents, or pulse-like ATP shocks. We also indicated likely therapeutic advantages of the hysteretic scenarios.
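This kind of coupling can be sketched with a minimal toy model in the spirit of next-generation (Montbrió–Pazó–Roxin-type) neural mass equations, with a slow ATP variable gating a K_ATP conductance. The equations, the Hill-type K_ATP activation, and all parameter values below are illustrative assumptions for intuition only, not the published model.

```python
import numpy as np

def simulate(T=100.0, dt=1e-3, eta=1.0, delta=0.3, J=15.0,
             g_max=2.0, v_K=-2.0, a0=1.0, a_half=0.5,
             eps=0.05, k_c=0.8):
    """Euler integration of a toy mean-field population (rate r, mean
    voltage v) coupled to a global ATP concentration a. Illustrative only."""
    n_steps = int(T / dt)
    r, v, a = 0.1, -1.0, 1.0
    rates = np.empty(n_steps)
    for i in range(n_steps):
        # K_ATP conductance rises as ATP falls (assumed Hill form)
        g_K = g_max / (1.0 + (a / a_half) ** 4)
        dr = delta / np.pi + 2.0 * r * v
        dv = v * v + eta + J * r - (np.pi * r) ** 2 - g_K * (v - v_K)
        # ATP: production toward baseline minus activity-dependent consumption
        da = eps * (a0 - a) - eps * k_c * r
        r = max(r + dt * dr, 0.0)
        v = v + dt * dv
        a = max(a + dt * da, 0.0)
        rates[i] = r
    return rates
```

Sweeping the ATP-related parameters (here `a0` or `eps`) plays the role of the parametric perturbations discussed above, shifting the population between high-activity and low-activity regimes.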
Eydam S, Franović I & Kang L. Control of seizure-like dynamics in neuronal populations with excitability adaptation related to ketogenic diet. Chaos 34, 053128 (2024).
The hippocampal region CA3 is believed to produce memories from incoming sensory information. Before arriving at CA3, this information stream is split along two neural pathways with different encoding properties: one is more correlated than the other. We construct a model of CA3 that incorporates this architecture to investigate its computational purpose. Our model reveals that decorrelated encodings maintain distinctions between similar experiences, whereas correlated encodings build concepts from them. This explains how the hippocampus forms distinct memories of separate visits with your grandmother while integrating them through “grandmother cells” that respond to many different presentations of her.
Our model proposes that example-like and concept-like encodings are accessed at different phases of the theta oscillation, a dominant brain rhythm in the hippocampus. Its predictions stand up to extensive experimental tests using publicly available neural recordings. Finally, we extend our insights from the hippocampus to machine learning by introducing a novel HalfCorr loss function that endows neural networks with CA3-like complementary encodings. HalfCorr networks outperform networks with only a single encoding type in a multitask learning paradigm, demonstrating how computational advantages found within neural systems can be unlocked through bio-inspired artificial intelligence.
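The idea behind such a loss can be sketched as a decorrelation penalty applied to only half of a layer's units. This is a hypothetical illustration of the concept, not the paper's exact HalfCorr definition, and it is written in plain NumPy for clarity rather than in an autodiff framework.

```python
import numpy as np

def halfcorr_penalty(h):
    """Decorrelation penalty on the first half of a layer's units.

    h: (batch, units) array of activations. The first half of the units
    is pushed toward pairwise decorrelation (example-like codes), while
    the second half is left unconstrained (free to form correlated,
    concept-like codes).
    """
    half = h[:, : h.shape[1] // 2]
    z = half - half.mean(axis=0, keepdims=True)
    z = z / (z.std(axis=0, keepdims=True) + 1e-8)
    corr = z.T @ z / z.shape[0]             # sample correlation matrix
    off = corr - np.diag(np.diag(corr))     # zero out the diagonal
    return float((off ** 2).mean())         # mean squared off-diagonal
```

During training one would add a weighted version of this penalty to the task loss, so that gradient descent decorrelates one half of the layer while leaving the other half free.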
Kang L & Toyoizumi T. Distinguishing examples while building concepts in hippocampal and artificial networks. Nat Commun 15, 647 (2024).
Kang L & Toyoizumi T. Hopfield-like model with complementary encodings of memories. Phys Rev E 108, 054410 (2023).
Our brains maintain internal representations of values related to the external world, which allows us to, for example, find our way to the door if the lights go off. Continuous attractor networks are one class of neural circuit used to accomplish this. They contain localized regions of activity, called attractor bumps, whose positions can encode the value of a continuous variable. However, the brain is teeming with biological noise that perturbs the positions of the bumps and compromises the accuracy of the network.
We uncover a new means by which continuous attractor networks can enhance their robustness to noise: distributing their activity across multiple attractor bumps instead of concentrating it within a single bump. While such configurations had been considered by researchers, the connection between bump number and noise resilience had not been appreciated. This observation contributes to our fundamental knowledge of attractor networks, and it may help to explain why the mammalian grid cell network appears to have evolved a multi-bump configuration.
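Multi-bump configurations of this kind can be illustrated with a toy rate network on a ring, where difference-of-Gaussians connectivity (short-range excitation, longer-range inhibition) sets a preferred bump spacing. The connectivity shape and all parameter values below are illustrative choices, not those of the paper.

```python
import numpy as np

def ring_attractor(n=256, sig_e=6.0, sig_i=18.0, A=50.0, B=35.0,
                   drive=0.5, dt=0.1, steps=2000, seed=0):
    """Rate units on a ring with short-range excitation and longer-range
    inhibition (difference of Gaussians). Returns steady-state rates."""
    rng = np.random.default_rng(seed)
    x = np.arange(n)
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, n - d)                     # wrap-around distance
    W = (A * np.exp(-d**2 / (2 * sig_e**2))
         - B * np.exp(-d**2 / (2 * sig_i**2))) / n
    h = 0.1 * rng.standard_normal(n)             # random initial state
    for _ in range(steps):
        r = np.clip(h, 0.0, 1.0)                 # saturating rectified rates
        h += dt * (-h + W @ r + drive)
    return np.clip(h, 0.0, 1.0)

def count_bumps(r):
    """Count contiguous above-half-max regions around the ring."""
    above = r > 0.5 * r.max()
    return int(np.sum(above & ~np.roll(above, 1)))
```

With excitation narrow relative to the ring, the network settles into several coexisting bumps; widening the excitatory footprint (or shrinking the ring) pushes it toward a single bump.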
Wang R & Kang L. Multiple bumps can enhance robustness to noise in continuous attractor networks. PLOS Comput Biol 18, e1010547 (2022).
To operate effectively, the enormous number of neurons in brain circuits must coordinate their activity. Detecting signatures of coordination in large, complex sets of neural data may help us understand neural computation. One such signature is topological structure, such as loops and voids, formed by the data in high-dimensional phase space.
Persistent cohomology is a powerful technique for discovering topological structure in data, but strategies for its use in neuroscience are still under development. We explore its application to the brain’s spatial representation system, and our results suggest guidelines for applying persistent cohomology to experimental neural recordings.
Kang L, Xu B & Morozov D. Evaluating state space discovery by persistent cohomology in the spatial representation system. Front Comput Neurosci 15, 616748 (2021).
The entorhinal cortex (EC) contains grid cells, each of which fires only when we approach certain locations that form a triangular lattice in space. There is experimental evidence that the grid cell network can be modeled as a continuous attractor, in which neural activity evolves through a set of attractor states that represent different positions in the 2D environment.
However, existing attractor models did not capture several key phenomena exhibited by the grid system: grid cells belong to discrete modules, suggesting that spatial information is discretized in memory, and they can fire in rapid sequences that may be related to memory consolidation or planning. Through simulations, we demonstrated how these phenomena arise in continuous attractors with the addition of experimentally observed or biologically plausible features of EC. Our results suggest mechanisms through which the hippocampal region performs memory-related computations.
Kang L & Balasubramanian V. A geometric attractor mechanism for self-organization of entorhinal grid modules. eLife 8, e46687 (2019).
Kang L & DeWeese MR. Replay as wavefronts and theta sequences as bump oscillations in a grid cell attractor network. eLife 8, e46351 (2019).
28 Jun 2024
Hope to see you in Fukuoka at the Japan Neuroscience Society annual meeting!
17 Jun 2024
Mira, an undergraduate student at Harvard University, and Amith, a recent Master's graduate from University College London, have arrived as CBS Summer Program interns. They will be following in Taka's footsteps and continuing our investigation of Alzheimer's disease.
31 May 2024
Taka has worked with us for the last two months and made significant contributions to a new project on Alzheimer's disease. He is a medical student at the University of Tokyo, and although he is interested in clinical medicine, we hope that he remains active in computational neuroscience research as well.
22 May 2024
Sebastian's paper on the ketogenic diet has been published in Chaos. He guided the project from conceptualization to publication in collaboration with Igor Franović from the University of Belgrade.
04 Feb 2024
Ismaeel will be presenting “Seizure susceptibility is related to task computations in recurrent neural networks” and Sebastian, in collaboration with Igor Franović from the University of Belgrade, will be presenting “Metabolic dynamics shapes neural activity: a framework for control of epilepsy” at Cosyne 2024. Please visit their posters if you are headed to Lisbon.
20 Jan 2024
Our paper connecting our model to hippocampal recordings and AI neural networks has been published in Nature Communications. We uncover signatures predicted by our model in recordings of mouse hippocampus, and, inspired by these biological findings, we propose a new loss function that improves neural network performance in multitask learning.
01 Dec 2023
Our model for storing both example-like and concept-like memories has been published in Physical Review E. We present detailed mathematical derivations of its capacities in the mean-field limit as well as simulations that characterize its ability to link memories across types.
01 Nov 2023
Louis will be sharing his work with Taro Toyoizumi at Neuroscience 2023. It represents the culmination of a large project on storing memories at both more specific and more generalized scales. Look out soon for our manuscripts, which have already been accepted for publication.