Karen Moxon: Decoding the Brain


Karen Moxon, professor of biomedical engineering, in her lab at UC Davis. Photo by Reeta Asmai/UC Davis.

By Aditi Risbud Bartl 

In the last decade, researchers in academia and the technology sector have been racing to unlock the potential of artificial intelligence. In parallel with federally funded efforts from the National Institutes of Health and the National Science Foundation, heavy-hitters such as Microsoft, Facebook and Google are deeply invested in artificial intelligence.

As part of the BRAIN Initiative, many UC Davis investigators are studying the nervous system and developing new technologies to investigate brain function.

Reverse-engineering the brain is central to reproducing human intelligence. Yet, experts say, most efforts to design artificial brains have paid little attention to real ones. By understanding how our brains work, we can leverage artificial intelligence to test new drug therapies for brain disorders, and one day even treat neurological disorders such as Alzheimer’s disease or Parkinson’s disease.

At UC Davis, Karen Moxon, a powerhouse researcher in the field of neuroengineering and professor of biomedical engineering, focuses on understanding how information is represented in the brain and how that representation is affected by spinal injury, stroke, or other brain damage.

Early in her career, Moxon contributed to the first demonstration of a closed-loop, real-time brain-machine interface system in an animal model that was quickly translated to non-human primates and, more recently, to humans with neurological disorders. This work has spurred an entirely new discipline within neuroengineering, called brain-machine interface, which has had a global impact.

“How does the brain encode information? We know we have billions of neurons, and somehow the activity of these cells conveys information. But we don’t really know the mechanism,” says Moxon. “I really want to understand how the brain encodes information and transforms it into action.”

Unlocking electrical signals to understand brain damage

Our neurons relay information to and from other cells in the body through electrical pulses called action potentials, or “spikes,” named for the way they look in a recording trace. By recording when these spikes occur across many different cells simultaneously, researchers can analyze the resulting patterns to learn how information is processed.
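The idea of turning spike times into analyzable patterns can be illustrated with a minimal sketch. The spike times below are hypothetical, not data from Moxon’s lab; the approach shown (binning each neuron’s spike times into counts per time window) is a common first step in spike-train analysis, not a description of her specific methods.

```python
import numpy as np

# Hypothetical spike times (in seconds) for three simultaneously recorded neurons.
spike_times = {
    "neuron_a": [0.05, 0.12, 0.31, 0.48, 0.52],
    "neuron_b": [0.10, 0.11, 0.13, 0.40],
    "neuron_c": [0.02, 0.25, 0.55, 0.58, 0.59, 0.60],
}

def bin_spikes(times, duration=0.6, bin_width=0.2):
    """Count spikes in fixed time bins, turning a spike train into a pattern vector."""
    edges = np.arange(0.0, duration + bin_width, bin_width)
    counts, _ = np.histogram(times, bins=edges)
    return counts

# Stack the per-neuron count vectors into a population "pattern" (neurons x time bins).
# Decoding algorithms work on matrices like this one to infer what the brain is doing.
pattern = np.vstack([bin_spikes(t) for t in spike_times.values()])
print(pattern)
```

Each row is one neuron and each column one 200-millisecond window; changes in these patterns after injury are the kind of signal Moxon’s computational analyses look for.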

Damage to the brain that occurs after spinal cord injury, stroke, or at the onset of Parkinson’s or Alzheimer’s can change the way information is represented, altering the patterns of spikes. “These patterns of spikes are the language the brain uses to convey information,” Moxon explains.

In particular, Moxon uses computational approaches to study how changes in neural encoding contribute to recovery of function after spinal cord injury. A spinal cord injury is also a brain injury, Moxon explains, because the fibers that brain neurons use to convey information, called axons, extend all the way down the spinal cord. A spinal cord injury results in inflammation in the brain and other damage; what’s more, even if the spinal cord could somehow be repaired, the brain would have to relearn how to interact with it.

“As an engineer, I’m trying to decode a person’s intention to move”

“As an engineer, I’m trying to decode a person’s intention to move after a spinal cord injury. If we can do this, we could use our understanding of their intention to control stimulation to the spinal cord below the level of the lesion to restore function,” says Moxon.

Moxon says there are two major challenges to creating reliable brain-machine interfaces: optimizing the devices that record signals from the brain, and—likely the bigger issue—decoding complex information in the brain and cognitive intent.

Techniques such as electroencephalography, or EEG, in which small electrodes attached to your scalp monitor electrical activity of the brain, can be used for simple tasks like moving a cursor on a computer, but not for complex movements of limbs or cognitive intent.

“We need to get a better signal. To do that, we have to go into the brain,” Moxon says. Unsurprisingly, implanting electrodes into the brain is not easy. “But electrodes implanted directly into the brain aren’t reliable. A person has a spinal cord injury, and now you put them through brain surgery without knowing how well the implant will work. The best implants might last up to ten years, but that is rare. It’s more like three to four years.”

Harnessing different approaches to shed light on the brain

As part of a broader effort at UC Davis, Moxon is working with other researchers to investigate how to get more information reliably from electrodes implanted directly into the brain. For example, Erkin Seker, an associate professor of electrical and computer engineering at UC Davis, is developing nanostructured materials that mimic the extracellular spaces in the brain to make electrodes more biocompatible. Weijian Yang, an assistant professor of electrical and computer engineering at UC Davis, is developing new technology for calcium imaging to gain insight into how the brain encodes information.

Although researchers are getting better signals, decoding the complex information in the brain and knowing the cognitive intent is a tremendous challenge.

“If I’m studying how the brain encodes information, I’m building a decoder that records information and decodes the signal. So, I might know a lot about how the brain encodes information, but I don’t know everything, and I can only get it right about 70 to 80 percent of the time,” Moxon says. “But if you are someone who is relying on that decoder, it’s not really going to be good enough—the user of these systems is going to need 100 percent reliability for decades.”

Even after a successful implantation, the electrode interface has to transfer signals to—for example—a new robotic arm that can be programmed to perform a set of tasks. Many investigators are developing complex robotic arms that can be controlled by the brain to allow those with neurological injury or disease to regain some control over their environment, such as steering the robot arm to pick up a glass of water, bring it to their mouth and drink.

“People are sort of OK with robotics, but in the long run, we need much better information from the brain if we are going to stimulate back into the nervous system and restore function of the person’s own body,” Moxon says. “I’m betting on stimulating the spinal cord, while others are betting on muscle, or nerves. It’s a very collaborative, interdisciplinary field because no one is doing all of this by themselves. And I’m one of the few who is saying, ‘I think what people want is to be able to take control over their own bodies.’”

More information

Working Around Spinal Injuries (UC Davis news release)

New Insight on Spinal Injuries (Three Minute Egghead podcast)

Aditi Risbud Bartl is director of marketing and communications for the UC Davis College of Engineering. This post was originally published on the College website.
