Interview: The Emerging Ethics of Innovative Brain Research

Lambert: Good to know the ethics are emerging. Gives confidence.

By Sara Talpos, a contributing editor at Undark. Originally published at Undark.

Nervous system disorders are among the leading causes of death and disability globally. Conditions such as paralysis and aphasia, which affects the ability to understand and produce language, can be devastating to patients and families. Significant investment has been put toward brain research, including the development of new technologies to treat some conditions, said Saskia Hendriks, a bioethicist at the U.S. National Institutes of Health. These technologies may very well improve lives, but they also raise a host of ethical issues.

That’s in part because of the unique nature of the brain, said Hendriks. It’s “the seat of many functions that we think are really important to ourselves, like consciousness, thoughts, memories, emotions, perceptions, actions, perhaps identity.”

In a June essay in The New England Journal of Medicine, Hendriks and a co-author, Christine Grady, outlined some of the thorny ethical questions related to brain research: What is the best way to protect the long-term interests of people who receive brain implants as part of a clinical trial? As technology gets better at decoding thoughts, how can researchers guard against violations of mental privacy? And what is the best way to prepare for the far-off possibility that consciousness may one day arise from work derived from human stem cells?

Hendriks spoke about the essay in a Zoom interview. Our conversation has been edited for length and clarity.

Undark: Your piece focuses on three hypothetical examples in which brain research raises ethical dilemmas. The first imagines a quadriplegic person named Mr. P. who enrolls in a clinical trial to receive a brain implant. The implant allows him to move his arm and improves his quality of life. But three years later, the implant stops working. The company has declared bankruptcy and replacement parts are no longer available. As things stand today, what would happen to Mr. P.?

Saskia Hendriks: Let me contextualize it a little bit. There are several studies that are ongoing that involve brain implants. These studies offer hope to patients with serious brain disorders who’ve tried all existing treatments without success. And in cases when these implants work, patients understandably may want to keep them, and may want them to keep working. In other cases, some brain implants may simply be too risky to take out.

However, if you keep an experimental implant — if you want to keep benefiting from it — you need ongoing care. That might be hardware, like a new battery; it might be just monitoring to ensure the settings are right. You also need ongoing care to reduce risks associated with an existing implant.

We know that some former participants of brain implant studies experience challenges in getting continued care related to the experimental implant. For example, an implant might be so novel that only the surgeon who put it in is willing to go back in and change it if that's necessary. In that case, former research participants remain reliant on this initial surgeon. What happens when the surgeon relocates or retires? That can cause challenges, as you can imagine.

Take battery replacements: You might need them every five years, depending on the implant. But some patients face challenges over who pays for this procedure and whether they have access to the battery. It's not necessarily the health insurer who covers it; it depends on the implant and the case.

The scenario you just outlined is a relatively extreme one. It is hypothetical, but we didn't completely make it up: over the past several years there have been examples in the media of cases where a patient received an experimental brain implant and the company went out of business or could, for some reason, no longer support the device. The patient then ended up needing a new hardware piece, or something like that, which was really difficult to resolve.

In the United States, there are no legal requirements that make the professionals who are involved in the study responsible. So it's about ethics, given that there are no legal requirements at this point. And as far as ethics goes, who is responsible for post-trial care? It always depends to some degree, I would say, on the case, because it requires, on the one hand, balancing the interests of the former participants. But there's also a concern that if we set the threshold of what we make companies and investigators and funders and others responsible for too high, this could have an important deterrent effect on whether we're able to conduct trials, whether companies are willing to do them, [or whether] institutions are willing to have them happen.

In this article, we argue that first, if patients receive a brain implant — and especially if they lack any other treatment alternatives that might help them and end up benefiting — we think it’s inappropriate to require that they will be explanted in most cases. They should be allowed to keep the device. Of course, there might be some exceptions, but in general, we think they should get to keep the device. We make some more specific recommendations in the paper.

UD: The second hypothetical describes a woman in a study that uses brain imaging to reconstruct or read her thoughts. This type of technology may ultimately help people with Broca’s aphasia, but it raises concerns about mental privacy for the study participants. Can you discuss these concerns?

SH: In this case, it’s really important to distinguish between what’s currently possible and what may be possible in the future. For example, I don’t think we can currently read thoughts.

Most of these studies capture information from the motor cortex of the brain. That's the part of the brain that's involved in the execution of voluntary actions. So, for example, they may have asked a patient to imagine writing a sentence, and then they try to read the part of the brain that gives the command to write the sentence, and they try to see whether, by decoding the motor cortex, they can reconstruct the sentence the person is trying to write. In other words, unless the person gives the command to write in their mind, they wouldn't capture it.

It’s really important to recognize that in order to do this, they had to collect 16-plus hours of fMRI data from an individual who was cooperating with this study. Now, researchers are exploring the applications of this decoder with more limited data from the subject that they’re trying to decode the information from.

If one would take it one step beyond that, and it becomes possible to apply this type of decoder on data that’s collected for different purposes — and that’s a really big if — then I would start to get pretty concerned about privacy.

For example, if we were able to reconstruct silent speech that individuals had while in a research fMRI for any other research study in the past, and some of this data is in public archives, that would make me concerned. By way of example, in college, I volunteered for plenty of fMRI studies. I don't know what inner monologues I had at the time, but I would probably prefer that others don't decipher whatever it was.

We're still several steps away from this scenario. I think for now, though, there is reason to think carefully about protections. And that means: Are there certain types of research we shouldn't try to do?

UD: The third hypothetical asks a startling question: What should happen if evidence of consciousness or sentience emerges in organoids? Can you explain what a brain organoid is? And do some scientists believe there’s the potential for organoids to become conscious?

SH: Organoids are collections of neural cells derived from pluripotent stem cells, which can be either induced pluripotent stem cells or embryonic pluripotent stem cells. These collections of cells can develop in a manner that's similar to that of fetal brains. I place emphasis on "similar" because it's really not the same as a developing fetal brain. There are some similarities.

These models are really important for brain science because it’s really hard to study a human brain of a living individual, and these models might help improve our understanding of how the brain works, its development, its function, and potentially disease. There are still important limitations in the current size and complexity and some other scientific elements of these models.

I have not heard of a single scientist who thinks current organoids have the types of capacities that we would be particularly concerned about. There is some disagreement among scientists about whether these types of morally relevant properties might be able to emerge in organoids at some point in the future. Some scientists believe that will never happen; others think it might be possible at some point in the future.

However, even within that group, at least some would still argue that the level of, let's say, consciousness, even if it emerges, would be similar to the level of consciousness of an insect like a grasshopper, and not that of a human being, which arguably might have implications for how you should treat said organoid.

UD: Your piece recommends guidelines for organoid research. Can you give some examples?

SH: If organoids develop consciousness or sentience or other relevant capacities like being able to experience pain, it will be very important to recognize that because, arguably, we should then start treating them differently. There are some scientific challenges, actually, in being able to measure these types of things. But one of the things we recommended is trying to define some checkpoints that may help researchers determine when a line is crossed or additional oversight is needed.

Depending on the type of organoid research, including the type of stem cell it originated in, oversight may currently be somewhat limited. And so we think there may be cases in the future where more oversight is warranted.

An additional layer has to do with informed consent. There are some preliminary studies suggesting that at least some people feel uneasy, morally, about the use of their own cells to develop these types of organoids. And so that raises the question of whether, as part of informed consent, when we ask people for their tissue, we should specify that their tissues might be used for this type of research and give people the opportunity to opt out. There are ongoing conversations about what the standards should be in terms of informed consent.

UD: From what you’ve seen, are brain researchers and device companies thinking enough about the ethical implications of their research and products?

SH: I've seen many very ethically conscientious researchers, institutional leaders, and companies. This is an emerging field in terms of ethics, so it's not always obvious what the best way of managing a challenge is. And sometimes, if you're really at the front of it, it's possible that involved parties may overlook ethical challenges, or miss a context that requires rethinking them, or something along those lines.

And to me, the integration of science and ethics in this field is really critical.

Print Friendly, PDF & Email
This entry was posted in Health care, Science and the scientific method, Social policy, Social values, Technology and innovation on by .


4 comments

  1. wazza

    I'm doing neuro research (not at the clinical level). The ethical issues described are at surface level. Note that participants always have the right of consent, and consent can be withdrawn at will at any point. Most participants are willing, as the alternatives are worse, and most of the time the research companies are paying for the novel treatment.

    we think it’s inappropriate to require that they will be explanted in most cases. They should be allowed to keep the device. Of course, there might be some exceptions, but in general, we think they should get to keep the device.

    I think customers are better at choosing tradeoffs for themselves than some tenured university professor, who does not face the same reality as customers (especially in the neurological area). If one _mandates_ post-trial care, basically no small or medium company would be able to afford R&amp;D in this area, we would have even less reliable devices, and advancement would halt. How do you handle bankruptcy? How do you force extremely specialized workers to provide post-trial care for something so novel, at cost? Most neurologists and neurosurgeons wouldn't perform the care if they were not part of the study itself. How do you handle liability? In fact, it is far more ethical to explant. And most of the time, this is contractually agreed before surgery.

  2. HH

    I highly recommend Kurzweil’s “The Singularity Is Nearer.” It is a wildly optimistic, but well reasoned, argument for dramatic advances in the merging of human and machine intelligence. A key prediction is that nanotechnology will enable a seamless interface between the human brain and external AI resources. This raises profound questions about identity and calls into question the concept of the individual as the foundation of western political thought. Moreover, a perfected brain/machine interface would also entail the feasibility of a brain/brain interface, further complicating the concept of individuality.

  3. Bsn

    Whitney Webb has discussed a 4th hypothetical option. It involves something along the lines of scanning brains for various forms of information as a person walks by. It could be akin to facial recognition being used in Whole Foods stores and many other public places. Instead of needing one's face to determine who you are, an entity could quickly scan your brain for "shopping preferences" or other, less benign information. A link to her Neurorights and Neuromarkets podcast.

