Lambert: Good to know the ethics are emerging. Gives confidence.
By Sara Talpos, a contributing editor at Undark. Originally published at Undark.
Nervous system disorders are among the leading causes of death and disability globally. Conditions such as paralysis and aphasia, which affects the ability to understand and produce language, can be devastating to patients and families. Significant investment has been put toward brain research, including the development of new technologies to treat some conditions, said Saskia Hendriks, a bioethicist at the U.S. National Institutes of Health. These technologies may very well improve lives, but they also raise a host of ethical issues.
That’s in part because of the unique nature of the brain, said Hendriks. It’s “the seat of many functions that we think are really important to ourselves, like consciousness, thoughts, memories, emotions, perceptions, actions, perhaps identity.”
In a June essay in The New England Journal of Medicine, Hendriks and a co-author, Christine Grady, outlined some of the thorny ethical questions related to brain research: What is the best way to protect the long-term interests of people who receive brain implants as part of a clinical trial? As technology gets better at decoding thoughts, how can researchers guard against violations of mental privacy? And what is the best way to prepare for the far-off possibility that consciousness may one day arise from work derived from human stem cells?
Hendriks spoke about the essay in a Zoom interview. Our conversation has been edited for length and clarity.
Undark: Your piece focuses on three hypothetical examples in which brain research raises ethical dilemmas. The first imagines a quadriplegic person named Mr. P. who enrolls in a clinical trial to receive a brain implant. The implant allows him to move his arm and improves his quality of life. But three years later, the implant stops working. The company has declared bankruptcy and replacement parts are no longer available. As things stand today, what would happen to Mr. P.?
Saskia Hendriks: Let me contextualize it a little bit. There are several studies that are ongoing that involve brain implants. These studies offer hope to patients with serious brain disorders who’ve tried all existing treatments without success. And in cases when these implants work, patients understandably may want to keep them, and may want them to keep working. In other cases, some brain implants may simply be too risky to take out.
However, if you keep an experimental implant — if you want to keep benefiting from it — you need ongoing care. That might be hardware, like a new battery; it might be just monitoring to ensure the settings are right. You also need ongoing care to reduce risks associated with an existing implant.
We know that some former participants of brain implant studies face challenges in getting continued care related to the experimental implant. For example, an implant might be so novel that only the surgeon who put it in is willing to go back in and adjust it if that’s necessary. In that case, former research participants remain reliant on that initial surgeon. What happens when the surgeon relocates or retires? That can cause challenges, as you can imagine.
Or take battery replacements: You might need one every five years, depending on the implant. But some patients face challenges over who pays for the procedure and whether they have access to the battery at all. It’s not necessarily the case that health insurers cover it; it depends on the implant and the case.
The article presents a relatively extreme scenario — the one you just outlined. It is a hypothetical scenario, but we didn’t completely make it up: there have been several examples in the media in recent years of cases where a patient received an experimental brain implant and the company went out of business or could, for some reason, no longer support the device. And then the patient ended up needing a new hardware piece, or something like that, which was really difficult to resolve.
In the United States, there are no legal requirements that make the professionals who are involved in the study responsible. So it’s about ethics, given that there are no legal requirements at this point. And as far as ethics goes, who is responsible for post-trial care? It always depends to some degree, I would say, on the case, because it requires balancing the interests of the former participants on the one hand. But there’s also a concern that if we set the threshold of what we make companies, investigators, funders, and others responsible for too high, this could have an important deterrent effect on whether we’re able to conduct trials, whether companies are willing to do them, [or whether] institutions are willing to have them happen.
In this article, we argue that, first, if patients receive a brain implant, and especially if they lack any other treatment alternatives that might help them and end up benefiting, we think it’s inappropriate in most cases to require that the device be explanted. They should be allowed to keep the device. Of course, there might be some exceptions, but in general, we think they should get to keep it. We make some more specific recommendations in the paper.
UD: The second hypothetical describes a woman in a study that uses brain imaging to reconstruct or read her thoughts. This type of technology may ultimately help people with Broca’s aphasia, but it raises concerns about mental privacy for the study participants. Can you discuss these concerns?
SH: In this case, it’s really important to distinguish between what’s currently possible and what may be possible in the future. For example, I don’t think we can currently read thoughts.
Most of these studies capture information from the motor cortex, the part of the brain involved in executing voluntary actions. So, for example, they may have asked a patient to imagine writing a sentence, and then they try to read the part of the brain that gives the command to write the sentence, to see if, by decoding the motor cortex, they can reconstruct the sentence the person is trying to write. In other words, unless the person gives the command to write in their mind, they wouldn’t capture it.
It’s really important to recognize that in order to do this, they had to collect 16-plus hours of fMRI data from an individual who was cooperating with this study. Now, researchers are exploring the applications of this decoder with more limited data from the subject that they’re trying to decode the information from.
If one would take it one step beyond that, and it becomes possible to apply this type of decoder on data that’s collected for different purposes — and that’s a really big if — then I would start to get pretty concerned about privacy.
For example, if we become able to reconstruct the silent speech individuals had while in a research fMRI for any other study in the past (and some of this data is in public archives), that would make me concerned. By way of example, in college I volunteered for plenty of fMRI studies. I don’t know what inner monologues I had at the time, but I would probably prefer that others not decipher whatever it was.
We’re still several steps away from this scenario. For now, though, I think there is reason to think carefully about protections. And that means asking: Are there certain types of research we shouldn’t try to do?
UD: The third hypothetical asks a startling question: What should happen if evidence of consciousness or sentience emerges in organoids? Can you explain what a brain organoid is? And do some scientists believe there’s the potential for organoids to become conscious?
SH: Organoids are collections of neural cells derived from pluripotent stem cells, which can be either induced pluripotent stem cells or embryonic pluripotent stem cells. These collections of cells can develop in a manner that’s similar to that of fetal brains. I place emphasis on “similar” because it’s really not the same as a developing fetal brain. There are some similarities.
These models are really important for brain science because it’s really hard to study a human brain of a living individual, and these models might help improve our understanding of how the brain works, its development, its function, and potentially disease. There are still important limitations in the current size and complexity and some other scientific elements of these models.
I have not heard of a single scientist who thinks current organoids have the types of capacities we would be particularly concerned about. There is some disagreement among scientists about whether these morally relevant properties might emerge in organoids at some point in the future. Some scientists believe that will never happen; others think it might be possible at some point.
However, even within that group, at least some would argue that the level of consciousness, if it emerges, would be similar to that of an insect like a grasshopper, not that of a human being, which arguably has implications for how you should treat such an organoid.
UD: Your piece recommends guidelines for organoid research. Can you give some examples?
SH: If organoids develop consciousness or sentience or other relevant capacities like being able to experience pain, it will be very important to recognize that because, arguably, we should then start treating them differently. There are some scientific challenges, actually, in being able to measure these types of things. But one of the things we recommended is trying to define some checkpoints that may help researchers determine when a line is crossed or additional oversight is needed.
Depending on the type of organoid research, including the type of stem cell it originated in, oversight may currently be somewhat limited. And so we think there may be cases in the future where more oversight is warranted.
An additional layer has to do with informed consent. There are some preliminary studies suggesting that at least some people feel uneasy, morally, about the use of their own cells to develop these types of organoids. That raises questions: When we ask people for their tissue, should the informed consent specify that their tissue might be used for this type of research, and should people be given the opportunity to opt out? There are ongoing conversations about what the standards for informed consent should be.
UD: From what you’ve seen, are brain researchers and device companies thinking enough about the ethical implications of their research and products?
SH: I’ve seen many very ethically conscientious researchers, institutional leaders, and companies. This is an emerging field in terms of ethics, so it’s not always obvious what the best way of managing a challenge is. And sometimes, if you’re really at the front of it, involved parties may overlook ethical challenges, or miss a context that requires rethinking them, or something along those lines.
And to me, the integration of science and ethics in this field is really critical.
I do neuro research (not at the clinical level). The ethical issues described here are at a surface level. Note that participants always have the right of consent, and consent can be withdrawn at will at any point. Most participants are willing, as the alternatives are worse, and most of the time the research companies are paying for the novel treatment.
I think customers are better at choosing tradeoffs for themselves than some tenured university professor who does not face the same reality as customers (especially in the neurological area). If one _mandates_ post-trial care, basically no small or medium company would be able to afford R&D in this area, and we would have even fewer reliable devices; advancement would halt. How do you handle bankruptcy? How do you force extremely specialized workers to provide post-trial care for something so novel, at cost? Most neurologists/neurosurgeons wouldn’t perform care if they were not part of the study itself. How do you handle liabilities? In fact, it is far more ethical to explant. And most of the time, these terms are contractually agreed before surgery.
Customer or patient? That’s an important distinction
They are a customer if they have a choice of vendor/supplier, and a patient if they do not (refer to Canada’s monopsony). But in this context, yes, they are the same.
I highly recommend Kurzweil’s “The Singularity Is Nearer.” It is a wildly optimistic, but well reasoned, argument for dramatic advances in the merging of human and machine intelligence. A key prediction is that nanotechnology will enable a seamless interface between the human brain and external AI resources. This raises profound questions about identity and calls into question the concept of the individual as the foundation of western political thought. Moreover, a perfected brain/machine interface would also entail the feasibility of a brain/brain interface, further complicating the concept of individuality.
Whitney Webb has discussed a fourth hypothetical option. It involves something along the lines of scanning brains for various forms of information as a person walks by. It could be akin to facial recognition being used in Whole Foods stores and many other public places. Instead of needing one’s face to determine who you are, an entity could quickly scan your brain for “shopping preferences” or other, less benign information. A link to her Neurorights and Neuromarkets podcast.
Call me a cynic, but I do not think that serious discussions of ethics are going to have any effect; the research funding is largely driven by money from for-profit players. (I am ignoring the Musk/Thiel immortality-seeking nutjobs for now.)
Simply put, those researchers whose research has the best possibility of producing massive profits for pharma and medical device manufacturers will get their grants from the aforementioned entities.
Placing a temporary pause on IP protections for the results of this research would get the corrupting money out of the research, because without subsidies (which is what IP protections are), the profit-driven incentives for reckless behavior are removed.
Thank you! It’s past time we took pie-in-the-penthouse phantasies back down to earth.
Saskia Hendriks: Depending on the type of organoid research, including the type of stem cell it originated in, oversight may currently be somewhat limited. And so we think there may be cases in the future where more oversight is warranted.
Amusing. The deployment of organoids is already ongoing well beyond the bounds that Hendriks, in the OP, suggests.
Earlier this year, a venture capitalist I know was asked to look at a device, about three feet long. The scientist who brought it to him was handcuffed to it and was himself accompanied by a bodyguard. Part of its interior mechanism was an organoid. The proposed applications were military.
For some idea of why there’s interest in organoids, a couple of papers on fairly anodyne use cases —
The technology, opportunities, and challenges of Synthetic Biological Intelligence
https://www.sciencedirect.com/science/article/pii/S0734975023001404?via%3Dihub
Abstract: Integrating neural cultures developed through synthetic biology methods with digital computing has enabled the early development of Synthetic Biological Intelligence (SBI). Recently, key studies have emphasized the advantages of biological neural systems in some information processing tasks. However, neither the technology behind this early development, nor the potential ethical opportunities or challenges, have been explored in detail yet. Here, we review the key aspects that facilitate the development of SBI and explore potential applications. Considering these foreseeable use cases, various ethical implications are proposed.
Biological Neurons vs Deep Reinforcement Learning: Sample efficiency in a simulated game-world
https://memari-workshop.github.io/papers/paper_2.pdf
Abstract: How do synthetic biological systems and artificial neural networks compete in their performance in a game environment? Reinforcement learning has undergone significant advances, however remains behind biological neural intelligence in terms of sample efficiency. Yet most biological systems are significantly more complicated than most algorithms. Here we compare the inherent intelligence of in vitro biological neuronal networks to state-of-the-art deep reinforcement learning … We employed DishBrain, a system that embodies in vitro neural networks with in silico computation using a high-density multielectrode array …