Yves here. This is a complex topic. On the one hand, a disturbingly high number of doctors cling to outdated ideas. I recall reading a survey about a decade ago which found that one-third of gastroenterologists still believed ulcers were caused by diet and stress, not bacteria.
At the same time, a lot of existing practice is not evidence based. One of my colleagues who was trained as a biomedical engineer and worked at the NIH says that medicine is a medieval art. And as Lambert documented, American doctors love to overtreat. But the flip side is that many HMOs and PPOs are using clinical guidelines as a way to control costs, when they often amount to a “one size fits all” approach that can be misguided for quite a few particular patients.
By Charles Manski, Board of Trustees Professor in Economics, Northwestern University. Originally published at VoxEU
In medical treatment, it is assumed that adherence to clinical practice guidelines is always preferable to decentralised clinical decision-making, yet there is no welfare analysis that supports this belief. This column argues that it would be better to treat clinical judgement as a problem of decision-making under uncertainty. In this case there would be no optimal way to make decisions, but there are reasonable ways with well-understood welfare properties.
Guidance to clinicians on patient care has increasingly become institutionalised through clinical practice guidelines (CPGs). Dictionaries define a guideline as a suggestion for behaviour, but clinicians have strong incentives to comply with these guidelines when they are issued, making adherence to them almost compulsory. A patient’s health insurance plan may require adherence as a condition for reimbursement of the cost of treatment. Adherence may also be used as evidence of due diligence to defend a malpractice claim.
The medical literature contains many commentaries exhorting clinicians to adhere to guidelines. They argue that CPG developers have better knowledge of treatment response than clinicians. As the Institute of Medicine (2011, p.26) states: “Trustworthy CPGs have the potential to reduce inappropriate practice variation.”
Statements like this demonstrate the widespread belief that adherence to guidelines is socially preferable to decentralised clinical decision-making. Yet there is no welfare analysis that supports this belief. There are two reasons why patient care adhering to guidelines may differ from the care that clinicians provide:
- Guideline developers may differ from clinicians in their ability to predict how decisions affect patient outcomes; or
- Guideline developers and clinicians may differ in how they evaluate patient outcomes.
Welfare comparison requires consideration of both factors. In recent work (Manski 2017), I consider how limited ability to assess patient risk of illness, and to predict treatment response, may affect the welfare that adherence to guidelines or decentralised clinical practice achieve.
Optimal Personalised Care Assuming Rational Expectations
To provide a baseline, I consider an idealised setting studied by medical economists such as Phelps and Mushlin (1988). These studies assume that a clinician makes accurate probabilistic risk assessments and predictions of treatment response conditional on all observed patient covariates. That is, they have rational expectations. The studies assume that the objective is to maximise a patient’s expected utility.
In this setting, analysis of optimal personalised care shows that adherence to a CPG cannot outperform decentralised practice, and may perform less well. If a CPG conditions its recommendations on all the patient covariates that clinicians observe, it can do no better than reproduce clinical decisions. If the CPG makes recommendations conditional on a subset of the clinically observable covariates, as is typically the case, adhering to the CPG may yield inferior welfare because the guideline does not personalise patient care. Thus, if clinicians have rational expectations, there is no informational argument for adhering to CPGs.
The inferiority of adhering to CPGs holds because the problem of optimising care has a simple solution. Patients should be divided into groups having the same observed covariates. All patients in a group should be given the care that yields the highest within-group expected utility. Maximum expected utility increases as more patient covariates are observed.
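The grouping rule described above can be sketched in a few lines. This is an illustrative toy, not code from the paper; the covariate groups, treatments, and expected-utility numbers are all hypothetical.

```python
# Sketch of optimal personalised care under rational expectations:
# group patients by observed covariates, then give each group the
# treatment with the highest within-group expected utility.

def optimal_assignment(expected_utility):
    """expected_utility: {covariate_group: {treatment: E[utility]}}.
    Returns the utility-maximising treatment for each group."""
    return {
        group: max(utilities, key=utilities.get)
        for group, utilities in expected_utility.items()
    }

# Hypothetical expected utilities for two covariate groups and two treatments.
eu = {
    "low_risk":  {"watchful_waiting": 0.90, "surgery": 0.80},
    "high_risk": {"watchful_waiting": 0.60, "surgery": 0.75},
}

print(optimal_assignment(eu))
# {'low_risk': 'watchful_waiting', 'high_risk': 'surgery'}
```

A guideline that conditions on fewer covariates must assign one treatment across both groups, so under rational expectations it can only weakly lower welfare relative to this rule.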
Treatment With Imperfect Clinical Judgment
If it were reasonable to suppose that clinicians had rational expectations, there would be no utilitarian argument for developing CPGs. Empirical psychological research, however, has concluded that evidence-based predictions consistently outperform clinical judgment, even when clinical judgment uses additional covariates as predictors.
An influential review article by Dawes et al. (1989, p.1668) distinguished statistical prediction and clinical judgment:
“In the clinical method the decision-maker combines or processes information in his or her head. In the actuarial or statistical method the human judge is eliminated and conclusions rest solely on empirically established relations between data and the condition or event of interest.”
Comparing the two, even when a clinician observes patient covariates not utilised in available statistical prediction, they cautioned against use of clinical judgment to predict disease risk or treatment response (p.1670):
“Might the clinician attain superiority if given an informational edge? … The research addressing this question has yielded consistent results … Even when given an information edge, the clinical judge still fails to surpass the actuarial method; in fact, access to additional information often does nothing to close the gap between the two methods.”
Psychological research challenged the realism of assuming clinicians have rational expectations, but it did not per se imply that adherence to CPGs would yield greater welfare than decision-making using clinical judgment. One issue is that the psychological literature has not addressed all welfare-relevant aspects of clinical decisions. Psychologists have studied the accuracy of risk assessments made by statistical predictors and by clinicians, but they have not done similar studies of the accuracy of evaluations of patient preferences over health outcomes. Also, psychological research has seldom examined the accuracy of probabilistic risk assessments. It has been more common to assess the accuracy of point predictions. Study of the logical relationship between probabilistic and point predictions shows that data on the latter yields at most wide bounds on the former.
Given these and other issues, we cannot conclude that imperfect clinical judgment makes adherence to CPGs superior to decentralised decision-making. The findings of psychological research only imply that welfare comparison is a delicate choice between alternative second-best systems for patient care. Adherence to CPGs may be inferior to the extent that CPGs condition on fewer patient covariates than do clinicians, but it may be superior to the extent that imperfect clinical judgment yields sub-optimal decisions. The precise trade-off depends on the context.
Questionable Methodological Practices in Evidence-Based Medicine
The psychological literature has questioned the judgment of clinicians, but it has not questioned the accuracy of the predictions used in evidence-based guideline development. Predictions are evidence-based, but this does not mean that they use evidence effectively. Questionable methodological practices have long afflicted research on health outcomes, and may have affected guideline development too. This further complicates a comparison of adherence to guidelines to decentralised practice.
One questionable practice is the extrapolation of findings from randomised trials to clinical decisions. Guideline developers use trial data to predict treatment response whenever such data is available. Trials are appealing because, given sufficient sample size and complete observation of outcomes, they deliver credible findings about treatment response in the study population. Extrapolating from these findings, however, can be difficult. Wishful extrapolation commonly assumes that the treatment response that would occur in practice is the same as in trials. This may not be true. Study populations commonly differ from patient populations. Experimental treatments differ from treatments used in practice. The surrogate outcomes measured in trials differ from outcomes of health interest.
Using hypothesis testing to compare treatments is also questionable. A common procedure when comparing two treatments is to view one as the status quo and the other as an innovation. The usual null hypothesis is that the innovation is no better than the status quo, and the alternative is that the innovation is better. If the null hypothesis is not rejected, guidelines recommend that the status quo be used in practice. If the null is rejected, the innovation becomes the treatment of choice. The convention has been to fix the probability of rejecting the null hypothesis when it is correct (Type I error) and to choose sample size to fix the probability of rejecting the alternative hypothesis when it is correct (Type II error).
Manski and Tetenov (2016) observed that hypothesis testing may yield unsatisfactory results for clinical decisions for several reasons. These include:
- Use of conventional error probabilities: It has been standard to fix the probability of Type I error at 5% and of Type II error at 10-20%, but the theory of hypothesis testing gives no rationale for using these error probabilities. There is no reason why a clinician concerned with patient welfare should make treatment choices that have a much greater probability of Type II than Type I error.
- Inattention to magnitudes of losses when errors occur: A clinician should care about more than error probabilities. He or she should care about the magnitudes of the losses to patient welfare should errors occur. A given error probability should be less acceptable when the welfare difference between treatments is larger, but the theory of hypothesis testing would not take this into account.
- Limitation to settings with two treatments: A clinician often chooses among several treatments, and many clinical trials compare more than two treatments. Yet the standard theory of hypothesis testing only contemplates choice between two treatments. Statisticians have struggled to extend it to deal with a comparison of multiple treatments.
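The first criticism above — the unexplained asymmetry between conventional error probabilities — can be made concrete with a standard sample-size calculation. The numbers below (effect size, alpha, power) are hypothetical illustrations, not from the column.

```python
# Sketch: the conventional asymmetry between Type I and Type II error
# probabilities in a one-sided two-sample z-test comparing an
# innovation against the status quo. Hypothetical numbers throughout.
from statistics import NormalDist

z = NormalDist()

alpha = 0.05          # conventional Type I error probability
power_target = 0.80   # so Type II error (beta) is fixed at 0.20
effect = 0.5          # hypothetical standardised effect size

# Required sample size per arm:
# n = 2 * ((z_{1-alpha} + z_{1-beta}) / effect)^2
n = 2 * ((z.inv_cdf(1 - alpha) + z.inv_cdf(power_target)) / effect) ** 2
print(round(n))  # 49 per arm

# The asymmetry the column criticises: the design tolerates a 20%
# chance of rejecting a genuinely better innovation but only a 5%
# chance of adopting a worthless one, with no welfare rationale.
beta = 1 - power_target
print(round(beta / alpha))  # 4: Type II error is 4x the Type I error
```

Nothing in the welfare of patients justifies weighting the two errors this way; the 5%/20% convention is a statistical custom, which is precisely Manski and Tetenov's point.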
Doing Better
Evidence-based research can inform patient care more effectively than it does at present. Studies should quantify how identification problems and statistical imprecision jointly affect the feasibility of making credible predictions of health outcomes. Identification is usually the dominant problem.
Recognising that knowledge of treatment response is incomplete, I recommend formal consideration of patient care as a problem of decision-making under uncertainty. There is no optimal way to make decisions under uncertainty, but there are reasonable ways with well-understood welfare properties. These include maximisation of subjective expected welfare, the maximin criterion, and the minimax-regret criterion.
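Two of the named criteria can be illustrated with a toy decision table. The treatments, states of nature, and welfare values below are invented for illustration; the criteria themselves (maximin and minimax-regret) are standard.

```python
# Sketch: choosing a treatment when the true state of nature (how
# patients respond) is unknown. welfare[treatment][state] gives the
# welfare achieved; all values are hypothetical.

welfare = {
    "A": {"s1": 0.9, "s2": 0.3},
    "B": {"s1": 0.5, "s2": 0.6},
}
states = ["s1", "s2"]

# Maximin: choose the treatment whose worst-case welfare is highest.
maximin = max(welfare, key=lambda t: min(welfare[t][s] for s in states))

# Minimax-regret: in each state, regret is the shortfall from the best
# achievable welfare in that state; choose the treatment whose
# worst-case regret is smallest.
best = {s: max(welfare[t][s] for t in welfare) for s in states}
regret = {t: max(best[s] - welfare[t][s] for s in states) for t in welfare}
minimax_regret = min(regret, key=regret.get)

print(maximin, minimax_regret)  # B A
```

Note that the two criteria can disagree, as they do here: maximin guards against the worst outcome in absolute terms, while minimax-regret guards against the largest foregone welfare. Neither is "optimal"; both are reasonable, with well-understood welfare properties.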
There is precedent for verbal recognition of uncertainty in the literature on medical decision-making. For example, Institute of Medicine (2011, p.33) called attention to the assertion by the Evidence-Based Medicine Working Group:
“[C]linicians must accept uncertainty and the notion that clinical decisions are often made with scant knowledge of their true impact.”
Verbal recognition of uncertainty, however, has not led guideline developers to examine patient care formally as a problem of decision-making under uncertainty. I find this surprising. Medical research makes much use of biological science, technology, and quantitative statistical methods. Why then should CPG development acknowledge uncertainty only verbally? Formal analysis of patient care under uncertainty has much to contribute to guideline development and to decision-making by clinicians.
See original post for references
Interesting article and a couple of clarifications:
True but health economists have done so. And they got so scared by the results that some (Dolan) left the field to do something else. This particular example is that whilst the general population reckons “extreme pain” to be worse than “extreme depression/anxiety”, those members of the population who’d experienced them both put them the other way round. Which has profound implications for the UK values assigned to health outcomes. Of course other countries might do things in different ways and this is NOT some veiled attack on what the US might do if single payer gets onto the playing field. It’s merely adding to the warning in the paper about how to do it. Which leads to a second warning I’d make – averages. They conceal a lot. Mental health is the archetypal example and, again, maybe the paper is right that something like maximin is warranted, given that “living by averages” means some groups automatically lose out. Just some thoughts, which hopefully are constructive this time round and expand on points made.
One thing this article doesn’t distinguish between is hospital doctors and solo practitioners (i.e. family doctors, or occasionally doctors in small hospitals). There is a huge issue with doctors simply not keeping up with current research if they don’t have the peer pressure and oversight that you would expect in a well run hospital. I was a victim of this: for years as a child I was repeatedly given antibiotics by my family doctor for ‘chest infections’. In fact, I had asthma triggered by a cold air sensitivity, and was only diagnosed in my late teens (after I’d been carted to hospital after a school outdoors sport session).
I talked much later to a family member who is a specialist in prescribing practice who said that this was by far the most common misdiagnosis/treatment, and as late as the 1990s in the UK (where he did research on the subject), he found that 25% of GPs (family doctors) were not identifying asthma correctly. Very often, pharmacists are the only gatekeepers to identify bad prescribing practices.
I’ve also heard numerous stories about terrible practices by specialists in small hospitals, who can become mini-emperors with nobody to contradict their professional opinions. This is one reason why all doctors will generally advise that the best place when you are ill is a large teaching hospital (definitely not a small private hospital). Bad diagnostic practice is much more likely to be stamped out in the biggest hospitals where there is greater peer oversight.
I’d ask what the author assumes is the best model for doctor-patient interaction, what “patient care” means. To me it should be two or maybe more (including nurses and family members and other caregivers) people, ones with more knowledge of physiology and systems, others with more knowledge and experience of whatever the “presenting condition” happens to be, interacting to increase longevity, reduce pain, repair damaged structures, correct physiological malfunctions and problems with homeostatic functions and so forth, to maximize function, independence and comfort — an incomplete definition of a very complex notion.
Physicians aren’t bots. There are different reasons people go into medicine, not all of them about “patient care” and altruism — “unselfish regard for or devotion to the welfare of others…behavior by … not beneficial to or may be harmful to itself but that benefits others of its species.” Sometimes quite the opposite, thanks to greed and pleasure-seeking and the burdens of “debt” assumed by so many “providers,” or even rare psych phenomena like Munchausen syndrome by proxy…
Always, the smart kids in the room want to systematize and organize every kind of function, and in the neoliberal universe, reduce complexity to profit-generating, “management”-centric forms. Sometimes that application of rationalization is a good thing, it can help focus attention wisely and lead to those often undefined “good outcomes.”
But there’s almost an infinite number of ways humans can get injured, sickened and die. Human physiology is vastly complex. The interaction pathways are likewise near infinite. Medicine is an art of observation compounded over time, and a lot of the knowledge base (I personally hate that term) is just wrong, from a wide variety of causes including bias, sample size, things like referred pain, atypical “presentations,” “normal variation” and so forth. When what to me is a semi-mystical interaction between practitioner and person works well, it is a thing of beauty and kindness. As with anything human-created and -mediated, too often the result is far worse — most of us can insert one or more anecdotes here, on either extreme.
Constant mechanization of medicine results in stuff like the ICD-10 classification thing, which is mostly about Big Data and payments. “Billable” medicine has been “reduced” to about 70,000 “diagnosis codes” and a whole lot of treatment and procedure codes, up from about 13,000 diagnosis codes in ICD-9. It’s a “whole new way of doing business:”
ICD-9 is widely considered to be based on outdated technology, with codes unable to reflect the use of new equipment. ICD-10 offers far more integration with modern technology, with an emphasis on devices that are actually being used for various procedures. The additional spaces available are partly designed to allow new technology to be seamlessly integrated into codes, which means fewer concerns about the ability to accurately report information as time goes on. In conclusion, ICD-10 is not a simple update to ICD-9. The structural changes throughout the entire coding system are very significant, and the increased level of complexity requires coders to be even more thoroughly trained than before. However, it is possible to prepare for the changes by remembering a few simple guidelines:
Train early: The more familiar your staff are with ICD-10, the better. While currently scheduled to begin Oct. 1, 2014, beginning the training now is not a bad idea.
Understand ICD-10: The structural changes require a change in the way people think about coding, and understanding it will help to break current coding habits. Medical professionals used to reporting things a certain way so they can be coded may need to change what they say in order to work well with the new system.
EBM is just another management buzz(kill)word, like “total quality management” and “zero defects” and “zero-based budgeting.” All supported by proponents who rationalize and argue in the language of squishy “disciplines” like psychology and economics, using “specialized” lexicons that often are cloudy restatements of commonplaces in arcane terminologies, and the creation of intellectual artifacts that have tenuous or little relationship to the reality most “uneducated” observers perceive — yes, sometimes incorrectly as more acute observations might show, but more often accurately than the modeling and force-fitting that “experts” soar off on. How many of the articles cited as authoritative on various points have anything other than presence in peer-reviewed land as proof of the claimed “findings” both of the original researchers and authors, or acuteness and accuracy of the proposition for which they are offered subsequently? And how much fraud and selectiveness (like medication trials that exclude likely non-responders to the therapies) and purblindness fills the vast swath of “published studies”?
I’ve personally experienced and seen lots of misdiagnoses and clinician blindness and tunnel vision, starting as a child when the family doctor, a partisan of allergies as the most common source of disease, and who patch-tested me and my sisters unmercifully, supposedly told my mom that my broken right forearm was the result of an allergy. And our favored subsequent family doctor, who misdiagnosed my mother’s fatal ovarian cancer (a common failing given the vagueness of symptoms) for a year or more after her original office visit, as a gall bladder problem needing bile salts. (Said doctor, seeking alpha, had moved largely into “industrial medicine,” doing workers comp and employment physicals — a wonderfully nice guy, but clinical skills atrophy or lose focus or get too sharpened into narrow channels — totally understandable, given human nature — channels that get reinforced by “economic” forces, stuff like HMOs and corporate bottom lines, and stuff like the vast and geometrically growing pile of “medical knowledge” of more or less validity, on and on.)
These observations only touch on an enormously complex and painfully meaningful subject. Seems to me that the best “we” patients and patients-to-be can expect is that we connect with clinicians that still start from “Do no harm” and aspire to better the lives of we who seek and depend on their expertise — a notably, and inevitably, ever smaller fraction of the available “knowledge base.” And “we” can hope that AI and EBM and the horrors wrought by the other false gods of “modern medical practice” like “Electronic Medical Records” don’t intermediate and leverage their way into the care we mopes need and hope for. EBM from what I have seen can be a useful approach in some ways, but then Smith’s Law of Crapification is as universal as Murphy’s…
Yeah I agree entirely…. But more holistic approaches (judging medicine by overall quality of life) get into areas that have got a little… Shall we say… Controversial… So I’m keeping my comments focused to stay within site guidelines.
There are two reasons why patient care adhering to guidelines may differ from the care that clinicians provide:
Guideline developers may differ from clinicians in their ability to predict how decisions affect patient outcomes; or
Guideline developers and clinicians may differ in how they evaluate patient outcomes.
I am a practicing internal medicine hospitalist in a major US city. While in the past there were large delays in physicians turning evidence-based practice into new habit, and too much unwanted variation in clinical practice, I feel like in the US the pendulum is swinging too far the other way — and in unintelligent ways, forcing clinicians into care protocols without regard for individual circumstance. Now there are clinical care guidelines from Medicare, the American Heart Association, the CDC, and others around major disease states (like stroke, heart failure, sepsis) that hospitals must follow for reimbursement — yet the guidelines do not keep pace with current peer-reviewed evidence. My point is that there are mandates and financial incentives for hospitals to pressure physicians into adhering to guidelines which are not universally good for patients or for cost of care (sepsis guidelines now are a good example of this). Often these expectations are negotiated by bureaucrats, not clinicians. The healthcare industry needs a better way of giving physicians real-time feedback about their clinical practice habits in relation to their peers, and of setting some common-sense expectations around unwanted variations in practice.
Hopefully you can get yourself on some committees dealing with these issues. Very important to have physician input.
Economics is definitely important, not only for improving the hospitals bottom line but for making medicine economically responsible generally.
Single payer, I think would be great but we still need to watch what we are paying for. No need for pharmaceutical companies to make outrageous profits.
One interesting area now is that many very expensive tests are becoming available for cancer testing. These need to be ordered responsibly and that takes physician, social and admin input. And at a deeper level needs to examine why the tests, drugs etc are so expensive.
Tranylcypromine – first generation antidepressant and still the gold standard for effectiveness (the “cheese effect” side effect has been overblown as numerous studies have more recently shown – I’m on it and can confirm this) costs the NHS over £1000 per month for me. It’s been off patent for 50 years. However there is a monopoly supplier (price gouger). Why don’t generic suppliers move in? Because the market is too small. Two generations of doctors have been taught that this class (MAOIs) are akin to leech therapy. Thus the assumption is that most people on them will be old and will die off. Scandalous, as any psychiatrist worth their salt will tell you (never mind the health economist like me).
Prime case of cr*pification in medicine if you ask me. Doctors bowled over by the drug companies selling SSRIs/SNRIs which let’s not forget don’t even work as the pharmacology says they should – they should show benefits at day 4/5 like MAOIs if their original pharmacological justification is paid attention to. Now does that mean they don’t work? No I’m not saying that. But their method of action is clearly odd and not in line with the original pharmacological data and models.
Health economics 102 is derived demand – patients rely on doctors to enunciate their demand function. But when doctors have effectively undergone the medical equivalent of regulatory capture then Houston we have a problem.
Yes indeed. These pharmacologic profits can be perniciously spread around. It can be difficult to find a true patient advocate.
Thanks for the reply. The problem here is that patient advocacy requires systemic change: change in the medical curriculum along with a concerted effort to tell GPs about the new data on “old” drugs… And they are already overburdened with stuff “coming at them from on high”.
Plus even if (say) they learn the real data concerning MAOIs they still can’t prescribe them straight off… A psychiatrist must initiate it (then GP can carry on)… And mental health services are close to breaking point. My local service is at critical levels. Austerity yet again….
I was going to a physical therapist practice for spasticity and weakness and pain related to a pretty radical cervical laminectomy and progressive spine problems. I was a Medicare patient and they insisted on using the guidelines for rehabilitation after operation, even though my operation took place 12 years earlier. This consisted of exercises which only made my spasticity worse and aggravated my arthritis. What I needed was to have my chest and arms worked on to counteract the contraction of muscles caused by spasticity, which the therapist knew how to do. But she refused and told me that if I did not do the exercises, she would no longer treat me as I was violating the “guidelines”, which did not apply to my circumstance. There was apparently nothing to allow treatment for chronic problems (except opioids, which I refused).
Sorry to hear that. I had reason to look at the UK guidelines on a range of conditions (from NICE). I was actually pleasantly surprised: although they do in many cases follow “stepped care” functions from medicine, there were a surprising number of “get outs” regarding if the patient cannot tolerate /has good reason to reject the official guidance. Patient preferences have begun to get recognised in the UK.
Of course whether austerity allows the doctors to *afford* differences is another sad story….
I guess that what I need now is what amounts to palliative care (non-pharmaceutical). I find now that I have discovered high-CBD hemp (Otto II strain) which I can grow myself, I can actually slow down the progressive effects of my condition. Ironically, though I qualify for the medical marijuana card, I can’t afford to buy from the dispensaries, and they mainly offer high THC strains anyway. I am lucky to have found a way to treat myself!
Oh my God, you’re so right about sepsis. I read Pubmed daily and I am terrified of winding up in a hospital with something halfway complicated. If I get sepsis, just drag me outside into the sunshine, stop feeding me for two days (except maybe for Bdellovibrio bacteriovorus) and douse me in DMSO and epsom salts.
When I was inside a hospital ten years ago, I asked why the patients couldn’t get any sunlight and they were completely clueless as to what cathelicidin was or how to treat a biofilm. A year ago, I met one oncologist who couldn’t understand why any cancer patient would want to take metformin and another one who didn’t know why someone with a brain tumor might want a ketogenic diet. Had never even heard of the literature and it was his specialty.
Some of these people don’t even bother to sit down and type in common conditions in their fields to see what’s new that pops up.
I had to wait ten years for one doctor I know to finally admit sugar raises cholesterol. People forget that once somebody memorizes something wrong, they really don’t want to undo all that effort…
In my case I don’t metabolize medication well. Because of that I’m under treated and over treated. Taking normal antibiotics for 14-21 days works for me. Taking Amoxicillin for 10 days followed by something stronger for 10 days a week later is more money & 2 or 3 Dr. visits.
There is a psychiatric medicine that works well for me. At the max dose my blood levels are 17 when therapeutic measurement is 80 or so. Since I can’t get approval for more from insurance I take 4 other meds, all at maximum doses. Now I take them once a day, twice a day before 5, every 12 hours and every 6 hours. My compliance is poor, so understanding what’s physical, what’s psychological and what’s a result of erratic medication dosing is hard. If I was released from a psychiatric hospital with my preferred medication at its therapeutic dose my insurance would allow it, but I can’t be committed unless I go off my meds and get arrested for something serious.
Luckily my GP will allow me to take normal doses over longer periods for things like antibiotics or iron or vitamin D. My psychiatrist has no luck, even when pointing out that they could save $1,000 a month by going with blood levels rather than doses.
Medicine’s problem is that somewhere along the way, they let an economist touch it. The contamination has been fatal.
Ideal patients don’t exist any more than ideal markets. Patients don’t come in with one complaint; they have a history and sometimes dozens of issues with uniquely damaged bodies and warped immune responses. Each individual human body is deliberately highly variable otherwise we’d all be eaten by the first infection to come along. The paradox of biology is that a species has to be close enough to mate but species with too little chemical variety quickly go extinct.
The most thorough way to practice medicine is determine what a normal human body is supposed to look like. You figure out how your patient differs (e.g., diet, flora, cholesterol, etc.) and then you look at the cheapest, most effective ways to push them back into alignment with “normal.” As the article points out, you don’t know what evidence is important. That’s the biggest problem with bringing “rational expectations” junk science in. (We don’t even have a fraction of the diagnostics yet to attempt medicine this way but you’d be surprised just how much better outcomes are for chronic disease once you start approaching it this way.)
You then have an entire literature on quality control and measurement (Edwards Deming) that economists seldom discuss because it implies management should be held accountable for “errors” (we can’t even mention what normal people call fraud).
Then there are the financial motives of those setting “guidelines” and their inability to rapidly (not to mention objectively) integrate new research data, especially across fields. The article, for instance, says nothing about cell studies or genetics which can teach you a great deal even absent clinical trials. Several good, clear, consistent studies of cellular behavior can often tell you when something’s gone wrong with one clinical trial – just as several epidemiological observations can often point you to a common signaling pathway.
What doctors should be doing is more akin to modeling a climate on a supercomputer, not trading fantasy baseball players. Letting economists lead the way is nuts. These aren’t markets that perfectly clear (which don’t exist, by the way). Each person is a whole ecology – and not just unto themselves. There’s gene flow even between people; it’s called the flu.
You create a model so it shows you new insights you didn’t see before. And, quite honestly, you often only know what’s important in retrospect. Much that looks “uncertain” makes sense when you piece together a million pieces of a jigsaw puzzle. It’s this consistency under uncertainty and across circumstances that begins pointing you in the right direction.
There’s no way any single physician can do this. You need a really big team – and this is where “economics” (if we’re calling it that) comes up short again. This is an *institutional* problem of coordinated action, as other commenters have noted (which smart people used to be allowed to call “politics”).
The best medical feedback you can get comes from a good coroner’s report. But just as the bankers have dismantled oversight and auditing, so too has the medical profession been touched by the decrepit digit of the Chicago School of fraud. We don’t perform autopsies anymore, because then we might find out something about how a patient died. Then we’d have to do something with that information. That would mean holding the managerial class responsible for the quality of their work (or “guidelines,” if you will). It would be great for the rest of us, but I can hear a thousand MBAs screaming in the night…
“Care” vs. “protocols:” So, like so many parts of “modern life,” it might be fair to say “there’s no fixing it, because the fix is in”?
But of course there are still some people of good will, kindly spirit, and reasonable intelligence who continue, against all the incentives and vectors and pressures, to do what sensate and empathic patients, doctors, nurses, relatives, and “ordinary people” would recognize as “the right thing” (e.g., https://m.youtube.com/watch?v=lYpzgTKwk6s). And they do all those “right things,” as best they can, in spite of being looted and repressed every day, in all those “medical settings” where wealth is transferred in lieu of health promotion and restoration. That latter term, “the right thing,” is one of those definitional items that can so easily be hijacked and perverted by stuff like the Homo economicus meme: “we can’t AFFORD that… I won’t pay a nickel for someone else’s medical treatments and drugs…”
I know an acupuncturist (socially) who complains about the need for “evidence-based” protocols. He says these are typically the narrowest possible studies, often drug company-funded, that limit reasonable treatment alternatives. On the other hand, it keeps him busy because most of his patients can’t get satisfactory treatment from Western medicine (doctors’ protocol requirements trump good sense).
I’m unable to now, but I hope to make a later comment here regarding Teaching Hospitals, since they’ve been suggested as a best option in a comment above by someone who was surely well-meaning.
On that commenter’s clearly innocent behalf: you have to know someone, or be someone, who had a horrid experience in one to even realize the potential downsides of being treated in certain Teaching Hospitals in a country, the US, where profit for a small few, and immortalizing the wealthy, is insidiously declared the be-all and end-all.
Ideally, Teaching Hospitals would be the best place to be treated, but many qualifications are in order. And don’t even bother with the frighteningly pathetic Teaching Hospital wiki page, which does not even clarify the centuries-long history, nor which hospitals are teaching hospitals and why, in any given country, etcetera, etcetera, etcetera.
Teaching Hospitals generally comprise some (if not all, in certain of the following categories): VA hospitals; large, urban County [Government] Hospitals (such as Allegheny County General Hospital in Pgh., PA, and Santa Clara Valley Medical in Santa Clara County, CA), which are government-funded to ‘treat’ the poverty-ridden; State University Hospitals, such as those in the University of California (UC) group; and, occasionally, Private University Hospitals such as Stanford’s. (Not sure whether any Prison Hospitals are also Teaching Hospitals; a search was fruitless, though it wouldn’t surprise me.)
Many would stay away, if able, from the VA and County Hospitals, despite the fact that many of the medical staff are devoted and well educated. All one need do is research recent VA Hospital scandals and overwhelmed County Hospital horrors.
Even outside of the VA and County hospitals, the Teaching Hospitals that are ‘well respected’ may not be preferable (due to horrific MANAGEMENT policies):
For example, if one looks at the two Silicon Valley teaching hospitals, Santa Clara Valley Medical County Hospital and Stanford Healthcare, the Yelp ratings are a pathetic 2 and 2.5, respectively. When sorted by most recent date (as I’ve linked to above), one-star ratings (oddly the lowest possible Yelp rating; no zero ratings allowed for the Free Markets) predominate on the first page of Stanford ratings, with dots of five-star ratings, some understandable, as they relate to a particular human caregiver[s], and some frankly suspect, given the predominance of the one-star ratings. When that first page of ratings is calculated alone, it gets even uglier for Stanford, which drops to a 2-star rating (interestingly, unlike Santa Clara Valley Medical County Hospital, Stanford is so powerful that it doesn’t even bother to respond to its pathetic ratings), while Santa Clara Valley Medical’s rating increases to 2.1.
Next, there’s the emotional abuse (which already-ill, and especially poverty-ridden, patients and their loved ones need like a hole in the head; stress can kill) of caretakers announcing that This is A Teaching Hospital! to shut people up about any valid concerns they may have when they feel like guinea pigs [1]. This, from a Doctor:
I’ve mostly referred to Silicon Valley Hospitals above, because I’ve lived and worked there for decades and have heard numerous horror stories, and witnessed some, up close and personal, with those I’m closest to.
[1] It’s a well-known fact that US experiments on unwitting patients (predominantly poverty-ridden minorities, the elderly, and single females) were predominantly, if not wholly, undertaken at Teaching Hospitals™, which the Department of Defense heavily funds – such as the UCSF Plutonium Injections – and at Prison Hospitals.
Sigh, my post became too long, and HTML-link constrained, before I addressed other Teaching Hospital™ issues (such as the late-night security staff treating RETURNING ER patients, despite their not even having any record of petty crimes, like terrorists; I’ve witnessed this after a Teaching Hospital™ nearly killed someone I love). I may follow up in a day or two, if able.