Yves here. As Lambert might say, “BWHAHAH!” But it would have been nice if challenges to “misinformation” claims had come earlier and more often.
By Sara Talpos, a contributing editor at Undark. Originally published at Undark
In June, the journal Nature published a perspective suggesting that the harms of online misinformation have been misunderstood. The paper’s authors, representing four universities and Microsoft, conducted a review of the behavioral science literature and identified what they characterize as three common misperceptions: That the average person’s exposure to false and inflammatory content is high, that algorithms are driving this exposure, and that many broader problems in society are predominantly caused by social media.
“People who show up to YouTube to watch baking videos and end up at Nazi websites — this is very, very rare,” said David Rothschild, an economist at Microsoft Research who is also a researcher with the University of Pennsylvania’s Penn Media Accountability Project. That’s not to say that edge cases don’t matter, he and his colleagues wrote, but treating them as typical can contribute to misunderstandings — and divert attention away from more pressing issues.
Rothschild spoke to Undark about the paper in a video call. Our conversation has been edited for length and clarity.
Undark: What motivated you and your co-authors to write this perspective?
David Rothschild: The five co-authors on this paper had all been doing a lot of different research in this space for years, trying to understand what it is that is happening on social media: What’s good, what’s bad, and especially understanding how it differs from the stories that we’re hearing from the mainstream media and from other researchers.
Specifically, we were narrowing in on these questions about what the experience of a typical consumer is, a typical person versus a more extreme example. A lot of what we saw, or a lot of what we understood — it was referenced in a lot of research — really described a pretty extreme scenario.
The second part of that is a lot of emphasis around algorithms, a lot of concern about algorithms. What we’re seeing is that a lot of harmful content is coming not from an algorithm pushing it on people. Actually, it’s the exact opposite: the algorithm is kind of pulling you towards the center.
And then there are these questions about causation and correlation. A lot of research, and especially mainstream media, conflate the proximate cause of something with the underlying cause of it.
There’s a lot of people saying: “Oh, these yellow vest riots are happening in France. They were organized on Facebook.” Well, there’s been riots in France for a couple hundred years. They find ways to organize even without the existence of social media.
The proximate cause — the proximate way in which people were organizing around [January 6] — was certainly largely online. But then the question comes, could these things have happened in an offline world? And these are tricky questions.
Writing a perspective here in Nature really allows us to then get to stakeholders outside of academia to really address the broader discussion, because there are real-world consequences. Research gets allocated, funding gets allocated, platforms get pressure to solve the problems that people discuss.
UD: Can you talk about the example of the 2016 election: What you found about it and also the role that perhaps the media played in putting forth information that was not entirely accurate?
DR: The bottom line is that what the Russians did in 2016 is certainly interesting and newsworthy. They invested pretty heavily in creating sleeper Facebook organizations that posted viral content and then slipped in a bunch of non-true fake news towards the end. Certainly meaningful and certainly something that I understand why people were intrigued by. But ultimately, what we wanted to say is, “How much impact could that plausibly have?”
Impact is really hard [to measure], but at least we can put it in perspective relative to people’s news diets and showcase that the number of views of Russian direct misinformation is just a microscopic portion of people’s consumption of news on Facebook — let alone their consumption of Facebook, let alone their consumption of news in general, of which Facebook is just a tiny portion. Especially in 2016, the vast majority of people, even younger people, were still consuming way more news on television than they were on social media, let alone online.
While we agree that any fake news is probably not good, there is ample research to see that repeated interaction with content is really what drives underlying causal understanding of the world, narratives, however you want to describe it. Getting occasionally hit by some fake news, and at very low numbers for the typical consumer, is just not the driving force.
UD: My impression from reading your Nature paper is that you found that journalists are spreading misinformation about the effects of misinformation. Is that accurate? And, if so, why do you think this is happening?
DR: Ultimately, it’s a good story. And nuance is hard, very hard, and negative is popular.
UD: So what’s a good story, specifically?
DR: That social media is harming your children. That social media is the problem.
There’s a general want to cover things in a more negative light. There is certainly a long history of people freaking out over new technology and ascribing all of society’s ills to it, whether that was the internet, or television, or radio, or music, or books. You can just go back in time, and you can see all of these types of concerns.
Ultimately, there are going to be people who benefit from social media. There are going to be people who are harmed by social media, and there are going to be many people who will progress with it in the way that society continues to progress with new technology. That is just not as interesting a story as “social media is causing these problems,” without counterbalancing that.
“Social media is the problem, and it’s really the algorithms” provides a very simple and tractable solution, which is that you fix the algorithms. And it avoids the harder question — the one that we generally don’t want to do — about human nature.
A lot of the research that we cite here, and the research that I think makes people uncomfortable, finds that some segment of the population demands horrible things. They demand things that are racist, degrading, violence-inducing. That demand is capable of being satiated on various social media, just as it was satiated beforehand in other forms of media, whether it was people reading books, or movies, or radio, whatever it was that people were listening to or gaining information from in the past.
Ultimately, the various channels that we have available definitely shift the ease and the way in which these things are distributed. But the existence of these things is a human nature question well beyond my capacity as a researcher to solve, well beyond a lot of people’s capacity — most people’s, everyone’s. I think that makes it tricky and also makes you uncomfortable. And I think that’s why many journalists like to focus in on “social media bad, algorithms the problem.”
UD: On the same day that Nature published your piece, the journal also published a comment titled “Misinformation poses a bigger threat to democracy than you might think.” The authors suggest that “Concern about the expected blizzard of election-related misinformation is warranted, given the capacity of false information to boost polarization and undermine trust in electoral processes.” What’s the average person to make of these seemingly divergent views?
DR: We certainly do not want to give off the impression that we tolerate any bit of misinformation or harmful content, or trivialize the impact it has, especially on those people it does affect. What we’re saying is that it is concentrated away from the typical consumer into extreme pockets, and it takes a different approach and a different allocation of resources to hit that than the traditional research, and the traditional questions you see pop up, which aim at the typical consumer and at mass impact.
I read that and I don’t necessarily think it’s wrong, as much as I don’t see who they’re yelling at, basically, in that piece. I don’t think there is a huge movement to trivialize, as much as to say, “Hey, we should actually fight it where it is, fight it where the problems are.” I think it’s a case of talking past each other, in a sense.
UD: You’re an employee of Microsoft. How would you reassure potentially skeptical readers that your study is not an effort to downplay the negative effect of products that are profitable to the tech industry?
DR: This paper has four academic co-authors, and went through an incredibly rigorous process. You may not [have] noticed on the front: We submitted this paper on Oct. 13, 2021, and it was finally accepted on April 11, 2024. I’ve had some crazy review processes in my time. This was intense.
We came in with ideas based off our own academic research. We supplemented it with the latest research and continue to supplement it with research coming in, especially some research that ran counter to our original conception.
The bottom line is that Microsoft Research is a unique place. For those who are not familiar with it, it was founded under the Bell Labs model, in which there is no review process for publications coming out of Microsoft Research, because they believe that the integrity of the work rests on the fact that nothing is censored on the way out. The idea is to use this position to be able to engage in discussions and understanding around the impact of some things that are near the company, and some things that have nothing to do with it.
In this case, I think it’s pretty far afield. It’s a really awesome place to be. A lot of work is joint-authored with academic collaborators, and it certainly always is important to ensure that there are very clear guidelines in the process and to ensure the academic integrity of the work.
UD: I forgot to ask you about your team’s methods.
DR: It’s obviously different than a traditional research piece. In this case, this was definitely started by conversations among the co-authors about joint work and separate work that we’ve been doing that we felt was still not breaking through into the right places. It really started by laying down a few theories that we had about the differences between our academic work, the general body of academic work, and what we were seeing in the public discussion. And then an extremely thorough review of literature.
As you’ll see, we’re somewhere in the 150-plus citations — 154 citations. And with this incredibly long review process in Nature, we went line by line to ensure that there wasn’t anything that was undefended by the literature: either, where appropriate, the academic literature, or, where appropriate, what we were able to cite from things that were public.
The idea was to really create, hopefully, a comprehensive piece that allowed people to really see what we think is a really important discussion — and this is why I’m so happy to talk to you today — about where the real harms are and where the push should be.
None of us are firm believers in trying to pull out a stance and hold to it despite new evidence. There are shifting models of social media. What we have now with TikTok, and Reels, and YouTube Shorts is a very different experience than what the main social media consumption was a few years ago — with longer videos — or the main social media a few years before that with news feeds. These will continue to then be something you want to monitor and understand.
I’m glad these people aren’t jumping on the bandwagon about misinformation, but they really aren’t talking about anything important either.
The idea that people have been making things up and telling the people around them, probably since we learned to talk, is kind of missing here. Children begin learning to deal with “misinformation” in elementary school.
What about all the damaging misinformation? The stuff that comes out of the White House briefing room every day? What about what is on Fox News, or NPR, or CNN, or MSNBC? All I ever hear anymore, whenever some major media outlet is on near me, is misinformation.
How about when members of Congress get up to make a statement that is completely false and out of any context, usually just so some media outlet can run with the lead? News organizations and our federal government are the real sources of damaging misinformation.
How about lobbyists creating the food pyramid and pushing UPFs on everyone?
It is like when I was a kid, when they were saying that movies and television were a danger because of the violence. Really, the violence was on the evening news; that was real. Movies are just movies.
I still say “misinformation” is the biggest problem we have. The problem is that “misinformation” is just the party line: that which they report in the news, that which they teach in schools, that which they will write down as “history.”
“We will know that our program of disinformation is complete when everything Americans believe is false.”
— William Casey, former director of the CIA.
Attaboy, Bill! Great job! We are there, or darn near.
See also Operation Mockingbird and so much more.
Social media was the sole source of correct information on SARS-CoV-2 (directly from astute Asian, Italian & US HCWs and renowned epidemiologists who had empirical knowledge from SARS, MERS & other immune-hijacking coronaviruses).
Same with Ukraine, Nord Stream
Same with Syria
Same with Biden attack on AGW-mitigation
Same with RussiaGate
Same with Gaza & Canary Mission doxxing anti-genocide scholars
Same with US duopoly’s surveillance
Everything on TV, in print & on Prop’RNot-conforming blog-aggregators is diametrically opposite to consensus reality…
HMmmm?
Back in the day, or night, hah, shortwave radio filled a similar role. That on-the-ground discussion was effective at rounding out and filling in the short byline stories. In that more restrained world, there was less animus toward big players and more interest in getting at what was happening to people and communities. The players were known for what they were, from BBC World Service to Stasi, and many sought news where they could.
Seriously? It’s not social media or false information boosting polarization – they are mere tools.
I have the privilege of living in a multiparty country, and during the last couple of election cycles almost all serious media organizations have established so-called “candidate selection machines,” in which you answer a set of policy questions and the “machine” lists the candidates whose answers were most similar to yours.
The thing I’ve noticed with the results these machines give is that the closest candidates agree with me 85-90% of the time and the furthest away agree with me 55-60% of the time. Which means that, for the most part, all the candidates agree on 70-75% of the policies! It follows that, in order to differentiate, they are forced to focus on the few differences they have.
And regarding the media also focusing on those minor differences, it’s my belief that the modern-day journalist wants to be more like Laurence Olivier than Walter Cronkite – emotions must be engaged, and narrative replaces facts.
It doesn’t help that the collective West seems to be going through a cognitive phase, letting the ideological overcome the utilitarian or even the practical.
Just a note on the connection to Microsoft and the interviewee’s response to the question about it. While the interviewee answers the late interview question about Microsoft by saying there are four academic co-authors, what is not mentioned is what is in the disclosure in the “ethics” declaration in the journal article. The five co-authors work for, have worked for, or received funding from the following (initials for names because that’s what’s in my notes):
DR: Microsoft, Yahoo
CB: Microsoft
DJW: Microsoft, Google, Yahoo
BN: Meta, Facebook, Instagram
ET: Facebook, Instagram
The two FB & Instagram authors both worked on a 2020 “election study” for the two Meta arms.
I’m fascinated by the comment: “The second part of that is a lot of emphasis around algorithms, a lot of concern about algorithms. What we’re seeing is that a lot of harmful content is coming not from an algorithm pushing it on people. Actually, it’s the exact opposite. The algorithm kind of is pulling you towards the center.”
My experience with YouTube Shorts and Instagram Reels, both of which display ultrashort videos one after another, is that “the algorithm” tends to show me things that keep me watching. In my case, it will show a variety of things but reliably uses a particular sport I was fond of in my youth. I bet it is disingenuous to say that “the algorithm” is doing anything other than maximizing engagement. The problem of misinformation seems to me to be a bit of a red herring. So-called “social media” has become a distraction maximizer. The various bits of misinformation picked up along the way are incidental, although in some extreme cases they clearly cause a few people harm.
I feel this is an exercise in agnotology on the part of researchers to distract or confuse us about the significant problems of “social media” being controlled by a handful of private firms. It seems reasonable to think that these researchers have been sponsored by the industry because they are producing work that at best doesn’t challenge the most meaningful problems, and at worst introduces doubt and confusion about the problems.
Private control of our communications systems by these companies is perhaps the fundamental problem.
In my view, algos that allow feedback from you, the eyeball provider, do also take that feedback into account. It didn’t take a lot of “not interested in this post” responses to pro-Israel stuff on X for it to disappear from my feed. And it makes my reading experience a lot less stressful.
The algo, after all, wants to keep your eyes glued, and not giving you stuff you’ve said you aren’t interested in is a pretty good way of ensuring that (a rough sketch of this logic appears below).
There are downsides to this, of course, like missing out on more thoughtful posts about the Israeli position, but this is Twitter after all, so the chances of that are low enough for me to take the risk.
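To make concrete the dynamic these two comments describe, here is a minimal sketch of a generic engagement-style feed ranker that also honors explicit “not interested” feedback. Everything here (the names, the weights, the structure) is a hypothetical illustration of the general technique, not any platform’s actual code.

```python
# Hypothetical sketch: rank posts by predicted engagement, then heavily
# demote topics the user has flagged "not interested". Illustrative only.
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    topic: str
    predicted_watch_time: float  # model's guess at seconds of engagement


@dataclass
class UserState:
    muted_topics: set = field(default_factory=set)  # from "not interested" clicks


def score(post: Post, user: UserState) -> float:
    """Engagement-maximizing score with a feedback penalty."""
    s = post.predicted_watch_time
    if post.topic in user.muted_topics:
        s *= 0.05  # flagged topics all but vanish from the feed
    return s


def rank_feed(posts: list[Post], user: UserState) -> list[Post]:
    return sorted(posts, key=lambda p: score(p, user), reverse=True)


# After a few "not interested" clicks on a topic, it drops to the bottom,
# even when the model predicts it would hold the user's attention longest.
user = UserState(muted_topics={"politics"})
feed = rank_feed(
    [Post("a", "sports", 40.0), Post("b", "politics", 90.0), Post("c", "baking", 30.0)],
    user,
)
print([p.post_id for p in feed])  # ['a', 'c', 'b']
```

Note that nothing in this loop optimizes for accuracy; the objective is attention, which is the point both commenters are making: misinformation rides along only insofar as it engages.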
‘The bottom line is that what the Russians did in 2016 is certainly interesting and newsworthy. They invested pretty heavily in creating sleeper Facebook organizations that posted viral content and then slipped in a bunch of non-true fake news towards the end. Certainly meaningful and certainly something that I understand why people were intrigued by. But ultimately, what we wanted to say is, “How much impact could that plausibly have?”’
And with that, David Rothschild completely discredited himself as somebody worth paying attention to. I’m too lazy right now to look up the exact figures, but it was something on the order of 20 to 40 thousand dollars – according to US government sources at the time – and mostly coming out of a Russian clickbait factory. And half those wonky ads did not even appear in the media until well after the election was over. So this is just Russia!Russia!Russia! at work again. And to think that this guy calls himself an economist.
Yes. Everything I’ve read that has actual figures (rather than unverified assertions) has emphasized the quite *small* relative level of investment, as well as the other elements you mention. I assume the role of “Russian” content in 2016 was not actually studied in this “research.” So this seems to be just another assertion of “fact” from authority about what is now a foundational myth. Gosh, maybe some experts studying “misinformation” will analyze this sometime. At least they admitted the effects were negligible, even if it served the interests of their employers to do so.
This is what I reacted to as well. SHOW ME THESE ADS!!!
For the love of god why do they never show us the Russian ads?!
Because they are B-O-R-I-N-G. Years ago I clicked through what claimed to be these alleged ads. Only by cherry-picking a small set can you impose an agenda upon them. So I concluded the people making these ads were doing what they claimed – trying to figure out what sort of ads get the most attention, or clicks.
These are supposedly the ads – pretty lame and both sides of the aisle, as it were:
https://www.nytimes.com/2017/11/01/us/politics/russia-2016-election-facebook.html
Archive link if you need it:
https://archive.ph/lSZbt
More here:
https://ushadrons.medium.com/
Quick, someone get the big ad agencies and PR mills on the case. Hire those Rooskie geniuses, as they obviously have the magic ad bullets, creative oomph, and Brawndo that people need. /s
Not quite laughable, but I don’t sense any agenda other than experimentation to see what gets clicks.
Honestly, I do not understand why the “algos” can’t fix this. They fixed my account and I’m not even Russian!
Thanks for the link. There were some good ads on that page as evidenced by the NYTimes putting the effective ones ‘below the fold.’ For the Russians, any ad that gives you pause would be effective. Yellow Bernie and armwrestling Satan are about something else entirely but I couldn’t begin to guess what that might be. Bad ads can be very effective as witnessed by how the US media only shared the three most ridiculous ones (weirdly undercutting the seriousness of the allegations).
Now if we could see all the ads run by Israeli/Zionist front groups for perspective?
Yep. Just framing it as something “the Russians” did (all 200 million of them with their hive mind?) instead of “some Russians” shows the level of unconscious class bias here that makes one suspect everything else.
I remember the debunking of the “Russian” ads at the time. It was private Russian companies producing clickbait to make money from and had nothing to do with the Russian government.
And of course they were pretty silly and cost a total of something like $100,000 to put up.
The accusation was part of Hillary Clinton’s justification of why she lost. Since, from her perspective, it couldn’t be because she was a very bad candidate, it had to be someone else’s fault; hence the accusation that it was “The Russians.”
This nonsense was completely debunked years ago by supremely credible sources, including the fact it was an invention of Clinton’s campaign. That this nonsense is raised again just demonstrates the dishonesty of those making the accusations.
It was not just debunking. Two internet companies owned by the late Prigozhin were indicted by Mueller’s team, and against all expectations Prigozhin hired lawyers who actually came to court.
They quickly showed that one of the companies did not even exist yet during the election campaign, and that the other bought ads for both candidates in roughly equal amounts (which were minimal compared to other types of ads).
The DOJ eventually closed the case in a way that means it can never be legally reopened, thus clearing the “troll factories” of all wrongdoing.
Just like with the Steele memo and the DNC hacking, law enforcement knew almost from the start that there was no proof whatsoever of Russian interference, yet they spent years trying to find something. Anything.
Look at the last name.
Declaring something as ‘misinformation’ or ‘disinformation’ is a form of censorship.
There is a difference between opinion and fact. What’s often presented and defended as ‘fact’ is actually opinion.
For example, “the covid vaccines are safe and effective” is an opinion, not a fact. Yet the government and social media companies declared any information to the contrary as ‘misinformation’ or ‘disinformation’.
Sure, but “facts” can be elusive. You have “evidence” which may or may not be complete. Then you have to make an assessment of the evidence and a value judgement when evidence may be conflicting. In the courts, there is the issue of “expert testimony” and how that kind of evidence should be viewed.
the latest Taibbi/Kirn excerpt seemed to fit the topic a bit:
Transcript – America This Week October 25, 2024:
“I Wish I Had Hitler’s Editors”: Trump and The Atlantic
How to hide the ball in peculiarly-sourced stories. Plus, “The Hanging Stranger,” another predictive sci-fi fable from Philip K. Dick
Matt Taibbi and Walter Kirn
Oct 26
Matt Taibbi: All right. Welcome to America this week. I’m Matt Taibbi.
Walter Kirn: And I’m Walter Kirn.
Matt Taibbi: Walter, are you back home? You look good. You look tanned, rested, and ready.
Walter Kirn: I’m actually exhausted, flushed, and unprepared.
Matt Taibbi: Starting off with a bang, both of us.
Walter Kirn: Yeah. I’m back in Montana, which gladdens me because as the election approaches, I want to be in a place with mountains nearby.
Matt Taibbi: High ground.
Walter Kirn: Yeah, high ground. I want to have the high-ground advantage. I was in New York City, and there’s a feverish sense there that something strange is coming. It’s a town that watches too much news and knows too many people in the media. Back here, it’s a ball game on TV at the bar, and some talk about things that broke nationally five weeks ago. So, I’m glad to be among the less informed, the more informed about their own lives, and the less informed about everything else.
Matt Taibbi: Yeah, I feel the same way. I live in a house that kind of floats in the middle of a ravine, and I’m thinking about putting dangerous animals in the ravine for election night.
Walter Kirn: Crocodiles.
Matt Taibbi: Yeah, exactly. I mean, the biggest things I can get around here are alligator snapping turtles and bears, but I don’t know how I’d coax the bears in there. But as we speak, there are dueling rumors on the internet of campaign-ending stories that may be coming out. The once, and I guess now again, relevant Mark Halperin is claiming that he was pitched a story that would, what’s the language that he used, that would end the Trump campaign? Or wait, let me see if I can get it right: if true, would end the Trump campaign.
Walter Kirn: Has any story in October ever ended any campaign?
Matt Taibbi: Not even Access Hollywood killed the Trump campaign, and I thought that was the ultimate in a campaign killing story.
Walter Kirn: I was saying exactly that to my wife the other day. I said, “At this time in 2016, people were talking about grabbing by the pussy and it didn’t leave a mark.”
Matt Taibbi: So, to speak. Yep.
Walter Kirn: Yep.
Matt Taibbi: Yeah.
Walter Kirn: So, to speak. Yeah.
Matt Taibbi: Yeah. So, Halperin is saying this, should we hear what he’s … Let’s hear the audio on this, because I’m always interested in his tone of voice.
Walter Kirn: Sure.
Mark Halperin: These last two weeks are going to be filled with things like this, and I can tell you without going into detail that I’ve been pitched a story about Donald Trump, now for about a week, that if true would end his campaign. And there’s all sorts of things like that flying around. I’m not the only one who’s been pitched it.
Matt Taibbi: Okay. Where is he, by the way? Yeah.
Walter Kirn: He’s at the headquarters in some New York skyscraper.
Matt Taibbi: Halperin Headquarters.
Walter Kirn: Yeah. Halperin Inc., and it’s got a Lex Luthor vibe or something, or an Ayn Randian vibe, to be in a steel and glass skyscraper. First of all, what’s the use of saying, I’ve been pitched a story, which I’m not going to tell you, which if true would end a campaign?
Matt Taibbi: It’s a lot of ifs. Yeah. Conditionals.
Walter Kirn: Even I don’t do that. I mean, in other words, if nothing comes of it, then you should not have mentioned it. If something does, then you didn’t break it.
Matt Taibbi: I’ve teased stories. I mean, I’ve talked about stories that are coming.
Walter Kirn: Yeah. Stories …
Matt Taibbi: But if you’re not going to do it…
Walter Kirn: But ones that you yourself have passed on. I guess he’s trying to dull the victory for somebody who might bring the story forth, even though he didn’t. Should it prove determinative? I don’t know.
Matt Taibbi: Yeah, it’s a strange one. If it’s that good, keep working on it and have it ready next week and then let it fly. And then simultaneously, is it this morning or last night, the rumors started flying that there was a campaign ending situation going on with Kamala that would come out today. So, by the time this show comes out, we’ll know if this was bull or not.
Walter Kirn: Well, and it has a strange provenance, that story, because Trump did say at a rally in Georgia that she may know something we don’t know, as an explanation for her rather relaxed campaign schedule of the last few days, in which she’s taken days off and so on. And I took it to mean that he thinks the polls are showing her losing badly, and she’s given up. That seemed like the Occam’s razor analysis. But then George Santos, the unimpeachable, but impeached, Long Island congressman, got in front of his camera in what seemed like a moment of enthusiasm, because he hadn’t shut up.
Matt Taibbi: Well, you can tell. Look at the shooting.
Walter Kirn: Yeah, exactly.
Matt Taibbi: Right. Yeah. Let’s hear it.
Walter Kirn: Well, let’s see it.
Matt Taibbi: Yeah.
George Santos: Oh, hey guys. Got to tell you something. It’s so crazy. I just got off the phone with a source, and they’re telling me that tomorrow the Kamala Harris ship sinks, you’re going to see all the rats jumping off. The story that’s going to break tomorrow is so damning and so, so bad that Democrats are going to distance themselves from her something you’ve never seen before. And essentially, you might even see some asking for people to vote for Trump. This is wild. Stay tuned as this drops tomorrow.
Matt Taibbi: Boy, I’m waiting with bated breath, aren’t you, Walter?
Walter Kirn: Was that on Twitter that he released that?
Matt Taibbi: It is on Twitter. It looks like it might’ve originated elsewhere.
Walter Kirn: So now that Twitter pays actual money, and I’ve gotten a few hundred dollars here and there from it, one can imagine that he made a few hundred dollars just from that little video. But he did seem excited. It also seemed that maybe he had ducked out from a wild party in the next room to make that, because George, who I imagine is vain like the rest of us, did not look put together for that thing.
Matt Taibbi: No. Well, it could have been a studied, rushed look.
Walter Kirn: Right. Cinema Verite, as they used to call it.
Matt Taibbi: Right.
Walter Kirn: Yes. Internet vérité, by the way.
Matt Taibbi: Don’t you love this era? You can just go on, let’s say you’re at a party and you need to score some ice to really make things go over, but you don’t have the cash. Just go on Twitter and say you got a campaign ending story. And I’m not saying that that’s what’s going on with George Santos.
Walter Kirn: No, no.
Matt Taibbi: I’m just saying one could theoretically, if one were in a situation where you needed a couple of hundred bucks to score something, you could do that now. Right?
Walter Kirn: Well, and also, even though journalism has now fallen below, I don’t know…
Matt Taibbi: Congress.
Walter Kirn: Yeah, rain gutter contracting, as a profession in terms of public esteem, still everyone wants to be a journalist. He just got off the phone with his source. He’s really adopting the swagger of Mr. Deadline there…
To the extent they get carried away and paranoid, it just becomes a negative feedback loop. The old order increasingly resembles a scab over a festering wound, getting more brittle and disconnected as it goes. Under the culture and politics, it’s biology and physics.
TMI: Too much information. Per a recent trip to China, my impression was that the Chinese government strives to “limit” the flow of information to its population in an attempt to keep the discourse within a relatively narrow channel. Yet my conclusion was that in the US, by comparison, the flow of information is nearly unlimited, yet the consciousness of the general population also remains in a narrow channel!! What gives? My conclusion was that ruling bodies prefer an uncritical and docile population, because a critical and reasoning population would be a threat to those in power. Maybe these are two different approaches to the same objective. Misinformation, false information, limited information, etc., etc., etc.

How does one figure out what’s going on? One media outlet (Zero Hedge) says Beyonce was a flop, and a number of others (MSM) touted her as a success!! Lo and behold, the owners of major newspapers (LA Times & WaPo) don’t endorse the “favorite,” thus being condemned as supporters of that “fascist” (recall HRC and the deplorables). As a reclusive senior: who is Beyonce, and who is this guy Rogan? The latter, along with Tucker Carlson, has double-digit millions of followers, not acknowledged in the NYT or on MSNBC. Do they register in the polls? I don’t know.

The problem isn’t with social media. It’s how easily human nature can be exploited. IMO, the majority of people in the middle of their normal lives (not students in college or seniors like myself) are too tired, too scared, and too bludgeoned with TMI, to the point of insensitivity; e.g., it seems a “normal” person would be outraged at a family being incinerated in a hospital tent, for exactly what objective? The joy of it? In the US: apathy. Technology is like a magic wand, which is why some in the upcoming generations will lovingly embrace chip implants and AI, which will elevate their self-esteem, self-empower them, and give them powers over others.

The problem with the acquisition of information over time, coupled with reflection, is that one realizes one has gone through life blind. I am still unable to figure out by what device the headlines (online) of the WaPo and the NYT match each other on a regular basis. Does the same computer generate the words? Is there an algorithm?
Clinton and Kerry, in their own words, want to limit the 1st Amendment! What form of government does that? Is that why the Founding Fathers inserted the 2nd Amendment after the 1st? It would be interesting if an analysis of Microsoft in light of Bernays’s book “Propaganda” were made. E.g., back in the day, Bernays suggested that builders add “piano rooms” to their new homes and work together with the piano sellers. A hundred years later it was the home office room (until smartphones and the cloud eliminated the need for so much space and real estate). It goes on. TMI
I would rather think the real underlying issue is not too much info but too much worthless, i.e. bad-quality, info.
All those Beyonces and MSNBCs and algorithm creations have one thing in common – they are superficial and most likely diverting from the heart of a problem, or in fact are spreading lies. Were they highly valuable and critical on a broad scale, the state of affairs would be completely different.
One of my major goals lies in finding the genuine sources of info about events that offer a realistic POV on the problem at hand.
Eventually you end up with very few pieces of info, because 99% of what is out there is garbage reproduced and reproduced again. The real substance of the info surrounding us is a tiny fraction.
It works like most WaPo or NYT articles about the Ukraine war. They are way too long, and almost nothing in them is worth quoting or taking seriously. They are lies upon fiction upon lies. They are literally filled with “TMI.” Reduce them to what they contain of serious knowledge and you walk away with a sentence or two. At best.
It’s like those junk funds described in the movie “The Big Short” (sorry if that’s not NC-level economics). But for the metaphor it’s enough: they are huge and everywhere, but are filled with nothing.
Our info space is the same. But it’s not the amount of info, it’s the lack of quality. I don’t think you can ever have enough info.