Yves here. I hate to sound like my usual knee-jerk contrarian self, but my experience is that the ads I get on the Web are wild misfires. So contrary to this article’s assumption that our tech overlords are geniuses at figuring out who we are, maybe even more psychologically than demographically, and then manipulating us, my assumption is that they are instead at most good at hitting the hot buttons of particular segments that are also big priorities for advertisers, like teenaged boys and girls.
While the plural of anecdote is not data, the persistent and wild misfires in my ad offerings say that the algos aren’t what they are cracked up to be. A list of the ones I get most often: Survivalist food packs. A travel site. An app for better parenting. An app for managing spending. A bedtime drink to treat Type II diabetes (BTW that is making a medical claim… why is that even legal?). How to make six figures a month as a Christian kingdom builder. A home buying site. Men’s t-shirts. HIV antivirals. Oh, and a pitch for donating to Ukraine. As you can imagine, none of these have the potential to sway me. So count me as unimpressed with Big Tech’s influence skills.
As for getting girls and women depressed, that’s not as hard as you think. Long before the days of the Internet, researchers ascertained that looking at a fashion magazine was a downer for women because readers felt they fell short of the glamorous photos.
By Lynn Parramore, Senior Research Analyst at the Institute for New Economic Thinking. Originally published at the Institute for New Economic Thinking website
Google. Amazon. Facebook. Apple. We live within the digital worlds they have created, and increasingly there’s little chance of escape. They know our personalities. They record whether we are impulsive or prone to anxiety. They understand how we respond to sad stories and violent images. And they use this power, which comes from the relentless mining of our personal data all day, every day, to manipulate and addict us.
University of Tennessee law professor Maurice Stucke is part of a progressive, anti-monopoly vanguard of experts looking at privacy, competition, and consumer protection in the digital economy. In his new book, Breaking Away: How to Regain Control Over Our Data, Privacy, and Autonomy, he explains how these tech giants have metastasized into “data-opolies,” which are far more dangerous than the monopolies of yesterday. Their invasion of privacy is unlike anything the world has ever seen but, as Stucke argues, their potential to manipulate us is even scarier.
With these four companies’ massive and unprecedented power, what tools do we have to effectively challenge them? Stucke explains why current proposals to break them up, regulate their activities, and encourage competition fall short of what’s needed to deal with the threat they pose not only to our individual wallets and wellbeing, but to the whole economy — and to democracy itself.
Lynn Parramore: The big firms that collect and traffic in data – “data-opolies,” you call them – why do they pose such a danger?
Maurice Stucke: People used to say that dominant companies like Google must be benign because their products and services are free (or low-priced, like Amazon) and they invest a lot in R&D and help promote innovation. Legal scholar Robert Bork argued that Google can’t be a monopoly because consumers can’t be harmed when they don’t have to pay.
I wrote an article for Harvard Business Review revisiting that thinking and asking what harms the data-opolies can pose. I came up with a taxonomy of how they can invade our privacy, hinder innovation, affect our wallets indirectly, and even undermine democracy. In 2018 I spoke to the Canadian legislature about these potential harms and I was expecting a lot of pushback. But one of the legislators immediately said, “Ok, so what are we going to do about it?”
In the last five or six years, we’ve had a sea change in the view towards the data-opolies. People used to argue that privacy and competition were unrelated. Now there’s a concern that not only do these giant tech firms pose a grave risk to our democracy, but the current tools for dealing with them are also insufficient.
I did a lot of research and spoke before many competition authorities and heard proposals they were considering. I realized there wasn’t a simple solution. This led to the book. I saw that even if all the proposals were enacted, there are still going to be some shortcomings.
LP: What makes the data-opolies even more potentially harmful than traditional monopolies?
MS: First, they have weapons that earlier monopolies lacked. An earlier monopoly could not necessarily identify all the nascent competitive threats. But data-opolies have what we call a “nowcasting radar.” This means that through the flow of data they can see how consumers are using new products, how these new products are gaining in scale, and how they’re expanding. For example, Facebook (FB) had, ironically, a privacy app that one of the executives called “the gift that kept on giving.” Through the data collected through the app, they recognized that WhatsApp was a threat to FB as a social network because it was starting to morph beyond simply a messaging service.
Another advantage is that even though the various data-opolies have slightly different business models and deal with different aspects of the digital economy, they all rely on the same anti-competitive toolkit — I call it “ACK – Acquire, Copy, or Kill.” They have greater mechanisms to identify potential threats and acquire them, or, if rebuffed, copy them. Old monopolies could copy the products, but the data-opolies can do it in a way that deprives the rival of scale, which is key. And they have more weapons to kill the nascent competitive threats.
The other major difference between the data-opolies today and the monopolies of old is the scope of anti-competitive effects. A past monopoly (other than, let’s say, a newspaper company) might just bring less innovation and slightly higher prices. General Motors might give you poorer quality cars or less innovation and you might pay a higher price. In the steel industry, you might get less efficient plants, higher prices, and so on (and remember, we as a society pay for those monopolies). But with the data-opolies, the harm isn’t just to our wallets.
You can see it with FB. It’s not just that they extract more money from behavioral advertising; it’s the effect their algorithms have on social discourse, democracy, and our whole economy (the Wall Street Journal’s “Facebook Files” really brought that to the fore). There are significant harms to our wellbeing.
LP: How is behavioral advertising different from regular advertising? An ad for a chocolate bar wants me to change my behavior to buy more chocolate bars, after all. What does it mean for a company like Facebook to sell the ability to modify a teenage girl’s behavior?
MS: Behavioral advertising is often presented as just a way to offer us more relevant ads. There’s a view that people have these preconceived demands and wants and that behavioral advertising is just giving them ads that are more relevant and responsive. But the shift with behavioral advertising is that you’re no longer just predicting behavior, you’re manipulating it.
Let’s say a teenager is going to college and needs a new laptop. FB can target her with relevant laptops that would fit her particular needs, lowering her search costs, and making her better off as a result. That would be fine — but that’s not where we are. Innovations are focused on understanding emotions and manipulating them. A teenage girl might be targeted not just with ads, but with content meant to increase and sustain her attention. She will start to get inundated with images that tend to increase her belief in her inferiority and make her feel less secure. Her well-being is reduced. She’s becoming more likely to be depressed. For some users of Instagram, there are increased thoughts about suicide.
And it’s not just the data-opolies. Gambling apps are geared towards identifying people prone to addiction and manipulating them to gamble. These apps can predict how much money they can make from these individuals and how to entice them back, even when they have financial difficulties. As one lawyer put it, these gambling apps turn addiction into code.
This is very concerning, and it’s going to get even worse. Data-opolies are moving from addressing preconceived demands to driving and creating demands. They’re asking, what will make you cry? What will make you sad? Microsoft has an innovation whereby a camera tracks what particular events cause you to have particular emotions, providing a customized view of stimuli for particular individuals. It’s like if I hit your leg here, I can get this reflex. There’s a marketing saying, “If you get ‘em to cry, you get ‘em to buy.” Or, if you’re the type of person who responds to violent images, you’ll get delivered to a marketplace targeted to your psyche to induce you to shop, let’s say, for a gun.
The scary thing about this is that these tools aren’t being quarantined to behavioral advertising; political parties are using similar tools to drive voter behavior. You get a bit of insight into this with Cambridge Analytica. It wasn’t just about targeting the individual with a tailored message to get them to vote for a particular candidate; it was about targeting other citizens who were not likely to vote for your candidate to dissuade them from voting. We’ve already seen from the FB files that the algorithms created by the data-opolies are also causing political parties to make messaging more negative because that’s what’s rewarded.
LP: How far do you think the manipulation can go?
MS: The next frontier is actually reading individuals’ thoughts. In a forthcoming book with Ariel Ezrachi, How Big Tech Barons Smash Innovation and How to Strike Back, we talk about an experiment conducted by the University of California, San Francisco, where for the first time researchers were able to decode an individual’s thoughts. A person suffering from speech paralysis would try to say a sentence, and when the algorithm deciphered the brain’s signals, the researchers were then able to understand what the person was trying to say. When the researchers asked the person, “How are you doing?” the algorithm could decipher his response from his brain activity. The algorithm could decode about 18 words per minute with 93 percent accuracy.

First, the technology will decipher the words we are trying to say, identifying from our subtle brain patterns a lexicon of words and vocabulary. As the AI improves, it will next decode our thoughts. It turns out that FB was one of the contributors funding the research, and we wondered why. Well, that’s because they’re preparing headsets for the metaverse that not only will likely transmit all the violence and strife of social media but can potentially decode the thoughts of an individual and determine how they would like to be perceived and present themselves in the metaverse. You’re going to have a whole different realm of personalization.
We’re really in an arms race whereby the firms can’t unilaterally afford to de-escalate because then they lose a competitive advantage. It’s a race to better exploit individuals. As it has been said, data is collected about us, but it’s not for us.
LP: Many people think more competition will help curtail these practices, but your study is quite skeptical that more competition among the big platform companies will cure many of the problems. Can you spell out why you take this view? How is competition itself toxic in this case?
MS: The assumption is that if we just rein in the data-opolies and maybe break them up or regulate their behavior, we’ll be better off and our privacy will be enhanced. There was, to a certain extent, greater protection over our privacy while these data-opolies were still in their nascent stages. When MySpace was still a significant factor, FB couldn’t afford to be as rapacious in its data collection as it is now. But now you have this whole value chain built on extracting data to manipulate behavior; so even if this became more competitive, there’s no assurance then that we’re going to benefit as a result. Instead of having Meta, we might have FB broken apart from Instagram and WhatsApp. Well, you’d still have firms dependent on behavioral advertising revenue competing against each other in order to find better ways to attract us, addict us, and then manipulate behavior. You can see the way this has happened with TikTok. Adding TikTok to the mix didn’t improve our privacy.
LP: So one more player just adds one more attack on your privacy and wellbeing?
MS: Right. Ariel and I wrote a book, Competition Overdose, where we explored situations where competition could be toxic. People tend to assume that if the behavior is pro-competitive it’s good, and if it’s anti-competitive, it’s bad. But competition can be toxic in several ways, like when it’s a race to the bottom. Sometimes firms can’t unilaterally de-escalate, and by just adding more firms to the mix, you’re just going to have a quicker race to the bottom.
LP: Some analysts have suggested that giving people broader ownership rights to their data would help control the big data companies, but you’re skeptical. Can you explain the sources of your doubts?
MS: A properly functioning market requires certain conditions to be present. When it comes to personal data, many of those conditions are absent, as the book explores.
First, there’s the imbalance of knowledge. Markets work well when the contracting parties are fully informed. When you buy a screw in a hardware store, for example, you know the price before purchasing it. But we don’t know the price we pay when we turn over our data, because we don’t know all the ways our data will be used or the attendant harm to us that may result from that use. Suppose you download an ostensibly free app, but it collects, among other things, your geolocation. No checklist says this geolocation data could potentially be used by stalkers or by the government or to manipulate your children. We just don’t know. We go into these transactions blind. When you buy a box of screws, you can quickly assess its value. You just multiply the price of one screw by the number of screws. But you can’t do that with data points. A lot of data points can be a whole lot more damaging to your privacy than just the sum of each data point. It’s like trying to assess a painting by Georges Seurat by valuing each dot. You need to see the big picture; but when it comes to personal data, the only one who has that larger view is the company that amasses that data, not only across its own websites but by acquiring third-party data as well.
So we don’t even know the additional harm that each extra data point might be having on our privacy. We can’t assess the value of our data, and we don’t know the cost of giving up that data. We can’t really then say: all right, here’s the benefit I receive – I get to use FB – and here is what it costs me.
Another problem is that normally a property right involves something that is excludable, definable, and easy to assign, like having an ownership interest in a piece of land. You can put a fence around it and exclude others from using it. It’s easy to identify what’s yours. You can then assign it to others. But with data, that’s not always the case. There’s an idea called “networked privacy,” and the concern there is that choices others make in terms of the data they sell or give up can then have a negative effect on your privacy. For example, maybe you decide not to give up your DNA data to 23andMe. Well, if a relative gives up their DNA, that’s going to implicate your privacy. The police can look at a DNA match and say, ok, it’s probably someone within a particular family. The choice by one can impact the privacy of others. Or perhaps someone posts a picture of your child on FB that you didn’t want to be posted. Or someone sends you a personal message with Gmail or another service with few privacy protections. So, even if you have a property right to your data, the choices of others can adversely affect your privacy.
If we have ownership rights in our data, how does that change things? When Mark Zuckerberg testified before Congress after the Cambridge Analytica scandal, he was constantly asked who owns the data. He kept saying the user owns it. It was hard for the senators to fathom because users certainly didn’t consent to have their data shared with Cambridge Analytica to help impact a presidential election. FB can tell you that you own the data, but to talk with your friends, you have to be on the same network as your friends, and FB can easily say to you, “Ok, you might own the data, but to use FB you’re going to have to give us unparalleled access to it.” What choice do you have?
The digital ecosystem has multiple network effects whereby the big get bigger and it becomes harder to switch. If I’m told I own my data, it’s still going to be really hard for me to avoid the data-opolies. To do a search, I’m still going to use Google, because if I go to DuckDuckGo I won’t get as good of a result. If I want to see a video, I’m going to go to YouTube. If I want to see photos of the school play, it’s likely to be on FB. So when the inequality in bargaining power is so profound, owning the data doesn’t mean much.
These data-opolies make billions in revenue from our data. Even if you gave consumers ownership of their data, these powerful firms will still have a strong incentive to continue getting that data. So another area of concern among policymakers today is “dark patterns.” That’s basically using behavioral economics for bad. Companies manipulate behavior in the way they frame choices, setting up all kinds of procedural hurdles that prevent you from getting information on how your data is being used. They can make it very difficult to opt out of certain uses. They make it so that the desired behavior is frictionless and the undesired behavior has a lot of friction. They wear you down.
LP: You’re emphatic about the many good things that can come from sharing data that do not threaten individuals. You rest your case on what economists call the “non-rivalrous” character of many forms of data: that one person’s use of data does not necessarily detract at all from other good uses of the data by others. You note how big data firms, though, often strive to keep their data private in ways that prevent society from using it for our collective benefit. Can you walk us through your argument?
MS: This can happen on several different levels. On one level, imagine all the insights across many different disciplines that could be gleaned from FB data. If the data were shared with multiple universities, researchers could glean many insights into human psychology, political philosophy, health, and so on. Likewise, the data from wearables could also be a game-changer in health, giving us better predictors of disease or better identifiers of things to avoid. Imagine all the medical breakthroughs if researchers had access to this data.
On another level, the government can lower the time and cost to access this data. Consider all the data being mined on government websites, like the Bureau of Labor Statistics. It goes back to John Stuart Mill’s insight that one of the functions of government is to collect data from all different sources, aggregate it, and then allow its dissemination. What he grasped is the non-rivalrous nature of data, and how data can help inform innovation, help inform democracy and provide other beneficial insights.
So when a few powerful firms hoard personal data, they capture some of its value. But a lot of potential value is left untapped. This is particularly problematic when innovations in deep learning for AI require large data sets. To develop this deep learning technology, you need access to the raw ingredients. But the ones who possess these large data sets give them out selectively, to the institutions and for the research purposes that they choose. It leads to the creation of data “haves” and “have nots.” A data-opoly can also affect the path of innovation.
Once you see the data hoarding, you see that a lot of value to society is left on the table.
LP: So with data-opolies, the socially useful things that might come from personal data collection are being blocked while the socially harmful things are being pursued?
MS: Yes. But the fact that data is non-rivalrous doesn’t necessarily mean that we should then give the data to everyone who can extract value from it. As the book discusses, many can derive value from your geolocation data, including stalkers and the government in surveilling its people. The fact that they derive value does not mean society overall derives value from that use. The Supreme Court held in Carpenter v. United States that the government needs to get a search warrant supported by probable cause before it can access our geolocation data. But the Trump administration said, wait, why do we need a warrant when we can just buy geolocation data through commercial databases that map our movements every day through our cellphones? So they actually bought geolocation data to identify and locate those people who were in this country illegally.
Once the government accesses our geolocation data through commercial sources, it can put that data to different uses. Think about how this data could be used in connection with abortion clinics. Roe v. Wade was built on the idea that the Constitution protects privacy, which came out of Griswold v. Connecticut, where the Court formulated a right of privacy to enable married couples to use birth control. Now some of the justices believe that the Constitution really says nothing about privacy and that there’s no fundamental, inalienable right to it. If that’s the case, the concerns are great.
LP: Your book is critically appreciative of the recent California and European laws on data privacy. What do you think is good in them and what do you think is not helpful?
MS: The California Privacy Rights Act of 2020 was definitely an advance over the 2018 statute, but it still doesn’t get us all the way there.
One problem is that the law allows customers to opt out of what’s called “cross-context behavioral advertising.” You can say, “I don’t want to have a cookie that then tracks me as I go across websites.” But it doesn’t prevent the data-opolies or any platform from collecting and using first-party data for behavioral advertising unless it’s considered sensitive personal information. So FB can continue to collect information about us when we’re on its social network.
And it’s actually going to tilt the playing field even more toward the data-opolies, because now the smaller players need to rely on tracking across multiple websites and on data brokers to collect information, since they don’t have that much first-party data (data they collect directly).
Let’s take an example. The New York Times is going to have good data about its readers when they’re reading an article online. But without third-party trackers, they’re not going to have that much data about what the readers are doing after they’ve read it. They don’t know where the readers went: what video they watched, what other websites they went to.
As we spend more time within the data-opolies’ ecosystems, these companies are going to have more information about our behavior. Paradoxically, opting out of cross-context behavioral advertising is going to benefit the more powerful players who collect more first-party data – and it’s not just any first-party data, it’s the first-party data that can help them better manipulate our behavior.
So the case for the book is that if we really want to get things right, if we want to readjust and regain our privacy, our autonomy, and our democracy, then we can’t just rely on existing competition policy tools. We can’t solely rely on many of the proposals from Europe or other jurisdictions. They’re necessary but they’re not sufficient. To right the ship, we have to align the privacy, competition, and consumer protection policies. There are going to be times when privacy and competition will conflict. It’s unavoidable but we can minimize that potential conflict by first harmonizing the policies. One way to do it is to make sure that the competition we get is a healthy form of competition that benefits us rather than exploits us. In order to do that, it’s really about going after behavioral advertising. If you want to correct this problem you need to address it. None of the policy proposals to date have really taken on behavioral advertising and the perverse incentives it creates.
Good piece, and I agree with you, Yves, on how badly these algorithms do. Plus if you use a good ad blocker you see very few of them.
But since these tech giants hold immense political power, there’s also a danger in the shitty algorithms being used to make awful decisions, or to nudge everyone in a certain direction (in this case nudging with a sledgehammer).
For instance, Google is used in the classroom and emotional regulation software could force kids to adopt certain behaviors to compensate for shitty algos. “You look sad today,” the computer tells a child who is having her first period. Cue a bureaucratic nightmare. Not so different from compensating for a shitty teacher but you get the idea.
Or, for instance, not being allowed to enter a bank because a face algo has red flagged you as dangerous, when you were hosed out of your money and want recompense.
Unknown and strange outcomes from this.
Isn’t the real issue here about advertising itself? Back in the last century this seemed to be much more of an issue and there were significant controversies about advertising to children via Saturday morning television and about cigarette commercials. Vance Packard wrote The Hidden Persuaders and ad agencies were attacked for their use of psychological techniques to manipulate buyers. Cut to now, and when it comes to advertising we are very boiled frogs indeed.
But for some of us geezers the resistance to commerce propaganda is still a real thing and we go out of our way to block ads, which in the computer world is not that hard. To me a much greater threat from these big tech firms is their new zeal for censoring non-commercial information. They are using all that advertising revenue to give themselves a power that is a threat to a free society.
I also agree that this idea of targeted ads is largely a myth promoted by big tech companies to justify exorbitant advertising rates. But they have convinced the C-suite rubes that it’s true, and now these tech firms, which are essentially just advertising agencies, are the biggest companies on the planet instead of those that actually make things. By a long shot. A quick search shows General Motors’ market cap to be about $53 billion while Google comes in at almost $1.5 trillion. That these tech companies have gotten so big so quickly really is the greatest grift in history.
You noted the political power they have. I’m not as worried about Google manipulating kids in the classroom as I am about them simply getting kids hooked on their products from a very young age. My kid has been forced to use Google Docs, iPads, etc. for many years now because according to the school system, they get such a great deal for using these products and services, they’d be foolish not to use them. They’ve certainly managed to convince school boards across the country that kids won’t learn anything if they still use a chalkboard instead of a “smart” board. And all of these products as we all know are designed to be addictive, especially in the hands of young people. It’s nearly impossible to tell a kid they shouldn’t be on a device 24/7 when the school has them on one all day long.
Once they’ve used their political clout to get into the schools, that’s a huge trove of data they’re hoovering up. And while using a kid’s 5th grade essay done on Google Docs to target ads at them later may not work, they are still gathering and storing all this data. We hear all the time about how much energy is wasted on ridiculous crypto scams but not so much about how much energy is being produced and wasted just so these tech giants can store every keystroke anyone ever makes.
This data fetish we have as a society needs to end. Much of it is gathered for completely frivolous purposes. We have algorithms making decisions based on databases filled with crap and no one seems to remember any more the old mantra about “garbage in, garbage out”. I’m not worried about being manipulated by ads – I’m worried about these tech giants frying the planet so the scam artists that run them can score a few billion more.
I have been arguing for a Butlerian Jihad for quite some time….
Microsoft’s MO was, “Embrace, Extend, Extinguish.”
Any number of small, useful utilities by independent companies have become part of the operating system.
Are you seriously arguing that Microsoft was, or ought to have been, a creator and guarantor of property rights for ISVs? “If your business depends on a platform…”
Thankfully, freeware replicated the functionality of many such accessories before OS vendors built them in, solving the property dispute by decommodification. On the other hand, Android and “app stores” have arrived to grant that wish for a low-friction market of small, liberalized capabilities.
Just for fun, let me add:
What does 4X mean in gaming?
Explore, Expand, Exploit, Exterminate
4X (abbreviation of Explore, Expand, Exploit, Exterminate) is a subgenre of strategy-based computer and board games, and includes both turn-based and real-time strategy titles. The gameplay involves building an empire.
4X – Wikipedia
Big Tech is a more advanced parasite at exploiting the ways marketing turns our minds into French onion dip.
Had this talk with my dad about advertising on the TeeVee. His opinion is he’s too smart to be affected by TeeVee commercials. He even goes on to say “nobody falls for that stuff.”
And so he’s eating a sleeve of Pringles at the time. And I say, “Dad, where do you think you got the fever for the flavor of a Pringles? How did you first find out about Pringles? Think back to when you first ate one. Did you eat ’em at a friend’s house? Did you ask your mom to get ’em at the grocery store?”
And of course, he has no response for this question. The general form of the question boils down to how would you know about product X you are using now if you (or whomever you learned about X from) hadn’t seen an ad for it in the first place?
But he’s “too smart” to fall for it.
The cognitive biases we all have – such as the illusory truth effect – work on us precisely because it doesn’t matter how smart one is. What matters is repetition. My dad eats the entire can of Pringles in one sitting in most cases, and because he “likes them,” he’s not able to question the harm being done.
“Because I like it” is exactly why people pass on these rubbish memes and quotes on Facebook. Most people don’t care if any of it is real “because I like it.”
What we’ve learned is they aren’t going to “get us” with the 1984 worst-thing-you-can-think-of-forever boot. Instead, they’re going to get us with the Brave New World Soma vacay bit. It isn’t that Pringles are the worst thing in the world (dad did lose 30 pounds over the last year, so he does have some control over his diet), it’s that Pringles are just one manifestation of a problem that’s legion.
https://medium.com/grim-tidings/american-infantilization-and-the-age-of-reason-2da7faf92c34
The word “consumer” is virtually synonymous with “adult baby.” After all, the infant, too, is a professional consumer, since that’s all the infant does: she eats and expels waste.
The impact on the political left is a reversion to a sense of childhood helplessness.
That may be generally true, but all the Internet ads I get are in categories that I don’t buy at all. Diabetes? Men’s T-shirts? Improved versions of MREs? Oh, and I forgot the ones for prostate treatments. On TV, the ads are for anti-tardive dyskinesia, anti-eczema, anti-thyroid eye disease, cancer meds, add-on antidepressants, oh, and the Medicare Advantage ads… the only food ads are for Spam, pizza, Taco Bell, Arby’s, Daily Harvest and fast food chicken, none of which I have or would ever eat. Even the one supposedly healthy food ad I do get on the Internet, for a meal replacement called Ka’chava, I would never eat because healthy meal replacements are an oxymoron. They are highly processed foods and highly processed foods are not good for you.
So your theory of susceptibility does not apply when the ads are for things I would never evah consider buying.
Actually, Yves, you are probably not their customer of interest. Their customers of interest are the purchasers of ads, who actually pay them. They likely design all their “insights” to appeal to these ad buyers. To the designers of algorithms, you are just a tiny bit of aggregate in the statistical demo showing that people over age 60 are more likely to have diabetes and other chronic ailments than persons under 60. Or that people living in a Southern state are more likely to favor processed food. All these algorithms have to do to prevail is to slightly increase the aggregate level of interest (among tens of millions of people) to make it a datapoint for their real customers of interest.
I think you are missing the point. They make clear they have no idea who I am. The Ka’chava ads are targeting what IMDoc calls the soy man bun segment (except it does have some women): young to at most early middle aged fitness buffs, lotta tattoos, but they still look middle to upper income. The 90-days-of-better-than-MRE ads are targeting male survivalists and wannabes, young to middle aged men worried about themselves and their families. Aside from Medicare Advantage ads, none of the ads I get are targeting the over 60 contingent. Some of the drug ads interestingly are targeting children, or more accurately, parents of children. The TV food ads are pretty much all mass market fast food except for the occasional home delivered “healthy” upscale alternative. Those look like no targeting at all.
Advertising is effective
It’s stochastic, but it “works”
I did a large study many years ago for a major toy company and the relationship between ads and sales was unbelievably strong. Concave response curve, so diminishing returns; thus the main challenge is optimizing the spend (see the sketch after this comment)
You may be right that you get poorly targeted ads.
But over the population receiving “impressions”, behavior is modified. T-stats are huge
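To make the shape of that claim concrete, here is a minimal sketch of a diminishing-returns ad-response curve and the spend optimization it implies. The logarithmic form and every coefficient in it are illustrative assumptions, not figures from the study the commenter describes.

```python
import numpy as np

# Hypothetical concave response curve: incremental sales revenue as a
# function of ad spend. The log form and the coefficients a, b are
# assumptions for illustration, not estimates from any real study.
a, b = 500_000.0, 50_000.0  # scale and saturation parameters (USD)

def revenue(spend):
    # Concave in spend: each extra ad dollar buys less than the last.
    return a * np.log1p(spend / b)

def marginal_return(spend):
    # d(revenue)/d(spend) = a / (b + spend): falls toward zero as spend grows.
    return a / (b + spend)

# Optimizing the spend: keep buying ads while a marginal dollar returns more
# than a dollar, i.e. stop where marginal_return(spend) = 1. For this curve
# that has the closed form a / (b + s) = 1  =>  s = a - b.
optimal = a - b
for s in (10_000, optimal, 2_000_000):
    print(f"spend ${s:>9,.0f}: revenue ${revenue(s):>9,.0f}, "
          f"marginal return {marginal_return(s):.2f}")
```

The concavity, not the exact functional form, is what matters here: it is why “optimizing the spend,” rather than proving that ads work at all, is the hard part.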
Hi Yves… furzy here… for years now, I mostly get heavy metal tooling & machinery ads on the ‘net… the algos think I’m a structural engineer!! Have no idea why, but I do ponder this puzzle from time to time…
for a couple of years now I’ve been collecting screenshots of absolutely wild ads I’m fed online (including on NC). It’s just absolutely unhinged, comical stuff, probably even worse than early-WWW/email spam. But spam is exactly what it reminds me of.
I’m sitting here looking around my little office — then zooming out mentally to think about what’s in the rest of my place right out through to the kitchen/mudroom…I can honestly report that there aren’t too many articles present in my domicile which were suggested to me through commercial advertising. Maybe a pair of Levi’s? Carlsbergs in the fridge?? (Possibly the best lager in the world)…
I’m betting I’m like a lot of slobs who bridle at being targeted & funneled toward goods that I don’t need/want, be it through media or strategic placements on shelves at the store or in films. In fact, it’s a turn off. I also keep my mental real estate priced stupidly high so as to keep the brain folds as free of extraneous noise as possible. It’s gotten so that I actually resent the idea of ads, even in the very very rare instances when something makes the antennae twitch. In 25 years, I think I’ve clicked on an ad *maybe* three times.
Here’s an example from the literature. The cognitive bias called the “illusory truth effect” is independent of intelligence and of being an “analytical” rather than emotional person.
https://digest.bps.org.uk/2019/06/26/higher-intelligence-and-an-analytical-thinking-style-offer-no-protection-against-the-illusory-truth-effect-our-tendency-to-believe-repeated-claims-are-more-likely-to-be-true/
What this means is the average Blue Team person isn’t necessarily “stoopid” for believing in Russiagate. Rather, they’ve heard the terms “Putin’s puppet” and “all 17 intelligence agencies agree” and “Russia!Russia!Russia” so many times, their brains ceased questioning the validity of the information.
So even if I do my best not to “believe” in advertisements on the TeeVee or Fakebook or anywhere else, there is still an effect from said marketing on my beliefs, no matter how smart I am. The premise of “targeted ads” is to tell me what I want to hear, but if I want to hear something, I seek it out as well. The two are inextricably linked.
My dad wants to be “smart” so he reads the NYTimes, like all “smart” people do. He never reads Breitbart, ’cause that’s for the crazy folks, right? The Russiagate narrative is pushed on him there, and he decides only Trumpets are against Russiagate, and he’s not a Trumpet. He doesn’t believe the “illusory truth effect” applies to himself; it only works on those “Breitbart” people.
In truth, all that’s happened is the “Russiagate” narrative was marketed in the NYTimes community. Amplified by Fakebook and Google algorithms. That’s how they get you… even the smart people.
And the narrative we are sold is that nothing can be done about neoliberalism and neoconservatism; that’s how the left-wing crowd ends up with learned helplessness.
“What can you do about it?” is what I hear when I bring up data surveillance or endless war or predatory capitalism to the average person. We gave up, so time to either doomscroll or watch cat videos.
IT and AI are complicated machine instruction sets. However, “complicated” doesn’t mean it can parse the “complex.” Humans are complex. (Yes, that does sound a bit… um…) / ;)
If you think fizer fibs, and if you think this war’s a real disaster, what do the algos send your way? Only the upbeat stuff that friends-who-are-like-you post? This is akin to the discourage-voting tactic. People who are like you are spreading out in a diaspora, off to myriad quiet enclaves, speaking-up-wise. But is that because we have not let prospective candidates know we want some reason on both issues?
I suspect that this type of repetitive broadcasting on TV is probably more effective than the online narrow-casting of ads. If you fill the airwaves with Pringles ads, you are probably gonna hit some people who, when they go to the store and see the prominently placed Pringles tubes, are going to buy some, and some of those are going to get hooked on Pringles. Similar with rare purchases like a fridge or a car: by the time you get to the point of buying one you will have seen lots of ads about different brands, getting a general impression that a brand is safe or trustworthy or cool.
With online narrow-casting the advertiser runs the risk of hitting only the wrong segments (plenty of examples in this thread), and with rare purchases like a fridge or a car, of only starting to deliver ads after you have already bought the product, when it will be a decade or more until the next purchase.
Here’s an example of when I realized how closely we’re being watched. My daughter’s 9 now, and when she was an infant, say 8 years ago, I was looking for baby swings on Amazon. Didn’t order one. Just browsing…
I go on Fakebook later that day, and damn if there aren’t ads for baby swings over there in the FB margin. One of ’em is exactly the model I spent time contemplating on Amazon. It was really creepy.
So what this tells me is that, through cookies (or some other data collection method I’m not aware of), the Fakebook talks to the Amazon about what I’ve been shopping for. (See the sketch at the end of this comment.)
And so I got upset and deleted my Facebook account when the Cambridge Analytica scandal hit.
Even so, these people still follow us around on the internet everywhere we go. Just because I’m not getting Amazon baby swing ads on Fakebook anymore (because I’m not on Fakebook) doesn’t mean Fakebook isn’t still writing a book on me and selling it to whoever is willing to buy these data.
They’re still tracking me, and you too, regardless of whether or not you have an Amazon or Fakebook account.
Go manually clear the cookies in your internet browser, one at a time. You’ll find tracking cookies from all sorts of data mining corporations you don’t intentionally deal with, such as Fakebook and Amazon. I’m no computer guru or anything, but I also recognize that the idea that I’m unaffected and above all this stuff because “I’m too smart” is a fallacy.
https://www.consumerreports.org/privacy/how-facebook-tracks-you-even-when-youre-not-on-facebook-a7977954071/
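On the mechanics the commenter is describing: the usual plumbing is a third-party tracking pixel or script, a resource served from the ad network’s own domain and embedded on many unrelated sites, so the same third-party cookie rides along on all of them. Below is a toy Python simulation of that flow; every name in it is hypothetical, and it compresses what real pixels, SDKs, and server-side identity matching actually do.

```python
import uuid

class Tracker:
    """Stands in for a third-party ad network whose pixel is embedded on many sites."""
    def __init__(self):
        self.profiles = {}  # cookie_id -> list of observed (site, event) pairs

    def embed_hit(self, browser_cookies, site, event):
        # The pixel is served from the tracker's own domain, so the same
        # third-party cookie is sent along on every site that embeds it.
        cid = browser_cookies.setdefault("tracker_id", str(uuid.uuid4()))
        self.profiles.setdefault(cid, []).append((site, event))

    def pick_ad(self, browser_cookies):
        # On any member site, look up the profile behind this cookie
        # and retarget the most recent shopping event, if there is one.
        cid = browser_cookies.get("tracker_id")
        shopped = [ev for (site, ev) in self.profiles.get(cid, []) if site == "shopsite"]
        return f"retargeted ad: {shopped[-1]}" if shopped else "generic ad"

tracker = Tracker()
my_browser = {}  # one cookie jar, shared across every site I visit

# Browsing baby swings on the shopping site fires the embedded pixel...
tracker.embed_hit(my_browser, "shopsite", "viewed baby swing, model X")
# ...and the social site, embedding the same pixel, can retarget that visit:
print(tracker.pick_ad(my_browser))  # -> retargeted ad: viewed baby swing, model X
```

Note what deleting an account does not do in this model: as long as the pixel is embedded on the sites you visit, the profile keeps growing, which is the point of the Consumer Reports piece linked above.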
not my wheelhouse so grain of salt etc, but I don’t think there has been innovative work in the algos and how they function in a long time (well over a decade). I say this because the power of the ads is due to the absolute dominance of a handful of players, specifically Google AdSense and then the platform-specific ads in Facebook’s and Amazon’s ecosystems. They don’t need to innovate in terms of capturing more eyes because they don’t need to capture more market share. Since Google has a de facto monopoly on search and they insert ads, AdSense has already won; they don’t have to improve anything but the rate at which they get paid. This absolute dominance hides a lot of other sins that more accurately explain why online marketing works the way it does (like your browsing fingerprint advertising your search history to the advertisers regardless of your settings; see the toy sketch after this comment) and obscures the fact that it was built and set up by the people who became the monopolists, so they could make money.
The stuff about algos understanding people’s thoughts is just soft advertising for technofetishists. A good nurse would probably have a better understanding of a paralyzed patient’s thoughts and needs after time learning their behaviors. I realize the motivation here is not having to pay a nurse for a portion of the same service they provide, and instead giving that money to the technology monopolist, but people who think the algos are magic get mad when you point that stuff out.
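Since browser fingerprinting comes up above, here is a toy illustration of the idea, not any real tracker’s code. Each attribute is individually common, but the combination is close to unique, and unlike a cookie it survives clearing your browser data.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    # Canonicalize the attributes and hash them into a stable identifier.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Made-up attribute values; real scripts also hash canvas rendering,
# installed fonts, audio-stack quirks, GPU strings, and more.
browser = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/101.0",
    "screen": "2560x1440x24",
    "timezone": "America/New_York",
    "language": "en-US",
    "do_not_track": "1",  # ironically, rarer settings make you more distinctive
}

# The same browser yields the same ID on every site running the script.
print(fingerprint(browser))
```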
Bad information or algorithms have been used “authoritatively” to much social damage for a long, long time. For instance, in 17th, 18th, and 19th century war or revolution or political action, it didn’t matter what an individual thought or had done; individuality rarely mattered; people were just aggregated into a group or class and treated accordingly. You had to be a rare, powerful entity to be treated as an individual. If you were part of a group or class deemed to be an enemy, you were de-humanized, made “former people”. Happened all the time. Today, only the modality is new; the aspiring sovereign class now includes computer programmers. Maybe their obvious glaring oversights and simplifying assumptions will undermine their social utility. But I doubt it. They are “cost savers” to corporations. Like those awful automated answering systems, it won’t go away b/c it’s cheap and easy for the creators. Downstream consequences, time, and frustrations are problems borne by other, less powerful people.
“Treated as an individual” sounds like neoliberal Whig history or alt-right myth-making. As just one counterexample, witch hunts unquestionably tested the individual against an arbitrary moral standard and penalized those who failed. Calvinist economics created many masters of the house and many subordinates beneath them. The memes of absolute private property and possessive individualism were historical contingencies, not laws of motion. We can quit them anytime.
I believe the ads are probably a variation of the 419 scam, whereby scammers purposely send obvious and outrageous tales of woe, as the purpose is to attract only those persons gullible and foolish enough to respond. It is a means to separate the wheat from the chaff, so to speak. Only the people most likely to be conned will respond, and those are the people the scammers want to contact. I would think that this is something similar. An interesting experiment would be to respond to one of the ads and see if the nature of the ads you see changes.
Parramore’s opener is alarming because of what she thinks those people understand (really, think about the difference between knowing that something happened and understanding it) and because they believe their own hype.
They can plan lots of things. They can advertise lots of just-around-the-corner “breakthroughs” (advertising written, no doubt, to attract billion-dollar defense contracts).
Believe it or not, they’re financially hurting right now, and attracting new big-bucks contracts is important. From Wolf Street (audio):
https://wolfstreet.com/2022/05/29/the-wolf-street-report-tech-bust-takes-next-step-layoffs-hiring-freezes/
Maybe Herbert was right and we just need a Butlerian Jihad.
There’s obviously a bad algo behind the ad feed to my house.
Almost every single commercial has black people in it.
We are all white.
Stop ’em before they get started. Use a VPN. The one I have allows the choice of numerous countries, so I get ads in Romanian; quite humorous that they are wasting their money. It also strips the ISP of the ability to mine your data and profile you.
In connection with privacy and security, I know a number of folks of an anti-Big Tech/WEF/etc. persuasion and none of them use a VPN or have even heard of one.
Protect yourself because the vast Majority is not doing so and what they want, with individual rights waning, is what you will get. Majority opinion is the default; one size fits all, just like Injections.