Yves here. I must confess to being late to this particular policy debate, and therefore being gobsmacked by the Orwellian use of language. “Data competition”? “Democratizing data”?
As you’ll see from this piece, at least some have awoken to the idea that the big tech players collect huge amounts of information about our activities, and worse, it’s close to a winner-take-all game. The ones who hoover up a lot of information can afford to do more analysis simply by virtue of their greater size, and have much more, and therefore almost certainly richer and deeper, data to mine.
The utterly disingenuous proposal of some of the tech kahunas is that, rather than have restrictions placed on their information-gathering and use, they instead be required to share more. It’s not hard to see that this can be construed as a bribe to states that don’t have the surveillance apparatus of an NSA, for whom what Google or even a Facebook has on their citizens would put them way ahead of where they are now.
Of course, a non-trivial problem is the users themselves. They may well be bothered when they see how much Google and Apple or even Facebook knows about them. But they aren’t willing to make more effort in the name of privacy (for instance, punch in their current address to find out what businesses are near rather than have their device report their location) or even make a stink about snooping to elected officials.
By Maurice Stucke, a Professor of Law at the University of Tennessee. Originally published at the Institute for New Economic Thinking website
With the bustle of policy proposals and antitrust enforcement, it looks like the tech giants Google, Apple, Meta, and Amazon will finally be reined in. The New York Times, for example, recently heralded Europe’s Digital Markets Act (DMA) as “the most sweeping legislation to regulate tech since a European privacy law was passed in 2018.” As Thierry Breton, one of the top digital officials in the European Commission, said in the article, “We are putting an end to the so-called Wild West dominating our information space. A new framework that can become a reference for democracies worldwide.”
So, will the DMA, along with all the other policies proposed in the United States, Europe, Australia, and Asia make the digital economy more contestable? Perhaps. But will they promote our privacy, autonomy, and well-being? Not necessarily, as my latest book Breaking Away: How to Regain Control Over Our Data, Privacy, and Autonomy explores.
Today a handful of powerful tech firms – or data-opolies – hoard our personal data. We lose out in several significant ways. For example, our privacy and autonomy are threatened when the data-opolies steer the path of innovation toward their interests, not ours (such as research on artificial neural networks that can better predict and manipulate our behavior). Deep learning algorithms currently require lots of data, which only a few firms possess. A data divide, where only some have the large datasets and computing power needed to train algorithms, can thus lead to an AI divide, and in turn to an innovation divide. As one 2020 research paper found: “AI is increasingly being shaped by a few actors, and these actors are mostly affiliated with either large technology firms or elite universities.” The “haves” are the data-opolies, with their large datasets, and the top-ranked universities with which they collaborate; the “have nots” are the remaining universities and everyone else. This divide is not due to industriousness. Instead, it is attributable, in part, to whether a university has access to the large tech firms’ voluminous datasets and computing power. Without “democratizing” these datasets by providing a “national research cloud,” the authors warn, our innovations and research will be shaped by a handful of powerful tech firms and the elite universities they happen to support.
When data is non-rivalrous (that is, when use by one party does not reduce its supply), many more firms can glean insights from the data without affecting its value. As the European Commission notes, most data are either unused or concentrated in the hands of a few relatively large companies.
Consequently, recent policies, such as Europe’s DMA and Data Act and the U.S.’s American Innovation and Choice Online Act, seek to improve interoperability and data portability and reduce the data-opolies’ ability to hoard data. By democratizing the data, many more firms and non-profit organizations can glean insights and derive value from it.
Let us assume that data sharing can increase the value for the recipients. Critical here is asking how we define value, and value for whom. Suppose one’s geolocation data is non-rivalrous. Its value does not diminish if used for multiple, non-competing purposes:
- Apple could use geolocation data to track the user’s lost iPhone.
- The navigation app could use the iPhone’s location for traffic conditions.
- The health department could use the geolocation data for contact tracing (to assess whether the user came into contact with someone with COVID-19).
- The police could use the data for surveillance.
- The behavioral advertiser could use the geolocation data to profile the individual, influence her consumption, and assess the advertisement’s success.
- The stalker could use the geolocation data to terrorize the user.
Although each could derive value from the geolocation data, the individual and society would not necessarily benefit from all of these uses. Take surveillance. In a 2019 survey, over 70% of Americans were not convinced that they benefited from this level of tracking and data collection.
Over 80% of Americans in the 2019 survey and over half of Europeans in a 2016 survey were concerned about the amount of data collected for behavioral advertising. Even if the government, behavioral advertisers, and stalkers derive value from our geolocation data, the welfare-optimizing solution is not necessarily to share the data with them and anyone else who derives value from it.
Nor is the welfare-optimizing solution, as Breaking Away explores, to encourage competition for one’s data. The fact that personal data is non-rivalrous does not necessarily point to the optimal policy outcome. It does not suggest that data should be priced at zero. Indeed, “free” granular personal datasets can make us worse off.
In looking at the proposals to date, policymakers and scholars have not fully addressed three fundamental issues:
- First, will more competition necessarily promote our privacy and well-being?
- Second, who owns the personal data, and is that even the right question?
- Third, what are the policy implications if personal data is non-rivalrous?
As for the first question, the belief is that we just need more competition. Although Google’s and Meta’s business models differ from Amazon’s, which differs from Apple’s, all four companies have been accused of abusing their dominant positions using similar tactics, and all four derive substantial revenues from behavioral advertising, either directly or, in Apple’s case, indirectly.
So, the cure is more competition. But as Breaking Away explores, more competition will not help when the competition itself is toxic. Here rivals compete to exploit us by discovering better ways to addict us, degrade our privacy, manipulate our behavior, and capture the surplus.
As for the second question, there has been a long debate about whether to frame privacy as a fundamental, inalienable right or in terms of market-based solutions (relying on property, contract, or licensing principles). Some argue for laws that provide us with an ownership interest in our data. Others argue for ramping up California’s privacy law, which the real estate developer Alastair Mactaggart spearheaded, or adopting regulations similar to Europe’s General Data Protection Regulation. But as my book explains, we should reorient the debate from “Who owns the data?” to “How can we better control our data, privacy, and autonomy?” Easy labels do not provide ready answers. Providing individuals with an ownership interest in their data doesn’t address the privacy and antitrust risks posed by the data-opolies; nor will it give individuals greater control over their data and autonomy. Even if we view privacy as a fundamental human right and rely on well-recognized data minimization principles, data-opolies will still game the system. To illustrate, the book explores the significant shortcomings of the California Consumer Privacy Act of 2018 and Europe’s GDPR in curbing the data-opolies’ privacy and competition violations.
For the third question, policymakers currently propose a win-win: promote both privacy and competition. The thinking is that with more competition, privacy and well-being will be restored. But that is true only when firms compete to protect privacy. In crucial digital markets, where the prevailing business model relies on behavioral advertising, privacy and competition often conflict. Policymakers, as a result, can fall into several traps, such as opting for greater competition whenever in doubt.
Thus, we are left with a market failure where the traditional policy responses—define ownership interests, lower transaction costs, and rely on competition—will not necessarily work. Wresting the data out of the data-opolies’ hands won’t work either, since other firms will simply use the data to find better ways to sustain our attention and manipulate our behavior (consider TikTok). Instead, we need new policy tools to tackle the myriad risks posed by these data-opolies and the toxic competition caused by behavioral advertising.
The good news is that we can fix these problems. But it requires more than what the DMA and other policies currently offer. It requires policymakers to properly align privacy, consumer protection, and competition policies, so that the ensuing competition is not about us (where we are the product) but actually for us (improving our privacy, autonomy, and well-being).
I think it is also important that these data sets only represent a small part of human endeavor. While you may be able to force a parameter with big data or predict the likely outcome of a person viewing an ad, the data is meaningless to most people in a deeper sense. If it has any relationship at all to the quality of an average person’s life, it would be an inverse relationship.
When we think about the fact that enormous resources are being poured into the development of these technologies, it’s fairly clear that they are not being done for human benefit. Instead, they are being developed to gain control. The question is why.
There’s little prospect of any of this data being used for us, as the author hopes. If there were any intent to use this data for human well-being, it would not be collected as secretly and involuntarily as it often is. Public input would be a part of the process. The fact that public input is not a part of the process, and that data is often collected surreptitiously should tell us that the Collectors do not have our well-being foremost in their minds.
Collection is not analysis, meaning, or intellect. It presents itself as more profound than it is. It brings greater computing power to mostly banal methods, but fails to create meaning.
Stronger privacy protections are needed as a matter of urgency.
I think this is a “that’s not even wrong” analysis, I’m sorry to say.
The DMA prohibits commercialization of user data; it’s not about data competition, it’s about making sure that (for example) when Amazon sells you a widget on behalf of a vendor, it doesn’t data-mine that transaction to decide whether to clone that vendor’s widget and come after your market.
More substantially, the DMA exists under the framework of the GDPR, the most comprehensive (and, sadly, underenforced) data-protection law on Earth (the EU Commission, in the persons of Breton and Vestager, pledged to increase enforcement at the Charles River tech/antitrust event in Brussels at the end of last month).
The US app store bill isn’t about making Apple share data with its rivals – it’s about letting users choose rival app stores, all of which will be bound by the privacy framework in the bill, which, while not as strong as the GDPR, is nevertheless designed *not* to create “data competition.”
Fighting monopolies and preserving privacy are fully compatible:
https://www.eff.org/deeplinks/2021/04/fighting-floc-and-fighting-monopoly-are-fully-compatible
That’s true so long as you’re fighting monopolies to give the people more control over their lives, rather than to produce “competition” as an end in and of itself. As I wrote in that essay, when the UK Competition and Markets Authority proposes giving every Briton a lifelong ad tracking ID to increase competition within the ad-tech sector, that’s competition for its own sake (a competition to see who can violate your human rights most efficiently and profitably).
But that’s not what’s going on in either of these legislative proposals.
These issues are much larger and older than people realize. The mail, telegraph and telephone monopolies were the first examples of this sort of “public-private partnership” game of patty-cake.
A consumer has no enforceable rights unless the consumer has a complete set of the stored data and a complete set of information about how that data is used by the company or government in question. A true privacy rule should be “I give you data to use in a transaction. If you want to save that data or use it for any purpose, show me the price list. How much are you willing to pay me?”
And this will not happen because all governments want access to the data without the knowledge of the consumer/citizen/target. When we talk about “the big companies”, remember, they are at the middle of the food chain; governments are at the top of the food chain. And to further complicate the issue, most of us want the government to snoop, sometimes.
During the US Civil War, Western Union (a government-issued monopoly) started saving a copy of all messages at the behest of the US Government. And at least as early as WWI the government-issued monopolies for local phone service (usually local Bells) and long distance service (AT&T Long Lines) started retaining all call data via a nifty little device called a pen register. They saved the origin and destination numbers plus the time and date of the beginning and end of the call, all of which the government could access without a court order because that information was “Company property”.
This was followed by the infamous “pen register” court rulings, saying a company could get a court order to see those records and the order could be “sealed” (made invisible). That fell apart in the 1970s when Procter and Gamble got the records for the whole city of Cincinnati and used them to identify and fire an employee/leaker. He found out and sued at the same time that the NYT and WP discovered they could be targets too.
Finally, the Europeans got a hint of this early on. The postal monopolies of the European empires opened mail at will from 1790-1918. And the end of WWI allowed many people to confirm the suspicion that their mail was being opened and read. Then in 1940, when the Nazis occupied the Netherlands, they simply went to the ever-efficient Dutch phone companies, took all the pen register records and identified all the Jews and anti-Nazi refugees by seeing who had what number and tracing whom they had called.
The issues of “private vs public records” and “Who can see these records” are not new. What has changed is that instead of 1% of the average person’s life being in those records (which could only be searched by hand), now more than 50% of a person’s life is in those records and they can be indexed and sorted by machines, and the records from different organizations can be incorporated into one vast data store by relatively inexpensive machines.
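To make that last point concrete, here is a minimal sketch of the kind of cross-organization record linkage described above, joining invented call metadata and retail records on a shared identifier. All tables, fields, and data here are hypothetical, chosen only to illustrate the mechanics:

```python
# Illustrative sketch (hypothetical data): merging records held by two
# unrelated organizations on a shared identifier, here a phone number.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# "Pen register"-style call metadata from a phone carrier.
cur.execute("CREATE TABLE calls (phone TEXT, callee TEXT, started TEXT, ended TEXT)")
cur.executemany("INSERT INTO calls VALUES (?, ?, ?, ?)", [
    ("555-0101", "555-0199", "2022-05-01 09:00", "2022-05-01 09:04"),
    ("555-0101", "555-0150", "2022-05-02 21:30", "2022-05-02 21:45"),
])

# Purchase records from a retailer, keyed by the same phone number.
cur.execute("CREATE TABLE purchases (phone TEXT, item TEXT, day TEXT)")
cur.executemany("INSERT INTO purchases VALUES (?, ?, ?)", [
    ("555-0101", "prepaid SIM card", "2022-05-01"),
])

# One join yields a combined profile neither organization held alone.
for row in cur.execute(
    "SELECT c.phone, c.callee, c.started, p.item "
    "FROM calls AS c JOIN purchases AS p ON c.phone = p.phone"
):
    print(row)
```

A commodity laptop can run this sort of join across records on millions of people in seconds, which is exactly what separates today’s data stores from the hand-searched paper files described above.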
Informative post. The typical mantra is that data is being collected to deliver more targeted ads to consumers. This is clearly not the reason. Consumers are notoriously fickle, so data collected for targeted ads would only be relevant for a very short time. On the other hand, collected data is stored indefinitely in data lakes for purposes that clearly are not related to consumption or ads.
One of my concerns is that if you have enough data on every person, you can effectively silence any critic. Even the most law-abiding people have occasionally jay-walked or committed some other trivial offense. There are so many laws on the books that if every iota of information about every person were stored, you could probably charge every single person with something.
There is tremendous potential for stored information to be used to control free speech. This type of collection is inherently anti-democratic.
Of course it’s all for control. Certainly not to represent. Just the fact that clicking okay on a screen could possibly be legally binding says so much.
As a slight aside, I have to wonder if one could even begin to calculate whether our digital profiles, in all their redundancy from cookie to credit to cloud, eat far more useless energy per person than a bitcoin transaction. We should all get a million bucks for saying save the planet by never ever collecting or saving any info on me ever.
Eureka,
Regarding the clicking, that’s the consent theater that’s been discussed before on NC. If you actually had to contest a consent theater click action in court, it probably wouldn’t be legally binding because you did not actually participate in choosing an option. There’s an arrogance behind even posting these notices, though.
I believe the blank check data hoovering today is much, much worse. I understand the pen register cases and such. But iirc there were some trade-offs for these industries being utilities. I’m old enough to recall when you couldn’t be turned down for a telephone. It was a utility. Now, just filling out the documents for a cell phone or phone service gathers up more information than is necessary or reasonable. Yet another gratuitous data grab. As was explained humorously by John Oliver, I believe data isn’t “sold,” it’s passed to a broker, which renders California’s “privacy law” toothless while sounding comforting. How one stops this runaway train, I don’t know. Most people I know don’t care. And therein lies the largest part of the problem.
CCPA has some failings, but permitting resale to a broker isn’t one of them (the big one is the absence of a private right of action).
It’s true that US privacy law is a mess, and the commercial surveillance industry is terrible, but it’s really important to distinguish between things that make that better and things that make it worse.
The ACCESS Act and other tech antitrust laws are *good* for privacy.
Here’s a white paper I co-authored on privacy and its relationship to tech antitrust:
https://www.eff.org/wp/interoperability-and-privacy