Tech Law in 2025: A Look Ahead at AI, Privacy and Social Media Regulation Under the New Trump Administration

Yves here. There are so many hot-button areas where Team Trump is planning big changes, such as immigration, tariffs, and the Ukraine conflict, that many other key areas on the new Administration’s target list are not getting the attention they warrant. As the post below demonstrates, there’s a long list in the tech arena. The one readers are most likely aware of is social media censorship, as implemented by big platforms subject to Section 230. But I was surprised that AI regulation, or rather arguably deregulation, is also on the list. As you will see below, the White House has published an AI Bill of Rights, which looks like a handwave, and an Executive Order, which although detailed, seems to be lacking in teeth ex enforcement action (of which I have yet to see any; hopefully readers can provide specific examples if this impression is inaccurate). Given the rampant implementation of AI despite still too many cases of dodgy results, the idea that there are any (to use a favored Administration buzzword) guardrails is news to me.

By Sylvia Lu, Faculty Fellow and Visiting Assistant Professor of Law, University of Michigan. Originally published at The Conversation

Artificial intelligence harms, problematic social media content, data privacy violations – the issues are the same, but the policymakers and regulators who deal with them are about to change.

As the federal government transitions to a new term under the renewed leadership of Donald Trump, the regulatory landscape for technology in the United States faces a significant shift.

The Trump administration’s stated approach to these issues signals changes. It is likely to move away from the civil rights aspect of Biden administration policy toward an emphasis on innovation and economic competitiveness. While some potential policies would pull back on stringent federal regulations, others suggest new approaches to content moderation and ways of supporting AI-related business practices. They also suggest avenues for state legislation.

I study the intersection of law and technology. Here are the key tech law issues likely to shape the incoming administration’s agenda in 2025.

AI Regulation: Innovation vs. Civil Rights

The rapid evolution of AI technologies has led to an expansion of AI policies and regulatory activities, presenting both opportunities and challenges. The federal government’s approach to AI regulation is likely to undergo notable changes under the incoming Trump administration.

The Biden administration’s AI Bill of Rights and executive order on AI established basic principles and guardrails to protect safety, privacy and civil rights. These included requirements for developers of powerful AI systems to report safety test results, and a mandate for the National Institute of Standards and Technology to create rigorous safety standards. They also required government agencies to use AI in responsible ways.

In contrast to the Biden administration, the Trump administration has signaled a deregulatory direction. The president-elect has stated his intention to repeal Biden’s executive order on AI, citing the need to foster free speech. Trump’s nominee to head the Federal Trade Commission, Andrew Ferguson, has echoed this sentiment, stating his opposition to restrictive AI regulations and to a comprehensive federal AI law.

[Video: AI policy experts discuss likely changes in federal regulation of technology in the Trump administration.]

With limited prospects for federal AI legislation under the Trump administration, states are likely to lead the charge in addressing emerging AI harms. In 2024, at least 45 states introduced AI-related bills. For example, Colorado passed comprehensive legislation to address algorithmic discrimination. In 2025, state lawmakers may either follow Colorado’s example by enacting broad AI regulations or focus on targeted laws for specific applications, such as automated decision-making, deepfakes, facial recognition and AI chatbots.

Data Privacy: Federal or State Leadership?

Data privacy remains a key area of focus for policymakers, and 2025 is a critical year for whether Congress will enact a federal privacy law. The proposed American Privacy Rights Act, introduced in 2024, represents a bipartisan effort to create a comprehensive federal privacy framework. The bill includes provisions preempting state laws and allowing private rights of action, meaning individuals could sue over alleged violations. It aims to simplify compliance and reduce the patchwork of state regulations.

These issues are likely to spark key debates in the year ahead. Lawmakers are also likely to wrestle with balancing regulatory burdens on smaller businesses with the need for comprehensive privacy protections.

In the absence of federal action, states may continue to dominate privacy regulation. Since California passed the California Consumer Privacy Act in 2018, 19 states have enacted comprehensive privacy laws. These laws differ in scope, rights and obligations, creating a fragmented regulatory environment. In 2024, key issues included defining sensitive data, protecting minors’ privacy, incorporating data minimization principles, and addressing compliance challenges for small and medium-sized businesses.

At the federal level in 2024, the Biden administration issued an executive order authorizing the U.S. attorney general to restrict cross-border data transfers to protect national security. These efforts may continue in the new administration.

Cybersecurity, Health Privacy and Online Safety

States have become key players in strengthening cybersecurity protections, with roughly 30 states requiring businesses to adhere to cybersecurity standards. The California Privacy Protection Agency Board, for example, has proposed rulemaking on cybersecurity audits, data protection risk assessments and automated decision-making.

Meanwhile, there is a growing trend toward strengthening health data privacy and protecting children online. Washington state and Nevada, for example, have adopted laws that expand the protection of health data beyond the scope of the federal Health Insurance Portability and Accountability Act.

Numerous states, such as California, Colorado, Utah and Virginia, have recently expanded protections for young users’ data. In the absence of federal regulation, state governments are likely to continue leading efforts to address pressing privacy and cybersecurity concerns in 2025.

Social Media and Section 230

Online platform regulation has been a contentious issue under both the Biden and Trump administrations. There are federal efforts to reform Section 230, which shields online platforms from liability for user-generated content, and federal- and state-level efforts to address misinformation and hate speech.

While Trump’s previous administration criticized Section 230 for allegedly enabling censorship of conservative voices, the Biden administration focused on increasing transparency and accountability for companies that fail to remove concerning content.

[Video: Section 230 explained.]

With Trump coming back to office, Congress is likely to consider proposals to prohibit certain forms of content moderation in the name of free speech protections.

On the other hand, states like California and Connecticut have recently passed legislation requiring platforms to disclose information about hate speech and misinformation. Some existing state laws regulating online platforms are facing U.S. Supreme Court challenges on First Amendment grounds.

In 2025, debates are likely to continue on how to balance platform neutrality with accountability at both federal and state levels.

Changes in the Wind

Overall, while federal efforts on issues like Section 230 reform and children’s online protection may advance, federal-level AI regulation and data privacy legislation could slow given the administration’s deregulatory stance. Whether long-standing legislative efforts like federal data privacy protection materialize will depend on the balance of power between Congress, the courts and the incoming administration.

16 comments

  1. GM

    If you go back to the middle of the 20th century and review how people imagined the development of AI in the future, you may notice something very disturbing with respect to what is actually happening now — there was a kind of implicit assumption that we would only develop AI once we had sorted out our internal socio-economic mess first, i.e. in most cases real AI was envisioned against the background of a post-predatory-capitalist society that would be pursuing it with higher goals in mind than making profit.

    You can see how that kind of assumption was creeping in, especially in the immediate post-WWII environment, when the predatory capitalists were on the shortest leash they have ever been on.

    But then several things happened. First, it turned out that the projections for the speed of the development of AI were quite a bit off, and it did not happen as quickly as people imagined. That also had the effect of much of the very lively debate on its safety and direction of development being forgotten by the early 21st century, when it actually took off. And the predatory capitalists eventually did a slow-motion soft coup in the West, followed by the collapse of the USSR. So here we are in the early 21st century, more ideologically committed to predatory capitalism than we have ever been, with AI just about to reach escape velocity, and all of it pursued in the name of making profits, with zero concern about either the short-term effects or the longer-term future of humanity.

    So about the worst possible position to be in when that evolutionary transition happens.

    Trump being owned by tech oligarchs (not that the other side wasn’t, just a different set of oligarchs) who want all restrictions lifted is just another small step of that larger process…

  2. GlassHammer

    I think the hasty addition of AI via deregulation will create products loaded with either actively harmful features (spyware that works against the user) or features that drain their hardware’s resources (“bloat-ware” that adds nothing). Users will then see the “AI” label as equivalent to “junk”.

    I think in a few years a “Non-AI” label will be to technology what the “Non-GMO” label is to food.

    1. GM

      This is actually the least of our worries; those kinds of concerns are the superficial, fluffy layer of snow on top of the AI iceberg.

      The much more serious issue is that most jobs will be eliminated. First people in creative fields and the white collar office jobs, then further down the line, once humanoid robots are sufficiently advanced, everyone else too.

      Quite ominously, you don’t even see that topic discussed much in the mainstream media, which might be for a reason, not just because nobody sees what is coming.

      And if most people are not needed, and there is no clear plan what they will be doing when they are not needed, then what follows after that?

      1. GlassHammer

        I don’t buy the “AI replacement theory” (at this time) because (a) much of what is labeled “AI” is basic/superficial automation via software (overglorified macros), and (b) you can make a human legally liable for fraud/failure, but a machine can’t be held to that type of legal process.

        We still live in a market governed by contracts/law, and I don’t see anyone wanting to go in front of a judge and say “Your Honor, the machine defrauded our customer, we only owned it.”

        To me the immediate and long term threat is smuggling in pernicious software (not just Spyware but system vulnerabilities) under the label of AI to unsuspecting customers. If I was going to stretch my concerns a bit further it would be something like contested ownership of technology, i.e. “You own the hardware and software but… whatever the AI provides or extracts belongs to another party.”

        1. GM

          I didn’t buy it until very recently either, but then it started doing as well as the very best of humans on some of the most challenging reasoning tasks (e.g. fairly high-level math), with the prospect of a runaway self-improvement loop being established soon becoming quite realistic too.

          At which point one had no choice but to take it very seriously.

          Worse, the plan clearly is for embodied robots to be controlled by the cloud, where the AI resides, which, combined with the above considerations, is absolute nightmare fuel.

          P.S. What is legal and what is not should not feature much in the thinking on the subject. Evolution plays by no such rules.

          1. GlassHammer

            If it helps, a mind and a body are much better than a mind alone when manipulating objects in the real world (which is what most human work is). Even if AI has made leaps, the mind has not met a suitable body at this time.

            I watch robotics advancements with more concern than AI when it comes to job replacement.

            P.S. The legal is often “the barrier” to adoption and proliferation. You can stall technology for quite a while by narrowing its legal pathways or by making it “evolve” in entirely different ways to cope with those legal barriers.

            P.P.S. The first jobs I ever heard of being automated away were Wall Street jobs, by trading algorithms. It happened way before any trucking was automated away (trucking was supposed to be on the chopping block early) because it was way easier to automate a mind than a mind and a body. (Getting all the sensory input and physical reactions of a trucker turned out to be damned difficult.)

      2. Randall Flagg

        >The much more serious issue is that most jobs will be eliminated. First people in creative fields and the white collar office jobs, then further down the line, once humanoid robots are sufficiently advanced, everyone else too.

        >Quite ominously, you don’t even see that topic discussed much in the mainstream media, which might be for a reason, not just because nobody sees what is coming.

        >And if most people are not needed, and there is no clear plan what they will be doing when they are not needed, then what follows after that?

        That’s what I don’t get. People without jobs, no source of income, what will they use to buy food, products and services? Do these companies think of these things past what helps their bottom line this end of quarter? Isn’t this the stuff of revolutions?
        Is this AI development and the loss of jobs a good reason for Universal Basic Income?

        1. Synoia

          If jobs are destroyed, then consumption could also be, with the destruction of white collar jobs leading to a widespread professional collapse.

  3. The Rev Kev

    I think that when Trump gets sworn in, he is going to let the tech boys run rampant with cryptocurrency, AI and anything else that they want to do. All restrictions will be lifted and he will let them open up nuclear plants, coal plants and anything else they want to power their ropy inventions. It will be like a digital wild west, and I would not be surprised if under a Trump admin he lets power be restricted to households so that the saved power can be diverted to all those server farms. It’s going to be a mess, but it may just be that the tech boys will be given enough rope to hang themselves with when the final results become obvious.

    1. Thuto

      Big Tech hyperscalers spent $125 billion on datacentres in 2024, and Microsoft just announced they’re spending $80 billion in FY25 on expanding their datacentre footprint. These people have burnt the proverbial boats and are all in on the AI revolution; society will just have to deal with downstream effects like the US electrical grid’s transmission capacity being pushed to the limit to deliver power to all those hungry GPUs, to say nothing of households located in the vicinity of datacentre clusters experiencing voltage fluctuations, as powering AI workloads will rank higher than, e.g., warming your house in winter.

      Trump has handed the henhouse over to the same VC foxes who funneled billions into AI startups in 2024 and they’ll be damned if they allow safety guardrails to risk those asymmetric returns. Pesky, uppity “doomers” (as the people calling for safe development of AI are derisively called) clamoring for safety to become a core tenet of AI development will be mocked and ostracized until they acquiesce to the establishment narrative to save their careers. And of course, if you’re ever asked to bear the inconvenience of having your electricity supply throttled, just know you’re making a sacrifice “for the sake of humanity” and doing your part to make sure the big bad wolf aka China doesn’t outpace the west in AI because we are all responsible for making sure this powerful technology doesn’t land in the “wrong hands”.

      1. Joy

        How about heating your home at a discount rate with a networked hardware module that’s part of a distributed virtual datacenter? (Big penalties for messing with it, of course.)

    2. BeliTsari

      “Circus-Circus is what the whole hep world would be doing Saturday night if the Nazis had won the war?”

    – Hunter S. Thompson

  4. Watt4Bob

    Has anyone considered that all the AI data centers may not be supporting AI at all?

    As I recall, bitcoin mining requires lots of hardware and lots of power.

    A clever architect could figure out how to mate the two endeavors in such a way as to utilize all the ‘spare’ capacity.

    AI data centers are bound to be easier to sell than giant mining operations.

    And then there is the question of what to do with all those data centers when the AI bubble bursts.

    Why let all that potential go to waste when there is a way to re-purpose, and start making money again?

  5. Bsn

    This article is an example of “bla, bla, bla”. Name one example of any regulation diminishing the government and corporate efforts at tracking, surveillance and selling of all of our personal/private data to any bidder. I’ll wait.

  6. Not Moses

    Isn’t it a bit like when the Sacklers introduced OxyContin? No regulations back then either, for the miracle cure. The tragedy that followed was predictable. Though, the Sacklers made out like thieves. Go figure.

