Angles: Should the US government regulate data privacy on the internet?

Earlier this month, Frances Haugen, a former Facebook data scientist, gave testimony about what she witnessed while working for the company. She provided evidence that the Facebook algorithm spread misinformation and was “exploited by foreign adversaries,” according to NPR.

Her testimony once again brings up the question of data privacy and regulation.

In the United States, the internet is regulated by the Federal Communications Commission, which adopted net neutrality policies in 2015. According to the Bill of Rights Institute, “Net neutrality is the concept that internet service providers ought to treat all internet traffic equally and not intercede between users and their internet destinations.”

However, tech companies still have the right to sell user data collected from platforms such as social media, so the question remains: how should the government regulate privacy on the internet, weighing the sale of user data and consumer manipulation against protections for small website providers and the right to free speech?

Yes, sale of personal data is a major problem

By: Sydney Denekamp

Despite rising concerns over data privacy amid manipulative algorithms, free sale of consumer data and countless database breaches, the United States has failed to adopt any national policies safeguarding the American people from the whims of data brokers and data corporations.

As the datafication of our lives increases with more fitness trackers, dating apps, smart houses and increased social media use, the right to data privacy becomes more important than ever. As we put more information about ourselves into online spaces, we become more open to manipulation by whomever has access to that data.

Currently in the United States, many corporations that hold large databases, including most social media companies, have free rein to sell user data to third-party data brokers. While these brokers usually only use the information to show users targeted advertisements, there is nothing stopping them from selling user data to even more nefarious buyers.

By giving data brokers unchecked access to personal data, we open ourselves up to manipulation by whoever buys it. The United States saw this manipulation in action during the 2016 election, when Cambridge Analytica, a British political consulting firm hired by the Trump campaign, used the data of tens of millions of Facebook users to influence voters. Cambridge Analytica used extremely detailed and specific data profiles to deliver pointed, emotionally charged political messages to its targets.

While the degree to which Cambridge Analytica changed public opinion of Donald Trump through targeted messaging is unclear, it is clear that the public needs protection against this kind of manipulation, whether by regulating the flow of private data or stopping it altogether.

Using consumer data for profit is not new. Supermarkets like Target and Tesco have been tracking what consumers buy and sending them advertisements based on their purchases for decades. What is new is the scale of commercializing data, the openness to exploitation and the immense amount of data now available.

While some consumers may benefit from data-brokered targeted advertisements, the threat of manipulation outweighs any benefit targeted advertisements could offer.

There is nothing stopping a company like Fitbit from selling user data to the highest-bidding insurance company, which could then use that data to discriminate against individuals seeking health insurance.

Countless corporate data breaches in the last several years have compromised the privacy of millions of Americans.

Even babies cannot escape datafication: products like the Owlet smart sock monitor their health and feed their heart rates into a corporate database, where the information can be sold at any time.

Data privacy is a huge problem, yet the United States does not have any laws safeguarding internet-using Americans, adult or child, from having their identities stolen, leaving the public open to algorithmic manipulation.

While the European Union has begun attempting to safeguard data privacy with laws like the General Data Protection Regulation, which requires companies to store only necessary data and holds them accountable for the safe preservation of that data, no country has laws substantial enough to truly mitigate the risk of manipulation through data and algorithms.

As our lives move increasingly online, the United States must adopt a more stringent and serious policy enforcing the data and privacy rights of Americans, putting the safety of the American people above the commercialization of our identities.

No, regulation is dangerous if not done correctly

By: Slater Dixon

The U.S. public’s trust in technology companies has plummeted since 2019 amid constant scandals and criticism from the media.

There is a popular consensus that the federal government should do more to regulate shadowy “big-tech” corporations. Comprehensive consumer data protections are viewed as a crucial step toward a better internet. Advocates of these protections argue that the U.S. should adopt a comprehensive framework comparable to the European Union’s General Data Protection Regulation (GDPR). However, there is reason to be skeptical of sweeping changes.

Overreaching government intervention has the potential to do more harm than good, disrupting the internet as we know it.

Congress’s historical approach to the internet has been largely hands-off, to the benefit of the online ecosystem. For example, intermediary liability exemptions allow online platforms to host and moderate content posted by users without fear of being sued for copyright infringement or defamation.

This light-touch strategy doesn’t just benefit massive companies like Facebook and Google — it’s a major reason why small websites exist. Expansive privacy legislation would shift this balance, harming the very players who are best positioned to “take on big tech.”

The rules are simply different for massive platforms. Although Facebook has spent millions of dollars on compliance since the EU instituted the GDPR, Mark Zuckerberg publicly supports similar laws in the United States. This cynical stance reflects the fact that Facebook is more than capable of molding such legislation in Congress, bankrolling compliance and evading its mandates when necessary. Smaller actors don’t have those capabilities.

While it is too early to fully assess the effects of the GDPR, early research suggests that the EU’s framework may have allowed Google to increase its share of the European online advertising market. Small businesses would bear the brunt of poorly calibrated government action, which would inadvertently reinforce the grip of big tech companies.

Despite the risks involved, the federal government is likely to pursue national regulation in order to avoid a patchwork of state-level rules.

Where the government does act on consumer data privacy, its ambitions should be narrower than the scope of the GDPR. The current Congress is simply not positioned to create a sober, comprehensive framework for consumer privacy rights. Democrats are currently pursuing a bill to reform Section 230 of the Communications Act of 1934 that reflects a fundamental misunderstanding of the challenges of regulating online speech. At the same time, Republicans at the state and federal levels have repeatedly taken action based on the myth that content moderation disproportionately censors conservative voices. Our nation’s most important deliberative body has proven time and time again that it is fully capable of entertaining proposals that would be disastrous for the internet.

Congress will likely pivot to privacy reform once it becomes apparent that changes to Section 230 do not “fix” the internet. When it does, its focus should be narrow, emphasizing precise definitions and addressing how “informed consent” breaks down in the context of the internet.

By narrowing their ambitions, lawmakers can lay the groundwork for policies that foster innovation while ensuring fundamental human rights online.