Why Government Regulation Won’t Work

In this post I would like to expand upon a point that was discussed during the seminar today: whether government regulation, or law, as one of Lessig’s four forces regulating Facebook, is an effective solution.

The title suggests an absolutist point of view; I will contradict that now by saying that government regulation is indeed effective in some cases, namely, when harm results from users' expectations either matching reality or running contrary to reality through no fault of their own. The first case comprises a small set of instances in which the user knows that harm will be done to them, and the company follows through on that expectation. The second, more prevalent case comprises the instances in which the user expects certain behavior from a company, the company intentionally violates that expectation, and the user could not have known any better. These two cases describe situations in which government regulation is in fact necessary. In both, the company is acting maliciously, so solutions that require the company to act productively, such as a change in architecture or code, are destined to fail. The problem has nothing to do with the users or peer-to-peer violations in either case, so different social norms won't work either. And lastly, even though users will, over time, recognize companies that fall into these categories as undesirable business partners, government regulation matters in the short term, when users do not yet know any better, and it matters for eradicating these companies' practices entirely; just because the market will minimize their market share does not mean there is no public interest in eliminating intentionally malicious practices from markets altogether. Therefore, in the real, non-idealized market, government regulation is still an important factor.

Unfortunately, Facebook does not really fall into either of these categories. It is not a company acting maliciously. Facebook is the type of company that complies with FTC regulation, as it did in 2011, in order to resolve the last of its issues that fall under either category and can therefore be solved by government regulation. The FTC found that Facebook had been violating users' expectations of privacy; perhaps not an intentionally malicious practice, but certainly one that creates harm through information asymmetry, through no fault of the user. Facebook has corrected this: it gives users advance notice before changing privacy settings, tells them exactly (within its Terms of Service) with whom their information is being shared, and has made other adjustments. One could argue that more could be done to make Facebook transparent about what exactly certain privacy settings do and what information is shared with apps and third parties, but the fact is that even if those changes were enacted through further government action, people would still complain about privacy. They would still complain that potential employers found photos of them partying, and they would still complain that FarmVille knows their marital status. Why? Because the majority of privacy problems on Facebook, now that the major substantive changes to transparency have been enacted, are the result of peer-to-peer violations: violations that occur because of a faulty risk calculation on the part of the user with regard to an implicit or explicit agreement they enter into with other parties with respect to their information.

The series of clicks that one makes on the internet is merely a series of decisions, decisions that can be considered in much the same way as real-life decisions are. Each decision one makes online comes with a set of explicit and implicit agreements. As examples: explicit in the decision to join Facebook in the first place is the agreement to Facebook's Terms and Conditions; implicit in the decision to add a person as a friend on Facebook is the agreement to allow that friend to have all the information about you that you allow under your general privacy settings; and so on. Of course, a fundamental assumption of this point of view is that a person owns a piece of information if he or she knows it. This stands in direct opposition to the view that transferring a piece of information to someone else creates a shared ownership in which both parties must agree to any further transaction involving that piece of information. Perhaps a future blog post will discuss this IP debate more thoroughly. For now, assuming this point of view, we have only ourselves to blame for peer-to-peer violations (assuming Facebook's architecture is secure and honest, which it very much is, especially after the 2011 FTC review). Much like Laurie's email (from Professor Felten's Class of 2016 Freshman Assembly reading, by the same author as this week's reading, James Grimmelmann), if person A makes a piece of information known to person B under certain conditions, and person B does something permitted under those conditions that nonetheless harms person A, then person A has only themselves to blame. Perhaps person A, whom we'll call Al, shouldn't have made such sensitive information known to anyone but himself? Perhaps he should have considered more carefully to whom he was sending this piece of information? All we can say for sure is that, given a network-secure architecture with honest and clear privacy settings (like who gets to see your wall post to Bob, or the To: field in an email), people have only themselves to blame if their act of sharing information did harm to them.

People's gut feelings about the realities of what goes on on Facebook will surely lead them to disagree with me, but when Facebook and other social networks are viewed as platforms on which users make completely free decisions (or can opt out of making a decision at all), then perhaps social norms about information-sharing, or better education about legal rights with regard to internet privacy, are better courses of action than government intervention, which ultimately cannot regulate the actions of individuals.