Just Because it’s Open Source Doesn’t Mean it’s Objective

Today, our guest Aneesh Chopra, the first Chief Technology Officer of the United States, shared with us the ongoing projects that he and others implemented as part of the Obama Administration’s push for a more open government. These include the MyGov initiative, Data.gov, and open-source challenges like the FTC’s RoboCall challenge (which I hope someone can solve).

However, there is an important danger associated with open-source government: even though the government claims to be open, there is no guarantee that this is completely true. Let me be specific: this need not manifest itself as malicious intent on the part of the government. Indeed, the most insidious problem arises when there is every reason to believe that the government is being genuine, giving citizens and developers alike the false impression that the open data they see on sites such as data.gov is objective and exhaustive. Since it is the government declassifying and providing the data, it is inherently the government’s choice which data exactly to put out, how to package it, and how to organize it. I call this presentational spin; the term “data” has objective and neutral connotations, but presentation and choice have everything to do with distorting this objectivity.

This has a couple of implications. The first is that open government is not truly achieved; we have a situation in which the government is open … because it says it is. The second is that the open data the government provides is not neutral, and can be used to influence people’s perceptions of issues such as energy and weather reports; the government chooses not to falsify data, but perhaps to organize and publicize it in a way that emphasizes the points beneficial to its motives, or to declassify certain data but not others (we can never really know which government data is not open).
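Presentational spin is easy to demonstrate. Here is a minimal Python sketch, using an entirely hypothetical monthly unemployment series, of how two truthful presentations of the same open data set can tell opposite stories:

```python
# Hypothetical monthly unemployment rates (%) over one year.
rates = [8.1, 8.3, 8.2, 7.9, 7.8, 8.0, 7.7, 7.9, 7.6, 7.8, 7.5, 7.6]

# Presentation A: year-over-year change — "unemployment is falling".
yoy_change = rates[-1] - rates[0]

# Presentation B: count the months the rate rose — "unemployment rose
# in 5 of the last 11 months". Same data, opposite emphasis.
months_up = sum(1 for a, b in zip(rates, rates[1:]) if b > a)

print(round(yoy_change, 1), months_up)
```

Neither presentation is false; the choice between them is exactly the editorial power that a data publisher quietly retains.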

There is no complete solution to this problem. As long as we rely on the government to keep and classify important data sets, we will have to live with its presentational spin whenever it chooses to release data. There is, however, a practical compromise: institute something of an Open Data Review Board, with power and a basis independent of the executive branch (perhaps a congressional committee?), to oversee that the presentational spin of data is minimized. This obviously has its flaws, similar to the ones plaguing the notion of data.gov, but it is the best available way to ensure that truth prevails, in the interest of the people.

The Tracking Conundrum

Many people, upon learning that millions of sites they have never heard of are constantly tracking their every online move, are justifiably creeped out. Online tracking has been a hotly debated issue, especially with the rise and collapse of the “Do Not Track” movement. It is therefore worthwhile to analyze the specific problems and potential solutions associated with this issue.

First, the problems: why is it so bad that Facebook knows all of your personality traits, relationships, and tendencies? The easiest answer is that it is not just large, reputable companies such as Facebook and Google that track your online activity and seek to profit by mining large databases of big data. Owing to increased legal scrutiny and a special interest in maintaining good public relations with users, these companies will almost certainly make sure that your data is properly cared for. The risk that something goes wrong is certainly worth keeping in mind, but government, the market, norms, and secure technological architecture all decrease its probability.

So who else is in the business of tracking online activity? Companies such as scorecardresearch.com, quantserve.com, and doubleclick.net, which most Internet users have never even heard of. We now have an entirely different issue on our hands, in which reputable first-party websites embed cookies from dozens (upwards of 50 in some cases) of third-party companies whose sole job is to track your online activity. This is a significant problem for several reasons. First, these companies operate in an entirely different sphere than Facebook and Google: government regulation is near-impossible, and the companies have no market incentive to be careful with data. In fact, they have every incentive to profit from your data by selling it to advertisers, who could then go on to price-discriminate by bracketing Internet users by supposed income, or to compile databases of sensitive information such as medical records. Second, neither legislation nor architecture has been updated to provide users with meaningful technical controls over the way their information is shared over the web. Lastly, there are so many third-party tracking sites that even if we had the tools with which to protect our privacy, we could not keep up. Even sites such as ghostery.com, whose applications are intended to give users more control over such matters, cannot possibly tell you about every hidden tracking cookie.
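To make the mechanism concrete, here is a minimal Python sketch. The first-party domain and HTML snippet are invented for illustration (though the tracker hostnames follow the real companies named above); it simply lists the third-party hosts a single page pulls resources from. Each such host can set and read its own cookie on every site that embeds it, which is what makes cross-site tracking work:

```python
import re
from urllib.parse import urlparse

first_party = "example-news-site.com"  # hypothetical first-party site

# A toy page: three embedded tracker resources and one first-party image.
html = '''
<img src="http://b.scorecardresearch.com/p?c1=2">
<script src="http://edge.quantserve.com/quant.js"></script>
<script src="http://ad.doubleclick.net/adj/site"></script>
<img src="http://example-news-site.com/logo.png">
'''

# Extract every src URL and keep the hosts that are not the first party.
urls = re.findall(r'src="([^"]+)"', html)
third_parties = sorted({urlparse(u).hostname for u in urls
                        if not urlparse(u).hostname.endswith(first_party)})
print(third_parties)
```

On a real news site the same loop would typically return dozens of hosts rather than three.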

Given these problems, which have indeed manifested themselves, users’ fear of online tracking is clearly a rational one. We must therefore consider the tools in our arsenal, given succinctly by Lessig’s four forces: government, the market, norms, and technology. I explained above why the market will fail: those playing fast and loose with people’s information have no incentive whatsoever to respect privacy rights. Additionally, the “Do-Not-Track” movement, which attempted to implement a technical tool that really didn’t do anything (and was thoroughly rejected by the tracking industry), is an empirical example of the failure of norms: telling tracking sites you do not want to be tracked will not accomplish anything. It is therefore my position that the two remaining forces, government and technology, should work in concert to provide a solution: technology to architect more sophisticated user controls, and government to provide the nudge for the Internet industry as a whole to adopt them. There will need to be a substantive legislative debate about what these controls might look like. Until then, users have two choices: accept the insecurity of their information, or opt out of Cyberspace entirely.
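For what it’s worth, the rejected “Do Not Track” mechanism really was that thin: a single extra HTTP request header that trackers were free to ignore. A sketch using Python’s standard library (example.com is a placeholder; the request is only constructed, never sent):

```python
from urllib.request import Request

# Build a request carrying the Do Not Track preference.
req = Request("http://www.example.com/", headers={"DNT": "1"})

# urllib normalizes header names to capitalized form ("Dnt") internally.
print(req.headers)
```

A tracker that receives this header faces no legal or technical consequence for disregarding it, which is precisely why the norm-based approach failed.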

Don’t Undo, just Don’t Do

We all wish we could forget certain things or undo certain actions of the past. While technology has created a world in which it has never been easier to undo actions, it has also created the conditions whereby forgetting that the actions occurred in the first place is far more difficult (just consider Facebook’s Timeline). Say Joe wants to clean up his Facebook before he applies for a job. He would probably go through his entire Timeline and ask himself whether each post is worthy of remaining published for his Facebook friends (and the world, by extension) to see. Now flip the coin: say I’m the job interviewer and I want to check whether my applicant’s Facebook profile is clean. It would be much easier for me to simply scan all the posts, pictures, and so on, looking for something incriminating.

My point is not that companies want to do this, or even that they have the ability to do this (although even technological privacy settings cannot prevent peer-to-peer violations). The point is that, in general, when confronted with a large amount of relatively unsorted information, it is far easier to search for items related to a specific issue than for items that violate a certain norm (with regard to my personal Facebook privacy, for example, the norm might be that posts should not be harmful to my identity, future job prospects, relationships, and so on). It would take far less time and effort to go through one’s Timeline looking for posts related to, say, sports, than to evaluate each post, one at a time, for its favorability to one’s “Facebook image.” The reason is possibly that the decision in the former scenario occurs at an easier level: sorting by issue is a less intensive task than sorting by favorability. Perhaps it is also the case that sorting by issue is more easily automated by computers and search, whereas sorting by favorability requires a more human decision which is not so easily computed.

This notion translates well into the real world, where people commonly fear investigators (perhaps for a job application, political campaign, or FBI investigation) digging through their online activity from the distant past in search of additional information or evidence. As a society we must recognize the asymmetry here: the unrealistic time and effort regular people would have to expend to sift through the increasingly unmanageable swaths of information on the Internet, polishing and culling anything unfavorable to their interests, versus the comparatively small time and effort investigators need to expend to find information relevant to a specific issue. The result is a perverse calculation of incentives: should I incur the costs of maintaining and cleaning up all of my online interactions from the past, or should I, from the get-go, simply self-censor my behavior to ensure no potential blowback in the future? If forgetting and anonymity continue to become increasingly difficult to achieve, more and more people will opt for the latter course: to live in a low-risk, low-freedom, virtual Panopticon of self-censorship.

Cyberbullying: Striking a Balance

In the offline world, children bully each other in ways that are seen, such as physical violence, and ways that are mostly unseen, like gossip and verbal commentary. That the Internet, particularly its myriad social networking sites, has become an increasingly significant extension of children’s lives presents an opportunity here. What administrators could not do effectively in the offline world, such as prevent verbal abuse and hate speech, they now theoretically have the technological capability to do in the age of social networking. However, the extent to which this newfound ability should be exerted must depend on the various pitfalls that arise when government and schools have too strong a presence in people’s personal lives.

Cyberbullying is a completely different animal than regular bullying. The Internet in many cases provides anonymity, and in others at least plausible anonymity, making it more difficult to catch perpetrators and weakening the deterrent effect that law enforcement usually provides, since the probability of negative consequences is lower. Furthermore, the unprecedented ubiquity of Facebook and Twitter, and their potential to disseminate content to a far larger share of a child’s offline social network, can make the effects of cyberbullying much more emotionally and psychologically devastating than those of regular bullying. Lastly, the Internet can serve as a mechanism that facilitates offline bullying and violence, ranging from sex offenders and fraudulent scammers (obvious criminals from whom the law already works hard to protect children) to online marketers who target children and their parents’ credit cards. Not only does the technology of the Internet present an opportunity for administrators and the government to prevent many forms of violence done to children; a failure to act appropriately is itself a significant problem, because very real dangers do exist on the Internet.

So what should be done? Clearly private schools and public schools are held to different standards, based on public schools’ status as extensions of government, subject to the same Fourth Amendment restrictions as police. This is not to say, however, that private schools should be able to do whatever they like while public schools are powerless to protect their students. One stance on this policy issue is that schools should have complete access to students’ Facebook accounts so as to monitor activity fully. My view is that this would be misguided: students would still find ways to bully each other online (new sites such as Formspring pop up every day), and the school would have such oversight over students’ lives that new, more restrictive norms would be created, limiting children’s potential to be content in their social lives (see my post on government intrusiveness on Facebook). However, schools also should not adopt a completely laissez-faire attitude, as inaction, or indeed action only after harm is done, can and will lead to negative consequences. If you wait for something bad to happen, something bad surely will. I advocate a middle-of-the-road approach: focus on preventative measures, while still allowing normal means of law enforcement, such as students themselves coming forward with evidence, and requiring something akin to a search warrant before schools can access Facebook records, to provide for the safety of children.

Social Gaming Needs to Rethink its Business Model, Big Time.

The future of social gaming is grim. Zynga’s layoffs of more than 100 employees, its CFO’s defection to Facebook, and its plummeting stock price are key indicators that this may well be the case. Yet beyond these empirics, there are reasons why Zynga’s failure (or coming failure, depending on how you look at it) is structural: the result of a fundamental business problem which cannot be solved by corporate restructuring. I’m talking about the fact that Zynga’s goal of making money through games with frequent users is at odds with the reality that the only games which can be sensibly monetized are impossible to implement on the Facebook platform.

In order for a game to have frequent users, it must be a good game, or at least one with lots of social value. This is why Zynga’s games succeeded so spectacularly in the past: one can debate the merits of Farmville or The Ville, but no one can deny that their social network-scale growth to millions of users made the Zynga model seem so attractive at the onset. However, one must look back to the basics of user growth. Social dynamics surely have a lot to do with growth after a certain point (for example, anyone currently without a Facebook account essentially needs one because everyone else has one; likewise, one may be socially compelled to adopt a certain game once enough of one’s friends do so), yet to get to the point past which social dynamics dominate the growth model, the product in question must have some value. For games, that means things like fairness and accessibility. Cow Clicker is an example of a game which some may argue does not have much value, yet its widespread adoption is a testament to the core notions of fairness and accessibility. A game doesn’t need to be all that exciting; it just needs to be exciting enough to a broad enough range of people to reach the point past which the social aspects of the game will help it dominate.

This is the crux of the reason why Zynga’s games fail to retain frequent, consistently returning users, and will continue to do so under the same model: when previously fun games such as Tetris now enable Facebook’s affluent users to gain unfair advantages (this is essentially the model of every social game on Facebook), these games become unfair and destroy accessibility both for users who simply cannot afford the cost of winning and, more critically, for users who do not care enough about the game to pay to win. So how was Farmville, for example, able to reach a critical mass of millions of players despite a similar monetization strategy?
One answer is that Farmville taps so well into users’ reward-competition psychology that it gained enough users for its excellent utilization of social capital to dominate its growth curve. The failure of Zynga, however, is that even its successful games are losing popularity. An increasing population of users leads to diminishing returns on the social side, while the game’s negative aspects are exacerbated every time social effects are amplified: “winning” at the game becomes that much more valuable, prompting more and more dedicated users to take advantage of pay-to-win, leaving potentially dedicated users unsatisfied.
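The growth argument above can be caricatured in a few lines of Python. All parameters here are invented; the only point is structural: a game with some intrinsic value crosses the threshold past which social pull compounds on itself, while one without it stalls well short of critical mass:

```python
# Toy adoption model: each step, non-adopters join with probability
# proportional to the game's intrinsic value plus a social-pull term
# that grows with the current adopter fraction. All numbers hypothetical.
def simulate(intrinsic_value, steps=20, pop=1_000_000, social_weight=1.5):
    adopters = 1_000  # seed users
    for _ in range(steps):
        social_pull = social_weight * adopters / pop
        adoption_prob = min(1.0, intrinsic_value + social_pull)
        adopters = min(pop, int(adopters + adoption_prob * (pop - adopters) * 0.1))
    return adopters

print(simulate(0.05))   # decent intrinsic value: social effects take over
print(simulate(0.001))  # little intrinsic value: growth stalls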

In order to be successful in the future, Zynga will need to change the way it makes money through social gaming. Perhaps Facebook is not the best platform. The problem Zynga will have to navigate is that its existential reliance on Facebook may make the cost of moving forward as a company too great to bear sometime in the future.

No, Really, He Didn’t Do That Badly

Today, seminar guest speaker Zeynep Tufekci pointed out the startling reality of technology’s, and specifically social networking’s, increasing influence on the offline world with an anecdote: after this year’s first presidential debate, people who did not simultaneously watch Twitter feeds perceived President Obama to have had a somewhat off night, while those who monitored and actively participated in the concurrent Twitter conversation felt strongly that he had performed very badly. The reason? Within the first ten minutes of the debate, the phrase “#whatswrongwithobama” had been tweeted and retweeted so much as to create a cascading, back-and-forth conversation among popular and influential tweeters, allowing them to collectively generate a sentiment which, through network effects, amplified itself so much that by the end of the debate the consensus was that Mitt Romney had won by a landslide. At first glance, this result is not significant for the offline world: a self-contained community like Twitter’s reaching a collective opinion could be nothing more than a product of Twitter’s being composed of elites and journalists with similar opinions. The impacts on offline politics, however, are far more important. We now have a new spin room in politics: whereas in the past television pundits decided the winners of debates based on journalists’ interviews of campaign spokespeople after the debate, television pundits now generate opinion from a far broader base: the opinions of tweeters.

This phenomenon can have both positive and negative effects for democracy. One could argue that because the opinions of political pundits are more broadly based, and because the opinions of these pundits generate large waves in the general population’s opinion, there will be more truth in political spin than there was when standard stump-speech sound bites were repeated by campaign spokespeople, either yielding no new information at all or, worse for democracy, allowing pundits to use their position to signal their own personal opinions for lack of a better, more popular source of information. However, those on the other side of the debate could argue that online communities like Twitter do not actually represent the population as a whole, but rather the opinions of a homogeneous, affluent elite, echoing, amplifying, and self-confirming themselves in the online environment, causing pundits’ opinions to distort national discourse by re-projecting Twitter’s consensus.

Whichever way one leans on the issue, it must be admitted that social networking has changed the game of politics and political movements. This country will need to find more ways to exploit these technologies for more productive and inclusive discussion, as well as to prevent meme culture from creating too much unproductive discussion.

The Case for an Openly Data-Driven Facebook

Facebook and other social networking sites, such as Twitter, Myspace, and Friendster, revolutionized society. They transformed the Internet into a venue in which not only dedicated content producers disseminate information, but in which ordinary people, too, can share their lives and interact with one another. Social networking has changed the offline world as well: from establishing new social norms about friendship and relationships to aiding in the organization of groups, such as class groups and, more importantly, political groups, Facebook has turned the Internet into a tool for real-world benefit. However, major social networking sites as they currently exist have one fundamental problem: they do not seek to be a mechanism through which opinions form, change, and grow; rather, they exist for people to share their preexisting opinions.

Don’t get me wrong: social networking has the potential to accomplish this. It is also true that just by scanning someone’s Twitter page, or by randomly happening upon a friend’s opinion in a status, I might develop my own opinion about an issue further. In my experience, however, 140-character (maximum) tweets and short statuses which seek “likes” have, at best, informed me of the basics of others’ opinions. Social networking as it exists today just does not seem equipped to let people disseminate well-thought-out views about issues, hold discussions which are easily spread to others, and, most importantly, view the aggregate of users’ opinions about issues.

Today I will touch on one proposition which could substantially alter the way we view social networks. What I propose is that rather than deliberately turning away social science researchers and other academic data collectors, Facebook should provide free, open data on the site itself. This would be good both for society and for Facebook.

It is widely known that Facebook’s potential for research on a wide range of social, political, and economic issues is enormous. Indeed, a great deal of research has been done using Facebook’s vast capacities for data mining, and an even greater deal has never happened, owing to Facebook’s unnecessarily strained relationship with research institutions. Open data on Facebook would allow social and political science researchers to study trends and conduct experiments much more easily, and openly. I believe, however, that the even greater potential of this proposal lies with ordinary people. Regular users would be able to compare their friends’ beliefs with those of society at large, for example. Obviously, it would be difficult to implement such an idea: how exactly does Facebook determine which issues are important? How would Facebook decide how to measure such trends? I believe that, in collaboration with researchers, Facebook can transparently begin the process of allowing people to understand more about society.

Lastly, open data could be great for Facebook. What makes Facebook great now is the free exchange of social capital that social networking enables. Efforts to open up intellectual capital as well, alongside structural changes to the site to allow space for more serious discussion, posting, and dissemination, could make it possible, more than ever before in the offline world, for people to grow their opinions and see in real time how society thinks about things. This would attract more use and interest, from people and governments alike.

Would Facebook ever implement such a policy? It certainly does not seem like it right now. But I believe that in an age when our actions on the Internet become less and less meaningful, the pressure will mount for one site to differentiate itself and provide on an unprecedented scale the seriousness that people currently crave.

If Facebook doesn’t do it, I guarantee that someone else will.

Why Facebook’s Revenue Model is So Flawed

I was scrolling down my Facebook news feed yet again, but this time paying more attention to the ads Facebook decided to show me: two for films in the style of Quentin Tarantino, one for Wendy’s, one for a comedian I have never heard of who will be playing in Philadelphia, and one for a yogurt place in West Windsor, New Jersey. Aside from the film ads, which Facebook probably showed me because I “liked” the movie Pulp Fiction a couple of years ago, I honestly cannot say what piece of information Facebook used to decide to display the others. I’ve never liked the Wendy’s page, nor that of any other fast-food chain for that matter. I’ve never searched for it in the Facebook search bar, and I’ve been to Wendy’s once in my life, not that Facebook has a way of knowing that. Philadelphia and West Windsor are both near Princeton, but neither is exactly accessible to a college student.

Needless to say I didn’t click on any of the ads. In fact, I’ve never clicked on a Facebook ad in my life. Conclusion: for me, Facebook ads are a terrible way to make money.

Now, many of you reading must be thinking that I’ve jumped to the wrong conclusion. Many would argue that someday, for some product, Facebook will present me with an ad which I will click on, which will have made their entire ad campaign to me up to that point worth it. Or rather, many would argue that even though Facebook ads may not be so profitable for me, they are for the entire social network, when taking an aggregate over a long period of time for lots of people.

However, let us look at the facts. When, even at the onset of social advertising, the click-through rate is one click per thousand users, we can see that even though Facebook does make money on ads, this revenue comes solely from a small proportion of the site’s users. How does Facebook make money off the great majority of its users? It doesn’t.

Facebook makes billions of dollars, but it could generate a steadier and much more substantial revenue stream if it monetized the majority of its users. Consider a pay-per-impression model. Instead of placing ads at the side of the screen, where I don’t usually look (I pay much more attention to my news feed), Facebook could introduce an option to place ads directly in the news feed, or at the top of the site, where they cannot be ignored. This would greatly increase the value of each ad, generating more revenue for Facebook. Every user would now be targeted, since every user would see the ad. While sites with pay-per-impression models like YouTube and Hulu essentially force users to watch an ad before they can consume content (causing most of them to switch tabs or space out), Facebook could find a middle path which increases ad value without alienating users.
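To put rough numbers on the comparison, here is a back-of-the-envelope sketch in Python. Every figure is hypothetical except the one-in-a-thousand click rate cited earlier; the prices are placeholders, not Facebook’s actual rates:

```python
# Revenue per 1,000 users: pay-per-click vs. pay-per-impression.
users = 1_000
click_rate = 1 / 1_000          # the one-in-a-thousand ratio cited above
revenue_per_click = 0.50        # assumed price per click, dollars
cpm = 2.00                      # assumed price per 1,000 impressions, dollars
impressions_per_user = 10       # assumed news-feed ad views per user

ppc_revenue = users * click_rate * revenue_per_click        # one paying click
ppm_revenue = users * impressions_per_user * cpm / 1_000    # everyone counts

print(ppc_revenue, ppm_revenue)
```

Under these (made-up) prices, charging for impressions monetizes all 1,000 users instead of the single one who clicks, which is the whole argument in miniature.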

Now, I am not necessarily advocating this from the standpoint of the user. It would probably create a worse user experience, on balance. But really, would I suddenly leave Facebook, the place in which so much of my social capital has been invested? The answer is “no” for me, as for most people. Speaking from the point of view of a Facebook business strategist, I contend that a prominent pay-per-impression model is the way to go.

Facebook’s Secret Police

So I was scrolling down my Facebook news feed like any respectable college student would in a time of procrastination, and I happened upon a post by one of my friends which really intrigued me. At first, I thought the poster, who will henceforth be referred to as “Orange” in an effort to protect his privacy and avoid committing a peer-to-peer privacy violation myself, was just jumping on the bandwagon of the most ardent defenders of Internet civil liberties. On a site like Facebook, where an accepted, tacit norm is that information and freedom of expression rule, I dismissed this as a “duh” post. Of course the government cannot use information acquired through monitoring Facebook, I thought. I even dismissed the possibilities that governments had access to private information on Facebook or that they would go through so much trouble to monitor such vast amounts of information.

The post was not all that surprising, and I would have skipped most of it had I not seen the comment from the person henceforth referred to as “Blue”: “I believe you forfeit that right by agreeing to Facebook’s terms and conditions.” This was certainly intriguing, and it forms the basis of this post. What incentive does Facebook have to deceive its customers and aid in government monitoring? Should the government have access to such information?

My answer is no. When the government overly monitors people’s private actions and opinions, broadly construed as modes of expression, it causes people to change their actions in accordance with the norms the government is effectively imposing. Take one justification for monitoring social media: preventing threats. While this sounds like a noble aim in the abstract, giving the government so much license can lead to somewhat arbitrary exertions of power when the law is applied. Meet Jason. Jason posted the following to a private page’s political thread: “I wish there was a magic wand to make Senator Santorum disappear.” The result: a police investigation at both his work and his home. Jason was later found not to be a “threat,” but the fact that the police got involved, bringing potential workplace embarrassment and a search of his home for evidence that he might harm Senator Santorum, is a major warning sign that deters both Jason and those he knows from posting possibly negative comments about political candidates. In this way, government action exerts a strong normative force on individuals’ actions.

With the ability to view users’ content, and algorithms to flag activity which potentially violates its norms, the government is rendered omniscient. What are the implications? People have ideas in their minds, and free speech is the notion that, within certain limits, people should control which of those ideas they express to others. Yes, there are limitations on free speech, including those related to threats and obscenity, but never before in the history of the world have liberal democracies had so much ability to monitor their citizens. Consider Communist Russia: not a liberal democracy by any means, but an excellent example of a place where the police came knocking on your door whenever you uttered something that could be construed as being against the state. How was the government able to monitor people’s actions so closely? With the help of spies and bribes. At the time, in the days before the mass information potential of the Internet, liberal democracies like the United States were neither able nor willing to monitor their citizens so closely. People then, just like today, expressed themselves in the real world quite freely, saying much worse than what Jason said on a whole host of topics. Fast forward to the age of social media, where the Internet allows a much wider scope and volume of expression than was ever possible, causing most expression to take place in the virtual world. While in the real world the government maintains its laissez-faire attitude toward comments like Jason’s, it is cracking down on everything in the virtual world, particularly because technology makes it so easy to do so. Algorithms are the new spies; code is the new Russian secret police.
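To see how blunt such automated flagging can be, here is a deliberately naive keyword-matching sketch in Python. The term lists are invented for illustration; the point is that Jason’s harmless wish trips the same wire a genuine threat would:

```python
# A caricature of an automated monitoring filter: flag any post that
# pairs a "violent" term with a watched name. Terms are hypothetical.
FLAG_TERMS = {"disappear", "harm", "attack"}
WATCHED_NAMES = {"santorum"}

def flag(post: str) -> bool:
    """Return True if the post contains both a flag term and a watched name."""
    words = set(post.lower().replace(".", "").split())
    return bool(words & FLAG_TERMS) and bool(words & WATCHED_NAMES)

jason = "I wish there was a magic wand to make Senator Santorum disappear."
benign = "Senator Santorum gave a speech today."
print(flag(jason), flag(benign))
```

A real system would be far more sophisticated, but the structural problem survives sophistication: any classifier tuned to catch every genuine threat will also sweep up speech like Jason’s, and the knock on the door follows.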

In the aggregate, governments’ exertion of this power over expression can translate into other ways in which people live their lives: recall the USSR’s totalitarian control of all modes of living. This control of decisions at the level of the life of a citizenry, aptly termed “biopower,” allows governments to normatively diminish their citizens’ lives by controlling expression (which is, after all, the way people interact with their world), reducing life to “bare life.” The term suggests its meaning: a life controlled by the state is no life at all.

Think it’s a scary scenario? Something must be done to preserve the internet as an unfettered medium of expression. “oh well. :(” is not good enough to preserve our liberty.


Why Government Regulation Won’t Work

In this post I would like to expand upon a point that was discussed during the seminar today: whether government regulation, or law, as one of Lessig’s four forces regulating Facebook, is an effective solution.

The title suggests an absolutist point of view; I will contradict that now by saying that government regulation is indeed effective in some cases, namely, when harm results from users’ expectations being either symmetric or antisymmetric to reality through no fault of their own. The first case comprises a small subset of cases in which the user knows that harm will be done to them, and the company follows through on that expectation. The second, more prevalent case comprises the subset of cases in which the user expects certain behavior from a company, the company intentionally violates that expectation, and the user could not have known any better. These two cases describe instances in which government regulation is in fact necessary. In both cases, the company is acting maliciously, so solutions that require the company to act constructively, such as a change in architecture or code, are destined to fail. The problem has nothing to do with the users or peer-to-peer violations in either case, so different social norms won’t work either. And lastly, even though users will, over time, recognize companies in these categories as undesirable business partners, government regulation matters in the short term, when users do not know any better, and it matters for eradicating these companies’ practices entirely; just because the market will minimize their market share does not mean there is no public interest in completely eliminating intentionally malicious practices from markets. Therefore, in the real, non-idealized market, government regulation is still an important factor.

Unfortunately, Facebook does not really fall into either of these categories. It is not a company acting maliciously. Facebook is the type of company to comply with FTC regulations, as it did in 2011 in order to resolve the last of its issues falling under either category that could be solved by government regulation. The FTC found that Facebook had been violating users’ expectations of privacy; maybe not an intentionally malicious practice, but certainly one which creates harm through information asymmetry, through no fault of the user. Facebook has corrected this: it gives users advance notice before changing privacy settings and lets them know exactly (within its Terms of Service) with whom their information is being shared, among other adjustments. One could argue that more could be done for Facebook to be transparent about what exactly certain privacy settings do and what information is shared with apps and third parties, but the fact is that even if those changes were enacted through further government action, people would still complain about privacy. They would still complain that potential employers found photos of them partying, and they would still complain that Farmville knows their marital status. Why? Because the majority of privacy problems on Facebook, now that the major substantive changes to transparency have been enacted, are the result of peer-to-peer violations: violations that occur because of a faulty risk calculation on the part of the user with regard to an implicit or explicit agreement they enter into with other parties with respect to their information.

The series of clicks that one makes on the internet is merely a series of decisions, decisions that can be considered in much the same way as real-life decisions are. Each decision one makes online comes with a set of explicit and implicit agreements. As examples: explicit in the decision to join Facebook in the first place is the agreement to Facebook’s Terms and Conditions; implicit in the decision to add a person as a friend on Facebook is the agreement to allow that friend to have all the information about you that you allow under your general privacy settings; and so on. Of course, a fundamental assumption of this point of view is that a person owns a piece of information if he or she knows it. This is a rebuttal to the opposing view that transferring a piece of information to someone else constitutes shared ownership, in which both parties must agree to any further transaction of that piece of information. Perhaps a future blog post will discuss this IP debate more thoroughly. For now, assuming this point of view, we have only ourselves to blame for peer-to-peer violations (assuming Facebook’s architecture is secure and honest, which it very much is, especially after the 2011 FTC review). Much like in Laurie’s email (from Professor Felten’s Class of 2016 Freshman Assembly reading, by the same author as this week’s reading, James Grimmelmann), once person A makes a piece of information known to person B under certain conditions, and person B does something permitted under those conditions that still harms person A, person A has only themselves to blame. Perhaps person A, whom we’ll call Al, shouldn’t have made such sensitive information known to anyone but himself? Perhaps he should have considered more carefully to whom he was sending this piece of information?
All we can say for sure is that given a network-secure architecture with honest and clear privacy settings (like who gets to see your wall post to Bob or the To: field in an email), people only have themselves to blame if their act of sharing information did harm to them.

People’s gut feelings about the realities of what goes on on Facebook will surely disagree with me, but when Facebook and other social networks are viewed as platforms on which users make completely free decisions — or can opt out of making a decision at all — then perhaps social norms about information-sharing, or better education about legal rights with regard to internet privacy, are better courses of action than government intervention, which ultimately cannot regulate the actions of individuals.