I chose to analyze the rules of Facebook for this week's assignment. For the final project, I will compare the features of Facebook with those of a similar Korean social networking site, Cyworld. The official rules governing Facebook can be found from the main page by clicking the "Terms" link at the bottom, which leads to the Statement of Rights and Responsibilities (http://www.facebook.com/terms.php); this statement includes additional links to a detailed privacy policy (http://www.facebook.com/policy.php), a safety policy, and so on. These rules can also be found on the Help Center page (http://www.facebook.com/help/), which doubles as a useful guide for Facebook users.
Broken Rules
1. This group created a group page to discuss their anti-Israel views; many of them used hate speech (using "F" words) toward Jews based on ethnicity, religion, or national origin. This violates the safety policy that "You will not post content that is hateful, threatening… or [that] contains … graphic or gratuitous violence". Because the site has a built-in reporting feature, other users can report the group's hate speech or violent behavior. If I were an administrator, I would first let users leave feedback before removing the group for sharing inappropriate content. Answerbag uses a similar system: a feedback feature that allows users to flag content as spam or inappropriate for administrator review (Gazan, 2009). However, if the group were planning terrorism or collecting money to support threats, I would remove it immediately; the safety policy states that "Any credible threats to harm others will be removed. We may also remove support for violent organizations that intimidate or harass any user".
2. This user continuously posted pornographic content to their wall and promoted their adult video website with a link. This violates Facebook's strict no-nudity-or-pornography policy, under which "any content that is inappropriately sexual will be removed". Because there is a feature that lets other users report the person for inappropriate wall posts, or even block the person, administrators can remove the pornographic content and suspend the account immediately after receiving feedback. As Gazan (2007) explains, site moderators identify many rogue behaviors by reviewing and monitoring users' feedback. However, unlike Flickr or YouTube, which both have a safety mode as the default setting, Facebook has no feature to filter content that is unsuitable for minors. I would create such a feature, in keeping with the idea that administrators and designers should listen to users' feedback and never stop pursuing user-centered design (Grimes et al., 2008).
3. The search results include several fake Facebook accounts and fan pages for Justin Bieber, which violate the Registration and Account Security policy: "You will not provide any false personal information on Facebook, or create an account for anyone other than yourself without permission … You will not post content or take any action on Facebook that infringes or violates someone else's rights or otherwise violates the law".
I have heard that some fans create fake accounts to promote their favorite celebrities. Even though there is already an official account, shown in the screenshot above, the user in the screenshot below uses almost the same images and page structure to imitate Justin Bieber. What is worse, some people believe the page is real because its name includes the word "Official". Because impersonation is not allowed on Facebook, there is a feature that allows you to report the person by choosing the option "This profile is impersonating someone or is fake". However, in practice people can create multiple accounts using false information and different email addresses, so these fake accounts are not easily removed: verifying identities takes time and raises privacy issues. In order to build trust between users, I would instead limit each IP address to creating only one account (see the sketch below).
Also, many groups have been created for reporting abuse and policy violations on Facebook. It is good that people gather and send feedback at the same time so that administrators pay attention: "By working collectively, users have a much stronger ability to influence producers to make alterations. Functioning as a collective group in the form of a team or guild allows users the ability to exert more formidable pressure than they could ever accomplish individually" (Grimes et al., 2008). I also feel that having so many similar groups splinters people into smaller groups; given this, I would combine them into a single "Feedback Center" built around up-to-date community standards and policy documents: "Oversight increased both the quantity and quality of contributions while reducing antisocial behavior, and peers were as effective at oversight as experts" (Cosley et al., 2005).
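To make the one-account-per-IP idea above a little more concrete, here is a rough sketch of how such a check could work at sign-up time. This is only my own illustration, not anything Facebook actually does; the registered_ips store and both function names are hypothetical.

```python
# Hypothetical sketch: allow at most one account to be created per IP address.
# "registered_ips" stands in for whatever datastore the site actually uses.
registered_ips = {}  # maps IP address -> account id registered from it


def can_register(ip_address: str) -> bool:
    """Return True if no account has been created from this IP yet."""
    return ip_address not in registered_ips


def register_account(ip_address: str, account_id: str) -> bool:
    """Create the account only if the IP has not been used before."""
    if not can_register(ip_address):
        return False  # reject: this IP already created an account
    registered_ips[ip_address] = account_id
    return True


# Example: a second registration from the same address is refused.
print(register_account("203.0.113.5", "official_fan_page"))  # True
print(register_account("203.0.113.5", "fake_fan_page"))      # False
```

As several commenters point out below, shared connections (households, dorms) make a hard one-per-IP rule problematic, so this would at best be a starting point for flagging suspicious registrations rather than blocking them outright.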
Finally, here are Five Unwritten Rules that I came up with for users:
- Do not share personal information with anyone.
- Use freedom of speech and expression, unless it violates the Facebook rules.
- Be aware of the importance of diversity.
- Keep up with newly released features and applications.
- Act like you are in the real world; treat other users with respect and good faith.
References
Cosley, Dan, Dan Frankowski, Sara Kiesler, Loren Terveen, and John Riedl (2005). How Oversight Improves Member-Maintained Communities. Proceedings of CHI 2005, Portland, Oregon, April 2-7, 2005.
Grimes, Justin, Paul Jaeger and Kenneth Fleischmann (2008). Obfuscatocracy: A stakeholder analysis of governing documents for virtual worlds. First Monday 13(9).
Gazan, Rich (2009). When Online Communities Become Self-Aware. Proceedings of the 42nd Hawaii International Conference on System Sciences, Waikoloa, HI, 5-8 January 2009.
Gazan, Rich (2007). Understanding the Rogue User. In: Diane Nahl and Dania Bilal, eds. Information & Emotion: The Emergent Affective Paradigm in Information Behavior Research and Theory. Medford, New Jersey: Information Today, 177-185.
Good post! Your first example included a "hate group," which may draw negative comments and may lead to harassment. I do wonder about the point at which Facebook administrators react and take action. Considering the security policy ("Any credible threats to harm others will be removed. We may also remove support for violent organizations that intimidate or harass any user."), I am wondering what counts as a "credible threat."
Your second example involves spamming with pornographic material. Even though Facebook allows this type of material to be reported before it is removed, I am wondering if anything else can be done on the front end to prevent it. I do agree that such a filter is needed; however, it seems it would be difficult to create, given that the material is not supposed to be there in the first place.
Your third example includes fake accounts for people. I do wish there were a better way of ensuring that fake accounts are not created. However, limiting accounts to one per IP address may be difficult because of the number of people likely using the same connection (e.g., four people in a household could only share one account). It is interesting that so many people work together to report fake accounts, as this level of self-oversight can make a large difference when considering the quality of content on an OC.
Regarding your unwritten rules, I think that it is interesting that you included “Do not share personal information with anyone.” I am wondering how many people start with this disposition, as opposed to learning it as they go. For example, some people may start with everything in their account being open before learning the specific privacy features and implementing them as they learn them.
Great evaluation of FB! This is the 3rd post I've read about FB in the last hour. I'll try not to repeat comments, and much of what I said in the other two comments can be applied here as well.
One thing I was wondering about as I read this post concerned intellectual freedom issues. What people find offensive varies greatly, which might be why deleting/flagging comments is difficult on FB. I found it interesting that you included this aspect in your five rules: "Use freedom of speech and expression, unless it violates the Facebook rules." But where the line is crossed seems very subjective. What I consider hate speech someone else might not, so who gets the final say?
Nice job. I also wanted to focus my final on FB. Clearly there is a lot of meat to chew on there!!
I agree with Philip that people's tolerance levels vary dramatically. What I find offensive can be considered a learning tool or a resource for others.
And although the fake accounts can lead to a lot of confusion and may damage some reputations, I am not sure if limiting users to one Facebook account per IP address is the solution. What about folks who have two accounts: one for business and the other for friends/family? Also, what about college dorms?
I am also in disbelief that you found some porn on FB. I just found out that a friend uploaded an image of two men kissing as his profile picture and the next day FB sent an email deeming it inappropriate. What, what, what???
Very well documented post, and some excellent comments as well. However, your first unwritten rule, "Do not share personal information with anyone," seems to run counter to the entire purpose of Facebook! Understanding the consequences of sharing certain types of information, and how the site helps you manage self-determined risk via the privacy setting options, is I think more along the lines of what you're proposing.
I really liked the idea in your second example about nudity and pornography on Facebook. While it wouldn't be trivial to do, especially with Facebook's large number of users, other sites have implemented similar filters with varying degrees of success. I also agree that fake profiles on Facebook are a problem. Impersonation could be very damaging to a user if it were done with harmful intentions.
Great post! I like your unwritten rules, which remind me very much of some of the rules that libraries and other kinds of information centers have to stick to, like intellectual freedom to speak and express anything that is not offensive, remembering the importance of diversity, and keeping patrons' (or your own) privacy. So it makes me wonder: does that mean OCs are another kind of information center, like a library? Maybe...
I agree with Palabra Lau that there is still a lot of room for debate about what is appropriate and what is inappropriate. It is just like what I found on Wikipedia: someone thinks what he posts is true while others do not. When a discrepancy occurs, it takes time to determine whether to delete the post or punish the person who posted it. As for your friend's example, it shows that FB is serious about enforcing the policy and controlling the quality of the content, but on the other hand, it is questionable why the photo was deemed inappropriate. What is the standard for saying the photo is inappropriate? In what way? And decided by whom? I think similar cases also happen on other SNSs. I once posted a question on Yahoo Answers, and a few days later I got an email from Yahoo saying that I did not include the key words of my post in the title. Because this violates the rule of using accurate words in the title so that others can search for similar questions, I got the letter as a warning. However, my concern is that the words I regarded as key words might not be the same as those the Yahoo Answers moderators expected, and vice versa. I'm sure the issue still needs researching.
Sorry, but I accidentally deleted my whole comment when clicking on one of your images to get a better view. I was just wondering why Facebook never seems to have gone the Twitter route of somehow validating celebrity accounts, since the possibility of communicating directly with one's favorite actor, singer, etc. seems to be a pretty big draw for users. Or maybe, with the tons of regular fake accounts already floating around, including people using very obviously fake names, Facebook has just given up on trying to manage this issue?
Also, freedom of speech is always a complicated issue, and while your rule definitely sounds good, it might be very hard to manage. There will always be those who use freedom of speech as an excuse for hate speech, and at the other extreme those who believe that anything outside their own values and beliefs should not be protected by freedom of speech. Finding a middle ground is definitely not easy. Although your example seems to be an obvious case of trolls and hateful people getting together to be as annoying and rude as possible... removing a group like that certainly doesn't seem like a great loss.
Thank you for the feedback, everyone! Because you asked similar questions, I wanted to respond to them together.
I haven't witnessed Facebook administrators actually taking action to remove threats to harm others, but I have seen a lot of posts from users complaining that, no matter how many times they reported threats, most of the time they didn't get any reply from the administrators. Since Facebook doesn't explain what a credible threat is but says it reviews each user complaint, I don't know at what point removal is decided. However, I did find a case in which a poll asking whether President Obama should be assassinated was posted on Facebook, and the administrators removed it (http://criminal.lawyers.com/Criminal-Law-Basics/Cyber-Threats-Like-Obama-Facebook-Poll-are-Crimes.html).
Regarding the prevention of pornographic content, I think there should be some kind of software to help administrators, because there are not enough staff to monitor content, which is posted continuously. I think blocking sexual keywords would be a good way to prevent this kind of spam. Also, thank you for pointing out that many people share the same IP address, so limiting each IP address to creating only one account would be troublesome. In that case, maybe administrators should run a check only when two or more people are using Facebook from the same address at the same time, and then ask them for proof that they are different people. Finally, I think I should have explained more clearly what personal information means in the rule "do not share personal information with anyone." I would say that identifiable personal information, such as phone numbers, home addresses, or social security numbers, should not be posted on a profile.
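As a rough illustration of the keyword idea (again, only my own sketch, not anything Facebook actually runs; the word list and function name are made up), a new post could be screened against a list of blocked terms and held for administrator review if it matches:

```python
import re

# Hypothetical list of blocked terms; a real deployment would need a much
# larger, carefully maintained list plus human review of flagged posts.
BLOCKED_KEYWORDS = {"porn", "xxx", "nude"}


def is_probably_spam(post_text: str) -> bool:
    """Flag a post if it contains any blocked keyword as a whole word."""
    words = re.findall(r"[a-z]+", post_text.lower())
    return any(word in BLOCKED_KEYWORDS for word in words)


# Example: flagged posts would be held for administrator review rather than
# published immediately, since keyword matching alone produces false positives.
print(is_probably_spam("Check out my new xxx video site!"))  # True
print(is_probably_spam("Happy birthday!"))                   # False
```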
@Philip Kent Whitford: Very good point that judging whether messages are offensive or not is very subjective. That is why Facebook reminds users, in the policy, that reporting content doesn't guarantee that it will be removed. I think that unless content clearly violates the terms of the policy, such as death threats or pornography, it is not likely to be removed from the site.
@Nan: I think we consider libraries information centers because the primary purpose of a library is to provide information of all types and from all points of view. Also, a library teaches people how to get information. In the sense of providing information, online communities may also be considered information centers. However, unlike libraries, which strictly support intellectual freedom as their mission, online communities can create policies that specifically restrict certain issues from being discussed. Therefore, the mission is different from that of a library. I would say that not every online community should be considered an information center.
@Bug: Thank you for sharing your experience with other SNSs. I also think it's hard to determine what is appropriate or inappropriate and for what reasons. I wonder how administrators decide that content is inappropriate and worthy of deletion. Do they consider the number of complaints from users? Even on Flickr, I saw many people discussing whether certain images were art or pornography, and when I searched for "nude" or "naked," I felt some images were well filtered, whereas others I considered pornographic were not, which made me wonder about the standards for judging.
@Julia: I found that some celebrity accounts were validated by Facebook, but it seems the company doesn't take the fake account issue seriously, or maybe it can't manage it with the few staff it has monitoring and checking identities. I think Facebook should enforce this kind of policy consistently in order to prevent fake account issues. It seems Facebook relies only on user monitoring, such as the reporting of fake profiles, and unless it receives reports, it simply leaves the fake accounts alone.