A new controversy related to Facebook and privacy emerged after it was discovered that the network's outsourced content moderators can see user information that was previously implied to be hidden from view.
Recently, Gawker exposed how Facebook handles content that is questionable, offensive, or in breach of its terms of service.
The process is not internal, as many users probably assume, but is handled by outsourced workers. The social network giant allegedly pays these moderators $1 an hour to scan offensive content reported by users.
Moderators are outsourced workers
Facebook hires moderators through oDesk, and these workers are responsible for examining flagged content and then deleting it, ignoring it, or escalating the flag; if the latter choice is made, the content goes back to California, where a Facebook employee takes action.
Gawker's article highlighted some of the mystery behind the network's 'policing' and outlined how Facebook's moderation process works. The information was shared by a former moderator who was discontented and felt Facebook was "exploiting the third world."
In Facebook's defense, for years various companies have farmed out this sort of task to outsourced workers, and in some instances, unpaid volunteers. It is not an uncommon practice in the digital world. However, presumably most users would expect a level of privacy and diligence in safeguarding user information.
It's like "looking at a friend's" page
New information has surfaced suggesting that Facebook hasn't been, perhaps not surprisingly, protecting user privacy as much as the network has implied. Facebook has a long history of privacy-centric controversies.
According to The Telegraph, Facebook said in response to the Gawker piece, “No user information beyond the content in question and the source of the report is shared.”
It turns out that may not mean exactly what it sounds like it does. Apparently, it depends on your definition of "content."
Moderators apparently see information beyond the questionable, often disturbing, content that is routinely reported by Facebook users. According to one former moderator, he was able to see the name of the user who uploaded the content reported as offensive, the subject of the image or the person tagged in the photo, and the person who did the reporting. Reportedly, there are no security measures in place to prevent moderators from capturing screenshots or from looking up additional information about a user online, as one former moderator admitted to doing.
Amie Derkaoui, 21, of Morocco, showed The Telegraph screenshots of exactly what moderators are able to see when they evaluate flagged content. Derkaoui claims moderators could take screenshots if they chose, and said what he saw amounted to a great deal of personal information, describing it as like "looking at a friend’s Facebook page."
Derkaoui also said he was not "explicitly told" the oDesk client he was working for was Facebook.
Facebook defines 'content'
Facebook responded to The Telegraph by saying, “On Facebook, the picture alone is not the content. In evaluating potential violations of our rules it is necessary to consider who was tagged and by whom, as well as additional content such as comments…Everything displayed is to give content reviewers the necessary information to make the right, accurate decision.”
In a separate Telegraph report, the privacy issues are further highlighted. The publication notes no real vetting is done for the people hired, as individuals work from home and do not seem to be subjected to criminal background checks.
Yet they are able to see information that many users might place behind Facebook's privacy settings. And there is no way to prevent moderators from taking these images and republishing them on the web. As security specialist Graham Cluley told The Telegraph, "By sharing information about a Facebook account holder, there is obviously the potential for abuse and blackmail."