Data Privacy - The Responsibility of Facebook


The conversation about the privacy of user data has been, and continues to be, a controversial one, weighing personal privacy rights against the consequences of personal data being accessed by unauthorized parties. Facebook has found itself in the middle of this issue, having experienced what some sources describe as a breach involving Cambridge Analytica.

This brings up a few questions regarding privacy: 1) What responsibility does Facebook have to protect users’ information? 2) Will this change what information social channels ask of users (and share)? 3) How will Zuckerberg’s decision to testify affect other executives, such as the CEOs of Google and Twitter?


First, the responsibility Facebook faces is heavy, and it is the same responsibility faced by any other company that collects, stores, and processes personal information. The stakes are only getting higher, especially for companies that collect and process the data of users who live in European Union countries, as the General Data Protection Regulation (GDPR) takes effect in May of this year. Privacy policies are now a routine part of business operations, and Facebook is no exception.


We have all “clicked” through the terms of service and privacy policy pages of many company websites (or ignored them altogether), likely without even looking at what they say, relying on the company’s “altruistic” motives to protect our data. If you have ever actually read one, you will find it is set in very small print and full of language that can be difficult to decipher; hence, we rarely spend the time to read them. Facebook has a data policy posted online that outlines what data it collects and how it uses that information.


Remarkably, it is relatively straightforward and easy to read. Keep in mind that a “data policy” is not necessarily the same as “Terms of Service.” Reviewing Facebook’s terms of service, they seem realistic about the level of security a company can actually promise. For instance, the terms state, “We do our best to keep Facebook safe, but we cannot guarantee it. We need your help to keep Facebook safe…” Security is ultimately everyone’s responsibility. Ask a security professional and they will tell you that no network is 100% secure; however, there are many things that can be done to mitigate the risk of a network that is less than secure.


Second, will this added responsibility affect what information social media companies ask for and collect? This is a difficult question to answer, as each company handles its data differently and has different policies governing what it collects and what matters to its operations. Without being privy to the internal decisions companies make about their privacy policies and data collection, it is reasonable to say that this incident will have some effect. For companies to continue to grow and thrive, they need to be responsive to industry trends and to the security issues that affect their customers and their operations. It is not reasonable, however, to expect changes every time a breach is announced; there are simply too many breaches, and chasing each one would make these policies a swiftly moving target.


Third, and lastly, how will Zuckerberg’s decision to testify affect others, like the Google and Twitter CEOs? Watching the CNN interview with Mark Zuckerberg, one may notice that he never outright commits to testifying before Congress. He states, “I’m happy to, if it is the right thing to do.” He makes a good case that Congress deserves to hear from the person best qualified to answer questions on a given topic.


Again, if that person is Mark Zuckerberg, he says he will go. I agree with Mark: what is being asked should determine who testifies. He also mentions that Facebook already sends many people to testify based on this philosophy. As for other CEOs, it seems sensible that they would follow a similar approach and send the person best qualified to answer the questions posed. If that person is the CEO, then the CEO should take the opportunity. Regardless of who ends up testifying, companies need to take a hard look at the security policies and procedures they have in place to meet their responsibility as caretakers of personal information.


Whether this incident constitutes an actual security breach is also up for debate. Additional research would be needed to determine whether the negligence lies with Facebook or with Cambridge Analytica (CA). According to a CNN article, Facebook suspended CA’s account “over concerns the firm violated the social media site’s policies.” On the surface, this leads us to believe that CA is the responsible party. The same article notes that the “data in question was properly gathered a few years ago by psychology professor, Aleksandr Kogan, who said he was using it for academic purposes…But then the information was later transferred to third parties, including Cambridge Analytica. The transfer violated Facebook policies.”


This would lead us to believe that the breach actually occurred during the transfer of the data from Professor Kogan to CA. That said, it does not relieve CA of its responsibility to protect the information once it held it.


From a security perspective, however, the question remains: what in Facebook’s policies and procedures allowed Kogan and/or CA to exploit them and access the information of those 50 million users? All parties bear some responsibility in this situation: one needs to take a closer look at its security policies and procedures, and the others need to examine the ethics of how they honor their agreements with partners.


To sum up: technology is not inherently bad; it still requires someone to give it instructions, at least for now. The computer security industry has found that wherever new technology, software, or applications are created, there will be individuals or groups ready to exploit them for their own gain. We need to take greater care with the technology we create to minimize this kind of misuse. The moral responsibility continues to grow.


Dr. Robertson is actively engaged in research focusing on network security monitoring, intrusion analysis, and advanced persistent threats. He is comfortable working with the media and is available for interviews.

