Facebook and other social networks employ keyword-sniffing programs that monitor our private communications, such as chats and direct messages. When potentially criminal activity is detected, the communication is presented to human auditors, who decide whether or not to inform law enforcement. Is this a good thing or a bad thing?
This was one of the most thought-provoking questions I’ve ever explored before writing an editorial. The conspiracy-theory-loving, “leave my internet alone” part of me was absolutely outraged (albeit not a bit surprised) when I heard about this practice. The fatherly part of me felt pretty good about it. Then, the journalist part of me started asking the real question: is it conceivable to have a program that knows enough about the communication patterns of sexual predators, stalkers, or terrorists to be effective, without exposing private conversations that turn out to be false positives to human eyes?
Is my conversation going to be monitored if I say, “I’m going to blow up your house,” in a private chat? Can I be flagged for talking to the wrong people through social media? If so, is a permanent record of my conversations being stored somewhere and potentially accessed by hidden watchdogs?
The investigation got nowhere. Nobody’s saying much. There’s a well-publicized case of an alleged sexual predator who was captured in March for trying to meet up with a 13-year-old girl to have sex, but other than bits and pieces here and there, the reality behind the software that Facebook and other companies use is still a mystery. It’s not one of those things you’d need to keep hidden for the sake of effectiveness. Ideally, you wouldn’t even want the software to be effective – not because you don’t want it to work, but because you don’t want it to have to work.
Wouldn’t it make more sense to publicize it? Wouldn’t we want criminals to know that they can’t troll Facebook looking for young teens to exploit? Some would argue that you’d want to keep it relatively hidden so that criminals would have a false sense of security, but estimates suggest that a mere 10% of the actual criminal activity happening on these sites is being discovered. That’s not a good percentage. With odds like that and the massive popularity of social media, a criminal might even be willing to roll the dice.
Instead, publicizing it would probably make more sense. If you were able to discourage even half of the activity, criminals would have to seek out different venues. Criminals will find a way to be criminals – there’s no doubt about that – but the more doors we close on them, the more likely it is that they’ll be unable to fulfill their goals, unwilling to risk their lives and reputations. The goal in this case isn’t to catch them. It’s to stop them.
Then, there’s the issue of privacy. After much deliberation, I’ve come to the conclusion that if you’re going to be on social media, you have to assume that everything is public. Even if your privacy settings are strong, there are too many hackers, too many monitoring programs, and too much potential for mistakes to ever assume that anything is private online. In short, if it’s private, say it in person. Even the phone isn’t a safe venue.
The original question was whether or not our social media privacy was being breached by having software listening to our conversations. The real question is this: is social media privacy a realistic expectation anymore?
The answer: no. Perhaps it never was.