These days, people use their social media networks to make their voices heard and to talk about their lives. Many of the status updates I see are people venting about what they're going through, and much of the time they're seeking advice or looking for solutions to their problems.
A new program recently launched to help some of these people. Dubbed the Durkheim Project, it uses artificial intelligence (AI) algorithms to identify words and phrases common among people who may be contemplating suicide, starting with war veterans. The project is targeting veterans because their suicide rates are very high after they return home from active duty. The project's apps track what participating users post and upload it to a medical database, where an algorithm flags posts that may point to suicide or self-harm.
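The article doesn't describe the algorithm itself, but at its simplest, this kind of screening amounts to matching posts against a watch list of risk phrases. Below is a minimal sketch of that idea in Python; the phrase list and the names (RISK_PHRASES, flag_post) are hypothetical illustrations, not the Durkheim Project's actual model, which would be clinically derived and far more sophisticated.

```python
# Minimal sketch of phrase-based risk flagging. Everything here is a
# hypothetical illustration, not the Durkheim Project's actual system.

# Hypothetical watch list; a real one would be clinically derived,
# far larger, and likely weighted by statistical models.
RISK_PHRASES = [
    "can't go on",
    "no way out",
    "end it all",
    "better off without me",
]

def flag_post(text, threshold=1):
    """Return True if the post contains at least `threshold` watch-list phrases."""
    normalized = text.lower()
    hits = sum(1 for phrase in RISK_PHRASES if phrase in normalized)
    return hits >= threshold

# A flagged post would presumably be queued for review by clinicians,
# not acted on automatically.
print(flag_post("Some days I feel like there's no way out."))  # True
```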
According to an article on Mashable, the app will monitor posts from Facebook, Twitter, and LinkedIn. Most people shouldn't have to worry about this, however, because none of their information will be leaked to third parties. As an extra precaution, a firewall will be used to ward off potential hackers.
“The study we’ve begun with our research partners will build a rich knowledge base that eventually could enable timely interventions by mental health professionals,” said Chris Poulin, principal investigator on the project, in the article. “Facebook’s capability for outreach is unparalleled.”
After first tracking veterans' social media activity in 2011, the Durkheim Project's founders discovered that more than 65 percent of users who committed suicide had certain keywords in common in their social network posts. Participation is opt-in, however, which may deter some people from downloading the app.
Overall, social media is not just for Internet marketing companies; it can actually help save lives. If this app is implemented, it could detect statuses that signal suicide risk so that doctors can intervene and potentially save a life. It does not hurt to track statuses that could point to self-harm, and it could be good for anyone who has thoughts of suicide.
Well, that's really debatable; sometimes it can do a lot of good for people, while at other times it can also do harm.
Yeah, I don't believe this will work. People who are suicidal simply will not use it. And privacy is another issue. At what point does the program alert the proper authorities? Will users have to agree to having their privacy breached when the program 'thinks' they are a danger to themselves? The point at which the Durkheim Project decides to flag danger is very debatable and could deter people from using the site.
I believe that if people want to take their lives, they will find a way. It's up to a person's family and friends to watch out for them, not a system.
Suicide raises another huge issue: what gives authorities the right to deny a person's will to die? Is it their place to say 'no'?