Twitter has again brought its rules governing abusive behavior under review.
Earlier this year, Twitter updated its user rules to clamp down on revenge-porn postings; it is now revising its rules once more to curtail what it sees as abusive behavior.
In a blog post, the San Francisco-based social media company said that while it will always “embrace and encourage” diverse opinions, it will not “tolerate behavior intended to harass, intimidate, or use fear to silence another user’s voice.”
The changes come at a time when terrorist groups such as the Islamic State in Iraq and Syria (ISIS) have built a formidable presence on Twitter, using the service to disseminate their messages and communicate with followers. And despite existing measures to clamp down on online abuse and harassment (a problem Twitter itself has acknowledged), tech companies such as Twitter and Facebook have so far fallen short on this front.
Twitter’s abusive behavior policy already prohibits messages that threaten or promote terrorism and violence in general. But its rules now include new language defining what counts as abusive behavior:
“You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”

In addition to allowing users to block or mute abusive accounts, Twitter can ask offending users to delete tweets that violate the company’s rules. If users do not comply, the company can lock them out of their accounts altogether.