Facebook removed 25 million pieces of content identified as “spam” and 1.8 million pieces containing “adult nudity and sexual activity”, reports Neha Alawadhi.
Facebook took action against 2.5 million pieces of “violent and graphic content” in the country between May 15 and June 15, of which 99.9 per cent of cases were dealt with proactively, the social media giant said in its monthly compliance report released on Friday.
In total, the firm “actioned” over 30 million content pieces across 10 violation categories during this period.
The report was mandated by the new Information Technology (Guidelines for Intermediaries and Digital Media Ethics Code) Rules, 2021.
It also provided details of the content actioned on Instagram.
Facebook had earlier said this would be an interim report.
While Facebook removed 25 million pieces of content identified as “spam”, it took down 1.8 million pieces of content containing “adult nudity and sexual activity”, of which 99.6 per cent was done proactively.
Action was also taken against 589,000 pieces of suicide and self-injury-related content.
On Instagram, the largest category of content actioned was suicide and self-injury (699,000 pieces), with a proactive rate of 99.8 per cent.
This was followed by 668,000 pieces of “violent and graphic content”, while “adult nudity and sexual activity” led to action on 490,000 pieces of content.
Facebook defined “content actioned” as posts, photos, videos or comments it takes action on for going against its standards.
“This metric shows the scale of our enforcement activity. Taking action could include removing a piece of content from Facebook or Instagram or covering photos or videos that may be disturbing to some audiences with a warning,” it said.
It described the proactive rate as the percentage of all content or accounts acted on that it found and flagged before users reported them to Facebook or Instagram.
“We use this metric as an indicator of how effectively we detect violations. The rate at which we can proactively detect potentially violating content is high for some violations, meaning we find and flag most content before users do. This is especially true where we have been able to build machine learning technology that automatically identifies content that might violate our standards,” it said.
Facebook will publish its next report on July 15, containing details of user complaints received and the action taken on them.
“We expect to publish subsequent editions of the report with a lag of 30-45 days after the reporting period to allow sufficient time for data collection and validation. We will continue to bring more transparency to our work and include more information about our efforts in future reports,” it said.
Google was the first to publish a transparency report in accordance with the new IT rules on June 30.
That report covered complaints received and actioned between April 1 and 30.
Google said there would be a two-month lag for reporting to allow sufficient time for data processing and validation.
The total number of complaints received by Google in the reported period was 27,762, of which 96.2 per cent were related to copyright.
The number of removal actions taken by Google based on these complaints was 59,350.
The IT rules, notified on February 25, ask significant social media intermediaries, or those with over 5 million users, to “publish periodic compliance report every month mentioning the details of complaints received and action taken thereon, and the number of specific communication links or parts of information that the intermediary has removed or disabled access to in pursuance of any proactive monitoring conducted by using automated tools or any other relevant information as may be specified”.