Facebook discloses new details on removing terrorism content


"We want to find terrorist content immediately, before people in our community have seen it", Facebook said in a statement. Other questions the company said it plans to address include: "Is social media good for democracy?"

One such technique analyses text previously removed for praising or supporting a group such as IS, in order to work out text-based signals that a new post may be terrorist propaganda.
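The idea of learning text-based signals from previously removed posts can be sketched as a simple word-frequency classifier. This is a minimal illustration only; Facebook has not published its actual model, and the function names and toy training data below are hypothetical.

```python
from collections import Counter
import math

def train(labeled_posts):
    """Count word frequencies per class from (text, label) pairs.

    In the approach Facebook describes, the 'propaganda' class would be
    text previously removed for praising or supporting groups such as IS.
    """
    counts = {"propaganda": Counter(), "benign": Counter()}
    totals = {"propaganda": 0, "benign": 0}
    for text, label in labeled_posts:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def score(text, counts, totals):
    """Log-likelihood ratio with add-one smoothing.

    Positive scores lean toward 'propaganda', negative toward 'benign'.
    """
    vocab = set(counts["propaganda"]) | set(counts["benign"])
    llr = 0.0
    for w in text.lower().split():
        p = (counts["propaganda"][w] + 1) / (totals["propaganda"] + len(vocab))
        b = (counts["benign"][w] + 1) / (totals["benign"] + len(vocab))
        llr += math.log(p / b)
    return llr
```

In practice a score like this would only surface candidates for review or, in Fishman's "black-and-white cases", trigger automatic removal.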

"AI allows us to remove the black-and-white cases very, very quickly", said Brian Fishman, the lead policy manager for counterterrorism at Facebook. The first post addresses how the company responds to the spread of terrorism online.

The post comes after United Kingdom Prime Minister Theresa May announced her goals to challenge internet and tech companies to play a more active role in counterterrorism.

"This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too".

Facebook is also making greater use of artificial intelligence to find terrorist content that users attempt to post to the social network, Bickert and Fishman said. Scrutiny of extremist posts has intensified: the shooter behind Wednesday's attack on a congressional softball game had previously posted "vitriolic anti-Republican and anti-Trump viewpoints" on Facebook, according to the SITE Intelligence Group, which tracks extremists.

They revealed that the company has a team of more than 150 counterterrorism specialists, including academic experts, former prosecutors, former law enforcement agents and analysts, working exclusively or primarily on countering terrorism.


Facebook Inc (NASDAQ:FB) said it is focusing on "cutting edge techniques" to fight terrorist content from ISIS, Al Qaeda and their affiliates on its social networking site.

The team felt a need to put out the information now in light of recent attacks and added scrutiny on tech companies. "We want Facebook to be a hostile environment for terrorists".

The ability of so-called Islamic State to use technology to radicalise and recruit people has raised major questions for the large technology companies. Facebook, Twitter, Google and Microsoft said they would begin sharing unique digital fingerprints of flagged images and video, to keep them from resurfacing on different online platforms.
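The shared-fingerprint scheme the four companies described can be sketched as a common database that each platform checks uploads against. The sketch below uses an exact SHA-256 hash for simplicity; the real systems use perceptual hashes that also match re-encoded or slightly altered media, and all names here are illustrative.

```python
import hashlib

# Shared database of fingerprints of previously flagged images and video.
# (Hypothetical; in reality each company contributes to a joint database.)
shared_hash_db = set()

def fingerprint(media_bytes):
    """Compute a fingerprint for an uploaded image or video.

    SHA-256 only matches byte-identical files; production systems use
    perceptual hashing so near-duplicates are caught too.
    """
    return hashlib.sha256(media_bytes).hexdigest()

def flag_media(media_bytes):
    """Called when one platform removes terrorist imagery: share its hash."""
    shared_hash_db.add(fingerprint(media_bytes))

def is_known_terrorist_media(media_bytes):
    """Check an upload against fingerprints shared by other platforms."""
    return fingerprint(media_bytes) in shared_hash_db
```

Sharing hashes rather than the media itself means no platform has to redistribute the offending content in order to block it elsewhere.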

Facebook is also experimenting with systems to catch bad actors across its various properties, including WhatsApp and Instagram.

The same system, they wrote, could learn to identify Facebook users who associate with clusters of pages or groups that promote extremist content, or who return to the site again and again, creating fake accounts in order to spread such content online.
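Identifying users who associate with clusters of extremist pages or groups could, at its simplest, be a set-overlap check like the one below. This is a toy sketch: the threshold and data shapes are made up for illustration, and Facebook has not published how its system actually scores associations.

```python
def suspicious_accounts(memberships, flagged_groups, min_overlap=2):
    """Flag users whose memberships overlap with known extremist groups.

    `memberships` maps user -> set of group ids; `flagged_groups` is the
    set of groups already identified as promoting extremist content.
    `min_overlap` is a hypothetical threshold, not a published parameter.
    """
    return {
        user
        for user, groups in memberships.items()
        if len(groups & flagged_groups) >= min_overlap
    }
```

A real system would combine signals like this with the recidivism cues the post mentions, such as repeated returns under fresh fake accounts.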

This team of specialists has "significantly grown" over the past year, according to a Facebook blog post Thursday detailing its efforts to crack down on terrorists and their posts. In their post, Bickert and Fishman said encryption was essential for journalists, aid workers and human rights campaigners, as well as for keeping banking details and personal photos secure from hackers.