
A Powerful Weapon: AI Helps YouTube Stop the Spread of Objectionable Videos

David Meyer | May 3, 2018

According to YouTube, more than 83% of the videos flagged for review and removed were flagged by machines rather than by human reviewers.


YouTube has for the first time revealed a report detailing how many videos it takes down due to violations of the platform’s policies—and it’s a really big number.

The Alphabet-owned site removed more than 8 million videos during the last quarter of 2017. But how did it decide to take them down? Machine learning technology played a big role.

According to YouTube, machines rather than humans flagged up more than 83% of the now-deleted videos for review. And more than three quarters of those videos were taken down before they got any views. The majority were spam or porn.

Machine learning—or AI, as the tech industry often likes to call it—involves training algorithms on data so that they become able to spot patterns and take actions by themselves, without human intervention. In this case, YouTube uses the technology to automatically spot objectionable content.
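To make the idea concrete, below is a minimal sketch in Python of the kind of supervised text classifier this paragraph describes, built with scikit-learn on a tiny invented set of video titles. It is an illustration only: YouTube's actual flagging systems are proprietary and operate on video, audio, and metadata signals at vastly larger scale.

    # Minimal sketch: train a model on labeled examples so it can flag
    # likely policy violations on its own. All data here is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training examples: 1 = violates policy (spam), 0 = fine.
    titles = [
        "FREE iPhone!!! click this link now",
        "Win $$$ instantly, no survey needed",
        "How to bake sourdough bread at home",
        "Lecture 3: Introduction to linear algebra",
    ]
    labels = [1, 1, 0, 0]

    # TF-IDF text features feeding a logistic-regression classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(titles, labels)

    # New uploads get scored automatically; in a real pipeline, high-scoring
    # items would be routed to human reviewers rather than deleted outright.
    for title in ["Claim your FREE prize now!!!", "Beginner piano tutorial"]:
        p = model.predict_proba([title])[0][1]
        print(f"{title!r} -> estimated probability of violation: {p:.2f}")

The report's figures reflect exactly this division of labor: machines handle the first-pass flagging at scale, while humans review what the machines surface.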

In a blogpost, the YouTube team said the use of the technique had a big effect.

Regarding videos containing “violent extremism,” which is banned on the platform, only 8% of such videos were flagged and removed in early 2017 before 10 views had taken place. After YouTube started using machine learning for flagging in the middle of the year, “more than half of the videos we remove for violent extremism have fewer than 10 views,” the team said.

However, the use of machine learning does raise serious questions about content being taken down that should stay up—some depictions of violent extremism, for example, may be satire or just reportage.

Several news organizations, such as Middle East Eye and Bellingcat, found late last year that YouTube was taking down videos they had shared, depicting war crimes in Syria. Bellingcat, which played a key citizen-journalist role in investigating the downing of Malaysia Airlines Flight 17 over Ukraine in 2014, found its entire channel suspended.

“With the massive volume of videos on our site, sometimes we make the wrong call. When it’s brought to our attention that a video or channel has been removed mistakenly, we act quickly to reinstate it,” YouTube said at the time.

In its Monday blog post, YouTube said its machine learning systems still require humans to review potential content policy violations, and the number of videos being flagged up using the technology has actually increased staffing requirements.

“Last year we committed to bringing the total number of people working to address violative content to 10,000 across Google by the end of 2018,” the team said. “At YouTube, we’ve staffed the majority of additional roles needed to reach our contribution to meeting that goal. We’ve also hired full-time specialists with expertise in violent extremism, counterterrorism, and human rights, and we’ve expanded regional expert teams.”
