Received Feb. 24th, 2021

Dear Ms. Nimens,

I am contacting you in response to your letters to Mr. Caron, Mr. Antoon and Mr. Tassillo.

Due to the extremely limited details available regarding the allegations in the class action lawsuit, Mr. Antoon and Mr. Tassillo are unable to comment on them at this time. However, please find below a statement in response to the questions included in your letters. If you would like to send additional written questions, we would be happy to answer them as best we can.

MindGeek has zero tolerance for non-consensual content, child sexual abuse material (CSAM), and any other content that lacks the consent of all parties depicted. We were encouraged to learn that the Canadian Centre for Child Protection rated our platform, Pornhub, as being as good as, if not better than, its peers in the adult and social media industries.

MindGeek has a comprehensive moderation process to prevent illegal content from making it onto our platform. We use a team of human moderators to review each upload. Our moderators are trained to recognize non-consensual content and child sexual abuse material (CSAM), as well as other content that violates our terms of service. We do not allow the public to access content until it has first been reviewed by our team of moderators.

Our human moderators are supported by cutting-edge tools including CSAI Match, Content Safety API, PhotoDNA, and Vobile. In mid-2020, after these tools were implemented, we took the extraordinary step of running them retroactively on our existing content. At the time, this required us to reprocess over 20 million pieces of content.

When content is flagged or when someone fills out the content removal form, the content is re-reviewed by our human moderators. We also audit the live content on our website to ensure our processes are working.

Once identified and removed, all CSAM or non-consensual content is fingerprinted using Pornhub’s fingerprinting tools, and the digital fingerprint—not the content itself—is transmitted to Microsoft, YouTube, and Vobile to be added to their databases as appropriate. This process prevents the same content from being uploaded to our platforms and any third-party platforms that use these technologies. Pornhub reports every instance of CSAM to the National Center for Missing and Exploited Children (NCMEC), which refers reported CSAM to the appropriate law enforcement agencies around the world.

Over the past year, MindGeek has created and implemented comprehensive measures for verification, moderation, and detection that will ensure Pornhub is the safest platform online. We are continually pushing to be at the forefront of combating and eradicating illegal content. Most recently, we have taken the unprecedented step of banning content from unverified uploaders, an industry first among tech and social media platforms.

That said, we are continually improving our processes. Every online platform has a responsibility to join this fight, and it requires collective action and constant vigilance. We are committed to this fight and will continue to work with law enforcement globally to stamp out CSAM and non-consensual material on our platforms and on the internet as a whole.

Best,

Anthony Penhale

Chief Legal Officer


Received March 28th, 2021

Riley,

MindGeek is more aggressive in its moderation than many other popular platforms—both within and outside of the adult space. MindGeek does not allow content onto its websites until it has been run through a range of tools designed to flag unwanted content (including CSAM and non-consensual imagery) and has also been reviewed by a trained human moderator. We are the only major social media platform—adult or mainstream—that moderates content before it goes live (other major social media platforms moderate content only after it has been publicly posted). MindGeek has a substantial team of moderators who are trained to recognize non-consensual content and CSAM, as well as other content that violates our terms of service, and who review each and every upload before it is accessible on our platforms.

Our human moderators are supported by cutting-edge tools including CSAI Match, Content Safety API, PhotoDNA, Vobile, and our own proprietary tool, SafeGuard. (Further information on MindGeek’s moderation process is included below.) Importantly, MindGeek is also the first and, as far as we’re aware, the only social media platform—adult or mainstream—that does not permit unverified users to upload content. In other words, only verified users, meaning professional studios and verified semi-professionals or amateurs whose personal identity and date of birth have been confirmed, may upload content. Personal user verification makes it easy to identify criminals who upload CSAM, non-consensual material, or other unlawful content to our platforms, and it strengthens our ability to notify the relevant authorities as soon as attempted criminal activity is identified. This policy is a fundamental shift in the way social platforms operate on the internet. We hope and expect that the entire social media industry will follow our lead.

We will not stop working to lead the technology platforms, both adult and mainstream, in eradicating unlawful content online. Every online platform should join this fight, and it requires collective action and constant vigilance. MindGeek and all 1,800 of its employees are committed to this fight.

Background on Moderation, Flagging and Removal Process:

MindGeek has a comprehensive moderation process to prevent illegal content from making it onto our platform. As mentioned above, MindGeek employees review every single photograph and video before it is authorized for public or member access on our platforms. We have trained our moderators to recognize content that violates our terms of service or that may be illegal, including signs of non-consensual or underage content. All offending content caught by our first-level review team is referred to our second-level review team for confirmation.

Once identified and removed, all CSAM or non-consensual content is fingerprinted using Pornhub’s fingerprinting tools, and the digital fingerprint—not the content itself—is transmitted to Microsoft, YouTube, and Vobile to be added to their databases as appropriate. This process prevents the same content from being uploaded to our platforms and any third-party platforms that use these technologies.

MindGeek also uses artificial intelligence and machine learning tools to detect content that may contain a person under 18 years of age. Unlike fingerprinting tools, which rely on an existing database of known CSAM, our predictive tools can detect and flag previously unreported CSAM. We employ two separate AI tools to assess incoming content: Google's Content Safety API and a machine learning tool developed by a third party (and trained in part using MindGeek's copyrighted content).

In addition to these third-party tools, MindGeek has developed its own tool, called SafeGuard, to combat CSAM and non-consensual imagery. SafeGuard relies on artificial intelligence and machine learning to identify known CSAM and non-consensual imagery, and it does so in a manner that is harder to evade than other market solutions. We will offer SafeGuard, for free, to our non-adult peers, including Facebook, YouTube, and Reddit. We are optimistic that all major social media platforms will implement SafeGuard and contribute to its fingerprint database. Such cooperation would be a major step toward limiting the spread of non-consensual material on the internet.

MindGeek has zero tolerance for non-consensual content, child sexual abuse material (CSAM), and any other content that lacks the consent of all parties depicted. Pornhub reports every instance of CSAM to the National Center for Missing and Exploited Children (NCMEC), which refers reported CSAM to the appropriate law enforcement agencies around the world, including the RCMP.

Regards,

Anthony Penhale

Chief Legal Officer