What is the best way to prevent advertiser TOS violations for porn on a site that offers free forum/community hosting?

  • Sites that allow anyone to build their own community are at increased risk from all sorts of spammers, but especially those in the adult space.  What can be done to proactively prevent suspect communities from serving ads when the volume of new sites created makes human moderation cost-prohibitive?  Are there any crawlers that mimic what advertisers and Google look for, so issues can be flagged before ads are dynamically served?

  • Answer:

    The most straightforward approach, which we've all seen many times, is to rely on user complaints as the first line of defense.  Users are a free resource that inherently scales along with the site; they tend to be as irritated as you are by porn spam, and hence motivated to report it.  (This generalizes to all forms of prohibited content, which may include hate speech, sexual harassment, stalking, etc.)  Minimizing the friction in reporting problematic content (e.g., a one-click "Report This Post" link) maximizes the feedback from users.  There is a tradeoff between friction and specificity, though: a flow that requires the user to indicate why he or she is reporting the post (anything from bullying to porn spam) provides more data for moderation, but is also likely to reduce the volume of user feedback.  Beyond raw traffic, the community's overall level of engagement makes a significant difference in how well this approach works.

The next question, of course, is how to handle the complaints, which can vary widely depending on the nature of the community, the content, and the resources available.  If you're operating on a shoestring and not terribly concerned about false positives, you could build in "one-strike" logic that automatically takes down every post that receives even one complaint.  More reasonable, in my opinion, would be to either require more (e.g., at least two complaints from two different users who have the attributes of being "legit") for automatic take-down, or have complaints send the content to a moderation queue.  If taking the latter approach, you can either be more conservative (take down automatically, review, restore if determined not to violate the site's terms) or more risk-tolerant (keep the content up, add it to the queue, and take it down only after mod review confirms it's prohibited content).
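A minimal sketch of the two-strike logic described above.  The helper names (`is_legit_reporter`, the specific "legit" heuristics, the takedown threshold) are illustrative assumptions, not part of any real platform's system:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    visible: bool = True
    reporters: set = field(default_factory=set)  # distinct legit users who complained

def is_legit_reporter(user: dict) -> bool:
    # Hypothetical heuristic: account age and prior activity suggest the
    # reporter is a real user rather than a throwaway account.
    return user.get("age_days", 0) >= 7 and user.get("post_count", 0) >= 5

def handle_report(post: Post, reporter: dict, mod_queue: list, threshold: int = 2) -> None:
    """Record a complaint; act only once enough distinct legit users report."""
    if not is_legit_reporter(reporter):
        return
    post.reporters.add(reporter["id"])  # a set, so duplicate reports don't stack
    if len(post.reporters) >= threshold:
        post.visible = False            # conservative variant: take down pending review
        mod_queue.append(post.post_id)  # mods can restore if it's a false positive
```

The risk-tolerant variant described above would simply leave `post.visible` as `True` and rely on the moderation queue alone.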
One counterintuitive point to keep in mind is that transparency about the system (e.g., "one big strike or two little strikes" gets a post taken down) is a bad thing, simply because the most malicious bad guys out there will use every piece of information to game the system.  This can frustrate good users to no end, but it really is in their best interest for the site not to hand a road map to the enemy.  This principle applies more broadly in fraud and abuse matters, covering everything from how the company rejects stolen credit cards to whether and how it screens new registrants against sex offender registries, identifies duplicate accounts, flags Nigerian money-transfer scam accounts, and on and on.

Returning to the question details, flagging content before serving ads is essentially a variation on this theme: if "strike one" is a user reporting porn spam, it might make sense to automatically flag that content NO-ADS but leave it up, and wait for strike two (if and when it occurs) to take the content down altogether or send it to the moderation queue.  As with everything else, there's a business-specific balance here: if you really need those ad impressions and are selling to advertisers who are comfortable with an uninhibited, rough-at-the-edges online culture, the priorities are different than if you have tons of unsold inventory and are courting large corporate advertisers with "family-friendly" brands.

Also relevant is what exactly you put in the ad agreement or insertion order.  Working at big sites in the past, I would never agree to categorical prohibitions on offensive content in our deals: with 200 million unruly pseudonymous users, you simply can't guarantee that some nasty stuff won't slip through and wind up on the same page as BigBrand's ad once in a while.  I'd wordsmith a little to add "best efforts" type language, or "knowingly," "materially," etc.  (This is one of those areas where having a good lawyer who knows the space can add value.)
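The strike-one/strike-two escalation might be sketched as a simple state mapping; the state names and the specific thresholds are made up for illustration:

```python
from enum import Enum

class AdStatus(Enum):
    ELIGIBLE = "eligible"        # no credible complaints yet
    NO_ADS = "no_ads"            # strike one: content stays up, but stop monetizing it
    TAKEN_DOWN = "taken_down"    # strike two: remove pending moderator review

def escalate(strikes: int) -> AdStatus:
    """Map the number of distinct credible porn-spam reports to an ad-serving state."""
    if strikes == 0:
        return AdStatus.ELIGIBLE
    if strikes == 1:
        return AdStatus.NO_ADS
    return AdStatus.TAKEN_DOWN
```

The ad server would then check `escalate(...) is AdStatus.ELIGIBLE` before filling a slot on that page, which is how "strike one" costs the spammer revenue without yet censoring the post.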
The good news here is that the IAB's standard terms and conditions (http://www.iab.net/guidelines/508676/tscs3, currently version 3.0) have a dedicated clause that acknowledges the realities of UGC sites and exempts them from the usual commitments to abide by "editorial adjacency guidelines."*

There's a place for algorithmic filtering, but I've never seen it done on a standalone, fully automated basis, only alongside complaint-based systems or as an input to a dedicated moderation queue.  The most obvious reason is the sheer number of false positives and false negatives that will get through any filter; it will never be zero, and the more aggressively you filter, the more users you'll anger by "censoring" their posts.  (They tend to be hyper-sensitive and vocal about these things, spreading the damage.)  An equally important reason is that every online community is unique, so filters have to be fine-tuned to match that community's standards.  To make up an example, "talking dirty" on your eHarmony profile page should trigger a filter, whereas the exact same language might be just fine on a MySpace profile.  Then there's the mode of communication: the same language in a private message with someone you've already gotten to know on eHarmony should be none of the site's business, but coming in a first communication from a stranger, it should ideally trip an abuse/spam filter.

There are many other variables, but the one common denominator that comes to mind is the handling of links.  Porn spam won't make the spammer any money without links, but again, there's a tradeoff in user frustration to the extent you crack down on users' ability to link to external content from within legitimate posts.  A notorious example is one major service's ban on links in direct messages, which caused widespread outcry among users but was apparently necessary to avoid DM spamming that would have rendered the service unpleasant if not unusable.  There are always tradeoffs made behind the scenes.
* http://www.iab.net/guidelines/508676/tscs3 contains this language: For any page on the Site that primarily consists of user-generated content, [Editorial Adjacency Guidelines] will not apply. Instead, Media Company will make commercially reasonable efforts to ensure that Ads are not placed adjacent to content that violates the Site’s terms of use. Advertiser’s and Agency’s sole remedy for Media Company’s breach of such obligation will be to submit written complaints to Media Company, which will review such complaints and remove user-generated content that Media Company, in its sole discretion, determines is objectionable or in violation of such Site’s terms of use.
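The filtering variables discussed above (community norms, mode of communication, link density) could be combined into a single pre-serve score.  The weights, term list, and threshold below are placeholder assumptions that would need per-community tuning, as the answer stresses:

```python
import re

LINK_RE = re.compile(r"https?://\S+")
FLAGGED_TERMS = {"viagra", "xxx"}  # placeholder list; real lists are kept private

def spam_score(text: str, *, first_contact: bool, strict_community: bool) -> float:
    """Heuristic score: links and flagged terms weigh more heavily in
    first-contact messages and in communities with stricter standards."""
    words = re.findall(r"[a-z]+", text.lower())
    score = 0.0
    score += 2.0 * len(LINK_RE.findall(text))              # links: the common denominator
    score += 3.0 * sum(w in FLAGGED_TERMS for w in words)  # flagged vocabulary
    if first_contact:
        score *= 1.5  # unsolicited first messages get less benefit of the doubt
    if strict_community:
        score *= 1.5  # e.g., a dating-site profile vs. a general social profile
    return score

def needs_review(text: str, **context) -> bool:
    """Route to the moderation queue rather than auto-removing (see above)."""
    return spam_score(text, **context) >= 4.0
```

Consistent with the answer, the output feeds a moderation queue or a complaint-weighting system rather than acting on its own; identical text scores differently depending on where and to whom it is posted.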

Antone Johnson at Quora
