Custom Moderation Rules

A guide to rulesets, rules, and conditions, and how they can be used
Custom moderation rules (rulesets) play a central role in automated moderation on the Lasso Moderation platform. A ruleset is a customizable collection of rules, and every rule can be configured to automatically apply a moderation action to content and users.

Rules

Each rule can have one or more conditions (criteria). If a user or piece of content meets the rule’s conditions, your specified action will be automatically applied.
Image: An example of a rule targeting images containing nudity
A rule requires a few different inputs:
  • Name: This will be used in the dashboard to identify the rule.
  • Description: A short explanation of the rule. This is helpful to quickly understand how the rule works and what it is meant for. (optional)
  • Action: When all the rule’s conditions are matched, the selected action (allow, flag or ban/remove) will be applied.
  • Policy: Here you can categorize each rule under a specific policy. This is especially useful if you need to track actions taken under specific policy guidelines.
  • Conditions: A list of your rule’s custom conditions. The rule is applied when all of the conditions are met. Below you will find more information on what conditions can be used.
  • Enabled / Disabled: Option to enable or disable the rule. When disabled, the rule is turned off and no action will be taken.
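The inputs above can be pictured as a small data structure. The following is an illustrative sketch only (the class and field names are assumptions for demonstration, not Lasso's actual API): a rule holds its conditions, and its action is returned only when the rule is enabled and every condition matches.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    name: str
    action: str                # "allow", "flag", or "ban/remove"
    conditions: list           # predicates: item -> bool
    description: str = ""
    policy: Optional[str] = None
    enabled: bool = True

    def evaluate(self, item: dict) -> Optional[str]:
        """Return the action if the rule is enabled and ALL conditions match."""
        if not self.enabled:
            return None
        if all(cond(item) for cond in self.conditions):
            return self.action
        return None

# A hypothetical rule that flags messages longer than 300 characters.
rule = Rule(
    name="Long message spam",
    action="flag",
    conditions=[lambda item: len(item.get("text", "")) > 300],
)
print(rule.evaluate({"text": "x" * 301}))  # flag
print(rule.evaluate({"text": "hi"}))       # None
```

Disabling the rule (setting `enabled` to `False`) makes `evaluate` return `None` for everything, matching the "turned off, no action taken" behavior described above.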

Conditions

A rule can have one or more conditions. Conditions can be combined to target specific content and/or users. For example, in the case of potential spam, you can specify users that have signed up in the last 10 minutes and have posted more than 20 messages.
Image: Example of a set of conditions targeting recently signed-up users that have sent over 20 flagged or removed messages
There are two groups of condition types. One is for rules targeting users, and the other is for rules targeting content.
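The spam example above (users who signed up in the last 10 minutes and posted more than 20 messages) could be sketched as two predicates combined with AND semantics. The field names `signed_up_at` and `message_count` are assumptions for illustration, not Lasso's actual schema.

```python
from datetime import datetime, timedelta, timezone

def signed_up_within(minutes):
    """Condition: user signed up within the last `minutes` minutes."""
    def cond(user):
        cutoff = datetime.now(timezone.utc) - timedelta(minutes=minutes)
        return user["signed_up_at"] >= cutoff
    return cond

def message_count_over(threshold):
    """Condition: user has posted more than `threshold` messages."""
    return lambda user: user["message_count"] > threshold

conditions = [signed_up_within(10), message_count_over(20)]

def matches(user):
    # The rule fires only when every condition is met.
    return all(cond(user) for cond in conditions)

new_spammer = {
    "signed_up_at": datetime.now(timezone.utc) - timedelta(minutes=5),
    "message_count": 25,
}
old_user = {
    "signed_up_at": datetime.now(timezone.utc) - timedelta(days=30),
    "message_count": 25,
}
print(matches(new_spammer))  # True
print(matches(old_user))     # False
```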

User Condition Types

The following condition types can be used in Rules that target users:
  • Name: The name of a user. For example, you can flag users that use inappropriate or offensive language in their names.
  • Status: The moderation status of a user (allowed, flagged or banned). This condition type works best in combination with other condition types. For example, you may want to ban users that are currently flagged and have signed up with a temporary email address domain.
  • Signed up: The timeframe in which a user has signed up. For example, you can target all users that have signed up in the last 10 minutes.
  • User’s type: The type of user in the customer’s product (normal user, moderator or admin). For example, you can allow exceptions for users that are admins or moderators.
  • Content volume: The volume of content posted by a user. You can target by the number of messages and/or images sent by a user and filter based on the moderation status of this content. For example, you can flag a user that has had more than 5 messages flagged or removed.
  • User’s country: The country of a user. For example, if there is an event located in the United States, you can flag users from outside of the United States for this specific event.
  • Signup method: The method the user has used when signing up (Google, Facebook, email, phone number etc.). For example, users that sign up with SSO methods like Google or Facebook are often less likely to be spammers, so you can opt to allow these users without any review.
  • Email domain: The user’s email domain (@gmail.com, @somecompany.com, etc.). For example, you can flag users that have signed up with a temporary email domain like @tempmail.com.
  • Profile image: If the user has a profile image or not. This condition is best used in combination with other conditions. For example, users without a profile image may be more likely to be fake users.
  • # of times reported: The number of times a user has been reported. For example, if the user has been reported by 3 or more unique users, you can automatically ban the reported user.
  • # of reports made: The number of reports a certain user has made. For example, you can flag users that have made over 10 reports. Flagging these users helps guard against malicious users that report other users without reason.
  • Reported by: The user type that the report is made by (normal user, moderator or admin). A report made by a moderator often weighs more than a report from a normal user. For example, if a moderator reports a user in your app, the report is sent to Lasso, where the user can be automatically flagged or banned because the report comes from a moderator.
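As a concrete illustration of combining user conditions, the "Status" example above (ban users that are currently flagged and signed up with a temporary email domain) could look like the sketch below. The domain list and field names are assumptions chosen for demonstration.

```python
# Hypothetical list of known temporary-email domains.
TEMP_DOMAINS = {"tempmail.com", "mailinator.com"}

def email_domain(user):
    """Extract the domain part of the user's email address."""
    return user["email"].rsplit("@", 1)[-1].lower()

def should_ban(user):
    # Both conditions must hold: the user is already flagged AND
    # their email domain is on the temporary-domain list.
    return user["status"] == "flagged" and email_domain(user) in TEMP_DOMAINS

print(should_ban({"status": "flagged", "email": "a@tempmail.com"}))  # True
print(should_ban({"status": "allowed", "email": "a@tempmail.com"}))  # False
print(should_ban({"status": "flagged", "email": "a@gmail.com"}))     # False
```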

Content Condition Types

The following condition types can be used in Rules that target content:
  • Text: The text within the message or image (Lasso automatically extracts text from images). This condition can be used to detect certain words, URLs or phone numbers in the text. For example, you can flag content that contains a word from a specified word list, or flag all content that contains a link, URL or phone number.
  • Text length: The number of characters in the text. For example, if you have problems with people spamming large messages, you can flag text with more than 300 characters.
  • Content type: The type of content (message or image). This condition is best used in combination with other conditions. For example, you can flag all images that are sent by users created within the last day.
  • Project: The project that the content is a part of. This condition can be used if you want to have slightly different rules for a specific project. For example, you can have stricter rules for projects that involve children.
  • Topic: The topic that the content is a part of. This condition can be used if you want to have slightly different rules for a specific topic. For example, you can have stricter rules for topics that involve children.
  • Image Labels: The moderation labels (nudity, violence, drugs, etc.) found in an image with a certain likelihood. Lasso’s AI labels images automatically. For example, you can remove all images that potentially contain violence, or you can flag images that are 50% likely to contain nudity.
  • Similar Images: How similar the image is to previously removed images. You can match images that are exactly the same, very similar, or somewhat similar. For example, if a moderator removes an image and other users repost it, this rule can automatically remove or flag the similar copies.
  • # of times reported: The number of times content has been reported. For example, you can automatically flag messages that have been reported, or you can automatically remove messages that have been reported by 3 or more unique users.
  • Reported by: The user type that the report is made by (normal user, moderator or admin). A report made by a moderator often weighs more than a report from a normal user. For example, if a moderator reports a piece of content in your app, the report is sent to Lasso, where the content can be automatically flagged or removed because the report comes from a moderator.
  • User’s status: The status of the user sending the content. For example, flag all content sent by a user that is currently flagged.
  • User signed up: The timeframe in which a user sending the content has signed up. For example, you can target all content sent by users who have signed up in the last 10 minutes.
  • User’s type: The type of user sending the content (normal user, moderator or admin). For example, you can allow exceptions for users that are admins or moderators.
  • User’s country: The country of the user sending the content. For example, if there is an event located in the United States, you can flag all content coming from users from outside of the United States for this specific event.
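The "Text" condition described above (flag content containing a listed word or any link) could be sketched roughly as follows. The word list and URL pattern are illustrative assumptions, not Lasso's actual detection logic.

```python
import re

# Hypothetical word list and a simple URL pattern for demonstration.
WORD_LIST = {"badword", "spamoffer"}
URL_RE = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)

def text_condition(text):
    """True if the text contains a listed word or any link/URL."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & WORD_LIST) or bool(URL_RE.search(text))

print(text_condition("Check out www.example.com now"))  # True
print(text_condition("this is a badword, sadly"))       # True
print(text_condition("hello friends"))                  # False
```

A real deployment would use more robust tokenization and URL detection; the point here is only that a single condition can bundle several checks over the extracted text.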