Video Moderation

This documentation outlines the video moderation capabilities available in Lasso Moderation. Video moderation helps ensure that videos uploaded to your platform comply with your guidelines and community standards. The moderation process analyzes videos for inappropriate or harmful content using a set of labels and generates screenshots for further image analysis.

Video Labels

Lasso Moderation applies a set of predefined labels to categorize and detect content within videos. These labels help identify a wide range of content, from adult material to potentially harmful or offensive content. Below is a list of available labels:

  • Adult Content: Detects explicit adult content.

  • Suggestive Content: Identifies content that is suggestive in nature but not explicitly adult.

  • Medical: Identifies medical procedures or related content.

  • Violence: Detects violent behavior or imagery.

  • Over 18: Flags content meant for viewers aged 18 and over.

  • Adult Toys: Detects the presence of adult toys.

  • Firearms: Identifies guns and other firearms in the video.

  • Knives: Detects the presence of knives.

  • Violent Knives: Identifies knives being used in a violent manner.

  • Alcohol: Detects alcohol or alcoholic beverages.

  • Drinks: Identifies non-alcoholic beverages.

  • Smoking and Tobacco: Detects smoking or tobacco-related products.

  • Marijuana: Identifies marijuana-related content.

  • Pills: Detects pills or other pharmaceutical products.

  • Recreational Pills: Identifies recreational drugs in pill form.

  • Confederate Flag: Detects imagery of the Confederate flag.

  • Pepe Frog: Identifies the Pepe the Frog meme, often associated with specific ideological contexts.

  • Nazi Swastikas: Detects Nazi symbolism, such as swastikas.

  • Meme: Identifies memes present within the video.

  • Face Filter: Detects the use of face filters in the video.

  • Toxic Text: Flags text within the video that contains harmful or toxic language.

  • Severe Toxic Text: Detects text with severe levels of toxicity.

  • Obscene Text: Identifies obscene language in the video.

  • Insult Text: Detects insulting text directed at individuals or groups.

  • Identity Hate Text: Identifies hate speech targeted at specific identities.

  • Threat Text: Detects threatening language within the video.

  • Sexual Explicit Text: Identifies sexually explicit language.

  • Children: Detects the presence of children in the video.

  • Toys: Identifies toys within the video content.

These labels provide robust content categorization, making it easier to moderate videos effectively according to your platform's rules and policies.
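As a sketch of how a platform might act on these labels, the snippet below maps high-confidence label hits to a moderation decision. The response shape (a list of label/confidence pairs), the threshold, and the label groupings are illustrative assumptions, not part of the Lasso Moderation API.

```python
# Hypothetical sketch: acting on labels returned for a video.
# Which labels block vs. queue for review is an assumption chosen
# for illustration; real platforms would configure this per policy.
BLOCK_LABELS = {"Adult Content", "Violence", "Nazi Swastikas", "Severe Toxic Text"}
REVIEW_LABELS = {"Suggestive Content", "Firearms", "Toxic Text", "Alcohol"}

def decide(labels, threshold=0.8):
    """Return 'block', 'review', or 'allow' for a list of (label, confidence) pairs."""
    # Keep only labels detected with confidence at or above the threshold.
    hits = {label for label, confidence in labels if confidence >= threshold}
    if hits & BLOCK_LABELS:
        return "block"
    if hits & REVIEW_LABELS:
        return "review"
    return "allow"

# Example: one high-confidence "Violence" hit is enough to block the video.
print(decide([("Violence", 0.93), ("Drinks", 0.55)]))  # -> block
```

Separating "block" from "review" labels lets clear-cut violations be removed automatically while borderline content goes to a human moderator.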

Screenshots

Lasso Moderation has the capability to automatically generate screenshots from videos at various intervals. These screenshots can then be analyzed using the same Image Moderation techniques that are applied to still images. This allows for deeper content inspection and ensures that inappropriate or harmful content within the video frames can be flagged.

For detailed information on image moderation techniques used on screenshots, refer to the Image Moderation section.
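To illustrate interval-based sampling, the sketch below computes the offsets at which frames would be captured from a video of a given duration. The fixed interval is an assumption for illustration; Lasso Moderation's actual sampling strategy may differ.

```python
# Hypothetical sketch: timestamps at which screenshots are taken from a video.
# The 5-second default interval is an assumed value, not a documented setting.
def screenshot_timestamps(duration_seconds, interval_seconds=5.0):
    """Return capture offsets (in seconds) from the start of the video."""
    if duration_seconds <= 0 or interval_seconds <= 0:
        return []
    timestamps = []
    t = 0.0
    while t < duration_seconds:
        timestamps.append(round(t, 3))
        t += interval_seconds
    return timestamps

# A 12-second clip sampled every 5 seconds yields frames at 0s, 5s, and 10s.
print(screenshot_timestamps(12))  # -> [0.0, 5.0, 10.0]
```

Each captured frame can then be run through the same image moderation pipeline described in the Image Moderation section.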
