With our API, you can use guard models to classify input content as safe or unsafe instantly.
The following guard models are available:

- meta-llama/Meta-Llama-Guard-3-8B
- Meta-Llama/Llama-Guard-7b
- meta-llama/Llama-Guard-3-11B-Vision-Turbo
- meta-llama/LlamaGuard-2-8b

Llama Guard is a family of content-safety models developed by Meta. Given textual input, a guard model determines whether the content complies with established guidelines, identifying harmful, toxic, or otherwise undesirable material. Its output is a safety assessment that helps platforms and developers maintain a safe environment for users.

Reference: https://docs.aimlapi.com/api-overview/guard-models
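As a minimal sketch of how a classification call might be assembled: guard models are typically invoked through an OpenAI-compatible chat endpoint, with the text to moderate sent as a user message. The endpoint URL, payload fields, and the `build_guard_request` helper below are assumptions for illustration, not confirmed details of this API.

```python
import json

# Assumed endpoint for an OpenAI-compatible chat completions API (hypothetical).
API_URL = "https://api.aimlapi.com/v1/chat/completions"

def build_guard_request(user_text: str,
                        model: str = "meta-llama/Meta-Llama-Guard-3-8B") -> dict:
    """Build a request payload asking a guard model to classify input text.

    The guard model is expected to reply with a safety verdict
    (e.g. "safe" or "unsafe" plus violated categories).
    """
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": user_text},
        ],
    }

payload = build_guard_request("How do I reset my password?")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the endpoint with your API key in an `Authorization: Bearer <key>` header; the verdict is read from the returned assistant message.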