Amazon has announced the general availability of image content filters in Amazon Bedrock Guardrails, a tool for moderating both text and image content in generative AI applications. Moderation was previously limited to text; the new capability extends content control to images, making it easier to manage both the inputs users supply and the responses models generate.
Tero Hottinen, Vice President of Strategic Partnerships at KONE, welcomed the launch, noting that Amazon Bedrock Guardrails could play a fundamental role in protecting generative AI applications, especially by verifying relevance and context. KONE plans to integrate product design diagrams and manuals into its applications, using these safeguards to improve the diagnosis and analysis of multimodal content.
With Amazon Bedrock Guardrails, users can configure safeguards that block harmful inputs and outputs. The tool includes six distinct policies, ranging from content filters to contextual checks that detect model hallucinations, and blocks up to 88% of harmful multimodal content. Developers can now combine image and text moderation and adjust filter strength from low to high.
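To make the configuration concrete, the following is a minimal sketch using boto3. The guardrail name, description, blocked-content messages, and region are illustrative placeholders, and the exact filter and modality fields should be verified against the current Bedrock Guardrails API reference.

```python
import boto3

# Control-plane client for creating and managing guardrails.
# The region is an assumption; choose one where image filters are available.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Create a guardrail whose content filters apply to both text and images.
# Filter strengths range from NONE to HIGH.
response = bedrock.create_guardrail(
    name="multimodal-content-guardrail",          # placeholder name
    description="Blocks harmful text and image content",
    contentPolicyConfig={
        "filtersConfig": [
            {
                "type": "VIOLENCE",
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                "inputModalities": ["TEXT", "IMAGE"],
                "outputModalities": ["TEXT", "IMAGE"],
            },
            {
                "type": "HATE",
                "inputStrength": "MEDIUM",
                "outputStrength": "MEDIUM",
                "inputModalities": ["TEXT", "IMAGE"],
                "outputModalities": ["TEXT", "IMAGE"],
            },
        ]
    },
    blockedInputMessaging="This input was blocked by the content policy.",
    blockedOutputsMessaging="This response was blocked by the content policy.",
)

print(response["guardrailId"], response["version"])
```

The returned guardrail ID and version are what an application later references when it asks the service to evaluate content.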
The new feature is available in several AWS Regions, including US East, US West, Europe, and Asia Pacific, allowing organizations in sectors such as healthcare, manufacturing, financial services, media, and education to strengthen brand safety without building custom safeguards or performing error-prone manual evaluations.
To start using image content filters, users create a guardrail in the AWS Management Console and configure the desired filters. They can also call the standalone ApplyGuardrail API to validate content at any point in an application flow.
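As a sketch of that standalone flow, the snippet below sends mixed text and image content to an existing guardrail via boto3; the guardrail ID, version, file name, and region are placeholders.

```python
import boto3

# Runtime client for the standalone ApplyGuardrail API.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Load an image to validate (placeholder file name).
with open("product_diagram.png", "rb") as f:
    image_bytes = f.read()

# Evaluate mixed text and image content against the guardrail.
# source="INPUT" checks user-provided content; use "OUTPUT" for
# model-generated responses.
response = runtime.apply_guardrail(
    guardrailIdentifier="abcd1234efgh",  # placeholder guardrail ID
    guardrailVersion="1",
    source="INPUT",
    content=[
        {"text": {"text": "Describe the attached diagram."}},
        {"image": {"format": "png", "source": {"bytes": image_bytes}}},
    ],
)

# "GUARDRAIL_INTERVENED" means a filter matched; "NONE" means it passed.
print(response["action"])
```

Because ApplyGuardrail is independent of model invocation, the same check can be reused across application stages, for example on user uploads before they ever reach a model.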
This tool represents a significant step towards more responsible and safer use of artificial intelligence, enabling the creation of applications that align with each organization’s responsible AI policies.