Amazon Bedrock has taken a significant step toward proactive cost management for generative AI deployments. A recent series of updates introduced a solution with a robust cost monitoring mechanism designed to enforce token usage limits in real time. Earlier installments, aimed at organizations looking to control their generative AI spending, covered initial strategies for token tracking and techniques for budget enforcement.
The latest update focuses on advanced cost monitoring strategies. The monitoring solution acts as a centralized mechanism that proactively limits generative AI usage so that expenses stay within established budgets. Highlights include customized tagging approaches that enable precise cost allocation and comprehensive reporting.
One of the most notable additions is invocation-level tagging, which attaches metadata to each API request and creates a useful audit trail for reviewing budget enforcement decisions and analyzing usage patterns across applications and teams. To support this approach, the solution's core AWS Step Functions workflow has been updated.
In addition, the API input structure has been extended to support custom tagging, introducing optional parameters that adapt to specific model configurations. The request now carries the model identifier, the request content, and a tags object that enables application-level tracking.
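As a reference, here is a minimal sketch of what such a request payload might look like. The field names (`modelId`, `body`, `tags`) and the tag keys are illustrative assumptions; the article does not publish the exact schema of the updated input structure.

```python
import json

# Illustrative request payload for the tagging-aware entry point described above.
# Field names and tag keys are assumptions, not the solution's published schema.
request_payload = {
    "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "body": {
        "prompt": "Summarize the attached claims report.",
        "max_tokens": 512,
    },
    "tags": {
        "application": "claims-summarizer",  # application-level tracking
        "department": "insurance-ops",       # cost-center attribution
        "team": "ml-platform",
    },
}

print(json.dumps(request_payload, indent=2))
```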
To support validation and tagging, a new step backed by an AWS Lambda function has been added to the workflow, ensuring that each request references an approved model and carries the tags needed for downstream analysis.
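A minimal sketch of such a validation-and-tagging Lambda handler follows, assuming the hypothetical payload fields shown earlier. The allowed model list and required tag keys are placeholders, not values defined by the solution.

```python
# Sketch of a validation-and-tagging Lambda step.
# ALLOWED_MODELS and REQUIRED_TAG_KEYS are placeholder values.
ALLOWED_MODELS = {"anthropic.claude-3-5-sonnet-20240620-v1:0"}
REQUIRED_TAG_KEYS = {"application", "department"}


def lambda_handler(event, context):
    model_id = event.get("modelId")
    tags = event.get("tags", {})

    # Reject requests for models that are not approved for this workflow.
    if model_id not in ALLOWED_MODELS:
        raise ValueError(f"Model '{model_id}' is not approved for this workflow")

    # Reject requests that are missing the tags needed for cost reporting.
    missing = REQUIRED_TAG_KEYS - tags.keys()
    if missing:
        raise ValueError(f"Request is missing required tags: {sorted(missing)}")

    # Pass the validated request downstream so later states
    # (model invocation, metric publishing) can reuse the tags.
    return event
```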
With custom metrics in Amazon CloudWatch, organizations can now monitor usage across dimensions such as model type and cost center, gaining detailed visibility into their AI consumption. The strategy includes publishing per-invocation metrics and retaining the monitored data for more granular analysis.
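For illustration, the sketch below publishes per-invocation token counts as custom CloudWatch metrics using the standard boto3 `put_metric_data` call. The namespace, metric names, and dimension keys are example choices rather than the solution's actual definitions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")


def publish_token_usage(model_id: str, department: str,
                        input_tokens: int, output_tokens: int) -> None:
    """Publish per-invocation token counts as custom CloudWatch metrics,
    dimensioned by model and cost center (illustrative names)."""
    dimensions = [
        {"Name": "ModelId", "Value": model_id},
        {"Name": "Department", "Value": department},
    ]
    cloudwatch.put_metric_data(
        Namespace="GenAI/BedrockUsage",  # example namespace
        MetricData=[
            {"MetricName": "InputTokens", "Dimensions": dimensions,
             "Value": input_tokens, "Unit": "Count"},
            {"MetricName": "OutputTokens", "Dimensions": dimensions,
             "Value": output_tokens, "Unit": "Count"},
        ],
    )
```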
Furthermore, Amazon Bedrock has introduced application inference profiles, which let organizations attach custom cost allocation tags and overcome earlier limitations in expense tracking. These profiles can be created with the AWS CLI or the API, making it straightforward to define tags such as department and team identifiers.
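As a sketch, an application inference profile with cost allocation tags can be created through the Bedrock `CreateInferenceProfile` API (shown here via boto3, which requires a recent SDK version). The profile name, model ARN, and tag values are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock")

# Create an application inference profile that copies an existing foundation
# model and attaches cost allocation tags. Name, ARN, and tags are examples.
response = bedrock.create_inference_profile(
    inferenceProfileName="claims-dept-claude-profile",
    modelSource={
        "copyFrom": "arn:aws:bedrock:us-east-1::foundation-model/"
                    "anthropic.claude-3-5-sonnet-20240620-v1:0"
    },
    tags=[
        {"key": "department", "value": "claims"},
        {"key": "team", "value": "ml-platform"},
    ],
)

print(response["inferenceProfileArn"])
```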
With tools such as AWS Cost Explorer, organizations can visualize and analyze their cloud spending, segmenting costs by service, tag, and custom dimensions. This gives a clear view of AI expenses and supports reports that break down Amazon Bedrock costs by business unit or specific project.
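For example, a Cost Explorer query along these lines groups Amazon Bedrock spend by a cost allocation tag. The tag key, date range, and service name string are assumptions, and the tag must first be activated as a cost allocation tag in the billing settings before it appears in results.

```python
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

# Example: monthly Amazon Bedrock spend broken down by the "department"
# cost allocation tag. Dates, tag key, and service name are placeholders.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-11-01", "End": "2024-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "SERVICE", "Values": ["Amazon Bedrock"]}},
    GroupBy=[{"Type": "TAG", "Key": "department"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]  # e.g. "department$claims"
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(amount):.2f}")
```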
In summary, the integration of real-time monitoring with comprehensive cost reporting provides organizations with the ability to proactively manage their AI resources, keeping innovation budgets under control and ensuring smooth project development.
via: MiMub (in Spanish)