Microsoft Azure AI Fundamentals AI-900 Practice Question
An organization is developing a generative AI application using Azure OpenAI Service to create automated customer support responses.
To align with responsible AI principles, what should the team prioritize to prevent potential issues related to biased or inappropriate content generation?
Increasing the dataset size to include more diverse languages
Enhancing the model to maximize response diversity
Implementing content filtering to monitor and remove harmful outputs
Optimizing the model for higher throughput
Implementing content filtering is the correct choice. Azure OpenAI Service includes configurable content filters that evaluate both prompts and generated completions, so inappropriate, biased, or offensive content can be identified and blocked before it reaches end users, directly supporting responsible AI deployment.
While enhancing response diversity and increasing dataset size can improve model performance, neither directly prevents harmful or biased content from being generated. Optimizing the model for higher throughput addresses performance rather than responsible content generation.
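To make the idea concrete, the sketch below shows one way an application could surface content-filter outcomes when calling Azure OpenAI through the openai Python SDK. The deployment name ("support-gpt"), environment variable names, and fallback message are illustrative assumptions, not part of the exam question; the filter behavior itself (blocked prompts raising a 400 error, blocked completions reported via finish_reason) follows Azure OpenAI's documented behavior.

```python
# Minimal sketch: handling Azure OpenAI content-filter outcomes in a support bot.
# Endpoint, deployment name, and fallback text are illustrative assumptions.
import os
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

FALLBACK = "Sorry, I can't help with that request. A human agent will follow up."

def support_reply(user_message: str) -> str:
    try:
        response = client.chat.completions.create(
            model="support-gpt",  # assumed deployment name
            messages=[
                {"role": "system", "content": "You are a polite customer support assistant."},
                {"role": "user", "content": user_message},
            ],
        )
    except BadRequestError:
        # The prompt itself was blocked by the input content filter.
        return FALLBACK

    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # The generated output was blocked or truncated by the output content filter.
        return FALLBACK
    return choice.message.content

print(support_reply("How do I reset my account password?"))
```

In practice, teams would also review and tune the filter severity thresholds for categories such as hate, violence, sexual, and self-harm content in the Azure OpenAI resource, rather than relying only on application-side handling.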
Describe features of generative AI workloads on Azure