NIST published a report on preventing bias in artificial intelligence

On 15 March 2022, the National Institute of Standards and Technology published a report entitled Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.

The document is a practical guide to preventing bias when designing, implementing and using artificial intelligence. The authors point out that developing an objective AI system requires more than statistical and technical methods for preventing bias. In their view, the matter should be approached from a broader perspective, one that also takes into account human biases as well as social and systemic biases.

The report includes a range of tips on managing, testing and verifying AI systems. The authors suggest, among other things, that those responsible for developing AI systems should check whether the data sets they use are appropriate for the social circumstances in which a given AI system will be deployed. The report also offers suggestions on accounting for human factors that may contribute to bias, such as social biases.

Source: https://www.nist.gov/publications/towards-standard-identifying-and-managing-bias-artificial-intelligence