
Accenture Working On 'AI Fairness Tool' As Part Of Larger Machine-Learning Ethics Package

The platform is a lightweight tool that integrates with the data science process without hindering the speed of innovation, Rumman Chowdhury, global responsible AI lead at Accenture, told CRN.

Accenture has built a platform designed to fight racial and gender bias in artificial intelligence programming, part of a larger toolset addressing machine-learning ethics.

"In the U.S. we've had a few instances where algorithms have acted unfairly," Rumman Chowdhury, global responsible AI lead at Accenture, told CRN. "Every machine makes errors sometimes."

To fight this, Chowdhury said, Accenture's AI Fairness Tool first looks at areas where the data merges and overlaps into areas of mutual information.

"The user selects sensitive variables and it shows you what sort of relationship that variable has with the variables you are going to put into your algorithm, and then it actually removes that bias from the variable that you're putting in your algorithm," she said. "What we can look at, for example, is [whether] particular variables in our data set that shouldn't be influencing other things are influencing other things."
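Accenture has not published the tool's internals, but the idea of stripping a sensitive variable's influence from a model input can be sketched with ordinary least squares: regress the feature on the sensitive variable and keep only the residual, which is uncorrelated with it. The variable names and the residualization approach below are illustrative assumptions, not Accenture's actual method.

```python
import numpy as np

def residualize(feature, sensitive):
    """Remove the linear component of `sensitive` from `feature`
    via least-squares regression, returning the residual."""
    X = np.column_stack([np.ones_like(sensitive), sensitive])
    coef, *_ = np.linalg.lstsq(X, feature, rcond=None)
    return feature - X @ coef

# Toy data: income correlates with a sensitive attribute (e.g. a gender code).
rng = np.random.default_rng(0)
gender = rng.integers(0, 2, size=1000).astype(float)
income = 50_000 + 10_000 * gender + rng.normal(0, 5_000, size=1000)

debiased = residualize(income, gender)
# The residual's correlation with the sensitive variable is near zero.
print(float(np.corrcoef(debiased, gender)[0, 1]))
```

The debiased feature can then be fed to the model in place of the raw one; nonlinear dependence would need a richer removal step than this linear sketch.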

The second part addresses disparate impact: whether different people are being impacted differently within the data model, she said.

"So if I look at my error term, does it look different for different people? So literally, is my error distribution different for men than it is for women?" Chowdhury said. "What we actually want is for the error to be evenly distributed. It's not that we have a problem with bias. We have a problem with bias that is overwhelmingly applied to one group or the other."
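The check Chowdhury describes amounts to splitting the model's errors by subgroup and comparing the distributions. A minimal sketch, with made-up labels and group names:

```python
import numpy as np

def error_by_group(y_true, y_pred, groups):
    """Mean absolute error per subgroup; a fair model shows similar values."""
    errors = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    groups = np.asarray(groups)
    return {g: float(errors[groups == g].mean()) for g in set(groups.tolist())}

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["men", "men", "men", "men", "women", "women", "women", "women"]

per_group = error_by_group(y_true, y_pred, groups)
print(per_group)  # here: men 0.25, women 0.5 -- an unevenly distributed error
```

A large gap between the per-group values is the signal the tool is looking for; statistical tests on the two error distributions would make the comparison more rigorous.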

Finally, the tool checks what she called predictive parity, looking for false positive errors. When data scientists evaluate these models, she said, they often correct for overall false positives but fail to look at different subgroups.

"What we do is normalize false positive errors across the group. So again, there's still bias, but that bias looks the same for all the groups that the user has decided is a sensitive group," she said. "We want to make sure that all the genders are treated equally, we want to make sure that all races are treated equally, we want to make sure that all nationalities are treated equally. That's what this tool does for you."
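One common way to normalize false positives across groups (not necessarily Accenture's) is to pick a separate decision threshold per group so each group's false positive rate lands at the same target. The function names and toy scores below are illustrative assumptions:

```python
import numpy as np

def fpr(y_true, y_pred):
    """False positive rate: share of true negatives flagged positive."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_pred[y_true == 0]))

def group_threshold(scores, y_true, target_fpr):
    """Choose a decision threshold for one group so that roughly
    `target_fpr` of its true negatives score above it."""
    neg_scores = np.sort(np.asarray(scores)[np.asarray(y_true) == 0])[::-1]
    k = int(target_fpr * len(neg_scores))
    return neg_scores[min(k, len(neg_scores) - 1)]

# Toy scores for two groups; all labels are 0 (true negatives) for clarity.
scores = {"A": [0.9, 0.7, 0.5, 0.3], "B": [0.6, 0.4, 0.2, 0.1]}
labels = {"A": [0, 0, 0, 0], "B": [0, 0, 0, 0]}

rates = {}
for g in scores:
    t = group_threshold(scores[g], labels[g], target_fpr=0.25)
    preds = (np.asarray(scores[g]) > t).astype(int)
    rates[g] = fpr(labels[g], preds)

print(rates)  # both groups end up with the same false positive rate
```

As the quote notes, this does not eliminate false positives; it makes their rate look the same for every group the user marked as sensitive.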

The platform is a lightweight tool that integrates with the data science process without hindering the speed of innovation, Chowdhury said. While the product is still in beta testing, she said there has been early interest from the finance and health-care industries.

"There's so much potential for AI in health care when you think about diagnosing diseases," she said. "We think about personalizing health care, but you're also very, very careful to make sure that care is being given to people in an appropriate and fair manner. ... You have to keep up with the pace of the development of the technology and that's what I'm hoping this tool will enable people to do -- implement agile ethics."
