AI is changing the world, whether we're ready for it or not. According to Google, we are already seeing AI algorithms applied to reduce doctors' workloads by intelligently triaging patients, connect journalists with global audiences through accurate language translation, and reduce customer service wait times. But even as we begin to benefit from AI, there is still an air of uncertainty and unease about the technology.
For example, Google recently backed out of a controversial military contract using AI after receiving public backlash.
Now, the company is taking the future of responsible AI more seriously. In June, Google laid out its AI principles, and this week it began opening up a discussion of the concerns that customers most frequently have about AI. The concerns are broken into four areas: unfair bias, interpretability, the changing workforce, and doing good.
Unfair bias: How can companies be sure that their machine learning models treat all users fairly and justly?
Machine learning models are only as reliable as the data they were trained on. Since humans prepare that data, even the slightest bias can make a measurable difference in results. And because of the speed at which algorithms operate, unfair bias is amplified, Google explained.
Unfair bias is not always the result of deliberate prejudice: we naturally gravitate toward people and ideas that confirm our beliefs, while rejecting those that challenge them.
In order to address issues of bias, Google has created educational resources such as recommended practices on fairness and the fairness module in its ML crash course. It is also focusing on documentation and community outreach, the company explained.
“I’m proud of the steps we’re taking, and I believe the knowledge and tools we’re developing will go a long way towards making AI more fair. But no single company can solve such a complex problem alone. The fight against unfair bias will be a collective effort, shaped by input from a range of stakeholders, and we’re committed to listen. As our world continues to change, we’ll continue to learn,” Rajen Sheth, director of product management for Cloud AI for Google, wrote in a post.
Interpretability: How can companies make AI more transparent, so that users can better understand its recommendations?
In order to trust AI systems, we need to understand why they make the decisions they do. The logic of traditional software can be laid out by examining the source code, but that is not possible with neural networks, the company explained.
According to Google, progress is being made as a result of establishing best practices, a growing set of tools, and a collective effort to aim for interpretable results.
Image classification is an area where this transparency is being exhibited. “In the case of image classification, for instance, recent work from Google AI demonstrates a method to represent human-friendly concepts, such as striped fur or curly hair, then quantify the prevalence of those concepts within a given image. The result is a classifier that articulates its reasoning in terms of features most meaningful to a human user. An image might be classified ‘zebra,’ for instance, due in part to high levels of ‘striped’ features and comparatively low levels of ‘polka dots,’” Sheth wrote.
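The idea Sheth describes can be illustrated with a toy sketch. The following is not Google AI's actual method; it is a minimal, hypothetical illustration in which an image is represented by precomputed concept features, the "prevalence" of each human-friendly concept is scored, and the classifier's decision is explained in terms of those scores. All concept names, class weights, and feature vectors here are invented for the example.

```python
# Toy sketch of concept-based interpretable classification.
# Hypothetical concept "directions" in a 3-dimensional feature space;
# a real system would learn these from labeled concept examples.
CONCEPTS = {
    "striped":    [1.0, 0.0, 0.0],
    "polka_dots": [0.0, 1.0, 0.0],
    "curly_fur":  [0.0, 0.0, 1.0],
}

# Hypothetical class definitions expressed as weights over concepts:
# "zebra" is rewarded for stripes and penalized for polka dots.
CLASSES = {
    "zebra":   {"striped": 1.0, "polka_dots": -0.5},
    "leopard": {"polka_dots": 1.0, "striped": -0.5},
}

def concept_scores(features):
    """Prevalence of each concept: dot product with its direction."""
    return {
        name: sum(f * d for f, d in zip(features, direction))
        for name, direction in CONCEPTS.items()
    }

def classify(features):
    """Pick the class whose concept weights best match the image, and
    return the per-concept scores as a human-readable explanation."""
    scores = concept_scores(features)
    best = max(
        CLASSES,
        key=lambda c: sum(w * scores[k] for k, w in CLASSES[c].items()),
    )
    return best, scores

# An "image" whose features are mostly striped, barely polka-dotted.
label, explanation = classify([0.9, 0.1, 0.3])
print(label)        # → zebra
print(explanation)  # → {'striped': 0.9, 'polka_dots': 0.1, 'curly_fur': 0.3}
```

The explanation mirrors Sheth's zebra example: the label is justified by a high "striped" score and a comparatively low "polka_dots" score, in terms a human user can inspect.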
Changing workforce: How can companies responsibly harness the power of automation while ensuring that today's workforce is prepared for tomorrow?
Our relationship to work is changing, and many organizations are trying to balance the potential of automation and the value of their workforce, Google explained.
While not all jobs can be automated, something must be done to ease the transition for workers whose jobs can be. Google has set up a $50 million fund for nonprofits preparing for this transition by providing training and education, connecting potential employees with suitable job opportunities based on skills and experience, and supporting workers in low-wage employment.
Doing good: How can companies be sure that they are using AI for good?
The final facet is to ensure that AI is having a positive impact. There is an enormous grey area here, especially around controversial applications such as AI for weaponry.
“Our customers find themselves in a variety of places along the spectrum of possibility on controversial use cases, and are looking to us to help them think through what AI means for their business,” Sheth wrote.
Google is collaborating with customers and product teams to navigate these grey areas. It has also brought in technology ethicist Shannon Vallor to provide an informed outside perspective on the matter, Google explained.
“Careful ethical analysis can help us understand which potential uses of vision technology are inappropriate, harmful, or intrusive. And ethical decision-making practices can help us reason better about challenging dilemmas and complex value tradeoffs—such as whether to prioritize transparency or privacy in an AI application where providing more of one may mean less of the other,” wrote Sheth.
The post Google explores the challenges of responsible artificial intelligence appeared first on SD Times.