New tools from IBM and Google reveal it’s hard to build trust in AI


The unseen dangers inherent in artificial intelligence (AI) underscore the importance of the divergent approaches IBM and Google are taking to this multifaceted problem.

Brad Shimmin and Luciano C. Oviedo offer their perspectives on this important issue.

Brad Shimmin, Service Director at GlobalData


Artificial Intelligence (AI) has already changed the way consumers interact with technology and the way businesses think about big challenges like digital transformation. In fact, GlobalData research shows that approximately 50% of IT buyers have already prioritized the adoption of AI technologies, and that number is expected to jump to more than 67% over the next two years.

However, there is a growing realization that good AI is hard to come by: the decisions AI makes may only appear to be correct, while in reality they harbor unseen biases rooted in incorrect or incomplete data. Many facets of AI, such as Deep Learning (DL) algorithms, are in essence a black box, unable to reveal how and why a given decision was made.

Over the last two weeks, IBM and Google each took an important next step by introducing tools capable of building trust and transparency into AI itself. Their approaches are highly divergent, and neither solves the problem in its entirety.

Google’s new tool, the What-If Tool, allows users to analyze a Machine Learning (ML) model directly, without any programming. Intended for use long before an AI solution is put into operation, it lets users readily visualize how the outcome of a given ML model will change under any number of “what if” scenarios applied to the model itself or its underlying dataset.
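For readers who want to experiment, the What-If Tool can be launched from a notebook with only a few lines of glue code. The sketch below is illustrative rather than definitive: it assumes a fitted scikit-learn-style classifier (model), its numeric feature names (feature_names) and a held-out pandas DataFrame (test_df); only the witwidget imports and calls come from Google's actual package.

# Minimal sketch: probing an existing classifier with Google's What-If Tool
# inside a Jupyter notebook (pip install witwidget). The names model,
# feature_names and test_df are assumptions for illustration.
import numpy as np
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def df_to_examples(df):
    # Convert a pandas DataFrame into the tf.Example protos the tool expects.
    examples = []
    for _, row in df.iterrows():
        example = tf.train.Example()
        for col, value in row.items():
            if isinstance(value, (int, float, np.number)):
                example.features.feature[col].float_list.value.append(float(value))
            else:
                example.features.feature[col].bytes_list.value.append(str(value).encode())
        examples.append(example)
    return examples

def custom_predict(examples):
    # Turn each tf.Example back into a numeric feature row and score it.
    rows = [[ex.features.feature[name].float_list.value[0] for name in feature_names]
            for ex in examples]
    return model.predict_proba(rows).tolist()

examples = df_to_examples(test_df)  # held-out data to probe with "what if" edits
config = WitConfigBuilder(examples).set_custom_predict_fn(custom_predict)
WitWidget(config, height=720)       # renders the interactive visualization

Each datapoint can then be edited in the browser and re-scored on the fly, which is exactly the kind of pre-deployment "what if" probing Shimmin describes.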

Conversely, IBM has taken an operational approach to the problem with its new trust and transparency capabilities for AI on IBM Cloud. IBM’s new tools evaluate the effectiveness of a given model against how the business expects it to behave, explaining its effectiveness and accuracy in natural, business-friendly language.
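The article does not walk through IBM's code path, but the underlying idea is straightforward: continuously compare a deployed model's decisions against the outcomes and accuracy the business expects, and explain any gap in plain language. The sketch below is a generic illustration of that kind of runtime check, not IBM's API; the class name and threshold are assumptions.

# Generic sketch of the runtime monitoring idea behind IBM's trust and
# transparency capabilities: score a batch of live decisions against the
# accuracy the business expects and report the result in plain language.
# The class name and threshold are illustrative assumptions, not IBM's API.
from dataclasses import dataclass

@dataclass
class ModelHealthCheck:
    expected_accuracy: float  # accuracy the business expects in production

    def evaluate(self, predictions, outcomes):
        # Compare predictions with observed outcomes and explain the result.
        correct = sum(p == o for p, o in zip(predictions, outcomes))
        accuracy = correct / len(outcomes)
        if accuracy >= self.expected_accuracy:
            return f"Model is performing as expected ({accuracy:.1%} accurate)."
        return (f"Model accuracy has dropped to {accuracy:.1%}, below the "
                f"{self.expected_accuracy:.1%} the business expects; its recent "
                f"decisions and underlying data should be reviewed.")

check = ModelHealthCheck(expected_accuracy=0.90)
print(check.evaluate(predictions=[1, 0, 1, 1], outcomes=[1, 0, 0, 1]))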

Although neither solution is enough on its own to solve the overall problem, together these two highly divergent approaches point to the necessity of a multi-pronged strategy for building trust in AI: first in the underlying data, next in the model and algorithms, and finally in the finished solution running in the wild.


Luciano C. Oviedo, Warwick Business School/Arizona State University

The galactic collision and convergence of AI, IoT, fog computing and 5G stands to create plausible futures that range from utopian to dystopian, and everything in between. No one knows which future will hit us, but what we can control is how proactively we stress-test and adapt our respective strategies and plans, both to mitigate issues and risks and to promote values and opportunities.

Yet, unlike previous technology waves, current research indicates that this particular convergence of technologies stands to impact society in ways we have never seen before. What is also evident is that companies, and their respective platform ecosystems, are in general lagging in proactively and rigorously analyzing the social impact and implications of these emerging technologies. I recommend companies use this as an opportunity to re-engage with non-traditional stakeholders and tackle these topics head on.


Source: Help Net Security (https://ift.tt/2OsWMGs)
