Progress in implementing ethical and trusted AI-enabled systems still inconsistent


COVID-19 has put a spotlight on ethical issues emerging from the increased use of AI applications and the potential for bias and discrimination.


A report from the Capgemini Research Institute found that in 2020, 45% of organizations had defined an ethical charter to provide guidelines on AI development, up from 5% in 2019, as businesses recognize the importance of having defined standards across industries.

However, a lack of leadership over how these systems are developed and used is coming at a high cost for organizations.

The report notes that while organizations are more ethically aware, progress in implementing ethical AI has been inconsistent. For example, progress on the “fairness” (65%) and “auditability” (45%) dimensions of ethical AI has stalled, while transparency has dropped from 73% to 59%, even though 58% of businesses say they have been building awareness among employees about issues that can result from the use of AI.

The research also reveals that 70% of customers want a clear explanation of results and expect organizations to provide AI interactions that are transparent and fair.

Ethical governance has become a prerequisite

The need for organizations to implement an ethical charter is also driven by expanding regulatory frameworks. For example, the European Commission has issued guidelines on the key ethical principles that should be used in designing AI applications.

Meanwhile, guidelines issued by the FTC in early 2020 call for transparent AI, stating that when an AI-enabled system makes an adverse decision (such as declining credit for a customer), the organization should show the affected consumer the key data points used in arriving at the decision and give them the right to change any incorrect information.

However, while 73% of organizations globally informed users in 2019 about the ways AI decisions might affect them, that figure has since dropped to 59%.

According to the report, this is a result of the circumstances brought about by COVID-19, the growing complexity of AI models, and changes in consumer behavior, all of which have disrupted the functioning of AI algorithms.

New factors, including a preference for safety, bulk buying, and a lack of training data from comparable past situations, have meant that organizations are redesigning their systems to suit a new normal; however, this has led to less transparency.

Discriminatory bias in AI systems comes at a high cost for organizations

Many public and private institutions deployed a range of AI technologies during COVID-19 in an attempt to curtail the pandemic's impact. As these deployments continue, it is critical for organizations to uphold customer trust by fostering positive relationships between AI and consumers. However, reports show that datasets collected for healthcare and the public sector are subject to social and cultural bias.

This is not limited to the public sector. The research found that 65% of executives said they were aware of the issue of discriminatory bias in AI systems. Further, close to 60% of organizations have attracted legal scrutiny and 22% have faced a customer backlash in the last two to three years because of decisions reached by AI systems.

In fact, 45% of customers said they would share their negative experiences with family and friends and urge them not to engage with the organization, 39% would raise their concerns with the organization and demand an explanation, and 39% would switch from the AI channel to a higher-cost human interaction; 27% would cease dealing with the organization altogether.

Establish ownership of ethical issues – leaders must be accountable

Only 53% of organizations have a leader, such as a Chief Ethics Officer, who is responsible for the ethics of their AI systems. Establishing leadership at the top is crucial to ensure these issues receive due priority from senior management and to create ethically robust AI systems.

In addition, leaders in business and technology functions must be fully accountable for the ethical outcomes of AI applications, yet the research shows that only half of organizations have a confidential hotline or ombudsman that lets customers and employees raise ethical issues with AI systems.

The report highlights seven key actions for organizations to build an ethically robust AI system, which need to be underpinned by a strong foundation of leadership, governance, and internal practices:

  • Clearly outline the intended purpose of AI systems and assess their overall potential impact
  • Proactively deploy AI for the benefit of society and the environment
  • Embed diversity and inclusion principles throughout the lifecycle of AI systems
  • Enhance transparency with the help of technology tools
  • Humanize the AI experience and ensure human oversight of AI systems
  • Ensure technological robustness of AI systems
  • Protect people’s individual privacy by empowering them and putting them in charge of AI interactions

Anne-Laure Thieullent, Artificial Intelligence and Analytics Group Offer Leader at Capgemini, explains, “Given its potential, it would be a disservice if the ethical use of AI is only limited to ensure no harm to users and customers. It should be a proactive pursuit of environmental good and social welfare.

“AI is a transformational technology with the power to bring about far-reaching developments across the business, as well as society and the environment. This means governmental and non-governmental organizations that possess the AI capabilities, wealth of data, and a purpose to work for the welfare of society and environment must take greater responsibility in tackling these issues to benefit societies now and in the future.”


from Help Net Security https://ift.tt/2GsmiK4
