Jumping into AI & big data FEAT first: Why companies must consider Fairness, Ethics, Accountability & Transparency when implementing AI programmes

The astonishing potential of AI and big data spurs enthusiasm for pushing the boundaries and developing commercial, economic, social and environmental applications that could solve challenging problems, both for business enterprises and for the world at large. As with any new technology, however, we must consider the wider impact of mass deployment of AI on individuals and society.

Fairness, ethics, accountability and transparency—the FEAT principles—are intrinsically linked and central to the successful development and implementation of AI and big data. Let’s take a closer look at each of them.

FAIRNESS

It is often said that artificial intelligence is only as good as the data it uses. Yet even when the data is fair, if the algorithm analysing it has been built with bias, the decisions it makes will be skewed. Organisations therefore need to consider fairness in both big data selection and algorithm design. Bias includes well-known dimensions such as gender or race, but also contingent biases that arise from combinations of factors and are far harder for a human, or a machine, to detect. In support of this, last year Amazon made grants of up to $10M available to researchers exploring how to ensure fairness in AI. From a company pioneering consumer-facing AI, that is an indicator of the importance it places on fairness.
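To make the fairness point concrete, the sketch below (not taken from the article; the data and column names are purely hypothetical) shows one simple check, demographic parity: comparing the rate of positive outcomes across groups in a model’s decision log. Grouping on several columns at once is one way to probe the “multiple factors in combination” problem described above.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_cols, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes (e.g. loan approved) per group or group combination."""
    return df.groupby(group_cols)[outcome_col].mean()

def demographic_parity_gap(df: pd.DataFrame, group_cols, outcome_col: str) -> float:
    """Largest difference in positive-outcome rate between any two groups.
    A gap near 0 suggests parity; a large gap flags potential bias for review."""
    rates = selection_rates(df, group_cols, outcome_col)
    return float(rates.max() - rates.min())

# Hypothetical decision log: 1 = approved, 0 = rejected
decisions = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M"],
    "region":   ["north", "north", "south", "south", "north", "south"],
    "approved": [0, 1, 0, 1, 1, 1],
})

# Single-attribute check, then a combined check for contingent bias
print(selection_rates(decisions, "gender", "approved"))
print("Gap by gender:", demographic_parity_gap(decisions, "gender", "approved"))
print("Gap by gender and region:",
      demographic_parity_gap(decisions, ["gender", "region"], "approved"))
```

Demographic parity is only one of several fairness measures; the point of the sketch is simply that fairness can be monitored with routine checks rather than left as an abstract aspiration.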

ETHICS

The ethics of AI and big data are fascinating and complex. To what degree should someone’s personal information be used by third parties to make decisions affecting their lives? Even if a decision is ostensibly in a person’s best interests, can or should it be made unilaterally? These risks, and the complexity of managing AI ethically, are leading businesses to establish and fund think tanks and research centres to explore such questions. Salesforce, for example, appointed a Chief Ethical and Humane Use Officer at the start of 2019 with a remit to develop a strategic framework for the responsible use of technology across the business.

ACCOUNTABILITY

The fast pace of AI deployment and the lack of controls have led some commentators to describe the landscape as a “Wild West”. However, accountability is essential to building public trust in automated decision-making, and times are changing. Recent developments in privacy legislation have raised public awareness of rights and responsibilities, and this, together with growing government and regulatory interest, is beginning to reshape how AI is governed. As the AI environment matures, expect more legislation and regulation governing business and public sector use of AI. As with the development of global financial and privacy regulations, organisations will have to comply with rules specifically relating to AI and big data, requiring governance frameworks, monitoring protocols and risk management strategies to drive visibility and accountability.
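As one illustration of what a monitoring protocol might involve, here is a minimal sketch (with hypothetical field names and file path, not a prescribed standard) of logging each automated decision to an append-only audit trail so it can be reviewed later.

```python
import json
import uuid
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in an audit trail for an automated decision."""
    model_version: str   # which model produced the decision
    inputs: dict         # the features the model actually saw
    output: str          # the decision returned to the caller
    explanation: str     # short human-readable rationale
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decision_audit.log") -> None:
    """Append the record as one JSON line so auditors can trace and replay decisions."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    model_version="credit-scoring-v1.3",
    inputs={"income": 42000, "existing_debt": 5500},
    output="declined",
    explanation="Debt-to-income ratio above policy threshold",
))
```

A production system would add access controls, retention policies and tamper-evidence, but even a simple record of inputs, outputs and model version makes automated decisions traceable when regulators or customers come asking.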

TRANSPARENCY

Accountability cannot exist without transparency. Organisations must be open about how they use AI algorithms and big data, and about the effects of that use. This presents a dilemma, though: some technology companies are understandably hesitant to disclose the full details of how their algorithms and data are used. Despite the difficulties inherent in transparency, its importance to trust and the moral use of AI means organisations will be forced to address it.
