Businesses are right to embrace AI because the technology presents fantastic commercial opportunities. In fields ranging from logistics to digital advertising, it has the power to reduce costs and maximise results, argues data privacy expert Ivana Bartoletti
There is little doubt that enterprises in all sectors will treat this technology as a core part of their transformation plans over the next few years. But recent high-profile incidents have highlighted that artificial intelligence (AI) – if inadequately governed – poses serious risks.
Last year alone, for instance, a self-driving Uber car killed a pedestrian; the Home Office falsely accused 7,000 foreign students of cheating in their visa exams as a result of a flaw in voice-recognition software; and Amazon was forced to drop its AI-led recruitment programme after it was found to favour male applicants’ CVs.
All of these cases illustrate why companies must treat ethics as a vital concern when deploying AI.
Policy-makers have been particularly active in this field over the past few months. The British government has partnered with the World Economic Forum to define the remit and practicalities of AI regulation and governance. It has also set up the Centre for Data Ethics and Innovation, while numerous advocacy groups and parliamentary initiatives are suggesting how ethical concerns can be turned into pragmatic requirements.
In my view, organisations need to focus on two key aspects as they navigate the ethical, legal and practical complexities surrounding their use of AI. The first is strategic: deciding exactly how AI will serve your business’s goals and values. The second is technical: the algorithms, processes and systems required.
Let’s start with the latter. Algorithmic impact assessments (AIAs) are a vital part of AI governance, as they aim to act both ex ante, by providing a framework for ethics by design, and ex post, by serving as useful audit tools.
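To make the ex-post audit role of an AIA concrete, here is a minimal sketch of one fairness check an audit might include: the demographic parity difference, the gap in positive-outcome rates between two groups. The function names, group labels and data are illustrative assumptions, not part of any standard AIA framework.

```python
# Illustrative ex-post audit check: demographic parity difference.
# All names and data here are hypothetical examples.

def positive_rate(predictions, groups, group):
    """Share of members of `group` who received a positive prediction (1)."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_diff(predictions, groups, group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(predictions, groups, group_a)
               - positive_rate(predictions, groups, group_b))

# Hypothetical recruitment-style data: 1 = shortlisted, 0 = rejected.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_diff(preds, grps, "a", "b")
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A large gap does not by itself prove unfairness, but flagging it is exactly the kind of routine, repeatable check that turns an ethical principle into an auditable requirement.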
To read the full blog, please visit the director.co.uk website.