On 8 April 2019, the European Commission's High-Level Expert Group on Artificial Intelligence (AI) released its Ethics Guidelines for Trustworthy AI, setting out the ethical principles that 'trustworthy AI' should satisfy: such systems should be lawful (including in respect of the data used therein), ethical (complying with core principles and values), and robust (meeting central security requirements).
The guidelines follow a public consultation process which ran from December 2018 to February 2019, and were then heavily debated at the EU's AI Alliance Assembly on 26 June 2019. In particular, the guidelines build on other frameworks for autonomous and machine learning solutions, such as those launched by the European Commission (EC), the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO), the International Telecommunication Union (ITU), and several European data protection authorities. As such, the guidelines aim to provide a framework for ensuring that AI systems are designed and deployed in an accountable fashion, allowing organisations to test and develop approaches to ethical AI. They may also be translated into hard regulation in due course, particularly given that the incoming European Commission President, Ursula von der Leyen, has announced plans to introduce legislation on the ethical implications of artificial intelligence in her first 100 days in office.
To read the full summary of the European Commission's ethics guidelines, please click on the link below:
Please do not hesitate to contact us if we can support you in your work, share our thoughts and ideas, or answer any questions you may have regarding our response.