
Five Questions about Artificial Intelligence: Ethical, Legal and Business Dimensions

June 2019 – Dessislava Fessenko, heading the Technology practice in our Kinstellar Sofia office, speaks exclusively for Colliers International on the incorporation of AI in contemporary businesses and its interaction with society on ethical, legal and business grounds.

Dessislava has over sixteen years of experience advising on technology, competition and compliance law, including five years at a leading international law firm in Brussels prior to joining Kinstellar. Her practice spans all aspects of Bulgarian and EU technology, competition and compliance regulations. Her sector-specific experience ranges from banking and insurance, pharmaceuticals, and equity and venture funds to consumer products, retail, IT, and energy and mining. Dessislava advises clients on technology agreements, acquisitions of technology businesses, data protection and compliance, and on the interplay between antitrust regulations, intellectual property rights and data protection, cyber security and cyber-crime. She is a member of the European Artificial Intelligence Alliance and in this capacity has taken part in the expert consultations leading to the creation of the ethics guidelines on the development and use of trustworthy artificial intelligence promoted by the European Commission.

1. Is artificial intelligence (AI) still terra incognita for law and regulators? How regulated – if at all – is this area of computer science and technology?

The general perception prevails that it is not regulated at all. In fact, certain branches of this complex body of systems, processes and data are already regulated, at least in Europe. European regulations on privacy, non-personal data and information security already cover the personal and non-personal data used to fuel and train AI systems and the security settings that AI systems must have.

The European Union has now moved to regulate the core of AI, i.e., its conceptual and technical design. In April, the European Commission rolled out ethics guidelines for the development, deployment and use of AI that should be applied by all developers, suppliers and users of AI in Europe. The guidelines were drafted by an expert group of institutional and industry stakeholders and also incorporated broader input from non-governmental advisory platforms, of which I am also a member.

2. What is the relevance of these ethical guidelines to business and investors?

Although not binding as law, the guidelines clearly set out the ethical and legal framework within which the EU expects businesses to operate when developing, deploying and using AI. In practical terms, this means that the nucleus of a rulebook already exists, one that businesses and investors need to understand and keep in mind when making investment decisions and technological choices involving AI. The guidelines make it clearer which features an AI system needs to have, or needs to scrap, in order to be rolled out to markets in the European Union.

3. What are the key requirements that these guidelines introduce?

The guidelines set a standard for AI solutions to build upon: namely, that AI systems need to be trustworthy. This standard goes beyond the merely technical parameters of AI solutions. The guidelines identify seven key requirements that AI systems, in various settings and industries, should respect in order to be considered trustworthy. These requirements essentially relate to the human oversight, security and safety, privacy, transparency and accountability standards that AI solutions must meet. The guidelines also include an assessment list to help check whether these requirements are fulfilled.

Setting aside the Kafkaesque flair of these formulations, the framework can essentially be understood as follows:

First and foremost, the use of fully autonomous AI, i.e., AI without a sufficient degree of human oversight, is not acceptable from an ethical and policy perspective at this point in time. AI functionalities need to ensure that humans can review, validate or overrule the actions and decisions taken by an AI system.

AI algorithms must be robust and reliable enough to deal adequately with erroneous outcomes. This quality is essentially a function of the data fed into an AI system and of how well the AI's objectives align with overall societal objectives. Securing access to sufficient, consistent and unbiased data to "train" and operate an AI system will be the paramount challenge for years to come. Resolving it depends on how a mix of technical and legal problems is tackled: aggregation, computing capabilities, the ownership of data, individuals' control over their personal data, and so on. Aligning an AI system's objectives with societal objectives requires a great deal of ethical and engineering discipline: modern societies must define, through constructive debate, the common values they wish to preserve and promote in a globalised world, while technology businesses must develop AI around those moral decisions without bias.

Furthermore, AI systems need to meet state-of-the-art security standards to ensure that they are safe at every stage of their operations. The safety and security of an AI solution need to be verified and evaluated regularly, including by external auditors. AI systems must also integrate mechanisms to mitigate and remedy any adverse outcomes.

4. What is the impact of this new framework on businesses and investors?

The guidelines have set the principal boundaries within which AI solutions must fit in order to be used on European markets. This should level the playing field to a certain extent, given the strong competition from the US and China in this area of science and technology. However, like all regulations (even soft law, as the guidelines are for the time being), they will likely raise the overall transaction costs for developers and for businesses incorporating AI solutions into their IT systems. Meeting the AI safety, security and accountability standards established by the guidelines requires constant, ongoing investment by developers and businesses.

5. What follows next for AI in Europe?

From the business and investment perspective, there is clear commitment from European Union institutions to support and promote the development and use of AI in a wide range of sectors. The EU aims to significantly incentivise private investment (including via public-private partnerships in research and development, defence, etc.). To this end, the European Commission has increased its funding for AI via the various sector development instruments that it applies. The Commission plans to earmark EUR 1 billion annually in support of AI-based public services, such as e-healthcare, smart cities (including connected and automated vehicles) and automated manufacturing. EU Member States, including Bulgaria, are currently working out their own priorities in the area of AI and putting together national strategies for the development and use of such systems. This process also holds opportunities for businesses with social and corporate responsibility agendas.

In terms of regulation, the ethics guidelines currently proposed by the European Commission can be expected to evolve, both conceptually and in terms of the technical standards promoted. The guidelines can also be expected to inform binding regulations in Europe, such as those on safety, product liability and consumer protection.

For further information and for any questions on artificial intelligence, please contact Dessislava Fessenko, Of Counsel, by e-mail.