AI, risk allocation and liability

On 16 February 2017, the European Parliament adopted a resolution with recommendations to the Commission on Civil Law Rules on Robotics. The Parliament raised several legal questions on how to deal with risk and damages caused by the use of AI and robots. It even considered the creation of an electronic personality, in particular for cases where artificial intelligence (AI) makes autonomous decisions. On 24 April 2018, the European Commission Staff Working Document on liability for emerging digital technologies was published, exploring once more the existing legal framework and potential gaps in the liability systems of EU Member States. This shows that, at the legislative level, a need has been identified to address liability and risk allocation for AI. Why is this the case? Are there gaps that would leave an injured consumer without sufficient protection? Is the introduction of an e-personality required?

Is AI dangerous?

AI and robots can obviously be dangerous. For example, a military device – a weapon – can be created that kills humans on purpose. Such a device is dangerous. But the real question is whether AI-operated devices are more dangerous than human-operated ones. In all likelihood, this will not be the case. McKinsey has even estimated that with the introduction of autonomous vehicles, traffic accidents could drop by 90%! Yet even though the use of AI promises to make our world safer, there are scenarios in which it creates difficult legal questions.

Contractual Liability and the use of AI devices

The refrigerator that orders milk as soon as the last milk bottle is opened seems to be concluding a purchase contract for milk. As the conclusion of a contract requires the consent of the contractual partners, the question arises whether that consent can be expressed by AI. In this example, however, the issue can be dealt with by predefined declarations of will between the (human) contractual partners, i.e. "order milk as soon as I need it, within a predefined price range". There is then still human consent on the essential elements of the contract, and the action of the refrigerator can be attributed to the human declaration of will.
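To make the notion of a predefined declaration of will concrete, consider the following minimal sketch (the names, data structure and price threshold are hypothetical and not taken from any real system). The human owner fixes the essential elements of the contract in advance; the device merely checks whether a concrete offer falls within those terms.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PredefinedConsent:
    """Essential contract terms fixed in advance by the (human) owner."""
    product: str
    quantity: int
    max_unit_price: float  # upper bound of the owner's predefined consent

def maybe_order(stock_level: int, offer_price: float,
                consent: PredefinedConsent) -> Optional[dict]:
    """Place an order only within the bounds the owner has consented to.

    The device expresses no will of its own: any order it places can be
    attributed to the owner's predefined declaration of will.
    """
    if stock_level == 0 and offer_price <= consent.max_unit_price:
        return {"product": consent.product,
                "quantity": consent.quantity,
                "unit_price": offer_price}
    return None  # outside the predefined consent, no contract is concluded

# The owner has consented to buying two bottles of milk at up to 1.50 each.
consent = PredefinedConsent(product="milk", quantity=2, max_unit_price=1.50)
print(maybe_order(stock_level=0, offer_price=1.20, consent=consent))  # order placed
print(maybe_order(stock_level=0, offer_price=2.00, consent=consent))  # None
```

The decisive point is that the device never determines the essential elements of the contract itself; it only executes an instruction whose terms the human has already fixed.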

In a more complex scenario, such as automated supply chain management, AI may actually make decisions that go beyond any predefined consent. Such a case cannot be dealt with sufficiently within the existing legal framework. What would be required is a legal instrument that allows AI to represent the actual contractual partner. In the current legal framework, representation requires that the representative expresses the declaration of will, which – so far – requires a human action. So, allowing AI devices to act as representatives of another legal personality would indeed require a change to the legal framework. It would, however, not require the introduction of an e-personality. It would be sufficient to introduce a rule that allows a declaration of will made by an AI device to be attributed to the represented party.

Extra-Contractual Liability: Accidents and the like

Liability can arise outside the scope of contracts as well. The most discussed example is the autonomous vehicle that injures a pedestrian. Another example would be the analysis of medical images by AI.

Strict Liability

The existing legal framework recognizes various forms of strict liability that require no fault on the part of the liable party. This can be strict liability in a narrow sense, where only causation is required. Such a system is applied in many jurisdictions to motor vehicles and is typically combined with a mandatory insurance regime.

Such a strict liability regime can easily be adapted to deal with AI. The legislator would only need to define to which devices the regime and the insurance requirement apply. In Germany, this step has already been taken for highly and fully automated (n.b. not autonomous) vehicles in § 12 of the German Road Traffic Act, which provides for a higher liability cap for such vehicles than for regular vehicles.
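Mechanically, a liability cap is simply an upper bound on the compensation recoverable under the strict liability regime. A minimal sketch with hypothetical amounts (the actual figures of § 12 are not reproduced here):

```python
def compensation(proven_damage: float, liability_cap: float) -> float:
    """Strict liability with a cap: the injured party need not prove fault,
    but recovery under this regime is limited to the statutory cap."""
    return min(proven_damage, liability_cap)

# Hypothetical cap amounts; the point is only that the cap for automated
# vehicles is set higher than the cap for regular vehicles.
REGULAR_CAP = 5_000_000.0
AUTOMATED_CAP = 10_000_000.0

print(compensation(12_000_000.0, REGULAR_CAP))    # 5000000.0
print(compensation(12_000_000.0, AUTOMATED_CAP))  # 10000000.0
```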

The product liability regime in the EU can also be understood as a strict liability system, as it does not require fault by the producer of the product causing the damage, but it still requires a defect in the product. In this sense, the liability situation for an AI product is no different from that for a regular product when the defect was inherent in the product at the time it was placed on the market. In this case, the producer would be liable.

The situation becomes more difficult when a self-learning device actually acquired the defect through self-learning, for example when a device analyzing medical images learned to diagnose images incorrectly. These cases are only easy to solve where the self-learning element of the device was not designed correctly; in that case, the producer would still be liable. However, in other scenarios, for example where the “teacher” teaches the device incorrectly (illustrated in the sketch below), there may not be a defect inherent to the product that can be attributed to the producer. This would only hold, though, if the self-learning device is still safer than a traditional device, as otherwise the product could be regarded as unsafe and therefore defective: it should not have been self-learning in the first place. For the remaining gaps, an e-personality for AI devices is still not needed, as it would be sufficient to either expand the producer’s liability or to introduce an owner’s liability, potentially combined with a mandatory insurance regime.
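A deliberately simplified sketch may illustrate the distinction (the feature values, labels and threshold rule are entirely hypothetical). The learning routine below is designed correctly, yet the device ends up with a dangerous decision rule solely because the “teacher” supplied wrong labels after the product left the producer’s control: the defect is acquired, not inherent at the time of placing on the market.

```python
# A deliberately simple "self-learning" diagnostic device: it learns a
# decision threshold on a single image feature from examples labelled by
# a human teacher. The learning code itself is correctly designed.

def train_threshold(samples):
    """Set the threshold at the midpoint between the mean feature value
    of the 'healthy' and of the 'diseased' training examples."""
    healthy = [x for x, label in samples if label == "healthy"]
    diseased = [x for x, label in samples if label == "diseased"]
    return (sum(healthy) / len(healthy) + sum(diseased) / len(diseased)) / 2

def diagnose(feature_value, threshold):
    return "diseased" if feature_value > threshold else "healthy"

# Correct teaching (hypothetical data): values above ~0.5 indicate disease.
good_teacher = [(0.2, "healthy"), (0.3, "healthy"),
                (0.7, "diseased"), (0.8, "diseased")]
print(diagnose(0.75, train_threshold(good_teacher)))  # "diseased" (correct)

# Incorrect teaching: clearly diseased cases are mislabelled as healthy,
# so the identical code now produces a device that misses diagnoses.
bad_teacher = [(0.2, "healthy"), (0.7, "healthy"),
               (0.8, "healthy"), (1.0, "diseased")]
print(diagnose(0.75, train_threshold(bad_teacher)))  # "healthy" (missed)
```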

Fault-based liability

Fault-based liability applies to AI devices as it does to any other device. The difference is that, factually, the scope for liability may be greater. One concern for producers is that, as long as devices remain connected, they can actually monitor, repair and update defective devices for a potentially unlimited period. This enhanced ability of the producer to mitigate risk comes with an enhanced obligation to do so and thus increases the liability risks for producers. Another obvious field is cybersecurity. In particular, connected devices that have access to large amounts of personal data must be secured. Failing to provide adequate cybersecurity will increase the liability risk for producers significantly.

A separate field is where the AI device itself acted “negligently” and this fault was caused neither by the producer nor by the owner. In this case, a liability gap may actually exist. The solution to this gap depends on a risk assessment for the individual device. In some cases, an extension of the strict liability regime (including mandatory insurance) may be required. In other cases, the public may have to accept the liability gap where it comes with larger benefits for the public.

Conclusion

In general, the existing legal framework is suitable to deal with AI. It may need to be adapted to allow for the conclusion of contracts by AI devices and to close remaining liability gaps. In practice, producers of AI devices will face greater obligations to monitor devices in order to mitigate their risks.