LAWS0013 Liability and Algorithmic Systems (Tort Law Special Seminars)

        .---. 
 boom .'_:___".
      |__ --==|
      [  ]  :[|
      |__| I=[| 
      / / ____| 
     |-/.____.' 
..../___\ /___\ ...

Dr Michael Veale, Associate Professor in Digital Rights & Regulation

Last revised 6 January 2022

In these special seminars, we will look at how to navigate some of the challenges that arise when tort law meets emerging digital technologies. We will consider what lawyers need to know about algorithmic systems and artificial intelligence in order to analyse them, and how in practice lawyers can distil the essence of what seem like complex technologies and ask the right questions needed to explore how legal principles might and should apply to them. This year, we will look at emerging issues in the ‘cyberphysical’ domain, as different jurisdictions grapple with the appropriate regulation of algorithmic systems, robots, automated cars and other ‘smart’ devices that can cause serious harm. We will look at different discussions of how tort law can and should deal with these challenges, including proposals from scholars and legislators, and seek to appraise what the best way forward might be.

Reading and preparation

In order to follow the class, you will need to have done the required reading for each session. Spend time on the reading – read carefully and critically, make sure you understand the material as well as possible, and check that you can answer the questions on the handout. You should expect to spend around 6-8 hours preparing for each class.

I shall expect everyone to contribute to class and will ask questions directly of individuals on occasion. The intention is not to catch anybody out – if for some reason you haven’t been able to prepare, do come to the class, but let me know in advance so I don’t put you on the spot (m.veale@ucl.ac.uk).

I have set out some ‘points for consideration’ at the end of the reading – you should use them to help with your preparation, and if you are unsure about any of them, you should ask. We will refer to some of them in the class, but will not necessarily go through all of them.

Assessment

You will have 48 hours to write a 3,000 word essay, so it is important that you are well prepared and have revised the seminar material before we give you the essay questions. At the start of the 48-hour period, I shall provide you with four essay titles, from which you will choose one.

The 48-hour window is designed to test a range of skills, including time management. Some guidelines for this assessment:

  • the basic reading should be well prepared before the essay titles are released;
  • the limited time allows only for very focused research and additional reading;
  • you will have limited time to re-draft the essay itself during the 48-hour assessment period, making planning (including for word count) very important.

You will not perform at your best if you view this as an endurance test – you need food, exercise and sleep if you are to write a good essay! This applies to all of your studies and assessments, of course.

You will be notified of the 48-hour window at or about the same time as the exam timetable is released. This is so that the tort teachers can pick the best and fairest slot, taking into account your other scheduled assessments, rather than leaving the slot to the central exams office.

Remember that breaching word limits and submission deadlines is very strictly penalised. Please refer to your Student Handbook.

Session 1: What Lawyers Need to Know About Algorithmic Systems

Required Reading

  • The Royal Society, Machine Learning: The Power and Promise of Computers that Learn by Example (The Royal Society, 2017), pp 19-31. OA link
  • Agathe Balayn and Seda Gürses, Beyond Debiasing: Regulating AI and Its Inequalities (European Digital Rights (EDRi), 2021), pp 81-110 (‘Alternative framings for AI policymakers’). OA link
  • Expert Group on Liability and New Technologies, Liability for Artificial Intelligence and Other Emerging Digital Technologies (European Commission, DG JUST 2019), pp 1-31. OA link

Optional Further Reading

  • Neil M Richards and William D Smart, ‘How Should the Law Think about Robots?’ in Ryan Calo and others (eds), Robot Law (Edward Elgar 2016). UCL link preprint-link
  • Jennifer Cobbe and Jatinder Singh, ‘Artificial Intelligence as a Service: Legal Responsibilities, Liabilities, and Policy Challenges’ (2021) 42 Computer Law & Security Review 105573. You can either skip or just skim the 8 pages on data protection (section 3.1).
  • David Lehr and Paul Ohm, ‘Playing with the Data: What Legal Scholars Should Learn about Machine Learning’ (2017–18) 51 UCD L Rev 653. OA link

Points for Consideration

  • How do software, hardware and business models interact around automated systems? Which organisations have influence over the way technologies deployed in our homes or by businesses function?
  • What features of these technologies in use might create challenges with causes of action you have already learned about in this module?
  • Do ‘algorithm’, ‘artificial intelligence’, ‘autonomous system’ and ‘robot’ have distinct meanings? What terms might be most useful for understanding their legal relevance?
  • What kind of automated devices that might cause harm or damage have you come across in life? In news stories? In fiction? What kind of damage might they cause?

Session 2: Models for Liability of Algorithmic Systems

Required Reading

  • Ernst Karner and Bernard A Koch, ‘Civil Liability for Artificial Intelligence’ in Ernst Karner and others (eds), Comparative Law Study on Civil Liability for Artificial Intelligence (European Commission, DG JUST 2021). OA link
  • Andrew D Selbst, ‘Negligence and AI’s Human Users’ (2020) 100 BU L Rev 1315. OA link
  • Deborah G Johnson and Mario Verdicchio, ‘Why Robots Should Not Be Treated like Animals’ (2018) 20 Ethics Inf Technol 291. UCL link

Further Reading

  • Madeleine Clare Elish, ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’ (2019) 5 Engaging Science, Technology, and Society 40. OA link
  • Eric Tjong Tjin Tai, ‘Liability for (Semi)Autonomous Systems: Robots and Algorithms’ in Research Handbook in Data Science and Law (Edward Elgar 2018). UCL link pre-print link
  • Mark Lunney and others, ‘Special Liability Regimes’ in Mark Lunney, Donal Nolan and Ken Oliphant (eds), Tort Law: Text and Materials (6th edn, Oxford University Press 2017) part III, ‘Product Liability’. UCL link
  • Chris Reed, ‘How Should We Regulate Artificial Intelligence?’ (2018) 376 Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 20170360. OA link
  • Joanna J Bryson and others, ‘Of, for, and by the People: The Legal Lacuna of Synthetic Persons’ (2017) 25 Artif Intell Law 273. OA link

On Automated Vehicles in the United Kingdom

  • Automated and Electric Vehicles Act 2018, part 1.
  • James Davey, ‘By Insurers, For Insurers: The UK’s Liability Regime for Autonomous Vehicles’ (2020) 13 Journal of Tort Law 163. UCL link
  • James Marson and others, ‘The Automated and Electric Vehicles Act 2018 Part 1 and Beyond: A Critical Review’ (2020) 41 Statute Law Rev 395. UCL link pre-print link
  • Law Commission and Scottish Law Commission, ‘Automated Vehicles: Consultation Paper 3 - A Regulatory Framework for Automated Vehicles’ (18 December 2020), pages 277-286 (Civil Liability) link

Points for Consideration

  • What are the logics behind the way product liability law works? Is product liability already a solution to the challenges of algorithms?
  • What challenges emerge when algorithmic or robotic systems are used or relied on as tools by humans who are themselves involved in the decisions or choices (decision-support), rather than when such systems act with more autonomy?
  • Is AI-related injury or loss foreseeable in the same way as in other situations you have seen in tort? Why might foreseeability be challenging, and what might be done about this?

Session 3: Setting Standards

Required Reading

  • Karni Chagal-Feferkorn, ‘The Reasonable Algorithm’ (2018) 2018 U Ill JL Tech & Pol’y 111. OA link
  • Ryan Abbott, ‘Reasonable Robots’ in The Reasonable Robot: Artificial Intelligence and the Law (Cambridge University Press 2020). UCL link
  • Jack Stilgoe, ‘How Can We Know a Self-Driving Car is Safe?’ (2021) 23 Ethics Inf Technol 635. OA link

Further Reading

  • Bryan Choi, ‘Crashworthy Code’ (2019) 94 Washington Law Review 39. OA link
  • Bryan Casey, ‘Robot Ipsa Loquitur’ (2019–20) 108 Geo LJ 225. OA link
  • Bryant Walker Smith, ‘Lawyers and Engineers Should Speak the Same Robot Language’ in Ryan Calo and others (eds), Robot Law (Edward Elgar 2016). UCL link
  • Helen Smith and Kit Fotheringham, ‘Artificial Intelligence in Clinical Decision-Making: Rethinking Liability’ (2020) 20 Medical Law International 131. UCL link

Points for Consideration

  • Do algorithmic systems require tort law to be rethought, or is the existing paradigm suitable?
  • Some authors seek to apply reasonableness tests to algorithmic systems or robots themselves. Is this wise? Is this practical? Why or why not?
  • Should the development of tort law in this area be left up to courts? What might the consequences of this be?
  • What are the challenges of assessing ‘safety’ of autonomous or semi-autonomous technologies?

Session 4: What to Do? Appraising a Proposed Civil Liability Reform

Required Reading

  • European Parliament, ‘Civil liability regime for artificial intelligence: European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL))’ (P9_TA(2020)0276), in particular the proposed statute (unusually) within this resolution, ‘Proposal for a Regulation of the European Parliament and of the Council on liability for the operation of Artificial Intelligence-systems’. link
  • Expert Group on Liability and New Technologies, Liability for Artificial Intelligence and Other Emerging Digital Technologies (European Commission, DG JUST 2019), pp 32-64. OA link

Further Reading

  • Ozlem Ulgen, ‘A “Human-Centric and Lifecycle Approach” to Legal Responsibility for AI’ (2021) 26 Comms L 97. Westlaw UK link
  • Gerhard Wagner, ‘Liability for Artificial Intelligence: A Proposal of the European Parliament’ in Horst Eidenmüller and Gerhard Wagner (eds), Law by Algorithm (Mohr Siebeck 2021). pre-print link
  • Mark Lunney and others, ‘How Tort Works’ in Mark Lunney, Donal Nolan and Ken Oliphant (eds), Tort Law: Text and Materials (6th edn, Oxford University Press 2017) part II, ‘Tort and the Fault Principle Evaluated’. UCL link

Points for Consideration

  • What are the main ingredients of the proposed European Parliament reform? What is clear from the text, and what is left for future elaboration or development?
  • Does the regime strike you as sensible? Does it strike a fair balance?
  • What are the benefits or drawbacks of this approach compared to approaches we have discussed in previous seminars?
  • Who benefits from this regime? How is it likely to function in practice?