Artificial Intelligence Law
CLD has published a number of articles on artificial intelligence and the law.
- Rafael Dean Brown, Property Ownership and the Legal Personhood of Artificial Intelligence, Information & Communications Technology Law (Dec 10, 2020) https://www.tandfonline.com/doi/full/10.1080/13600834.2020.1861714
- Abstract: This paper adds to the discussion of the legal personhood of artificial intelligence by focusing on one area not covered by previous works on the subject: ownership of property. The author discusses the nexus between property ownership and legal personhood. The paper explains prevailing misconceptions about the requirements of rights or duties in legal personhood, and discusses the potential for conferring rights or imposing obligations on weak and strong AI. While scholars have discussed AI owning real property and copyright, there has been limited discussion of the nexus between AI property ownership and legal personhood. The paper discusses the right to own property and the obligations of property ownership in nonhumans, and applies them to AI. The paper concludes that the law may grant property ownership and legal personhood to weak AI, but not to strong AI.
- Jon Truby & Rafael Brown, Human digital thought clones: the Holy Grail of artificial intelligence for big data, Information & Communications Technology Law (Dec 1, 2020) https://www.tandfonline.com/doi/full/10.1080/13600834.2020.1850174?scroll=top&needAccess=true
- Abstract: This article explores the legal and ethical implications of big data’s pursuit of human ‘digital thought clones’. It identifies the various types of digital clones that have been developed and demonstrates how the pursuit of more accurate personalised consumer data for micro-targeting leads to the evolution of digital thought clones. The article explains the business case for digital thought clones and why they are the commercial Holy Grail for profit-seeking big data companies and advertisers, who have commoditised predictions of digital behaviour data. Given big data’s industrial-scale data mining and relentless commercialisation of all types of human data, the article identifies some existing legal protections but argues that more jurisdictions urgently need to enact legislation similar to Europe’s General Data Protection Regulation, protecting people against unscrupulous and harmful uses of their data and against the unauthorised development and use of digital thought clones.
- Jon Truby, Rafael Brown & Andrew Dahdal, Banking on AI: mandating a proactive approach to AI regulation in the financial sector, Law and Financial Markets Review, Volume 14, Issue 2 (May 15, 2020) https://www.tandfonline.com/doi/full/10.1080/17521440.2020.1760454
- Abstract: Artificial intelligence is rapidly influencing the financial sector, with innumerable potential benefits such as enhancing financial services and improving regulatory compliance. Yet despite an emerging international consensus on principles of AI governance, lawmakers have so far failed to translate those principles into regulations in the financial sector. Perhaps in order to remain competitive in the global race for AI supremacy without being typecast as stifling innovation, typically cautious financial regulators are unusually allowing the introduction of experimental AI technology into the financial sector, with few controls on the unprecedented risks to consumers and financial stability. Once unregulated AI software causes serious economic harm, a public and regulatory backlash could lead to over-regulation that would harm innovation in this potentially beneficial technology. This article argues that the best way to encourage a sustainable future for AI innovation in the financial sector is to support a proactive regulatory approach before any financial harm occurs. This proactive approach should implement rational regulations that embody jurisdiction-specific rules in line with carefully construed international principles.
- Jon Truby, Governing Artificial Intelligence to benefit the UN Sustainable Development Goals, Sustainable Development, Volume 28, 946-959 (February 26, 2020) https://doi.org/10.1002/sd.2048
- Abstract: Big Tech's unregulated roll-out of experimental AI poses risks to the achievement of the UN Sustainable Development Goals (SDGs), with developing countries particularly vulnerable. The goal of financial inclusion is threatened by the imperfect and ungoverned design and implementation of AI decision-making software making important financial decisions affecting customers. Automated decision-making algorithms have displayed evidence of bias, lack ethical governance, and limit transparency in the basis for their decisions, causing unfair outcomes and amplifying unequal access to finance. Poverty reduction and sustainable development targets are put at risk by Big Tech's potential exploitation of developing countries through the use of AI to harvest data and profits. Stakeholder progress toward preventing financial crime and corruption is further threatened by potential misuse of AI. In the light of such risks, Big Tech's unscrupulous history means it cannot be trusted to operate without regulatory oversight. The article proposes effective pre-emptive regulatory options to minimize scenarios of AI damaging the SDGs. It explores internationally accepted principles of AI governance and argues for their implementation as regulatory requirements governing AI developers and coders, with compliance verified through algorithmic auditing. Furthermore, it argues that AI governance frameworks must require a benefit to the SDGs. The article argues that proactively predicting such problems can enable continued AI innovation through well-designed regulations adhering to international principles. It highlights the risk that unregulated AI harming human interests may provoke a public and regulatory backlash resulting in over-regulation that could damage the otherwise beneficial development of AI.