Call for independent watchdog to monitor NZ government use of artificial intelligence
<p>New Zealand is a leader in government use of artificial intelligence (AI). It is part of a <a href="https://www.digital.govt.nz/digital-government/international-partnerships/the-digital-9/">global network of countries</a> that use predictive algorithms in government decision making, for everything from the optimal scheduling of public hospital beds, to deciding whether an offender should be released from prison based on their likelihood of reoffending, to the <a href="https://www.acc.co.nz/about-us/news-media/latest-news/acc-speeds-up-claims-approval/">efficient processing of simple insurance claims</a>.</p>
<p>But the official use of AI algorithms in government has been<span> </span><a href="https://www.data.govt.nz/assets/Uploads/Algorithm-Assessment-Report-Oct-2018.pdf">in the spotlight in recent years</a>. On the plus side, AI can enhance the accuracy, efficiency and fairness of day-to-day decision making. But concerns have also been expressed regarding transparency, meaningful human control, data protection and bias.</p>
<p>In a recently released <a href="https://www.cs.otago.ac.nz/research/ai/AI-Law/NZLF%20report.pdf">report</a>, we recommend New Zealand establish a new independent regulator to monitor and address the risks associated with these digital technologies.</p>
<p><strong>AI and transparency</strong></p>
<p>There are three important issues regarding transparency.</p>
<p>One relates to the inspectability of algorithms. Some aspects of New Zealand government practice are reassuring. Unlike some countries that use commercial AI products, New Zealand has tended to build government AI tools in-house. This means that we know how the tools work.</p>
<p>But intelligibility is another issue. Knowing how an AI system works <a href="https://link.springer.com/article/10.1007/s13347-018-0330-6">doesn’t guarantee</a> that the decisions it reaches will be understood by the people affected. The best-performing AI systems are often extremely complex.</p>
<p>To make explanations intelligible, additional technology is required. A decision-making system can be supplemented with an “explanation system”. These are additional algorithms “bolted on” to the main algorithm we seek to understand. Their job is to construct simpler models of how the underlying algorithms work – simple enough to be understandable to people. We believe explanation systems will be increasingly important as AI technology advances.</p>
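<p>As a purely illustrative sketch (the decision rule, feature names and numbers below are invented, not drawn from any actual government system), the “bolt-on” idea can be shown in a few lines of Python: a black-box decision rule is probed with perturbed inputs, and a simple explanation system reports how sensitive the outcome is to each input feature.</p>

```python
import random

# Hypothetical "black box": a decision rule whose internals we treat
# as opaque (a stand-in for a complex, trained government AI model).
def black_box(income, prior_offences):
    score = 0.3 * income - 1.7 * prior_offences + 0.01 * income * prior_offences
    return score > 0  # True = favourable decision

# Explanation system: probe the black box around one decision by
# perturbing each feature and counting how often the outcome flips.
# The flip rate is a crude, human-readable measure of local influence.
def explain(instance, n_samples=1000, seed=0):
    rng = random.Random(seed)
    base = black_box(*instance)
    flips = {"income": 0, "prior_offences": 0}
    for _ in range(n_samples):
        # Perturb income, holding prior_offences fixed.
        income = instance[0] + rng.uniform(-5, 5)
        if black_box(income, instance[1]) != base:
            flips["income"] += 1
        # Perturb prior_offences, holding income fixed.
        prior = instance[1] + rng.uniform(-2, 2)
        if black_box(instance[0], prior) != base:
            flips["prior_offences"] += 1
    return {feature: count / n_samples for feature, count in flips.items()}

influence = explain((10.0, 2))
```

Real explanation systems (such as local surrogate models) are considerably more sophisticated, but the shape is the same: a second, simpler algorithm sits alongside the opaque one and produces an account a person can follow.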
<p>A final type of transparency relates to public access to information about the AI systems used in government. The public should know what AI systems their government uses as well as how well they perform. Systems should be regularly evaluated and summary results made available to the public in a systematic format.</p>
<p><strong>New Zealand’s law and transparency</strong></p>
<p>Our <a href="https://www.cs.otago.ac.nz/research/ai/AI-Law/NZLF%20report.pdf">report</a> takes a detailed look at how well New Zealand law currently handles these transparency issues.</p>
<p>New Zealand doesn’t have laws specifically tailored towards algorithms, but some are relevant in this context. For instance, New Zealand’s Official Information Act (<a href="http://legislation.govt.nz/act/public/1982/0156/107.0/DLM65628.html">OIA</a>) provides a right to reasons for decisions by official agencies, and this is likely to apply to algorithmic decisions just as much as human ones. This is in <a href="http://classic.austlii.edu.au/au/journals/SydLawRw/2015/22.html">notable contrast to Australia</a>, which doesn’t impose a general duty on public officials to provide reasons for their decisions.</p>
<p>But even the OIA would come up short where decisions are made or supported by opaque decision systems. That is why we recommend that predictive algorithms used by government, whether developed commercially or in-house, must feature in a public register, must be publicly inspectable, and (if necessary) must be supplemented with explanation systems.</p>
<p><strong>Human control and data protection</strong></p>
<p>Another issue relates to human control. Some of the concerns around algorithmic decision-making are best addressed by making sure there is a “human in the loop”, with a human having final sign-off on any important decision. However, we don’t think this is likely to be an adequate solution in the most important cases.</p>
<p>A persistent theme of research in industrial psychology is that humans become overly trusting and uncritical of automated systems, especially when those systems are reliable most of the time. Just adding a human “in the loop” will not always produce better outcomes. Indeed, in certain contexts, human collaboration will offer false reassurance, rendering AI-assisted decisions <a href="https://researchportal.bath.ac.uk/en/publications/effective-forecasting-and-judgmental-adjustments-an-empirical-eva">less accurate</a>.</p>
<p>With respect to data protection, we flag the problem of “inferred data”. This is data inferred about people rather than supplied by them directly (just as when Amazon infers that you might like a certain book on the basis of books it knows you have purchased). Among other recommendations, our report calls for New Zealand to consider the legal status of inferred data, and whether it should be treated the same way as primary data.</p>
<p><strong>Bias and discrimination</strong></p>
<p>A final area of concern is bias. Computer systems might look unbiased, but if they are relying on “dirty data” from previous decisions, they could have the effect of “baking in” discriminatory assumptions and practices. New Zealand’s anti-discrimination laws are likely to apply to algorithmic decisions, but making sure discrimination doesn’t creep back in will require ongoing monitoring.</p>
<p>The report also notes that while “individual rights” — for example, against discrimination — are important, we <a href="https://scholarship.law.duke.edu/dltr/vol16/iss1/2/">can’t entirely rely on them</a> to guard against all of these risks. For one thing, affected people will often be those with the least economic or political power. So while they may have the “right” not to be discriminated against, it will be cold comfort to them if they have no way of enforcing it.</p>
<p>There is also the danger that they won’t be able to see the whole picture, to know whether an algorithm’s decisions are affecting different sections of the community differently. To enable a broader discussion about bias, public evaluation of AI tools should arguably include results for specific sub-populations, as well as for the whole population.</p>
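<p>To illustrate with entirely made-up data, such an evaluation amounts to computing performance separately for each subgroup alongside the overall figure, so that a headline accuracy number can’t hide a disparity between groups:</p>

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, predicted, actual).
# The group labels and outcomes are invented for illustration only.
records = [
    ("group_a", True, True), ("group_a", True, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", False, True), ("group_b", False, True),
    ("group_b", True, True), ("group_b", False, False),
]

def accuracy_by_group(records):
    """Return overall accuracy plus a per-subgroup breakdown."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        correct[group] += (predicted == actual)
    per_group = {g: correct[g] / totals[g] for g in totals}
    overall = sum(correct.values()) / sum(totals.values())
    return overall, per_group

overall, per_group = accuracy_by_group(records)
```

In this toy example the tool scores 0.75 on one group and 0.5 on the other while the overall figure sits between them, which is exactly the kind of gap a whole-population summary would conceal.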
<p>A new independent body will be essential if New Zealand wants to harness the benefits of algorithmic tools while avoiding or minimising their risks to the public.</p>
<p><em>Alistair Knott, James Maclaurin and Joy Liddicoat, collaborators on the <a href="https://www.cs.otago.ac.nz/research/ai/AI-Law/">AI and Law in New Zealand</a> project, have contributed to the writing of this piece.</em></p>
<p><em>Written by John Zerilli and Colin Gavaghan. Republished with permission of <a href="https://theconversation.com/call-for-independent-watchdog-to-monitor-nz-government-use-of-artificial-intelligence-117589">The Conversation</a>.</em></p>