Santow, speaking on Tuesday about the recently released Human Rights and Technology Report, pointed to the Centrelink online compliance intervention program – the automated element of which became colloquially known as robo-debt – to highlight this risk.

"Our concerns about the use of AI in decision-making, especially in really important decision-making – I'm very conscious that in the move towards the use of independent assessments in the NDIA that there is a risk that some of the mistakes that were made with regard to robo-debt could be made again in this context," he said.

The National Disability Insurance Agency (NDIA) announced the introduction of independent assessments for NDIS participants. They are designed, according to the government, to make sure access to the NDIS is fair and equitable for new and existing participants. According to the federal opposition, "independent assessments" is another way of saying "robo-planning".

"We have to learn the lessons from robo-debt and that in turn means we have to make sure that whenever AI is used, especially when it is used by government, it must be fair, it must be accurate, and it must be accountable," Santow said.

"I think some of the concerns that have been expressed in public about the use of an algorithm in the independent assessment process for the NDIS is that some of those elements – fairness, accuracy, and accountability – could well be compromised."

Santow said if an algorithm was used to make those crucial decisions at the NDIS, then the government needed to be very confident in the quality of the information being fed into the system, making sure it was accurate, free of errors, and free of bias.

"Accountability is also crucially important – whatever decision is made, for example in the independent assessment area, it must be accountable," he added. "People must understand the reasons for their assessment and they must be able to challenge that decision if they think that the decision is wrong or if it is unfair, especially if it is unlawful."

In its report, the AHRC made a number of recommendations to government, including that it place a ban on the use of facial recognition and other biometric technology in "high-risk" areas.

Where accessible technology was concerned, the commission asked federal, state, territory, and local governments to commit to using digital communication technology that fully complies with recognised accessibility standards. It recommended doing so through the introduction of whole-of-government requirements for compliance with these standards.

"People with disability have a right to access technology," the AHRC said. "Access to new technology, especially digital communication technology, is an enabling right for people with disability because it is critical to the enjoyment of a range of other civil, political, economic, social and cultural rights.

"Good technology design can enable the participation of people with disability as never before – from the use of real-time live captioning to reliance on smart home assistants. On the other hand, poor design can cause significant harm, reducing the capacity of people with disability to participate in activities that are central to the enjoyment of their human rights, and their ability to live independently."
