Police forces, hospitals and councils struggle to understand how to use artificial intelligence because of a lack of clear ethical guidance from the government, according to the country's sole surveillance regulator.
The surveillance camera commissioner, Tony Porter, said he received requests all the time from public bodies that did not know where the limits lay when it came to the use of facial recognition, biometric and lip-reading technology.
“Facial recognition technology is now being sold as standard in CCTV systems, for example, so hospitals have to work out whether they should use it,” Porter said. “Police are wearing more and more body cameras. What are the appropriate limits on their use?
“The problem is that there is insufficient guidance for public bodies to know what is appropriate and what is not, and the public have no idea what is going on because there is no real transparency.”
The watchdog's comments came as it emerged that Downing Street had ordered a review, led by the Committee on Standards in Public Life, whose chair has called on public bodies to disclose when they use algorithms in decision-making.
Lord Evans, a former head of MI5, told the Sunday Telegraph it was “very difficult to find out where AI is being used in the public sector” and that “it should at least be visible, and declared, where it has the potential to affect civil liberties and human rights and freedoms”.
AI is increasingly used across the public sector, in surveillance and elsewhere. In September, the high court ruled that police use of automatic facial recognition technology to scan people in crowds was lawful.
Its use by South Wales police was challenged by Ed Bridges, a former Lib Dem councillor, who noticed the cameras when he went out to buy a lunchtime sandwich, but the court ruled that the intrusion into privacy was proportionate.
For the past three years, Durham police have been trialling an AI tool developed with the University of Cambridge to predict whether an arrested person is likely to reoffend and should therefore not be released on bail.
Similar technologies used in the US, where they also inform sentencing decisions, have been accused of wrongly flagging black people as more likely to be future criminals, but the results of the British trial have yet to be made public.
The committee is due to report to Boris Johnson in February, but Porter said the task was urgent, given the rapid pace of technological change and a patchwork system of regulation with no single body in overall charge.
The information commissioner is responsible for the use of personal data, but not for surveillance, while Porter's office regulates the use of CCTV systems and associated technologies, including facial recognition and lip-reading software.
“We have been calling for a wider review for months,” Porter said. “The SCC, for example, is the only such regulator in England and Wales, and we date back to when the iPhone 5 was new and exciting. So much has changed since then.”
Source: https://www.theguardian.com/technology/2019/dec/29/lack-of-guidance-leaves-public-services-in-limbo-on-ai-says-watchdog