Every company uses data, and an increasing proportion of that information is fed into Artificial Intelligence (AI) engines. Beyond retail organizations, banking institutions and construction companies (to name three standard industry verticals), AI-driven data analysis is also used every day by the butcher, the baker and the candlestick maker (three less traditionally cited verticals, with a rub-a-dub-dub connection).
Our police forces also make use of AI – in particular to sift through the huge amounts of information recorded on portable body cameras. But there is a problem. To use these new techniques properly, we must ensure that the AI software engines tasked with understanding what is happening in the video footage are unbiased and thoroughly responsible.
Police forces across Europe, North America and beyond have not yet fully embraced these technologies, and in many cases still depend on old, time-consuming practices – such as having people watch video footage and fill out a barrage of paper forms – to make sense of all this information.
The police sixth sense
The goal is to give police officers an extra sense. Forces around the world have long talked about having their five basic human senses (touch, sight, hearing, taste and smell) plus an additional sixth sense, which they call police instinct. That sixth sense is drawn from suspicion, natural gut feeling, fear, intuition and common sense.
With the right kind of AI applied to the enforcement tasks of our forces, we might reach a point where the police also talk about a seventh sense – in this case an electronic, AI-based one. But the most important challenge in this world of mission-critical communication is how to ensure it is used responsibly.
In the area of public safety, responsible AI means a more focused (or constrained) approach to the technology, as we try to put better decision-making in the hands of professionals who are already accustomed to making split-second decisions under pressure. AI must support and augment human judgment, not dampen or displace it.
The argument for this is that any implementation of AI in any form of security service must be compatible with existing processes and practices, because those processes form the basis on which a given criminal justice system operates.
AI must “get” human culture
According to Paul Steinberg, chief technology officer at Motorola Solutions, public-safety AI must be anchored in (and measured against) culturally and ethically sound, generally accepted methods. It must also be fair, easy to understand and bound by strict codes of privacy and security. In short, it must be trusted by its users – the police – and by society as a whole.
“It is important to understand that AI is fundamentally amoral. It is not directly influenced by human discriminatory tendencies, emotions, distractions or fatigue. But issues such as bias occur when the output of the AI process results in inconsistent treatment across groups. This is often because the data used to train the AI are themselves biased – for example, incorrectly identifying faces for a specific population defined by race, gender, age or physiology,” said Steinberg of Motorola Solutions.
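The inconsistent treatment Steinberg describes can be made concrete with a simple fairness check: compare a model's error rate per demographic group. The sketch below is illustrative only – the groups, records and threshold are invented, not drawn from any real system.

```python
# Illustrative sketch: checking whether a face-matching model's error
# rate is consistent across groups. All data here are made up.
from collections import defaultdict

def per_group_error_rates(records):
    """records: iterable of (group, predicted_match, actual_match) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation results for two groups.
records = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", True, True),
]
rates = per_group_error_rates(records)
# A large gap between groups signals the inconsistent treatment Steinberg
# describes, usually traceable to biased training data.
print(rates)  # group_b's error rate is double group_a's
```

In practice such audits run over large labeled evaluation sets, but the principle is the same: bias shows up as a measurable gap between groups, not as intent in the code.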
Why should we listen to Motorola Solutions on this topic? The company (Motorola, the parent brand that spans Motorola Mobility and Motorola Solutions) is known for its smartphones, its microprocessors (if you are geeky enough to know) and its personal pager products (if you are old enough to remember), but it is also known for its Tetra brand of mission-critical communication equipment, which has been widely used by police forces in the UK and elsewhere.
Motorola Solutions emphasizes the necessity of “human in the loop” as a design principle for mature AI systems. In public safety, this amounts to AI that augments human decision-making without displacing or overruling human judgment. In other words, AI can make suggestions, but leaves decisions to people. This means the knowledge and intuition of the police – their sixth sense – is not pushed aside.
High-velocity human factors (HVHF)
Within human in the loop sits a set of disciplines referred to as High-Velocity Human Factors (HVHF). These recognize that the more stress an individual experiences, the less cognitive capacity they have left for anything else. If a police officer has drawn a weapon, there is little else he or she can – or should – focus on other than the threat at hand. The paradox of HVHF is that the more an individual could benefit from technology, the less mental capacity they have to use it.
“In these circumstances, the temptation is to apply AI to ease the burden on the officer and make decisions on their behalf. After all, AI is not bound by the limitations of HVHF and human emotion,” added Steinberg of Motorola Solutions. “But to keep humans in the loop in the right way, we need to be extremely judicious about every AI application, proceed thoughtfully and defer to human judgment, procedures and training.”
So instead of helping to make decisions, the approach is for AI to be most helpful by automating existing workflows. It must first understand those workflows and their constraints, and then assist the user step by step without intervening. Here, AI is best suited to recognizing, interpreting and contextualizing meaningful input – the drawing of a weapon means a threat has arisen – and then playing only an advisory role in determining appropriate actions, such as requesting backup.
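The advisory pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the event names and the recommendation mapping are hypothetical, and the key property is that the system only ever suggests – a human must confirm before a suggestion becomes an action.

```python
# Illustrative sketch of an advisory (human-in-the-loop) workflow:
# the system recognizes and contextualizes an event, then recommends
# an action; only a human confirmation turns it into an action.

def recommend(event):
    """Map a recognized event to an advisory recommendation, never an action."""
    advice = {
        "weapon_drawn": "Threat detected: recommend requesting backup",
        "vehicle_pursuit": "Pursuit detected: recommend notifying dispatch",
    }
    return advice.get(event, "No recommendation")

def advisory_step(event, human_confirms):
    """The AI suggests; the decision stays with the person."""
    suggestion = recommend(event)
    if suggestion != "No recommendation" and human_confirms(suggestion):
        return f"ACTION: {suggestion}"
    return f"SUGGESTION ONLY: {suggestion}"

# The officer (simulated here by a callback) keeps the final decision.
print(advisory_step("weapon_drawn", human_confirms=lambda s: True))
print(advisory_step("weapon_drawn", human_confirms=lambda s: False))
```

The design choice worth noting is that the confirmation step is structural, not optional: there is no code path from a recognized event straight to an action.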
The seventh sense
As with any AI system, another important aspect of trust is the ability to easily explain how it works. This is essential not just for police officers, but for the public they serve, who must have faith in the tools used on their behalf. It is difficult to trust a system that operates like a “black box”, drawing conclusions and making recommendations that are hard to understand.
Again, this is where Motorola Solutions' Steinberg says mature AI is crucial. “As with any technology, AI-based solutions should be as reliable as possible. For officers in those HVHF situations, this means ensuring that, at the moment of emergency, what worked best – or was even learned as ‘muscle memory’ – never gets compromised,” he said.
There is clearly a certain degree of inevitability here. Police forces are starting to use far more electronic data capture and analysis techniques, and we want our men and women in blue (or whatever color your country's force wears) to be strengthened with extra intelligence. But at the same time, we don't want them to lose the personal human nose for trouble – and the desire to do good – that led them to pick up the profession in the first place.