09-01-2020 5:43 am Published by Nederland.ai

Artificial intelligence is here, and it really influences our lives – whether it's the Alexa smart speaker on our bedside table, the online customer service chat bots, or the smart replies Google drafts for our emails.

But so far, the development of the technology has outpaced regulation. Government institutions are now increasingly dealing with AI-based tools, and they need to figure out how to evaluate them. Take the Food and Drug Administration, which oversees new medical products: it must review and approve new medical products with AI capabilities – such as those that promise to detect eye problems related to diabetes – before they can be sold to us. Or think of the Equal Employment Opportunity Commission, which investigates employment discrimination. Today, the agency also has to make decisions about AI-based recruitment algorithms, such as those that screen applicants' CVs and decide whether or not you deserve an interview.

On Wednesday at CES, the prominent Las Vegas technology trade show, White House officials formally announced how the Office of Science and Technology Policy wants federal agencies to regulate new artificial intelligence-based tools and the industries adopting the technology.

The proposed White House AI guidance addresses some of the biggest concerns of health technologists, AI ethicists, and even some government officials about the technology, but the guidelines are mostly focused on encouraging innovation in artificial intelligence and making sure regulation doesn't get "unnecessarily" in the way.

This points to a persistent problem for AI, one that has already played out in other technology sectors, where a rush to innovate without much oversight has come back to haunt us.

While encouraging innovation in AI is certainly a consideration, technology critics have said that regulators should keep a closer eye on artificial intelligence as it is rolled out in the real world. They argue that artificial intelligence can replicate and even reinforce human prejudices. These tools often function as black boxes – owned and managed by the companies that sell them – which makes it difficult for us to know when or how they can harm real people (or even whether they work as intended). And new AI-based tools can also raise concerns about privacy and surveillance.

For the time being, these new guidelines are precisely that – guidelines – which means that today's memo will not have an immediate effect on the artificial intelligence technology you might encounter in your daily life. But the memo shows how the government is thinking about AI and its possible consequences for Americans. "People should appreciate that the White House is attempting to bring a framework for assessing and justifying the deployment of AI tools, because what we're finding as these tools develop and emerge is that some applications have deeper effects than others," said Nicol Turner Lee, a fellow at the Brookings Institution who examines technology and equality.

The Trump administration wants a national AI effort

Trump and his administration want the US to dominate the AI sector – and they certainly want the US to be better at AI than China. At the beginning of last year, President Donald Trump signed an executive order establishing the "American AI Initiative," which is intended to speed up AI research and help build an AI-competent US workforce, among other goals.

In an overview of 10 basic principles, today's memo to the federal departments and agencies echoes the objectives of that executive order. It urges regulators to take innovation into account and to "consider ways to reduce barriers to AI development and adoption" when weighing how existing laws and potential new rules apply to the emerging technology.

"Federal agencies must avoid regulatory or non-regulatory actions that unnecessarily hinder AI innovation and growth," the memo says. "Agencies should avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot benefit from them." At the same time, the guidelines also insist that regulators be mindful of values such as transparency, risk management, fairness, and non-discrimination.

These are all fair points. By nudging these federal departments and agencies into action, the Trump administration also hopes to avoid a future in which US AI companies might be confronted with a patchwork of local and state regulations, or possibly an excess of federal regulations, which could hinder the technology's expansion.

AI experts told Recode that the AI guidelines are a starting point. “It will take time to assess how effective these principles are in practice, and we will keep a close eye on them,” said Rashida Richardson, director of policy research at the AI Now Institute. “Defining limits for the federal government and the private sector around AI technology will provide more insight to those of us who work in the accountability area.”

Aaron Rieke, the director of the technology rights nonprofit Upturn, said in an email to Recode that, for now, he doesn't think the memo will have much influence: "I don't think these principles will have much influence on the average person, especially in the short term. I think regulators will be able to justify their decisions, good or bad, without much effort."

Importantly, the memo does not actually apply to the artificial intelligence that the American government itself uses (of which there is plenty). For example, a US federal contract database shows that the Centers for Disease Control have purchased facial recognition products (an AI-based technology), while the Department of Commerce appears to be using AI to improve its patent search system.

One of the reasons why AI needs regulation: It involves risks

AI systems are not inherently objective. People build these tools, and AI is often developed using inadequate or biased data, meaning the technology can inherit or even magnify human prejudices such as sexism and racism. For example, when scientists in 2017 trained a computer program to learn English from text on the internet, it ended up biased against women and black people.

Critics say that risk means that the government should aggressively regulate and even prohibit certain uses of artificial intelligence. And some AI tools, such as face recognition, which depend on the collection of sensitive information, have also raised concerns about how this technology could potentially lead to privacy and surveillance nightmares.

All this matters because AI already has the potential to have a real impact on your life, even if you haven't realized it yet. Some landlords have started to push for tenants to use facial recognition to enter their homes, even though the technology is known to be less accurate for people of color and women (and especially women with dark skin), among other groups. Another example: although it was never used, a resume-screening algorithm built by Amazon unwittingly discriminated against female applicants because it was trained on resumes the company had previously collected, most of which came from men. Imagine losing your dream job to a biased algorithm.

"AI systems have the potential to discriminate against the American public based on race, gender – every conceivable criterion," Albert Fox Cahn, a lawyer who heads the Surveillance Technology Oversight Project in New York, told Recode. "This can affect everything: whether you get a job offer, whether you are approved for an apartment or a mortgage, whether you get the good or the bad interest rate. It can affect admission to university and placement at school."

That has left him disappointed with the newly proposed guidelines. "Rather than providing a framework for regulators to actually address discrimination head-on, the White House is encouraging a hands-off approach that will allow AI to simply target historically marginalized communities without the interventions we need," said Cahn. He said the memo's references to values of non-discrimination and transparency don't have much force behind them.

"When you think of where most consumers are most vulnerable to AI, it is in areas such as housing, health care, and employment – the areas that essentially make the front page of the newspaper," Turner Lee said. She said it is not clear what the memo will mean for agencies such as the Department of Labor and the Consumer Financial Protection Bureau compared to, say, the Department of Agriculture.

She adds that it is also unclear whether the agencies are actually willing to identify the risks AI technology poses, or whether they are up to the job of ensuring their regulations keep pace with innovation. "There are many more devils in the details that I would like to see, but I think they are just trying to give us a general framework for some kind of ethical and fair engagement."

Now the White House wants feedback, including from you

The draft guidance is not set in stone. In the coming months it will be open for public feedback, including yours (we will update this piece with how to submit comments as soon as that information becomes available). Once the guidelines have been formally approved, the White House expects the agencies to report on how they intend to meet the new AI expectations.

