28-01-2020 7:01 am Published by Nederland.ai

Artificial intelligence (AI) has traditionally lived in the cloud, because AI algorithms crunch huge amounts of data and consume huge computing resources. But AI does not only live in the cloud. In many situations, the data must be crunched and decisions must be made locally, on devices close to the edge of the network.

With AI on the edge, business-critical and time-sensitive decisions can be made faster, more reliably and more securely. The rush to push AI to the edge is fueled by the rapid growth of smart devices at the edge of the network – smartphones, smart watches and sensors placed on machines and infrastructure. Earlier this month, Apple spent $200 million to acquire Xnor.ai, a Seattle-based AI startup focused on low-power machine learning software and hardware. Microsoft offers an extensive toolkit called Azure IoT Edge that makes it possible to move AI workloads to the edge of the network.

Will AI continue to move to the edge? What are the advantages and disadvantages of AI on the edge versus AI in the cloud? To understand what the future holds for AI on the edge, it is useful to look back at the history of computing and how the pendulum has swung between centralized and decentralized intelligence across four computing paradigms.

Centralized versus decentralized

Since the early days of computing, one of the design challenges has always been where intelligence should live in a network. As I noted in an article in the Harvard Business Review in 2001, there has been an "intelligence migration" from centralized intelligence to decentralized intelligence – a cycle that is now repeating itself.

The first era of computing was the mainframe, with intelligence concentrated in a huge central computer that had all the computing power. On the other side of the network were terminals that consisted essentially of a green screen and a keyboard, with little intelligence of their own – hence the name "dumb terminals."

The second era of computing was the desktop or personal computer (PC), which turned the mainframe paradigm upside down. PCs contained all the intelligence for storage and computation locally and did not even have to be connected to a network. This decentralized intelligence heralded the democratization of computing and led to the rise of Microsoft and Intel, with their vision of a PC in every home and on every desk.

The third era of computing, called client-server computing, offered a compromise between the two extremes. Large servers did the heavy lifting at the back end, while "front-end intelligence" was collected and stored in client hardware and software on networked devices.

The fourth era of computing is the cloud computing paradigm, pioneered by companies such as Amazon with its Amazon Web Services, Salesforce.com with its SaaS (Software as a Service) offering, and Microsoft with its Azure cloud platform. The cloud offers an enormous amount of computing power and very cheap memory and storage. It would seem to make sense for AI applications to be housed in the cloud, since the computing power used by AI algorithms increased 300,000-fold between 2012 and 2019, doubling roughly every three and a half months.

The pendulum swings again

However, cloud-based AI has its problems. First, cloud-based AI suffers from latency – the delay that occurs when data goes to the cloud for processing and the results are sent back to a local device via the network. In many situations, latency can have serious consequences. For example, if a sensor in a chemical plant predicts an impending explosion, the plant must be shut down immediately. A security camera at an airport or factory must recognize intruders and respond immediately. An autonomous vehicle cannot even wait a tenth of a second to activate an emergency stop when the AI algorithm predicts an impending collision. In these situations, AI must be at the edge, where decisions can be made faster without having to depend on network connections and without moving huge amounts of data back and forth across a network.

The pendulum is swinging again, from centralization to decentralization of intelligence – just like we saw 40 years ago with the shift from mainframe computing to desktop computing.

However, as we discovered with PCs, life is not easy on the edge. There is a limit to the amount of computing power that can be put into a camera, sensor or smartphone. In addition, many devices at the edge of the network are not connected to a power source, which raises concerns about battery life and heat dissipation. These challenges are being addressed by companies such as Tesla, ARM and Intel as they develop more efficient processors and leaner algorithms that use less power.
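One common way to make a model "leaner" for edge hardware is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats, which cuts memory and bandwidth roughly fourfold and reduces energy per operation. The snippet below is a minimal sketch of that idea in NumPy; the per-tensor scaling scheme and the toy weight matrix are illustrative assumptions, not any particular chip vendor's implementation.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 values plus a per-tensor scale factor."""
    scale = np.max(np.abs(weights)) / 127.0          # largest magnitude maps to +/-127
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for computation."""
    return q.astype(np.float32) * scale

# Illustrative weight matrix, a stand-in for one layer of a trained model
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)

print("float32 size:", w.nbytes, "bytes")            # 262144 bytes
print("int8 size:   ", q.nbytes, "bytes")            # 65536 bytes, about 4x smaller
print("max rounding error:", np.max(np.abs(w - dequantize(q, scale))))
```

The trade-off is a small loss of precision (bounded by half a quantization step per weight) in exchange for a model that fits in less memory and moves less data, which is usually what dominates power consumption on edge devices.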

But there are still times when AI is better off in the cloud. When decisions require enormous computing power and do not have to be made in real time, AI should remain in the cloud. For example, if AI is used to interpret an MRI scan or to analyze geospatial data collected by a drone above a farm, we can use the full power of the cloud, even if we have to wait a few minutes or a few hours for the decision.

Training versus inference

One way to determine where AI should live is to understand the difference between training and inference in AI algorithms. When AI algorithms are built and trained, the process requires huge amounts of data and computing power. To teach an autonomous vehicle to recognize pedestrians or traffic lights, you must feed the algorithm millions of images. However, once the algorithm has been trained, it can make an "inference" locally, by looking at a single object and determining whether it is a pedestrian. In inference mode, the algorithm uses its training to make far less computation-intensive decisions at the edge of the network.
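As a concrete illustration of that split, the sketch below trains a tiny classifier (the compute-heavy, cloud-side step) and then exports only the learned parameters, so that each edge decision reduces to a single cheap dot product. The model, the synthetic data and the file name model.npz are toy assumptions; a real perception model would be trained on GPUs with millions of labeled images.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- Cloud side: training (compute-heavy, needs lots of data) ----
X = rng.normal(size=(10_000, 8))                     # toy stand-in for millions of labeled images
y = (X @ rng.normal(size=8) > 0).astype(float)       # synthetic labels

w = np.zeros(8)
b = 0.0
for _ in range(500):                                 # many passes over a large dataset
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))           # sigmoid prediction
    w -= 0.1 * (X.T @ (p - y) / len(y))              # gradient descent on cross-entropy loss
    b -= 0.1 * np.mean(p - y)

np.savez("model.npz", w=w, b=b)                      # ship only the trained parameters to the edge

# ---- Edge side: inference (one cheap dot product per decision) ----
params = np.load("model.npz")

def is_pedestrian(features: np.ndarray) -> bool:
    """Single low-cost decision: no training data or heavy compute needed."""
    score = features @ params["w"] + params["b"]
    return float(score) > 0.0

print(is_pedestrian(rng.normal(size=8)))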

AI in the cloud can work synergistically with AI on the edge. Consider an AI-powered vehicle like a Tesla. AI at the edge of the network makes countless decisions in real time, such as braking, steering and lane changes. At night, when the car is parked and connected to a Wi-Fi network, data is uploaded to the cloud to further train the algorithm. The smarter algorithm can then be downloaded to the car from the cloud – a virtuous cycle that Tesla has repeated hundreds of times through cloud software updates.
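The loop itself can be sketched in a few lines. Everything below is a deliberately simplified stand-in – a one-parameter "model", random numbers in place of sensor data, and a trivial refit in place of real retraining – meant only to show the shape of the cycle: infer at the edge all day, retrain centrally overnight, then pull the improved model back down.

```python
import random

class EdgeModel:
    def __init__(self, threshold: float):
        self.threshold = threshold                   # single learned parameter, for illustration only

    def infer(self, sensor_value: float) -> str:
        # Real-time, low-latency decision made on the vehicle itself.
        return "brake" if sensor_value > self.threshold else "cruise"

def drive_all_day(model: EdgeModel, n_frames: int = 1000):
    """Edge: act in real time and log every (input, decision) pair."""
    log = []
    for _ in range(n_frames):
        reading = random.random()                    # stand-in for camera/radar input
        log.append((reading, model.infer(reading)))
    return log

def retrain_in_cloud(log) -> EdgeModel:
    """Cloud: the compute-heavy step; here just refit the threshold from the logs."""
    readings = [r for r, _ in log]
    return EdgeModel(sum(readings) / len(readings))  # trivial "training" for illustration

model = EdgeModel(threshold=0.5)
for night in range(3):                               # repeat the virtuous cycle
    day_log = drive_all_day(model)                   # edge inference while driving
    model = retrain_in_cloud(day_log)                # overnight upload and retraining
    print(f"after night {night}: threshold = {model.threshold:.3f}")
```

The division of labor is the point: the edge handles latency-critical inference, the cloud handles data-hungry retraining, and the network only has to carry logs and updated model parameters rather than a continuous stream of raw sensor data.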

Embracing the wisdom of the "and"

There will always be a need for AI in the cloud, just as there will be more and more reasons to put AI on the edge. It is not an either/or answer; it is an "and." AI will live where it needs to live, just as intelligence will live where it must live. I see AI evolving towards "ambient intelligence" – distributed, ubiquitous and connected. In this vision of the future, intelligence at the edge will complement intelligence in the cloud, for a better balance between the demands of centralized computing and localized decision-making.
