07-01-2020 5:21 am Published by Nederland.ai Leave your thoughts

During a press conference at the 2020 Consumer Electronics Show, Intel gave a small update on its current AI and machine learning hardware acceleration efforts. Details were a bit hard to come by at press time, but platforms group executive vice president Navin Shenoy previewed the performance enhancements arriving with the chipmaker's third-generation Xeon Scalable processor family, codenamed Cooper Lake.

The 14-nanometer (14nm++) Cooper Lake, which will be available in the first half of 2020, will yield up to a 60% increase in both AI inference and training performance. That builds on the 30-fold improvement in deep learning inference performance Intel claims to have delivered between 2017 and 2019; 2017 was the year the company released its first processor with AVX-512, a set of 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions.

Part of what delivers this improvement is DL Boost, a series of x86 technologies designed to accelerate AI vision, speech, language, generative, and recommendation workloads. Starting with Cooper Lake products, DL Boost will support bfloat16 (Brain Floating Point), a number format originated by Google and implemented in the third generation of its custom-designed Tensor Processing Unit AI accelerator chips.
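For context on why bfloat16 matters for training: it keeps float32's 8-bit exponent (and thus its dynamic range) but truncates the mantissa from 23 bits to 7, halving memory and bandwidth per value. A minimal sketch of the conversion in plain Python, using simple truncation of the low 16 bits (hardware implementations typically round to nearest even instead):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Convert a float to bfloat16 by keeping the top 16 bits of its
    float32 representation (truncation; hardware usually rounds)."""
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16

def bfloat16_bits_to_float32(bits16: int) -> float:
    """Widen bfloat16 bits back to float32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

# Same range as float32, but only ~2-3 decimal digits of precision:
# 3.14159 survives the round trip as 3.140625.
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159))
```

Because the exponent field is unchanged, converting between float32 and bfloat16 is just a bit shift, which is part of why the format is cheap to support in silicon.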

As a refresher, Cooper Lake features up to 56 processor cores per socket, twice the core count of Intel's second-generation Xeon Scalable chips. The new parts offer higher memory bandwidth and higher AI inference and training performance at lower power, as well as platform compatibility with the upcoming 10-nanometer Ice Lake processors.

More data center AI runs on Intel products than on any other platform, the company claims.

AI looms large in Intel's future, and its books reflect as much: the Santa Clara company's AI chip segments notched $3.5 billion in revenue this year, up from $1 billion a year in 2017. Intel expects its market opportunity to grow roughly 30% annually, from $2.5 billion in 2017 to $10 billion by 2022, and it expects the AI silicon market as a whole to exceed $25 billion by 2024.

In December, Intel bought Habana Labs, an Israeli developer of programmable AI and machine learning accelerators for cloud data centers, for an estimated $2 billion. That deal followed the September 2016 purchase of San Mateo-based Movidius, which specialized in low-power processor designs for computer vision. Intel purchased Field Programmable Gate Array (FPGA) manufacturer Altera in 2015, and a year later acquired Nervana, rounding out its hardware platform offering and paving the way for an entirely new generation of AI accelerator chipsets. And in August 2018, Intel snapped up Vertex.ai, a startup developing a platform-agnostic AI model suite.

