Intel expects AI portfolio to earn over $3.5 billion in revenue in 2019

Intel AI Summit 2019

San Francisco, USA: Intel has announced an expansion of its AI portfolio. With the newly expanded portfolio, Intel aims to accelerate AI development, deployment and performance for customers and, importantly, boost its revenue this year.

Intel’s expanded AI portfolio includes new computational hardware and memory products, namely the Intel Nervana NNPs, Intel Movidius VPUs and Intel DevCloud for the Edge.

“These products further strengthen Intel’s portfolio of AI solutions, which is expected to generate more than $3.5 billion in revenue in 2019.”

“The broadest in breadth and depth in the industry, Intel’s AI portfolio helps customers enable AI model development and deployment at any scale from massive clouds to tiny edge devices, and everything in between,” Intel said in a statement.

Intel Nervana NNPs (Neural Network Processors) are AI computing processors built for training (NNP-T1000) and inference (NNP-I1000). They are claimed to be Intel’s first purpose-built ASICs for complex deep learning, offering greater scale and efficiency for cloud and data center customers.

The Intel Movidius Vision Processing Unit (VPU), meanwhile, is a next-generation processing unit for edge media, computer vision and inference applications. It is scheduled for release in the middle of next year and will offer 10 times the inference performance of the previous generation.

Intel Nervana NNPs are part of a systems-level approach to AI, offering a full software stack developed with open components and deep learning framework integration for maximum utilization.

“With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory. Purpose-built hardware like Intel Nervana NNPs and Movidius VPUs are necessary to continue the incredible progress in AI,” said Naveen Rao, Intel Corporate VP and GM – Intel Artificial Intelligence Products Group.

“Using more advanced forms of system-level AI will help us move from the conversion of data into information toward the transformation of information into knowledge,” added Rao.

Intel Nervana NNP-T, for instance, is designed to give customers the right balance of compute, communication and memory, allowing near-linear, energy-efficient scaling from small clusters up to the largest pod supercomputers.

Intel Nervana NNP-I, meanwhile, is designed for power- and budget-efficient processing, making it ideal for running intense, multimodal inference at real-world scale using flexible form factors.

These products were developed to meet the AI processing needs of leading-edge customers such as Baidu and Facebook, Intel said.

“We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana Neural Network Processor for inference and to extend support for our state-of-the-art deep learning compiler, Glow, to the NNP-I,” said Misha Smelyanskiy, Director – AI System Co-Design, Facebook.
