
Beyond the CPU or GPU: Why Enterprise-Scale Artificial Intelligence Requires a More Holistic Approach

May 23, 2018

Industry Assembles at Intel AI DevCon; Updates Provided on Intel AI Portfolio and Intel Nervana Neural Network Processor

The following is an opinion editorial provided by Naveen Rao, vice president and general manager of the Artificial Intelligence Products Group at Intel Corporation.

This is an exciting week as we gather the brightest minds working with artificial intelligence (AI) at Intel AI DevCon, our inaugural AI developer conference. We recognize that achieving the full promise of AI isn’t something we at Intel can do alone. Rather, we need to address it together as an industry, inclusive of the developer community, academia, the software ecosystem and more.

So as I take the stage today, I am excited to do it with so many others throughout the industry. This includes developers joining us for demonstrations, research and hands-on training. We’re also joined by supporters including Google*, AWS*, Microsoft*, Novartis* and C3 IoT*. It is this breadth of collaboration that will help us collectively empower the community to deliver the hardware and software needed to innovate faster and stay nimble on the many paths to AI.

Indeed, when I think about what will help us accelerate the transition to the AI-driven future of computing, it comes down to delivering solutions that are both comprehensive and enterprise-scale: solutions that offer the broadest range of compute, with multiple architectures spanning milliwatts to kilowatts.

Enterprise-scale AI also means embracing and extending the tools, open frameworks and infrastructure the industry has already invested in, to better enable researchers to work across the variety of AI workloads. For example, AI developers are increasingly interested in programming directly against open-source frameworks rather than against a vendor-specific software platform, which lets development proceed more quickly and efficiently.

Today, our announcements will span all of these areas, along with several new partnerships that will help developers and our customers reap the benefits of AI even faster.

Expanding the Intel AI Portfolio to Address the Diversity of AI Workloads

We’ve learned from a recent Intel survey that over 50 percent of our U.S. enterprise customers are turning to existing cloud-based solutions powered by Intel® Xeon® processors for their initial AI needs. This affirms Intel’s approach of offering a broad range of enterprise-scale products – including Intel Xeon processors, Intel® Nervana™ and Intel® Movidius™ technologies, and Intel® FPGAs – to address the unique requirements of AI workloads.

One of the important updates we’re discussing today is a set of optimizations to Intel Xeon Scalable processors. These optimizations deliver significant performance improvements on both training and inference compared to previous generations, a benefit to the many companies that want to take their first steps toward AI on infrastructure they already own and capture the associated total cost of ownership (TCO) benefits.
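
Much of that improvement is delivered through software: MKL-DNN-enabled framework builds plus CPU threading settings matched to the host. The sketch below shows the kind of knobs involved, using TensorFlow 1.x-era APIs; the thread counts and environment values are illustrative assumptions for a hypothetical two-socket Xeon host, not Intel-recommended settings.

```python
import os
import tensorflow as tf  # assumes an MKL-DNN-enabled build, e.g. intel-tensorflow

# Illustrative settings for a hypothetical two-socket, 40-core Xeon host;
# the right values depend on the machine and the workload.
os.environ["OMP_NUM_THREADS"] = "40"                         # MKL compute threads
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # pin threads to cores

config = tf.ConfigProto(
    intra_op_parallelism_threads=40,  # parallelism inside a single op (e.g. a matmul)
    inter_op_parallelism_threads=2,   # how many ops may run concurrently
)
with tf.Session(config=config) as sess:
    pass  # build and run the training or inference graph here
```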

We are also providing several updates on our newest family of Intel® Nervana™ Neural Network Processors (NNPs). The Intel Nervana NNP has an explicit design goal: achieve high compute utilization and support true model parallelism with multichip interconnects. Our industry talks a lot about maximum theoretical performance, or TOP/s (tera-operations per second); the reality is that much of that compute is meaningless unless the architecture has a memory subsystem capable of keeping those compute elements highly utilized. Additionally, much of the industry’s published performance data uses large square matrices that aren’t generally found in real-world neural networks.
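
The arithmetic behind that point is simple: a GEMM multiplying A(M, K) by B(K, N) performs about 2·M·N·K operations, so achieved TOP/s is that count divided by measured runtime, and utilization is achieved performance divided by peak. A minimal sketch, where the runtime and peak figures are hypothetical illustrations rather than measured NNP numbers:

```python
def gemm_ops(m: int, k: int, n: int) -> int:
    """A(m, k) @ B(k, n) costs roughly 2*m*n*k operations (multiply + add)."""
    return 2 * m * n * k

def achieved_utilization(m, k, n, runtime_s, peak_tops):
    achieved_tops = gemm_ops(m, k, n) / runtime_s / 1e12
    return achieved_tops, achieved_tops / peak_tops

# Hypothetical example: the GEMM shape quoted below, a 250-microsecond runtime,
# and a 40 TOP/s peak. None of these are Intel-published figures.
tops, util = achieved_utilization(1536, 2048, 1536, runtime_s=250e-6, peak_tops=40)
print(f"achieved ~{tops:.1f} TOP/s at ~{util:.0%} utilization")
```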

At Intel, we have focused on creating a balanced architecture for neural networks that also includes high chip-to-chip bandwidth at low latency. Initial performance benchmarks on our NNP family show strong competitive results in both utilization and interconnect. Specifics include:

  • General Matrix to Matrix Multiplication (GEMM) operations using A(1536, 2048) and B(2048, 1536) matrix sizes have achieved more than 96.4 percent compute utilization on a single chip [1]. This represents around 38 TOP/s of actual (not theoretical) performance on a single chip.
  • Multichip distributed GEMM operations that support model-parallel training (see the sketch after this list) are realizing nearly linear scaling and 96.2 percent scaling efficiency [2] for A(6144, 2048) and B(2048, 1536) matrix sizes – enabling multiple NNPs to be connected together and freeing us from the memory constraints of other architectures.
  • We are measuring 89.4 percent of theoretical unidirectional chip-to-chip bandwidth [3] at less than 790 ns (nanoseconds) of latency, and we are excited to apply this to our 2.4 Tb/s (terabits per second) high-bandwidth, low-latency interconnects.
  • All of this is happening within a single-chip total power envelope of under 210 watts.

And this is just the prototype of our Intel Nervana NNP (Lake Crest), from which we are gathering feedback from our early partners.
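
The distributed GEMM result above is the essence of model parallelism: a single matrix multiply is split across chips rather than replicated on each. Below is a minimal NumPy sketch of one common partitioning, splitting B’s columns across two devices; it is a generic illustration, not the NNP’s actual scheme.

```python
import numpy as np

# Model-parallel GEMM sketch: each "chip" stores half of B's columns and
# computes an independent partial product; the results are concatenated.
# A generic illustration of column partitioning, not Intel's actual scheme.
rng = np.random.default_rng(0)
A = rng.standard_normal((6144, 2048))   # shared activations
B = rng.standard_normal((2048, 1536))   # weights, to be split across chips

B0, B1 = np.hsplit(B, 2)   # chip 0 holds columns 0..767, chip 1 holds 768..1535
C0 = A @ B0                # would run on chip 0
C1 = A @ B1                # would run on chip 1
C = np.concatenate([C0, C1], axis=1)    # gather over the interconnect

assert np.allclose(C, A @ B)  # identical to the single-chip result
```

The interconnect matters because the chips must exchange activations and partial results; at the quoted 89.4 percent of a 2.4 Tb/s link, that works out to roughly 2.1 Tb/s of delivered unidirectional bandwidth.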

We are building toward our first commercial NNP product, the Intel Nervana NNP-L1000 (Spring Crest), in 2019. We anticipate that the Intel Nervana NNP-L1000 will achieve 3-4 times the training performance of our first-generation Lake Crest product. We will also support bfloat16, a numerical format being adopted industrywide for neural networks, in the Intel Nervana NNP-L1000. Over time, Intel will extend bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs. This is part of a cohesive and comprehensive strategy to bring leading AI training capabilities to our silicon portfolio.
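
What makes bfloat16 attractive for training is that it keeps float32’s 8-bit exponent, and therefore its dynamic range, while truncating the mantissa from 23 bits to 7. In its simplest form, converting a float32 value just means keeping its top 16 bits, as the sketch below shows (hardware typically rounds to nearest rather than truncating):

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16: keep the sign bit, 8 exponent bits,
    and top 7 mantissa bits -- i.e. the high 16 bits of the float32 encoding."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float(b: int) -> float:
    """Expand bfloat16 back to float32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

x = 3.14159
approx = bfloat16_bits_to_float(float_to_bfloat16_bits(x))
print(approx)  # 3.140625: float32's exponent range, about 3 decimal digits
```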

AI for the Real World

The breadth of our portfolio has made it easy for organizations of all sizes to start their AI journey with Intel. For example, Intel is collaborating with Novartis on the use of deep neural networks to accelerate high-content screening – a key element of early drug discovery. The collaboration team cut time to train image analysis models from 11 hours to 31 minutes – an improvement of greater than 20 times [4].

To accelerate customer success with AI and IoT application development, Intel and C3 IoT announced a collaboration featuring an optimized AI software and hardware solution: a C3 IoT AI Appliance powered by Intel AI.

Additionally, we are working to integrate deep learning frameworks including TensorFlow*, MXNet*, PaddlePaddle*, CNTK* and ONNX* onto nGraph, a framework-neutral deep neural network (DNN) model compiler. And we’ve announced that our Intel AI Lab is open-sourcing its natural language processing library for Python* to help researchers begin their own work on NLP algorithms.
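
The common thread in these integrations is framework neutrality: train in whichever frontend you prefer, then hand a standard graph representation to a compiler or runtime. ONNX, named above, is the exchange-format version of that idea. Here is a minimal export sketch using PyTorch as the frontend, chosen purely for illustration since the announcement does not name it:

```python
import torch
import torch.nn as nn

# A toy model; any framework frontend could play this role.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
dummy_input = torch.randn(1, 64)

# Export to ONNX, a framework-neutral graph that a backend compiler
# (nGraph is one example of the category) can then consume.
torch.onnx.export(model, dummy_input, "toy_model.onnx")
```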

The future of computing hinges on our collective ability to deliver the solutions – the enterprise-scale solutions – that organizations can use to harness the full power of AI. We’re eager to engage with the community and our customers alike to develop and deploy this transformational technology, and we look forward to an incredible experience here at AI DevCon.

Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.

Source: Intel measurements on limited release Software Development Vehicle (SDV)

[1] General Matrix-Matrix Multiplication (GEMM) operations; A(1536, 2048), B(2048, 1536) matrix sizes

[2] Two-chip vs. single-chip GEMM operation performance; A(6144, 2048), B(2048, 1536) matrix sizes

[3] Full chip MRB-CHIP MRB data movement using send/recv; Tensor size = (1, 32), average across 50K iterations

[4] 20X claim based on the 21.7X speed-up achieved by scaling from a single-node system to an 8-socket cluster.

8-socket cluster node configuration: CPU: Intel® Xeon® Gold 6148 Processor @ 2.4GHz; Cores: 40; Sockets: 2; Hyper-threading: Enabled; Memory/node: 192GB, 2666MHz; NIC: Intel® Omni-Path Host Fabric Interface (Intel® OP HFI); TensorFlow: v1.7.0; Horovod: 0.12.1; OpenMPI: 3.0.0; Cluster ToR Switch: Intel® Omni-Path Switch

Single-node configuration: CPU: Intel® Xeon Phi™ Processor 7290F; 192GB DDR4 RAM; 1x 1.6TB Intel® SSD DC S3610 Series SC2BX016T4; 1x 480GB Intel® SSD DC S3520 Series SC2BB480G7; Intel® MKL 2017/DAAL/Intel Caffe
