
Beyond the CPU or GPU: Why Enterprise-Scale Artificial Intelligence Requires a More Holistic Approach

May 23, 2018 By admin

Industry Assembles at Intel AI DevCon; Updates Provided on Intel AI Portfolio and Intel Nervana Neural Network Processor

The following is an opinion editorial provided by Naveen Rao, vice president and general manager of the Artificial Intelligence Products Group at Intel Corporation.

This is an exciting week as we gather the brightest minds working with artificial intelligence (AI) at Intel AI DevCon, our inaugural AI developer conference. We recognize that achieving the full promise of AI isn’t something we at Intel can do alone. Rather, we need to address it together as an industry, inclusive of the developer community, academia, the software ecosystem and more.

So as I take the stage today, I am excited to do it with so many others throughout the industry. This includes developers joining us for demonstrations, research and hands-on training. We're also joined by supporters including Google, AWS, Microsoft, Novartis and C3 IoT. It is this breadth of collaboration that will help us collectively empower the community to deliver the hardware and software needed to innovate faster and stay nimble on the many paths to AI.

Indeed, as I think about what will help us accelerate the transition to the AI-driven future of computing, it comes down to delivering solutions that are both comprehensive and enterprise-scale. This means solutions that offer the broadest range of compute, with multiple architectures spanning milliwatts to kilowatts.

Enterprise-scale AI also means embracing and extending the tools, open frameworks and infrastructure the industry has already invested in, to better enable researchers to work across the variety of AI workloads. For example, AI developers are increasingly interested in programming directly to open-source frameworks rather than to a specific vendor's software platform, again allowing development to occur more quickly and efficiently.

Today, our announcements will span all of these areas, along with several new partnerships that will help developers and our customers reap the benefits of AI even faster.

Expanding the Intel AI Portfolio to Address the Diversity of AI Workloads

We’ve learned from a recent Intel survey that over 50 percent of our U.S. enterprise customers are turning to existing cloud-based solutions powered by Intel® Xeon® processors for their initial AI needs. This affirms Intel’s approach of offering a broad range of enterprise-scale products – including Intel Xeon processors, Intel® Nervana™ and Intel® Movidius™ technologies, and Intel® FPGAs – to address the unique requirements of AI workloads.

One of the important updates we're discussing today is a set of optimizations for Intel Xeon Scalable processors. These optimizations deliver significant performance improvements in both training and inference compared to previous generations, which benefits the many companies that want to use infrastructure they already own to achieve the related TCO benefits as they take their first steps toward AI.
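
As a concrete illustration of what such framework-level optimizations look like from the developer's side, the snippet below shows the kind of threading configuration commonly paired with Intel-optimized TensorFlow on Xeon in this era. The thread counts are illustrative assumptions for a two-socket system, not Intel's published settings.

```python
import os
import tensorflow as tf  # TensorFlow 1.x-era API, matching the 2018 time frame

# Pin MKL/OpenMP threads to physical cores. The values below are illustrative
# assumptions for a two-socket Xeon system; tune them to the actual core count.
os.environ["OMP_NUM_THREADS"] = "20"  # cores per socket (assumed)
os.environ["KMP_BLOCKTIME"] = "1"     # release idle MKL threads quickly
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

config = tf.ConfigProto(
    intra_op_parallelism_threads=20,  # parallelism inside a single op (e.g., one GEMM)
    inter_op_parallelism_threads=2,   # independent ops that may run concurrently
)

with tf.Session(config=config) as sess:
    ...  # build and run the training or inference graph here
```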

We are also providing several updates on our newest family of Intel® Nervana™ Neural Network Processors (NNPs). The Intel Nervana NNP was designed explicitly to achieve high compute utilization and to support true model parallelism through multichip interconnects. Our industry talks a lot about maximum theoretical performance (TOP/s); the reality, however, is that much of that compute is meaningless unless the architecture has a memory subsystem capable of keeping those compute elements highly utilized. Additionally, much of the industry's published performance data uses large square matrices that aren't generally found in real-world neural networks.
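
To see why the memory subsystem, rather than peak TOP/s, often decides real utilization, consider the arithmetic intensity of a GEMM: how many FLOPs it performs per byte it must move. The sketch below is a back-of-the-envelope illustration; the matrix shapes and 2-byte element size are assumptions for the example, not NNP specifications.

```python
def gemm_arithmetic_intensity(m: int, k: int, n: int, bytes_per_elem: int = 2) -> float:
    """FLOPs per byte moved for C(m, n) = A(m, k) @ B(k, n), assuming A and B
    are each read once and C is written once (ideal caching)."""
    flops = 2 * m * k * n                                   # one multiply + one add per term
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)  # A + B + C traffic
    return flops / bytes_moved

# Large square matrices, as in much published benchmark data: very high
# intensity, so the chip stays compute-bound and utilization looks excellent.
print(gemm_arithmetic_intensity(2048, 2048, 2048))  # ~683 FLOPs/byte

# A skinnier shape closer to a real layer at small batch size: intensity
# collapses, and memory bandwidth, not peak TOP/s, sets the ceiling.
print(gemm_arithmetic_intensity(32, 2048, 1000))    # ~31 FLOPs/byte
```

Square benchmark matrices sit far up the compute-bound side of this trade-off; the skinnier shapes typical of real networks do not, which is why sustained utilization on them is the harder claim.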

At Intel, we have focused on creating a balanced architecture for neural networks that also includes high chip-to-chip bandwidth at low latency. Initial performance benchmarks on our NNP family show strong competitive results in both utilization and interconnect. Specifics include:

  • General Matrix to Matrix Multiplication (GEMM) operations using A(1536, 2048) and B(2048, 1536) matrix sizes have achieved more than 96.4 percent compute utilization on a single chip¹. This represents around 38 TOP/s of actual (not theoretical) performance on a single chip.
  • Multichip distributed GEMM operations that support model parallel training are realizing nearly linear scaling, with 96.2 percent scaling efficiency² for A(6144, 2048) and B(2048, 1536) matrix sizes – enabling multiple NNPs to be connected together and freeing us from the memory constraints of other architectures.
  • We are measuring 89.4 percent unidirectional chip-to-chip efficiency³ of theoretical bandwidth at less than 790 ns (nanoseconds) of latency, and we are excited to apply this to the 2.4 Tb/s (terabits per second) of high-bandwidth, low-latency interconnects.
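
As a quick sanity check on the figures above (simple arithmetic over the quoted numbers, not a reproduction of Intel's benchmark methodology):

```python
# Single-chip GEMM from footnote 1: C(1536, 1536) = A(1536, 2048) @ B(2048, 1536)
flops_per_gemm = 2 * 1536 * 2048 * 1536    # ~9.66 GFLOP per GEMM call

achieved_tops = 38.0                        # quoted actual performance
utilization = 0.964                         # quoted compute utilization
implied_peak = achieved_tops / utilization  # ~39.4 TOP/s peak at this shape

# 96.2 percent scaling efficiency on two chips versus one (footnote 2)
# means close to ideal 2x throughput: 2 * 0.962 = ~1.92x of a single chip.
two_chip_speedup = 2 * 0.962

print(f"{implied_peak:.1f} TOP/s peak, {two_chip_speedup:.2f}x on two chips")
```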

All of this is happening within a total single-chip power envelope of under 210 watts. And this is just the prototype of our Intel Nervana NNP (Lake Crest), from which we are gathering feedback from our early partners.

We are building toward our first commercial NNP product, the Intel Nervana NNP-L1000 (Spring Crest), in 2019. We anticipate that the Intel Nervana NNP-L1000 will achieve 3 to 4 times the training performance of our first-generation Lake Crest product. The Intel Nervana NNP-L1000 will also support bfloat16, a numerical format being adopted industrywide for neural networks. Over time, Intel will extend bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs, as part of a cohesive and comprehensive strategy to bring leading AI training capabilities to our silicon portfolio.
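
For context, bfloat16 keeps float32's sign bit and full 8-bit exponent but shortens the mantissa from 23 bits to 7, preserving float32's dynamic range in half the storage. The sketch below is a minimal illustration of the conversion (round-to-nearest-even, with NaN handling omitted for brevity); it is not Intel's implementation.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Reduce a float32 to the 16-bit bfloat16 pattern: 1 sign bit,
    8 exponent bits (unchanged from float32), 7 mantissa bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # Round to nearest even before dropping the low 16 mantissa bits.
    bias = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + bias) >> 16) & 0xFFFF

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the dropped bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

b = float32_to_bfloat16_bits(3.1415926)
print(hex(b), bfloat16_bits_to_float32(b))  # 0x4049 3.140625: ~2-3 decimal digits survive
```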

AI for the Real World

The breadth of our portfolio has made it easy for organizations of all sizes to start their AI journey with Intel. For example, Intel is collaborating with Novartis on the use of deep neural networks to accelerate high-content screening – a key element of early drug discovery. The collaboration team cut the time to train image analysis models from 11 hours to 31 minutes – an improvement of greater than 20 times⁴.
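
A quick check of the quoted training times (the footnote's 21.7X figure comes from Intel's own single-node versus 8-socket-cluster measurement):

```python
# Quoted result: training time cut from 11 hours to 31 minutes.
speedup = (11 * 60) / 31
print(f"{speedup:.1f}x")  # ~21.3x, consistent with "greater than 20 times"
```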

To accelerate customer success with AI and IoT application development, Intel and C3 IoT announced a collaboration featuring an optimized AI software and hardware solution: a C3 IoT AI Appliance powered by Intel AI.

Additionally, we are working to integrate deep learning frameworks including TensorFlow, MXNet, PaddlePaddle, CNTK and ONNX onto nGraph, a framework-neutral deep neural network (DNN) model compiler. And we've announced that our Intel AI Lab is open-sourcing the Natural Language Processing Library for JavaScript, which helps researchers begin their own work on NLP algorithms.
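
To illustrate what "framework-neutral" means in practice, here is a conceptual sketch of the compiler shape nGraph occupies; the names and registries below are hypothetical illustrations, not nGraph's actual API. Frontends translate each framework's graph into a common intermediate representation (IR), and backends lower that IR to a target such as a CPU, NNP or FPGA.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class IRNode:
    """One operation in a framework-neutral intermediate representation."""
    op: str                                    # e.g. "MatMul", "Relu"
    inputs: List["IRNode"] = field(default_factory=list)

# Hypothetical registries: one frontend per framework, one backend per target.
FRONTENDS: Dict[str, Callable[[object], IRNode]] = {}
BACKENDS: Dict[str, Callable[[IRNode], Callable]] = {}

def compile_model(framework: str, model: object, target: str) -> Callable:
    """Translate a framework graph into the common IR, then lower the IR
    to an executable for the requested target (CPU, NNP, FPGA, ...)."""
    ir = FRONTENDS[framework](model)  # e.g. a TensorFlow graph -> IR
    return BACKENDS[target](ir)       # e.g. IR -> CPU executable
```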

The future of computing hinges on our collective ability to deliver the solutions – the enterprise-scale solutions – that organizations can use to harness the full power of AI. We’re eager to engage with the community and our customers alike to develop and deploy this transformational technology, and we look forward to an incredible experience here at AI DevCon.

Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.

Source: Intel measurements on limited release Software Development Vehicle (SDV)

1 General Matrix-Matrix Multiplication (GEMM) operations; A(1536, 2048), B(2048, 1536) matrix sizes

2 Two chip vs. single chip GEMM operation performance; A(6144, 2048), B(2048, 1536) matrix sizes

3 Full chip MRB-CHIP MRB data movement using send/recv, Tensor size = (1, 32), average across 50K iterations

4 20X claim based on 21.7X speed-up achieved by scaling from a single-node system to an 8-socket cluster.

8-socket cluster node configuration: CPU: Intel® Xeon® Gold 6148 Processor @ 2.4GHz; Cores: 40; Sockets: 2; Hyper-threading: Enabled; Memory/node: 192GB, 2666MHz; NIC: Intel® Omni-Path Host Fabric Interface (Intel® OP HFI); TensorFlow: v1.7.0; Horovod: 0.12.1; OpenMPI: 3.0.0; Cluster ToR Switch: Intel® Omni-Path Switch

Single node configuration: CPU: Intel® Xeon® Phi Processor 7290F; 192GB DDR4 RAM; 1x 1.6TB Intel® SSD DC S3610 Series SC2BX016T4; 1x 480GB Intel® SSD DC S3520 Series SC2BB480G7; Intel® MKL 2017/DAAL/Intel Caffe


