What does optimizing HPC architecture for AI convergence mean? Intel Corporation has a nice solution brief on optimizing #HPC architecture for #AI convergence: https://lnkd.in/gHrBzCN

Key points/steps according to Intel:
1. Understand the current capabilities of your HPC infrastructure.
2. Evaluate the various AI frameworks and libraries.
3. Ensure the AI framework(s) you choose are optimized.
4. If you choose to develop your own algorithms, focus on optimizing them [for the underlying hardware]*.
5. Know what your workloads will look like.

* Text in square brackets is my simplification/edit.

Steps 3 and 4 are the most critical for HPC infrastructure. AI frameworks like #PyTorch and #TensorFlow can be sped up by several orders of magnitude through software means alone. Your implementation matters! Combined with Intel technology, we write algorithms with HPC-first principles and achieve speedups in the range of 10x – 100x. Customers get cost savings and a productivity boost on commodity hardware!

Companies looking to scale their ML programs should check out RocketML. It is an HPC-first product that enables Machine Learning at Scale. I would love to hear from folks having scaling issues.
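To make "your implementation matters" concrete, here is a toy sketch (my own illustration, not from the Intel brief) comparing a naive Python loop against the equivalent vectorized call, which dispatches to optimized native BLAS code under the hood. The function names and sizes are made up for the example:

```python
import time
import numpy as np

def slow_dot(a, b):
    # Naive Python loop: pays interpreter overhead on every element.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

n = 1_000_000
rng = np.random.default_rng(0)
a = rng.random(n)
b = rng.random(n)

t0 = time.perf_counter()
slow = slow_dot(a, b)
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
fast = a @ b  # same math, one vectorized BLAS call
t_vec = time.perf_counter() - t0

assert np.isclose(slow, fast)
print(f"loop: {t_loop:.4f}s  vectorized: {t_vec:.4f}s  "
      f"speedup: {t_loop / t_vec:.0f}x")
```

The two versions compute the same dot product, yet the vectorized one is typically orders of magnitude faster on commodity hardware. The same principle scales up: how you express an algorithm to the framework often matters as much as the hardware it runs on.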