Accelerate Deep Learning with Intel-Optimized TensorFlow

Learn how Intel and Google have collaborated to deliver TensorFlow optimizations such as quantization and op fusion. Penporn Koanantakool of Google joins Anavai Ramesh and Andres Rodriguez of Intel to discuss how to use TensorFlow with Intel Neural Compressor to automatically convert models to int8 or bfloat16 data types for improved performance with minimal accuracy loss. They also discuss the new PluggableDevice mechanism, co-architected by Intel and Google, which delivers a scalable way to add device support to TensorFlow.
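
Below is a minimal sketch of the kind of workflow described in the talk: post-training int8 quantization of a TensorFlow Keras model with Intel Neural Compressor. It assumes the neural_compressor 2.x API (PostTrainingQuantConfig, quantization.fit, and the DataLoader helper); the MobileNetV2 model and the random calibration data are illustrative placeholders, not material from the talk.

```python
# Hedged sketch: post-training int8 quantization with Intel Neural Compressor.
# Assumes the neural_compressor 2.x API; the model and calibration data below
# are placeholders standing in for a real model and a representative dataset.
import numpy as np
import tensorflow as tf
from neural_compressor import quantization
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader

# A pretrained FP32 Keras model to quantize (placeholder choice).
fp32_model = tf.keras.applications.MobileNetV2(weights="imagenet")

# A small calibration set; random samples stand in for real inputs here.
calib_samples = [
    (np.random.rand(224, 224, 3).astype("float32"), 0) for _ in range(32)
]
calib_dataloader = DataLoader(framework="tensorflow",
                              dataset=calib_samples,
                              batch_size=8)

# Quantize to int8 with default post-training static quantization settings.
int8_model = quantization.fit(
    model=fp32_model,
    conf=PostTrainingQuantConfig(),
    calib_dataloader=calib_dataloader,
)

# Save the quantized model for deployment.
int8_model.save("./mobilenetv2_int8")
```

For bfloat16, a similarly low-effort path is TensorFlow's own Keras mixed precision API (tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")) on hardware with native bfloat16 support.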
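
The PluggableDevice mechanism lets a separately installed plugin package register new device types with stock TensorFlow at runtime, without modifying TensorFlow itself. As a hedged illustration, the snippet below shows how one such plugin, the Intel Extension for TensorFlow, would surface its devices through the standard tf.config APIs; the package name and the "XPU" device type string are assumptions, not details confirmed in the talk.

```python
# Hedged illustration of the PluggableDevice mechanism: after installing a plugin
# package (e.g. `pip install intel-extension-for-tensorflow[xpu]`, assumed here),
# its devices appear alongside CPUs/GPUs via the standard TensorFlow device APIs.
import tensorflow as tf

# List every physical device TensorFlow can see, including plugin-registered ones.
for device in tf.config.list_physical_devices():
    print(device.device_type, device.name)

# Placing ops on a plugged-in device uses the usual device-scope syntax;
# the "XPU" device type is an assumption about how this plugin registers itself.
if tf.config.list_physical_devices("XPU"):
    with tf.device("/XPU:0"):
        x = tf.random.uniform([1024, 1024])
        y = tf.linalg.matmul(x, x)  # executes on the plugged-in device
```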