Deep Learning Essentials

CPU cores

Most deep learning applications and libraries run on a single CPU core unless they are used within a parallelization framework such as the Message Passing Interface (MPI), MapReduce, or Spark. For example, CaffeOnSpark (https://github.com/yahoo/CaffeOnSpark), developed by the team at Yahoo!, uses Spark with Caffe to parallelize network training across multiple GPUs and CPUs. In most typical single-machine settings, one CPU core is sufficient for deep learning application development.
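To check how many CPU cores your machine exposes, and to pin a library to a single core, a minimal sketch in Python might look like the following. The use of the `OMP_NUM_THREADS` environment variable assumes the library in question is built on OpenMP-based math kernels (as many deep learning frameworks are); other libraries may use a different setting.

```python
import multiprocessing
import os

# Report how many logical CPU cores this machine exposes.
cores = multiprocessing.cpu_count()
print(f"Logical CPU cores available: {cores}")

# Many OpenMP-backed libraries honor OMP_NUM_THREADS; setting it to 1
# mirrors the single-core scenario described above. This must be set
# before the deep learning library is imported.
os.environ["OMP_NUM_THREADS"] = "1"
print(f"OMP_NUM_THREADS = {os.environ['OMP_NUM_THREADS']}")
```

Only when scaling beyond one machine, or explicitly parallelizing data preprocessing, does raising this limit or moving to a framework like Spark become worthwhile.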