Evaluation of GPU architectures using spiking neural networks
Document Type
Conference Presentation
Department
Electrical and Computer Engineering
Conference Title
Proceedings - 2011 Symposium on Application Accelerators in High-Performance Computing, SAAHPC 2011
Date of Presentation
11-1-2011
Abstract
During recent years, General-Purpose Graphics Processing Units (GP-GPUs) have entered the field of High-Performance Computing (HPC) as one of the primary architectural focuses for many research groups working with complex scientific applications. Nvidia's Tesla C2050, codenamed Fermi, and AMD's Radeon 5870 are two devices positioned to meet the computationally demanding needs of supercomputing research groups across the globe. Though Nvidia GPUs powered by CUDA have been the frequent choice of performance-centric research groups, the introduction and growth of OpenCL has promoted AMD GP-GPUs as potential accelerator candidates that can challenge Nvidia's stronghold. These devices not only offer a plethora of features for application developers to explore, but their radically different architectures call for a detailed study that weighs their merits and evaluates their potential to accelerate complex scientific applications. In this paper, we present our performance analysis research comparing Nvidia's Fermi and AMD's Radeon 5870 using OpenCL as the common programming model. We have chosen four different neuron models for Spiking Neural Networks (SNNs), each with different communication and computation requirements, namely the Izhikevich, Wilson, Morris-Lecar (ML), and Hodgkin-Huxley (HH) models. We compare the runtime performance of the Fermi and Radeon GPUs with an implementation that exhausts all optimization techniques available with OpenCL. Several equivalent architectural parameters of the two GPUs are studied and correlated with the application performance. In addition to the comparative study effort, our implementations were able to achieve speed-ups of 857.3x and 658.51x on the Fermi and Radeon architectures, respectively, for the most compute-intensive HH model with a dense network containing 9.72 million neurons.
The final outcome of this research is a detailed architectural comparison of the two GPU architectures with a common programming platform. © 2011 IEEE.
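For context, the Izhikevich model named in the abstract is the least compute-intensive of the four SNN models benchmarked. A minimal sketch of its dynamics follows, using forward-Euler integration and the standard regular-spiking parameter values from Izhikevich's 2003 paper; this is an illustrative serial version in Python, not the paper's OpenCL GPU implementation, and the time step and input current chosen here are assumptions.

```python
def simulate_izhikevich(I=10.0, steps=1000, dt=0.5,
                        a=0.02, b=0.2, c=-65.0, d=8.0):
    """Integrate one Izhikevich neuron with forward Euler.

    dv/dt = 0.04*v^2 + 5*v + 140 - u + I
    du/dt = a*(b*v - u)
    On a spike (v >= 30 mV): v is reset to c and u is incremented by d.
    Returns the list of spike times in ms.
    """
    v, u = c, b * c          # membrane potential (mV) and recovery variable
    spikes = []
    for n in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike threshold: record, then reset
            spikes.append(n * dt)
            v, u = c, u + d
    return spikes
```

The per-neuron state here is just two floats with no coupling between integration steps of different neurons, which is what makes this model map well onto the data-parallel execution model that both GPU architectures in the study expose through OpenCL.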
First Page
93
Last Page
102
DOI
10.1109/SAAHPC.2011.20
Recommended Citation
Pallipuram, V. K., Bhuiyan, M. A., & Smith, M. C. (2011). Evaluation of GPU architectures using spiking neural networks. Paper presented at Proceedings - 2011 Symposium on Application Accelerators in High-Performance Computing, SAAHPC 2011.
https://scholarlycommons.pacific.edu/soecs-facpres/478