Prof. Cho-Li Wang, The University of Hong Kong: Perspectives of GPU Computing in the New Big Data Era

([SWUFE News] Published: 2017-12-15)

Guanghua Forum — Social Celebrities and Entrepreneurs Forum, Session No. 4795

 

Topic: Perspectives of GPU Computing in the New Big Data Era

Speaker: Prof. Cho-Li Wang, The University of Hong Kong

Host: Associate Professor Huang Yu, School of Economic Information Engineering, SWUFE

Time: December 18, 2017, 14:00

Venue: Room B423, Tongbo Building

Organizers: School of Economic Information Engineering; Office of Scientific Research

 

About the Speaker:

Cho-Li Wang is currently a Professor in the Department of Computer Science at The University of Hong Kong. Prof. Wang's research spans parallel architectures, operating systems, and software systems for Cloud and Big Data computing. He serves or has served on the editorial boards of scholarly journals, including IEEE Transactions on Cloud Computing (2013-) and IEEE Transactions on Computers (2006-2010). He has served as programme chair for numerous conferences, including IEEE Cluster'03, CCGrid'09, ICPADS'09, Cluster'12, and IC2E'16, and as General Chair for IPDPS'12. He was the principal investigator of China's 863 project "Hong Kong Grid Point" (2006-2011). He is also a member of China's Supercomputing and Innovation Alliance.

Abstract:

The rise of graphics processing units (GPUs) in data centers, self-driving vehicles, and mobile AR navigation systems is one of the latest technology trends. Compared with CPU architectures, GPUs deliver orders of magnitude more computing power with a significantly smaller hardware and energy footprint. Although current techniques (e.g., CUDA and OpenCL) can automatically parallelize and offload sizable workloads onto the GPU, there is still a family of computation patterns that cannot be parallelized or executed efficiently on GPUs. In this talk, we will first discuss emerging trends in GPU hardware architectures (e.g., the new AI chips), and then introduce a new compiler and runtime platform, called Japonica, which helps unleash the power of the GPU with minimal programming effort. We will show how Japonica accelerates OpenCL code execution on Nvidia/AMD GPUs and on Snapdragon-based mobile phones. Our solutions could open up big opportunities to harness the power of GPUs for real-time analytics, deep learning, and artificial intelligence.
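For readers unfamiliar with the offloading model the abstract refers to, the sketch below shows a minimal, generic CUDA vector addition: the host copies data to the device, launches one thread per element, and copies the result back. This is a textbook illustration only, not code from the Japonica platform; the kernel name vecAdd and all sizes are chosen for the example.

// Minimal CUDA offloading sketch: a data-parallel vector addition.
// Illustrative only; not related to the Japonica platform.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                          // 1M elements
    size_t bytes = n * sizeof(float);
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Allocate device memory and copy inputs host -> device.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[42] = %f\n", hc[42]);                 // expect 42 + 84 = 126

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

Patterns like this, where every element is independent, map well to GPUs; the talk's premise is that many other computation patterns do not, which is the gap Japonica aims to address.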


