Module 6: AI Hardware and Acceleration

Overview

This module explores AI hardware and acceleration, examining how specialized hardware and acceleration techniques can significantly improve the performance of AI systems. We’ll cover hardware optimization and the acceleration methods used in modern AI applications.

Instructors

Xiyuan Tang and Yaoyu Tao, PKU

Topics Covered

  • Introduction to AI hardware acceleration
  • Data sensing and processing for edge intelligence
  • Software-hardware co-design for AI chip architectures
  • Emerging AI computation acceleration technologies

Assignments

Practice Assignment: Benchmark matrix multiplication performance in NumPy and PyTorch across a range of matrix sizes, then analyze the impact of GPU acceleration on training time and accuracy for an MNIST model. Sketches of both parts follow below.
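The following is a minimal benchmark sketch, not the official starter code: it assumes PyTorch is installed and compares NumPy, PyTorch on CPU, and PyTorch on GPU (when CUDA is available) for square matrix multiplication at a few sizes.

```python
import time

import numpy as np
import torch


def best_time(fn, repeats=5):
    """Return the best wall-clock time of fn over several runs."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best


for n in (256, 1024, 4096):
    a_np = np.random.rand(n, n).astype(np.float32)
    b_np = np.random.rand(n, n).astype(np.float32)
    a_t, b_t = torch.from_numpy(a_np), torch.from_numpy(b_np)

    numpy_s = best_time(lambda: a_np @ b_np)
    cpu_s = best_time(lambda: a_t @ b_t)

    if torch.cuda.is_available():
        a_g, b_g = a_t.cuda(), b_t.cuda()

        def gpu_matmul():
            # Synchronize so we time the kernel, not just its launch.
            torch.matmul(a_g, b_g)
            torch.cuda.synchronize()

        gpu_matmul()  # warm-up run (CUDA context creation, caching)
        gpu_s = best_time(gpu_matmul)
    else:
        gpu_s = float("nan")

    print(f"n={n:5d}  numpy={numpy_s:.4f}s  "
          f"torch-cpu={cpu_s:.4f}s  torch-gpu={gpu_s:.4f}s")
```

For the MNIST part, a hedged sketch along these lines trains a small fully connected network for one epoch on CPU and on GPU (if available), reporting wall-clock training time and test accuracy. It assumes torchvision can download MNIST into ./data; the network and hyperparameters are illustrative choices, not requirements.

```python
import time

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


def run_epoch(device):
    """Train for one epoch on `device`; return (seconds, test accuracy)."""
    tfm = transforms.ToTensor()
    train = DataLoader(datasets.MNIST("./data", train=True, download=True,
                                      transform=tfm),
                       batch_size=128, shuffle=True)
    test = DataLoader(datasets.MNIST("./data", train=False, download=True,
                                     transform=tfm),
                      batch_size=256)
    model = nn.Sequential(nn.Flatten(),
                          nn.Linear(28 * 28, 256), nn.ReLU(),
                          nn.Linear(256, 10)).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    start = time.perf_counter()
    model.train()
    for x, y in train:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work before timing
    elapsed = time.perf_counter() - start

    model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in test:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
    return elapsed, correct / len(test.dataset)


devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
for name in devices:
    secs, acc = run_epoch(torch.device(name))
    print(f"{name}: one epoch in {secs:.1f}s, test accuracy {acc:.3f}")
```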

Written Assignment: Research a recent advancement in AI hardware and discuss its significance, connecting it to potential applications or improvements in AI systems.


Notes

  • This module integrates concepts from previous weeks, particularly in machine learning and robotics, with a focus on hardware optimization.
  • For the practice assignment, you will need access to a GPU in the Boya-1 cluster, which uses Slurm for job scheduling. Familiarize yourself with basic Slurm commands such as srun, sbatch, and squeue; a sample batch script follows this list.
  • In your written assignment, try to connect the hardware advancement you discuss with potential applications or improvements in AI systems.
  • As always, document your code thoroughly and use version control (Git) for your project.
  • Submit your assignments on GitHub Classroom.
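A minimal Slurm batch script might look like the sketch below. The resource flags are standard Slurm options, but the script name (benchmark.py) and any environment setup are placeholders to adapt to Boya-1’s actual configuration.

```bash
#!/bin/bash
#SBATCH --job-name=mnist-gpu
#SBATCH --gres=gpu:1          # request one GPU
#SBATCH --time=00:30:00       # wall-clock limit
#SBATCH --output=%x-%j.out    # log file: <job-name>-<job-id>.out

# Activate whatever Python environment the cluster provides before this
# line; "benchmark.py" is a placeholder for your assignment script.
python benchmark.py
```

Submit with `sbatch run_gpu.sh` and monitor with `squeue -u $USER`; for quick interactive tests, `srun --gres=gpu:1 --pty bash` opens a shell on a GPU node.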