Continual Learning on the iPhone GPU
Project Description
Various incremental learning (IL) approaches have been proposed to help deep learning models learn new tasks/classes continuously without forgetting what was learned previously (i.e., to avoid catastrophic forgetting). With the growing number of deployed applications that need to dynamically incorporate new tasks and adapt to changing input distributions from users, the ability to perform IL on-device becomes essential for both efficiency and user privacy. However, the high computational cost of IL hinders its on-device deployment.

In this work, we want to explore the potential of the recently developed TensorFlow.js, which uses the WebGL backend to enable GPU training on an iPhone. Can we overcome the computation bottleneck by using the phone's GPU? A rough sketch of this setup is shown below.
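As an illustration of the intended setup (not part of the original project brief), the TypeScript sketch below selects the WebGL backend in TensorFlow.js and runs a single training step on a tiny placeholder model; the model architecture, the random data, and the function name trainStepOnDevice are illustrative assumptions.

import * as tf from '@tensorflow/tfjs';

// Sketch: request the WebGL backend so tensor operations run on the phone GPU,
// then perform one small training step on a placeholder model.
async function trainStepOnDevice(): Promise<void> {
  await tf.setBackend('webgl'); // resolves to false if the WebGL backend cannot be initialized
  await tf.ready();
  console.log('Active backend:', tf.getBackend());

  // Tiny classifier standing in for the model being updated incrementally.
  const model = tf.sequential();
  model.add(tf.layers.dense({ inputShape: [32], units: 16, activation: 'relu' }));
  model.add(tf.layers.dense({ units: 4, activation: 'softmax' }));
  model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy' });

  // Placeholder batch of "new task" data; a real IL method would add replay
  // samples or regularization here to mitigate catastrophic forgetting.
  const xs = tf.randomNormal([64, 32]);
  const ys = tf.oneHot(tf.randomUniform([64], 0, 4, 'int32'), 4);

  await model.fit(xs, ys, { epochs: 1, batchSize: 16 });

  xs.dispose();
  ys.dispose();
}

trainStepOnDevice();

In the actual project, a loop like this could be embedded in an iOS/React Native app (e.g., via tfjs-react-native) and combined with a concrete IL strategy.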

Contact: Interested students are encouraged to discuss and suggest their own ideas, and to contact Young D. Kwon (ydk21@cam.ac.uk), Dr. Jagmohan Chauhan (jc2161@cam.ac.uk / J.Chauhan@soton.ac.uk), and Prof. Pan Hui (panhui@cse.ust.hk) for more details.

Reference papers (can be a good starting point)
[1] Kamalesh Palanisamy et al. SplitEasy: A Practical Approach for Training ML Models on Mobile Devices. HotMobile 2021.
Supervisor
HUI Pan
Quota
3
Course type
UROP1000
UROP1100
UROP2100
UROP3100
UROP4100
Applicant's Roles
Required skills/aptitudes to successfully complete the project are:

1. Self-motivation and proactivity: you should be able to set your own milestones and meet them on time.

2. Experience with iOS and React Native programming, and a basic understanding of how neural networks work

3. Familiarity with the relevant development frameworks (e.g., TensorFlow.js, WebGL) is a plus
Applicant's Learning Objectives
1. Opportunity to gain hands-on experience in on-device ML/DL research

2. Opportunity to tackle a challenging and novel problem that no one has solved before

3. Opportunity to contribute to a top-tier academic publication, depending on the quality and novelty of the project's contributions
Complexity of the project
Challenging