DistilNAS: Neural Architecture Search With Distilled Data
Can we perform Neural Architecture Search (NAS) with a smaller subset of the target dataset and still fare better in terms of performance, with a significant reduction in search cost? In this work, we propose a method, called DistilNAS, which utilizes a curriculum-learning-based approach to distill the target dataset into a very efficient