Lectures/Lecture 4 - Sustainable AI/Lecture 4 Notes.md (4 changes: 2 additions & 2 deletions)

@@ -128,7 +128,7 @@ Several research lines have been pursued to reduce this amount of computation. I

3. **Portfolio design**. Obtaining the initial points can be done through the Random Search presented above, or even better, by leveraging previous experience. In **portfolio design**, engineers keep a record of which hyperparameters worked well in the past. Unfortunately, in computer science there is no such thing as a free lunch, and actually building an optimal portfolio turns out to be [NP-Hard](https://en.wikipedia.org/wiki/NP-hardness), meaning no polynomial-time algorithm is known for it. Instead, AI engineers approximate: they try to link dataset and problem features to hyperparameter choices and the algorithm's performance, as in the figure below. The most promising of these 'metafeatures' are simple statistics of the dataset, such as NumberOfInstances, NumberOfClasses, and Minority/MajorityClassSize. 'Landmarkers' are also used: the recorded performance of a very simple model (e.g. Decision Trees) on the dataset (see the sketch after the figure below).

-![portfolio design](portfolio%20design.png)
+![portfolio design](Figures/portfolio_design.png)
[Predicting the distribution of model performance based on multidimensional feature spaces.](https://www.studyguide.tudelft.nl/courses/study-guide/educations/14789)
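
A minimal sketch of how such metafeatures and a landmarker could be computed, assuming scikit-learn; the bundled breast-cancer dataset and the helper names `metafeatures` and `landmarker` are illustrative choices, not part of the lecture:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def metafeatures(X, y):
    """A few simple dataset metafeatures of the kind named above."""
    _, class_counts = np.unique(y, return_counts=True)
    return {
        "NumberOfInstances": X.shape[0],
        "NumberOfFeatures": X.shape[1],
        "NumberOfClasses": len(class_counts),
        "MinorityClassSize": int(class_counts.min()),
        "MajorityClassSize": int(class_counts.max()),
    }

def landmarker(X, y):
    """Landmarker: cross-validated accuracy of a very simple model
    (here a depth-1 decision tree, i.e. a decision stump)."""
    stump = DecisionTreeClassifier(max_depth=1, random_state=0)
    return cross_val_score(stump, X, y, cv=5).mean()

X, y = load_breast_cancer(return_X_y=True)
print(metafeatures(X, y))
print(f"Decision-stump landmarker: {landmarker(X, y):.3f}")
```

These values would form one row of the portfolio's meta-dataset, with the recorded performance of past hyperparameter configurations as the prediction target.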

It is clear that this field did not pass smoothly into the modern deep learning age, and significant research efforts are underway to bring it up to date. The features above are all 'hand-crafted', which was the modus operandi of the field before AI 'learned to learn': AI/data researchers had to select worthwhile features to focus on by hand. Even though the field relies on rather ambiguous, non-descriptive hand-crafted features, its potential to improve the performance of AI workloads has incentivized continued research.
@@ -183,7 +183,7 @@ A simple technique that uses fidelity training is **Successive halving**:
3. Continue the surviving configs with budget B.
4. Repeat until a single config remains (a sketch follows the figure below).

-![Successive Halving](Successive%20Halving.png)
+![Successive Halving](Figures/successive_halving.png)
[Some configs may behave worse at the beginning, but better at the end of training.](https://www.studyguide.tudelft.nl/courses/study-guide/educations/14789)
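
A minimal sketch of the loop above, assuming a user-supplied `train_and_score(config, budget)` callback (a hypothetical stand-in for training a model under a given budget and returning its validation score):

```python
import numpy as np

def successive_halving(configs, train_and_score, min_budget=1):
    """Keep the better half of the configs each round, doubling the budget."""
    budget = min_budget
    while len(configs) > 1:
        # Continue every surviving config with the current budget B.
        scores = [train_and_score(c, budget) for c in configs]
        # Keep the better-scoring half, drop the rest.
        keep = len(configs) // 2
        order = np.argsort(scores)[::-1]
        configs = [configs[i] for i in order[:keep]]
        # Double the budget for the next round.
        budget *= 2
    return configs[0]

# Toy usage: configs are learning rates; the fake scorer prefers values
# near 0.1 and improves slightly with budget, plus a little noise.
rng = np.random.default_rng(0)
fake_score = lambda lr, b: -abs(np.log10(lr) + 1) + 0.01 * b + rng.normal(0, 0.05)
candidates = [10 ** -x for x in rng.uniform(0.5, 4, size=8)]
print(f"Best learning rate: {successive_halving(candidates, fake_score):.4f}")
```

Halving the pool while doubling the budget keeps the total compute per round roughly constant, which is what makes the technique cheap compared to fully training every config.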

## Transfer learning