The purpose was to transform data fetched from Google Cloud Vision's face detection API. I used the positions and sentiments of the faces it detected to create a layer of emojis overlaying those faces.
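As a minimal sketch of the positioning step (the function name and fallback values are mine; the `boundingPoly.vertices` shape is what Vision returns for a face annotation), the detected polygon can be reduced to a box for absolutely positioning an emoji element over the face:

```javascript
// Reduce a Vision face boundingPoly to overlay coordinates (in image pixels).
// Vision may omit x or y on a vertex when it is 0, hence the ?? fallback.
function faceToOverlay(boundingPoly) {
  const xs = boundingPoly.vertices.map((v) => v.x ?? 0);
  const ys = boundingPoly.vertices.map((v) => v.y ?? 0);
  const left = Math.min(...xs);
  const top = Math.min(...ys);
  return {
    left,
    top,
    width: Math.max(...xs) - left,
    height: Math.max(...ys) - top,
  };
}

// Hypothetical polygon for illustration:
const box = faceToOverlay({
  vertices: [
    { x: 10, y: 20 },
    { x: 110, y: 20 },
    { x: 110, y: 140 },
    { x: 10, y: 140 },
  ],
});
// box → { left: 10, top: 20, width: 100, height: 120 }
```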
- Layer selection (CSS + SVG vs. Canvas)
- How to call the API in a secure way
- Several unsuccessful tests on the display of textual data below the image
- Even after analysing the code available in the HUB, I wasn't sure how to structure the widget components.
- Tailwind is less convenient with Svelte than SCSS, but I made some nice and interesting discoveries while using it.
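On the secure-API-call point, the approach boils down to keeping the key server-side: the browser posts the image to a small proxy endpoint, which forwards it to Vision with the key read from an environment variable. A sketch of the request-building part (the helper name and proxy shape are illustrative; the `images:annotate` body is the documented Vision format):

```javascript
// Build the Vision "images:annotate" request body for face detection.
// The base64 image content travels in the body; the API key never does.
function buildVisionRequest(base64Image, maxResults = 10) {
  return {
    requests: [
      {
        image: { content: base64Image },
        features: [{ type: "FACE_DETECTION", maxResults }],
      },
    ],
  };
}

// In the server-side proxy handler, something like:
// const res = await fetch(
//   `https://vision.googleapis.com/v1/images:annotate?key=${process.env.VISION_API_KEY}`,
//   { method: "POST", body: JSON.stringify(buildVisionRequest(imageBase64)) }
// );

const body = buildVisionRequest("aGVsbG8=");
```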
For each sentiment (in a list of four: joy, anger, sorrow, surprise), Google Cloud Vision provides a degree of likelihood, which I converted into emojis. In further work I would like to add granularity to the emoji selection by studying different ways of combining sentiments. Even though Rekognition can discriminate between nine different sentiments, I found that one of its drawbacks is that its outputs tend to be more polarized (e.g. 99.3% Happy when there is only a slight smile).
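The conversion can be sketched as follows (the emoji choices and the "pick the strongest sentiment" rule are my own; the likelihood enum values are the ones Vision returns on each `*Likelihood` field):

```javascript
// Map Vision's likelihood enum onto an ordinal scale.
const LIKELIHOOD_SCORE = {
  VERY_UNLIKELY: 0,
  UNLIKELY: 1,
  POSSIBLE: 2,
  LIKELY: 3,
  VERY_LIKELY: 4,
};

const SENTIMENT_EMOJI = {
  joy: "😄",
  anger: "😡",
  sorrow: "😢",
  surprise: "😲",
};

// Pick the strongest of the four sentiments; fall back to neutral.
function faceToEmoji(face) {
  let best = { name: null, score: 0 };
  for (const name of ["joy", "anger", "sorrow", "surprise"]) {
    const score = LIKELIHOOD_SCORE[face[`${name}Likelihood`]] ?? 0;
    if (score > best.score) best = { name, score };
  }
  return best.name ? SENTIMENT_EMOJI[best.name] : "😐";
}

const emoji = faceToEmoji({
  joyLikelihood: "VERY_LIKELY",
  angerLikelihood: "VERY_UNLIKELY",
  sorrowLikelihood: "UNLIKELY",
  surpriseLikelihood: "POSSIBLE",
});
// emoji → "😄"
```

Combining sentiments for finer granularity would mean replacing the single `best` pick with rules over several scores at once (e.g. high joy and high surprise together selecting a different emoji).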
The code is clearly improvable, as it's a hybrid between what you did for the widgets and the limited needs of the exercise. I may have spent a bit too much time on this.
I saw that you use className to pass classes into components, but may I suggest using {$$props.class} instead, which requires fewer lines of code in the child component.
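A minimal sketch of what I mean (component names are illustrative): because class is a reserved word in JavaScript, a className prop is a common workaround, but Svelte exposes everything passed to a component on $$props, so the child can forward the parent's class attribute without declaring a prop at all.

```svelte
<!-- Child.svelte: no `export let className` needed -->
<div class="base-styles {$$props.class || ''}">
  <slot />
</div>

<!-- Parent.svelte -->
<Child class="mt-4 text-red-500">content</Child>
```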