Emotion Detection Model in less than 20 minutes.

Annanya Vedala
4 min read · Oct 22, 2020

Yes, you heard that right! Brace yourself as we go on a journey that is both engaging and surprising. In this article, we will learn how to build an emotion detection model using AI Builder, a feature of Microsoft Power Apps.

To start off, you will need a Microsoft 365 developer account, so make sure you create one or use one you already have. I promise it won’t take too long!

Now, let’s dive right into the process!

First, visit https://make.powerapps.com. Once there, sign in with your credentials and you will be presented with the following webpage.

Then navigate to the left navigation pane and click on AI Builder, followed by Build. On doing so, you will be presented with the following screen.

Since the objective of this app is to detect emotions on faces, click on Object detection. On selecting that option, you will be presented with the following page.

Give an appropriate name to your model and then click on Create. On doing so, you will see this:

We will be sticking to common objects, since we are simply trying to detect the emotion on a person’s face. We then click Next and are presented with a space to give our model tags. These tags are what the model is expected to detect, so choose them thoughtfully: you will be expected to provide at least 15 images for each tag in order to train the model, and for a more accurate model you can provide up to 50 images per tag.
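Before uploading, it can help to double-check that each tag really has enough images. Here is a minimal Python sketch that counts images per tag, assuming a hypothetical folder layout with one subfolder per tag (e.g. training_images/Happy); adjust the path and names to your own setup.

```python
from pathlib import Path

# Hypothetical layout: training_images/<TagName>/*.jpg (adjust to your setup)
BASE_DIR = Path("training_images")
MIN_IMAGES = 15    # AI Builder needs at least 15 images per tag
IDEAL_IMAGES = 50  # more images generally means a more accurate model

for tag_dir in sorted(BASE_DIR.iterdir()):
    if not tag_dir.is_dir():
        continue
    count = sum(1 for f in tag_dir.iterdir()
                if f.suffix.lower() in {".jpg", ".jpeg", ".png", ".bmp"})
    status = "OK" if count >= MIN_IMAGES else "NEEDS MORE"
    print(f"{tag_dir.name}: {count} images ({status}, ideally {IDEAL_IMAGES})")
```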

The tags we will be using are “Happy”, “Sad” and “Angry”. The next step is to upload your respective happy, sad and angry faces. Make sure you upload JPG, PNG or BMP files; training requires a maximum file size of 6 MB and a minimum width and height of 256 x 256 pixels. After uploading these images, you will have to manually select the faces in each image and assign them the corresponding emotion tag.
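If you want to catch problem files before AI Builder rejects them, a small script can check the format, file size and dimensions for you. The sketch below uses the Pillow library and the same hypothetical training_images folder as above; the limits are the ones mentioned in this article.

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

# AI Builder training constraints mentioned above:
MAX_BYTES = 6 * 1024 * 1024   # maximum file size of 6 MB
MIN_SIDE = 256                # minimum width and height of 256 px
ALLOWED = {".jpg", ".jpeg", ".png", ".bmp"}

def check_image(path: Path) -> list[str]:
    """Return a list of problems that would block this image from training."""
    if path.suffix.lower() not in ALLOWED:
        return ["not a JPG/PNG/BMP file"]
    problems = []
    if path.stat().st_size > MAX_BYTES:
        problems.append("larger than 6 MB")
    with Image.open(path) as img:
        width, height = img.size
        if width < MIN_SIDE or height < MIN_SIDE:
            problems.append(f"too small ({width}x{height}, need at least 256x256)")
    return problems

# Hypothetical folder of images to upload — adjust to your own setup.
for image_path in Path("training_images").rglob("*.*"):
    issues = check_image(image_path)
    if issues:
        print(f"{image_path}: {', '.join(issues)}")
```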

When you are done tagging all the images, click on Done tagging and start training your model. Depending on the number of training images you have provided, training might take a while. Once training finishes, you can see the model’s accuracy on the left side of the page; anywhere around 80%–90% is a decent accuracy to aim for. You can also run a quick test to see how well your model works on new data. Make sure you don’t overfit or underfit your model: it should perform well on images it has never seen, not just the ones it was trained on.
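One easy way to sanity-check against overfitting is to hold a few images aside before training and only use them in the quick test. The rough sketch below (folder names are again hypothetical) moves a random 20% of each tag’s images into a separate test folder; just make sure each tag still keeps at least 15 training images.

```python
import random
import shutil
from pathlib import Path

# Hypothetical folders — adjust to your own layout.
TRAIN_DIR = Path("training_images")
TEST_DIR = Path("test_images")
HOLDOUT_FRACTION = 0.2  # keep ~20% of each tag aside for the quick test

random.seed(42)  # reproducible split

for tag_dir in TRAIN_DIR.iterdir():
    if not tag_dir.is_dir():
        continue
    images = sorted(tag_dir.glob("*.*"))
    if not images:
        continue
    holdout = random.sample(images, k=max(1, int(len(images) * HOLDOUT_FRACTION)))
    dest = TEST_DIR / tag_dir.name
    dest.mkdir(parents=True, exist_ok=True)
    for img in holdout:
        shutil.move(str(img), dest / img.name)  # don't upload these for training
    print(f"{tag_dir.name}: held out {len(holdout)} of {len(images)} images")
```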

Soon after this, you will be presented with an option to connect your model to your Power App. Click on that option and you will see a blank app (screen) and an option to use your model inside your app. The model you created is now ready to be a part of any Power App you choose to build.

Navigating across different screens and making the app responsive with the AI Builder model is something we will see in my next article, but for now, you have your emotion detection model ready in less than 20 minutes!

I hope you were all as surprised as I was, when I first tried this!

Cheers!
