Quickstart: Integrating Azure Cognitive Services Face Detection API with .NET Core

Nishān Wickramarathna
6 min read · Apr 11, 2020

--

In this article we are looking specifically at the Azure Cognitive Services Face Detection API. At a minimum, each detected face corresponds to a faceRectangle field in the response: a set of pixel coordinates for the left, top, width, and height that marks the located face and gives you its position and size. In the API response, faces are listed in size order from largest to smallest.

Prerequisites

  • Basic understanding of how to create Azure services
  • Basic understanding of .NET Core/C#

Project files can be found here.

We’ll look at the response returned by Face API first.
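
For reference, here is a trimmed, illustrative sketch of the kind of JSON the detect endpoint returns. The values below are made up, and a real response carries many more attributes than shown:

[
  {
    "faceId": "00000000-0000-0000-0000-000000000000",
    "faceRectangle": { "top": 131, "left": 177, "width": 162, "height": 162 },
    "faceAttributes": {
      "age": 27.0,
      "gender": "female",
      "smile": 0.75,
      "glasses": "NoGlasses",
      "blur": { "blurLevel": "low", "value": 0.06 }
    }
  }
]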

You may not want all of the features shown here; often you only want to return a selected subset to the client application. We will look at how to extract specific features using .NET Core.

Brief descriptions of the attributes returned by the Face API follow.

Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data.

  • Age. The estimated age in years of a particular face.
  • Blur. The blurriness of the face in the image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
  • Emotion. A list of emotions with their detection confidence for the given face. Confidence scores are normalized, and the scores across all emotions add up to one. The emotions returned are happiness, sadness, neutral, anger, contempt, disgust, surprise, and fear.
  • Exposure. The exposure of the face in the image. This attribute returns a value between zero and one and an informal rating of underExposure, goodExposure, or overExposure.
  • Facial hair. The estimated facial hair presence and the length for the given face.
  • Gender. The estimated gender of the given face. Possible values are male, female, and genderless.
  • Glasses. Whether the given face has eyeglasses. Possible values are NoGlasses, ReadingGlasses, Sunglasses, and SwimmingGoggles.
  • Hair. The hair type of the face. This attribute shows whether the hair is visible, whether baldness is detected, and what hair colors are detected.
  • Makeup. Whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
  • Noise. The visual noise detected in the face image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
  • Occlusion. Whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded.
  • Smile. The smile expression of the given face. This value is between zero for no smile and one for a clear smile.
  • Head pose. The face’s orientation in 3D space. This attribute is described by the pitch, roll, and yaw angles in degrees. The value ranges are -90 degrees to 90 degrees, -180 degrees to 180 degrees, and -90 degrees to 90 degrees, respectively. See the Face API documentation for a diagram of the angle mappings.

Okay! Let’s start.

Visit https://portal.azure.com/#create/Microsoft.CognitiveServicesFace. You will be prompted to log in; if you don’t have an Azure account, you will need to create one.

If you are already logged in, you will see the creation screen. Provide a Name and select the Subscription, Location, Pricing tier, and Resource group. The F0 (Free) pricing tier is enough for a basic application.

Once you have created the service, go to its resource page, where you will find the API key and endpoint.

Copy the key and endpoint; we will need them in a minute.

Now, using Visual Studio or the .NET CLI, create a new Web API project. Open appsettings.json and add a new section named “Keys” as follows, replacing <key> with your key and <endpoint> with your endpoint URL.
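
A sketch of that section (the property names FaceAPIKey and FaceAPIEndpoint are placeholders of my own; any names work as long as the controller reads the same ones):

{
  "Keys": {
    "FaceAPIKey": "<key>",
    "FaceAPIEndpoint": "<endpoint>"
  }
}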

Create a new folder named “Models” and, under it, a new file ‘FaceImage.cs’. Inside the “Controllers” folder, create a new controller, ‘FaceController.cs’.

Add the following classes to the FaceImage model file. As mentioned earlier, what we are trying to do is filter specific data out of the response, so our output will be shaped according to these model classes. You will see them in action soon.
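
The article’s exact model is not reproduced here, but a minimal sketch might look like the following (the namespace FaceApiDemo.Models and the names FaceNumber and FaceRectangleInfo are my own placeholders):

namespace FaceApiDemo.Models
{
    // One entry per detected face, holding only the attributes we return.
    public class FaceImage
    {
        public int FaceNumber { get; set; }
        public double? Age { get; set; }
        public string Gender { get; set; }
        public FaceRectangleInfo FaceRectangle { get; set; }
    }

    // Pixel coordinates of the detected face, mirroring faceRectangle in the API response.
    public class FaceRectangleInfo
    {
        public int Left { get; set; }
        public int Top { get; set; }
        public int Width { get; set; }
        public int Height { get; set; }
    }
}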

Using the Package Manager Console or the .NET CLI, install the Microsoft.Azure.CognitiveServices.Vision.Face NuGet package.

Package Manager

Install-Package Microsoft.Azure.CognitiveServices.Vision.Face -Version 2.5.0-preview.1

.NET CLI

dotnet add package Microsoft.Azure.CognitiveServices.Vision.Face --version 2.5.0-preview.1

After that we will inject the IConfiguration interface into FaceController so that we can read the API key and endpoint from appsettings.json. Don’t forget to include the required namespaces, such as Microsoft.Azure.CognitiveServices.Vision.Face.
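
A minimal sketch of the controller skeleton with IConfiguration injected (the configuration key names match the hypothetical appsettings.json sketch above):

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

namespace FaceApiDemo.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class FaceController : ControllerBase
    {
        private readonly string _apiKey;
        private readonly string _endpoint;

        // IConfiguration is supplied by ASP.NET Core's built-in dependency injection.
        public FaceController(IConfiguration configuration)
        {
            _apiKey = configuration["Keys:FaceAPIKey"];
            _endpoint = configuration["Keys:FaceAPIEndpoint"];
        }
    }
}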

Now create a ‘GetFaceDetails’ method inside the controller. In the Authenticate method, instantiate a client with your endpoint and key: create an ApiKeyServiceClientCredentials object with your key, and use it with your endpoint to create a FaceClient object.
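
Inside FaceController, that wiring might look like this. This is a sketch, not the article’s exact code: the action signature is my assumption, and usings for Microsoft.Azure.CognitiveServices.Vision.Face, its Models namespace, Microsoft.AspNetCore.Http, System.Collections.Generic, and System.Threading.Tasks are assumed. DetectFaceExtract is shown in the next step.

// Which AI recognition model to use; see the note below.
private const string RECOGNITION_MODEL = RecognitionModel.Recognition01;

// Build an authenticated FaceClient from the endpoint and key.
private static IFaceClient Authenticate(string endpoint, string key)
{
    return new FaceClient(new ApiKeyServiceClientCredentials(key)) { Endpoint = endpoint };
}

[HttpPost]
public async Task<IActionResult> GetFaceDetails(IFormFile file)
{
    IFaceClient client = Authenticate(_endpoint, _apiKey);
    List<FaceImage> faces = await DetectFaceExtract(client, file);
    return Ok(faces);
}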

Notice that RECOGNITION_MODEL is set to ‘Recognition01’. You can choose which AI model is used to extract data from the detected face(s). See Specify a recognition model for information on these options.

Now let’s create the DetectFaceExtract method; a sketch follows. It doesn’t have to be this exact implementation, and I adapted what’s given in the documentation. All you need to know is that the API can detect multiple faces, and what we have to do is iterate over those detectedFaces and extract features.
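
A sketch of that method, under the same assumptions as above (System.IO, System.Linq, and the attributes trimmed to what the FaceImage model carries):

// Detect faces in the uploaded image and map the attributes we care about
// onto the FaceImage model. Adapted from the Face API documentation samples.
private static async Task<List<FaceImage>> DetectFaceExtract(IFaceClient client, IFormFile file)
{
    var results = new List<FaceImage>();

    using (Stream stream = file.OpenReadStream())
    {
        // Ask the service for only the attributes we plan to return.
        // (On some SDK versions returnFaceAttributes is IList<FaceAttributeType?>.)
        IList<DetectedFace> detectedFaces = await client.Face.DetectWithStreamAsync(
            stream,
            returnFaceAttributes: new List<FaceAttributeType>
            {
                FaceAttributeType.Age,
                FaceAttributeType.Gender
            },
            recognitionModel: RECOGNITION_MODEL);

        foreach (var face in detectedFaces.Select((value, i) => new { i, value }))
        {
            results.Add(new FaceImage
            {
                FaceNumber = face.i + 1,
                Age = face.value.FaceAttributes.Age,
                Gender = face.value.FaceAttributes.Gender?.ToString(),
                FaceRectangle = new FaceRectangleInfo
                {
                    Left = face.value.FaceRectangle.Left,
                    Top = face.value.FaceRectangle.Top,
                    Width = face.value.FaceRectangle.Width,
                    Height = face.value.FaceRectangle.Height
                }
            });
        }
    }

    return results;
}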

First we convert the file to a stream using file.OpenReadStream() and call DetectWithStreamAsync() on the stream. This will create the list of faces we want. Now you can get the features for each face detected. That’s what the for-each loop does.

foreach (var face in detectedFaces.Select((value, i) => new { i, value }))
{
    // FaceAttributes.Age is a nullable double in the SDK
    double? age = face.value.FaceAttributes.Age;
}

You can change the implementation of this method to extract whichever features you want.

All done. Let’s send a request to this endpoint via Postman (I’m using Tabbed Postman). Send a POST request to https://localhost:5001/api/face with the image attached as form-data under the key ‘file’.
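
If you prefer the command line, a roughly equivalent request would be the following (photo.jpg is a stand-in for your image; -k accepts the local development certificate):

curl -k -X POST https://localhost:5001/api/face -F "file=@photo.jpg"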

Let’s check the output for the following image.

The Face API detected two faces, with their respective attributes. If you want to create a front-end application for this API, you just have to send the image over HTTP as FormData. A good guide for Angular can be found here.
