Quickstart: Integrating Azure Cognitive Services Face Detection API with .NET Core

In this article, we are looking specifically at the Azure Cognitive Services Face Detection API. At a minimum, each detected face corresponds to a faceRectangle field in the response: a set of pixel coordinates for the left, top, width, and height that marks the located face. Using these coordinates, you can get the location of the face and its size. In the API response, faces are listed in size order from largest to smallest.


Project files can be found here.

We’ll look at the response returned by Face API first.
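For reference, here is a trimmed example of the kind of JSON the detection endpoint returns. The values are illustrative, and the real response contains many more attributes:

```json
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": {
      "top": 131,
      "left": 177,
      "width": 162,
      "height": 162
    },
    "faceAttributes": {
      "age": 27.0,
      "gender": "female",
      "emotion": {
        "happiness": 0.98,
        "neutral": 0.015,
        "sadness": 0.005
      }
    }
  }
]
```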

We may not want all of the features given here; often you only need to return a selected subset to the client application. We will look at how to extract specific features using .NET Core.

A short description of the attributes returned by the Face API follows.

Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data.

Okay! Let's start.

Visit https://portal.azure.com/#create/Microsoft.CognitiveServicesFace. You will be prompted to log in; if you don't have an Azure account, you can create one there.

If you are already logged in, you will see this screen. Provide a name, and select the subscription, location, pricing tier, and resource group. The F0 (Free) pricing tier is enough for a basic application.

Once you have created the service, you can go to its resource page, where you will find the API key and endpoint.

Copy the key and endpoint; we will need them in a minute.

Now, using Visual Studio or the .NET CLI, create a new Web API project. Open appsettings.json and add a new section named “Keys” as follows, replacing <key> with your key and <endpoint> with your endpoint URL.
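For example (a sketch; the property names FaceApiKey and FaceApiEndpoint are my own choice, so keep them consistent with how the configuration is read later):

```json
{
  "Keys": {
    "FaceApiKey": "<key>",
    "FaceApiEndpoint": "<endpoint>"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  },
  "AllowedHosts": "*"
}
```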

Create a new folder “Models” and, under it, a new file ‘FaceImage.cs’. Inside the “Controllers” folder, create a new controller, ‘FaceController.cs’.

Add the following classes to the FaceImage model file. What we are trying to do, as mentioned earlier, is filter some specific data out of the response, so our output will be formatted according to these model classes. You can see it in action soon.
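Something along these lines (a sketch; the namespace and the exact set of properties are assumptions, and you can extend FaceDetails with whatever attributes you need):

```csharp
using System.Collections.Generic;

namespace FaceApiDemo.Models
{
    // The filtered view of the Face API response that we return to the client.
    public class FaceImage
    {
        public int FaceCount { get; set; }
        public List<FaceDetails> Faces { get; set; } = new List<FaceDetails>();
    }

    // The subset of attributes we keep for each detected face.
    public class FaceDetails
    {
        public double? Age { get; set; }
        public string Gender { get; set; }
        public string Emotion { get; set; }
    }
}
```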

Using the Package Manager Console or the .NET CLI, install the Microsoft.Azure.CognitiveServices.Vision.Face NuGet package.

Package Manager

Install-Package Microsoft.Azure.CognitiveServices.Vision.Face -Version 2.5.0-preview.1


.NET CLI

dotnet add package Microsoft.Azure.CognitiveServices.Vision.Face --version 2.5.0-preview.1

After that, we will inject the IConfiguration interface into FaceController so that we can read the API key and endpoint from appsettings.json. Don't forget to include the namespaces (using Microsoft.Azure.CognitiveServices.Vision.Face, etc.).
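A sketch of what that looks like (the namespace, and the configuration key names matching the “Keys” section above, are assumptions):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

namespace FaceApiDemo.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class FaceController : ControllerBase
    {
        private readonly string _key;
        private readonly string _endpoint;

        // IConfiguration is provided by ASP.NET Core's dependency injection,
        // giving us access to the "Keys" section of appsettings.json.
        public FaceController(IConfiguration configuration)
        {
            _key = configuration["Keys:FaceApiKey"];
            _endpoint = configuration["Keys:FaceApiEndpoint"];
        }
    }
}
```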

Now create a ‘GetFaceDetails’ method inside the controller. Don't forget to include the namespaces. In the Authenticate method, instantiate a client with your endpoint and key: create an ApiKeyServiceClientCredentials object with your key, and use it together with your endpoint to create a FaceClient object.
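Roughly like this (a sketch adapted from the documentation pattern; these members go inside FaceController, error handling is omitted, and DetectFaceExtract is defined below):

```csharp
// The recognition model used when detecting faces (see the note below).
private const string RECOGNITION_MODEL = RecognitionModel.Recognition01;

private static IFaceClient Authenticate(string endpoint, string key)
{
    // Wrap the API key in credentials and point the client at our endpoint.
    return new FaceClient(new ApiKeyServiceClientCredentials(key))
    {
        Endpoint = endpoint
    };
}

[HttpPost]
public async Task<IActionResult> GetFaceDetails(IFormFile file)
{
    IFaceClient client = Authenticate(_endpoint, _key);
    var faces = await DetectFaceExtract(client, file);
    return Ok(faces);
}
```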

Notice that RECOGNITION_MODEL has been set to ‘Recognition01’. You can choose which AI model is used to extract data from the detected face(s). See Specify a recognition model for information on these options.

Now let's create the DetectFaceExtract method. It doesn't have to be this exact implementation; I adapted what's given in the documentation. All you need to know is that the API will recognize multiple faces, and what we have to do is iterate over those detectedFaces and extract features.
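Here is a sketch, again adapted from the documentation (the attribute list is an assumption, and the parameter shapes here match the 2.5 preview SDK; newer versions differ slightly):

```csharp
private static async Task<List<DetectedFace>> DetectFaceExtract(
    IFaceClient client, IFormFile file)
{
    using (Stream stream = file.OpenReadStream())
    {
        // Detect faces in the uploaded image, asking the API to compute
        // the attributes we want to extract.
        IList<DetectedFace> detectedFaces = await client.Face.DetectWithStreamAsync(
            stream,
            returnFaceAttributes: new List<FaceAttributeType>
            {
                FaceAttributeType.Age,
                FaceAttributeType.Gender,
                FaceAttributeType.Emotion
            },
            recognitionModel: RECOGNITION_MODEL);

        return detectedFaces.ToList();
    }
}
```

From here you can map each DetectedFace onto the FaceImage model before returning, which is what the for-each loop shown next is for.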

First we convert the file to a stream using file.OpenReadStream() and call DetectWithStreamAsync() on the stream. This gives us the list of detected faces we want. Now you can get the features of each detected face; that's what the for-each loop does.

foreach (var face in detectedFaces.Select((value, i) => new { i, value }))
{
    double? age = face.value.FaceAttributes.Age;
}

You can change the implementation of this method to extract the features you want.

All done. Let's send a request to this endpoint via Postman (I'm using Tabbed Postman). Send a request to https://localhost:5001/api/face with the image; you have to send it as form-data with the key ‘file’.

Let’s check the output for the following image.

The Face API detected 2 faces, with their respective attributes. If you want to create a front-end application for this API, you just have to send the image over HTTP as FormData. A good guide for Angular can be found here.
