
Getting Started With Angular Routing Using Angular CLI – Part Three


Introduction

In this article, we are going to learn the basic routing concepts in Angular 6 using the Angular CLI (Command Line Interface). Before reading this article, you should read the earlier parts of this series.

Routing in Application

Almost every application needs at least some routing. For example, clicking a menu item should redirect the user to a child page or a different page. How can we do this in Angular? There are a few basic things we need to configure before implementing routing in Angular.

Creating a Routing Module

When creating a new project, the command-line flag "--routing" tells the CLI to generate a routing module; for an existing application, either of the following commands creates a routing module with the default setup. We have given "app-routing" as our routing module name.

ng generate module app-routing --flat --module=app
ng g m app-routing --flat --module=app

Note :

--flat puts the file in src/app instead of in its own folder.
--module=app tells the CLI to register it in the imports array of the AppModule.

The following code is generated after the above command execution.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

@NgModule({
  imports: [
    CommonModule
  ],
  declarations: []
})
export class AppRoutingModule { }
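Because we passed "--module=app", the CLI also registers the new module in the AppModule. The relevant part of app.module.ts should then look roughly like this (a sketch; only the routing-related lines are shown):

import { AppRoutingModule } from './app-routing.module';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    AppRoutingModule // registered automatically by --module=app
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }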

Import Router in the Application

Angular Router is an optional service; it is not part of "@angular/core". The router modules belong to the "@angular/router" library package, so we need to import "RouterModule" and "Routes" from "@angular/router".

import { RouterModule , Routes } from '@angular/router';

Add Routes

The Angular docs put it simply: routes tell the router which view to display when a user clicks a link or pastes a URL into the browser address bar. So both scenarios are covered, clicking a link and pasting a URL into the browser.

const routes: Routes = [{}];

We have created an empty routes array in our routing module. Now we need to add a redirect page, a default page, a 404 page, etc. Just start typing a path inside the "{}" and the editor will suggest the possible routing options.

Now we have added a path and a component name to the routes.

const routes: Routes = [{ path: 'customer', component: CustomerComponent }];
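Note that the routed component must be imported into the routing module. Assuming the CLI-generated folder structure, the import would be:

import { CustomerComponent } from './customer/customer.component';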

We already know the app component is the default launching page of an Angular application, so we need to hook the routing configuration into the app component.

RouterModule.forRoot()

We first import and initialize the router setup, and it starts listening for browser location changes. The routes we defined earlier are passed into forRoot().

@NgModule({
  imports: [
    CommonModule,
    RouterModule.forRoot(routes)
  ],
  exports: [RouterModule],
  declarations: []
})
export class AppRoutingModule { }

RouterModule.forChild()

forChild() is used for feature modules and lazy-loaded submodules, in the following way.

@NgModule({
  imports: [
    CommonModule,
    RouterModule.forChild(routes)
  ],
  exports: [RouterModule],
  declarations: []
})

Router outlet

RouterOutlet is a directive that acts as a placeholder; it is used like a component tag in the template. When you place this outlet in the app component, it is dynamically filled based on the current router state.

<router-outlet></router-outlet>

Navigation

We have added the navigation in the same app component page; clicking the anchor whose "routerLink" points to "/customer" redirects to the respective page. We can add more functionality to the anchor tag, such as an active-link style, binding an array of link parameters, etc.
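A minimal sketch of how this navigation could look in the app component (the link text and the optional routerLinkActive class are illustrative):

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  template: `
    <nav>
      <!-- routerLink navigates to the 'customer' route without a full page reload -->
      <a routerLink="/customer" routerLinkActive="active">Customer</a>
    </nav>
    <!-- the component matched by the current route renders here -->
    <router-outlet></router-outlet>
  `
})
export class AppComponent { }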

The Router Output

Page Not Found – 404 Error !!

If the user tries to access a page that is not part of the routing configuration, we need to display an error page, usually called "Page Not Found". Either of the following commands will create a "Page Not Found" component using an inline template.

ng generate component PageNotFound --inline-template
ng g c PageNotFound -t

We have modified the PageNotFound TypeScript file.
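A minimal sketch of what the modified component could look like; the message markup is illustrative:

import { Component } from '@angular/core';

@Component({
  selector: 'app-page-not-found',
  template: `<h2>404 - Page Not Found !!</h2>`
})
export class PageNotFoundComponent { }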

If you want to add Bootstrap styling to the application, import the following reference in styles.css.

@import url('https://unpkg.com/bootstrap@3.3.7/dist/css/bootstrap.min.css');

We have added "PageNotFoundComponent" as the last route, with its path set to "**", which is a wildcard. The router selects it if the requested URL doesn't match any other path defined in the configuration. This helps us display a "404 - Not Found" page or redirect to another route.

const routes: Routes = [
  { path: '', component: EmployeeComponent },
  { path: 'customer', component: CustomerComponent },
  { path: '**', component: PageNotFoundComponent }
];
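If we wanted the empty path to redirect to another route instead of rendering a component directly, a redirect route (a hypothetical alternative to the first entry above) would look like this; pathMatch: 'full' is required for empty-path redirects:

{ path: '', redirectTo: '/customer', pathMatch: 'full' }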


Summary

From this article, we have learned the basic routing concepts of Angular using the Angular CLI. I hope this article is useful for Angular CLI beginners.


Getting Started With Angular 6 Using Angular CLI – Part Two


Introduction

In this article, we are going to learn the basic development commands of the Angular CLI (Command Line Interface). Before reading this article, you should read our previous article, Angular – Part One.

Angular CLI Version

Angular 7 was released recently, but we are still using Angular 6; we started the application before the release of Angular 7, and in the coming years we can expect even more updates from the Angular team. Using either of the following commands, we can identify the Angular CLI version used in the application.

ng --version
ng -v

In the following output, we can see the current Angular CLI version of the application.

Run the following command to identify the global version of Angular CLI.

npm list --global --depth 0

We can see that the application is using a lower version of the Angular CLI than the global version. Before migrating to a higher version of the Angular CLI, we need to understand the necessary updates.

Component

Components are the basic UI building blocks of an Angular application. We already created a basic Angular application in our previous article; by default it contains an app component and its respective files. Now we are going to create a component inside the application using either of the following commands. "customer" is our component name.

ng generate component customer
ng g c customer

The component is created successfully! The customer component is created inside the "app" folder.
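For reference, the generated customer.component.ts looks like this; the CLI also creates the HTML, CSS, and spec files:

import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-customer',
  templateUrl: './customer.component.html',
  styleUrls: ['./customer.component.css']
})
export class CustomerComponent implements OnInit {

  constructor() { }

  ngOnInit() {
  }

}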

But how will you know in which places the generated component was registered? This is why we integrate version control such as Git into the application; otherwise we create files blindly without knowing what changed. With Git, we can see exactly which files the CLI modified to register the component, as in the following way.

Dry Run

If you are new to the Angular CLI and are planning to create a component, module, class, etc., you may at first be unsure which name to give and where the files will be created. For that, the Angular CLI offers a "dry run" flag: it stops the CLI from making any changes and only displays which files would be created in the file system. We can use either of the following commands to try it.

ng generate component customer --dryRun
ng g c customer -d

In the following screenshot, the end of the output mentions the dry run "Note".

Module

A module is a mechanism for grouping related components, pipes, services, and directives, in such a way that we can compose an application from them. To define a module, we use the "NgModule" decorator in the TypeScript file, in the following way.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppComponent } from './app.component';
import { CustomerComponent } from './customer/customer.component';

@NgModule({
  declarations: [
    AppComponent,
    CustomerComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

We can use either of the following commands to create a new module in the application.

ng generate module customer
ng g m customer

Class

Using either of the following commands, we can create a class in the application.

ng generate class customer
ng g cl customer

In the above command, we can see "cl" as the alias for class. Why not "c"? Because "c" is already taken by "component". By default the class is created under the "app" folder; to place it inside the customer folder (or any other folder), add the folder name and a slash in front of the class name, e.g. "customer/". Because we are planning to add the class inside the customer folder, we prefix it accordingly.

ng generate class customer/customer
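The generated class itself is empty; a sketch with illustrative members might look like this:

export class Customer {
  // illustrative members; the CLI generates an empty class body
  id: number;
  name: string;
}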

Interface

An interface is a major contract in the application. We can run either of the following commands to create an interface in the application.

ng generate interface customer
ng g i customer
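As with classes, the generated interface is empty and we add the contract members ourselves; the members below are illustrative:

export interface Customer {
  id: number;   // illustrative member
  name: string; // illustrative member
}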

Enum

We can run either of the following commands to create an enum in the application.

ng generate enum customer
ng g e customer
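The generated enum is likewise empty; a sketch with illustrative members:

export enum Customer {
  Regular,  // illustrative member
  Premium   // illustrative member
}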

Inline Template and Style

As we know, when we run the Angular component command, it automatically generates all the respective files for the component. So how can we use an inline template or inline styles instead of separate files? Run the following commands to achieve an inline template or inline styles in the component.

Inline Template Command :

The following command will create an inline template instead of creating a separate HTML file in the system.

ng generate component employee --inline-template
ng g c employee -t

We can see that the TypeScript file contains an inline template instead of a reference to an HTML file.
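For reference, the employee component generated with the inline template flag looks roughly like this:

import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-employee',
  // inline template instead of a templateUrl pointing to a separate HTML file
  template: `
    <p>
      employee works!
    </p>
  `,
  styleUrls: ['./employee.component.css']
})
export class EmployeeComponent implements OnInit {

  constructor() { }

  ngOnInit() {
  }

}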

Inline Style Command :

The following command will create an inline style option instead of creating a separate CSS file in the system.

ng generate component student --inline-style
ng g c student -s

In the TypeScript file, we can see an empty inline styles array instead of a reference to a CSS file.
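For reference, the student component generated with the inline style flag looks roughly like this:

import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-student',
  templateUrl: './student.component.html',
  // inline styles array instead of styleUrls pointing to a separate CSS file
  styles: []
})
export class StudentComponent implements OnInit {

  constructor() { }

  ngOnInit() {
  }

}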


Summary

From this article, we have learned the basic commands of the Angular CLI. I hope this article is useful for Angular CLI beginners.

Getting started with Angular 6 using Angular CLI – Part 1


Introduction

Today we are going to learn about one of the most popular Single Page Application frameworks, Angular, using the Angular CLI (Command Line Interface). Using Angular we can build modern applications such as web, mobile, or desktop applications in the real world. Previously, we learned about another well-known single page application framework called Aurelia.

Prerequisites

  • Download and Install : Node.js ( Angular requires Node.js version 8.x or 10.x. )
  • Download and Install : VS Code ( a great open-source editor for developing Angular applications ).
  • Download and Install : Git ( not mandatory, but a best practice ).

Angular CLI Installation

We can install the Angular Command Line Interface after installing Node.js in our environment. The CLI helps you create projects, generate application and library code, and perform a variety of ongoing development tasks such as testing, bundling, and deployment. It reduces development time because the CLI registers and generates files automatically at the beginning. That is also why version control is useful for newly changed items: it identifies which files changed and what kind of changes were made, which is especially helpful for beginners in Angular CLI application development. Now we can install the Angular CLI globally using the following command.

npm install -g @angular/cli

Create a Project

Now we are going to create a simple project repository in our environment. I created the repository in my GitHub account and cloned it to my machine. For an Angular CLI project, or any other application, the best practice is to configure version control in VS Code; otherwise we blindly create and check in code to the project repository. That said, we can continue without configuring version control.

Go to the repository, or the place where we plan to create the project, open the command prompt, and run the following command.

ng new angular-app

E.g. "ng new [ProjectName]"; our project name is "angular-app".

Angular CLI created “angular-app” in our repository.

Install Angular Essential Extension

Click on the Visual Studio Code Extensions menu in the left side bar, then search for "Angular Essentials" (John Papa) in the search box. Once we install Angular Essentials, it will install the other supporting extensions automatically. It will also show different icons for the folders, TS files, styles, JSON files, etc.

Angular Build

We have generated a starter application in our repository. The next step is to build the application. For that, open a terminal in Visual Studio Code.

  1. Click on the Visual Studio Code “Terminal” menu at the top of the menu list.
  2. The “Terminal” menu displays a list of options; just click “New Terminal ( Ctrl + Shift + ~ )”.

There is one more shortcut for opening the terminal in VS Code ( “Ctrl + ~” ). We can see the terminal displayed at the bottom right of VS Code.

Now we need to build our application, and for that we need to be in the root directory of the application. When you open the terminal in VS Code, it may display the repository path instead, so just change to the application path.
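Assuming the project folder is named "angular-app" (as created earlier), the command would be:

cd angular-app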

The build artifacts will be stored in the dist/ directory of the application. Now run the Angular CLI build command.

ng build

If you are getting the following error, it means we need to install the "@angular-devkit/build-angular" dev dependency. This package was newly introduced in Angular 6.0.

The following command will help us to create devkit dependency in our application.

npm install --save-dev @angular-devkit/build-angular

If you are still facing the same issue after installing the devkit, uninstall and reinstall the Angular CLI.

App Component

Components are the basic UI building blocks of an Angular app. Here we can see an "app" component generated under the "src -> app" folder of the "angular-app" application. The Angular CLI auto-generates all the files that are relevant for a basic application; for example, in the following screenshot, the app folder contains auto-generated CSS, spec, TS, and module files.

Angular Serve

Now that the build has succeeded, our application is ready to serve. Run either of the following commands (the second is the alias, or short, form) to open our application in a browser.

ng serve --open
ng s -o

If we don't want the browser opened automatically, just run the following command and navigate to "http://localhost:4200/".

ng serve

Bundling the application

We can bundle our application using either of the following commands; the "--prod" flag creates a production bundle.

ng build --prod
ng serve --prod

More options are available in the Angular CLI documentation.

Changing the default port number

By default, every application serves at "http://localhost:4200/". If you want to open the application on a different port, that is possible; just run either of the following commands.

ng s --port 3000 --open
ng s --port 3000 -o

Output :

As mentioned earlier, the default port has been changed to "3000" instead of "4200".


Summary

From this article, we have learned the basic configuration of Angular 6 using the Angular CLI and a few basic CLI commands. I hope this article is useful for Angular CLI beginners.

Cognitive Services : Convert Text to Speech in multiple languages using Asp.Net Core & C#


Introduction

In this article, we are going to learn how to convert text to speech in multiple languages using one of the important Cognitive Services APIs, the Microsoft Text to Speech API (part of the Speech API). The Text to Speech (TTS) API of the Speech service converts input text into natural-sounding speech (also called speech synthesis). It supports text in multiple languages and gender-based voices (male or female).

You can also refer to our earlier articles on Cognitive Services.

Prerequisites

  1. Subscription key ( Azure Portal ) or Trial Subscription Key
  2. Visual Studio 2015 or 2017

Convert Text to Speech API

First, we need to log into the Azure Portal with our Azure credentials. Then we need to create an Azure Speech Service API in the Azure portal.
So please click on the “Create a resource” on the left top menu and search “Speech” in the search bar on the right side window or top of Azure Marketplace.

Now we can see there are few speech related “AI + Machine Learning ” categories listed in the search result.

Click on the “create” button to create Speech Service API.

Provision a Speech Service API ( Text to Speech ) Subscription Key

After clicking "Create", another window opens where we need to provide the basic information about the Speech API.

Name : Name of the Speech Service API ( Eg. TextToSpeechApp ).

Subscription : We can select our Azure subscription for Speech API creation.

Location : We can select the location of the resource group. The best practice is to choose a location closest to our customers.

Pricing tier : Select an appropriate pricing tier for our requirement.

Resource group : We can create a new resource group or choose from an existing one ( We created a new resource group as “SpeechResource” ).

Now click on "TextToSpeechApp" in the dashboard page and it will redirect to the detailed page of TextToSpeechApp ( "Overview" ). Here, we can see the "Keys" ( subscription key details ) menu in the left side panel. Click on the "Keys" menu and it will open the subscription key details. We can use either of the subscription keys, or regenerate the given key, for text to speech conversion using the Microsoft Speech Service API.

Authentication

Token-based (bearer) authentication is required for text to speech conversion using the Speech Service API, so we need to create an authentication token using the "TextToSpeechApp" subscription keys. The following endpoint creates the authentication token for text to speech conversion. Each access token is valid for 10 minutes; after that, we need to create a new one.

https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken

Speech Synthesis Markup Language ( SSML )

The Speech Synthesis Markup Language (SSML) is an XML-based markup language that provides a way to control the pronunciation and rhythm of text-to-speech output. More details about SSML are available in the Microsoft documentation.

SSML Format :

<speak version='1.0' xml:lang='en-US'><voice xml:lang='ta-IN' xml:gender='Female' name='Microsoft Server Speech Text to Speech Voice (ta-IN, Valluvar)'>
        நன்றி
</voice></speak>

How to make a request

This is a very simple process: the HTTP request is made with the POST method, which means we pass the data in the request body, either as plain text or as an SSML document. As the documentation clearly states, in most cases we should use an SSML body in the request. The maximum length of the HTTP request body is 1024 characters. The following is the endpoint for our HTTP POST method.

https://westus.tts.speech.microsoft.com/cognitiveservices/v1

The following HTTP headers are required in the request.

Pic source : https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-text-to-speech

Index.html

The Index view contains the binding markup we have used in our application, built with the latest Tag Helpers of ASP.NET Core.

Model

The following model contains the Speech Model information.

using Microsoft.AspNetCore.Mvc.Rendering;
using System.Collections.Generic;
using System.ComponentModel;

namespace TextToSpeechApp.Models
{
    public class SpeechModel
    {
        public string Content { get; set; }

        public string SubscriptionKey { get; set; } = "< Subscription Key >";

        [DisplayName("Language Selection :")]
        public string LanguageCode { get; set; } = "NA";

        public List<SelectListItem> LanguagePreference { get; set; } = new List<SelectListItem>
        {
        new SelectListItem { Value = "NA", Text = "-Select-" },
        new SelectListItem { Value = "en-US", Text = "English (United States)"  },
        new SelectListItem { Value = "en-IN", Text = "English (India)"  },
        new SelectListItem { Value = "ta-IN", Text = "Tamil (India)"  },
        new SelectListItem { Value = "hi-IN", Text = "Hindi (India)"  },
        new SelectListItem { Value = "te-IN", Text = "Telugu (India)"  }
        };
    }
}

Interface

The "ITextToSpeech" interface contains one signature for converting text to speech based on the given input. We have registered this interface in the ASP.NET Core "Startup.cs" class as "AddTransient".

using System.Threading.Tasks;

namespace TextToSpeechApp.BusinessLayer.Interface
{
    public interface ITextToSpeech
    {
        Task<byte[]> TranslateText(string token, string key, string content, string lang);
    }
}

Text to Speech API Service

We can add the valid Speech API Subscription key and authentication token into the following code.

        /// <summary>
        /// Translate text to speech
        /// </summary>
        /// <param name="token">Authentication token</param>
        /// <param name="key">Azure subscription key</param>
        /// <param name="content">Text content for speech</param>
        /// <param name="lang">Speech conversion language</param>
        /// <returns></returns>
        public async Task<byte[]> TranslateText(string token, string key, string content, string lang)
        {
            //Request url for the speech api.
            string uri = "https://westus.tts.speech.microsoft.com/cognitiveservices/v1";
            //Generate Speech Synthesis Markup Language (SSML) 
            var requestBody = this.GenerateSsml(lang, "Female", this.ServiceName(lang), content);

            using (var client = new HttpClient())
            using (var request = new HttpRequestMessage())
            {
                request.Method = HttpMethod.Post;
                request.RequestUri = new Uri(uri);
                request.Headers.Add("Ocp-Apim-Subscription-Key", key);
                request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
                request.Headers.Add("X-Microsoft-OutputFormat", "audio-16khz-64kbitrate-mono-mp3");
                request.Content = new StringContent(requestBody, Encoding.UTF8, "text/plain");
                request.Content.Headers.Remove("Content-Type");
                request.Content.Headers.Add("Content-Type", "application/ssml+xml");
                request.Headers.Add("User-Agent", "TexttoSpeech");
                var response = await client.SendAsync(request);
                var httpStream = await response.Content.ReadAsStreamAsync().ConfigureAwait(false);

                using (Stream stream = httpStream)
                {
                    using (MemoryStream ms = new MemoryStream())
                    {
                        byte[] waveBytes = null;
                        int count = 0;
                        do
                        {
                            byte[] buf = new byte[1024];
                            count = stream.Read(buf, 0, 1024);
                            ms.Write(buf, 0, count);
                        } while (stream.CanRead && count > 0);

                        waveBytes = ms.ToArray();

                        return waveBytes;
                    }
                }
            }
        }


Output

The given text is converted into speech in the desired language, selected from a drop-down list, using the Microsoft Speech API.


Summary

From this article, we have learned how to convert text to speech in multiple languages using ASP.NET Core and C#, as per the API documentation, using one of the important Cognitive Services APIs (the Text to Speech API, part of the Speech API). I hope this article is useful for Azure Cognitive Services API beginners.

Cognitive Services : Translate Text into multiple languages using Asp.Net Core & C#


Introduction

In this article, we are going to learn how to translate text into multiple languages using one of the important Cognitive Services APIs, the Microsoft Translator Text API (part of the Language APIs). It is a simple cloud-based machine translation service, and we can test it through a simple REST API call. Microsoft uses a new standard for high-quality AI-powered machine translations known as Neural Machine Translation (NMT).

Pic source : https://www.microsoft.com/en-us/translator/business/machine-translation/#whatmachine

You can also refer to our earlier articles on Cognitive Services.

Prerequisites

  1. Subscription key ( Azure Portal ).
  2. Visual Studio 2015 or 2017

Translator Text API

First, we need to log into the Azure Portal with our Azure credentials. Then we need to create an Azure Translator Text API in the Azure portal. So please click on the “Create a resource” on the left top menu and search “Translator Text” in the search bar on the right side window or top of Azure Marketplace.

Click on the “create” button to create Translator Text API.

Provision a Translator Text Subscription Key

After clicking "Create", another window opens where we need to provide the basic information about the Translator Text API.

Name : Name of the Translator Text API ( Eg. TranslatorTextApp ).

Subscription : We can select our Azure subscription for Translator Text  API creation.

Location : We can select the location of the resource group. The best practice is to choose a location closest to our customers.

Pricing tier : Select an appropriate pricing tier for our requirement.

Resource group : We can create a new resource group or choose from an existing one.

Now click on "TranslatorTextApp" in the dashboard page and it will redirect to the detailed page of TranslatorTextApp ( "Overview" ). Here, we can see the "Keys" ( subscription key details ) menu in the left side panel. Click on the "Keys" menu and it will open the subscription key details. We can use either of the subscription keys, or regenerate the given key, for text translation using the Microsoft Translator Text API.

Language Request URL

The following request url gets the set of languages currently supported by other operations of the Microsoft Translator Text API.

https://api.cognitive.microsofttranslator.com/languages?api-version=3.0

Endpoint

The client must request version 3.0 of the API; we can also include query parameters and request headers with the following endpoint used in our application.

https://api.cognitive.microsofttranslator.com/translate?api-version=3.0

The mandatory query string parameters are "api-version" and "to". The "api-version" value must be "3.0" as per the current documentation, and "to" is the language code parameter that specifies the language to translate the entered text into.

The mandatory request headers are the authorization header and "Content-Type". We can pass our subscription key via the authorization header, but the simplest way is to pass our Azure secret key to the Translator service using the request header "Ocp-Apim-Subscription-Key".

Index.html

The Index view contains the binding markup we have used in our application, built with the latest Tag Helpers of ASP.NET Core.

site.js

The following Ajax call is triggered on each index change of the language selection drop-down list.

// Write your JavaScript code.
$(function () {
    $(document)
        .on('change', '#ddlLangCode', function () {
            var languageCode = $(this).val();
            var enterText = $("#enterText").val();
            if (1 <= $("#enterText").val().trim().length && languageCode != "NA") {

                $('#enterText').removeClass('redBorder');

                var url = '/Home/Index';
                var dataToSend = { "LanguageCode": languageCode, "Text": enterText };
                dataType: "json",
                    $.ajax({
                        url: url,
                        data: dataToSend,
                        type: 'POST',
                        success: function (response) {
                            //update control on View
                            var result = JSON.parse(response);
                            var translatedText = result[0].translations[0].text;
                            $('#translatedText').val(translatedText);
                        }
                    })
            }
            else {
                $('#enterText').addClass('redBorder');
                $('#translatedText').val("");
            }
        });
});

Interface

The "ITranslateText" interface contains one signature for translating text content based on the given input. We have registered this interface in the ASP.NET Core "Startup.cs" class as "AddTransient".

using System.Threading.Tasks;

namespace TranslateTextApp.Business_Layer.Interface
{
    interface ITranslateText
    {
        Task<string> Translate(string uri, string text, string key);
    }
}

Translator Text API Service

We can add the valid Translator Text API Subscription Key into the following code.

using Newtonsoft.Json;
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using TranslateTextApp.Business_Layer.Interface;

namespace TranslateTextApp.Business_Layer
{
    public class TranslateTextService : ITranslateText
    {
        /// <summary>
        /// Translate the given text into the selected language.
        /// </summary>
        /// <param name="uri">Request uri</param>
        /// <param name="text">The text given for translation</param>
        /// <param name="key">Subscription key</param>
        /// <returns></returns>
        public async Task<string> Translate(string uri, string text, string key)
        {
            System.Object[] body = new System.Object[] { new { Text = text } };
            var requestBody = JsonConvert.SerializeObject(body);
            
            using (var client = new HttpClient())
            using (var request = new HttpRequestMessage())
            {
                request.Method = HttpMethod.Post;
                request.RequestUri = new Uri(uri);
                request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
                request.Headers.Add("Ocp-Apim-Subscription-Key", key);

                var response = await client.SendAsync(request);
                var responseBody = await response.Content.ReadAsStringAsync();
                dynamic result = JsonConvert.SerializeObject(JsonConvert.DeserializeObject(responseBody), Formatting.Indented);
                
                return result;
            }
        }
    }
}

API Response – Based on the given text

A successful JSON response:

[
  {
    "detectedLanguage": {
      "language": "en",
      "score": 1.0
    },
    "translations": [
      {
        "text": "सफलता का कोई शार्टकट नहीं होता",
        "to": "hi"
      }
    ]
  }
]

Output

The given text is translated into the desired language, selected from a drop-down list, using the Microsoft Translator API.

Summary

From this article, we have learned how to translate text (typed in English) into different languages, as per the API documentation, using one of the important Cognitive Services APIs (the Translator Text API, part of the Language APIs). I hope this article is useful for Azure Cognitive Services API beginners.

Cognitive Services : Extract handwritten text from an image using Computer Vision API With ASP.NET Core & C#


Introduction

In this article, we are going to learn how to extract handwritten text from an image using one of the important Cognitive Services APIs, the Computer Vision API. We need a valid subscription key to access this feature. Before reading this article, you should read our previous articles on the Computer Vision API, where we explained its other features. This technology is currently in preview and is only available for English text.

Prerequisites

  1. Subscription key ( Azure Portal ).
  2. Visual Studio 2015 or 2017

Subscription Key Free Trial

If you don't have a Microsoft Azure subscription and want to test the Computer Vision API, which requires a valid subscription key for processing image information, don't worry!! Microsoft gives a 7-day trial subscription key. We can use that subscription key for testing purposes. If you sign up using the Computer Vision free trial, your subscription keys are valid for the westcentralus region (https://westcentralus.api.cognitive.microsoft.com).

Requirements

These are the major requirements mentioned in the Microsoft docs.

  1. Supported input methods: Raw image binary in the form of an application/octet stream or image URL.
  2. Supported image formats: JPEG, PNG, BMP.
  3. Image file size: Less than 4 MB.
  4. Image dimensions must be at least 40 x 40, at most 3200 x 3200.

Computer Vision API

First, we need to log into the Azure Portal with our Azure credentials. Then we need to create an Azure Computer Vision Subscription Key in the Azure portal.

Click on “Create a resource” on the left side menu and it will open an “Azure Marketplace”. There, we can see the list of services. Click “AI + Machine Learning” then click on the “Computer Vision”.

Provision a Computer Vision Subscription Key

After clicking "Computer Vision", another section opens where we need to provide the basic information about the Computer Vision API.

Name : Name of the Computer Vision API ( Eg. HandwrittenApp ).

Subscription : We can select our Azure subscription for Computer Vision API creation.

Location : We can select our location of resource group. The best thing is we can choose a location closest to our customer.

Pricing tier : Select an appropriate pricing tier for our requirement.

Resource group : We can create a new resource group or choose from an existing one.

Now click on "HandwrittenApp" in the dashboard page and it will redirect to the details page of HandwrittenApp ( "Overview" ). Here, we can see the Manage Keys ( subscription key details ) and Endpoint details. Click on the "Show access keys…" link and it will redirect to another page.

We can use any of the subscription keys or regenerate the given key for getting image information using Computer Vision API.

Endpoint

As mentioned above, the location is the same for all free trial subscription keys. In Azure, we can choose from the available locations while creating a Computer Vision API. We have used the following endpoint in our code.

https://westus.api.cognitive.microsoft.com/vision/v1.0/recognizeText

View Model

The following model contains the API image response information.

using System.Collections.Generic;

namespace HandwrittenTextApp.Models
{
    public class Word
    {
        public List<int> boundingBox { get; set; }
        public string text { get; set; }
    }

    public class Line
    {
        public List<int> boundingBox { get; set; }
        public string text { get; set; }
        public List<Word> words { get; set; }
    }

    public class RecognitionResult
    {
        public List<Line> lines { get; set; }
    }

    public class ImageInfoViewModel
    {
        public string status { get; set; }
        public RecognitionResult recognitionResult { get; set; }
    }
}

Request URL

We can add optional request parameters to our API endpoint, and it will provide more information for the given image.

https://westus.api.cognitive.microsoft.com/vision/v1.0/recognizeText[?mode]

Request parameters

The following request parameter is available in the Computer Vision API.

  1. mode

mode

The mode parameter differs between versions of the Vision API, so don't get confused; we are using version v1.0 as given in our Azure portal. If this parameter is set to "Printed", printed text recognition is performed. If "Handwritten" is specified, handwriting recognition is performed. (Note: this parameter is case sensitive.) It is a required parameter and cannot be empty.

Interface

The "IVisionApiService" interface contains two signatures for processing and extracting handwritten content from an image. We have registered this interface in the ASP.NET Core "Startup.cs" class as "AddTransient".

using System.Threading.Tasks;

namespace HandwrittenTextApp.Business_Layer.Interface
{
    interface IVisionApiService
    {
        Task<string> ReadHandwrittenText();
        byte[] GetImageAsByteArray(string imageFilePath);
    }
}

Vision API Service

The following code will process and generate image information using Computer Vision API and its response is mapped into the “ImageInfoViewModel”. We can add the valid Computer Vision API Subscription Key into the following code.

using HandwrittenTextApp.Business_Layer.Interface;
using HandwrittenTextApp.Models;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

namespace HandwrittenTextApp.Business_Layer
{
    public class VisionApiService : IVisionApiService
    {
        // Replace <Subscription Key> with your valid subscription key.
        const string subscriptionKey = "<Subscription Key>";

        // You must use the same region in your REST call as you used to
        // get your subscription keys. Paid subscription keys are available
        // from the Microsoft Azure portal.
        // Free trial subscription keys are generated in the westcentralus region.
        // If you use a free trial subscription key, you shouldn't need to change
        // this region.
        const string endPoint =
            "https://westus.api.cognitive.microsoft.com/vision/v1.0/recognizeText";

        /// <summary>
        /// Gets the handwritten text from the specified image file by using
        /// the Computer Vision REST API.
        /// </summary>
        public async Task<string> ReadHandwrittenText()
        {
            string imageFilePath = @"C:\Users\rajeesh.raveendran\Desktop\vaisakh.jpg";
            var errors = new List<string>();
            ImageInfoViewModel responeData = new ImageInfoViewModel();
            string extractedResult = "";
            try
            {
                HttpClient client = new HttpClient();

                // Request headers.
                client.DefaultRequestHeaders.Add(
                    "Ocp-Apim-Subscription-Key", subscriptionKey);

                // Request parameter.
                // Note: The request parameter changed for APIv2.
                // For APIv1, it is "handwriting=true".
                string requestParameters = "mode=Handwritten";

                // Assemble the URI for the REST API Call.
                string uri = endPoint + "?" + requestParameters;

                HttpResponseMessage response;

                // Two REST API calls are required to extract handwritten text.
                // One call to submit the image for processing, the other call
                // to retrieve the text found in the image.
                // operationLocation stores the REST API location to call to
                // retrieve the text.
                string operationLocation;

                // Request body.
                // Posts a locally stored JPEG image.
                byte[] byteData = GetImageAsByteArray(imageFilePath);

                using (ByteArrayContent content = new ByteArrayContent(byteData))
                {
                    // This example uses content type "application/octet-stream".
                    // The other content types you can use are "application/json"
                    // and "multipart/form-data".
                    content.Headers.ContentType =
                        new MediaTypeHeaderValue("application/octet-stream");

                    // The first REST call starts the async process to analyze the
                    // written text in the image.
                    response = await client.PostAsync(uri, content);
                }

                // The response contains the URI to retrieve the result of the process.
                if (response.IsSuccessStatusCode)
                    operationLocation =
                        response.Headers.GetValues("Operation-Location").FirstOrDefault();
                else
                {
                    // Display the JSON error data.
                    string errorString = await response.Content.ReadAsStringAsync();
                    //Console.WriteLine("\n\nResponse:\n{0}\n",
                    //    JToken.Parse(errorString).ToString());
                    return errorString;
                }

                // The second REST call retrieves the text written in the image.
                //
                // Note: The response may not be immediately available. Handwriting
                // recognition is an async operation that can take a variable amount
                // of time depending on the length of the handwritten text. You may
                // need to wait or retry this operation.
                //
                // This example checks once per second for ten seconds.
                string result;
                int i = 0;
                do
                {
                    System.Threading.Thread.Sleep(1000);
                    response = await client.GetAsync(operationLocation);
                    result = await response.Content.ReadAsStringAsync();
                    ++i;
                }
                while (i < 10 && result.IndexOf("\"status\":\"Succeeded\"") == -1);

                if (i == 10 && result.IndexOf("\"status\":\"Succeeded\"") == -1)
                {
                    Console.WriteLine("\nTimeout error.\n");
                    return "Error";
                }

                //If it is success it will execute further process.
                if (response.IsSuccessStatusCode)
                {
                    // The JSON response mapped into respective view model.
                    responeData = JsonConvert.DeserializeObject<ImageInfoViewModel>(result,
                        new JsonSerializerSettings
                        {
                            NullValueHandling = NullValueHandling.Include,
                            Error = delegate (object sender, Newtonsoft.Json.Serialization.ErrorEventArgs earg)
                            {
                                errors.Add(earg.ErrorContext.Member.ToString());
                                earg.ErrorContext.Handled = true;
                            }
                        }
                    );

                    var linesCount = responeData.recognitionResult.lines.Count;
                    for (int j = 0; j < linesCount; j++)
                    {
                        var imageText = responeData.recognitionResult.lines[j].text;

                        extractedResult += imageText + Environment.NewLine;
                    }
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("\n" + e.Message);
            }
            return extractedResult;
        }

        /// <summary>
        /// Returns the contents of the specified file as a byte array.
        /// </summary>
        /// <param name="imageFilePath">The image file to read.</param>
        /// <returns>The byte array of the image data.</returns>
        public byte[] GetImageAsByteArray(string imageFilePath)
        {
            using (FileStream fileStream =
                new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            {
                BinaryReader binaryReader = new BinaryReader(fileStream);
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
        }
    }
}

API Response – Based on the given Image

A successful JSON response:

{
"status": "Succeeded",
"recognitionResult": {
"lines": [
{
"boundingBox": [
170,
34,
955,
31,
956,
78,
171,
81
],
"text": "Memories ! are born not made !",
"words": [
{
"boundingBox": [
158,
33,
378,
33,
373,
81,
153,
81
],
"text": "Memories"
},
{
"boundingBox": [
359,
33,
407,
33,
402,
81,
354,
81
],
"text": "!"
},
{
"boundingBox": [
407,
33,
508,
33,
503,
81,
402,
81
],
"text": "are"
},
{
"boundingBox": [
513,
33,
662,
33,
657,
81,
508,
81
],
"text": "born"
},
{
"boundingBox": [
676,
33,
786,
33,
781,
81,
671,
81
],
"text": "not"
},
{
"boundingBox": [
786,
33,
940,
33,
935,
81,
781,
81
],
"text": "made"
},
{
"boundingBox": [
926,
33,
974,
33,
969,
81,
921,
81
],
"text": "!"
}
]
},
{
"boundingBox": [
181,
121,
918,
112,
919,
175,
182,
184
],
"text": "Bloom of roses to my heart",
"words": [
{
"boundingBox": [
162,
123,
307,
121,
298,
185,
154,
187
],
"text": "Bloom"
},
{
"boundingBox": [
327,
120,
407,
119,
398,
183,
318,
185
],
"text": "of"
},
{
"boundingBox": [
422,
119,
572,
117,
563,
181,
413,
183
],
"text": "roses"
},
{
"boundingBox": [
577,
117,
647,
116,
638,
180,
568,
181
],
"text": "to"
},
{
"boundingBox": [
647,
116,
742,
115,
733,
179,
638,
180
],
"text": "my"
},
{
"boundingBox": [
757,
115,
927,
113,
918,
177,
748,
179
],
"text": "heart"
}
]
},
{
"boundingBox": [
190,
214,
922,
201,
923,
254,
191,
267
],
"text": "Sometimes lonely field as",
"words": [
{
"boundingBox": [
178,
213,
468,
209,
467,
263,
177,
267
],
"text": "Sometimes"
},
{
"boundingBox": [
486,
209,
661,
206,
660,
260,
485,
263
],
"text": "lonely"
},
{
"boundingBox": [
675,
206,
840,
203,
839,
257,
674,
260
],
"text": "field"
},
{
"boundingBox": [
850,
203,
932,
202,
931,
256,
848,
257
],
"text": "as"
}
]
},
{
"boundingBox": [
187,
304,
560,
292,
561,
342,
188,
354
],
"text": "sky kisses it",
"words": [
{
"boundingBox": [
173,
302,
288,
300,
288,
353,
173,
355
],
"text": "sky"
},
{
"boundingBox": [
288,
300,
488,
295,
488,
348,
288,
353
],
"text": "kisses"
},
{
"boundingBox": [
488,
295,
573,
293,
573,
346,
488,
348
],
"text": "it"
}
]
},
{
"boundingBox": [
191,
417,
976,
387,
979,
469,
194,
499
],
"text": "Three years iam gifted with",
"words": [
{
"boundingBox": [
173,
417,
324,
412,
318,
494,
167,
499
],
"text": "Three"
},
{
"boundingBox": [
343,
411,
504,
405,
498,
488,
337,
493
],
"text": "years"
},
{
"boundingBox": [
517,
405,
623,
401,
617,
483,
512,
487
],
"text": "iam"
},
{
"boundingBox": [
646,
400,
839,
394,
833,
476,
640,
483
],
"text": "gifted"
},
{
"boundingBox": [
839,
394,
977,
389,
971,
471,
833,
476
],
"text": "with"
}
]
},
{
"boundingBox": [
167,
492,
825,
472,
828,
551,
169,
572
],
"text": "gud friend happiness !",
"words": [
{
"boundingBox": [
159,
493,
274,
489,
274,
569,
159,
573
],
"text": "gud"
},
{
"boundingBox": [
284,
489,
484,
483,
484,
563,
284,
569
],
"text": "friend"
},
{
"boundingBox": [
504,
482,
814,
473,
814,
553,
504,
562
],
"text": "happiness"
},
{
"boundingBox": [
794,
474,
844,
472,
844,
552,
794,
554
],
"text": "!"
}
]
},
{
"boundingBox": [
167,
608,
390,
628,
387,
664,
163,
644
],
"text": "50870 W,",
"words": [
{
"boundingBox": [
159,
603,
321,
623,
310,
661,
147,
641
],
"text": "50870"
},
{
"boundingBox": [
309,
621,
409,
634,
397,
672,
297,
659
],
"text": "W,"
}
]
},
{
"boundingBox": [
419,
607,
896,
601,
897,
665,
420,
671
],
"text": "Seperation , sheds",
"words": [
{
"boundingBox": [
404,
609,
713,
604,
707,
669,
399,
674
],
"text": "Seperation"
},
{
"boundingBox": [
703,
604,
749,
604,
743,
669,
698,
669
],
"text": ","
},
{
"boundingBox": [
740,
604,
910,
602,
904,
667,
734,
669
],
"text": "sheds"
}
]
},
{
"boundingBox": [
161,
685,
437,
688,
436,
726,
160,
724
],
"text": "blood as in",
"words": [
{
"boundingBox": [
147,
687,
299,
684,
291,
726,
139,
729
],
"text": "blood"
},
{
"boundingBox": [
311,
683,
387,
682,
379,
724,
303,
725
],
"text": "as"
},
{
"boundingBox": [
398,
681,
440,
681,
432,
723,
390,
724
],
"text": "in"
}
]
},
{
"boundingBox": [
518,
678,
686,
679,
685,
719,
517,
718
],
"text": "tears !",
"words": [
{
"boundingBox": [
518,
677,
678,
682,
665,
723,
505,
717
],
"text": "tears"
},
{
"boundingBox": [
658,
681,
708,
683,
695,
724,
645,
722
],
"text": "!"
}
]
},
{
"boundingBox": [
165,
782,
901,
795,
900,
868,
164,
855
],
"text": "I can't bear it Especially",
"words": [
{
"boundingBox": [
145,
785,
191,
786,
184,
862,
138,
861
],
"text": "I"
},
{
"boundingBox": [
204,
786,
342,
787,
336,
863,
198,
862
],
"text": "can't"
},
{
"boundingBox": [
370,
788,
513,
789,
506,
865,
364,
864
],
"text": "bear"
},
{
"boundingBox": [
522,
789,
595,
790,
589,
866,
516,
865
],
"text": "it"
},
{
"boundingBox": [
605,
790,
913,
794,
907,
869,
598,
866
],
"text": "Especially"
}
]
},
{
"boundingBox": [
165,
874,
966,
884,
965,
942,
164,
933
],
"text": "final year a bunch of white",
"words": [
{
"boundingBox": [
155,
872,
306,
875,
294,
936,
143,
933
],
"text": "final"
},
{
"boundingBox": [
331,
876,
457,
878,
445,
939,
320,
936
],
"text": "year"
},
{
"boundingBox": [
466,
878,
508,
879,
496,
940,
454,
939
],
"text": "a"
},
{
"boundingBox": [
525,
879,
676,
882,
664,
943,
513,
940
],
"text": "bunch"
},
{
"boundingBox": [
697,
882,
772,
884,
760,
945,
685,
943
],
"text": "of"
},
{
"boundingBox": [
785,
884,
970,
888,
958,
948,
773,
945
],
"text": "white"
}
]
},
{
"boundingBox": [
174,
955,
936,
960,
935,
1006,
173,
1001
],
"text": "roses to me . I Loved it ! !",
"words": [
{
"boundingBox": [
164,
953,
348,
954,
341,
1002,
157,
1001
],
"text": "roses"
},
{
"boundingBox": [
376,
955,
445,
955,
437,
1003,
368,
1003
],
"text": "to"
},
{
"boundingBox": [
449,
955,
537,
956,
529,
1004,
442,
1003
],
"text": "me"
},
{
"boundingBox": [
518,
956,
564,
957,
557,
1005,
511,
1004
],
"text": "."
},
{
"boundingBox": [
569,
957,
615,
957,
607,
1005,
561,
1005
],
"text": "I"
},
{
"boundingBox": [
629,
957,
799,
959,
791,
1007,
621,
1005
],
"text": "Loved"
},
{
"boundingBox": [
817,
959,
886,
960,
879,
1008,
810,
1007
],
"text": "it"
},
{
"boundingBox": [
881,
960,
927,
960,
920,
1008,
874,
1008
],
"text": "!"
},
{
"boundingBox": [
909,
960,
955,
960,
948,
1008,
902,
1008
],
"text": "!"
}
]
},
{
"boundingBox": [
613,
1097,
680,
1050,
702,
1081,
635,
1129
],
"text": "by",
"words": [
{
"boundingBox": [
627,
1059,
683,
1059,
681,
1107,
625,
1107
],
"text": "by"
}
]
},
{
"boundingBox": [
320,
1182,
497,
1191,
495,
1234,
318,
1224
],
"text": "Vaisak",
"words": [
{
"boundingBox": [
309,
1183,
516,
1186,
492,
1229,
286,
1227
],
"text": "Vaisak"
}
]
},
{
"boundingBox": [
582,
1186,
964,
1216,
961,
1264,
578,
1234
],
"text": "Viswanathan",
"words": [
{
"boundingBox": [
574,
1186,
963,
1218,
945,
1265,
556,
1232
],
"text": "Viswanathan"
}
]
},
{
"boundingBox": [
289,
1271,
997,
1279,
996,
1364,
288,
1356
],
"text": "( Menonpara, Palakkad )",
"words": [
{
"boundingBox": [
274,
1264,
324,
1265,
306,
1357,
256,
1356
],
"text": "("
},
{
"boundingBox": [
329,
1265,
679,
1273,
661,
1364,
311,
1357
],
"text": "Menonpara,"
},
{
"boundingBox": [
669,
1273,
979,
1279,
961,
1371,
651,
1364
],
"text": "Palakkad"
},
{
"boundingBox": [
969,
1279,
1019,
1280,
1001,
1371,
951,
1370
],
"text": ")"
}
]
}
]
}
}

Output

The handwritten content is extracted from the given image using the Computer Vision API; in this case, nearly all (around 99.99%) of the content was extracted correctly. If detection fails, it means the Vision algorithm was not able to identify the written content.

Note : Thank you, Vaisakh Viswanathan ( the author of the poem ).


Summary

From this article, we have learned how to extract handwritten content from an image using one of the important Cognitive Services APIs (the Computer Vision API). I hope this article is useful for Azure Cognitive Services API beginners.

Cognitive Services – Optical Character Recognition (OCR) from an image using Computer Vision API And C#


Introduction

In our previous article, we learned how to analyze an image using the Computer Vision API with ASP.NET Core and C#. In this article, we are going to learn how to extract printed text, also known as optical character recognition (OCR), from an image using the same important Cognitive Services API, the Computer Vision API. We need a valid subscription key to access this feature.

Optical Character Recognition (OCR)

Optical Character Recognition (OCR) detects text in an image and extracts the recognized characters into a machine-usable character stream.

Prerequisites

  1. Subscription key ( Azure Portal ).
  2. Visual Studio 2015 or 2017

Subscription Key Free Trial

If you don't have a Microsoft Azure subscription and want to test the Computer Vision API, which requires a valid subscription key for processing image information, don't worry!! Microsoft gives a 7-day trial subscription key. We can use that subscription key for testing purposes. If you sign up using the Computer Vision free trial, your subscription keys are valid for the westcentralus region (https://westcentralus.api.cognitive.microsoft.com).

Requirements

These are the major requirements mentioned in the Microsoft docs.

  1. Supported input methods: Raw image binary in the form of an application/octet stream or image URL.
  2. Supported image formats: JPEG, PNG, GIF, BMP.
  3. Image file size: Less than 4 MB.
  4. Image dimension: Greater than 50 x 50 pixels.

Computer Vision API

First, we need to log into the Azure Portal with our Azure credentials. Then we need to create an Azure Computer Vision Subscription Key in the Azure portal.

Click on “Create a resource” on the left side menu and it will open an “Azure Marketplace”. There, we can see the list of services. Click “AI + Machine Learning” then click on the “Computer Vision”.

Provision a Computer Vision Subscription Key

After clicking "Computer Vision", another section opens where we need to provide the basic information about the Computer Vision API.

Name : Name of the Computer Vision API ( Eg. OCRApp ).

Subscription : We can select our Azure subscription for Computer Vision API creation.

Location : We can select the location of the resource group. The best practice is to choose a location closest to our customers.

Pricing tier : Select an appropriate pricing tier for our requirement.

Resource group : We can create a new resource group or choose from an existing one.

Now click on "OCRApp" in the dashboard page and it will redirect to the details page of OCRApp ( "Overview" ). Here, we can see the Manage Keys ( subscription key details ) and Endpoint details. Click on the "Show access keys…" link and it will redirect to another page.

We can use any of the subscription keys or regenerate the given key for getting image information using Computer Vision API.

Endpoint

As mentioned above, the location is the same for all free trial subscription keys. In Azure, we can choose from the available locations while creating a Computer Vision API. We have used the following endpoint in our code.

https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr

View Model

The following model will contain the API image response information.

using System.Collections.Generic;

namespace OCRApp.Models
{
    public class Word
    {
        public string boundingBox { get; set; }
        public string text { get; set; }
    }

    public class Line
    {
        public string boundingBox { get; set; }
        public List<Word> words { get; set; }
    }

    public class Region
    {
        public string boundingBox { get; set; }
        public List<Line> lines { get; set; }
    }

    public class ImageInfoViewModel
    {
        public string language { get; set; }
        public string orientation { get; set; }
        public double textAngle { get; set; } // the API returns a fractional angle, so double instead of int
        public List<Region> regions { get; set; }
    }
}

Request URL

We can add optional request parameters to our API endpoint, and it will provide more information for the given image.

https://[location].api.cognitive.microsoft.com/vision/v1.0/ocr[?language][&detectOrientation]

Request parameters

The following optional parameters are available in the Computer Vision API.

  1. language
  2. detectOrientation

language

The service can detect 26 languages of text in the image. The default value is "unk", which means the service auto-detects the language of the text in the image.

The following are the supported languages mentioned in the Microsoft API documentation.

  1. unk (AutoDetect)
  2. en (English)
  3. zh-Hans (ChineseSimplified)
  4. zh-Hant (ChineseTraditional)
  5. cs (Czech)
  6. da (Danish)
  7. nl (Dutch)
  8. fi (Finnish)
  9. fr (French)
  10. de (German)
  11. el (Greek)
  12. hu (Hungarian)
  13. it (Italian)
  14. ja (Japanese)
  15. ko (Korean)
  16. nb (Norwegian)
  17. pl (Polish)
  18. pt (Portuguese)
  19. ru (Russian)
  20. es (Spanish)
  21. sv (Swedish)
  22. tr (Turkish)
  23. ar (Arabic)
  24. ro (Romanian)
  25. sr-Cyrl (SerbianCyrillic)
  26. sr-Latn (SerbianLatin)
  27. sk (Slovak)

detectOrientation

This detects the text orientation in the image; to use this feature, we need to add detectOrientation=true to the service URL, or request URL, as discussed earlier.

Vision API Service

The following code will process and generate image information using Computer Vision API and its response is mapped into the “ImageInfoViewModel”. We can add the valid Computer Vision API Subscription Key into the following code.

using Newtonsoft.Json;
using OCRApp.Models;
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

namespace OCRApp.Business_Layer
{
    public class VisionApiService
    {
        // Replace <Subscription Key> with your valid subscription key.
        const string subscriptionKey = "<Subscription Key>";

        // You must use the same region in your REST call as the region
        // where you obtained your subscription keys. Paid subscription
        // keys are available from the Microsoft Azure portal.
        // Free trial subscription keys are generated in the westcentralus region.
        // If you use a free trial subscription key, you shouldn't need to change
        // this region.
        const string endPoint =
            "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr";

        /// <summary>
        /// Gets the text visible in the specified image file by using
        /// the Computer Vision REST API.
        /// </summary>
        public async Task<string> MakeOCRRequest()
        {
            string imageFilePath = @"C:\Users\rajeesh.raveendran\Desktop\bill.jpg";
            var errors = new List<string>();
            string extractedResult = "";
            ImageInfoViewModel responseData = new ImageInfoViewModel();

            try
            {
                HttpClient client = new HttpClient();

                // Request headers.
                client.DefaultRequestHeaders.Add(
                    "Ocp-Apim-Subscription-Key", subscriptionKey);

                // Request parameters.
                string requestParameters = "language=unk&detectOrientation=true";

                // Assemble the URI for the REST API Call.
                string uri = endPoint + "?" + requestParameters;

                HttpResponseMessage response;

                // Request body. Posts a locally stored JPEG image.
                byte[] byteData = GetImageAsByteArray(imageFilePath);

                using (ByteArrayContent content = new ByteArrayContent(byteData))
                {
                    // This example uses content type "application/octet-stream".
                    // The other content types you can use are "application/json"
                    // and "multipart/form-data".
                    content.Headers.ContentType =
                        new MediaTypeHeaderValue("application/octet-stream");

                    // Make the REST API call.
                    response = await client.PostAsync(uri, content);
                }

                // Get the JSON response.
                string result = await response.Content.ReadAsStringAsync();

                // If the call succeeded, process the response further.
                if (response.IsSuccessStatusCode)
                {
                    // Map the JSON response to the view model.
                    responseData = JsonConvert.DeserializeObject<ImageInfoViewModel>(result,
                        new JsonSerializerSettings
                        {
                            NullValueHandling = NullValueHandling.Include,
                            Error = delegate (object sender, Newtonsoft.Json.Serialization.ErrorEventArgs earg)
                            {
                                errors.Add(earg.ErrorContext.Member.ToString());
                                earg.ErrorContext.Handled = true;
                            }
                        }
                    );

                    var linesCount = responseData.regions[0].lines.Count;
                    for (int i = 0; i < linesCount; i++)
                    {
                        var wordsCount = responseData.regions[0].lines[i].words.Count;
                        for (int j = 0; j < wordsCount; j++)
                        {
                            // Append each word of the line, separated by spaces.
                            extractedResult += responseData.regions[0].lines[i].words[j].text + " ";
                        }
                        extractedResult += Environment.NewLine;
                    }

                }
            }
            catch (Exception e)
            {
                Console.WriteLine("\n" + e.Message);
            }
            return extractedResult;
        }

        /// <summary>
        /// Returns the contents of the specified file as a byte array.
        /// </summary>
        /// <param name="imageFilePath">The image file to read.</param>
        /// <returns>The byte array of the image data.</returns>
        static byte[] GetImageAsByteArray(string imageFilePath)
        {
            using (FileStream fileStream =
                new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            {
                using (BinaryReader binaryReader = new BinaryReader(fileStream))
                {
                    return binaryReader.ReadBytes((int)fileStream.Length);
                }
            }
        }
    }

}
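
The service above can be called from any async code path. The following is a minimal, hypothetical sketch of invoking it from an ASP.NET Core controller action; the HomeController and ExtractText names are our own, not from the original project.

using Microsoft.AspNetCore.Mvc;
using System.Threading.Tasks;
using OCRApp.Business_Layer;

namespace OCRApp.Controllers
{
    public class HomeController : Controller
    {
        // Hypothetical action: runs the OCR request and returns the extracted text.
        [HttpGet]
        public async Task<IActionResult> ExtractText()
        {
            var visionService = new VisionApiService();
            string extractedText = await visionService.MakeOCRRequest();
            return Content(extractedText);
        }
    }
}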

API Response – Based on the Given Image

The following is a successful JSON response for the given image.

{
  "language": "en",
  "orientation": "Up",
  "textAngle": 0,
  "regions": [
    {
      "boundingBox": "306,69,292,206",
      "lines": [
        {
          "boundingBox": "306,69,292,24",
          "words": [
            {
              "boundingBox": "306,69,17,19",
              "text": "\"I"
            },
            {
              "boundingBox": "332,69,45,19",
              "text": "Will"
            },
            {
              "boundingBox": "385,69,88,24",
              "text": "Always"
            },
            {
              "boundingBox": "482,69,94,19",
              "text": "Choose"
            },
            {
              "boundingBox": "585,74,13,14",
              "text": "a"
            }
          ]
        },
        {
          "boundingBox": "329,100,246,24",
          "words": [
            {
              "boundingBox": "329,100,56,24",
              "text": "Lazy"
            },
            {
              "boundingBox": "394,100,85,19",
              "text": "Person"
            },
            {
              "boundingBox": "488,100,24,19",
              "text": "to"
            },
            {
              "boundingBox": "521,100,32,19",
              "text": "Do"
            },
            {
              "boundingBox": "562,105,13,14",
              "text": "a"
            }
          ]
        },
        {
          "boundingBox": "310,131,284,19",
          "words": [
            {
              "boundingBox": "310,131,95,19",
              "text": "Difficult"
            },
            {
              "boundingBox": "412,131,182,19",
              "text": "Job....Because"
            }
          ]
        },
        {
          "boundingBox": "326,162,252,24",
          "words": [
            {
              "boundingBox": "326,162,31,19",
              "text": "He"
            },
            {
              "boundingBox": "365,162,44,19",
              "text": "Will"
            },
            {
              "boundingBox": "420,162,52,19",
              "text": "Find"
            },
            {
              "boundingBox": "481,167,28,14",
              "text": "an"
            },
            {
              "boundingBox": "520,162,58,24",
              "text": "Easy"
            }
          ]
        },
        {
          "boundingBox": "366,193,170,24",
          "words": [
            {
              "boundingBox": "366,193,52,24",
              "text": "way"
            },
            {
              "boundingBox": "426,193,24,19",
              "text": "to"
            },
            {
              "boundingBox": "459,193,33,19",
              "text": "Do"
            },
            {
              "boundingBox": "501,193,35,19",
              "text": "It!\""
            }
          ]
        },
        {
          "boundingBox": "462,256,117,19",
          "words": [
            {
              "boundingBox": "462,256,37,19",
              "text": "Bill"
            },
            {
              "boundingBox": "509,256,70,19",
              "text": "Gates"
            }
          ]
        }
      ]
    }
  ]
}
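
Feeding this response through the word-concatenation loop in MakeOCRRequest produces the extracted text line by line:

"I Will Always Choose a
Lazy Person to Do a
Difficult Job....Because
He Will Find an Easy
way to Do It!"
Bill Gates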

Output

Optical Character Recognition (OCR) from an image using the Computer Vision API.

See Also

You can download other ASP.NET Core source code from MSDN Code using the link mentioned below.

Summary

In this article, we learned how to perform Optical Character Recognition (OCR) on an image using one of the important Cognitive Services APIs, the Computer Vision API. I hope this article is useful for all Azure Cognitive Services API beginners.

 
