Get started with Blazor Apps


Introduction

We have built many single-page applications using Angular, Aurelia, React, and other JavaScript frameworks. Now Microsoft has introduced Blazor, a programming framework for building rich, client-side web applications with .NET using C#. What? Is it possible to create a single-page application using C# without JavaScript? Yes — we can create rich, interactive user interfaces (UIs) using C# instead of JavaScript, and both the client and server logic are written in .NET. Innovation is the major concern of every era, so as .NET developers we can now develop a single-page application (SPA) using Microsoft C#.

Blazor comes in five different editions. Two are available in Visual Studio today, and the other three are in the planning stage.

  • Blazor WebAssembly
  • Blazor Server
  • Blazor Progressive Web Apps (PWAs)
  • Blazor Hybrid
  • Blazor Native

The following are the currently available Blazor apps.

Blazor WebAssembly

Blazor WebAssembly is a single-page app framework for building interactive client-side web apps with .NET. Blazor runs client-side in the browser on WebAssembly (abbreviated wasm), and the client-side code is written in C# instead of JavaScript. In other words, the .NET code runs inside the browser with the help of WebAssembly, and it works in all modern web browsers, including mobile browsers. There is no .NET server-side dependency: the app is fully functional once the .NET runtime has been downloaded to the client. This creates a client-side dependency, but serverless deployment scenarios become possible because an ASP.NET Core web server is not required to host the app.
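To give a feel for what “C# instead of JavaScript” looks like, here is the counter component that the default Blazor project template generates — the click handler is plain C#, with no JavaScript involved:

```razor
@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    // Runs on every button click; Blazor re-renders the markup above.
    private void IncrementCount()
    {
        currentCount++;
    }
}
```

Each `.razor` file like this compiles into a .NET class, and the `@code` block holds its C# logic.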

According to Microsoft, Blazor WebAssembly is still in preview and is expected to go live by May 2020. So Blazor WebAssembly is not yet ready for production use and is still under development. If you’re looking for a production solution today, choose Blazor Server, which is what Microsoft recommends.

Blazor Server

Blazor Server provides support for hosting Razor components on the server in an ASP.NET Core app, which means Blazor can run your client logic on the server. UI updates, event handling, and JavaScript calls are handled over a SignalR connection (SignalR is a real-time messaging framework). The download size is smaller than that of a Blazor WebAssembly app because the work is handled on the server, so the app loads much faster. Serverless deployment is not possible, however, because an ASP.NET Core server is required to serve the app.
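The server-side hosting model is wired up in `Startup.cs`. The following is a trimmed sketch of what the .NET Core 3.1 Blazor Server template generates — `AddServerSideBlazor` registers the component services and `MapBlazorHub` exposes the SignalR endpoint the browser connects to:

```csharp
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();
        services.AddServerSideBlazor(); // hosts Razor components over SignalR
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        app.UseStaticFiles();
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapBlazorHub();               // the SignalR connection endpoint
            endpoints.MapFallbackToPage("/_Host");  // the host Razor page
        });
    }
}
```

This is a framework configuration fragment, not a standalone program; the template generates the surrounding `Program.cs` and `_Host.cshtml` for you.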

As Daniel Roth mentioned in the documentation: “We expect to have official previews of support for Blazor PWAs and Blazor Hybrid apps using Electron in the .NET 5 time frame (Nov 2020). There isn’t a road map for Blazor Native support yet.” The following are the planned Blazor editions.

Blazor PWAs ( Progressive Web Apps )

Blazor PWAs (Progressive Web Apps) are web apps that use the latest web standards to provide a more native-like experience. They work in both offline and online scenarios and support push notifications, OS integrations, etc.

Blazor Hybrid

Blazor Hybrid apps don’t run on WebAssembly; instead they use a native .NET runtime such as .NET Core or Xamarin, rendering through a native shell like Electron. They work in both offline and online scenarios.

Blazor Native

Same programming model but rendering non-HTML UI.

Note: The application was tested with the .NET Core 3.1 SDK and Visual Studio 2019 version 16.4, and all the steps depend on these versions.

Prerequisites

  1. Install Visual Studio 2019 16.4 or later with the ASP.NET and web development workload.
  2. Install the .NET Core 3.1 SDK.

1. Create a new project in Visual Studio 2019 (version 16.4). It lists the available project templates; choose the “Blazor App” template.

Create a new project


2. Configure the project name, solution name, and location on our system.

Configure new project

3. Based on the Blazor template selected in Step 1, Visual Studio displays the two available Blazor app types. Select “Blazor Server App” from the list.

Output

We can run the application and see the first output of our Blazor Server App.


Summary

In this article we learned the basics of Microsoft Blazor apps with Visual Studio 2019. I hope this article is useful for Blazor and ASP.NET Core beginners.

C# Corner One Million Readers Club


Thank you, all my readers!!! Thanks for the gift, Atul Gupta, Praveen Kumar, and the C# Corner team. #MVP #CsharpCorner

Getting Started With Angular Routing Using Angular CLI – Part Three


Introduction

In this article we are going to learn the basic routing concepts in Angular 6 using the Angular CLI (Command Line Interface). Before reading this article, you should read the earlier parts of this series for background.

Routing in Application

Every application needs at least a minimal amount of routing. For example, clicking a menu item should redirect to a child page or a different page. How can we do this in Angular? There are a few basic things we need to configure before implementing routing in Angular.

Creating a Routing Module

By using the command-line parameter "--routing" we can create a routing module in an Angular application. Either of the following commands creates a routing module with the default routing setup; here we have named our routing module “app-routing”.

ng generate module app-routing --flat --module=app
ng g m app-routing --flat --module=app

Note :

--flat puts the file in src/app instead of in its own folder.
--module=app tells the CLI to register it in the imports array of the AppModule.

The following code is generated after the above command execution.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

@NgModule({
  imports: [
    CommonModule
  ],
  declarations: []
})
export class AppRoutingModule { }

Import Router in the Application

The Angular Router is an optional service; it is not part of "@angular/core". The router modules are part of the "@angular/router" library package, so we need to import “RouterModule” and “Routes” from "@angular/router".

import { RouterModule , Routes } from '@angular/router';

Add Routes

The Angular docs give a very simple explanation: “Routes” tell the router which view to display when a user clicks a link or pastes a URL into the browser address bar. So the scenario is either a click or a pasted URL!

const routes: Routes = [{}];

We have created an empty routes array in our routing module. Now we need to add a redirect page, a default page, a 404 page, etc. Just type “a-path” inside the “{}” and the editor will display the possible routing options for the routing module.

Now we have added path and component name in the Routes.

const routes: Routes = [{ path: 'customer', component: CustomerComponent }];

We already know the app component is the default launch page of an Angular application, so we need to set up the routing configuration in the app component.

RouterModule.forRoot()

First we initialize and import the router setup, and it starts listening for browser location changes. The routes we defined earlier are passed into forRoot().

@NgModule({
  imports: [
    CommonModule,
    RouterModule.forRoot(routes)
  ],
  exports: [RouterModule],
  declarations: []
})

RouterModule.forChild()

forChild() is used for submodules and lazy-loaded submodules, in the following way.

@NgModule({
  imports: [
    CommonModule,
    RouterModule.forChild(routes)
  ],
  exports: [RouterModule],
  declarations: []
})

Router outlet

RouterOutlet acts as a placeholder and is used like a component. When you place this outlet in the app component, it is dynamically filled based on the current router state.

<router-outlet></router-outlet>

Navigation

We add the navigation in the same app component page: clicking an anchor whose “routerLink” is set to “/customer” — for example <a routerLink="/customer">Customer</a> — redirects to the respective page. We can add more functionality to the anchor tag, such as an active-link style, binding an array of link parameters, etc.

The Router Output

 

Page Not Found – 404 Error !!

If a user tries to access a page that is not part of the routing configuration, we need to display an error page, commonly called “Page Not Found”. Either of the following commands creates a “Page Not Found” component with an inline template.

ng generate component PageNotFound --inline-template
ng g c PageNotFound -t

We have modified the PageNotFound typescript file.

If you want to add bootstrap style in the application then import the following reference link in the style.css.

@import url('https://unpkg.com/bootstrap@3.3.7/dist/css/bootstrap.min.css');

We updated the routes so that “PageNotFoundComponent” is mapped to the path “**” in the last route; “**” is a wildcard that the router selects when the requested URL doesn’t match any of the paths defined in the configuration. This lets us display a “404 – Not Found” page or redirect to another route.

const routes: Routes = [{ path: '', component: EmployeeComponent },
{ path: 'customer', component: CustomerComponent },
{ path: '**', component: PageNotFoundComponent }];

Output


Summary

In this article we learned the basic routing concepts of Angular using the Angular CLI. I hope this article is useful for Angular CLI beginners.

Getting started with Angular 6 using Angular CLI – Part 1


Introduction

Today we are going to learn about the most popular single-page application framework, Angular, using the Angular CLI (Command Line Interface). With Angular we can build modern real-world applications for the web, mobile, or desktop. Previously we learned about another well-known single-page application framework, Aurelia.

Prerequisites

  • Download and install Node.js (Angular requires Node.js version 8.x or 10.x).
  • Download and install VS Code (a great open-source editor for developing Angular applications).
  • Download and install Git (not mandatory, but a best practice).

Angular CLI Installation

We can install the Angular Command Line Interface after installing Node.js in our environment. The CLI helps us create projects, generate application and library code, and perform a variety of ongoing development tasks such as testing, bundling, and deployment. It reduces development time because the CLI registers and generates files automatically at the start. This is also why version control is useful for a new project: it identifies which files changed and what kind of changes were made, which is especially helpful for beginners in Angular CLI development. Install the Angular CLI globally with the following command.

npm install -g @angular/cli

Create a Project

Now we are going to create a simple project repository in our environment. I created the repository in a GitHub account and cloned it to my machine. A best practice for an Angular CLI project (or any application) is to configure version control in VS Code; otherwise we blindly create and check in code to the project repository. That said, we can continue without configuring version control.

Go to the repository, or wherever we plan to create the project, open a command prompt, and run the following command.

ng new angular-app

E.g. “ng new [ProjectName]” — our project name is “angular-app”.

Angular CLI created “angular-app” in our repository.

Install Angular Essential Extension

Click on the Visual Studio Code Extensions menu in the left sidebar, then search for “Angular Essentials” (John Papa) in the search box. Once we install Angular Essentials, it installs all the other supporting packages automatically. It also generates different icons for folders, .ts files, styles, JSON files, etc.

Angular Build

We have generated a starter application in our repository. The next step is to build the application; for that, open a terminal in Visual Studio Code.

  1. Click on the Visual Studio “Terminal” menu at the top of the menu list.
  2. The “Terminal” menu displays a list of options; just click “New Terminal ( Ctrl + Shift + ~ )”.

There is one more shortcut for opening the terminal in VS Code ( Ctrl + ~ ). The terminal is displayed at the bottom of VS Code.

Now we need to build the application, and for that we must be in the application’s root directory. When you open the terminal in VS Code it may show the repository path, so change to the application path as follows.

The build artifacts are stored in the dist/ directory of the application. Now run the Angular CLI build command.

ng build

If you get the following error, it means we need to install the “@angular-devkit/build-angular” dev dependency. This package was newly introduced in Angular 6.0.

The following command installs the dev-kit dependency in our application.

npm install --save-dev @angular-devkit/build-angular

If you still face the same issue after installing the dev kit, uninstall and reinstall the Angular CLI.

App Component

Components are the basic UI building blocks of an Angular app. Here we can see an “app” component generated under the “src -> app” folder of the “angular-app” application. The Angular CLI auto-generates all the files relevant to a basic application; for example, in the following screenshot the app folder contains auto-generated .css, .spec, .ts, and module files.

Angular Serve

Now that the build has succeeded, our application is ready to serve. Run either of the following commands (the second is an alias, or short command) to open the application in a browser.

ng serve --open
ng s -o

Or, if we don’t want the application to open in a browser automatically, just run the following command and navigate to "http://localhost:4200/".

ng serve

Bundling the application

We can bundle the application using either of the following commands; the “--prod” flag bundles for production.

ng build --prod
ng serve --prod

For more build options, refer to the Angular CLI documentation.

Changing the default port number

By default the application is served at "http://localhost:4200/". If you want to open the application on a different port, that is possible; just run either of the following commands.

ng s --port 3000 --open
ng s --port 3000 -o

Output :

As mentioned, the default port has been changed from “4200” to “3000”.


Summary

In this article we learned the basic configuration of Angular 6 using the Angular CLI and a few basic CLI commands. I hope this article is useful for Angular CLI beginners.

Cognitive Services – Optical Character Recognition (OCR) from an image using Computer Vision API And C#


Introduction

In our previous article we learned how to analyze an image using the Computer Vision API with ASP.NET Core and C#. In this article we are going to learn how to extract printed text, also known as optical character recognition (OCR), from an image using the Computer Vision API, one of the important Cognitive Services APIs. We need a valid subscription key to access this feature.

Optical Character Recognition (OCR)

Optical Character Recognition (OCR) detects text in an image and extracts the recognized characters into a machine-usable character stream.

Prerequisites

  1. Subscription key ( Azure Portal ).
  2. Visual Studio 2015 or 2017

Subscription Key Free Trial

The Computer Vision API requires a valid subscription key for processing image information. If you don’t have a Microsoft Azure subscription and want to test the API, don’t worry! Microsoft provides a 7-day trial subscription key (click here), which we can use for testing purposes. If you sign up using the Computer Vision free trial, your subscription keys are valid for the westcentralus region (https://westcentralus.api.cognitive.microsoft.com).

Requirements

These are the major requirements mentioned in the Microsoft docs.

  1. Supported input methods: Raw image binary in the form of an application/octet stream or image URL.
  2. Supported image formats: JPEG, PNG, GIF, BMP.
  3. Image file size: Less than 4 MB.
  4. Image dimension: Greater than 50 x 50 pixels.
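The format and size limits above can be checked before calling the API. The following is a small pre-flight sketch; the class and method names (`OcrPreflight`, `IsValidForOcr`) are our own for illustration, not part of any SDK, and the pixel-dimension check is omitted because it would require decoding the image:

```csharp
using System;
using System.IO;
using System.Linq;

class OcrPreflight
{
    // The supported formats from the requirements list.
    static readonly string[] AllowedExtensions = { ".jpg", ".jpeg", ".png", ".gif", ".bmp" };

    // Checks the documented limits: supported format and file size under 4 MB.
    public static bool IsValidForOcr(string imageFilePath)
    {
        var info = new FileInfo(imageFilePath);
        bool formatOk = AllowedExtensions.Contains(info.Extension.ToLowerInvariant());
        bool sizeOk = info.Length < 4 * 1024 * 1024; // less than 4 MB
        return formatOk && sizeOk;
    }

    static void Main()
    {
        // Quick demonstration with a throwaway file.
        string sample = Path.Combine(Path.GetTempPath(), "sample.png");
        File.WriteAllBytes(sample, new byte[16]);
        Console.WriteLine(IsValidForOcr(sample)); // prints "True"
    }
}
```

Running a check like this locally avoids a round trip to the service for an image it would reject anyway.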

Computer Vision API

First, we need to log into the Azure Portal with our Azure credentials. Then we need to create an Azure Computer Vision Subscription Key in the Azure portal.

Click on “Create a resource” on the left side menu and it will open an “Azure Marketplace”. There, we can see the list of services. Click “AI + Machine Learning” then click on the “Computer Vision”.

Provision a Computer Vision Subscription Key

After clicking the “Computer Vision”, It will open another section. There, we need to provide the basic information about Computer Vision API.

Name : Name of the Computer Vision API ( Eg. OCRApp ).

Subscription : We can select our Azure subscription for Computer Vision API creation.

Location : We can select the location of our resource group. The best practice is to choose a location closest to our customers.

Pricing tier : Select an appropriate pricing tier for our requirement.

Resource group : We can create a new resource group or choose from an existing one.

Now click on “OCRApp” in the dashboard page and it redirects to the OCRApp details page (“Overview”). Here we can see the Manage Keys (subscription key details) and endpoint details. Click on the “Show access keys” link and it redirects to another page.

We can use any of the subscription keys or regenerate the given key for getting image information using Computer Vision API.

 

Endpoint

As mentioned above, the location is the same for all free trial subscription keys. In Azure we can choose from the available locations while creating a Computer Vision API resource. We have used the following endpoint in our code.

https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr

View Model

The following model will contain the API image response information.

using System.Collections.Generic;

namespace OCRApp.Models
{
    public class Word
    {
        public string boundingBox { get; set; }
        public string text { get; set; }
    }

    public class Line
    {
        public string boundingBox { get; set; }
        public List<Word> words { get; set; }
    }

    public class Region
    {
        public string boundingBox { get; set; }
        public List<Line> lines { get; set; }
    }

    public class ImageInfoViewModel
    {
        public string language { get; set; }
        public string orientation { get; set; }
        public int textAngle { get; set; }
        public List<Region> regions { get; set; }
    }
}

Request URL

We can add optional request parameters to our API endpoint, and they provide more information about the given image.

https://[location].api.cognitive.microsoft.com/vision/v1.0/ocr[?language][&detectOrientation ]

Request parameters

The following optional parameters are available in the Computer Vision API.

  1. language
  2. detectOrientation

language

The service can detect the languages listed below in the text of an image. The default value is “unk”, which means the service will auto-detect the language of the text in the image.

The following are the supported languages mentioned in the Microsoft API documentation.

  1. unk (AutoDetect)
  2. en (English)
  3. zh-Hans (ChineseSimplified)
  4. zh-Hant (ChineseTraditional)
  5. cs (Czech)
  6. da (Danish)
  7. nl (Dutch)
  8. fi (Finnish)
  9. fr (French)
  10. de (German)
  11. el (Greek)
  12. hu (Hungarian)
  13. it (Italian)
  14. ja (Japanese)
  15. ko (Korean)
  16. nb (Norwegian)
  17. pl (Polish)
  18. pt (Portuguese)
  19. ru (Russian)
  20. es (Spanish)
  21. sv (Swedish)
  22. tr (Turkish)
  23. ar (Arabic)
  24. ro (Romanian)
  25. sr-Cyrl (SerbianCyrillic)
  26. sr-Latn (SerbianLatin)
  27. sk (Slovak)

detectOrientation

This detects the text orientation in the image. To use this feature we add detectOrientation=true to the service URL, or request URL, as discussed earlier.

Vision API Service

The following code calls the Computer Vision API to process the image, and the response is mapped into the “ImageInfoViewModel”. Add a valid Computer Vision API subscription key to the code.

using Newtonsoft.Json;
using OCRApp.Models;
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

namespace OCRApp.Business_Layer
{
    public class VisionApiService
    {
        // Replace <Subscription Key> with your valid subscription key.
        const string subscriptionKey = "<Subscription Key>";

        // You must use the same region in your REST call as you used to
        // get your subscription keys. The paid subscription keys you will get
        // it from microsoft azure portal.
        // Free trial subscription keys are generated in the westcentralus region.
        // If you use a free trial subscription key, you shouldn't need to change
        // this region.
        const string endPoint =
            "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr";

        /// <summary>
        /// Gets the text visible in the specified image file by using
        /// the Computer Vision REST API.
        /// </summary>

        public async Task<string> MakeOCRRequest()
        {
            string imageFilePath = @"C:\Users\rajeesh.raveendran\Desktop\bill.jpg";
            var errors = new List<string>();
            string extractedResult = "";
            ImageInfoViewModel responeData = new ImageInfoViewModel();

            try
            {
                HttpClient client = new HttpClient();

                // Request headers.
                client.DefaultRequestHeaders.Add(
                    "Ocp-Apim-Subscription-Key", subscriptionKey);

                // Request parameters.
                string requestParameters = "language=unk&detectOrientation=true";

                // Assemble the URI for the REST API Call.
                string uri = endPoint + "?" + requestParameters;

                HttpResponseMessage response;

                // Request body. Posts a locally stored JPEG image.
                byte[] byteData = GetImageAsByteArray(imageFilePath);

                using (ByteArrayContent content = new ByteArrayContent(byteData))
                {
                    // This example uses content type "application/octet-stream".
                    // The other content types you can use are "application/json"
                    // and "multipart/form-data".
                    content.Headers.ContentType =
                        new MediaTypeHeaderValue("application/octet-stream");

                    // Make the REST API call.
                    response = await client.PostAsync(uri, content);
                }

                // Get the JSON response.
                string result = await response.Content.ReadAsStringAsync();

                //If it is success it will execute further process.
                if (response.IsSuccessStatusCode)
                {
                    // The JSON response mapped into respective view model.
                    responeData = JsonConvert.DeserializeObject<ImageInfoViewModel>(result,
                        new JsonSerializerSettings
                        {
                            NullValueHandling = NullValueHandling.Include,
                            Error = delegate (object sender, Newtonsoft.Json.Serialization.ErrorEventArgs earg)
                            {
                                errors.Add(earg.ErrorContext.Member.ToString());
                                earg.ErrorContext.Handled = true;
                            }
                        }
                    );

                    // Guard against an empty response before indexing into regions.
                    if (responeData.regions != null && responeData.regions.Count > 0)
                    {
                        var linesCount = responeData.regions[0].lines.Count;
                        for (int i = 0; i < linesCount; i++)
                        {
                            var wordsCount = responeData.regions[0].lines[i].words.Count;
                            for (int j = 0; j < wordsCount; j++)
                            {
                                // Append each word of the line, separated by spaces.
                                extractedResult += responeData.regions[0].lines[i].words[j].text + " ";
                            }
                            extractedResult += Environment.NewLine;
                        }
                    }

                }
            }
            catch (Exception e)
            {
                Console.WriteLine("\n" + e.Message);
            }
            return extractedResult;
        }

        /// <summary>
        /// Returns the contents of the specified file as a byte array.
        /// </summary>
        /// <param name="imageFilePath">The image file to read.</param>
        /// <returns>The byte array of the image data.</returns>
        static byte[] GetImageAsByteArray(string imageFilePath)
        {
            using (FileStream fileStream =
                new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            {
                BinaryReader binaryReader = new BinaryReader(fileStream);
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
        }
    }

}
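A minimal, hypothetical usage of the service from a console entry point might look like the following. `VisionApiService` and `MakeOCRRequest` come from the code above; the `Program` class is our own sketch, and running it requires a valid subscription key and a real local image path:

```csharp
using System;
using System.Threading.Tasks;
using OCRApp.Business_Layer;

class Program
{
    static async Task Main()
    {
        // Assumes the subscription key and image path inside
        // VisionApiService have been filled in with real values.
        var service = new VisionApiService();
        string extractedText = await service.MakeOCRRequest();
        Console.WriteLine(extractedText);
    }
}
```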

API Response – Based on the given Image

A successful JSON response for the given image looks like the following.

{
  "language": "en",
  "orientation": "Up",
  "textAngle": 0,
  "regions": [
    {
      "boundingBox": "306,69,292,206",
      "lines": [
        {
          "boundingBox": "306,69,292,24",
          "words": [
            {
              "boundingBox": "306,69,17,19",
              "text": "\"I"
            },
            {
              "boundingBox": "332,69,45,19",
              "text": "Will"
            },
            {
              "boundingBox": "385,69,88,24",
              "text": "Always"
            },
            {
              "boundingBox": "482,69,94,19",
              "text": "Choose"
            },
            {
              "boundingBox": "585,74,13,14",
              "text": "a"
            }
          ]
        },
        {
          "boundingBox": "329,100,246,24",
          "words": [
            {
              "boundingBox": "329,100,56,24",
              "text": "Lazy"
            },
            {
              "boundingBox": "394,100,85,19",
              "text": "Person"
            },
            {
              "boundingBox": "488,100,24,19",
              "text": "to"
            },
            {
              "boundingBox": "521,100,32,19",
              "text": "Do"
            },
            {
              "boundingBox": "562,105,13,14",
              "text": "a"
            }
          ]
        },
        {
          "boundingBox": "310,131,284,19",
          "words": [
            {
              "boundingBox": "310,131,95,19",
              "text": "Difficult"
            },
            {
              "boundingBox": "412,131,182,19",
              "text": "Job....Because"
            }
          ]
        },
        {
          "boundingBox": "326,162,252,24",
          "words": [
            {
              "boundingBox": "326,162,31,19",
              "text": "He"
            },
            {
              "boundingBox": "365,162,44,19",
              "text": "Will"
            },
            {
              "boundingBox": "420,162,52,19",
              "text": "Find"
            },
            {
              "boundingBox": "481,167,28,14",
              "text": "an"
            },
            {
              "boundingBox": "520,162,58,24",
              "text": "Easy"
            }
          ]
        },
        {
          "boundingBox": "366,193,170,24",
          "words": [
            {
              "boundingBox": "366,193,52,24",
              "text": "way"
            },
            {
              "boundingBox": "426,193,24,19",
              "text": "to"
            },
            {
              "boundingBox": "459,193,33,19",
              "text": "Do"
            },
            {
              "boundingBox": "501,193,35,19",
              "text": "It!\""
            }
          ]
        },
        {
          "boundingBox": "462,256,117,19",
          "words": [
            {
              "boundingBox": "462,256,37,19",
              "text": "Bill"
            },
            {
              "boundingBox": "509,256,70,19",
              "text": "Gates"
            }
          ]
        }
      ]
    }
  ]
}


Output

Optical Character Recognition (OCR) from an image using Computer Vision API.


See Also

You can download other ASP.NET Core source code samples from MSDN Code.

Summary

In this article we learned optical character recognition (OCR) from an image using one of the important Cognitive Services APIs, the Computer Vision API. I hope this article is useful for Azure Cognitive Services beginners.

 

C# Corner MVP 2018 !! 3 years of glory !! Hatrick 🏆


A hat-trick!! Yes, three in a row. It is a great privilege for my professional career and a great motivation for the years ahead. I believe that each award I receive will one day mark my identity in the industry. I would like to extend my gratitude to my family and friends. Tadit, Sayed, Priyan, Ronen, the C# Corner team, and others are my greatest inspiration to write articles for the technical community.

Thank you for all support 🙏

Three years of glory : 2015-16 , 2016-17 , 2017-18

Thank you Stratis for your awesome Tshirt & Power bank.

Fabulous Tshirts…!!

Cognitive Services : Analyze an Image Using Computer Vision API With ASP.Net Core & C#


Introduction

The Computer Vision API is one of the important Cognitive Services APIs, and it provides access to advanced algorithms for processing images and returning valuable information. For example, by uploading an image or specifying an image URL, Microsoft Computer Vision algorithms can analyze visual content in different ways based on the inputs and user choices, so we get various information about the given image. We need a valid subscription key to access this feature.

Prerequisites

  1. Subscription key ( Azure Portal ).
  2. Visual Studio 2015 or 2017

Subscription Key Free Trial

The Computer Vision API requires a valid subscription key for processing image information. If you don’t have a Microsoft Azure subscription and want to test the API, don’t worry! Microsoft provides a 7-day trial subscription key (click here), which we can use for testing purposes. If you sign up using the Computer Vision free trial, your subscription keys are valid for the westcentralus region (https://westcentralus.api.cognitive.microsoft.com).

Requirements

These are the major requirements mentioned in the Microsoft docs.

  1. Supported input methods: Raw image binary in the form of an application/octet stream or image URL.
  2. Supported image formats: JPEG, PNG, GIF, BMP.
  3. Image file size: Less than 4 MB.
  4. Image dimension: Greater than 50 x 50 pixels.

Computer Vision API

First, we need to log into the Azure Portal with our Azure credentials. Then we need to create an Azure Computer Vision Subscription Key in the Azure portal.

Click on “Create a resource” on the left side menu and it will open an “Azure Marketplace”. There, we can see the list of services. Click “AI + Machine Learning” then click on the “Computer Vision”.

Provision a Computer Vision Subscription Key

After clicking the “Computer Vision”, it will open another section. There, we need to provide the basic information about Computer Vision API.

Name : Name of the Computer Vision API.

Subscription : We can select our Azure subscription for the Computer Vision API creation.

Location : We can select the location of the resource group. Ideally, choose a location closest to our customers.

Pricing tier : Select a pricing tier appropriate for our requirement.

Resource group : We can create a new resource group or choose an existing one.

Now click on "MenothVision" in the dashboard page and it will redirect to the details page of MenothVision ( "Overview" ). Here, we can see the Manage keys ( subscription key details ) & Endpoint details. Click on the "Show access keys" link and it will redirect to another page.

We can use either of the subscription keys, or regenerate them, for getting image information using the Computer Vision API.

Endpoint

As mentioned above, the location is the same for all free trial subscription keys. In Azure, we can choose from the available locations while creating a Computer Vision API. We have used the following endpoint in our code.

https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze

View Model

The following model will contain the API image response information.

using System.Collections.Generic;

namespace VisionApiDemo.Models
{
    public class Detail
    {
        public List<object> celebrities { get; set; }
    }

    public class Category
    {
        public string name { get; set; }
        public double score { get; set; }
        public Detail detail { get; set; }
    }

    public class Caption
    {
        public string text { get; set; }
        public double confidence { get; set; }
    }

    public class Description
    {
        public List<string> tags { get; set; }
        public List<Caption> captions { get; set; }
    }

    public class Color
    {
        public string dominantColorForeground { get; set; }
        public string dominantColorBackground { get; set; }
        public List<string> dominantColors { get; set; }
        public string accentColor { get; set; }
        public bool isBwImg { get; set; }
    }

    public class Metadata
    {
        public int height { get; set; }
        public int width { get; set; }
        public string format { get; set; }
    }

    public class ImageInfoViewModel
    {
        public List<Category> categories { get; set; }
        public Description description { get; set; }
        public Color color { get; set; }
        public string requestId { get; set; }
        public Metadata metadata { get; set; }
    }
}

Request URL

We can add optional request parameters to our API "endPoint", and they will provide more information about the given image.

https://[location].api.cognitive.microsoft.com/vision/v1.0/analyze[?visualFeatures][&details][&language]

Request parameters

Currently, we can use three optional parameters.

  1. visualFeatures
  2. details
  3. language

visualFeatures

As the name indicates, this parameter returns the visual features of the given image. If we add multiple values to the visualFeatures parameter, we must separate them with commas. The following are the visualFeatures parameter values in the API.

  • Categories
  • Tags
  • Description
  • Faces
  • ImageType
  • Color
  • Adult

details

This parameter returns domain-specific information, either for celebrities or for landmarks.

Celebrities : If the detected image is of a celebrity, it identifies the celebrity.

Landmarks : If the detected image is of a landmark, it identifies the landmark.

language

The service returns recognition results in the specified language. The default language is English.

Supported languages:

  • en – English, Default.
  • zh – Simplified Chinese
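
Putting the three optional parameters together, the full request URL can be assembled like this ( the parameter values shown are only an illustration ):

```csharp
// The three optional parameters combined into one query string.
string endPoint = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze";

string requestParameters = "visualFeatures=Categories,Description,Color" // comma-separated values
                         + "&details=Celebrities"
                         + "&language=en";

string uri = endPoint + "?" + requestParameters;
// uri: https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Categories,Description,Color&details=Celebrities&language=en
```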

Vision API Service

The following code processes and generates image information using the Computer Vision API, and its response is mapped into the "ImageInfoViewModel". Add your valid Computer Vision API subscription key into the following code.

using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using VisionApiDemo.Models;

namespace VisionApiDemo.Business_Layer
{
    public class VisionApiService
    {
        const string subscriptionKey = "<Enter your subscriptionKey>";
        const string endPoint =
            "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze";

        public async Task<ImageInfoViewModel> MakeAnalysisRequest()
        {
            string imageFilePath = @"C:\Users\Rajeesh.raveendran\Desktop\Rajeesh.jpg";
            var errors = new List<string>();
            ImageInfoViewModel responseData = new ImageInfoViewModel();
            try
            {
                HttpClient client = new HttpClient();

                // Request headers.
                client.DefaultRequestHeaders.Add(
                    "Ocp-Apim-Subscription-Key", subscriptionKey);

                // Request parameters. A third optional parameter is "details".
                string requestParameters =
                    "visualFeatures=Categories,Description,Color";

                // Assemble the URI for the REST API call.
                string uri = endPoint + "?" + requestParameters;

                HttpResponseMessage response;

                // Request body. Posts a locally stored JPEG image.
                byte[] byteData = GetImageAsByteArray(imageFilePath);

                using (ByteArrayContent content = new ByteArrayContent(byteData))
                {
                    // This example uses content type "application/octet-stream".
                    // The other content types you can use are "application/json"
                    // and "multipart/form-data".
                    content.Headers.ContentType =
                        new MediaTypeHeaderValue("application/octet-stream");

                    // Make the REST API call.
                    response = await client.PostAsync(uri, content);
                }

                // Get the JSON response.
                var result = await response.Content.ReadAsStringAsync();

                if (response.IsSuccessStatusCode)
                {
                    responseData = JsonConvert.DeserializeObject<ImageInfoViewModel>(result,
                        new JsonSerializerSettings
                        {
                            NullValueHandling = NullValueHandling.Include,
                            Error = delegate (object sender, Newtonsoft.Json.Serialization.ErrorEventArgs earg)
                            {
                                errors.Add(earg.ErrorContext.Member.ToString());
                                earg.ErrorContext.Handled = true;
                            }
                        });
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("\n" + e.Message);
            }

            return responseData;
        }

        static byte[] GetImageAsByteArray(string imageFilePath)
        {
            using (FileStream fileStream =
                new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            {
                BinaryReader binaryReader = new BinaryReader(fileStream);
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
        }
    }
}
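
The service can then be called from a console entry point, as in the following sketch ( async `Main` requires C# 7.1 or later; the `Program` class is an assumed host, and the property names match the "ImageInfoViewModel" above ):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using VisionApiDemo.Business_Layer;

class Program
{
    static async Task Main()
    {
        var service = new VisionApiService();
        var imageInfo = await service.MakeAnalysisRequest();

        // Print the first caption and its confidence, if the call succeeded.
        var caption = imageInfo?.description?.captions?.FirstOrDefault();
        if (caption != null)
            Console.WriteLine($"{caption.text} ({caption.confidence:P1})");
    }
}
```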

API Response – Based on the given Image

The successful JSON response:

{
  "categories": [
    {
      "name": "people_group",
      "score": 0.6171875,
      "detail": { "celebrities": [] }
    },
    {
      "name": "people_many",
      "score": 0.359375,
      "detail": { "celebrities": [] }
    }
  ],
  "description": {
    "tags": [
      "person", "sitting", "indoor", "posing", "group", "people", "man",
      "photo", "woman", "child", "front", "young", "table", "cake",
      "large", "holding", "standing", "bench", "room", "blue"
    ],
    "captions": [
      {
        "text": "a group of people sitting posing for the camera",
        "confidence": 0.9833507086594954
      }
    ]
  },
  "color": {
    "dominantColorForeground": "White",
    "dominantColorBackground": "White",
    "dominantColors": [ "White", "Black", "Red" ],
    "accentColor": "AD1E3E",
    "isBwImg": false
  },
  "requestId": "89f21ccf-cb65-4107-8620-b920a03e5f03",
  "metadata": {
    "height": 346,
    "width": 530,
    "format": "Jpeg"
  }
}

Download

Output

Image information captured using the Computer Vision API. For demo purposes, I have shown only a few data points, even though you can get much more information about the image.

Reference

See Also

You can download other ASP.NET Core source codes from MSDN Code, using the link, mentioned below.

Summary

In this article, we learned how to implement one of the most important Cognitive Services APIs, the Computer Vision API. I hope this article is useful for all Azure Cognitive Services API beginners.

Code First Migration – ASP.NET MVC 5 With EntityFrameWork & MySql


Introduction

We know how to use Code First Migration with SQL Server. But in many cases, a customer will ask whether we can use it with an open source database. That's the reason we picked the "MySQL" database; we can then follow the same steps we follow with the "SQL Server" database. In this article, we are going to explain Code First Migration in ASP.NET MVC 5 with Entity Framework & MySQL.

Prerequisites

  1. MySQL Installer
  2. Download MySQL Workbench
  3. Visual Studio ( We are using Visual Studio 2017 Community Edition ).

Create a Web Application using MVC 5

Click on File -> New -> Project -> Visual C# -> Web -> ASP.Net Web Application ( .Net Framework ).

Click on “OK” then click on “MVC”.

Install EntityFramework & MySql Entity

Go to Visual Studio "Tools -> NuGet Package Manager -> Manage NuGet Packages for Solution", or right-click on your web application and then click on "Manage NuGet Packages".

EntityFramework

Search for EntityFramework in the "Browse" section.

MySql.Data.Entity

Search for MySql.Data.Entity in the "Browse" section.

Once we have installed EntityFramework & the MySQL entity provider in our application, the SQL Server and MySQL providers are registered inside the entityFramework section of Web.config.

<entityFramework>
  <defaultConnectionFactory type="System.Data.Entity.Infrastructure.SqlConnectionFactory, EntityFramework" />
  <providers>
    <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
    <provider invariantName="MySql.Data.MySqlClient" type="MySql.Data.MySqlClient.MySqlProviderServices, MySql.Data.Entity.EF6, Version=6.8.8.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d"></provider>
  </providers>
</entityFramework>

Model Class

We just created a sample model class for demo purposes.

namespace WebAppWithMySql.Models
{
    public class Student
    {
        public int Id { get; set; }

        public string Name { get; set; }

        public string Password { get; set; }
    }
}

Creation of DBContext

Create a DbContext class in our application. The following DbContext points to our connection string in Web.config.

using MySql.Data.Entity;
using System.Data.Entity;
using WebAppWithMySql.Models;

namespace WebAppWithMySql
{
    [DbConfigurationType(typeof(MySqlEFConfiguration))]
    public class WebAppContext : DbContext
    {
        public DbSet<Student> Students { get; set; }

        public WebAppContext()
            // Reference the name of your connection string ( WebAppCon ).
            : base("WebAppCon") { }
    }
}

Connection String

We added the same connection string name that we used in the DbContext class. The following connection string points to the "MySQL" database.

<connectionStrings>
  <add name="WebAppCon" providerName="MySql.Data.MySqlClient" connectionString="server=localhost;userid=root;password=rajeesh123;database=WebAppMySql;persistsecurityinfo=True" />
</connectionStrings>
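
With the provider and connection string in place, a quick way to verify the setup is to force Entity Framework to open a MySQL connection at startup. A small sketch, assuming the `WebAppContext` class defined above ( `Database.Exists` is part of EF6 ):

```csharp
using (var db = new WebAppContext())
{
    // Opens a connection to the MySQL server; returns false if the
    // database has not been created yet (the first migration creates it).
    bool exists = db.Database.Exists();
    System.Diagnostics.Debug.WriteLine($"Database exists: {exists}");
}
```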

Migration Steps

  1. Enable-Migrations – We need to enable migrations; only then can we do the EF Code First Migration.
  2. Add-Migration InitialDb – Add a migration name and run the command.
  3. Update-Database -Verbose – If it is successful, then we can see the message "Running Seed method".

Once the migration is done, we can see the respective files auto-generated under the "Migrations" folder.
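
For reference, the scaffolded migration for the Student model typically looks like the following sketch ( the migration is named `InitialDb` here for illustration; the real file name carries a timestamp prefix, and the exact column options can differ with the MySQL provider ):

```csharp
namespace WebAppWithMySql.Migrations
{
    using System.Data.Entity.Migrations;

    public partial class InitialDb : DbMigration
    {
        public override void Up()
        {
            // Creates the Students table from the Student model.
            CreateTable(
                "dbo.Students",
                c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    Name = c.String(),
                    Password = c.String(),
                })
                .PrimaryKey(t => t.Id);
        }

        public override void Down()
        {
            // Reverts the migration.
            DropTable("dbo.Students");
        }
    }
}
```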

Output

See Also

You can download other ASP.NET Core source codes from MSDN Code, using the link, mentioned below.

Summary

In this article, we explained Code First Migration in ASP.NET MVC 5 with Entity Framework & MySQL. I hope this article is useful for all beginners.

Channel Configuration : Azure Bot Service to Slack Application


Introduction

This article explains how to configure Azure Bot Service for Slack applications. Before reading this article, please read our previous article on how to create and connect a chat bot with Azure Bot Service; it gives a clear idea of how to create a bot service in Azure.

Create a Web App Bot in Azure

Click on "New" in the left side menu and it will open the Azure Marketplace. There, we can see the list of services. Click "AI + Cognitive Services", then click on "Web App Bot" for your bot service app.

Bot Service

Fill in the following details and choose the location based on your client's location or your geolocation.

Once the deployment succeeds, click on the "Dashboard" and we can see that the "menothbotdemo" bot has been created in the "All resources" list. The bot is ready for use!

Create a Slack Application for our bot

First, we need to create a workspace in a Slack account. Check the following link to create a Slack account: New Slack account

Create an app and assign a Development Slack team or Slack Workspace

  1. Open the URL https://api.slack.com/apps. Then, click on "Create New App".

Only once the Slack workspace is created can we create a Slack application under the workspace. Now, we are going to create our Slack app and assign it to the workspace. We have given our app the name "menothbotdemo".

Click on the "Create App" button. Then, Slack creates our app and generates a Client ID and a Client Secret. We can use these credentials for the channel configuration in the Azure Web App Bot.

Add a new Redirect URL

Click on the "OAuth & Permissions" tab in the left panel. Then, add "https://slack.botframework.com" as a redirect URL and save it.

Create Bot Users

Click on the "Bot Users" tab in the left panel. Then, click on "Add a Bot User". In this section, we can give our bot a "Display name"; for example, we created our bot user's name as "menothbotdemo". If we want our bot to always show as Online, click the "On" toggle. After that, click the "Add Bot User" button.

Event Subscriptions

  1. Select “Event Subscriptions” tab in the left panel.
  2. Click Enable Events to On.
  3. In the "Request URL" field, add the following URL with our "Bot Handle Name" substituted in.

https://slack.botframework.com/api/Events/{bot handle name}

We can find the "Bot Handle" name inside the settings of the Web App Bot ( we created our web app as "menothbotdemo" ).

Finally, we can add the Request URL inside the Event Subscriptions.

4. In Subscribe to Bot Events, click "Add Bot User Event".

5. In the list of events, select the required event name.

Subscribe to Bot Events

6. Click “Save Changes”.

Configure Interactive Messages ( Optional )

  1. Select the “Interactive Components” tab and click “Enable Interactive Components”.
  2. Enter https://slack.botframework.com/api/Actions as the request URL.
  3. Click the “Enable Interactive Messages” button, and then click the “Save Changes” button.

App Credentials

Select the "Basic Information" tab; there we will get the Client ID, Client Secret, and Verification Token for our channel configuration in Azure Bot Service.

Channel Configuration

There is a very simple way to connect our bot service app to Slack in Azure. Just follow these steps.

Click on the "Channels" menu in the left side options. It will open a window with channel details where you can see the "More channels" option. Select "Slack" from the channels list.

Add the following Slack App ( Already Created Slack App ) credentials into the Azure Slack configuration section.

  • ClientID
  • Client Secret
  • Verification Token

Once the configuration is done, we can see our Slack configured into the channel.

C# Code

We have made some changes to the default code in the bot service.

using System;
using System.Threading.Tasks;

using Microsoft.Bot.Connector;
using Microsoft.Bot.Builder.Dialogs;
using System.Net.Http;

namespace Microsoft.Bot.Sample.SimpleEchoBot
{
    [Serializable]
    public class EchoDialog : IDialog<object>
    {
        protected int count = 1;

        public async Task StartAsync(IDialogContext context)
        {
            context.Wait(MessageReceivedAsync);
        }

        public async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> argument)
        {
            var message = await argument;

            if (message.Text == "reset")
            {
                PromptDialog.Confirm(
                    context,
                    AfterResetAsync,
                    "Are you sure you want to reset the count?",
                    "Didn't get that!",
                    promptStyle: PromptStyle.Auto);
            }
            else if (message.Text == "Hi")
            {
                await context.PostAsync($"{this.count++}: Slack Configured in Bot App !!");
                context.Wait(MessageReceivedAsync);
            }
            else
            {
                await context.PostAsync($"{this.count++}: You said {message.Text}");
                context.Wait(MessageReceivedAsync);
            }
        }

        public async Task AfterResetAsync(IDialogContext context, IAwaitable<bool> argument)
        {
            var confirm = await argument;
            if (confirm)
            {
                this.count = 1;
                await context.PostAsync("Reset count.");
            }
            else
            {
                await context.PostAsync("Did not reset count.");
            }
            context.Wait(MessageReceivedAsync);
        }
    }
}

Output

Reference

See Also

You can download other ASP.NET Core source codes from MSDN Code, using the link, mentioned below.

Summary

We learned how to configure Azure Bot Service for a Slack application. I hope this article is useful for all Azure beginners.

Create & Deploy an ASP.NET Core web app in Azure


Introduction

Nowadays, most people choose to host web applications on a cloud platform. Microsoft lovers like us naturally select "Microsoft Azure" as the hosting environment. That's the reason I have written this new article describing a simple way of hosting in Azure.

Before reading this article, you must read the articles given below for ASP.NET Core knowledge.

Azure Account 

First, we need to create an account on the Azure portal. Only then can we host the application in the cloud environment. So, please check the following steps to create an Azure account.

Azure Account Registration

Create an account through this link to Azure Portal.

Domain Registration

We need to host our application in a particular domain. Check the following steps –

  1. Click on "All resources" in the left side menu and it will open a dashboard with an empty list or the list of resources that we created earlier.
  2. Click on the "Add" button and it will open another window with multiple options. We can choose an appropriate option to host our application.
  3. As per our requirement, we chose "Web + Mobile" and clicked on "Web App" on the right side.
App Name Creation

4. It will open another form to fill in our app details for hosting. We need to give a unique name in the "App name" section, and it will create a subdomain for our ASP.NET Core application.

5. We chose the "Free Trial" subscription because we created a free account on the Azure portal.

6. We need to host our app resources in a resource group, so first we need a resource group name in our Azure account. Here we chose the existing resource group "AzureDemo" that we had already created in our Azure account.

7. For the "OS ( Operating System )", we selected "Windows" ( as per our requirement ).

8. We can create our own App Service Plan name.

9. Application Insights will give you more clarity about your hosted app, e.g. analytics, etc.

10. Click on the "Create" button and wait for the deployment to succeed.

Resource Group Name

11. Another way to create the resource group name: click on "Resource groups -> Add".

Resource Group

12. Once the deployment has succeeded, we can see this output.

Build Succeeded

Simple steps to create an Asp.Net Core Application

  1. Open Visual Studio, then click on File > New > Project.
  2. Select Visual C# > Web > ASP.NET Core Web Application.
  3. We have given our application the name "MyFirstAzureWebApp".
  4. Then, click OK.
  5. Press "Ctrl+F5" to run the application.

App Publishing into Azure

We created a default ASP.NET Core application ( we have made some changes in the UI section ) for the publishing process.

  1. Right click on the application and click on the Publish menu.
App Publishing

2. Click on "Microsoft Azure App Service".

3. We chose our existing resource group ( the "AzureDemo" group that we created in our Azure Portal ), and it will display the app name inside the "AzureDemo" folder. This is displayed only when we are logged into Visual Studio with our Azure credentials ( email & password ).

Resource Name

Output

The application is hosted at the given domain address: http://menoth.azurewebsites.net/

Reference

See Also

You can download other ASP.NET Core source codes from MSDN Code, using the link, mentioned below.

Summary

We learned how to create and deploy an ASP.NET Core web app in Azure. I hope this article is useful for all ASP.NET Core & Azure beginners.

 
