How to Build a Rocking Github Profile

Since the acquisition by Microsoft, GitHub has gotten so much better! They have added many new features and released a mobile app. GitHub recently made a lot of UI improvements, including the ability to add a Twitter handle to your profile. They also introduced a special feature for developers that lets you showcase yourself by pinning a README.md containing information about you, your work, your portfolio and anything else to your GitHub profile.

In this post, I'll show you how to create a rocking GitHub profile to showcase your skills when someone visits your profile.

Prerequisites

  • A GitHub account
  • Basic markdown knowledge
  • Expertise in making GIFs would be an added advantage

Step 1 :

Create a new ✨special✨ repository named after your username. The repository name is case sensitive, so make sure to use the same case as your account’s username.

Creating special repository

Click the checkbox Initialize this repository with a README. This will create a README.md file inside your <Username>/<Username> repository, where you will add your details.

Step 2 :

Edit the README.md file in your ✨special✨ repository as you deem fit (add images, text, tables, lists, embeds and anything else Markdown supports). The README will automatically appear on your public GitHub profile!

Template Github Profile

If you are not sure what to add, you also get a free template out of the box, cool right?
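For reference, the generated template looks roughly like this (from memory, so the exact prompts may differ; your username replaces `username`):

### Hi there 👋

<!--
**username/username** is a ✨ special ✨ repository: its README.md appears on your GitHub profile.
-->

- 🔭 I’m currently working on ...
- 🌱 I’m currently learning ...
- 👯 I’m looking to collaborate on ...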

Here’s my own finished rocking profile page from the special repository:

My Github Profile

This is definitely a great feature for developers to show off their skills to recruiters, followers and anyone else. I would encourage everyone to get creative and showcase everything about yourself to your followers.

Done with your special repository? Drop a link to your GitHub account in the comments and let’s see how amazing yours looks. ✌🏾 Cheers!

Deploy Deno App on Azure with Github Actions

Overview :

Two weeks ago, Ryan Dahl (the creator of Node.js) announced the first stable version of Deno. As the tagline says, it is “A secure runtime for JavaScript and TypeScript”: Deno is a runtime for JavaScript and TypeScript based on the V8 JavaScript engine and the Rust programming language. I was a Node developer for two years, and if you want to get started with Deno, knowing Node.js is an added advantage. Deno has arrived as a competitor to Node.js; it won’t take over the industry quickly, but many people are sure that it eventually will.

I have been reading a lot of documentation and material to understand the differences. Here are the advantages I see in Deno:

  • Secure by default: no file, network, or environment access unless explicitly enabled.
  • Supports TypeScript out of the box.
  • Ships as a single executable file.
  • Has built-in utilities like a dependency inspector (deno info) and a code formatter (deno fmt).
  • Does not use npm.
  • Does not use package.json in its module resolution algorithm.
  • All async actions in Deno return a promise, so Deno provides different APIs than Node.
  • Uses ES Modules and does not support require().
  • Has a built-in test runner for testing JavaScript or TypeScript code.
  • Always dies on uncaught errors.
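To see “secure by default” in action: the web server we build later won’t even open a port unless you grant network access explicitly.

# Denied: no permissions granted, so listening on a port fails
deno run server.ts

# Allowed: network access is granted explicitly
deno run --allow-net server.ts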

Like many other developers, I was very excited when Deno was announced. In this post I will demonstrate how to create a simple web API with Deno and deploy it to production on an Azure Web App.

Prerequisites:

You will need an Azure subscription. If you do not have one, you can simply create one with a free trial.

Install Deno :

Using Shell (macOS, Linux):

curl -fsSL https://deno.land/x/install/install.sh | sh

Using PowerShell (Windows):

iwr https://deno.land/x/install/install.ps1 -useb | iex

Using Homebrew (macOS):

brew install deno

Using Chocolatey (Windows):

choco install deno

Using Scoop (Windows):

scoop install deno

Services used:

  • Azure Web App
  • Github Actions

Step 1 : Create a Deno REST API

I will not walk through every step of creating the REST API; if you are familiar with building APIs with Node, the process is much the same. You need a main file, server.ts, where the routes are wired up (server.ts):

import { Application } from "https://deno.land/x/oak/mod.ts";
import router from "./routes.ts";
const PORT = 8001;

const app = new Application();

app.use(router.routes());
app.use(router.allowedMethods());

console.log(`Server at ${PORT}`);
await app.listen({ port: PORT });

One feature I personally like in Deno is its first-class TypeScript support, which addresses some of the “design mistakes” in Node.js. I am going to create an API to fetch/add/delete products, so my interface looks as below (types.ts):

export interface Product {
    id: string;
    name: string;
    description: string;
    price: number;
    status: string;
}

Similar to how you would define routes in Node, you need to define the routes for the different endpoints users call to execute the fetch/add/delete operations, as follows (routes.ts):

import { Router } from "https://deno.land/x/oak/mod.ts";
import { delete_product, add_product, get_product, get_products } from "./Controllers/Products.ts";

const router = new Router();

router.get("/", ctx => {
    ctx.response.body = "Welcome to Deno!";
});

router.get("/get/:id", get_product);
router.post("/add", add_product);
router.get("/get_all_products", get_products);
router.get("/delete/:id", delete_product);

export default router;

The final step is to implement the logic behind each of those routes, i.e. the handler methods referenced above. For example, get_products looks like this:


import { Product } from "../types.ts";

let products: Product[] = [
    {
        id: "1",
        name: "Iphone XI",
        description: "256GB",
        price: 799,
        status: "Active"
    }
];

const get_products = ({response}: {response: any}) => {
    response.status = 200;
    response.body = products;
};
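The add_product handler follows the same pattern. Here is a minimal sketch; the exact body-parsing API differs between oak versions, and the id generation is simplified for illustration:

const add_product = async ({ request, response }: { request: any; response: any }) => {
    // oak parses the JSON payload from the request body
    const body = await request.body();
    const product: Product = body.value;
    // Illustrative id generation; use a proper UUID in real code
    product.id = Date.now().toString();
    products.push(product);
    response.status = 201;
    response.body = { success: true, data: product };
};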

You can access the whole code from this Repository.

Run the Deno app:

Once you are good with everything, you can run the app locally and check that the endpoints work as expected. The -A flag grants all permissions; for tighter control you could grant only what is needed (e.g. --allow-net).

deno run -A server.ts

You should see the app running on port 8001, and you can access the endpoints as follows:

Deno API
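If you prefer the terminal, a quick smoke test might look like this:

curl http://localhost:8001/
curl http://localhost:8001/get_all_products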

Step 2 : Create Azure Resources

Now we are done with the first step and the app runs successfully locally. Next, let’s deploy the app to Azure. In order to do that, you need to create a resource group first.

Create a resource group named Deno-Demo

You can navigate to the Azure portal, search for Resource Group in the search bar and create a new one as defined here!

The next step is to create the Web App. As we are going to deploy this app to a Linux environment, you can set the configuration as follows:

Web App Configuration

Step 3 : Deploy to Azure with Github Actions

One of the recent innovations from the GitHub team, loved by all developers, is GitHub Actions. Personally I am a big fan of GitHub Actions, and I have published a few posts earlier explaining them. To configure a GitHub Action for our application, first push the code to your GitHub repository.

Create a deno.yml

To deploy the app, we first need to create a workflow. You can create a new workflow by navigating to the Actions tab and clicking New workflow.

New Workflow

I am assuming you are familiar with the important GitHub Actions concepts; if you are new, you can explore them here. In this particular example I will be using an action created by Anthony Chu, a Program Manager on the Azure Functions team. My deno.yml looks like below:

name: Deploy Deno app to Azure

# Run the workflow on every push (adjust the trigger to taste)
on: push

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:

  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
    
    - uses: actions/checkout@v2

    - uses: azure/login@v1.1
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}
    
    - name: Set up Deno
      uses: denolib/setup-deno@master
      with:
        deno-version: 1.0.2

    - name: Bundle and zip Deno app
      run: |
        deno bundle server.ts server.bundle.js
        zip app.zip server.bundle.js
    - name: Deploy to Azure Web Apps
      uses: anthonychu/azure-webapps-deno-deploy@master
      with:
        app-name: denodemo
        resource-group: deno-demo
        package: app.zip
        script-file: server.bundle.js
        deno-version: "1.0.2"

One important thing to verify is that resource-group and app-name match what you created on Azure.

You also need to add your application’s credentials under Secrets in the GitHub repository. You can generate a new service principal and obtain the credentials as below:


az ad sp create-for-rbac --name "deno-demo" --role contributor  --scopes /subscriptions/{SubscriptionID}/resourceGroups/deno-demo  --sdk-auth

It will generate a JSON like below,

Generate Service Principal
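If you cannot read the screenshot, the --sdk-auth output has roughly this shape (values redacted; besides the four keys shown it also contains a handful of endpoint URLs). Paste the whole object as the secret:

{
  "clientId": "<GUID>",
  "clientSecret": "<secret>",
  "subscriptionId": "<GUID>",
  "tenantId": "<GUID>"
}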

Copy and paste the JSON into a secret named “AZURE_CREDENTIALS”:

Add Secret

Now that everything is in place, update a file in the repository and watch the workflow get triggered. You can monitor the deployment by navigating to the workflow run.

Workflow Execution

Once everything succeeds, navigate to the Azure portal and open the Web App endpoint to see if the app is running.

WebApp with Deno API

You can see the app running successfully on Azure.

Deno Web API on Azure

Final words

I really enjoyed learning about the Deno project and building this simple app. I hope this article is of value for anyone getting started with Deno on Azure. Do I see Deno gaining in popularity? Yes. However, I do not see it replacing Node.js and npm, for several reasons. If you found this article useful, or if you have any questions, please reach out to me on Twitter. Cheers!

Azure Static Web Apps is simply AWESOME: Deployed a MEME generator app in 10 seconds!

One of the highlights among the announcements made at Microsoft Build 2020 was the new Azure service revealed in the keynote: Azure Static Web Apps. Azure Static Web Apps is a service that automatically builds and deploys full stack web apps to Azure from a GitHub repository. It allows web developers to publish websites to a production environment for free by building apps from a GitHub repository. Developers can use modular and extensible patterns to deploy apps in minutes while taking advantage of the built-in scaling and cost savings offered by serverless technologies.

It provides killer features for developers such as:

  • Free web hosting for static content like HTML, CSS, JavaScript, and images.
  • Integrated API support provided by Azure Functions as backend APIs
  • First-party GitHub integration where repository changes trigger builds and deployments with Github Actions
  • Globally distributed static content, putting content closer to your users.
  • Free SSL certificates, which are automatically renewed.
  • Custom domains* to provide branded customizations to your app.
  • Seamless security model with a reverse-proxy when calling APIs, which requires no CORS configuration.
  • Authentication provider integrations with Azure Active Directory, Facebook, Google, GitHub, and Twitter.
  • Customizable authorization role definition and assignments.
  • Back-end routing rules enabling full control over the content and routes you serve.
  • Generated staging versions powered by pull requests enabling preview versions of your site before publishing.

How I deployed the meme generator app:

I was building this meme generator app for an Angular session, using Azure Cognitive Services to detect people in an image and to generate a meme by adding whatever text the user wants. As soon as Azure Static Web Apps was announced, I wanted to check out how easy it is to deploy this application with it. The experience was seamless: deploying and generating a URL took only a few seconds.

Let me explain how I achieved this in quick time.

Step 1. Sign in to the Azure portal, search for “Static Web Apps”, and click the Create button

Visit https://portal.azure.com, sign in, and use the search box at the top to locate the Static Web Apps service (note that it’s currently in “preview”). Click the Create button to get started.

Create Static Web App

In this step you’ll fill out the Static Web Apps form and sign-in to your Github account to select your repository.

  • Select your Azure subscription.
  • Create a Resource Group
  • Name your app; in my case it’s meme4fun
  • Select a region (as of now it’s not available in all regions)
  • Sign-in to Github and select your org, repo, and branch. 

Once you’re done filling out the form click the Next: Build > button.

Step 2: Define Angular App location, API, and Build Output

The next step is to define the path where the app is located in the repository, the API location (I did not have any Azure Function integrated, so I left it empty), and the directory where the build artifacts (your bundles) are located (i.e. dist/meme-4-fun). After entering that information, click the Review + create button.

Defining Paths
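Behind the scenes these values end up in a GitHub Actions workflow that the service generates in your repository. The relevant part looks roughly like this sketch (the exact action version and field names have changed across preview releases, e.g. output_location was once app_artifact_location):

- uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    repo_token: ${{ secrets.GITHUB_TOKEN }}
    action: "upload"
    app_location: "/"                    # where the app source lives in the repo
    api_location: ""                     # empty: no Azure Functions API here
    output_location: "dist/meme-4-fun"   # folder with the build artifacts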

Step 3: Click Create and look for the magic!

Once you are good with everything, go ahead and click the Create button; you will see the application get deployed and an endpoint generated to access it publicly.

Deployment complete

Once the deployment is done, go to the resource and click on Overview; you will see the configuration as follows:

Overview

It has the URLs of the GitHub Actions workflow, the GitHub source code, and the deployed application. If you’d like to see the build in action on GitHub, click the workflow file above.

You can access the meme generator application and create your own memes from https://lively-forest-0fd67f010.azurestaticapps.net/

Here are some great links you can visit to learn more. 

The above app is also available in Microsoft’s gallery of sample static web apps.

If you’re a web dev you need to check out this cool service for sure. Cheers!

How the world reacts to working from home (#WFH) using serverless with Azure (Cosmos DB + Functions + Logic Apps)

Overview:

Due to the recent COVID outbreak, and as it continues to spread throughout the world, employees are being asked to work from home. While most companies are already adapting to this new way of working, there are mixed opinions among employees in different parts of the world. IMO, working from home is a good option for new parents, people with disabilities and others who aren’t well served by a traditional office setup. As this was appreciated by most of my colleagues and industry friends, I wanted to see how everyone across the world is reacting to this new way of working. In this post, I will explain how I built an application in 10 minutes to answer this particular question using the serverless computing offered by Azure.

Prerequisites:

You will need an Azure subscription. If you do not have one, you can simply create one with a free trial.

Services used:

  • Azure Logic Apps
  • Azure Functions
  • Azure CosmosDB
  • Cognitive Service
  • PowerBI

Architecture:

Architecture

The architecture of the solution is very simple, and it uses Azure managed services that handle the infrastructure for you. Whenever a new tweet is posted, a Logic App receives and processes it. The sentiment score of the tweet is produced by Cognitive Services, an Azure Function then turns that score into a sentiment label, and finally the result is inserted as a row into Power BI to visualize on the dashboard. You can also use SQL Server/Cosmos DB to store the tweet data if you want to process it later.

How to build the application:

Step 1: Create the Resource Group

As the first step, we need to create the resource group that will contain all the required resources. Navigate to the Azure portal and create a resource group named “wfh-sentiment”.

Step 2 : Create the Function App

As the next step, let’s create the Function App we need to detect the sentiment of each tweet. You can create and deploy the Function App using Visual Studio Code (make sure you have installed VS Code along with the Azure Functions Core Tools and extension). Press Ctrl + Shift + P to create a new Functions project and select C# as the language (but you could use any language you are familiar with):

Create a new Function App
Select the language as C#
Select the trigger as HttpTrigger
Provide the name of the function

The logic of the Function App is as follows:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using System.Net.Http;

namespace WorkFromHome
{
    public static class DecideSentinment
    {
        [FunctionName("DecideSentinment")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequestMessage req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            string Sentiment = "POSITIVE";

            // Read the score produced by the Cognitive Services connector and map it to a label
            double score = await req.Content.ReadAsAsync<double>();
            if (score < 0.3)
            {
                Sentiment = "NEGATIVE";
            }
            else if (score < 0.6)
            {
                Sentiment = "NEUTRAL";
            }

            return req.CreateResponse(System.Net.HttpStatusCode.OK, Sentiment);
        }
    }
}

The full source code can be found here. You can then deploy the Function App to Azure using Ctrl+Shift+P and selecting Deploy to Function App.
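Once deployed, you can sanity-check the function by posting a raw score to it. For example (the URL is a placeholder for your own function app):

curl -X POST -H "Content-Type: application/json" -d "0.2" https://<your-function-app>.azurewebsites.net/api/DecideSentinment
# Expected response: "NEGATIVE"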

Step 3: Create the Azure Cognitive Service to determine the sentiment of the tweet text

As discussed above, let’s create the Cognitive Services resource that determines the sentiment score of each tweet. Go to the same resource group, search for Cognitive Services, and create a new service as follows:

Create Cognitive Service

Step 4: Create Cosmosdb to store the data

In my application I made this step optional, as I don’t need to save the tweet data for historical analysis. But you can definitely use Cosmos DB to store the tweets and process them later. Just as you created the Cognitive Services resource, create a new Cosmos DB account and a database to store the data as follows:

Cosmosdb to store tweets data

Step 5: Create PowerBI dataset to visualize the data

Navigate to the Power BI portal and create a new dataset to visualize the data we collect, as follows:

Create a new streaming dataset in the workspace
Select API in the new streaming dataset options
Configure the fields as above.

Step 6: Create the Logic App and configure the Flow

This is the core part of the application, as we are going to link the components above together as one flow. You can build the flow using the designer or in code view; I will be using the designer.

As denoted above, the first step is to add the Twitter connector: pick the trigger named “When a new tweet is posted” from the list of available connectors.

Connector when new tweet is posted

You need to configure the search text for which you want to fetch tweets; in this case I am going to use the hashtag “#WFH” and set the polling interval to 30 seconds.

Look for new tweets every 30 seconds

The second step is to pass the tweet to the Azure Cognitive Services sentiment analysis, which returns the score as output.

Select detect sentiment as the next step

You need to provide the key and the URL, which can be obtained from the Cognitive Services resource you created above.

Configure the detect sentiment of the tweet with the input as the tweet text

The third step is to pass the score obtained above to the Azure Function we already deployed, which determines the sentiment of the tweet. Select the Azure Functions connector from the list as follows:

Select Azure Function which will display the functions already deployed to azure
Configure score from the Cognitive service as an input to the Azure function

The next step is to stream the data to Power BI so that it is readily available for visualization. Select the connector below as the next step:

Configure Add rows to a dataset to insert data to PowerBI

We are almost done with the configuration. As the last step, map the data fields from the steps above to the dataset columns; the final configuration looks as below.

Mapping the dataset with the outputs from the previous steps

Step 7: Visualize it in PowerBI

Now that we have configured all the required steps in the Logic App, navigate to Power BI and select the dataset from which you want to create the report/dashboard. In this case, select the dataset we already created:

Select the dataset

The rest is up to you: you can create all the usual charts/visualizations you need. I created four basic visuals to see how the world reacts to working from home:

  • The total number of unique tweets
  • The distribution of sentiments as a pie chart
  • A table displaying all the data (user, location, sentiment, score and the tweet)
  • A world map showing the geographic distribution of sentiments

And this is how my application/dashboard looks:

Final Dashboard with RealTime Tweets

As you can see, the tweets and sentiments are being inserted into the dataset, and most of the sentiments are positive (looks green!). You can replicate the same architecture for your own scenarios (brands, public opinion, etc.).

As you can see, some complex scenarios/problems can be easily solved with the help of serverless computing, and that is the power of Azure. Cheers!

For those who are interested you can view the Live dashboard.

Automate Azure Resources with PowerApps

Microsoft’s Power Platform is a low-code platform which enables organizations to analyse data from multiple sources, act on it through applications, and automate business processes. The Power Platform has three main components (Power Apps, Power BI and Microsoft Flow) and it also integrates with two main ecosystems (Office 365 and Dynamics 365). In this blog we will look in detail at Power Apps, which helps you quickly build apps that connect to data and run on the web and mobile devices.

Overview :

We will see how to build a simple Power App to automate the scaling of Azure resources in a simple yet powerful fashion, helping you save money in the cloud using multiple Microsoft tools. In this post we will use Azure Automation runbooks to hold the Azure PowerShell scripts, Microsoft Flow as the orchestration tool, and Power Apps as the simple interface. If you have an Azure environment with a lot of resources, managing scaling becomes a hassle if you don’t have autoscaling implemented. The following Power App will help you understand how easy it is to build this scenario.

Prerequisites :

We will use the following Azure resources to showcase how scaling and automation can save a lot of cost:

  • AppService Plan
  • Azure SQL database
  • Virtual Machines

Step 1: Create and Update Automation Account :

First we need to create the Azure Automation account. Navigate to Create a resource and search for Automation Account in the search bar. Create a new resource as follows.

Once you have created the resource, you also need to install the modules required for this particular demo:

  • AzureRM.Resources
  • AzureRM.Profile

Installing necessary Modules

Step 2: Create Azure PowerShell RunBooks

The next step is to create the runbooks from the Runbooks blade. For this tutorial, let’s create six runbooks, one for each resource and purpose. We need a PowerShell script for each of these:

  • Scale Up SQL
  • Scale Up ASP
  • Start VM
  • Scale Down SQL
  • Scale Down ASP
  • Stop VM

Creating RunBook

The PowerShell scripts for each of these can be found in the GitHub repository. We need to create a runbook for each of the scripts; a sketch of one follows below.
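To give an idea of the shape, a scale-up runbook for the SQL database might look like this (resource names are placeholders, and it assumes the AzureRM.Sql module and the default Run As connection are available in the Automation account):

# Authenticate with the Automation account's Run As connection
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Add-AzureRmAccount -ServicePrincipal `
    -TenantId $conn.TenantId `
    -ApplicationId $conn.ApplicationId `
    -CertificateThumbprint $conn.CertificateThumbprint

# Scale the database up to the S1 tier (names below are placeholders)
Set-AzureRmSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sql-server" `
    -DatabaseName "my-db" `
    -RequestedServiceObjectiveName "S1"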

All the runbooks above can be scheduled to run for a desired length of time, on particular days of the week and time frames, or continuously with no expiry. In an enterprise, for non-production environments you would want resources to scale down at the end of business hours and over the weekend.

Step 3: Create and Link Microsoft Flow with Azure Automation Runbooks

Once the PowerShell scripts and runbooks are tested and published, we can move on to the next step: creating the flow. Navigate to Microsoft Flow and create a new flow from a template.

New Flow from the template

Select the PowerApps Button template. The first step we need to add is the Azure Automation job; when you search for “automation” you will see the list of available actions. Select Create job and pick the runbook you created above. If you want all the actions in one app, add them one by one; if not, you need to create separate flows for each one.

Here I created one with the ScaleUpDB job, which executes the scale-up command for the database.

Once you are done with all the steps, save the flow with a suitable name.

Step 4 : Create Power Apps and link them to your flow buttons

Once the flow buttons are created, log in to Power Apps with a work/school account. Power Apps gives you a blank canvas for mobile or tablet. You can then begin to customise the app with text labels, colours and buttons as below.

PowerApps Name

In this case we will have buttons to scale the SQL database instance count up and down; my app looked like below, with a few labels and buttons.

AutoMateApp

Next, link the flow to the app’s button by navigating to Action -> Power Automate; the resulting button formula is sketched below.

Linking Button Action
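After attaching the flow, the button’s OnSelect property simply calls it. With a flow named ScaleUpDB (the name here is illustrative), it looks like:

ScaleUpDB.Run()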

Once both the Scale Up and Scale Down actions are linked, save and publish the app.

Step 5 : Verify Automation

In order to verify that things are working correctly, click Scale Up and Scale Down a few times, then navigate to the Azure portal and open the Automation account we created.

Navigate to the Overview tab to see the requests for each job made via the Power App, as below.

Jobs executed.

In order to look at the history, navigate to the Jobs blade.

Jobs Execution

Further, you can build a customized app for different environments with different screens using Power Apps. With the help of Azure Alerts, whenever you get an alert about heavy resource usage or spikes, a single click of a button lets you scale the resources up and down as you need.

Improvements and things to consider:

This is just a starting point to explore more on this functionality, but there are improvements you could add to make this really useful.

(i) Sometimes the Azure Automation action fails to start the runbook. Your flow needs to handle this condition.

(ii) Sometimes the runbook action succeeds, but the runbook execution itself errors. Consider using try/catch blocks in the PowerShell and outputting the final result as a JSON object that can be parsed and further handled by the flow.

(iii) You should update the code to use the Az modules rather than AzureRM.

Note : The user who executes the PowerApp also needs permission to execute runbooks in the Automation Account.

With this app, it becomes handy for the operations team to manage resources without logging in to the portal. Hope it helps someone out there! Cheers.

How to resolve : Cosmos DB x-ms-partitionkey Error

One of the most repeated questions I came across on Stack Overflow under the #CosmosDB tag is how to resolve the error “The partition key supplied in x-ms-partitionkey header has fewer components than defined in the collection”.

This error can occur when you attempt to get a document from Cosmos DB using the REST API or an SDK. If you are using a partitioned collection, you need to add the “x-ms-documentdb-partitionkey” header. If you still get the error after adding the header, you can fix it with the following methods.

The partition key must be specified as a JSON array (with a single element). For example:

In C#:

  requestMessage.Headers.Add("x-ms-documentdb-partitionkey", " [ \"" + partitionKey + "\" ] ");

In JavaScript:

headers['x-ms-documentdb-partitionkey'] = JSON.stringify([pkey]);

The partition key for a partitioned collection is actually the path to a property in Cosmos DB. You therefore specify it in the format /{path to property name}, e.g. /abc, while the header carries the value of that property for the document you are addressing; see the sketch below.
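Putting it together, the headers of a point-read request might look like this in JavaScript (a sketch: the authorization token generation is omitted, and the partition key value is illustrative):

const headers = {
    "Authorization": authToken,                // HMAC token, see the Cosmos DB REST auth docs
    "x-ms-date": new Date().toUTCString(),
    "x-ms-version": "2018-12-31",
    // Must be a JSON array, even for a single partition key value:
    "x-ms-documentdb-partitionkey": JSON.stringify(["abc-value"])
};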

Hope this helps someone out there who is struggling to fix this issue!

Best practices in handling Azure Service Bus dead-letter messages

Azure Service Bus is used as one of the most reliable enterprise messaging services across different domains like health care, finance and many more.

Users often have uncertainties about handling dead-letter messages. Before diving into the best practices, I would like to give you a quick introduction to dead-lettered messages.

Azure Service Bus Dead-letter Queue

Azure Service Bus queues and topic subscriptions provide a secondary sub-queue, called a dead-letter queue. The purpose of the dead-letter queue is to hold messages that cannot be delivered to any receiver, or messages that could not be processed. So, any message that resides in the dead-letter queue is called a dead-lettered message.

Best practices in handling Azure Service Bus dead-letter message

Most of the time, we notice that messages fail for one of the following reasons:

  1. Dependent service not available
  2. Faulty message
  3. Process code issue

Dependent service not available

This is one of the foremost and most frequent reasons: the services that rely on message delivery may go down for a short period. For instance, Redis or SQL connection issues often happen.

Faulty Message

Depending on the business scenario, you may configure custom properties on your Azure Service Bus messages and validate the values contained in those custom/user-defined properties.

If a message doesn’t have a mandatory property, or some value is incorrect, the message will end up in the dead-letter queue after maxDeliveryCount is reached.

The failed delivery can also be caused by a few other reasons such as network failures, a deleted queue, a full queue, authentication failure, or a failure to deliver on time.

Here we can drill down the reasons into two ways:

  1. System level dead-lettering
  2. Application level dead-lettering

Reasons for System level dead-lettering

  • Header Size Exceeded
  • Error on processing subscription rule
  • Exceeding time to live value
  • Exceeding maxDeliveryCount
  • Session handling errors (e.g., a message without a session id sent to a session-enabled entity)

Reasons for Application level dead-lettering

  • Messages that cannot be properly processed due to any sort of system issue
  • Messages that hold malformed payloads
  • Messages that fail authentication when some message-level security scheme is used

In this second scenario (faulty messages), the best practice is to manually inspect the dead-lettered messages (using Service Bus Explorer or Serverless360) to correct the message data, or sometimes to purge messages and clear the queue.

Message process code issue

This is a very rare case, given the good number of community resources with solid sample code. The developer should keep all the scenarios in mind and handle all exceptions.

In the first and third scenarios, the best practice is to run code that reprocesses the dead-lettered messages; you can find sample code below.

using System;
using System.Configuration;
using System.IO;
using System.Text;
using Microsoft.ServiceBus.Messaging;

internal class Program
{
    private static string connectionString = ConfigurationManager.AppSettings["GroupAssetConnection"];
    private static string topicName = ConfigurationManager.AppSettings["GroupAssetTopic"];
    private static string subscriptionName = ConfigurationManager.AppSettings["GroupAssetSubscription"];

    // Dead-letter messages live in a sub-queue of the subscription
    private static string deadLetterQueuePath = "/$DeadLetterQueue";

    private static void Main(string[] args)
    {
        // Application-specific service that reprocesses each message;
        // provide your own implementation here.
        IGroupAssetSyncService groupSyncService = null;

        try
        {
            ReadDLQMessages(groupSyncService);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
            throw;
        }

        Console.WriteLine("All messages read successfully from the dead-letter queue");
        Console.ReadLine();
    }

    public static void ReadDLQMessages(IGroupAssetSyncService groupSyncService)
    {
        int counter = 1;

        // Note the path suffix: we receive from the DLQ, not from the subscription itself
        SubscriptionClient subscriptionClient = SubscriptionClient.CreateFromConnectionString(
            connectionString, topicName, subscriptionName + deadLetterQueuePath);

        while (true)
        {
            BrokeredMessage bMessage = subscriptionClient.Receive(TimeSpan.FromMilliseconds(500));
            if (bMessage != null)
            {
                string message = new StreamReader(bMessage.GetBody<Stream>(), Encoding.UTF8).ReadToEnd();
                groupSyncService.UpdateDataAsync(message).GetAwaiter().GetResult();
                Console.WriteLine($"{counter} message received");
                counter++;
                bMessage.Complete();
            }
            else
            {
                break;
            }
        }

        subscriptionClient.Close();
    }
}

Myths to chunk out

Does the SequenceNumber that Azure Service Bus adds to a message keep increasing on each failed delivery attempt until it reaches maxDeliveryCount?

The sequence number can be trusted as a unique identifier since it is assigned by a central and neutral authority and not by clients. It also represents the true order of arrival and is more precise than a time stamp as an order criterion, because time stamps may not have a high enough resolution at extreme message rates and may be subject to (however minimal) clock skew in situations where the broker ownership transitions between nodes.

Is setting maxDeliveryCount = 1 a best practice for dealing with poison messages, so that the consumer never attempts to process a message twice once it has failed?

It is not a best practice to set maxDeliveryCount = 1, because if a transient network/connection issue occurs, the built-in retry will reprocess the message and clear it from the queue.

Also, if you are reading messages in batches, the complete batch will be reprocessed if an error occurs on any one message.

Conclusion

In this blog, we took a sneak peek at Azure Service Bus dead-letter queues and the various reasons messages get dead-lettered. Further, we discussed the best practices for handling dead-lettered messages. Finally, we looked into the myths to chunk out while dealing with Azure Service Bus dead-lettered messages.

I hope you enjoyed reading this article. Happy Learning!

This article was contributed to my site by Nadeem Ahamed and you can read more of his articles from here.

Build and Deploy Future Proof Application with Azure Kubernetes Service in 10 Minutes (Nodejs, Go + AKS)

QUARANTINE, SELF-ISOLATION, SOCIAL-DISTANCING: for the past month, I have been living with these words. Like most of us, I am investing this time in learning new technologies and tools, and I challenged myself to skill up and gain deep knowledge of certain workloads on Azure.

Kubernetes provides a uniform way of managing containers. Its aim is to remove the complexity of deciding where applications should be scheduled to run, how to locate them, how to ensure they are running, and how to autoscale and deploy them. Azure Kubernetes Service (AKS) helps customers achieve their business goals by providing a layer of automation on top of their infrastructure. Looking at the technical features, AKS has a lot to offer, but at the end of the day it is a great platform for saving money and growing faster.

Azure Kubernetes Service is a great fit for microservice architectures. If your application needs to start hundreds of containers quickly, or terminate them just as quickly, while keeping full control of those services, AKS is a great option. There are other scenarios, such as big data and IoT, where you would also consider AKS a preferred choice. In this post I will explain how to easily set up your application on an AKS cluster in 10 minutes, with CI/CD pipelines.

Prerequisites:

You will need an Azure subscription. If you do not have one, you can simply create one with a free trial.

How to build & Deploy the application:

If you are a beginner with Azure Kubernetes Service, Azure DevOps is the best place to look in order to understand how an application is deployed on AKS. An Azure DevOps Project simplifies the setup of an entire continuous integration (CI) and continuous delivery (CD) pipeline to Azure. The cool thing is that you can start with existing code or use one of the provided sample applications, and quickly deploy that application to various Azure services such as Virtual Machines, App Service, Azure Kubernetes Service (AKS), Azure SQL Database, and Azure Service Fabric.

Let’s deploy a Node.js app to Azure Kubernetes Service:

Navigate to the Azure portal and search for Azure DevOps Project in the marketplace/search bar.

Azure Devops Project

Let’s go ahead and add a new project.

Add new Azure Devops Project

An Azure DevOps Project enables developers to launch an app with any Azure App Service in just a few quick steps, providing everything needed to develop, deploy and monitor the app. Create a DevOps Project, and it provisions all the Azure resources and provides a Git code repository, Application Insights integration and a continuous delivery pipeline set up for deployment to Azure. The DevOps Project dashboard lets you monitor code commits, builds and deployments from a single view in the Azure portal. How cool is that?

With the help of Azure DevOps Projects, you can build an Azure application, on an Azure service, in quick time. You also get automatic full CI/CD pipeline integration, built-in monitoring and deployment to the platform of your choice. Azure DevOps Projects support almost all the popular languages out there, such as .NET, Java, Node.js, PHP, Python and Go.

The next step is to select the language for the application; I will choose Node.js. But you could pick any language you want to try.

Create Node.js Devops Project

Once you select the language, the next step is to select the framework the application should be based on. For example, if you choose Python it could be based on Flask, Django, etc. You have the same flexibility whichever language you decide on. In this case I will choose Express.

Select the Framework

The next step is the critical part of the process: it defines which service you will use to deploy the app. You can run your application on Windows or Linux, and deploy to an Azure Web App, Virtual Machine, Service Fabric, or Azure Kubernetes Service. Each of those options provides deployment in an elegant and fast way. In this case, we will deploy the application to Azure Kubernetes Service.

Azure Kubernetes Service to Deploy

Once you are done with the above step, the final step is providing the configuration details for the Kubernetes cluster on AKS, as follows.

Most of the settings are self-explanatory; you can change the size of the underlying VMs based on your requirements. The default node count for your cluster is 3. If you need to change the cluster or container registry settings, click Additional Settings; there you can configure the Kubernetes version, node count, App Insights and resource group location. The HTTP application routing add-on makes it easy to access applications deployed to your AKS cluster; in this case we will disable it.

Additional Settings AKS configuration

A container registry is needed, as your images will be pushed to it. Once you’re good with all the settings, click OK and you’re done! You will see a notification box as below.

K8s cluster, Container Registry and CI/CD pipelines are created

Once everything is created you will be redirected to a Dashboard page as below.

Resources in page

The five stages involved are:

  1. Azure Kubernetes Cluster: Created and configured your Azure Kubernetes cluster and application endpoint.
  2. Azure Container Registry: Created, and the application image is pushed to the container registry.
  3. Repository: Created a distributed Git repository and checked in sample code.
  4. CI/CD Pipeline: Seamlessly connected with the Azure DevOps collaboration solution, allowing you to plan, test, release and monitor your solutions.
  5. Application Insights: Created and configured your Application Insights telemetry, which enables active monitoring and learning to proactively detect issues and continuously analyze and test hypotheses without code.

You can see all the resources created on Azure under the resource group:

Resource Group with All resources

When you click on the Kubernetes cluster, you can see the Kubernetes-related resources such as the dashboard, logs, etc.

Kubernetes cluster resources

If you navigate through the blade, you can see settings such as enabling Dev Spaces, the Kubernetes version, Application Insights, etc.

On the Azure DevOps side, you will see a new Azure DevOps project created, with a dashboard, backlog items, CI/CD pipelines, etc.

Azure Devops Project with CI/CD pipelines

When you click on the application endpoint, you will see the application running successfully on Azure Kubernetes Service.

Nodejs App on AKS

In order to verify the services and pods, you can follow the steps provided in the Azure Kubernetes dashboard configuration; when you open the dashboard, you can see the status of each service. Alternatively, you can check from the command line, as sketched after the screenshot below.

Azure Kubernetes dashboard
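A command-line check might look like this (resource names are the ones you chose earlier):

az aks get-credentials --resource-group <your-resource-group> --name <your-cluster>
kubectl get pods
kubectl get services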

In the past I have spent more than four days configuring Kubernetes to deploy an application, but Azure DevOps Projects simplify and speed up the DevOps process with Azure DevOps services. If you want to explore more scenarios on different Azure services, it is worth exploring the Azure DevOps Labs. I hope this makes it easier to get started with deployments to Azure services. Cheers!

Tip : Connect/Navigate to Microsoft Teams From any Web App

I have been involved in developing apps and integrating them into Microsoft Teams. At times customers ask whether they can utilize the features of Teams within their own application. The simple answer is NO, as the Teams platform architecture drives adoption of Teams by enabling developers to build applications from different industry verticals that can be embedded inside Teams.

However, there are two ways you can make the user navigate to Microsoft Teams from a web application, or even from Outlook.

Make a clickable link for Teams chat :

Here is a handy tip on how to make a one-click link that opens a Teams chat with you for whoever is already logged in. Say your web application is built with Angular; you can simply add a click event that navigates to Teams with this code:

 takeUserToTeams() {
    // Teams deep link that opens a chat with the given user
    window.location.href = 'https://teams.microsoft.com/l/chat/0/0?users=someone@domain.com';
  }

When the user clicks the button, it opens a web page where the user signs in with their Microsoft account. If the Teams app is installed on the client’s computer, the page offers to open the chat in the Teams client; otherwise they can use the Teams web experience.

User Navigation from web app to Teams
Opening chat seamlessly once user login

This is a very simple tip, but many customers ask for a way to do exactly this.

Create a Share to Microsoft Teams button

Third-party websites can also use the launcher script to embed Share to Teams buttons on their webpages, which will launch the Share to Teams experience in a popup window when clicked. This allows you to share a link directly with any person or Microsoft Teams channel without switching context.

Step 1: Add the launcher.js script to your webpage.

<script async defer src="https://teams.microsoft.com/share/launcher.js"></script>

Step 2 : Add an HTML element on your webpage with the teams-share-button class attribute and the link to share in the data-href attribute.

<div
  class="teams-share-button"
  data-href="https://<link-to-be-shared>">
</div>

This will add the Microsoft Teams icon to your website.

Share to Teams icon

That’s it! Whenever the user clicks the button, the content is shared directly from your app to Teams.

Hope these two tips help someone out there integrate Teams with their application and navigate users to Teams easily!

Great to be the Winner of the MVP Community Quiz

The MVP Community Quiz is an online quiz program by Isidora Katanic, the organizer of famous community events such as Experts Live Europe and Experts Live Switzerland.

Before joining Microsoft, I was a Microsoft MVP for two consecutive years, which gave me the opportunity to meet so many experts across the world and get to know them and their communities. The quiz mainly covered interesting facts about famous MVPs and the community events.

🏆 Among the winners!

It’s great to be listed among the winners as there were participants from 23 countries around the world.

Build a BOT in quick time to support COVID Scenario and place it with Teams and Your Website!

Overview

This is my second blog about a chatbot solution addressing the worldwide coronavirus pandemic. As the virus spreads across the world, it is really vital to provide citizens of all countries with up-to-date information. It has come to a level where certain countries are in lockdown mode, and more and more people have started to practice social distancing and work from home. Following the previous blog about sentiment analysis of employees working from home, in this blog I will explain how to build a chatbot that answers most queries related to health information and the latest guidance. A chatbot can be integrated with a web interface or any collaboration tool such as MS Teams, Slack or Telegram, and it can operate 24*7 without any downtime.

Prerequisites:

You will need an Azure subscription. If you do not have one, you can simply create one with a free trial.

How to Build:

If you are new to building chatbots, it is extremely easy, and you can expose a knowledge base via a bot in a few minutes.

Step 1: Create the Resource Group

As the first step, we need to create the resource group that will contain all the required resources. Navigate to the Azure portal and create a resource group named “rg-chatbot”.

Create Resource Group

Step 2: Create a knowledge base

As the next step, navigate to QnA Maker, sign in with your Microsoft account and then click the “Create a knowledge base” option.

QnA maker

Click the “Create a QnA service” button in “Step 1”, as shown below:

Create QnA service

You will be redirected to the Azure portal. Fill in the form, making sure to set the pricing tier of the QnA Maker service to F0 and the supporting Azure Search service to F if you want to host your bot components for free. Then click the Create button:

Create QnA service

The portal will deploy the relevant resources into your target resource group. If successful, you should get a notification that the resources were deployed, and you will see the following resources under that resource group:

Resources in QnA

Click on the App Service Plan you created.

Note: by default, the App Service Plan for the website is set to the S1 pricing tier. You should change it to the F1 tier if you don’t want to incur much cost on your subscription. You can do this by navigating to Scale up (App Service plan), selecting the Dev/Test tab, selecting F1 and clicking Apply.

Change the AppService plan

Step 3 : Connect your QnA service to your KB.

Navigate to QnA Maker again and click the “Refresh” button in “Step 2”.

Click on Refresh and select the bot we created

Give a meaningful name to your Knowledge Base (KB)

Name your KB

Step 4 : Populate Knowledge base from different sources

You can specify different sources to feed your knowledge base. It can be populated from uploaded files (e.g., PDF, MS Word or MS Excel formats), typed in manually, or pulled from websites containing FAQs. For this example, we will use the recommended standard knowledge base with COVID-19 FAQs from the Centers for Disease Control and Prevention and World Health Organisation websites.

Knowledge Base Sources

In the next, “chit-chat”, section you may choose a “personality” for your bot, so that it can answer some additional small-talk questions. This option enriches the knowledge base with additional question/answer pairs, so your bot can respond to various greetings.

Populate KB

Now you can create your KB with a simple button click; this sets up the knowledge base and populates it with the data from the different configured sources.

Select the way it should answer small-talk questions

Once it is successfully created, you will see a window as follows:

Create the knowledge base

Step 5: Test bot’s knowledge base

It’s time to test the bot’s knowledge base: click the Test button, then type a question and press Enter.

Test Knowledge base

If you are happy with the bot’s responses, go ahead and click “Save and train”, then switch to the “Publish” tab and publish the knowledge base.

Publish the Knowledge base

Step 6: Create Bot

Once it is successfully published, you can click Create Bot as follows:

Create Bot

You will be redirected to the Azure portal, where you can set the pricing tier of your Azure Bot Service to F0 (the free one), pick the SDK language you are familiar with (either C# or Node), and then click the Create button:

Create Web Bot

You can obtain the QnA auth key by clicking on it in the deployment details:

Obtaining AuthKey

Once everything is filled in, just hit the Create button. Once it is deployed, you will see it in the notifications.

Once the Web App Bot is deployed, you may verify its functionality by selecting “Test in Web Chat” in the left navigation bar and then typing your messages in the Test window.

Test in Web Chat

If you get replies similar to what is shown on the screenshot above, congratulations: you have successfully completed the setup and training of your bot!

Next step would be to make it accessible in your platform of choice.

Step 7 : Embed your bot into your own website

An Azure Web App Bot communicates with the external world via so-called “channels”. Channels are built for the relevant collaboration platforms, e.g. Skype, MS Teams or Telegram. To find out more about supported channels, please consult the Microsoft documentation here. By default, a Web App Bot has the “Web Chat” channel activated, which means you can easily start using it on your website.

Navigate to Channels on the blade of the created bot, then click “Get bot embedded codes” and finally click the “Click here to open Web Chat configuration page” link.

Web Chat

Now click the “Show” link to make one of the secret keys visible and use it to replace the <YOUR_SECRET_CODE> placeholder in the provided embed code sample. You can then paste this embed code into the source code of your target website.

You can embed this into websites built with various web development frameworks: Angular, React, Vue or anything else.

Copy the code and embed in your site
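For reference, the default Web Chat embed snippet has roughly this shape (the bot handle and secret are placeholders):

<iframe src="https://webchat.botframework.com/embed/<your-bot-handle>?s=<YOUR_SECRET_CODE>"
        style="min-width: 400px; width: 100%; min-height: 500px;">
</iframe>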

Simply clone it, replace the <PUT_YOUR_SECRET_CODE> placeholder in line # 1059 with the secret code from your Web Chat configuration page as described above, and you will get a fully functional web page with an embedded QnA chatbot.

In this case, I had already built a COVID tracker application with Angular, and this is how it looks when I embed the code we got from the portal.

Covid tracker with BOT in place

Step 8: Enable Chatbot in Microsoft Teams

Say you want to serve an internal audience such as employees, and you are widely using Microsoft Teams in this crisis situation; you would need to activate the relevant channel in the bot’s configuration.

Open the Web App Bot resource in the Azure portal again, select the “Channels” option from the blade, click the Microsoft Teams icon and press the Save button.

Select the channel you want to integrate with

Once the channel is successfully enabled, you can navigate to the Channels blade again and see the Microsoft Teams channel in the Running state.

MS teams on running state

Now switch to the Microsoft Teams client and select “App Studio” from the left navigation bar. If you don’t have App Studio, just install it from here.

Click the “Manifest editor” tab and then click the “Create a new app” button.

Create a new App on teams

Fill in all the necessary and mandatory details:

Add the details and make sure you get through all validation

Next, in “Capabilities -> Bots”, choose your Azure bot from the “Existing bot” tab’s drop-down list and then define its scope in MS Teams, e.g. “Personal” and “Team”, so your users can chat with the bot directly and within specific teams.

Setup the bot

Finally, in “Finish -> Test and distribute”, use the “Install” button (if you have MS Teams administrator access) or the “Download” button (to send a .ZIP package with the manifest details to your MS Teams administrator) to make your bot available in your MS Teams environment.

We’re done!

If successful, you should now be able to chat with your bot directly from within MS Teams.

BOT in Lit Mode

The above can be integrated into any platform listed in the channels, and it’s quite easy.

You can also consider using the Healthcare Bot if you want the chatbot to handle health-related scenarios only, and here is a detailed article on how to build the same.

Things to add/Improve:

Modify your bot with a lot more cool features, images and content via the code (either C# or Node).

Keep the knowledge base up to date: as shown above, it is important to keep your bot’s knowledge base current. Just navigate to the QnA Maker website, click the Save and train button, and publish as shown above.

I am sure this will be useful for those who have built COVID trackers: you can create and embed this chatbot within your web application. And to support employees working from home, you can extend the bot to your collaboration tool of choice. Let’s fight COVID with these cool technologies and make the world a better place. Cheers!
