The AusSpiderBot: Linking Power Automate and Azure’s Custom Vision API


I went to Microsoft Ignite The Tour last week. The true value was in catching up with friends and colleagues I had not seen for a while and seeing presentations on technologies I had heard about but not yet got around to playing with.

One such presentation was by the awesome Amy Kapernick who presented her Quokkabot. Using .Net she linked up WhatsApp and Azure’s Custom Vision API so that anyone could ask for a picture of a quokka or check whether the picture they had was a quokka.

I approached her at the end of the presentation to ask if she had considered doing the same on the Power Platform and she said she had not. Challenge Accepted!!

This is the end result: you can Tweet any image with the hashtag #IsItARedback and my bot will Tweet from @leontribe whether it is a redback or not (being on a free plan, the response can take up to an hour, but it will come). There was literally NO code in its production.

So What is the Custom Vision API?

Custom Vision API is part of Azure’s Cognitive Services. Cognitive Services are Azure’s AI services which can do mind-blowing things. They are friendly and well worth your time.

As you will see in this blog, they are quite easy to set up when you have Power Automate in your corner.

Custom Vision is a deep learning image analyzer, which is a fancy way of saying you can train it to recognise stuff in images. There are plenty of other Cognitive Services, depending on your need, but this is the one my application required.

The Project

Amy comes from Perth where there are quokkas. Sydney does not have anything as cute as quokkas so I selected the redback spider. Truth be told, Perth has redbacks too, but it is much easier for me to distinguish a redback from the other eight-legged critters that inhabit Australian shores than it would be to identify, say, a funnelweb. Please note the recognition limitation is mine and not that of the Custom Vision API.

While Amy used WhatsApp to make the request, there is no standard Connector in Power Automate for WhatsApp so I used Twitter instead. I was already reasonably familiar with Power Automate and Twitter from working on my TwitterBot which helped the decision.

The Power Automate Bit

If you do not know what Power Automate is, it is the new name for Microsoft Flow. The rumour is Microsoft could not secure the name Power Flow so they went with Power Automate just so everything in the Power Platform had ‘Power’ in the title.


The legacy still remains though. To set up a free account and create a flow (which seems to be what they are still going with) you go to flow.microsoft.com. In this case, our flow is relatively simple.

To go through it step by step: our trigger is a Tweet with the hashtag #IsItARedback.

We then loop through the media images linked in the Tweet and feed them to the Custom Vision API.

The Custom Vision API returns probabilities that the image corresponds to one of the Tags we have set up (we will see this a little later).

Looping through the Tags, we check whether the Redback tag has scored a probability of greater than 50% and respond via Twitter accordingly.

The Tweets are constructed such that they respond and show the Tweet they are responding to.
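
Although the flow itself needs no code, here is a rough sketch of the same decision logic in Python to make the steps concrete. The classify_image and reply_to_tweet functions are hypothetical stand-ins for the Custom Vision and Twitter connector steps, not real APIs.

```python
# A rough sketch of the flow's decision logic. classify_image() and
# reply_to_tweet() are hypothetical stand-ins for the Custom Vision and
# Twitter connector steps; Power Automate does all of this without code.

REDBACK_THRESHOLD = 0.5  # respond "yes" when the Redback tag scores above 50%


def classify_image(image_url: str) -> list:
    """Stand-in for the Custom Vision step (see the REST sketch later in the post)."""
    return [{"tagName": "Redback", "probability": 0.92},
            {"tagName": "NotRedback", "probability": 0.08}]


def reply_to_tweet(tweet_id: str, text: str) -> None:
    """Stand-in for the 'Post a tweet' connector step."""
    print(f"Replying to {tweet_id}: {text}")


def handle_tweet(tweet: dict) -> None:
    # Loop through the media images attached to the Tweet
    for image_url in tweet["media_urls"]:
        predictions = classify_image(image_url)
        # Check whether the Redback tag scored a probability greater than 50%
        is_redback = any(
            p["tagName"] == "Redback" and p["probability"] > REDBACK_THRESHOLD
            for p in predictions
        )
        verdict = "That looks like a redback!" if is_redback else "I do not think that is a redback."
        reply_to_tweet(tweet["id"], f"@{tweet['author']} {verdict}")


handle_tweet({"id": "1", "author": "someone", "media_urls": ["https://example.com/spider.jpg"]})
```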

The Custom Vision API Bit

Firstly, head to portal.azure.com. You will need an Azure subscription, but you can get a 12-month trial for free with $200 of credit.

Click the big plus and search for “Custom Vision” to find the service. Hit the Create button and fill in the fields, keeping the default options.

There is a free pricing plan to help preserve those trial credits.

Once complete, two services will be created: the training service and the prediction service.

Click through to the training service (the one which is not the Prediction service), go to Quick Start, click the link through to the Custom Vision Portal and sign in.

Create a new Project and follow the prompts.

The Getting Started wizard will then walk you through the setup.

By the end we have created two Tags: ‘Redback’ and ‘NotRedback’, uploaded at least 15 images for each, linked the images to the Tags and trained the model by hitting the Train button (I did the Quick Train to preserve my free cycle allocation). Do not forget to hit the Publish button on the Performance tab to make your training model/iteration accessible.

The Quick Test button allows you to test from the portal by uploading an image file or providing a URL.

Linking Power Automate and Custom Vision API

The final link in the chain is linking Power Automate to the Custom Vision API. This was the step which took me the longest to figure out. To set up the Connection, you will need:

  • Connection Name: Whatever you like
  • Prediction Key: This can be found by clicking the ‘Prediction URL’ button on the Performance tab.
  • Site URL: This is the data center for your model. This took a bit of detective work: by going to the Custom Vision Prediction API Reference ClassifyImageURL page (thank you Olena Grischenko for the tip!) and seeing how it constructed the Request URL, I could work out the components. As I based my service in the East US center, the value for me is eastus.api.cognitive.microsoft.com

You will then need to populate the values in the flow Step:

  • Project ID: You can either get this from the page URL when in the project in the Custom Vision API portal, or by clicking the Cog icon in the Custom Vision portal at the top right.
  • Published Name: The value that took me the longest to figure out (seriously Microsoft, make it easy for the dev-muggles!). What it should be called is the Iteration Name; the default value is ‘Iteration1’. Trawling through sample code online showed that ‘Iteration Name’ = ‘Published Name’ = ‘Model Name’. A sketch of the underlying REST call, using all of these values, follows below.
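
For the curious, here is a minimal sketch in Python of what the connector is doing under the hood, using the four values above. The v3.0 URL shape shown was what the API reference suggested at the time; the ‘Prediction URL’ button gives the exact path for your own project, so treat this as illustrative rather than gospel.

```python
# A minimal sketch of the REST call behind the Custom Vision connector, using
# the same four values described above. Check the portal's 'Prediction URL'
# button for the exact path for your own project.
import requests

SITE_URL = "eastus.api.cognitive.microsoft.com"   # the data-centre host ('Site URL')
PREDICTION_KEY = "<your-prediction-key>"          # from the 'Prediction URL' button
PROJECT_ID = "<your-project-id>"                  # from the Cog icon or the page URL
PUBLISHED_NAME = "Iteration1"                     # the 'Published Name' (iteration/model name)


def classify_url(image_url: str) -> list:
    """Ask the published model to classify an image by URL and return its tag probabilities."""
    endpoint = (
        f"https://{SITE_URL}/customvision/v3.0/Prediction/"
        f"{PROJECT_ID}/classify/iterations/{PUBLISHED_NAME}/url"
    )
    response = requests.post(
        endpoint,
        headers={"Prediction-Key": PREDICTION_KEY, "Content-Type": "application/json"},
        json={"Url": image_url},
    )
    response.raise_for_status()
    # e.g. [{'tagName': 'Redback', 'probability': 0.97}, {'tagName': 'NotRedback', ...}]
    return response.json()["predictions"]
```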

Conclusions

While this is a fun and trivial example (unless you have just been bitten by a spider and are not sure), it is easy to see there is a lot of potential in this technology. Cognitive Services are already being used to help manage fauna and flora in Australia’s Kakadu National Park, as well as to manage fish populations in Darwin Harbour. I can also see this service bringing semi-automated medical diagnostics to the world, being used for the auto-assessment of parts before they malfunction, and for quality control in mass production. Considering where we have come from, these are very exciting times.

Microsoft Power Platform 2020 Release Wave 1 Plan: The bits which excited me


Microsoft are getting quite organised with their documentation these days and last week they put out their Power Platform ‘roadmap’ for April 2020-September 2020. This is separate to the Dynamics 365 2020 release wave document whose 400+ pages will have to wait for another day.

Here are the bits in Power Platform, going to General Availability, which excited me.

Power BI

  • Drillthrough Buttons: This should make guiding a report user through the report a lot easier. They also are context aware, which is great.
  • Office ribbon for Power BI Desktop: Getting everything consistent makes it much easier for people to get on board with an application
  • Incremental refresh: Only data that has changed will update in a report. Less data consumption, quicker refreshes, less frustration
  • Conditional formatting for totals and subtotals: For exception reporting, this is a fantastic inclusion and I am surprised it was not there already.
  • Being able to render a paginated report in any format, such as PDF or Excel via the API: Very useful for automated report comms/invoicing etc.
  • Copy and paste visuals into other applications: If only Dashboards would follow suit! Interoperability of apps is a key foundation of Microsoft Office and it is great Power BI has embraced it here. I hope others will also come to the table soon.
  • Sub-report support: In the category of “why was it not there in the first place?” we have sub-report support.
  • Datasets larger than 10 GB in Power BI Premium: The whole point of Power BI is to synthesize large data into meaningful reports. The bigger the better as far as I am concerned. The only limit now is the available memory capacity.

Power Apps

  • Deep integration from Azure to Microsoft Teams: The ability to create apps directly in Teams is very exciting to me. Teams starts to move from being a collaboration tool to a truly useful productivity tool.
  • Canvas and model-driven apps run on a single mobile application: Some of us still have scars from the early attempts of taking Dynamics to a mobile device. Users do not know or care whether an app is canvas or model-driven so having them launch from the one place makes a lot of sense.
  • Power BI Embedded component in portal designer: No more need for liquid code to make this happen. Easier to build and manage.
  • Save and Save & Close are back!: With auto save on, the only way to force a save was the tiny disk icon in the footer. The Save and Save & Close buttons are now back in the Command Bar, which certainly makes me a lot happier as I am trained, by the ghosts of Microsoft past, to hit save and hit it often.

Power Automate (what used to be called Microsoft Flow)

  • Copy and paste Actions: Actions can now be copied and pasted. A huge time saver for branching scenarios
  • UI flows: Probably my favorite feature in the release. This is like a macro recorder for flows: record mouse clicks, keyboard strokes and data entry and then automate them. UI flows also come with error handling and are solution aware.
    • Automate web-based applications: Supporting Google Chrome (and Microsoft Edge Chromium), this allows the automation of web-based applications. This is crazy powerful and allows for all sorts of automated testing which may be hard to execute through traditional scripts.
    • Automate Windows applications: Macro capture for Windows applications. Very exciting.
    • Automate on virtual machines: UI flows can be run on virtual machines, including Microsoft Remote Desktop

Power Virtual Agents

These are very new and exciting and allow you to create a chatbot without any code.

  • Add a Power Virtual Agents bot into a Power Apps canvas app: Great for automated help within an app
  • Add images and videos to topics: The bot’s response can now include video and images. With a picture being worth a thousand words it makes sense to make responses more than just text
  • Additional language support: Bots will be able to converse in French, German, Spanish, Italian, Portuguese and Chinese (a specific bot can only handle one of these though)

AI Builder

  • Form processing: Teach it what your form looks like with a few examples and you can automatically extract data for Power Apps or as part of a flow.
  • Object detection: Used for recognizing or counting objects, this has a huge range of applications from checking an employee has work safety gear on through to automated stock taking

Power Platform governance and administration

  • Admin connectors for Power Automate/Power Apps: These will be in General Availability in July 2020 and they literally allow an admin to manage the tools with the same tools, i.e. create flows to manage flows and apps. A great way to ensure an admin is familiar with their management tools.

Common Data Model and data integration

  • SAP ERP connector for Power Apps and Power Automate: I thought this one was already there but obviously not. This allows you to connect to SAP ECC or S/4HANA which is often part of a client’s ecosystem
  • New connectors in Power Query Online: There are quite a few of these but the ones which I want to explore further are: Active Directory and OLEDB

They really have listened!

All through the document I kept seeing notes that a feature was being delivered in response to feedback from the community.

Historically it was not clear that Microsoft considered outside feedback from MVPs or the public in setting priorities for their development. They are very clearly stating this is now part of the process and I applaud them for it.

Conclusions

Innovation in the Power Platform is coming thick and fast and this document proves it. All of the above features are coming into General Availability, although not all straight away, so check the document if there is a specific feature you need. The one I really want to play with is UI flows. For automating legacy applications with low code, this could be an inexpensive way to achieve a lot. I have said it before but it is a very exciting time to be in Business Applications.

Business Applications’ New Architecture Paradigm


Back in BC (Before CDS), the architecture of a Dynamics solution involved (roughly) the following steps:

  • Listen to the client’s business need
  • Work out which modules most closely aligned
  • Minimize the cost of development and maximize the benefits
  • Configure and customize
  • Go live

With the introduction of CDS and the evolution to the Business Applications ecosystem, the steps have changed:

  • Listen to the client’s business need
  • Work out how they buy their Microsoft software licenses
  • Explore all the different ways the business need can be met
  • Work out the license implications of each one
  • Minimize the cost of development and licensing and maximize the benefits
  • Configure and customize
  • Go live

If you are not considering the license implications for your clients’ solutions, it could prove to be a costly mistake.

Licensing Then and Now

Historically (BC) the Dynamics licensing model was simple. You paid a ‘per-user-per-month’ fee and it was all you could eat. This was probably the big difference between Salesforce and Dynamics back in the day. Salesforce has always charged for each module and the incremental add-on of costs was referred to as the “Salesforce Tax” in competitive pitches. With Dynamics the cost model used to be simple but, over time, it has changed to the current component-based pricing, similar to Salesforce’s model.

The Dynamics components of old have been split out as add-on modules to CDS and the Business Applications ecosystem includes Azure services and the Power Platform. We have a wide variety of different license models for each of the components. The cost model is no longer a simple multiplier based on the number of users.

Navigating the New World

One way to consider the new model is to consider whether your customer’s needs will work better with a consumption model or a per-user model. If the solution is to be used by a large number of users but infrequently, a consumption model makes sense. Conversely, if the solution is to be used by a small number of users but they will conduct a high volume of transactions in it every day, a per-user model may make more sense.

Hypothetical Case Study: Internal Catering

Let us say we have the requirement of building a system which allows the request of internal catering for meetings. Users go to an app or web page, specify what they need and the request goes to an internal organizer who sorts out the catering.

In the old days we would likely look to the Customer Service module and use Cases. However, this is charged on a per-user basis. So how many users are we talking about? Let us say 12,000 internal users. That is an expensive application for sandwiches and wraps.

So is there another option? The Business Applications ecosystem provides a wide range of options to solve business problems. PowerApps is also licensed on a per-user basis so this does not help. Virtual Agent may be useful but, to my knowledge, its pricing model had not come out at the time of writing. PowerApps Portal licensing for internal users is tied back to PowerApps licensing, which again leads us to a per-user model.

An option which may work is Forms Pro linked to CDS via Power Automate (the new name for Microsoft Flow), licensed on a per Flow (Automate?) basis. As long as we stay under the limit of 15,000 API requests per day, we are good to go (high number of users, low number of calls).
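
To make the “high number of users, low number of calls” point concrete, here is a back-of-the-envelope sketch. The volumes (50 catering requests a day, five API requests per submission) are invented purely for illustration.

```python
# Back-of-the-envelope sizing for the catering example. The request volumes and
# the per-submission API call count are invented assumptions, for illustration only.
potential_users = 12_000          # staff who *could* order catering
submissions_per_day = 50          # assumed daily catering requests
api_calls_per_submission = 5      # assumed flow actions/API requests per submission
daily_api_limit = 15_000          # the per-day API request allowance mentioned above

daily_api_calls = submissions_per_day * api_calls_per_submission
print(f"{daily_api_calls} of {daily_api_limit} daily API requests used "
      f"({daily_api_calls / daily_api_limit:.1%})")
print(f"A per-user licence would have to cover all {potential_users:,} potential users, "
      "most of whom order catering rarely.")
```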

Instead of using Dynamics 365 for Customer Service and restricting an enterprise-ready customer service module to ordering Danishes, considering licensing forces us to get creative with all the tools in our Business Applications toolbox. It also forces us to think beyond the cost of creation to the cost of ongoing maintenance.

Conclusions

In the old world, licensing was not a significant part of the architectural design. Licensing was simple and for a per-user-per-month fee you had access to what is now CDS and the Sales, Marketing, and Service modules. In the new Business Applications ecosystem, licensing is much more complex and is a vital input into ensuring a solution is delivering true value to its customer, both for the duration of the implementation and beyond go-live.

One of the simpler ways to consider licensing is in terms of the number of users and the activity the solution will generate. This can be weighed against whether a per-user or consumption model will provide the most value. For lots of users and an app which is only used occasionally, a consumption model makes sense. For fewer users and an app which those users will live in, a per-user model may be a better option.

The notion of considering the license implication of a design will set apart the next generation of Business Applications architects from the old Dynamics CRM architects who are catching up. Do not get left behind or, worse, deliver an elegant solution which will bankrupt your client in licensing fees.

How To Become An Expert In Business Applications


A question I often get asked at conferences or via LinkedIn is how I became an expert in Dynamics/Business Applications. The answer is pretty simple. I was on a bootcamp for the Microsoft CRM 1.0 beta and have been tinkering with it ever since, focusing mainly on doing crazy things with Workflow. That bootcamp was back in 2003 and, to show how long ago that was, here is a screenshot of a Total Cost of Ownership (TCO) calculator they provided us, comparing Microsoft CRM with the biggest competitor at the time, Saleslogix (Salesforce was not even a blip on Microsoft’s radar back in 2003).

Everything was on-premise back then so, as an implementer, you also had to know how to set up SQL Server, Exchange and all the other server components Microsoft CRM relied on. We have come a long way. The good news is if you are looking to become an expert in Business Applications today, you can do it much faster than the 16 years it has taken me.

How The Game Has Changed

It used to be the case that, to be an expert in Microsoft/Dynamics CRM/365 you jumped on board with a version of the product and then just incrementally updated your knowledge as the new version came out. The version cycle used to be every 2-3 years, back in the old days, so it was very easy to keep up with all things Sales, Support, and Marketing. Then, to meet the expectations of the SaaS market, Microsoft changed the way they released software. Firstly, around five years ago, they committed to an annual release with a six-monthly mini-release. This slowly evolved into a six-monthly/unscheduled release cycle and now, for the online version of the product, it is a continual release.

How does this change the game? Because it is simply impossible to keep up to date with everything that is happening with Business Applications and the Power Platform. The pace of innovation is too fast. Moreover, much of the knowledge gained over the years is now redundant (in my case Workflows are becoming less and less relevant so it is time to carve out a new niche).

Let me state this very clearly: The MVPs who used to know almost everything there was to know about Microsoft/Dynamics CRM/365 are, at best, experts in one area or module and have a broad understanding of the rest. Business Applications has, for the purposes of understanding all of it, moved beyond the Technological Singularity.

The Opportunity For You

With no one being an expert in all things Business Applications, and new additions coming out literally all the time, anyone can grab hold of an area of the ecosystem and become the expert.

To highlight an example, I am going to pick on a friend of mine, Elaiza Benitez. Now Elaiza is no newcomer to Dynamics; she has been blogging about it since 2014. But, in my opinion, what has catapulted her to fame is her YouTube series #WTF (What The Flow). When I was at UG Summit EMEA earlier this year, Elaiza was mentioned in presentations and people asked me about her, and it was all because of #WTF.

#WTF was only started a year ago and exclusively focuses on tips and tricks for Microsoft Flow. Her recent passion is linking Flic buttons to Flow for creative and practical applications.

Anyone could have dug into the details of Flow and made a name for themselves. The barriers to entry were low because Flow requires practically zero coding, but it was Elaiza who took the plunge and chose to own this piece of Business Application real estate and bring it to YouTube. She was the one who chose to devote the time to understand it and devise ways to make it accessible to the rest of us. Similarly with Flic buttons; connecting a Flic button to Flow requires zero code. The immense value Elaiza brings is in coming up with entertaining, meaningful and practical examples demonstrating the value of this elegant technology and inspiring others to improve the world with it.

OK, enough of this fanboy adoration of Elaiza, what does this mean to you? It means you can do the same. There are so many products being released begging for someone to take them up and show how amazing they are and how they can transform the world.

What Products Are Begging For Experts?

Here are some examples of products in the ecosystem begging for someone to grab them with both hands and show the rest of us how it is done:

  • IoT: The barriers to setting this up are reasonably high, with Azure subscriptions and configuration required, but this means whoever takes this ground will be unassailable.
  • Talent: From what I understand it is not complex and should be fairly easy to get across. It has hooks into both F&O and Dynamics 365 CE (or whatever the new name is for the CRM modules) so it is open to being attacked from both sides of the Business Applications fence.

STOP PRESS: The gracious Megan Walker has pointed out that there is a Talent MVP who reigns as the queen of Talent, this being Malin Donoso Martnes. Apologies for this oversight Malin!

  • Retail: This has received significant investment with the soon-to-be-released Commerce which claims to be everything you need to run a store both online and offline. If I was running my own consultancy and wanted to focus on a solution which I could roll out to companies across the country, this would be it.
  • AI: With the new AI Builder making AI available without a line of code, and the pre-built Insight solutions, this is a large piece of territory which will soon be taken. A few people could focus on specific areas of AI without getting in each other’s way. If you want to become an expert in something Microsoft is investing in heavily and which will provide tremendous value to those who adopt it, AI is for you.
  • Mixed Reality: There is some hardware investment and likely a need for coding in this one but, like IoT, for those who invest, their position will be hard to assail. The biggest problem is understanding how it will help businesses do what they do better. The person who cracks that nut and makes it accessible to the masses will be hailed a hero.
  • Fraud Protection: A lesser known offering in the Dynamics 365 collection. For the person who becomes the expert in this, they will, in my opinion, be able to write their own checks. It is very easy to demand top dollar when you have just saved a company a few million in fraudulent transactions.
  • Azure Cognitive Services: Spinning up an Azure service and using Flow’s Connectors to talk to it is codeless and very simple. There is so much power here and all it is waiting for is the next #WTF for Cognitive Services.

What Is Stopping You?

There is very little preventing you from being an expert in any of the above technologies. All it takes, in many cases, is focus and time. If you are willing to devote a few hours a week to tackling your chosen territory, you will be ahead of 99% of the people out there.

Extending Scrum Without Making It FrAgile


It has been a while since I have written a post. In the interim, I started my Diabetes Blog: The Practical Diabetic. While I am Type 1, there is good information in the articles for anyone who is prediabetic or a different Type, and everything is tagged to make sure the content is relevant. Being a physics geek, I make sure all the information has a basis in science. No half-baked cures on my blog. There are also some interesting technical articles on helping manage the disease. If you know someone in the D-Camp, point them my way.

This article is based on research I did for PowerObjects as part of their Agile Center of Excellence (CoE). PowerObjects have a bunch of CoEs with membership across the globe. We get together weekly and work out best practices and share war stories. I belong to the Agile one. In this case, the research was around the various Agile frameworks out there and their applicability to Dynamics implementations.

As with my Diabetes blog, if the article looks too long, skip to the tl;dr section at the end for a summary.

The Sanctity of Scrum

Arguably the most common Agile framework used for Dynamics implementations is Scrum. The definition of Scrum is a very readable 19-page PDF document. However, The Scrum Guide is far from prescriptive. For example, the words ‘velocity’ and ‘points’ appear precisely zero times, yet these are common elements in many Scrum implementations; The Scrum Guide says precious little about tracking progress. To this end, there is plenty of room for adding creative flavor to Scrum. Creativity is often seen in the format of the retrospective, and in the estimating of effort in Sprint Planning. Another area where we can bring creativity to Scrum is by introducing elements from other Agile frameworks out there.

Agile vs Lean

Software development frameworks come from two main paradigms: Agile and Lean. While Agile focuses on delivery through iterative development, Lean focuses on the process and seeks a continuous stream of productivity and improvement. Agile values the people involved, collaboration, and adaptability, while Lean values the elimination of waste, improved quality and optimization.

Where the two share common ground is in providing efficient and tangible outcomes. Most software development frameworks sit between the philosophies of Agile and Lean.

The Frameworks Of Interest

There are many, many frameworks but for this post I will focus on three: Extreme Programming (XP), Scrum, and Kanban. If you like another framework, by all means embrace it. These three are useful as they cover the spectrum between Agile and Lean and share some compatible elements.

The most Agile is XP, while Kanban is very Lean, being all about the process, with Scrum sitting in the middle. The key points of differentiation for Scrum are its focus on the people involved (both on the consulting side and the customer side) and its ability to accommodate offshore development teams (something increasingly common in Dynamics implementations).

                        Scrum    XP       Kanban
Project Size            All      Small    All
Sprint (weeks)          2-4      2        1
Process Centric         No       No       Yes
People Centric          Yes      Yes      No
Virtual Team Support    Yes      No       Yes
Documentation           Basic    Basic    N/A

Extreme Programming (XP)

Extreme Programming is probably the most famous of the ‘pure Agile’ frameworks. Born out of the rise of the internet and the dot-com boom, it sought an alternative to the traditional waterfall approach, which is better suited to construction projects.

The approach of XP is less about a set of requirements and more about embracing the right values, principles, and practices to achieve the requirements; it focuses on the journey, not the destination.

Designed for pure coding, it is, in my opinion, difficult to embrace completely for Dynamics implementations. However, in terms of its values and in using an incremental development approach, it is closely aligned to Scrum.

Some of the techniques/philosophies used in XP which can benefit a Scrum implementation are:

  • Pair programming: Even if it is configuration of the system, a second pair of eyes can be invaluable for re-evaluating the intent behind a User Story, or offering different approaches to address the problem (there is always more than one way to solve a problem in Dynamics).
  • Test-Driven Development: Determining how a story will be tested is a great way to understand what needs to be configured/coded. It also provides an opportunity for the Product Owner, the test team and the development team to come together to ensure they are aligned on what is to be achieved.
  • Collective Code Ownership: There is no ‘i’ in Extreme and with the development team in Scrum being an amorphous blob, it makes no sense that configuration/code is an individual responsibility.

Kanban

Kanban is all about the process and visualises it through a board covered with tickets, each representing a job (User Story) and the stage of the process it has reached. This board is creatively known as the ‘Kanban board’. The Kanban board reflects continual development and progress; there are no sprints here.

A key element to Kanban is a lack of upfront planning and story-sizing. This means it is very hard to predict what effort or time a project will take to be delivered. Convincing a project sponsor to give the go ahead for a project with no timeline or budget is challenging and this often precludes Kanban as the primary framework for a Dynamics implementation.

Some of the approaches used in Kanban which can be beneficial to Scrum are:

  • Using a Scrum board: The Scrum board differs from a Kanban board in that the Scrum board is reset at the end of every sprint to reflect the new Sprint Backlog. Otherwise, their appearance and function are quite similar.
  • Limits on the number of stories permitted at a given stage: This prevents too many stories piling up at any one stage, e.g. between development and testing. One benefit is that it will highlight resourcing issues if a specific stage is blocked by too many stories. It will also show if stories are not being system tested thoroughly by the development team before being handed over to the test team: if too many are returned for re-work, this will also lead to a blockage.
  • Story-typing: A lot of information can be conveyed on a Kanban board and there is no reason why this cannot be brought across to a Scrum board. A great example is ‘Story-Typing’, the classification of User Stories by the kind of work they represent. Chores (work that needs to be done upfront before actual development can start) and Spikes (research/analysis required to address a User Story) are good examples of story types.

Neil Benson, who I had the privilege of working with on a two-and-a-half-year Agile Dynamics implementation for the University of New South Wales, was a big fan of story-typing. Blocked stories were inverted on the board and we had a healthy pool of Spikes and Chores. We also used pair programming. Neil is quite the fan of mixing it up.

tl;dr

There are quite a few software development frameworks out there and while Scrum is the most popular, the Scrum framework is sufficiently flexible that it can incorporate elements from other frameworks.

Looking at the various Lean and Agile frameworks out there, two which have elements which can be adopted by a Scrum implementation are Extreme Programming (XP) and Kanban. Elements which lend themselves to inclusion are:

  • Pair-Programming
  • Test-Driven Development
  • Collective Code Ownership
  • Using a Scrum board
  • Limiting the number of User Stories at a given stage of the development process
  • Story-Typing

The Evolution of Customer Service From A Call Center to Multi-Channel And Beyond


Starting university in the early nineties gave me a unique vantage point from which to appreciate the modern evolution of technology. The internet in the public domain was still in its infancy. I was one of the first people I knew to have an email address and had to explain what email was to many of my friends. ICQ was five years away, so messaging was done via ‘telnet’, where you ‘dialled’ someone’s IP address to chat.

Browsing was text based with hyperlinks and there was no search engine (AltaVista was three years away). You simply discovered pages of web links through word of mouth.

This was a time when customer service was delivered through three channels: face to face, phone, and fax. We have come a long way.

The Advent of Multi-Channel

[Google Trends graph: “multichannel”]

People do not really talk about multi-channel any more. It was big for customer service and for marketing and, while it could be argued even our pre-internet customer service was multi-channel, my recollection is the term only came into vogue when linked to internet channels.

It was also the beginning of a shift in considering what customer service was for. Before this time, customer service was little more than part of the product/service offering. If a company offered three services, each department was responsible for customer service, and there would be three customer service functions. I recall a prominent American bank at the time having literally a dozen different fax numbers for different divisions (the only reason I remember this is because I once flooded all the fax numbers when the bank was slow at refunding an erroneous monthly charge. Enquiry processing across the bank came to a halt in what was, arguably, a pre-internet Denial of Service attack).

With the introduction of channels like email and online forms came a shift towards considering the customer’s experience. It made sense for the customer to choose the most convenient way to reach the organisation and not the other way around.

The outstanding problem was minimal cross-communication. Multi-channel meant multiple ways for the customer to get service but each channel was still a separate experience. Switching channels often meant starting again and customers were still bounced around departments for more complex issues.

Progression to Omni-Channel

[Google Trends graph: “omnichannel”]

Omni-channel, as you can see from the Google Trends graph, started becoming a thing about five years ago. Thanks to the online revolution, enterprise-level CRM systems became affordable for all. This provided a centralized hub for all enquiries. You could email about an issue, follow up with a phone call, and then go to the company’s physical service counter and all interactions would be recorded in the same system and available at the click of a button.

While multi-channel gave consumers a choice of communication channel, omni-channel took it one step further and ensured a consistent experience or, at least, a consolidated one.

With most CRM systems, a rudimentary omni-channel system can be set up relatively easily. In my last project for a major university, whether the student asked their question face to face, via phone, email, or online form, everything became a Case record in Dynamics. In an omni-channel system, the customer gets to use the channel which makes sense for them and their enquiry. For the company, the channel does not really matter as a centralized CRM system means all enquiries are treated consistently. A true omni-channel system also removes “answer shopping”, common in multi-channel systems.

The Future is Omni-Moment

[Google Trends graph: “omni-moment”]

The core assumption in an omni-channel system is that the customer chooses a channel for an enquiry and sticks with it for the duration of that enquiry. Focussing further on the customer experience, we find the nature of the enquiry may require multiple channels to be engaged as part of the one interaction. Let us consider the example of opening a bank account.

In the multi-channel experience, a customer calls to find out about the procedure. They do not quite get the answer they are after, so they call back to get a different agent. They then visit a bank branch to collect the right forms. They go home to fill in the forms. If they need to clarify something about the form, they either call or revisit the branch. There is no guarantee that the advice they get from these channels will be consistent.

The customer hunts down a notary and has their identification documentation validated. Once completed, the forms are faxed. Finally, once the processing department has informed the local branch that the account is open, the customer returns to the bank branch to provide a signature and collect a bank card.

Every step in the process is an isolated channel, with the customer being expected to bring it all together in what was often a frustrating and time-wasting experience.

In the omni-channel world, the customer goes online to find out about the procedure and there is an online form. If the customer has a question about the form, they can call or browse the web site. As both channels are pulling their information from a centralized knowledge management system, the answers will be consistent (and hopefully comprehensive).

Identification documentation is again notarized and once the form is completed, with notarized documentation attached, the application is processed and, with signatures being a thing of the past, a card is sent in the mail.

In the omni-moment experience, the customer goes online to find out about the procedure. The web site recognizes the intent and provides the option of a chat bot to assist. If the customer’s enquiry cannot be answered by the web site or bot, the interaction is escalated to a human. The agent offers to share their screen and walk the customer through filling in the online form. Using video conferencing, the agent can verify identification on the spot without the need for a notary. Forms are completed and the account is opened immediately, ready for online use. A bank card is again sent in the mail.

As you can see, the seamless integration of people, process and technology makes for a delightful customer experience. A process which took a week in the multi-channel world is completed in half an hour in the omni-moment world.

The Evolution of KPIs

As the way we interact with customers has changed, so too must our KPIs. Here are some classic call center KPIs which I consider irrelevant (or at least very misguided) in the modern customer service center.

Average Handling Time

Even back in the days of call centers, I was not a fan of this measure. It encouraged agents to open a call and immediately hang up to lower the stat. It is focussed on productivity, often at the expense of the customer experience.

If an agent spends 20 minutes assisting one customer to open up a bank account and 30 minutes with another, why is this a problem? If one agent is terse and goes through the form quickly, is this better than someone who actually takes the time to make sure the customer knows what is going on?

Average Time in Queue

There really is no excuse for waiting in a queue on the phone these days. Assuming a customer insists on exclusively using the phone, a call back service should be standard procedure. In an omni-moment world, there should be no queue and all queue measures are irrelevant.

Cost Per Enquiry

It is good to have visibility on costs but this should not be managed at the expense of the customer experience. In the early days of online channels it was realised these were much cheaper to operate than traditional channels. In some cases the customer experience was worsened for the traditional channels to encourage people to go online. This is management in the absence of strategy and is disastrous in the long term.

What is the Purpose of Customer Service?

The ultimate measure of customer service should be customer satisfaction. In my opinion this should be sought directly through surveys rather than assumed through measures such as Average Handling Time (a short call is not necessarily a good call). I can see value in measuring First Call Resolution (as confirmed directly with the customer) as this should be the ultimate goal of customer service. However, it needs to be modified so it covers all channels across the customer experience, not just the phone component (assuming a phone is even involved).

While in a pre-multi-channel world, customer service was seen as little more than a necessary evil for selling a product or service, in an omni-moment world, the minimum standard is having the customer ask no more than once and be satisfied every time they make an enquiry. In fact with machine learning, in many cases, it should be possible to anticipate customer need and frequently achieve ‘ask never’ for existing customers.

Generating Reports For NightScout Data Using Flow, Excel, and OneDrive


A few months ago I talked about extracting data from a MongoDB database for the purposes of generating alerts. Since then I have taken it further and now generate regular reports of my data using the power of Flow, Excel, and OneDrive. As this may be useful to others running NightScout I thought I would share my set up and the discoveries along the way.

The Flow

First of all, I need to extract the data from the MongoDB and send it to a target Excel sheet. To do this we use Flow.


I have set the recurrence to three hours. This strikes a balance between not running too often and blowing my Flow quota, and running often enough to give timely results. At every three hours, the flow runs approximately 240 times a month, which sits comfortably under the limit of 750 flow runs per month.

Next, we use a variable to store the latest DateTime value from our target Excel file.


To populate this variable, we query our target Excel and set the value.


Here we return only one row from Excel, the row with the highest DATE value, and then use this to set the variable.

Once we have this DateTime value we incorporate it into a modified version of the API call we used in the Alert blog.


For this call we bring back up to 100 entries from the MongoDB, restricted to the fields we need, and order them so that if there are more than 100 rows newer than the latest DATE in our target Excel, only the rows immediately after this DateTime are returned. This ensures the query does not mess with the row order when it transfers them to Excel.

My continuous glucose monitor (CGM) feeds a value to the MongoDB every five minutes which means it generates 180/5 = 36 entries every three hours. Therefore 100 is a good setting to keep on top of the additional values generated in MongoDB but sufficiently large that it will be able to catch up if there is a temporary issue with the running of Flow.
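
For anyone who wants to replicate the HTTP step outside of Flow, here is a rough Python equivalent. It uses the shape of NightScout’s public REST API; the flow above talks to the MongoDB host directly, so treat the URL and the parameter names here as assumptions to adapt to whichever endpoint you query.

```python
# A rough sketch of the HTTP step in Python, against NightScout's REST API.
# The site URL is a placeholder and the query parameters are assumptions;
# adapt them to the endpoint you actually use.
import requests

NIGHTSCOUT_URL = "https://your-nightscout-site.example.com"  # hypothetical site URL
latest_date = 1571234567890  # the epoch-millisecond DATE value read from the target Excel table

response = requests.get(
    f"{NIGHTSCOUT_URL}/api/v1/entries.json",
    params={
        "count": 100,                    # bring back at most 100 new entries per run
        "find[date][$gt]": latest_date,  # only entries newer than the latest row already in Excel
    },
)
response.raise_for_status()
new_entries = response.json()              # each entry carries date, dateString, sgv, delta, ...
new_entries.sort(key=lambda e: e["date"])  # oldest first, so Excel keeps its chronological order
print(f"{len(new_entries)} new readings to append to the Excel table")
```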

Once the reply is parsed, we can populate our Excel with the new rows.


One point of note here is that the Flow step requires a Table within the Excel workbook. This is relatively easy to set up. Basically, you add your headers to the sheet, highlight them and select Format as Table from the Styles section of the Home tab.

The resulting table contains the following columns.


The DATE value is an integer representing the DateTime value, but it is a little difficult to read or transform, so we also record the DATESTRING, which is friendlier. Then we have the SGV value, which is the blood glucose level in mg/dL (units only the USA use), and finally we have the DELTA, which is the change in SGV value between reads.
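
If you ever need to read the raw columns outside Excel, a small helper like the one below does the conversion, assuming DATE is the Unix epoch in milliseconds that NightScout records and SGV is in mg/dL.

```python
# A small helper for making the raw columns friendlier, assuming DATE is a Unix
# epoch in milliseconds and SGV is in mg/dL.
from datetime import datetime, timezone


def friendly(entry: dict) -> dict:
    """Convert a raw row into a readable timestamp and an mmol/L reading."""
    return {
        "when": datetime.fromtimestamp(entry["DATE"] / 1000, tz=timezone.utc),  # epoch ms -> datetime
        "mmol_per_L": round(entry["SGV"] / 18.016, 1),                          # mg/dL -> mmol/L
        "delta_mg_dL": entry["DELTA"],
    }


print(friendly({"DATE": 1571234567890, "SGV": 99, "DELTA": 2}))
```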

Once we have captured our data, we can begin reporting on it.

The Report

I discovered relatively quickly that Flow has a size limit for the Excel files it will work with. In the free plan this size limit is 5 MB, which makes it impractical for our purpose. Luckily I had a paid Flow plan via my Office subscription so I moved to this. This plan allowed me to work with Excel files up to 25 MB in size. This worked well. My Excel file has approximately four months of data in it and is 1.6 MB in size. Therefore, I have around five years of data to go before Flow reaches its limit. In five years either Microsoft will have removed this silly limit, I will be using a different technology to analyse my data or they will have found a cure for Type 1 Diabetes (there is a running joke in the diabetes community that the medical professionals have been promising a cure within five years for decades now).

The other trick I used to minimise the size of my target Excel was to house the reporting in a separate file and use a Power Query to reference back to the target file for the data. Using this Power Query, and some Excel formulae to manipulate the data and make it friendlier for reporting, I built my first worksheet.


If you struggle to replicate any of my formulae, please leave a comment and I will reply with the details.

HbA1c Prediction

The HbA1c is an indicator of how ‘sugary’ your blood has been for roughly the last four months. Using our CGM data we can make a prediction of what our HbA1c value is.


There are a few formulae available to do this calculation and my worksheet uses three of them. In the case of my blood results, the models predict 5.3, 5.1, and 5.1, which is well below the target threshold of 6.5, so well done me. I expect this value to slowly increase over time as my pancreas becomes less able to lower my blood sugar levels.
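
For the curious, below are two commonly quoted regression formulae for estimating HbA1c from average glucose, sketched in Python. They are illustrative only and may not be the exact formulae the worksheet uses.

```python
# Two commonly quoted regressions for estimating HbA1c from average glucose.
# Illustrative only; they may not match the exact formulae in the worksheet.

def hba1c_adag(avg_mg_dl: float) -> float:
    """ADAG/Nathan regression: eAG (mg/dL) = 28.7 x HbA1c - 46.7, rearranged for HbA1c (%)."""
    return (avg_mg_dl + 46.7) / 28.7


def hba1c_dcct(avg_mg_dl: float) -> float:
    """Older DCCT-era regression: mean BG (mg/dL) = 35.6 x HbA1c - 77.3."""
    return (avg_mg_dl + 77.3) / 35.6


average_sgv = 105  # hypothetical four-month average in mg/dL
print(round(hba1c_adag(average_sgv), 1), round(hba1c_dcct(average_sgv), 1))  # ~5.3 and ~5.1
```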

Distance Report

The Distance Report is something that can only really be generated using CGM data with a regular time interval between measurements (in our case every five minutes). The Distance Report shows the total ‘distance’ travelled by the blood glucose values, i.e. the sum of the absolute DELTA values, and is an alternative to the standard deviation as a measure of variability.
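
Outside Excel, the same calculation is a one-liner with pandas. A minimal sketch, assuming the table has been loaded into a DataFrame with DATESTRING and DELTA columns:

```python
import pandas as pd

# A tiny illustrative frame; in practice this would be the rows pulled from the Excel table.
df = pd.DataFrame({
    "DATESTRING": ["2019-06-01T00:00", "2019-06-01T00:05", "2019-07-01T00:00"],
    "DELTA": [3, -5, 2],
})
when = pd.to_datetime(df["DATESTRING"])
distance_per_month = df["DELTA"].abs().groupby(when.dt.to_period("M")).sum()
print(distance_per_month)  # 2019-06: 8, 2019-07: 2
```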


For this report we only have data for the last four months as this is how long I have been using a CGM. We can see that the distance travelled each month is roughly the same. As time goes on we would expect this to increase as the pancreas becomes weaker and blood glucose levels (BGLs) start to vary more.

BGL Report

This was the first report I created and it reviews literally all my BGL measures (around 600 manual finger pricks and then the CGM data).


In the top left we have literally every value recorded and when it was recorded. The CGM data can be seen as the ‘thickening’ of the values towards the right hand side of this graph.

In the top right we have the distribution graph for the data showing the spread of results.

The bottom left shows all the data points but strips out the Date value, leaving only the Time value. This has the effect of showing the data over a 24 hour period.

Finally, in the bottom right, we have a range of filters to assist with analysing the data.

For example, comparing the distribution curves for 2017, 2018, and 2019, we see they are centred around 5.4, 5.5, and 6.0 respectively. In other words, the curve appears to be moving to the right over time. This is consistent with a weakening pancreas (or me being more relaxed about carbs).

Range Report

The Range Report takes the average and standard deviation of the data for each hour of the day, looking for where in the day the BGL values are highest and vary the most.
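
As a rough guide to what sits behind this report, the per-hour statistics can be sketched with pandas as below (the Distribution Report in the next section is the same idea grouped by month instead of hour). This assumes the data is in a DataFrame with DATESTRING and SGV columns.

```python
import pandas as pd

# A tiny illustrative frame; in practice this would be the full CGM history.
df = pd.DataFrame({
    "DATESTRING": ["2019-06-01T07:05", "2019-06-01T07:10", "2019-06-01T22:00"],
    "SGV": [95, 102, 140],
})
hour = pd.to_datetime(df["DATESTRING"]).dt.hour
per_hour = df["SGV"].groupby(hour).agg(["mean", "std"])  # mean and standard deviation per hour of day
print(per_hour)
```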


The graphs are relatively flat with a slight increase towards the end of the day. This is likely the result of dinner (generally the largest and most variable meal of the day and therefore the meal with the most impact on glucose levels) and late night snacking (which will never have a positive effect on BGLs). Again we have a filter, in this case a timeline, to help with our analysis.

Distribution Report

The Distribution Report does a similar analysis to the Range Report, but per month rather than per hour.


The trendlines suggest the numbers are relatively flat (average BGL around 6 with a standard deviation of 1). Both are expected to increase over time as the average BGL and its variability rise.

Displaying the Data to the Health Team

With the Excel files sitting in OneDrive, you simply right-click the file to generate a link for sharing a read-only version with health care professionals. In my case I also use bit.ly to make the link friendlier. While it is a little twitchy, the shared view works reasonably well across various form factors and browsers.

Conclusions

Flow opens up a raft of opportunities for using my data whether it be alerts, analysis to maintain my health or making it readily available to my health care team. A few years ago this kind of set up would have taken weeks of coding, if it was possible at all. Today, it requires zero code and costs almost nothing. If this kind of set up could help you or someone you know, have a tinker, it really is straightforward to set up.