The Voice Channel Rises as the Call Center Falls


Microsoft have formally announced the release of their VoiceBot as part of their low code/no code Power Platform at Ignite 2021. For me, this is very exciting although, conceptually, it is quite simple. Microsoft’s configurable text-based chatbot, in the form of Power Virtual Agents, has been around for a while now and this new VoiceBot takes that foundation and links it to Azure Cognitive Services to convert text to speech and vice versa. Behind the scenes it is a remarkable achievement. We literally have a configurable bot capable of generating and processing natural speech and interacting with a human in real time.

So what does this technological achievement mean and why do I claim it is the death of the call center?

Why We All Hate Call Centers

No one gets excited about talking on the phone to a call center. There is rarely anything pleasant about the experience. Let us consider a typical scenario for a customer.

Finding the Phone Number

We look up the call center number for the organization we are trying to reach. Often the number is purposely buried in some dark corner of the company’s web site because they want the customer to choose practically any other channel to answer their enquiry. Why? Because speaking to a human on the phone is very expensive.

Forrester looked at the cost of different channels over ten years ago and found call center support cost $6–12 per contact compared to, say, web self-service, which cost roughly one hundredth of that. No wonder companies make it difficult to call.

Wading Through the IVR Swamp

Forrester’s figures give us an indication of one of the reasons we are immediately encouraged to press keys before speaking. If a customer’s question can be answered through IVR/DTMF touchtones and automated recordings, it is still at least 20 times cheaper than speaking to a human. Another reason for IVR use is the company wants to filter out the high-value customers from the low-value ones. For example, if you are looking to make a purchase, the company will not want to keep you waiting compared to, say, a support call where the customer is a captive audience. Whatever the reason, assuming our call needs that human touch, we need to get through the IVR obstacle course before we are permitted to speak to a human.

On Hold, But Your Call is Important to Us

Next comes the wait and that music. The smarter companies offer a call-back service but, certainly in my experience, these are the exception, not the rule. The lazy ones, again, encourage you to abandon all hope and return to the web site for answers. Why the wait? To limit costs, a balance is struck between how long a customer is predicted to tolerate waiting and how many call center staff the company is willing to spend money on; the more staff, the higher the salary costs and the more expensive the service.

The Language Barrier and a Lack of Localization

To reduce costs, many companies outsource their call center overseas. This means, almost by definition, the call center will be populated with people for whom English is a second language. The language aspect may cause difficulty although, in my experience, things have improved on this front a lot over the years; the days of having to spell “Leon” are mostly behind me unless it is a company that has really gone cheap on their customer service.

Another consequence, which is harder to overcome, is a lack of localization. This means clarification questions are asked which are unnecessary for a local call center. For example, pretty much every Australian knows where the “Gold Coast” is but an overseas operator may still ask for the state.

Assuming these aspects are overcome, the result should be that the customer’s issue is resolved, hopefully on the first attempt.

How the VoiceBot Helps

With a VoiceBot, almost all of these pain points are removed. We see from Forrester’s figures that, over ten years ago, a virtual agent was ten times cheaper than a human. I am not sure what constituted a virtual agent in 2009 (a primitive chatbot perhaps?) but a coded chatbot would not have been cheap so I think it is reasonable to expect a configurable Power Virtual Agent will be the same cost or cheaper and therefore a compelling economic alternative to a human call center agent.

Cheaper means many of the concessions made above, at the expense of the customer experience, are no longer necessary.

First of all, we do not need to hide the phone number and encourage other channels. The phone number can be as prominent as the web site text-based chatbot which, in all likelihood, will be running on the same configured engine as our new VoiceBot. The customer could use the chatbot and get the same results but the decision is the customer’s, as it should be.

The IVR swamp can be drained. Microsoft’s chatbot is IVR-aware but I think this will become less relevant when the customer can simply say what they would otherwise type and be perfectly understood.

The waiting and listening to Muzak will also disappear because scaling an army of VoiceBots is a lot more affordable than running a call center populated by humans.

Issues of language and localization should also diminish as VoiceBots become more sophisticated. While in the early days of voice recognition my (mostly) Australian accent proved troublesome, it is not the case today and localization, in many cases, will be a Google/Bing search away. Alexa, as an example of how things have progressed, is now conversant in Australian slang.

No Longer a Need for Humans?

Of course, as anyone with a bot in their home will tell you, VoiceBots are unlikely to be perfect for quite a while and, where the VoiceBot ends there will still need to be a human waiting. However, setting up a call back service will be trivial and, given Power Virtual Agents can be retrained on previous encounters to improve intent recognition, I believe the need for humans will significantly diminish.

If we think of a traditional technical support setup, Level 1 support (agents with limited technical knowledge following scripts) will disappear the quickest. Any script a human can follow, a bot can as well. Level 2 requires in-depth knowledge of the product which is a product manual away. While a human struggles to sort through large volumes of text quickly, this is trivial for a bot. So, as I see it, in the short term, Level 1 and some fraction of Level 2 will be the easiest to replace, significantly reducing the call center headcount and potentially bringing many call centers back onshore, populated with a handful of deep technical experts.

The Ultimate Outcome

The biggest win though is the choice of channel is put back in the hands of the customer, rather than being dictated and compromised by economic considerations. If a customer chooses not to engage with the VoiceBot, they can request an escalation to a human straight away, although I think this will become less frequent as people learn that the bots can solve an increasing range of issues. The customer regains power to control their experience and the company is not compromised in offering it. Both the company and the customer win.

What Jumps Out At Me From Dynamics 365: 2021 Release Wave 2


A new job and a new role (in presales) means a bit more time to get “dirty with the tech” and wading through the 400+ pages of the Wave 2 Release for Dynamics 365 is a good place to start.

I did a review for 2020 Wave 1 in February last year and one thing which has changed from then is the consolidation into one PDF. There is no longer a distinct guide for Power Platform vs Dynamics and this is, of course, indicative of the consolidation into one cohesive ecosystem/platform. Just as CRM and ERP are no longer divided but simply a collection of “First Party Apps”, we now have Power Platform joining the gathering to form one cohesive Business Applications Platform.

This is the future so expect to hear people (especially if they work for Microsoft) talk more about Business Applications (or, if they are spelling it out “Dynamics 365 and the Power Platform”) and less about CE, CRM, and ERP. All your base (products) are belong to BusApps.

So I will now read the guide as I write and call out the sparkly bits which excite me and how they might be important to you, your Business Applications implementation and future strategy with the platform. I will focus on Public Preview (PP) and General Availability (GA) features. Where I do not know a First Party App well enough (or nothing excites me) I will leave it out.

Marketing

Create email content easily and efficiently with AI-based content ideas (PP Oct 2021)

I see adverts online for this kind of thing. Basically, this is AI recommending marketing copy and, from what I hear, the robots do a pretty good job. It generates content snippets based on key points, which the user selects and massages as needed.

More and more, Microsoft are weaving AI magic into Dynamics First Party Apps and this is a great example. This is where the real value of the First Party Apps resides, in my opinion: the linking of Model-Driven Apps with Azure AI to provide an unassailable competitive advantage to Microsoft’s offerings and also to the organisations which employ the technology.

Deliver rich customer experiences across Dynamics 365, Office and other apps by augmenting customer journeys with Power Automate (PP Nov 2021)

Imagine adjusting a customer journey based on the weather, or based on local traffic conditions. This is now possible with the incorporation of Power Automate Flows into customer journeys. A lot of possibilities open up with this.

Create segments for leads and custom entities in the new segmentation builder (PP Dec 2021)

No longer limited to Contacts, we can now use the segmentation builder to create segments for Leads and other “person-like” tables (entities).

Sales

Lead routing (GA Jan 2022)

Similar to Case routing in Customer Service, this routes Leads based on a rules engine. Historically, this is something we have built using Workflows or Flows (here is one I built ten years ago). Rules can be based on Lead segment, Lead attributes or seller attributes, and Leads can be assigned round robin or by load balancing. Combined with the new data hygiene features (GA Jan 2022), this will be a huge boost for organizations which generate opportunities from large Lead lists.

A shedload of Teams integration features (mostly GA Jan 2022)

This blog is going to be long enough as it is so I will not list all the “do this Dynamics thing from Teams” and “do this Teams thing from Dynamics” but suffice to say Microsoft are bringing these products together in a big way. Hint: Expect lots of announcements at Ignite.

Service

Modern control for subject entity (GA Oct 2021)

The Subject tree has not really changed since the days of Microsoft CRM v4 (thank you SeeLogic for this trip down memory lane).

Thankfully, it has now had an upgrade and looks a lot like the asset location tree for those familiar with Field Service. Thank you to Nishant Rana for this review of the new feature and this handy screenshot.

Lots of Omnichannel Voice Features (GA Nov 2021)

Omnichannel is moving towards being a proper call center solution. Outbound calls will be supported, using Azure Communication Services, and Agents will be able to put calls on hold, consult other agents or transfer the call to another agent. Azure Communication Services is now Dynamics’ in-built voice provider, as opposed to before, when a third-party telephony service needed to be stitched in.

Call recording, call transcripts and sentiment analysis, along with reporting analytics will also be available. Again, Microsoft is using their AI services to make their First Party Apps a bit more magical.

Supervisors can also shadow calls and review the live transcript. If required, they can also participate in the call to keep it on course.

Intelligent voice bot via Power Virtual Agents and Microsoft Bot Framework (GA Nov 2021)

I am very excited for this. Power Virtual Agents is a configurable text chatbot and pretty great. This feature gives the chatbot a voice. The bot can answer questions 24/7 and, when it fails the Turing Test, can hand over to a human with the full transcript and context.

Bring your own data to timeline (GA Oct 2021)

This is a bit of a dark horse of a feature but I can see it being immensely useful. In essence, external data can be exposed on the Timeline (the activity timeline box you see on Accounts, Contacts, etc.) via virtual entities. So, for example, financial time-relevant data could be exposed from Dynamics 365 Finance on the timeline such as when payments were made or when they are due. Similarly, if you could figure out a way to bring the data in as a virtual entity, you could expose a Contact’s LinkedIn job history on the timeline. Lots of opportunities for this one.

Field Service

Enable customers to schedule service visits with a simple web experience (GA Oct 2021)

I assume this is the self-scheduling feature which has been previewed as part of the Microsoft Cloud for Healthcare.

[Image: the welcome screen of the Contoso Healthcare app on a mobile phone and the screen to schedule a new appointment on a tablet]

I am very happy to see this in General Availability as creating a self-service scheduling feature for Power Apps Portal used to involve a lot of code and a prayer. Dynamics 365 Field Service resource setup is needed to match customers to technicians but it is a small price to pay for such a useful feature.

Finance

Create collections activities based on payment predictions (GA Oct 2021)

This combines the new payment prediction feature with automated collection activity creation. This allows for the optimisation of the efforts to chase payments and potentially could be used to increase cashflow by chasing repeat offenders early.

Forecast bank balance and treasurer workspace (GA Oct 2021)

Microsoft continues to bring predictive and forecasting capabilities into Finance. This feature allows for cash flow forecasting so the business knows how much cash it will have on hand and when, making the future allocation of funds much easier and more reliable.

Combined with the new Treasurer Workspace, which allows for forecast snapshots for comparison to actuals, businesses are getting some seriously powerful tools for managing their bank balances.

Intelligent budget proposal (GA Oct 2021)

Creating a budget at any level of an organisation is a tedious manual process but it does not need to be. With historical data, this feature puts together a template budget, based on historical spending which can be refined, based on the upcoming needs of the organisation. This feature alone will save organisations a fortune in hours regularly wasted chasing up numbers and making “best guesses”.

Commerce

I simply do not know Commerce well enough to know what is exciting and what is not but the segmentation based on location, device type etc. looks interesting, as does redirection based on geolocation.

Project Operations and Human Resources

Obviously, it is easy to see where Microsoft are making their investments by the sheer volume of features added to a First Party App in a given release. For example, Service had close to 40 pages of new features in the release (10% of the entire document). In contrast, Project Operations and Human Resources have only a few pages. I am not saying these products are going anywhere but either they are already too perfect to improve or they are not at the top of Microsoft’s list for attention. Another litmus test for Microsoft’s focus is the number of sessions devoted to a product at events like Microsoft Ignite. My guess is Project Operations and Human Resources will not have too many sessions.

Guides and Remote Assist

Just a handful of pages here as well which is a pity because this is really exciting technology. Hopefully, there will be more to come in the future.

Power Apps

Intelligent authoring experience with Power Apps Studio (GA Oct 2021)

A while ago I saw a demonstration where Microsoft had trained an AI to write code using the entire public Github repository. Effectively, you wrote the comment and the AI built the code around it. It was very impressive. This capability has now come to Power Apps, with the ability to write natural language and have Power Apps Studio generate candidate code for the author. It is a great way to save time and teach novice coders what they should be typing.

You can also provide an example for formatting and Power Apps Studio will create the Power Fx code to enforce this format.

Relevance Search

It would not be a release without a name change. Relevance Search is now Dataverse Search.

Reinvented maker experience for configuring model-driven apps for offline use (GA Dec 2021)

This is very cool. Previously, to enable a model-driven app for offline use you had to activate offline for each table used by the app. Now we can enable it at the app level. One toggle and all relevant tables become offline capable.

Manage everything about solutions and tables in a modern way (GA Oct 2021)

No more Switch to Classic!! The maker portal is getting parity with the classic experience. Goodbye solution explorer, hello new fully-featured maker portal.

Microsoft Dataverse

Microsoft Dataverse search can search through file data type (GA Oct 2021)

The one search to rule them all can now search through files stored in Dataverse, much like the SharePoint Enterprise Search of old. This is great and will be really useful for finding records based on attached files.

Microsoft Dataverse data archival (PP Mar 2022)

This is very exciting but still some time away. Arguably a flaw in Dataverse is the lack of proper archiving. In effect, to meet an organisation’s archiving policy, it must be set up with integration and code or some very, very clever Power Automate Flows. If Microsoft build an archiving engine into Dataverse this will save a lot of development and make answering RFPs which ask about archiving a lot easier.

Delete and remove users with disabled status (GA Oct 2021)

I expect quite a few Dynamics administrators will shed tears of joy over this one. No longer are we stuck with useless disabled users in Dataverse. Users can be purged along with the historical records associated with them.

Conclusions

As usual, there is a wealth of innovation in the new release, even more AI integration and the occasional patching of a hole of missing functionality which probably should never have been there in the first place. Overall I am impressed with what has been produced and look forward to the additional enhancements to be announced at Ignite.

Breaking Modern Encryption With a Toilet Roll: An Introduction to Quantum Computing


Thanks to COVID-19 virtualising the Microsoft Build conference this week, I got to attend it for the first time. There were many great talks but the ones of particular interest to me were on quantum computing. Microsoft is now entering the world of quantum computing with their Q-sharp programming language. We may not have commercially useful quantum computers yet but, when we do, Microsoft plans to have the tools ready to make use of them.

Inspired by those presentations, this blog will explain why quantum computing is useful, a subject which is still deeply misunderstood by many.

My Interest in Quantum Computing

My background is a little unusual in that my education was originally in quantum physics. I even published a physics paper with my PhD supervisor and a fellow researcher 25 or so years ago. One benefit of that unfinished PhD was being exposed to exciting developments in quantum computing and quantum encryption. One of those developments was the invention of Shor’s Algorithm in 1994 (just two years before I put out that physics paper). Shor’s Algorithm sent waves through the academic community because it showed that, in theory, a quantum computer could break modern encryption. If a sufficiently powerful quantum computer could be created, no encryption would be safe. Arguably, it was Shor’s Algorithm and its implications that led to the commercial funding of the development of quantum computers from that time until now. Even though that was 25 years ago and there has been billions of dollars of investment since then, quantum computers still have a long way to go before they can crack modern encryption.

Modern Encryption

One would expect that the encryption methods that protect our secrets are based on some deep mathematical concepts, inaccessible to all but mathematics professors, but this is not true. A lot of modern encryption is based on one simple concept: it is much easier to multiply two numbers together to form a bigger number than to take the bigger number and work out the two numbers used to make it (called factors).

For example, we know that 3 multiplied by 5 is 15 and, because most of us know our times tables, we can easily divine that the factors of 15 are 3 and 5. However, not as many of us can immediately reason that 221 is the product of 13 and 17. Scale this up and you have a system which can readily encrypt secrets but cannot be readily broken.
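
To make the asymmetry concrete, here is a minimal Python sketch (an illustration only, not a real cryptosystem): multiplying is a single step, while recovering the factors is a search.

    # A toy illustration of the asymmetry behind factoring-based encryption:
    # multiplying two primes is trivial, recovering them is a search.

    def multiply(p, q):
        return p * q          # one step, however big the numbers

    def factor(n):
        """Brute-force trial division: the work grows with the size of n."""
        for candidate in range(2, int(n ** 0.5) + 1):
            if n % candidate == 0:
                return candidate, n // candidate
        return None           # n is prime

    print(multiply(13, 17))   # 221 - instant
    print(factor(221))        # (13, 17) - found only by searching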

Factoring with a Toilet Roll

Is there a way we can try lots of potential solutions at once to find the factors of a number? One way is with resonances in a tube. We know that if we blow on a pan pipe, we hear a note. This note is made up of the resonant frequencies of the pan pipe’s tube.

The physics of tube resonance is well understood.

[Image: tube resonance diagram, from http://www.ctgclean.com]

So, if we have a tube of length 221/2 mm = 110.5mm (about the size of a toilet roll), this will resonate with tones of wavelength 13mm and 17mm, among others (use inches if you prefer, it does not really matter, although an 11 inch tube would be closer to a kitchen roll).

Now comes the clever part. Let us construct a sound using a synthesizer made up of tones of wavelength 1mm, 2mm, 3mm, and so on. We then play the sound through the tube and identify which tones resonate.

[Image: a Moog Grandmother synthesizer]

Unless we have pitch perfect hearing we might need some help identifying the wavelengths of the tones which resonate. We can do this with a spectrum analyzer. If you have ever seen a car stereo from the nineties you will be familiar with a spectrum analyzer. It looks like this:

[Image: a spectrum analyzer display]

Using a clever piece of mathematics called a Fourier transform, the spectrum analyzer takes the sound being produced by the toilet roll and breaks it up into its component tones. The resonating tones will be louder and appear taller on the spectrum display.

Once we identify these resonant tones, we can convert them back to numbers and we have our factors. The algorithm looks something like this.
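
As a rough sketch (my own simulation in Python, standing in for the cardboard-and-synthesizer version), the whole procedure boils down to testing which wavelengths fit the tube:

    N = 221                       # the number we want to factor
    tube_length = N / 2           # 110.5 "mm" - our toilet roll

    # "Play" tones of wavelength 1mm, 2mm, 3mm, ... through the tube and keep
    # the ones that resonate, i.e. where a whole number of half-wavelengths
    # fits the tube exactly.
    resonant = [w for w in range(2, N)
                if (2 * tube_length) % w == 0]    # equivalent to: N % w == 0

    print(resonant)               # [13, 17] - the factors of 221

In the real experiment it is the Fourier transform in the spectrum analyzer that picks those resonant wavelengths out of the noise; in a simulation we can simply test for divisibility.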

So what stops us pulling out a Moog Synthesizer, a toilet roll, an old car stereo, and unlocking the world’s secrets? The numbers we need to factor in modern encryption are really long, i.e. a few hundred digits. Using millimetres to define our wavelengths, we would need a tube longer than the width of the observable universe to crack them. That is a lot of toilet paper!

Shor’s Algorithm

Shor resolves the problem by abandoning the toilet roll for a cleverly constructed mathematical function, and uses a quantum superposition instead of our synthesized wave. Otherwise, the process parallels our own.

While the mathematics is complex, the idea is very similar to ours. We convert the problem to something we can work with, throw multiple possible solutions at it at once in such a way that the actual solutions separate themselves out, we identify them using Fourier, and check they work.
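
For the curious, here is a minimal classical sketch (my own illustration, not Shor’s actual quantum circuit) of the reduction: once you can find the period of a^x mod N, the factors drop out. The period-finding loop below is the only step a quantum computer actually accelerates.

    from math import gcd
    from random import randrange

    def shor_reduction_sketch(N):
        """Classical walk-through of Shor's reduction; only the period search
        is what a quantum computer (via the quantum Fourier transform) speeds up."""
        while True:
            a = randrange(2, N)
            common = gcd(a, N)
            if common != 1:
                return common, N // common        # lucky guess already shares a factor
            # Find the period r of f(x) = a^x mod N (brute force here).
            r = 1
            while pow(a, r, N) != 1:
                r += 1
            if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
                p = gcd(pow(a, r // 2, N) - 1, N)
                return p, N // p
            # otherwise the loop picks another a and tries again

    print(shor_reduction_sketch(221))             # e.g. (13, 17)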

Is Modern Encryption Dead?

The good news is we still have lots of time ahead of us before we need to overhaul modern encryption. While RSA encryption relies on the factoring problem described and can be tackled by Shor’s Algorithm, other encryption techniques, such as AES, do not. Even if we created a sufficiently powerful quantum computer, AES encryption would remain strong.

This also leads to the second reason why modern encryption remains unchallenged; it is really hard to create a stable quantum computer. To date, the largest number factored by a quantum computer using Shor’s Algorithm is 291,311. In essence, to challenge modern cryptography, we need a quantum computer thousands of times more powerful than the best machine today and progress is slow.

So Why Are Quantum Computers Useful?

It may seem we have invented the mathematical equivalent of a Rube Goldberg machine, but the fact is this approach of throwing a spectrum of quantum states at a quantum toilet roll and seeing what comes out is much quicker than trying to crack the code with a normal computer. While it may take a normal computer more than the age of the universe to crack this type of encryption, a quantum computer of sufficient size can do it in hours. For encryption, there are still significant hurdles but it speaks to the potential of quantum computing.

The key here is quantum computers allow us to answer questions differently so problems, like this one, where there are lots of potential answers, can be tackled much more efficiently than with a classical (non-quantum) computer. It is for this reason that optimization problems, such as traffic routing, or delivery distribution lend themselves well to quantum computer algorithms.

Chemistry has its foundation in quantum mechanics but anything more complex than the hydrogen atom requires computer simulation to predict. To simulate and design novel molecules for drug manufacturing, it makes sense to use a computer rooted in a quantum world. While projects like Folding@home attempt to tackle the problem by stitching together a vast array of classical computers through the internet, a quantum computer could revolutionize the approach and rapidly accelerate the discovery of elusive cures.

There are many applications waiting for quantum computers to become a reality but the fields are green and there is still much to be discovered. Even today, quantum algorithms running on simulated quantum computers, while not providing speed advantages, are proving superior to their classical counterparts and worth implementing. If you have problems which are computationally intensive, it may be worth considering quantum computing for the task.

The AusSpiderBot: Linking Power Automate and Azure’s Custom Vision API


I went to Microsoft Ignite The Tour last week. The true value was in catching up with friends and colleagues I have not seen for a while and seeing presentations on the technologies I have heard about but not yet got around to playing with.

One such presentation was by the awesome Amy Kapernick who presented her Quokkabot. Using .Net she linked up WhatsApp and Azure’s Custom Vision API so that anyone could ask for a picture of a quokka or check whether the picture they had was a quokka.

I approached her at the end of the presentation to ask if she had considered doing the same on the Power Platform and she said she had not. Challenge Accepted!!

This is the end result; you can Tweet any image with the hashtag #IsItARedback and my bot will Tweet on @leontribe whether it is or not (being on a free plan, the response can take up to an hour but it will come). There was literally NO code in its production. It looks something like this:

So What is the Custom Vision API?

Custom Vision API is part of Azure’s Cognitive Services. Cognitive Services are Azure’s AI services which can do mind-blowing things. They are friendly and well worth your time.

As you will see in this blog, they are quite easy to set up when you have Power Automate in your corner.

Custom Vision is a deep learning image analyzer which is a fancy way of saying you can train it to recognise stuff in images. There are plenty of other services, depending on your need but this is the one I needed for my application.

The Project

Amy comes from Perth where there are quokkas. Sydney does not have anything as cute as quokkas so I selected the redback spider. Truth be told, Perth has redbacks too, but it is much easier for me to tell a redback apart from the other eight-legged critters that inhabit Australian shores than it is to tell apart, say, a funnelweb. Please note the recognition limitation is mine and not that of the Custom Vision API.

While Amy used WhatsApp to make the request, there is no standard Connector in Power Automate for WhatsApp so I used Twitter instead. I was already reasonably familiar with Power Automate and Twitter from working on my TwitterBot which helped the decision.

The Power Automate Bit

If you do not know what Power Automate is, it is the new name for Microsoft Flow. The rumour is Microsoft could not secure the name Power Flow so they went with Power Automate just so everything in the Power Platform had ‘Power’ in the title.


The legacy still remains though. To set up a free account and create a flow (which seems to be what they are still going with) you go to flow.microsoft.com. In this case, our flow is relatively simple.

To go through it step by step in graphics with larger text, our trigger is a Tweet with the hashtag #IsItARedback.

We then loop through the media images linked in the Tweet and feed them to the Custom Vision API.

The Custom Vision API returns probabilities that the image corresponds to one of the Tags we have set up (we will see this a little later).

Looping through the Tags, we check whether the Redback tag has scored a probability of greater than 50% and respond via Twitter accordingly.

The Tweets are constructed such that they respond and show the Tweet they are responding to.

The Custom Vision API Bit

Firstly, head to portal.azure.com. You will need an Azure subscription but you can get a 12-month trial for free with $200 of credit.

Click the big plus and search for “Custom Vision” to find the service. Hit the Create button and fill in the fields, keeping the default options.

There is a free pricing plan to help preserve those trial credits.

Once complete, two services will be created: the training service and the prediction service.

Click through to the one which is not the Prediction service, go to Quick Start and click the link through to the Custom Vision Portal and sign in.

Create a new Project and follow the prompts.

The Getting Started wizard will then walk you through the setup.

By the end we have created two Tags: ‘Redback’ and ‘NotRedback’, uploaded at least 15 images for each, linked the images to the Tags and trained the model by hitting the Train button (I did the Quick Train to preserve my free cycle allocation). Do not forget to hit the Publish button on the Performance tab to make your training model/iteration accessible.

The Quick Test button allows you to test from the portal uploading an image file or providing a URL.

Linking Power Automate and Custom Vision API

The final link in the chain is linking Power Automate to the Custom Vision API. This was the step which took me the longest to figure out. To set up the Connection, you will need:

  • Connection Name: Whatever you like
  • Prediction Key: This can be found by clicking the ‘Prediction URL’ button on the Performance tab.
  • Site URL: This is the data center for your model. This took a bit of detective work: by going to the Custom Vision Prediction API Reference ClassifyImageURL page (thank you Olena Grischenko for the tip!) and seeing how it constructed the Request URL, I could work out the components. For me, as I based my service out of the East US center, the value is eastus.api.cognitive.microsoft.com

You will then need to populate the values in the flow Step:

  • Project ID: You can either get this from the page URL when in the project in the Custom Vision API portal, or by clicking the Cog icon in the Custom Vision portal at the top right.
  • Published Name: The one value that took me the longest to figure out (seriously Microsoft, make it easy for the dev-muggles!). What it should be called is Iteration Name. The default value is ‘Iteration1’. In this case, trawling through sample code online showed that ‘Iteration Name’ = ‘Published Name’ = ‘Model Name’. The sketch below shows where each of these values ends up.
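
To show how those values hang together, here is a rough Python sketch of the same prediction call the Power Automate Connector makes under the hood. The endpoint shape reflects my understanding of version 3.0 of the Custom Vision Prediction API at the time of writing, and the key, IDs and image URL below are placeholders, not real values.

    import requests

    # Placeholders - substitute your own values from the Custom Vision portal.
    prediction_key = "<your-prediction-key>"           # Performance tab > Prediction URL
    site_url = "eastus.api.cognitive.microsoft.com"    # data center hosting the model
    project_id = "<your-project-guid>"                 # from the Cog icon / page URL
    published_name = "Iteration1"                      # the published iteration name

    image_url = "https://example.com/maybe-a-redback.jpg"

    response = requests.post(
        f"https://{site_url}/customvision/v3.0/Prediction/{project_id}"
        f"/classify/iterations/{published_name}/url",
        headers={"Prediction-Key": prediction_key, "Content-Type": "application/json"},
        json={"Url": image_url},
    )

    # The response lists each Tag with a probability; mirror the flow's 50% check.
    for prediction in response.json()["predictions"]:
        if prediction["tagName"] == "Redback":
            print("It's a redback!" if prediction["probability"] > 0.5
                  else "Not a redback.")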

Conclusions

While this is a fun and trivial example (unless you have just been bitten by a spider and are not sure), it is clear to see there is a lot of possibility in this technology. The Cognitive Services are already being used to help manage fauna and flora in Australia’s Kakadu National Park, as well as to manage fish populations in Darwin Harbour. I can also see this service bringing semi-automated medical diagnostic services to the world, being used for the auto-assessment of parts before they malfunction and for quality control of parts in mass production. Considering where we have come from, these are very exciting times.

Microsoft Power Platform 2020 Release Wave 1 Plan: The bits which excited me


Microsoft are getting quite organised with their documentation these days and last week they put out their Power Platform ‘roadmap’ for April 2020-September 2020. This is separate to the Dynamics 365 2020 release wave document whose 400+ pages will have to wait for another day.

Here are the bits in Power Platform, going to General Availability, which excited me.

Power BI

  • Drillthrough Buttons: This should make guiding a report user through the report a lot easier. They also are context aware, which is great.
  • Office ribbon for Power BI Desktop: Getting everything consistent makes it much easier for people to get on board with an application
  • Incremental refresh: Only data that has changed will update in a report. Less data consumption, quicker refreshes, less frustration
  • Conditional formatting for totals and subtotals: For exception reporting, this is a fantastic inclusion and I am surprised it was not there already.
  • Being able to render a paginated report in any format, such as PDF or Excel via the API: Very useful for automated report comms/invoicing etc.
  • Copy and paste visuals into other applications: If only Dashboards would do the same! Interoperability between apps is a key foundation of Microsoft Office and it is great Power BI has followed suit. I hope others will also come to the table soon.
  • Sub-report support: In the category of “why was it not there in the first place?” we have sub-report support.
  • Datasets larger than 10Gb in Power BI Premium: The whole point of Power BI is to synthesize large data into meaningful reports. The bigger the better as far as I am concerned. The only limit now is memory capacity.

Power Apps

  • Deep integration from Azure to Microsoft Teams: The ability to create apps directly in Teams is very exciting to me. Teams starts to move from being a collaboration tool to a truly useful productivity tool
  • Canvas and model-driven apps run on a single mobile application: Some of us still have scars from the early attempts of taking Dynamics to a mobile device. Users do not know or care whether an app is canvas or model-driven so having them launch from the one place makes a lot of sense.
  • Power BI Embedded component in portal designer: No more need for liquid code to make this happen. Easier to build and manage.
  • Save and Save & Close are back!: With auto save on, the only way to force a save was the tiny disk icon in the footer. The Save and Save & Close buttons are now back in the Command Bar, which certainly makes me a lot happier as I am trained, by the ghosts of Microsoft past, to hit save and hit it often

Power Automate (what used to be called Microsoft Flow)

  • Copy and paste Actions: Actions can now be copied and pasted. A huge time saver for branching scenarios
  • UI flows: Probably my favorite feature in the release. This is like a macro recorder for flows. Record mouse clicks, keyboard strokes and data entry and then automate it. UI flows also come with error handling and are solution aware
    • Automate web-based applications: Supporting Google Chrome (and Microsoft Edge Chromium) this allows the automation of web-based applications. This is crazy powerful and allows for all sorts of automated testing which may be hard to execute through traditional script
    • Automate Windows applications: Macro capture for Windows applications. Very exciting.
    • Automate on virtual machines: UI flows can be run on virtual machines, including Microsoft Remote Desktop

Power Virtual Agents

These are very new and exciting and allow you to create a chatbot without any code.

  • Add a Power Virtual Agents bot into a Power Apps canvas app: Great for automated help within an app
  • Add images and videos to topics: The bot’s response can now include video and images. With a picture being worth a thousand words it makes sense to make responses more than just text
  • Additional language support: Bots will be able to converse in French, German, Spanish, Italian, Portuguese and Chinese (a specific bot can only handle one of these though)

AI Builder

  • Form processing: Teach it what your form looks like with a few examples and you can automatically extract data for Power Apps or as part of a flow.
  • Object detection: Used for recognizing or counting objects, this has a huge range of applications from checking an employee has work safety gear on through to automated stock taking

Power Platform governance and administration

  • Admin connectors for Power Automate/Power Apps: These will be in General Availability in July 2020 and literally allow an admin to manage the tools with the same tools, i.e. create flows to manage flows/apps. A great way to ensure an admin is familiar with their management tools.

Common Data Model and data integration

  • SAP ERP connector for Power Apps and Power Automate: I thought this one was already there but obviously not. This allows you to connect to SAP ECC or S/4HANA which is often part of a client’s ecosystem
  • New connectors in Power Query Online: There are quite a few of these but the ones which I want to explore further are: Active Directory and OLEDB

They really have listened!

All through the document I kept seeing:

Historically it was not clear that Microsoft considered outside feedback from MVPs or the public in setting priorities for their development. They are very clearly stating this is now part of the process and I applaud them for it.

Conclusions

Innovation in the Power Platform is coming thick and fast and this document proves it. All of the above features are coming into General Availability, although not all straight away, so check the document if there is a specific feature you need. The one I really want to play with is UI flows. For legacy automation and low code automation this could really be an inexpensive way to achieve a lot. I have said it before but it is a very exciting time to be in Business Applications.

Business Applications’ New Architecture Paradigm


Back in BC (Before CDS), the architecture of a Dynamics solution involved (roughly) the following steps:

  • Listen to the client’s business need
  • Work out which modules most closely aligned
  • Minimize the cost of development and maximize the benefits
  • Configure and customize
  • Go live

With the introduction of CDS and the evolution to the Business Applications ecosystem, the steps have changed:

  • Listen to the client’s business need
  • Work out how they buy their Microsoft software licenses
  • Explore all the different ways the business need can be met
  • Work out the license implications of each one
  • Minimize the cost of development and licensing and maximize the benefits
  • Configure and customize
  • Go live

If you are not considering the license implications for your clients’ solutions, it could prove to be a costly mistake.

Licensing Then and Now

Historically (BC) the Dynamics licensing model was simple. You paid a ‘per-user-per-month’ fee and it was all you could eat. This was probably the big difference between Salesforce and Dynamics back in the day. Salesforce has always charged for each module and the incremental add-on of costs was referred to as the “Salesforce Tax” in competitive pitches. With Dynamics, the cost model has changed over time to the current component-based pricing, similar to Salesforce’s model.

The Dynamics components of old have been split out as add-on modules to CDS and the Business Applications ecosystem includes Azure services and the Power Platform. We have a wide variety of different license models for each of the components. The cost model is no longer a simple multiplier based on the number of users.

Navigating the New World

One way to consider the new model is to consider whether your customer’s needs will work better with a consumption model or a per-user model. If the solution is to be used by a large number of users but infrequently, a consumption model makes sense. Conversely, if the solution is to be used by a small number of users but they will conduct a high volume of transactions in it every day, a per-user model may make more sense.

Hypothetical Case Study: Internal Catering

Let us say we have the requirement of building a system which allows the request of internal catering for meetings. Users go to an app or web page, specify what they need and the request goes to an internal organizer who sorts out the catering.

In the old days we would likely look to the Customer Service module and use Cases. However, this is charged on a per-user basis. So how many users are we talking about? Let us say 12,000 internal users. That is an expensive application for sandwiches and wraps.

So is there another option? The Business Applications ecosystem provides a wide range of options to solve business problems. PowerApps are also on a per-user basis so this does not help. Virtual Agent may be useful but the pricing model, to my knowledge, has not come out at the time of writing. PowerApps Portal is licensed by proxy to things like PowerApps which again leads us to a per-user model for internal users.

An option which may work is Forms Pro linked to CDS via Power Automate (the new name for Microsoft Flow) licensed on a per-Flow (Automate?) basis. As long as we stay under the limit of 15,000 API requests per day we are good to go (high number of users, low number of calls).
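
As a back-of-the-envelope illustration, here is a tiny Python sketch of the comparison. The prices are purely hypothetical placeholders (not Microsoft’s actual list prices) and are only there to show why a consumption model wins when the user count is huge and the transaction volume is low.

    # Hypothetical prices only - substitute the real license costs for your region.
    users = 12_000
    per_user_per_month = 10.00       # assumed per-user app license (placeholder)
    per_flow_per_month = 100.00      # assumed per-flow license (placeholder)
    catering_requests_per_day = 200  # well under the daily API request allowance
    assert catering_requests_per_day < 15_000

    per_user_cost = users * per_user_per_month   # every internal user licensed
    consumption_cost = per_flow_per_month        # one licensed flow handles the lot

    print(f"Per-user model:    ${per_user_cost:,.2f} per month")
    print(f"Consumption model: ${consumption_cost:,.2f} per month")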

Instead of using Dynamics 365 for Customer Service and restricting an enterprise-ready customer service module to ordering Danishes, considering licensing forces us to get creative with all the tools in our Business Applications toolbox. It also forces us to think beyond the cost of creation to the cost of ongoing maintenance.

Conclusions

In the old world, licensing was not a significant part of the architectural design. Licensing was simple and for a per-user-per-month fee you had access to what is now CDS and the Sales, Marketing, and Service modules. In the new Business Applications ecosystem, licensing is much more complex and is a vital input into ensuring a solution is delivering true value to its customer, both for the duration of the implementation and beyond go-live.

One of the simpler ways to consider licensing is in terms of the number of users and the activity the solution will generate. This can be weighed against whether a per-user or consumption model will provide the most value. For lots of users and an app which is only used occasionally, a consumption model makes sense. For fewer users but an app which those users will live in, a per-user model may be a better option.

The notion of considering the license implications of a design will set apart the next generation of Business Applications architects from the old Dynamics CRM architects who are catching up. Do not get left behind or, worse, deliver an elegant solution which will bankrupt your client in licensing fees.

How To Become An Expert In Business Applications


A question I often get asked at conferences or via LinkedIn is how did I become an expert in Dynamics/Business Applications. The answer is pretty simple. I was on a bootcamp for Microsoft CRM 1.0 beta and have been tinkering with it ever since, focusing mainly on doing crazy things with Workflow. That bootcamp was back in 2003 and, to show how long ago that was, here is a screenshot of a Total Cost of Ownership (TCO) calculator they provided us, comparing against the biggest competitor at the time, Saleslogix (Salesforce was not even a blip on Microsoft’s radar back in 2003).

Everything was on-premise back then so, as an implementer, you also had to know how to set up SQL Server, Exchange and all the other server components Microsoft CRM relied on. We have come a long way. The good news is if you are looking to become an expert in Business Applications today, you can do it much faster than the 16 years it has taken me.

How The Game Has Changed

It used to be the case that, to be an expert in Microsoft/Dynamics CRM/365 you jumped on board with a version of the product and then just incrementally updated your knowledge as the new version came out. The version cycle used to be every 2-3 years, back in the old days, so it was very easy to keep up with all things Sales, Support, and Marketing. Then, to meet the expectations of the SaaS market, Microsoft changed the way they released software. Firstly, around five years ago, they committed to an annual release with a six-monthly mini-release. This slowly evolved into a six-monthly/unscheduled release cycle and now, for the online version of the product, it is a continual release.

How does this change the game? Because it is simply impossible to keep up to date with everything that is happening with Business Applications and the Power Platform. The pace of innovation is too fast. Moreover, much of the knowledge gained over the years is now redundant (in my case Workflows are becoming less and less relevant so it is time to carve out a new niche).

Let me state this very clearly: The MVPs who used to know almost everything there was to know about Microsoft/Dynamics CRM/365 are, at best, experts in one area or module and have a broad understanding of the rest. Business Applications has, for the purposes of understanding all of it, moved beyond the Technological Singularity.

The Opportunity For You

With no one being an expert in all things Business Applications, and new additions coming out literally all the time, anyone can grab hold of an area of the ecosystem and become the expert.

To highlight an example, I am going to pick on a friend of mine, Elaiza Benitez. Now Elaiza is no newcomer to Dynamics; she has been blogging about it since 2014 but, in my opinion, what has catapulted her to fame is her YouTube series #WTF (What The Flow). When I was at UG Summit EMEA earlier this year, Elaiza was mentioned in presentations and people asked me about her, and it was all because of #WTF.

#WTF was only started a year ago and exclusively focuses on tips and tricks for Microsoft Flow. Her recent passion is linking Flic buttons to Flow for creative and practical applications.

Anyone could have dug into the details of Flow and made a name for themselves. The barriers to entry were low because Flow requires practically zero coding but it was Elaiza who took the plunge and chose to own this piece of Business Application real estate and bring it to YouTube. She was the one who chose to devote the time to understand it and devise ways to make it accessible to the rest of us. Similarly with Flic buttons; to connect a Flic button to Flow requires zero code. The immense value Elaiza brings is in coming up with entertaining, meaningful and practical examples demonstrating the value of this elegant technology and inspiring others to improve the world with it.

OK, enough of this fanboy adoration of Elaiza, what does this mean to you? It means you can do the same. There are so many products being released begging for someone to take them up and show how amazing they are and how they can transform the world.

What Products Are Begging For Experts?

Here are some examples of products in the ecosystem begging for someone to grab them with both hands and show the rest of us how it is done:

  • IoT: The barriers to setting this up are reasonably high, with Azure subscriptions and setup required, but this means whoever takes this ground will be unassailable
  • Talent: From what I understand it is not complex and should be fairly easy to get across. It has hooks into both F&O and Dynamics 365 CE (or whatever the new name is for the CRM modules) so it is open to being attacked from both sides of the Business Applications fence.

STOP PRESS: The gracious Megan Walker has pointed out that there is a Talent MVP who reigns as the queen of Talent, this being Malin Donoso Martnes. Apologies for this oversight Malin!

  • Retail: This has received significant investment with the soon-to-be-released Commerce which claims to be everything you need to run a store both online and offline. If I was running my own consultancy and wanted to focus on a solution which I could roll out to companies across the country, this would be it.
  • AI: With the new AI Builder making AI available without a line of code and the pre-built Insight solutions, this is a large piece of territory which will soon be taken. A few people could focus on specific areas in AI without getting in each other’s way. If you want to become an expert in something which Microsoft is investing in heavily and which will provide tremendous value to those who adopt it, AI is for you.
  • Mixed Reality: There is some hardware investment and likely need for coding in this one but, like IoT, for those who invest, their position will be hard to approach. The biggest problem is understanding how it will benefit business to do what they do better. The person who cracks that nut and makes it accessible to the masses will be hailed a hero.
  • Fraud Protection: A lesser known offering in the Dynamics 365 collection. For the person who becomes the expert in this, they will, in my opinion, be able to write their own checks. It is very easy to demand top dollar when you have just saved a company a few million in fraudulent transactions.
  • Azure Cognitive Services: Spinning up an Azure service and using Flow’s Connectors to talk to it is codeless and very simple. There is so much power here and all it is waiting for is the next #WTF for Cognitive Services.

What Is Stopping You?

There is very little preventing you from being an expert in any of the above technologies. All it takes, in many cases, is focus and time. If you are willing to devote a few hours a week to tackling your chosen territory, you will be ahead of 99% of the people out there.

Extending Scrum Without Making It FrAgile


It has been a while since I have written a post. In the interim, I started my Diabetes Blog: The Practical Diabetic. While I am Type 1, there is good information in the articles for anyone who is prediabetic or a different Type and all are tagged to make sure the content is relevant. Being a physics geek, all the information has a basis in science. No half-baked cures on my blog. There are also some interesting technical articles for helping manage the disease. If you know someone in the D-Camp, point them my way.

This article is based on research I did for PowerObjects as part of their Agile Center of Excellence (CoE). PowerObjects have a bunch of CoEs with membership across the globe. We get together weekly and work out best practices and share war stories. I belong to the Agile one. In this case, the research was around the various Agile frameworks out there and their applicability to Dynamics implementations.

As with my Diabetes blog, if the article looks too long, skip to the tl;dr section at the end for a summary.

The Sanctity of Scrum

Arguably the most common Agile framework used for Dynamics implementations is Scrum. The definition of Scrum is a very readable 19-page PDF document. However, The Scrum Guide is far from prescriptive. For example, the words ‘velocity’ and ‘points’ appear precisely zero times, yet these are common elements in many Scrum implementations; The Scrum Guide says precious little about tracking progress. To this end, there is plenty of room for adding creative flavor to Scrum. Creativity is often seen in the format of the retrospective, and in the estimating of effort in Sprint Planning. Another area where we can bring creativity to Scrum is by introducing elements from other Agile frameworks out there.

Agile vs Lean

Software development frameworks come from two main paradigms: Agile and Lean. While Agile focuses on delivery through iterative development, Lean focuses on the process and seeks a continuous stream of productivity and improvement. Agile values the people involved, collaboration, and adaptability, while Lean values the elimination of waste, improved quality and optimization.

Where the two share common ground is in providing efficient and tangible outcomes. Most software development frameworks sit between the philosophies of Agile and Lean.

The Frameworks Of Interest

There are many, many frameworks but for this post I will focus on three: Extreme Programming (XP), Scrum, and Kanban. If you like another framework, by all means embrace it. These three are useful as they cover the spectrum between Agile and Lean and share some compatible elements.

The most Agile is XP, while Kanban is very Lean, being all about the process, with Scrum sitting in the middle. In terms of how the frameworks differ, the key points of differentiation for Scrum are its focus on the people involved (both on the consulting side and the customer side) and the ability to accommodate offshore development teams (something increasingly common in Dynamics implementations).

                       Scrum   XP      Kanban
Project Size           All     Small   All
Sprint (weeks)         2-4     2       1
Process Centric        No      No      Yes
People Centric         Yes     Yes     No
Virtual Team Support   Yes     No      Yes
Documentation          Basic   Basic   N/A

Extreme Programming (XP)

Extreme Programming is probably the most famous of the ‘pure Agile’ frameworks. Born out of the rise of the internet and the dot-com boom, it sought an alternative to the traditional waterfall approach, which is better suited to construction projects.

The approach of XP is less about a set of requirements and more about embracing the right values, principles, and practices to achieve them; it focuses on the journey, not the destination.

Designed for pure coding, it is, in my opinion, difficult to embrace completely for Dynamics implementations. However, in terms of its values and in using an incremental development approach, it is closely aligned to Scrum.

Some of the techniques/philosophies used in XP which can benefit a Scrum implementation are:

  • Pair programming: Even if it is configuration of the system, a second pair of eyes can be invaluable for re-evaluating the intent behind a User Story, or offering different approaches to address the problem (there is always more than one way to solve a problem in Dynamics).
  • Test-Driven Development: Determining how a story will be tested is a great way to understand what needs to be configured/coded. It also provides an opportunity for the Product Owner, the test team and the development team to come together to ensure they are aligned on what is to be achieved.
  • Collective Code Ownership: There is no ‘i’ in Extreme and with the development team in Scrum being an amorphous blob, it makes no sense that configuration/code is an individual responsibility.

Kanban

Kanban is all about the process and visualises this process through a board with tickets covering it to represent the jobs (User Stories) being processed and the stage in the process they are at. This board is creatively known as the ‘Kanban board’. The Kanban board is a board of continual development/progress. No sprints here.

A key element to Kanban is a lack of upfront planning and story-sizing. This means it is very hard to predict what effort or time a project will take to be delivered. Convincing a project sponsor to give the go ahead for a project with no timeline or budget is challenging and this often precludes Kanban as the primary framework for a Dynamics implementation.

Some of the approaches used in Kanban which can be beneficial to Scrum are:

  • Using a Scrum board: The Scrum board differs from a Kanban board in that the Scrum board is reset at the end of every sprint to reflect the new Sprint Backlog. Otherwise, their appearance and function are quite similar.
  • Limits on the number of stories permitted at a given stage: This prevents too many stories moving between stages, e.g. development to testing. One benefit of this is it will highlight resourcing issues if a specific stage is blocked by too many stories. This approach will also show if stories are not being system tested thoroughly by the development team before being handed over to the test team: if too many are returned for re-work, this will also cause a blockage.
  • Story-typing: A lot of information can be conveyed on a Kanban board and there is no reason why this cannot be brought across to a Scrum board. A great example of this is in the use of ‘Story-Typing’ which is the classification of User Stories, depending on the type of story they are. Chores (work that needs to be done upfront before actual development can start) and Spikes (Research/analysis required to address a User Story) are good examples of story types.

Neil Benson, who I had the privilege of working with on a two-and-a-half-year Agile Dynamics implementation for the University of New South Wales, was a big fan of story-typing. Blocked stories were inverted on the board and we had a healthy pool of Spikes and Chores. We also used pair programming. Neil is quite the fan of mixing it up.

tl;dr

There are quite a few software development frameworks out there and, while Scrum is the most popular, it is sufficiently flexible to incorporate elements from other frameworks.

Looking at the various Lean and Agile frameworks out there, two whose elements can be adopted by a Scrum implementation are Extreme Programming (XP) and Kanban. The elements which lend themselves to inclusion are:

  • Pair-Programming
  • Test-Driven Development
  • Collective Code Ownership
  • Using a Scrum board
  • Limiting the number of User Stories at a given stage of the development process
  • Story-Typing

The Evolution of Customer Service From A Call Center to Multi-Channel And Beyond

Standard

Starting university in the early nineties gave me a unique vantage point from which to appreciate the modern evolution of technology. The internet in the public domain was still in its infancy. I was one of the first people I knew to have an email address and had to explain what email was to many of my friends. ICQ was five years away, so messaging was done via ‘telnet’, where you ‘dialled’ someone’s IP address to chat.

Browsing was text based with hyperlinks and there was no search engine (AltaVista was three years away). You simply discovered pages of web links through word of mouth.

This was a time when customer service was delivered through three channels: face to face, phone, and fax. We have come a long way.

The Advent of Multi-Channel

Multichannel

People do not really talk about multi-channel any more. It was big for customer service and for marketing and, while it could be argued even our pre-internet customer service was multi-channel, my recollection is the term only came into vogue when linked to internet channels.

It was also the beginning of a shift in considering what customer service was for. Before this time, customer service was little more than part of the product/service offering. If a company offered three services, each department was responsible for customer service, and there would be three customer service functions. I recall a prominent American bank at the time having literally a dozen different fax numbers for different divisions (the only reason I remember this is because I once flooded all the fax numbers when the bank was slow at refunding an erroneous monthly charge. Enquiry processing across the bank came to a halt in what was, arguably, a pre-internet Denial of Service attack).

With the introduction of channels like email and online forms came a shift towards considering the customer’s experience. It made sense for the customer to choose the most convenient way to reach the organisation, and not the other way around.

The outstanding problem was minimal cross-communication. Multi-channel meant multiple ways for the customer to get service but each channel was still a separate experience. Switching channels often meant starting again and customers were still bounced around departments for more complex issues.

Progression to Omni-Channel

Omnichannel

Omni-channel, as you can see from the Google Trends graph, started becoming a thing about five years ago. Thanks to the online revolution, enterprise-level CRM systems became affordable for all. This provided a centralized hub for all enquiries. You could email about an issue, follow up with a phone call, and then go to the company’s physical service counter and all interactions would be recorded in the same system and available at the click of a button.

While multi-channel gave consumers a choice of communication channel, omni-channel took it one step further and ensured a consistent experience or, at least, a consolidated one.

With most CRM systems, a rudimentary omni-channel system can be set up relatively easily. In my last project for a major university, whether the student asked their question face to face, via phone, email, or online form, everything became a Case record in Dynamics. In an omni-channel system, the customer gets to use the channel which makes sense for them and their enquiry. For the company, the channel does not really matter as a centralized CRM system means all enquiries are treated consistently. A true omni-channel system also removes “answer shopping”, common in multi-channel systems.

The Future is Omni-Moment

Omnimoment

The core assumption in an omni-channel system is that the customer chooses a channel for an enquiry and sticks with it for the duration of that enquiry. If we focus further on the customer experience, the nature of the enquiry may require multiple channels to be engaged as part of the one interaction. Let us consider the example of opening a bank account.

In the multi-channel experience, a customer calls to find out about the procedure. They do not quite get the answer they are after, so they call back to get a different agent. They then visit a bank branch to collect the right forms and go home to fill them in. If they need to clarify something about the form, they either call or revisit the branch. There is no guarantee that the advice they get from these channels will be consistent.

The customer hunts down a notary and has their identification documentation validated. Once completed, the forms are faxed. Finally, once the processing department has informed the local branch that the account is open, the customer returns to the bank branch to provide a signature and collect a bank card.

Every step in the process is an isolated channel, with the customer expected to bring it all together in what was often a frustrating and time-wasting experience.

In the omni-channel world, the customer goes online to find out about the procedure and there is an online form. If the customer has a question about the form, they can call or browse the web site. As both channels are pulling their information from a centralized knowledge management system, the answers will be consistent (and hopefully comprehensive).

Identification documentation is again notarized and, once the form is completed with the notarized documentation attached, the application is processed. With signatures being a thing of the past, a card is sent in the mail.

In the omni-moment experience, the customer goes online to find out about the procedure. The web site recognizes the intent and provides the option of a chat bot to assist. If the customer’s enquiry cannot be answered by the web site or bot, the interaction is escalated to a human. The agent offers to share their screen and walk the customer through filling in the online form. Using video conferencing, the agent can verify identification on the spot without the need for a notary. Forms are completed and the account is opened immediately, ready for online use. A bank card is again sent in the mail.

As you can see, the seamless integration of people, process, and technology makes for a delightful customer experience. A process which took a week in the multi-channel world is completed in half an hour in the omni-moment world.

The Evolution of KPIs

As the way we interact with customers has changed, so too must our KPIs. Here are some classic call center KPIs which I consider irrelevant (or at least very misguided) in the modern customer service center.

Average Handling Time

Even back in the days of call centers, I was not a fan of this measure. It encouraged agents to open a call and immediately hang up to lower the stat. It is focussed on productivity, often at the expense of the customer experience.

If an agent spends 20 minutes assisting one customer to open up a bank account and 30 minutes with another, why is this a problem? If one agent is terse and goes through the form quickly, is this better than someone who actually takes the time to make sure the customer knows what is going on?

Average Time in Queue

There really is no excuse for waiting in a queue on the phone these days. Assuming a customer insists on exclusively using the phone, a call back service should be standard procedure. In an omni-moment world, there should be no queue and all queue measures are irrelevant.

Cost Per Enquiry

It is good to have visibility on costs but this should not be managed at the expense of the customer experience. In the early days of online channels it was realised these were much cheaper to operate than traditional channels. In some cases the customer experience was worsened for the traditional channels to encourage people to go online. This is management in the absence of strategy and is disastrous in the long term.

What is the Purpose of Customer Service?

The ultimate measure of customer service should be customer satisfaction. In my opinion this should be sought directly through surveys rather than assumed through measures such as Average Handling Time (a short call is not necessarily a good call). I can see value in measuring First Call Resolution (as confirmed directly with the customer) as this should be the ultimate goal of customer service. However, it needs to be modified so it covers all channels across the customer experience, not just the phone component (assuming a phone is even involved).

While in a pre-multi-channel world customer service was seen as little more than a necessary evil for selling a product or service, in an omni-moment world the minimum standard is having the customer ask no more than once and be satisfied every time they make an enquiry. In fact, with machine learning, it should be possible in many cases to anticipate customer need and frequently achieve ‘ask never’ for existing customers.

Generating Reports For NightScout Data Using Flow, Excel, and OneDrive

Standard

A few months ago I talked about extracting data from a MongoDB database for the purposes of generating alerts. Since then I have taken it further and now generate regular reports of my data using the power of Flow, Excel, and OneDrive. As this may be useful to others running NightScout I thought I would share my set up and the discoveries along the way.

The Flow

First of all, I need to extract the data from the MongoDB and send it to a target Excel sheet. To do this we use Flow.

image

I have set the recurrence to three hours. This strikes a balance between not running too often and blowing my Flow quota, and running often enough to give timely results. Running every three hours means approximately 240 runs a month, which works well with our limit of 750 Flow runs per month.

The variable stores the latest DateTime value from our target Excel file.

image

To populate this variable, we query our target Excel and set the value.

image

In this screenshot we see that we return only one row from Excel, being the row with the highest DATE value. We then use this to set the variable.
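
As a rough illustration of this step outside Flow, the snippet below pulls the highest DATE value from the Excel table with pandas. It is a sketch only; the file name is a placeholder and the column name follows the table described later in this post.

    # Find the most recent DATE already captured in the target Excel table.
    import pandas as pd

    latest_date = pd.read_excel("nightscout_data.xlsx")["DATE"].max()  # placeholder file name
    print(latest_date)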

Once we have this DateTime value we incorporate it into a modified version of the API call we used in the Alert blog.

image

For this call we bring back up to 100 entries from the MongoDB, select a handful of fields, and order the results so that, if more than 100 rows are available after the latest DATE in our target Excel, only the rows immediately following that DateTime are returned. This ensures the query does not mess up the row order when the rows are transferred to Excel.
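
For anyone more comfortable reading the logic as code, here is a minimal Python/pymongo sketch of the same query. It is an illustration only, not the actual Flow step: the connection string is a placeholder and the collection and field names (‘entries’, ‘date’, ‘dateString’, ‘sgv’) are assumptions based on a typical NightScout database.

    # Illustrative pymongo equivalent of the Flow HTTP call described above.
    # Connection string, collection name, and field names are assumptions.
    from pymongo import MongoClient, ASCENDING

    client = MongoClient("mongodb://user:password@host:27017/nightscout")  # placeholder
    entries = client["nightscout"]["entries"]

    latest_date = 1556668800000  # latest DATE already in the target Excel (epoch ms)

    new_rows = (
        entries.find(
            {"date": {"$gt": latest_date}},          # only rows newer than the Excel data
            {"date": 1, "dateString": 1, "sgv": 1},  # just the fields we report on
        )
        .sort("date", ASCENDING)                     # oldest first, so row order is preserved
        .limit(100)                                  # cap at 100 rows per run
    )

    for row in new_rows:
        print(row["dateString"], row["sgv"])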

My continuous glucose monitor (CGM) feeds a value to the MongoDB every five minutes, which means it generates 180/5 = 36 entries every three hours. Therefore 100 is a good setting: enough to keep on top of the additional values generated in MongoDB, and sufficiently large that the Flow can catch up if there is a temporary issue with its running.

Once the reply is parsed, we can populate our Excel with the new rows.

image

One point of note here is that the Flow step requires a Table within the Excel workbook. This is relatively easy to set up. Basically, you add your headers to the sheet, highlight them and select Format as Table from the Styles section of the Home tab.

The result looks something like this.

image

The DATE value is an integer representing the DateTime value but it is a little difficult to read or transform, so we also record the DATESTRING, which is a little friendlier. Then we have the SGV value, which is the blood glucose level in units only the USA uses, and finally we have the DELTA, which is the change in SGV value between reads.
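
If you do want to work with the DATE value directly, the sketch below shows how it maps to a readable DateTime, assuming it is a Unix epoch expressed in milliseconds (the usual NightScout convention); the example value is made up.

    # Convert the integer DATE column to a readable DateTime.
    # Assumes DATE is a Unix epoch in milliseconds.
    from datetime import datetime, timezone

    date_value = 1556668800000  # example DATE value
    readable = datetime.fromtimestamp(date_value / 1000, tz=timezone.utc)
    print(readable.isoformat())  # 2019-05-01T00:00:00+00:00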

Once we have captured our data, we can begin reporting on it.

The Report

I discovered relatively quickly that Flow has a size limit for the Excel files it will work with. On the free plan this limit is 5 MB, which makes it impractical for our purpose. Luckily I had a paid Flow plan via my Office subscription so I moved to this. That plan allowed me to work with Excel files up to 25 MB in size, which worked well. My Excel file has approximately four months of data in it and is 1.6 MB in size, so I have around five years of data to go before Flow reaches its limit. In five years either Microsoft will have removed this silly limit, I will be using a different technology to analyse my data, or they will have found a cure for Type 1 Diabetes (there is a running joke in the diabetes community that the medical professionals have been promising a cure within five years for decades now).

The other trick I used to minimise the size of my target Excel was to house the reporting in a separate file and use a Power Query to reference back to the target file for the data. Using this Power Query, and some Excel formulae to manipulate the data and make it friendlier for reporting, I got this for my first worksheet.

image

If you struggle to replicate any of my formulae, please leave a comment and I will reply with the details.

HbA1c Prediction

The HbA1c is an indicator of how ‘sugary’ your blood has been for roughly the last four months. Using our CGM data we can make a prediction of what our HbA1c value is.

image

There are a few formulae available to do this calculation and in the above I use three of them. In the case of my blood results, the models predict 5.3, 5.1, and 5.1, which is well below the target threshold of 6.5, so well done me. I expect this value to slowly increase over time as my pancreas becomes less able to lower my blood sugar levels.
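
For the curious, here is a sketch of two commonly published conversions from average glucose to an estimated HbA1c: the ADAG/Nathan formula and the Glucose Management Indicator. They are not necessarily the exact three formulae used in my spreadsheet, and they assume the average is in mg/dL, in line with the SGV column.

    # Two published conversions from mean glucose (mg/dL) to estimated HbA1c (%).
    # Illustrative only; not necessarily the three formulae used in the workbook.

    def hba1c_adag(mean_sgv: float) -> float:
        # ADAG/Nathan: eAG (mg/dL) = 28.7 * HbA1c - 46.7, rearranged for HbA1c
        return (mean_sgv + 46.7) / 28.7

    def hba1c_gmi(mean_sgv: float) -> float:
        # Glucose Management Indicator: GMI (%) = 3.31 + 0.02392 * mean glucose
        return 3.31 + 0.02392 * mean_sgv

    mean_sgv = 100  # hypothetical four-month average SGV in mg/dL
    print(round(hba1c_adag(mean_sgv), 1), round(hba1c_gmi(mean_sgv), 1))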

Distance Report

The Distance Report is something that can only really be generated using CGM data with a regular time interval between measurements (in our case every five minutes). The Distance Report shows the total ‘distance’ travelled by the blood glucose values, i.e. the sum of the absolute delta values, and is an alternative to the standard deviation as a measure of variability.
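
As a rough illustration of the calculation, the sketch below computes the same measure with pandas: the sum of the absolute differences between consecutive SGV readings, grouped by month. The file name is a placeholder and the column names follow the Excel table described earlier.

    # 'Distance travelled' per month: sum of absolute deltas between consecutive reads.
    import pandas as pd

    df = pd.read_excel("nightscout_data.xlsx")       # placeholder copy of the target workbook
    df["timestamp"] = pd.to_datetime(df["DATESTRING"])
    df = df.sort_values("timestamp")

    df["abs_delta"] = df["SGV"].diff().abs()         # change between consecutive readings
    distance_by_month = df.groupby(df["timestamp"].dt.to_period("M"))["abs_delta"].sum()
    print(distance_by_month)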

image

For this report we only have data for the last four months as this is how long I have been using a CGM. We can see that the distance travelled each month is roughly the same. As time goes on we would expect this to increase as the pancreas becomes weaker and blood glucose levels (BGLs) start to vary more.

BGL Report

This was the first report I created and it reviews literally all my BGL measures (around 600 manual finger pricks plus the CGM data).

image

In the top left we have literally every value recorded and when it was recorded. The CGM data can be seen as the ‘thickening’ of the values towards the right hand side of this graph.

In the top right we have the distribution graph for the data showing the spread of results.

The bottom left shows all the data points but strips out the Date value, leaving only the Time value. This has the effect of showing the data over a 24 hour period.

Finally, in the bottom right, we have a range of filters to assist with analysing the data.

For example, if we compare the distribution curves for 2017:

image

2018:

image

and 2019:

image

we see that our distribution curves are centred around 5.4, 5.5, and 6.0 respectively. In other words it appears the curve is moving to the right over time. This is consistent with a weakening pancreas (or me being more relaxed about carbs).

Range Report

The Range Report looks at the average and standard deviation of the data per hour of the day, showing where in the day the BGL values are highest and vary the most.
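
If you prefer to see the calculation spelled out, here is a small pandas sketch of the same idea, grouping the readings by hour of day; again the file name is a placeholder and the column names match the Excel table above.

    # Range Report equivalent: mean and standard deviation of SGV per hour of day.
    import pandas as pd

    df = pd.read_excel("nightscout_data.xlsx")       # placeholder copy of the target workbook
    df["hour"] = pd.to_datetime(df["DATESTRING"]).dt.hour

    range_report = df.groupby("hour")["SGV"].agg(["mean", "std"])
    print(range_report)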

image

The graphs are relatively flat with a slight increase towards the end of the day. This is likely the result of dinner (generally the largest and most variable meal of the day and therefore the meal with the most impact on glucose levels) and late night snacking (which will never have a positive effect on BGLs). Again we have a filter, in this case a timeline, to help with our analysis.

Distribution Report

The Distribution Report does a similar analysis to the Range Report but per month, rather than per hour.

image

The trendlines suggest the numbers are relatively flat (an average BGL around 6 with a standard deviation of 1). Both are expected to increase over time as the pancreas weakens and BGLs become higher and more variable.

Displaying the Data to the Health Team

With the Excel files sitting in OneDrive, you simply right-click a file to generate a link for sharing a read-only version with health care professionals. In my case I also use bit.ly to make the link friendlier. While the online view is a little twitchy, it is reasonably friendly across various form factors and browsers.

Conclusions

Flow opens up a raft of opportunities for using my data, whether it be alerts, analysis to maintain my health, or making the data readily available to my health care team. A few years ago this kind of set up would have taken weeks of coding, if it was possible at all. Today, it requires zero code and costs almost nothing. If this kind of set up could help you or someone you know, have a tinker; it really is straightforward to set up.