“Lost in Redmond”: Interactive Fiction Games using Power Virtual Agents

For those unfamiliar with this game style, interactive fiction comes from the days when computers ran DOS and USB sticks were called floppy disks, even though they were not that floppy.

The poster child for this style of game was Zork.

In the absence of graphics, the adventure was text-based, testing wits over hand-eye coordination and the player’s patience in finding the specific phrase that would move the plot along.

Inspired by this genre of my youth, over ten years ago, I created “Lost in Redmond”, an interactive fiction game (also referred to as text-based adventure) using Dynamics Dialogs.

Sadly, not seeing the potential for Dialogs, Microsoft removed them from the product a couple of years ago. My ambition for using Dynamics as a platform for text-based adventures was dashed…until now.

Thanks to Power Virtual Agents (PVA), Lost in Redmond is reborn although, for reasons I will explain later, it is not quite possible to release it publicly. The good news is this blog will show you how to create your very own (and run it in Teams).

The Development Environment

Even as an MVP I struggled to get a non-trial environment to play with. Thankfully, firing up Power Virtual Agents in Teams worked without complaint so this is where I put it together. To get to it, simply click on the three dots in the left-hand column of the Teams screen and search for Power Virtual Agents.

Elements of the Power Virtual Agent

The two main elements of a Virtual Agent are Topics (things to talk about) and Entities (linguistic concepts to help the AI understand what is being said). For my bot I did not need to use Entities but, if you want more information, here is an article Microsoft have put together describing them. Please note, PVA Entities are completely different from what we Dynamics old-timers think of as Entities (what are now called Tables).

Topics

For the purpose of the game, most Topics represented the ‘rooms’ the player could occupy. In my game there were five different areas of the Microsoft campus:

  • Building 7: Bill Gates’ Secret Bunker (this is a small in-joke in that there is no Building 7 at the Redmond Campus)
  • Building 33: Executive Briefing Center
  • Building 87: Microsoft Library
  • Building 92: Company Store and Visitor’s Center
  • Building 99: Microsoft Research

Other than Building 7, the rest are real buildings, although their purposes are mostly different in real life e.g. Building 87 is actually a research lab.

At the bottom of the screenshot you see the default Topics. These are part of all Virtual Agents and, as far as I can tell, cannot be removed, although they can be edited.

Finally, there are the “Welcome”, “Help”, and “Reset Inventory” Topics. “Welcome” is, effectively, the splash page for the game, being triggered by the phrase “Lost in Redmond”. “Help” was there to give overall guidance and then return to the game, and “Reset Inventory” was useful for game testing when I wanted to wipe the inventory of the player.

Welcome

This Topic is a great introduction to the anatomy of a Topic. Firstly, we have the trigger phrase; this is what we say to our Virtual Agent to provoke the Topic response. In this case I have just one phrase “Lost in Redmond” but you can add as many as you like so the AI has the best possible chance to infer the right Topic. For example, I could add “Play Game” or “Play LIR” etc. to the list.

Next is a Message which has simple text formatting and then we have a Redirect to another Topic: Building 92. The Redirects were how I controlled the flow of the game, bouncing the player between Topic/rooms, collecting and using objects along the way.

Building 87: Microsoft Library

Most of the rooms had a similar structure but the Microsoft Library has all the elements seen elsewhere, so it is a good one for describing the flow within a Topic/room.

Here is the first half of the Topic. Like the Welcome Topic, we start with a Message, where we describe the room, and then ask a Question to prompt for an action. The “Go to <x>” responses use Redirects to go to other Topics as we did with the Welcome Topic and, similarly, those Topics redirect here so there is no need for a trigger phrase.

Help

Help triggers the Help Topic and then returns to this question.

To bend the flow back to the question, you hit the ‘+’ as if you are going to add another step and then grab the purple dot with the mouse.

Dragging this dot to the top of the original question makes the flow return.

Inventory And Its Flows

Implementing a player inventory proved quite hard because variables in Power Virtual Agents are just plain horrible. Being used to Power Automate variables, I was expecting similar functionality. Specifically, I was expecting to be able to Initialize a variable, Set a variable, and Append to a variable. While it is possible to initialize a variable and give it a value, there is no way to reset that value or append to it. For the curious, a variable is created whenever a Question is asked, to store the response. You can also initialize a variable by calling a Power Automate flow.

Given the limitations of variables in PVA, I had to get creative which meant using Power Automate flows. Here are all the flows I created for the game.

  • Check Inventory simply checks what is in the inventory list.
  • Check for Item queries the Inventory list to see if an item is there.
  • Reset Inventory, as previously mentioned, wipes the Inventory list.
  • Activate Coat Hanger was me experimenting to see if I could store the Inventory with Booleans but it failed.
  • Add a row to an Excel file adds an item to the Inventory list and gives away the approach I eventually adopted: storing the Inventory in an Excel file.

Check Inventory

Here is the flow to Check Inventory. The Excel file simply has a Table with two columns: Name and Inventory, capturing the items all players have.

The flow takes the player’s name as input (auto-captured by PVA), pulls the list of all rows in the Excel file, filters them based on the Name and appends the Inventory values to a string which is passed back to the PVA.
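
As a rough sketch of the core of that flow (the Action and input names here are placeholders; yours will depend on how the PVA trigger and the Apply to each are named), the Condition inside the loop and the append are along these lines:

equals(items('Apply_to_each')?['Name'], triggerBody()?['text'])

concat(items('Apply_to_each')?['Inventory'], ', ')

The string variable built up by the second expression is what gets passed back to the PVA.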

Check for Item

Check for Item uses effectively the same structure taking the player’s name and the name of an item as input and then using both to filter the rows. Those which match get added to the string variable and the length of the string is passed back to the PVA.
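
In expression terms, the value sent back is just the length of that string variable, something like the following (the variable name is a placeholder):

length(variables('MatchingItems'))

A result greater than zero means the item is in the player’s inventory.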

Add a Row (Add an Item)

Add a Row takes the player’s name and the name of an object and adds it as a row to the Inventory Table.

I return a Status value back to PVA but, to be honest, I do not use it.

Reset Inventory

Finally, the Reset Inventory flow lists the rows and then does a delete Action for each one.

It is this use of an Excel file to store the inventory which makes it difficult to package or expose the PVA to the outside world because the file needs to sit somewhere to be used and cannot be added to a solution. Hopefully, Microsoft will improve the variable functionality of PVAs in the future and allow the game to be fully packaged.

Get Object

This is how the Get Object command looks in the Topic. On selection of “Get Object”, the object in question is confirmed and then a flow adds it to the inventory Excel Table.

Use Object

This one is longer and does the following steps:

  • Confirms the Object to use
  • Shows the player their current inventory (via flow)
  • Checks if they have the Object in their inventory (via flow)
  • If the returned string is longer than zero characters, they have the Object and can proceed; if they do not have the Object they are redirected back to the main menu of options for this Topic (a sketch of this check follows below).
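
On the PVA side, that last check is nothing more than a Condition node on the value the flow returns, roughly as follows (the variable name is whatever you called the flow output):

Condition: ItemLength is greater than 0 → continue with the Use Object steps
All other conditions → Redirect back to the room’s Question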

One problem I had with using Excel as an approach was caching. Sometimes, despite an Object being added to the inventory and displaying as such with the Check Inventory flow, the Check Item flow would fail. I managed to improve it by using the looping approach above to filter the row queries, rather than using an ODATA Filter but it was still not bulletproof. Hopefully, this will improve with future versions of the flow Actions.
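
For the curious, the ODATA approach would have been a Filter Query on the “List rows present in a table” Action along these lines (the column names match my table; the values are obviously just examples):

Name eq 'Leon' and Inventory eq 'Coat Hanger'

whereas the looping approach pulls back every row and does the comparison in a Condition inside an Apply to each, as sketched for the Check Inventory flow above.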

Testing and Publishing the Game

Testing is available at any time through the Test Bot.

Publishing is through the Publish area and allows you to make the game live, add the bot to a Team (where you “@” the name of the PVA and pass it commands) and generate a link you can give to others to access the game.

Clicking on the generated link allows the person to add Lost in Redmond to the left-hand side of Teams and run it in its own window, as we saw at the start of the article.

Other channels are also available, with a Premium license.

How Easy Is It To Make a PVA?

If you have tinkered with Power Automate you will have no problems with PVA. This was literally my first Virtual Agent and it took me a few nights in front of the TV to put together. While it might not be a game you create, there is a lot of potential for a chatbot which can use Power Automate to go off and retrieve answers to questions.

Have a play and, as soon as Microsoft fix variables in PVA, I promise I will release Lost in Redmond.

The Future Is An Aphex Twin Music Video

A recent episode of “The Book of Boba Fett” showed just how far Deep Fake technology has come, resurrecting the character of Luke Skywalker as he was 30 years ago.

Deep Fake imagery (digitally superimposing someone else’s face onto another person) has been with us for a while and there are even phone apps which can achieve a simple version of it. What was remarkable about this rendition, for me, was the voice. Luke’s voice was as it was 30 years ago. Initially I had assumed they had used the same actor, Mark Hamill, and somehow de-aged his voice. In fact the voice was completely artificial. Geek Tyrant describes the process, which involved feeding an AI with many, many samples of Mark Hamill’s voice from that time so the AI could recite the dialogue pitch-perfectly.

For now, Deep Faking is processing-heavy and time-consuming. However, processing speeds continue to improve. The situation reminds me of Autotune.

How Autotune Changed Music

Autotune was invented in 1997 and is used to correct the pitch of singing. Using a mathematical method called autocorrelation, it does this in practically real time. Music engineers had considered autocorrelation impractical because of the massive computational processing required. The inventor, Andy Hildebrand, had originally developed the technique for seismic wave analysis to pinpoint drilling sites and was thus unhindered by such preconceived notions. Using a “mathematical trick”, he greatly simplified the calculations required, reducing the necessary processing.

Today, autotune is used on practically every popular song as part of the sound engineering to make the song “play ready”.

Real Time Deep Faking

Let us assume similar “mathematical tricks” can be discovered for Deep Fake processes. Obviously this allows for real-time broadcasting of anyone pretending to be anyone else and potentially kills voice recognition security. I can see an era of politicians using stand-ins for speeches both live and recorded. Actors will ‘de-age’ their voices in the same way they ‘de-age’ their appearances with surgery. Audiences will be given a choice for who they want to see playing the parts in movies or presenting live events. Someone that looks more like them perhaps, or someone who they trust.

Mixed Deep Faked Reality

Now consider the combination of another nascent technology: mixed reality. We are seeing smaller and less invasive versions of Hololens and Oculus-like devices and we already have glasses with bone conducting headphones. Add in noise cancelling technology and it is not hard to imagine a world where we can auto-adjust reality’s sights and sounds to whatever we want. In this world we can Deep Nude everyone we meet; we can make everyone sound like anyone we choose. Perhaps we love the sound of our own voice and want to hear it from everyone else’s mouth or there is a figure we feel comfortable and relaxed with; the ultimate ice breaker at parties.

This is where the title of this article comes from. If you are unfamiliar with the artist Aphex Twin, the video for his song “Windowlicker” involves a gentleman, and some associates in white bikinis, dancing and enjoying each other’s company in a limousine. The part of the video which is slightly disturbing is that they all look exactly like him.

This notion of adjusting the appearance of those around us to our preference, depending on the circumstance, I believe, will become a natural part of our lives. For people we know, we may allow them to define their appearance. For strangers, the technology will adjust them to our preference, changing their appearance to highlight “high value” individuals, based on clothing brands, known importance from online presence etc. and I will leave it to the reader to consider what this technology means for realising fantasies in the bedroom.

For those otherwise reluctant to engage in social settings, I see benefit in lowering the social barriers; it has never been easier to picture a room of people without their clothes on, but I also see it making our world even more superficial. Are we really capable of determining a person’s true worth from their shoes and LinkedIn profile? What do we lose in removing someone’s physical presence to suit our own aesthetics? The technology will encourage us not to accept people for who they are, but for who we want them to be or who they are online which, for many of us, is a very rough approximation.

I am both excited and concerned at what this technology will bring and what it will take away but I also see it as inevitable.

Refactoring Flows

A couple of weeks ago I put together a Flow for merging Word files. I mentioned at the time I was not overly comfortable with the approach. While the Flow worked fine, the design was not great.

Here we have an example of poor design from that Flow. The Flow splits based on whether the input is “CRM” or “ERP”. However, the processes are identical. If we need to make an adjustment to the process for one, we are likely to need to do it to the other and the double handling introduces room for error. Can we redesign the process and eliminate the double-handling? This is what is meant by refactoring: changing code (or, in this case, a Flow) so it does the same thing but it is more “elegant”.

Reasons to Refactor

Obviously, double-handling is one good reason to refactor or, to put it another way, “Maintainability”. We want to create a Flow which can be maintained easily in the future, either by us or someone else. If we need to remember to make identical changes in multiple spots in the Flow, this is not easily maintained.

Other reasons for refactoring include:

  • Readability: A Flow which someone can open and easily understand what it does
  • Scalability: A Flow which grows as its use or the volume of the information it works with grows
  • Reusability: A Flow which we can use in other circumstances so we do not have to reinvent/recreate
  • Performance: A Flow which runs efficiently

All of these apply to the Create Response Flow.

Readability

Let us look at the current version of our Flow which does effectively the same thing as before but redesigned to embrace these design principles.

First of all, every Action has a meaningful name. Without expanding them, I get a sense of what they do. I can also add comments to Actions through the ellipsis to explain more detail.

Finally, you can see a couple of brown Scope Actions. A Scope Action is used as a container for a collection of Actions. So, for example, the “Set up Substitution JSON” looks like this when expanded.

In this case, the Actions get a list from an Excel sheet and then do something with it and, based on the title of the Scope, it appears to be generating JSON for Substitution. For someone relatively familiar with making Flows this should be enough to put them on the right track.

Maintainability

Let us look how we read the files we are merging and how we merge them. In the original Flow we read the file contents of all possible Word files we were looking to merge into our response,

and then added them manually to the Merge Action.

Also adding a little bit of script to see if that particular file was to be included.

This is hard to maintain (for any new file we need to add a Get File Contents Action, add it to the Encodian Merge Action and also add a tick box to our Trigger inputs), hard to read (the Get File Contents Actions alone span multiple pages on screen), and troublesome to scale (imagine setting up the Flow for 100 potential documents). So what is the alternative?

First of all, I moved the list of potential files and the flag on whether to include them into an Excel sheet in a central location.

In the Flow we read the Excel table list of files and loop through them, checking to see if the Include value is set to YES.
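
The Condition inside that loop is nothing fancier than checking the Include column of the current row (the Apply to each name is a placeholder):

equals(items('Apply_to_each')?['Include'], 'YES')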

The advantage of this approach is the list of files can change and the Flow is unaffected. We simply adjust the list on Excel and we are good to go.

Scalability

You will notice in the above image the final step is “Check if below message buffer size”. This refers to a limitation in Power Automate which hinders scaling the Flow to many documents.

Firstly, let us revisit the Encodian Merge CRM Word Documents step. You will notice a “T” in the top right of the configuration settings for the Action.

Clicking this toggles the configuration from an entry screen to JSON which looks like this.

If you are unfamiliar with JSON it is, for the purposes of this discussion, a structured formatting convention for the storage of information (I once likened it to XML because of this, much to the chagrin of a developer friend of mine). Because everyone agrees on the JSON standard, it is easy to pass JSON between Actions and, because it is formatted in a well-defined way, we can construct our JSON and insert it into the Encodian step afterwards, which is, of course, what our loop does.

In this case, the File Array is an Array variable and the loop keeps adding to it.
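
Each pass of the loop appends an object to that Array variable. The shape below is indicative only; the Excel column and Action names are placeholders and it is worth toggling your own Merge Action to its JSON view to confirm the exact property names it expects:

{
  "fileName": "@{items('Apply_to_each')?['File']}",
  "fileContent": "@{body('Get_file_content_using_path')}"
}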

The reason for the buffer check? It turns out converting the file contents of Word documents into a format which is friendly for Arrays blows out the size and Power Automate will only let you add elements to an Array up to a certain size. So, the buffer check gets the length of the Array and the length of the new element to be added and checks whether combining them will be too large. If it is, a TEMP file is used to store the file contents and the Array is reset.
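
As a rough sketch of that size check (the Action name and the limit value are placeholders, not the actual Power Automate limit):

less(add(length(string(variables('FileArray'))), length(string(body('Get_file_content_using_path')))), 100000000)

If this comes back false, the combination is too large, so the TEMP file branch runs and the Array is reset.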

If it is not completely clear what is happening here, I merge the files already in the JSON Array variable (Merge Array Documents) and then merge this with the existing TEMP file I have in a Teams folder, rewriting back to the TEMP file with the additional content. In short, every time we hit the Array buffer limit, we merge and write the extra parts to a TEMP file.

Why don’t I simply merge for each file in the loop and not worry about building an Array in the first place? Because the licensing plan for Encodian gives me a finite number of Action calls per month, so doing this, say, three times as our Array becomes too big is better than doing it 30 times for the 30 Word documents I am looking to merge. This consideration of Encodian Action calls falls more under Performance so we will return to it later.

Reusability

The use of an Excel file as configuration for the Flow also has the advantage that the Flow “engine” can be used elsewhere and, the more settings we put in the Excel file, the less rework we have to do to make it function. There is still some work to do on this front in the Flow. While the specific files used are now outside of the Flow, I still specify the Site Address and part of the File Path when retrieving the TEMP file. Ideally these would be stored in the Excel file and retrieved at the start of the Flow. In an ideal world there would be no settings in this Flow other than the location of the Excel configuration file.

To then transplant the Flow elsewhere would just need a copy of the Flow and the Excel configuration file in a convenient location. The only change needed to the Flow would be to point it to the Excel configuration file and you are good to go.

Performance

Performance is traditionally thought of as how quickly something runs. In this case the Flow speed is not a big consideration. The Flow takes about 10 minutes to merge 30 or so documents and that is ok. A bigger performance issue, as alluded to earlier, is the minimization of Encodian Action calls.

At the moment the Flow uses three Action calls at the end to merge the final File Array with the TEMP file and then perform a Search and Replace of pre-defined text substitutions (also configured in Excel in the Substitution worksheet with the JSON for the Action defined in the first Scope of the Flow, mentioned at the start).

It also costs us another two Action calls every time we reach the Array buffer limit and need to write to the TEMP file which is the loop we explored above. So, for a document which reaches the Array limit, say, three times, the Performance is 2*3+3 = 9 Encodian Action calls.

Defining Performance as a Flow which uses the least number of Encodian Action calls, I can see the possibility of reducing the number of calls further by writing a separate TEMP file each time we hit the Array buffer limit rather than merging them as we go. So, by the end of the major loop we might have TEMP1, TEMP2, and TEMP3 files and we merge these at the end. In principle, this would bring our calls down from 9 to 6 (1*3+3).

However, a new loop at the end to combine the TEMP files may be tricky to construct and make the Flow slightly less readable. This is an example where design principles are potentially in conflict and trade-offs need to be made.

Conclusions

It is very easy in Power Automate to write a Flow that works but is poorly developed. Refactoring ensures a Flow not only works but is also well designed and has longevity. As you practice building Flows and reflect on the values of Maintainability, Readability, Scalability, Reusability, and Performance, spotting poor design will become more intuitive. In these cases, consider how you can improve your Flow and refactor it. If you or someone else ever needs to revisit your Flow, I guarantee they will thank you for the effort.

Merging Word Files With Power Automate in Teams

With a quiet afternoon, following the holiday break, I set my mind to a task on my to-do list: creating a Power Automate Flow which merges Word documents into one combined document, all within Teams. How hard could it be? After all, it is two Microsoft Office files of the same type being merged into another file of the same type within Microsoft’s tool of choice for office collaboration. It turns out it is not straightforward but it is possible. This is how.

The Motivation

Working in presales I write a lot of responses to requests for proposals/quotes etc. as well as providing general capability documents to sales folk. It turns out a lot of the content needed to answer these requests is the same. Over time, we have developed Word documents which cover the main topics. We have these in folders and draw upon them, as needed.

Until now, putting a response together involved getting a blank Word document and inserting the content of these source files. The plan was for the Flow to eliminate the manual stitching.

What Did Not Work

Initially I thought Power Automate would have a raft of Word Actions to choose from. It turns out this is not the case.

Making a PDF or populating a Word template with text, no worries. File manipulation, not so much. Even SharePoint and OneDrive Actions came up short. File content extraction was possible but merging was nowhere to be seen.

I thought, maybe, I could extract the file contents from two source files and then create or update a new file in some way but this also failed. I expect it might be possible to convert the File Content objects to a manipulation-friendly format, strip out headers and footers and merge the content through script and turn them back into a File Content object for file insertion but that smelled a little too much like code and so I looked for an alternative.

Encodian To The Rescue

While I wanted to keep things all-Microsoft (and non-Premium), I soon started considering third party options. However, being cheap, I also wanted something I did not have to pay for. Encodian came through with the goods.

This was the library of Actions I was expecting to have been built natively by Microsoft but beggars cannot be choosers.

With an Encodian Free Plan allowing 50 Action calls per month, this was going to meet my needs.

With the Actions I needed, I started putting the Flow together.

Where The Flow Appears

My hope was to call the Flow from the Teams folder but this proved impossible. While I could add the Power Automate tab to a Team, every time I hit the “New flow” button and created a Flow it would not appear.

Fortunately, I could see my Flows from the Power Automate icon down the left-hand side of Teams, which was fairly easy to add in (although I still needed to click the “View all flows” button to make them appear).

From here I could manually trigger my Flow “Create Response”.

The Trigger

The “Manually trigger a flow” Trigger did the job for me and the ability to add inputs also made life easier. Here we have:

  • Client Name: This is a text input used to set the final file name and to make text replacements through the document
  • CRM or ERP?: This is a text input (with drop-down options) as some templates have a CRM and ERP version
  • YES/NO options: These define which Word documents to pull into the final response document.

A word of warning here: while the inputs appeared as expected when triggering the Flow, once a YES/NO option was toggled on, it was impossible to untoggle it. If you no longer wanted a source document to be added in, you had to cancel the trigger and re-run the Flow. I tried to use text inputs instead but could not get the Power Automate Condition Action to accept <<input>> = “YES” as, for some reason, it did not accept the string input as being a string. Perhaps there is a simple fix for this but, as I could get it working with the YES/NO, I moved on. The enemy of progress is perfection.
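
For what it is worth, one thing I would try next time (untested, and the input reference is a placeholder) is putting an expression on the left side of the Condition rather than the raw dynamic content, for example:

equals(trim(triggerBody()?['text_1']), 'YES')

but, as I said, the YES/NO toggles got me over the line.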

Another word of warning here. The inputs appear in the order they were created. So, if they appear in the wrong order for you, your options are deleting and recreating them (and fixing up your Flow to point to the new input) or extracting the Flow, hacking the Zip file and reimporting it.

Get The File Contents

Next was to get the File Contents from all the source files. For this, I used the SharePoint “Get file content using path” action. This allowed me to browse the Team file structure to find the source files. I also had a placeholder blank Word document which I used a little further on in the Flow.

I found the final result was more stable if I also toggled the “Infer Content Type” advanced setting to “No”.

ERP Or CRM?

Next I split the Flow to construct the final document based on whether it was a CRM or ERP response.

The developers (both of you) who read this may be uncomfortable with this approach and that is fair enough. Grabbing all the file contents first and then splitting the process into two paths with a lot of commonality between them means we are grabbing file content we will not use (we will not use ERP file content in a CRM response, for example), and it also means maintenance will likely have to be done twice in the two branches if there is an adjustment which applies to both types of responses. In my defence, my intention was to build a proof of concept so optimisation was a secondary concern. I do have it on my list to “refactor” the Flow down the track. One thing to consider is, of course, that we only get 50 Encodian Actions per month so, whatever our final design for the Flow is, we want to minimise the number of Encodian Actions.

Constructing The Document

The two branches follow essentially the same process so I will break down just one. To merge the content for the final document, we use the Encodian “Merge Word Documents” Action.

For each File Content previously extracted, a little bit of script is used:

if(triggerBody()['boolean_3'],outputs('Get_00_Xxxx_Xxxxxxx_XXX_Xxxxxxxx')?['body'],outputs('Get_Blank_File_Contents')?['body'])

where:

  • triggerBody()['boolean_3'] refers to the input parameter
  • outputs('Get_00_Xxxx_Xxxxxxx_XXX_Xxxxxxxx')?['body'] refers to the relevant file content
  • outputs('Get_Blank_File_Contents')?['body'] is the default blank Word document file contents I mentioned earlier

This sees if the Trigger toggle was activated for that section of the document and, if so, brings it into the response. You can construct the above script using the Expression builder and referencing Dynamic Content, which makes life easier. Another small bug I noticed was the Expression builder was not popping up unless I put the Power Automate Builder in its own window with the “Pop out app” button in Teams – Power Automate.

Next I created the response file using the SharePoint “Create file” Action.

The File Name is simply the Client Name input with “.docx” concatenated to the end.

concat(triggerBody()['text'],'.docx')

Again, this was fairly easy to construct with the Expression Builder.

The File Content is from the previous Encodian Action.

Feeling confident, I then used Encodian’s “Search and Replace Text” Action to set the client name in the final document. In our source documents, we used “[Client]” to refer to the client name so it was a case of taking the input Client Name and replacing everywhere “[Client]” was mentioned.

The Action can handle PDFs and Word documents so the File Type is specified, the Filename uses the same concatenation script as the file creation step. In writing this blog I noticed I used the same File Content reference as the file creation step and, therefore, we can probably remove the file creation step and create it after the text replacement. More refactoring.

As can be seen above, the Phrases Search Text is what we are looking for in the document and the Phrases Replacement Text is what we put in its place. In this case we replace it with the client name we got from the Trigger input.
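
In rough terms, then, the Action configuration looks like this (field labels as described above; the File Type value is whatever the Action’s drop-down calls Word documents, and the Filename expression is the same one used when creating the file):

File Type: DOCX
Filename: concat(triggerBody()['text'],'.docx')
Phrases Search Text: [Client]
Phrases Replacement Text: triggerBody()['text']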

Finally, we need to insert our manipulated file content back into the response file which we do with a SharePoint “Update file” Action.

The File Identifier is the Path from the “Create file” Action and the File Content is the content from the previous step.

The End Result

Once set up, you can go to Teams and the Power Automate icon on the left, bring up your Flows, and run the Flow from the ellipsis (three dots) on the right.

Up pops the trigger inputs which we complete and press “Run flow”.

After a little while (less than 10 minutes, usually 1-2) a new file will be created in the folder specified in the Create file action. This will combine all the source files selected and make the text replacements specified.

If anyone knows fixes for the bugs mentioned or other tips for merging Word documents, feel free to add them in the comments.

Fixing Dynamics Licensing: Dynamics 365 App for Outlook

It is a poorly kept secret that Microsoft Business Applications licensing has some problems. This is a shame because the potential for the technology is immense and it is a constant source of frustration for those of us who work with the tools when innovation gets hampered by Microsoft licensing.

The classic problem is internal use rights for things like raising tickets. I mentioned this in my recent Pay-As-You-Go article where the new Pay-As-You-Go plan for Power Apps goes some way to fixing the frustrating solution that is/was Team Member licensing.

In this article I will tackle an issue that is actively preventing organisations moving to the Power Platform. I am sure other Microsoft partners have come across this issue as well and Microsoft needs to make it easier to transition to Power Platform, not harder.

A Common Business Problem

Organisations without a formal CRM system often cobble together an informal “BONA” system (Business Cards, Outlook, Napkins, Access) to manage CRM activity but, as they grow, they realise they need something more centralised. BONAs work for individuals but make it hard to collaborate among employees and to get a detailed picture of a customer’s interactions with an organisation.

Activity management (also called Contact Management) is the first step towards managing this. Microsoft used to have the Business Contact Manager and this was then replaced with the Outlook Customer Manager but now there is no formal replacement.

The obvious choice is something built on Microsoft’s Dataverse.

How Dataverse Addresses the Challenge

For the purpose of this conversation, Dataverse is a stripped-down CRM database which applications sit on top of. Those applications can be ones pre-built by Microsoft (Dynamics 365 Modules) configured using the no/low-code Power Platform, or built from scratch with code, connecting to the Dataverse Web API.

Dataverse has a simple set of tables at its core which allow the management of business interactions. These tables include:

  • Accounts (organisations a business interacts with)
  • Contacts (individuals a business interacts with)
  • Activities (out of the box these include meetings, tasks, and phone calls)
  • Notes (and Attachments)

What is more, Microsoft have developed the Dynamics 365 App for Outlook. The description for the app speaks of the Common Data Service (CDS) which is the old name for Dataverse. This app allows a user to push emails, Contacts and meetings to Dataverse so it is centrally managed and visible to the wider organisation. Users can also refer to the information in Dataverse, as they write emails or set up meetings with customers, to see what other employees have done with those customers. The Dynamics 365 App for Outlook is the obvious replacement for the Business Contact Manager, the Outlook Customer Manager, and those BONA setups. We can see activity based on employees, customer contacts, or across entire customer organisations.

Licensing

The problem comes when we scratch the surface of licensing. Just what do we need to run the Dynamics 365 App for Outlook without running afoul of Microsoft licensing? The AppSource page is not clear, other than saying the product is free.

Fortunately, Microsoft provide lengthy tomes on their licensing. You can find the one for Power Platform here. 32 pages of fun which mentions Outlook three times but not the App. The other licensing guide which can help in these situations is the Dynamics 365 Licensing Guide. Twice the size but it does mention the App.

In short, if you have a Team Member license or a license for a Dynamics 365 Module, you are allowed to use the Dynamics 365 App for Outlook (see the first line in the table below).

So, for a small business dipping their toes into the Power Platform waters, a Team Members license is the obvious choice. It is hard to nail down formal pricing for a Team Member license but you are looking at around US$8 per user per month (compared to US$50 per user per month for Customer Service Pro).

Here is the problem. While Team Members gives us access to the Dynamics 365 App for Outlook, it only gives us read access to the Account table (also in the above table). We get full access to Activities, Contacts, and Notes but Account is the notable exception and the deal-breaker. Microsoft have moved from free tools for Contact Management through to requiring US$50 per user per month to achieve the same outcome.

The other option is to get a Team Members license and a Power App license, given a Power App license gives you full access to the “Core” Dataverse Tables, also known as the standard tables. A license for a Power App is US$5 per user per month. From the Power Platform Licensing Guide:

The combined Team Member + Power App license option may be tempting until we realise that configuring the Outlook App to meet our needs (e.g. adding a field to the Contact form which we can see in Outlook) is very restricted and described in undecipherable detail in the Dynamics 365 Licensing Guide’s Appendix C. In short, forget it.

With this confused and twisted licensing model, incrementally built up as products have evolved, Microsoft have raised massive barriers to small businesses looking to migrate to Power Platform for Contact Management. Microsoft are driving these organisations to competitors and decimating the organisation’s lifetime value to Microsoft’s bottom line.

I have literally done half a dozen demos of this functionality in the last few months to public and private organisations looking to manage their stakeholder interactions better and, as soon as the conversation comes to price and the insanity of the above licensing, the spark of joy of seeing exactly what they need fades from their eyes and is replaced with a phrase passing their lips around the theme of “<insert expletive> Microsoft licensing!”

The Obvious Solution

In this case, there are two obvious solutions. Option one is to give Team Members full access to those Standard tables in Dataverse, what are called the Common Data Model Core Entities/Tables.

If this was the case, a user would only need a Team Member license for Contact Management using Outlook and Dataverse.

The other option would be to grant Power App licensees access to Dynamics 365 App for Outlook. Then they would only need this license to smoothly transition to the world of the Dataverse. In fact, it could be argued that, with these kinds of adjustments to the Power App licenses, the Team Member license could be scrapped altogether. I am not saying I know Microsoft staff who would break down and cry tears of joy if the Team License disappeared but, if it did disappear, I might be taking a few boxes of Kleenex with me to hand out at the Microsoft offices.

With full access to the App and Dataverse, so many problems go away and the transition for a small business onto Power Platform and Dataverse becomes a simple process. Once there, their use of the services and features of the ecosystem will only increase. Whether it is turning on Dynamics 365 modules, building Power Apps, or making use of the Azure Services which are readily plugged into Dataverse through configuration, Microsoft would see revenue from these otherwise alienated customers and live up to their motto “to empower every person and every organization on the planet to achieve more”.

Fusion Development: Awesome or Loathsome?

I have seen a few “robust discussions” among developers about Microsoft’s take on Fusion Development. I thought I would review what it is and where I think Microsoft is on the right track.

What is Fusion Development?

The high-level notion of Fusion Development is familiar to anyone who has developed/implemented a software application in an Agile framework; developers and users/Product Owners working together to achieve a shared vision. The key difference, as I see it, is the Product Owner evolves from simply dictating the requirements and approving the development outcomes to being actively involved in the development process. Rather than sitting outside the development team, they become part of it.

Microsoft has some learning modules on Fusion Development where they describe the ideas behind it. This module has the below diagram, which captures the concept pretty well.

VanArsdel fusion dev team organization.

Tools like the Power Platform have encouraged this evolution by effectively merging prototyping tools and development tools into one. A user can put together a rough frame of what their vision for an application is and, when they reach the limits of their capabilities, a developer can take over to smooth off the rough edges and make it practical.

The biggest complaint I hear from developers about letting users loose on the tools is their ignorance of development practices. Similarly, I hear of organisations letting users loose on tools like Power Platform to build apps to meet their needs with disastrous results. While great apps are built, without governance in place, there is the risk that an app will fall foul of internal policies, or worse, the law. In the worst-case scenarios, the apps have to be scrapped because, for example, Personally Identifiable Information is not being managed appropriately. No one wins. IT are seen as the destroyers of innovation and the users get a taste of something better only to have it taken away, demoralising them and discouraging them from engaging in future development.

This speaks, of course, to the need for good governance in terms of software development and data management. Developers who have concerns about cowboy user development can set up a framework for the users to work in. For data management, an administrator needs to configure the environment to ensure data moves in and out of the developed products appropriately.

At a high level, assuming governance is in place, the idea of bringing users and developers together onto a common platform seems like a good idea to me; a common platform to work on, and a common vocabulary of tools and components.

So what has Microsoft introduced to facilitate Fusion Development?

Comments in Development

Microsoft now allow comments in Power Automate and Power Apps.

Figure 2: Commenting in Microsoft Power Automate (generally available)

In this example, developer Rakesh is building the Flow but needs input on the design, presumably from a user. By adding comments, user Yuxing can respond to help out. If you have worked on a Word proposal in Teams with your colleagues, this scenario will be familiar. Different members of the proposal team know different information and, through comments, information can be shared and applied to the proposal. Assuming we can “@” people as we do in Word, this seems like a good way to collaborate.

Figure 1: Commenting in Microsoft Power Apps (generally available)

In this example, Alan has reached the limits of his knowledge and is having some trouble with importing data into the App. Alicia points Alan in the right direction by providing a link. In this case, Alicia is effectively coaching Alan by pointing him to resources to extend his skills. Again, a great way to use comments.

This could also work well in training lab scenarios where students add comments for the trainer to review and help them as they develop. While they are waiting for the trainer to respond, they could move onto another section of the lab tutorial. I can also see this potentially enabling remote pair programming, although it could become annoying if someone uses comments to stay hands-off yet micro-manage the development process, becoming a fly that constantly needs swatting away.

Co-Authoring

The feature I have seen the most resistance to among developers is co-authoring. This is the equivalent of the almost-real-time editing of the Teams Word proposal. Only, based on this animated gif from the Power Apps Blog, it seems there is no auto-save and changes only appear after a manual save. I can see this causing even more sync clashes than the Word equivalent does.

The blog makes it abundantly clear this is very much an experimental feature, rather than part of a well-crafted vision for the product. Given the reaction I have seen among devs, even if it remains in the product, I do not think it will be extensively used. It reminds me of the children’s game Telephone where the to and fro of development will lead to a murky mess or of the old Google Chrome Multitask April Fool’s Joke where the promise is greater productivity but the reality is something different.

Overall Thoughts

I think the lowering of barriers between users and developers is a good thing; the more they communicate, the better the outcome, and Power Platform gives developers and users a common language to communicate with. The comments feature I see as having the potential for improving a lot of aspects of Power Platform development, both in terms of project collaboration as well as for training.

The co-authoring feature I am less enthusiastic about but, perhaps, I am missing something. Feel free to leave a comment if you see co-authoring as helping rather than hindering, or if you see it becoming useful in the future as it matures.

What I Like About the New Power Platform Pay-As-You-Go Meters

One of the big announcements for me at Ignite was the ability to link Power Platform to Azure for managing consumption and charges.

The announcement covered three areas (App Usage, Storage, and Requests). Here is what I think it means for the Power Platform.

For the quick version of what was announced, here is a summary below, care of Microsoft docs.

Pay as you Use: The Per App Meter

The Per App meter adds a third column to our usual Power App pricing.

Clearly the new Plan is designed for infrequently used applications but it also goes some way towards solving a problem that has plagued Dynamics since v4: the problem of internal use.

An obvious use for Dynamics 365 Customer Service is for internal support e.g. an IT helpdesk. The problem is Microsoft has always been hard-nosed about licensing internal users. In short, if an employee, or someone whose actions are “guided” by the organisation (e.g. a contractor), accesses Dynamics, regardless of the interface, they need a user license. While there are concessions about accessing Cases of one’s own creation with Power App, Power Automate, and PVA licensing (check disclaimer (1) here for details), until this new Plan came along, effectively every user needed to be licensed for the App, whether they used it or not.

For small organisations, this was not such a big deal but for larger organisations, it was always a deal-breaker, sending prospects to competitors such as ServiceNow who do not charge for “Requesters”. Microsoft used to offer heavily discounted licensing for direct ServiceNow compete scenarios but I have not seen this being offered for a long time.

With the new model, Microsoft partners no longer have to walk away from internal support opportunities. Let us say an organisation has 20,000 employees. Rather than pay US$5*20,000 on the off-chance an employee logs a ticket, they will pay based on request volume which, for larger organisations should be fairly predictable.

There are scenarios which still fall through the cracks e.g. a Personal Assistant logging a ticket for an executive needs, by the letter of Microsoft law, a full Dynamics 365 Customer Service license if the protected Case entity is being used. Also, a spike in tickets on the same subject could prove quite expensive.

To show how ridiculous the licensing model is, the workaround is to use emails. If an employee sends an email to a Dynamics queue this is not considered accessing the system. So, a theoretical workaround would be to get users to email support requests, rather than use a self-service portal. We might even be able to format the body of the email so it could be read by a Flow and capture structured data.

Another, less common scenario, which falls foul of Microsoft licensing, is where an infrequent specialist is required to resolve a Case. Where I came across this was at a large university which processed student enquiries. In some cases it was necessary to seek out an academic for advice. Academics, being simple folk, did not need to access the entire Dynamics 365 system. A simple App which listed the Cases for review and allowed them to pass on their thoughts was enough. However, as with the internal tickets, this meant literally every academic needed to be licensed in case they were called upon. Again, the new licensing model goes towards addressing this.

I have often wondered if the approval function of Power Automate triggers the need for a license given it can pass the requests via email. I have raised this a few times with different people at Microsoft with a wide spectrum of responses ranging from “you might be onto something” through to “nice try, Leon”.

For me, the takeaway is any licensing model which encourages workarounds (SharePoint multiplexing, anyone?) and hinders innovation is a broken licensing model which makes the vendor look confused and out of touch.

Pay as you Store: The Capacity Meter

Not too much to say on this one. I like the idea of the shared tenant capacity pool but, as Jukka points out, turning the meter on turns the pool off and replaces it with a baseline capacity (see picture further up) with charges for anything over this. Of the three meters I can see this one generating the most nasty surprises due to data imports or aggressive file attaching.

Pay as you Request: The API Meter

This is one I like the most and, this being the Thanksgiving weekend, we have an example to see why. Let us pretend my Etsy shop relies on Power Platform in some fashion and every sale I make generates calls to the Power Platform API. Through the year the usual entitlement is sufficient but let us say the Black Friday sales cause a run on pocket watch pill boxes and insulin pen cooling pouches (they also work great for epi-pens) and I exceed the request limit. What do I want to happen?

One option would be for Microsoft to throttle or turn off the pipe. Given the volume of sales, this is undesirable as it could lead to timeouts and lost sales. Linking to Azure means the excess calls get charged but, given the calls are directly associated to sales, I do not really care as I will be making money anyway.

Interestingly, at the time of the announcement of the public preview of this feature, those who turned it on were not going to get charged for excess calls for at least a few weeks. Perhaps Black Friday and Cyber Monday will be covered after all.

Last Thoughts

So there you have it, new ways to pay for Power Platform using Azure dollars which, if you have a monthly amount set aside, may well be a compelling option. In terms of starting to fix the infrequent internal user issue, it is a step in the right direction. Also, in terms of providing a clear answer to how Microsoft wants organisations to manage excess API calls, we now know where we stand.

For those of us who design solutions on the Power Platform, it adds slightly more complexity, as it requires us to know details of our clients’ Azure subscriptions which may or may not be accessible but, overall, I think it is a positive move.

The Voice Channel Rises as the Call Center Falls

Microsoft have formally announced the release of their VoiceBot as part of their low code/no code Power Platform at Ignite 2021. For me, this is very exciting although, conceptually, it is quite simple. Microsoft’s configurable text-based chatbot, in the form of Power Virtual Agents, has been around for a while now and this new VoiceBot takes that foundation and links it to Azure Cognitive Services to convert text to speech and vice versa. Behind the scenes it is a remarkable achievement. We literally have a configurable bot capable of generating and processing natural speech, and which can interact in real time with a human.

So what does this technological achievement mean and why do I claim it is the death of the call center?

Why We All Hate Call Centers

No one gets excited about talking on the phone to a call center. There is rarely anything pleasant about the experience. Let us consider a typical scenario for a customer.

Finding the Phone Number

We look up the call center number for the organization we are trying to reach. Often the number is purposely buried in some dark corner of the company’s web site because they want the customer to choose practically any other channel to answer their enquiry. Why? Because speaking to a human on the phone is very expensive.

Forrester looked at the cost of different channels over ten years ago and found call center support cost $6-12 per contact compared to, say, web self-service which cost literally 100 times less. No wonder the companies make it difficult to call.

Wading Through the IVR Swamp

The above table gives us an indication of one of the reasons we are immediately encouraged to press keys before speaking. If a customer’s question can be answered through IVR/DTMF touchtones and automated recordings, it is still at least 20 times cheaper than speaking to a human. Another reason for IVR use is the company wants to filter out the high-value customers from the low-value ones. For example, if you are looking to make a purchase, the company will not want to keep you waiting compared to, say, a support call where the customer is a captive audience. Whatever the reason, assuming our call needs that human touch, we need to get through the IVR obstacle course before we are permitted to speak to a human.

On Hold, But Your Call is Important to Us

Next comes the wait and that music. The smarter companies offer a call-back service but, certainly in my experience, these are the exception, not the rule. The lazy ones, again, encourage you to abandon all hope and return to the web site for answers. Why the wait? To limit costs, a balance is struck between how long a customer is predicted to tolerate waiting and how many call center staff the company is willing to spend money on; the more staff, the higher the salary costs and the more expensive the service.

The Language Barrier and a Lack of Localization

To reduce costs, many companies outsource their call center overseas. This means, almost by definition, the call center will be populated with people for whom English is a second language. The language aspect may cause difficulty although, in my experience, things have improved on this front a lot over the years; the days of having to spell “Leon” are mostly behind me unless it is a company that has really gone cheap on their customer service.

Another consequence, which is harder to overcome, is a lack of localization. This means clarification questions are asked which are unnecessary for a local call center. For example, pretty much every Australian knows where the “Gold Coast” is but an overseas operator may still ask for the state.

Assuming these aspects are overcome, the result should be that the customer’s issue is resolved hopefully on the first attempt.

How the VoiceBot Helps

With a VoiceBot, almost all of these pain points are removed. We see from Forrester’s table that, over ten years ago, a virtual agent was ten times cheaper than a human. I am not sure what constituted a virtual agent in 2009 (primitive chatbot perhaps?) but a coded chatbot would not have been cheap so I think it is reasonable to expect a configurable Power Virtual Agent will be the same cost or cheaper and therefore a compelling economic alternative to a human call center agent.

Cheaper means many of the concessions made above, at the expense of the customer experience, are no longer necessary.

First of all, we do not need to hide the phone number and encourage other channels. The phone number can be as prominent as the web site text-based chatbot which, in all likelihood, will be running on the same configured engine as our new VoiceBot. The customer could use the chatbot and get the same results but the decision is the customer’s, as it should be.

The IVR swamp can be drained. Microsoft’s chatbot is IVR-aware but I think this will become less relevant when the customer can simply say what can be typed and be perfectly understood.

The waiting and listening to Muzak will also disappear because scaling an army of VoiceBots is a lot more affordable than running a call center populated by humans.

Issues of language and localization should also diminish as VoiceBots become more sophisticated. While in the early days of voice recognition my (mostly) Australian accent proved troublesome, it is not the case today and localization, in many cases, will be a Google/Bing search away. Alexa, as an example of how things have progressed, is now conversant in Australian slang.

No Longer a Need for Humans?

Of course, as anyone with a bot in their home will tell you, VoiceBots are unlikely to be perfect for quite a while and, where the VoiceBot ends there will still need to be a human waiting. However, setting up a call back service will be trivial and, given Power Virtual Agents can be retrained on previous encounters to improve intent recognition, I believe the need for humans will significantly diminish.

If we think of a traditional technical support setup, Level 1 support (agents with limited technical knowledge following scripts) will disappear the quickest. Any script a human can follow, a bot can follow as well. Level 2 requires in-depth knowledge of the product, which is a product manual away; while a human struggles to sort through large volumes of text quickly, this is trivial for a bot. So, as I see it, in the short term Level 1 and some fraction of Level 2 will be the easiest to replace, significantly reducing call center headcount and potentially bringing many call centers back onshore, populated by a handful of deep technical experts.
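
To labour the point, here is a toy example (my own illustration, not a Power Virtual Agents feature) of a Level 1 troubleshooting script expressed as a simple state machine. Whether a human or a bot walks it makes no difference to the outcome:

```python
# A hypothetical Level 1 troubleshooting script as a tiny state machine.
# Each state holds the question to ask and where each answer leads.
SCRIPT = {
    "start":      ("Is the router's power light on?",
                   {"yes": "check_wifi", "no": "plug_in"}),
    "plug_in":    ("Plug the router in and wait 60 seconds. Is the light on now?",
                   {"yes": "check_wifi", "no": "escalate"}),
    "check_wifi": ("Can your laptop see the Wi-Fi network?",
                   {"yes": "resolved", "no": "escalate"}),
    "resolved":   ("Great, you are back online.", {}),
    "escalate":   ("Let me hand you over to a Level 2 engineer.", {}),
}

def run_script(answers):
    """Walk the script, consuming one customer answer per question asked."""
    state, transcript = "start", []
    for answer in answers:
        question, branches = SCRIPT[state]
        transcript.append((question, answer))
        if answer not in branches:
            break                                 # terminal state or unrecognised answer
        state = branches[answer]
    transcript.append((SCRIPT[state][0], None))   # closing message
    return transcript

for line in run_script(["no", "yes", "yes"]):
    print(line)
```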

The Ultimate Outcome

The biggest win, though, is that the choice of channel is put back in the hands of the customer, rather than being dictated and compromised by economic considerations. If a customer chooses not to engage with the VoiceBot, they can request an escalation to a human straight away, although I think this will become less frequent as people learn that the bots can solve an increasing range of issues. The customer regains the power to control their experience and the company is not compromised in offering it. Both the company and the customer win.

What Jumps Out At Me From Dynamics 365: 2021 Release Plan 2

Standard

A new job and a new role (in presales) mean a bit more time to get “dirty with the tech”, and wading through the 400+ pages of the Wave 2 Release for Dynamics 365 is a good place to start.

I did a review for 2020 Wave 1 in February last year and one thing which has changed since then is the consolidation into one PDF. There is no longer a separate guide for the Power Platform versus Dynamics, which is, of course, indicative of them merging into a single ecosystem. Just as CRM and ERP are no longer divided but are simply a collection of “First Party Apps”, the Power Platform has now joined the gathering to form one cohesive Business Applications platform.

This is the future, so expect to hear people (especially if they work for Microsoft) talk more about Business Applications (or, if they are spelling it out, “Dynamics 365 and the Power Platform”) and less about CE, CRM, and ERP. All your base (products) are belong to BusApps.

So I will now read the guide as I write and call out the sparkly bits which excite me, and why they might be important to you, your Business Applications implementation, and your future strategy with the platform. I will focus on Public Preview (PP) and General Availability (GA) features. Where I do not know a First Party App well enough (or nothing excites me), I will leave it out.

Marketing

Create email content easily and efficiently with AI-based content ideas (PP Oct 2021)

I see adverts online for this kind of thing: AI recommending marketing copy and, from what I hear, the robots do a pretty good job. The feature generates content snippets based on key points, which the user then selects and massages as needed.

More and more, Microsoft are weaving AI magic into the Dynamics First Party Apps and this is a great example. In my opinion, this is where the real value of the First Party Apps resides: the linking of Model-Driven Apps with Azure AI to provide an unassailable competitive advantage to Microsoft’s offerings and to the organisations which employ the technology.

Deliver rich customer experiences across Dynamics 365, Office and other apps by augmenting customer journeys with Power Automate (PP Nov 2021)

Imagine adjusting a customer journey based on the weather, or based on local traffic conditions. This is now possible with the incorporation of Power Automate Flows into customer journeys. A lot of possibilities open up with this.

Create segments for leads and custom entities in the new segmentation builder (PP Dec 2021)

No longer limited to Contacts, we can now use the segmentation builder to create segments for Leads and other “person-like” tables.

Sales

Lead routing (GA Jan 2022)

Similar to Case routing in Customer Service, this routes Leads based on a rules engine. Historically, this is something we built using Workflows or Flows (here is one I built ten years ago). Rules can be based on Lead segment, Lead attributes, or seller attributes, with assignment done round robin or by load balancing, as sketched below. Combined with the new data hygiene features (GA Jan 2022), this will be a huge boost for organizations which generate opportunities from large Lead lists.
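
Here is that sketch: a toy Python illustration of the two assignment strategies (my own, not how the actual rules engine is implemented), assuming a made-up list of sellers and their current workloads.

```python
# Toy illustration of the two assignment strategies: round robin vs load balancing.
from itertools import cycle

SELLERS = ["Alan", "Priya", "Mei"]                 # hypothetical sales team
WORKLOAD = {"Alan": 4, "Priya": 1, "Mei": 2}       # current open Leads per seller

def assign_round_robin(leads):
    """Hand Leads out in strict rotation, regardless of workload."""
    rotation = cycle(SELLERS)
    return {lead: next(rotation) for lead in leads}

def assign_load_balanced(leads, workload):
    """Give each new Lead to whoever currently has the fewest open Leads."""
    workload = dict(workload)                      # copy so we do not mutate the input
    assignments = {}
    for lead in leads:
        seller = min(workload, key=workload.get)
        assignments[lead] = seller
        workload[seller] += 1
    return assignments

new_leads = ["Lead A", "Lead B", "Lead C", "Lead D"]
print(assign_round_robin(new_leads))
print(assign_load_balanced(new_leads, WORKLOAD))
```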

A shedload of Teams integration features (mostly GA Jan 2022)

This blog is going to be long enough as it is so I will not list all the “do this Dynamics thing from Teams” and “do this Teams thing from Dynamics” but suffice to say Microsoft are bringing these products together in a big way. Hint: Expect lots of announcements at Ignite.

Service

Modern control for subject entity (GA Oct 2021)

The Subject tree has not really changed since the days of Microsoft CRM v4 (thank you SeeLogic for this trip down memory lane).

Thankfully, it has now had an upgrade and looks a lot like the asset location tree for those familiar with Field Service. Thank you to Nishant Rana for this review of the new feature and this handy screenshot.

Lots of Omnichannel Voice Features (GA Nov 2021)

Omnichannel is moving towards being a proper call center solution. Outbound calls will be supported, using Azure Communication Services, and agents will be able to put calls on hold, consult other agents, or transfer the call. Azure Communication Services is now Dynamics’ in-built voice provider, whereas previously a third-party telephony service had to be stitched in.

Call recording, call transcripts, and sentiment analysis, along with reporting analytics, will also be available. Again, Microsoft is using their AI services to make their First Party Apps a bit more magical.

Supervisors can also shadow calls and review the live transcript. If required, they can also participate in the call to keep it on course.

Intelligent voice bot via Power Virtual Agents and Microsoft Bot Framework (GA Nov 2021)

I am very excited about this. Power Virtual Agents is a configurable text chatbot, and a pretty great one. This feature gives the chatbot a voice. The bot can answer questions 24/7 and, when it fails the Turing Test, hand over to a human with the full transcript and context.

Bring your own data to timeline (GA Oct 2021)

This is a bit of a dark horse of a feature but I can see it being immensely useful. In essence, external data can be exposed on the Timeline (the activity timeline box you see on Accounts, Contacts, etc.) via virtual entities. So, for example, time-relevant financial data from Dynamics 365 Finance, such as when payments were made or when they are due, could be shown on the timeline. Similarly, if you could figure out a way to bring the data in as a virtual entity, you could expose a Contact’s LinkedIn job history on the timeline. Lots of opportunities for this one.

Field Service

Enable customers to schedule service visits with a simple web experience (GA Oct 2021)

I assume this is the self-scheduling feature which has been previewed as part of the Microsoft Cloud for Healthcare.

[Image: the welcome screen of the Contoso Healthcare app on a mobile phone, and the new appointment scheduling screen on a tablet.]

I am very happy to see this in General Availability as creating a self-service scheduling feature for Power Apps Portal used to involve a lot of code and a prayer. Dynamics 365 Field Service resource setup is needed to match customers to technicians but it is a small price to pay for such a useful feature.

Finance

Create collections activities based on payment predictions (GA Oct 2021)

This combines the new payment prediction feature with automated creation of collection activities. This allows payment-chasing efforts to be optimised and, potentially, cash flow to be increased by chasing repeat offenders early.

Forecast bank balance and treasurer workspace (GA Oct 2021)

Microsoft continues to bring predictive and forecasting capabilities into Finance. This feature allows for cash flow forecasting so the business knows how much cash it will have on hand and when, making the future allocation of funds much easier and more reliable.

Combined with the new Treasurer Workspace, which allows for forecast snapshots for comparison to actuals, businesses are getting some seriously powerful tools for managing their bank balances.

Intelligent budget proposal (GA Oct 2021)

Creating a budget at any level of an organisation is a tedious manual process, but it does not need to be. This feature puts together a template budget based on historical spending, which can then be refined to reflect the upcoming needs of the organisation. This feature alone will save organisations a fortune in the hours regularly wasted chasing up numbers and making “best guesses”.

Commerce

I simply do not know Commerce well enough to know what is exciting and what is not but the segmentation based on location, device type etc. looks interesting, as does redirection based on geolocation.

Project Operations and Human Resources

Obviously, it is easy to see where Microsoft are making their investments by the sheer volume of features added to a First Party App in a given release. For example, Service had close to 40 pages of new features in the release (10% of the entire document). In contrast, Project Operations and Human Resources have only a few pages. I am not saying these products are going anywhere, but either they are already too perfect to improve or they are not at the top of Microsoft’s list for attention. Another litmus test for Microsoft’s focus is the number of sessions devoted to a product at events like Microsoft Ignite. My guess is Project Operations and Human Resources will not have too many sessions.

Guides and Remote Assist

Just a handful of pages here as well which is a pity because this is really exciting technology. Hopefully, there will be more to come in the future.

Power Apps

Intelligent authoring experience with Power Apps Studio (GA Oct 2021)

A while ago I saw a demonstration where Microsoft had trained an AI to write code using public GitHub repositories. Effectively, you wrote the comment and the AI built the code around it. It was very impressive. This capability has now come to Power Apps: you write natural language and Power Apps Studio generates candidate code for you. It is a great way to save time and to teach novice coders what they should be typing.

You can also provide an example of the formatting you want and Power Apps Studio will create the Power Fx code to enforce that format.

Relevance Search

It would not be a release without a name change. Relevance Search is now Dataverse Search.

Reinvented maker experience for configuring model-driven apps for offline use (GA Dec 2021)

This is very cool. Previously, to enable a model-driven app for offline use you had to activate offline for each table used by the app. Now we can enable it at the app level. One toggle and all relevant tables become offline capable.

Manage everything about solutions and tables in a modern way (GA Oct 2021)

No more Switch to Classic!! The maker portal is getting parity with the classic experience. Goodbye solution explorer, hello new fully-featured maker portal.

Microsoft Dataverse

Microsoft Dataverse search can search through file data type (GA Oct 2021)

The one search to rule them all can now search through files stored in Dataverse, much like the SharePoint Enterprise Search of old. This is great and will be really useful for finding records based on their attached files.

Microsoft Dataverse data archival (PP Mar 2022)

This is very exciting but still some time away. Arguably, a flaw in Dataverse is the lack of proper archiving. In effect, to meet an organisation’s archiving policy today, it must be set up with integration and code or some very, very clever Power Automate Flows. If Microsoft build an archiving engine into Dataverse, this will save a lot of development and make answering RFPs which ask about archiving a lot easier.

Delete and remove users with disabled status (GA Oct 2021)

I expect quite a few Dynamics administrators will shed tears of joy over this one. No longer are we stuck with useless disabled users in Dataverse. Users can be purged along with the historical records associated with them.

Conclusions

As usual, there is a wealth of innovation in the new release, even more AI integration, and the occasional patching of a hole of missing functionality which probably should never have been there in the first place. Overall, I am impressed with what has been produced and look forward to the additional enhancements to be announced at Ignite.

Breaking Modern Encryption With a Toilet Roll: An Introduction to Quantum Computing

Standard

Thanks to COVID-19 virtualising the Microsoft Build conference this week, I got to attend it for the first time. There were many great talks but the ones of particular interest to me were on quantum computing. Microsoft is now entering the world of quantum computing with their Q-sharp programming language. We may not have commercially useful quantum computers yet but, when we do, Microsoft plans to have the tools ready to make use of them.

Inspired by those presentations, this blog will explain why quantum computing is useful, a subject which is still deeply misunderstood by many.

My Interest in Quantum Computing

My background is a little unusual in that my education was originally in quantum physics. I even published a physics paper with my PhD supervisor and a fellow researcher 25 or so years ago. One benefit of that unfinished PhD was being exposed to exciting developments in quantum computing and quantum encryption. One of those developments was the invention of Shor’s Algorithm in 1994 (just two years before I put out that physics paper). Shor’s Algorithm sent waves through the academic community because it showed that, in theory, a quantum computer could break modern encryption. If a sufficiently powerful quantum computer could be created, no encryption would be safe. Arguably, it was Shor’s Algorithm and its implications that led to the commercial funding of quantum computer development from that time until now. Even though that was 25 years ago and there have been billions of dollars of investment since then, quantum computers still have a long way to go before they can crack modern encryption.

Modern Encryption

One would expect the encryption methods that protect our secrets to be based on deep mathematical concepts, inaccessible to all but mathematics professors, but this is not true. A lot of modern encryption is based on one simple fact: it is much easier to multiply two numbers together to form a bigger number than it is to take the bigger number and work out the two numbers that were multiplied (its factors).

For example, we know that 3 multiplied by 5 is 15 and, because most of us know our times tables, we can easily divine that the factors of 15 are 3 and 5. However, not as many of us can immediately reason that 221 is the product of 13 and 17. Scale this up and you have a system which can readily encrypt secrets but cannot be readily broken.
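
To put a rough number on that asymmetry, here is a tiny illustrative Python snippet (my own, purely for demonstration): the multiplication is a single step, while naive trial division has to grind through candidate divisors one at a time.

```python
# Easy direction: multiplying two primes is a single operation.
p, q = 13, 17
N = p * q                                   # 221, computed instantly

# Hard direction: recover the factors by testing candidate divisors in turn.
attempts = 0
for candidate in range(2, N):
    attempts += 1
    if N % candidate == 0:
        break
print(candidate, N // candidate, attempts)  # 13 17 12
```

Twelve attempts is nothing, but for the few-hundred-digit numbers used in real encryption the count becomes astronomical, even for classical algorithms far cleverer than this loop.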

Factoring with a Toilet Roll

Is there a way we can try lots of potential solutions at once to find the factors of a number? One way is with resonances in a tube. We know that if we blow across a pan pipe, we hear a note. This note is made up of the resonant frequencies of the pan pipe’s tube.

The physics of tube resonance is well understood.

[Image: tube resonance diagram, from http://www.ctgclean.com]

So, if we have a tube of length 221/2 mm = 110.5mm (about the size of a toilet roll), it will resonate with tones of wavelength 13mm and 17mm, among others. A tube open at both ends resonates whenever a whole number of half-wavelengths fits inside it, so our tube resonates at wavelengths of 221/n mm; the whole-millimetre wavelengths that resonate are exactly the divisors of 221. (Use inches if you prefer, it does not really matter, although an 11 inch tube would be closer to a kitchen roll.)

Now comes the clever part. Let us construct a sound using a synthesizer made up of tones of wavelength 1mm, 2mm, 3mm, and so on. We then play the sound through the tube and identify which tones resonate.

[Image: a Moog Grandmother synthesizer]

Unless we have pitch-perfect hearing, we might need some help identifying the wavelengths of the tones which resonate. We can do this with a spectrum analyzer. If you have ever seen a car stereo from the nineties, you will be familiar with spectrum analyzers. It looks like this:

[Image: a spectrum analyzer display]

Using a clever piece of mathematics called a Fourier transform, the spectrum analyzer takes the sound coming out of the toilet roll and breaks it up into its component tones. The resonating tones will be louder and appear as taller bars on the spectrum display.

Once we identify these resonant tones, we can convert them back to numbers and we have our factors. The whole algorithm, then: synthesize a chord of every candidate wavelength, play it through the tube, Fourier-transform what comes out, and read the factors off the loudest peaks.
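
If you would like to see the idea in action, here is a rough Python simulation. To be clear, it is a caricature I have put together for illustration (the gains, thresholds and sampling are invented rather than real acoustics), but it follows the steps above: build the chord, let the tube amplify whatever resonates, and let a Fourier transform pick out the factors.

```python
import numpy as np

N = 221                                   # the number we want to factor
tube_length = N / 2                       # our 110.5 mm "toilet roll"

# A tube open at both ends resonates when a whole number of half-wavelengths
# fits inside it, i.e. when 2 * length / wavelength is an integer.
def resonates(wavelength):
    n = 2 * tube_length / wavelength
    return abs(n - round(n)) < 1e-9

# Synthesize the chord: one sine wave per candidate wavelength (2 mm, 3 mm, ...)
x = np.linspace(0, N, 8 * N, endpoint=False)      # spatial axis in mm
output = np.zeros_like(x)
for w in range(2, 41):
    gain = 10.0 if resonates(w) else 0.1          # the tube amplifies resonant tones
    output += gain * np.sin(2 * np.pi * x / w)

# The "spectrum analyzer": a Fourier transform of what comes out of the tube.
spectrum = np.abs(np.fft.rfft(output))
freqs = np.fft.rfftfreq(len(x), d=x[1] - x[0])    # cycles per mm

# Loud peaks sit at wavelengths that divide 221 exactly.
peaks = freqs[spectrum > 0.5 * spectrum.max()]
print(sorted(int(round(1 / f)) for f in peaks if f > 0))   # expect [13, 17]
```

Run it and the two loud peaks come back as 13 and 17, our factors of 221.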

So what stops us pulling out a Moog synthesizer, a toilet roll, and an old car stereo, and unlocking the world’s secrets? The numbers we need to factor in modern encryption are really long: a few hundred digits. Using millimetres to define our wavelengths, we would need a tube longer than the width of the observable universe to crack them. That is a lot of toilet paper!

Shor’s Algorithm

Shor’s Algorithm resolves the problem by abandoning the toilet roll in favour of a cleverly constructed mathematical function, and by using a quantum superposition instead of our synthesized wave. Otherwise, the process parallels our own.

While the mathematics is complex, the idea is very similar to ours: we convert the problem into something we can work with, throw many possible solutions at it at once in such a way that the real solutions separate themselves out, identify them with a Fourier transform, and check that they work.
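
For those curious about what that “cleverly constructed mathematical function” looks like, here is a classical Python sketch of the number theory sitting underneath Shor’s Algorithm. The only step that actually needs a quantum computer is finding the period, which I simply brute-force below; everything else is ordinary arithmetic.

```python
from math import gcd

def shor_skeleton(N, a=2):
    """Classical sketch of the number theory inside Shor's Algorithm.

    The cleverly constructed function is f(x) = a**x mod N, which repeats
    with some period r. A quantum computer finds r by throwing a
    superposition of every x at f and reading the period off a quantum
    Fourier transform; here we brute-force it instead.
    """
    if gcd(a, N) != 1:
        return gcd(a, N), N // gcd(a, N)     # lucky: a already shares a factor

    # Brute-force the period r such that a**r = 1 (mod N).
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1

    if r % 2 == 1:
        return None                          # odd period: retry with another a

    # The factors hide in gcd(a**(r/2) +/- 1, N).
    half = pow(a, r // 2, N)
    p, q = gcd(half - 1, N), gcd(half + 1, N)
    return (p, q) if 1 < p < N and p * q == N else None

print(shor_skeleton(221))                    # (13, 17)
```

For 221 the brute force is instant; for a few-hundred-digit number it is hopeless, and the quantum period-finding step is the whole point of the algorithm.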

Is Modern Encryption Dead?

The good news is we still have lots of time before we need to overhaul modern encryption. While RSA encryption relies on the factoring problem described above and can therefore be tackled by Shor’s Algorithm, other techniques, such as AES, do not rely on factoring at all. Even if we created a sufficiently powerful quantum computer, AES encryption would remain strong.

This leads to the second reason why modern encryption remains unchallenged: it is really hard to create a stable quantum computer. To date, the largest number factored by a quantum computer is 291,311, and even that used a different technique; demonstrations of Shor’s Algorithm itself have only managed tiny numbers such as 15 and 21. In essence, to challenge modern cryptography we need a quantum computer thousands of times more powerful than the best machines today, and progress is slow.

So Why Are Quantum Computers Useful?

It may seem we have invented the mathematical equivalent of a Rube Goldberg machine, but the fact is this approach of throwing a spectrum of quantum states at a quantum toilet roll and seeing what comes out is much quicker than trying to crack the code with a normal computer. While it may take a normal computer more than the age of the universe to crack this type of encryption, a quantum computer of sufficient size could do it in hours. For encryption there are still significant hurdles, but it speaks to the potential of quantum computing.

The key here is that quantum computers allow us to attack questions differently, so problems like this one, where there are lots of potential answers, can be tackled much more efficiently than with a classical (non-quantum) computer. It is for this reason that optimization problems, such as traffic routing or delivery distribution, lend themselves well to quantum algorithms.

Chemistry has its foundation in quantum mechanics, but anything more complex than the hydrogen atom requires computer simulation to predict its behaviour. To simulate and design novel molecules for drug manufacturing, it makes sense to use a computer rooted in the quantum world. While projects like Folding@home attempt to tackle the problem by stitching together a vast array of classical computers over the internet, a quantum computer could revolutionize the approach and rapidly accelerate the discovery of elusive cures.

There are many applications waiting for quantum computers to become a reality, but the field is green and there is still much to be discovered. Even today, quantum algorithms running on simulated quantum computers, while not providing any speed advantage, are proving superior to their classical counterparts and worth implementing. If you have problems which are computationally intensive, it may be worth considering quantum computing for the task.