The Evolution of Customer Service: From the Call Center to Multi-Channel and Beyond


Starting university in the early nineties gave me a unique position from which to appreciate the modern evolution of technology. The internet in the public domain was still in its infancy. I was one of the first people I knew to have an email address and had to explain what email was to many of my friends. ICQ was five years away, so messaging was done via ‘telnet’, where you ‘dialled’ someone’s IP address to chat.

Browsing was text-based with hyperlinks, and there was no search engine (AltaVista was three years away). You simply discovered pages of web links through word of mouth.

This was a time when customer service was delivered through three channels: face to face, phone, and fax. We have come a long way.

The Advent of Multi-Channel

Multichannel

People do not really talk about multi-channel any more. It was big for customer service and for marketing and, while it could be argued even our pre-internet customer service was multi-channel, my recollection is the term only came into vogue when linked to internet channels.

It was also the beginning of a shift in considering what customer service was for. Before this time, customer service was little more than part of the product/service offering. If a company offered three services, each department was responsible for customer service, and there would be three customer service functions. I recall a prominent American bank at the time having literally a dozen different fax numbers for different divisions (the only reason I remember this is because I once flooded all the fax numbers when the bank was slow at refunding an erroneous monthly charge. Enquiry processing across the bank came to a halt in what was, arguably, a pre-internet Denial of Service attack).

With the introduction of channels like email and online forms came a shift towards considering the customer’s experience. It made sense for the customer to choose the most convenient way to reach the organisation, and not the other way around.

The outstanding problem was minimal cross-communication. Multi-channel meant multiple ways for the customer to get service but each channel was still a separate experience. Switching channels often meant starting again and customers were still bounced around departments for more complex issues.

Progression to Omni-Channel

Omnichannel

Omni-channel, as you can see from the Google Trends graph, started becoming a thing about five years ago. Thanks to the online revolution, enterprise-level CRM systems became affordable for all. This provided a centralized hub for all enquiries. You could email about an issue, follow up with a phone call, and then go to the company’s physical service counter and all interactions would be recorded in the same system and available at the click of a button.

While multi-channel gave consumers a choice of communication channel, omni-channel took it one step further and ensured a consistent experience or, at least, a consolidated one.

With most CRM systems, a rudimentary omni-channel system can be set up relatively easily. In my last project for a major university, whether the student asked their question face to face, via phone, email, or online form, everything became a Case record in Dynamics. In an omni-channel system, the customer gets to use the channel which makes sense for them and their enquiry. For the company, the channel does not really matter as a centralized CRM system means all enquiries are treated consistently. A true omni-channel system also removes “answer shopping”, common in multi-channel systems.

The Future is Omni-Moment

Omnimoment

The core assumption in an omni-channel system is that the customer chooses a channel for an enquiry and sticks with it for the duration of that enquiry. If we focus further on the customer experience, the nature of the enquiry may require multiple channels to be engaged as part of the one interaction. Let us consider the example of opening a bank account.

In the multi-channel experience, a customer calls to find out about the procedure. They do not quite get the answer they are after, so they call back to get a different agent. They then visit a bank branch to collect the right forms. They go home to fill in the forms. If they need to clarify something about the form, they either call or revisit the branch. There is no guarantee that the advice they get from these channels will be consistent.

The customer hunts down a notary and has their identification documentation validated. Once completed, the forms are faxed. Finally, once the processing department has informed the local branch that the account is open, the customer returns to the bank branch to provide a signature and collect a bank card.

Every step in the process is an isolated channel, with the customer expected to bring it all together in what was often a frustrating and time-wasting experience.

In the omni-channel world, the customer goes online to find out about the procedure and there is an online form. If the customer has a question about the form, they can call or browse the web site. As both channels are pulling their information from a centralized knowledge management system, the answers will be consistent (and hopefully comprehensive).

Identification documentation is again notarized and once the form is completed, with notarized documentation attached, the application is processed and, with signatures being a thing of the past, a card is sent in the mail.

In the omni-moment experience, the customer goes online to find out about the procedure. The web site recognizes the intent and provides the option of a chat bot to assist. If the customer’s enquiry cannot be answered by the web site or bot, the interaction is escalated to a human. The agent offers to share screens and walk the customer through filling in the online form. Using video conferencing, the agent can verify identification on the spot without the need for a notary. Forms are completed and the account is opened immediately, ready for online use. A bank card is again sent in the mail.

As you can see, the seamless integration of people, process, and technology makes for a delightful customer experience. A process which took a week in the multi-channel world is completed in half an hour in the omni-moment world.

The Evolution of KPIs

As the way we interact with customers has changed, so too must our KPIs. Here are some classic call center KPIs which I consider irrelevant (or at least very misguided) in the modern customer service center.

Average Handling Time

Even back in the days of call centers, I was not a fan of this measure. It encouraged agents to open a call and immediately hang up to lower the stat. It is focussed on productivity, often at the expense of the customer experience.

If an agent spends 20 minutes assisting one customer to open up a bank account and 30 minutes with another, why is this a problem? If one agent is terse and goes through the form quickly, is this better than someone who actually takes the time to make sure the customer knows what is going on?

Average Time in Queue

There really is no excuse for waiting in a queue on the phone these days. Assuming a customer insists on exclusively using the phone, a call back service should be standard procedure. In an omni-moment world, there should be no queue and all queue measures are irrelevant.

Cost Per Enquiry

It is good to have visibility on costs but this should not be managed at the expense of the customer experience. In the early days of online channels it was realised these were much cheaper to operate than traditional channels. In some cases the customer experience was worsened for the traditional channels to encourage people to go online. This is management in the absence of strategy and is disastrous in the long term.

What is the Purpose of Customer Service?

The ultimate measure of customer service should be customer satisfaction. In my opinion this should be sought directly through surveys rather than assumed through measures such as Average Handling Time (a short call is not necessarily a good call). I can see value in measuring First Call Resolution (as confirmed directly with the customer) as this should be the ultimate goal of customer service. However, it needs to be modified so it covers all channels across the customer experience, not just the phone component (assuming a phone is even involved).

While in a pre-multi-channel world customer service was seen as little more than a necessary evil for selling a product or service, in an omni-moment world the minimum standard is having the customer ask no more than once and be satisfied every time they make an enquiry. In fact, with machine learning, it should in many cases be possible to anticipate customer need and frequently achieve ‘ask never’ for existing customers.


Generating Reports For NightScout Data Using Flow, Excel, and OneDrive


A few months ago I talked about extracting data from a MongoDB database for the purposes of generating alerts. Since then I have taken it further and now generate regular reports of my data using the power of Flow, Excel, and OneDrive. As this may be useful to others running NightScout I thought I would share my set up and the discoveries along the way.

The Flow

First of all, I need to extract the data from the MongoDB database and send it to a target Excel sheet. To do this we use Flow.

image

I have set the recurrence to three hours. This strikes a balance between not running too often and blowing my Flow quota, and running sufficiently often to give timely results. Running every three hours means approximately 240 runs a month, which works well with the limit of 750 Flow runs per month.

The variable stores the latest DateTime value from our target Excel file.

image

To populate this variable, we query our target Excel and set the value.

image

In this screenshot we see that we return only one row from Excel, being the row with the highest DATE value. We then use this to set the variable.

Once we have this DateTime value we incorporate it into a modified version of the API call we used in the Alert blog.

image

For this call we bring back up to 100 entries from MongoDB, with the fields we need, ordered so that if there are more than 100 rows newer than the latest DATE in our target Excel, only the rows immediately after that DateTime are returned. This ensures the query does not mess with the row order when it transfers them to Excel.

My continuous glucose monitor (CGM) feeds a value to the MongoDB database every five minutes, which means it generates 180/5 = 36 entries every three hours. Therefore 100 is a good setting: more than enough to keep on top of the additional values generated in MongoDB, but sufficiently large that the Flow can catch up if there is a temporary issue with its running.
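
For anyone curious what that modified API call boils down to outside of Flow, below is a minimal Python sketch of the same incremental pull against the mLab Data API described in the Alert blog. Treat it as an illustration only: the database name and API key are placeholders, latest_date stands in for the value read from the target Excel, and the lower-case field names mirror the Excel columns.

  # Sketch only: <database> and <api-key> are placeholders; latest_date stands in
  # for the highest DATE value already sitting in the target Excel table.
  import requests

  latest_date = 1549000000000  # epoch milliseconds, as stored in the DATE column
  params = {
      "q": '{"date": {"$gt": %d}}' % latest_date,  # only rows newer than the Excel's latest DateTime
      "s": '{"date": 1}',                          # oldest first, so the row order is preserved
      "l": 100,                                    # bring back at most 100 entries per run
      "apiKey": "<api-key>",
  }
  url = "https://api.mlab.com/api/1/databases/<database>/collections/entries"
  for row in requests.get(url, params=params).json():
      print(row["dateString"], row["sgv"], row.get("delta"))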

Once the reply is parsed, we can populate our Excel with the new rows.

image

One point of note here is that the Flow step requires a Table within the Excel workbook. This is relatively easy to set up. Basically, you add your headers to the sheet, highlight them and select Format as Table from the Styles section of the Home tab.

The result looks something like this.

image

The DATE value is an integer representing the DateTime value but is a little difficult to read or transform, so we also record the DATESTRING, which is a little friendlier. Then we have the SGV value, which is the blood glucose level in mg/dL (units only the USA use), and finally we have the DELTA, which is the change in SGV value between reads.
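
If you ever need to work with these columns outside Excel, the conversions are straightforward. Here is a small sketch, assuming the DATE integer is a Unix epoch in milliseconds (the way NightScout normally stores it) and using the usual factor of roughly 18 to convert mg/dL to mmol/L:

  from datetime import datetime, timezone

  def date_to_datetime(date_ms):
      # Assumption: DATE is a Unix epoch timestamp in milliseconds
      return datetime.fromtimestamp(date_ms / 1000, tz=timezone.utc)

  def mgdl_to_mmol(sgv):
      # 1 mmol/L of glucose is roughly 18 mg/dL
      return round(sgv / 18.0, 1)

  print(date_to_datetime(1549000000000))  # a readable DateTime from the DATE integer
  print(mgdl_to_mmol(99))                 # 99 mg/dL is about 5.5 mmol/L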

Once we have captured our data, we can begin reporting on it.

The Report

I discovered relatively quickly that Flow has a size limit for the Excel files it will work with. In the free plan this size limit is 5 MB, which makes it impractical for our purpose. Luckily I had a paid Flow plan via my Office subscription so I moved to this. This plan allowed me to work with Excel files up to 25 MB in size. This worked well. My Excel file has approximately four months of data in it and is 1.6 MB in size. Therefore, I have around five years of data to go before Flow reaches its limit. In five years, either Microsoft will have removed this silly limit, I will be using a different technology to analyse my data, or they will have found a cure for Type 1 Diabetes (there is a running joke in the diabetes community that the medical professionals have been promising a cure within five years for decades now).

The other trick I did to minimise the size of my target Excel was to house the reporting in a separate file and use a Power Query to reference back to the target file for the data. Using this Power Query, and some Excel formulae to manipulate the data to make it friendlier for reporting, I got this for my first worksheet.

image

If you struggle to replicate any of my formulae, please leave a comment and I will reply with the details.

HbA1c Prediction

The HbA1c is an indicator of how ‘sugary’ your blood has been for roughly the last four months. Using our CGM data we can make a prediction of what our HbA1c value is.

image

There are a few formulae available to do this calculation and in the above I use three of them. In the case of my blood results, the models predict 5.3, 5.1, and 5.1, which is well below the target threshold of 6.5, so well done me. I expect this value to slowly increase over time as my pancreas becomes less able to lower my blood sugar levels.
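
To give a feel for the calculation, here is a sketch using two commonly cited estimation formulae: the ADAG/Nathan regression and the older DCCT one. These may not be the exact three used in my workbook, and the average glucose below is illustrative, but the shape of the calculation is the same: take the mean SGV in mg/dL and rearrange the published regression.

  # Illustrative only: two published regressions rearranged to estimate HbA1c (%)
  # from average glucose in mg/dL. They may not match the three formulae in my workbook.
  def hba1c_adag(avg_mgdl):
      # ADAG/Nathan (2008): eAG (mg/dL) = 28.7 * HbA1c - 46.7
      return (avg_mgdl + 46.7) / 28.7

  def hba1c_dcct(avg_mgdl):
      # DCCT (Rohlfing 2002): mean glucose (mg/dL) = 35.6 * HbA1c - 77.3
      return (avg_mgdl + 77.3) / 35.6

  avg_mgdl = 105  # an illustrative average
  print(round(hba1c_adag(avg_mgdl), 1))  # roughly 5.3
  print(round(hba1c_dcct(avg_mgdl), 1))  # roughly 5.1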

Distance Report

The Distance Report is something that can only really be generated using CGM data with a regular time interval between measurements (in our case every five minutes). The Distance Report shows the total ‘distance’ travelled by the blood glucose values (i.e. the sum of the absolute delta values) and is an alternative to the standard deviation as a measure of variability.

image

For this report we only have data for the last four months as this is how long I have been using a CGM. We can see that the distance travelled each month is roughly the same. As time goes on we would expect this to increase as the pancreas becomes weaker and blood glucose levels (BGLs) start to vary more.
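
The calculation itself is nothing more than summing the absolute DELTA values for each month. A minimal pandas sketch, assuming a DataFrame with the same columns as the target Excel and a few made-up rows:

  import pandas as pd

  # Column names follow the target Excel; the rows are made up for illustration.
  df = pd.DataFrame({
      "DATESTRING": ["2019-01-01T00:00", "2019-01-01T00:05", "2019-02-01T00:00"],
      "SGV": [100, 108, 103],
      "DELTA": [0, 8, -5],
  })
  df["MONTH"] = pd.to_datetime(df["DATESTRING"]).dt.to_period("M")
  print(df.groupby("MONTH")["DELTA"].apply(lambda d: d.abs().sum()))  # 'distance' per month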

BGL Report

This was the first report I created and it reviews literally all my BGL measures (around 600 manual finger pricks and then the CGM data).

image

In the top left we have literally every value recorded and when it was recorded. The CGM data can be seen as the ‘thickening’ of the values towards the right hand side of this graph.

In the top right we have the distribution graph for the data showing the spread of results.

The bottom left shows all the data points but strips out the Date value, leaving only the Time value. This has the effect of showing the data over a 24 hour period.

Finally, in the bottom right, we have a range of filters to assist with analysing the data.

For example, if we compare the distribution curves for 2017:

image

2018:

image

and 2019:

image

we see that our distribution curves are centred around 5.4, 5.5, and 6.0 respectively. In other words it appears the curve is moving to the right over time. This is consistent with a weakening pancreas (or me being more relaxed about carbs).

Range Report

The Range Report looks at the average and standard deviation of the data per hour, looking for where in the day the BGL values are highest and vary the most.

image

The graphs are relatively flat with a slight increase towards the end of the day. This is likely the result of dinner (generally the largest and most variable meal of the day and therefore the meal with the most impact on glucose levels) and late night snacking (which will never have a positive effect on BGLs). Again we have a filter, in this case a timeline, to help with our analysis.
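
Again, the underlying arithmetic is just a group-by, this time on the hour of day; a minimal sketch along the same lines as the Distance Report one (column names as per the target Excel, data made up):

  import pandas as pd

  df = pd.DataFrame({
      "DATESTRING": ["2019-01-01T07:05", "2019-01-01T07:10", "2019-01-01T19:05", "2019-01-01T19:10"],
      "SGV": [95, 100, 130, 150],
  })
  df["HOUR"] = pd.to_datetime(df["DATESTRING"]).dt.hour
  print(df.groupby("HOUR")["SGV"].agg(["mean", "std"]))  # average and spread per hour of day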

Distribution Report

The Distribution Report does a similar analysis as the Range Report but per month, rather than per hour.

image

The trendlines suggest the numbers are relatively flat (average BGL around 6 with a standard deviation of 1). It is expected that both of these, the BGL average and its variability, will increase over time.

Displaying the Data to the Health Team

With the Excel files sitting in OneDrive, you simply right click the file to generate a link for sharing a read-only version for health care professionals. In my case I use bit.ly to also make it friendlier. While it is a little twitchy, it is reasonably friendly across various form factors and browsers.

Conclusions

Flow opens up a raft of opportunities for using my data, whether it be alerts, analysis to maintain my health, or making it readily available to my health care team. A few years ago this kind of set up would have taken weeks of coding, if it was possible at all. Today, it requires zero code and costs almost nothing. If this kind of set up could help you or someone you know, have a tinker; it really is straightforward to set up.

Review: Amazon Echo/Alexa


This Christmas has been something of a revolution in the Tribe household. Prior to December 25th, our household had very little which was internet enabled outside of phones, gaming consoles and laptops. The television is a plasma dinosaur, the stereo has an analog radio tuner in it and is the size of a slab of beers, and the lights require moving to the wall and flicking a switch to activate.

Then came the Amazon Echo. I had bought it for my wife as she had been keen to get one for a while. It was Christmas so I bit the bullet and bought the second generation Amazon Echo. If you are unfamiliar with this device, it is essentially a Bluetooth enabled speaker with a digital assistant built in.

Amazon Echo

The setup was an absolute nightmare. Here are my tips:

  • If you are in Australia, make sure you shift your account over to amazon.com.au first. I had already shifted mine but my wife had not. This was only a problem when I tried to add myself to her household. Some esoteric error messages and a bit of configuration later, all was good.
  • If you are looking to share content through the household option, this is not yet supported in Australia. Yes, all the heartache of the previous step was for naught.
  • The device is effectively a single user device (I’ll elaborate a bit more on this later). Whoever the main user is, they are the person who should download the Alexa configuration app to their phone during set up. I initially set it up with my phone and then tried to shift it across to my wife’s. A few hours with support and a couple of factory resets later, we were good again.
  • The Amazon Echo is similar to Android devices in that there is one ‘first class’ user and multiple ‘second class’ users. In the case of the Echo, additional users are set up as voices in the primary user’s Alexa app. Once this is done, the additional users can download the Alexa app, log in as the primary user and then select who they really are. This being said, there is no strong differentiation in content. For example, if Amazon has access to the primary user’s contacts, everyone has access. Similarly, while you can add an Office 365 account to Alexa for appointments, this is the primary user’s account which, again, everyone has access to. You cannot add multiple Office 365 accounts, let alone differentiate them by user.

However, once setup was done, things were smooth sailing. I got 14 days of free access to Amazon Music, which had everything I could think of (ranging from Top 40 through to the 70s prog rock band Camel). What’s more, the more we used the Amazon Echo the more we saw value. All those little nuggets of information we would usually look up on our phone, we can simply ask Alexa. Examples include:

  • The current time in another timezone
  • When it is sunset (the time our house becomes a device-free zone until dinner)
  • Random trivia (do fish have nostrils?)
  • The latest news
  • The weather outside

You can also use it to make calls via Skype (untested as I write this though) and, for those who have installed the Alexa App on their phones and logged in as themselves (via the primary user), you can call them through the Alexa app even when they are away from home.

There are also ‘skills’ (read as ‘apps’) which can be added to the Echo. While the variety in Australia is woefully limited compared to the US, there is still enough to be useful. So far I have added:

  • ABC World News
  • Domino’s Pizza
  • Cocktail King
  • The Magic Door (a Choose-Your-Own-Adventure storytelling app for children)
  • RadioApp

The way you speak to Alexa (the assistant in the machine) is not completely natural, but you do get used to it. For some of the skills you need to say “Alexa, open <skill name>” first before it will realise it needs to employ that skill. For example, if I ask “Alexa, how do you make a Negroni?” it will suggest using the Taste skill, even though Cocktail King is activated. To get the recipe I need to say “Alexa, ask Cocktail King how to make a Negroni”.

Finally, you can speak to Alexa through the Alexa App on your phone. In one case, I had added the Diabetic Connections Podcast skill to Alexa but, given the content was of limited interest to my family, I asked, through my phone, to play the latest podcast. Sure enough it came through to my phone and, with the headphones plugged in, my family were none the wiser.

Echo Dots

With the Echo downstairs, and a desire to stop us shouting up the stairs to summon our children, I had bought two Echo Dots within 24 hours of setting up the Echo: one for each child’s bedroom.

Echo-Dot-3

These have exactly the same brains in them as the Amazon Echo so they can be used standalone. However, through the Alexa app, you can make them part of the same ecosystem meaning you can use them as an intercom system throughout the house. Also, they support commands such as:

  • “Alexa, tell Orlando’s room that dinner is ready”
  • “Alexa, tell Claudia’s room it is time to go”
  • “Alexa, tell Orlando’s room it is time to wake up”

All with their own special audio touches.

Some Hacking

It has only been a couple of days so I have not had time to get up to too much mischief but here are a few things I have discovered:

  • Amazon Echo is compatible with IFTTT so if you want to trigger IFTTT when you issue a command to Alexa, this is not a problem
  • Amazon Echo is also Smart Watch friendly. When I played the podcast, controls appeared on my Smart Watch. This also happened when I played music through Amazon Echo
  • If you go through the Alexa App, it demands you have a Spotify Premium account before it will connect Spotify. You can get around this by pairing your phone to the Amazon Echo (“Alexa, pair my device”). Once your phone is paired, anything you run on the phone (e.g. Spotify) will have its sound come out of the Amazon Echo.
  • If you get yourself a Bluetooth stereo receiver (basically a Bluetooth receiver which plugs into the audio input of your stereo), it is fairly straightforward to get a dinosaur stereo like mine to become the Echo’s sound system.

Next Steps

The next step is to make the house a little more internet aware. I have ordered a WiFi plug from eBay for around AU$12 (roughly US$10) and I will see if I can link it to the Echo and have Alexa turn things on and off. For example, I could set up my slow cooker and then, halfway through the day while at work, tell Alexa through the phone app to turn on the plug and initiate the cooking of dinner for that evening.

Conclusions

While setup was a nightmare for me and there is little in the way of an instruction booklet for the device, now that I have experimented with it for a couple of days I am really happy with my purchase. The main reason for not going with Google Home was the lack of support for Office 365. This being said, the ability to add only one Office 365 account through the app makes that differentiator small in hindsight.

Amazon suggest they will continue to improve the device and, as I upgrade the appliances in my home over time, I expect the benefits will also multiply, e.g. linking Amazon Prime to a smart TV.

If you are looking to take the plunge, my recommendation is to do so. The devices, especially the Dots, are very inexpensive and the previous (second) generation ones are being sold for a song by Amazon and retailers such as JB Hi-Fi. If you want to go really cheap, you can buy the Echo Input, which is the brains of an Echo without a speaker; you simply plug it into an existing speaker.

If you have any Echo hacks, please post them in the comments.

Making a Tweet Bot With Microsoft Flow


If you subscribe to my Twitter feed, you will have noticed a lot more activity of late. This is because I have created a Tweet Bot to find me the most interesting Dynamics articles out there and Tweet them.

My inspiration for doing this was Mark Smith’s Twitter feed (@nz365guy). Every hour Mark pumps out a Tweet, sometimes in a different language, sometimes on related technologies, such as SQL Server. He also drops in quotes from the books he is reading, as well as the odd manual Tweet.

Mark Smith Twitter

As you can see, this formula has been very successful for him. Over 11,000 followers and almost 69,000 likes on the back of 29,000 Tweets. That’s a little over two likes per Tweet. Good stuff.

Previously I had only really used Twitter to promote my blog articles so I thought it would be a perfect testbed to see if automated Tweeting, plus the odd promotion of my blogs and speaking engagements did anything to lift my own statistics.

In doing so I also found a curated list of Tweets was far more useful than browsing through the feed of the people I am following, not least because looking at my own list of Tweets is ad-free. Now I review the curated list and, most days, if I find something I really like, I post it to my LinkedIn feed. So, if you want to see something less automated, feel free to follow me on LinkedIn.

How It Works

image

Here it is. Essentially, the Flow:

  • Triggers with a pre-determined frequency
  • Initializes a bunch of variables and searches for candidate Tweets
  • Loops through the Tweets to find the best one
  • Stores the winning Tweet in a list of sent Tweets and then Tweets it
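
Before going through the stages in detail, here is a rough Python sketch of the selection logic the loop implements. It is an illustration only, not the Flow itself: the dictionary keys, the already_sent set, and the blacklist stand in for the Twitter search results and the Excel sheet described further below.

  # A sketch of the selection logic only; the field names are illustrative and are not
  # the exact outputs of the Flow Twitter connector.
  def pick_best_tweet(candidates, already_sent, blacklist):
      best_id, best_body, best_rank = None, None, 0
      for tweet in candidates:
          original = tweet.get("retweeted_status")    # Decision One: must be a retweet
          if original is None:
              continue
          body = f"{original['author']} {original['text']}"
          if len(body) > 280:                         # Decision Two: resulting Tweet too long
              continue
          rank = original["retweet_count"]            # Decision Three: popularity measure
          if rank <= best_rank:
              continue
          if original["id"] in already_sent or original["author"] in blacklist:
              continue
          best_id, best_body, best_rank = original["id"], body, rank
      return best_id, best_body

  # Example call with made-up data:
  tweet_id, tweet_body = pick_best_tweet(
      candidates=[{"retweeted_status": {"id": 1, "author": "@someone",
                                        "text": "Great #msdyn365 tip", "retweet_count": 7}}],
      already_sent=set(),
      blacklist={"@my_own_handle"},
  )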

Let us go through these stages in more detail.

Recurrence

This seems pretty straightforward but there are a couple of things to consider. Firstly, if I did as Mark does and scheduled a Tweet every hour, this would be around 24*30 = 720 Tweets per month, which is close to my quota of 750 on a free plan. Do-able, but this does not leave a lot of wiggle room for other Flows and experiments like my MongoDB integration.

Initially I set it to every two hours but even this had some troubles with the following error often appearing:

{
  "status": 429,
  "message": "This operation is rate limited by Twitter. Follow Twitter guidelines as given here: https://dev.twitter.com/rest/public/rate-limits.\r\nclientRequestId: 00776e5e-6e93-4873-bcf5-a1c972ba7d2a\r\nserviceRequestId: 597a00b83806f259127207b0a18797a0",
  "source": "twitter-ase.azconn-ase.p.azurewebsites.net"
}

I went to the link suggested but it was broken. So I went to the rate limits in the Flow documentation for Twitter and I did not seem to be violating them, which was quite confusing. A little browsing revealed that others had also come across this problem and it does appear to be a bug in Flow.

image

A bit of testing suggests that as long as you do not Tweet more often than once every four hours you do not hit this error (unless you are Jukka).

Variables and the Candidate Tweets

Variables are really useful for debugging, as you can see the value assigned to them, but also for managing the information you pass around in your Flow. In my case, I defined the following variables:

  • TweetBody: The body of the Tweet we will be posting
  • TweetRank: A measure of how good the Tweet is. Initially I wanted to use ‘Likes’ but Flow does not allow you to access the number of Likes a Tweet has so I had to use another measure in the end.
  • TweetAuthor: Who Tweeted the best Tweet. While Flow does not allow you to Retweet (or put the ‘@’ symbol in any Tweet you post), I wanted to give the original poster as much credit as I could
  • TweetID: Every Tweet has a unique ID which is useful to make sure you are not posting the same popular Tweet more than once
  • TweetMatch: A flag to say if a Tweet being reviewed has failed to make the cut of being the ‘best’ Tweet

The criterion for the candidate Tweets is pretty simple.

image

If the Tweet has the #msdyn365 hashtag, it is worth considering. You will notice my step limits the number of Tweets returned to 100; this is the maximum allowed by Flow, which is a pity.

Loop Decision One: Has the Tweet Been Retweeted?

As mentioned above, it is not possible with Flow to check the number of Likes a Tweet has, so I took inspiration from Google. While much more complex now, the original algorithm for ranking in the Google search engine was based on the number of links to a web site. The more people referenced you, the more likely you were to appear at the top of the search rankings. In my case, I used the number of retweets of the original Tweet being referenced as my measure of popularity. To clarify, this is not the number of retweets of the Tweet that the Flow search found but, if the search found a retweet, it is the number of retweets of that original Tweet. Going to the original Tweet as my source meant I removed the possibility of Tweeting two people’s retweets of the same original Tweet, no matter how popular those retweets were.

However, I soon discovered that testing the number of retweets of the source Tweet failed if the Tweet was not a retweet. I tried working around this by capturing null results but, in the end, it was easier just to test up front.

image

You will see that if the condition fails, we set our TweetMatch flag. If there is no retweet, the Tweet is no good.

Loop Decision Two: Will My Tweet Be Too Long?

Next I want to make sure that if I construct a Tweet from this candidate Tweet, it is not too long. Initially I just concatenated the resultant Tweet, but this partially cut hashtags and I could see that being a problem if the wrong hashtag was cut the wrong way (#MSDYN365ISAWFULLYGOOD becoming #MSDYN365ISAWFUL, for example).

image

The format of my resultant Tweet is ‘<author> <Tweet body>’ so as long as this is under 280 characters, we are good to go. Again, if this test fails, we set the TweetMatch flag.

Loop Decision Three: Testing for Popularity and Filtering Out ‘Bad’ Tweets

image

Next we ask if the Original Tweet Retweet Count is bigger than the retweet count of our existing ‘best’ Tweet. If not, we raise our flag; if it is, we need to make sure that the Tweet in question has not been Tweeted by me before and that it is not from my blacklist of Twitter accounts.

To manage the list of posted Tweets and the blacklist, I used an Excel sheet in OneDrive. I also included myself on the blacklist as, if I did not, it could lead to the situation where I am reposting my own Tweet, which, in itself could be reposted and so on. Again, if these tests fail, the flag is set.

Final Loop Decision: Is the Tweet Worthy?

image

If the Tweet gets through all those checks unscathed, the variables are set with the values from this new Tweet. Otherwise, we reset the TweetMatch flag in readiness for the next loop iteration. We then repeat for the next candidate Tweet until we have gone through all of them.

Store and Send

image

With the winning Tweet selected, we store its ID in our Excel sheet to avoid sending it twice on subsequent runs and post our Tweet. Initially, rather than using an Excel sheet, I tried string matching to avoid resends but this proved too hard with the limited tools available in Flow. Keeping a list of IDs and looping through them proved to be a lot easier to implement in the end.

As mentioned before, Flow does not allow for retweeting, so I simply constructed a Tweet which looks similar to a retweet and off it goes.

image

Consequences of Activating the Bot

I did have one Follower complain about the bot but, otherwise things have been positive as you can see below.

image

Impressions, visits, and mentions are significantly up with followers also getting a net gain. Moreover, as well as getting more exposure, I now have an ad-free list of interesting articles to read and promote on LinkedIn.

Conclusions

This has been a really interesting project from a Flow development perspective but also in forcing me to consider what I use Twitter (and LinkedIn) for and whether I should change my use of them.

Building the bot has given me lots of tips on how non-coding developers can think like their coding counterparts, which I will be talking about in Melbourne at Focus 18 and this conscious change in my use of Twitter has massively increased my audience reach.

I encourage all of you to think about how Flow can solve that automation problem you have but also, if you use social media, to seriously consider whether you use it as effectively as you can and whether it could serve you better.

Setting Alerts For NightScout/MongoDB Using Zapier and Microsoft Flow


An Introduction to NightScout

As a Type 1 Diabetic, I need to monitor my blood sugar pretty much 24/7. These days I do it with a Continuous Glucose Monitor (CGM) which sits on my arm and transmits my sugar levels, every five minutes, to my phone.

Here is the sensor and transmitter on my arm; a modified Dexcom G5.

IMG_20181002_104340[1]

and here is the output using a program called xDrip+ (an open source app for Android phones).

Screenshot_20181006-011002[1]

As mentioned in my article on looping, there is a thriving online community building better ways to manage diabetes, both for people like me with the disease and, as Type 1 Diabetes often affects children, for the carers who manage it on their behalf. One of these innovations, and a key piece of technology within the looping community, is NightScout: an open source web page you can build to pull your data up onto the internet. Here is the same data stream on my NightScout web page.

image

The database technology behind NightScout is called MongoDB. MongoDB is big in the open source community but not so much in the Microsoft world. In this article I will walk through how to connect to this underlying MongoDB database using Zapier and Microsoft Flow so you can set up things like PowerBI reports, alerts when your blood glucose is out of range or even have a stream of data being emailed or tweeted to someone who wants it.

image

While NightScout can be set up on Azure, I had some real problems getting it to work so I went to the other option: Heroku. The irony that I am using a SalesForce subsidiary to house my data is not lost on me. As most people set up their NightScout on Heroku, this will form the basis for my set up instructions, but the principles I am showing will work just as well on an Azure-hosted NightScout site.

The Easy but Expensive Way: Zapier

Zapier is by far the easiest way to connect to the MongoDB data. It literally does all the work for you. Firstly, we need to sign up for a free account with Zapier. Once this is done we will want to “Make a Zap!”.

image

For the trigger, we want MongoDB and we want it to trigger when a new Document is added.

image

Basically this means the Zap! will fire every time a new entry is transmitted from my CGM, received by xDrip+ and uploaded to NightScout’s MongoDB. Next we will need to set up the connection to our MongoDB database.

image

Fortunately, the NightScout settings have everything we need. If we go to our Heroku account, select the NightScout app, select the Settings tab and ‘Reveal Config Vars’, the one we want is ‘MONGODB_URI’. This will be in the format: mongodb://<Username>:<Password>@<Host>:<Port>/<Database>.

Transfer these values across and it should just work. Next we set up the options for the MongoDB database. The collection we want is ‘entries’.

image

Next it will want a sample to understand the format of the stored data. Go ahead and let it pull a record. Once this is completed, our trigger is finished. Next we specify what happens when the trigger occurs i.e. what do we want to happen when a new reading hits MongoDB.

There is a wealth of Actions to choose from but, for simplicity, I will choose “GMail – Send Email”. Again, the process is pretty simple and mirrors the setup of the trigger. The only trick to mention is clicking the icon to the right of the field if you want to reference data from the trigger. In the case of the MongoDB data, the blood glucose level is called ‘sgv’ and is stored in the US mg/dl, not mmol/l.

image

Our final steps will be to name our Zap! and activate it.

Once done, the Zap! will query MongoDB every 15 minutes, bring back the new values, and send an email for each one.

image

So far everything I have described is free. xDrip+ and NightScout are free, and MongoDB and Zapier also have free accounts. So why is this the expensive option? The reason is that, as soon as we want to make our Zap! a little more sophisticated, we need to upgrade our Zapier account. The free account allows you to create two-step Zap!s but if you want a condition (e.g. only send an email if the sgv value is greater than or less than a specific value), you need to upgrade to the US$25/month account. Fortunately, there is an alternative.
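
Incidentally, the MONGODB_URI config var we used above also works directly with any MongoDB client, which can be handy for testing the connection outside of Zapier or Flow. A minimal sketch with the pymongo package, keeping the placeholders exactly as they appear in the config var:

  # Sketch only: the URI placeholders come straight from the MONGODB_URI config var.
  from pymongo import MongoClient

  client = MongoClient("mongodb://<Username>:<Password>@<Host>:<Port>/<Database>")
  entries = client.get_default_database()["entries"]   # the NightScout collection
  latest = entries.find_one(sort=[("date", -1)])       # most recent CGM reading
  print(latest["dateString"], latest["sgv"])           # sgv is in mg/dL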

The Trickier but Cheaper Way: Microsoft Flow

Sadly, Microsoft Flow does not have a Connector for MongoDB but it does have the ‘http’ step which serves the same purpose, using the MongoDB REST API.

For Flow, the first step is to set a trigger. While the LogicApps version of the ‘http’ step has the ability to set a recurrence for calling the REST API, the Flow version does not, so we need to set a Schedule-Recurrence trigger.

image

Just like the Zap!, we will poll every 15 minutes.

Next we set up our ‘http-http’ step. This is the tricky bit.

We are getting data so our method is GET. The URL is what we use to call the MongoDB REST API. In our case we use it to bring back our data. The format of the URI I am using for this example is:

https://api.mlab.com/api/1/databases/<database>/collections/entries?l=1&f={"sgv":1}&s={date:-1}&apiKey=<api-key>

Thank you to Ravi Mukkelli and Olena Grischenko for figuring this part out. Full documentation for the REST API can be found here. To translate the URI, I want it to return the first record (l=1), showing just the ‘sgv’ field where the collection is sorted in descending date order.

The API KEY is available from your MongoDB account. Simply go to your Heroku account, select your NightScout app and click through to mLab MongoDB.

image

Click on the User link.

image

and your API KEY will be shown on the next page. Also remember to enable Data API Access which can be done just below where you see the API Key.

image

Your Flow should now look something like this.

image

This will get data every 15 minutes in the form of JSON. JSON is a way to represent data in a text format. Think of it as a generic and adaptable alternative to XML, which is itself a cousin of HTML (the thing that web pages are made of).

To make use of the data, we need to parse it (translate it into something useful). To do this we add a new step. Searching for JSON shows the Data Operations – Parse JSON step. The content is the Body from the http step. For Flow to understand the fields, it needs to get a sample of the data. To feed it this data we click the “Use sample payload to generate schema” link.

image

To get this sample, all we need to do is paste the URI from the http step into a browser. You should get something like this:

[ { "_id" : { "$oid" : "5bb8c4cd44df60074eef234e" } , "sgv" : 77 } ]

Your end result should look like this.

image

Finally, we send our email via Gmail. At some point while setting up your Gmail – Send Email step, Flow will likely insert a loop. This is because, in principle, our query could have returned more than one record. As my query forces the return of only one record, the loop will iterate once and is not really needed. It is inelegant but it will work.

Also, by default the only field that gets shown is the oid one. Therefore, you may need to click on ‘See more’ for the sgv field to show.

image

All up, this is what our basic Flow looks like.

image

The result is similar to what we got with the Zap! one.

image

I say similar because the Zap! was slightly smarter in that it returned all records created in the 15 minute interval since the last run whereas this one only retrieves the latest record.

The one big advantage Flow has is we can add more stuff. So, for example, if we want to only send an email if the sgv value is over 180, we can add that in no problem.

image
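
For comparison, here is roughly what that end-to-end logic amounts to in Python: call the same mLab URI, parse the JSON, and only alert when the sgv value is over 180 (about 10 mmol/L). In Flow it is all clicks and no code, with the Gmail step in place of the print.

  # Sketch of the alert logic the Flow implements; <database> and <api-key> are placeholders.
  import requests

  url = "https://api.mlab.com/api/1/databases/<database>/collections/entries"
  params = {"l": 1, "f": '{"sgv":1}', "s": '{"date":-1}', "apiKey": "<api-key>"}
  latest = requests.get(url, params=params).json()[0]  # the response is a one-element list
  if latest["sgv"] > 180:                              # 180 mg/dL is roughly 10 mmol/L
      print("High BGL:", latest["sgv"])                # the Flow sends a Gmail message here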

Also, it is the cheap option because, while there are different plans available for Flow, the base plan is completely free. Please note that on the base plan you will only be able to run the check every 30 minutes due to the monthly limit of 2,000 runs. The next plan up is US$5 per user per month, which may be a better option.

Conclusions

Tools like Microsoft Flow and Zapier offer non-coders a way to address problems in ways previously out of reach. Putting these tools in the hands of people managing diabetes means the tools could literally save lives. If you are using NightScout have a play and see how you can use the technology to make your life easier.

Divinity and the ‘Citizen Developer’


It is a term of derision, a key piece of Microsoft marketing and a term I have little time for. It is ‘Citizen Developer’. In the unlikely event you are unfamiliar with this term, it is the persona used by Microsoft to describe the benefits of their Power Platform. While I could not discover who invented the term, it is one which has been strongly embraced by the Microsoft marketing engine.

The reason they have embraced it is obvious. As evidenced in some of my previous posts, it is not hard to use tools like Flow, PowerApps and Power BI to add real value to an organisation with little to no code or scripting. This opens up the way for non-coders to build powerful business applications, previously the exclusive domain of coders. The rise of the Citizen Developer.

To be honest, this is not really a new approach for Microsoft. In the past Microsoft used the term ‘Power User’ and this has always been the focus for Dynamics innovation. Certainly as far back as the product moving from v3 to v4, there was a deliberate move to give power to administrators who wanted to tinker without code. Back then I used to say you could get 70% of the way towards a production-ready application using just configuration. With each major release that percentage has increased and it is very possible to build production-ready systems these days with no appreciable code in them.

In parallel, as workflows matured and the product became more and more flexible, there was a breed of coders who grumbled that feeding the power user was a terrible idea. Limiting the power to create exclusively to coders reminds me of the days when bibles were written in Latin and the only people who could read them were the priests. The priests had exclusive access to the divine and all others had to go through them. Thankfully times have changed.

I have experienced this resistance to the power shift first hand. In the days when my blog featured cute tricks with workflows, I used to hear it a lot. Coders I spoke to would complain that processes should be handled with code; that splitting processes across different technologies meant an administrative nightmare and offered no value in the long run. Also if the power of workflows were given to end users, it would be a disaster with them going rogue and creating an unmanaged mess. We hear the same fear and uncertainty today with Flow and the fear of ‘Shadow IT’; the creation of apps and processes unsanctioned by the administrators.

I agree that things must be managed but the perspective that the solution is to keep power with the priests misses the point. When development was restricted to the few, it was easy to administer; now that many can develop, it is harder, but that does not make it the wrong approach.

There are two obvious benefits to giving access to creative power to non-coders. Firstly, it frees up coders to do more interesting work. Rather than writing routines to perform dull busy-work they can cast their minds to more interesting and unique problems. Secondly, it gives non-coders a much greater appreciation of what coders do and how they go about it.

The answer to managing this new world is collaboration and discipline. Standards and conventions need to be put in place to ensure Workflows, Plugins and Scripts all work together, rather than against each other. The idea that giving non-coders power is the problem is not true; undisciplined power is the root of the problem. I know many a Dynamics system that came undone on upgrade because a coder decided it would be easier to use unsupported methods to develop it. Just as with the priest, the coder is human. In the new world, what separates the good and the bad ones is not exclusivity of power but their approach to their work and their ability to work with others.

Just as the role of priests in society has evolved, so too has the role of the coder in modern software development. A good coder, having experience in managing the development and release of software, has the responsibility to guide others starting out on their journey. Thankfully the way we approach software development has evolved to accommodate this new perspective. Agile has meant a greater focus on release management and DevOps and variants such as Scrum are clear that there is no ‘i’ in team. The ‘development team’ makes no distinction between those that can code and those that cannot.

The idea of coders and non-coders being equivalent in terms of software development sits much better with me than the divisive idea of ‘coders’ and ‘Citizen Developers’. While a traditional coder may well say Flow is what Citizen Developers use while ‘true’ developers embrace Logic Apps, the fact is both tools have their place in software development and many coders appreciate and gain much benefit from their use of Flow.

The fact is the relevance of coders and priests does not derive from exclusivity and, while many considered the change in exclusivity tantamount to blasphemy, the experience for all is much richer today because of it.

I have faith that, over time, we will move away from terms like ‘Citizen Developer’ and embrace the idea that anyone who builds is simply a developer and the tool used to get there is irrelevant. What is more important than the tool is the approach taken by the entire development team to deliver a robust solution. That vision of the future is something I can see myself believing in.

CRM Crime Files: LinkedIn Marketers


mugshot.linkedin

This post references my time at KPMG. I am no longer at KPMG but I have had this post in draft for a while now and thought it was time to finish it off.

A while ago I did a Crime File on Etsy and their customer service when setting up my store. This time it is LinkedIn or, more accurately, the lazy marketing companies who use it to try and generate leads for others. On two occasions I have received emails like this:

IanDorney_redacted

The KPMG article referenced was not one I had any involvement in and I think it is quite courageous to email someone who works at KPMG and suggest they have “lazy accountants”. It is my opinion that KPMG has some of the hardest working and committed accountants I have ever seen. I was also left with the feeling that, despite their suggestions otherwise, they did make this offer to anyone prepared to listen.

The email claimed to come from the Principal of an accounting company. I have removed their name and the name of their company from the above image but left in the true CRM criminals: Lead Gladiator.

Lead Gladiator offer a ‘flood’ of leads for a ‘mere’ $2,000/month, using the methods described here.

Knowing the Principal could do better, and having some LinkedIn InMail credits to burn, I messaged him.

IanDorney3_redacted

Sadly the Principal never got back to me but Lead Gladiator did.

IanDorney5_redacted

I preferred the tone of this message but the damage had already been done. You only get one chance to make a first impression and they had failed. I replied back.

IanDorney6_redacted

I never heard from them again. The biggest issue for me in all of this was authenticity. Talking about an article I had no involvement with, telling one of the world’s largest professional services companies that they have “lazy accountants”, and then suggesting that the offer being made in a clumsily customized mass marketing piece is somehow exclusive started the relationship on the wrong foot. The interaction damaged the organisation who paid Lead Gladiator more than it helped them, and I doubt they got value for the large amount of money they spent. Moreover, what does it say about your company if you only care about new customers enough to outsource your relationship with them?

My second experience started in a similar way:

kent_cameron4_redacted

Either a bot had generated the text or someone had run it through Google Translate without sanity checking it with a native speaker. Whichever it was, I only partially understood their intention. Again, LinkedIn InMail came to the rescue. I sent a message to the Founder.

kent_cameron1_redacted

To his credit, the Founder replied.

kent_cameron2_redacted

Good deed done for the day.

Conclusions

While it can be tempting to outsource parts of your business to ‘experts’, be very careful who you partner with. In both of these cases, the business owners’ intentions were good: to grow the business. However, in putting their faith in third parties and not being involved in the process, they damaged their brand and potentially achieved the exact opposite of what they were trying to achieve.

A business is successful when it creates real relationships with its customers and stakeholders and there is no quick way to do this. True customer relationship management is about fostering long term relationships and delivering value. If you cannot be bothered to even engage with a prospect in an authentic way, why would that prospect think you are going to deliver value when they employ you?