Breaking Modern Encryption With a Toilet Roll: An Introduction to Quantum Computing


Thanks to COVID-19 virtualising the Microsoft Build conference this week, I got to attend it for the first time. There were many great talks but the ones of particular interest to me were on quantum computing. Microsoft is now entering the world of quantum computing with its Q# programming language. We may not have commercially useful quantum computers yet but, when we do, Microsoft plans to have the tools ready to make use of them.

Inspired by those presentations, this blog will explain why quantum computing is useful, a subject which is still deeply misunderstood by many.

My Interest in Quantum Computing

My background is a little unusual in that my education was originally in quantum physics. I even published a physics paper with my PhD supervisor and a fellow researcher 25 or so years ago. One benefit of that unfinished PhD was being exposed to exciting developments in quantum computing and quantum encryption. One of those developments was the invention of Shor’s Algorithm in 1994 (just two years before I put out that physics paper). Shor’s Algorithm sent waves through the academic community because it showed that, in theory, a quantum computer could break modern encryption. If a sufficiently powerful quantum computer could be created, no encryption would be safe. Arguably, it was Shor’s Algorithm and its implications that led to the commercial funding of the development of quantum computers from that time until now. Even though that was 25 years ago and there have been billions of dollars of investment since then, quantum computers still have a long way to go before they can crack modern encryption.

Modern Encryption

One would expect that the encryption methods that protect our secrets are based on some deep mathematical concepts, inaccessible to all but mathematics professors, but this is not true. A lot of modern encryption is based on one simple concept: it is much easier to multiply two numbers together to form a bigger number than to take the bigger number and work out the two numbers used to make it (called factors).

For example, we know that 3 multiplied by 5 is 15 and, because most of us know our times tables, we can easily divine that the factors of 15 are 3 and 5. However, not as many of us can immediately reason that 221 is the product of 13 and 17. Scale this up and you have a system which can readily encrypt secrets but cannot be readily broken.
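
To make the asymmetry concrete, here is a minimal Python sketch of my own (an illustration, not anything from a real encryption library): multiplying is a single operation, while recovering the factors by brute force means testing candidate after candidate, which becomes hopeless for the few-hundred-digit numbers used in real keys.

```python
import math

def naive_factor(n: int) -> tuple[int, int]:
    """Find a factor pair of n by trying every candidate up to sqrt(n)."""
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    return 1, n  # n is prime

print(13 * 17)            # multiplying: one instant operation
print(naive_factor(221))  # factoring: (13, 17), but only after a search
```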

Factoring with a Toilet Roll

Is there a way we can try lots of potential solutions at once to find the factors of a number? One way is with resonances in a tube. We know that if we blow on a pan pipe, we hear a note. This note is made up of the resonant frequencies of the pan pipe tube.

The physics of tube resonance is well understood.

[Image: resonances in a tube, from http://www.ctgclean.com]

So, if we have a tube of length 221/2 mm = 110.5mm (about the size of a toilet roll), this will resonate with tones of wavelength 13mm and 17mm, among others (use inches if you prefer, it does not really matter, although an 11 inch tube would be closer to a kitchen roll).

Now comes the clever part. Let us construct a sound using a synthesizer made up of tones of wavelength 1mm, 2mm, 3mm, and so on. We then play the sound through the tube and identify which tones resonate.

[Image: Moog Grandmother synthesizer]

Unless we have pitch perfect hearing we might need some help identifying the wavelengths of the tones which resonate. We can do this with a spectrum analyzer. If you have ever seen a car stereo from the nineties you will be familiar with a spectrum analyzer. It looks like this:

[Image: a spectrum analyzer display]

Using a clever piece of mathematics called a Fourier transform, the spectrum analyzer takes the sound being produced by the toilet roll and breaks it up into its component tones. The resonating tones will be louder and appear taller on the spectrum display.

Once we identify these resonant tones, we can convert them back to numbers and we have our factors. The algorithm looks something like this.
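
In code, the toilet-roll trick amounts to the sketch below (a classical caricature, of course, which is why it gains us nothing yet). A tube of length 221/2 resonates at wavelengths 221/n, so an integer wavelength “rings” exactly when it divides 221; playing every tone at once and reading the spectrum is the physical version of this loop.

```python
# The tube "tries" every wavelength simultaneously; a classical computer
# has to loop through them one at a time.
N = 221              # the number we want to factor
tube_length = N / 2  # our 110.5 mm toilet roll

resonant = [w for w in range(2, N) if N % w == 0]  # wavelengths that ring
print(resonant)      # [13, 17] -- the factors fall out of the resonances
```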

So what stops us pulling out a Moog Synthesizer, a toilet roll, an old car stereo, and unlocking the world’s secrets? The numbers we need to factor in modern encryption are really long i.e. a few hundred digits long. Using millimetres to define our wavelengths, we need a tube longer than the width of the observable universe to crack it. That is a lot of toilet paper!

Shor’s Algorithm

Shor resolves the problem by abandoning the toilet roll for a cleverly constructed mathematical function, and uses a quantum superposition instead of our synthesized wave. Otherwise, the process parallels our own.

While the mathematics is complex, the idea is very similar to ours. We convert the problem to something we can work with, throw multiple possible solutions at it at once in such a way that the actual solutions separate themselves out, identify them using a Fourier transform, and check they work.
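
For the curious, here is a hedged classical sketch of that structure. The period-finding step is done by brute force here, which is exactly the part a quantum computer replaces with superposition and a quantum Fourier transform; only the post-processing follows Shor faithfully.

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1 (a brute-force stand-in for the quantum step)."""
    value, r = a % n, 1
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int = 2) -> tuple[int, int]:
    r = find_period(a, n)        # for a=2, n=221 the period is 24
    assert r % 2 == 0, "odd period: try a different a"
    half = pow(a, r // 2, n)     # a^(r/2) mod n
    return gcd(half - 1, n), gcd(half + 1, n)

print(shor_factor(221))          # (13, 17)
```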

Is Modern Encryption Dead?

The good news is we still have lots of time ahead of us before we need to overhaul modern encryption. While RSA encryption relies on the factoring problem described and can be tackled by Shor’s Algorithm, other encryption techniques, such as AES encryption, are not based on factoring. Even if we created a sufficiently powerful quantum computer, AES encryption would remain strong.

This also leads to the second reason why modern encryption remains unchallenged; it is really hard to create a stable quantum computer. To date, genuine runs of Shor’s Algorithm on quantum hardware have only factored tiny numbers such as 15 and 21; larger records, such as 291,311, were achieved with different quantum techniques. In essence, to challenge modern cryptography, we need a quantum computer thousands of times more powerful than the best machine today and progress is slow.

So Why Are Quantum Computers Useful?

It may seem we have invented the mathematical equivalent of a Rube Goldberg machine, but the fact is this approach of throwing a spectrum of quantum states at a quantum toilet roll and seeing what comes out is much quicker than trying to crack the code with a normal computer. While it may take a normal computer more than the age of the universe to crack this type of encryption, a quantum computer of sufficient size can do it in hours. For encryption, there are still significant hurdles but it speaks to the potential of quantum computing.

The key here is quantum computers allow us to answer questions differently so problems, like this one, where there are lots of potential answers, can be tackled much more efficiently than with a classical (non-quantum) computer. It is for this reason that optimization problems, such as traffic routing, or delivery distribution lend themselves well to quantum computer algorithms.

Chemistry has its foundation in quantum mechanics but anything more complex than the hydrogen atom requires computer simulation to predict. To simulate and design novel molecules for drug manufacturing, it makes sense to use a computer rooted in a quantum world. While projects like Folding@home attempt to tackle the problem by stitching together a vast array of classical computers through the internet, a quantum computer could revolutionize the approach and rapidly accelerate the discovery of elusive cures.

There are many applications waiting for quantum computers to become a reality but the fields are green and there is still much to be discovered. Even today, quantum algorithms running on simulated quantum computers, while not providing speed gains, are proving superior to the classical algorithms and worth implementing. If you have problems which are computationally intensive, it may be worth considering quantum computing for the task.

The AusSpiderBot: Linking Power Automate and Azure’s Custom Vision API


I went to Microsoft Ignite The Tour last week. The true value was in catching up with friends and colleagues I have not seen for a while and seeing presentations on the technologies I have heard about but not yet got around to playing with.

One such presentation was by the awesome Amy Kapernick who presented her Quokkabot. Using .Net she linked up WhatsApp and Azure’s Custom Vision API so that anyone could ask for a picture of a quokka or check whether the picture they had was a quokka.

I approached her at the end of the presentation to ask if she had considered doing the same on the Power Platform and she said she had not. Challenge Accepted!!

This is the end result: you can Tweet any image with the hashtag #IsItARedback and my bot will Tweet on @leontribe whether it is or not (being on the free plan, the response can take up to an hour but it will come). There was literally NO code in its production.

So What is the Custom Vision API?

Custom Vision API is part of Azure’s Cognitive Services. Cognitive Services are Azure’s AI services which can do mind-blowing things. They are friendly and well worth your time.

As you will see in this blog, they are quite easy to set up when you have Power Automate in your corner.

Custom Vision is a deep learning image analyzer which is a fancy way of saying you can train it to recognise stuff in images. There are plenty of other services, depending on your need but this is the one I needed for my application.

The Project

Amy comes from Perth where there are quokkas. Sydney does not have anything as cute as quokkas so I selected the redback spider. Truth be told Perth has redbacks too but it is much easier for me to identify a redback than, say, a funnelweb when compared to the other eight-legged critters that inhabit Australian shores. Please note the recognition limitation is mine and not that of the Custom Vision API.

While Amy used WhatsApp to make the request, there is no standard Connector in Power Automate for WhatsApp so I used Twitter instead. I was already reasonably familiar with Power Automate and Twitter from working on my TwitterBot which helped the decision.

The Power Automate Bit

If you do not know what Power Automate is, it is the new name for Microsoft Flow. The rumour is Microsoft could not secure the name Power Flow so they went with Power Automate just so everything in the Power Platform had ‘Power’ in the title.


The legacy still remains though. To set up a free account and create a flow (which seems to be what they are still going with) you go to flow.microsoft.com. In this case, our flow is relatively simple.

To go through it step by step, our trigger is a Tweet with the hashtag #IsItARedback.

We then loop through the media images linked in the Tweet and feed them to the Custom Vision API.

The Custom Vision API returns probabilities that the image corresponds to one of the Tags we have set up (we will see this a little later).

Looping through the Tags, we check whether the Redback tag has scored a probability of greater than 50% and respond via Twitter accordingly.

The Tweets are constructed such that they respond and show the Tweet they are responding to.

The Custom Vision API Bit

Firstly, head to portal.azure.com. You will need an Azure subscription but you can get a 12-month trial for free with $200 of credit.

Click the big plus and search for “Custom Vision” to find the service. Hit the Create button and fill in the fields, keeping the default options.

There is a free pricing plan to help preserve those trial credits.

Once complete, two services will be created: the training service and the prediction service.

Click through to the one which is not the Prediction service, go to Quick Start and click the link through to the Custom Vision Portal and sign in.

Create a new Project and follow the prompts.

The Getting Started wizard will then walk you through the setup.

By the end we have created two Tags: ‘Redback’ and ‘NotRedback’, uploaded at least 15 images for each, linked the images to the Tags and trained the model by hitting the Train button (I did the Quick Train to preserve my free cycle allocation). Do not forget to hit the Publish button on the Performance tab to make your training model/iteration accessible.

The Quick Test button allows you to test from the portal uploading an image file or providing a URL.

Linking Power Automate and Custom Vision API

The final link in the chain is linking Power Automate to the Custom Vision API. This was the step which took me the longest to figure out. To set up the Connection, you will need:

  • Connection Name: Whatever you like
  • Prediction Key: This can be found by clicking the ‘Prediction URL’ button on the Performance tab.
  • Site URL: This is the data center for your model. This took a bit of detective work: by going to the Custom Vision Prediction API Reference ClassifyImageURL page (thank you Olena Grischenko for the tip!) and seeing how it constructed the Request URL, I could work out the components. For me, as I based my service out of the East US center, the value is eastus.api.cognitive.microsoft.com

You will then need to populate the values in the flow Step:

  • Project ID: You can either get this from the page URL when in the project in the Custom Vision API portal, or by clicking the Cog icon in the Custom Vision portal at the top right.
  • Published Name: The one value that took me the longest to figure out (seriously Microsoft, make it easy for the dev-muggles!). What it should be called is Iteration Name. The default value is ‘Iteration1’. In this case, trawling through sample code online showed that ‘Iteration Name’ = ‘Published Name’ = ‘Model Name’. The sketch below shows how these values assemble into the prediction request.
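
Pulling those values together, here is a rough Python rendition of what the flow does. The endpoint shape follows the ClassifyImageUrl page in the Prediction API reference mentioned above; treat the exact path and the placeholder IDs as assumptions to verify against your own resource.

```python
import requests

# Site URL + Project ID + Published Name assemble into the prediction endpoint.
PREDICTION_URL = (
    "https://eastus.api.cognitive.microsoft.com/customvision/v3.0/Prediction/"
    "<project-id>/classify/iterations/<published-name>/url"
)
HEADERS = {"Prediction-Key": "<prediction-key>", "Content-Type": "application/json"}

def is_it_a_redback(image_url: str) -> bool:
    """Classify one image URL and check whether the Redback tag scores over 50%."""
    response = requests.post(PREDICTION_URL, headers=HEADERS, json={"Url": image_url})
    response.raise_for_status()
    return any(
        p["tagName"] == "Redback" and p["probability"] > 0.5
        for p in response.json()["predictions"]
    )

# The flow loops over the media in the Tweet and replies accordingly.
def reply_text(image_url: str) -> str:
    if is_it_a_redback(image_url):
        return "Yes, that looks like a redback!"
    return "No, I don't think that is a redback."
```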

Conclusions

While this is a fun and trivial example (unless you have just been bitten by a spider and are not sure), it is clear to see there is a lot of possibility in this technology. The Cognitive Services are already being used to help manage fauna and flora in Australia’s Kakadu National Park, as well as to manage fish populations in Darwin Harbour. I can also see this service bringing semi-automated medical diagnostic services to the world, being used for the auto-assessment of parts before they malfunction and for quality control in mass production. Considering where we have come from, these are very exciting times.

Microsoft Power Platform 2020 Release Wave 1 Plan: The bits which excited me


Microsoft are getting quite organised with their documentation these days and last week they put out their Power Platform ‘roadmap’ for April 2020-September 2020. This is separate to the Dynamics 365 2020 release wave document whose 400+ pages will have to wait for another day.

Here are the bits in Power Platform, going to General Availability, which excited me.

Power BI

  • Drillthrough Buttons: This should make guiding a report user through the report a lot easier. They also are context aware, which is great.
  • Office ribbon for Power BI Desktop: Getting everything consistent makes it much easier for people to get on board with an application
  • Incremental refresh: Only data that has changed will update in a report. Less data consumption, quicker refreshes, less frustration
  • Conditional formatting for totals and subtotals: For exception reporting, this is a fantastic inclusion and I am surprised it was not there already.
  • Being able to render a paginated report in any format, such as PDF or Excel via the API: Very useful for automated report comms/invoicing etc.
  • Copy and paste visuals into other applications: If only Dashboards would do the same! Interoperability of apps is a key foundation of Microsoft Office and it is great Power BI has followed suit. I hope others will also come to the table soon.
  • Sub-report support: In the category of “why was it not there in the first place?” we have sub-report support.
  • Datasets larger than 10Gb in Power BI Premium: The whole point of Power BI is to synthesize large data into meaningful reports. The bigger the better as far as I am concerned. The only limit now is memory capacity.

Power Apps

  • Deep integration from Azure to Microsoft Teams: The ability to create apps directly in Teams is very exciting to me. Teams starts to move from being a collaboration tool to a truly useful productivity tool.
  • Canvas and model-driven apps run on a single mobile application: Some of us still have scars from the early attempts of taking Dynamics to a mobile device. Users do not know or care whether an app is canvas or model-driven so having them launch from the one place makes a lot of sense.
  • Power BI Embedded component in portal designer: No more need for liquid code to make this happen. Easier to build and manage.
  • Save and Save & Close are back!: With auto save on, the only way to force a save was the tiny disk icon in the footer. The Save and Save & Close buttons are now back in the Command Bar, which certainly makes me a lot happier as I am trained, by the ghosts of Microsoft past, to hit save and hit it often.

Power Automate (what used to be called Microsoft Flow)

  • Copy and paste Actions: Actions can now be copied and pasted. A huge time saver for branching scenarios
  • UI flows: Probably my favorite feature in the release. This is like a macro recorder for flows. Record mouse clicks, keyboard strokes and data entry and then automate it. UI flows also come with error handling and are solution aware.
    • Automate web-based applications: Supporting Google Chrome (and Microsoft Edge Chromium) this allows the automation of web-based applications. This is crazy powerful and allows for all sorts of automated testing which may be hard to execute through traditional script
    • Automate Windows applications: Macro capture for Windows applications. Very exciting.
    • Automate on virtual machines: UI flows can be run on virtual machines, including Microsoft Remote Desktop

Power Virtual Agents

These are very new and exciting and allow you to create a chatbot without any code.

  • Add a Power Virtual Agents bot into a Power Apps canvas app: Great for automated help within an app
  • Add images and videos to topics: The bot’s response can now include video and images. With a picture being worth a thousand words it makes sense to make responses more than just text
  • Additional language support: Bots will be able to converse in French, German, Spanish, Italian, Portuguese and Chinese (a specific bot can only handle one of these though)

AI Builder

  • Form processing: Teach it what your form looks like with a few examples and you can automatically extract data for Power Apps or as part of a flow.
  • Object detection: Used for recognizing or counting objects, this has a huge range of applications from checking an employee has work safety gear on through to automated stock taking

Power Platform governance and administration

  • Admin connectors for Power Automate/Power Apps: These will be in General Availability in July 2020 and literally allow an admin to manage the tools with the same tools i.e. create flows to manage flows/apps. A great way to ensure an admin is familiar with their management tools.

Common Data Model and data integration

  • SAP ERP connector for Power Apps and Power Automate: I thought this one was already there but obviously not. This allows you to connect to SAP ECC or S/4HANA which is often part of a client’s ecosystem
  • New connectors in Power Query Online: There are quite a few of these but the ones which I want to explore further are: Active Directory and OLEDB

They really have listened!

All through the document I kept seeing features flagged as being driven by community feedback.

Historically it was not clear that Microsoft considered outside feedback from MVPs or the public in setting priorities for their development. They are very clearly stating this is now part of the process and I applaud them for it.

Conclusions

Innovation in the Power Platform is coming thick and fast and this document proves it. All of the above features are coming into General Availability, although not all straight away, so check the document if there is a specific feature you need. The one I really want to play with is UI flows. For legacy automation and low code automation this could really be an inexpensive way to achieve a lot. I have said it before but it is a very exciting time to be in Business Applications.

Business Applications’ New Architecture Paradigm


Back in BC (Before CDS), the architecture of a Dynamics solution involved (roughly) the following steps:

  • Listen to the client’s business need
  • Work out which modules most closely aligned
  • Minimize the cost of development and maximize the benefits
  • Configure and customize
  • Go live

With the introduction of CDS and the evolution to the Business Applications ecosystem, the steps have changed:

  • Listen to the client’s business need
  • Work out how they buy their Microsoft software licenses
  • Explore all the different ways the business need can be met
  • Work out the license implications of each one
  • Minimize the cost of development and licensing and maximize the benefits
  • Configure and customize
  • Go live

If you are not considering the license implications for your clients’ solutions, it could prove to be a costly mistake.

Licensing Then and Now

Historically (BC) the Dynamics licensing model was simple. You paid a ‘per-user-per-month’ fee and it was all you could eat. This was probably the big difference between Salesforce and Dynamics back in the day. Salesforce has always charged for each module and the incremental add-on of costs was referred to as the “Salesforce Tax” in competitive pitches. With Dynamics the cost model used to be simple but, over time, it has changed to the current component-based pricing, similar to Salesforce’s model.

The Dynamics components of old have been split out as add-on modules to CDS and the Business Applications ecosystem includes Azure services and the Power Platform. We have a wide variety of different license models for each of the components. The cost model is no longer a simple multiplier based on the number of users.

Navigating the New World

One way to consider the new model is to consider whether your customer’s needs will work better with a consumption model or a per-user model. If the solution is to be used by a large number of users but infrequently, a consumption model makes sense. Conversely, if the solution is to be used by a small number of users but they will conduct a high volume of transactions in it every day, a per-user model may make more sense.

Hypothetical Case Study: Internal Catering

Let us say we have the requirement of building a system which allows the request of internal catering for meetings. Users go to an app or web page, specify what they need and the request goes to an internal organizer who sorts out the catering.

In the old days we would likely look to the Customer Service module and use Cases. However, this is charged on a per-user basis. So how many users are we talking about? Let us say 12,000 internal users. That is an expensive application for sandwiches and wraps.

So is there another option? The Business Applications ecosystem provides a wide range of options to solve business problems. PowerApps are also on a per-user basis so this does not help. Virtual Agent may be useful but the pricing model, to my knowledge, has not come out at the time of writing. PowerApps Portal is licensed by proxy to things like PowerApps which again leads us to a per-user model for internal users.

An option which may work is Forms Pro linked to CDS via Power Automate (the new name for Microsoft Flow), licensed on a per Flow (Automate?) basis. As long as we stay under the 15,000 API requests per day limit we are good to go (high number of users, low number of calls).
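
A quick back-of-envelope check makes the point; the staff count and the 15,000-requests-per-day cap come from the scenario above, while the order rate and calls-per-order figures are my own assumptions for illustration.

```python
staff = 12_000
orders_per_day = staff * 0.01     # assume roughly 1% of staff order catering on a given day
api_calls_per_order = 5           # assume a handful of Forms Pro / Power Automate / CDS calls each
daily_requests = orders_per_day * api_calls_per_order
print(daily_requests)             # 600.0 -- comfortably under the 15,000 daily cap
```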

Instead of using Dynamics 365 for Customer Service and restricting an enterprise-ready customer service module to ordering Danishes, considering licensing forces us to get creative with all the tools in our Business Applications toolbox. It also forces us to think beyond the cost of creation to the cost of ongoing maintenance.

Conclusions

In the old world, licensing was not a significant part of the architectural design. Licensing was simple and for a per-user-per-month fee you had access to what is now CDS and the Sales, Marketing, and Service modules. In the new Business Applications ecosystem, licensing is much more complex and is a vital input into ensuring a solution is delivering true value to its customer, both for the duration of the implementation and beyond go-live.

One of the simpler ways to consider licensing is in terms of the number of users and the activity the solution will generate. This can be weighed against whether a per-user or consumption model will provide the most value. For lots of users and an app which is only used occasionally, a consumption model makes sense. For fewer users and an app which those users will live in, a per-user model may be a better option.

The notion of considering the license implication of a design will set apart the next generation of Business Applications architects from the old Dynamics CRM architects who are catching up. Do not get left behind or, worse, deliver an elegant solution which will bankrupt your client in licensing fees.

How To Become An Expert In Business Applications


A question I often get asked at conferences or via LinkedIn is how I became an expert in Dynamics/Business Applications. The answer is pretty simple. I was on a bootcamp for Microsoft CRM 1.0 beta and have been tinkering with it ever since, focusing mainly on doing crazy things with Workflow. That bootcamp was back in 2003 and, to show how long ago that was, here is a screenshot of a Total Cost of Ownership (TCO) calculator they provided us, comparing against the biggest competitor at the time, Saleslogix (Salesforce was not even a blip on Microsoft’s radar back in 2003).

Everything was on-premise back then so, as an implementer, you also had to know how to set up SQL Server, Exchange and all the other server components Microsoft CRM relied on. We have come a long way. The good news is if you are looking to become an expert in Business Applications today, you can do it much faster than the 16 years it has taken me.

How The Game Has Changed

It used to be the case that, to be an expert in Microsoft/Dynamics CRM/365 you jumped on board with a version of the product and then just incrementally updated your knowledge as the new version came out. The version cycle used to be every 2-3 years, back in the old days, so it was very easy to keep up with all things Sales, Support, and Marketing. Then, to meet the expectations of the SaaS market, Microsoft changed the way they released software. Firstly, around five years ago, they committed to an annual release with a six-monthly mini-release. This slowly evolved into a six-monthly/unscheduled release cycle and now, for the online version of the product, it is a continual release.

How does this change the game? Because it is simply impossible to keep up to date with everything that is happening with Business Applications and the Power Platform. The pace of innovation is too fast. Moreover, much of the knowledge gained over the years is now redundant (in my case Workflows are becoming less and less relevant so it is time to carve out a new niche).

Let me state this very clearly: The MVPs who used to know almost everything there was to know about Microsoft/Dynamics CRM/365 are, at best, experts in one area or module and have a broad understanding of the rest. Business Applications has, for the purposes of understanding all of it, moved beyond the Technological Singularity.

The Opportunity For You

With no one being an expert in all things Business Applications, and new additions coming out literally all the time, anyone can grab hold of an area of the ecosystem and become the expert.

To highlight an example, I am going to pick on a friend of mine, Elaiza Benitez. Now Elaiza is no newcomer to Dynamics; she has been blogging about it since 2014 but, in my opinion, what has catapulted her to fame is her YouTube series #WTF (What The Flow). When I was at UG Summit EMEA earlier this year, Elaiza was mentioned in presentations and people asked me about her, and it was all because of #WTF.

#WTF was only started a year ago and exclusively focuses on tips and tricks for Microsoft Flow. Her recent passion is linking Flic buttons to Flow for creative and practical applications.

Anyone could have dug into the details of Flow and made a name for themselves. The barriers to entry were low because Flow requires practically zero coding, but it was Elaiza who took the plunge and chose to own this piece of Business Application real estate and bring it to YouTube. She was the one who chose to devote the time to understand it and devise ways to make it accessible to the rest of us. Similarly with Flic buttons; to connect a Flic button to Flow requires zero code. The immense value Elaiza brings is in coming up with entertaining, meaningful and practical examples demonstrating the value of this elegant technology and inspiring others to improve the world with it.

OK, enough of this fanboy adoration of Elaiza, what does this mean to you? It means you can do the same. There are so many products being released begging for someone to take them up and show how amazing they are and how they can transform the world.

What Products Are Begging For Experts?

Here are some examples of products in the ecosystem begging for someone to grab them with both hands and show the rest of us how it is done:

  • IoT: The barriers to setting this up are reasonably high, with Azure subscriptions and setup required, but this means whoever takes this ground will be unassailable
  • Talent: From what I understand it is not complex and should be fairly easy to get across. It has hooks into both F&O and Dynamics 365 CE (or whatever the new name is for the CRM modules) so it is open to being attacked from both sides of the Business Applications fence.

STOP PRESS: The gracious Megan Walker has pointed out that there is a Talent MVP who reigns as the queen of Talent, this being Malin Donoso Martnes. Apologies for this oversight Malin!

  • Retail: This has received significant investment with the soon-to-be-released Commerce which claims to be everything you need to run a store both online and offline. If I was running my own consultancy and wanted to focus on a solution which I could roll out to companies across the country, this would be it.
  • AI: With the new AI Builder making AI available without a line of code and the pre-built Insight solutions, this is a large piece of territory which will soon be taken. A few people could focus on specific areas in AI without getting in each other’s way. If you want to become an expert in something which Microsoft is investing in heavily and which will provide tremendous value to those who adopt it, AI is for you.
  • Mixed Reality: There is some hardware investment and likely a need for coding in this one but, like IoT, for those who invest, their position will be hard to challenge. The biggest problem is understanding how it will help businesses do what they do better. The person who cracks that nut and makes it accessible to the masses will be hailed a hero.
  • Fraud Protection: A lesser known offering in the Dynamics 365 collection. For the person who becomes the expert in this, they will, in my opinion, be able to write their own checks. It is very easy to demand top dollar when you have just saved a company a few million in fraudulent transactions.
  • Azure Cognitive Services: Spinning up an Azure service and using Flow’s Connectors to talk to it is codeless and very simple. There is so much power here and all it is waiting for is the next #WTF for Cognitive Services.

What Is Stopping You?

There is very little preventing you from being an expert in any of the above technologies. All it takes, in many cases, is focus and time. If you are willing to devote a few hours a week to tackling your chosen territory, you will be ahead of 99% of the people out there.

Extending Scrum Without Making It FrAgile


It has been a while since I have written a post. In the interim, I started my Diabetes Blog: The Practical Diabetic. While I am Type 1, there is good information in the articles for anyone who is prediabetic or a different Type and all are tagged to make sure the content is relevant. Being a physics geek, all the information has a basis in science. No half-baked cures on my blog. There are also some interesting technical articles for helping manage the disease. If you know someone in the D-Camp, point them my way.

This article is based on research I did for PowerObjects as part of their Agile Center of Excellence (CoE). PowerObjects have a bunch of CoEs with membership across the globe. We get together weekly and work out best practices and share war stories. I belong to the Agile one. In this case, the research was around the various Agile frameworks out there and their applicability to Dynamics implementations.

As with my Diabetes blog, if the article looks too long, skip to the tl;dr section at the end for a summary.

The Sanctity of Scrum

Arguably the most common Agile framework used for Dynamics implementations is Scrum. The definition of Scrum is a very readable 19-page PDF document. However, The Scrum Guide is far from prescriptive. For example, the words ‘velocity’ and ‘points’ appear precisely zero times, yet these are common elements in many Scrum implementations; The Scrum Guide says precious little about tracking progress. To this end, there is plenty of room for adding creative flavor to Scrum. Creativity is often seen in the format of the retrospective, and in the estimating of effort in Sprint Planning. Another area where we can bring creativity to Scrum is by introducing elements from other Agile frameworks out there.

Agile vs Lean

Software development frameworks come from two main paradigms: Agile and Lean. While Agile focuses on delivery through iterative development, Lean focuses on the process and seeks a continuous stream of productivity and improvement. Agile values the people involved, collaboration, and adaptability, while Lean values the elimination of waste, improved quality and optimization.

Where the two share common ground is in providing efficient and tangible outcomes. Most software development frameworks sit between the philosophies of Agile and Lean.

The Frameworks Of Interest

There are many, many frameworks but for this post I will focus on three: Extreme Programming (XP), Scrum, and Kanban. If you like another framework, by all means embrace it. These three are useful as they cover the spectrum between Agile and Lean and share some compatible elements.

The most Agile is XP, while Kanban is very Lean, being all about the process, with Scrum sitting in the middle. In terms of how the frameworks differ, the key points of differentiation for Scrum are its focus on the people involved (both on the consulting side and the customer side) and the ability to accommodate offshore development teams (something increasingly common in Dynamics implementations).

                       Scrum    XP      Kanban
Project Size           All      Small   All
Sprint (weeks)         2-4      2       1
Process Centric        No       No      Yes
People Centric         Yes      Yes     No
Virtual Team Support   Yes      No      Yes
Documentation          Basic    Basic   N/A

Extreme Programming (XP)

Extreme Programming is probably the most famous of the ‘pure Agile’ frameworks. Born out of the rise of the internet and the dot-com boom, it sought an alternative to the traditional waterfall approach, which is better suited to construction projects.

The approach of XP is less about a set of requirements and more about embracing the right values, principles, and practices to achieve the requirements; it focuses on the journey, not the destination.

Designed for pure coding, it is, in my opinion, difficult to embrace completely for Dynamics implementations. However, in terms of its values and in using an incremental development approach, it is closely aligned to Scrum.

Some of the techniques/philosophies used in XP which can benefit a Scrum implementation are:

  • Pair programming: Even if it is configuration of the system, a second pair of eyes can be invaluable for re-evaluating the intent behind a User Story, or offering different approaches to address the problem (there is always more than one way to solve a problem in Dynamics).
  • Test-Driven Development: Determining how a story will be tested is a great way to understand what needs to be configured/coded. It also provides an opportunity for the Product Owner, the test team and the development team to come together to ensure they are aligned on what is to be achieved.
  • Collective Code Ownership: There is no ‘i’ in Extreme and with the development team in Scrum being an amorphous blob, it makes no sense that configuration/code is an individual responsibility.

Kanban

Kanban is all about the process and visualises this process through a board with tickets covering it to represent the jobs (User Stories) being processed and the stage in the process they are at. This board is creatively known as the ‘Kanban board’. The Kanban board is a board of continual development/progress. No sprints here.

A key element to Kanban is a lack of upfront planning and story-sizing. This means it is very hard to predict what effort or time a project will take to be delivered. Convincing a project sponsor to give the go ahead for a project with no timeline or budget is challenging and this often precludes Kanban as the primary framework for a Dynamics implementation.

Some of the approaches used in Kanban which can be beneficial to Scrum are:

  • Using a Scrum board: The Scrum board differs from a Kanban board in that the Scrum board is reset at the end of every sprint to reflect the new Sprint Backlog. Otherwise, their appearance and function are quite similar.
  • Limits on the number of stories permitted at a given stage: This prevents too many stories moving between stages e.g. development to testing. One benefit of this is it will highlight resourcing issues if a specific stage is blocked due to too many stories. It will also show if stories are not being system tested thoroughly by the development team before handing over to the test team: if too many are returned for re-work, this will also lead to blockage.
  • Story-typing: A lot of information can be conveyed on a Kanban board and there is no reason why this cannot be brought across to a Scrum board. A great example of this is in the use of ‘Story-Typing’ which is the classification of User Stories, depending on the type of story they are. Chores (work that needs to be done upfront before actual development can start) and Spikes (Research/analysis required to address a User Story) are good examples of story types.

Neil Benson, who I had the privilege of working with on a two and a half year Agile Dynamics implementation for the University of New South Wales, was a big fan of story-typing. Blocked stories were inverted on the board and we had a healthy pool of Spikes and Chores. We also used paired programming. Neil is quite the fan of mixing it up.

tl;dr

There are quite a few software development frameworks out there and while Scrum is the most popular, the Scrum framework is sufficiently flexible that it can incorporate elements from other frameworks.

Looking at the various Lean and Agile frameworks out there, two which have elements which can be adopted by a Scrum implementation are Extreme Programming (XP) and Kanban. Elements which lend themselves to inclusion are:

  • Pair-Programming
  • Test-Driven Development
  • Collective Code Ownership
  • Using a Scrum board
  • Limiting the number of User Stories at a given stage of the development process
  • Story-Typing

The Evolution of Customer Service From A Call Center to Multi-Channel And Beyond


Starting university in the early nineties gave me a unique position to appreciate the modern evolution of technology. The internet in the public domain was still in its infancy. I was one of the first people I knew to have an email address and had to explain what email was to many of my friends. ICQ was five years away so messaging was done via ‘telnet’ where you ‘dialled’ someone’s IP number to chat.

Browsing was text based with hyperlinks and there was no search engine (AltaVista was three years away). You simply discovered pages of web links through word of mouth.

This was a time when customer service was delivered through three channels: face to face, phone, and fax. We have come a long way.

The Advent of Multi-Channel


People do not really talk about multi-channel any more. It was big for customer service and for marketing and, while it could be argued even our pre-internet customer service was multi-channel, my recollection is the term only came into vogue when linked to internet channels.

It was also the beginning of a shift in considering what customer service was for. Before this time, customer service was little more than part of the product/service offering. If a company offered three services, each department was responsible for customer service, and there would be three customer service functions. I recall a prominent American bank at the time having literally a dozen different fax numbers for different divisions (the only reason I remember this is because I once flooded all the fax numbers when the bank was slow at refunding an erroneous monthly charge. Enquiry processing across the bank came to a halt in what was, arguably, a pre-internet Denial of Service attack).

With the introduction of channels like email and online forms, came the shift to considering the customer’s experience. It made sense for the customer to choose the most convenient way to reach the organisation and not the other way around.

The outstanding problem was minimal cross-communication. Multi-channel meant multiple ways for the customer to get service but each channel was still a separate experience. Switching channels often meant starting again and customers were still bounced around departments for more complex issues.

Progression to Omni-Channel

[Google Trends graph for "omnichannel"]

Omni-channel, as you can see from the Google Trends graph, started becoming a thing about five years ago. Thanks to the online revolution, enterprise-level CRM systems became affordable for all. This provided a centralized hub for all enquiries. You could email about an issue, follow up with a phone call, and then go to the company’s physical service counter and all interactions would be recorded in the same system and available at the click of a button.

While multi-channel gave consumers a choice of communication channel, omni-channel took it one step further and ensured a consistent experience or, at least, a consolidated one.

With most CRM systems, a rudimentary omni-channel system can be set up relatively easily. In my last project for a major university, whether the student asked their question face to face, via phone, email, or online form, everything became a Case record in Dynamics. In an omni-channel system, the customer gets to use the channel which makes sense for them and their enquiry. For the company, the channel does not really matter as a centralized CRM system means all enquiries are treated consistently. A true omni-channel system also removes “answer shopping”, common in multi-channel systems.

The Future is Omni-Moment


The core assumption in an omni-channel system is the customer chooses a channel for an enquiry and sticks with it for the duration of that enquiry. Focussing further on the customer experience, the nature of the enquiry may require multiple channels to be engaged as part of the one interaction. Let us consider an example of opening a bank account.

In the multi-channel experience, a customer calls to find out about the procedure. They do not quite get the answer they are after, so they call back to get a different agent. They then visit a bank branch to collect the right forms. They go home to fill in the forms. If they need to clarify something about the form, they either call or revisit the branch. There is no guarantee that the advice they get from these channels will be consistent.

The customer hunts down a notary and has their identification documentation validated. Once completed, the forms are faxed. Finally, once the processing department has informed the local branch that the account is open, the customer returns to the bank branch to provide a signature and collect a bank card.

Every step in the process is an isolated channel, with the customer being expected to bring it all together in what was often a frustrating and time-wasting experience.

In the omni-channel world, the customer goes online to find out about the procedure and there is an online form. If the customer has a question about the form, they can call or browse the web site. As both channels are pulling their information from a centralized knowledge management system, the answers will be consistent (and hopefully comprehensive).

Identification documentation is again notarized and once the form is completed, with notarized documentation attached, the application is processed and, with signatures being a thing of the past, a card is sent in the mail.

In the omni-moment experience, the customer goes online to find out about the procedure. The web site recognizes the intent and provides the option of a chat bot to assist. If the customer’s enquiry cannot be answered by the web site or bot, the interaction is escalated to a human. The agent offers to share screen and walk the customer through filling in the online form. Using video conferencing, the agent can verify identification on the spot without the need of a notary. Forms are completed and the account is opened immediately ready for online use. A bank card is again sent in the mail.

As you can see, the seamless integration of people, process and technology, make for a delightful customer experience. A process which took a week in the multichannel world, is completed in half an hour in the omni-moment world.

The Evolution of KPIs

As the way we interact with customers has changed, so too must our KPIs. Here are some classic call center KPIs which I consider irrelevant (or at least very misguided) in the modern customer service center.

Average Handling Time

Even back in the days of call centers, I was not a fan of this measure. It encouraged agents to open a call and immediately hang up to lower the stat. It is focussed on productivity, often at the expense of the customer experience.

If an agent spends 20 minutes assisting one customer to open up a bank account and 30 minutes with another, why is this a problem? If one agent is terse and goes through the form quickly, is this better than someone who actually takes the time to make sure the customer knows what is going on?

Average Time in Queue

There really is no excuse for waiting in a queue on the phone these days. Assuming a customer insists on exclusively using the phone, a call back service should be standard procedure. In an omni-moment world, there should be no queue and all queue measures are irrelevant.

Cost Per Enquiry

It is good to have visibility on costs but this should not be managed at the expense of the customer experience. In the early days of online channels it was realised these were much cheaper to operate than traditional channels. In some cases the customer experience was worsened for the traditional channels to encourage people to go online. This is management in the absence of strategy and is disastrous in the long term.

What is the Purpose of Customer Service?

The ultimate measure of customer service should be customer satisfaction. In my opinion this should be sought directly through surveys rather than assumed through measures such as Average Handling Time (a short call is not necessarily a good call). I can see value in measuring First Call Resolution (as confirmed directly with the customer) as this should be the ultimate goal of customer service. However, it needs to be modified so it covers all channels across the customer experience, not just the phone component (assuming a phone is even involved).

While in a pre-multi-channel world, customer service was seen as little more than a necessary evil for selling a product or service, in an omni-moment world, the minimum standard is having the customer ask no more than once and be satisfied every time they make an enquiry. In fact with machine learning, in many cases, it should be possible to anticipate customer need and frequently achieve ‘ask never’ for existing customers.

Generating Reports For NightScout Data Using Flow, Excel, and OneDrive

Standard

A few months ago I talked about extracting data from a MongoDB database for the purposes of generating alerts. Since then I have taken it further and now generate regular reports of my data using the power of Flow, Excel, and OneDrive. As this may be useful to others running NightScout I thought I would share my set up and the discoveries along the way.

The Flow

First of all, I need to extract the data from the MongoDB and send it to a target Excel sheet. To do this we use Flow.


I have set the recurrence to three hours. This strikes a balance between not running too often and blowing my Flow quota, and running sufficiently often to give timely results. Running every three hours means approximately 240 runs a month, which works well with our limit of 750 flow runs per month.

The variable stores the latest DateTime value from our target Excel file.


To populate this variable, we query our target Excel and set the value.


Here we return only one row from Excel, being the row with the highest DATE value. We then use this to set the variable.

Once we have this DateTime value we incorporate it into a modified version of the API call we used in the Alert blog.


For this call we bring back up to 100 entries from the MongoDB, a handful of fields, ordered by date, so that if there are more than 100 rows available after the latest DATE in our target Excel, only the rows immediately after this DateTime are returned. This ensures the query does not mess with the row order when it transfers them to Excel.

My continuous glucose monitor (CGM) feeds a value to the MongoDB every five minutes which means it generates 180/5 = 36 entries every three hours. Therefore 100 is a good setting to keep on top of the additional values generated in MongoDB but sufficiently large that it will be able to catch up if there is a temporary issue with the running of Flow.
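
For anyone who prefers to see the query itself, here is roughly what that HTTP action is doing, expressed with pymongo rather than Flow. The connection string, database name and example DATE value are placeholders; the field names match the NightScout entries schema used in the table below.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/<db>")
entries = client["<db>"]["entries"]

latest_date = 1571700000000  # the highest DATE already in the target Excel table (placeholder)

new_rows = (
    entries.find(
        {"date": {"$gt": latest_date}},                      # only rows newer than the Excel data
        {"date": 1, "dateString": 1, "sgv": 1, "delta": 1},  # just the fields we report on
    )
    .sort("date", 1)                                         # oldest first, so Excel stays in order
    .limit(100)                                              # 100 at a time, as in the Flow call
)

for row in new_rows:
    print(row["dateString"], row["sgv"], row.get("delta"))
```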

Once the reply is parsed, we can populate our Excel with the new rows.


One point of note here is that the Flow step requires a Table within the Excel workbook. This is relatively easy to set up. Basically, you add your headers to the sheet, highlight them and select Format as Table from the Styles section of the Home tab.

The resulting table contains the following columns.


The DATE value is an integer representing the DateTime value but it is a little difficult to read or transform, so we also record the DATESTRING, which is a little friendlier. Then we have the SGV value, which is the blood glucose level in units only the USA uses, and finally we have the DELTA, which is the change in SGV value between reads.

Once we have captured our data, we can begin reporting on it.

The Report

I discovered relatively quickly that Flow has a size limit for the Excel files it will work with. In the free plan this size limit is 5Mb, which makes it impractical for our purpose. Luckily I had a paid Flow plan via my Office subscription so I moved to this. This plan allowed me to work with Excel files up to 25Mb in size. This worked well. My Excel file has approximately four months of data in it and is 1.6Mb in size. Therefore, I have around five years of data to go before Flow reaches its limit. In five years either Microsoft will have removed this silly limit, I will be using a different technology to analyse my data or they will have found a cure for Type 1 Diabetes (there is a running joke in the diabetes community that the medical professionals have been promising a cure within five years for decades now).

The other trick I did to minimise the size of my target Excel was to house the reporting in a separate file and use a Power Query to reference back to the target file for the data. Using this Power Query, and some Excel formulae to manipulate the data to make it friendlier for reporting, I got this for my first worksheet.


If you struggle to replicate any of my formulae, please leave a comment and I will reply with the details.

HbA1c Prediction

The HbA1c is an indicator of how ‘sugary’ your blood has been for roughly the last four months. Using our CGM data we can make a prediction of what our HbA1c value is.


There are a few formulae available to do this calculation and in the above I use three of them. In the case of my blood results, the models predict 5.3, 5.1, and 5.1 which is well below the target threshold of 6.5 so well done me. I expect this value to slowly increase over time as my pancreas becomes less able to lower my blood sugar levels.
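
Two published conversions give a feel for the calculation. The worksheet uses three formulae which may not be exactly these, so treat this as an illustration of the idea rather than a copy of the spreadsheet (values in mmol/L; divide SGV by 18 to convert from mg/dL first).

```python
def hba1c_adag(avg_mmol_l: float) -> float:
    """ADAG/Nathan 2008 fit, eAG (mmol/L) = 1.59 * HbA1c - 2.59, rearranged for HbA1c."""
    return (avg_mmol_l + 2.59) / 1.59

def hba1c_dcct(avg_mmol_l: float) -> float:
    """Older DCCT-era fit, mean glucose (mmol/L) = 1.98 * HbA1c - 4.29, rearranged."""
    return (avg_mmol_l + 4.29) / 1.98

average_bgl = 5.6  # example four-month average in mmol/L
print(round(hba1c_adag(average_bgl), 1), round(hba1c_dcct(average_bgl), 1))  # ~5.2 and ~5.0
```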

Distance Report

The Distance Report is something that can only really be generated using CGM data with a regular time interval between measurements (in our case every five minutes). The Distance Report shows the total ‘distance’ travelled by the blood values, i.e. the sum of the absolute delta values, and is an alternative measure to the standard deviation.


For this report we only have data for the last four months as this is how long I have been using a CGM. We can see that the distance travelled each month is roughly the same. As time goes on we would expect this to increase as the pancreas becomes weaker and blood glucose levels (BGLs) start to vary more.
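
The calculation behind the report is simple enough to show in a few lines of pandas; the sample rows below just stand in for the target workbook’s table.

```python
import pandas as pd

df = pd.DataFrame({
    "DATESTRING": ["2019-06-01T08:00", "2019-06-01T08:05",
                   "2019-07-02T09:00", "2019-07-02T09:05"],
    "SGV":   [100, 104, 98, 92],
    "DELTA": [  2,   4, -3, -6],
})

# "Distance travelled" is the sum of the absolute deltas, grouped per month.
month = pd.to_datetime(df["DATESTRING"]).dt.to_period("M")
distance_per_month = df["DELTA"].abs().groupby(month).sum()
print(distance_per_month)  # 2019-06: 6, 2019-07: 9
```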

BGL Report

This was the first report I created and reviews literally all my BGL measures (around 600 manual finger pricks and then the CGM data).


In the top left we have literally every value recorded and when it was recorded. The CGM data can be seen as the ‘thickening’ of the values towards the right hand side of this graph.

In the top right we have the distribution graph for the data showing the spread of results.

The bottom left shows all the data points but strips out the Date value, leaving only the Time value. This has the effect of showing the data over a 24 hour period.

Finally, in the bottom right, we have a range of filters to assist with analysing the data.

For example, if we compare the distribution curves for 2017, 2018, and 2019, we see that they are centred around 5.4, 5.5, and 6.0 respectively. In other words, it appears the curve is moving to the right over time. This is consistent with a weakening pancreas (or me being more relaxed about carbs).

Range Report

The Range Report looks at the average and standard deviation of the data per hour, looking for where in the day the BGL values are highest and vary the most.


The graphs are relatively flat with a slight increase towards the end of the day. This is likely the result of dinner (generally the largest and most variable meal of the day and therefore the meal with the most impact on glucose levels) and late night snacking (which will never have a positive effect on BGLs). Again we have a filter, in this case a timeline, to help with our analysis.
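
The Range Report boils down to a group-by on the hour of the day. A minimal sketch, again with stand-in data:

```python
import pandas as pd

df = pd.DataFrame({
    "DATESTRING": ["2019-06-01T08:00", "2019-06-01T21:00",
                   "2019-06-02T08:05", "2019-06-02T21:10"],
    "SGV": [100, 130, 104, 142],
})

# Average BGL and standard deviation for each hour of the day.
hour = pd.to_datetime(df["DATESTRING"]).dt.hour
range_report = df.groupby(hour)["SGV"].agg(["mean", "std"])
print(range_report)
```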

Distribution Report

The Distribution Report does a similar analysis as the Range Report but per month, rather than per hour.


The trendlines suggest the numbers are relatively flat (average BGL around 6 with a standard deviation of 1). It is expected both of these will increase over time as the pancreas weakens and BGLs become more variable.

Displaying the Data to the Health Team

With the Excel files sitting in OneDrive, you simply right click the file to generate a link for sharing a read-only version for health care professionals. In my case I use bit.ly to also make it friendlier. While it is a little twitchy, it is reasonably friendly across various form factors and browsers.

Conclusions

Flow opens up a raft of opportunities for using my data whether it be alerts, analysis to maintain my health or making it readily available to my health care team. A few years ago this kind of set up would have taken weeks of coding, if it was possible at all. Today, it requires zero code and costs almost nothing. If this kind of set up could help you or someone you know, have a tinker, it really is straightforward to set up.

Review: Amazon Echo/Alexa

Standard

This Christmas has been something of a revolution in the Tribe household. Prior to December 25th, our household had very little which was internet enabled outside of phones, gaming consoles and laptops. The television is a plasma dinosaur, the stereo has an analog radio tuner in it and is the size of a slab of beers, and the lights require moving to the wall and flicking a switch to activate.

Then came the Amazon Echo. I had bought it for my wife as she had been keen to get one for a while. It was Christmas so I bit the bullet and bought the second generation Amazon Echo. If you are unfamiliar with this device, it is essentially a Bluetooth enabled speaker with a digital assistant built in.

Amazon Echo

The setup was an absolute nightmare. Here are my tips:

  • If you are in Australia, make sure you shift your account over to amazon.com.au first. I had already shifted mine but my wife had not, which only became a problem when I tried to add myself to her household. Some esoteric error messages and a bit of configuration later, all was good.
  • If you are looking to share content through the household option, this is not yet supported in Australia. Yes, all the heartache of the previous step was for naught.
  • The device is effectively a single-user device (I’ll elaborate a bit more on this later). Whoever the main user is, they should be the person who downloads the Alexa configuration app to their phone during set up. I initially set it up with my phone and then tried to shift it across to my wife’s. A few hours with support and a couple of factory resets later, we were good again.
  • The Amazon Echo is similar to Android devices in that there is one ‘first class’ user and multiple ‘second class’ users. In the case of the Echo, additional users are set up as voices in the primary user’s Alexa app. Once this is done, the additional users can download the Alexa app, log in as the primary user and then select who they really are. This being said, there is no strong differentiation in content. For example, if Amazon has access to the primary user’s contacts, everyone has access. Similarly, while you can add an Office 365 account to Alexa for appointments, this is the primary user’s account which, again, everyone has access to. You cannot add multiple Office 365 accounts, let alone differentiate them by user.

However, once setup was done, things were smooth sailing. I got 14 days of free access to Amazon Music, which had everything I could think of (ranging from Top 40 through to 70s prog rock band Camel). What’s more, the more we used the Amazon Echo, the more value we saw. All those little nuggets of information we would usually look up on our phones, we can now simply ask Alexa. Examples include:

  • The current time in another timezone
  • When it is sunset (the time our house becomes a device-free zone until dinner)
  • Random trivia (do fish have nostrils?)
  • The latest news
  • The weather outside

You can also use it to make calls via Skype (though untested as I write this) and, for anyone who has installed the Alexa app on their phone and logged in as themselves (via the primary user), you can call them through the Alexa app even when they are away from home.

There are also ‘skills’ (read as ‘apps’) which can be added to the Echo. While the variety in Australia is woefully limited compared to the US, there is still enough to be useful. So far I have added:

  • ABC World News
  • Domino’s Pizza
  • Cocktail King
  • The Magic Door (a Choose-Your-Own-Adventure storytelling app for children)
  • RadioApp

The way you speak to Alexa (the assistant in the machine) is not completely natural, but you do get used to it. For some of the skills you need to say “Alexa, open <skill name>” first before it will realise it needs to employ that skill. For example, if I ask “Alexa, how do you make a Negroni?” it will suggest using the Taste skill, even though Cocktail King is activated. To get the recipe I need to say “Alexa, ask Cocktail King how to make a Negroni”.

Finally, you can speak to Alexa through the Alexa app on your phone. In one case, I had added the Diabetic Connections Podcast skill to Alexa but, given the content was of limited interest to my family, I asked, through my phone, for it to play the latest episode. Sure enough it came through to my phone and, with the headphones plugged in, my family were none the wiser.

Echo Dots

With the Echo downstairs, and my desire for us to stop shouting up the stairs to summon our children, I had bought two Echo Dots within 24 hours of setting it up: one for each child’s bedroom.

Echo Dot

These have exactly the same brains in them as the Amazon Echo so they can be used standalone. However, through the Alexa app, you can make them part of the same ecosystem meaning you can use them as an intercom system throughout the house. Also, they support commands such as:

  • “Alexa, tell Orlando’s room that dinner is ready”
  • “Alexa, tell Claudia’s room it is time to go”
  • “Alexa, tell Orlando’s room it is time to wake up”

All with their own special audio touches.

Some Hacking

It has only been a couple of days so I have not had time to get up to too much mischief but here are a few things I have discovered:

  • Amazon Echo is compatible with IFTTT so if you want to trigger IFTTT when you issue a command to Alexa, this is not a problem
  • Amazon Echo is also Smart Watch friendly. When I played the podcast, controls appeared on my Smart Watch. This also happened when I played music through Amazon Echo
  • If you go through the Alexa app, it demands you have a Spotify Premium account before it will connect Spotify. You can get around this by pairing your phone to the Amazon Echo (“Alexa, pair my device”). Once your phone is paired, anything you run on the phone, e.g. Spotify, will have its sound come out of the Amazon Echo.
  • If you get yourself a Bluetooth stereo receiver (basically a Bluetooth receiver which plugs into the audio input of your stereo), it is fairly straightforward to get a dinosaur stereo like mine to become the Echo’s sound system.

Next Steps

The next step is to make the house a little more internet aware. I have ordered a WiFi plug from eBay for around AU$12 (roughly US$10) and I will see if I can link it to the Echo and have Alexa turn things on and off. For example, I could set up my slow cooker and then, halfway through the day while at work, tell Alexa, through the phone app to turn on the plug and initiate the cooking of dinner for that evening.

Conclusions

While setup was a nightmare for me and there is little in the way of an instruction booklet for the device, now that I have experimented with it for a couple of days I am really happy with my purchase. The main reason for not going with Google Home was its lack of support for Office 365. That being said, the fact that only one Office 365 account can be added through the app makes that differentiator smaller in hindsight.

Amazon suggest they will continue to improve the device and, as I upgrade the appliances in my home over time, I expect the benefits will multiply, e.g. linking Amazon Prime to a smart TV.

If you are looking to take the plunge, my recommendation is to do so. The devices, especially the Dots, are very inexpensive and the previous (second) generation ones are being sold for a song by Amazon and retailers such as JB-Hifi. If you want to go really cheap, you can buy the Echo Input, which is the brains of an Echo without the speaker; you simply plug it into an existing speaker.

If you have any Echo hacks, please post them in the comments.

Making a Tweet Bot With Microsoft Flow

Standard

If you subscribe to my Twitter feed, you will have noticed a lot more activity of late. This is because I have created a Tweet Bot to find me the most interesting Dynamics articles out there and Tweet them.

My inspiration for doing this was Mark Smith’s Twitter feed (@nz365guy). Every hour Mark pumps out a Tweet, sometimes in a different language, sometimes on related technologies, such as SQL Server. He also drops in quotes from the books he is reading, as well as the odd manual Tweet.

Mark Smith Twitter

As you can see, this formula has been very successful for him. Over 11,000 followers and almost 69,000 likes on the back of 29,000 Tweets. That’s a little over two likes per Tweet. Good stuff.

Previously I had only really used Twitter to promote my blog articles so I thought it would be a perfect testbed to see if automated Tweeting, plus the odd promotion of my blogs and speaking engagements did anything to lift my own statistics.

In doing so, I also found a curated list of Tweets far more useful than browsing through the Tweets of the people I follow, not least because my own list is ad-free. Now I review the curated list most days and, if I find something I really like, I post it to my LinkedIn feed. So, if you want to see something less automated, feel free to follow me on LinkedIn.

How It Works

image

Here it is. Essentially, the Flow:

  • Triggers with a pre-determined frequency
  • Initializes a bunch of variables and searches for candidate Tweets
  • Loops through the Tweets to find the best one
  • Stores the winning Tweet in a list of sent Tweets and then Tweets it

Let us go through these stages in more detail.

Recurrence

This seems pretty straightforward but there are a couple of things to consider. Firstly, if I did as Mark does and scheduled a Tweet every hour, this would be around 24*30 = 720 Tweets per month, which is close to my quota of 750 on a free plan. Doable, but it does not leave a lot of wiggle room for other Flows and experiments, like my MongoDB integration.

Initially I set it to every two hours but even this ran into trouble, with the following error often appearing:

{
  "status": 429,
  "message": "This operation is rate limited by Twitter. Follow Twitter guidelines as given here: https://dev.twitter.com/rest/public/rate-limits.\r\nclientRequestId: 00776e5e-6e93-4873-bcf5-a1c972ba7d2a\r\nserviceRequestId: 597a00b83806f259127207b0a18797a0",
  "source": "twitter-ase.azconn-ase.p.azurewebsites.net"
}

I went to the link suggested but it was broken. So I checked the rate limits in the Flow documentation for the Twitter connector and I did not seem to be violating them, which was quite confusing. A little browsing revealed that others had come across this problem too and it appears to be a bug in Flow.

image

A bit of testing suggests that as long as you do not Tweet more often than once every four hours you do not hit this error (unless you are Jukka).

Variables and the Candidate Tweets

Variables are really useful for debugging, as you can see the value assigned to them, but also for managing the information you pass around in your Flow. In my case, I defined the following variables:

  • TweetBody: The body of the Tweet we will be posting
  • TweetRank: A measure of how good the Tweet is. Initially I wanted to use ‘Likes’ but Flow does not allow you to access the number of Likes a Tweet has so I had to use another measure in the end.
  • TweetAuthor: Who Tweeted the best Tweet. While Flow does not allow you to Retweet (or put the ‘@’ symbol in any Tweet you post), I wanted to give the original poster as much credit as I could
  • TweetID: Every Tweet has a unique ID which is useful to make sure you are not posting the same popular Tweet more than once
  • TweetMatch: A flag to say if a Tweet being reviewed has failed to make the cut of being the ‘best’ Tweet

The criterion for the candidate Tweets is pretty simple.

image

If the Tweet has the #msdyn365 hashtag, it is worth considering. You will notice my step limits the number of Tweets returned to 100. The reason is that this is the maximum allowed by Flow, which is a pity.
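
Flow itself is no-code but, to make the logic concrete, here is a rough Python sketch of this set-up stage (the sketches in the following sections reuse the same made-up fields). The search_tweets helper, the dataclass, and the field names are all illustrative stand-ins for the Flow variables and the Twitter connector, not real APIs:

    # Initialise the working state (mirroring TweetBody, TweetRank, TweetAuthor,
    # TweetID and TweetMatch) and fetch up to 100 candidate Tweets tagged #msdyn365.
    from dataclasses import dataclass

    @dataclass
    class BestTweet:
        body: str = ""              # TweetBody
        rank: int = 0               # TweetRank (the original Tweet's retweet count)
        author: str = ""            # TweetAuthor
        tweet_id: str = ""          # TweetID
        match_failed: bool = False  # TweetMatch

    def search_tweets(query: str, max_results: int = 100) -> list:
        """Stand-in for the Flow 'Search tweets' action (returns canned data)."""
        return [
            {"is_retweet": True, "original_id": "42", "original_author": "@someone",
             "original_text": "Handy #msdyn365 workflow trick",
             "original_retweet_count": 3},
        ][:max_results]

    best = BestTweet()
    candidates = search_tweets("#msdyn365", 100)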

Loop Decision One: Has the Tweet Been Retweeted?

As mentioned above, it is not possible with Flow to check the number of Likes a Tweet has, so I took inspiration from Google. While much more complex now, the original algorithm for ranking in the Google search engine was based on the number of links to a web site: the more people referenced you, the more likely you were to appear at the top of the search rankings. In my case, I used the number of retweets of the original Tweet being referenced as my measure of popularity. To clarify, this is not the number of retweets of the Tweet that the Flow search found but, if the search found a retweet, the number of retweets of the original Tweet it points to. Going to the original Tweet as my source also meant I removed the possibility of Tweeting two people’s retweets of the same original Tweet, no matter how popular those retweets were.

However, I soon discovered that testing the number of retweets of the source Tweet failed if the Tweet was not a retweet. I tried working around this by capturing null results but, in the end, it was easier just to test up front.

image

You will see that if the condition fails, we set our TweetMatch flag. If there is no retweet, the Tweet is no good.
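
In Python terms, the check amounts to something like this (is_retweet being a made-up field standing in for what the connector actually returns):

    # Loop Decision One: only retweets are worth scoring, because the popularity
    # measure is the retweet count of the *original* Tweet.
    def is_usable(candidate: dict) -> bool:
        return bool(candidate.get("is_retweet"))

    tweet_match = not is_usable({"is_retweet": False})  # flag raised: no good
    print(tweet_match)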

Loop Decision Two: Will My Tweet Be Too Long?

Next I want to make sure that if I construct a Tweet from this candidate Tweet, it is not too long. Initially I just concatenated and truncated the resultant Tweet but this partially cut hashtags, and I could see that being a problem if the wrong hashtag was cut the wrong way (#MSDYN365ISAWFULLYGOOD becoming #MSDYN365ISAWFUL, for example).

image

The format of my resultant Tweet is ‘<author> <Tweet body>’ so as long as this is under 280 characters, we are good to go. Again, if this test fails, we set the TweetMatch flag.
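
A rough equivalent of this length check, assuming the same made-up fields as before:

    # Loop Decision Two: the posted Tweet will be '<author> <Tweet body>', so test
    # the combined length against Twitter's 280-character limit rather than
    # truncating (and risking mangled hashtags).
    def fits_in_a_tweet(author: str, body: str, limit: int = 280) -> bool:
        return len(f"{author} {body}") <= limit

    print(fits_in_a_tweet("@someone", "Handy #msdyn365 workflow trick"))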

Loop Decision Three: Testing for Popularity and Filtering Out ‘Bad’ Tweets

image

Next we ask if the Original Tweet Retweet Count is bigger than the retweet count of our existing ‘best’ Tweet. If not, we raise our flag. If it is, we need to make sure that the Tweet in question has not been Tweeted by me before and that it is not from my blacklist of Twitter accounts.

To manage the list of posted Tweets and the blacklist, I used an Excel sheet in OneDrive. I also included myself on the blacklist as, if I did not, it could lead to the situation where I am reposting my own Tweet which, in itself, could be reposted, and so on. Again, if these tests fail, the flag is set.
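
Sketched in Python, with illustrative field names and a hypothetical blacklist entry, the test looks something like this:

    # Loop Decision Three: the candidate must beat the current best Tweet's retweet
    # count, must not have been posted before, and must not come from the blacklist.
    def beats_current_best(candidate: dict, best_rank: int,
                           posted_ids: set, blacklist: set) -> bool:
        return (candidate["original_retweet_count"] > best_rank
                and candidate["original_id"] not in posted_ids
                and candidate["original_author"] not in blacklist)

    posted_ids = {"17"}             # IDs already Tweeted (kept in the Excel sheet)
    blacklist = {"@my_own_handle"}  # hypothetical handle; I blacklist myself too
    candidate = {"original_retweet_count": 7, "original_id": "99",
                 "original_author": "@someone"}
    print(beats_current_best(candidate, best_rank=3,
                             posted_ids=posted_ids, blacklist=blacklist))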

Final Loop Decision: Is the Tweet Worthy?

image

If the Tweet gets through all those checks unscathed, the variables are set with the values from this new Tweet. Otherwise, we reset the TweetMatch flag in readiness for the next loop iteration. We then repeat for the next candidate Tweet until we have gone through all of them.
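
Pulling the whole loop together, here is a self-contained sketch of the selection pass. The sample Tweets, field names and blacklist handle are all invented, and Flow does this with conditions and variables rather than code, so treat it purely as an illustration of the logic:

    # One pass over the candidate Tweets, applying the three decisions and keeping
    # track of the best Tweet found so far.
    candidates = [
        {"is_retweet": True, "original_id": "42", "original_author": "@someone",
         "original_text": "Handy #msdyn365 workflow trick", "original_retweet_count": 3},
        {"is_retweet": True, "original_id": "99", "original_author": "@another",
         "original_text": "Deep dive into #msdyn365 solution layers", "original_retweet_count": 11},
        {"is_retweet": False},
    ]
    posted_ids = {"17"}             # IDs already Tweeted (the Excel sheet)
    blacklist = {"@my_own_handle"}  # hypothetical handle; includes my own account

    best = {"body": "", "rank": 0, "author": "", "id": ""}

    for c in candidates:
        if not c["is_retweet"]:
            continue  # Decision One: no original Tweet to score
        if len(f'{c["original_author"]} {c["original_text"]}') > 280:
            continue  # Decision Two: the composed Tweet would be too long
        if (c["original_retweet_count"] <= best["rank"]
                or c["original_id"] in posted_ids
                or c["original_author"] in blacklist):
            continue  # Decision Three: not popular enough, already sent, or blacklisted
        # Final decision: this candidate becomes the new best Tweet
        best = {"body": c["original_text"], "rank": c["original_retweet_count"],
                "author": c["original_author"], "id": c["original_id"]}

    print(best)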

Store and Send

image

With the winning Tweet selected, we store its ID in our Excel sheet to avoid sending it twice on subsequent runs and post our Tweet. Initially, rather than using an Excel sheet, I tried string matching to avoid resends but this proved too hard with the limited tools available in Flow. Keeping a list of IDs and looping through them proved to be a lot easier to implement in the end.

As mentioned before, Flow does not allow for retweeting, so I simply constructed a Tweet which looks similar to a retweet and off it goes.

image
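
As a final sketch, the store-and-send step boils down to appending the winning ID to the list of posted Tweets and composing the ‘<author> <Tweet body>’ text. The CSV file and the post_tweet stub below are stand-ins for the Excel sheet in OneDrive and the Flow ‘Post a tweet’ action:

    # Record the winning Tweet's ID so it is never posted twice, then send the
    # retweet-style Tweet. The file name and stub function are illustrative only.
    import csv

    def post_tweet(text: str) -> None:
        """Stand-in for the Flow Twitter 'Post a tweet' action."""
        print(f"Posting: {text}")

    best = {"id": "99", "author": "@another",
            "body": "Deep dive into #msdyn365 solution layers"}

    with open("posted_tweets.csv", "a", newline="") as f:
        csv.writer(f).writerow([best["id"]])

    post_tweet(f'{best["author"]} {best["body"]}')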

Consequences of Activating the Bot

I did have one Follower complain about the bot but, otherwise, things have been positive, as you can see below.

image

Impressions, visits, and mentions are significantly up, with a net gain in followers too. Moreover, as well as getting more exposure, I now have an ad-free list of interesting articles to read and promote on LinkedIn.

Conclusions

This has been a really interesting project from a Flow development perspective but also in forcing me to consider what I use Twitter (and LinkedIn) for and whether I should change my use of them.

Building the bot has given me lots of tips on how non-coding developers can think like their coding counterparts, which I will be talking about in Melbourne at Focus 18, and this conscious change in my use of Twitter has massively increased my audience reach.

I encourage all of you to think about how Flow can solve that automation problem you have but also, if you use social media, to seriously consider whether you use it as effectively as you can and whether it could serve you better.