Building a Medical Analysis Bot in Days with Zero Code: Part One – Setting up AI

Three years ago I built the “Aussie Spider Bot”: a bot, built on Azure AI, which could take a picture and determine whether it contained a Redback Spider. It worked really well. Now I have used the same principle to build an “AGP Reader”, which is effectively a medical image reading/assessment bot. What is remarkable is that it only took me a few days part-time and required zero lines of code. Given how accessible this is, I thought I would walk through the process so others can do similar things.

Given the length of the setup walkthrough, I have split this blog into two parts: this part, which walks through setting up the bot, and a second part, which explains how to connect it to Power Automate.

What is an AGP Graph?

This is an Ambulatory Glucose Profile (AGP) graph. It is effectively a heat map for blood glucose levels (BGLs) for people with diabetes. To create it, you take BGLs over a number of days and then look at where the values most commonly fall; dark blue is the most frequent occurrence, light blue less so. The green area is the ‘safe range’ where values should ideally sit.

In the above we see that I (yes, this is from my data) tend to spike around 14:30 and also at 21:30. It is fairly easy to work out this is due to meals (lunch and dinner). Why no breakfast spike? Generally I do not eat breakfast.

This graph is something many people with diabetes generate once every, say, three months as part of their visit with their health care team (endocrinologist, diabetes educator, etc.). The interpretation I did in the previous paragraph is usually left to the health care team, who then talk about ways to improve the curve, e.g. adjusting how insulin is being used, adjusting meals, etc.

My aim was to see if I could create a bot which identified the same spikes and also provided feedback (Spoiler Alert: I could and it works great).

Setting Up the Bot: Azure Custom Vision

The pattern was effectively the same as the Aussie Spider Bot: Train Azure AI on imagery and then hook it up to a comms channel to receive imagery and return the analysis (via Power Automate). While I could have used AI Builder to link the two, I find it easy enough to link them manually.

The first step is to go to the Azure Portal (sign up for a free trial and credit if needed) and provision a Custom Vision service.

Clicking Create brings up the setup screen.

Key fields of note:

  • Resource Group: This is just a way of grouping Azure resources together for cost analysis. Pick an existing group or create a new one.
  • Region: Different regions have different pricing tiers, so shop around, especially if you are looking for a free plan.
  • Name: The name of your bot.
  • Training/Prediction Pricing Tier: In the above I was unable to select the free plan because it was already in use (by the Aussie Spider Bot), but you should be able to.

It should also be noted that we are, in fact, provisioning two services (yes, selecting both at the top of the setup screen is the way to go): a training service, where the bot learns, and a prediction service, where it uses that learning to draw conclusions.

Once done, hit “Review and Create” and you will have your bot set up and eager to learn.

Where is my Bot?

At this point I struggled to work out what to do next. I had a bot but it was not obvious where to go to train it.

In fact, if you scroll down the page, Step 2 (which really, really should be Step 1 because calling an untrained service makes no sense at all) tells you where to go:

Clicking the blue link sends you to customvision.ai where you can set up your bot (you will have to log in again with the same credentials as Azure).

In my case, because I was keen to use the free pricing tier, I created the AGPBot as a second project under the original Aussie Spider Bot service. If you are creating it fresh, you create a new project.

Most of this is about setting up the kind of bot you want:

  • Project Type: Classification will categorise the image (what I wanted for mine), whereas Object Detection finds the location of specific objects within an image, e.g. a bird in a national park photo.
  • Classification Types: For the Aussie Spider Bot I used Multiclass but, in this case, as an image may have multiple tags associated with it, I chose Multilabel.
  • Domain: The kinds of objects we are working with. As mine did not obviously fit another category, I chose “General [A2]”.

When done, click the “Create Project” button.
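For the curious: everything the customvision.ai portal does here is also exposed through the Custom Vision Training REST API, so project creation can be scripted. The zero-code path through the portal is all you need, but here is a minimal stdlib-only sketch of what happens under the hood (the endpoint, key, and API version are placeholders you would take from your own training resource):

```python
import json
import urllib.request

# Placeholders -- substitute the values from your own Custom Vision training resource.
TRAINING_ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"
TRAINING_KEY = "YOUR-TRAINING-KEY"

def create_project_url(endpoint: str, name: str,
                       classification_type: str = "Multilabel") -> str:
    """Build the Training API URL for creating a classification project."""
    return (f"{endpoint}/customvision/v3.3/training/projects"
            f"?name={name}&classificationType={classification_type}")

def create_project(name: str) -> dict:
    """POST to the Training API; returns the new project as a dict."""
    req = urllib.request.Request(
        create_project_url(TRAINING_ENDPOINT, name),
        method="POST",
        headers={"Training-Key": TRAINING_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    project = create_project("AGPBot")
    print(project["id"])
```

The `classificationType` query parameter is where the Multiclass/Multilabel choice from the portal ends up.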

Setting up the Project

We are now ready to train our bot. Opening the project gives us the main areas we will work in:

  • Training Images: Where we add tags, images and link the two
  • Performance: Where we review the effectiveness of the bot and publish it
  • Predictions: Historic predictions made by the bot
  • Train: Where you initiate the training of the bot
  • Quick Test: An area where you can try out your bot, once trained

Let us go through the relevant ones:

Training Images

The first step is setting up the tags. I had ten positive tags and one negative tag. A negative tag is used for a null result i.e. none of the positive tags apply. I then uploaded my images for training and went through the exercise of clicking through each one and classifying it with the tags.
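Tag creation and image upload are likewise just Training API calls behind the scenes. A hedged sketch, with placeholder endpoint, key, and project id, showing how the negative tag differs from the positive ones (the `type=Negative` query parameter):

```python
import json
import urllib.request

# Placeholders -- substitute your own training resource details and project id.
ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"
KEY = "YOUR-TRAINING-KEY"
PROJECT_ID = "YOUR-PROJECT-ID"
BASE = f"{ENDPOINT}/customvision/v3.3/training/projects/{PROJECT_ID}"

def tag_url(name: str, tag_type: str = "Regular") -> str:
    """URL for creating a tag; use tag_type='Negative' for the null-result tag."""
    return f"{BASE}/tags?name={name}&type={tag_type}"

def upload_image(image_bytes: bytes, tag_ids: list) -> dict:
    """Upload one image and associate it with the given tag ids."""
    url = f"{BASE}/images?tagIds={','.join(tag_ids)}"
    req = urllib.request.Request(
        url, data=image_bytes, method="POST",
        headers={"Training-Key": KEY,
                 "Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In the portal you do all of this by clicking; scripting it only becomes worthwhile if you have hundreds of images to upload.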

Once all the images are tagged, we are ready to train. Click the Train button.

The simplest way is to select Quick Training and, for the Aussie Spider Bot, this is what I used. However, the AGP Bot needed more refining, so I selected Advanced Training and set it to 24 hours. In practice it usually ran for 8-12 hours, but the results were much better than with Quick Training.
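The Quick/Advanced choice maps to parameters on the Training API's train call. A small sketch of the URL it builds (the project base URL is a placeholder):

```python
from typing import Optional

# Placeholder project URL -- substitute your own endpoint and project id.
PROJECT_BASE = ("https://YOUR-RESOURCE.cognitiveservices.azure.com"
                "/customvision/v3.3/training/projects/YOUR-PROJECT-ID")

def train_url(training_type: str = "Regular",
              budget_hours: Optional[int] = None) -> str:
    """Build the TrainProject URL; Advanced training takes an hour budget."""
    url = f"{PROJECT_BASE}/train?trainingType={training_type}"
    if budget_hours is not None:
        url += f"&reservedBudgetInHours={budget_hours}"
    return url

# Quick Training is the default; the 24-hour Advanced run looks like:
advanced = train_url("Advanced", 24)
```

The budget is a cap, not a target, which matches what I saw: a 24-hour budget that finished in 8-12 hours.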

After it completes, you can go to the performance tab to see how the bot is shaping up.

Each training generates an “iteration” which can then be published and accessed by things like Power Automate. The circled “i”s explain what Precision, Recall, and AP mean but, for me, I simply treat the overall percentages as a measure of accuracy/performance. To improve the numbers, you can look at the performance of individual tags and, generally, throwing more images at the problem will help. In the case above, adding more images which have a low BGL at dinner would help train the model.
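Publishing an iteration is the step that makes it callable from the prediction side. In the portal it is one button; via the Training API it is a single POST whose URL names the iteration, a publish name of your choosing, and the Azure resource id of the prediction service we provisioned earlier. A sketch with placeholder ids:

```python
from urllib.parse import quote

# Placeholders -- substitute your own endpoint and project id; the prediction
# resource id is the full Azure resource path of the prediction service.
PROJECT_BASE = ("https://YOUR-RESOURCE.cognitiveservices.azure.com"
                "/customvision/v3.3/training/projects/YOUR-PROJECT-ID")

def publish_url(iteration_id: str, publish_name: str,
                prediction_resource_id: str) -> str:
    """Build the PublishIteration URL; POST it with your Training-Key header."""
    return (f"{PROJECT_BASE}/iterations/{iteration_id}/publish"
            f"?publishName={publish_name}"
            f"&predictionId={quote(prediction_resource_id, safe='')}")
```

The publish name you pick here is the one Power Automate (or any other caller) will use in Part Two.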

To see the model in action, we can now go to Quick Test.

Here you can try a sample image (ideally one the bot has NOT been trained on) to see if it yields the desired results. In my case, I trained on 150 images and reserved 10 for testing. Once I had refined the tags and added enough images for the 10 testing images to work, I was confident the model was ready to go.
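Under the hood, Quick Test calls the Prediction API: the same endpoint Power Automate will call in Part Two. A stdlib-only sketch, where the endpoint, key, project id, and publish name are all placeholders for your own values:

```python
import json
import urllib.request

# Placeholders -- substitute the endpoint/key of your *prediction* resource.
PREDICTION_ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"
PREDICTION_KEY = "YOUR-PREDICTION-KEY"

def classify_url(project_id: str, publish_name: str) -> str:
    """URL of the ClassifyImage endpoint for a published iteration."""
    return (f"{PREDICTION_ENDPOINT}/customvision/v3.0/prediction/"
            f"{project_id}/classify/iterations/{publish_name}/image")

def classify(project_id: str, publish_name: str, image_bytes: bytes) -> list:
    """POST an image and return (tag, probability) pairs for each prediction."""
    req = urllib.request.Request(
        classify_url(project_id, publish_name),
        data=image_bytes, method="POST",
        headers={"Prediction-Key": PREDICTION_KEY,
                 "Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return [(p["tagName"], p["probability"]) for p in result["predictions"]]
```

With Multilabel, several tags can come back with high probabilities for one image, which is exactly what we want for an AGP graph with, say, both a lunch spike and a dinner spike.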

Conclusions

There you have it, how to set up an Azure AI bot. In my case, the longest parts of the process were sourcing the images and individually tagging them; that part took most of a Sunday with the Power Automate Flow being finished over the next couple of nights. Other than that, the process was straightforward. As with the Aussie Spider Bot I will end this post with an xkcd comic from 2014.

What was “virtually impossible” 10 years ago is simple today and, as per the instructions in this blog, completely codeless. The only thing stopping us is not realising how accessible this technology is. If you have an idea, like I did with the AGP graphs, have a play. You will not regret it.
