A couple of weeks ago I put together a Flow for merging Word files. I mentioned at the time I was not overly comfortable with the approach. While the Flow worked fine, the design was not great.

Here we have an example of poor design from that Flow. The Flow splits based on whether the input is “CRM” or “ERP”, yet the two processes are identical. If we need to adjust the process for one, we will likely need to do the same for the other, and this double-handling introduces room for error. Can we redesign the process and eliminate the double-handling? This is what is meant by refactoring: changing code (or, in this case, a Flow) so it does the same thing but is more “elegant”.
Reasons to Refactor
Obviously, eliminating double-handling is one good reason to refactor or, to put it another way, “Maintainability”. We want to create a Flow which can be maintained easily in the future, either by us or by someone else. If we need to remember to make identical changes in multiple spots in the Flow, it is not easily maintained.
Other reasons for refactoring include:
- Readability: A Flow which someone can open and easily see what it does
- Scalability: A Flow which copes as its use, or the volume of information it works with, grows
- Reusability: A Flow which we can use in other circumstances so we do not have to reinvent/recreate
- Performance: A Flow which runs efficiently
All of these apply to the Create Response Flow.
Readability
Let us look at the current version of our Flow, which does effectively the same thing as before but has been redesigned to embrace these design principles.

First of all, every Action has a meaningful name. Without expanding them, I get a sense of what they do. I can also add comments to Actions through the ellipsis menu to explain them in more detail.
Finally, you can see a couple of brown Scope Actions. A Scope Action is used as a container for a collection of Actions. So, for example, the “Set up Substitution JSON” Scope looks like this when expanded.

In this case, the Actions get a list from an Excel sheet and then do something with it and, based on the title of the Scope, it appears they are generating JSON for the Substitution step. For someone relatively familiar with making Flows, this should be enough to put them on the right track.
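As a rough sketch only (the property names and values here are placeholders of my own, not necessarily what the Encodian Search and Replace Action expects), the JSON built from the Substitution worksheet might look something like this:

```json
[
  { "searchPhrase": "[CLIENT NAME]", "replacementPhrase": "Contoso Pty Ltd" },
  { "searchPhrase": "[RESPONSE DATE]", "replacementPhrase": "1 January 2023" }
]
```

Each row of the worksheet becomes one search/replace pair, which is why changing the substitutions never requires touching the Flow itself.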
Maintainability
Let us look at how we read the files we are merging and how we merge them. In the original Flow, we read the file contents of all possible Word files we were looking to merge into our response,

and then added them manually to the Merge Action.

We also added a little bit of script to check whether that particular file was to be included.
This is hard to maintain (for any new file we need to add a Get File Contents Action, add it to the Encodian Merge Action, and also add a tick box to our Trigger inputs), hard to read (the Get File Contents Actions alone span multiple pages on screen), and troublesome to scale (imagine setting up the Flow for 100 potential documents). So what is the alternative?
First of all, I moved the list of potential files, and the flag indicating whether to include each one, into an Excel sheet in a central location.

In the Flow we read the Excel table list of files and loop through them, checking to see if the Include value is set to YES.

The advantage of this approach is that the list of files can change and the Flow is unaffected. We simply adjust the list in Excel and we are good to go.
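For the curious, the check inside the loop can be written as a single Power Automate expression along these lines (the loop name “Apply_to_each” and the exact column name are assumptions on my part, so adjust to suit your own Flow):

```
equals(toUpper(items('Apply_to_each')?['Include']), 'YES')
```

Wrapping the value in toUpper() means a lowercase “yes” typed into the spreadsheet still passes the check.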
Scalability
You will notice in the above image that the final step is “Check if below message buffer size”. This refers to a limitation in Power Automate which hinders scaling the Flow to many documents.
Firstly, let us revisit the Encodian Merge CRM Word Documents step. You will notice a “T” in the top right of the configuration settings for the Action.

Clicking this toggles the configuration from an entry screen to JSON which looks like this.

If you are unfamiliar with JSON, it is, for the purposes of this discussion, a structured formatting convention for storing information (I once likened it to XML because of this, much to the chagrin of a developer friend of mine). Because everyone agrees on the JSON standard, it is easy to pass JSON between Actions and, because it is formatted in a well-defined way, we can construct our JSON and insert it into the Encodian step afterwards, which is, of course, what our loop does.

In this case, the File Array is an Array variable and the loop keeps adding to it.
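To give a feel for what is being built up, each pass of the loop appends one element to the File Array, so by the end it resembles something like the following (the property names and file names are illustrative only; check the JSON your own Encodian Merge Action shows when you toggle the “T”):

```json
[
  {
    "fileName": "01 Introduction.docx",
    "fileContent": "UEsDBBQABgAIAAAAIQ... (base64-encoded Word document) ..."
  },
  {
    "fileName": "02 CRM Overview.docx",
    "fileContent": "UEsDBBQABgAIAAAAIQ..."
  }
]
```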

The reason for the buffer check? It turns out converting the file contents of Word documents into a format which is friendly for Arrays (text rather than binary) blows out the size, and Power Automate will only let you add elements to an Array up to a certain size. So, the buffer check gets the length of the Array and the length of the new element to be added and checks whether combining them will be too large. If it would be, a TEMP file is used to store the file contents and the Array is reset.
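As a sketch of that check (the Action and variable names here, such as “Get_file_content”, “FileArray”, and “BufferLimit”, are stand-ins of mine rather than the exact names in the Flow, and the expression is spread over several lines only for readability), the Condition can be driven by something like:

```
less(
  add(
    length(string(variables('FileArray'))),
    length(string(body('Get_file_content')))
  ),
  variables('BufferLimit')
)
```

If this returns true the new element is appended to the Array; otherwise the Flow takes the TEMP file branch described below.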

If it is not completely clear what is happening here: I merge the files already in the JSON Array variable (Merge Array Documents) and then merge the result with the existing TEMP file I have in a Teams folder, writing back to the TEMP file with the additional content. In short, every time we hit the Array buffer limit, we merge the extra parts and write them to a TEMP file.
Why don’t I simply merge each file inside the loop and not worry about building an Array in the first place? Because my Encodian licensing plan gives me a finite number of Action calls per month, so merging, say, three times as our Array becomes too big is better than merging 30 times for the 30 Word documents I am looking to combine. This consideration of Encodian Action calls falls more under Performance, so we will return to it later.
Reusability
Using an Excel file as a configuration file for the Flow also has the advantage that the Flow “engine” can be used elsewhere and, the more settings we put in the Excel file, the less rework we have to do to make it function. There is still some work to do on this front in the Flow. While the specific files used are now outside of the Flow, I still specify the Site Address and part of the File Path when retrieving the TEMP file. Ideally these would be stored in the Excel file and retrieved at the start of the Flow. In an ideal world there would be no settings in this Flow other than the location of the Excel configuration file.
Transplanting the Flow elsewhere would then just need a copy of the Flow and the Excel configuration file in a convenient location. The only change needed to the Flow would be to point it at the Excel configuration file and you are good to go.
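To illustrate the idea (the setting names, site, and path below are made up for the example), a Settings worksheet read with the Excel “List rows present in a table” Action could return rows like:

```json
[
  { "Setting": "SiteAddress", "Value": "https://contoso.sharepoint.com/sites/Responses" },
  { "Setting": "TempFilePath", "Value": "/Shared Documents/Responses/TEMP.docx" }
]
```

The Flow would read these once at the start and use them wherever a Site Address or File Path is currently hard-coded.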

Performance
Performance is traditionally thought of as how quickly something runs. In this case the speed of the Flow is not a big consideration: it takes about 10 minutes to merge 30 or so documents and that is fine. A bigger performance issue, as alluded to earlier, is the minimization of Encodian Action calls.
At the moment the Flow uses three Action calls at the end to merge the final File Array with the TEMP file and then perform a Search and Replace of pre-defined text substitutions (also configured in Excel, in the Substitution worksheet, with the JSON for the Action built in the first Scope of the Flow mentioned at the start).

It also costs us another two Action calls every time we reach the Array buffer limit and need to write to the TEMP file, which is the branch of the loop we explored above. So, for a run which hits the Array limit, say, three times, the Performance is 2*3+3 = 9 Encodian Action calls.
Defining Performance as using the fewest Encodian Action calls, I can see the possibility of reducing the number further by writing a separate TEMP file each time we hit the Array buffer limit rather than merging as we go. So, by the end of the main loop we might have TEMP1, TEMP2, and TEMP3 files, and we merge these at the end. In principle, this would bring our calls down from 9 to 6 (1*3+3).
However, a new loop at the end to combine the TEMP files may be tricky to construct and would make the Flow slightly less readable. This is an example of design principles being potentially in conflict, where trade-offs need to be made.
Conclusions
It is very easy in Power Automate to write a Flow that works but is poorly developed. Refactoring ensures a Flow not only works but is also well designed and has longevity. As you practice building Flows and reflect on the values of Maintainability, Readability, Scalability, Reusability, and Performance, spotting poor design will become more intuitive. When you do spot it, consider how you can improve your Flow and refactor it. If you or someone else ever needs to revisit your Flow, I guarantee they will thank you for the effort.