Flow-tastrophe? Conquer Flows with a smile using Flow Execution History

Introduction

A common query arises these days: why utilize the Flow Execution History tool when Microsoft provides its own set of debugging tools within the Power Automate portal? The answer lies in a straightforward truth: those acquainted with Microsoft’s user interfaces for debugging are familiar with the challenges they can pose. Often, these interfaces are a work in progress, making the process of troubleshooting, at times, a frustrating endeavor. The Flow Execution History tool emerges as a solution to this age-old struggle.

Crafted to serve as your primary hub for debugging flows, it shares a kinship with the esteemed Plugin Trace Viewer developed by Jonas Rapp for debugging plugins. This tool is a testament to the principle that specialized solutions can significantly enhance the efficiency of complex workflows.

The Struggle

Have you ever encountered the challenge of debugging a series of Power Automate flows? Picture the tedious process of clicking through, searching for that specific flow, and repeatedly hitting ‘show more runs’ without an efficient filter in sight. It’s a situation that can be frustrating for anyone. Most Power Automate users have experienced this at least once. It can feel like a significant amount of time is spent trying to identify the problematic run. Fortunately, there’s now a solution: the Flow Execution History tool can help you save valuable time in this process.

Features

Let’s explore some of the essential features that make this tool invaluable.

Single List of Runs

This feature allows users to select specific flows they want to focus on, and then view all flow runs related to those selected flows in one consolidated list. This streamlined view makes it easier to track and analyze the execution history of specific flows.

Advanced Filtering

The tool offers advanced filtering capabilities, which enable users to narrow down the list of flow runs based on a specified time span. This is particularly useful for quickly pinpointing problematic flow runs without having to manually sift through a large number of entries in the maker portal.

Flow Run Correlation

This feature provides a clear visual representation of how flows are interconnected. Users can easily see if certain flows are triggering other flows and view the entire flow thread. This helps in understanding the dependencies and interactions between different flows, which is crucial for troubleshooting complex workflows.

Additional Features

The Flow Execution History tool offers an array of advanced functionalities designed to streamline your workflow management:

  • Export to CSV and Excel: Seamlessly transfer data from the tool to CSV and Excel formats. The preconfigured conditional formatting ensures that your exported data retains its visual clarity.
  • Detailed Flow Run Links: Access detailed information about flow runs with ease. The tool provides direct links to flow run details, allowing for quick and precise examination.
  • Browser and Profile Preferences: Tailor your experience by choosing your preferred browser and profile settings when opening flow run details. This level of customization ensures an optimal viewing experience.
  • Color-Coded Flow Names: Simplify the process of identifying specific flow runs by utilizing color-coded flow names. This visual aid significantly enhances the efficiency of navigating through your workflows.
  • Error Details for Failed Flows: Quickly access and review error details for failed flows. This feature provides crucial insights into the reasons behind flow failures, enabling prompt resolution.
  • Detailed Trigger Outputs: Gain a comprehensive view of trigger outputs directly from the tool. This feature allows for detailed examination and filtering of trigger outputs, providing invaluable insights into flow execution.

And this is just the beginning. The Flow Execution History tool is committed to evolving and introducing even more features in the future, promising a continuously improved experience for Power Automate users.

Support Development

If you find the Flow Execution History tool helpful and would like to contribute to its ongoing development and improvement, you can show your appreciation by buying me a coffee! Your support goes a long way in helping to maintain and enhance this tool for the Power Automate community.

Buy Me a Coffee

Thank you for your generosity and for being a valued member of the Power Automate community!

Conclusion

In the realm of Power Automate, the Flow Execution History tool emerges as a critical asset for efficient workflow management. It addresses the common challenges faced during debugging, providing a systematic approach to troubleshooting complex flows. With features like a single list of runs, advanced filtering, and flow run correlation, users gain the ability to navigate their workflows with precision and ease.

No longer will users need to endure the arduous process of endless clicking and searching in hopes of finding the elusive problematic run. The Flow Execution History tool streamlines this process, saving time and minimizing frustration.

Much like Jonas Rapp’s Plugin Trace Viewer revolutionized plugin debugging, this tool is set to become an invaluable companion for every Power Automate enthusiast. Embrace it, and witness a significant boost in efficiency and troubleshooting proficiency. Bid farewell to the ‘Flow-tastrophe’, and usher in a new era of seamless workflow management.

Experience the difference today – let the Flow Execution History tool be your guiding light in the world of Power Automate.

Using HTTP Status Codes to Improve Your Custom APIs

Introduction

Custom APIs are becoming increasingly important in Dynamics 365 development. To ensure that they meet the highest standards, it’s important to be able to send appropriate HTTP status codes in response to client requests.

Recently, a client asked me if it was possible to send specific HTTP status codes from a Custom API in Dataverse. After some research, I discovered that it is indeed possible, and it’s a good practice to implement. In this article, I will explain the different ways to send appropriate HTTP status codes in a Custom API response, which can lead to a better user experience.

Problem

A Custom API may call an external service to fetch data, but sometimes, the service may return no records. To notify the user of the custom API that no records were found, it’s important to respond with the appropriate HTTP response code. In this case, the appropriate code is 404, indicating that the requested resource could not be found. Providing an accurate and descriptive response code can lead to a more efficient and satisfying user experience.

Solution

Basic Usage

When developing workflows and plugins in Dynamics 365, InvalidPluginExecutionException is commonly used to handle errors and exceptions. While this approach is suitable for simple scenarios, it may not provide enough flexibility for more complex situations.

The most basic way to use InvalidPluginExecutionException is to pass a string containing the error message to the constructor, like this:
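A minimal sketch of that call (the message text is just illustrative):

```csharp
throw new InvalidPluginExecutionException("Something went wrong while processing the request.");
```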

This will throw an exception with the specified error message.

Overloads

In addition to the basic usage, there is an important overload of InvalidPluginExecutionException that allows you to pass a specific HTTP status code that you want to return from your custom API method.

The InvalidPluginExecutionException(String, PluginHttpStatusCode) overload takes two parameters:

  • The first parameter is a String that represents the error message you want to display to the user.
  • The second parameter is a PluginHttpStatusCode enumeration that represents the HTTP status code you want to return.

By using this overload, you can return specific HTTP status codes for different scenarios in your custom API. For example, if you want to notify the user that no records were found, you can use the code below to return a 404 status code.

Example with Not Found response

Here’s an example of a custom API that fetches data from an external system. If the query returns no results, the code will throw an InvalidPluginExecutionException with the message “No records found in System X” and a status code of 404. This will cause the API to return a 404 error to the user.
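A minimal sketch of what that could look like (the class name and the empty result list stand in for the real call to the external system):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Xrm.Sdk;

public class GetRecordsFromSystemX : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // Imagine this list came back empty from a call to the external System X
        var results = new List<string>();

        if (results.Count == 0)
        {
            // The second argument maps to the HTTP status code of the custom API response
            throw new InvalidPluginExecutionException(
                "No records found in System X",
                PluginHttpStatusCode.NotFound);
        }

        // ...otherwise map the results to the custom API's output parameters...
    }
}
```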

HTTP Response of Custom API

When making a call to the API above in Postman, for example, you will receive a 404 HTTP status code when no records are found. The response body will include the message you passed in the string parameter.

As an example, a properly formatted response could look like the following:
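Assuming the standard Dataverse OData error envelope, the body could look roughly like this (the error code shown is the generic code used for plugin-raised errors):

```json
{
  "error": {
    "code": "0x80040265",
    "message": "No records found in System X"
  }
}
```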

The user can easily identify the specific error that occurred and the associated error code. This information is critical in troubleshooting the issue and finding a resolution.

Conclusion

In conclusion, when developing custom APIs in Dynamics 365, it’s important to handle errors and exceptions properly. While the InvalidPluginExecutionException is a common approach to handling exceptions, it may not provide enough flexibility for more complex scenarios. By using the overload of InvalidPluginExecutionException that allows you to specify a specific HTTP status code, you can return appropriate HTTP response codes for different scenarios, such as when no records are found. This can improve the user experience and help with troubleshooting and debugging.

Virtual Tables – Custom Ownership

This will be a follow-up post on my session about Virtual Tables at Nordic Summit. People have asked me if there is a blog post about how to set up custom security on SQL Virtual Tables that are available out of the box via the virtual connector.

The Problem

Virtual Tables are organization-owned, and that makes them quite limited in real business-world scenarios where some users should not see all the data that is stored in the database. We will show how to implement a basic security model around them to make them more usable.

This example will use the Azure SQL virtual connector, but it works just the same with any other data source that is available out there. We need a custom owner field, since there is no out-of-the-box one defined on the virtual table.

Polymorphic Lookup Issue

The owner on a normal table can be a team or a user, but on a virtual table it’s kinda tricky. The problem lies in polymorphic lookups for virtual tables. You can create a polymorphic lookup with the help of some code, because it’s not available via the UI, but it acts a bit weird when you see it for the first time.

The polymorphic lookup field will store only the GUID in your external data source, and because of that you will have a problem rendering the value in the UI. The lookup picker will let you choose from the tables defined in the lookup just fine, but the problem starts when you try to present the value to the user. It will be rendered correctly only for the primary relationship defined in the lookup field; all other values will be rendered as No Name and will point to the wrong table.

Since we now know the limitations, we will implement the security with a simple lookup this time, until there is full support for polymorphic lookups in the virtual tables space.

Prerequisites

We will not cover the whole process of creating a SQL virtual table via a virtual connector in this blog so if you are interested in how to do that there is a pretty well-written step-by-step guide by Microsoft here.

We will start building the extension to the virtual table that already exists in the Dataverse.

Solution Overview

We will create a simple virtual table against the SQL database which will contain two owner columns: one as a lookup to the team and the second one as a text column. Then we will create two plugins to help us achieve the final goal.

Steps:

  • Create duplicate owner ID fields in the SQL table
  • Create a custom lookup field that points to the team table
  • Create a single line of text field to store GUID as text
  • Implement a plugin that will store GUID as text on owner change
  • Implement a plugin that will inject filters on RetrieveMultiple message

SQL Table

Let’s create a simple SQL table as shown below.

There are only two columns that you need to include here. Those are OwnerId and OwnerIdString columns.

OwnerId will be used for a lookup field and OwnerIdString will store the text representation of the GUID selected in the lookup.

You may ask: why do we need two fields when we could use only one lookup?

The answer is that there is currently a limitation on virtual tables where you can’t filter by a lookup field, so we need to filter by the text instead, but we still want users to be able to edit the ownership in the lookup field like they are used to.

OK, but do we really need two columns in the SQL table, or can we use just one?

There is another limitation of virtual tables that doesn’t allow us to use the same column twice as an external name, so we need to duplicate the data in our data source.

SQL table should look something like this one:
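A minimal sketch of such a table (the table name and the business columns are just illustrative; only OwnerId and OwnerIdString matter for this post):

```sql
CREATE TABLE dbo.SampleVirtualTable (
    Id            UNIQUEIDENTIFIER NOT NULL PRIMARY KEY, -- primary key exposed to the virtual table
    Name          NVARCHAR(100)    NULL,                 -- illustrative business column
    OwnerId       UNIQUEIDENTIFIER NULL,                 -- bound to the custom Team lookup
    OwnerIdString NVARCHAR(64)     NULL                  -- the same GUID stored as text, used for filtering
);
```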

That’s all we need to do in SQL to make it work, let’s jump into the Dataverse.

Lookup Field

First, we need to make a lookup field that will point to the Team table. This field will be exposed to the users via UI and users will be able to update it just like any other field.

There is nothing special about this one, just a simple lookup field that must have an external name set to the OwnerId column in the SQL table.

Text Field

The second field that we need is a single line of text field that will store the string representation of the GUID.

We should map this one, by setting the external name, to the second Owner field in the SQL database, OwnerIdString in this case.

Update plugin

Users should not experience our little hack with two fields that will hold the same value so we need to automate the process of copying the data to the second field first.

Let’s create a simple plugin that will be triggered when the Owner lookup field is changed.
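A minimal sketch of such a plugin (the schema names new_ownerteamid and new_owneridstring are hypothetical; use your own column names):

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class CopyOwnerToStringPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Target of the Create/Update message on the virtual table
        if (!(context.InputParameters["Target"] is Entity target))
            return;

        // When the owner lookup is part of the request, copy its GUID into the text column
        if (target.Contains("new_ownerteamid"))
        {
            var owner = target.GetAttributeValue<EntityReference>("new_ownerteamid");
            target["new_owneridstring"] = owner?.Id.ToString();
        }
    }
}
```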

There is really nothing spectacular happening in this plugin. It will only copy the GUID to the string field every time the owner field is changed.

We should trigger it as a pre-operation plugin step on the change of an owner field.

Now we can jump to the most important part.

RetrieveMultiple Plugin

RetrieveMultiple plugins don’t get much love from people in general, and I think they definitely deserve to be used more, since there is so much potential there.

What will we do here?

We will intercept the RetrieveMultiple message and add some prefiltering to the original query so we filter out records that should not be shown to the user.

First of all, we need to normalize the query we get in the plugin, since you will not always get a QueryExpression, which is easy to manipulate. Sometimes you will get a FetchExpression instead, and we need to convert it to a QueryExpression.

It can be done with a simple helper function:
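A sketch of that helper (it lives inside the plugin class shown later and uses the FetchXmlToQueryExpressionRequest message from Microsoft.Crm.Sdk.Messages):

```csharp
private static QueryExpression NormalizeQuery(IOrganizationService service, QueryBase query)
{
    if (query is QueryExpression queryExpression)
        return queryExpression;

    // Convert the FetchXML into a QueryExpression so we can append our own conditions to it
    var fetch = (FetchExpression)query;
    var response = (FetchXmlToQueryExpressionResponse)service.Execute(
        new FetchXmlToQueryExpressionRequest { FetchXml = fetch.Query });
    return response.Query;
}
```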

Now we have a QueryExpression every time, perfect!

Since we will do the filtering based on the owner team we need to get all the teams that our user is a member of.

When we have teams we can start editing the initial query.

Since we need to add our filters on top of all other conditions in the query we need to create a FilterExpression first to hold our conditions and set the operator to OR.

Then we should loop through all the teams we got in the GetUserTeams function and add them as ConditionExpressions to the filter.

In the end, we should append our filter to the original query and set the Query input parameter to the newly updated query.

Here is the whole plugin code:
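Something along these lines (column names are the hypothetical ones used earlier, and the empty-teams edge case is left out; treat it as a starting point rather than a finished implementation):

```csharp
using System;
using System.Linq;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class VirtualTableRetrieveMultiplePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        if (!(context.InputParameters["Query"] is QueryBase queryBase))
            return;

        // Normalize a possible FetchExpression into a QueryExpression
        var query = NormalizeQuery(service, queryBase);

        // Build an OR filter over all teams the calling user belongs to
        var ownerFilter = new FilterExpression(LogicalOperator.Or);
        foreach (var teamId in GetUserTeams(service, context.InitiatingUserId))
        {
            // Hypothetical text column that stores the owning team GUID as a string
            ownerFilter.AddCondition("new_owneridstring", ConditionOperator.Equal, teamId.ToString());
        }

        // Append our filter on top of whatever conditions were already in the query
        query.Criteria.AddFilter(ownerFilter);
        context.InputParameters["Query"] = query;
    }

    private static QueryExpression NormalizeQuery(IOrganizationService service, QueryBase query)
    {
        if (query is QueryExpression queryExpression)
            return queryExpression;

        var fetch = (FetchExpression)query;
        var response = (FetchXmlToQueryExpressionResponse)service.Execute(
            new FetchXmlToQueryExpressionRequest { FetchXml = fetch.Query });
        return response.Query;
    }

    private static Guid[] GetUserTeams(IOrganizationService service, Guid userId)
    {
        // teammembership is the intersect table between systemuser and team
        var membership = new QueryExpression("teammembership") { ColumnSet = new ColumnSet("teamid") };
        membership.Criteria.AddCondition("systemuserid", ConditionOperator.Equal, userId);

        return service.RetrieveMultiple(membership).Entities
            .Select(e => e.GetAttributeValue<Guid>("teamid"))
            .ToArray();
    }
}
```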

When we have all code written it’s time to register a new RetrieveMultiple plugin step in the Plugin Registration Tool.

Make sure that you register it as a sync Pre-operation plugin, otherwise you will not be able to edit the Query parameter.

Final Solution

Let’s see how it works. I have created records that have different owning teams, but my user is a member of only one of them.

Here is the list of all records before we turn on the RetrieveMultiple plugin.

After we turn on the plugin we should see the filtered list. The user in this example is a member of Team Rocket so all the records that are owned by Team Pikachu will not be shown to him in the list.

This is the situation after we turned on the plugin:

Finally, we can have a working example for basic security around the virtual tables. Woohoo!

Conclusion

Working with virtual tables is still not perfect and we still need to pull out some hacks, but the situation is getting better with each release.

This method is not tied to SQL data sources so feel free to try it with any other data source out there.

I hope that in the future we will be able to add working polymorphic lookups to the virtual tables which will help us to achieve the final goal and that is the owner field that we are used to.

Until then we can go this basic route that allows us to edit the owner, or simply hide the ownership from our users in the UI and set it in the hidden text field so we can still do the filtering that way.

Enable in-app notifications easily

Introduction

In-app notifications created a lot of hype on release day, but I still see that people struggle to set them up even with the step-by-step guide provided by Microsoft in the docs.

Since I hate repetitive tasks, I created a simple JavaScript snippet for one of my demos that will do everything for you with a simple copy & paste into the dev tools console of your favorite browser. That way you can speed up enabling notifications in multiple apps to an almost one-click action.

Let’s explain how to do it!

Solution

First of all, navigate the browser to the model-driven app that does not have in-app notifications enabled.

Next, you need to open the dev tools, which can be done via simple keyboard shortcuts – F12 or Ctrl+Shift+I – or you can select the option from the browser settings by clicking on the three dots -> More tools -> Developer Tools.

Navigate to the Console tab and paste the snippet below to it.

You will see a loading screen that says Enabling in-app notifications… and after a few seconds, if everything goes well, you will have in-app notifications enabled for your app; otherwise, you will be presented with an error that you need to fix before running it again.

Now you should see the bell icon in the upper right corner of the screen if everything went well; if not, do a hard reload (Ctrl + F5) just to be sure that you got the latest version of the app from the server.

It’s time to start spamming your users with all kinds of notifications!

Conclusion

This is nothing revolutionary, but I’m sure it can save you some time while enabling the notifications in multiple apps across multiple environments. I’m also sure that Microsoft will soon provide a user-friendly way to achieve this, but until then you can simply copy & paste the snippet and wait for the work to be done for you.

Live Dataverse data in email

Introduction

We all know that it’s possible to send static Dataverse data in an email that we send as a notification to the user, but what happens when the data that was sent is no longer relevant? The data can change before the user opens the email, and in that case the old values may not help them at all. We need to figure out how to send live data with an email.

Idea

You can try to send an email with custom JavaScript that does something useful, but email clients will just strip the script part and your email will still be a static one. The same thing happens with an IFrame element that you might try to add to the email.

The only way to pass dynamic data to an email is an image whose URL points to a web service that returns an image, which is then rendered in the email.

In this example, we will create a web service that will fetch task records by ID. It will return HTML converted to an image based on the data that is currently live on the task record. After that, we will use that web service in a Power Automate flow to send an email when a task is created. The email will include an image that points to the created web service.

Let’s do some work!

Solution

Creating web service

First we need to create a web service that will communicate with our Dataverse instance and construct images from the retrieved data.

We will use Azure Functions as our weapon of choice in this example, but you can choose whatever suits you best when building a web service.

Create new Azure Functions project in Visual Studio and add a simple HTTP GET function to it.

A few NuGet packages are needed for this example, so please install the following packages:

  • HtmlRenderer.Core
  • HtmlRenderer.WinForms
  • Microsoft.CrmSdk.CoreAssemblies
  • Microsoft.CrmSdk.XrmTooling.CoreAssembly

The first two packages, HtmlRenderer.Core & HtmlRenderer.WinForms, will be used to create images from the HTML constructed inside the function, while the other two packages are of course for connecting to the Dataverse instance.

Let’s start coding!

The function must have one query string parameter called id, to which we will send the task record ID.

When we have a task id it’s time to fetch that task from the Dataverse so let’s create a helper class that will do the job for us.

All you need to do in this snippet is to replace values for the connection string and you are good to go.

Now we need to fetch the task based on the ID passed to our function and get the data from it.

Perfect, now we have everything we need so we can start constructing HTML that will be converted to image at the end.

We will return status as text and also we will color code it so users can get feedback from it straight away when they see the image. Let’s get status text and color values for our task record.

When we got those values we are ready to construct the HTML code for our image. We will add 2 divs that will contain the task data and add the background color that we prepared earlier to the div that contains status data.

After that, all we need to do is create an image from the HTML using the HtmlRenderer library that we added as a package at the start, with just one line of code.

Finally, we have the image we need to present our data. The last step of the web service part will be returning the image from it. It’s a little bit different than the basic JSON objects that are most commonly used as return values.

The image needs to be converted to a byte array first, and then the response content must be created from that byte array. The RenderToImageGdiPlus method returns a PNG image by default, and that’s why we need to set the content type to image/png too.

The whole function code should look like this if you just want to copy it in one go.
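To give an idea of how the pieces fit together, here is a sketch of the whole function, written as a v1 (.NET Framework) Azure Function because the CRM SDK and WinForms HtmlRenderer packages target .NET Framework; the connection string, sizes, and colors are placeholders you would adjust:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Tooling.Connector;
using TheArtOfDev.HtmlRenderer.WinForms;

public static class TaskImage
{
    // Placeholder connection string; keep it in app settings in a real deployment
    private const string ConnectionString =
        "AuthType=ClientSecret;Url=https://<yourorg>.crm.dynamics.com;ClientId=<APP_ID>;ClientSecret=<SECRET>";

    [FunctionName("TaskImage")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req)
    {
        // 1. Read the task id from the query string
        var idValue = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => string.Equals(q.Key, "id", StringComparison.OrdinalIgnoreCase)).Value;
        if (!Guid.TryParse(idValue, out var taskId))
            return req.CreateResponse(HttpStatusCode.BadRequest, "Missing or invalid 'id' parameter.");

        // 2. Fetch the live task record from Dataverse
        var service = new CrmServiceClient(ConnectionString);
        var task = service.Retrieve("task", taskId, new ColumnSet("subject", "statecode", "ownerid"));

        // 3. Derive the status text and color (statecode 1 = Completed on the task table)
        var completed = task.GetAttributeValue<OptionSetValue>("statecode")?.Value == 1;
        var statusText = completed ? "Completed" : "Open";
        var statusColor = completed ? "#107c10" : "#0078d4";
        var owner = task.GetAttributeValue<EntityReference>("ownerid")?.Name;

        // 4. Build the HTML and render it to an image
        var html =
            "<div style='font-family:Segoe UI;padding:10px;width:380px'>" +
            $"<div style='background:{statusColor};color:white;padding:6px'>{statusText}</div>" +
            $"<div style='padding:6px'>Owner: {owner}</div></div>";
        Image image = HtmlRender.RenderToImageGdiPlus(html, new Size(400, 120));

        // 5. Return the image as image/png
        using (var stream = new MemoryStream())
        {
            image.Save(stream, ImageFormat.Png);
            var response = req.CreateResponse(HttpStatusCode.OK);
            response.Content = new ByteArrayContent(stream.ToArray());
            response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/png");
            return response;
        }
    }
}
```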

Now we have everything we need so we can test the web service to see if the right values are showing in the image.

Let’s take one open task that we have stored in Dataverse.

Let’s call the function with the ID of that record as a parameter.

If everything went well, we should get an image that says Open, marked in blue, and shows my user as the Owner.

It’s working as expected, so let’s mark that task as completed and see what we get now.

This time we will get the status text Completed, marked in green.

Perfect, we now have a web service that can be used in the email straight away after we deploy it to Azure. Deployment is something I assume you are familiar with if you have used Functions before, so it will not be covered in this example.

Setting up Power Automate Flow

The first thing you need here is to get the URL of the function from Azure, which will look something like this:
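It will have roughly this shape (the app name and key are placeholders):

```
https://<your-function-app>.azurewebsites.net/api/TaskImage?code=<FUNCTION_KEY>
```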

After you got this let’s create a basic Power Automate Flow.

The trigger for our flow should be set up on creation of the task record, which is pretty straightforward.

The next step is to send an email to the user. The important part here is to open the code view in the email action and start writing HTML there.

Once you have opened the code view, simply add an img HTML tag with a src attribute that points to the function URL and add the dynamic ID parameter to it, as shown below.
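Something along these lines, where <FUNCTION_URL> is the URL from the previous step and the id value comes from the trigger’s dynamic content for the task’s unique identifier:

```html
<img src="<FUNCTION_URL>&id=<TASK_ID_FROM_DYNAMIC_CONTENT>" />
```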

Finally we have everything set up and we can see the whole process in action.

Create a new task and check your email; you should see an image in the email that shows the Open status for the task.

If we close the task and check the email once again we should see the Completed status on the image.

Now you are able to present real-time data inside an email to your users. How cool is that?

Conclusion

This example shows that you can present live information about the data stored in Dataverse straight from email notifications. Your users don’t need to open the model-driven app to find out whether they actually need to do something with the record sent in the email.

Where do I use this kind of approach in the real world? Let’s say you send an email to a distribution list with multiple people on it, any of whom can work on that item. Some people are slow at reading emails, and they may not need to bother with it at all if the item has already been picked up by someone else or maybe even completed.

Offloading Dataverse Attachments To DMS

Introduction

File storage was always an issue in the past when you wanted to store a lot of documents inside your environment. Microsoft was very aware of that, so today we have a much better situation since we don’t store files in the database, but in separate storage used only for files.

Some customers still don’t want to store the files inside the Dataverse environment, but they want to access the files through the model-driven app. The question is how to achieve this inside Dataverse while keeping the notes UX the same, so that users can keep using the well-known notes feature to upload and download documents.

Idea

We need to modify 2 processes to achieve this solution.

  • Uploading notes with attachments
  • Downloading attachments

When we upload a note that contains an attachment, we want to upload that attachment to a DMS system and free the space in our Dataverse environment so that we don’t use storage twice.

On the other hand, when we want to download the attachment to our local machine we don’t want to go to the DMS to do it, but we want to do it straight from the timeline where the notes are shown. We are obviously missing the link to the file in this case so we somehow need to pull the file from the DMS every time the user initiates downloading the file in Dataverse.

Solution

Uploading notes with attachments

The first issue is how to upload the attachments to the DMS every time a user uploads a new note. We can do that via a batch job or in a plugin, but maybe there is a better way to do it.

With a batch job we don’t get a near-real-time upload experience, and we always need to check whether there are new attachments even when none have been uploaded.

A plugin, on the other hand, can be a problem because we may need to use 3rd party libraries to make our life easier, or an upload may fail, and in that case it’s not easy to initiate retry logic because of the 2-minute timeout we have in the plugin runtime.

This example will show you how to utilize webhooks and an Azure Function to do the job for you. You ask why?

Webhooks will be used because that way you can easily notify another system that a new event happened in Dataverse, so you don’t need to check every now and then whether something new has arrived for sync. Another plus is that every event carries a lot of data that is sent via the webhook. In our case, small attachments will be sent in the event payload immediately, while larger files will need to be fetched from Dataverse, since there is a message limit of 256 KB for webhook payloads.

The handler for our webhook will be an Azure Function that will take care of uploading the file to the DMS system. That way we also offload the upload process from Dataverse to Azure, where we can do a lot more than in plugins.

All failed uploads would be handled with some fallback mechanism, which is not included in this example, but I usually go the batch job route, which ends up being relevant in only a really small number of cases.

Upload Implementation

This example will use Azure Blob storage as the DMS, but it can easily be adapted to any other system.

First we need to create a function that will handle note create event. Below is the function that we will use.
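A sketch of such a function (it assumes a .NET setup where the Microsoft.Xrm.Sdk types are available, uses Azure Blob storage as the DMS, and only stubs the Dataverse read/update helpers):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Json;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Xrm.Sdk;

public static class NoteCreatedHandler
{
    [FunctionName("NoteCreatedHandler")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        // 1. Convert the webhook JSON payload into the familiar RemoteExecutionContext
        var context = JsonToRemoteExecutionContext(req.Body);

        // 2. If the payload was too large, the Target is not included and the
        //    annotation has to be fetched from Dataverse first
        Entity note;
        if (req.Headers.ContainsKey("x-ms-dynamics-msg-size-exceeded"))
            note = GetAttachment(context.PrimaryEntityId);
        else
            note = (Entity)context.InputParameters["Target"];

        var documentBody = note.GetAttributeValue<string>("documentbody");
        var fileName = note.GetAttributeValue<string>("filename");
        if (string.IsNullOrEmpty(documentBody))
            return new OkResult(); // note without an attachment, nothing to do

        // 3. Upload the file to the "DMS" (Azure Blob storage in this example)
        var container = new BlobContainerClient(
            Environment.GetEnvironmentVariable("BlobConnectionString"), "attachments");
        await container.UploadBlobAsync(
            $"{note.Id}/{fileName}",
            new MemoryStream(Convert.FromBase64String(documentBody)));

        // 4. Free up Dataverse storage by clearing documentbody on the note
        ClearDocumentBody(note.Id);

        return new OkResult();
    }

    private static RemoteExecutionContext JsonToRemoteExecutionContext(Stream body)
    {
        var serializer = new DataContractJsonSerializer(typeof(RemoteExecutionContext));
        return (RemoteExecutionContext)serializer.ReadObject(body);
    }

    // These two would use a Dataverse client to retrieve the annotation's
    // filename/mimetype/documentbody and to blank out documentbody afterwards;
    // they are stubbed here to keep the sketch short.
    private static Entity GetAttachment(Guid noteId) => throw new NotImplementedException();
    private static void ClearDocumentBody(Guid noteId) => throw new NotImplementedException();
}
```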

What does this function do?

First it converts JSON from the payload to the familiar RemoteExecutionContext object with JsonToRemoteExecutionContext function.

After we have execution context we can get everything we need to start with uploading the document.

We need to check whether the x-ms-dynamics-msg-size-exceeded header is set to know what to do next. Why? The webhook will not send input parameters inside the execution context if the message size is too big. If the header is present, we need to fetch the record first to proceed; otherwise we have everything we need in the Target object inside the input parameters.

If we need to fetch the record from Dataverse, we need to get three fields to perform the action with the GetAttachment function.

Next, we need to send the attachment to the DMS and in this case, we will upload it to the Blob storage with the UploadFile function.

After successful upload to the DMS we need to free up the space in Dataverse by clearing the documentbody field.

If everything went OK, our attachment is uploaded to the DMS successfully and we are ready to configure the webhook that will trigger this function, but before that, we need to publish the function to Azure.

Open the Plugin Registration tool to start registering the webhook. Go to Register -> Register New Web Hook.

Populate the URL for your function and copy the Function Key from the Azure portal.

Next one up is to register the Create step for this webhook.

The step should be created as an async one, because that way we will not wait for the document upload every time we add a new file to Dataverse notes. The key thing here is to set the message to Create and the primary entity to annotation.

After we created the webhook step we are ready to upload all files to the DMS every time a new file is added to the Dataverse notes entity.

The next step is to handle retrieving that file every time a user initiates the download from the timeline.

Downloading attachments

When attachments are stored in the DMS we will not have document content stored in Dataverse, but we will have a reference that will be used to retrieve it from the DMS in real-time.

We need to register the Retrieve plugin which will retrieve files from DMS every time user initiates the download from the timeline.

If the upload has not been initiated yet, the file will be retrieved from Dataverse, but if the upload completed successfully, the file will be retrieved from the DMS in real time.

Download Implementation

First we need to implement the Azure Function that will download the file from Blob storage for us.

We will send Dataverse ID as the input parameter to the function so we can browse the Blob storage and retrieve the right file.

A custom method called DownloadFile will help us get the file from the blob.

That method will return a simple base64 string, because that’s all we need to get the file content; everything else is still stored in Dataverse on the note entity.
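A sketch of that function against Blob storage (the container name and blob layout mirror the hypothetical upload function above):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class DownloadFile
{
    [FunctionName("DownloadFile")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        // The annotation id was used as the blob "folder" name during upload
        string id = req.Query["id"];

        var container = new BlobContainerClient(
            Environment.GetEnvironmentVariable("BlobConnectionString"), "attachments");

        // Take the first (and only) blob stored under this annotation id
        await foreach (var blob in container.GetBlobsAsync(prefix: id + "/"))
        {
            var content = await container.GetBlobClient(blob.Name).DownloadContentAsync();
            // Return plain base64 so the plugin can drop it straight into documentbody
            return new OkObjectResult(Convert.ToBase64String(content.Value.Content.ToArray()));
        }

        return new NotFoundResult();
    }
}
```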

Now when we got our function ready all we need to do is create a plugin that will be executed on Retrieve message.

The Retrieve plugin is quite simple in this case. Basically, all it does is check whether the documentbody field is empty, and if it is, it calls the function with the annotation ID to get the file. After that it just sets the documentbody field to the value retrieved from the blob.
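A sketch of such a plugin, assuming it is registered as a synchronous post-operation step on Retrieve of annotation (the function URL is a placeholder; in practice you would read it from secure configuration):

```csharp
using System;
using System.Net.Http;
using Microsoft.Xrm.Sdk;

public class AnnotationRetrievePlugin : IPlugin
{
    private static readonly HttpClient Http = new HttpClient();

    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // The retrieved annotation is available in the BusinessEntity output parameter
        if (!(context.OutputParameters["BusinessEntity"] is Entity note))
            return;

        // If documentbody is still populated, the upload has not happened yet - nothing to do
        if (!string.IsNullOrEmpty(note.GetAttributeValue<string>("documentbody")))
            return;

        // Placeholder download function URL
        var url = $"https://<your-function-app>.azurewebsites.net/api/DownloadFile?code=<FUNCTION_KEY>&id={note.Id}";

        // The function returns the file content as a base64 string
        var base64Body = Http.GetStringAsync(url).GetAwaiter().GetResult();
        note["documentbody"] = base64Body;
    }
}
```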

That way, every time the Retrieve message is executed, it will replace the document body with the value from the DMS.

Now we have everything in place for the last step, and that is registering the plugin.

Open Plugin Registration Tool once again and this time register step on the AnnotationRetrievePlugin.

Set message to Retrieve and primary entity to annotation. Execution mode should be sync this time because we want to get the file straight away.

When this is set we can finally try the whole solution and see if it’s working.

Conclusion

This way you can easily move attachments to a DMS system without changing the UX for users in the model-driven app. The downside of this approach is that you of course need to have some fallback mechanism that will collect all the uploads that failed to reach the DMS, but luckily, in my experience, this will not affect a lot of files.

I know that this was a big issue in the past, since file storage was pricey in Dataverse, but even nowadays some clients still don’t want to store the attachments in Dataverse, so I use this approach to meet their needs.

Feel free to comment on how you handle these kinds of scenarios so we can maybe improve this process a bit; if not, then I hope you will find this useful.

Debugging PCF in Typescript

Lately I see that a lot of people have started debugging PCFs by using Fiddler’s AutoResponder, which is awesome and definitely a way to do it.

If you are still one of those who are not using Fiddler when debugging your controls, you should definitely read the deep-dive article by Diana Birkelbach, which will guide you through setting up Fiddler for the first time.

Debugging with Fiddler is a huge time saver for every developer out there, but what happens when it’s not enough?

Issues

You have code that uses modern features like the async/await pattern, which is almost unreadable when translated to JavaScript.

You separated complex control logic across multiple TypeScript files to make it more readable, but in the end it’s bundled into a single file and you lose that readability when debugging.

You have a critical data-related issue in the production environment and you need to debug it there, but there is only production-ready minified code that can’t be debugged.

Solution

All the issues mentioned above can cause a lot of headaches when you are developing complex controls, but luckily there is always a tool that can help you with all of those problems.

The answer to your problems is TypeScript’s feature called Source Map.

Source Map

Source Map is a feature that allows us to debug through our TypeScript files rather than generated JavaScript.

Those files basically contain all the code from your TypeScript files inside one bundled file. They are pretty much unreadable for a human, but machines can do some magic with them.

An important thing to mention here is that your whole TypeScript code is copied to the source map file inside sourcesContent node.

When using Source Maps you can easily open your TypeScript files in dev tools of your choice and set breakpoints in them to start the much easier debugging session through much more readable TypeScript code.

Generate Source Map file

Now when we know what we need it’s time to make it work with the PCF project.

The first thing we need to do is instruct the compiler to generate source maps for us every time we make a change in our code.

This is a pretty straightforward process and it requires only one simple property in the tsconfig.json file located in the root folder of our project.

We need to add sourceMap property to the compilerOptions object inside the JSON file and set the value to true.

Your JSON should look something like this:
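Something like this, assuming the default file scaffolded by the CLI, with only the sourceMap line added:

```json
{
  "extends": "./node_modules/pcf-scripts/tsconfig_base.json",
  "compilerOptions": {
    "typeRoots": ["node_modules/@types"],
    "sourceMap": true
  }
}
```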

This was the easy part, but most people get stuck here because the source map file is still not generated after the build.

We need to do one more thing to make it work.

Modify the webpack config

Since PCF is using webpack as a bundler tool, we need to instruct it too that we want to use source maps because it’s not initially set up there.

Webpack configuration is quite hard to find in your project and that’s why most people fail in this step.

Configuration is hidden deep in the node_modules folder.

Path to the configuration file is:

node_modules/pcf-scripts/webpackConfig.js

Once you found the file open it and locate the oobConfig object.

You need to define a new property inside this object called devtool and set its value to source-map.

After that you are ready to generate the source map file.

Run a simple npm run build command and you should see in the command line output that the bundle.js.map file is generated.

You can find those files in out/controls/<CONTROL_NAME> folder.

One more thing to check after the build is the bundle.js file: search for this line at the end of the file.

//# sourceMappingURL=bundle.js.map

This line indicates a reference to the source map file that will be used in this JavaScript file.

Now we have everything to start debugging with the TypeScript files, but there is again a catch before we can actually do it.

If you try to pack the control and deploy it as a solution to the environment, you will quickly realize that you can’t find the TypeScript files in the dev tools inside the browser.

The reason is that the source map was not packed inside the solution and deployed to the environment. There is an option called inline source map that will basically pack the whole source map file inside the bundle.js file.

That’s not a good practice because the size of the source map is most of the time larger than the original code size.

This is where Fiddler comes to the rescue; if you haven’t used Fiddler before, please take a look at the post mentioned in the first section before you proceed with this article.

Fiddler Configuration

We need to inject the bundle.js.map file inside the browser to make everything work.

You need to create a simple new rule in the AutoResponder tab.

Match: regex:(?inx).+bundle.js.map$
Action: <LOCAL_PATH_TO_bundle.js.map>

With this, you are finally ready to jump to the browser and test the source maps.

Make sure that Fiddler’s Auto Responder is running and clear the browser cache before opening the form that has PCF control on it.

Once you open Dev Tools (Ctrl + Shift + I on Chrome/Edgium) you should navigate to the Sources tab.

You should see pcf_tools_<RANDOM-GUID> node there. When you expand it you should find all your TypeScript files there.

Open one of those files, set a breakpoint, and trigger the action that will hit that breakpoint.

Well done, you are finally debugging your TypeScript files.

Conclusion

This article showed you one more trick with Fiddler’s AutoResponder feature that will help you speed up your development process, but also save you time when debugging and trying to find what causes a bug in the production environment in a more convenient way.

The important thing to remember is that source maps are not there with the default project configuration, but they can easily be added once you get used to it.

I mentioned the inline source map that can be used without Fiddler, but it’s not a good idea to pack the code one more time into the bundle.js file, definitely not in a production environment, since the file size of the control is doubled.

I hope that from now on you will most likely stick with Fiddler debugging and use the debugging harness less frequently.

Keep making great PCF controls!

PCF Preview Image

Almost every day we see a new control or two developed by the community and available as open source. Controls that are published as open-source are generally developed and published in a really short period of time and lack some of the best practices. My last few posts around PCF were just about that.

How can we make our controls better by following a few simple steps?

This time the topic is the preview image, which is most of the time the first impression of a control when we see it in the controls tab.

Today our main source of PCF controls is the PCF Gallery, where we can find more than 200 controls that are open source. Most of the controls have some kind of screenshot that shows us the interface of the control. Those images are a great way to get a first impression of the control and can easily make us download it and try it in the end.

More than 80% of the controls out there stick with the default preview image. It’s time to change that!

Problem

The problem here is that people don’t add some kind of picture to their controls to make them more visually attractive in the first place.

Maybe people are not entirely to blame for not including the preview image in the control, because in the official examples there are no preview images at all. I personally think that those examples are really good in general, but of course a preview image should be included in at least one example control so that people see it along the way.

Next, it’s not present in the template, so people just skip that part.

Everyone can find it in the official docs, but it’s kinda overlooked because it’s one of the last parameters mentioned in the control node section.

How to add

Adding it to your control is really simple.

There is a preview-image attribute in the control node that needs to be defined. The prerequisite is, of course, that the image is stored somewhere in the control folder so it can be picked up by webpack.

Let’s have a look at the example!
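A trimmed control node could look roughly like this (the namespace, constructor, and key names are placeholders from a sample project):

```xml
<control namespace="SampleNamespace" constructor="SampleControl" version="1.0.0"
         display-name-key="SampleControl" description-key="SampleControl description"
         control-type="standard" preview-image="img/preview.png">
  <!-- properties and resources as usual -->
</control>
```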

The example shows how we can add a preview.png image that is located in the img folder inside the root control directory. Simply add a preview-image attribute and assign the value img/preview.png to it.

Easy as that and you have your preview image added.

Another misconception here is that you need to add it to the resources node along with all the other files. Actually, you don’t need to add it there; it will render just fine if you add it only in the control node. The resources node is used only for images that will be used inside your code.

Model-Driven Apps

Model-Driven App is a place where you will be able to see your preview image.

The preview image can only be seen when you are picking the control for your field/dataset in the form editor.

If you look at the preview image container when you set up your control, it feels small, and there is a catch to it. That container has a fixed size and it will not scale with your image.

Container size is 170 x 130 pixels.

That’s the biggest issue with it in my opinion. Why?

You must be really careful when designing the preview image if you don’t want to be surprised by a weird shape when you finally see it on the configuration page.

If you don’t keep the 17:13 ratio, you will eventually be surprised, because the image will stretch both horizontally and vertically to match the size of the container.

The key takeaway here is that as long as you keep that strange 17:13 ratio, you will get the expected result in the end.

The best way to present what your control is capable of doing is by using a GIF as a preview image. Yes, it’s possible to add one, and I strongly encourage you to do it that way, because it’s really eye-catching to see the control in some action.

Canvas Apps

When it comes to canvas apps, it is sort of mentioned in the documentation. A quick look at the Available for column gives you the answer that it’s not available for those kinds of apps, but who knows what the future will bring.

The other problem with canvas apps is that controls with preview images cause really critical issues while importing into the app.

When you try to import a PCF control that has a preview image included, it will simply fail to import into the app.

I hope that it’s just a temporary issue and that it will be fixed soon by the Microsoft guys; until then, all you need to do is avoid using preview images in your control.

Conclusion

Images are a great way of showing the features of a control in a quick and effective way. That said, let’s start including them in our controls to help makers easily choose the right control from the list.

Keep the 17:13 ratio to avoid unnecessary image stretching!

Use GIFs, because they are the best way of showing the features!

It’s a shame that this feature limits our controls to model-driven apps only because of the issue with canvas apps. If your control is supposed to be used in both types of apps, try to include the preview image and comment that code out for now; hopefully soon you’ll be able to uncomment it and release it the way it should be done.

My top 3 features in 2020 Release Wave 1

Finally, the release notes are published and I’m more than happy to go through them.

You can find them on the Docs pages below:

After a quick first read, I wanted to make my favorites list and choose the top 3 features that caught my attention in this release.

Let’s start with the countdown!

3. Save is always visible in the command bar on edit forms

Customers using model-driven Power Apps will see the Save and Save & Close option on the out-of-box entity command bar when editing a record. Before this release, if the auto save option was turned on, both options were hidden and not available in the command bar. The save option was only visible in the footer.

Microsoft Docs

Really a small addition, but it makes our lives so much easier. I stopped counting the requests that included adding the save button to the ribbon at all times. This is definitely one of the most requested features by customers in the past years.

It will save us a lot of time and make us use Ribbon Workbench for more complex stuff than adding the Save button all around.

2. Modern solution import experience

In addition to supporting capabilities of classic solution import, the modern experience offers the following enhancements:

– You’re now able to create and authorize the connections required by your flows so that they’re created in a working state

– Set environment variable values required by other components in your solution

– Reduced complexity when choosing from import actions

– Accessibility

Microsoft Docs

The old solution import really needed an upgrade, since nowadays we have a ton of new stuff that we can put inside and configure. Flows are a great example, because at the moment our Flows do not work straight away when we import them to an environment. I hope that with the new experience we will set all the configurations and connections before publishing the final changes, so we can run them straight after the import.

Setting the connections after the import to the new environment is the most annoying part for me today so this one makes me quite happy.

1. Web APIs for CRUD operations on Common Data Service entities

The liquid tags in portals today provide read-only capability. Creating, updating, and deleting operations require using UI components like entity list and entity forms. This feature adds Web APIs for portals that allow users to create, read, update, and delete operations across all Common Data Service entities.

Microsoft Docs

I’m 100% sure that all the Portal guys out there are really excited. We all know how Portals didn’t provide us with an easy way to do more complex operations other than simple creates and updates. With this feature, we can finally do some crazy magic like we are used to doing on the classic forms in our applications. No more hacks to perform such straightforward actions, but in the end, I must admit that I still love some of the Liquid hacks out there, only because they show that imagination has no limits.

A nice addition to this one would be support for PCF controls in the near future, maybe in the next release, so we can truly push Portal forms to the next level.

Final thoughts

There is more interesting stuff in the release notes, but those 3 I mentioned above stuck in my mind on the first read. Since we can’t try any of them yet, we can only hope that we will get the expected results as soon as they become available.

I’m also sure that there are already some hidden gems in this release and that we will find them out in the following days. Until then it’s time to study the given Docs even more!

PCF Localization

Today we have more than 100 open source PCF components and we should really start working on some best practices when developing the controls.

The thing I want to point out today is the localization of PCF components. Your component will be downloaded by people all around the world, and let’s assume that there are people who want to use the component in their native language.

Let’s use the built-in localization API in the framework to support multiple languages in our components. That way we can open our repositories to contributors who are not real developers, so they can contribute translations and make the component even better.

How can we achieve that?

RESX files

The answer lies in the resources node of the ControlManifest.Input.xml file that is generated by the Power Apps CLI.

The line that contains the resx node is the answer to our problem.

A RESX file is basically an XML-formatted file that contains keys and their localized string values, and it’s created per language.

An example of one RESX file can be found below and you can use it as a template for building all the localization files.
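A trimmed template could look roughly like this; the file generated by Visual Studio also contains an inline schema section, omitted here, and the key name is just a placeholder:

```xml
<?xml version="1.0" encoding="utf-8"?>
<root>
  <resheader name="resmimetype">
    <value>text/microsoft-resx</value>
  </resheader>
  <resheader name="version">
    <value>2.0</value>
  </resheader>
  <resheader name="reader">
    <value>System.Resources.ResXResourceReader, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</value>
  </resheader>
  <resheader name="writer">
    <value>System.Resources.ResXResourceWriter, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</value>
  </resheader>
  <data name="Some_String_Key" xml:space="preserve">
    <value>Some localized value</value>
    <comment>Optional note for translators</comment>
  </data>
</root>
```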

Localization API

PCF supports two types of control localization. You can set localized values for the configuration page of the control that shows up in the Form Editor, or you can localize the values that will be shown when the control is rendered on the form.

With those 2 cases, we cover both citizen developers that are doing the configuration and the end-users that actually see the control in their own language.

Adding RESX to the control

Adding a new RESX file to the control project is quite easy.

The first thing you need to do is uncomment the resx node in the manifest file that you got from the Power Apps CLI’s init command. The thing you need to know here is that there is a special naming convention behind those RESX files in order for them to work properly.

RESX files should end with a locale identifier (LCID).

The example below shows the name for the English (LCID = 1033) language RESX file.

A list of all languages and LCIDs can be found here.

TranslationExampleControl.1033.resx

The best practice is to create a strings folder inside your control folder that will hold all RESX files.

Your manifest should look something like this.
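Roughly like this (the code node is whatever your project already has; the resx path follows the naming convention above):

```xml
<resources>
  <code path="index.ts" order="1" />
  <resx path="strings/TranslationExampleControl.1033.resx" version="1.0.0" />
</resources>
```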

The next step is to create a RESX file inside the project.

Create a strings folder inside the control project and add a new RESX file to it.

Copy the RESX template from the RESX files section above into the newly created file.

Locate the data node at the bottom of the file. A data node can have two child nodes, called value and comment.

The value node is the important one, and it will hold the localized value for the key that is defined in the data node.

In the example below, we will have a key Input_Value_Display_Key that will be replaced with the text Textbox Input when we use the app in the English language. Let’s add another one that will be used for the control description, called Input_Value_Desc_Key, and set its value to Single line text input field.
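The two data nodes could look like this:

```xml
<data name="Input_Value_Display_Key" xml:space="preserve">
  <value>Textbox Input</value>
  <comment>Display name shown for the property</comment>
</data>
<data name="Input_Value_Desc_Key" xml:space="preserve">
  <value>Single line text input field</value>
  <comment>Description shown for the property</comment>
</data>
```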

Configuration localization

The first thing that we will do is to translate the configuration page of the control.

Make sure that you have your keys in the manifest file. Our keys, Input_Value_Display_Key and Input_Value_Desc_Key, should be located in the property tag.
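For example, on a property defined roughly like this (the property name and type are placeholders; the two *-key attributes are the important part):

```xml
<property name="inputValue"
          display-name-key="Input_Value_Display_Key"
          description-key="Input_Value_Desc_Key"
          of-type="SingleLine.Text"
          usage="bound"
          required="true" />
```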

Once you have done that, push the control to the environment and see if it works. If you did everything right, it should look something like this.

You can see that the Textbox Input string is shown as the property name and Single line text input field is shown in place of the field description.

Let’s add support for another language, it will be Croatian in my example, but of course, you can put any language here.

The LCID for the Croatian language is 1050, so we need to create a file whose name ends with it in our strings folder.

I will create a RESX file named TranslationExampleControl.1050.resx and copy the whole content of the English file as a template.

Now we need to change the values for our 2 keys we created before to match the Croatian translation for those strings.

English                          Croatian
Textbox Input                    Unos teksta
Single line text input field     Polje za unos teksta u jednom retku

We have the translations in the table and all we need to do now is to change the key values to the right strings.
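In the Croatian file the same keys simply get the translated values, for example:

```xml
<data name="Input_Value_Display_Key" xml:space="preserve">
  <value>Unos teksta</value>
</data>
<data name="Input_Value_Desc_Key" xml:space="preserve">
  <value>Polje za unos teksta u jednom retku</value>
</data>
```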

Croatian RESX file should look like the one above.

The last thing we need to do is add the RESX file to the manifest, just like we did for the English version. Just add a new resx tag inside the manifest and set its path to strings/TranslationExampleControl.1050.resx.

To see the result, we need to publish the control to an environment that has its base language set to Croatian.

Once we have done that, let’s open the control configuration page again and see the result.

There we go, the translated values for the field name and description are here. You can play with all the strings in the manifest to change the values on the configuration page.

Control UI localization

Now we know how to change the values in the configuration page and it’s time to try to change the values in the UI of the control.

We will add a simple textbox element to our control and change the placeholder text.

Add the snippet below in the init function.

There is a placeholder attribute that holds value <THIS_HAS_TO_BE_CHANGED> and we want to change that value to the appropriate language string.

Let’s reuse the key Input_Value_Desc_Key in our RESX file for this purpose.

Luckily there is an API call that will help us to fetch the right string value.

The getString function on context.resources returns the value for the key passed as the parameter, in the current language set in the user’s settings. Time to use it in our little snippet of code.

Once we did that we are done. Let’s build and push the control to the environment.

First, let’s check how it looks if English is set as our language in Personalization Settings.

Now we need to change the language by going to Personalization Settings -> Languages and changing the User Interface Language to Croatian, to see if the placeholder gets translated into Croatian.

Once the page is refreshed, we should see that the placeholder text has changed.

Everything is working as expected and now you know how to change the strings all around the PCF controls based on the language.

Conclusion

This time we showed how we can make our controls more accessible to people that don’t understand more than one language, probably only their mother tongue.

I would never use my apps in Croatian, but I’m sure that there are people that would be more than happy to do it.

Let’s make everyone happy by making controls available in more languages, or at least by making them easier to customize for others who can maybe only contribute by adding translations to the control in the end.

Keep in mind that a localized configuration page will only be shown in the base language that was set when someone provisioned the environment.

UI translations, on the other hand, are applied based on the user’s language set in Personal Settings, and in that way you can support a multilanguage control in the same environment for users who use different languages.