Virtual Tables – Custom Ownership

This is a follow-up post to my session about Virtual Tables at Nordic Summit. People have asked me if there is a blog post about how to set up custom security on SQL virtual tables that are available out of the box via the virtual connector.

The Problem

Virtual Tables are organization owned and that makes them quite limited in real business world scenarios where some users should not see all the data that is stored in the database. We will show how to implement a basic security model around them to make them more usable.

This example will use Azure SQL Virtual connector, but it’s working just the same with any other data source that is available out there. We need a custom owner field since there is no out-of-the-box one on the virtual table defined.

Polymorphic Lookup Issue

The owner on a normal table can be a team or a user, but on a virtual table it's kinda tricky. The problem lies in polymorphic lookups for virtual tables. You can create a polymorphic lookup with the help of some code, because it's not available via the UI, but it acts a bit weird when you see it for the first time.

The polymorphic lookup field will store only a GUID in your external data source, and because of that you will have a problem rendering the value in the UI. The lookup picker will let you choose from the tables defined in the lookup just fine, but the problem starts when you try to present the value to the user. It will be rendered correctly only for the primary relationship defined in the lookup field; all other values will be rendered as No Name and will point to the wrong table.

Since we now know the limitations we will try to implement the security with a simple lookup this time until there is full support for polymorphic lookups in virtual tables space.

Prerequisites

We will not cover the whole process of creating a SQL virtual table via a virtual connector in this blog so if you are interested in how to do that there is a pretty well-written step-by-step guide by Microsoft here.

We will start building the extension to the virtual table that already exists in the Dataverse.

Solution Overview

We will create a simple virtual table against the SQL database which will contain two owner columns: one as a lookup to the team and the second one as a text column. Then we will create two plugins to help us achieve the final goal.

Steps:

  • Create duplicate owner ID fields in the SQL table
  • Create a custom lookup field that points to the team table
  • Create a single line of text field to store GUID as text
  • Implement a plugin that will store GUID as text on owner change
  • Implement a plugin that will inject filters on RetrieveMultiple message

SQL Table

Let’s create a simple SQL table as shown below.

There are only two columns that you need to include here. Those are OwnerId and OwnerIdString columns.

OwnerId will be used for a lookup field and OwnerIdString will store the text representation of the GUID selected in the lookup.

You may ask: why do we need two fields when we could use just one lookup?

The answer is that currently there is a limitation on a virtual table where you can’t filter it by the lookup field so we need to filter it by the text instead, but we still want users to be able to edit the ownership in the lookup field like they are used to.

OK, but do we really need two fields in the SQL table, or can we use just one?

There is also another limitation of the virtual table that doesn’t allow us to use the same column twice in the virtual table as an external name so we need to duplicate data in our data source.

SQL table should look something like this one:

That’s all we need to do in SQL to make it work, let’s jump into the Dataverse.

Lookup Field

First, we need to make a lookup field that will point to the Team table. This field will be exposed to the users via UI and users will be able to update it just like any other field.

There is nothing special about this one, just a simple lookup field that must have an external name set to the OwnerId column in the SQL table.

Text Field

The second field that we need is a single line of text field that will store the string representation of the GUID.

We should map this one, by setting the external name, to the second Owner field in the SQL database, OwnerIdString in this case.

Update plugin

Users should not experience our little hack with two fields that will hold the same value so we need to automate the process of copying the data to the second field first.

Let’s create a simple plugin that will be triggered when the Owner lookup field is changed.
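A minimal sketch of such a plugin could look like this, assuming the lookup column is called new_ownerteamid and the text column new_owneridstring (both names are placeholders, use your own schema names):

using System;
using Microsoft.Xrm.Sdk;

public class CopyOwnerToStringPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Target holds only the changed columns on Create/Update of the virtual table row
        if (!(context.InputParameters["Target"] is Entity target))
            return;

        // Hypothetical schema names - replace with your own lookup and text columns
        if (target.Contains("new_ownerteamid") && target["new_ownerteamid"] is EntityReference owner)
        {
            // Copy the GUID of the selected team into the plain text column
            target["new_owneridstring"] = owner.Id.ToString();
        }
    }
}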

There is really nothing spectacular happening in this plugin. It will only copy the GUID to the string field every time the owner field is changed.

We should trigger it as a pre-operation plugin step on the change of an owner field.

Now we can jump to the most important part.

RetrieveMultiple Plugin

RetrieveMultiple plugins don’t get much love from people in general and I think that they definitely deserve to be used more since there is so much potential there.

What will we do here?

We will intercept the RetrieveMultiple message and add some prefiltering to the original query so we filter out records that should not be shown to the user.

First of all, we need to normalize the query we get in the plugin, since you will not always get a QueryExpression, which is easy to manipulate. Sometimes you will get a FetchExpression instead, and we need to convert it to a QueryExpression.

It can be done with a simple helper function:
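A sketch of that helper, using the FetchXmlToQueryExpressionRequest from the SDK to convert FetchXML into a QueryExpression (error handling kept to a minimum):

private static QueryExpression NormalizeQuery(object query, IOrganizationService service)
{
    // Already a QueryExpression - nothing to do
    if (query is QueryExpression qe)
        return qe;

    // Convert FetchXML into a QueryExpression so we can manipulate it
    if (query is FetchExpression fetch)
    {
        var request = new FetchXmlToQueryExpressionRequest { FetchXml = fetch.Query };
        var response = (FetchXmlToQueryExpressionResponse)service.Execute(request);
        return response.Query;
    }

    throw new InvalidPluginExecutionException("Unsupported query type.");
}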

Now we have a QueryExpression every time, perfect!

Since we will do the filtering based on the owner team we need to get all the teams that our user is a member of.
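One way to get them is to query the teammembership intersect table; a sketch of a GetUserTeams helper might look like this:

private static List<Guid> GetUserTeams(IOrganizationService service, Guid userId)
{
    // teammembership is the intersect table between systemuser and team
    var query = new QueryExpression("teammembership")
    {
        ColumnSet = new ColumnSet("teamid")
    };
    query.Criteria.AddCondition("systemuserid", ConditionOperator.Equal, userId);

    var teams = new List<Guid>();
    foreach (var membership in service.RetrieveMultiple(query).Entities)
    {
        teams.Add(membership.GetAttributeValue<Guid>("teamid"));
    }
    return teams;
}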

When we have teams we can start editing the initial query.

Since we need to add our filters on top of all other conditions in the query we need to create a FilterExpression first to hold our conditions and set the operator to OR.

Then we should loop through all the teams we got in the GetUserTeams function and add them as ConditionExpressions to the filter.

In the end, we should append our filter to the original query and set the Query input parameter to the newly updated query.

Here is the whole plugin code:
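A condensed sketch of the Execute method, reusing the two helpers above (the text column name new_owneridstring is a placeholder and there is no special handling for users without teams):

public class FilterByOwnerTeamPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // Normalize the incoming query (see the helper above)
        var query = NormalizeQuery(context.InputParameters["Query"], service);

        // Collect the teams the calling user belongs to
        var teams = GetUserTeams(service, context.InitiatingUserId);

        // Build an OR filter: a record is visible if its owner string matches any of the user's teams
        var ownerFilter = new FilterExpression(LogicalOperator.Or);
        foreach (var teamId in teams)
        {
            ownerFilter.AddCondition("new_owneridstring", ConditionOperator.Equal, teamId.ToString());
        }

        // Append our filter on top of whatever conditions were already there
        query.Criteria.AddFilter(ownerFilter);

        // Hand the modified query back to the pipeline
        context.InputParameters["Query"] = query;
    }
}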

When we have all code written it’s time to register a new RetrieveMultiple plugin step in the Plugin Registration Tool.

Make sure that you register it as a sync Pre-operation plugin, otherwise you will not be able to edit the Query parameter.

Final Solution

Let’s see how it works. I have created records that have different owning teams, but my user is a member of only one of them.

Here is the list of all records before we turn on the RetrieveMultiple plugin.

After we turn on the plugin we should see the filtered list. The user in this example is a member of Team Rocket so all the records that are owned by Team Pikachu will not be shown to him in the list.

This is the situation after we turned on the plugin:

Finally, we can have a working example for basic security around the virtual tables. Woohoo!

Conclusion

Working with virtual tables is still not perfect and we still need to pull out some hacks, but the situation is getting better with each release.

This method is not tied to SQL data sources so feel free to try it with any other data source out there.

I hope that in the future we will be able to add working polymorphic lookups to the virtual tables which will help us to achieve the final goal and that is the owner field that we are used to.

Until then we can go this basic route that allows us to edit the owner or simply hide the ownership from our users in the UI and set it in the hidden text field so we can do the filtering that way.

Enable in-app notifications easily

Introduction

In-app notifications created a lot of hype on release day, but I still see that people struggle to set them up even with the step-by-step guide provided by Microsoft in the docs.

Since I hate repetitive tasks I created a simple Javascript snippet for one of my demos that will do everything for you with a simple copy & paste action in the dev tools console of your favorite browser. That way you can speed up enabling notifications in multiple apps with an almost one-click action.

Let’s explain how to do it!

Solution

First of all, navigate the browser to the model-driven app that does not have in-app notifications enabled.

Next, you need to open the dev tools, which can be done via the keyboard shortcuts F12 or Ctrl+Shift+I, or you can select the option from the browser settings by clicking on the three dots -> More tools -> Developer Tools.

Navigate to the Console tab and paste the snippet below to it.

You will see a loading screen that says Enabling in-app notifications… After a few seconds, if everything goes well, you will have in-app notifications enabled for your app; otherwise, you will be presented with an error that you need to fix before running it again.

Now you should see the bell icon in the upper right corner of the screen if everything went well. If not, initiate a hard reload (Ctrl + F5) just to be sure that you got the latest version of the app from the server.

It’s time to start spamming your users with all kinds of notifications!

Conclusion

This is nothing revolutionary, but I’m sure it can save you some time while enabling the notifications in multiple apps across multiple environments. I’m also sure that Microsoft will soon provide a user-friendly way to achieve this, but until then you can simply copy & paste the snippet and wait for the work to be done for you.

Live Dataverse data in email

Introduction

We all know that it’s possible to send static Dataverse data in an email notification to the user, but what happens when the sent data is no longer relevant? The data can change before the user opens the email, and in that case the old data may not help them at all. We need to figure out how to send live data in an email.

Idea

You can try to send an email with custom JavaScript that does something useful, but email clients will just strip out that script part and your email will still be a static one. The same thing happens with an IFrame element that you can try to add to the email.

The only way you can pass dynamic data to an email is an image with a URL pointing to a web service that returns an image back to you, which will then be rendered in the email.

In this example, we will create a web service that fetches task records by ID. It will return HTML converted to an image based on the data that is currently live on the task record. After that, we will use that web service in a Power Automate flow to send an email when a task is created. The email will include an image that points to the created web service.

Let’s do some work!

Solution

Creating web service

First we need to create a web service that will communicate with our Dataverse instance and construct images from the retrieved data.

We will use Azure Functions as our weapon of choice in this example, but you can choose whatever suits you best when building a web service.

Create new Azure Functions project in Visual Studio and add a simple HTTP GET function to it.

A few NuGet packages are needed for this example, so please install the following packages:

  • HtmlRenderer.Core
  • HtmlRenderer.WinForms
  • Microsoft.CrmSdk.CoreAssemblies
  • Microsoft.CrmSdk.XrmTooling.CoreAssembly

The first 2 packages, HtmlRenderer.Core & HtmlRenderer.WinForms, will be used to create images from the HTML constructed inside the function, while the other 2 packages are of course for connecting to the Dataverse instance.

Let’s start coding!

Function must have one query string parameter that will be called id and we will send task record id to it.

When we have a task id it’s time to fetch that task from the Dataverse so let’s create a helper class that will do the job for us.
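A possible helper, assuming client secret authentication with the CrmServiceClient from the XrmTooling package (replace the placeholders with your own environment URL and app registration values):

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Tooling.Connector;

public static class DataverseConnection
{
    // Replace the placeholders with your own connection string values
    private const string ConnectionString =
        "AuthType=ClientSecret;Url=https://<YOUR_ORG>.crm.dynamics.com;ClientId=<CLIENT_ID>;ClientSecret=<CLIENT_SECRET>";

    public static IOrganizationService GetService()
    {
        return new CrmServiceClient(ConnectionString);
    }
}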

All you need to do in this snippet is to replace values for the connection string and you are good to go.

Now we need to fetch the task based on the ID passed to our function and get the data from it.
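A minimal sketch, assuming the columns we care about are subject, statecode and ownerid (adjust the column set to your needs):

// Retrieve the task with just the columns we need for the image
var service = DataverseConnection.GetService();
var task = service.Retrieve("task", new Guid(id), new ColumnSet("subject", "statecode", "ownerid"));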

Perfect, now we have everything we need so we can start constructing HTML that will be converted to image at the end.

We will return the status as text and also color-code it so users get feedback straight away when they see the image. Let’s get the status text and color values for our task record.
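A simple mapping could look like this; the text and color values are assumptions, and the state codes are the standard ones for the task table (0 = Open, 1 = Completed, 2 = Canceled):

// Map the task state to a display text and a background color (assumed mapping)
var stateCode = task.GetAttributeValue<OptionSetValue>("statecode")?.Value ?? 0;

string statusText;
string statusColor;

switch (stateCode)
{
    case 1:
        statusText = "Completed";
        statusColor = "green";
        break;
    case 2:
        statusText = "Canceled";
        statusColor = "red";
        break;
    default:
        statusText = "Open";
        statusColor = "blue";
        break;
}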

When we got those values we are ready to construct the HTML code for our image. We will add 2 divs that will contain the task data and add the background color that we prepared earlier to the div that contains status data.
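A rough sketch of that HTML, reusing the task, statusText and statusColor values from the snippets above (the styling is just an example):

// Owner display name comes from the formatted values, subject as plain text
var owner = task.FormattedValues.Contains("ownerid") ? task.FormattedValues["ownerid"] : string.Empty;
var subject = task.GetAttributeValue<string>("subject");

// Two divs: one for the task data, one for the color-coded status
var html = $@"
<div style='font-family:Segoe UI;padding:10px;'>
    <div>Task: {subject}<br/>Owner: {owner}</div>
    <div style='background-color:{statusColor};color:white;padding:5px;width:120px;text-align:center;'>{statusText}</div>
</div>";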

After that, all we need to do is create an image from the HTML using the HtmlRenderer library that we added as a package at the start by just one line of code.

Finally, we have the image we need to present our data. The last step of the web service part will be returning the image from it. It’s a little bit different than the basic JSON objects that are most commonly used as return values.

The image needs to be converted to a byte array first, and then the response content must be created from that byte array. The RenderToImageGdiPlus method returns a PNG image by default, and that’s why we need to set the content type to image/png too.
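Something along these lines, assuming the function returns an HttpResponseMessage (the exact RenderToImageGdiPlus overload may differ between HtmlRenderer versions; usings for System.Drawing, System.Net.Http and TheArtOfDev.HtmlRenderer.WinForms are omitted):

// Render the HTML string built above into an image
System.Drawing.Image image = HtmlRender.RenderToImageGdiPlus(html);

// Convert the image to a byte array and return it as image/png content
using (var stream = new MemoryStream())
{
    image.Save(stream, System.Drawing.Imaging.ImageFormat.Png);

    var response = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new ByteArrayContent(stream.ToArray())
    };
    response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/png");
    return response;
}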

The whole function code should look like this if you just want to copy it in one go.

Now we have everything we need so we can test the web service to see if the right values are showing in the image.

Here is one open task that we have stored in Dataverse.

Let’s call the function with the ID of that record as a parameter.

If everything went well we should get an image that says Open, marked in blue, and shows my user as the Owner.

It’s working as expected so let’s mark that task as completed and see what will we get now.

This time we get the status text Completed, marked in green.

Perfect, we now have a web service that can be used in the email straight away after we deploy it to Azure. I assume you are familiar with deploying Functions if you have used them before, so that part will not be covered in this example.

Setting up Power Automate Flow

The first thing you need here is the URL of the function from Azure, which will look something like this.

After you got this let’s create a basic Power Automate Flow.

Trigger for our flow should be set up on create of the task record which is pretty much straight forward.

The next step is to send an email to the user. The important part here is to open the code view in the email action and write the HTML there.

Once you have opened the code view, simply add an img HTML tag with a src attribute that points to the function URL and append the dynamic ID parameter to it, as shown below.

Finally we have everything set up and we can see the whole process in action.

Create a new task and check your email and you should see the image in email that shows Open status for task.

If we close the task and check the email once again we should see the Completed status on the image.

Now you are able to present real-time data inside the email to your users. How cool is that?

Conclusion

This example shows you that you can construct live information about the data stored in Dataverse straight from the email notifications. Your users don’t need to access model-driven apps before they realize if they actually need to do something with the record sent in the email.

Where do I use this kind of approach in the real world? Let’s say you send an email to the distribution list that has multiple people in it and that they can work on that item. Some people are slow in reading emails and they maybe don’t need to bother with it if the item is already picked by someone else or maybe even completed.

Offloading Dataverse Attachments To DMS

Introduction

File storage was always an issue when you wanted to store a lot of documents inside your environment in the past. Microsoft was very aware of that so today we have a much better situation since we don’t store files in the database, but in the separate storage used only for files.

Some customers still don’t want to store the files inside the Dataverse environment, but they want to access the files through the model-driven app. The question is how to achieve this inside Dataverse while keeping the notes UX the same. That way users will use the well-known notes feature to upload/download documents.

Idea

We need to modify 2 processes to achieve this solution.

  • Uploading notes with attachments
  • Downloading attachments

When we upload a note that contains an attachment, we want to upload that attachment to a DMS system and free the space in our Dataverse environment so that we don’t use storage twice.

On the other hand, when we want to download the attachment to our local machine we don’t want to go to the DMS to do it, but we want to do it straight from the timeline where the notes are shown. We are obviously missing the link to the file in this case so we somehow need to pull the file from the DMS every time the user initiates downloading the file in Dataverse.

Solution

Uploading notes with attachments

The first issue is how to upload the attachments to DMS every time user uploads a new note. We can do that via a batch job or in a plugin, but maybe there is a better way to do that.

With a batch job we don’t get a near-real-time upload experience, and we always need to check whether there are new attachments, even if none have been uploaded.

A plugin, on the other hand, can be a problem because we may need 3rd party libraries to make our life easier, or an upload may fail, and in that case it’s not easy to implement retry logic because of the 2-minute timeout we have in the plugin runtime.

This example will show you how to utilize the webhooks and Azure Function to do the job for you. You ask why?

Webhooks will be used because that way you can easily notify another system that a new event happened in the Dataverse. That way you don’t need to check every now and then if something new arrived for sync. Plus side is also that every event holds a lot of data that will be sent via webhook. In our case, small attachments will be sent via event payload immediately and larger files will need to be fetched from the Dataverse since there is a message limit of 256kB for webhook payloads.

The handler for our webhook will be Azure Function that will handle uploading the file to the DMS system. That way we offload the upload process also from the Dataverse to the Azure where we can do a lot more things than in plugins.

All failed uploads would be handled with some fallback mechanism, which will not be included in this example, but I usually go with batch job route which will be relevant in really small number of cases.

Upload Implementation

This example will use Azure Blob storage as an DMS, but it can easily be used with any other system.

First we need to create a function that will handle note create event. Below is the function that we will use.

What does this function do?

First it converts JSON from the payload to the familiar RemoteExecutionContext object with JsonToRemoteExecutionContext function.

After we have execution context we can get everything we need to start with uploading the document.

We need to check if the x-ms-dynamics-msg-size-exceeded header is set to know what to do next. Why? The webhook will not send input parameters inside the execution context if the message size is too big. If the header is present, we need to fetch the record first to proceed; otherwise we have everything we need in the Target object inside the input parameters.

If we need to fetch the record from Dataverse, we need to get 3 fields to perform the action with the GetAttachment function.
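A hedged sketch of that part, assuming the payload has already been parsed into a RemoteExecutionContext called remoteContext, an IOrganizationService called service is available, and req is the ASP.NET Core HttpRequest:

// Check if Dynamics flagged the payload as too big to include the input parameters
bool sizeExceeded = req.Headers.ContainsKey("x-ms-dynamics-msg-size-exceeded");

Entity annotation;
if (!sizeExceeded && remoteContext.InputParameters.Contains("Target"))
{
    // Small payload - the full note is already in the execution context
    annotation = (Entity)remoteContext.InputParameters["Target"];
}
else
{
    // Large payload - fetch the note again with just the columns we need
    annotation = GetAttachment(service, remoteContext.PrimaryEntityId);
}

private static Entity GetAttachment(IOrganizationService service, Guid annotationId)
{
    // The three columns we need to push the file to the DMS (an assumed set)
    return service.Retrieve("annotation", annotationId, new ColumnSet("documentbody", "filename", "mimetype"));
}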

Next, we need to send the attachment to the DMS and in this case, we will upload it to the Blob storage with the UploadFile function.
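A possible UploadFile implementation using the Azure.Storage.Blobs SDK; the connection string and container name are placeholders and the annotation ID is used as the blob name:

private static void UploadFile(Entity annotation)
{
    // documentbody is a base64 encoded string in Dataverse
    var bytes = Convert.FromBase64String(annotation.GetAttributeValue<string>("documentbody"));

    // Point these placeholders to your own storage account and container
    var container = new BlobContainerClient("<STORAGE_CONNECTION_STRING>", "attachments");
    container.CreateIfNotExists();

    // Use the annotation id as the blob name so we can find the file again on download
    var blob = container.GetBlobClient(annotation.Id.ToString());
    blob.Upload(new MemoryStream(bytes), overwrite: true);
}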

After successful upload to the DMS we need to free up the space in Dataverse by clearing the documentbody field.

If everything went OK our attachment would be uploaded in the DMS successfully and we are ready to configure the webhook that will trigger this function, but before that, we need to publish the function to Azure.

Open the Plugin Registration tool to start registering the webhook. Go to Register -> Register New Web Hook.

Populate the URL for your function and copy the Function Key from the Azure portal.

Next one up is to register the Create step for this webhook.

The step should be created as async one because that way we will not wait for the document upload every time we add a new file to the Dataverse notes. The key thing here is to set the message to Create and the primary entity to annotation.

After we created the webhook step we are ready to upload all files to the DMS every time a new file is added to the Dataverse notes entity.

The next step is to handle retrieving that file every time a user initiates the download from the timeline.

Downloading attachments

When attachments are stored in the DMS we will not have document content stored in Dataverse, but we will have a reference that will be used to retrieve it from the DMS in real-time.

We need to register the Retrieve plugin which will retrieve files from DMS every time user initiates the download from the timeline.

If the upload has not yet happened, the file will be retrieved from Dataverse, but if the upload was completed successfully, the file will be retrieved from the DMS in real time.

Download Implementation

First we need to implement the Azure function that will download the file from the Blob storage for us.

We will send Dataverse ID as the input parameter to the function so we can browse the Blob storage and retrieve the right file.

A custom method called DownloadFile will help us get the file from the blob.
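A sketch of that method, assuming the same container and blob naming convention as the upload function:

private static string DownloadFile(string annotationId)
{
    // Same container and naming convention as the upload function
    var container = new BlobContainerClient("<STORAGE_CONNECTION_STRING>", "attachments");
    var blob = container.GetBlobClient(annotationId);

    using (var stream = new MemoryStream())
    {
        blob.DownloadTo(stream);

        // Return the file content as a base64 string, just like documentbody in Dataverse
        return Convert.ToBase64String(stream.ToArray());
    }
}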

That method will return a simple base64 string because that’s all we need for the file content; everything else is still stored in Dataverse on the note entity.

Now that we have our function ready, all we need to do is create a plugin that will be executed on the Retrieve message.

The Retrieve plugin is quite simple in this case. Basically, it checks if the documentbody field is empty and, if it is, calls the function with the annotation ID to get the file. After that it just sets the documentbody field to the value retrieved from the blob.
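A minimal sketch of that plugin, assuming a synchronous post-operation step on Retrieve and a hypothetical DownloadFile function URL (function key, error handling and timeouts are left out):

public class AnnotationRetrievePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // The retrieved note is available in the output parameters on a post-operation Retrieve step
        if (!(context.OutputParameters["BusinessEntity"] is Entity annotation))
            return;

        // If the body is still in Dataverse, the upload has not happened yet - nothing to do
        if (!string.IsNullOrEmpty(annotation.GetAttributeValue<string>("documentbody")))
            return;

        // Hypothetical function URL - the id parameter is the annotation id used as the blob name
        var functionUrl = $"https://<YOUR_FUNCTION_APP>.azurewebsites.net/api/DownloadFile?id={annotation.Id}";

        using (var client = new HttpClient())
        {
            // The function returns the file content as a base64 string
            annotation["documentbody"] = client.GetStringAsync(functionUrl).GetAwaiter().GetResult();
        }
    }
}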

That way, every time the Retrieve message is executed, it will replace the document body with the value from the DMS.

Now we have everything for the last step, and that is registering the plugin.

Open Plugin Registration Tool once again and this time register step on the AnnotationRetrievePlugin.

Set message to Retrieve and primary entity to annotation. Execution mode should be sync this time because we want to get the file straight away.

When this is set we can finally try the whole solution and see if it’s working.

Conclusion

This way you can easily move attachments to the DMS system without changing the UX for users in the model-driven app. The downside of this approach is that you of course need some fallback mechanism that collects all the files that failed to upload to the DMS, but luckily, in my experience, this will not affect a lot of files.

I know that this was a big issue in the past since file storage was pricey in Dataverse, but nowadays some clients still don’t want to store attachments in Dataverse, so I use this approach to meet their needs.

Feel free to comment on how you handle those kinds of scenarios so we can maybe improve this process a bit; if not, I hope that you will find this useful.

PCF Preview Image

Almost every day we see a new control or two developed by the community and available as open source. Controls that are published as open-source are generally developed and published in a really short period of time and lack some of the best practices. My last few posts around PCF were just about that.

How to make our controls better by following simple steps?

This time the topic is preview image that is most of the time first impression about the control when we see it in the controls tab.

Today our general source of PCF controls is PCF Gallery where we can find more than 200 controls that are open source. Most of the controls have some kind of screenshot that shows us the interface of this control. Those images are a great way to get our first impression of the control and can easily make us download it and try it at the end.

More than 80% of the controls out there stick with the preview image that is by default. It’s time to change that!

Problem

The problem here is that people don’t add some kind of picture to their controls to make them more visually attractive in the first place.

Maybe people are not entirely to blame for not including a preview image in their controls, because the official examples contain no preview images at all. I personally think that those examples are really good in general, but of course, a preview image should be included in at least one example control so that people see it along the way.

Next is that it’s not presented in the template so people just skip that part.

Everyone can find it in the official docs, but it’s kind of overlooked because it’s one of the last parameters mentioned in the control node section.

How to add

Adding it to your control is really simple.

There is a preview-image attribute in the control node that needs to be defined. Prerequisite is, of course, that image is stored somewhere in the control folder so it can be caught by the webpack.

Let’s have a look at the example!

The example shows us how we can add a preview.png image that is located in the img folder inside a root control directory. Simply add a preview-image attribute and assign img/preview.png value to it.

Easy as that and you have your preview image added.

Another misconception here is that you need to add it also to the resources node with all the other files. Actually, you don’t need to add it there and it will render just fine if you add it just in the control node. The resource node is used only for the images that will be used inside your code.

Model-Driven Apps

Model-Driven App is a place where you will be able to see your preview image.

The preview image can only be seen when you are picking the control for your field/dataset in the form editor.

If you look at the preview image container when you set up your control it feels small and there is a catch in it. That container has fixed size and it will not scale with your image.

Container size is 170 x 130 pixels.

That’s the biggest issue with it in my opinion. Why?

You must be really careful when designing the preview image if you don’t want to be surprised by the weird shape at the end when you see it in the configuration page.

If you don’t keep the 17:13 ratio you will eventually be surprised because the image will stretch in both directions horizontally and vertically to match the size of the container.

The key takeaway here is as long as you keep that strange 17:13 ratio you will have the expected result shown at the end.

The best way to present what your control is capable of is by using a GIF as a preview image. Yes, it’s possible to add one, and I strongly encourage you to do it that way because it’s really eye-catching to see the control in action.

Canvas Apps

When it comes to canvas apps it’s mentioned in the documentation somehow. A quick look over the Available for column gives you an answer that it’s not available for those kinds of apps, but who knows what future will bring.

The other problem with canvas apps is that controls with preview images cause really critical issues while importing to the app.

When you try to import the PCF control that has preview image included it will just fail to import to the app.

I hope that it’s just a temporary issue and that it will be fixed soon by the Microsoft guys and until then all you need to do is avoid using the preview images in your control.

Conclusion

Images are a great way of showing the features of the control in a quick and effective way. That said, let’s start including them in our controls to help makers easily choose the right control from the list.

Keep the 17:13 ratio to avoid unnecessary image stretching!

Use GIFs because they are the best way of showing the features!

It’s a shame that this feature limits our controls only to model-driven apps because of the issue with canvas apps. If your control is supposed to be used in both types of apps, try to include the preview image and comment that code out for now; hopefully you’ll soon be able to uncomment it and release it the way it should be done.

PCF Localization

Today we have more than 100 open source PCF components and we should really start working on some best practices when developing the controls.

The thing I want to point out today is the localization of the PCF components. Your component will be downloaded by people all around the world and let’s assume that there are people that want to use the component in their native language.

Let’s use the inbuilt localization API in the framework to support multiple languages in our components. We can open our repositories for contributors that are not real developers to contribute with translations and make the component even better.

How can we achieve that?

RESX files

The answer lies in the resources node of the ControlManifest.Input.xml file that is generated by the Power Apps CLI.

The line that contains the resx node is the answer to our problem.

RESX file is basically an XML formatted file that contains keys and the localized string values and it’s created per language.

An example of one RESX file can be found below and you can use it as a template for building all the localization files.

Localization API

PCF supports two types of control localization. You can set localized values to the config page for the control that will show up in Form Editor or you can localize the values that will be shown when control is rendered on the form.

With those 2 cases, we cover both citizen developers that are doing the configuration and the end-users that actually see the control in their own language.

Adding RESX to the control

Adding a new RESX file to the control project is quite easy.

The first thing you need to do is uncomment the resx node from the generated manifest file that you got from Power App CLI’s init method. The thing you need to know here is that there is a special naming convention behind those RESX files in order to work properly.

RESX files should end with a locale identifier (LCID).

The example below shows the name for the English (LCID = 1033) language RESX file.

A list of all languages and LCIDs can be found here.

TranslationExampleControl.1033.resx

The best practice is to create a strings folder inside your control folder that will hold all RESX files.

Your manifest should look something like this.

The next step is to create a RESX file inside the project.

Create a strings folder inside the control project and add a new RESX file.

Copy the RESX template from the RESX section inside the newly created file.

Locate the data node on the bottom of the file. Data node can have 2 child nodes called value and comment.

The value child node is the important one: it holds the localized string for the key that is defined in the data node.

In the example below, we have a key Input_Value_Display_Key that will be replaced with the text Textbox Input when we use the app in English. Let’s add another one for the control description, called Input_Value_Desc_Key, and set its value to Single line text input field.

Configuration localization

The first thing that we will do is to translate the configuration page of the control.

Make sure that you have your keys in the manifest file. Our keys, Input_Value_Display_Key and Input_Value_Desc_Key should be located in the property tag.

When you did that push the control to the environment and see if it works. If you did everything right it should look something like this.

You can see that the Textbox Input string is located in property name and the Single line text input field is shown on the place of field description.

Let’s add support for another language, it will be Croatian in my example, but of course, you can put any language here.

LCID code for the Croatian language is 1050 so we need to create a file that ends with it in our strings folder.

I will create a RESX file named TranslationExampleControl.1050.resx and copy the whole content of the English file as a template.

Now we need to change the values for our 2 keys we created before to match the Croatian translation for those strings.

English – Croatian
Textbox Input – Unos teksta
Single line text input field – Polje za unos teksta u jednom retku

We have the translations in the table and all we need to do now is to change the key values to the right strings.

Croatian RESX file should look like the one above.

The last thing we need to do is add the RESX file to the manifest, just like we did for the English version. Just add a new resx tag inside the manifest and set its path to strings/TranslationExampleControl.1050.resx.

What we need to see the result is to publish a control to the environment on which we have the base language set to Croatian in order to see the changes.

Once we did that let’s open the control configuration page again and see the result.

There we go, translated values for field name and description are here. You can play with all strings in the manifest to change the values in the configuration page.

Control UI localization

Now we know how to change the values in the configuration page and it’s time to try to change the values in the UI of the control.

We will add a simple textbox element to our control and change the placeholder text.

Add the snippet below in the init function.

There is a placeholder attribute that holds value <THIS_HAS_TO_BE_CHANGED> and we want to change that value to the appropriate language string.

Let’s reuse the key Input_Value_Desc_Key in our RESX file for this purpose.

Luckily there is an API call that will help us to fetch the right string value.

The function above gets the value for the key passed as the parameter for the current language set in user settings. Time to use it in our little snippet of code.

Once we did that we are done. Let’s build and push the control to the environment.

First, let’s check how it looks like if the English is set as our language in Personalization Settings.

Now we need to change the language by going in Personalization Settings -> Languages and change the User Interface Language to Croatian to see if we will get the placeholder translated to the Croatian language.

Once the page is refreshed we should see that placeholder text has changed.

Everything is working as expected and now you know how to change the strings all around the PCF controls based on the language.

Conclusion

This time we showed how we can make our controls more accessible to people that don’t understand more than one language, probably only their mother tongue.

I would never use my apps in Croatian, but I’m sure that there are people that would be more than happy to do it.

Let’s make both of us happy by making the controls in more languages or at least make them more customizable for others that can maybe only contribute by adding the translations to the control in the end.

Keep in mind that a localized configuration page will only be shown in the base language that was set when someone provisioned the environment.

UI translations, on the other hand, will be applied to the user’s language that is set in the Personal Settings and in that way you can support multilanguage control on the same environment for the users that use different languages.

Is your PCF control read-only?

Issue

I’ve noticed that most of the PCF controls out there that are available as open-source projects have the same problem. They don’t handle states of the field.

The most basic one is the disabled state, when the data is read-only. Maybe you thought that the framework would handle that by default, but the answer is sadly a big NO. The good news is that you can fix that in no time.

Get the right info

The answer is quite simple and it can be found in the context object of type ComponentFramework.Context<IInputs>.

There is a property called mode that holds two important pieces of information. Those 2 are isControlDisabled & isVisible that hold information if the control is read-only and if the control should be visible on the form.

You can easily get those two values by typing the following commands.

Those properties obviously give us a boolean value that is self-explanatory.

Now we know how to get those values, but the question that pops up after that is where should I use it.

Choose the right tool

You can get the context inside two main methods that are part of every control out there: init & updateView. Both context objects will give you the data you need, but you must understand the behavior of each to be sure that that’s the right place you want to use it.

The first one that comes to mind is init method that is called only once when the control is loading. This choice will cover most of the use cases out there for the fields regarding the disabled state because most of the time field is read-only or editable from the moment you opened the form till the moment you close it. Is it the best way to handle it?

The answer is again NO. The second choice is the updateView method, which is always called after the init method finishes initializing the control, so placing the logic there gives the same initial result as putting it in init. The most important thing to know here is that the updateView method is called every time something in the context changes. That way we can handle events like business rules, for example, that make a field read-only as a result of some other action on the form.

Solution

Now that we are familiar with all the concepts, we can make it work as a complete solution. The first thing we need to do is make the HTML element accessible inside the scope of the control.

The next step is to add the component to the control container and expose it to the variable outside the init method.

Finally, we need to set the element state in the updateView method.

You can reproduce the steps with the code shown below.

Conclusion

It’s time to revisit all the controls you’ve created and implement the logic for handling the disabled state. Make your control a complete solution just by a few lines of code. You can also try to manage the visibility state just the same way we did it with the read-only state.

Since most of the updateView methods are empty these days it’s a perfect time to give them some love with just a few lines of code.

D365 Webhooks – Part 3

Introduction

In the last blog of the series, we looked into the structure of the webhook request and managed to parse it into a known object that is easy to use. This time we will deep dive into the debugging part.

Debugging issue

If we think about debugging in the context of webhooks we must have in mind that most of the time we will have both parts located in the cloud. The issue is how to debug the webhook handler Function/API when the request is coming from the cloud system.

As we used Azure Functions as handler example before we will do it in this post too.

Try #1

In this example, we will use Visual Studio + Postman to debug the Azure Function that will be running on our local machine.

Let’s use the slightly modified function that we used in the last blog that will log the request body instead of the user that created the contact.

That way we can catch the request payload in our Azure Function monitoring section. Let’s perform a simple contact create action and wait for the trace log in the Azure portal.

After a few minutes, we can find the log in the monitor section of our function deployed in Azure and it will look similar like the image below.

You can see the request payload displayed in the large text area on the bottom right of the image. You can select all the text inside and copy it to the clipboard for the next step.

Let’s run our function project locally on our machine. When you run it you will get the CMD popup like the one shown below.

The important part here is the URL marked in green color.

http://localhost:7071/api/LogUser

Now we have our function up and running on our local machine so it’s time to mock the incoming request from the webhook by making the POST request from Postman to the URL we noted above.

Open Postman and set the request type to POST. In the address bar, type in the URL http://localhost:7071/api/LogUser. Switch to the Body tab, select the raw radio button, paste in the request payload that we copied to the clipboard from the Azure portal, and select JSON (application/json) from the dropdown on the right that has Text as the default value.

Steps are shown in the image below.

Now it’s time to set a breakpoint in our C# code. Set a breakpoint on the line that we parse the incoming JSON to the RemoteExecutionContext object and run the request in Postman.

If you did everything right your code will hit the breakpoint and you will be able to check the values of your variables and step through the code, but it’s only the start of your debugging nightmare.

This method will take you a large amount of time and it’s maybe the first thing that you will try when debugging the webhooks, but it’s kinda not the best way to debug your function because you need to wait for every call to be logged on the Azure portal before you test it locally on your machine.

The good news is that there is a much better way of doing it.

Try #2

This example will use Visual Studio + ngrok and trust me it’s by far the most efficient way of doing the webhooks debugging.

First of all, what does ngrok do? ngrok exposes local servers behind NATs and firewalls to the public internet over secure tunnels. In other words, it exposes our applications to the internet just by running the simple command in the command prompt.

First, you need to create the account on the ngrok website and install ngrok by downloading the installer afterward.

After you download and install it’s a good idea to add the ngrok folder to the Windows PATH so you can easily call it afterward in the command line.

Next step is to save your auth token to the ngrok config by running the command in CMD.

ngrok authtoken <YOUR_AUTH_TOKEN>

Auth token can be found in the Connect your account section.

Saving the auth token created the config file in

C:\Users\<YOUR_USER>\.ngrok2\ngrok.yml

Open that file so we can modify it a little bit.

Add a snippet below the authtoken line

Be careful while pasting the configuration because if you don’t indent the lines properly it will not work at all and you will get dozens of errors.

This configuration defines a tunnel named functions in the EU region on the HTTP protocol and port 7071, which is the port used for Azure Functions local debugging. With this config in place we can easily fire up our tunnel via CMD.

We can start our tunnel by simply typing the following command into CMD.

ngrok start functions

When the tunnel is created you will see the information about it in the CMD and it will be similar like on the image below.

The important thing here is to copy one of the URLs in the line that says Forwarding.

http://31b1a2b8.eu.ngrok.io

or

https://31b1a2b8.eu.ngrok.io

This will be the base URL that must be configured in the webhook configuration via the Plugin Registration Tool. Let’s open the PRT and edit the webhook configuration we used before.

All you need to do here is to change the Endpoint URL value to http://31b1a2b8.eu.ngrok.io/api/LogUser which is ngrok forwarding URL followed by api/<FUNCTION_NAME> and hit save. Authentication value can be set to WebhookKey and some random value in it.

Now you can perform the action on which you registered your webhook (create of contact in our case) and you will hit the breakpoint if everything is set right.

Also, your request will be logged in the ngrok console, and you can view it on the web interface that can be reached at http://127.0.0.1:4040 when the ngrok tunnel is up and running.

This way you can easily see in the GUI what is coming in the request from Dynamics, and you can also hit the Replay button to fire the exact same webhook as if it was sent from Dynamics again. I think this is pretty neat functionality that will save you some time when debugging the same request over and over again.

Conclusion

I hope that after going through this 3 part webhooks series you can start making your own webhooks solution, know when and how to use them and finally debug them in the right way.

Feel free to ask any questions regarding the whole webhooks series and I will try to answer them as soon as I can.

I’m also open for continuing the series if someone has the topic that I didn’t cover or maybe needs to be covered in a more deeper discussion.

Let’s make webhooks get some love like they really deserve!

D365 Webhooks – Part 2

Introduction

Last time we did a configuration of a simple D365 webhook with Azure Functions. This time we will check how to implement sync/async webhooks and upgrade it to parse the data that is passed as the context to the webhook function. If you are not familiar with configuring webhooks you should definitely start with Part 1 of the blog series before reading this one.

Async/Sync Webhooks

You can register 2 types of webhooks in Dynamics and you can guess it’s asynchronous & synchronous ones. It’s pretty straight forward to know the basic difference between those two, but there is always something in the background that is good to know when you implement them.

Asynchronous Webhooks

The name tells us that those kinds of webhooks are initiated in an asynchronous manner. It makes the most suitable for long-running jobs that will take some time and we don’t want to wait for the execution to end. That way we don’t get the response right away, but we should be able to check the execution results somehow. It’s done the same way you would do it for every single async operation in Dynamics and that is in System Jobs page.

System jobs page can be reached by following the path below.

Settings > System > System Jobs

Example of one failed execution is shown in the picture below.

System Jobs does not provide much information about the error, but at least we can find the info when something failed to execute. We need to implement error logging on the webhook request handler side as the Message box suggests.

Synchronous Webhooks

Synchronous webhooks are just the opposite of the async ones. They allow us to get the response in real time when the event is raised. This benefit also comes with a few restrictions that we must consider while implementing sync webhooks.

The first restriction is that every request that ends with an HTTP code outside the 2xx range will be treated as a failed process. This results in a rollback of the whole transaction that initiated the webhook, and we get a, let’s say, not-so-pleasant popup dialog that is out of our control. We can’t pass any data to that dialog; it shows a generic error message that doesn’t help us determine the real cause of the error, and even if you click Show Detail and download the log you will not get anything useful to trace the issue.

The second restriction is the timeout. The timeout here is even lower than the timeout for workflows (2 minutes): the sync webhook timeout is set to 60 seconds. A timeout also results in a rollback of the transaction.

Request content

When the event is raised in Dynamics the event data is sent by a webhook in a form of the HTTP POST request.

We can divide that data into 3 types:

  • Query String
  • Header
  • Request Body

Query String

The query string parameter holds the least information of all the content posted in the request. You can get 2 different values here:

  • WebhookKey
  • HttpQueryString

Both values are set in the configuration of the webhook via Plugin Registration Tool which can be used to ensure that request comes from the trusted source.

Header

The header contains a little bit more information than the query string. List of all information can be found in the table below.

Key – Description
x-request-id – A unique identifier for the request
x-ms-dynamics-organization – The name of the tenant sending the request
x-ms-dynamics-entity-name – The logical name of the entity passed in the execution context data
x-ms-dynamics-request-name – The name of the event that the webhook step was registered for
x-ms-correlation-request-id – A unique identifier for tracking any type of extension
x-ms-dynamics-msg-size-exceeded – Sent only when the HTTP payload size exceeds 256KB

Most of the parameters are self-explanatory, but it’s good to know what’s the important thing here.

The first one I want to highlight here is x-ms-dynamics-organization parameter that will give us information about the URL of the tenant that is sending the request which is pretty useful for multi-tenant implementations where you can have single Webhook to handle the requests from multiple instances.

The second one I want to talk about here is x-ms-dynamics-msg-size-exceeded. This one is important because it can cause trouble if you completely ignore it. It is set only if the HTTP payload exceeds 256KB in size. If the request is greater than that threshold, it will not contain the ParentContext, InputParameters, PreEntityImages & PostEntityImages, which can be very useful in some scenarios, so you should be careful when handling big payloads.

Request Body

The most important part of the HTTP request is its body. The body contains all the information about the actual event that happened in the Dynamics. It contains a JSON object that has many nodes filled with useful information.

You can find the example of one JSON that is returned when a contact record is created on the link below. An example is stored in the external side just because it’s too big to show it in the post.

JSON Example

Assuming that you are familiar with Dynamics plugin development, if you look carefully in the JSON you can see some familiar parameters there. All parameters are shown in the picture above and you can see that it’s pretty much the same as the object that you used many times before in plugins. We are talking of RemoteExecutionContext class that is used in every plugin code.

Boxes marked in green are the most used parameters and will contain the most useful data, so you can focus on those when trying to find the right info you need.

Now we need to deserialize this big JSON into a RemoteExecutionContext. We will do it the same way we usually deserialize JSON in our plugin code, with the snippet below.
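A sketch of that helper, using the DataContractJsonSerializer from System.Runtime.Serialization.Json together with the RemoteExecutionContext class from Microsoft.Xrm.Sdk:

private static RemoteExecutionContext JsonToRemoteExecutionContext(string json)
{
    // RemoteExecutionContext is a data contract, so the serializer can rebuild it from the payload
    var serializer = new DataContractJsonSerializer(typeof(RemoteExecutionContext));
    using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
    {
        return (RemoteExecutionContext)serializer.ReadObject(stream);
    }
}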

Function from the snippet accepts the JSON string and as an output gives the RemoteExecutionContext object that can be used in our C# development.

Azure Function Upgrade

Now when we have the code to deserialize JSON to the known object we are ready to upgrade our Azure function to use it.

First, we need to install the NuGet package (Microsoft.CrmSdk.CoreAssemblies) that will allow us to use RemoteExecutionContext in our code.

Let’s update the code in the function that will log which user has created the contact record.
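A possible version of the function (named LogUser here to match the URL used elsewhere in the series; the usual Functions usings are omitted):

[FunctionName("LogUser")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
    ILogger log)
{
    // Read the raw JSON payload sent by the webhook
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();

    // Deserialize it into the familiar plugin execution context (helper shown above)
    RemoteExecutionContext context = JsonToRemoteExecutionContext(requestBody);

    // InitiatingUserId is the user that triggered the Create message
    log.LogInformation($"Contact was created by user with id {context.InitiatingUserId}");

    return new OkResult();
}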

After deploying the function just like we did it in the first part of the blog we should get the log in the Monitor section of the function just like it’s shown on the picture below.

There you go! Now you can use the data coming from Dynamics event in your Azure Function like you are used to in the plugin development.

Conclusion

It seems like we have everything ready to make more serious stuff, but we are still missing the debugging part. At the moment we can just trace our errors and variables in the log to find the bug or recreate the requests coming from the instance and sending them to local Azure Function. Sounds like a boring and long-running process?

In this part, we made our way to the final part of the blog series that will show us how to debug the webhook request that comes from Dynamics online instance directly to local Azure Function so we can finally start making more complex scenarios.

D365 Webhooks – Part 1

Overview

Webhooks are present in Dynamics CE world since version 9.0, end of 2017, but I think that they are not used that much by the people in the community. Why should we start using them?

Webhooks is a lightweight HTTP pattern for connecting Web APIs and services with a publish/subscribe model.

Microsoft Docs

First of all, let’s think about integrations that are done in Dynamics. Integrations are a pain point for most of the projects out there and it’s even harder to make them the way we want with the limitations of the platform. Did you ever wanted to do something in your custom plugin, but the platform just forced you to go a different way that is not always an easy one.

Limitations

  • We can’t use 3rd party libraries
  • 2-minute timeout
  • Port usage limitation (only 80 & 443 allowed)
  • We can’t use IP addresses, only named web addresses
  • Execution with partial trust
  • Resource heavy operations

Let’s think about D365 webhooks as plugins that can be triggered in the Dynamics itself, but executed outside the platform. Because of that, we can overcome the limitations that are mentioned before. Sounds great?

Prerequisites

There are 2 things we need to implement webhooks in Dynamics.

  1. Web service
  2. Webhook Configuration

We need to implement a service that will consume our webhook requests and the best way of doing that will be to put it in the Azure. Azure has multiple services that will do the job just fine, but Functions is the easiest way of doing that since it supports multiple programming languages.

Configuration part is done inside Plugin Registration Tool with few easy steps and it’s very similar to registering a plugin step.

Web service

Azure Functions are just great for the start since we have almost everything that can be used for a quick start. We can use the portal inside Azure to code our functions, but I can tell that it’s not the best idea because you will miss little things like IntelliSense or managing NuGet packages. Please use the Visual Studio for developing functions and save your time and nerves.

Azure Functions project can be easily created in Visual Studio by searching for functions in the templates.

After selecting the Azure Functions template you can start with the HTTP trigger template in one of the next steps which will provide you everything you need to make a simple function that can consume webhook request.

This will generate a simple function that looks like this:
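For reference, the generated function looks roughly like this (the exact template varies slightly between Functions runtime versions):

[FunctionName("Function1")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string name = req.Query["name"];

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}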

The most important object that will be used in our webhooks scenario is the HttpRequest object. That object will hold everything you need from the Dynamics 365 event that triggered the webhook.

Let’s do a slight modification to the function so we can log the Body content via built-in logger feature.
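A minimal change could be to replace the body of the generated Run method with something like this:

// Log the raw webhook payload so it shows up in the function's log / Monitor section
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
log.LogInformation(requestBody);

return new OkResult();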

You can now publish the function to the Azure subscription by just pressing the right click Publish on the project in Visual Studio. After the successful publish you should head to the Azure portal and open the newly created resource.

The thing you need from here is URL to your function and it can be found in the section that is called like your function (Function1 in our case).

Press the Get function URL link and paste the info somewhere for later use.

URL should look something like:

https://<YOUR_APP_NAME>.azurewebsites.net/api/Function1?code=ZoM1Z1V0eZZ35ZPgIDEpsd06geGhoWVt7L9Ts13QyoPbe7NhU39JiA==

Dynamics webhooks don’t like special characters in functions keys so I advise you to renew the default key by going to Manage part until you don’t have special characters (other than double = sign at the end) in your code.

Now we have everything to move to the next part.

Configuration

The configuration is done with the newest version of the Plugin Registration Tool (not the one in XrmToolBox).

You can get the latest version in 2 places:

  1. NuGet package
  2. CDS.Tools

I suggest you to get it via CDS.Tools site because you can just click on the Get the SDK Tools! link and download the ZIP file.

Connect to your organization via PRT and select Register New Web Hook from the dropdown menu.

There you need to define Endpoint URL that is the first part (before the question mark) of URL you got on the Azure portal.

Then you need to select the WebhookKey value in the Authentication dropdown and paste the code value that you got from the portal.

Example:

https://<YOUR_APP_NAME>.azurewebsites.net/api/Function1?code=ZoM1Z1V0eZZ35ZPgIDEpsd06geGhoWVt7L9Ts13QyoPbe7NhU39JiA==

Endpoint URL: https://<YOUR_APP_NAME>.azurewebsites.net/api/Function1
Value: ZoM1Z1V0eZZ35ZPgIDEpsd06geGhoWVt7L9Ts13QyoPbe7NhU39JiA==

After you entered the info hit Save button.

The final step is to register a new step to our webhook by right-clicking on the webhook and selecting Register new step. This one should be very familiar because it’s not different from registering the plugin step.

Let’s add sync post operation step on the Create action of Contact entity.

It’s time to trigger our webhook by creating the contact entity in Dynamics. After you create a new contact in the Dynamics you should head to Functions portal in Azure and check the Monitor part.

If you set everything up you should see 1 execution in the logs which will contain logged JSON from the request body. This logs will be delayed for like 5 minutes so don’t worry if you don’t see them straight away.

Conclusion

This is just a brief overview of how to set the webhooks up and running, but we still didn’t cover the in-depth structure of the request and how can you actually debug those things locally.

All those things will be covered in the next blog post that will be released soon.