Power BI Embedder for XrmToolBox

Introduction

I can finally announce that my first XrmToolBox tool is publicly available. It's called Power BI Embedder, and it allows you to embed Power BI reports into forms in your Model-Driven Apps.

GitHub Repository

The idea for the tool popped up in Brussels during the session “XML Hacks – Customisation Tips for Dynamics 365 Solutions” by Marc Gerner, where he showed an example of how you can embed a Power BI report into a form by manipulating the form XML. I thought that approach is quite off-putting for most users out there, which is probably why it's barely used at all.

How to do it via XML is actually quite well documented on the official Microsoft Docs page, but still, let's see how you can do it with the Power BI Embedder tool for XrmToolBox.

Download

You can get the tool, like any other, via the Tool Library inside XrmToolBox by searching for Power BI Embedder. Once it's downloaded, you can find it in the Tools tab.

The UI

The UI is quite simple and allows you to easily populate all the required fields to achieve the final goal.

The first section, called Target, is where you choose the location where your Power BI report will be embedded. Four dropdowns guide you to the final location, which is the form.

The next section is Formatting, which allows you to choose the Section Name that will overwrite the current section label, and the height of the section, which is controlled by the Rowspan attribute.

The third step is where the magic starts to happen, and it's the most complex one because you need to look up all the required data.

Power BI Config section

The only prerequisite is that your report is published online. Once you've done that, open https://app.powerbi.com to find the right parameters.

The Group ID is the first thing you need to find here, and it's quite easy if you want to use a report located in the place called My Workspace. In that case, the Group ID parameter will always be equal to 00000000-0000-0000-0000-000000000000.

If you are not that lucky you will need to dig a bit more for the Group ID value. It’s not hidden that deep.

Go to the workspace that contains your Power BI report, open the Reports tab and open the report you want to embed. Check the URL in the browser; it should look something like the one below.

https://app.powerbi.com/groups/420ab21a-031a-48ce-97ec-061af35ed1dd/reports/8699df09-bf84-4d0e-bf06-37d5df426f49

The GUID after the groups part is the Group ID you are looking for: 420ab21a-031a-48ce-97ec-061af35ed1dd in this example.

Next up is the Report ID, which is easy to find once you already have the report URL.

Report ID is the GUID that is positioned after the reports part in the URL.

https://app.powerbi.com/groups/420ab21a-031a-48ce-97ec-061af35ed1dd/reports/8699df09-bf84-4d0e-bf06-37d5df426f49

The Report ID in the example above is 8699df09-bf84-4d0e-bf06-37d5df426f49.

The URL parameter is the base address of the report URL, which is https://app.powerbi.com in most cases but can be something like https://xyz.powerbi.com.

Before the final step, you need to check the value of the Embed Power BI setting in the top menu. This value must be set to Yes, otherwise your report will not render on the form.

Your form should look something like the one below.

Once the setting is set to Yes, you are ready to publish your first report to the form by pressing the Publish Report button in the upper menu.

After a few seconds, you can load your form in the browser and if you did everything right you should see the report there.

Great, you have your report embedded in the form. The report is still not aware of the context of the form it's rendered on, but we can fix that very easily.

Filtering

Click on the Filter checkbox first to enable the Filter section.

On the report from the last example you can see 3 contacts that are child records under 2 accounts.

Jane Doe and John Doe are created under the Contoso account. Ivan Ficko is created under the Span account.

We would like to filter the report based on the account record that is opened.

First, you need to open the report in the Power BI editor and find the name of the table you want to filter on. You can easily find the name by looking at the Fields tab on the right and copying the exact name of the table you find there. The casing of the table name matters in this step.

The table name in our case is Contacts.

Next up is the column that you want to filter against the field on the actual form. The field we are looking for here is called Account Name, but we need to find its schema name in the fields list. The schema name for Account Name is parentcustomerid.

The final thing here is to choose the field on the Account form that will be used in the filter. We need to select accountid, which holds the id of the current Account loaded on the form.

The goal is for the report on the form to show all Contacts whose Account Name (parentcustomerid) equals the Account ID (accountid) of the opened record.
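Under the hood this maps to a Power BI basic filter. A minimal sketch of the shape such a filter takes for our example (the exact JSON the tool writes into the form XML may differ slightly):

{
  "$schema": "basic",
  "target": {
    "table": "Contacts",
    "column": "parentcustomerid"
  },
  "operator": "In",
  "values": ["<accountid of the opened record>"]
}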

Finally, we can again press the Publish Report button to see the result on the form.

Let's open the Contoso account form and confirm that only John and Jane show up in the report.

Well done, you are now ready to use the tool for other scenarios that fit your needs.

Feedback

I hope you will find the tool useful. Since this is a community tool, feel free to post your feedback and ideas for features so we can make it even better.

Please post the issues/feature requests on the GitHub repository!

Is your PCF control read-only?

Issue

I've noticed that most of the open-source PCF controls out there have the same problem: they don't handle the states of the field.

The most basic one is the disabled state, when the data is read-only. Maybe you thought the framework would handle that by default, but sadly the answer is a big NO. The good news is that you can fix it in no time.

Get the right info

The answer is quite simple and it can be found in the context object of type ComponentFramework.Context<IInputs>.

There is a property called mode that holds two important pieces of information: isControlDisabled & isVisible, which tell you whether the control is read-only and whether it should be visible on the form.

You can easily get those two values with the following commands.
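In TypeScript, that's just two property reads on the context object passed to your control:

const isDisabled: boolean = context.mode.isControlDisabled;
const isVisible: boolean = context.mode.isVisible;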

Both properties return a boolean value that is self-explanatory.

Now we know how to get those values, but the next question is where to use them.

Choose the right tool

You can get the context inside two main methods that are part of every control out there: init & updateView. Both context objects will give you the data you need, but you must understand the behavior of each to be sure you're using it in the right place.

The first one that comes to mind is the init method, which is called only once, when the control loads. This choice covers most of the disabled-state use cases for fields, because most of the time a field stays read-only or editable from the moment you open the form until you close it. But is it the best way to handle it?

The answer is again NO. The second choice is the updateView method, which is always called after init finishes initializing the control, so using it there produces the same output as using init. The most important thing to know is that updateView is called every time something in the context changes. That way we can handle events such as business rules that make fields read-only as a result of other actions on the form.

Solution

Now that we are familiar with all the concepts, we can put them together into a complete solution. The first thing we need to do is make the HTML element accessible inside the scope of the control.

The next step is to add the element to the control container and assign it to the variable declared outside the init method.

Finally, we need to set the element state in the updateView method.

You can reproduce the steps with the code shown below.
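A minimal sketch of the whole pattern, assuming a simple text input (the control and element names are placeholders):

export class MyControl implements ComponentFramework.StandardControl<IInputs, IOutputs> {
    // Step 1: make the HTML element accessible inside the scope of the control
    private textInput: HTMLInputElement;

    public init(context: ComponentFramework.Context<IInputs>, notifyOutputChanged: () => void,
        state: ComponentFramework.Dictionary, container: HTMLDivElement): void {
        // Step 2: create the element and add it to the control container
        this.textInput = document.createElement("input");
        this.textInput.type = "text";
        container.appendChild(this.textInput);
    }

    public updateView(context: ComponentFramework.Context<IInputs>): void {
        // Step 3: set the element state every time the context changes
        this.textInput.disabled = context.mode.isControlDisabled;
        this.textInput.hidden = !context.mode.isVisible;
    }

    public getOutputs(): IOutputs {
        return {};
    }

    public destroy(): void {
    }
}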

Conclusion

It's time to revisit all the controls you've created and implement the logic for handling the disabled state. Make your control a complete solution with just a few lines of code. You can also manage the visibility state the same way we handled the read-only state.

Since most of the updateView methods are empty these days it’s a perfect time to give them some love with just a few lines of code.

D365 Webhooks – Part 3

Introduction

In the last post of the series, we looked into the structure of the webhook request payload and managed to parse it into a known object that is easy to work with. This time we will take a deep dive into debugging.

Debugging issue

If we think about debugging in the context of webhooks, we must keep in mind that most of the time both parts live in the cloud. The issue is how to debug the webhook handler (a Function or an API) when the request is coming from a cloud system.

Since we used Azure Functions as the handler example before, we will do so in this post too.

Try #1

In this example, we will use Visual Studio + Postman to debug the Azure Function that will be running on our local machine.

Let's use a slightly modified version of the function from the last post: it will log the request body instead of the user that created the contact.
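For reference, a sketch of that modification (assuming the function is named LogUser, as in the rest of this post):

[FunctionName("LogUser")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req, ILogger log)
{
    // Log the raw request body instead of the initiating user
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    log.LogInformation(requestBody);

    return new OkResult();
}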

That way we can catch the request payload in the Azure Function monitoring section. Let's perform a simple contact create action and wait for the trace log in the Azure portal.

After a few minutes, we can find the log in the Monitor section of our function deployed in Azure; it will look similar to the image below.

You can see the request payload displayed in the large text area on the bottom right of the image. You can select all the text inside and copy it to the clipboard for the next step.

Let's run our function project locally on our machine. When you run it, you will get a CMD popup like the one shown below.

The important part here is the URL marked in green.

http://localhost:7071/api/LogUser

Now we have our function up and running on our local machine so it’s time to mock the incoming request from the webhook by making the POST request from Postman to the URL we noted above.

Open Postman and set the request type to POST. In the address bar, type in the URL http://localhost:7071/api/LogUser. Change to the Body tab, select the raw radio button, paste in the request payload that we copied from the Azure portal, and select JSON (application/json) from the dropdown on the right that has Text as the default value.

Steps are shown in the image below.

Now it's time to set a breakpoint in our C# code. Set it on the line where we parse the incoming JSON into the RemoteExecutionContext object, then run the request in Postman.

If you did everything right your code will hit the breakpoint and you will be able to check the values of your variables and step through the code, but it’s only the start of your debugging nightmare.

This method takes a large amount of time. It's probably the first thing you'll try when debugging webhooks, but it's not the best way to debug your function, because you need to wait for every call to be logged in the Azure portal before you can test it locally on your machine.

The good news is that there is a much better way of doing it.

Try #2

This example will use Visual Studio + ngrok, and trust me, it's by far the most efficient way to debug webhooks.

First of all, what does ngrok do? ngrok exposes local servers behind NATs and firewalls to the public internet over secure tunnels. In other words, it exposes our applications to the internet just by running a simple command in the command prompt.

First, you need to create an account on the ngrok website and then install ngrok by downloading the installer.

After you download and install it, it's a good idea to add the ngrok folder to the Windows PATH so you can easily call it from the command line.

The next step is to save your auth token to the ngrok config by running the following command in CMD.

ngrok authtoken <YOUR_AUTH_TOKEN>

The auth token can be found in the Connect your account section of the ngrok dashboard.

Saving the auth token creates a config file at

C:\Users\<YOUR_USER>\.ngrok2\ngrok.yml

Open that file so we can modify it a little bit.

Add a snippet like the following below the authtoken line.
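A sketch of the tunnel definition, assuming the ngrok v2 config format (note that region sits at the top level of the file):

region: eu
tunnels:
  functions:
    proto: http
    addr: 7071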

Be careful while pasting the configuration: if you don't indent the lines properly, it will not work at all and you will get dozens of errors.

This configuration defines a tunnel named functions in the EU region, on the HTTP protocol and port 7071, which is the port used for local Azure Functions debugging. With this config in place, we can easily fire up our tunnel via CMD.

We can start our tunnel by simply typing the following command into CMD.

ngrok start functions

When the tunnel is created, you will see information about it in CMD, similar to the image below.

The important thing here is to copy one of the URLs in the line that says Forwarding.

http://31b1a2b8.eu.ngrok.io

or

https://31b1a2b8.eu.ngrok.io

This will be the base URL that must be configured in the webhook configuration via the Plugin Registration Tool. Let’s open the PRT and edit the webhook configuration we used before.

All you need to do here is change the Endpoint URL value to http://31b1a2b8.eu.ngrok.io/api/LogUser, which is the ngrok forwarding URL followed by api/<FUNCTION_NAME>, and hit Save. The Authentication value can be set to WebhookKey with some random value in it.

Now you can perform the action your webhook is registered on (creating a contact in our case) and you will hit the breakpoint if everything is set up right.

Also, your request will be logged in the ngrok console, and you can view it in the web interface that is reachable at http://127.0.0.1:4040 while the ngrok tunnel is up and running.

This way you can easily see in the GUI what is coming in the request from Dynamics, and you can also hit the Replay button to fire the exact same webhook request that Dynamics sent before. That's a pretty neat piece of functionality that will save you time when debugging the same request over and over again.

Conclusion

I hope that after going through this three-part webhooks series you can start building your own webhook solutions, know when and how to use them, and finally debug them the right way.

Feel free to ask any questions regarding the whole webhooks series and I will try to answer them as soon as I can.

I'm also open to continuing the series if someone has a topic that I didn't cover or one that needs a deeper discussion.

Let's give webhooks the love they really deserve!

D365 Webhooks – Part 2

Introduction

Last time we configured a simple D365 webhook with Azure Functions. This time we will look at sync/async webhooks and upgrade the function to parse the data that is passed to it as context. If you are not familiar with configuring webhooks, you should definitely start with Part 1 of the series before reading this one.

Async/Sync Webhooks

You can register 2 types of webhooks in Dynamics: as you can guess, asynchronous & synchronous ones. The basic difference between the two is pretty straightforward, but there is always something happening in the background that is good to know when you implement them.

Asynchronous Webhooks

The name tells us that these webhooks are initiated in an asynchronous manner. That makes them the most suitable for long-running jobs, where we don't want to wait for the execution to end. We don't get the response right away, but we can check the execution results the same way we would for any async operation in Dynamics: on the System Jobs page.

The System Jobs page can be reached via the path below.

Settings > System > System Jobs

An example of a failed execution is shown in the picture below.

System Jobs doesn't provide much information about the error, but at least we can see when something failed to execute. As the message box suggests, we need to implement error logging on the webhook request handler side.

Synchronous Webhooks

Synchronous webhooks are just the opposite of the async ones. They give us the response in real time, when the event is raised. This benefit also comes with a few restrictions that we must consider while implementing sync webhooks.

The first restriction is that every request ending with an HTTP status code outside the 2xx range counts as a failed process. This rolls back the whole transaction that initiated the webhook, and we get a rather unpleasant popup dialog that is out of our control. We can't pass data to that dialog; it shows a generic error message that doesn't help determine the real cause, and even if you click Show Details and download the log, you won't get anything useful to trace the issue.

The second restriction is the timeout, which is even lower than the 2-minute timeout of workflows: the sync webhook timeout is set to 60 seconds. A timeout also results in a rollback of the transaction.

Request content

When an event is raised in Dynamics, the event data is sent by the webhook in the form of an HTTP POST request.

We can divide that data into 3 types:

  • Query String
  • Header
  • Request Body

Query String

The query string holds the least information of all the content posted in the request. You can get 2 different values here:

  • WebhookKey
  • HttpQueryString

Both values are set in the webhook configuration via the Plugin Registration Tool and can be used to ensure that the request comes from a trusted source.

Header

The header contains a little more information than the query string. All of the headers are listed in the table below.

Key | Description
x-request-id | A unique identifier for the request
x-ms-dynamics-organization | The name of the tenant sending the request
x-ms-dynamics-entity-name | The logical name of the entity passed in the execution context data
x-ms-dynamics-request-name | The name of the event that the webhook step was registered for
x-ms-correlation-request-id | A unique identifier for tracking any type of extension
x-ms-dynamics-msg-size-exceeded | Sent only when the HTTP payload size exceeds 256KB

Most of the headers are self-explanatory, but a couple of them deserve highlighting.

The first one I want to highlight is the x-ms-dynamics-organization header, which gives us the URL of the tenant sending the request. That is pretty useful for multi-tenant implementations, where a single webhook can handle requests from multiple instances.

The second one is x-ms-dynamics-msg-size-exceeded. This one is important because it can cause trouble if you ignore it. It's set only when the HTTP payload exceeds 256KB in size. If the request is over that threshold, it will not contain the ParentContext, InputParameters, PreEntityImages & PostEntityImages data, which can be very useful in some scenarios, so be careful when handling big payloads.

Request Body

The most important part of the HTTP request is its body. The body contains all the information about the actual event that happened in Dynamics: a JSON object with many nodes full of useful information.

You can find an example of the JSON sent when a contact record is created at the link below. The example is stored externally just because it's too big to show in the post.

JSON Example

Assuming you are familiar with Dynamics plugin development, if you look carefully at the JSON you can see some familiar parameters. All parameters are shown in the picture above, and you can see that it's pretty much the same object you've used many times before in plugins. We are talking about the RemoteExecutionContext class that is used in every plugin.

Boxes marked in green are the most used parameters and will contain the most useful data, so you can focus on those when trying to find the right info you need.

Now we need to deserialize this big JSON into a RemoteExecutionContext. We will do it the same way we usually deserialize JSON in plugin code, with the snippet below.
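A sketch of that helper, using the DataContractJsonSerializer that the SDK contract types support (usings for System.IO, System.Text and System.Runtime.Serialization.Json omitted):

public static RemoteExecutionContext GetContextFromJson(string json)
{
    var serializer = new DataContractJsonSerializer(typeof(RemoteExecutionContext));

    using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
    {
        // RemoteExecutionContext is a data contract, so it deserializes directly
        return (RemoteExecutionContext)serializer.ReadObject(stream);
    }
}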

The function from the snippet accepts the JSON string and outputs a RemoteExecutionContext object that can be used in our C# code.

Azure Function Upgrade

Now that we have the code to deserialize the JSON into a known object, we are ready to upgrade our Azure Function to use it.

First, we need to install the NuGet package (Microsoft.CrmSdk.CoreAssemblies) that allows us to use RemoteExecutionContext in our code.

Let's update the function code so that it logs which user created the contact record.
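A sketch of the updated function, reusing the GetContextFromJson helper from above (InitiatingUserId is the context property holding the user who triggered the event):

[FunctionName("LogUser")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req, ILogger log)
{
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();

    // Parse the webhook payload into the familiar plugin context object
    RemoteExecutionContext context = GetContextFromJson(requestBody);

    log.LogInformation($"Contact created by user: {context.InitiatingUserId}");

    return new OkResult();
}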

After deploying the function just like we did in the first part of the series, we should see the log in the Monitor section of the function, as shown in the picture below.

There you go! Now you can use the data coming from a Dynamics event in your Azure Function, just like you are used to in plugin development.

Conclusion

It seems like we have everything ready to build more serious stuff, but we are still missing the debugging part. At the moment we can only trace our errors and variables in the log to find a bug, or recreate requests coming from the instance and send them to a local Azure Function. Sounds like a boring and long-running process?

With this part we've paved the way to the final part of the series, which will show how to debug a webhook request coming from a Dynamics online instance directly against a local Azure Function, so we can finally start building more complex scenarios.

PCF Time – Gallery Control

I've been playing with PCF for the last few days and I finally managed to finish my first control.

I wanted to make a control that shows thumbnails of image files attached in notes. My first thought was to attach it to a subgrid, but I forgot that you can't make a subgrid of notes with OOB functionality. That didn't stop me from trying to make it work, so I decided to bind it to a simple text field.

The control is really simple. It finds all the images in notes and presents them as thumbnails. When you click a thumbnail, you get a bigger preview of the image just below the thumbnails; clicking the big preview hides it again.

Under each thumbnail the filename is displayed; it's also clickable and triggers a download of the clicked file.
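For the curious, a rough sketch of how a control like this can pull image attachments from notes via the PCF Web API (the annotation entity stores the file as Base64 in documentbody; recordId and container are assumed to come from the control's context and init, and error handling is omitted):

const query = "?$select=filename,mimetype,documentbody" +
    "&$filter=_objectid_value eq " + recordId + " and isdocument eq true";

context.webAPI.retrieveMultipleRecords("annotation", query).then(result => {
    for (const note of result.entities) {
        if (note.mimetype && (note.mimetype as string).startsWith("image/")) {
            // Render each image attachment as an inline thumbnail
            const thumbnail = document.createElement("img");
            thumbnail.src = "data:" + note.mimetype + ";base64," + note.documentbody;
            container.appendChild(thumbnail);
        }
    }
});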

Here is a GIF showing the control in action.

Download the latest solution here.

I know that TypeScript coding is not my strongest skill at the moment, but I will keep improving for sure.

You can check source code at the GitHub repository:

https://github.com/DynamicsNinja/PCF-Gallery-Control

PCF just feels like a great tool for Dynamics developers; I think we finally have the tool every Dynamics developer out there has always dreamed of. There is also a feature wish list already started in the community, so check it out and support the ideas posted by others.

Can't wait to see what the community will do with this powerful tool in the following months.

D365 Webhooks – Part 1

Overview

Webhooks have been present in the Dynamics CE world since version 9.0, released at the end of 2017, but I think they are not used that much by the community. Why should we start using them?

Webhooks is a lightweight HTTP pattern for connecting Web APIs and services with a publish/subscribe model.

Microsoft Docs

First of all, let's think about the integrations done with Dynamics. Integrations are a pain point for most projects out there, and it's even harder to build them the way we want within the limitations of the platform. Did you ever want to do something in your custom plugin, but the platform just forced you down a different, not always easy, path?

Limitations

  • No 3rd party libraries
  • 2-minute timeout
  • Port usage limited to 80 & 443
  • No IP addresses, only named web addresses
  • Execution with partial trust
  • No resource-heavy operations

Think of D365 webhooks as plugins that are triggered in Dynamics itself but executed outside the platform. Because of that, we can overcome the limitations mentioned above. Sounds great, right?

Prerequisites

There are 2 things we need to implement webhooks in Dynamics.

  1. Web service
  2. Webhook Configuration

We need to implement a service that will consume our webhook requests, and the best place to put it is Azure. Azure has multiple services that will do the job just fine, but Functions is the easiest option since it supports multiple programming languages.

The configuration part is done inside the Plugin Registration Tool in a few easy steps, and it's very similar to registering a plugin step.

Web service

Azure Functions are just great to start with, since almost everything you need for a quick start is already there. You can code your functions in the Azure portal, but I can tell you it's not the best idea, because you will miss little things like IntelliSense and NuGet package management. Please use Visual Studio for developing functions and save your time and nerves.

An Azure Functions project can be easily created in Visual Studio by searching for functions in the templates.

After selecting the Azure Functions template, you can pick the HTTP trigger template in one of the next steps, which gives you everything you need to make a simple function that can consume a webhook request.

This will generate a simple function that looks like this:
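Reproduced from the standard HTTP trigger template of the time; minor details may differ between SDK versions:

[FunctionName("Function1")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string name = req.Query["name"];

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}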

The most important object that will be used in our webhooks scenario is the HttpRequest object. That object will hold everything you need from the Dynamics 365 event that triggered the webhook.

Let's make a slight modification to the function so we can log the body content via the built-in logger.
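A sketch of the modified function, replacing the template logic with a single log call:

[FunctionName("Function1")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
{
    // Log the raw body of the incoming webhook request
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    log.LogInformation(requestBody);

    return new OkResult();
}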

You can now publish the function to your Azure subscription by right-clicking the project in Visual Studio and selecting Publish. After a successful publish, head to the Azure portal and open the newly created resource.

What you need from here is the URL of your function; it can be found in the section named after your function (Function1 in our case).

Press the Get function URL link and paste the info somewhere for later use.

The URL should look something like:

https://<YOUR_APP_NAME>.azurewebsites.net/api/Function1?code=ZoM1Z1V0eZZ35ZPgIDEpsd06geGhoWVt7L9Ts13QyoPbe7NhU39JiA==

Dynamics webhooks don't like special characters in function keys, so I advise you to renew the default key in the Manage section until you get a key with no special characters (other than the double = sign at the end).

Now we have everything to move to the next part.

Configuration

The configuration is done with the newest version of the Plugin Registration Tool (not the one in XrmToolBox).

You can get the latest version in 2 places:

  1. NuGet package
  2. CDS.Tools

I suggest getting it from the CDS.Tools site: just click the Get the SDK Tools! link and download the ZIP file.

Connect to your organization via PRT and select Register New Web Hook from the dropdown menu.

There you need to define the Endpoint URL, which is the first part (before the question mark) of the URL you got from the Azure portal.

Then you need to select the WebhookKey value in the Authentication dropdown and paste the code value you got from the portal.

Example:

https://<YOUR_APP_NAME>.azurewebsites.net/api/Function1?code=ZoM1Z1V0eZZ35ZPgIDEpsd06geGhoWVt7L9Ts13QyoPbe7NhU39JiA==

Endpoint URL: https://<YOUR_APP_NAME>.azurewebsites.net/api/Function1
Value: ZoM1Z1V0eZZ35ZPgIDEpsd06geGhoWVt7L9Ts13QyoPbe7NhU39JiA==

After you've entered the info, hit the Save button.

The final step is to register a new step on our webhook by right-clicking the webhook and selecting Register new step. This should feel very familiar, because it's no different from registering a plugin step.

Let's add a synchronous post-operation step on the Create message of the Contact entity.

It's time to trigger our webhook by creating a contact in Dynamics. After you create a new contact, head to the Functions portal in Azure and check the Monitor section.

If you set everything up correctly, you should see 1 execution in the logs, containing the JSON logged from the request body. These logs can be delayed by around 5 minutes, so don't worry if you don't see them straight away.

Conclusion

This is just a brief overview of how to get webhooks up and running; we still haven't covered the in-depth structure of the request or how to actually debug these things locally.

All those things will be covered in the next blog post that will be released soon.

How to embed Forms Pro in Dynamics 365

Introduction

It's been a while since Forms Pro became available as a public preview, and I must say I fell in love with it the first time I saw it. From the very first moment, I was thinking about how I could use it inside the Dynamics 365 interface.

There are 3 types of embedded controls for Forms Pro:

  • Inline
  • Pop-up
  • Button

The first one that got my attention was Pop-up, because I thought it might serve as a dialog (a happy thought, now that the old dialogs are deprecated). Sadly, that option disappointed me really fast because of CORS (Cross-Origin Resource Sharing). When you wire up a ribbon button or field event that calls a simple function to show the pop-up, you get a CORS warning in the console and nothing happens in the end.

The problem is the authentication, which posts to the microsoft.com domain and not to dynamics.com, so there is no way to make it work in any supported manner.

The same thing happens if you try to call the other two types of embedded controls (Inline and Button) from a ribbon or form event.

The only way to overcome the CORS limitation is to put the code into an IFRAME and invoke it from inside, but let's make it more useful by passing the Dynamics form context to the survey response.

Goal

Our goal will be to create a survey for the contact form. The survey will have 2 questions about customer satisfaction with a CRM product. We will concatenate those answers into one string, update the survey response record (an activity entity), set the regarding field to the contact, and finally show it in the Timeline on the contact form.

Solution

The first thing we need to do is create the survey that will be shown on the contact form.

The first question will offer a choice between 3 answers:

  • Salesforce
  • Dynamics 365
  • Zoho

The second question will be a rating one holding values 1-5.

The next step is the Send Survey tab, where we need to choose an Embed option. Since we will host it on the form inside an IFRAME, the best option here is Inline.

Since it will be embedded on the contact form, we need to create a custom parameter that will hold the id of the contact.

After adding the custom parameter, it's time to press the generate code button. You will get something like this:
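From memory, the generated snippet has roughly this shape; treat the script/CSS URLs and the SurveyEmbed constructor arguments as placeholders and copy the exact values from your own generated code:

<link rel="stylesheet" href="<EMBED_CSS_URL>" />
<script src="<EMBED_JS_URL>"></script>
<script>
    function renderSurvey(parentElementId, context) {
        var embed = new SurveyEmbed("<SURVEY_ID>", "<FORMS_PRO_BASE_URL>", "<CDN_URL>", "true");
        embed.renderInline(parentElementId, context);
    }
</script>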

Key things here are 2 parameters:

  • parentElementId
  • context

The parentElementId parameter is just a string value holding the id of the div that will be used as the survey container.

The context parameter is used to send additional data to the survey response activity that is created once you submit the survey.

Let's make some small changes to the code to make it work in our case: we need to add a container DIV for the survey and get the contact id from the form context.
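A sketch of those changes, using the classic Xrm.Page API that web resources of that era could reach through the parent window (the GUID braces are stripped before sending):

<div id="surveyDiv"></div>
<script>
    // Grab the id of the contact loaded on the parent form
    var contactId = window.parent.Xrm.Page.data.entity.getId()
        .replace("{", "").replace("}", "");

    // Pass it to the survey as the custom context parameter
    renderSurvey("surveyDiv", { "contactId": contactId });
</script>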

Now that we've made those modifications, we need to create a new HTML web resource in Dynamics. It will be called fic_Embed.html, and its content will be the same as the code box above.

After that add a new tab to the contact form and inside it add a section with a web resource (in this case fic_Embed.html).

When you save and publish the form and open the Survey tab, you should see something like this.

Congratulations, you have successfully added a survey to your Dynamics form.

Now we need to see what happens with the data created in the background. Let's test the survey by choosing Dynamics 365 and rating it 5 stars. After we submit the answers, some records are created in the background.

The first record created is the Survey response record, which is an activity entity by the way. It contains a few fields that are important for our case.

The Subject field is inherited from the activity entity and is blank when you submit the survey, which is not great in our case because we want to show the activity in the Timeline.

The Context Data field contains a JSON object with one core node called EmbedContextParameters, holding all the custom parameters (only contactId in our case) that we posted in the JS code.

Every Survey response record has child records called Survey question responses. Those records hold the values of the submitted answers; the answers are stored in the Response field in plain text format.

Now that we know how the data is stored, it's time to make a Flow that will transform the data the way we want.

Let’s make a blank Flow to start.

The trigger will be the Common Data Service one that fires when a Survey response record is created.

The first action will be used to retrieve all Survey question responses related to the record that triggered the Flow.

The only parameter we need to set here is a filter query that returns only the question responses related to the record that triggered the Flow. The Survey response GUID is hidden in the Activity parameter from the trigger. Question responses are always returned in the same order as the question numbers, sorted ascending.
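Purely as an illustration of the shape such an OData filter takes (the lookup field name below is a placeholder; check the actual schema name of the Survey response lookup on the question response entity):

_msfp_surveyresponse_value eq <Survey response GUID from the trigger>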

Next, we need to parse the JSON holding EmbedContextParameters from the Context Data field of the Survey response entity.

The schema generated from the sample payload looks like this:
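Assuming a payload like ours, the Generate from sample option produces something along these lines:

{
    "type": "object",
    "properties": {
        "EmbedContextParameters": {
            "type": "object",
            "properties": {
                "contactId": {
                    "type": "string"
                }
            }
        }
    }
}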

Finally in the last step of the Flow we will update the Survey response entity.

As the record ID we will use the Activity field from the trigger. The Regarding field has 2 parts: Regarding needs to be set to the contactId value we got from the JSON in the previous step, and Regarding Type needs to be set to contacts.

The Subject is the part shown in the Timeline, so we need to put something human-readable here. Let's concatenate the answers of the questions with a “-” sign in between. We know that in the returned Survey question responses dataset, the answer to the first question comes back as the first item in the array and the answer to the second question as the second item. Knowing that, the concatenation is easy.

First body(…) part from Subject field in the picture must be equal to:

Second body(…) part from the Subject field in the picture must be equal to:
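Illustratively, the two expressions index the first and second items of the returned value array; the action name and the response field name here are placeholders for whatever your Flow actually uses:

body('Get_Survey_question_responses')?['value'][0]?['<response field>']

body('Get_Survey_question_responses')?['value'][1]?['<response field>']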

After we set those parameters in the last action and save the Flow, everything is ready for the final test.

Let’s submit new answers to the same contact as before.

After we submit the answers, we wait a few seconds for the Flow to run and then refresh the Timeline on the contact record.

Well done if you see the submitted answers on the Timeline!

Conclusion

It's quite easy and intuitive to work with survey answers once you understand the data model behind them, but I see so much still to be done. As I said at the beginning, I see big potential here for dialog scenarios in the future, but first they need to change the authentication method to the same domain as Dynamics so we can use it anywhere on the form.

I'm really looking forward to this feature, and I think big things will come really soon because the community feedback on it is just huge.

When smart matching became dumb matching

There are 2 ways of tracking emails in Dynamics 365: folder-level tracking and the correlation method. Folder-level tracking fits only a few cases; the correlation method does the job in most cases and is the way to go for most customers.

When you decide to go the correlation route, there are also 2 (or even 0) ways to do it. The first is the tracking token, which is the most precise method but also the least accepted by customers because it adds a token to the subject of the mail. The second is the smart matching method, suggested by most people because it has the flexibility to match emails started from a new thread. But is it really worth it?

What is smart matching?

When you look up the definition of smart matching in the docs, it says matching is done by checking the email subject, sender, and recipients to find the matching email thread among the emails already synced to Dynamics.

Subject matching is done by setting a regex that removes all unnecessary characters from the subject to improve the matching algorithm.

Sender and recipient matching lets us set parameters such as the maximum number of recipients to get better results, allowing us to tweak the matching to our specific needs. In most cases it takes some time to find the best setup for your specific case.

Sounds like a perfect solution, doesn't it? There is always a but!

Problematic case

I had a problem with some emails not syncing to Dynamics over a long period of time. I tried to find a pattern among the unsynced emails, but with no luck.

The problematic emails were automatic emails sent every now and then with the same subject, by the same sender, to nearly the same recipients. Half of the emails synced to Dynamics and the other half ended in an error, and the error message on the mailbox just didn't say much about the actual issue.

Looking at the metadata of the email messages, the emails that synced successfully looked just like the ones that didn't, so there should have been no reason for the sync to fail.

Investigation

The investigation started with the EWS Editor tool, which is used to troubleshoot issues with Exchange server integrations with Dynamics 365. With this tool you can view all the metadata of a particular email and test whether that mail can be synced to Dynamics. We had no luck with the tool in this case, but maybe it can point you in the right direction when troubleshooting your own issues.

After a deep dive into the problem one strange pattern emerged.

Emails with the exact same subject and recipients started to fail after an initial period of successful syncs, and from then on they failed to sync every single time. Since this was an on-premises installation, we started the trace log on the server to monitor the errors coming from Dynamics.

You can force a re-sync of the same email by modifying one of its properties or just moving it to a different folder and then back to the inbox.

Every single time, the mail failed to sync with the exact same, quite simple error message: “SQL timeout expired”. That led us to a problem with the SQL queries executed on email sync.

When we scrolled through the trace log we found one SQL query above the timeout messages that looked like this one:

When we tried to execute the query in SQL Management Studio, it ran for almost a minute, which is a serious problem considering it runs every time a single email is synced to Dynamics.

The problem lies in the EmailHashBase table, which has a lot of records in it; with all the filtering and joins, the query really takes time to execute. On a call with Microsoft, we were told that it looked like the smart matching fallback SQL query.

Finally, armed with that information, we switched off the smart matching feature and everything started working like a charm. It looks like making our solution less smart made everything work better.

Conclusion

The query on the email hash table causes the issue if you receive emails with the same subject and recipients over and over again.

You can increase the timeout limit, which is not the smartest idea, or you can just get rid of the smart matching feature before it kills your integration. Switching off smart matching is not that bad an idea when you consider that it's a much older feature than the correlation method. It may look like just an addition to the correlation method, because it's shown as a nested feature under correlation, but it really isn't. It's only a fallback option for when the correlation method fails to find the right match.

There will always be cases where the smart matching feature is a good fit, but you should really think about whether you'll hit the situation described above before you decide to tick that checkbox.

I can't believe I'm saying that a “dumb” solution made me so happy that day. It looks like you don't need to go the smart route every time; maybe the dumb one will save your time and nerves.

D365 Forms – VE vs​ Embedded Canvas App

A lot of people have asked my opinion on using embedded canvas apps instead of virtual entities to show additional information on a form, so I decided to put my thoughts into a blog post.

Both approaches have a bright future, and I'm practically 100% sure of that because they fit different design scenarios. Let's discuss the differences between the two and when to use each.

Comparison

Feature | Virtual Entities | Embedded Canvas Apps
OOB Data Sources | 2 | 200+
Custom Data Sources | Any Data Source | Only OData v4
Form Context | All Fields | All Fields
Related Entities Context | All Fields | Limited Fields
Appearance | Dynamics View | Canvas App
Elements Per Page | Unlimited | 1

Virtual Entities

Virtual Entities look like a feature forgotten by Microsoft, but some people from Microsoft have confirmed that serious work on Virtual Entities is happening in the background. On the other hand, some people have reverse engineered the Plugin Registration Tool and found that everything is already set up for full CRUD support in the tool; it's just not accessible via the UI. I'm a big fan of VEs, so I was really happy to hear that.

Data sources are really the thing that can push you either way. Virtual entities can handle 2 OOB connectors: OData v4 & Cosmos DB (buggy at the moment). The OData v4 connector is the most stable connector in the VE world right now, and it's definitely the way to go if you have web services that meet the standard. On the other hand, you can literally connect to any data source by writing C# code for a custom connector, but be aware that this type of integration will take a lot of your time, because you need to implement every little thing (fetching, filtering, sorting, paging, …) yourself or find a developer friend.

You should use virtual entities if you want to use the data in places other than your entity form (a related records subgrid), for example charts, dashboards, reports, or Advanced Find.

Ribbon buttons are also a great way to interact with records shown in a subgrid (select a few records and fire an action); at the moment it's not possible to configure those buttons to interact with elements inside an embedded canvas app.

The limit on the number of subgrids is also a plus for VEs. In UCI you can put as many subgrids on the form as you want, while the Classic view caps that number at 10.

Editing the actual UI representation is easier too, because it can be done by anyone who has used Dynamics views before.

Embedded Canvas App

This feature is still marked as a preview, but lately it has become more stable, so you should definitely consider using it. Microsoft is pointing us toward using Canvas Apps as much as we can, and I think we should give them a try even in embedded scenarios.

When it comes to data sources, it's pretty clear that more than 200 OOB connectors are a big plus for the Canvas Apps approach, but the situation changes when we need data outside those OOB connectors. The data you want to fetch must be available as a REST web service; we can expect SOAP support in the near future.

You should consider the canvas approach if you need to enrich your record list with additional graphical elements (e.g. images) or add buttons that perform an action on each record.

Getting callback actions from the Canvas App back to the form is not available at the moment, but I think it's a must-have feature and that we won't wait too long for it.

The limit of only one canvas app per page is the con here, but you can make your app as complex as you want to show more than one list. If you go with a complex app, keep performance in mind when fetching data from the data sources.

Canvas Apps give us more flexibility when building the actual UI of the app, but they also require people who are familiar with designing Canvas Apps, and it can take some time before you can realize your idea inside the app.

Conclusion

You should really consider every single limitation of VEs and Embedded Canvas Apps before you make the final decision on which way is right. Sometimes both ways are good and will do the job, but when you think about future changes, one may simply be the better choice. I'm sure there are scenarios that will make you stick to one option because the other won't fit the requirements from the start.

If you ever find yourself in a position where you really need to choose between the VE and Canvas Apps approaches and you are sure that both will do the job just fine, I suggest you go the Canvas Apps route, because of potential compatibility issues in the future and because the popularity of Canvas Apps in general makes Microsoft rapidly develop new features that will help you even more.

New CrmSdk force us to use .NET v4.6.2

It's that time of year when all older organizations get upgraded to the latest version of Dynamics, and as usual the upgrade produces tons of new issues.

First of all, I have noticed that a lot of async jobs started to fail (not every time) due to 2 errors:

  • System.ServiceModel.CommunicationObjectFaultedException: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state
  • System.TimeoutException: Couldn’t complete execution of the custom activity XYZ plug-in within the 2-minute time limit.

The main problem was that I forgot to update the SDK on the workflow activities project to the v9.x version, but when I tried to update to the latest SDK version I was stopped by an interesting error.

There is no info about required .NET framework version on the NuGet installer screen.

But when you look up the NuGet package on the web (https://www.nuget.org/packages/Microsoft.CrmSdk.CoreAssemblies) you will find the answer very quickly.

The answer is that, from now on, you need to target .NET v4.6.2 in all of your projects that use the new SDK NuGet packages.
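If you prefer editing the project file over the project properties dialog, retargeting a classic (non-SDK-style) project is a one-line change, sketched here:

<PropertyGroup>
  <TargetFrameworkVersion>v4.6.2</TargetFrameworkVersion>
</PropertyGroup>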

This is a smart move by Microsoft that finally confirms that .NET v4.6.2 DLL assemblies are officially supported on D365.

It's time to update all those .NET v4.5.2 projects to the new version of the framework and take advantage of it.