Set MaxReceivedMessageSize from C# code

Boris (2 posts)
January 29, 2020 11:29 AM

Hello guys,



We have an adTempus 4.0 (4.4.0.17325) server running in our environment. Our C# client connects dynamically/programmatically to a list of adTempus servers to fetch job status. The list of adTempus servers can change at any time, so we have no entries in our client's Web.config defining the adTempus server endpoints; the connections are created programmatically using the Scheduler.Connect adTempus API.

 

Recently in our client logs we noticed these error messages when trying to get data from the adTempus server:



"Exception in getting AdTempus jobs: ArcanaDevelopment.adTempus.Shared.ConnectionClosedException: The server connection has failed or has been closed: The maximum message size quota for incoming messages (10240000) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.: The maximum message size quota for incoming messages (10240000) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element. [ADT005166E]"



I checked the API docs and API definitions and cannot find a way to set MaxReceivedMessageSize in the Scheduler.Connect or GetJobMonitorData APIs. Could you please help us understand where and how we can set this parameter when calling the adTempus APIs from C# code?

 

Code snippet:

 

using (Scheduler connection = Scheduler.Connect(connectionDescriptor, LoginAuthenticationType.Windows, "", ""))
{
    using (DataContext context = connection.NewDataContext())
    {
        JobMonitorViewFetchOptions fetchOptions = JobMonitorViewFetchOptions.PastInstances;

        InstanceQueryParameters queryParameters = new InstanceQueryParameters()
        {
            FetchOptions = ObjectFetchOptions.FullFetch // Use FullFetch because we are also getting LogMessages. Using StubsOnly is much slower.
        };

        context.GetJobMonitorData(fetchOptions, queryParameters, startDate, endDate, out DateTime calculatedRangeEnd, ref lastFetchTimeUTC,
            out System.Collections.ObjectModel.ReadOnlyCollection<JobExecutionTimes> scheduledExecutions, out ExecutionHistoryItemCollection history);

        if (history != null)
        {
            foreach (var h in history.OrderByDescending(q => q.ExecutionStart))
            {
                // ...
            }
        }
    }
}

 

Thank you,

Boris

Bill Staff (601 posts)
January 29, 2020 12:56 PM

It's possible to increase this through an app.config setting: under appSettings, add a value named "ServiceHost:MaxReceivedMessageSize" and set it to the new max size (in bytes):

    <add key="ServiceHost:MaxReceivedMessageSize" value="102400000"/>
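
For context, that key goes under the appSettings element of the client application's app.config (or Web.config). A minimal sketch of the full file, assuming a standard .NET configuration layout (only the ServiceHost:MaxReceivedMessageSize key is adTempus-specific, and the value here is an illustrative 200 MB):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- New maximum incoming message size, in bytes -->
    <add key="ServiceHost:MaxReceivedMessageSize" value="204800000"/>
  </appSettings>
</configuration>
```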

However, since you're hitting the existing limit, that means the server is sending back more than 10 MB of data for this call. It might be better to look at limiting the amount of data being returned. I notice in the code that you're doing a full fetch to include the log messages, so that's likely the cause. There are several possibilities:

  • You've got a really big date range, so there's just a lot of instances (and associated log messages) to return
  • You've got one job that has a lot of log messages, or some log messages with a lot of content. This can happen if you've got a script or application that's logging a lot of custom messages to the Job Log through the API or script hosting interface.

If you can tell me a little about what you're doing with the LogMessages, I may be able to suggest a better way to fetch them. For example, if you don't need the messages for all of the instances, you can identify which instances you want it for, and then fetch the messages for that subset all at once.

Meanwhile there are a few ways to troubleshoot this:

1. Programmatically, you can change your FetchOptions to fetch StubsOnly and make sure this works. See how many instances come back. Then loop through them and fetch LogMessages for them, and see if there's a particular instance where you hit a problem.
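
A minimal sketch of that first step, built from the calls already shown in this thread. `FetchLogMessages` is a hypothetical placeholder for however your code loads messages for a single instance, and the `JobInstanceID` property is an assumption based on the SQL column names below:

```csharp
// Sketch: switch to StubsOnly so instances come back without log messages,
// then probe each instance individually to find the one with oversized data.
JobMonitorViewFetchOptions fetchOptions = JobMonitorViewFetchOptions.PastInstances;

InstanceQueryParameters queryParameters = new InstanceQueryParameters()
{
    FetchOptions = ObjectFetchOptions.StubsOnly // stubs only: no LogMessages payload
};

context.GetJobMonitorData(fetchOptions, queryParameters, startDate, endDate,
    out DateTime calculatedRangeEnd, ref lastFetchTimeUTC,
    out System.Collections.ObjectModel.ReadOnlyCollection<JobExecutionTimes> scheduledExecutions,
    out ExecutionHistoryItemCollection history);

Console.WriteLine($"Instances returned: {history.Count()}");

foreach (var h in history)
{
    try
    {
        // FetchLogMessages is a hypothetical stand-in -- replace with whatever
        // call your code makes to load messages for one instance.
        FetchLogMessages(h);
    }
    catch (ArcanaDevelopment.adTempus.Shared.ConnectionClosedException ex)
    {
        // This instance's messages alone exceed the quota: it's the culprit.
        Console.WriteLine($"Quota exceeded on instance {h.JobInstanceID}: {ex.Message}");
        break;
    }
}
```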

2. In the database, you can query to see if there are job instances that have a lot of log messages:

select j.name,j.oid,ehi.jobInstanceID,count(*) as LogMessageCount
from job j join executionHistoryItem ehi on j.oid=ehi.jobOID
join LogMessage lm on ehi.oid=lm.executionHistoryOID
group by j.name,j.oid,ehi.jobInstanceID
order by count(*) desc

3. You can have the server do a file dump of the data it's trying to send back to the client. We can look at that to see what's going on. If you want to go this route, open a support case so I can send you configuration instructions and so you can send the dump file to us without posting it publicly.

 

Bill Staff (601 posts)
January 29, 2020 01:16 PM

Irrelevant to the problem at hand, but just FYI (with apologies for the spotty API documentation), based on what you're showing in your code, you don't need to be calling GetJobMonitorData. This method is there to support the Job Monitor view in the Console and to efficiently get past, current, and future instances all in one call. If you're not interested in future (scheduled) instances, you can just use GetJobHistory instead. There shouldn't be a functional or performance difference, but the code is a little bit simpler on the client side because it eliminates those out and ref parameters that you don't care about:

queryParameters.StartTimestamp=startDate;
queryParameters.EndTimestamp=endDate;
queryParameters.Statuses.AddRange(JobStatusHelpers.FinalStatuses);   //This tells it to only get instances that have finished running

var history=context.GetJobHistory(queryParameters);

This won't make a difference with your quota problem, because it will send over the same data as the call you're making.

Boris (2 posts)
January 29, 2020 02:48 PM

Thank you very much, Bill, for these details and the quick response. Let us validate the possible solutions, and we will reach out if we need more help with one of your approaches.

 

Thanks again,

Boris.

samy (2 posts)
February 18, 2021 02:33 PM
I tried increasing the size in appSettings but it still didn't work. Do I need to set the increased size somewhere within the code itself?
Bill Staff (601 posts)
February 18, 2021 05:08 PM

The setting just needs to go in the config file. I notice in my example above the limit is the same as the default (102400000) so it's not doing any good--you need to use a larger number.

If you don't need LogMessages for the instances, remove the

FetchOptions = ObjectFetchOptions.FullFetch

so that you're not fetching the messages--those are a lot of data.

The other solution I didn't think to mention in my previous answer is to use paging to fetch fewer records at a time:

using (Scheduler connection = Scheduler.Connect(connectionDescriptor, LoginAuthenticationType.Windows, "", ""))
{
    using (var context = connection.NewDataContext())
    {
        JobMonitorViewFetchOptions fetchOptions = JobMonitorViewFetchOptions.PastInstances;

        InstanceQueryParameters queryParameters = new InstanceQueryParameters()
        {
            FetchOptions = ObjectFetchOptions.FullFetch // Use FullFetch because we are also getting LogMessages. Using StubsOnly is much slower.
        };

        var instances = new List<ExecutionHistoryItem>();

        queryParameters.PageSize = 100;

        for (int pageNumber = 1; ; pageNumber++)
        {
            queryParameters.PageNumber = pageNumber;

            context.GetJobMonitorData(fetchOptions, queryParameters, startDate, endDate, out DateTime calculatedRangeEnd, ref lastFetchTimeUTC,
                out System.Collections.ObjectModel.ReadOnlyCollection<JobExecutionTimes> scheduledExecutions, out ExecutionHistoryItemCollection history);

            instances.AddRange(history);

            if (history.Count() < queryParameters.PageSize)
            {
                // all records have been returned
                break;
            }
            // else loop to fetch the next page
        }

        foreach (var h in instances.OrderByDescending(q => q.ExecutionStart))
        {
            // ...
        }
    }
}
samy (2 posts)
February 18, 2021 05:32 PM

So I did do that, but it still thinks the limit is 102400000:

 <add key="ServiceHost:MaxReceivedMessageSize" value="204800000"/>
Bill Staff (601 posts)
February 19, 2021 12:49 PM

Please try setting the MaxBufferSize instead of the MaxReceivedMessageSize:

<add key="ServiceHost:MaxBufferSize" value="204800000"/>

That resolved the problem in my tests.
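
Putting the thread's two settings together, the client's appSettings section would then carry both keys. A sketch, with both values set to the illustrative 200 MB used above:

```xml
<appSettings>
  <!-- Both quotas raised to 200 MB; MaxBufferSize was the one that mattered in testing -->
  <add key="ServiceHost:MaxReceivedMessageSize" value="204800000"/>
  <add key="ServiceHost:MaxBufferSize" value="204800000"/>
</appSettings>
```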


Replies are disabled for this topic.