From: Ben
1) Defer Panel Updates only acts on the front panel it is associated with. The others will update fine.
2) If you only present the data to the graph when it changes, instead of always sending the same data 5 times a second, you will keep the CPU demands low.
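In text form the logic is just a change check in front of the write. Here is a minimal Python sketch of the idea (in LabVIEW this would be a case structure around the indicator terminal, with the last value carried in a shift register; send_if_changed and the print stand-in are made up for illustration):

    def send_if_changed(new_data, last_data, write_to_graph):
        # Write to the graph only when the data actually differs;
        # otherwise skip the redraw and hand back the unchanged value.
        if new_data != last_data:
            write_to_graph(new_data)   # placeholder for the graph-indicator write
            return new_data
        return last_data

    last = None
    for frame in ([1, 2, 3], [1, 2, 3], [1, 2, 4]):   # second frame is a repeat
        last = send_if_changed(frame, last, print)    # writes only twice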
Ben
From: crcragun
Thanks Ben,
The data being sent to the XY Graph is, in reality, always changing, although it may be by a very small amount.
Is it possible to defer updates for a specific object such as the XY Graph rather than for an entire front panel?
Cliff
From: altenbach
You can just place the graph terminal in a case structure and only update every Nth iteration of the loop. Use Quotient & Remainder to divide the iteration count by N and wire the remainder to your case selector. Place the graph update code in the "0" case and leave the default case empty.
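For what it's worth, the same logic in a short Python sketch (acquire_data and update_graph are made-up stand-ins for the real acquisition and graph code):

    def acquire_data():
        # Stand-in for the real acquisition; returns one 10k-point frame.
        return [0.0] * 10000

    def update_graph(data):
        # Stand-in for writing the XY Graph terminal.
        print(f"graph updated with {len(data)} points")

    N = 5  # update the graph once every N loop iterations
    for i in range(20):                # the main loop
        data = acquire_data()
        if i % N == 0:                 # remainder of i / N (Quotient & Remainder)
            update_graph(data)         # the "0" case: graph gets the data
        # any other remainder falls into the empty default case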
From: crcragun
altenbach,
If the defer panel update does not work as hoped, I plan on doing something similar to your suggestion. I still want the graph to update every 200 ms, but with less data. I plan on reducing the array size from 10K to 5K and sending every other point from the RT to the Host. In the end I will probably continue to decimate the RT data until the CPU usage is reasonable. In addition, changing the line thickness to a thinner line dramatically reduces the CPU usage.
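The every-other-point decimation is just an array stride. A quick Python sketch of what I mean (the array names are placeholders):

    x_data = list(range(10000))             # stand-in for the 10K-point RT arrays
    y_data = [v * 0.5 for v in x_data]

    def decimate(points, factor=2):
        # Keep every 'factor'-th point; factor=2 halves 10K down to 5K.
        return points[::factor]

    x_host = decimate(x_data)   # 5000 points shipped from the RT target to the Host
    y_host = decimate(y_data)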
Cliff
From: StevenA
There is no question that reducing the amount of data on the graph will help with the performance issue.  I think Ben's idea with the defer panel updates is an interesting one.  
As I'm familiar with Cliff's project, let me give a little more perspective on the issue. The challenge he is having is that this graph monitors continuous data from a machine, and the X and Y channels plotted against each other are selectable by the user on the fly, depending on what he needs to monitor. The channels that can be chosen for the graph have very different ranges, so keeping auto scaling turned on is very nice for displaying the data properly. What happens is that if the data selected by the user goes to zero and all you have is some noise, the plot will zoom in (auto scaling) and turn into this "blob" of lines that the XY Graph really starts to choke on. Yet at any instant there could be a large transient in the data that needs the resolution of 10K points to see the detail. The graph has no problem plotting the 10K points when those points are spread out over a large range.
Programmatically reducing the number of data points when the data is near zero and turns into the dreaded "blob" is not trivial because, in some cases, depending on the channels selected, there is valid data in that range.
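To show why it isn't trivial: a naive Python sketch of that check might look like the following, and the comments note where it falls down (maybe_decimate, span_threshold, and factor are all hypothetical):

    import numpy as np

    def maybe_decimate(x, y, span_threshold=0.01, factor=10):
        # Naive check: if both axes have collapsed into a tiny range
        # (the "blob"), thin the data before plotting.
        # The catch described above: a fixed span_threshold will also thin
        # channels whose valid data legitimately lives in that small range.
        span = max(np.ptp(x), np.ptp(y))    # peak-to-peak extent of each axis
        if span < span_threshold:
            return x[::factor], y[::factor]
        return x, y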
Thanks for everyone's input :)