From: crcragun on
LabVIEW 7.1.1 and 8.5.

My application requires an XY Graph that allows the Operator to select which of many channels are plotted on the X and Y axes.  I have implemented a 10,000-point circular buffer that updates every 200 ms.  The problem occurs after testing is complete, when the buffer gradually fills with noise for each of the two selected channels.  The result is a tight blob of 10,000 points, all connected with lines, on the XY Graph.  Since auto-scaling is enabled, the blob is scaled up to fill the entire graph.  In addition, the Operator controls when the XY plot is enabled/disabled.  In this scenario, CPU usage becomes excessive, bogging down the whole computer until it is nearly unresponsive to Operator selections.
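
In pseudo-code (sketched in Python here, since the buffer itself lives in LabVIEW G, and the names are illustrative), the buffering amounts to a fixed-size queue that overwrites its oldest sample:

    from collections import deque

    BUFFER_SIZE = 10_000                    # the 10,000-point buffer described above
    xy_buffer = deque(maxlen=BUFFER_SIZE)   # a full deque drops its oldest sample

    def add_sample(x, y):
        # Called every 200 ms with the Operator-selected X and Y channels.
        xy_buffer.append((x, y))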

The attached VI is a very simple (reduced) example of what I am experiencing.

Through some troubleshooting, I have learned the following:

The point style, interpolation, and line thickness all directly affect CPU usage.  A round point requires more CPU than a square point, a large dot more than a small dot, and a thick line more than a thin line or no line at all.

In summary, the more pixels that are required to represent the graph, the higher the CPU usage.

Does anyone have ideas on how to further reduce CPU usage, either through other XY Graph settings or by some method of intelligently selecting which points to plot?

Thank you.


CUP Hog - XY Plot.vi:
http://forums.ni.com/attachments/ni/170/335354/1/CUP Hog - XY Plot.vi
From: GerdW on
Hi crcragun,

"the more pixels that are required to represent the graph, the higher the CPU usage" - that's true! :smileyvery-happy:

So your options are:
- Obvious: draw fewer points (with thinner lines)! Can you really distinguish 10k points on this plot, especially when they are located on just 5 x-values? (A decimation sketch follows below.)
- Also obvious: don't draw in each timeout case! Draw only when new points are added or the plot options change.
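
One concrete way to draw fewer points is min/max decimation: bucket the samples along the x-axis and keep only each bucket's extremes, so the drawn shape is preserved with far fewer points.  A rough Python/NumPy sketch of the idea (illustrative only; the real VI is G code, and n_buckets would be tuned to the graph's pixel width):

    import numpy as np

    def decimate_minmax(x, y, n_buckets=500):
        # Keep only the min and max y in each x bucket; the extremes
        # survive, so the plot looks the same with far fewer points.
        order = np.argsort(x)
        x, y = np.asarray(x)[order], np.asarray(y)[order]
        edges = np.linspace(x[0], x[-1], n_buckets + 1)
        idx = np.searchsorted(x, edges)
        idx[-1] = x.size                # make sure the last samples are included
        keep = []
        for lo, hi in zip(idx[:-1], idx[1:]):
            if hi > lo:
                seg = y[lo:hi]
                keep.extend((lo + seg.argmin(), lo + seg.argmax()))
        keep = np.unique(keep)
        return x[keep], y[keep]

The second suggestion amounts to guarding the draw: wire the graph terminal only when new points were added or a plot option changed, instead of on every 200 ms timeout.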
From: StevenA on
To me it does not seem unreasonable to have 10k points on a graph (the data in the example is only sample data, not real data).  It seems that regardless of the number of points, if the data takes up a significant portion of the plot area you see the same results: LV uses a lot of CPU to plot it.  If the same number of points follow a more continuous line over a larger range, leaving a lot of empty space in the plot area, LV handles it without any trouble.
Is there any way to make this more efficient, other than modifying the data and the range that the data occupies?
From: crcragun on
I understand that 10K points is a lot; however, there are reasons for this.  I can and will look into reducing the number of plotted points.
I should also make one thing clear: any time valid, useful data is displayed on the XY Graph, the CPU usage is very low.  When useful data is displayed, the data points are spread out over the whole plot, which is constantly auto-scaling.  The problem only seems to occur when many points fill the plot, forming a single large blob of lines and points.
Thanks
From: johnsold on
Can you identify that "blob" by some simple measure, such as the standard deviation or max - min?  If so, you could suppress the plot and use a Boolean indicator to show "Invalid data" or something similar.  (A sketch of that check follows.)

Lynn
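
A sketch of that check (again Python rather than G; the thresholds are illustrative and would have to be tuned to real signal levels):

    import numpy as np

    def looks_like_noise(y, span_threshold=0.05, std_threshold=0.01):
        # A noise "blob" has a tiny overall spread, so flag data whose
        # max - min span or standard deviation falls below a threshold.
        y = np.asarray(y, dtype=float)
        return (y.max() - y.min()) < span_threshold or y.std() < std_threshold

    # In the main loop (hypothetical names):
    # if looks_like_noise(ys):
    #     invalid_data_indicator = True    # show "Invalid data", skip the plot
    # else:
    #     plot_xy(xs, ys)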