In my last post, I showed one of the big downsides of using preallocated clones—when you have a large hierarchy of preallocated reentrant VIs, the number of clones in memory can get big fast.
In this post, I want to clarify one thing about reentrant hierarchies, and then start talking about how to work our way out of these programming problems.
Let’s summarize where we are so far…
In part 1, we started talking about VIs that maintain state information from one call to the next. I want to clarify that there are two different kinds of such VIs. Sometimes you want global state: no matter where the VI is called in an application, its state is shared among all calls. This is what a functional global variable provides. FGVs are, in general, not reentrant, to protect the global data against parallel access. In part 1, I introduced a different flavor, where you don’t really want to share data across all calls; instead, you want each instance to maintain its own state from one call to the next.
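As a rough textual analogy (a hypothetical Python sketch, not how LabVIEW implements either pattern): global state resembles a function that accumulates into one shared module-level variable, while per-instance state resembles an object that each call site owns. All names here are invented for illustration.

```python
# Global state (like a non-reentrant FGV): every caller shares one value.
_total = 0

def add_to_shared_total(x):
    """All callers accumulate into the same module-level total."""
    global _total
    _total += x
    return _total

# Per-instance state (like a preallocated reentrant clone):
# each instance keeps its own running total between calls.
class Accumulator:
    def __init__(self):
        self.total = 0

    def add(self, x):
        self.total += x
        return self.total

a = Accumulator()
b = Accumulator()
a.add(5)
a.add(5)   # a.total is now 10
b.add(1)   # b.total is 1 -- unaffected by a
```

Two `Accumulator` instances never see each other's totals, which is exactly the per-instance behavior a preallocated clone gives each call site.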
In part 2, we learned why VIs like this have to use preallocated clones, not shared clones. But what are the downsides of always using preallocated clones? What does “reduces memory usage” mean?
In part 2 of this series, I showed how shared clones can’t be used for VIs that maintain internal state. I want to explain a few more details about how shared clones work by answering a few common questions…
- When are shared clones allocated? How many are allocated?
- When are shared clones deallocated?
- How do shared clones behave inside timed loops?
- How do shared clones work with the VI Server?
In the first part of this discussion, I ended with homework that asked questions about shared clones and whether they would work for the running average I was trying to compute per channel. I included a graph showing that the filters worked correctly when set to use shared clones.
As answered in the comments, using shared clones is wrong in this case. But why was the graph right?
I was having a discussion (sometimes called “arguing”) with another engineer at NI about how to maintain state in a LabVIEW application. We disagreed on the best way to maintain state in his application. Since at least two LabVIEW experts don’t agree on this topic, I think it will make a good topic for this blog. We also discussed how LabVIEW could make what he was trying to do easier.
In this multi-part post, I want to start by explaining what state information is and why you might need it. Then I want to explain different ways you might want to implement it, including a comparison to how other languages support state information.
In computer science, there’s a concept of a purely “functional” subroutine, in which the subroutine returns values which are only a function of the inputs to that subroutine. Such a function has no side effects on the state of the rest of the system.
Consider the “add” function, for example…
Given the same input values for x and y, the add will always produce the same result.
A subroutine that has one or more side effects can’t be “functional”. Let’s consider the case where we want to keep a running average of acquired data points. (You might do this if you want to smooth the data to remove noise.)
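In a textual language, the contrast might look like this hypothetical Python sketch: `add` is purely functional, while the running average carries state from one call to the next, so the same input can produce different outputs depending on call history.

```python
def add(x, y):
    """Purely functional: the result depends only on the inputs."""
    return x + y

class RunningAverage:
    """Stateful: each call's result depends on every earlier call."""
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, sample):
        """Fold one acquired data point into the running average."""
        self.count += 1
        self.total += sample
        return self.total / self.count

avg = RunningAverage()
avg.update(2.0)   # returns 2.0
avg.update(4.0)   # returns 3.0 -- depends on history, not just this input
```

Calling `add(2, 2)` a thousand times always yields 4; calling `update` with the same sample twice yields two different results, which is the side-effecting behavior at issue.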
Happy new year from the NI Field Architects! Just a quick post to let you know of an opportunity for you to make LabVIEW better.
As you might imagine with a product as powerful as LabVIEW, we are working on a variety of research projects with several leading universities around the world. There’s one in particular that I want to highlight today.
At Oregon State University, Dr. Chris Scaffidi and his students imagine a LabVIEW that guides you to write better code.
Disclaimer: This is not a veiled marketing post attempting to entice you to purchase the Desktop Execution Trace Toolkit (DETT). However, we and our customers have gotten a lot of value from this tool, and we think it’s worthwhile for anyone writing large LabVIEW applications. (Perhaps we should have listened to NI’s marketing presentations earlier.)
Are you currently using Desktop Execution Trace Toolkit (available for LabVIEW 8.6.1 or later)?
If you answered yes, skip this post, or rather fast forward to the comments and let us know if you have additional feature requests.
For those of us who are not using DETT, does anyone actually have a reasonable excuse? Cost? Time? One more tool to learn?
Costs Too Much?
NI wisely rolled DETT (along with VI Analyzer and the Unit Test Framework) into the LabVIEW Developer Suite in 2011. So if you own Developer Suite, there is no additional cost. Otherwise, it’s a $999 investment with an enormous payback. One customer was able to identify and fix 90% of their memory leaks in less than a day. Another identified the source of unreported errors in minutes. Those undetected issues would have been far more expensive to fix after deployment.
Yes, this is one more step in your process, and it does take time to run your code through various scenarios. However, spending a few hours or a day during development may save you that much or far more after deployment.
One More Tool to Learn?
Yes, but this tool is very simple and you should be up and running over your lunch break, even if you only have a spare 30 minutes. Follow these steps:
- Enable VI Server support by going to Tools>>Options>>VI Server and selecting TCP/IP under Protocols. (If you forget, DETT will remind you about this step.)
- Once you are in DETT, select New Trace.
- Specify the instance that you want to trace.
- Select Start.
- Run your code.
It’s really that simple.
But Wait… There’s More…
My inbox has been filling up with emails from those of you who have been eagerly awaiting this follow-up to the previous post. Okay, so the truth is that I went on a post-NIWeek 6-week vacation and am finally back at work. [Or perhaps Nancy was doing the other part of her job... helping customers be successful through face-to-face interaction.--ed.]
In the last post, we looked at the typical use cases and benefits for Packed Project Libraries (PPLs). However, as the Field Architects have been working with customers, we ran into a few issues.
We are faced with a design challenge when working on large projects with multiple Packed Project Libraries (PPLs), specifically layers of libraries. We need to understand how PPLs link to other PPLs when a hierarchy of PPLs exist. Let’s look at an example…
Our first guest blog post… we’re excited to have a short write-up from Aristos Queue (known in real life as Stephen Mercer, one of our Senior Software Engineers in LabVIEW R&D).
A rumor has reached my ears. A false rumor about LabVIEW, inlining subVIs and buffer allocations. A rumor in need of quashing.
Do you know the party game, “telephone”? It’s where a group gets in a circle, and someone whispers a statement to the person next to them, who in turn whispers it to the person next to them, until the message gets all the way around the circle. Invariably, the message gets corrupted along the way, and the statement at the end has lost all of its original meaning. I find it both funny and sad.
The same thing seems to have happened with some information on race conditions and functional global variables in LabVIEW, so I want to try to clear it up.
It started earlier this week when I found an NI-internal document that’s used for code reviews, which said…
Functional Global Variables
A way to avoid race conditions associated with local and global variables is to use functional global variables. Functional global variables are VIs that use loops with uninitialized shift registers to hold global data.
- The FGV eliminates race conditions
Whoa! Given no context, that last statement is just plain wrong.
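As a rough textual analogy (a Python sketch, not LabVIEW; the names `fgv` and `_value` are invented for illustration), here is why the blanket claim fails: a get/set FGV makes each individual call atomic, but a read-modify-write performed as two separate calls can still interleave with another caller.

```python
import threading

# Hypothetical analogue of a get/set FGV. The lock serializes each
# individual call, just as a non-reentrant FGV serializes each
# individual execution of the VI.
_lock = threading.Lock()
_value = 0

def fgv(action, data=None):
    global _value
    with _lock:
        if action == "set":
            _value = data
        return _value

# Each call is atomic, but a read-modify-write done as TWO calls is not.
# Simulate two callers whose get/set pairs interleave:
a = fgv("get")      # caller A reads 0
b = fgv("get")      # caller B reads 0 before A writes back
fgv("set", a + 1)   # A writes 1
fgv("set", b + 1)   # B also writes 1 -- A's increment is lost
print(fgv("get"))   # 1, not 2: the race condition is alive and well
```

The race disappears only when the entire read-modify-write happens inside one call (for example, an "increment" action on the FGV itself), which is precisely the context the quoted statement omits.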
An internal document is one thing, but I’ve also heard this echoed by at least one customer in the past month, and also in an informal conversation here at NI.
What’s going on??? I decided to find out. Keep reading to understand more about race conditions and how this game of “telephone” progressed to where we are today.