New output NDFs may also be generated by a process termed propagation, in which a new structure is created based on an existing template NDF. This is the most common method of creating an NDF to contain output from a processing algorithm, and is typically used whenever an application draws input from one or more NDFs and produces a new output NDF as a result.
As far as the user of such applications is concerned, the output dataset would typically be based upon one of the input datasets; i.e. it might inherit its shape, component types, storage forms and possibly values from an input dataset. Of course, the output data structure would also incorporate whatever changes the processing algorithm is designed to perform.
Seen from within such an application, the purpose of propagation is to create a ``skeleton'' output NDF based on an input structure, but containing ``blank'' (i.e. undefined) components into which calculated results can be inserted. Usually, there will also be ``non-blank'' (i.e. defined) components in the newly-created NDF, which derive their values directly, without change, from one of the input datasets. Such components are said to have been propagated.
The way in which components (and extensions) are selected for propagation is
central to the philosophy of the NDF and it is important to understand the
principles if you are to write applications which process NDFs consistently.
There are two sets of propagation rules which apply separately to
standard NDF components and to extensions.
The distinction between them is explained in the following two sections,
followed by a description of how these ideas are implemented in practice
using NDF_ routines.
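As a preview of the pattern described above, the following sketch shows how an application might create a ``skeleton'' output NDF from an input NDF and then fill in its undefined DATA component. This is illustrative only: the parameter names 'IN' and 'OUT' and the choice of components requested are assumptions, and the exact propagation defaults are those set out in the sections which follow.

```fortran
      SUBROUTINE SKETCH( STATUS )
*  Illustrative sketch only: obtain an input NDF, propagate it to a new
*  output NDF, and map the (initially undefined) output DATA component
*  so that calculated results can be inserted into it.
      INCLUDE 'SAE_PAR'          ! Standard SAE constants (SAI__OK etc.)
      INTEGER STATUS             ! Global status
      INTEGER INDF1              ! Input NDF identifier
      INTEGER INDF2              ! Output NDF identifier
      INTEGER PNTR( 1 )          ! Pointer to the mapped output array
      INTEGER EL                 ! Number of mapped elements

*  Check the inherited global status.
      IF ( STATUS .NE. SAI__OK ) RETURN

*  Begin an NDF context and obtain the input NDF via the 'IN' parameter
*  (the parameter names here are assumed for illustration).
      CALL NDF_BEGIN
      CALL NDF_ASSOC( 'IN', 'READ', INDF1, STATUS )

*  Create the output NDF by propagation from the input NDF.  Here the
*  QUALITY component is explicitly requested in addition to whatever is
*  propagated by default; array components such as DATA are left
*  undefined (see the propagation rules in the following sections).
      CALL NDF_PROP( INDF1, 'Quality', 'OUT', INDF2, STATUS )

*  Map the output DATA component for writing; it starts out undefined,
*  ready to receive the algorithm's results.
      CALL NDF_MAP( INDF2, 'Data', '_REAL', 'WRITE', PNTR, EL, STATUS )

*  ... the processing algorithm would fill the mapped array here ...

*  End the NDF context, releasing both NDF identifiers.
      CALL NDF_END( STATUS )
      END
```

The key point of the sketch is the division of labour: NDF_PROP creates the skeleton output structure with some components propagated and others left undefined, while NDF_MAP provides write access to an undefined component for the algorithm to complete.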