5.3 Advanced run options

Default values are provided for most of these options; they are adequate for simpler analyses and for small to moderately sized data sets. Hence these options are predominantly of interest to users who want to fit complicated models or analyse large data sets, and who thus need to tweak the options for ordering of equations, parameterisation, maximisation strategy and convergence criteria to get the best possible performance from WOMBAT.

In contrast to the basic options above, several of these options can be followed by one or more optional numerical arguments. If such arguments are given, they must follow immediately (no spaces), and multiple arguments must be separated by commas. Arguments must be given sequentially, i.e. to set argument k, all preceding arguments 1, ..., k−1 are required as well. If no arguments are given, the corresponding variables are set to their default values (see A.1).
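
As an illustration of these rules, such an option string could be parsed as sketched below. This is hypothetical helper code, not part of WOMBAT; `parse_run_option` is an invented name.

```python
def parse_run_option(text, name):
    # Illustrative parser for run options of the form --nameARG1,ARG2,...
    # where arguments follow the option name with no intervening space.
    prefix = "--" + name
    if not text.startswith(prefix):
        raise ValueError("not option --" + name)
    rest = text[len(prefix):]
    if not rest:
        return []           # no arguments given: defaults apply
    return rest.split(",")  # arguments are positional and comma-separated
```

For example, `parse_run_option("--aireml30,0.001", "aireml")` yields the two sequential arguments, while the bare option yields an empty list.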

5.3.1 Ordering strategies

The order in which the equations in the mixed model are processed can have a dramatic impact on the computational requirements of REML analyses. Hence, WOMBAT attempts to reorder the equations, selecting a default ordering strategy based on the number of equations; see A.1 for details. This can often be improved upon by selecting the strategy used explicitly.

Option --mmd specifies ordering using the multiple minimum degree procedure.

Option --amd selects an approximate minimum degree ordering [1].

Option --metis selects an ordering using a multilevel nested dissection procedure. Up to four optional arguments modifying the behaviour of this subroutine can be specified.

1. The number of graph separators to be considered, which can be between 0 and 20. As a rule, the higher this number, the better the quality of the ordering tends to be. However, the time required for ordering increases substantially with this number, and in some cases intermediate values (between 3 and 9) have been found to work best. The manual for MeTiS recommends values up to 5. However, Meyer [18] found values as high as 12 or 14 to be advantageous for large analyses. By default, WOMBAT uses a value of 5 for analyses with more than 50000 and up to 200000 equations, and a value of 10 for larger models.
2. A factor which tells MeTiS which rows in the matrix are to be treated as dense. The factor divided by 10 gives a multiple of the average number of elements per row; for instance, a value of 150 means that all rows with more than 15 times the average number of elements are treated as dense. The default used by WOMBAT is 0.
3. An option to select the edge matching strategy employed by MeTiS. Valid values are 1 for random, 2 for heavy and 3 for standard edge matching. WOMBAT uses a default of 1.
4. An option to select whether MeTiS uses one- or two-sided node refinement. WOMBAT uses a default of 1 which (to agree with the MeTiS convention) invokes two-sided refinement; this is deemed to produce somewhat better orderings. A value of 2 selects one-sided refinement, which can be faster.
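
The separator-count default stated above can be summarised as a simple rule. The sketch below is an illustrative reconstruction only; `default_nseps` is an invented name, not a WOMBAT routine, and the text gives no explicit value for 50000 equations or fewer.

```python
def default_nseps(n_equations):
    # Default number of graph separators WOMBAT passes to MeTiS,
    # per the rule stated in the text (illustrative only).
    if n_equations > 200000:
        return 10
    if n_equations > 50000:
        return 5
    return None  # not stated in the text for smaller models
```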

In addition, the option --metisall is available. This causes WOMBAT to cycle through numbers of graph separators from 5 to 16, considering density factors of 0 and 200, as well as random and standard edge matching (48 combinations in total). Warning: this can be quite time-consuming!
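
The 48 combinations quoted above follow directly from the grid searched. The snippet below is an illustrative reconstruction of that grid, not WOMBAT code.

```python
from itertools import product

# Grid cycled through by --metisall, as described in the text:
# 12 separator counts x 2 density factors x 2 edge matching strategies.
metisall_grid = list(product(range(5, 17),             # separators 5..16
                             (0, 200),                 # density factors
                             ("random", "standard")))  # edge matching
```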

To obtain the ordering, WOMBAT allocates a large array to store information on the non-zero elements in the mixed model matrix. The default size chosen is based on the number of equations and traits analysed, and is meant to be generous enough to be sufficient for a wide range of cases. In some instances, however, this can result in WOMBAT trying to allocate arrays which exceed the amount of RAM available, and thus failing to run, while in reality much less space is required for the analysis concerned. In other cases, the default guess is simply not sufficient. To solve these problems, the run time option --choozhz is supplied, which allows the user to set this number. If used as shown, it causes WOMBAT to pause, write out the default value, and read in the new value (input from the terminal!). Such interactive input may be inconvenient in some scenarios. Hence, alternatively, the new value may be specified as part of the run time option, immediately after (no spaces!) --choozhz. For example, --choozhz11000000 sets the value to 11 million. If --choozhz0 is given, WOMBAT will replace the 0 with the current, hard-coded maximum value for the array size (use run option --limit to find out what this is); this is convenient if something larger than the default is required and plenty of RAM is available.

5.3.2 REML algorithms

WOMBAT can be told to use a particular algorithm to locate the maximum of the likelihood function. In the following, let n (an integer) denote the maximum number of iterates to be carried out by an algorithm, and let C denote the convergence criterion to be used. Unless stated otherwise, C represents the threshold for changes in log likelihood between subsequent iterates, i.e. convergence is assumed to be reached when this change is less than C.

Option --aireml specifies a ‘straight’ AI algorithm. It can be followed by values for n and C. Valid forms are --aireml, --airemln and --airemln,C, but not --airemlC.



For example, --aireml30,0.001 limits the number of iterates carried out to 30, and stops estimation when the change in log likelihood is less than 10⁻³.

By default, the AI algorithm enforces an increase in log likelihood in each iterate. This can be switched off using the option --nostrict, which can improve convergence in some cases.

If the AI matrix is not ‘safely’ positive definite, a modification of it can result in better performance of the AI algorithm. Four procedures have been implemented in WOMBAT. These are selected by specifying --modaimn, with n = 1, 2, 3, 4 selecting modification by setting all eigenvalues to a minimum value (n = 1), by adding a diagonal matrix (n = 2, the default), or through a modified (n = 3) [29, 30] or partial (n = 4) [8] Cholesky decomposition of the AI matrix; see A.5 for details.

Option --emalg selects estimation using a standard EM algorithm. As for --aireml, this option can be followed by n or n,C to specify the maximum number of iterates allowed and the convergence criterion.

Option --pxem selects estimation using the PX-EM algorithm. As above, this can be followed by n or n,C.

Option --pxai specifies a hybrid algorithm consisting of a few initial rounds of PX-EM, followed by AI REML. This is the default for full rank estimation (unless -c is specified without any explicit definition of the algorithm to be used). This option can have up to three sequential arguments: m, denoting the number of PX-EM iterates, and n and C as above.



For example, --pxai6,50,0.001 defines 6 PX-EM iterates, followed by up to 50 AI iterates and a convergence criterion of 10⁻³ for the log likelihood.

Option --emai is like --pxai, except that the initial iterates are carried out using the standard EM algorithm. This is the default for reduced rank estimation. Optional arguments are as for --pxai.

Option --cycle, followed by an optional argument n, is used in conjunction with --pxai or --emai. If given, it will repeat the number of (PX-)EM and AI iterates specified by these options n times. If n is omitted, the number of cycles is set to 100. This option is useful for reduced rank analyses, where cycling between algorithms appears to be beneficial.

Finally, WOMBAT incorporates two choices of derivative-free algorithm. While little used, these can be useful to check for convergence in ‘tough’ problems. Option --simplex selects the simplex or polytope algorithm due to Nelder and Mead [25], as previously implemented in DfReml. This option can have three arguments: n and C as above (but with the convergence criterion C describing the maximum variance among the log likelihood values in the polytope allowed at convergence), and S, the initial step size. Option --powell invokes maximisation using Powell’s method of conjugate directions [27], again ported from DfReml and with optional arguments n, C (as for --aireml) and S.

Not really a REML algorithm: option --like1 selects a single likelihood evaluation for the current starting values or, if combined with -c, for the currently best point as contained in BestPoint. When combined with -v, the individual components of logℒ are printed to the standard output (screen).

Similarly, option --valid does nothing more than calculate likelihood values. It is provided to aid in the use of cross-validation to estimate the tuning factor for penalised estimation. This involves obtaining estimates for a range of tuning factors for a ‘training’ data set. In doing so, WOMBAT creates a file PenBestPoints.dat (see 7.2.11). The corresponding likelihood values in the ‘validation’ data set are then easily obtained with the --valid run time option: for each set of estimates in PenBestPoints.dat, WOMBAT calculates the likelihood and writes it together with the tuning factor to a file ValidateLogLike.dat.

5.3.3 Parameterisation

By default, WOMBAT reparameterises to the elements of the Cholesky factor of the covariance matrices to be estimated (except for full rank analyses using the PX-EM or EM algorithm, which estimate the covariance components directly).

As a rule, the Cholesky factorisation is carried out pivoting on the largest diagonal element. This can be switched off with the --noreord option.

Reparameterisation to the elements of the Cholesky factors removes constraints on the parameter space. Strictly speaking, however, it leaves the constraint that the diagonal elements should be non-negative. This can be removed by transforming the diagonal elements to logarithmic scale. This is selected using the option --logdia, which applies the transformation to all covariance matrices. Conversely, the option --nologd prevents WOMBAT from carrying out this transformation. If neither option is set (the default) and WOMBAT encounters small diagonal elements in any covariance matrices to be estimated during iterations, it will switch to taking logarithmic values of the diagonal elements for the respective matrices only.
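
The effect of the log transformation can be sketched for a 2×2 covariance matrix: with the Cholesky diagonals on the logarithmic scale, any real-valued parameters map to a valid (positive definite) covariance matrix. This is illustrative code only; the function name is invented and not part of WOMBAT.

```python
import math

def covariance_from_params(log_d1, l21, log_d2):
    # Cholesky factor L = [[d1, 0], [l21, d2]] with log-scale diagonals:
    # exponentiation guarantees d1, d2 > 0 for any real log_d1, log_d2,
    # so V = L L' is positive definite without explicit constraints.
    d1, d2 = math.exp(log_d1), math.exp(log_d2)
    return [[d1 * d1,   d1 * l21],
            [d1 * l21,  l21 * l21 + d2 * d2]]
```

Even extreme parameter values (e.g. a very negative log-diagonal, corresponding to a near-zero variance) still yield a symmetric matrix with positive determinant.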

5.3.4 Matrix storage mode

By default, WOMBAT assumes that the mixed model equations are sparse and carries out most computations using sparse matrix storage. In certain cases, this may be inappropriate and lead to substantial overheads – a typical example is the case where a random effect has a user-defined covariance matrix which is dense (e.g. a genomic relationship matrix). For this case, the option --dense is provided: specifying it causes one triangle of the coefficient matrix to be stored assuming all elements are non-zero. Similarly, all calculations are carried out without checking for zero elements; this is done loading subroutines from the BLAS and LAPACK libraries, using packed storage of the coefficient matrix. For n equations, the latter requires n(n+1)/2 elements to be stored. With integer*4 addressing, this sets an upper limit of n = 65,535. For larger problems, the option --dense2 is available, which uses storage in two-dimensional arrays instead. Note that use of --dense2 may require substantial amounts of RAM and that it is not yet implemented for all algorithms (e.g. not for PX(EM)).
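
The limit of n = 65,535 follows from requiring n(n+1)/2 to fit a signed 32-bit (integer*4) index. A quick check of this arithmetic (a hypothetical helper, not WOMBAT code):

```python
def packed_storage_limit(max_index=2**31 - 1):
    # Largest n such that n*(n+1)//2 elements of a packed triangular
    # matrix can still be addressed with a signed 32-bit integer.
    n = 0
    while (n + 1) * (n + 2) // 2 <= max_index:
        n += 1
    return n
```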

5.3.5 Sparse matrix factorisation, auto-differentiation and inversion

By default, WOMBAT now employs a ‘super-nodal’ approach in its sparse matrix manipulations. This involves gathering dense sub-blocks and processing these using BLAS and LAPACK library routines. While this readily allows these steps to utilise multiple cores, the library routines require matrices to be strictly positive definite. Occasionally, they may fail where the procedure used previously would have circumvented the problem. The run time option --old is available to switch back to the latter.

5.3.6 Other

Pedigree pruning

By default, WOMBAT ‘prunes’ the pedigree information supplied, i.e. eliminates any uninformative individuals (those without records and with links to only one other individual), before calculating the inverse of the numerator relationship matrix. Exceptions are prediction runs (option --blup) and models which fit correlated additive genetic effects. In some instances, however, it may be desirable not to ‘prune’ pedigrees. Pruning can be switched off through the run time option --noprune.

Overriding defaults for pedigree reduction

For sire model analyses, the default is not to carry out any pruning or pedigree reduction. The option --redped is provided to switch on pedigree reduction for sire model analyses, i.e. elimination of all individuals without a link to an individual in the data from the pedigree file.

Occasionally – for instance, to compute breeding values for progeny of animals in the data – it is desirable not to reduce the pedigree prior to analysis. The option --norped eliminates this step and switches off ‘pruning’ at the same time. Option --redped is invoked automatically for runs with options --solvit or --s1step.

No programmed pause please

Generally, WOMBAT does not require any interactive input. An exception is a ‘pause’ after a warning message on a constellation of options which is new or can be problematic. These programmed pauses can be switched off using the run time option --batch.

Centering data

By default, WOMBAT ‘centers’ the data, i.e. subtracts the trait-specific mean from each observation. For some (rare) tasks this is undesirable; hence the option --nocenter is provided, which eliminates this step.

Choosing the algorithm to calculate inbreeding

By default, WOMBAT uses the algorithm of Tier [31] to compute inbreeding coefficients before setting up the inverse of the numerator relationship matrix between individuals. This algorithm is implemented with a limit of 23 generations. Occasionally, this results in insufficient space, both for pedigrees with more generations and for very large pedigrees. The option --maxgen, followed immediately by two digits giving the new value (e.g. --maxgen25), allows this limit to be increased, up to a maximum value of 30. Note that this may result in WOMBAT trying to allocate some very large arrays, and may slow calculations.

Alternative algorithms, due to Quaas [28] and Meuwissen and Luo [15], which do not have this limitation and use very little memory but are slower, can be selected using the run time options --quaas and --meuw, respectively.

Using only the first n observations

Sometimes, it is useful to carry out a preliminary analysis considering only a small(ish) subset of the data. For instance, we may want to test that we have specified the model correctly, or we may want to estimate suitable starting values. A quick way to do this is available through the option --nonlyN, where N is an integer specifying the number of records to be used (no space between --nonly and N). For example, --nonly1000 selects an analysis utilising the first 1000 records only.

Omitting the preliminary LSQ analysis

One of the preliminary steps carried out by WOMBAT is a simple least-squares analysis of the fixed part of the model; this aims at identifying any dependencies amongst fixed effects levels which might not have been specified correctly. However, if there are many cross-classified effects or covariables, this can be very slow, especially for large data sets (the code for this step is very old, probably the weakest part of WOMBAT, and not as highly optimised as the main part of the calculations). Option --nolsq is available to skip this step and is recommended for large analyses with many cross-classified effects – but it relies on all rank deficiencies in the fixed-effects part of the coefficient matrix having been identified: the run is likely to fail with an appropriate error message otherwise.

A scenario where this option is useful is when we fit different random effects models to a given data set while the fixed part of the model remains constant – typically, it is then sufficient to carry out the least-squares step for the first run only. It is also beneficial for simulation studies where the data are re-sampled, so that data and pedigree structure are the same for all replicates.

Omit writing out solutions

Run option --nosolut will skip writing out the files with fixed and random effects solutions at the end of an estimation run. This is useful for simulation studies where these are not of interest.