There are default values for most of these options, which are adequate for simpler analyses and for small to moderately sized data sets. Hence these options are predominantly of interest to users who want to fit complicated models or analyse large data sets, and who thus need to tweak some of the options for ordering of equations, parameterisation, maximisation strategy and convergence criteria to get the best possible performance from WOMBAT.
In contrast to the basic options above, several of these options can be followed by one or more optional numerical arguments. If such arguments are given, they must follow immediately (no spaces), and multiple arguments need to be separated by comma(s). Arguments must be given sequentially, i.e. if we want to set the n-th argument, all preceding arguments (1 to n − 1) are required as well. If no arguments are given, the corresponding variables are set to their default values (see A.1).
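The sequential-argument rule can be illustrated with a small Python sketch. Both `parse_option` and the default values used here are hypothetical, purely to demonstrate the rule; WOMBAT parses its options internally.

```python
def parse_option(token, name, defaults):
    """Split a run time option of the form '--name3,50' into its
    numeric arguments, falling back to defaults for any not given."""
    assert token.startswith("--" + name)
    tail = token[len("--" + name):]
    if not tail:                       # no arguments: use all defaults
        return list(defaults)
    given = [float(x) for x in tail.split(",")]
    # arguments are strictly sequential: the first value fills the
    # first slot, the second the second, and so on -- there is no way
    # to set a later argument while skipping an earlier one
    return given + list(defaults[len(given):])

print(parse_option("--pxai3", "pxai", (5, 100, 1e-4)))   # -> [3.0, 100, 0.0001]
```

Note that a form like `--pxai,,1e-6` (skipping earlier arguments) cannot be expressed: to change the third argument, the first two must be given as well.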
The order in which the equations in the mixed model are processed can have a dramatic impact on the computational requirements of REML analyses. Hence, WOMBAT attempts to reorder the equations, selecting a default ordering strategy based on the number of equations; see A.1 for details. This can often be improved upon by selecting the strategy used explicitly.
Option --mmd specifies ordering using the multiple minimum degree procedure.
Option --amd selects an approximate minimum degree ordering.
Option --metis selects an ordering using a multilevel nested dissection procedure. Up to four optional arguments modifying the behaviour of this subroutine can be specified.
In addition, the option --metisall is available. This causes WOMBAT to cycle through a range of values for the number of graph separators and the density factor, as well as random and standard edge matching (48 combinations in total). Warning: this can be quite time-consuming!
To obtain the ordering, WOMBAT allocates a large array to store information on the non-zero elements in the mixed model matrix. The default size chosen is based on the number of equations and traits analysed, and is meant to be generous enough to be sufficient for a wide range of cases. In some instances, however, this can result in WOMBAT trying to allocate arrays which exceed the amount of RAM available, and thus failing to run, while in reality much less space is required for the analysis concerned. In other cases, the default guess is simply not sufficient. To solve these problems, the run time option --choozhz is supplied, which allows the user to set this number. If used as shown, it causes WOMBAT to pause, write out the default value, and read in (input from the terminal!) the new value. Such interactive input may be inconvenient in some scenarios. Hence, alternatively, the new value may be specified as part of the run time option, immediately after (no spaces!) --choozhz. For example, --choozhz11000000 sets the value to 11 million. If --choozhz0 is given, WOMBAT will replace the 0 with the current, hard-coded maximum value for the array size (use run option --limit to find out what it is); this is convenient if something larger than the default is required and plenty of RAM is available.
WOMBAT can be told to use a particular algorithm to locate the maximum of the likelihood function. In the following, let n (an integer) denote the maximum number of iterates to be carried out by an algorithm, and let c denote the convergence criterion to be used. Unless stated otherwise, c represents the threshold for the change in log likelihood between subsequent iterates, i.e. convergence is assumed to be reached if this change is less than c.
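The roles of n and c can be sketched as a generic maximisation loop (a toy illustration only; the `update` function here stands in for a full REML iterate):

```python
def iterate(update, theta, max_iter, conv_crit):
    """Generic maximisation loop: stop once the increase in log
    likelihood between subsequent iterates is less than conv_crit,
    or after max_iter rounds."""
    logl_old = float("-inf")
    for it in range(max_iter):
        theta, logl = update(theta)
        if logl - logl_old < conv_crit:
            return theta, logl, it + 1
        logl_old = logl
    return theta, logl, max_iter

# toy 'log likelihood' -(2 - theta)^2 with its maximum at theta = 2;
# each 'iterate' moves theta half-way towards the optimum
def step(theta):
    theta = theta + 0.5 * (2.0 - theta)
    return theta, -(2.0 - theta) ** 2

theta, logl, used = iterate(step, 0.0, 100, 1e-10)
```

Here max_iter plays the role of n and conv_crit that of c: whichever limit is hit first ends the run.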
Option --aireml specifies a ‘straight’ AI algorithm. It can be followed by values for n and c. Valid forms are --aireml, --airemln and --airemln,c, but not --airemlc (arguments must be given sequentially).
By default, the AI algorithm enforces an increase in log likelihood in each iterate. This can be switched off using the option --nostrict, which can improve convergence in some cases.
In this case, a modification of the AI matrix can result in better performance of the AI algorithm. Four procedures have been implemented in WOMBAT. These are selected by specifying --modaimn, with n = 1 selecting modification by setting all eigenvalues to a minimum value, n = 2 by adding a diagonal matrix (the default), and n = 3 or n = 4 selecting a modified [29, 30] or partial Cholesky decomposition of the AI matrix, respectively; see A.5 for details.
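The principle behind the default modification (adding a diagonal matrix) can be sketched in plain Python. This is an illustration of the idea only, not WOMBAT's actual procedure (for which see A.5):

```python
def cholesky(a):
    """Plain Cholesky factorisation of a symmetric matrix (list of
    lists); raises ValueError if it is not positive definite."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = a[i][j] - sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                if s <= 0.0:
                    raise ValueError("not positive definite")
                l[i][i] = s ** 0.5
            else:
                l[i][j] = s / l[j][j]
    return l

def modify_ai(a, tau=1e-6):
    """Add tau * I to the matrix, doubling tau until the modified
    matrix is positive definite (i.e. its factorisation succeeds)."""
    n = len(a)
    while True:
        trial = [[a[i][j] + (tau if i == j else 0.0) for j in range(n)]
                 for i in range(n)]
        try:
            cholesky(trial)
            return trial
        except ValueError:
            tau *= 2.0

# an indefinite 'AI matrix' (eigenvalues 3 and -1)
ai = [[1.0, 2.0], [2.0, 1.0]]
ai_mod = modify_ai(ai)
```

The modified matrix can then be used safely in a Newton-type update, at the cost of a somewhat smaller step.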
Option --emalg selects estimation using a standard EM algorithm. As for --aireml, this option can be followed by n or n,c to specify the maximum number of iterates allowed and the convergence criterion.
Option --pxem selects estimation using the PX-EM algorithm. As above, this can be followed by n or n,c.
Option --pxai specifies a hybrid algorithm consisting of a few initial rounds of PX-EM, followed by AI REML. This is the default for full rank estimation (unless -c is specified without any explicit definition of the algorithm to be used). This option can have up to three sequential arguments: m, the number of PX-EM iterates, followed by n and c as above.
Option --emai is like --pxai except that the initial iterates are carried out using the standard EM algorithm. This is the default for reduced rank estimation. Optional arguments are as for --pxai.
Option --cycle, followed by an optional argument m, is used in conjunction with --pxai or --emai. If given, it will repeat the sequence of (PX-)EM and AI iterates specified by these options m times. If m is omitted, the number of cycles is set to its default value. This option is useful for reduced rank analyses, where cycling between the algorithms appears to be beneficial.
Finally, WOMBAT incorporates two choices of derivative-free algorithm. While little used, these can be useful to check for convergence in ‘tough’ problems. Option --simplex selects the simplex or polytope algorithm due to Nelder and Mead, as previously implemented in DfReml. This option can have three arguments: n and c as above (but with the convergence criterion c describing the maximum variance among the log likelihood values in the polytope allowed at convergence), and s, the initial step size. Option --powell invokes maximisation using Powell’s method of conjugate directions, again ported from DfReml and with optional parameters n, c (as for --aireml) and s.
Not really a REML algorithm: option --like1 selects a single likelihood evaluation for the current starting values or, if combined with -c, for the currently best point as contained in BestPoint. When combined with -v, the individual components of the log likelihood are printed to the standard output (screen).
Similarly, option --valid does nothing more than calculate likelihood values. It is provided to aid in the use of cross-validation to estimate the tuning factor for penalised estimation. This involves obtaining estimates for a range of tuning factors for a ‘training’ data set. In doing so, WOMBAT creates a file PenBestPoints.dat (see 7.2.11). The corresponding likelihood values in the ‘validation’ data set are then easily obtained with the --valid run time option: for each set of estimates in PenBestPoints.dat, WOMBAT calculates the likelihood and writes it together with the tuning factor to a file ValidateLogLike.dat.
By default, WOMBAT reparameterises the covariance matrices to be estimated to the elements of their Cholesky factors (except for full rank analyses using the PX-EM or EM algorithm, which estimate the covariance components directly).
As a rule, the Cholesky factorisation is carried out pivoting on the largest diagonal element. This can be switched off with the --noreord option.
Reparameterisation to the elements of the Cholesky factors removes constraints on the parameter space. Strictly speaking, however, it leaves the constraint that the diagonal elements should be non-negative. This can be removed by transforming the diagonal elements to logarithmic scale. This is selected using the option --logdia, which applies the transformation to all covariance matrices. Conversely, the option --nologd prevents WOMBAT from carrying out this transformation. If neither option is set (the default) and WOMBAT encounters small diagonal elements in any covariance matrices to be estimated during iterations, it will switch to taking logarithmic values of the diagonal elements for the respective matrices only.
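The effect of the transformation can be sketched for a 2 × 2 case: with the diagonal of the Cholesky factor on the logarithmic scale, any vector of unconstrained real parameters maps to a valid covariance matrix (an illustration only, not WOMBAT's internal code):

```python
import math

def cov_from_params(p):
    """Build a 2 x 2 covariance matrix Sigma = L L' from three
    unconstrained parameters: log(l11), l21 and log(l22)."""
    l11 = math.exp(p[0])      # diagonal elements forced positive
    l21 = p[1]                # off-diagonal element unconstrained
    l22 = math.exp(p[2])
    return [[l11 * l11, l11 * l21],
            [l11 * l21, l21 * l21 + l22 * l22]]

# even 'extreme' parameter values map to a valid covariance matrix
sigma = cov_from_params([-3.0, -5.0, 0.5])
```

The maximisation can thus proceed over all of parameter space without any explicit constraint handling; the determinant of the result is exp(2 p[0] + 2 p[2]), which is always strictly positive.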
By default, WOMBAT assumes that the mixed model equations are sparse and carries out most computations using sparse matrix storage. In certain cases, this may be inappropriate and lead to substantial overheads – a typical example is the case where a random effect has a user-defined covariance matrix which is dense (e.g. a genomic relationship matrix). For this case, the option --dense is provided: specifying it causes one triangle of the coefficient matrix to be stored assuming all elements are non-zero. Similarly, all calculations are carried out without checking for zero elements, by loading subroutines from the BLAS and LAPACK libraries and using packed storage of the coefficient matrix. For N equations, the latter requires N(N + 1)/2 elements to be stored. With integer*4 addressing, this sets an upper limit of N = 65,535. For larger problems, the option --dense2 is available, which uses storage in two-dimensional arrays instead. Note that use of --dense2 may require substantial amounts of RAM and that it is not yet implemented for all algorithms (e.g. not for PX(EM)).
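The limit on N follows from simple arithmetic: packed storage of one triangle of an N × N matrix needs N(N + 1)/2 elements, and a signed 32-bit (integer*4) index cannot address beyond 2³¹ − 1:

```python
# largest N whose packed lower triangle is still addressable with a
# signed 32-bit (integer*4) index
INT4_MAX = 2**31 - 1

def packed_size(n):
    """Elements in one triangle of a symmetric n x n matrix."""
    return n * (n + 1) // 2

n = 1
while packed_size(n + 1) <= INT4_MAX:
    n += 1
print(n, packed_size(n))   # -> 65535 2147450880
```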
By default, WOMBAT now employs a ‘super-nodal’ approach in its sparse matrix manipulations. This involves gathering dense sub-blocks and processing these using BLAS and LAPACK library routines. While this readily allows these steps to utilise multiple cores, the library routines require matrices to be strictly positive definite, and may occasionally fail where the previous procedure would have circumvented the problem. The run time option --old is available to switch back to the latter.
By default, WOMBAT ‘prunes’ the pedigree information supplied, i.e. eliminates any uninformative individuals (without records and links to only one other individual), before calculating the inverse of the numerator relationship matrix. Exceptions are prediction runs (option --blup) and models which fit correlated additive genetic effects. In some instances, however, it may be desirable not to ‘prune’ pedigrees. Pruning can be switched off through the run time option --noprune.
For sire model analyses, the default is not to carry out any pruning or pedigree reduction. The option --redped is provided to switch on pedigree reduction for sire model analyses, i.e. elimination of all individuals without a link to an individual in the data from the pedigree file.
Occasionally – for instance to compute breeding values for progeny of animals in the data – it is desirable not to reduce the pedigree prior to analysis. Providing the option --norped eliminates this step and switches off ‘pruning’ at the same time. Option --redped is invoked automatically for runs with options --solvit or --s1step.
Generally, WOMBAT does not require any interactive input. An exception is a ‘pause’ after a warning message for a combination of options which is new or can be problematic. These programmed pauses can be switched off using the run time option --batch.
By default, WOMBAT ‘centers’ data, i.e. subtracts the trait-specific mean from each observation. For some (rare) tasks this is undesirable. Hence, the option --nocenter is provided which eliminates this step.
By default, WOMBAT uses the algorithm of Tier to compute inbreeding coefficients before setting up the inverse of the numerator relationship matrix between individuals. This algorithm is implemented with a limit of 23 generations. Occasionally, this results in insufficient space, both for pedigrees with more generations and for very large pedigrees. The option --maxgen, followed immediately by two digits giving the new value (e.g. --maxgen25), allows this limit to be increased, up to a maximum of 30. Note that this may cause WOMBAT to try to allocate some very large arrays, and may slow calculations.
Alternative algorithms, due to Quaas and to Meuwissen and Luo, which do not have this limitation but are slower while using very little memory, can be selected using the run time options --quaas and --meuw, respectively.
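None of these algorithms are reproduced here, but the quantity they all compute can be defined recursively: an individual's inbreeding coefficient is half the additive relationship between its parents. A naive Python sketch of that definition (far slower than any of the methods above, and for illustration only):

```python
def relationship(i, j, ped):
    """Additive relationship a(i, j), given a pedigree dict mapping
    individual -> (sire, dam), with 0 denoting an unknown parent.
    Assumes parents have smaller identifiers than their progeny."""
    if i == 0 or j == 0:
        return 0.0
    if i == j:
        s, d = ped[i]
        return 1.0 + 0.5 * relationship(s, d, ped)
    if j > i:                 # recurse on the younger individual
        i, j = j, i
    s, d = ped[i]
    return 0.5 * (relationship(s, j, ped) + relationship(d, j, ped))

def inbreeding(i, ped):
    """An individual's inbreeding coefficient is half the additive
    relationship between its parents."""
    s, d = ped[i]
    return 0.5 * relationship(s, d, ped)

# sire-daughter mating: 3 = 1 x 2, then 4 = 1 x 3
ped = {1: (0, 0), 2: (0, 0), 3: (1, 2), 4: (1, 3)}
print(inbreeding(4, ped))   # -> 0.25
```

The recursion branches twice per ancestor, which is exactly why dedicated algorithms (and their memory/speed trade-offs) matter for large pedigrees.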
Sometimes it is useful to carry out a preliminary analysis considering only a small(ish) subset of the data, for instance to check that the model has been specified correctly, or to estimate suitable starting values. A quick way to do this is available through the option --nonlyn, where n is an integer number specifying the number of records to be used (no space between --nonly and n). For example, --nonly1000 selects an analysis utilising the first 1000 records only.
One of the preliminary steps carried out by WOMBAT is a simple least-squares analysis of the fixed part of the model, which aims at identifying any dependencies amongst fixed effects levels that might not have been specified correctly. However, if there are many cross-classified effects or covariables, this can be very slow, especially for large data sets (the code for this step is very old, probably the weakest part of WOMBAT, and not as highly optimised as the main part of the calculations). Option --nolsq is available to skip this step and is recommended for large analyses with many cross-classified effects. Note, however, that it relies on all rank deficiencies in the fixed effects part of the coefficient matrix having been identified: otherwise the run is likely to fail with an appropriate error message.
A scenario where this option is useful is when we fit different random effects to a given data set while the fixed part of the model remains constant – typically, it is then sufficient to carry out the least-squares step for the first run only. It is similarly beneficial for simulation studies where the data are re-sampled, so that data and pedigree structure are the same for all replicates.
Run option --nosolut will skip the writing out of files with fixed and random effects solutions at the end of an estimation run. This is useful for simulation studies where these are not of interest.