TSM Blog

No change in functionality. Some minor tweaks to the code were required because Ox 9 turns out not to behave exactly like previous versions. Crashes could have occurred in rare circumstances.

The program has been re-compiled for Ox 9 Console, which is currently supplied for 64-bit systems only. For this reason, the same update has been posted for Ox 8 (32 or 64 bit). There are a couple of bugfixes; in particular, the dynamic formula generation module in Data Transformation and Editing, which was broken by the last fix, has been repaired.

Small bugfix in Calculator - additional trap for syntax errors.

Tiny change in matrix calculator - cosmetic improvement only

Version 4.51 has been recompiled to run with Ox 8. The 64-bit implementation now runs under Ox Console. There are no changes to the TSM code.

Version 4.50 has one minor new feature.  The GARCH-M model now allows the conditional variance or standard deviation to be included as a variable in a user-coded  function. A new reserved name H# is defined to represent this variable.

This release adds a new feature to the calculator. Two or more functions of one variable can now be plotted on one graph. The method is to type each of the formulae into the same text field, separated by semi-colons (;).
A new feature is also added to the probability and critical value look-up dialogs. These now give the option of reporting either the upper tail probabilities or the lower tail probabilities, with the former being the default. Entering P to obtain a critical value with the lower-tail option is equivalent to entering 1 - P with the upper tail option. Likewise, entering a critical value in the lower-tail mode computes the probability mass lying below the value, instead of above it. The areas in the PDF plots are shaded appropriately. 
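For anyone wanting to check the correspondence between the two modes, here it is spelled out for the standard normal case, in a Python sketch using the stdlib's statistics module (TSM's look-ups of course cover its full range of distributions):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal

P = 0.05
c_upper = z.inv_cdf(1 - P)   # critical value, upper-tail mode
c_lower = z.inv_cdf(P)       # critical value, lower-tail mode

# Entering P with the lower-tail option matches 1 - P with the upper-tail option
assert abs(c_lower - z.inv_cdf(1 - (1 - P))) < 1e-9

# A critical value entered in lower-tail mode reports the mass below it
mass_below = z.cdf(c_upper)      # approximately 0.95
mass_above = 1 - z.cdf(c_upper)  # approximately 0.05
```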
Finally, there is a bugfix in the two-dimensional plotting feature in the calculator. 

Two bugfixes. Values dialogs were not displaying correctly when the larger icon and font sizes were selected, and dates in plots could be selected incorrectly.

Several enhancements to the Calculator. Three new functions have been added: the factorial fct(x) = x!, the error function erf(x), and the log-gamma function lgm(x). Note that the latter is recommended for (e.g.) computing binomial coefficients with large arguments, with less rounding error than using fct().
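The point about lgm() can be seen in any language with a log-gamma function; a Python sketch (the name binom_lgamma is mine, not TSM's):

```python
from math import lgamma, exp, factorial

def binom_lgamma(n, k):
    # C(n, k) computed as exp(lgamma(n+1) - lgamma(k+1) - lgamma(n-k+1))
    return exp(lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1))

# For small arguments the two routes agree after rounding
assert round(binom_lgamma(10, 3)) == factorial(10) // (factorial(3) * factorial(7))

# For large arguments the factorials overflow double precision, but the
# log-gamma route still delivers a finite result (roughly 2.7e299 here)
big = binom_lgamma(1000, 500)
assert 1e298 < big < 1e300
```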
In the plotting feature for functions with one or two unspecified arguments, the plot data are now used automatically to compute the integral of the function over the selected range, using the composite trapezoidal rule. The number of plot points is now conveniently entered in the Calculator dialog. Choosing a large number of points increases the accuracy of the integral, but note that the plot is not drawn if more than 100 points are specified.
Alternatively, if the plotting bounds and the number of plot points are chosen to make the evaluation nodes integers, the simple sum of the plotted points is reported. For example, set N plot points and bounds 0 and N-1. This configuration can be used to evaluate a power series.      
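For reference, the composite trapezoidal rule over the plot points is easily reproduced; a minimal Python sketch of both uses described above:

```python
def trapezoid(f, a, b, n):
    # composite trapezoidal rule on n equally spaced plot points
    h = (b - a) / (n - 1)
    total = sum(f(a + i * h) for i in range(n))
    return h * (total - 0.5 * (f(a) + f(b)))

# integral of x^2 over [0, 1]: exact value 1/3, and more plot points
# give a more accurate approximation
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 200)
assert abs(approx - 1.0 / 3.0) < 1e-4

# with integer nodes (N points on [0, N-1]) the simple sum of the plotted
# points evaluates a power series, e.g. the sum of 0.5**k for k = 0,...,19
series = sum(0.5 ** k for k in range(20))
assert abs(series - 2.0) < 1e-5
```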

The Edit / Rename function in Setup / Data Transformation and Editing has a new feature. Select a variable and enter one of the special codes in the New Name field, and the operation performed is either to strip out, or to replace with underscore characters, any characters in the existing variable name that are illegal in a formula. This applies especially to spaces (see blog entry for 06/09/16). See the User's Manual, Section 1.5 for further details. The codes are, respectively, "#delsp" to delete and "#repsp" to replace, omitting quotes, in either upper case or lower case. (Too bad, if you want to use these as names for your variables!) If there are two or more variables needing this treatment, highlight them all and keep pressing OK. There is no need to input the code more than once. In addition, the code "#delspfr" truncates the name at the first space encountered.

Small interface change. Automatic Regressor Selection now has its own selection criterion choice. This is no longer shared with Tests and Diagnostics. 

Belated fixes to the graphing routine to accommodate changes in Gnuplot 5. The colour indexes have been changed, for some reason. This work has led to a closer look at the graphics options offered by TSM, some of which no longer worked. Bitmap dimension selection for PNG files is now removed. The simplest way to create different bitmap resolutions is to resize the screen image before copying to the clipboard and pasting into your favourite graphics editor (the free IrfanView is recommended). The GIF bitmap option could not be made to work in Gnuplot 5 and in any case served no purpose, since format conversion is easily done with IrfanView. In Windows the recommended file format is EMF, which produces an exact replication of the screen image with the same colours. PNG and EPS both have restricted colour palettes, making for some differences in appearance. EMF files can also be created directly from the screen image, as an alternative to the clipboard.

Minor enhancements and fixes in matrix calculator.

Tiny bugfix. Certain test outputs, such as Nyblom-Hansen individual tests, were being mislabelled when parameters were fixed.
(For about 5 minutes, at around 16.30 GMT today, the posted version had a flaw which corrupts settings.tsm. Nothing for it but to delete it and reinstate your backup, after re-installing the software! Many apologies to anyone who has had the misfortune to download this version!)

Nothing new in this post, but a good deal of code fixing: an improved Fourier bootstrap algorithm (currently undocumented, but a working paper is forthcoming), some further polishing of the unit root tests, and a bug fixed in the matrix calculator. Now, retaining newly computed matrices is optional - to discard, choose Cancel in the naming dialog. The targets of this feature are the scalar outputs (determinant, trace, norm) which are printed to the results window. These often won't need to be saved as 1x1 matrices, which is the default action.

Automatically defined variable names have up to now made extensive use of the underscore character "_". This symbolically represented a space, but it seemed most natural to have names appearing as joined-up strings, in the manner of program variable names. However, this usage conflicts with recent versions of Gnuplot, which uses the LaTeX convention that the underscore indicates a subscript for the character following. In the current release (rather belatedly) the automatic underscores have been replaced by spaces. This makes for nicer plot titles, but also releases the underscore for applications where the user would like a subscript to appear in their plot title. Note that the hat character "^" is likewise interpreted by Gnuplot as indicating a superscript. This is retained in the naming conventions, because in the majority of cases the superscript interpretation is appropriate.
Today's release also adds the ability to retrieve sequences of recursive or rolling estimates and add them to the data set directly. It has previously been possible to save them in a spreadsheet, but this option may be more convenient for some purposes, and coordinates the dates of the series. The series are named as in the spreadsheet format but with the run ID number appended, separated by a space, not an underscore!       

I've implemented bootstrap versions of the popular tests for a unit root in the "Summary Statistics" dialog. The feature is not idiot-proof, and in particular the user must decide how to treat autocorrelation in the differences. The sieve-autoregressive approach appears effective, but anyone using this feature for serious work is advised to check out their technique with some Monte Carlo experiments. The documentation explains how to do this.      

I've been able to borrow a Macintosh computer for long enough to test out the Mac implementation of the Wine package. As it turns out, this works pretty well. Starting the program is a wee bit klutzy, requiring some typing on the Terminal command line as opposed to just clicking an icon, but once TSM is running it looks and feels exactly as it should. The only compromise is the integration with Gnuplot graphics. These don't open automatically. To display them the user must interact with Gnuplot by, for example, double-clicking on the .plt file in Wine Explorer.  
Apart from this, I have found no major glitches in the Mac implementation. This is not to say there aren't any, and feedback from users will be gratefully received. If anyone knows how to write a script to reduce the start-up sequence to a single icon click, that would be great!
I have also successfully run Wine (an earlier version) in Ubuntu. I suspect that the instructions provided in Appendix A of the documentation will translate straightforwardly to Linux -- but again, any advice gratefully received.   

As of today TSM is free software to individual users, though I won't turn down donations, modest or otherwise. I will also be posting the source code in due course.
TSM has in any case been pretty inexpensive compared to its commercial rivals. The modest charge has been viewed as covering the costs of support and maintenance, though in practice the number of users reporting problems has been pleasingly small. Anyway, looking ahead to a time when I may want to spend less time developing the program (not quite yet) I prefer that it become progressively self-maintaining, and maybe discover a wider user base than in the past. I'll continue to welcome user feedback and bug reports.   

The version increment to 4.48 is justified mainly to consolidate a succession of recent minor changes - and to mark the New Year. The matrix calculator dialog has been redesigned to allow objects such as the covariance matrix from the last estimation run to be imported. Eigenvalues of non-symmetric matrices can now be computed. Complex-valued results are still disallowed, but the option of computing the moduli of the eigenvalues is offered instead.
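For a non-symmetric matrix the eigenvalues can be complex, in which case only the moduli are real-valued; a 2x2 illustration in Python (a sketch of the idea, not the TSM routine):

```python
import cmath

def eig2x2(a, b, c, d):
    # eigenvalues of [[a, b], [c, d]] from the characteristic polynomial
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

# a non-symmetric matrix with a complex eigenvalue pair: the moduli are
# still real, and here both equal 1 (a pure rotation)
l1, l2 = eig2x2(0.0, -1.0, 1.0, 0.0)
assert abs(abs(l1) - 1.0) < 1e-12 and abs(abs(l2) - 1.0) < 1e-12
```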

The Microsoft Surface tablet has a very high display resolution relative to screen size, necessitating software rescaling, and this feature does not interact very well with the current Java JRE release (v8.66). Scroll bars, in particular, appear collapsed if their pixel dimensions are too small. The more the display is rescaled, the larger screen objects need to be rendered. With this problem in mind, the TSM installation now offers four different scale options for fonts and icons. The 100% option should look best on most desktop and laptop installations, although some users may like the "big application" feel of 150% scaling. This is also a good resolution for projecting classroom demonstrations. The 200% and 250% options are chiefly for Surface users (and possibly others with tablet displays). The rule is: if your scrollbars appear collapsed, choose a bigger size. Hopefully, future Java releases will resolve this problem more elegantly. Switching temporarily between size options is possible for users without administrator privileges by a setting in Options / General, which places a text file in the home directory to be read at start-up.

This release of TSM 4.47 has some new options for instrument selection in IV/GMM, and also for spreadsheet editing, aimed chiefly at preparing panel data sets.
The 'Instruments' dialog now has an additional scrollbar allowing the minimum lag order to be greater than 0. This avoids the need to create lags as new variables, if the current values of additional instruments are inappropriate. Note that the number of lags selected remains the same, and hence the maximum lag is increased commensurately with this setting.
Next, it is now possible to insert or delete rows of the data matrix in spreadsheet mode. Inserted rows are filled with .NaN, and hence need to be edited by hand to contain data. However, these may be needed only temporarily. Panel data files where the observations are arranged by individual, with equal numbers of successive time observations for each, may nonetheless lack guide columns identifying individuals by name or number, and it could potentially be tedious to create these by hand. Provided the panel is balanced and complete (equal numbers of time periods for each individual), there is now an option to create the guide columns automatically when the data are edited in spreadsheet mode. In the case of unbalanced or irregular panels, this option can still be adopted by going through and adding dummy observations (rows) to "complete" the panel. After creating the guide columns, the dummy rows can be deleted again. These latter operations must be done by hand, but in large data sets this would be a good deal less laborious than creating the guide columns by hand.

A new option, "Save Data with Settings", in Options / General is enabled by default. This continuously saves the working environment as TSM runs to a file with default name "settings.tsd", including data, listings and graphics. Among other things, enabling this option means that once loaded from a spreadsheet file, data sets remain available to the program even if the original files are moved or deleted between sessions. Temporary changes to the data set, such as the addition of retrieved series, are saved to disk for later runs without the need for changes to source files. If the program is not closed down properly for any reason, recovery is now seamless, with everything still in place including any data transformations and the contents of the results window. The only reason to disable this option might be if the data set is large enough that writing it to disk incurs a noticeable processor time and/or disk space penalty.
A Java bug can cause TSM to hang. A resolution of the problem has proved elusive, since it is difficult to replicate; however, a crash can be induced by holding Shift or Control down while selecting a variable from a list. (The "click with shift key depressed" functionality to select a range, as found in Windows Explorer, is not implemented in TSM. Right-click and 'drag-select' are two ways to obtain the equivalent functionality.) If a freeze is encountered, closing the DOS console running Ox generally kills the program. If the window still does not close, run the utility "renewjava.exe" in the installation directory to re-start Java. By default, the full program environment including the results window contents is preserved on restarting.

Today's release of v4.46 has a number of fixes for Monte Carlo experiments with bootstrap methods, but is chiefly of interest for work on the GUI under Windows 10. Windows 10 has a rather forbidding white theme for window and dialog title bars, which it seems you can't easily change. White menu, status and tool bars look more in keeping with this environment. In previous releases these had the same colour as the dialog backgrounds, but now it's desirable to set them independently. They are set to white automatically when the package is first installed under W8 or W10. Thereafter, any chosen RGB triplet can be specified in the tsmgui4.oxh header file; see the new function Get_FrColor().
Please note that v4.46.29-08-15 had a bug in the display of error messages, fixed in the 31-08-15 release. Also worth mentioning are some fixes to the matrix calculator interface. The naming and saving of new matrices should now be working more smoothly than in the original release. Kindly report any further problems encountered with new or revised code - thanks!

This version implements some new discrete data options. For time series of discrete data, such as counts, the latent mean process can now incorporate an autoregressive component. This feature allows the mean to be driven by an exponentially weighted moving average of forcing variables, which may include the lagged dependent variable. The dynamics are analogous to those of a GARCH model. Another new option is 'zero-inflated' forms of the Poisson, negative binomial and ordered probit/logit  models. These models can account for the phenomenon in which discrete data exhibit more zero cases than the assumed distributions would predict, introducing a second regression model to explain the probability that the observation is drawn from a 'zeros' regime.
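The zero-inflation idea is easy to state: the probability function is a mixture of a degenerate 'zeros' regime and the assumed count distribution. A Python sketch for the Poisson case (here the mixing probability pi is held constant for simplicity, whereas in TSM it is itself explained by the second regression model):

```python
from math import exp, factorial

def zip_pmf(k, lam, pi):
    # zero-inflated Poisson: with probability pi the observation comes
    # from the 'zeros' regime, otherwise from a Poisson(lam) draw
    pois = exp(-lam) * lam ** k / factorial(k)
    return pi + (1.0 - pi) * pois if k == 0 else (1.0 - pi) * pois

lam, pi = 2.0, 0.3
# more mass at zero than the plain Poisson would predict
assert zip_pmf(0, lam, pi) > exp(-lam)
# it is still a proper distribution
assert abs(sum(zip_pmf(k, lam, pi) for k in range(60)) - 1.0) < 1e-9
```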
In the data transformation dialog a range of kernel smoothing functions has been added. Basically, these are alternative locally weighted moving averages with Epanechnikov, Gaussian, triangular, uniform and biweight kernel options.
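As an illustration of what these smoothers do, here is a locally weighted moving average with the Epanechnikov kernel in Python (a sketch only - TSM's own bandwidth conventions may differ):

```python
def epanechnikov(u):
    # Epanechnikov kernel: 0.75 * (1 - u^2) on [-1, 1], zero outside
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kernel_smooth(y, bandwidth):
    # locally weighted moving average: weights decay with the distance
    # (in observations) between the target point and its neighbours
    n, out = len(y), []
    for t in range(n):
        w = [epanechnikov((s - t) / bandwidth) for s in range(n)]
        out.append(sum(wi * yi for wi, yi in zip(w, y)) / sum(w))
    return out

y = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
smoothed = kernel_smooth(y, 3.0)
# smoothing pulls the alternating series towards its mean of 0.5
assert all(abs(v - 0.5) < 0.5 for v in smoothed)
```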
Some changes have been made to the GUI code so that the Java interface looks better under Windows 8 (and hopefully Windows 10 too, when that appears). Choice boxes, and error and information pop-ups, have a new look. Some dialogs have been redesigned, to make what were initially experimental modelling features more accessible.
Last but not least, a major effort has been made to bring the programming manual up to date. Now included are the commands to use new features such as consistent specification tests and the skip-sampling test for long memory in GPH estimation.       

Version 4.45 takes the calculator idea to its logical conclusion. The new feature is a matrix calculator. Second moment matrices can be computed from the dataset by selecting row and column components from the variable list (choose matching sets for a square symmetric matrix). A calculator window then allows formulae to be evaluated. Matrix expressions can be entered in something close to standard notation. Matrices can also be constructed from scratch and edited element by element. They are stored and saved to disk in vectorized form, as spreadsheet columns. The idea of this feature is to provide a means of evaluating new test statistics without resorting to programming. At the moment it is just a calculator. Future releases will allow formulae to be saved as a model and hence used routinely, and available for simulation exercises.
The other feature of this release is an alternative GUI display format, selectable at installation, with larger icons and fonts. This may provide a better appearance in some displays. On Windows 8.1 tablets, in particular, the scrollbars may not display properly unless this option is selected.

Version 4.44 has a new utility that should have been added long ago - a calculator, to allow numerical formulae to be evaluated. This has all kinds of uses where a hand calculator or spreadsheet program might otherwise serve, but with hopefully greater convenience. At the simplest, use it to compute confidence bounds given reported point estimates and standard errors. Text can be cut and pasted from the results window to facilitate this kind of operation.  The algebra syntax is the same as that already in use for nonlinear equations, test restrictions and data transformations. The only difference is that here you cannot reference model features, such as parameters or variables from the data set - this is a free-standing gadget unconnected with the other functions of TSM. However, if an algebraic symbol such as X or Y is included in the formula you type in, the result is to plot a graph of the resulting function of the variable over a specified range (which must be entered following the formula). This can be done with one variable or two. In the latter case, a three-dimensional figure is the result.
Regrettably, in the process of fixing a minor bug another more serious bug was introduced into release 4.43.13-10-14. This could interfere with the optimization of certain dynamic models. Updating your installation to version 4.44 is therefore strongly recommended.

Version 4.43 has two new features aimed chiefly at stochastic simulations. The shock process can now be generated by a user-specified formula, based on either (or both) of the uniform[0,1] distribution and the standard normal. As many independent components as desired, with these distributions, can be included in the formula. Note how the uniform[0,1] can be used to generate a Bernoulli distribution with the code "ips(u# - prob)" where u# is replaced by the uniform series and "prob" is replaced by a literal or an element of the parameter set. This device could be used to generate a mixed normal or jump distribution, for example.
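In conventional notation the device works like this - a Python sketch, on the assumption that ips() is the indicator returning 1 for a positive argument and 0 otherwise (the function names here are mine):

```python
import random

def ips(x):
    # assumed semantics: indicator of a positive sign
    return 1.0 if x > 0.0 else 0.0

def mixed_normal(prob, jump_scale):
    # ips(u - prob) is Bernoulli with success probability 1 - prob; use it
    # to inflate the normal shock occasionally, giving a jump distribution
    u, z = random.random(), random.gauss(0.0, 1.0)
    return z * (1.0 + (jump_scale - 1.0) * ips(u - prob))

random.seed(0)
draws = [mixed_normal(0.95, 5.0) for _ in range(10000)]
# the roughly 5% of inflated shocks fatten the tails relative to N(0,1)
assert max(abs(d) for d in draws) > 4.0
```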
When simulating a model that generates a random conditional variance (ARCH/GARCH, or Markov switching) it is now possible to save the simulated variance process to the data set along with the series itself. This feature is activated by including the string #CVAR in the variable's description field. In a similar way, data transformations can be implemented by including the required formula (and its inverse, if the mapping is 1-1) in the description field. Both of these features are switched on and off with the toggle setting at Options / General / Special Settings / Automatic Data Transformation. The transformations (logarithms, for example) are done "on the fly" when running an estimation, so that the series don't have to be stored in the data matrix. This approach allows a forecast to be computed for a variable when the model has been constructed for the transformed series. 
Finally, this release includes new functions that can be implemented when calling TSM from a user's Ox program, specifically to save results to a disk file and retrieve them again subsequently.       

This release of version 4.42 contains a change to the EGARCH implementation. This has always adopted a different dynamic parameterization from that proposed in the original Nelson (1991) paper, and also allowed the mean of the absolute shock values to be estimated as a component of the intercept, so not modelled explicitly. Since the distribution of the shocks is always conjectural, it appeared better not to impose this functional form on the specification, and to simplify the structure.
This approach works OK for the usual ARMA exponential memory decay parameterization. However, we are currently studying the long memory FIEGARCH case, for which the centring of the absolute shock process is critical, since applying the fractional integration filter with non-summable coefficients to a non-centred series results in a fractional drift term. For this case, removal of the absolute mean is important, so this has now been implemented for the three continuous likelihood models, the Gaussian, Student's t and GED. This term is included by default, but reversion to the old parameterization is possible via a toggle in the Special Settings  menu (see Options / General).
Before modelling with this program feature, be sure to look at Section 6.2.6 of the TSM main document, which explains the implication of the sign of the parameter alpha, controlling the contribution of the hyperbolic lags. In the long memory case this needs to be negative, to match the sign of d.

Some operations on TSM data sets have up to now required another package, such as Microsoft Excel. TSM data sets may contain various different guide columns containing date labels, indicators and panel data formatting information. These are hidden from users in ordinary use, but it is sometimes convenient to edit them manually. In particular, panel data sets could not previously be created except by use of a spreadsheet program.
Version 4.42 has a new feature which lets data spreadsheets be edited in "raw" form. Guide columns are treated just like ordinary series for creation and editing. In this mode, TSM functions essentially as a simple spreadsheet editor. A new "Spreadsheet Editing..." command in the File / Data menu allows a file to be loaded from disk in spreadsheet mode. The Data Transformation and Editing dialog opens automatically, and other functions are disabled. Closing the dialog terminates the editing session. The loaded file is then available for normal operations.
As an additional new feature, identifiers of individuals or groups in a panel, consisting of text strings of up to eight characters, may now be stored in the spreadsheet itself in numerical form (n.b. a "double" value occupies eight bytes, so can encode eight characters). When viewing the file in Excel, these values may sometimes appear as zeros in the standard cell format, but don't be fooled! TSM displays their contents correctly, and the spreadsheet editing mode now allows them to be set up. The previous scheme of placing panel names in a text file is now discontinued. Files will need to be converted to restore this feature.
This version also corrects a bug introduced in v4.41, which disabled Monte Carlo forecasting.       

Version 4.41 features enhancements to the simulation/bootstrap and Monte Carlo options.
Models with infinite-variance errors can now be simulated, with the inclusion of random variables following an alpha-stable law, for alpha <= 2. In this context, Gaussian is the case with alpha = 2. With alpha < 2 a skewness setting is available with parameter beta.
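Symmetric stable variables (beta = 0, alpha != 1) can be generated by the Chambers-Mallows-Stuck method - which may or may not be the algorithm TSM uses internally; a Python sketch:

```python
import math, random

def symmetric_stable(alpha):
    # Chambers-Mallows-Stuck draw, symmetric case (beta = 0, alpha != 1);
    # alpha = 2 reproduces a Gaussian (with variance 2)
    u = random.uniform(-math.pi / 2.0, math.pi / 2.0)
    w = random.expovariate(1.0)
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

random.seed(1)
draws = [symmetric_stable(1.5) for _ in range(5000)]
# heavy tails: extreme draws far beyond anything a Gaussian sample shows
assert max(abs(d) for d in draws) > 10.0
```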
The sieve-AR bootstrap option has been redesigned so that it can be combined, as a pre-whitening step, with the block bootstrap and Fourier bootstrap options, as well as being used to represent sample dependence in its own right.
Finally, an "m/T" bootstrap option allows the bootstrap sample size (m) to be a fraction of the original sample size (T). Choosing m as a fractional power of T may provide a consistent bootstrap procedure in cases where the regular bootstrap fails, such as when the shocks have infinite variance.
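The idea, in a Python sketch for the simplest case of a resampled mean (TSM applies it to the full bootstrap machinery, of course):

```python
import random, statistics

def m_of_T_bootstrap(y, m, reps):
    # resample only m << T observations per replicate; choosing m as a
    # fractional power of T can restore consistency where the standard
    # (m = T) bootstrap fails, e.g. with infinite-variance shocks
    return [statistics.mean(random.choices(y, k=m)) for _ in range(reps)]

random.seed(2)
T = 400
y = [random.gauss(0.0, 1.0) for _ in range(T)]
m = int(T ** 0.5)                      # here m = 20
boot_means = m_of_T_bootstrap(y, m, 999)

# each replicate uses only m observations, so the bootstrap means are
# much noisier than the full-sample mean (sd roughly 1/sqrt(m))
assert 0.1 < statistics.pstdev(boot_means) < 0.4
```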
The Options / Simulation and Resampling dialog has been redesigned to accommodate these changes. Note that the scrollbar for setting 'm' shares its location with the scrollbar setting the number of bootstrap replications. To switch between the two simply select/deselect "m/T", leaving it selected to activate the option once both settings have been made.             
A further new feature is an option in the Monte Carlo dialog for 2-sided p-value EDF calculations. If the "p-value" denotes the position of the sample statistic in the test distribution, checking this option shows (for example) under "5%" the number of cases outside the range [0.025, 0.975], instead of the default report of the number of cases below 0.05. This option has applications in two circumstances. If signed t-values are specified, the size of t-tests based on equal-tailed (rather than symmetric) confidence intervals can accordingly be estimated. Asymmetry may arise in the case of a bootstrap test, in particular. The size of any test based on a user-specified statistic can be studied similarly.
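The difference between the two reports can be spelled out in a few lines of Python:

```python
def rejection_rates(pvals, level):
    n = len(pvals)
    # default report: share of replications with p-value below the level
    one_sided = sum(p < level for p in pvals) / n
    # 2-sided option: share falling outside [level/2, 1 - level/2]
    two_sided = sum(p < level / 2 or p > 1 - level / 2 for p in pvals) / n
    return one_sided, two_sided

# under the null the p-values are uniform, so both versions estimate the
# nominal size of the test
pvals = [(i + 0.5) / 1000 for i in range(1000)]
one, two = rejection_rates(pvals, 0.05)
assert abs(one - 0.05) < 1e-9 and abs(two - 0.05) < 1e-9
```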

This release has an updated setup program, so that either 32-bit or 64-bit versions of TSM can be installed from the same file. Note that the 64-bit version requires 64-bit Ox 7, which is not currently a free download. You will need to purchase OxMetrics 7 Professional to make use of it. The installation procedure is the same, except for one new page in the wizard which for 32-bit installation must be clicked through without any changes.
Appendix A of the documentation contains a new section on installing TSM under Ubuntu. This is possible using the Wine package in conjunction with the Windows versions of the various TSM component packages (Ox, Java, Gnuplot). Installation on Mac OS X should also be possible this way, although I have not tried this as yet. It may be necessary to purchase Crossover, a commercial implementation of Wine. Feedback from users who have attempted this will be gratefully received.     

The chief novelty in version 4.40 is the 64-bit implementation, which runs with Ox 7 Professional on computers running 64-bit Windows 7 or  8 operating systems. Henceforth, both 32 bit and 64 bit compilations of the software will be available in tandem. The only difference in operation is that the bundled (32-bit) Gnuplot executable is omitted from the 64-bit installation, and to obtain graphics it will be essential to install Gnuplot separately, version 4.6.3 or later.
This version also refines the data transformation features introduced in Version 4.39. In models with conditional heteroscedasticity, it is now possible to retrieve the conditional variance processes generated by the simulation stage of a Monte Carlo experiment, and hence analyse these at the estimation stage.    

Today's update resolves a problem with Gnuplot. The latest release (4.6.3) requires different command line inputs and interactive plotting will not work with earlier versions, if installed. If no separate Gnuplot installation exists, the fix ensures that TSM defaults to using the compact bundled version (Gnuplot 4.2.6), with the appropriate command line inputs. There is an optional switch (see Options / General / Special Settings) to force the use of the bundled version. This one may be preferred for its treatment of monochrome plots, or if an installed Gnuplot version cannot be updated.       

Version 4.39 runs under Ox 7, which you must install before upgrading (i.e., making a new installation). Previous versions of TSM do not work with Ox 7; however, you can run Ox 6.2 and Ox 7 installations quite happily side by side. There is no need to remove any existing TSM installations, although since the Windows shortcuts get over-written, you will need to find a different way of starting the old version. Instructions for setting up shortcuts by hand can be found in Appendix A.
Sorry, this version does not run under Linux. I don't have the expertise to compile the new version of OxJapi required to run the GUI. Linux and OS-X programmers interested in attempting this, please get in touch!    
New features of TSM itself in this release:
1. The warp-speed Monte Carlo method for bootstrap estimators of Giacomini et al. (Econometric Theory 28, pp 567-589). This method combines the Monte Carlo replication and bootstrap replication stages together, with an order-of-magnitude reduction in computing time, by making a single bootstrap draw in each replication of the Monte Carlo experiment. The distribution of the second-stage statistics, computed from these draws, provides the "bootstrap" tabulation from which the p-values for the first-stage replicates are then estimated. See the cited article for discussion of the applicability and performance of this procedure.
2. Any specified transformation of a data series can be performed "on the fly", before estimating a model. This feature has been available before  for log transformations, but now any function that can be coded using the program's formula syntax can be used. The formula can incorporate other data series as well as constants. It must be entered into the data description field of the series in question. If the transformation is one-to-one with a unique inverse, the inverse formula can also be coded, and then the fitted values and forecasts are reported for the original variables, not the transformed ones. This feature saves the need to store transformed series in the data matrix, but could be more than just a convenience for setting up some Monte Carlo experiments. Now one can generate data using a given model, and then compute estimates and tests in a model of the transformed data, without the need for a nonlinear estimator.
Both of these new features are activated via the Options / General / Special Settings drop-down menu.             
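The logic of the warp-speed method in item 1 can be sketched in Python, here for an illustrative t-ratio with a recentred nonparametric bootstrap (my choices for the example, not a prescription):

```python
import random, statistics

def warp_speed_mc(gen, stat, boot_draw, reps):
    # one bootstrap draw per Monte Carlo replication; the pooled
    # second-stage statistics form the 'bootstrap' tabulation from which
    # every first-stage p-value is then estimated
    stats, boots = [], []
    for _ in range(reps):
        y = gen()
        stats.append(stat(y))
        boots.append(stat(boot_draw(y)))
    return [sum(b >= s for b in boots) / reps for s in stats]

random.seed(3)
gen = lambda: [random.gauss(0.0, 1.0) for _ in range(50)]
stat = lambda y: statistics.mean(y) / (statistics.stdev(y) / len(y) ** 0.5)
boot = lambda y: random.choices([v - statistics.mean(y) for v in y], k=len(y))

pvals = warp_speed_mc(gen, stat, boot, 500)
# under the null, roughly 5% of the upper-tail p-values should fall below 0.05
assert all(0.0 <= p <= 1.0 for p in pvals)
```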

A user reported that the algorithm computing the roots of an autoregressive system was taking a very long time to run in a BEKK-GARCH model, where the number of roots equals the number of equations squared times the lag order. Although such a problem is rare, making the computation of roots a selectable option appears a worthwhile addition. The new checkbox, enabled by default, can be found in the Options / ML and Dynamics dialog.
Another addition in this release is a table of moments (mean, variance, skewness, kurtosis) for the Monte Carlo distributions available for plotting. This information can be useful in assessing the behaviour of tests. Quantiles alone can sometimes be deceiving, if for example the mean and variance of a t-ratio are offset in compensating directions. The table, which appears after the usual Monte Carlo outputs, can be viewed as a numerical summary of the information contained in density plots. Note that it is only available when the replications are saved as a frequency table.

The major feature of this release is new code for computing analytic ex ante forecasts. These are now available for a much wider class of models than before, including nonlinear structures such as bilinear models, linear and nonlinear error correction models, and smooth-transition regime switching models, as well as user-specified nonlinear coded specifications. Only Markov-switching models are still restricted to the class of (vector-) ARMA equations. All models can feature conditionally heteroscedastic errors. The code works by solving the model simulation algorithm forward with zero stochastic inputs. A set of pseudo-moving average weights is constructed by passing a unit perturbation through the same filter, allowing the construction of standard error bands and impulse responses. Be careful to note that these represent linear approximations in the case of nonlinear-in-variables models. This code remains experimental and has not been validated with all possible model options. Hence, the old code is still available, and can be enabled as a selection in Options / General / Special Settings. Alternatively use the Monte Carlo forecasting option.
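For a linear AR(1) the procedure has a closed form, which may help fix ideas: the zero-shock forward solution gives the point forecasts, and a unit impulse passed through the same filter gives the moving-average weights used for the error bands. An illustrative Python sketch (not TSM's implementation; the function name is hypothetical):

```python
import numpy as np

def ar1_forecast_bands(phi, sigma, y_T, horizon):
    """For y_t = phi*y_{t-1} + e_t with shock s.d. sigma:
    point forecasts from the zero-shock recursion are phi^h * y_T,
    and the unit-impulse (MA) weights are psi_j = phi^j, giving
    h-step forecast s.e. = sigma * sqrt(sum_{j<h} psi_j^2)."""
    point = np.array([phi**h * y_T for h in range(1, horizon + 1)])
    psi = np.array([phi**j for j in range(horizon)])   # 1, phi, phi^2, ...
    se = sigma * np.sqrt(np.cumsum(psi**2))            # forecast s.e. at each h
    return point, point - 2 * se, point + 2 * se       # 2-s.e. bands

point, lo, hi = ar1_forecast_bands(0.5, 1.0, 2.0, 3)
```

In the nonlinear case the same two recursions are run numerically, so the weights, and hence the bands, are linear approximations, as noted above.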

Another new feature is the forecast error variance decomposition for multi-equation models. This shows how different shocks propagate through the system, and hence how each variable responds to innovations in each variable, including itself, at each forecast horizon.
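A standard way to compute such a decomposition is from the moving-average weight matrices and an orthogonalization of the shock covariance. The sketch below (illustrative Python, not TSM's code; the Cholesky orthogonalization is an assumption here) returns, for each horizon, the share of each variable's forecast error variance attributable to each shock:

```python
import numpy as np

def fevd(Psi, Sigma, horizon):
    """Forecast error variance decomposition.
    Psi: list of n x n MA weight matrices Psi[0], Psi[1], ...
    Sigma: n x n shock covariance matrix.
    Returns shares[h, i, j]: fraction of variable i's (h+1)-step
    forecast error variance due to orthogonalized shock j."""
    P = np.linalg.cholesky(Sigma)              # orthogonalizing factor
    n = Sigma.shape[0]
    contrib = np.zeros((horizon, n, n))
    for h in range(horizon):
        Theta = Psi[h] @ P                     # orthogonalized responses
        contrib[h] = Theta**2                  # per-shock variance contributions
    cum = np.cumsum(contrib, axis=0)           # accumulate over horizons
    total = cum.sum(axis=2, keepdims=True)     # total FEV per variable
    return cum / total

# With identity weights and covariance, each variable's error variance
# is wholly attributable to its own shock.
shares = fevd([np.eye(2)], np.eye(2), 1)
```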

One other small addition in this release, at the request of a user, is the option to change the title of any graphic prior to creating it. With this option selected in Options / Graphics, a text field is presented to the user when the graphic is created, containing the default title for editing or replacing. Alternatively, the title can simply be omitted. This is convenient when the user plans to edit the graphic and supply text in a drawing program. Don't overlook that the legends for multiple plots can also be optionally omitted. Users therefore have complete freedom to supply their own graphical annotations for publication purposes.             

This release has no major changes, just a collection of fairly minor fixes and amendments. These are as follows.

  1. EMF (Windows enhanced metafile) graphics file output is now supported. At the same time, the TEX graphics format is no longer an option.  

  2. Computing and reporting forecast standard errors is now optional, although enabled by default.

  3. The Options / Forecasting dialog also has the option to suppress all but the point forecasts in spreadsheet outputs. By default a full set of quantiles is written to the file, which can be inconvenient when the output are to be used for subsequent processing. (This option was previously available under Options / General / Special Settings.) 
  4. Diagnostic test selections are now stored as a vector of  identifiers (named integers) having the dimension of the number of tests selected, in place of an extended vector of Booleans. This change makes the stored model description (as viewed in the Model Manager) both easier to read and easier to modify in command line mode. See the revised programming manual for details.
  5. Other program settings previously given explicit integer values are now identified by mnemonics in the model descriptions, including TRUE and FALSE in the case of Boolean variables. 
  6. The programming manual has been revised and re-formatted to reflect 4 and 5.
  7. Some speed improvements have been made in the evaluation of LM statistics, with components that were previously recomputed each time a test was called now being stored.
  8. In the regime-switching dialog, the intercept (or intercepts) can now be included in the set of switching parameters independently of the regressors.     

Today's release features some interface modifications. The instrument selection dialog (Models - Select Instruments...)  has been redesigned to let the user more easily set the number of lagged endogenous variables to be used as instruments in GMM. There is now a separate scrollbar for this choice, instead of a checkbox. Another new feature is the option to suppress the computation of  estimation outputs (residuals, covariance matrix, test statistics, forecasts etc.) following an optimization run. These steps can be time consuming in large models. Often, finding the optimum of a complicated criterion function is best achieved by stepwise runs of the algorithm subject to parameter fixes, and enabling this option can speed things up considerably. The output can always be generated subsequently with the "Evaluate at Current Values" command.  The switch for this option is in the Options / Optimization and Run dialog, and replaces the switch that enabled the printing of output following a convergence failure. (This can now be done with "Evaluate at Current Values", as above.)  

Version 4.36 has several new tests, including the LM version of the Andrews (1993) test for structural change, the V/S test for weak dependence (a variant on the KPSS test) and some experimental consistent tests of functional form and dynamic specification, related to the Bierens (1990) residual-based tests.

This release enhances the data plotting routine, which now automatically sizes the plot area to exclude non-existent (.NaN) data points in the chosen range. Previously, it was necessary to do this exclusion manually. (This was really a bug, not an intended feature.) In multiple series plots, the plot area is now sized to include the series with the most existing data points in the chosen range.

In this release, a very small change allows a separate installation of Gnuplot to provide graphical capabilities. Simply install the package (current release Gnuplot 4.6.0) and TSM will use it instead of the bundled (compact, but now superseded) release 4.2.6. In case the installation path is non-default, this can be entered in tsmgui4.h.
Other changes include further work on the sorting of the model list in Model Manager. The same ordering now appears in the Monte Carlo dialog.    

Two small interface improvements.
1. The number of entries displayed in a combo-box (choice widget/pull-down menu) is now a setup option (edit tsmgui4.h) and has the default of 20. Much less scrolling is required to find the item you want.
2. A new menu item on the Model menu, "Quick Model Save". This saves the current specifications under the current model name, without requiring the user to confirm the name in a text box and then confirm over-writing. In other words, saving a model can be done with one mouse click instead of three. Making small changes to a succession of models is now a lot quicker.
There is also a fix to restore functioning of the local Whittle estimator, which was not working correctly. Thanks to a user for drawing attention to this.

There is a small change to the Cointegration Analysis dialog to resolve a potential ambiguity. The choice of 'lag length' for the cointegrating regression refers to the number of lagged differences to be added to the system. The selection is now explicitly denoted 'additional lags', to avoid confusion with the choice of maximum lag of any variable in the system (necessarily at least one). The options now explicitly include the case "zero", whereas previously this choice was unavailable.
A new feature is to allow the individuals in a panel to be identified by name. The names cannot be stored in the data spreadsheet file, but instead can be entered in a text file with the same name and location as the data file, but with extension ".txt". 
There has also been a bug fix relating to the selection of panel time dummies when the number of time periods is less than the maximum.   

The Model Manager now has a Sort option, to place the stored models in alphabetical order. Sorting can just as easily be undone, to place models back in the order they were created. The Model Manager menu item has also been moved from the Setup menu to the Models menu. This is  a long-postponed change to a more logical layout, especially as the Setup menu is becoming burdened with additional entries. This change has entailed a corresponding rearrangement of the user's manual. This release also has some corrections in the analytic derivative code and more stable computation of inverses for some test statistics.  

Version 4.35 offers for the first time in TSM a state space modelling option. This feature makes use of the package SsfPack 2.2 which, like Ox, is free to academic users for research and teaching purposes. (Other users should purchase a licence before using SsfPack). The user has only to install SsfPack in the "packages" folder of the Ox installation. Then, running the TSM installation package will automatically detect it and enable the relevant features of TSM. These include a dialog (accessed from the Setup menu) where parametric state space models can be specified interactively and estimated by maximum likelihood. Series for predictions, forecasts and smoothed states and disturbances can be generated and are added to the data matrix, for further analysis and display using TSM's graphics capabilities. Full instructions for the use of the interactive dialog can be found in the user's manual, but familiarity with state space modelling principles, and careful reading of the SsfPack documentation, are essential to get the most out of these methods.
Note that this first release is a preliminary "beta". It has been tested, but the code is quite extensive and some issues could remain. Also, not all the SsfPack features are so far implemented. Any users' comments  and suggestions will be welcome.

Other features of this release include a redesign of the Options / Graphics dialog. To simplify the interface, the options for choosing line and symbol styles are now moved to a separate panel. A new graphics feature is the ability to plot one or more series with 2-standard error confidence bands. A feature of SsfPack's output is the series of calculated state and prediction variances, showing the uncertainty in the modelled series. These series are reported by TSM in square root form, so it is easy to create a graphic showing confidence bounds, as illustrated in the SsfPack documentation.

A small change has been made to the handling of sample selection to allow for missing observations (with value .NaN) in a data set. The previous operation to exclude missing observations from a sample would simply find the last such observation in the series, and set the first available observation to be the one following. The new behaviour, following the command "All Available Observations" in Setup / Set Sample...,  is to look for the longest unbroken sequence of observations. If the selected sample is outside this range, the longest unbroken sequence within the chosen sample is selected.
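The "longest unbroken sequence" rule is easy to state as an algorithm: scan the series once, tracking the current run of non-missing observations and the best run seen so far. An illustrative Python sketch (hypothetical helper, not TSM's code):

```python
import math

def longest_unbroken(series):
    """Return (start, length) of the longest consecutive run of
    non-missing (non-NaN) observations in a series."""
    best = (0, 0)
    start = run = 0
    for t, y in enumerate(series):
        if y is not None and not math.isnan(y):
            if run == 0:
                start = t              # a new run begins here
            run += 1
            if run > best[1]:
                best = (start, run)    # record the longest run so far
        else:
            run = 0                    # a missing value breaks the run
    return best

# A gap at t=1 and t=5 leaves the run of three observations at t=2..4.
start, length = longest_unbroken([1.0, float('nan'), 2.0, 3.0, 4.0,
                                  float('nan'), 5.0])
```

Restricting the same scan to the user's chosen sample gives the second behaviour described above.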
This release also fixes a bug introduced in v4.34, that caused a crash when selecting from the variable list in Model / Regime  Switching...        

This update provides new commands for organizing TSM's working files into folders. A new menu item File / Folders allows six different file locations to be shown and changed.
In the case of model files, changing the file location causes the files themselves to be moved. This feature was becoming necessary because of the large numbers of files generated by some activities such as running batch jobs on Condor. Large numbers of model (.tsd) files can also tend to make the home directory excessively cluttered. These  files can now be optionally moved to subdirectories.
Data files can be loaded from anywhere, but will be saved to the designated data folder by default. This helps ensure that if a data file is edited, the original version is kept intact unless it is stored in the designated folder.     

Today's release has two new bootstrap test options. The stationary bootstrap of Politis and Romano (JASA 1994) is an addition to the range of available options: it is the variant of the moving blocks bootstrap in which the length of the blocks is drawn randomly. The "static bootstrap" option creates the bootstrap data by adding the resampled residuals directly to the fitted forms of the estimated equations. In the case of static models in which all the explanatory variables are exogenous (held fixed in the replications) the two procedures are equivalent, although the static procedure runs substantially quicker. However, in a dynamic equation the observed values of the lagged dependent variables are used rather than the simulated ones.
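In the stationary bootstrap, block start points are drawn uniformly and block lengths are geometric with mean 1/p, with indices wrapping around the sample. A minimal illustrative resampling sketch in Python (not TSM's implementation):

```python
import random

def stationary_bootstrap(data, p, seed=None):
    """One stationary-bootstrap resample (Politis and Romano, 1994):
    blocks start at uniform random positions; each block ends with
    probability p per step, so lengths are geometric with mean 1/p."""
    rng = random.Random(seed)
    n = len(data)
    out = []
    while len(out) < n:
        i = rng.randrange(n)            # uniform random block start
        while len(out) < n:
            out.append(data[i % n])     # wrap around the sample
            i += 1
            if rng.random() < p:        # end the current block
                break
    return out

sample = stationary_bootstrap(list(range(10)), 0.2, seed=1)
```

With p = 0.2 the blocks average five observations, so the resample preserves short-range dependence while remaining stationary, which is the advantage over fixed-length blocks.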
There are also a couple of important bugfixes. One of these issues, introduced in v4.33, caused a crash when attempting to compute the F-form of a test of restrictions. The other prevented the loading of settings (.tsm) files created with versions 4.25 and earlier. 

Version 4.34 has one new option, a cusum of squares test for regression residuals.
However, there are several improvements to the interface and important bug-fixes. The Values dialogs have a redesign that makes it easier to navigate a multi-equation or multi-regime model: you can now cycle both backwards and forwards through the screens. (This was already in later releases of 4.33.) 
There is a new way of selecting a block of variables in a data list - simply select the first item in the block as usual, then use the right-mouse button to select the last member of the block. This method supplements and co-exists with the existing "drag-select" method. Use whichever appears easiest.
The handling of multiple data sets is now improved - the "foreground" dataset now appears, "greyed out", in the list of loaded data sets, which is a bit less confusing than the old format. "Remove" replaces "Clear" in the list of commands, which is slightly less ambiguous.
The data spreadsheet and the data formula editing dialog are now both accessible from the Setup menu, as well as from within Setup / Data Transformation and Editing. In the former case, there is no option to select variables and observations; all are displayed. These additions (which don't affect existing functionality at all) provide a bit more flexibility for users and also allow some rationalization of the Help pages.
The data spreadsheet is now resizable. Drag the corner to bring more observations and variables into view.
Last, but by no means least, a long-standing problem with the data editing dialog under Linux has been remedied. (The Linux installation has to use the old version of the Java interface, until a Linux programmer comes forward to help us update the interface as has been done for Windows. Any offers?) My apologies to Linux users (relatively few, but no less valued) that this fix has taken so long.

A bug has been corrected that affected the releases of 11/06 and 19/06. The Values dialog was failing to display values for linear regression models. Upgrading is recommended, with apologies to any users inconvenienced by this. 

As well as consolidating a large number of minor changes, Version 4.33 supports Excel 2007/2010 workbooks with the extension ".xlsx" for both reading and writing. This feature requires Ox 6.20, and upgrading the Ox  installation is strongly recommended. This version also runs under Ox 5.1 and Ox 4.1 as usual, but without the enhanced file support.  Also in this release is a recoding of the data formula dialog, hopefully making this feature easier to use.      

A speedy repost following  a report from a user about a dialog display problem. On some systems, combo-boxes (pull-down menus) evidently default to larger font sizes than others, and as a result the boxes can obscure other dialog objects. This font size is now set to the default of 11 points, and is user-selectable by changing the line "Get_ChFontSize()" in tsmgui4.h.
Another, simplifying change in this release is the removal of the option to make variable names case-insensitive. Case insensitivity is now a rather old-fashioned computer convention and the setting was in any case only relevant to command-line implementations of TSM. Case sensitivity is now the rule.

Unfortunately, the last-posted release was flawed! The problem is only likely to affect the handling of large multi-equation models. In other cases the bug is unlikely to affect performance, but an update is advised to be on the safe side. 

This release adds a couple of data transformation features that may be useful in volatility modelling.
1) An m-point moving average transformation with equal weights 1/m.
2) An enhanced recursive formula that allows (for example) an exponentially-weighted moving average (EWMA) transformation of a series, as in RiskMetrics. The revised coding allows any chosen initial values to be set, by over-writing an existing series from which the values are taken.
The existing feature that extended the data matrix to accommodate leads and lags has been removed. Series extending beyond the limits of the current matrix after leading/lagging are now truncated.
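The EWMA transformation mentioned in item 2 is a one-line recursion; an illustrative Python sketch (hypothetical function, not TSM's formula code) with a user-chosen initial value:

```python
def ewma(x, lam, init):
    """Exponentially weighted moving average with a chosen initial
    value, as in the RiskMetrics volatility recursion (lam around 0.94
    for daily squared returns): m_t = lam * m_{t-1} + (1 - lam) * x_t."""
    out = []
    m = init
    for v in x:
        m = lam * m + (1 - lam) * v    # recursive update
        out.append(m)
    return out

out = ewma([1.0, 1.0, 1.0], 0.5, 0.0)
```

Applied to squared returns with a suitable lam, this produces the familiar RiskMetrics volatility series; the initial value controls how quickly the recursion forgets its starting point.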

It is now possible to call TSM from an Ox program and retrieve outputs from the summary statistics and cointegration test modules for further analysis. Thus, the actions of the program could be made conditional on the outcome of tests of (for example) integration order. Previously it was only possible to print these results to the console. Note that this is still the case for the Wald tests on the cointegrating space, but the Johansen beta matrix and trace and maximal eigenvalue tests are retrievable. See the programming manual for details of how to implement these options.

As warned on the download page, this version makes a change to the format of user-coded Ox files for inclusion in TSM. The program won't start if an attempt is made to load a code file in the old format, so this is important! The change required is to add the line
UserCode(const Input, const Output){}
to the end of any such file.

This release has a minor interface enhancement that could occasionally be useful. The available controls in Options / Graphics are quite complex, with lots of line and text styles to choose from. Different settings may be desired for different purposes (for example, colour versus monochrome graphs). Now, all the settings in this dialog can be exported to a named file, and re-imported subsequently. The new options in the File / Graphics menu allow your favourite graphics settings to be stored while making temporary changes, and re-instated easily.

A further update of the Data Transformation and Editing dialog! Using this facility intensively (something I don't usually need to do) has convinced me that it could be more user-friendly. The operation of displaying the editing spreadsheet now also has a dedicated button - no need to select this option from the pull-down menu. Also, double-clicking the variable list is now dedicated to opening the spreadsheet. Other operations chosen from the pull-down menus need to be launched with 'Go'.
The operation of the Formula text box (also now with dedicated button and menu item) has been further modified. It now stays open by default. The 'Close' button (replacing the 'Cancel' button) closes the box without evaluating the displayed formula, but does not discard formula edits; discarding them is now the role of the Escape key. The 'OK' button is now 'Go', evaluating the formula but not closing the box.
Another new item is a TSM splash screen. The main window is displayed only after settings files are loaded, and this can take a few seconds if the file is large - the splash screen lets users know that the loading process is under way. 

This release has an enhanced data editing spreadsheet, with two new features. First is the option of editing  an observation "in place". Double-clicking a cell now opens it for editing with the usual keyboard options, the Escape key to discard the edit and any other navigation key to save it. The existing arrangement with an editing text field is retained, since this is still the most convenient way of editing a sequence of cells.
The second is a very nice feature for getting data into the program from awkward sources. Published data often have unhelpful formatting, with observations listed "in line", rather than in columns. Data may also be scanned from printed sources, resulting in irregular formatting. All that needs to be done to get such data into a TSM data set is to copy it to the Windows clipboard. Numerical values can have arbitrary separators, including spaces, tabs, carriage returns, or indeed any non-numeric characters. Then, having created and named an empty variable (conveniently, use the "Make NaN" command for this purpose), simply highlight the cell where the first observation in the list is to go and press the "Paste Clipboard" button. The values on the clipboard are inserted into the column in sequence. Any non-numeric text in the pasted string is ignored. 
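The parsing step amounts to extracting every numeric token from the pasted text, whatever the separators. A sketch of the idea in Python (illustrative only, not TSM's code):

```python
import re

def extract_numbers(text):
    """Pull numeric values out of arbitrarily formatted text, ignoring
    any non-numeric separators (spaces, tabs, commas, words, ...)."""
    # optional sign, digits with optional decimal point, optional exponent
    pattern = r'[-+]?\d*\.?\d+(?:[eE][-+]?\d+)?'
    return [float(m) for m in re.findall(pattern, text)]

# Mixed separators and stray text are all ignored.
vals = extract_numbers("1.5, 2;\t-3e2 foo 4")
```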

25/08/10, 01/09/10
The number of assignable toolbar buttons has been increased from two to six - hopefully enough for all possible modes of use. Unused assignable buttons are not displayed on the toolbar, and this means that in the new set-up, only one button appears by default. This is Button 1, which by default displays the most recently opened dialog, as before. The other buttons appear only when assigned. The tool bar is therefore tidier by default than it used to be, but can look quite busy if all the options are made use of. 
Also, a small redesign of the Setup / Data Transformation and Editing dialog. The "Go" button has been moved,  to make it more prominent and more closely associated with the choice box that it works with. In its place is a shortcut button to open the "Make Formula" dialog directly, without selecting this option in the choice box (although that procedure still works).
Note that this option can also now be accessed through the menus without opening Setup / Data Transformation and Editing. This allows the user to (e.g.) quickly re-run a transformation after modifying the data.   

This update includes a feature to allow a model fitted in logarithms to be specified in the original data, with the transformation being performed "on the fly". This saves the need to create logarithmic transformations and add them to the data set. It also allows simulations and forecasts to be created for the measured data, with confidence bands constructed appropriately. To signal to the program that a variable is to be transformed when it appears in a model with this feature implemented, a flag is added to the data descriptions field of the variable.

Version 4.32, released today, has few new features apparent to the user, but represents a major code reorganization. The graphics code, formerly part of the graphical user interface, has now been included in the kernel. This means that it can be accessed by users' command-line Ox programs that import TSM as a module. The programming manual has a new section describing how to implement the new graphics functions. The Gnudraw code module is now compiled as part of trmknl4.oxo, which accordingly is the only TSM component needing to be imported to access TSM functions.
All these changes should be entirely transparent to the user of the GUI. However, a new version of Gnudraw allows some enhancements to the data-plotting options, basically extending multiple-series plotting to the correlogram, partial correlogram, spectrum and QQ-representations of series. Also new is user control over graphics text point size and font, although the latter setting is effective only for exported graphics files. Interactive graphics have the Arial font by default, but this can, as before, be edited in the Gnuplot window by clicking the top-left frame icon.
Among minor interface revisions, note that the file type selector for graphics files has been moved to the "Options / Output and Retrieval" dialog, to accommodate the new items in Options / Graphics. Also, from the same dialog it is now possible to list the gradient of the criterion function in the output. This feature may have occasional diagnostic uses. There are also many other small fixes and enhancements too small to document. Hopefully this release is now reasonably clean, but bug reports will be gratefully received, as always.          

The Values dialogs have had a redesign. The "Refresh", "Clear" and "Next..." buttons are now located on their own panel below the scroll-pane. This saves the need to scroll the dialogs in the case when there are a large number of parameters in the model. 

This release clarifies the format for reporting cointegration (ADF and PP) tests. It also updates the LM test for parameter fixes, which are now computed with analytic derivatives whenever available. This is an extremely useful feature that allows the test of virtually any model specification against virtually any more general alternative. The procedure is very simple. Given any fitted model, extend its specification to the alternative of interest. Then, open the relevant Values dialog, and check the Fixed checkbox against the newly specified parameters, with their values set to zero. Next, select Actions / Compute Test Statistics / LM Tests of Parameter Restrictions, and hit OK, after checking out the cautionary message - note that the test will only be valid if it is computed at the restricted estimates, so these must not be changed. A way to ensure this is to check the "LM Tests of Parameter Restrictions" checkbox in Options / Tests and Diagnostics. Then re-estimate with the restrictions imposed, and the test is reported automatically. If restrictions are set up through the Model / Parameter Constraints dialog (check the "Restricted Estimation" radio button) the test will be conducted on these restrictions instead. This option is handy if some parameters need to be fixed under the alternative. 

Today's release features listing and plotting of partial correlograms. It is also possible to set the number of correlogram points (regular or partial) to be plotted, by setting the scrollbar in the Summary Statistics dialog. It also sorts out some problems with creating EPS graphics files, and there has been some redesign of dialogs. A problem with asymmetric (threshold) Garch has been sorted. These models need numerical derivatives to be selected for estimation and this was not happening automatically, as it should. Analytic volatility forecasts for asymmetric Garch models have also been corrected.     

Version 4.31, released today, only has a couple of new features. The Elliott-Rothenberg-Stock (1996) efficient tests of I(1), and also the Robinson-Lobato (1998) test of I(0), can now be computed through the Summary Statistics dialog. There is an option to include either the squares or the absolute values of regressors in a conditional variance equation. (Previously, absolute values was the only choice.) Otherwise, this release is mainly a consolidation of many minor fixes and improvements. The algorithm for DCC Garch has been improved. A new release of GnuDraw is incorporated, which fixes some issues related to graphing of confidence bands, and the 'filled' options for conditional variances, probabilities and similar plots. The 'fill' feature now works generally; previously the plots needed to have dates for the option to be implemented. As detailed in the last blog post, discrete data options now include the ordered probit and ordered logit models.

This is the latest of a series of releases during September reflecting our polishing of the Condor capability, and the fixing of various issues that came to light. Analytic derivatives for the skew-Student ML are now in place.
The sample can now be independently selected for each of the various program functions - estimation/simulation, plotting, editing and transformations, summary statistics and so forth. To save confusion, the current selection is now displayed at the top-left of each of the relevant dialogs, except for the estimation/simulation sample which appears as usual on the status bar.
A new capability in this release, though not major enough to justify incrementing the version number, is ordered probit and logit estimation. The models are parameterized in such a way as to make it straightforward to identify the model in the presence of empty categories.

A further bug-fix release posted today.

Whoops! The initial post of v4.30 was a dud, unfortunately. Some experimental code stubs were not removed, so everything was taking a year to optimize. Apologies, and please upgrade to v4.30.24-07-09 without delay. 

While it may not appear so, Version 4.30 is quite a major upgrade. A long-standing ambition to implement analytic derivatives for the search algorithm has now been largely realized. This should result in faster and more reliable optimization, especially for the larger and more complicated models. More accurate calculation of the Hessian matrix of the criterion function is a useful spinoff. The implementation is not yet complete and some models, most notably the Markov-switching class, EGARCH models, and the skew-Student and GED maximum likelihood estimators, still use the old method. We intend that the capability will be extended to all cases in future releases. Since our code has not yet been thoroughly tested in all situations, a fail-safe procedure compares numerical and analytic derivatives, at the default parameter values, at the start of each optimization. The numerical option is used in the event of a discrepancy. These changes should be seamless to the user, although a message in the output indicates that the numerical option has been used, for whatever reason. However, in case of an unanticipated problem a checkbox in the Options / Optimization and Run dialog allows the numerical option to be reinstated as the default.

Another innovation allows jobs to be run on the Condor HTC (high-throughput computing) system, on networks where this facility is installed. Monte Carlo jobs can be split into any number of parallel instances to be run simultaneously on otherwise idle workstations on a network. Aggregating these results gives the effect of running a single job on an arbitrarily fast processor. On a large university network, experiments could run hundreds of times faster than on a conventional desktop. This option is still experimental, and the current implementation is for Windows-based networks only. Feedback from users with access to a Condor installation will be most welcome. 

This release implements (finally) the White heteroscedasticity test, in LM and conditional moment variants. The opportunity has been taken to redesign the Options / Tests and Diagnostics dialog, which was already overcrowded and incapable of accommodating any more options. Now, there is a separate dialog for Diagnostic Tests, allowing ample space for new tests to be added in the future. The new dialog can be accessed with a button in the simplified Tests and Diagnostics dialog. However, it is compact enough to keep open alongside the Linear Regression and Dynamic Equation dialogs, and these  are equipped with new buttons to access it directly. The result is, hopefully, a more direct and intuitive model specification interface.    

These updates have fixed a problem with the latest code, and also restore a feature that got lost in the update to OxJapi2 - this is the ability to plot a series by choosing it from any list and pressing the "Plot" button. 

This update of v4.29 has an enhanced Model menu that includes quick links to the basic Model Manager functions, saving a model and loading from a list of available models. By default, up to 20 model names are displayed at a time, with the most recently loaded appearing first. This is different from the list in Model Manager, which can include up to 100 models listed by order of creation, although these can optionally be rearranged.  

Version 4.29 (Windows version) supports file drag-and-drop. To load a data set (or several data sets) simply drag them from Windows Explorer over the TSM window, and release. Settings (.tsm) files, model (.tsd) and tabulation files can be loaded in the same manner. 

There is also a change in the operation of error message display, following an invalid action by the user. The message box now has a "Continue" button that needs to be clicked to close it. Previously, the message box was displayed for two seconds and then closed automatically. The latter behaviour still applies to message boxes displaying simple information.     

There have been several recent updates, to fix various small problems that came to light with the new code. The current posting is hopefully now a stable version. Reports of any additional fixes needed will be appreciated!  

Two new features in this release.
1) The Monte Carlo module now automates the procedure of launching two or more identical batch jobs and aggregating the results after they complete. These are reported exactly as if a single large Monte Carlo experiment had been specified. If you have a dual-core or quad-core machine, this effectively provides a parallel processing capability. An experiment can be run in half or a quarter of the time (depending on how many processors you have) that would be taken on a single-processor machine. There is no limit to the number of simultaneous jobs, and they could be sent to other machines to extend the parallel capability indefinitely.
2) An estimation sample must normally represent a consecutive series, without missing observations, and the specified sample is truncated to ensure that missing observations are excluded. This is the appropriate behaviour for time series. For the analysis of cross-section data sets, it is now possible to specify that missing observations are simply filtered out of a specified sample, without truncating it. Since the retained observations need not be consecutive, this option is disabled if any dynamic modelling features are specified.
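The aggregation step in (1) is easy to illustrate. TSM itself is written in Ox, but the following Python sketch (the function names and the toy "experiment" are purely illustrative, not TSM code) shows how per-job counts, sums and sums of squares can be pooled so the result is reported as if one large experiment had been run:

```python
import numpy as np

def run_job(n_reps, seed):
    """One batch job: simulate n_reps replications of a statistic.
    Here the 'experiment' is just the sample mean of 50 N(0,1) draws."""
    rng = np.random.default_rng(seed)
    stats = rng.standard_normal((n_reps, 50)).mean(axis=1)
    return len(stats), stats.sum(), (stats ** 2).sum()

def aggregate(results):
    """Pool (count, sum, sum of squares) from parallel jobs, giving the
    same mean and variance as a single large experiment."""
    n = sum(r[0] for r in results)
    s = sum(r[1] for r in results)
    ss = sum(r[2] for r in results)
    mean = s / n
    var = ss / n - mean ** 2      # population-variance form, for brevity
    return n, mean, var

# Four 'parallel' jobs of 2500 replications each = one 10000-rep experiment
jobs = [run_job(2500, seed) for seed in range(4)]
n, mean, var = aggregate(jobs)
```

Because only sufficient statistics cross the job boundary, the jobs can run on separate cores, or separate machines, with no communication until the final step.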

There was a flaw in the last upload - an incorrect file was included; apologies. Only Linux users were seriously troubled, but an update is recommended to be on the safe side. Other small fixes are included.

Version 4.28 allows two or more data sets to be open at the same time. This feature takes advantage of the ample memory of modern computers to allow any number of projects to co-exist in the same session. To switch data sets, use the menu command Setup / Data Sets / Switch To..., which opens a popup menu with the list of sets currently in memory. Choosing a set from the list brings it to the 'foreground', while the current set takes its place in the background. All data sets open at closedown are automatically re-opened at the next start-up, provided the disk files have not been deleted or moved. Likewise, 'exported' settings files contain all the data sets currently open.

The status bar at the bottom of the TSM window now shows the current "Data Set", instead of "Data File" as previously. Data sets are named according to the name of the file containing them, excluding extension. This means that data files must have unique names if they are to be loaded at the same time. Files with the same name but a different type or storage location (path) are not treated as distinct.

TSM Version 4.27 introduces a recoded graphical user interface for Windows, making use of Java Swing technology. Up to now, TSM has used an ingenious public domain package called JAPI (Java Application Programming Interface) due to Merten Joost (www.japi.de), adapted for the Ox language, as OxJapi, by Christine Choirat and Raffaello Seri. This code was last updated in 2003, and used the original AWT version of the Java runtime environment. It had a few limitations, such as the lack of a "table" object, tabbed dialogs, and so forth.

Tim Miller of the University of Exeter has now comprehensively revised Joost's Java code, so that the package can be implemented in Java Swing. We call the new version OxJapi2. The new package is installed under Windows by running the installation wizard in the usual way. There is one additional wizard page, where users can choose between three "look and feel" options. The appearance of the new version is similar to the old one, apart from the different designs of dialog objects such as buttons and checkboxes. There is a new set of toolbar buttons, supporting the transparency attribute. The main new feature is a spreadsheet-style data editing dialog, which allows any number of series to be displayed in columns. The Windows file dialog is a lot neater and more informative; this was another weak point of the original OxJapi. The installation program deletes any old OxJapi files and icon files before copying the new ones, so upgrading an existing installation should be seamless. So far only a Windows compilation is available, but Linux users can run TSM 4.27 with the original OxJapi code. All the required files are still available in the zip file distribution. Please see readme.txt and Appendix A of the documentation for details.

Note 1: A number of code rewrites have been called for to make TSM work properly with Java Swing, which does a number of things differently from the old AWT Java. I hope nothing has been overlooked, but if any part of the GUI is found not to be working as it should, kindly notify me. Reversion to the old technology is not difficult; see Appendix A of the documentation.

Note 2: We encountered a display corruption issue that can cause problems for TSM 4.27 when running the JRE (V6, Update 7) with an nVidia graphics card. This appears to affect Swing applications, but not AWT. To see if you have the problem on your system, open the Java Control Panel, and then the Windows Task Manager. Put the Task Manager on top of the Java window, then give the latter window the focus (click its title bar). Check whether the Java window "bleeds through" the Task Manager window, which has the "always on top" attribute. If you get this problem, reduce the hardware acceleration of your card by 50%. (DON'T update the nVidia driver - that actually caused us more trouble!)

The update posted on 14-10-08 contained a small revision to the settings load sequence, and regrettably introduced a small bug. This is fixed in today's update.

Two small enhancements of data analysis capability in today's release. The Hodrick-Prescott filter can now be computed for a series in the data transformation dialog, and bivariate histograms and kernel densities for pairs of data series can be plotted in the data graphics dialog.
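For readers unfamiliar with it, the Hodrick-Prescott filter splits a series into a smooth trend and a cycle by penalizing the second differences of the trend. TSM's implementation is in Ox; the following Python sketch (a bare-bones illustration, not TSM code) shows the standard closed-form solution:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: decompose y into trend and cycle.
    Solves (I + lam * D'D) tau = y, where D is the (n-2) x n matrix
    taking second differences; lam = 1600 is conventional for
    quarterly data."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend
```

A useful sanity check: an exactly linear series has zero second differences, so its trend is the series itself and the cycle is zero.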

This update adds two features that may be useful primarily when working with independent samples (cross-section) data. First, a data bootstrap is implemented, with bootstrap samples drawn randomly with replacement from any designated range of the data set. Note that all the variables (dependent and exogenous) are drawn jointly in this case. (With the residual bootstrap, exogenous variables are held fixed in repeated sampling.)
Second, it is now possible to select a sample by means of indicators, so any subset of the data can be drawn, not just consecutive sequences. For example, given a dummy for female respondents in an employment survey, a regression could be run on the female members of the sample alone, without requiring any sorting of the data, using a copy of the dummy series to supply the sample indicators.
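Both ideas are simple to state in code. Here is a minimal Python sketch (illustrative only; TSM is written in Ox, and the variable names are invented) of row-wise resampling with replacement and of indicator-based subsample selection:

```python
import numpy as np

rng = np.random.default_rng(42)

# A small 'data set': dependent variable y and regressor x in columns,
# so resampling a row draws all variables jointly
data = np.column_stack([rng.standard_normal(100), rng.standard_normal(100)])

# Data bootstrap: whole rows drawn with replacement (unlike the residual
# bootstrap, where the exogenous variables stay fixed)
idx = rng.integers(0, len(data), size=len(data))
boot_sample = data[idx]

# Indicator-based selection: a 0/1 dummy picks out a subsample, e.g. the
# female respondents in a survey, with no sorting of the data required
female = rng.integers(0, 2, size=len(data)).astype(bool)
subsample = data[female]
```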
One further novelty is the facility to tint the results window with your favourite colour. This might make it easier to pick out the TSM window on a crowded desktop, or maybe it's just for fun. Make your selection by cycling through the choices in Options / General; your choice will be remembered. There are five preset shades (including the usual white), or you can specify your own colour by changing RGB values in the tsmgui4.h file.

These updates have a bit more work on the simulation and bootstrap implementation of panel data models. Also bugfixes in the criterion grid plotting option. 

The version posted today is a maintenance update which tidies up the panel data implementation. It should now be possible to estimate systems of panel equations with fixed effects, by OLS or IV, just like the usual linear system case. However, systems are not yet implemented for random effects.
Another new feature (unconnected with panels) is the ability to specify models which incorporate the random disturbance nonlinearly, in simulation exercises. Previously, to be simulated an equation had to have the disturbance (which could have been Gaussian, Student-t or other) simply added on. Now it can be incorporated in other ways, or transformed, by having it appear explicitly as a variable in a coded equation. For example, one could square a Gaussian disturbance to get a chi-squared, or exponentiate to get a log-normal distribution.
This new syntax cannot be used to estimate the model, because there is no feasible way to invert an arbitrary nonlinear equation numerically. Usually, however, it will be possible to construct a complementary version of the model for estimation, using the existing "residual" format.   
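The two transformations mentioned above are worth spelling out. Squaring a standard Gaussian disturbance yields a chi-squared variate with one degree of freedom, and exponentiating it yields a log-normal variate. A quick Python illustration (not TSM code):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)   # the Gaussian disturbance

chi2 = z ** 2          # chi-squared(1): mean 1
lognorm = np.exp(z)    # log-normal:     mean exp(1/2)
```

In TSM, the same effect is obtained by writing the disturbance explicitly as a variable in a coded equation, with the transformation applied in the formula.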
A number of bugs have been fixed, including a problem with the recently implemented Bierens specification test. It turns out that some other M-test statistics may also have been computed incorrectly in some circumstances. The effect was generally of small order, but recomputing any mission-critical M-statistics with this update is recommended.

Version 4.26 introduces panel data capability. Panel data sets can now be read, manipulated and analysed, using a data file format similar (but not identical) to the Arellano-Bond DPD package. The standard estimators are implemented, that is, OLS for fixed effects and GLS/ML for random effects, and also instrumental variables.  Some basic diagnostics such as the Breusch-Pagan test and Hausman specification test are implemented, as is simulation using Gaussian or bootstrapped disturbances. Specialized GMM methods for dynamic models will, hopefully, appear in a later release.
This is quite an extensive update. It has been thoroughly tested, but there is always the chance of a problem with existing code that has had to be modified. I can see no reason not to recommend installation, but in case a problem turns up, reinstall the latest release of Version 4.25 (19-04-08). This has the graphics fixes indicated in the previous blog entry.
The main panel data operations have also been verified against the Grunfeld data set, but not all implemented features have yet been tested in all combinations. This is still work in progress. Reports of suspected issues will be gratefully received.   
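For the curious, the fixed-effects ("within") estimator mentioned above is just OLS on group-demeaned data. A self-contained Python sketch (illustrative only; the function and variable names are my own, and TSM's implementation is in Ox):

```python
import numpy as np

def within_ols(y, X, groups):
    """Fixed-effects (within) OLS: demean y and X within each group,
    which sweeps out the unit effects, then run OLS on the residuals."""
    yd = np.asarray(y, dtype=float).copy()
    Xd = np.asarray(X, dtype=float).copy()
    for g in np.unique(groups):
        m = groups == g
        yd[m] -= yd[m].mean()
        Xd[m] -= Xd[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta

# Synthetic panel: 10 units x 20 periods, unit effects alpha_i, slope 2.0
rng = np.random.default_rng(1)
groups = np.repeat(np.arange(10), 20)
alpha = rng.standard_normal(10)[groups]
x = rng.standard_normal(200)
y = alpha + 2.0 * x + 0.1 * rng.standard_normal(200)
beta = within_ols(y, x.reshape(-1, 1), groups)
```

The demeaning step is what removes the fixed effects, so no dummies for the individual units need to be estimated explicitly.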

An obscure Gnudraw bug has turned up involving plots of likelihood contours. Until this can be fixed, a work-around is to replace the copy of tsmgnu4.oxo in your installation with the one from Version 4.22. (See the download page for zips of old versions). Doing this will of course lose some nice graphics features that have since been added, so rename the current version of  tsmgnu4.oxo, rather than delete it, to allow reinstatement for normal use.

This maintenance release fixes a backwards compatibility issue with v4.25 (29-02-2008), involving settings exported from earlier versions. The program could hang when displaying a data graphic.

Seasonal dummy generation has been enhanced so that "Quarter 1" is now the first quarter of the year, not the first observation in the file. The seasonal frequency is set automatically unless the data are undated.
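In other words, the dummies are now aligned to the calendar rather than to the start of the sample. A minimal Python sketch of the idea (illustrative only, not TSM code):

```python
import numpy as np

def quarterly_dummies(start_quarter, n):
    """Quarterly seasonal dummies aligned to the calendar: column j is 1
    for observations falling in quarter j+1 of the year, whichever
    quarter the sample happens to start in."""
    q = (np.arange(n) + (start_quarter - 1)) % 4   # 0..3 = Q1..Q4
    D = np.zeros((n, 4))
    D[np.arange(n), q] = 1.0
    return D

# A sample starting in Q3: the first observation loads on the Q3 dummy,
# not on "Quarter 1" as under the old behaviour
D = quarterly_dummies(3, 8)
```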

This release has an improved installer. The Start-in directory can now be selected by typing the path or browsing the folder tree, instead of choosing from preset options. If the package is reinstalled without uninstalling first, the current selections are retained, making upgrading simpler than ever.

A Gnudraw update, restoring variable font sizes, is also included.

Version 4.25 has had numerous mini-upgrades in the last few weeks. Most of the changes are internal, but a few new features have been added, as follows: Data Transformation (max and min functions); formula coding (indicator functions); Summary Statistics (quantiles option); critical value look-up (Kolmogorov-Smirnov table) and most recently, improved sample selection. Now, different subsamples can be set for different program functions, such as estimation, summary statistics, log-periodogram regressions, and plotting.

On the graphics front, TSM may currently be the only package to offer colour-coded scatter plots - you can tell which part of the sample a data point belongs to by its colour; red at the start of the sample, green in the middle, blue at the end. These plots can be substantially more informative than simple scatter plots. See for example this scatter of the log(DM/$) vs. log(/$) daily exchange rates for 1980-1996.
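The colour coding is a simple linear interpolation through the sample. A Python sketch of one way to do it (illustrative only; TSM's graphics are produced via Gnudraw, and this is not its code):

```python
def sample_colour(t):
    """Map relative sample position t in [0, 1] to an (r, g, b) triple:
    red at the start, green in the middle, blue at the end."""
    if t <= 0.5:
        s = t / 0.5          # red -> green over the first half
        return (1.0 - s, s, 0.0)
    s = (t - 0.5) / 0.5      # green -> blue over the second half
    return (0.0, 1.0 - s, s)

# Colours for an n-point scatter; a list like this could be passed,
# for instance, to matplotlib's scatter(x, y, c=colours)
n = 5
colours = [sample_colour(i / (n - 1)) for i in range(n)]
```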

Regrettably, a serious bug found its way into a release posted a couple of weeks ago. This resulted in an incorrect ADF statistic being reported. Apologies for this. Fortunately, the printed value could not be mistaken for the correct one. Please upgrade your installation with the current release if you need to. 

Since TSM is work in progress, new features may not work wholly as advertised at first. They often get tweaked as more experience is gained with them. If you have trouble with any new feature, or its documentation, a report will be much appreciated.