The purpose of this wiki page is to develop a TRACE document for GRNmap. We will first develop our documentation on this wiki page and then create the Word/PDF documents.
This is a TRACE document (“TRAnsparent and Comprehensive model Evaludation”), which provides supporting evidence that our model presented in:
was thoughtfully designed, correctly implemented, thoroughly tested, well understood, and appropriately used for its intended purpose.
The rationale of this document follows:
and uses the updated standard terminology and document structure in:
Grimm, V., Augusiak, J., Focks, A., Frank, B. M., Gabsi, F., Johnston, A. S., ... & Thorbek, P. (2014). Towards better modelling and decision support: documenting model development, testing, and analysis using TRACE. Ecological Modelling, 280, 129-139.
- TRACE template Word document
- TRACE document example 1
- TRACE document example 2
- TRACE document example 3
Augusiak, J., Van den Brink, P. J., & Grimm, V. (2014). Merging validation and evaluation of ecological models to ‘evaludation’: a review of terminology and a practical approach. Ecological Modelling, 280, 117-128.
1. Problem Formulation
This TRACE element provides supporting information on: The decision-making context in which the model will be used; the types of model clients or stakeholders addressed; a precise specification of the question(s) that should be answered with the model, including a specification of necessary model outputs; and a statement of the domain of applicability of the model, including the extent of acceptable extrapolations.
- Decision-making context in which the model will be used:
- Types of model clients or stakeholders addressed:
- Precise specification of the questions that should be answered with the model
- Specification of necessary model outputs
- Statement of the domain of applicability of the model
- Extent of acceptable extrapolations
2. Model Description
This TRACE element provides supporting information on: The model. Provide a detailed written model description. For individual/agent-based and other simulation models, the ODD protocol is recommended as standard format. For complex submodels it should include concise explanations of the underlying rationale. Model users should learn what the model is, how it works, and what guided its design.
- What the model is:
- How it works:
- What guided its design:
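GRNmap's governing equations are implemented in MATLAB and should be documented by the coding team; as an illustrative placeholder, the sketch below renders in Python the general form of a sigmoidal gene regulatory network ODE, dx_i/dt = P_i·σ(Σ_j w_ij x_j + b_i) − d_i·x_i. The function and parameter names (`grn_rhs`, `production`, `degradation`) and the toy two-gene network are ours, not GRNmap's, and forward Euler stands in for whatever solver the model actually uses.

```python
import numpy as np

def grn_rhs(x, w, b, production, degradation):
    """Right-hand side of a sigmoidal GRN ODE:
    dx_i/dt = P_i * sigma(sum_j w_ij x_j + b_i) - d_i * x_i
    """
    sigma = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return production * sigma - degradation * x

def simulate(x0, w, b, production, degradation, t_end=10.0, dt=0.01):
    """Integrate the network with forward Euler; returns the trajectory."""
    steps = int(t_end / dt)
    traj = np.empty((steps + 1, x0.size))
    traj[0] = x0
    x = x0.astype(float)
    for k in range(steps):
        x = x + dt * grn_rhs(x, w, b, production, degradation)
        traj[k + 1] = x
    return traj

# Toy two-gene network: gene 1 activates gene 0's repressor input,
# gene 0 represses gene 1's activator input (weights are illustrative).
w = np.array([[0.0, -2.0],
              [2.0,  0.0]])
b = np.zeros(2)
traj = simulate(np.array([1.0, 0.0]), w, b,
                production=np.ones(2), degradation=np.full(2, 0.5))
```

With positive production and degradation rates, every gene's expression stays bounded and non-negative, which is one property the real model description should also guarantee.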
3. Data Evaluation
This TRACE element provides supporting information on: The quality and sources of numerical and qualitative data used to parameterize the model, both directly and inversely via calibration, and of the observed patterns that were used to design the overall model structure. This critical evaluation will allow model users to assess the scope and the uncertainty of the data and knowledge on which the model is based.
- Source of numerical and qualitative data used to parameterize the model:
- Direct sources of data
- Parameters estimated inversely
- Quality of the numerical and qualitative data used to parameterize the model:
- Observed patterns that were used to design the overall model structure
4. Conceptual Model Evaluation
This TRACE element provides supporting information on: The simplifying assumptions underlying a model’s design, both with regard to empirical knowledge and general, basic principles. This critical evaluation allows model users to understand that model design was not ad hoc but based on carefully scrutinized considerations.
- Simplifying assumptions underlying the model's design
- Empirical knowledge
- General, basic principles
5. Implementation Verification
This TRACE element provides supporting information on: (1) whether the computer code implementing the model has been thoroughly tested for programming errors, (2) whether the implemented model performs as indicated by the model description, and (3) how the software has been designed and documented to provide necessary usability tools (interfaces, automation of experiments, etc.) and to facilitate future installation, modification, and maintenance.
- Documentation of how the computer code was tested for programming errors:
- Documentation of how the model performs as indicated by the model description:
- Software design and documentation
- Usability tools (interfaces, automation of experiments)
- Facilitate future installation, modification, maintenance
Testing of GRNmap

GRNmap is tested by both its coders and the lab's data-analyst users. The coding team deliberately tries to break the code with a variety of tasks in order to harden and extend its functionality. The data analysts run their generated GRNs through the code and analyze the outputs. When errors appear while running the code, whether through the executable or MATLAB, the data analyst notifies the coding team of the specific bugs and where they lie within the code. For example, when running a few networks in June 2015, an error appeared on screen. After tracing the issue back to a specific line, the team found that the code had a problem reading a sheet in the input Excel file: some genes were missing data at duplicate time points, leaving some cells without a value. As a result of this error, the coding team is now rewriting the code to handle missing data values.
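The missing-data bug described above suggests one concrete fix strategy: when replicate columns share a time point, average the replicates while skipping blank cells, so a gene missing one replicate still gets a value from the remaining ones. GRNmap's actual input reader is written in MATLAB; the pandas sketch below (the function name `average_replicates` and the toy data are ours) only illustrates the approach.

```python
import numpy as np
import pandas as pd

def average_replicates(df):
    """Collapse replicate time-point columns, ignoring blank (NaN) cells.

    Columns are labeled with their time point, so two replicates at t=15
    both carry the label 15.  Transposing makes replicate columns rows,
    grouping by the index label pools replicates, and mean() skips NaNs.
    """
    return df.T.groupby(level=0).mean().T

# Toy expression sheet with duplicate time points and one blank cell.
data = pd.DataFrame(
    [[0.5, np.nan, 1.2],   # geneA: missing one t=15 replicate
     [0.8, 0.6,    1.0]],  # geneB: complete
    index=["geneA", "geneB"],
    columns=[15, 15, 30],  # duplicate time-point labels
)
clean = average_replicates(data)
```

Here `geneA` at t=15 keeps its single observed value (0.5) instead of propagating a NaN, while `geneB` gets the replicate mean (0.7).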
To document how the model was implemented and how thoroughly it has been tested, consult the GRNmap coding team or have them write this portion of the section.
Software Design and Documentation

The software's design and documentation provide the necessary usability tools and facilitate future installation, modification, and maintenance:
- Design (with the help of Eddie)
- Open-source code hosted on GitHub, which allows changes to be tracked across versions of the code
- Distributed both as a standalone executable and as MATLAB source
6. Model Output Verification
This TRACE element provides supporting information on: (1) how well model output matches observations and (2) how much calibration and effects of environmental drivers were involved in obtaining good fits of model output and data.
- How well model output matches observations
- How much calibration and effects of environmental drivers were involved in obtaining good fits of model output and data
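A standard way to quantify how well model output matches observations is a least-squares error summed over all genes and time points; whether GRNmap uses exactly this criterion in its fitting should be confirmed with the coding team. The sketch below is a minimal illustration, with NaN-skipping added so missing observations (as in the bug noted under Implementation Verification) do not poison the total.

```python
import numpy as np

def least_squares_error(model, observed):
    """Sum of squared differences over all genes and time points,
    skipping missing observations (NaN cells)."""
    diff = model - observed
    return np.nansum(diff ** 2)

# Toy 2-gene x 2-time-point comparison with one missing observation.
err = least_squares_error(
    np.array([[1.0, 2.0], [3.0, 4.0]]),       # model output
    np.array([[1.0, 2.5], [np.nan, 4.0]]),    # observed data
)
# err == 0.25: only the (2.0 - 2.5) term contributes; NaN is skipped.
```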
7. Model Analysis
This TRACE element provides supporting information on: (1) how sensitive model output is to changes in model parameters (sensitivity analysis), and (2) how well the emergence of model output has been understood.
- How sensitive model output is to changes in model parameters (sensitivity analysis):
- How well the emergence of the model output has been understood:
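The simplest version of the sensitivity analysis this element asks for is a one-at-a-time (local) analysis: bump each parameter by a small relative amount and record the relative change in a scalar model output (an elasticity). The helper below is an illustrative sketch, not GRNmap's analysis code; the function and argument names are ours.

```python
import numpy as np

def one_at_a_time_sensitivity(output_fn, params, rel_step=0.01):
    """Local sensitivity (elasticity) of a scalar model output:
    relative change in output per relative change in each parameter,
    perturbing one parameter at a time."""
    base = output_fn(params)
    sens = np.empty(params.size)
    for i in range(params.size):
        bumped = params.copy()
        bumped[i] *= (1.0 + rel_step)
        sens[i] = ((output_fn(bumped) - base) / base) / rel_step
    return sens

# Toy output f(p) = p0^2 * p1: elasticities are ~2 for p0 and 1 for p1.
sens = one_at_a_time_sensitivity(lambda p: p[0] ** 2 * p[1],
                                 np.array([2.0, 3.0]))
```

In practice `output_fn` would wrap a full GRNmap simulation plus a summary statistic (e.g. the fit error), so each elasticity reports how strongly that summary depends on one weight, production rate, or degradation rate.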
8. Model Output Corroboration
This TRACE element provides supporting information on: How model predictions compare to independent data and patterns that were not used, and preferably not even known, while the model was developed, parameterized, and verified. By documenting model output corroboration, model users learn about evidence which, in addition to model output verification, indicates that the model is structurally realistic so that its predictions can be trusted to some degree.
- How model predictions compare to independent data and patterns that were not used while the model was developed, parameterized, and verified: