WG-10 - Discussion on Standards for Continuum Models

ErdemirA 16:55, 8 February 2010 (EST)

I may be repeating what some of you have talked about before, but it seems like we should continue this discussion.

I was not able to attend Rob's presentation on Feb 8 but I had a chance to look at the slides. He definitely voiced my concerns and those of some of my collaborators. We have been suffering from similar problems when conducting macroscale biomechanics simulations. When we try to couple multiple scales and/or physical domains, the problem gets more convoluted, simply because of the need to glue analysis types and software packages together, which are sometimes not that flexible. I certainly see open access (and particularly open source) tools having significant value for sharing models. It is possible to provide models in proprietary forms (both in terms of model data and the solver which simulates the physics); one needs to know what drives the model and what outputs need to be returned for interactions (good documentation, indeed). Nonetheless, the interaction of such models with other models and simulation processes is rather clumsy and relies on accessibility. For example, how good is it if you give me a Word document that I cannot open?

I relate dissemination of models and relevant information to their intended use:

  1. Model as a stand-alone entity. In this case we need (I believe this fits Rob's context):
    1. computational implementation of the physics that drives the model (solver): For people to run analyses immediately, it will be helpful to provide the solver as an executable (or cross-platform libraries) and/or in source form. If the solver cannot be distributed (due to licensing issues), its version should be documented along with information on where to get it.
    2. data that establish the model itself (model parameters): geometric representation, constitutive representations, etc., potentially in a publicly accessible form, e.g. HDF5 (http://www.hdfgroup.org); see the sketch after this list. If the data format is proprietary, information on the software that can be used to access the data should be provided (version and means to acquire).
    3. data used to run the model (simulation parameters): these are solver settings, e.g. tolerance for convergence, potentially in a publicly accessible form.
    4. input variables: Variables that are needed to simulate a certain condition with the model, with the possibility of customization. A sample set should be provided, possibly in a publicly accessible form.
    5. output variables: Results that are of interest to the user, with the possibility of customization. A sample set should be provided, possibly in a publicly accessible form.
    6. documentation: This should be sufficient to enable someone to directly run a simulation with the whole package. It should also inform the user about potential pitfalls when changing input conditions (a correct-use clause; I guess this relates to the next item). Minimum required documentation can be standardized.
    7. model certification: This information should establish confidence in the model (verification, validation, reproducibility). It does not need to be complete but should have adequate information for someone to judge the boundaries of the model. Minimum required information for model certification can be standardized.
  2. Model interacting with other models. In this case some requirements of model sharing may change:
    1. computational implementation of the physics that drives the model (solver): If the coupling between models is weak, e.g. based on input-output data flow, the solver can be provided as an executable; however, clumsy file-based scripting is then needed to move data from one analysis to another. If the solver is provided in source form or as a library, the whole scripting and analysis can use data structures that can be converted into one another. This expedites and streamlines the simulation process but requires that the solver's data structures be described in the documentation. Another possibility is strong coupling, where the equations of different scales, domains, etc. are solved concurrently with one solver (which may require too much effort from the analyst).
    2. data that establish the model itself (model parameters): same as above.
    3. data used to run the model (simulation parameters): same as above.
    4. input variables: This may represent the entry point of data flow when coupling the simulation to another. Examples of how this can be done need to be provided (see the weak-coupling sketch after this list).
    5. output variables: This may represent the exit point of data flow when coupling the simulation to another. Examples of how this can be done need to be provided.
    6. documentation: This should be sufficient to enable someone to combine the simulation with another analysis. It should also inform the user about potential pitfalls when changing input conditions (a correct-use clause; I guess this relates to the next item). Minimum required documentation can be standardized.
    7. model certification: This information should establish confidence in the model when coupled to a different analysis (verification, validation, reproducibility). It does not need to be complete but should have adequate information for someone to judge the boundaries of the model. Minimum required information for model certification can be standardized.
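
To make items 1.2 through 1.5 concrete, here is a minimal sketch (in Python, using h5py) of how model parameters, simulation parameters, and sample input/output variables could be packaged together in one publicly accessible HDF5 file. All group/dataset names and values below are hypothetical illustrations, not a proposed standard layout.

```python
# A minimal sketch of packaging a model in HDF5; the layout is illustrative.
import h5py
import numpy as np

with h5py.File("knee_model.h5", "w") as f:
    # Item 1.2 - model parameters: geometry and constitutive representation
    geom = f.create_group("model/geometry")
    geom.create_dataset("nodes", data=np.random.rand(100, 3))                 # nodal coordinates
    geom.create_dataset("elements", data=np.zeros((50, 4), dtype=np.int64))   # element connectivity
    const = f.create_group("model/constitutive")
    const.attrs["law"] = "neo-hookean"
    const.attrs["shear_modulus_MPa"] = 0.5

    # Item 1.3 - simulation parameters: solver settings
    sim = f.create_group("simulation")
    sim.attrs["convergence_tolerance"] = 1e-6
    sim.attrs["max_iterations"] = 100

    # Items 1.4 and 1.5 - sample input and output variable sets
    f.create_dataset("inputs/applied_load_N", data=np.array([100.0]))
    f.create_dataset("outputs/peak_contact_pressure_MPa", data=np.array([2.3]))
```

A self-describing container like this lets a recipient inspect the model with any HDF5 viewer, without the originating software.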
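Similarly, for item 2.1 and the entry/exit points in items 2.4 and 2.5, here is a minimal sketch of weak, file-based coupling between two solver executables. The executable names, file names, and fixed iteration count are hypothetical placeholders, not an endorsed coupling scheme.

```python
# A minimal sketch of weak (file-based) coupling: a driver script shuttles
# data between two stand-alone solver executables via text files.
import subprocess
import numpy as np

for iteration in range(10):  # fixed-point exchange between the two models
    # Run the first solver; it writes its output variables to a file.
    subprocess.run(["./fluid_solver", "--input", "fluid_in.txt",
                    "--output", "fluid_out.txt"], check=True)

    # Exit point of model A becomes entry point of model B:
    # read the traction output and rewrite it as the load input.
    traction = np.loadtxt("fluid_out.txt")
    np.savetxt("tissue_in.txt", traction)

    # Run the second solver with the mapped input.
    subprocess.run(["./tissue_solver", "--input", "tissue_in.txt",
                    "--output", "tissue_out.txt"], check=True)

    # Feed the deformed geometry back to the fluid model for the next pass.
    displacement = np.loadtxt("tissue_out.txt")
    np.savetxt("fluid_in.txt", displacement)
```

This is exactly the clumsy scripting described in item 2.1: it works, but every exchange passes through the file system and through ad hoc format conversions.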


Monday February 8, 2010 - Rob Kunz, Penn State University

Title: Discussion following Friday's IMAG presentation

On Friday February 5, I gave an IMAG web-brief entitled "OpenFOAM as a Model Sharing Environment for Macro-Microscale Biomedical Simulation". Nearly 20 folks participated on-line and/or by phone with the downloaded slides. Lots of good discussion and controversy! Several issues came up, and I attempt here to introduce each for further discussion:

1) Difference between definitions of a "model" at the macroscale (please add to this!)

Definition 1: Model=Physical Model. Complete specification of all of the components of how a system is modeled. Includes: Top-level definition of how discipline physics are modeled (e.g., fluid mechanics=Navier-Stokes, structural mechanics=elastic membrane), Detailed definition of how discipline physics are modeled (e.g., turbulence model, non-standard constitutive equations, deposition model, drag model, pre-stress, dimension reduction treatments), Interfaces between disciplines (e.g., interpolation at physical interfaces, numerical coupling), Boundary conditions. Also might include: Gridding treatments (e.g., unstructured, octree, overset), geometry and topology processing approaches (e.g., segmentation, truncation, 3-D/1-D interfacing). Enables a modeler to understand a simulation approach.

Definition 2: Model=Data Model. Complete set of mesh, boundary conditions, and constitutive properties, as a minimum, to uniquely define a particular simulation to be performed. Also might include: Numerical parameters (such as discretization strategy specification, timestep size, non-dimensionalization values), Interface information (code-code and grid-grid), Outputs (vector, scalar, convergence history), CAD file association. Enables a user to reproduce a simulation.
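
To make Definition 2 concrete, here is a minimal sketch of a "data model" as a machine-readable record (in Python; all field names and values are hypothetical illustrations of the components listed above, not a standard).

```python
# A minimal sketch of Definition 2: everything needed to uniquely
# reproduce one particular simulation, captured in a single record.
from dataclasses import dataclass, field

@dataclass
class DataModel:
    mesh_file: str                    # mesh
    boundary_conditions: dict         # e.g. {"inlet": {"velocity_m_s": 0.4}}
    constitutive_properties: dict     # e.g. {"blood": {"density_kg_m3": 1060}}
    # Optional extras mentioned in Definition 2:
    numerical_parameters: dict = field(default_factory=dict)  # timestep, discretization
    interfaces: dict = field(default_factory=dict)            # code-code, grid-grid
    outputs: list = field(default_factory=list)               # requested result fields
    cad_file: str = ""                                        # CAD file association

example = DataModel(
    mesh_file="artery.msh",
    boundary_conditions={"inlet": {"velocity_m_s": 0.4}, "outlet": {"pressure_Pa": 0.0}},
    constitutive_properties={"blood": {"density_kg_m3": 1060, "viscosity_Pa_s": 3.5e-3}},
    numerical_parameters={"timestep_s": 1e-3, "scheme": "second-order upwind"},
    outputs=["velocity", "wall_shear_stress"],
)
```

Note that nothing in such a record explains *why* these choices were made; that is the job of the physical model description in Definition 1.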


2) Difference between a format/database standard (like CGNS) and a software modeling system (like the many script-wrapped multidisciplinary toolkits developed for IMAG and across all computational physics modeling communities).

Drawing on 1): a format and/or database specification enables establishment, archival, and reproduction of a "data model". This is 'totally different' from a modeling framework, which enables interoperability of multidisciplinary simulation and is moving rapidly towards object-oriented modular software engineering with high-level scripting.


3) OpenFOAM as a stand-alone solver, vs. OpenFOAM as a library.

In the context of multidisciplinary simulation, the baseline version of OpenFOAM should be viewed as a library rather than a stand-alone solver. Interoperability of multidisciplinary simulation is moving ever towards high-level scripting and data that "defines itself". The enabling feature of OpenFOAM (and Deal.II and other libraries) is that, by virtue of their object-oriented framework, models developed within these paradigms would be interoperable with the standard OF library, parts of the OF library (say, if we selected another CSM solver - see slide 24 from the Feb. 5 brief), another CFD solver, or simply a (well instrumented) script. OpenFOAM is going to be around for a while, but that is not the point. The point is that sharing complete standalone codes is unacceptable, and sharing no software while relying only on documentation is unacceptable; sharing well-designed objects that can interface with driver scripts, other modules, and even other CFD codes is doable, and this is what an approach like OpenFOAM brings to the table. You "need" to have OpenFOAM as the main CFD solver only in the sense that it makes the most sense from an efficiency standpoint - as Eric mentioned, we do NOT use OpenFOAM as the CSM solver in our FSI, although we have developed OpenFOAM components to interface the CFD with the CSM.
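
To illustrate the library view, here is a minimal sketch of a high-level driver script coupling a CFD module and a CSM module through shared in-memory data structures. It is written in Python for brevity (the actual OpenFOAM interfaces are C++), and the class and method names are hypothetical stand-ins, not the OpenFOAM API.

```python
# A minimal sketch of solvers exposed as library objects: a driver loop
# exchanges data between modules directly, with no file-based hand-off.

class CFDModule:
    """Stand-in for a fluid solver exposed as a library object."""
    def advance(self, wall_displacement):
        # ... solve the flow on the deformed domain ...
        return {"wall_traction": [0.0]}  # placeholder output

class CSMModule:
    """Stand-in for a structural solver exposed as a library object."""
    def advance(self, wall_traction):
        # ... solve the structural response under the applied traction ...
        return {"wall_displacement": [0.0]}  # placeholder output

# Because both modules expose the same advance()/dict interface, either
# can be swapped for another solver without touching the driver loop.
fluid, structure = CFDModule(), CSMModule()
displacement = [0.0]
for step in range(100):  # time-marching FSI loop
    traction = fluid.advance(displacement)["wall_traction"]
    displacement = structure.advance(traction)["wall_displacement"]
```

The design point is the interface contract, not the particular solvers: a well-instrumented script, another CFD code, or a different CSM module can all plug into the same loop.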
