TAM 2010, Abstracts
The research paradigm in almost all disciplines has shifted markedly towards data-driven methods and research questions. This shift raises several dilemmas. First, huge amounts of scientific data are stored in isolated repositories, or even on researchers' desktop computers. This is problematic, as data accessibility is crucial for all research, regardless of focus and scale. At the same time, fundamental global challenges, such as improving health conditions or gaining accurate environmental information, for example on the Gulf of Mexico oil spill, depend on timely access to various and often unconnected data repositories. Thus, the problem lies not only in the accessibility of data, but also in the interconnectivity and interoperability of these resources.
The third important dimension is the curation and preservation of these data sets. Once created, they are part of the scientific knowledge base. Proper preservation extends beyond the data life cycle and saves costs, as the same data set need not be created twice.
Finally, the sheer volume of data compounds the challenge: how to manage data repositories at the petabyte scale?
Magnetic fields are ubiquitous in the universe: galaxies, stars, and planets are all magnetised. The magnetic fields of stars and galaxies are thought to arise as the result of an interplay between turbulent motions at scales much smaller than the object itself, and nonuniform rotation at the scale of the object.
Astrophysical flows are also characterised by large density and temperature contrasts. The effects of compressibility are, therefore, pronounced in these flows, and shocks are very common. Modelling the dynamo process that generates the magnetic fields relies on numerical solutions of the full set of magnetohydrodynamic equations under these conditions.
This poses a great computational challenge, as a wide range of spatial scales needs to be included: firstly, to resolve enough of the turbulent scales, and secondly, to have enough scale separation to see the generation of magnetic fields at much larger scales. This requires high-resolution simulations, which can only be performed as massively parallel tasks on supercomputers. Recently, we have studied stellar convective turbulence and dynamos with the PENCIL CODE, taking advantage of the HPC programmes at CSC (GC programme DYNAMO08) and DEISA/DECI (CONVDYN09), which have enabled us to reach parameter regimes not possible before.
While cholesterol is one of the crucial components of cells, it is also involved in a number of conditions such as cardiovascular diseases. The main factors in these conditions are low-density (LDL) and high-density (HDL) lipoproteins, whose functions include carrying cholesterol in the system. As the sizes of LDL and HDL are of the order of 10 nm, experimental studies of their properties are very difficult, and their structures, and thus functions, have remained largely unclear. Here we discuss how atomistic and coarse-grained simulations can be used to elucidate LDL and HDL properties. We also discuss how these molecular-scale simulations can be bridged to large-scale phenomena characterized by systems biology approaches.