Revision as of 08:16, 13 September 2017 by Root


TINKER-OPENMM

High-performance software for molecular modeling and design. TINKER-OpenMM runs molecular dynamics with the AMOEBA force field on the OpenMM GPU engine through a TINKER interface: you can run molecular dynamics simulations with the regular TINKER inputs and outputs while OpenMM serves as the backend engine.

Download

Download and install the CUDA driver and library (this also installs the NVIDIA driver); it usually installs into /usr/local/cuda. We currently use CUDA 7.5, but 6.5 should also work.

Download OpenMM (forked from the Pande group, with modifications). Tips for installing this version of OpenMM: http://biomol.bme.utexas.edu/wiki/index.php/Research_ex:Openmm#Installation

https://github.com/pren/tinker-openmm

Download TINKER with the OpenMM interface:

https://github.com/jayponder/tinker
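The two git downloads above can be sketched as follows (the CUDA toolkit itself comes from NVIDIA's installer; the clone destinations are arbitrary):

```shell
# Clone the modified OpenMM fork and TINKER with the OpenMM interface.
# URLs are the ones listed above; where you clone them is up to you.
git clone https://github.com/pren/tinker-openmm.git
git clone https://github.com/jayponder/tinker.git
```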

Installation

Intel compilers are recommended.

Compile tinker/source as usual.

Compile dynamic_omm.x by copying the files from tinker/openmm/ into tinker/source. You need to modify the Makefile so that OPENMMDIR points to your own OpenMM installation directory, e.g.:

OPENMMDIR = /users/mh43854/openMMstable2/bin
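The copy-and-build procedure above can be sketched as follows; TINKER_DIR and the make target name are assumptions, so check the Makefile in your TINKER version for the actual target:

```shell
# Sketch of building dynamic_omm.x (paths and target name are assumptions;
# adjust them for your checkout and your Makefile).
TINKER_DIR=$HOME/tinker                      # assumed TINKER checkout location
cp "$TINKER_DIR"/openmm/* "$TINKER_DIR"/source/
cd "$TINKER_DIR"/source
# Edit Makefile so OPENMMDIR points at your OpenMM install prefix, then:
make dynamic_omm.x
```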
Example environment setup file (to be sourced) for running dynamic_omm:
export PATH=$PATH:/usr/local/cuda-7.5/bin/
export LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:/usr/local/cuda-7.5/lib:$LD_LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda-7.5/
export LD_LIBRARY_PATH=/users/mh43854/openMMstable2/bin/lib:$LD_LIBRARY_PATH
export PATH=/users/mh43854/openMMstable2/bin/:$PATH
export OPENMM_PLUGIN_DIR=/users/mh43854/openMMstable2/bin/lib/plugins
export OPENMM_CUDA_COMPILER=/usr/local/cuda-7.5/bin/nvcc
export CUDA_VISIBLE_DEVICES=0
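Before launching a run, it can help to sanity-check that the variables above point at real directories. A minimal sketch (check_dir is a hypothetical helper, not part of TINKER or OpenMM; the variables are the ones exported above):

```shell
# Sanity-check the TINKER-OpenMM environment variables set above.
# check_dir is a small hypothetical helper: it only tests that a
# directory exists and reports the result.
check_dir() {
  if [ -d "$1" ]; then
    echo "ok: $2 -> $1"
  else
    echo "missing: $2 -> $1"
  fi
}
check_dir "$CUDA_HOME" CUDA_HOME
check_dir "$OPENMM_PLUGIN_DIR" OPENMM_PLUGIN_DIR
```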

Now you can run dynamic_omm.x just like the usual dynamic.x, but it will use the GPU instead of the CPU.
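For example, a run might look like the following; dhfr.xyz and dhfr.key are hypothetical input names, and the trailing arguments are the usual dynamic.x ones (number of steps, time step in fs, output interval in ps, ensemble, temperature):

```shell
# Hypothetical example: 1,000,000 steps at 2 fs (2 ns total),
# coordinates saved every 1 ps, canonical ensemble (2 = NVT) at 298 K.
./dynamic_omm.x dhfr.xyz -k dhfr.key 1000000 2.0 1.0 2 298.0
```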

Features

Support:

  • AMOEBA force fields and TINKER inputs (same as those for the CPU version)
  • Bussi thermostat
  • RESPA integrator with a 2 fs time step; 3 fs is possible with H mass repartitioning
  • Cubic PBC
  • 4-5 ns/day for the DHFR (JAC) benchmark system on a GTX 970 (CUDA 7.5)
  • Softcore vdW and scaled electrostatics for free energy calculations
  • NPT is possible with the Monte Carlo barostat ("barostat montecarlo" in the key file). However, the most reliable NPT is available on the CPU using the Nose-Hoover integrator/thermostat/barostat (1 fs time step)
  • Centroid-based distance restraints and torsion restraints
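As a rough sanity check of the benchmark figure above, throughput relates to step rate and time step; the step rate below is an assumed value chosen to land near the quoted 4-5 ns/day:

```shell
# Back-of-the-envelope ns/day from step rate and time step (values assumed).
steps_per_sec=27      # assumed GPU step rate, not a measured number
dt_fs=2               # RESPA time step in fs
awk -v s="$steps_per_sec" -v dt="$dt_fs" \
    'BEGIN { printf "%.1f ns/day\n", s * dt * 86400 / 1e6 }'
```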

Work in progress:

  • AMOEBA virial and constant pressure

Example tinker key file

parameters amoebapro13.prm
integrator              RESPA
#the respa integrator allows a 2 fs time step; 3 fs with H mass repartitioning
thermostat              BUSSI
barostat montecarlo
ewald
neighbor-list
vdw-cutoff              12.0
ewald-cutoff            7.0
polar-eps               0.00001
polar-predict
archive
#pme-grid 72 72 72 
#grid can be set automatically by TINKER/OpenMM; if set manually, it should be at least 1.1 x box size and allowed values are in kewald.f
a-axis    56.050244
#BONDTERM only
#ANGLETERM only
#UREYTERM  only
#OPBENDTERM  only 
#TORSIONTERM  only
#ANGTORTERM  only
#VDWTERM  only
#MPOLETERM  only
#POLARIZETERM  only
#verbose
#if "verbose" above is uncommented, TINKER will print both the CPU and GPU energies of the initial structure for comparison before MD starts. Suggested for testing before long simulations.
#Also answer no if asked whether you want the information from the GPU sent back.

Hardware

We have run the code on the GTX 780, 970, and 980. Testing on the GTX 1070 is in progress.