Main Page

From tinkergpu
Revision as of 17:37, 6 April 2018 by Root (talk | contribs)



High-performance software for molecular modeling and design. Run molecular dynamics using the AMOEBA force field and the OpenMM GPU engine through the Tinker-OpenMM interface. You will be able to run molecular dynamics simulations with the regular Tinker inputs and outputs while OpenMM serves as the backend engine.


Download and install the CUDA driver and library (this will also install the NVIDIA driver). It usually installs into /usr/local/cuda. We are currently using CUDA 7.5, but 6.5 should also work.

Download OpenMM (forked from the Pande group, with modifications). Tips for installing this version of OpenMM are below.

Download Tinker with the OpenMM interface:



The softcore-modified OpenMM source can be obtained by running the command git clone

Environment variables:

Enter the following export commands into ~/.bashrc. Change the prefixes to match your system.

CUDA libraries:

export PATH=$PATH:/usr/local/cuda-7.5/bin/
export LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:/usr/local/cuda-7.5/lib:$LD_LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda-7.5/

OpenMM directories (use the same prefix as in the installation below):

export LD_LIBRARY_PATH=/users/mh43854/openMMstable4/bin/lib:$LD_LIBRARY_PATH
export PATH=/users/mh43854/openMMstable4/bin/:$PATH   # same folder as used for CMAKE_INSTALL_PREFIX in the OpenMM build below
export OPENMM_PLUGIN_DIR=/users/mh43854/openMMstable4/bin/lib/plugins
export OPENMM_CUDA_COMPILER=/usr/local/cuda-7.5/bin/nvcc

Step 1: Install OpenMM softcore

Make sure to set the environment variables above before starting. Unzip the provided OpenMM zip file, make an empty build directory, and inside this build folder run ccmake against the unpacked source directory.

Press 't' to show the advanced options and add the -std=c++0x flag to the CXX flags (needed for the softcore modifications, which use tuples). Then press 'g' to generate a makefile.

Note that this should be done on a machine with a CUDA-compatible graphics card and a CUDA installation. Make sure OPENMM_BUILD_AMOEBA_CUDA_LIB, OPENMM_BUILD_AMOEBA_PLUGIN, and OPENMM_BUILD_CUDA_LIB are ON, and that CUDA_TOOLKIT_ROOT_DIR points to a valid CUDA (version 7.5+) installation. Set and record the option under CMAKE_INSTALL_PREFIX (it is used in the PATH above).
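The configure steps above can be sketched as follows. The source and install paths are placeholders (substitute your own), and the non-interactive cmake call is an assumed equivalent to setting the same options inside the ccmake session:

```shell
# Placeholder paths: ~/openmm-softcore (unzipped source), ~/openMMstable4/bin (install prefix).
mkdir build && cd build
ccmake ~/openmm-softcore    # press 't', add -std=c++0x to the CXX flags, press 'g'

# Or pass the same options directly to cmake instead of using the ccmake UI:
cmake ~/openmm-softcore \
    -DCMAKE_INSTALL_PREFIX=$HOME/openMMstable4/bin \
    -DCMAKE_CXX_FLAGS="-std=c++0x" \
    -DOPENMM_BUILD_AMOEBA_CUDA_LIB=ON \
    -DOPENMM_BUILD_AMOEBA_PLUGIN=ON \
    -DOPENMM_BUILD_CUDA_LIB=ON \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-7.5
```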

Still inside this build folder, run

make install 
make PythonInstall   (only needed if you use Python to run OpenMM; not required for Tinker-OpenMM)

Step 2: Build the modified Tinker dynamic_omm.x executable

Download Tinker by running git clone

First, set the CC environment variable to icc and F77 to ifort, and make sure icc is in your PATH (you can also use gcc instead of the Intel compilers). Next, compile FFTW: enter the /fftw/ folder and run ./configure --prefix=/Users/ponder/tinker/fftw --enable-threads (changing the prefix to the absolute path of the current folder), then make, then make install. The --prefix option changes the installation directory; it defaults to /usr/lib/ but can be changed to allow installation without root access.
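The FFTW step above can be summarized as the following command sequence; the Tinker tree location ($HOME/tinker) is a placeholder for wherever you cloned it:

```shell
# Build FFTW inside the Tinker tree so no root access is needed.
export CC=icc F77=ifort            # or CC=gcc F77=gfortran if not using Intel
cd $HOME/tinker/fftw
./configure --prefix=$HOME/tinker/fftw --enable-threads
make
make install                       # installs under the --prefix, not /usr/lib/
```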

Copy /make/Makefile into /source/. Modify this Makefile so that TINKERDIR is set to the Tinker home directory, uncomment the appropriate platform (Intel is advised for performance reasons, and it must match the FFTW compiler chosen above), and then run make.

Copy the contents of /openmm/ into /source/, modifying the Makefile to set TINKERDIR to the base directory of the softcore-modified Tinker (the folder that contains /source/) and OPENMMDIR to the home directory of the OpenMM installation containing the bin and lib directories (this is the CMAKE_INSTALL_PREFIX you set in Step 1, not the build folder you ran make inside). Uncomment the compile options for the appropriate platform (again, Intel), modify the LIBS line as shown above, and run make. This should produce a dynamic_omm.x executable, used to run Tinker simulations on a GPU using CUDA, as well as the standard Tinker executables.
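Step 2 as a whole can be sketched as the sequence below, assuming the Tinker tree at $HOME/tinker and the Step 1 install prefix at $HOME/openMMstable4/bin (both placeholders):

```shell
cd $HOME/tinker
cp make/Makefile source/
cd source
# Edit Makefile: set TINKERDIR=$HOME/tinker, uncomment the Intel platform lines.
make                               # builds the standard CPU executables

cp ../openmm/* .                   # brings in the Tinker-OpenMM Makefile and sources
# Edit this Makefile: set TINKERDIR=$HOME/tinker, OPENMMDIR=$HOME/openMMstable4/bin,
# uncomment the Intel platform lines, and adjust the LIBS line.
make                               # builds dynamic_omm.x, the GPU executable
```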



Features:

  • AMOEBA force fields and Tinker inputs (same as those for the CPU version)
  • Bussi thermostat
  • RESPA (2 fs) integrator; 3 fs is possible with H mass repartitioning
  • Cubic PBC
  • 4-5 ns/day for the DHFR (JAC) benchmark system on GTX970 (cuda 7.5).
  • Softcore vdw and scaled electrostatics for free energy calculations.
  • NPT is possible with the Monte Carlo barostat ("BAROSTAT MONTECARLO" in the key file). However, the most reliable NPT is available on the CPU using the Nose-Hoover integrator/thermostat/barostat (1 fs time step).
  • Centroid-based distance restraints and torsion restraints

Work in progress:

  • AMOEBA virial and constant pressure


How to use it

Information on how to run simulations:

Example Tinker key file for DHFR (14 ns/day on a GTX1070 with a 3 fs time step):

parameters amoebapro13.prm
integrator              RESPA         #respa integrator allows 2-fs time step;
HEAVY-HYDROGEN                        # 3fs with H mass repartitioning
thermostat              BUSSI
#barostat montecarlo
a-axis 62.23 
vdw-cutoff              12.0
ewald-cutoff            7.0
polar-eps               0.0001
pme-grid 64 64 64  # the grid set automatically by Tinker/OpenMM is very conservative; if set manually, it should be at least 1.1 x the box size, and allowed values are listed in tinker/source/kewald.f
#VDWTERM  only
# If the line above is uncommented (with "verbose" enabled), Tinker will print both the CPU and GPU energy of the initial structure for comparison before MD starts. Suggested for testing before long simulations.
# Also answer no if asked whether you want information sent back from the GPU.
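With a key file like the one above, a run follows the standard Tinker dynamic command line (number of steps, time step in fs, dump interval in ps, ensemble selector, temperature in K); the file names here are placeholders:

```shell
# 1,000,000 steps at 3 fs, save coordinates every 10 ps, ensemble 2 (NVT), 298 K.
./dynamic_omm.x dhfr.xyz -k dhfr.key 1000000 3.0 10.0 2 298.0
```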


We have run the code on the GTX780, 970, 980, GTX1070, and GTX1080. The GTX1070 gives the best performance/price ratio for Tinker-OpenMM.


Harger, M., D. Li, Z. Wang, K. Dalby, L. Lagardere, J.P. Piquemal, J. Ponder, and P.Y. Ren, Tinker-OpenMM: Absolute and Relative Alchemical Free Energies using AMOEBA on GPUs. Journal of Computational Chemistry, 2017. 38(23): p. 2047-2055.