TINKER-OPENMM Download & Installation
High-performance software for molecular modeling and simulation. Tinker-OpenMM runs molecular dynamics with the AMOEBA force field on a (modified) OpenMM GPU engine through the Tinker-OpenMM interface: you can run molecular dynamics simulations with the regular Tinker inputs and outputs while OpenMM serves as the backend engine.
Download and install the CUDA driver and library (this also installs the NVIDIA driver). CUDA usually installs into /usr/local/cuda. We advise version 8.0 or higher.
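To confirm the toolkit and driver are visible before building, a quick check such as the following helps (assuming the default install path):

/usr/local/cuda/bin/nvcc --version   # should report release 8.0 or higher
nvidia-smi                           # should list the GPU(s) and driver version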
Download OpenMM (a forked version of the Pande group code, with modifications). Tips for installing this version of OpenMM are below.
Download Tinker with the OpenMM interface:
The softcore-modified OpenMM source can be obtained by running: git clone https://github.com/TinkerTools/Tinker-OpenMM.git
Add the following environment variable export commands to ~/.bashrc, changing the prefixes to match your system.
CUDA libraries (you may want to change the CUDA version to 8 or 9):

export PATH=$PATH:/usr/local/cuda-8.0/bin/
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:/usr/local/cuda-8.0/lib:$LD_LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda-8.0/
# you may need to set your own gcc:
export LD_LIBRARY_PATH=/users/mh43854/gcc/lib64:$LD_LIBRARY_PATH
OpenMM directories (use the same prefix as in the installation below):

# this should be the same folder as used for CMAKE_INSTALL_PREFIX in the OpenMM build below
export LD_LIBRARY_PATH=/users/mh43854/Tinker-OpenMM/bin/lib:$LD_LIBRARY_PATH
export PATH=/users/mh43854/Tinker-OpenMM/bin/:$PATH
export OPENMM_PLUGIN_DIR=/users/mh43854/Tinker-OpenMM/bin/lib/plugins
export OPENMM_CUDA_COMPILER=/usr/local/cuda-8.0/bin/nvcc
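After adding these lines, reloading the shell configuration and inspecting the variables is a quick sanity check (the plugin directory will only be populated once Step 1 below completes):

source ~/.bashrc
echo $OPENMM_PLUGIN_DIR
ls $OPENMM_PLUGIN_DIR   # should list the CUDA and AMOEBA plugin libraries after installation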
Step 1 Install (forked) openMM
Make sure the environment variables above are set before starting. Unzip the provided OpenMM zip file. Make an empty build directory, and inside this build folder run
ccmake ../Tinker-OpenMM/ (note that ccmake should be at least version 3.8)
Press 't' to show the advanced options and add the -std=c++0x flag to the CXX flags (needed for the softcore modifications, which use tuples). Then press 'c' to configure and 'g' to generate a makefile.
Example options for ccmake: Cmakeoption.txt
Note that this should be done on a machine with a CUDA-compatible graphics card and a CUDA installation. Make sure OPENMM_BUILD_AMOEBA_CUDA_LIB, OPENMM_BUILD_AMOEBA_PLUGIN, OPENMM_BUILD_CUDA_LIB, OPENMM_BUILD_C_AND_FORTRAN_WRAPPERS, and OPENMM_BUILD_SHARED_LIB are ON, and that CUDA_TOOLKIT_ROOT_DIR points to a valid CUDA (version 8.0+) installation. Set and record the value of CMAKE_INSTALL_PREFIX (use it in the PATH settings above). Also, swig should be version 3.0.10 or higher.
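For reference, the same configuration can be set non-interactively with a plain cmake call from inside the build folder; this is only a sketch, and the install prefix and CUDA path are placeholders for your own:

cmake ../Tinker-OpenMM \
    -DCMAKE_INSTALL_PREFIX=$HOME/Tinker-OpenMM/bin \
    -DCMAKE_CXX_FLAGS="-std=c++0x" \
    -DOPENMM_BUILD_AMOEBA_CUDA_LIB=ON \
    -DOPENMM_BUILD_AMOEBA_PLUGIN=ON \
    -DOPENMM_BUILD_CUDA_LIB=ON \
    -DOPENMM_BUILD_C_AND_FORTRAN_WRAPPERS=ON \
    -DOPENMM_BUILD_SHARED_LIB=ON \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-8.0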
Still inside this build folder, run
make
make install
make PythonInstall   # only if you use Python to run OpenMM; not required for Tinker-OpenMM
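Optionally, the build can be sanity-checked before moving on (the Python check only applies if you ran make PythonInstall, and assumes a standard OpenMM installation):

make test                            # run the OpenMM unit tests from the build folder
python -m simtk.testInstallation     # check that the CUDA platform is available from Python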
Step 2 Build the modified Tinker dynamic_omm.x executable
First, you need to compile "regular" Tinker. Download Tinker by running git clone https://github.com/TinkerTools/Tinker.git
1) Compile FFTW by entering the /fftw/ folder and running ./configure --prefix=/Users/ponder/tinker/fftw --enable-threads (change the prefix to the absolute path of the current folder; the prefix option sets the installation directory), then make, then make install.
2) Compile Tinker in source/. More notes about compiling the CPU version of Tinker are here.
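Put together, the CPU build looks roughly like this (a minimal sketch, assuming the repository was cloned into Tinker/):

cd Tinker/fftw
./configure --prefix=$PWD --enable-threads   # installs FFTW under tinker/fftw
make
make install
cd ../source
# compile Tinker here following the CPU build notes linked above
# (e.g., copy the appropriate Makefile from the Tinker make/ folder and run make)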
Next, compile dynamic_omm.x, which lets you run MD simulations on GPUs. Copy the contents of Tinker/openMM into /source/ and modify the "Makefile" there: set TINKERDIR to the Tinker home directory (the folder that contains source/), and set OPENMMDIR to the home directory of the OpenMM installation, i.e. the folder containing the bin and lib directories (this is the CMAKE_INSTALL_PREFIX you set in Step 1, not the build folder you ran make inside). Uncomment the compile options for the appropriate platform (Intel is advised for performance reasons, and must match the FFTW built above; use gcc if you have trouble with Intel), and use the FFTW compiled under tinker/fftw. Then run "make" using this Makefile. This should create the dynamic_omm.x executable, which runs Tinker simulations on a GPU using CUDA, along with the other Tinker executables.
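In shell terms, the GPU build is roughly (a sketch; paths are placeholders for your own layout):

cp Tinker/openMM/* Tinker/source/
cd Tinker/source
# edit Makefile: set TINKERDIR to the Tinker home directory and OPENMMDIR to the
# install prefix from Step 1, then uncomment the block for your platform (Intel or gcc)
make
ls dynamic_omm.x   # the GPU executable should now exist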
See Lee-Ping's notes below for very detailed instructions.
Supported features:
- AMOEBA force fields and Tinker inputs (same as those for the CPU version)
- Bussi thermostat
- RESPA (2fs) integrator. 3fs is possible if using H mass re-partitioning.
- Cubic PBC
- 4-5 ns/day for the DHFR (JAC) benchmark system on GTX970 (cuda 7.5).
- Softcore vdw and scaled electrostatics for free energy calculations.
- NPT is possible with the Monte Carlo barostat ("barostat MONTECARLO" in the key file). However, the most reliable NPT is available on the CPU using the Nose-Hoover integrator/thermostat/barostat (1-fs time step).
- Centroid-based distance restraints and torsion restraints
Work in progress:
- AMOEBA virial and constant pressure
How to use it
Molecular dynamics simulations
After you source the bash file to set up the environment variables needed for Tinker-OpenMM, you can run dynamic_omm.x for MD simulations using regular Tinker files. An example source file is given below:
export PATH=$PATH:/usr/local/cuda-8.0/bin/
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:/usr/local/cuda-8.0/lib:$LD_LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda-8.0/
export LD_LIBRARY_PATH=/home/mh438/OMM/bin/lib:$LD_LIBRARY_PATH
export PATH=/home/mh438/OMM/bin/:$PATH
export OPENMM_PLUGIN_DIR=/home/mh438/OMM/bin/lib/plugins
# if you have multiple GPUs, select one
export CUDA_VISIBLE_DEVICES=0
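A typical run then looks like the following (a sketch: the file names and the source file name are placeholders, and the arguments, number of steps, time step in fs, print interval in ps, ensemble, and temperature, follow the same order as the CPU dynamic program):

source tinkergpu.rc   # hypothetical name for the environment file above
dynamic_omm.x dhfr.xyz -k dhfr.key 1000000 2.0 1.0 2 298.0 > dhfr.log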
More detailed Information on how to run simulations: https://biomol.bme.utexas.edu/tinkergpu/index.php/Tinkergpu:Tinker-tut
BAR free energy analysis (post-MD) can also be performed on the GPU using bar_omm.x.
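A minimal sketch of the workflow, assuming bar_omm.x follows the same two-step interactive prompts as Tinker's CPU bar program:

bar_omm.x   # step 1: create a .bar file from the two trajectories (follow the prompts)
bar_omm.x   # step 2: read the .bar file back and estimate the free energy difference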
An example Tinker key file for DHFR (14 ns/day on a GTX1070 with a 3-fs time step):
parameters amoebapro13.prm
integrator RESPA
# the RESPA integrator allows a 2-fs time step
HEAVY-HYDROGEN
# 3 fs with H mass repartitioning
thermostat BUSSI
#barostat montecarlo
a-axis 62.23
ewald
neighbor-list
vdw-cutoff 12.0
vdw-correction
ewald-cutoff 7.0
polar-eps 0.0001
polar-predict
archive
pme-grid 64 64 64
# the grid set automatically by Tinker/OpenMM is very conservative; if set manually,
# it should be at least 1.1 x the box size; allowed values are in tinker/source/kewald.f
#BONDTERM only
#ANGLETERM only
#UREYTERM only
#OPBENDTERM only
#TORSIONTERM only
#ANGTORTERM only
#VDWTERM only
#MPOLETERM only
#POLARIZETERM only
#verbose
# if "verbose" is uncommented above, Tinker will print both the CPU and GPU energies of the
# initial structure for comparison before MD starts; suggested for testing before long
# simulations. Also answer no if asked whether you want information sent back from the GPU.
To use the OPT method to accelerate polarization:
The code has been used on GTX 780, 970, 980, GTX 1070, 1080 (Ti), and GV100 (courtesy of NVIDIA) cards.
Installation notes from Lee-Ping Wang
Lee-Ping Wang (UC Davis) Last updated June 27, 2018
Building and installing Tinker CPU version
Tested configurations:
- OS: Ubuntu 16.04.3; Compiler: ICC 15.0.3 20150407; CUDA version: 9.1
- OS: CentOS 7; Compiler: GCC 4.8.5; CUDA version: 8.0
- The Tinker GitHub organization now hosts Tinker, Tinker-HP, and Tinker-OpenMM: https://github.com/tinkertools
- Tinker website: https://tinkertools.org
- Tinker (canonical code, version 8, OpenMP): https://dasher.wustl.edu/tinker/
- Tinker-HP (Piquemal lab, high-performance implementation of Tinker, MPI/OpenMP): http://www.ip2ct.upmc.fr/tinkerHP/
Reference: Harger, M., Li, D., Wang, Z., Dalby, K., Lagardère, L., Piquemal, J.-P., Ponder, J., and Ren, P.Y., Tinker-OpenMM: Absolute and Relative Alchemical Free Energies using AMOEBA on GPUs. Journal of Computational Chemistry, 2017, 38(23), 2047-2055.