Tinkergpu:Openmm-install


TINKER-OPENMM Download & Installation

TINKER-OpenMM is high-performance software for molecular modeling and simulation: it runs molecular dynamics with the AMOEBA force field on a (modified) OpenMM GPU engine through the TINKER-OpenMM interface. You will be able to run molecular dynamics simulations with the regular TINKER inputs and outputs, using OpenMM as the backend engine.


Notice

  • The latest (>=8.7, summer 2019 and beyond) Tinker source and parameter files separate out "angle" and "anglep". Anglep means using projected in-plane angles at trigonal centers such as the amide bond. Early versions of Tinker did this projection automatically without "anglep"; the latest Tinker requires "anglep" explicitly in the parameter files. The latest amoebaxxx.prm files therefore have anglep instead of angle for angle parameters involved at trigonal centers. The new parameter files cannot be used with an old Tinker executable, or vice versa: do not mix parameter files and Tinker executables from different versions (see the illustrative sketch after this list).
  • Current Tinker-OpenMM may have segmentation-fault issues when using CUDA 10.2 on RTX cards with certain gcc/Linux distribution versions. CUDA 10.0 on CentOS 7.x and CUDA 10.2 on CentOS 8.x have been tested OK for RTX cards.
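
To illustrate the anglep change, here is a hedged sketch of the same parameter record in the old and new styles (the atom classes and force constants are made up for illustration):

 # pre-8.7 parameter files: one keyword for all angle bending terms
 angle     3  2  3    48.20   108.50
 # 8.7+ parameter files: angles at trigonal centers are marked explicitly
 anglep    3  2  3    48.20   108.50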


Download

Download and install the CUDA driver and library (this will install the Nvidia driver). It usually installs into /usr/local/cuda. We advise version 8.0 or higher.


Updated summer 2020:

Use the latest OpenMM: https://github.com/openmm/openmm

Latest Tinker 8.8: https://github.com/TinkerTools/tinker 


Download OpenMM (forked version from the Pande group, with modifications). Tips for installing this version of OpenMM are below.

https://github.com/TinkerTools/Tinker-OpenMM

Download Tinker with the OpenMM interface:

https://github.com/TinkerTools/Tinker


Installation

The softcore-modified OpenMM source can be obtained by running:

 git clone https://github.com/TinkerTools/Tinker-OpenMM.git


Environment variables (2019)

CUDA version: 10.0; GPU Card: Nvidia RTX2070 

We are using CentOS 7, CUDA 10, gcc 4.4.7. For CUDA, gcc, and OS compatibility, see https://docs.nvidia.com/cuda/archive/

 export PATH=/usr/bin:$PATH #cmake version 3.6.1
 export PATH=/home/liuchw/bin/bin/:$PATH #swig version 3.0.12
 export PATH=$PATH:/usr/local/cuda-10.0/bin/
 export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:/usr/local/cuda-10.0/lib:$LD_LIBRARY_PATH
 export CUDA_HOME=/usr/local/cuda-10.0/
 export LD_LIBRARY_PATH=/home/liuchw/Softwares/tinkers/Tinker-OpenMM/AMOEBA/binCU10.0/lib:$LD_LIBRARY_PATH
 export OPENMM_PLUGIN_DIR=/home/liuchw/Softwares/tinkers/Tinker-OpenMM/AMOEBA/binCU10.0/lib/plugins
 export OPENMM_CUDA_COMPILER=/usr/local/cuda-10.0/bin/nvcc

Environment variables:

Enter the following environment variable export commands into ~/.bashrc. Change the prefixes below to match your system.

CUDA libraries (you may want to update the CUDA version to 8 or 9):

export PATH=$PATH:/usr/local/cuda-8.0/bin/
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:/usr/local/cuda-8.0/lib:$LD_LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda-8.0/
#you may need to set your own gcc:
export LD_LIBRARY_PATH=/users/mh43854/gcc/lib64:$LD_LIBRARY_PATH 

openMM directories (use the same prefix as in the installation below):

export LD_LIBRARY_PATH=/users/mh43854/Tinker-OpenMM/bin/lib:$LD_LIBRARY_PATH
export PATH=/users/mh43854/Tinker-OpenMM/bin/:$PATH  # must be the same folder as used for CMAKE_INSTALL_PREFIX in the OpenMM make below
export OPENMM_PLUGIN_DIR=/users/mh43854/Tinker-OpenMM/bin/lib/plugins
export OPENMM_CUDA_COMPILER=/usr/local/cuda-8.0/bin/nvcc


Step 1 Install (forked) openMM 

Make sure to set the environment variables above before starting. Unzip (or clone) the OpenMM source. Make an empty build directory, and inside this build folder run ccmake (note that ccmake should be at least version 3.8). Press 't' to go into advanced options and add the -std=c++0x flag to the CXX flags (needed for the softcore modification, which uses tuples). Then press 'g' to generate a makefile.
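
A sketch of the commands, assuming the clone sits next to the build directory (adjust the paths to your layout):

 mkdir build && cd build
 ccmake ../Tinker-OpenMM/
 # press 'c' to configure, 't' for advanced options,
 # append -std=c++0x to CMAKE_CXX_FLAGS, then press 'g' to generate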

Example options for ccmake: File:Cmakeoption.txt

Note that this should be done on a machine with a CUDA-compatible graphics card and a CUDA installation. Make sure OPENMM_BUILD_AMOEBA_CUDA_LIB, OPENMM_BUILD_AMOEBA_PLUGIN, OPENMM_BUILD_CUDA_LIB, OPENMM_BUILD_C_AND_FORTRAN_WRAPPERS, and OPENMM_BUILD_SHARED_LIB are on, and that CUDA_TOOLKIT_ROOT_DIR points to a valid CUDA (version 8.0+) installation. Set and record the option under CMAKE_INSTALL_PREFIX (use it in the PATH above). Also, swig should be version 3.0.10 or higher.
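
Equivalently, the same options can be set non-interactively with plain cmake; a sketch (the install prefix and CUDA path are placeholders for your system):

 cmake ../Tinker-OpenMM \
     -DCMAKE_INSTALL_PREFIX=$HOME/Tinker-OpenMM/bin \
     -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.0 \
     -DCMAKE_CXX_FLAGS="-std=c++0x" \
     -DOPENMM_BUILD_AMOEBA_CUDA_LIB=ON \
     -DOPENMM_BUILD_AMOEBA_PLUGIN=ON \
     -DOPENMM_BUILD_CUDA_LIB=ON \
     -DOPENMM_BUILD_C_AND_FORTRAN_WRAPPERS=ON \
     -DOPENMM_BUILD_SHARED_LIB=ON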

Still inside this build folder, run

make
make install 
make PythonInstall (if you use python to run openmm; not required for tinker-openmm)


Step 2 Make the modified Tinker dynamic_omm.x executable


First, you need to compile "regular" Tinker. Download Tinker by running git clone https://github.com/TinkerTools/Tinker.git. Then: 1) compile FFTW by entering the fftw/ folder and running ./configure --prefix=/Users/ponder/tinker/fftw --enable-threads (changing the prefix to the absolute path of the current folder), then make, then make install. The prefix option sets the installation directory. 2) compile Tinker in source/. More notes about compiling the CPU version of Tinker are here.
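
For example, assuming the clone lives at ~/Tinker (a placeholder path):

 cd ~/Tinker/fftw
 ./configure --prefix=$HOME/Tinker/fftw --enable-threads
 make
 make install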

Next, compile dynamic_omm.x, which allows you to run MD simulations on GPUs. Copy the contents of Tinker/openmm into source/ and modify the Makefile there: set TINKERDIR to the Tinker home directory (the folder that contains source/), and set OPENMMDIR to the home directory of the OpenMM installation, containing the bin and lib directories (this is what you set as the install prefix in Step 1 of the installation, not the build folder you ran make inside). Uncomment the compile options corresponding to the appropriate platform (Intel is advised for performance reasons, and must match the FFTW built above; use gcc if you have trouble with Intel; use the FFTW compiled under tinker/fftw). Then run "make" using this Makefile. This should result in the creation of a dynamic_omm.x executable, which is used to run Tinker simulations on a GPU using CUDA, as well as the other Tinker executables.
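
As a sketch, the relevant Makefile settings might look like this (the paths are placeholders):

 # in Tinker/source/Makefile, after copying the Tinker/openmm contents there
 TINKERDIR = $(HOME)/Tinker              # Tinker home: the folder containing source/
 OPENMMDIR = $(HOME)/Tinker-OpenMM/bin   # the CMAKE_INSTALL_PREFIX from Step 1
 # uncomment the compiler block for your platform (intel or gcc), then run: make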

See Lee-Ping's note with very detailed instructions.

Features

Support:

  • AMOEBA force fields and tinker inputs (same as those for CPU version)
  • Bussi thermostat
  • RESPA (2fs) integrator. 3fs is possible if using H mass re-partitioning.
  • Cubic PBC
  • 4-5 ns/day for the DHFR (JAC) benchmark system on GTX970 (cuda 7.5).
  • Softcore vdw and scaled electrostatics for free energy calculations.
  • NPT is possible with the Monte Carlo barostat ("barostat MONTECARLO" in the key file). However, the most reliable NPT is available on CPU using the Nose-Hoover integrator/thermostat/barostat (1 fs time step).
  • centroid-based distance restraints and torsion restraints (see the key file sketch after this list)
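
A hedged key file sketch of those restraints, using the standard Tinker restraint keywords (the atom numbers and force constants are made up):

 group 1 -1 30                              # group 1 = atoms 1 through 30
 group 2 -31 60                             # group 2 = atoms 31 through 60
 restrain-groups 1 2 5.0 4.0 6.0            # keep the group centroids 4.0-6.0 Angstroms apart
 restrain-torsion 4 7 9 12 1.0 -60.0 60.0   # hold a dihedral within -60 to 60 degrees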

Work in progress:

  • AMOEBA virial and constant pressure


How to use it

Molecular dynamics simulations

After you source the bash file that sets up the environment variables needed for Tinker-OpenMM, you can run dynamic_omm.x for MD simulations using regular Tinker files. An example source file is given below:

export PATH=$PATH:/usr/local/cuda-8.0/bin/
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:/usr/local/cuda-8.0/lib:$LD_LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda-8.0/
export LD_LIBRARY_PATH=/home/mh438/OMM/bin/lib:$LD_LIBRARY_PATH
export PATH=/home/mh438/OMM/bin/:$PATH
export OPENMM_PLUGIN_DIR=/home/mh438/OMM/bin/lib/plugins

#if you have multiple GPUs, select one
export CUDA_VISIBLE_DEVICES=0

More detailed Information on how to run simulations: https://biomol.bme.utexas.edu/tinkergpu/index.php/Tinkergpu:Tinker-tut

BAR free energy analysis (post-MD) can also be performed on the GPU by using bar_omm.x.

An example Tinker key file for use with DHFR (14 ns/day on a GTX1070 at a 3 fs time step):

parameters amoebapro13.prm
integrator              RESPA         #respa integrator allows 2-fs time step;
HEAVY-HYDROGEN                        # 3fs with H mass repartitioning
thermostat              BUSSI
#barostat montecarlo
a-axis 62.23 
ewald
neighbor-list
vdw-cutoff              12.0
vdw-correction
ewald-cutoff            7.0
polar-eps               0.0001
polar-predict
archive
pme-grid 64 64 64  #grid set automatically by TINKER/OpenMM is very conservative; if set manually, it should be at least 1.1 x box size and allowed values are in tinker/source/kewald.f
#BONDTERM only
#ANGLETERM only
#UREYTERM  only
#OPBENDTERM  only 
#TORSIONTERM  only
#ANGTORTERM  only
#VDWTERM  only
#MULTIPOLETERM  only
#POLARIZETERM  only
#verbose
#if "verbose" above is uncommented, Tinker will print both the CPU and GPU energies of the initial structure for comparison before MD starts. Suggested for testing before long simulations.
#Also answer no if asked whether you want information sent back from the GPU.
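
With the key file above saved as, say, dhfr.key, a typical run might look like the following sketch (the arguments follow the usual Tinker dynamic order: number of steps, time step in fs, output interval in ps, ensemble, temperature):

 # 1,000,000 steps of 3 fs, write coordinates every 1 ps, NVT (2) at 298 K
 dynamic_omm.x dhfr.xyz -k dhfr.key 1000000 3.0 1.0 2 298.0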

To use the OPT method to accelerate polarization:

To use OPT in OpenMM, use the "POLARIZATION OPT" keyword in your Tinker key file. This will default to the OPT3 method. Tinker will also take POLARIZATION OPTx, where x = 1, 2, 3, or 4, to get different levels of OPT, but OpenMM may be hardcoded to do OPT3. Perhaps Andy should comment on this, as he is the one who actually wrote the OpenMM code. Use of OPT with OpenMM will give roughly 50-100% greater overall MD speed compared to iterative convergence to 10^-5 or 10^-6. However, it is still under investigation whether, and by how much, OPT perturbs things like free energy results. The observation so far is that the difference between SCF-converged and OPT results is usually small but outside the statistical error.
OPT reference: Simmonett et al., J. Chem. Phys. 2016 Oct 28; 145(16): 164101
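
In the key file this is a single line; a sketch:

 polarization OPT        # defaults to OPT3
 #polarization OPT4      # explicit level; OpenMM may still run OPT3 (see the note above)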


Hardware

The code has been used on GTX780, 970, 980, GTX1070, 1080 (Ti), and GV100 (courtesy of Nvidia).

As of Jan 2019, we have also tested an RTX2080; it is about 15% faster than a GTX1080 Ti in our hands. You may use CUDA 10.0 to support the Turing architecture and change some of the ccmake options (see above).

Node with multiple GPU cards

Use the first line to fix the device order; otherwise you may end up with multiple jobs submitted to the same card:

export CUDA_DEVICE_ORDER=PCI_BUS_ID
export CUDA_VISIBLE_DEVICES=0  # or 1, 2, 3 if you have 4 cards
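
For example, to keep two jobs on separate cards (the job file names are placeholders):

 # shell 1
 export CUDA_DEVICE_ORDER=PCI_BUS_ID
 export CUDA_VISIBLE_DEVICES=0
 dynamic_omm.x job1.xyz -k job1.key 1000000 2.0 1.0 2 298.0
 # shell 2
 export CUDA_DEVICE_ORDER=PCI_BUS_ID
 export CUDA_VISIBLE_DEVICES=1
 dynamic_omm.x job2.xyz -k job2.key 1000000 2.0 1.0 2 298.0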


Notes on CentOS 8.1 

GNU compilers: gcc/gfortran/c++ version 8.3; cmake version: 3.17; cuda version: 10.2

Make the following change in Tinker-OpenMM/CMakeLists.txt:

CMAKE_MINIMUM_REQUIRED(VERSION 2.8)
if("${CMAKE_VERSION}" VERSION_GREATER "3.0" OR "${CMAKE_VERSION}" VERSION_EQUAL "3.0")
    #CMAKE_POLICY(SET CMP0042 OLD)
    CMAKE_POLICY(SET CMP0042 NEW)
endif()

Installation notes from Lee-Ping Wang

Lee-Ping Wang (UC Davis) Last updated June 27, 2018

Building and installing Tinker CPU version

  • OS: Ubuntu 16.04.3; Compiler: ICC 15.0.3 20150407; CUDA version: 9.1
  • OS: CentOS 7; Compiler: GCC 4.8.5; CUDA version: 8.0

Detailed Instructions:

https://docs.google.com/document/d/1Kf5ovnX5yJeqsvGXiT88mCLZOc-6oyuRYNgaaVfw4ME/edit?usp=sharing 

If you have errors related to the non-AMOEBA plugins, consider turning them off in the cmake options (see the example above).
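
A hedged sketch of the flags involved (the exact option names vary between OpenMM versions; check the list that ccmake shows you):

 cmake ../Tinker-OpenMM \
     -DOPENMM_BUILD_DRUDE_PLUGIN=OFF \
     -DOPENMM_BUILD_RPMD_PLUGIN=OFF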


Notes compiled by CW

 http://leucinw.com/posts/2020/03/Tinker-OpenMM/  


Citation

Harger, M., D. Li, Z. Wang, K. Dalby, L. Lagardere, J.P. Piquemal, J. Ponder, and P.Y. Ren, Tinker-OpenMM: Absolute and Relative Alchemical Free Energies using AMOEBA on GPUs. Journal of Computational Chemistry, 2017. 38(23): p. 2047-2055.