Tinkergpu:Software-list


3D molecule builder

Free software for building and modifying chemical structures in 3D

Avogadro

* can build polypeptides and nucleic acids using the Insert function
* cheap structural optimization & energy minimization using GAFF, MMFF, UFF, etc.

iqmol

* Can build using common chemical groups and fragments
* Geometry optimization using a Q-Chem server (or our own server on water.bme.utexas.edu)

Arguslab

  • Has built-in semi-empirical and simple force field optimization.
  • Nice building functions (polypeptides are easy).
  • No longer supported. Sometimes crashes.

ACD labs

Academic version: http://www.acdlabs.com/resources/freeware/chemsketch/

ForceBalance

Installation files: ~pren/forcebalance-installation/

To update: svn update

Run on node100 or later to install:

source ~pren/.cshrc.force 
python setup.py install --prefix=/home/pren/forcebalance

Replace the --prefix path above with your desired path (or just use mine). This won't work on bme-nova or water (too old).

Check python: python -c 'import sys; print sys.path'
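To confirm that ForceBalance is importable after installation (assuming the --prefix above and that ~pren/.cshrc.force has been sourced; adjust the path to your own prefix):

  python -c 'import forcebalance; print forcebalance.__file__'

The printed path should sit under the prefix you gave to setup.py.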


How I compiled Python 2.7 (yum does not work on the old nodes): see python 2.7

A tutorial and example of how to fit liquid and gas-phase dimer energies is in /Users/pren/Dropbox/RESEARCH/AMOEBA/AMOEBA-halogen/leeping/CondensedPhase_Update_03-07

pymol

Linux

Try this first: /opt/software/CHEM/pymol2016/pymol

Official installation guide: http://www.pymolwiki.org/index.php/TOPTOC

64-bit on CentOS 6.6

/opt/software/CHEM/pymol64/pymol/pymol

Can be run on biomol, bme-venus, bme-pluto, bme-neptune and bme-saturn.

http://www.pymolwiki.org/index.php/User:Tlinnet/Linux_Install

Replace the first wget with:

wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/

After installing all the RPMs, there is no need to install pymol.sh; just run

/opt/software/CHEM/pymol64/pymol/pymol 

Run this before pymol if you need MPEG support:

/opt/software/CHEM/pymol64/pymol/pymolMPEG.sh

pymol prebuilt

Download from here: http://biomol.bme.utexas.edu/~pren/courses/bme346-soft/

http://biomol.bme.utexas.edu/~pren/courses/bme346-soft/archive/ (linux)

These educational versions have no ray-tracing ability.

pymol on windows (with Ray Tracing)

http://tubiana.me/how-to-install-and-compile-pymol-windows-linux-mac/

From this site: http://www.pymolwiki.org/index.php/Windows_Install

Download the 4 files (*.whl) below from http://www.lfd.uci.edu/~gohlke/pythonlibs/#pymol

Open a CMD window as admin, cd Downloads\ and run the following (if you installed 64-bit Python, you may need to download the 64-bit wheels as well):

C:\Users\macbook12\Downloads>C:\Python27\python.exe pip-8.1.2-py2.py3-none-any.whl/pip install pip-8.1.2-py2.py3-none-any.whl
C:\Users\macbook12\Downloads>C:\Python27\python.exe pip-8.1.2-py2.py3-none-any.whl/pip install "numpy-1.11.1+mkl-cp27-cp27m-win32.whl"
C:\Users\macbook12\Downloads>C:\Python27\python.exe pip-8.1.2-py2.py3-none-any.whl/pip install Pmw-2.0.1-py2-none-any.whl
C:\Users\macbook12\Downloads>C:\Python27\python.exe pip-8.1.2-py2.py3-none-any.whl/pip install pymol-1.7.6.0-cp27-none-win32.whl

Create a folder from the command line (this allows you to write output etc. to your own folder):

mkdir %HOMEPATH%\pymol

Make a shortcut to C:\Python27\Scripts\pymol (right click) and move it to the folder above.

Right-click the shortcut, select "Properties", and change the "Start in" field to %HOMEPATH%\pymol.
You can rename the shortcut to mypymol.cmd or anything you like.

Math

Matlab

Matlab is available on water (Linux) and bme-black (Windows).

Chemistry

pubchem

Here you can quickly download the 3D structures of known compounds, as well as SMILES strings, 2D structures, etc. http://pubchem.ncbi.nlm.nih.gov

logP

You can enter a SMILES string and it predicts the logP, which is useful for screening for bioavailability. It is a quick empirical prediction. You can get the SMILES string from PubChem or babel.

http://www.molinspiration.com/services/logp.html
http://www.molinspiration.com/cgi-bin/properties
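For example, to get a SMILES string from a structure file with babel (the file name is hypothetical), print it to stdout with:

  babel mol.pdb -osmi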

add H to proteins based on predicted pKa

To add hydrogens to proteins based on predicted pKa:

1. Use PROPKA and PDB2PQR to add hydrogens to your X-ray structure (see the example command after this list):

PDB2PQR: http://pdb2pqr.sourceforge.net/
http://www.poissonboltzmann.org/ 
PROPKA: http://propka.ki.ku.dk/ 


2. H++ http://biophysics.cs.vt.edu/index.php
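For step 1, a typical PDB2PQR invocation is sketched below (file names and pH are examples; flags differ between versions, so check pdb2pqr.py --help). The --with-ph option invokes PROPKA to assign protonation states at the given pH:

  pdb2pqr.py --ff=amber --with-ph=7.0 protein.pdb protein.pqr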

Openeye

We have an academic license for a suite of software from OpenEye for similarity search, docking, molecular conformation generation (2D-to-3D conversion), etc.

2016-2018

RHEL 6 version, including FastROCS on GPU

/opt/software/CHEM/openeye2016/openeye/

source /opt/software/CHEM/openeye2016/openeye/bashrc

See /opt/software/CHEM/openeye2016/openeye/readme for a FastROCS-on-GPU example.

2013 edition

You can only run the new OpenEye software on RHEL5 or CentOS5, which means water.bme.utexas.edu or bme-nova. The license file is in /opt/openeye/license/ and expires in Aug 2013.

Set up bashrc:

export OE_LICENSE=/opt/openeye/license/oe_license.txt
export OE_DIR=/opt/openeye/RHEL5/openeye/
export PATH=/opt/openeye/RHEL5/openeye/bin:$PATH

That should be it... Try typing "rocs".

tutorial

ROCS and eon

 similarity search

old version

How do I install the license file (/opt/openeye/license)? Place the license file, named oe_license.txt, in the OpenEye directory and define the environment variable OE_LICENSE to point to its location.

On Windows, the environment variable $OE_DIR can be set instead (see below); oe_license.txt should then be located under $OE_DIR. On WinXP/Vista/Windows 7, use Start->Settings->Control Panel->System->Advanced->Environment Variables.

How do I install my license file oe_license.txt on Linux? Point the environment variable $OE_LICENSE at the license file; the recommended location is /usr/local/openeye/etc/oe_license.txt. A file named oe_license.txt will also be found if it is (1) in the current working directory or (2) in the directory defined by $OE_DIR. However, these methods are not as reliable and are not generally recommended.

Typical settings:

  OE_DIR=/usr/local/openeye/
  OE_LICENSE=/usr/local/openeye/etc/oe_license.txt

NIH online tool

openbabel

A great cheminformatics tool for converting files from one format to another; it can now also generate 3D structures for molecules. Available on Linux, Windows, and Mac OS X.

Now supports 2D-to-3D conversion.

Installed on bme-sun: OpenBabel 2.2 (Aug 2008, the latest). Log into bme-sun to use the tools in /usr/local/bin:

-rwxr-xr-x 1 root root  103471 2008-08-22 17:40 roundtrip*
-rwxr-xr-x 1 root root  137869 2008-08-22 17:40 obrotate* 
-rwxr-xr-x 1 root root  187903 2008-08-22 17:40 obrotamer*
-rwxr-xr-x 1 root root  183326 2008-08-22 17:40 obprop*
-rwxr-xr-x 1 root root  222132 2008-08-22 17:40 obprobe*
-rwxr-xr-x 1 root root  204595 2008-08-22 17:40 obminimize*
-rwxr-xr-x 1 root root  145286 2008-08-22 17:40 obgrep*
-rwxr-xr-x 1 root root  197430 2008-08-22 17:40 obgen*               -----generate 3D structure NOW!
-rwxr-xr-x 1 root root  208538 2008-08-22 17:40 obfit*
-rwxr-xr-x 1 root root  201210 2008-08-22 17:40 obenergy*
-rwxr-xr-x 1 root root  222575 2008-08-22 17:40 obconformer*
-rwxr-xr-x 1 root root  129654 2008-08-22 17:40 obchiral*
-rwxr-xr-x 1 root root  156700 2008-08-22 17:40 babel*

See here for detailed description: http://openbabel.org/wiki/Guides

babel -H lists all supported formats (including reading Gaussian fchk, so you can convert to other formats such as PDB for visualization).
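For example, to convert a Gaussian formatted checkpoint to PDB for visualization (file names are hypothetical):

  babel -ifchk mol.fchk -opdb mol.pdb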

The installation source files are in /opt/openbabel-2.2.0/ or /opt/software/CHEM/openbabel/

Computational Chemistry

jmol

Can view Gaussian and many other formats; can build small molecules.

/opt/software/CHEM/jmol-13.2.3/jmol.sh

Avogadro

Installed on bme-earth (2/2014). "ssh -X bme-earth" and then "avogadro".

Avogadro is a free, open source, cross-platform molecular editor designed for flexible use in computational chemistry, molecular modeling, bioinformatics, materials science, and related areas.  
Packages are available for Mac OSX, Windows and Linux, as well as source code provided under the GNU GPL.

Can build peptides etc.

To download:
http://avogadro.openmolecules.net/wiki/Get_Avogadro

Gabedit

Reads and visualizes Gaussian, GAMESS, MOPAC, Molpro... files (the first and last structures in an optimization). Runs on Linux!

/opt/software/CHEM/Gabedit/gabedit 

To visualize a (Gaussian-optimized) structure, click the "Geometry" menu, then "Draw"; right-click on the black window and select Read...

MS Windows

FreeCommander: free file manager, an alternative to Total Commander. Dual panels; can synchronize and compare directories. Download: http://www.freecommander.com/

Visualization:

Facio

2/2007 Jenny

There is a new visualization program, Facio, which is publicly available at no charge. You can download it for your Windows machine from the website below. Although Facio is a native Windows application, it can also work on Linux with the help of WINE, which we already have. To download the software for your Windows machine and learn more about it, please visit:

http://www1.bbiq.jp/zzzfelis/Facio.html

To use Facio on your workstation without VMware or remote access, place the following alias in your .cshrc file:

alias facio "/opt/cxoffice/bin/wine /opt/software/Facio1081/Facio.exe"

It is an OpenGL-based graphics program for molecular modeling and visualization of quantum chemical calculations (GAUSSIAN and GAMESS), so it is easy to write GAUSSIAN input files, view the output for an optimized geometry, load checkpoint files, etc. It can also load TINKER xyz, PDB... format files. The manual was attached to the original announcement email.

Jchem and Marvin

http://www.chemaxon.com/products.html visualization, pKa, logP, logD, 2D to 3D, fragmentation, and many other uses.

  • Windows: Version 5.3 installed on bme-mercury OBSOLETE
  • Linux: Version 5.2 (11/2009)

To access it, put "/opt/software/CHEM/ChemAxon5.2/JChem/bin" in your path, and add the following environment variable in your .cshrc file:

 setenv CHEMAXON_LICENSE_URL /opt/software/CHEM/chemaxon/license.cxl

You can then use the ChemAxon software. For example, type "mview /opt/database/known_drug/cmc20021_he.mol2".


I have installed a very useful JChem package (http://www.chemaxon.com/products.html).

To access it, put "/opt/software/CHEM/ChemAxon/JChem/bin" in your path, and add the following environment variable in your .cshrc file:

setenv CHEMAXON_LICENSE_URL /opt/software/CHEM/chemaxon/license.cxl

Now you can use the executables in bin on your workstation. mview lets you view a database of small molecules, for example:

“mview /opt/database/kinase-biogen/ligands_4pengyu.sdf”
You can clean up and convert 2D structures to 3D structures using Edit/Clean/3D...

mspace lets you visualize PDB structures of proteins. msketch lets you draw a new molecule.

mview can also read Gaussian output files.

To install on your own Windows/Linux computer, use the installation files in /opt/software/CHEM/chemaxon/ (note that chemaxon and ChemAxon are two different directories). On Vista, copy license.cxl into /Users/yourusername/chemaxon (create the chemaxon folder yourself). On XP, create the same directory in your user directory and copy the license file there.

vmd

Since vmd186 is not functioning correctly on the Linux workstations, you may want to use VMD 1.8.5 (type plain "vmd") for now. To use the RMSD plot function, create a file .vmdrc in your home directory with the following content:

lappend auto_path {/usr/local/lib/vmd/plugins/noarch/tcl/rmsdtt}
vmd_install_extension rmsdtt rmsdtt_tk_cb "WMC PhysBio/RMSDTT"
menu main on

For admins: to install VMD, cd into /opt/vmdxxx/, run "configure LINUXAMD64 OPENGL", then cd ./src and "make install".

Simulation: GROMACS

Installation (updated 8/20/2014)

Installed under /opt/, can run on all nodes.

  • /opt/gromacs-4.5.3-mpi/bin
  • /opt/gromacs-4.5.3-intel/bin (need source ~pren/.cshrc.f13)
  • /opt/gromacs-4.5.4/bin 

Add /opt/gromacs-4.5.3-mpi/bin to your path (cshrc or bashrc) to use the executables. 
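For example (pick the line that matches your shell):

  # bash (~/.bashrc)
  export PATH=/opt/gromacs-4.5.3-mpi/bin:$PATH
  # csh/tcsh (~/.cshrc)
  set path = (/opt/gromacs-4.5.3-mpi/bin $path)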

In gromacs-4.5.3-mpi/bin, mdrun_mpi is the one that does MPI-parallel simulations; mdrun is serial.

 
  • General instructions: http://ccmst.gatech.edu/wiki/index.php?title=GROMACS
  • On NOVA, fftw (3.2.4) was built by Mike in /opt/fftw and copied to /usr/local on NOVA
  • On nodes fftw (3.2.4) was installed through yum, in /usr/lib64 (single, double, long double)
  • How mike compiled /opt/gromacs-4.5.4: /home/schnied/Software/gromacs-4.5.4/config.log
  • Here is how I compiled gromacs-4.5.3-mpi on NOVA (fftw is in /opt/fftw)
    • Use intel compiler 10: source /opt/intel/fce/10.0.023/bin/ifortvars.csh; source /opt/intel/cce/10.0.023/bin/iccvars.csh
    • cd /home/pren/gromacs/gromacs-4.5.3
    • make clean
    • ./configure --enable-mpi --program-suffix=_mpi --enable-float --with-fft=fftw3 CPPFLAGS=-I/opt/fftw3/include LDFLAGS=-L/opt/fftw3/lib --prefix=/home/pren/gromacs/gromacs-4.5.3
    • Note that --enable-float (single precision) was used for fftw, so it should also be used for gromacs to be consistent
    • make -j 6; make install
    • I then copied everything to /opt/gromacs-4.5.3-mpi
  • How gromacs-4.5.3-intel was compiled using the Intel compiler (v13) and MKL (2013) FFT on NOVA
    • reference: http://www.hpcadvisorycouncil.com/pdf/GROMACS_Best_Practices.pdf
    • make clean
    • source ~/.cshrc.f13
    • ./configure --enable-mpi --with-fft=mkl CC=icc F77=ifort CXX=icpc --program-suffix=_mpi --prefix=/home/pren/gromacs/gromacs-4.5.3
    • make -j 4
    • make install
    • copy to /opt/gromacs-4.5.3-intel
    • To run mdrun_mpi etc. on the nodes, source ~pren/.cshrc.f13
  • If you run mdrun_mpi and see an error that the fftw3f library is not found, the fftw library is not installed in /usr/lib64 (yum install fftw* should solve this)

 

 

Check Gromacs WIKI following the link below

http://wiki.gromacs.org/index.php/Main_Page

Quick Guide

~/gromacs/parallel/bin/pdb2gmx -f prozn.pdb -o prozn -ff oplsaa -p prozn.top -ter
~/gromacs/parallel/bin/editconf -f prozn.gro -o proznw.gro -d 0.8 -bt octahedron
cp prozn.top proznwat.top
~/gromacs/parallel/bin/genbox -cp proznw.gro -cs spc216 -o proznwat.gro -p proznwat.top

// any constraints or restraints go right behind the itp include in the top file

~/gromacs/parallel/bin/grompp -v -f em -c proznwat.gro -o em -p proznwat.top
~/gromacs/parallel/bin/mdrun -v -s em.tpr -o em.trj -c after_em -g emlog

mpdboot -n 4 -f ./mpd.hosts --ncpus=2

~/gromacs/parallel/bin/grompp -v -f md -o md -c after_em -p proznwat.top -shuffle -np 8

In case you first refine with a short no-constraint MD:

~/gromacs/parallel/bin/grompp -v -f md -o md -c after_md0 -p proznwat.top -shuffle -np 8

Restart from the last frame (or a given time; see grompp):

~/gromacs/parallel/bin/grompp -v -f md -o md -c after_md.gro -t md.trj -p proznwat.top -shuffle -np 2

mpirun -n 2 mdrun_mpi -v -e md -s md.tpr -o md.trj -c after_md.gro -g md.log > & md.job &

Or, with the full path: mpirun -n 2 ~pren/gromacs/parallel/bin/mdrun_mpi -v -e md -s md.tpr -o md.trj -c after_md.gro -g md.log >& md.job &

trjconv -f md.trj -o movie.pdb -n now.ndx -s md -b 1 -e 100 -skip 1

make_ndx -f after_md.gro -o now.ndx
    1 | 12 | 13 | 14

Use make_ndx to create a new index:

    a 1-10       // a group of atoms 1-10
    name 17 pep  // rename group 17 to pep
    1 & ! pep    // a group with all protein atoms except 1-10 (pep)

Running mpi

If you see an error that "libmpich.so.1.2" is not found, add the mpich2 lib path:

setenv LD_LIBRARY_PATH "/opt/mpich2-fc13-p26/lib/:/opt/software/CHEM/openbabel-2.3.0/lib:"{$ATLAS}

REMD HOWTO

Benchmarks

Here are some observations for a kinase system (38,772 atoms) using our own cluster, from node18 to node30.

Using 8 processes, the speed is 2.54 ns/day.
Using 12 processes, the speed is 3.14 ns/day.
Using 15 processes, the speed is 3.38 ns/day.
Scaling becomes worse beyond 20 processes; the speed drops below 1 ns/day.

 

GPU nodes

hostnames

 gpu nodes

run "bash"

source /usr/local/bash/bashrc.amber or /usr/local/bash/bashrc.openmm before use. If you get an error (especially with amber), try a different /opt/software/bash/bashrc.amber.*

installation

We already have the Canopy version of Python 2.7, but if you need to install Python: http://toomuchdata.com/2014/02/16/how-to-install-python-on-centos/

  • Centos 6.5
node2xxx
mount -t nfs 10.0.0.200:/opt /opt
mkdir ~/.ssh
cp -r /opt/system-setup/rootssh2013/* ~/.ssh
restorecon -R ~/.ssh
node-setup.sh
  • yum update
reboot
  • install nvidia graphics driver: /opt/nvidia/NVIDIA-Linux-x86_64-343.22.run
reboot
  • Install both nvidia cuda toolkit:
 updated: 7.5 is the latest for OpenMM
/opt/nvidia/cuda_6.5.14_linux_64.run (say No to nvidia driver)
/opt/nvidia/cuda_6.0.37_linux_64.run (say Yes to obsolete/incompatible and No to the nvidia driver)
  • Install gdk in /opt/nvidia (August 2016: this is now necessary for the new code Jay put in for detecting GPUs automatically)
  • mkdir /usr/local/bash
ln -s /opt/software/bash/.bashrc.amber bashrc.amber (newer CPU)
ln -s /opt/software/bash/.bashrc.amber.earth bashrc.amber (older CPU)
ln -s /opt/software/bash/.bashrc.amber.970 bashrc.amber (new CPU/GTX 970: node210 211)
 

Run bash;  source /opt/software/bash/.bashrc.amber.earth; cd /opt/software/amber14-gpu-earth/test/cuda/dhfr/; ./Run.dhfr.ntb2_ntt1.my

(If you are using a different source, go to the corresponding amber dir.)
If this bombs, you have to make a copy of amber-14-gpu and recompile it for that CPU (see AMBER-GPU below).

Install OpenMM: you don't have to; just link openmm to /usr/local/openmm, SEE BELOW

 source /opt/software/bash/.bashrc.openmm
/opt/software/OpenMM6.1-Linux64/install.sh (this installs precompiled OpenMM into /usr/local and /opt/software/python27)

OpenMM-GPU setup

  • yum update your CentOS 6.5 and reboot
  • Install nvidia driver in /opt/nvidia/NVIDIA-Linux-x86_64-343.22.run
  • Install the nvidia toolkit (/opt/nvidia/cuda_6.5xxx; cuda_6.0 for OpenMM 6.1 on node201-207) to /usr/local/ and skip the driver installation (already done above).
CUDA 6.5 is required for the GTX 970 on node210.
  • install Python 2.7 from Canopy into /opt/software/python27/ (python27-780 for the GTX 780, OpenMM 6.1 on node201-207)
~pren/canopy-1.4.1-rh5-64.sh
this step is already done; no need to repeat it for new computers

  • Do one of the following (1-3, 2 or 3 is the latest)
 1. For nodes 201-207, go to the OpenMM prebuilt binary folder OpenMM6.1-Linux64 (copy a fresh one from ~pren/OpenMM6.1-linux64; you can delete it when done)
source /opt/software/bash/bashrc.openmm
 ".\install.sh"
You have to use Python 2.7 here. See /opt/software/bash/bashrc.openmm
------------------------------
  2. For nodes 201-211, GTX 780 or 970/980, download the openmm6.2 source (/opt/software/OpenMM6.2-Linux64) from github and compile it by following the instructions (on the latest GPU, 980):
source /opt/software/bash/bashrc.openmm.970
 http://biomol.bme.utexas.edu/wiki/index.php/Software:Tutorials#Installation
 
Once the above step is done, I copied /usr/local/openmm to /opt/software/openmmxxx, and a new node only needs a link:
 
    "ln -s /opt/software/openmm6.2-installed /usr/local/openmm", where "openmm6.2-installed" was compiled on node201 and copied to opt.
 
For future nodes there is no need to recompile openmm; just use the link above. OpenMM installs things into the shared python directory and /usr/local/openmm!
-----------------------------------------------------
3. For any new openmm distro:
install the nvidia and cuda drivers as above.
Copy "/opt/software/python27" to a new "python27-openmm6.x"
Make a new bashrc.openmm.6.x with the python path changed accordingly
Download and compile openmm as in 2.
------------------------------------------------------
  • test
bash
source /opt/software/bash/bashrc.openmm or
source /opt/software/bash/bashrc.openmm.970 (on any GPU node, to use openmm6.2; /usr/local/openmm must be linked to /opt/software/openmm6.2-installed)
/home/pren/OpenMM/shfr-test/simulateMTS.py
  • Notes: OpenMM python: /opt/software/python27/appdata/canopy-1.4.1.1975.rh5-x86_64/lib/python2.7/site-packages/simtk/openmm/app 
/home/pren/forcebalancev2/lib/python2.7/site-packages/forcebalance/openmmio.py


AMBER-GPU

Install AMBER-GPU 

September 2017  --Chengwen

Compile Amber18-gpu for new gpu nodes (node70 to 100)

1. Install cuda7.0 from /opt/nvidia/ (Caution: do NOT install the graphics driver) (root)

2. Copy the amber files from /home/pren/amber-git:

  cp -r /home/pren/amber-git/amber /home/liuchw/

3. Set environment variables:

  source /opt/intel/composer_xe_2013.2.146/bin/compilervars.sh intel64
  export AMBERHOME=/home/liuchw/amber
  export PATH=$PATH:/usr/local/cuda-7.0/bin
  export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:${LD_LIBRARY_PATH}
  export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$AMBERHOME/lib
  export PATH=$PATH:/opt/mpich2-1.4.1p1/bin/mpicc/
  export CUDA_HOME=/usr/local/cuda-7.0 
  export DO_PARALLEL='mpirun -np 1'

4. Configure and install

  cd $AMBERHOME
 ./configure -cuda -mpi -noX11 intel 
  make install

5. Copy the compiled whole directory to /opt/software/ (root)

  cp -r amber /opt/software/amber18-gpu

6. Change $AMBERHOME to /opt/software/amber18-gpu

 

Install CPU version

See /home/pren/amber-git/readme for installation notes

Serial version: /home/pren/amber-git/amber17/ambercpu
Parallel version: /home/pren/amber-git/amber17/ambercpu-mpi

Before running jobs (e.g. MMPBSA.py), source bashrc.ambercpu or bashrc.ambercpumpi.

 

September 2014

Compiled Amber14-gpu from /home/pren/amber12-2013/

First, install the CUDA software AND drivers from the NVIDIA website. You should be able to confirm CUDA is installed by running the test /usr/local/bin/deviceQuery or nvidia-smi. You need to define the following environment variables: AMBERHOME and CUDA_HOME must be defined, and the library path needs to include amber and mpich2. Make sure a compatible version of python is defined as well. Example:

source /opt/intel/composer_xe_2013.2.146/bin/compilervars.sh intel64
export PATH=$PATH:/usr/local/cuda-6.5/bin
export LD_LIBRARY_PATH=/usr/local/cuda-6.5/lib64:$LD_LIBRARY_PATH
export AMBERHOME=/opt/software/amber14-gpu
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$AMBERHOME/lib
export LD_LIBRARY_PATH=/opt/mpich2-1.4.1p1/lib/:$LD_LIBRARY_PATH
alias python=/usr/bin/env\ python2.6
export PYTHONPATH=/usr/bin/python2.6
export PATH=/opt/mpich2-1.4.1p1/bin/mpicc/:/opt/mpich2-1.4.1p1/bin:$PATH
export CUDA_HOME=/usr/local/cuda-6.5


Second, compile AMBER by the following:

./configure -mpi -cuda intel
make install
export DO_PARALLEL='mpirun -np 2'
make test #(or try the executable test/test_amber_cuda_parallel.sh ; actual cases in  test/cuda/)
Run the dhfr test: /opt/software/amber16-gpu/test/cuda/dhfr/Run*.my

If you get an X11 lib error, yum install *x11* and yum install libXt-devel*. This installs pmemd.cuda.MPI in /opt/software/amber14-gpu/bin/

Due to the linking of libraries (intel, nvidia, amber), there will probably be errors. It is easier to compile amber without the -mpi and -cuda flags first, and then add them after solving any problems.

 

2015/6/11 Install on node210, GTX970

cd ~pren/amber-git

Update git: amber.readme.git

git checkout master 

git pull 

cp -r amber amber15-gpu (later moved to /opt/software/amber16-gpu)

source /opt/software/bash/bashrc.amber.970

./configure -mpi -cuda intel
make install

test...

Run AMBER on GPU

1) Log into node201, 202... (two GTX 780 cards installed), or node210 (three GTX 970 cards installed).

2) nvidia-smi to check which cards are available/busy (by checking the temperature and fan load)

If no card is being used, you can ignore this step. Otherwise you should specify the next available card.
To select a specific GPU card, "export CUDA_VISIBLE_DEVICES="0"" (see http://ambermd.org/gpus/#Running for an explanation).
"0" means the first card, "1" the second, "2" the third. ****This also works for OPENMM.

3) bash; source /usr/local/bash/bashrc.amber or bashrc.amber.970 (node210 and later)

4) Run (to use 1 card, specify 1 after the -np option below; to use 2 cards, specify 2...)

export  DO_PARALLEL="/opt/mpich2-1.4.1p1/bin/mpirun -np 1"  
export sander="/opt/software/amber14-gpu/bin/pmemd.cuda_SPFP.MPI
or on > node210 
export sander="/opt/software/amber16-gpu/bin/pmemd.cuda.MPI"
$DO_PARALLEL $sander -O -o $output -c inpcrd.equil -r restrt -x mdcrd < dummy

Run AMBER GPU (Sep 2017)

The GPU version of AMBER was compiled for use on the new GPU nodes (node70 to 90+). Executables are in /opt/software/amber18-gpu. In order to run them, the following environment variables need to be set:

#AMBER GPU
source /opt/intel/composer_xe_2013.2.146/bin/compilervars.sh intel64
export AMBERHOME=/opt/software/amber18-gpu
export PATH=$PATH:/usr/local/cuda-7.0/bin
export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$AMBERHOME/lib
export PATH=$PATH:/opt/mpich2-1.4.1p1/bin/mpicc/
export CUDA_HOME=/usr/local/cuda-7.0
export DO_PARALLEL='mpirun -np 1'
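With these variables set, a minimal run looks like the following sketch (input/output file names are hypothetical; the flags are the standard pmemd ones):

  $DO_PARALLEL $AMBERHOME/bin/pmemd.cuda.MPI -O -i md.in -o md.out -p prmtop -c inpcrd -r restrt -x mdcrd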

Tips to run amber GPU as fast as you can

Use single-precision floating point (SPFP), NOT DPFP.
Write output infrequently (ntpr=1000, ntwx=1000, ntwr=1000).
Pressure is expensive to calculate (~15% slower than NVT). Use NPT to estimate the density, then use NVT for the production run.
In NPT, barostat=2 (Monte Carlo) is a little cheaper.
Use a 4 fs time step (dt=0.004, with H mass repartitioning) if you only care about thermodynamics (the kinetics will be wrong).
More tips can be found here: http://ambermd.org/gpus/#Max_Perf

Using a single card, I was able to get 102 ns/day on the JAC system (23K atoms) using dt=0.002 (2 fs), and 220 ns/day using dt=0.004.
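As a sketch, the output-frequency and time-step tips above translate into an mdin &cntrl block like this (the thermostat lines are an assumed example, not from this page):

  &cntrl
    ntpr=1000, ntwx=1000, ntwr=1000,   ! write energies/coords/restart infrequently
    dt=0.004, ntc=2, ntf=2,            ! 4 fs step; needs SHAKE plus H mass repartitioning (see below)
    ntb=1,                             ! constant volume (NVT production)
    ntt=3, gamma_ln=1.0, temp0=300.0,  ! assumed Langevin thermostat settings
  /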

6) NVT example: /opt/software/amber14-gpu/test/cuda/jac/Run.jac.my (SPFP)

See Run.jac.my for NVT options.

7) NPT example: /opt/software/amber14-gpu/test/cuda/dhfr

see Run.dhfr.ntb2_ntt1.my for NPT options.

Amber 14 manual: http://ambermd.org/doc12/Amber14.pdf

Occasionally when you use two cards, you may see errors like this; just rerun:

 "gpu_allreduce cudaDeviceSynchronize failed an illegal memory access was encountered

4fs timestep

For a 4 fs time step, you need to repartition the hydrogen masses. This is done using ParmEd. Note that this needs to be done on bme-sugar, and that AMBERHOME needs to point to an amber14 installation in your .bashrc. The installation of ParmEd in /opt/software/amber14-gpu is broken; instead, use /opt/software/ParmeEd-master/parmed.py. Prepare a prmtop as normal. Start ParmEd by invoking parmed.py with the prmtop as the -p argument, e.g. parmed.py -p Myparm.prmtop. Then invoke HMassRepartition; this adjusts H masses to 3.024 daltons, lowering the mass of the heavy atoms. To save the resulting prmtop, invoke "parmout MyNewParm.prmtop", then invoke "go". The resulting prmtop can then be used as input for AMBER simulations, together with the .inpcrd generated by leap. A sketch of the session is below.
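A sketch of that ParmEd session (the prmtop names are the examples from the text above):

  parmed.py -p Myparm.prmtop
  # then, at the ParmEd prompt:
  HMassRepartition
  parmout MyNewParm.prmtop
  go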
 

Quick steps to run amber or OpenMM GPU

First, make sure your bashrc has nothing related to mpich, cuda, or the intel compiler in it. If you need them for specific apps, create a .bashrc.thatapp and source it before running that app.

Log into NOVA and then node201, 202, 205, 206, 210, 211, or bme-pluto, bme-saturn, or bme-earth.

source /usr/local/bash/bashrc.amber or /usr/local/bash/bashrc.openmm 

Source the one with xxxx.970 if you are running on 210, 211, earth.

nvidia-smi to check which device (card) is available. An unused card has a low temperature (< 40 C) and low memory usage (< 30 MB).

export CUDA_VISIBLE_DEVICES="0" to choose card 1
export CUDA_VISIBLE_DEVICES="0,1" to choose both cards 1 and 2 (only amber can use 2 cards in parallel for one simulation; don't ever use 3)

Then run your amber or openmm simulation. You can check the example for amber (only root can actually run this): /opt/software/amber14-gpu/test/cuda/dhfr/Run.dhfr.ntb2_ntt1.my, where you will see that the actual pmemd command is

"/opt/software/amber14-gpu-970/bin/pmemd.cuda.MPI” 

Everything else is the same as regular amber.

An example for openmm is in /home/pren/OpenMM/zzz/simulateMTS.py.

More tips for running these fast, such as using a 4 fs time step, writing files/logs as infrequently as possible, and avoiding pressure control, can be found here: http://biomol.bme.utexas.edu/wiki/index.php/Software:links#Tips_to_run_amber_GPU_as_fast_as_you_can

Simulation analysis with LOOS

LOOS, by Alan Grossfield.

To use native_contacts.cpp:

Here's what you need to do to build it:

1) Download the newest LOOS from loos.sourceforge.net

2) Unpack and build it. If you don't already have them installed, you'll need boost (and boost-devel, if it exists on your distribution), atlas, and scons. They should be readily available as packages for most major Linux distributions. If I recall correctly, Ubuntu breaks boost into many small packages -- if so, install all of them.

3) Copy native_contacts.cpp into the Tools subdirectory of LOOS.

4) Edit Tools/SConscript, which is actually just a python script. You'll see a series of lines that create a variable called apps. Edit the last one and append " native_contacts" to it.

5) cd to the main LOOS directory and say "scons" (or "scons -j4" if you've got a 4-core machine). This will build LOOS, including native_contacts in the Tools directory.

6) If you want, say "scons install" -- this will create an installed copy, by default in /opt/loos (you can change this by modifying the custom.py file in the main loos directory).

1) I didn't do a huge amount of checking on it -- I wrote it, ran a couple of quick tests, and sent it along. Don't hesitate to contact me if you're having trouble making sense of it.

2) The way it's written, native_contacts takes a selection (for instance, you could specify the protein, but only include core residues), breaks it into pieces by residue, and computes the contacts between the centers of mass of the residues. So, if you want the whole thing, you'd just select the protein. If you wanted alpha carbons, you'd do something like

segid == "PROT" && name == "CA"

Beta carbons would look very similar. If you want to exclude the backbone, something like this would work (I haven't tested it):

segid == "PROT" && !(name =~ "^(C|O|CA|N)$"

The above assumes you're using a package that has segids -- Amber doesn't, so you'd have to specify a range of residue numbers instead, e.g.

resid >= 3 && resid <= 45 && !(name =~ "^(C|O|CA|N)$")

See the LOOS docs, in particular the page about the selection language, for more hints.

  • Can LOOS work directly on AMBER mdcrd files?

As for the latter question, yes, it'll work directly on the amber trajectory. If you just run it (say "native_contacts"), it'll list the command-line arguments it wants. The "system" description should be a PDB file, an amber parmtop, or a charmm/namd psf. PDB is fine, as long as it has the same atoms (in the same order). That's where LOOS gets its metadata (atom name, residue name and number, etc.). Depending on what format you use, there may or may not be coordinates -- psfs don't have them, for example, and parmtops don't either (although the code will automatically look for an accompanying inpcrd file, which does).

For the next option (trajectory), it takes the name of the trajectory file, which it identifies based on the filename. Amusingly, you can mix and match packages (eg charmm psf with amber mdcrd).
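As a sketch only (the argument order is a guess based on the description above -- run native_contacts with no arguments to see the actual usage; file names are hypothetical):

  native_contacts 'resid >= 3 && resid <= 45 && name == "CA"' system.prmtop traj.mdcrd > contacts.dat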

Quantum Mechanics

Psi4

The latest version of Psi4 has been installed at /users/mh43854/bin/psi4updated. Installation required the latest gcc. To use it, you need the following environment variables (for bash). The executable is /users/mh43854/bin/psi4updated/bin/psi4

export PYTHONPATH=/users/mh43854/bin/psi4updated/lib:$PYTHONPATH
export LD_LIBRARY_PATH=/users/mh43854/gcc/lib64:$LD_LIBRARY_PATH
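A minimal input to test the install (the geometry and basis are illustrative):

  # water.in
  molecule {
    0 1
    O  0.000  0.000  0.000
    H  0.000  0.757  0.586
    H  0.000 -0.757  0.586
  }
  set basis cc-pvdz
  energy('scf')

Run it with: /users/mh43854/bin/psi4updated/bin/psi4 water.in water.out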

Installation of GCC:

I installed 4.9.4 (download info is available at http://www.gnu.org/prep/ftp.html) at ~mh43854/gcc. To add the prerequisites to the build, run ./contrib/download_prerequisites. Then mkdir objdir, cd into objdir, and run ../gcc-4.9.4/configure --prefix=/path/to/install/dir/. Then make and make install. This will take a long time.
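The same build as one shell session (a sketch; the install prefix is an example):

  tar xf gcc-4.9.4.tar.gz
  cd gcc-4.9.4
  ./contrib/download_prerequisites   # fetches GMP/MPFR/MPC into the source tree
  cd ..
  mkdir objdir && cd objdir
  ../gcc-4.9.4/configure --prefix=$HOME/gcc
  make && make install               # this takes a long time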

Q-Chem 5

2017/7: Q-Chem 5 is now available on:

node40
node41
node42
node142
node38

You need these lines in your startup file if you use csh (adjust accordingly if you use bash):

setenv QCSCRATCH /opt/scratch/qchemscratch
setenv QCLOCALSCR /scratch/qchemscratch
setenv QCMPI mpich
source /opt/qchem/5/qcenv.csh
setenv LD_LIBRARY_PATH /usr/lib64/openmpi-1.10/lib/:$LD_LIBRARY_PATH
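After sourcing the above, a typical invocation looks like this (file names are examples; -nt sets the number of threads, and parallel flags can vary by version, so check the Q-Chem manual):

  qchem -nt 8 job.in job.out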

Q-Chem 4.1

8/1/2016: See Qchem under tutorials. Qchem (4.2) can be run on the following nodes (all cores):

node50-57, 8 cores each, 7 GB mem
node35
node36
node37
node38
genome

8/17/2013

License email: license from qchem

Installed the serial and multiple-core version (not MPI yet, available if needed) in /opt/qchem/4.1

To run qchem, see the tutorials. Below is the printout after installation. Note that a few environment variables need to be defined.

 To run Q-Chem calculations simply source the setup script below.
                                                             
[tcsh/csh]                                                      
source /opt/qchem/4.1/qcenv.csh                          
                                                             
[bash]                                                          
. /opt/qchem/4.1/qcenv.sh                                
                                                             
You can put the above lines in your shell startup script        
~/.cshrc for tcsh/csh or ~/.bashrc for bash.                    
                                                             
To get the latest Q-Chem updates please run                     
                                                             
/opt/qchem/4.1/qcupdate.sh                               
                                                             
To regenerate license data please run                           
                                                             
/opt/qchem/4.1/qcinstall.sh --update-lic        

Installation files are in /opt/qchem/4.1

To generate a license for the nodes (/opt/qchem/4.1/install-file/lincese_nodes), run

 export QC=/opt/qchem/4.1
$QC/bin/license_info <file_for_output> license@q-chem.com

Give it the nodes file, and mail the output file to license@q-chem.com.

Q-Chem

Version 4.0 is located in /work/opt/qchem4.0/.

Version 3.2 is located in /opt/qchem-para.

From: Zhen Xia <zhenxia@mail.utexas.edu>. Date: Monday, February 27, 2012. Subject: Q-Chem update to 4.0 version.

Hi, everyone,

I'm glad to tell you that the Q-Chem software has been updated to version 4.0. The new features of 4.0 can be seen at http://www.q-chem.com/. To use Q-Chem 4.0, please add the following paths to your shell profile. An example for bash looks like:

#QCHEM
QC=/work/opt/qchem4.0
export QC
export OMP_NUM_THREADS=1
QCAUX=$QC/aux
export QCAUX
QCPLATFORM=LINUX_Ix86_64
export QCPLATFORM
QCSCRATCH=/opt/scratch/qchemscratch
export QCSCRATCH
export QCLOCALSCR=/scratch
export QCMPI=mpich
export QCRSH=/usr/bin/ssh
export PBS_NODEFILE=/home/pren/qchemnode
noclobber=""
export PATH=/work/opt/qchem4.0/bin/:$PATH
if [ -e $QC/bin/qchem.setup.sh ]; then
    $QC/bin/qchem.setup.sh
fi

Please let me know if you have any questions.

Cheers, Zhen

Spartan

Update: Spartan 08 is installed on water.bme.utexas.edu.


Bme-nova now has the QM package Spartan 06 (will be updated to 08 soon). A useful feature is the RI-MP2 method, which is much more efficient than canonical MP2 for large molecules. We have 8 licenses for backend jobs but only one license for the GUI. You can start the GUI by typing "spartan" (locally on bme-nova, or ssh -X bme-nova from your own workstation).

A manual is available at /opt/software/spartan06.129_26_int9e/Spartan06Manual.pdf.

You can set up and submit jobs using the GUI and then exit (to free the license for others).

ORCA

Free QM package. Runs in parallel. Computes CD? See orca HOWTO.

cadpac

http://www-theor.ch.cam.ac.uk/software/cadpac.html

OpenMOPAC

Implements AM1 (Austin Model 1), PM3, and other semiempirical methods (by James Stewart of UT Austin).
These methods are also available in G03, GAMESS, etc.
How good are semiempirical multipoles?
What is the alternative to Gaussian for deriving DMA?
http://www.openmopac.net/background.html

Job Scheduling

References

Endnote

Bibtex