Nektar++ on the USTC HPC cluster hanhai22

Ankang Gao (Associate Research Fellow), personal homepage: http://faculty.ustc.edu.cn/gaoankang/zh_CN/index.htm

Access to the cluster


ssh username@211.86.151.115
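
For convenience, the host can be given an alias in ~/.ssh/config (a minimal sketch; the Host alias and User are placeholders for your own account):

Host hanhai22
    HostName 211.86.151.115
    User username

After this, ssh hanhai22 is enough.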


Compilation instructions

Before you start compiling Nektar++, load the following modules:

module load hpcx/2.12/hpcx  mkl/latest icc/2022.1.0 \
compiler-rt/latest tbb/latest boost/1.82.0 \
fftw/3.3.10-hpcx-intel cmake/3.19.0
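
To confirm the toolchain is active before building (module list is the standard Environment Modules command; the exact module names may differ between cluster updates):

module list           # show the loaded modules
which mpicc mpicxx    # the MPI compiler wrappers should now resolve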

Download the source code

git clone https://gitlab.nektar.info/gaoak/nektar.git nektar++
cd nektar++
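
To confirm the clone succeeded and see which revision is checked out (plain git, nothing Nektar-specific):

git log --oneline -1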

Apply the following patch. It creates and immediately destroys a dummy FFTW plan at startup, apparently to force the solver to link against libfftw3:

diff --git a/solvers/IncNavierStokesSolver/IncNavierStokesSolver.cpp b/solvers/IncNavierStokesSolver/IncNavierStokesSolver.cpp
index 0eb1bbaa9..851cd406e 100644
--- a/solvers/IncNavierStokesSolver/IncNavierStokesSolver.cpp
+++ b/solvers/IncNavierStokesSolver/IncNavierStokesSolver.cpp
@@ -36,7 +36,7 @@
 #include <SolverUtils/Driver.h>
 
 #include <LibUtilities/BasicUtils/Timer.h>
-
+#include <fftw3.h>
 using namespace std;
 using namespace Nektar;
 using namespace Nektar::SolverUtils;
@@ -48,6 +48,11 @@ int main(int argc, char *argv[])
     string vDriverModule;
     DriverSharedPtr drv;
 
+    NekDouble phys[2];
+    NekDouble coef[2];
+    fftw_plan m_plan_forward = fftw_plan_r2r_1d(2, phys, coef, FFTW_R2HC,  FFTW_MEASURE);
+    fftw_destroy_plan(m_plan_forward);
+
     try
     {
         // Create session reader.
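
Assuming the diff above is saved to a file, e.g. fftw-link.patch (the file name is arbitrary), it can be applied from the repository root with:

git apply fftw-link.patch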

Create the build directory

cd nektar++    # skip if you are already inside the source directory
mkdir build
cd build

Export the following variables to help Nektar++ find FFTW:

export INCLUDE=/opt/fftw/3.3.10/hpcx-intel/include:$INCLUDE
export FFTW_HOME=/opt/fftw/3.3.10/hpcx-intel
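
A quick sanity check that these paths exist (the library directory may be lib or lib64, depending on the install):

ls $FFTW_HOME/include/fftw3.h
ls $FFTW_HOME/lib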

For AMD CPUs, run cmake:

CC=mpicc CXX=mpicxx \
cmake -DNEKTAR_USE_MPI=ON -DNEKTAR_USE_MKL=ON \
-DNEKTAR_USE_SYSTEM_BLAS_LAPACK=OFF \
-DNEKTAR_USE_FFTW=ON -DNEKTAR_USE_HDF5=ON \
-DTHIRDPARTY_BUILD_BOOST=OFF \
-DTHIRDPARTY_BUILD_HDF5=ON \
-DTHIRDPARTY_BUILD_FFTW=OFF \
-DTHIRDPARTY_BUILD_BLAS_LAPACK=OFF ..

For Intel CPUs, run cmake:

CC=mpicc CXX=mpicxx \
cmake -DNEKTAR_USE_MPI=ON -DNEKTAR_USE_MKL=ON \
-DNEKTAR_USE_SYSTEM_BLAS_LAPACK=OFF \
-DNEKTAR_USE_FFTW=ON -DNEKTAR_USE_HDF5=ON \
-DTHIRDPARTY_BUILD_BOOST=OFF \
-DTHIRDPARTY_BUILD_HDF5=ON \
-DNEKTAR_ENABLE_SIMD_AVX512=ON \
-DCMAKE_CXX_FLAGS="-mavx512f -mfma" \
-DTHIRDPARTY_BUILD_FFTW=OFF \
-DTHIRDPARTY_BUILD_BLAS_LAPACK=OFF ..
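
If configuration fails part-way (for example, FFTW or HDF5 is not found), remove the CMake cache before re-running cmake so stale results are not reused:

rm -f CMakeCache.txt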

Then run

make -j4 install
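
As the job script below assumes, the binaries are installed under dist/bin inside the build directory; a quick check that the solver was built:

ls dist/bin/IncNavierStokesSolver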

Submit jobs

Create an interactive job on the compute node in the test queue

salloc -p test -N 1 -n 1 --qos=testqos
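
Once the allocation is granted, a shell on the compute node can be opened inside it (srun --pty is standard Slurm usage):

srun --pty bash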

Batch jobs are submitted with sbatch. First, load the required modules in the shell you submit from:

module load hpcx/2.12/hpcx  \
mkl/latest icc/2022.1.0 compiler-rt/latest tbb/latest boost/1.82.0

Then submit the script with sbatch job.sh, where job.sh contains:

#!/bin/sh
#SBATCH -J test2D
#SBATCH -o job-%j.log
#SBATCH -e job-%j.err
#SBATCH -p CPU-64C256GB
#SBATCH -N 1
#SBATCH -n 64
#SBATCH --time=3:0:0

echo Time is `date`
echo Directory is $PWD
echo This job runs on the following nodes:
echo $SLURM_JOB_NODELIST
echo This job has allocated $SLURM_JOB_CPUS_PER_NODE cpu cores.

#module load hpcx/2.12/hpcx  \
#mkl/latest icc/2022.1.0 compiler-rt/latest tbb/latest boost/1.82.0
export NEK_DIR=/home/ses/agao20/code/nektar/build2    # adjust to your own build directory
export NEK_BUILD=$NEK_DIR/dist/bin
export LD_LIBRARY_PATH=$NEK_DIR/ThirdParty/dist/lib:$NEK_DIR/dist/lib64:$LD_LIBRARY_PATH


MPIRUN=mpirun    # works with both Intel MPI and Open MPI
$MPIRUN $NEK_BUILD/IncNavierStokesSolver wing.xml wingc.xml  -i Hdf5 -v --set-start-chknumber 0 > runlog 2>&1
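
After sbatch returns, the job can be monitored with standard Slurm commands, and the solver output appears in runlog (created by the redirection above):

squeue -u $USER    # queue status of your jobs
tail -f runlog     # follow the solver output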

References

https://www.nektar.info/nektar-on-archer2/
https://www.nektar.info/nektar-ic-hpc/