PARAM HPC

CDAC PARAM Supercomputer at NUST & HPC Workshops in India & Namibia

PARAM Supercomputer: High Performance Computing (HPC)

CDAC shall also set up a Linux-based, open-source PARAM supercomputing cluster of five nodes: one master and four compute nodes, connected by a low-latency InfiniBand interconnect. The facility also includes 100 TB of storage. The interconnects are low-latency, high-speed networks comprising InfiniBand, Gigabit and Ethernet switches, along with the essential software, all housed in one 42U rack with the required power sockets, cables and connectors. The facility will be deployed at NUST, Windhoek, Namibia.
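As an illustration of how such a master/compute cluster is typically exercised, the minimal MPI "hello" below has each rank report the node it runs on, a quick way to verify that a job was spread across the compute nodes over the interconnect. This is a sketch only: it assumes an MPI stack (e.g., Open MPI or Intel MPI) and a C compiler are available on the cluster, which the text does not specify.

```c
/* Minimal MPI check: each rank prints its host name, so you can see
 * how the scheduler distributed the job across the compute nodes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total ranks in job  */
    MPI_Get_processor_name(host, &len);     /* node the rank is on */

    printf("rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and launched with, for example, mpirun -np 128 ./hello (128 ranks = 4 compute nodes x 32 cores each), the output should show 32 ranks per compute node.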

HPC Cluster Specifications:

  • Total Rpeak: 17.7 TF (teraflops): CPU 10.7 TF + GPU 7 TF (see the worked check after this list)
  • Total Nodes: 4 CPU-only nodes + 1 GPU node
  • Total Cores: 5280 (160 CPU cores + 5120 GPU CUDA cores)
  • Total Memory: 480 GB
  • Storage: 100 TB + 54 TB backup
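
The quoted Rpeak can be cross-checked from the node specifications that follow. The short C program below sketches that arithmetic under stated assumptions (none of which are spelled out in the source): the CPU figure counts the ten Xeon Gold 6130 sockets of the four compute nodes plus the GPU node (matching the 160-core total), each Skylake core retires 32 double-precision FLOPs per cycle via its two AVX-512 FMA units, and the GPU figure is the roughly 7 TF FP64 peak of one Tesla V100.

```c
/* Sketch: cross-check the quoted Rpeak from the node specs.
 * Assumptions (not stated in the source) are noted per line. */
#include <stdio.h>

int main(void) {
    const int    sockets       = 10;   /* (4 compute + 1 GPU node) x 2 CPUs */
    const int    cores_per_cpu = 16;   /* Xeon Gold 6130                    */
    const double clock_ghz     = 2.1;
    const double flops_per_cyc = 32.0; /* 2 AVX-512 FMA units x 8 DP x 2    */
    const double gpu_tf        = 7.0;  /* Tesla V100 FP64 peak (approx.)    */

    int    cores  = sockets * cores_per_cpu;
    double cpu_tf = cores * clock_ghz * flops_per_cyc / 1000.0; /* GF -> TF */

    printf("CPU cores: %d\n", cores);                /* 160                 */
    printf("CPU Rpeak: %.2f TF\n", cpu_tf);          /* 10.75, quoted 10.7  */
    printf("Total    : %.2f TF\n", cpu_tf + gpu_tf); /* 17.75, quoted 17.7  */
    return 0;
}
```

The computed 10.75 TF and 17.75 TF agree with the quoted 10.7 TF and 17.7 TF, which appear to be truncated to one decimal place.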
1 x Master Node: Dell PowerEdge R640
  • 2 x Intel Xeon Gold 6130 with 16C, 2.1 GHz
  • Total RAM: 192GB (12 x 16GB) DDR4 2666MHz
  • Local storage: 6 x 600GB SAS Drives

4 x Compute Node: Dell PowerEdge R640
  • 2 x Intel Xeon Gold 6130 with 16C, 2.1 GHz
  • Total RAM: 96GB (12 x 8GB) 2666 MHz

1 x Compute Node with NVIDIA GPU: Dell PowerEdge R740
  • 2 x Intel Xeon Gold 6130 with 16C, 2.1 GHz
  • Total RAM: 96GB (12 x 8GB) 2666 MHz
  • 1 x NVIDIA Tesla V100

1 x Visualization Node: Dell PowerEdge R740
  • 2 x Intel Xeon Gold 6130 with 16C, 2.1 GHz
  • Total RAM: 96GB (12 x 8GB) 2666 MHz
  • Local storage: 2 x 600GB SAS Drives

1 x Storage/Backup Node: Dell PowerEdge R740
  • 2 x Intel Xeon Gold 5118 with 12C, 2.3 GHz
  • Total RAM: 192GB (12 x 16GB) 2666 MHz
  • Local storage: 2 x 2TB SATA Drives

Storage: Dell EMC ME4084
  • 28 x 8TB SAS Drives

Primary Communication: Dell H1024-OPF 24-Port

Secondary Communication: Dell N1524 24-Port

IPMI Communication: Dell N1524 24-Port

Tape Library: PowerVault TL1000

The facility will be used for running applications from domain areas such as Bioinformatics, Climatology, Astrophysics, Computational Fluid Dynamics, Computational Chemistry, Molecular Dynamics, Finite Element Analysis and Agriculture. CDAC will also provide demonstrations of applications from the Climatology and Bioinformatics domains.
This facility will give a boost to current research on high-performance computing by adopting a next-generation hybrid architecture, and will help users and researchers prepare their code for future architectures.

The facility will also include CDAC's indigenous products InClus, ONAMA and CHReME:

  • InClus is a web-based tool for managing multiple Linux systems from a central location. It has many useful features for general system administration.
  • ONAMA is a GUI-based installer and execution model that supports installation and execution of a well-selected set of parallel and serial applications across several engineering disciplines, such as Computer Science, Mechanical, Electronics & Communication, Electrical and Chemical Engineering. It also includes a number of CUDA (Compute Unified Device Architecture)-enabled applications.
  • CHReME (CDAC HPC Resource Management Engine) is a web-based portal that provides scientists, researchers and system administrators with an intuitive GUI for exploiting the resources of HPC systems. It supports the creation, submission, monitoring and management of jobs, and lets users configure their execution environment through compiler and library selection, scheduling parameters, etc. The engine also enables administrators to perform management tasks such as scheduler configuration, host management, and compiler and library settings. In addition, it contains a workflow-based WRF portal, which provides an in-depth GUI for configuring WRF scientific parameters and for WRF data pre-processing, main execution and post-processing.

Robust backup solution

Daily backup and long-term archives

A robust backup solution has been proposed to mitigate any storage system failure. Secondary storage will be tape-based, sized to accommodate both normal daily backups and long-term archives. Management, monitoring and remote support via the Internet, together with the necessary software, are also proposed.

A scientists' exchange program for HPC training and joint collaborative activities has been proposed as per the detailed annexure. Porting of the following scientific and engineering applications, in areas including Bioinformatics, Climatology, Astrophysics, Computational Fluid Dynamics, Computational Chemistry, Molecular Dynamics, Finite Element Analysis and Agriculture, would be carried out on the x86-64 architecture. Selected applications available for NVIDIA GPGPUs and the Intel MIC card will also be ported to those architectures.

1. Atmospheric & Ocean Modeling
  • Weather Research and Forecasting (WRF)
  • Mesoscale Model (MM5)
  • Modular Ocean Model (MOM4)
  • Regional Ocean Modeling System (ROMS)

2. Bioinformatics & Molecular Dynamics

  • T-Coffee
  • Biopython
  • CGView (Circular Genome Viewer)
  • ClustalW
  • FASTA
  • PHYLIP (PHYLogeny Inference Package)
  • Glimmer
  • mpiBLAST (also BLAST and NCBI-BLAST)
  • GROMACS
  • EMBOSS
  • HMMER
  • MPI-HMMER
  • MrBayes

3. Computational Fluid Dynamics
  • OpenFOAM
  • Free CFD
  • Gerris Flow Solver

4. Materials Modeling
  • ABINIT
  • Quantum ESPRESSO
  • OpenMX

5. Finite Element Analysis
  • OOFEM
  • Finite Element (FElt)
  • Code_Aster
  • Impact
  • Z88

6. Data Visualization Tools
  • GrADS
  • Ferret
  • Friend
  • Artemis
  • SeaView