UConn HPC Profile Configuration

nf-core pipelines have been successfully configured for use on Xanadu, the UConn HPC cluster.

To use the xanadu profile, run the pipeline with -profile xanadu. This will download and apply xanadu.config, which has been pre-configured for the UConn HPC cluster “Xanadu”. With this profile, all Nextflow processes run inside Singularity containers; images are downloaded and converted from Docker containers when necessary.
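For example, an nf-core pipeline can be launched with the profile as sketched below (nf-core/rnaseq and the input/output paths are placeholders for your own pipeline and data):

nextflow run nf-core/rnaseq -profile xanadu --input samplesheet.csv --outdir results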

A Nextflow module is available on the Xanadu HPC cluster; to use it, run module load nextflow or module load nextflow/<version> before running your pipeline. If you expect the Nextflow pipeline to consume more space than is available, you can set the work directory to /scratch/<userid>, which can hold up to 84 TB, with export NXF_WORK=/scratch/<userid>. CAUTION: make sure to remove items from this directory when you are finished; it is not intended for long-term storage.
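For example, a typical setup on a login node might look like the following (replace <userid> with your own username):

module load nextflow
export NXF_WORK=/scratch/<userid>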

Config file

See config file on GitHub

xanadu.config
params {
    config_profile_description = 'The UConn HPC profile'
    config_profile_contact     = 'noah.reid@uconn.edu'
    config_profile_url         = 'https://bioinformatics.uconn.edu/'
 
    // max resources
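    // upper limits applied to per-task resource requests by nf-core pipelines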
    max_memory                 = 2.TB
    max_cpus                   = 64
    max_time                   = 21.d
 
    // Path to shared singularity images
    singularity_cache_dir      = '/isg/shared/databases/nfx_singularity_cache'
}
 
process {
    executor       = 'slurm'
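    // route each task to a partition by its memory request: general (<= 245 GB), himem (<= 512 GB), himem2 (larger),
    // with the matching Slurm QOS set in clusterOptions below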
    queue          = { task.memory <= 245.GB ? 'general' : (task.memory <= 512.GB ? 'himem' : 'himem2') }
 
    clusterOptions = {
        [
            task.memory <= 245.GB ? '--qos=general' : '--qos=himem'
        ].join(' ').trim()
    }
}
 
executor {
    name            = 'slurm'
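    // throttle submission to 2 jobs per second, with at most 100 tasks queued in Slurm at once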
    submitRateLimit = '2 sec'
    queueSize       = 100
}
 
singularity {
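    // run all processes in Singularity containers, pulling and converting images into the shared cache defined above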
    enabled       = true
    cacheDir      = params.singularity_cache_dir
    autoMounts    = true
}

conda {
    enabled = false
}