1588660 : Query Regarding Setting up mamba env with pytorch compatible packages

Created: 2026-03-31T08:50:31Z - current status: new

Anonymized Summary: A user with access to the Maxwell computing cluster is attempting to set up a Mamba environment with PyTorch + CUDA support and additional packages (sbi, PyTorch Lightning) for model training on Maxwell GPUs. The user has tried loading the pre-installed pytorch module and creating a Mamba environment but encounters dependency resolution issues during installation.


Suggested Solution:

To create a custom Mamba environment with PyTorch + CUDA support on Maxwell, there are three options.

Option 1: Pre-installed PyTorch Module + Mamba Environment

  1. Load the PyTorch module (includes CUDA support for Maxwell GPUs):

module load maxwell pytorch  # or a specific version, e.g., pytorch/2.3.1

  2. Create a Mamba environment and install the additional packages:

module load maxwell mamba
. mamba-init
mamba create -n my_env python=3.9  # adjust Python version as needed
mamba activate my_env
mamba install -c conda-forge pytorch-lightning sbi

  3. If conflicts arise, try --strict-channel-priority or pin versions (e.g., pytorch-lightning=2.0.0).

Option 2: Full Conda/Mamba Setup (Advanced)

If the pre-installed module is insufficient:

  1. Load CUDA and Mamba:

module load maxwell mamba cuda/11.8  # match CUDA version to PyTorch requirements
. mamba-init

  2. Create and activate the environment, then install the extra packages:

mamba create -n my_env python=3.9 pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
mamba activate my_env
mamba install -c conda-forge pytorch-lightning sbi

  3. Verify CUDA support:

python -c "import torch; print(torch.cuda.is_available())"

Option 3: Spack Environment (Alternative)

If Mamba/Conda fails, use Spack (pre-configured for Maxwell):

module load maxwell spack
spack env activate pytorch  # Pre-installed PyTorch environment
python -m pip install pytorch-lightning sbi  # Install additional packages via pip

Job Submission Example

After setup, submit a Slurm job to test GPU training:

#!/bin/bash
#SBATCH --job-name=torch_test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --partition=gpu  # or hpcgwgpu
#SBATCH --gres=gpu:1

module load maxwell pytorch  # or activate your Mamba environment
python -c "import torch; print(torch.cuda.get_device_name(0))"
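Beyond the one-liner above, a slightly fuller smoke test can confirm that the GPU actually executes work before a real training run. The sketch below is illustrative (the function name gpu_smoke_test is my own); it assumes PyTorch was installed via one of the options above and degrades gracefully when torch or a GPU is missing:

```python
def gpu_smoke_test() -> str:
    """Report whether a CUDA GPU is usable; run a tiny matmul if so."""
    try:
        import torch
    except ImportError:
        return "torch not installed in this environment"
    if not torch.cuda.is_available():
        return "CUDA not available; check 'module load' and drivers"
    name = torch.cuda.get_device_name(0)
    x = torch.randn(512, 512, device="cuda")
    y = x @ x  # small matmul to exercise the GPU
    return f"GPU OK: {name}, result shape {tuple(y.shape)}"

if __name__ == "__main__":
    print(gpu_smoke_test())
```

Replace the python -c line in the Slurm script with `python gpu_smoke_test.py` (or any filename you choose) to get a clearer pass/fail message in the job log.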

Troubleshooting Tips

  • Dependency Conflicts: Use mamba install --dry-run to preview conflicts.
  • CUDA Mismatch: Ensure PyTorch and CUDA versions are compatible (e.g., PyTorch 2.3.1 → CUDA 12.1).
  • Maxwell-Specific Issues: Check GPU compatibility with nvidia-smi on a compute node.
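The CUDA-mismatch tip can be made mechanical with a small lookup before building the environment. The table below is a partial, illustrative sample (PyTorch 2.3.1 shipped wheels for CUDA 11.8 and 12.1; 2.0.0 for 11.7 and 11.8) — always confirm the current matrix on pytorch.org before pinning versions:

```python
# Illustrative PyTorch-to-CUDA compatibility check. The table is a
# partial example, not an authoritative matrix.
COMPAT = {
    "2.3.1": {"11.8", "12.1"},
    "2.0.0": {"11.7", "11.8"},
}

def cuda_ok(pytorch_version: str, cuda_version: str) -> bool:
    """True if this CUDA toolkit version is known to match the PyTorch release."""
    return cuda_version in COMPAT.get(pytorch_version, set())

print(cuda_ok("2.3.1", "12.1"))  # True
print(cuda_ok("2.3.1", "10.2"))  # False
```

Running such a check (or just eyeballing the table) before `mamba create` avoids the most common source of "CUDA not available" surprises on the compute nodes.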

Sources Used

  1. Maxwell PyTorch Documentation
  2. Maxwell Spack Tutorial (PyTorch Environments)