1580176 : Re: high quota usage by schropp on /gpfs/maxwell/home

Created: 2026-02-27T08:10:12Z - current status: new


Summary of the Issue

A user is nearing their 30GB quota limit on /gpfs/maxwell/home (91.2% used) and requests an increase to accommodate large Conda environments (ptypy, cuda, tomo, etc.) and other software tools. The user's largest directories are:

  - .conda/ (22GB total, with subdirectories like pkgs/ and envs/)
  - sources/ (7GB)
  - .cache/ (1.9GB)
  - local/ (1.7GB)

The user acknowledges the quota cannot be increased (per Maxwell’s policy) and seeks alternative storage solutions.


Suggested Solution

  1. Relocate Conda environments and large data:
     - DUST (/data/dust): ideal for temporary/reproducible data and software installations (e.g., Conda environments). Fast but not backed up, so suitable only for non-critical data. Example:

       ```bash
       mkdir -p /data/dust/[USERNAME]/conda_envs
       conda create --prefix /data/dust/[USERNAME]/conda_envs/tomo
       ```

     - Group-specific GPFS (if applicable): if the user belongs to a group with dedicated GPFS storage (e.g., CFEL, CSSB, EXFEL), they should contact their group admins for access.
     - dCache (/pnfs/desy.de/): for long-term storage of large datasets (not ideal for active Conda environments due to latency).

  2. Clean up the existing home directory:
     - Remove unused Conda packages:

       ```bash
       conda clean --all --force-pkgs-dirs
       ```

     - Delete old .cache files or sources/ if no longer needed.
     - Monitor quota usage with:

       ```bash
       mmlsquota --block-size auto max-home
       ```

  3. Avoid storing data in /home: per Maxwell's policy, /home is for small, critical files only (30GB hard limit). Large datasets and software belong on the alternative storage listed above.

  4. Contact group admins or Maxwell support:
     - If the user's group has dedicated storage (e.g., CFEL GPFS), request access via the appropriate admins.
     - For general questions, email maxwell.service@desy.de.
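If DUST becomes the usual home for environments, conda can be pointed there once via ~/.condarc instead of passing --prefix every time. A minimal config sketch; the conda_envs/conda_pkgs directory names are assumptions, and [USERNAME] is a placeholder to replace with the actual account name:

```yaml
# ~/.condarc (sketch) — replace [USERNAME] with your account name
# Directories where conda creates new environments:
envs_dirs:
  - /data/dust/[USERNAME]/conda_envs
# Directories where conda caches downloaded packages:
pkgs_dirs:
  - /data/dust/[USERNAME]/conda_pkgs
```

With this in place, plain `conda create -n tomo` lands on DUST rather than in ~/.conda.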
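Before deleting anything during cleanup, it helps to confirm which directories actually dominate the quota. A generic sketch, not Maxwell-specific:

```shell
# List top-level entries in $HOME, largest first; the explicit
# dot-glob also includes hidden directories such as .conda and .cache.
du -sh "$HOME"/* "$HOME"/.[!.]* 2>/dev/null | sort -rh | head -n 10
```

This quickly verifies figures like the 22GB .conda/ reported above before any removal.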
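A common way to keep /home small without breaking tools is to move a large cache directory to DUST and leave a symlink behind. A sketch under the assumptions that /data/dust/[USERNAME] exists and nothing is currently writing to the cache:

```shell
# Assumption: replace [USERNAME] with your account; the directory must exist.
DUST="/data/dust/[USERNAME]"

# Move the cache out of /home, then symlink it back so tools that
# expect ~/.cache keep working transparently.
mv "$HOME/.cache" "$DUST/cache"
ln -s "$DUST/cache" "$HOME/.cache"
```

The same pattern applies to .conda/pkgs if the .condarc approach is not used; note that data behind the symlink is then subject to DUST's no-backup policy.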

Sources

  1. Maxwell Storage Infrastructure
  2. Data Management on Maxwell
  3. Where to Store Scientific Data