1565523 : booking 12 GPU nodes on Jan 15
Created: 2026-01-08T15:11:15Z - current status: new
Summary of the Issue
A user requests a reservation of 12 GPU nodes on the Solaris subcluster of the Maxwell cluster for a workshop scheduled on [DATE] from 08:00 to 12:00. The user confirms that the Solaris partition is acceptable for their needs and asks if the reservation is still feasible.
Possible Solution/Next Steps
- Check Availability:
  - Verify that 12 GPU nodes are available in the `solgpu` partition of the Solaris subcluster for the requested time slot.
  - The Solaris subcluster has 15 GPU nodes (P100), but some may already be reserved or in use (see context below).
- Reservation Process:
  - If resources are available, the reservation can be created using Slurm commands (e.g., `scontrol create reservation`).
  - Example command (adjust as needed; `nodecnt` requests a node count, and `duration` uses `HH:MM:SS`):

    ```bash
    scontrol create reservation starttime=2026-01-15T08:00:00 \
        duration=04:00:00 users=[USERNAME] nodecnt=12 partitionname=solgpu
    ```

  - A reservation is limited to the listed users by default; avoid `flags=overlap` unless the nodes are intentionally shared with an existing reservation, so the workshop does not interfere with other users.
- Fallback Options:
  - If 12 nodes are unavailable, consider:
    - Reducing the number of nodes (e.g., 8–10).
    - Splitting the workshop into multiple sessions.
    - Using alternative partitions (e.g., `solarm` for ARM-based tasks, though not GPU-enabled).
- Confirmation:
  - Once reserved, share the reservation ID and node list with the user for verification.
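Assuming standard Slurm tooling on the Maxwell login nodes (partition name `solgpu` taken from the context below; `[USERNAME]` stays a placeholder), the check → reserve → confirm steps above might look like this. Treat it as a sketch to adapt, not exact site commands:

```shell
# 1. Check availability: count currently idle solgpu nodes and list
#    existing reservations that may overlap the requested window.
sinfo -p solgpu -t idle -h -o '%D'
scontrol show reservation

# 2. Create the reservation if enough nodes are free
#    (duration is HH:MM:SS, nodecnt requests a node count).
scontrol create reservation \
    starttime=2026-01-15T08:00:00 duration=04:00:00 \
    users=[USERNAME] nodecnt=12 partitionname=solgpu

# 3. Confirm: note the ReservationName and Nodes= list
#    to send back to the user.
scontrol show reservation
```

These commands require a Slurm installation and reservation-management privileges, so they can only be run by an operator on the cluster itself.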
Key Context Used
- The Solaris subcluster has 15 GPU nodes (P100) in the `solgpu` partition (source).
- GPU nodes support hardware-accelerated graphical applications via VirtualGL (source).
- Reservations must be requested in advance due to shared resource constraints.
Sources:
1. Solaris Subcluster Documentation – GPU Nodes
2. Slurm Reservation Guide (external, for reference on commands)