1578278 : Ansys Lumerical simulation on Maxwell
Created: 2026-02-20T05:57:58Z - current status: new
Summary of the Issue
A user attempted to run an Ansys Lumerical FDTD simulation on the Maxwell cluster but encountered the following problems:
- Simulation failure: after setting up the `.fsp` file in the Lumerical GUI and submitting the job from the console (`run;`), they received the error `Warning: prompt line 1: in run: there was no simulation data. The simulation failed to run.` The user suspected a missing job-scheduler input file (`~/.config/Lumerical/job_scheduler_input.json`), which they could not locate on the entrance node.
- Missing configuration directory: the `~/.config/Lumerical/` directory did not exist, preventing them from creating the required `job_scheduler_input.json` file.
- Interactive job issues: when running an interactive job (`salloc -N 1 -p cfel -t 1-05:00:00`), the node did not recognize the `lumerical` module, suggesting a module-loading issue.
Suggested Solutions
1. Create the Missing Configuration Directory
The `~/.config/Lumerical/` directory must be created manually if it does not exist. The user should:
- Log in to the entrance node (or a compute node if running interactively).
- Run:
```bash
mkdir -p ~/.config/Lumerical
```
- Then create the `job_scheduler_input.json` file with the following template (adjust `[USERNAME]` and `[SSH_KEY_PATH]`):
```json
{
  "user_name": "[USERNAME]",
  "use_ssh": 1,
  "use_scp": 0,
  "cluster_cwd": "",
  "master_node_ip": "max-display.desy.de",
  "ssh_key": "[SSH_KEY_PATH]",
  "path_translation": ["", ""]
}
```
- Replace `[USERNAME]` with their actual cluster username.
- Replace `[SSH_KEY_PATH]` with the path to their private SSH key (e.g., `~/.ssh/id_rsa`).
- If submitting from outside the Maxwell cluster, adjust `path_translation` to map local paths to cluster paths.
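The directory creation and template substitution above can be combined into one shell snippet. This is a sketch, not an official Lumerical tool; `CLUSTER_USER` and `SSH_KEY` are placeholder variables (assumptions) that must be set to your own values:

```shell
#!/bin/bash
# Sketch: generate ~/.config/Lumerical/job_scheduler_input.json from the
# template above. CLUSTER_USER and SSH_KEY are placeholders (assumptions),
# not official variable names -- set them to your own values.
CLUSTER_USER="${CLUSTER_USER:-$USER}"
SSH_KEY="${SSH_KEY:-$HOME/.ssh/id_rsa}"
CONF_DIR="$HOME/.config/Lumerical"

mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/job_scheduler_input.json" <<EOF
{
  "user_name": "$CLUSTER_USER",
  "use_ssh": 1,
  "use_scp": 0,
  "cluster_cwd": "",
  "master_node_ip": "max-display.desy.de",
  "ssh_key": "$SSH_KEY",
  "path_translation": ["", ""]
}
EOF

# Sanity-check that the generated file parses as JSON
python3 -m json.tool "$CONF_DIR/job_scheduler_input.json" > /dev/null && echo "config OK"
```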
2. Verify Module Loading
The `lumerical` module must be loaded before running simulations. The user should:
- Ensure they are on a compute node (not the entrance node) when running simulations.
- Load the module explicitly:
```bash
module load maxwell lumerical/2024
```
- If running interactively, include this in the salloc session:
```bash
salloc -N 1 -p cfel -t 1-05:00:00
module load maxwell lumerical/2024
```
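To confirm the module actually took effect, it can help to check that the expected binaries are on `PATH`. This is a sketch: the tool names are the ones used elsewhere in this ticket, `check_lumerical_tools` is a hypothetical helper name, and `module` exists only on cluster nodes:

```shell
# Sketch: check that the Lumerical binaries are on PATH after loading the
# module. The tool names are the ones used elsewhere in this ticket; run
# this inside the salloc session, since `module` exists only on cluster nodes.
check_lumerical_tools() {
    local tool
    for tool in fdtd-solutions fdtd-engine-ompi-lcl; do
        if command -v "$tool" > /dev/null 2>&1; then
            echo "$tool: found at $(command -v "$tool")"
        else
            echo "$tool: NOT on PATH (is the lumerical module loaded?)"
        fi
    done
}

# Load the module first where available (no-op off the cluster)
if command -v module > /dev/null 2>&1; then
    module load maxwell lumerical/2024
fi
check_lumerical_tools
```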
3. Submit Jobs via Batch Script (Recommended)
Instead of running from the GUI console, the user should submit a Slurm batch script for better reliability. Example script (`lumerical_job.sh`):
```bash
#!/bin/bash
#SBATCH --partition=cfel        # Adjust partition as needed
#SBATCH --time=1-00:00:00       # Adjust time limit
#SBATCH --job-name=lumerical
#SBATCH --nodes=1               # Adjust nodes as needed

unset LD_PRELOAD
source /etc/profile.d/modules.sh
module load maxwell lumerical/2024 openmpi/4.1.8

mpiexec -n 8 fdtd-engine-ompi-lcl -logall -remote $PWD/[PROJECT_FILE].fsp
```
- Replace `[PROJECT_FILE]` with the name of the `.fsp` file.
- Submit the job:
```bash
sbatch lumerical_job.sh
```
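Capturing the job ID at submission time makes it easy to find the matching log afterwards. A minimal sketch, assuming the default Slurm output pattern `slurm-<JOBID>.out`; `log_name` and `submit_lumerical` are illustrative helper names, not cluster commands (`sbatch --parsable` prints only the job ID):

```shell
# Sketch: submit the batch script and report where the log will appear.
# Assumes the default Slurm output pattern slurm-<JOBID>.out; log_name and
# submit_lumerical are illustrative helper names, not cluster commands.
log_name() { echo "slurm-$1.out"; }

submit_lumerical() {
    local jobid
    # --parsable makes sbatch print only the job ID
    jobid=$(sbatch --parsable "$1") || { echo "sbatch failed" >&2; return 1; }
    echo "Job $jobid submitted; watch $(log_name "$jobid")"
}

# Usage on the cluster:
#   submit_lumerical lumerical_job.sh
```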
4. Check License Configuration
Ensure the license file (`~/.config/Lumerical/License.ini`) is correctly configured:
```ini
[license]
ansysserver\host=1055@zitlic10.desy.de
default=user
domain=1
```
- If using Lumerical 2023, set `domain=2`.
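A quick way to rule out license problems is to test whether the license server port is reachable from the node. This sketch uses bash's `/dev/tcp` feature; the host and port come from the `License.ini` entry above (`1055@zitlic10.desy.de`), and `check_license_port` is a hypothetical helper name:

```shell
# Sketch: test whether the license server port is reachable from the current
# node, using bash's /dev/tcp. Host and port are taken from the License.ini
# entry above (1055@zitlic10.desy.de); check_license_port is a hypothetical
# helper name.
check_license_port() {
    local host="$1" port="$2"
    if timeout 3 bash -c "exec 3<> /dev/tcp/$host/$port" 2> /dev/null; then
        echo "license server $host:$port reachable"
    else
        echo "cannot reach $host:$port (firewall, VPN, or wrong host?)"
    fi
}

# Run from a Maxwell node:
#   check_license_port zitlic10.desy.de 1055
```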
5. Debugging Steps
- Verify the `.fsp` file: open it in the GUI to ensure it contains valid simulation data.
- Check Slurm logs: after submission, review the Slurm output file (e.g., `slurm-<JOBID>.out`) for errors.
- Test interactively: run a small test simulation interactively to isolate the issue:
```bash
salloc -N 1 -p cfel -t 01:00:00
module load maxwell lumerical/2024
fdtd-solutions [PROJECT_FILE].fsp
```
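The log-checking step above can be partly automated by scanning the Slurm output files for the reported error. A minimal sketch; the search patterns are illustrative, not an exhaustive list of Lumerical failure modes, and `scan_logs` is a hypothetical helper name:

```shell
# Sketch: scan Slurm logs in the current directory for the error messages
# mentioned in this ticket. The search patterns are illustrative, not an
# exhaustive list of Lumerical failure modes.
scan_logs() {
    grep -l -E "no simulation data|[Ee]rror|[Ll]icense" slurm-*.out 2> /dev/null \
        || echo "no matching Slurm logs found"
}

scan_logs
```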