1578278 : Ansys Lumerical simulation on Maxwell

Created: 2026-02-20T05:57:58Z - current status: new

Here is the anonymized and summarized version of the reported issue:


Summary of the Issue

A user attempted to run an Ansys Lumerical FDTD simulation on the Maxwell cluster but encountered the following problems:

  1. Simulation Failure: After setting up the .fsp file in the Lumerical GUI and submitting a job via the script console (run;), the user received the error "Warning: prompt line 1: in run: there was no simulation data." The simulation failed to run. They suspected this was due to a missing job scheduler input file (~/.config/Lumerical/job_scheduler_input.json), which they could not locate on the entrance node.

  2. Missing Configuration Directory: The ~/.config/Lumerical/ directory did not exist, preventing them from setting up the required job_scheduler_input.json file.

  3. Interactive Job Issues: When attempting to run an interactive job (salloc -N 1 -p cfel -t 1-05:00:00), the node did not recognize the lumerical module, suggesting a module loading issue.

Suggested Solutions

1. Create the Missing Configuration Directory

The ~/.config/Lumerical/ directory must be created manually if it does not exist. The user should:

  • Log in to the entrance node (or a compute node if running interactively).
  • Create the directory:

```bash
mkdir -p ~/.config/Lumerical
```

  • Then create the job_scheduler_input.json file with the following template (adjust [USERNAME] and [SSH_KEY_PATH]):

```json
{
    "user_name": "[USERNAME]",
    "use_ssh": 1,
    "use_scp": 0,
    "cluster_cwd": "",
    "master_node_ip": "max-display.desy.de",
    "ssh_key": "[SSH_KEY_PATH]",
    "path_translation": ["", ""]
}
```

  • Replace [USERNAME] with their actual cluster username.
  • Replace [SSH_KEY_PATH] with the path to their private SSH key (e.g., ~/.ssh/id_rsa).
  • If submitting from outside the Maxwell cluster, adjust path_translation to map local paths to cluster paths.
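The steps above can be combined into a single sketch. It writes the template from this section verbatim (the placeholders still need to be edited afterwards) and validates the result with python3 -m json.tool; the LUMERICAL_CFG_DIR override is an assumption added here for convenience, not part of Lumerical's configuration.

```shell
# Sketch: create the config directory and write the scheduler template in one go.
# LUMERICAL_CFG_DIR is a hypothetical override (useful for testing); Lumerical
# itself only reads ~/.config/Lumerical/job_scheduler_input.json.
cfg_dir="${LUMERICAL_CFG_DIR:-$HOME/.config/Lumerical}"
mkdir -p "$cfg_dir"

# Quoted heredoc so nothing is expanded; edit [USERNAME] and [SSH_KEY_PATH] afterwards.
cat > "$cfg_dir/job_scheduler_input.json" <<'EOF'
{
    "user_name": "[USERNAME]",
    "use_ssh": 1,
    "use_scp": 0,
    "cluster_cwd": "",
    "master_node_ip": "max-display.desy.de",
    "ssh_key": "[SSH_KEY_PATH]",
    "path_translation": ["", ""]
}
EOF

# Fail fast if the file is not valid JSON.
python3 -m json.tool "$cfg_dir/job_scheduler_input.json" > /dev/null && echo "job_scheduler_input.json written"
```

Running the script and then editing the two placeholders avoids the hand-typed-JSON syntax errors that the validation step would otherwise catch.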

2. Verify Module Loading

The lumerical module must be loaded before running simulations. The user should:

  • Ensure they are on a compute node (not the entrance node) when running simulations.
  • Load the module explicitly:

```bash
module load maxwell lumerical/2024
```

  • If running interactively, load the module inside the salloc session:

```bash
salloc -N 1 -p cfel -t 1-05:00:00
module load maxwell lumerical/2024
```

3. Submit a Slurm Batch Script

Instead of running from the GUI console, the user should submit a Slurm batch script for better reliability. Example script (lumerical_job.sh):

```bash
#!/bin/bash
#SBATCH --partition=cfel       # Adjust partition as needed
#SBATCH --time=1-00:00:00      # Adjust time limit
#SBATCH --job-name=lumerical
#SBATCH --nodes=1              # Adjust nodes as needed

unset LD_PRELOAD
source /etc/profile.d/modules.sh
module load maxwell lumerical/2024 openmpi/4.1.8

mpiexec -n 8 fdtd-engine-ompi-lcl -logall -remote $PWD/[PROJECT_FILE].fsp
```
  • Replace [PROJECT_FILE] with the name of the .fsp file.
  • Submit the job:

```bash
sbatch lumerical_job.sh
```
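To avoid editing the placeholder by hand for every project, the batch script above can be generated by a small wrapper. This is a convenience sketch, not part of the documented workflow: the function name make_lumerical_job and the example file my_sim.fsp are assumptions, while the script body matches the example above.

```shell
# make_lumerical_job: hypothetical helper that writes lumerical_job.sh for a
# given .fsp file name, reproducing the batch script shown above.
make_lumerical_job() {
  fsp="$1"   # project file name, e.g. my_sim.fsp
  # Unquoted heredoc: $fsp expands now, \$PWD stays literal for run time.
  cat > lumerical_job.sh <<EOF
#!/bin/bash
#SBATCH --partition=cfel       # Adjust partition as needed
#SBATCH --time=1-00:00:00      # Adjust time limit
#SBATCH --job-name=lumerical
#SBATCH --nodes=1              # Adjust nodes as needed

unset LD_PRELOAD
source /etc/profile.d/modules.sh
module load maxwell lumerical/2024 openmpi/4.1.8

mpiexec -n 8 fdtd-engine-ompi-lcl -logall -remote \$PWD/$fsp
EOF
  echo "now run: sbatch lumerical_job.sh"
}

make_lumerical_job my_sim.fsp   # my_sim.fsp is a placeholder example
```

The generated script is identical to the example above except that the project file name is already filled in.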

4. Check License Configuration

Ensure the license file (~/.config/Lumerical/License.ini) is correctly configured:

```ini
[license]
ansysserver\host=1055@zitlic10.desy.de
default=user
domain=1
```
  • If using Lumerical 2023, set domain=2.
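A quick way to confirm the license file is in place is to grep it for the keys shown above. This is a hypothetical sanity check: the function name check_license is an assumption, and it only verifies that the expected entries exist, not that the license server is reachable.

```shell
# check_license: hypothetical sanity check for ~/.config/Lumerical/License.ini.
# Accepts an alternative path as $1 (useful for testing).
check_license() {
  ini="${1:-$HOME/.config/Lumerical/License.ini}"
  if [ ! -f "$ini" ]; then
    echo "missing: $ini"
    return 1
  fi
  # Look for the server entry and the domain key from the template above.
  if grep -q 'ansysserver' "$ini" && grep -q '^domain=' "$ini"; then
    echo "License.ini looks populated"
  else
    echo "License.ini is missing expected keys"
    return 1
  fi
}
```

Usage: check_license, or check_license /path/to/License.ini to inspect a copy elsewhere.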

5. Debugging Steps

  • Verify the .fsp file: Open the file in the GUI to ensure it contains valid simulation data.
  • Check Slurm logs: After submission, review the Slurm output file (e.g., slurm-<JOBID>.out) for errors.
  • Test interactively: Run a small test simulation interactively to isolate the issue:

```bash
salloc -N 1 -p cfel -t 01:00:00
module load maxwell lumerical/2024
fdtd-solutions [PROJECT_FILE].fsp
```
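The "check Slurm logs" step can be sketched as a small helper that scans the newest slurm-<JOBID>.out file for common failure strings. The function name and the keyword list are assumptions chosen to match the symptoms in this ticket (missing simulation data, license problems).

```shell
# triage_slurm_logs: hypothetical helper that greps the most recent
# slurm-*.out file for common failure keywords.
triage_slurm_logs() {
  latest=$(ls -t slurm-*.out 2>/dev/null | head -n 1)
  if [ -z "$latest" ]; then
    echo "no slurm-*.out files found in $PWD"
    return 1
  fi
  echo "checking $latest:"
  # Keyword list is a guess at the most likely failure modes.
  grep -Ei 'error|license|no simulation data' "$latest" || echo "no obvious errors"
}

triage_slurm_logs || true   # in a directory with no job output this just reports no files
```

Run it in the directory from which the job was submitted, immediately after the job finishes.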

References

  1. Lumerical on Maxwell Documentation
  2. Lumerical Job Scheduler Integration (Ansys)
  3. Maxwell FastX Login Guide