Start by thinking about how you would do this in a standard ordinary least squares model with a continuous outcome. Say that your software reported an Intercept of 5 and coefficients of -1.689 for Gender = 1, 2.506 for Image = 1, and -1.925 for the [Gender = 1] * [Image = 1] interaction.
With the "dummy coding" evidently used here, the Intercept is the expected outcome at the reference levels of categorical independent variables. SPSS chooses the highest level of each as the reference by default. So the Intercept value of 5 is the expected outcome for Gender = 2 and Image = 2.
The coefficient of -1.689 for Gender = 1 is the difference from that intercept value when Gender = 1 and you still have Image = 2. The predicted value for Gender = 1 and Image = 2 is thus 5 - 1.689 = 3.311.
The coefficient of 2.506 for Image = 1 is the difference from the intercept value when Image = 1 and you still are at the reference level of Gender = 2. The predicted value for Image = 1 and Gender = 2 is thus 5 + 2.506 = 7.506.
The [Gender = 1] * [Image = 1] interaction coefficient of -1.925 is the extra difference from what you would otherwise predict from those two individual coefficients. That is, to get the prediction for [Gender = 1] and [Image = 1] in ordinary least squares, you start with the Intercept and add both the individual coefficients and the interaction coefficient. For this hypothetical example, you get an estimate of 5 - 1.689 + 2.506 - 1.925 = 3.892.
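Those by-hand predictions are easy to check with a few lines of code. This is just a sketch: the coefficient values are the hypothetical ones above, and the `predicted` function name is mine.

```python
# Hypothetical OLS coefficients from the example above (dummy coding,
# reference levels Gender = 2 and Image = 2).
intercept = 5.0
b_gender1 = -1.689      # [Gender = 1]
b_image1 = 2.506        # [Image = 1]
b_interaction = -1.925  # [Gender = 1] * [Image = 1]

def predicted(gender, image):
    """Predicted outcome for a Gender/Image combination under dummy coding."""
    y = intercept
    if gender == 1:
        y += b_gender1
    if image == 1:
        y += b_image1
    if gender == 1 and image == 1:
        y += b_interaction
    return y

print(round(predicted(2, 2), 3))  # 5.0   (reference cell)
print(round(predicted(1, 2), 3))  # 3.311
print(round(predicted(2, 1), 3))  # 7.506
print(round(predicted(1, 1), 3))  # 3.892
```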
In the ordinal regression model the primary interest is in the regression coefficients themselves.* Instead of each coefficient representing a change in linear outcome associated with a change in a predictor, it represents the change in log-odds of being in a higher outcome category. Note that my hypothetical example used the same coefficients as your ordinal regression model. So your understanding:
Men (Gender 1) were less likely (the value is negative) than women to give a high rating of danger. Those exposed to image 1 were more likely to give a high rating of danger than those exposed to image 2.
is partly correct and needs to be modified to:
Men (Gender 1) were less likely (the value is negative) than women to give Image 2 a high rating of danger. Women (Gender 2) exposed to image 1 were more likely to give a high rating of danger than women exposed to image 2.
That's because, with the interaction, the coefficient for Gender (not really a "main effect" when there's an interaction) is the difference between men and women when viewing the reference Image, Image 2. The coefficient for Image (again, not really a "main effect") is the difference between Image 1 and Image 2 when viewed by the reference Gender, women. That's not specific to ordinal regression. When there's an interaction between predictors and the predictors are coded this way (called dummy or treatment coding), all single-predictor coefficients represent differences when their interacting predictors are at reference levels.
Interaction coefficients can be harder to express in words. One way to describe the negative interaction coefficient could be:
Men exposed to Image 1 were less likely to give it a high rating of danger than you would expect based on their reaction to Image 2 and women's reaction to Image 1.
That complexity of explaining an interaction coefficient is a reason to show illustrative examples at combinations of predictor values instead. With only 4 combinations that's pretty easy. Relative to the reference of women seeing Image 2, men seeing Image 2 had a difference of -1.689 in the log-odds of a higher danger rating, women seeing Image 1 had 2.506 higher log-odds of a higher danger rating, and men seeing Image 1 had a difference of -1.689 + 2.506 - 1.925 = -1.108 in log-odds.
Exponentiating those log-odds differences gives you the corresponding odds ratios of higher danger ratings.
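As a sketch of that last step (using the hypothetical coefficients from above; the printed odds ratios are rounded):

```python
import math

# Log-odds differences relative to the reference cell (women seeing Image 2),
# using the hypothetical coefficients from the example above.
log_odds_diff = {
    "men, Image 2": -1.689,
    "women, Image 1": 2.506,
    "men, Image 1": -1.689 + 2.506 - 1.925,  # about -1.108
}

# Exponentiate each log-odds difference to get an odds ratio for a
# higher danger rating, relative to the reference cell.
odds_ratios = {cell: math.exp(d) for cell, d in log_odds_diff.items()}
for cell, oratio in odds_ratios.items():
    print(f"{cell}: OR = {oratio:.3f}")
```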
Calculations up to that point are pretty simple; you just think in terms of changes of log-odds (or odds ratios) instead of changes in linear outcomes. But you should combine such estimates with estimates of the error, which require taking the coefficient covariance matrix into account. There presumably are ways to do that in SPSS, but I don't use it and software-specific questions are off-topic on this site.
This answer goes into more detail about interactions in general, and for generalized linear models like ordinal regression in particular. The UCLA OARC web page on ordinal regression in SPSS provides more information specific to ordinal regression and its implementation in SPSS. The SPSS syntax for calculating probabilities of specific outcome ratings given combinations of predictors does seem awkward, but the approach of starting with the probability for the highest outcome level and working downward from there, with the level-specific intercepts and the level-independent regression coefficients, can be implemented by hand for point estimates.
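A sketch of that by-hand calculation follows. Everything here is an assumption for illustration: the threshold values are made up for a hypothetical 4-level rating, and the SPSS PLUM parameterization logit P(Y <= j) = theta_j - eta is assumed; differencing the cumulative probabilities is equivalent to working down from the highest level.

```python
import math

def inv_logit(x):
    """Inverse logit: converts a log-odds value to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical "Threshold" values theta_1..theta_3 for a 4-level danger rating.
thresholds = [-1.0, 0.5, 2.0]

# Linear predictor for men seeing Image 1, from the coefficients above
# (the reference cell, women seeing Image 2, has eta = 0).
eta = -1.689 + 2.506 - 1.925

# Cumulative probabilities P(Y <= j) under the SPSS parameterization,
# with P(Y <= highest level) = 1 appended.
cum = [inv_logit(t - eta) for t in thresholds] + [1.0]

# Difference the cumulative probabilities to get per-category probabilities.
probs = [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]
print([round(p, 3) for p in probs])  # four probabilities summing to 1
```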
*Although there is a relationship between the reported "Threshold" values and a set of intercepts, "In general, these are not used in the interpretation of the results" according to the UCLA OARC web page on ordinal regression in SPSS. In the SPSS output, the "Threshold" values are the negatives of corresponding intercepts for each level.