Unless the conditional density varies rapidly with the conditioning event, brute-force rejection sampling works.
Let's state the general problem. You wish to study the distribution of some function $f(X,Y)$ conditional on an event $\mathscr E$ which is either extremely rare or, as in this case, has zero probability (but positive density).
The idea is that with some care--there are geometric and probabilistic subtleties here (that are partially explored below)--you can "thicken" $\mathscr E$ by an amount $\delta$ to an event $\mathscr E^\prime = \mathscr E(\delta) \supset \mathscr E$ that has positive probability. Generate $(X,Y)$ from its joint distribution and reject any results not in $\mathscr E^\prime,$ then examine the empirical distribution of $f(X,Y)$ and compare it to what you expected theoretically.
This works provided the probability of $\mathscr E^\prime$ is not so small that it takes forever for randomly-generated $(X,Y)$ to fall within it. To obtain a sample of size $n,$ you will need (on average) to generate $N = n/\Pr(\mathscr E^\prime)$ values of $(X,Y).$ I recommend starting with a modest value of $N,$ generating values, and observing what $n$ turns out to be. Extrapolate from that to determine how many values you have time to generate. If it's too small, you will have to thicken $\mathscr E$ more -- and you can make a reasonable guess about how much more is needed based on this preliminary study.
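As a sketch of that preliminary study (the batch size and the target sample size `n.target` here are arbitrary illustrative choices), the extrapolation amounts to one division:

```r
# Pilot run: estimate Pr(E') from a modest batch of draws, then
# extrapolate the total number N needed for a target sample size n.
set.seed(17)
x <- rexp(1e4)
y <- rexp(1e4)
p.hat <- mean(abs(x + y - 0.5) <= 0.05)  # estimated Pr(E')
n.target <- 5e3                          # desired conditional sample size
N.needed <- ceiling(n.target / p.hat)    # expected total draws required
```

If `N.needed` is beyond your computing budget, thicken $\mathscr E$ more and repeat the pilot.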
Let's use your problem as an example. $X$ and $Y$ are independent Exponential variables. Because the rate parameter only establishes the unit of measurement, we may take it to be $1$ with no loss of generality. The function is $f(X,Y) = X+Y.$
A quick and dirty R implementation mirrors the strategy. This one examines the distribution of $X$ conditional on $X + Y \approx 0.5,$ using an amount $\delta = 0.05$ to thicken $\mathscr E:$
set.seed(17)                                   # For reproducibility
x <- rexp(1e4)                                 # Draw (X, Y) from the joint distribution
y <- rexp(1e4)
z <- x + y                                     # z = f(x,y)
i <- abs(z - 0.5) <= 0.05                      # Thicken E to E'
X <- data.frame(x = x[i], y = y[i], z = z[i])  # Reject results outside E'
Here is what the empirical $(x,y)$ scatterplot might look like:
with(X, plot(x, y, asp = 1))
abline(a = 0.5, b = -1, lwd = 2)  # The line x + y = 0.5: the event E

The line $x+y=0.5$ is plotted for reference: it corresponds to the conditioning event $\mathscr E.$ The gray area (composed of thousands of generated points) more or less fills out $\mathscr E^\prime.$
With such a simulation in hand, you can explore the conditional distribution of any $f(X,Y)$ in all the familiar ways, such as with a histogram

(the dashed line is the theoretical value) or an empirical CDF (ECDF)

The (faintly visible) dashed blue line is the theoretical distribution, shown for reference.
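For this example the theoretical reference is easy to state: when $X$ and $Y$ are iid Exponential, the conditional distribution of $X$ given $X+Y=s$ is uniform on $(0,s).$ A sketch of the comparison, generating a sample as above (the sample sizes are illustrative):

```r
# Given X + Y = 0.5 with X, Y iid Exponential(1), X is Uniform(0, 0.5):
# constant conditional density 2 and conditional mean 1/4.
set.seed(17)
x <- rexp(1e6)
y <- rexp(1e6)
i <- abs(x + y - 0.5) <= 0.05                    # thickened event E'
m <- mean(x[i])                                  # should be near 1/4
plot(ecdf(x[i]), main = "Conditional ECDF of X") # empirical CDF
curve(punif(x, 0, 0.5), add = TRUE,
      lty = 3, col = "blue")                     # theoretical CDF
```

The agreement is close except near the endpoints, for the edge-effect reasons discussed next.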
There are clear edge effects near $X = 0.5$ but those are understandable consequences of the fact that $Y \lt 0$ is not possible, thereby reducing the probability at that corner of $\mathscr E^\prime.$ (This is why examining the first scatterplot is helpful: it alerts you to such problem regions and helps you interpret the results.)
I mentioned the need for care. Let's illustrate that by showing what happens when the event $\mathscr E$ is thickened differently, using the same three figures.

I have allowed $\mathscr E^\prime$ to be thicker at smaller $X$ values than at larger ones, thereby relatively oversampling the smaller $X$ values. (This has a rationale: it eliminates the edge effects near $X = 0.5$.) The histogram

and the ECDF

reflect that. They clearly differ from the theoretical result.
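For concreteness, here is one hypothetical way to produce such a biased thickening (the band I actually used may differ in detail): let $\delta$ shrink as $x$ grows.

```r
# A biased thickening of E: the band is widest at x = 0 and vanishes
# by x = 0.75, so small x values are relatively oversampled.
# (A hypothetical construction, for illustration only.)
set.seed(17)
x <- rexp(1e5)
y <- rexp(1e5)
z <- x + y
delta <- pmax(0.1 * (0.75 - x), 0)  # x-dependent half-width of the band
i <- abs(z - 0.5) <= delta          # accept into the lopsided E'
mean(x[i])                          # noticeably below the correct value 1/4
```

Because acceptance now depends on $x,$ the accepted sample no longer approximates the conditional distribution, even as the band shrinks.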
The moral of this example is that you need to understand what you're doing when you compute distributions conditional on events with zero probability!