Accepting the premise that no failures have occurred in the sample so far, I think there are three separate steps in the reasoning here.
(A) The best estimate of the probability of failure is currently zero.
(B) The estimated probability of failure cannot decrease. But a failure will occur one day, so the estimated probability of failure is certain to increase eventually.
(C) Since the probability of failure cannot decrease and may increase, then (assuming you must use the device at some point) it is safer to do so now than to wait.
Does anyone actually read this stuff? I have put some funny pictures at the bottom instead. Scroll down if you like; you may prefer them.
For (A), two related posts show how to estimate the probability of failure when no failures have yet occurred: see here and here. The true probability of failure remains unknown. These posts show it is not correct to deem the current probability of failure to be zero, merely because no failures have occurred in the sample observed to date. Since the device could have failed, even though it didn't, claiming the probability of failure was zero in those attempts is a confusion of ex-ante and ex-post. Focusing on the empirical (observed) probability of success of this particular device, over its recent uses, rather than considering the general rate of success among all such devices over a longer time period, is a classic example of the base rate fallacy, and to some extent the availability heuristic and recency bias. (Even if the device is "unique", we can still consider a wider set of somewhat similar devices, or analyse the likely performance of the device based on the performance of its components and the way they interact. This answer lists some references for how Probabilistic Risk Assessment is used for novel spacecraft or nuclear power plants, where engineers face a similar lack of long-term data about new or unique designs.)
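As a taster of those links, here's a minimal Python sketch of one standard way of putting a non-zero number on the failure probability after a clean run of successes. The estimator shown (the "rule of three") is my choice for illustration and isn't necessarily the one used in the linked posts.

```python
# Minimal sketch (illustrative choice, not necessarily the estimator from the
# linked posts): the "rule of three" gives an approximate 95% upper confidence
# bound on the failure probability after n uses with zero failures.
# It is an approximation that works best for reasonably large n.

def rule_of_three_upper_bound(n):
    return 3 / n

for n in (10, 30, 100):
    print(f"{n} uses, 0 failures: failure probability plausibly up to ~{rule_of_three_upper_bound(n):.2f}")
```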
For (B), the very fact that a future failure is even being countenanced is sufficient to show that the speaker does not believe the current probability of failure is zero, despite what they previously stated! So there is a glaring inherent logical contradiction in what the speaker is saying, regardless of the probability model we might use to analyse the situation. But this is also revealing a flaw in the statistical analysis: why would the estimated probability of failure get worse if a failure occurs, but only stay the same if there is another success? If a Bayesian has the prior belief that the probability of failure is truly zero, then any subsequent successes will not cause them to update their belief. But this would be a very poor choice of prior. See Cromwell's rule: "I beseech you... think it possible that you may be mistaken." Moreover, the fact the speaker countenances the possibility of failure shows this case does not apply here. Nor would the more sensible estimators linked to in the paragraph above behave this way. The probability of failure at the next attempt might already be estimated to be very low if lots of successes have been observed so far, but it will get even lower (without hitting zero exactly) if there is another success. This would even be the case for a simple estimator like empirical probability, if you had already included some failures in the "track record". For example, if there had been 2 failures in 9 attempts, then one more success would bring the empirical probability of failure down to 2 in 10 (a reduction from about 22% to 20%).
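To make that concrete, here's a short Python sketch (my own illustration) of an estimator that behaves sensibly: Laplace's rule of succession never reaches exactly zero, yet keeps falling with every additional success, just as the empirical probability does once some failures are already in the track record.

```python
# Laplace's rule of succession: estimated failure probability is
# (failures + 1) / (trials + 2). It never hits zero, and it keeps falling
# with every extra success, even from a "perfect" track record.

def laplace_failure_estimate(failures, trials):
    return (failures + 1) / (trials + 2)

print([round(laplace_failure_estimate(0, n), 3) for n in range(1, 6)])
# 0.333, 0.25, 0.2, 0.167, 0.143: still falling, never zero

# The empirical probability behaves the same way once failures are in the record:
print(round(2 / 9, 3), "->", round(2 / 10, 3))  # about 22% falls to 20%
```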
Provided you accept its premises, (C) seems more reasonable. Considering what I said about (A) and (B), could those premises ever be fulfilled? I previously argued against the premise that "the probability of failure cannot decrease and may increase" if we are measuring it properly, but I was making some implicit assumptions when I did so: that the true probability of failure is an unknown constant, that events are independent, and so on. There are other models where this premise is true, and (assuming we also accept the premise that we must use the device at some point) it is indeed safer to do so now rather than wait.
Here is one such model. An interesting feature of the claim made here is "once a failure has occurred, failures become more likely in future". The reasoning behind it was incorrect (just because your estimated probability rose, it doesn't mean the true probability rose — you may simply have been underestimating the risk and received a reality check!) but it isn't absurd to claim events are not independent. For mechanical devices, "it's more likely to fail once it's failed before" seems a decent heuristic — many of us have had a car that was never the same after its first breakdown. Let's suppose the device's state of maintenance is either adequate (state $A$, fails 20% of the time) or shoddy (state $S$, fails 60% of the time) and your faith in the engineering team is such that you initially believe it's 50:50 whether the device started in the adequate or shoddy state. The device's state stays the same except, after any failure, its state permanently becomes "shoddy" — so if it was previously adequate then all subsequent use becomes more risky, but if it was shoddy already then, well, plus ça change...
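Here's a tiny Python simulation of that toy model, with the failure rates and the 50:50 starting state exactly as described above, just to make the mechanics explicit.

```python
import random

# A tiny simulation of the toy model: the device starts adequate or shoddy
# (50:50), fails 20% vs 60% of the time respectively, and any failure leaves
# it permanently shoddy.

FAIL_PROB = {"adequate": 0.2, "shoddy": 0.6}

def simulate_uses(n_uses):
    state = "adequate" if random.random() < 0.5 else "shoddy"
    history = []
    for _ in range(n_uses):
        failed = random.random() < FAIL_PROB[state]
        history.append((state, "failure" if failed else "success"))
        if failed:
            state = "shoddy"   # a failure permanently degrades the device
    return history

random.seed(1)
print(simulate_uses(5))
```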
If the device has a 100% track record of two successful uses out of two attempts, how would you (in Bayesian parlance) "update your prior belief" that the device is 50:50 to be in the adequate or shoddy states? We apply Bayes' theorem! Given that the device was shoddy, the probability of observing two successful uses (which I'll denote $UU$) out of two is $\Pr(UU|S) = 0.4 \times 0.4 = 0.16$. If the device was adequate, that rises to $\Pr(UU|A) = 0.8 \times 0.8 = 0.64$. So what was the probability of observing two successful uses? We need to weight our previous answers by your prior probabilities for the state of the device:
\begin{align}
\Pr(UU) &= \Pr(UU|A) \Pr(A) + \Pr(UU|S) \Pr(S) \\
\Pr(UU) &= 0.64 \times 0.5 + 0.16 \times 0.5 = 0.4
\end{align}
How likely do we now think it is that the device is in an adequate state, given its two successful uses?
\begin{align}
\Pr(A|UU) &= \frac{\Pr(A\text{ and }UU)}{\Pr(UU)} \\
\Pr(A|UU) &= \frac{\Pr(UU|A)\Pr(A)}{\Pr(UU)} \\
\Pr(A|UU) &= \frac{0.64 \times 0.5}{0.4} = 0.8
\end{align}
Given the new evidence, our new (posterior) belief is that there's an 80% chance the device state is "adequate". How do we feel about using it on the third attempt? There's an 80% chance it's adequate and fails 20% of the time and a 20% chance it's shoddy and fails 60% of the time, so the probability of a failure $F$ on the third attempt given the available evidence is
$$\Pr(F|UU) = \Pr(F|A)\Pr(A|UU) + \Pr(F|S)\Pr(S|UU) = 0.2 \times 0.8 + 0.6 \times 0.2 = 0.28$$
Contrary to (A), despite the 100% observed success rate, we don't think there's zero chance of failure. If we don't fancy a 28% chance of failure, what if we decide to stand back and observe another use of the device first? This will give us more information to update our beliefs. I'll reset things so our new priors for adequate and shoddy are 0.8 and 0.2. If we observe a failed attempt, then even if the device wasn't shoddy before, we know for sure it's shoddy now! But if we observe a successful attempt,
$$\Pr(A|U) = \frac{\Pr(U|A)\Pr(A)}{\Pr(U|A)\Pr(A) + \Pr(U|S)\Pr(S)}
= \frac{0.8 \times 0.8}{0.8 \times 0.8 + 0.4 \times 0.2} = \frac{8}{9}$$
So the evidence suggests we should update to an even higher (about 89%) probability the device is adequate, and our estimate for the probability the next use will fail drops to $\frac{8}{9}\times 0.2 + \frac{1}{9}\times 0.6 \approx 0.244$.
So contrary to (B), the estimated probability of failure can fall as we observe extra successes. (To see the relevance of Cromwell's rule, rerun the above calculations with a prior where you're 100% certain the device is adequate. You'll find no number of successes will cause you to change that belief, so your estimated probability of failure gets stuck at 20%.)
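If you want to check the arithmetic, here's a short Python sketch that reproduces those updates (valid for observed runs consisting only of successes, since in this model the state can only change after a failure).

```python
# Reproduces the posterior and predictive probabilities calculated above,
# for observed runs made up only of successes (so no state transition
# happens during the observation period).

P_FAIL = {"A": 0.2, "S": 0.6}    # failure rates: adequate vs shoddy

def update_on_success(p_adequate):
    """Posterior P(adequate) after one more observed successful use."""
    num = (1 - P_FAIL["A"]) * p_adequate
    den = num + (1 - P_FAIL["S"]) * (1 - p_adequate)
    return num / den

def predictive_failure(p_adequate):
    """Probability the next use fails, given the current belief about the state."""
    return P_FAIL["A"] * p_adequate + P_FAIL["S"] * (1 - p_adequate)

p = 0.5                          # prior: 50:50 adequate vs shoddy
for n in range(1, 4):
    p = update_on_success(p)
    print(f"after {n} successes: P(adequate)={p:.3f}, P(next use fails)={predictive_failure(p):.3f}")
# after 2 successes: P(adequate)=0.800 and P(next use fails)=0.280
# after 3 successes: P(adequate)=0.889 and P(next use fails)=0.244

# Cromwell's rule: start from the dogmatic prior p = 1.0 and update_on_success
# returns 1.0 forever, so the failure estimate is stuck at 0.2.
```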
Does the fall in your estimated probability that the next use of the device will fail mean that you made a wise choice to delay using it yourself? Well, if you'd observed a failure instead, I suspect you'd also say the decision to delay was very wise, since it means you avoided an early demise! In truth it sounds like you'd rather delay than use, because you just don't want to use it at all — and who could blame you? But if you have to go, you're still better off going early, as (C) suggests. With each use, there's a 20% chance an adequate device becomes shoddy. If the probability the device was initially adequate is $p_A$, then the probability it is still adequate after $n-1$ uses is $0.8^{n-1} p_A$, which is clearly decreasing (provided $p_A > 0$, i.e. there was at least some chance it was initially adequate). The probability of failure must be rising: the chance of failing on the $n$-th use is $0.2(0.8^{n-1} p_A) + 0.6(1 - 0.8^{n-1} p_A)$. Using the device at the third attempt is better than using it at the fourth, regardless of the true probability that the device was initially in an adequate state. Collecting more data lets you better judge which "state of the world" you are in, but by itself doesn't improve the state of the world. Since (C)'s premise of an increasing probability of failure is, in this model, actually fulfilled, its exhortation was correct. Things are only going to get worse, so you might as well get on with it!
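Plugging numbers into that formula (a quick Python check, with the same 50:50 starting value for $p_A$ as before) shows the unconditional failure probability creeping up with every use you postpone:

```python
# Unconditional probability of failure on the n-th use (not conditioning on the
# outcomes of the earlier uses), with p_A the chance the device started adequate.

def failure_prob_on_nth_use(n, p_A=0.5):
    still_adequate = (0.8 ** (n - 1)) * p_A   # survived n-1 uses without failing
    return 0.2 * still_adequate + 0.6 * (1 - still_adequate)

print([round(failure_prob_on_nth_use(n), 3) for n in range(1, 6)])
# strictly increasing: [0.4, 0.44, 0.472, 0.498, 0.518]
```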

Now, I might be being overly generous in the way I wrote (C). I interpreted references to increasing probability of failure as being to the true probability, but given the context, perhaps they mean it's better to go now merely because the estimated probability of failure is increasing. If that is the case, it's another logical error, because it's the true probabilities that really matter. Let's alter our toy model so that even though the estimated probability of failure might improve if you wait an extra use, the actual probability always gets worse. At the moment a successful use of an adequate device does no harm, but we could introduce a small probability that the device transitions to the shoddy state despite, to all appearances, performing its job successfully. Or we could use a continuous model of wear and tear, where the 20% and 60% failure rates of adequate and shoddy devices gradually deteriorate with use, towards e.g. 30% and 70% respectively. Something like $\Pr(F|A)=0.3 - (0.3 - 0.2)\lambda^{n-1}$ would work, where $0 < \lambda < 1$ is a damping parameter (near one means slower deterioration). If we observe a few more successful uses, we will become more confident that the device is in an adequate state. If this overwhelms the effect of deterioration, our estimated probability of failure will decrease.
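For concreteness, here's how that wear-and-tear failure rate could look in Python; the choice of $\lambda = 0.9$ is arbitrary, purely for illustration.

```python
# Failure rate of an 'adequate' device that slowly deteriorates from 20% towards
# 30% with use; the damping parameter lam (here arbitrarily 0.9) sets the speed.
# The 'shoddy' rate would deteriorate from 60% towards 70% in the same way.

def fail_adequate(n, lam=0.9):
    return 0.3 - (0.3 - 0.2) * lam ** (n - 1)

print([round(fail_adequate(n), 3) for n in (1, 5, 20, 100)])
# creeps up from 0.2 towards the 0.3 ceiling
```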
But you'd still be irrational to wait for that to happen before using the device! An omniscient observer with access to the true probabilities would know that by waiting you are taking a worse chance now than if you'd not waited. You just feel more comfortable with the risk you were taking due to your lower estimate for it. And if you yourself reassessed what the probability of failure was on the turn that you skipped, given your new estimated probabilities for the state of the device, even you would see it would have been better to go then! In the end it's the true probability of failure which gets you. The fallacy of replacing a concept in your reasoning by your estimate or measure of it is called surrogation.
These estimated probabilities become much more decision-relevant, though, if I remove the premise in (C) that you must use the device eventually. Instead you can weigh up the payoffs of chickening out, using the device and it catastrophically failing, or successfully using the device and engaging in your mission (which in a decision tree may branch out into outcomes of various degrees of success and failure, each to be weighted by its own payoffs and probability of occurrence). We are in the realm of decision making under uncertainty and the value of (imperfect) information. Another option is to see how well the device works a couple more times before deciding to use it, or not, yourself. This lets you make a better estimate of the probability the device will fail. If there's no way for this to change your decision — either you have no choice or it can't influence the expected payoffs enough — then this information is of zero value. If it can influence your choices, then it has a positive value, which means you'd be prepared to pay a price for it. In business analytics we'd express that in monetary terms. In this context, you would accept a certain reduction in expected payoff: perhaps the delay allows a key target to escape (reducing the payoff of a successful mission) and the enemy to strengthen their defences (reducing the probability of a successful mission). You'd even accept some added personal risk from additional wear and tear to the device.
If better knowledge of what state of the world you're in lets you make better decisions, the value of that information can justify you entering a somewhat worse state in order to obtain it. So when you have the option of postponing a go/no-go decision until more evidence is available, we are no longer so persuaded by the argument that we should "do it now, because the situation will only get worse while we collect more data" — even if every word after the "because" is true. To make my formulation of (C) reasonable, I emphasised the lack of choice. But if the characters believe it's possible (albeit cowardly) to decide not to go ahead, their version of (C) is erroneous. Choice matters: choices give information a value, which we can trade off against the costs (including additional risk) of obtaining the information. Those costs are no longer the be-all and end-all, but must be set against the benefits of being able to change your mind based on the information: if the extra data suggests the state of the device is likely unacceptable, you can choose not to go even though you'd previously been inclined to try; or, if you were previously unwilling to go, you might decide to go after all if further evidence suggests the device is good enough.
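To make that trade-off concrete, here's a toy value-of-information calculation in Python. The failure probabilities (0.28 after two observed successes, about 0.244 after a third success, 0.6 after an observed failure) come from the two-state model above, but the payoffs are invented purely for illustration (they are not the numbers used in the figures below), and I ignore any other cost of delay.

```python
# Toy value-of-information calculation. Payoffs are invented for illustration:
# +100 for a successful mission, -300 for a catastrophic device failure, 0 for
# choosing not to go. Failure probabilities come from the two-state model above.

PAYOFF_SUCCESS, PAYOFF_FAILURE, PAYOFF_STAY = 100, -300, 0

def ev_go(p_fail):
    return (1 - p_fail) * PAYOFF_SUCCESS + p_fail * PAYOFF_FAILURE

# Decide now, after two observed successes (failure probability 0.28):
ev_decide_now = max(ev_go(0.28), PAYOFF_STAY)       # max(-12, 0) = 0: don't go

# Watch one more use first. It succeeds with probability 0.72, after which our
# failure estimate drops to ~0.244; if it fails, the device is certainly shoddy
# and our failure estimate jumps to 0.6.
ev_after_success = max(ev_go(0.244), PAYOFF_STAY)   # positive: now (just) worth going
ev_after_failure = max(ev_go(0.6), PAYOFF_STAY)     # 0: definitely don't go
ev_wait_and_see = 0.72 * ev_after_success + 0.28 * ev_after_failure

print(ev_decide_now, round(ev_wait_and_see, 1))
# With these payoffs the extra observation has positive value: it can flip a
# "don't go" into a "go". If we had no choice to respond, its value would be zero.
```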
My aim in this answer was to explore some of the rational ways to approach decision-making under uncertainty, in the hope it illuminates some of the irrationality in your quote. Showing how those irrationalities map onto a taxonomy of Fallacy X or Bias Y is trickier. It would be nice to see an answer that looks in more detail at whether probability estimates are well calibrated, as that's clearly an issue in this case. Another quick thought experiment: we could imagine an early prototype device that generally improves over time, as each use gives valuable real-world data to your engineer. By sheer luck it hasn't failed yet, but we might model the probability of failure as falling as further improvements are made. This example really puts the lie to the claim "since the current failure rate is zero, the probability of failure is doomed to increase", even though it's true the empirical probability cannot improve. This illustrates why we shouldn't rely on the empirical probability to make our decisions. While this means we'd benefit from delaying our use of the device, the extent to which this goes against the advice in (C) to use the device ASAP is because the premise of an increasing risk of failure has been completely reversed, rather than because of a flaw in (C)'s logic. But nor would the analogous exhortation "it's always getting better, so you should always delay" be very helpful: you'd just postpone indefinitely. This is where you need a decision-theoretic framework that lets you trade off the benefits and costs of delaying.
Funny pictures
Best viewed full-size in a new tab (browsers usually let you do this if you right-click on a computer or press and hold on a mobile device). Read a decision tree left-to-right, but payoffs flow through right-to-left. The expected value of a "chance" node is found by weighting its payoffs by their probabilities; the value of a "decision" node comes from selecting the optimum choice. State transition diagrams (see: Markov chain) show how the state of the device changes with each use. Numbers on the arrows leaving a state represent probabilities: they must sum to 100%, and an arrow can loop back round if the state doesn't change. These diagrams are not "solutions" to the problem, but rather illustrations to show how (A) and (B) can fail and to explore why the exhortation to "go as soon as possible, before the success rate gets worse" is surprisingly thorny.
Figure 0: If the device improves with use, you're better off delaying
I numbered this differently as it's the only scenario I explored where the device tends to get better with use: every time the device fails, it gets repaired. A repair returns an 'adequate' device to an adequate state, and one third of repairs on 'shoddy' devices fix an underlying bug, so they then work adequately. The longer a device has been in service, the more likely it is that all its bugs have been ironed out (even 'adequate' devices fail 20% of the time, but that's still better than the 60% for shoddy ones).

Contrary to (A), the 2 out of 2 success rate doesn't mean our estimated probability of failure is zero. Contrary to (B), the estimated probability of failure can improve if we observe another success. Another test run risks breaking our 100% record of success, but that doesn't stop us preferring to go later rather than now — the true probability of success is what matters, and that never gets worse and sometimes improves.
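For those who prefer code to pictures, here's a quick Python sketch of the Figure 0 dynamics as I described them (starting from the same 50:50 belief as before): every failure triggers a repair, and a third of repairs on shoddy devices fix them, so each use converts a shoddy device to an adequate one with probability $0.6 \times \tfrac{1}{3} = 0.2$.

```python
# Figure 0 dynamics: adequate devices stay adequate; a shoddy device fails 60%
# of the time and one third of those repairs fix the bug, so per use
# P(shoddy -> adequate) = 0.6 * (1/3) = 0.2. The failure probability on each
# use therefore drifts down towards the 20% floor.

def figure0_failure_probs(n_uses, p_adequate=0.5):
    probs = []
    for _ in range(n_uses):
        probs.append(0.2 * p_adequate + 0.6 * (1 - p_adequate))
        p_adequate += 0.2 * (1 - p_adequate)   # some shoddy devices get fixed
    return probs

print([round(p, 3) for p in figure0_failure_probs(5)])
# falling: [0.4, 0.36, 0.328, 0.302, 0.282]
```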
Figure 1: If the device deteriorates with use, you're best to go ASAP
In this (and subsequent) figures, shoddy devices stay shoddy, while adequate devices become shoddy if they fail. The more uses it's had, the more likely a device is to have degraded to the shoddy state.

Unsurprisingly, now that the true probability of failure really does get worse with time, we prefer to take (C)'s advice and go early. But just when everything seems cut and dried...
Figure 2: Choice changes everything, by giving extra information a value
In previous figures, the results from an extra trial run weren't useful to us, because we couldn't respond to them. But what if we got the choice to go or not, depending on whether we think the device is in a favourable state? The value of extra data, yielding a clearer picture of what state we're in, means we may now prefer to delay!

The above tree is "abbreviated" to focus on the decision-making. Here's the full tree.

Of course we wouldn't always choose to delay: it depends on the numbers chosen for the payoffs and probabilities. If devices deteriorate quickly, that would put us off delaying. The extra data can be of no value if it wouldn't change our mind. E.g. in the bottom branch of the tree, if we'd choose to go on the mission regardless of whether the trial run was a success or failure, then the trial run was pointless and not worth risking a deterioration of the device for.
Figure 3: Extra risk can be part of the price you're willing to pay for more information
Even when the probability of failure really is getting worse over time, having choice means we no longer believe "the increasing risk means you're better to go now than to delay". You might quibble that in the above example, we only use the device if we see a third success, in which case the risk of the device deteriorating hasn't materialised (regardless of whether the device was truly adequate or shoddy all along). So I now add an extra element of risk if you choose to delay teleporting: even if you arrive in one piece, the guards are more likely to get you. As an extra disincentive to delay, I also reduced the mission payoffs. Again it all comes down to the exact numbers used, but in this case delaying has a price you're (only just) prepared to pay. So yes, even when delay definitely makes things riskier, it can still be rational to delay and collect more data... provided you have the ability to respond to what you learn by doing so.
