I am currently studying the textbook In All Likelihood by Yudi Pawitan. Example 2.4 in Section 2.2 (Examples) reads as follows:
Example 2.4: Suppose $x$ is a sample from $N(\theta, 1)$; the likelihood of $\theta$ is $$L(\theta) = \phi(x - \theta) \equiv \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2} (x - \theta)^2}.$$ The dashed curve in Figure 2.3(d) is the likelihood based on observing $x = 2.45$.
Suppose it is known only that $0.9 < x < 4$; then the likelihood of $\theta$ is $$L(\theta) = P(0.9 < X < 4) = \Phi(4 - \theta) - \Phi(0.9 - \theta),$$ where $\Phi(x)$ is the standard normal distribution function. The likelihood is shown as the solid line in Figure 2.3(d).
Suppose $x_1, \dots, x_n$ are an independent and identically distributed (iid) sample from $N(\theta, 1)$, and only the maximum $x_{(n)}$ is reported, while the others are missing. The distribution function of $x_{(n)}$ is $$\begin{align} F(t) &= P(x_{(n)} \le t) \\ &= P(X_i \le t, \ \text{for each $i$}) \\ &= \{\Phi(t - \theta)\}^n. \end{align}$$ So, the likelihood based on observing $x_{(n)}$ is $$L(\theta) = p_\theta (x_{(n)}) = n\{ \Phi(x_{(n)} - \theta)\}^{n - 1} \phi(x_{(n)} - \theta).$$
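Before stating my question, here is a small numerical sketch of the three likelihoods in the example. It assumes `numpy` and `scipy.stats` (not part of the book); $x = 2.45$ and the interval $(0.9, 4)$ come from the example, while $n = 5$ and the $\theta$ grid are arbitrary choices of mine.

```python
import numpy as np
from scipy.stats import norm

# Likelihood from a single observation x of N(theta, 1): L(theta) = phi(x - theta).
def lik_point(theta, x=2.45):
    return norm.pdf(x - theta)

# Likelihood when only 0.9 < x < 4 is known: L(theta) = Phi(4 - theta) - Phi(0.9 - theta).
def lik_interval(theta, lo=0.9, hi=4.0):
    return norm.cdf(hi - theta) - norm.cdf(lo - theta)

# Likelihood when only the maximum x_(n) of an iid sample of size n is reported:
# L(theta) = n * Phi(x_(n) - theta)^(n - 1) * phi(x_(n) - theta).
def lik_max(theta, x_max, n):
    return n * norm.cdf(x_max - theta) ** (n - 1) * norm.pdf(x_max - theta)

theta = np.linspace(-2.0, 6.0, 801)
print(theta[np.argmax(lik_point(theta))])            # 2.45, the observation itself
print(theta[np.argmax(lik_interval(theta))])         # 2.45, the midpoint of (0.9, 4)
print(theta[np.argmax(lik_max(theta, 2.45, 5))])     # well below 2.45: the maximum
                                                     # of a sample tends to exceed theta
```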
What is the reasoning behind the chain of equalities $P(x_{(n)} \le t) = P(X_i \le t, \ \text{for each $i$}) = \{\Phi(t - \theta)\}^n$?
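To convince myself that the claimed distribution function is at least numerically right, I ran a quick Monte Carlo check (a sketch; the values of $\theta$, $n$, $t$, and the number of replications are arbitrary choices of mine, not from the book):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta, n, reps, t = 1.0, 5, 200_000, 2.0

# Draw `reps` iid samples of size n from N(theta, 1) and record each sample's maximum.
maxima = rng.normal(loc=theta, scale=1.0, size=(reps, n)).max(axis=1)

empirical = np.mean(maxima <= t)      # P(x_(n) <= t), estimated by simulation
claimed = norm.cdf(t - theta) ** n    # {Phi(t - theta)}^n, as in the book
print(empirical, claimed)             # agree up to Monte Carlo error (~0.42 here)
```

The two numbers agree closely, so the identity looks correct numerically; what I am after is the probabilistic justification, in particular for the middle equality $P(x_{(n)} \le t) = P(X_i \le t, \ \text{for each $i$})$.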