To examine the estimator for $\Phi$ we will treat $\mathbf{x}$ as a control variable, meaning that the auction house sets its own prices for each round of auctions. Without loss of generality, let us suppose that the values in this control vector are in non-decreasing order and let $\Phi_i \equiv \Phi(x_i)$ denote the corresponding unknown CDF values at the chosen points, so that we have a set of ordered unknown points $0 < \Phi_1 \leqslant \cdots \leqslant \Phi_n < 1$.$^\dagger$ The likelihood function for the data is then:
$$\begin{align}
L_{\mathbf{x},\mathbf{t}}(\Phi)
&= \prod_{i=1}^n \text{Geom}(t_i| 1-\Phi(x_i)) \\[6pt]
&= \prod_{i=1}^n \Phi(x_i)^{t_i-1}(1-\Phi(x_i)) \\[6pt]
&= \prod_{i=1}^n \Phi_i^{t_i-1} (1-\Phi_i) \\[6pt]
\end{align}$$
As you can see, the likelihood depends on $\Phi$ only through its values at the control points, so without any additional structural assumption you can only learn about the distribution through inference on the CDF at these points.
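For concreteness, here is a minimal sketch (with hypothetical data values) showing how the log of this likelihood can be evaluated directly at a candidate vector of ordered CDF values:

#Log-likelihood at a candidate vector of ordered CDF values
#(the data values here are hypothetical)
loglike.points <- function(Phi, t) { sum((t-1)*log(Phi) + log(1-Phi)) }
t   <- c(2, 7, 6)          #Observed waiting times (hypothetical)
Phi <- c(0.5, 0.8, 0.85)   #Candidate ordered CDF values at the control prices
loglike.points(Phi, t)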
The MLE can be found by maximising the above function over the constrained space $0 < \Phi_1 \leqslant \cdots \leqslant \Phi_n < 1$. This is a constrained multivariate optimisation problem, which will typically require some transformation and numerical methods. Below I show how you can construct an algorithm to compute the MLE by writing the CDF points as a transformation of an unconstrained parameter vector, thereby converting the problem to an unconstrained optimisation.
Solving to obtain the MLE gives us an estimator of the form $\hat{\boldsymbol{\Phi}} = (\hat{\Phi}_1,...,\hat{\Phi}_n)$, and with some further work we can obtain the estimated asymptotic variance matrix for this estimator (and therefore the standard errors of each of its elements). To estimate the CDF at points other than the control points you could use interpolation, noting that this entails implicit assumptions about the structure of the distribution (e.g., estimating the median by linear interpolation from the MLE implicitly assumes that the CDF is linear between the relevant control points); we sketch this kind of interpolation after the example output below. It is also possible to use alternative modelling methods where you assume a particular parametric form for $\Phi$ and then estimate its parameters using the MLE or another estimator.
Computing the MLE: As noted above, the MLE involves a constrained multivariate optimisation problem, which can be solved by conversion to an unconstrained multivariate optimisation problem. To facilitate this analysis it is useful to write the parameters $\Phi_1,...,\Phi_n$ as transformations of an unconstrained parameter vector $\boldsymbol{\gamma} = (\gamma_1, ..., \gamma_n) \in \mathbb{R}^n$, given by:$^\dagger$
$$\Phi_i = \frac{\sum_{r=1}^i \exp(\gamma_r)}{1+\sum_{r=1}^n \exp(\gamma_r)}.$$
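To see that this maps any real vector into ordered values inside the unit interval, here is a quick sketch (with hypothetical $\boldsymbol{\gamma}$ values):

#This softmax-style transformation maps any real vector to strictly
#increasing CDF values inside the unit interval (hypothetical inputs)
gamma <- c(-1.2, 0.4, 2.0)
Phi   <- cumsum(exp(gamma))/(1 + sum(exp(gamma)))
Phi    #Strictly increasing values in (0,1)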
This transformation allows us to write the likelihood function in terms of the parameter $\boldsymbol{\gamma}$ and obtain the MLE through this parameter. If we let $\bar{t}_n \equiv \frac{1}{n} \sum_{i=1}^n t_i$ denote the sample mean of the waiting times then we can write the likelihood function as:
$$\begin{align}
L_{\mathbf{x},\mathbf{t}}(\boldsymbol{\gamma})
&= \prod_{i=1}^n \bigg( \frac{\sum_{r=1}^i \exp(\gamma_r)}{1+\sum_{r=1}^n \exp(\gamma_r)} \bigg)^{t_i-1} \bigg( 1 - \frac{\sum_{r=1}^i \exp(\gamma_r)}{1+\sum_{r=1}^n \exp(\gamma_r)} \bigg) \\[6pt]
&= \prod_{i=1}^n \bigg( \frac{\sum_{r=1}^i \exp(\gamma_r)}{1+\sum_{r=1}^n \exp(\gamma_r)} \bigg)^{t_i-1} \bigg( \frac{1+\sum_{r=i+1}^n \exp(\gamma_r)}{1+\sum_{r=1}^n \exp(\gamma_r)} \bigg) \\[6pt]
&= \frac{\prod_{i=1}^n (\sum_{r=1}^i \exp(\gamma_r))^{t_i-1} (1+\sum_{r=i+1}^n \exp(\gamma_r))}{(1+\sum_{r=1}^n \exp(\gamma_r))^{n \bar{t}_n}} \\[6pt]
\end{align}$$
and the corresponding log-likelihood is:
$$\begin{align}
\ell_{\mathbf{x},\mathbf{t}}(\boldsymbol{\gamma})
&= \sum_{i=1}^n (t_i-1) \log \bigg( \sum_{r=1}^i \exp(\gamma_r) \bigg)
+ \sum_{i=1}^n \log \bigg( 1 + \sum_{r=i+1}^n \exp(\gamma_r) \bigg) \\[6pt]
&\quad \quad \quad - n \bar{t}_n \log \bigg( 1+\sum_{r=1}^n \exp(\gamma_r) \bigg) \\[6pt]
&= \sum_{i=1}^n (t_i-1) \text{LSE}(\gamma_1,...,\gamma_i)
+ \sum_{i=1}^n \text{LSE}(0,\gamma_{i+1},...,\gamma_n) \\[6pt]
&\quad \quad \quad - n \bar{t}_n \text{LSE}(0, \gamma_1,...,\gamma_n). \\[6pt]
\end{align}$$
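Here $\text{LSE}$ denotes the log-sum-exp function, which allows the log-likelihood to be computed in a numerically stable manner (e.g., via matrixStats::logSumExp in R). As a quick sanity check, the LSE form can be confirmed against direct computation through the CDF values; here is a minimal sketch with hypothetical inputs:

#Sanity check (hypothetical inputs): LSE form of the log-likelihood
#agrees with direct computation through the CDF values
gamma <- c(-1.2, 0.4, 2.0)
t     <- c(2, 7, 6)
n     <- length(t)
Phi   <- cumsum(exp(gamma))/(1 + sum(exp(gamma)))
DIRECT  <- sum((t-1)*log(Phi) + log(1-Phi))
LSEFORM <- sum(sapply(1:n, function(i) (t[i]-1)*matrixStats::logSumExp(gamma[1:i]))) +
  sum(sapply(1:n, function(i) matrixStats::logSumExp(c(0, gamma[seq_len(n-i)+i])))) -
  n*mean(t)*matrixStats::logSumExp(c(0, gamma))
c(DIRECT, LSEFORM)    #These should agree up to floating-point error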
The partial derivatives of the log-likelihood (which are the elements of the score function) are:
$$\begin{align}
\frac{\partial \ell_{\mathbf{x},\mathbf{t}}}{\partial \gamma_k}(\boldsymbol{\gamma})
&= \sum_{i \geqslant k} \frac{(t_i-1) \exp(\gamma_k)}{\sum_{r=1}^i \exp(\gamma_r)}
+ \sum_{i < k} \frac{\exp(\gamma_k)}{1+\sum_{r=i+1}^n \exp(\gamma_r)}
- \frac{n \bar{t}_n \exp(\gamma_k)}{1+\sum_{r=1}^n \exp(\gamma_r)} \\[12pt]
&= \sum_{i \geqslant k} (t_i-1) \exp(\gamma_k - \text{LSE}(\gamma_1,...,\gamma_i))
+ \sum_{i < k} \exp(\gamma_k - \text{LSE}(0,\gamma_{i+1},...,\gamma_n)) \\[6pt]
&\quad \quad \quad - n \bar{t}_n \exp(\gamma_k - \text{LSE}(0, \gamma_1,...,\gamma_n)) \\[12pt]
&= \exp( \text{LSE}(\log(t_k-1) + \gamma_k - \text{LSE}(\gamma_1, ..., \gamma_k) , ..., \log(t_n-1) + \gamma_k - \text{LSE}(\gamma_1,...,\gamma_n)) ) \\[6pt]
&\quad \quad + \exp( \text{LSE}(\gamma_k - \text{LSE}(0,\gamma_{2},...,\gamma_n), ..., \gamma_k - \text{LSE}(0,\gamma_{k},...,\gamma_n)) ) \\[6pt]
&\quad \quad - n \bar{t}_n \exp(\gamma_k - \text{LSE}(0, \gamma_1,...,\gamma_n)) \\[6pt]
\end{align}$$
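Before relying on the analytic score inside an optimiser, it is worth checking it against a numerical gradient. Here is a minimal sketch of such a check (with hypothetical parameter and data values), comparing the expression above against central finite differences of the log-likelihood:

#Sanity check (hypothetical inputs): analytic score versus central
#finite differences of the log-likelihood
gamma <- c(-1.2, 0.4, 2.0)
t     <- c(2, 7, 6)
n     <- length(t)
LOGLIKE <- function(g) {
  Phi <- cumsum(exp(g))/(1 + sum(exp(g)))
  sum((t-1)*log(Phi) + log(1-Phi)) }
SCORE <- function(g) {
  T1 <- sapply(1:n, function(i) matrixStats::logSumExp(g[1:i]))
  T2 <- sapply(1:n, function(i) matrixStats::logSumExp(c(0, g[seq_len(n-i)+i])))
  TT <- matrixStats::logSumExp(c(0, g))
  sapply(1:n, function(k) {
    sum((t[k:n]-1)*exp(g[k] - T1[k:n])) +
      (if (k > 1) sum(exp(g[k] - T2[1:(k-1)])) else 0) -
      n*mean(t)*exp(g[k] - TT) }) }
FD <- sapply(1:n, function(k) {
  h <- 1e-6
  e <- replace(rep(0, n), k, h)
  (LOGLIKE(gamma + e) - LOGLIKE(gamma - e))/(2*h) })
cbind(analytic = SCORE(gamma), numerical = FD)    #Columns should agree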
Setting all partial derivatives to zero and solving for $\boldsymbol{\gamma}$ gives a critical point of the log-likelihood function, which yields the MLE $\hat{\boldsymbol{\gamma}}$. From the invariance property of the MLE we can then easily obtain the MLE $\hat{\boldsymbol{\Phi}}$. In the code below we create a function MLE.auction which computes the MLE for any input vectors x and t (with some other arguments to control the optimisation). The function produces a list giving the MLE and some associated outputs relating to the optimisation.
MLE.auction <- function(x, t, gradtol = 1e-7, steptol = 1e-7, iterlim = 1000) {

  #Check inputs x and t
  if (!is.vector(x))           stop('Error: Input x should be a numeric vector')
  if (!is.numeric(x))          stop('Error: Input x should be a numeric vector')
  if (!is.vector(t))           stop('Error: Input t should be a numeric vector')
  if (!is.numeric(t))          stop('Error: Input t should be a numeric vector')
  if (any(t != as.integer(t))) stop('Error: Input t should contain only integers')
  if (min(t) < 1)              stop('Error: Input t should contain only positive integers')
  if (length(x) != length(t))  stop('Error: Inputs x and t should have the same length')

  #Check input gradtol
  if (!is.vector(gradtol))  stop('Error: Input gradtol should be numeric')
  if (!is.numeric(gradtol)) stop('Error: Input gradtol should be numeric')
  if (length(gradtol) != 1) stop('Error: Input gradtol should be a single numeric value')
  if (gradtol <= 0)         stop('Error: Input gradtol should be positive')

  #Check input steptol
  if (!is.vector(steptol))  stop('Error: Input steptol should be numeric')
  if (!is.numeric(steptol)) stop('Error: Input steptol should be numeric')
  if (length(steptol) != 1) stop('Error: Input steptol should be a single numeric value')
  if (steptol <= 0)         stop('Error: Input steptol should be positive')

  #Check input iterlim
  if (!is.vector(iterlim))  stop('Error: Input iterlim should be numeric')
  if (!is.numeric(iterlim)) stop('Error: Input iterlim should be numeric')
  if (length(iterlim) != 1) stop('Error: Input iterlim should be a single numeric value')
  if (iterlim != as.integer(iterlim)) stop('Error: Input iterlim should be an integer')
  if (iterlim <= 0)         stop('Error: Input iterlim should be positive')

  #Set preliminary quantities (sort the data by price point)
  ORD  <- order(x)
  x    <- x[ORD]
  t    <- t[ORD]
  n    <- length(t)
  tbar <- mean(t)

  #Set negative log-likelihood function (requires the matrixStats package);
  #the analytic gradient is attached as an attribute for use by nlm
  NEGLOGLIKE <- function(gamma) {
    TT <- matrixStats::logSumExp(c(0, gamma[1:n]))
    T1 <- rep(0, n)
    T2 <- rep(0, n)
    for (i in 1:n) {
      T1[i] <- matrixStats::logSumExp(gamma[1:i])
      if (i < n) { T2[i] <- matrixStats::logSumExp(c(0, gamma[(i+1):n])) } }
    NLL <- - sum((t-1)*T1) - sum(T2) + n*tbar*TT
    #Set derivative
    SS <- n*tbar*exp(gamma - TT)
    S1 <- rep(0, n)
    S2 <- rep(0, n)
    for (k in 1:n) {
      S1[k] <- exp(matrixStats::logSumExp(log(t[k:n]-1) + gamma[k] - T1[k:n]))
      if (k > 1) { S2[k] <- exp(matrixStats::logSumExp(gamma[k] - T2[1:(k-1)])) } }
    DERIV <- - S1 - S2 + SS
    attr(NLL, 'gradient') <- DERIV
    #Give output
    NLL }

  #Compute optima
  OPT <- nlm(NEGLOGLIKE, p = rep(0, n),
             gradtol = gradtol, steptol = steptol, iterlim = iterlim)

  #Convert to MLE for phi
  GAMMA.MLE  <- OPT$estimate
  MAXLOGLIKE <- -OPT$minimum
  TTT <- matrixStats::logSumExp(c(0, GAMMA.MLE))
  TT1 <- rep(0, n)
  for (i in 1:n) { TT1[i] <- matrixStats::logSumExp(GAMMA.MLE[1:i]) }
  PHI.MLE <- exp(TT1 - TTT)
  MLE.OUT <- data.frame(x = x, t = t, MLE = PHI.MLE)
  rownames(MLE.OUT) <- sprintf('Phi[%s]', 1:n)

  #Give output
  list(MLE = MLE.OUT, maxloglike = MAXLOGLIKE, mean.maxloglike = MAXLOGLIKE/n,
       code = OPT$code, iterations = OPT$iterations,
       gradtol = gradtol, steptol = steptol, iterlim = iterlim) }
We can test this function on the set of input data $\mathbf{x} = (4, 6, 12, 15, 21)$ and $\mathbf{t} = (2, 7, 6, 12, 15)$. As you can see from the output, the computed MLE respects the CDF ordering for the input values, and the output also reports the maximised value of the log-likelihood function.
#Set data
x <- c(4, 6, 12, 15, 21)
t <- c(2, 7, 6, 12, 15)
#Compute MLE
MLE.auction(x = x, t = t)
$MLE
x t MLE
Phi[1] 4 2 0.5000001
Phi[2] 6 7 0.8461538
Phi[3] 12 6 0.8461538
Phi[4] 15 12 0.9166667
Phi[5] 21 15 0.9333334
$maxloglike
[1] -14.08348
$mean.maxloglike
[1] -2.816695
...
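As discussed above, the CDF at other price points can be estimated by interpolating these fitted values, bearing in mind the implicit assumption that the CDF is linear between the control points. Here is a minimal sketch using the base approx function (the query points are hypothetical):

#Estimate the CDF at other price points by linear interpolation of the MLE
#(implicitly assumes the CDF is linear between the control points)
FIT <- MLE.auction(x = x, t = t)
approx(x = FIT$MLE$x, y = FIT$MLE$MLE, xout = c(5, 10, 18))$y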
$^\dagger$ Here we use the simplifying assumption that $\Phi_1 > 0$ and $\Phi_n < 1$. It is simple to proceed without this simplifying assumption by allowing $\boldsymbol{\gamma} \in \bar{\mathbb{R}}^n$, which is the extended Euclidean space.