Yes, you can compare these two models using AIC. I presume you're asking because the models are not nested. AIC estimates the Kullback–Leibler divergence between each candidate model and the true data-generating distribution, and it does not require that the candidate models be nested within each other. See e.g. pages 266–267 of Burnham and Anderson (2002).
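In R, for example, the comparison is a one-liner once both models are fit. Here's a minimal sketch with simulated data standing in for yours (the variable names and the data-generating setup below are just placeholders):

set.seed(1)
x <- runif(100, 1, 10)                 # placeholder predictor
y <- rnorm(100, mean = 2 * log(x), sd = 1)

m1 <- lm(y ~ x)                        # model linear in P
m2 <- lm(y ~ log(x))                   # model linear in log(P)

AIC(m1, m2)  # non-nested is fine; smaller AIC = smaller estimated KL divergence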
That said, you could use a Taylor expansion of $\log(P)$ around $P = 1$ to argue that the models are approximately nested. Suppose you're fitting linear models. Then the right-hand sides of your two models look like
$$\alpha_0 + \alpha_1 P$$

and

$$\begin{aligned}
\beta_0 + \beta_1\log(P) &\approx \beta_0 + \beta_1(P-1) - \beta_1\tfrac{1}{2}(P-1)^2 + \beta_1\tfrac{1}{3}(P-1)^3 - \beta_1\tfrac{1}{4}(P-1)^4 \\
&= \beta_0^* + \beta_1^* P + \beta_2^* P^2 + \beta_3^* P^3 + \beta_4^* P^4.
\end{aligned}$$
Thus, you're essentially choosing between a simple linear regression and a fourth-degree polynomial regression, but with the very restrictive constraint that all of the polynomial coefficients are determined by the single parameter $\beta_1$: each $\beta_k^*$ is $\beta_1$ times a fixed Taylor coefficient.
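As a quick numerical sanity check of the expansion (just an illustration, not part of the modeling itself), you can compare $\log(P)$ with its fourth-order Taylor polynomial near $P = 1$:

p <- seq(0.5, 2, by = 0.25)
taylor4 <- (p - 1) - (1/2)*(p - 1)^2 + (1/3)*(p - 1)^3 - (1/4)*(p - 1)^4
round(cbind(p, log.p = log(p), taylor4, error = log(p) - taylor4), 4)
# the approximation is excellent near P = 1 and degrades as P moves away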
A small simulation shows how similar the two fits can look in practice:

set.seed(1)
x <- runif(100, 1, 10)        # example predictor (x was not defined in the original snippet)
y <- rnorm(100, mean = x, sd = 5)
log.x <- log(x)

f1 <- lm(y ~ log.x)           # model linear in log(P)
f2 <- lm(y ~ poly(x, 4))      # fourth-degree polynomial in P

r <- seq(min(log.x), max(log.x), length.out = 1000)
s <- seq(min(x), max(x), length.out = 1000)
t <- predict(f1, newdata = data.frame(log.x = r))
u <- predict(f2, newdata = data.frame(x = s))

par(mfrow = c(1, 2))
plot(y ~ log.x); lines(r, t, col = "red")       # fit of f1
plot(y ~ log.x); lines(log(s), u, col = "red")  # fit of f2, drawn on the log scale
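And to bring it back to the original question: with the two fits above in hand, base R's AIC() does the comparison directly.

AIC(f1, f2)  # returns df and AIC for each model; the smaller AIC is preferred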