So let me begin with why both of these models concern me.
In the first model, the Frequentist Markowitzian model, Ito's method assumes that all parameters are known. No estimators are needed.
That is a big deal because White in 1958 proved that models of the form $\tilde{w}=Rw+\varepsilon$, with $R>1$, have no solution if $R$ has to be estimated in the Frequentist paradigm. Although one could use methods like Theil's regression, doing so would be inconsistent with the economic theory. Ito's method is valid only if you know the true parameters for Apple Computer. I don't.
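To make the estimation difficulty concrete, here is a minimal Monte Carlo sketch of my own. The values $R=1.05$, fifty observations, and unit-variance noise are arbitrary assumptions, and the sketch only illustrates that the least-squares estimate of $R$ in an explosive model has to be examined on its own terms; it is not a restatement of White's proof.

```python
# Minimal Monte Carlo sketch (assumed values: R = 1.05, n = 50, unit-variance noise).
# Simulate the explosive model w_t = R * w_{t-1} + e_t many times and collect the
# least-squares estimate of R from each replication.
import numpy as np

rng = np.random.default_rng(0)
R_true, n, reps = 1.05, 50, 5000
estimates = []

for _ in range(reps):
    w = np.empty(n)
    w[0] = rng.normal()
    for t in range(1, n):
        w[t] = R_true * w[t - 1] + rng.normal()
    x, y = w[:-1], w[1:]
    estimates.append(np.dot(x, y) / np.dot(x, x))   # OLS slope through the origin

estimates = np.array(estimates)
print("mean of the estimates:", estimates.mean())
print("1st, 50th, 99th percentiles:", np.percentile(estimates, [1, 50, 99]))
```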
In the Bayesian framework, it is the parameters that are random, not the data. The framework can take one of two forms: the objective and the subjective.
In the objective form, $\theta=k$, a fixed but unobservable constant. A prior probability distribution is created by the observer regarding the location of $\theta$. Thought of in game-theoretic terms, $\theta$ is chosen by nature at time zero. $\theta$ is a random variable only if randomness is understood as uncertainty rather than chance.
In the subjective form, $\theta\in K$ and nature draws $\theta$ at the beginning of each experiment. $\theta$ is not a constant; it is a true random variable.
If you look at the website you provided, you will see that Bayesian methods do not provide a single point answer the way null hypothesis methods do. There is no equivalent to $\bar{x}$ or $s^2$. Instead, there is a distribution of possible values for $\mu$.
One can only arrive at single points by imposing a utility function, which is what they have done.
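As a hedged illustration of that last point, with numbers I am inventing purely for exposition (a conjugate normal model, a known variance, and an arbitrary prior, none of which come from the website's model): the posterior for $\mu$ is a whole distribution, and different loss functions pull different single numbers out of it.

```python
# Toy conjugate-normal example (all numbers are invented for illustration only).
# The posterior for the mean mu is a full distribution; a single "answer" appears
# only once a loss/utility function is imposed on that distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=0.08, scale=0.20, size=60)        # pretend return data

# Known-variance normal likelihood with a N(0, 1) prior on mu (an arbitrary choice).
sigma2, prior_mean, prior_var = 0.20**2, 0.0, 1.0
post_var = 1.0 / (1.0 / prior_var + len(data) / sigma2)
post_mean = post_var * (prior_mean / prior_var + data.sum() / sigma2)
posterior = stats.norm(post_mean, np.sqrt(post_var))

print("posterior mean   (point answer under squared-error loss):", posterior.mean())
print("posterior median (point answer under absolute-error loss):", posterior.median())
print("5th percentile   (a deliberately pessimistic summary):    ", posterior.ppf(0.05))
```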
It is the interpretation of this that makes the solution problematic.
Under the subjective interpretation, you are anticipating that nature will make a very bad draw in the next time period. Indeed, you are anticipating, in some sense, that the worst draw ever to have happened will be the next one.
Note that this is about the parameters and NOT the market realizations. It does not imply a crash. For example, if $\mu_{min}=1.01$, that does not preclude the realized value of $S_{t+1}$ from being a 95% increase. Because this is a systemic choice, what you are really doing, by implication, is assuming the very worst future set of economic conditions over the next period.
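A tiny simulation makes the distinction visible. The numbers here are my own assumptions ($\mu_{min}=1.01$ as a gross drift, a 30% volatility, and a lognormal step), not anything from the original problem; the point is only that a pessimistic parameter choice does not cap the realized outcome.

```python
# Tiny simulation (assumed numbers: gross drift 1.01, 30% volatility, lognormal step).
# Even under the pessimistic parameter mu_min = 1.01, large upward realizations of
# S_{t+1} remain possible; the worst-case choice is about parameters, not outcomes.
import numpy as np

rng = np.random.default_rng(2)
mu_min, sigma, n_paths = 1.01, 0.30, 1_000_000
gross_returns = mu_min * np.exp(sigma * rng.standard_normal(n_paths) - 0.5 * sigma**2)

print("largest simulated gross return: ", gross_returns.max())
print("share of paths up more than 95%:", (gross_returns > 1.95).mean())
```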
In the objective interpretation, every point in the posterior is a valid possible solution. Obviously, some points in the posterior are so improbable that a person could rule them out. The solution they provide is in the dense but improbable region.
In other words, it could be the real value of the parameters, but it is unlikely, and the error runs in the downward direction. In the worst case, that bias becomes systematic, should you have a bad historical sample from which to build your posterior. By choosing the worst case, you are likely to be surprised again and again.
The rough Frequentist equivalent would be to use the parameter estimates from the bottom of the 99.9% confidence intervals.
Every point in a confidence interval is treated as equiprobable; in effect, a confidence interval is a uniform distribution over its own range. If you had the correct utility function, then choosing the bottom of every confidence interval would be an equally valid solution.
As far as I can tell, there is nothing objectionable about plugging in the bottom value of a confidence interval instead of using the MVUE. While it still isn't a valid estimator, as per White, nothing makes it "wrong" to ignore the MVUE in favor of some point in an interval.
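A hedged sketch of that rough equivalent, using invented data and a 99.9% interval to mirror the text: compute the interval for a mean return and plug its lower endpoint into the allocation in place of $\bar{x}$.

```python
# Rough Frequentist analogue (invented data; the 99.9% level mirrors the text above).
# Instead of plugging the MVUE x-bar into the allocation, plug in the lower endpoint
# of a 99.9% confidence interval for the mean return.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
returns = rng.normal(loc=0.07, scale=0.18, size=120)    # pretend return data

xbar = returns.mean()
se = returns.std(ddof=1) / np.sqrt(len(returns))
lower, upper = stats.t.interval(0.999, df=len(returns) - 1, loc=xbar, scale=se)

print("MVUE (x-bar):            ", xbar)
print("bottom of 99.9% interval:", lower)    # the pessimistic plug-in value
```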
I have one other concern, but it would take a lot of work to determine its validity. By a lot of work, I mean I might solve it in an hour or it could take me weeks. I don't care that much, so I am not going to work on it, but I am providing it as a disclosure.
Axiomatically, there is only one difference between Kolmogorov's axioms and de Finetti's axiomatization via the Dutch Book Theorem. It has to do with how sets are added up. It turns out to be a big deal.
Under de Finetti's axiomatization of probability, which guarantees that market makers cannot be forced to lose money, probability only has to be finitely additive over sets. Under Kolmogorov's, it must be countably additive.
For example, one of the hidden assumptions deep in the background mathematics of things such as the minimum variance unbiased estimator of the population mean is that, for a continuous random variable, the distribution can be cut into as many disjoint pieces as there are integers.
Formally, a set function $\mu$ possesses countable additivity if, given any countable disjoint collection of sets $\{E_k\}_{k=1}^\infty$ on which $\mu$ is defined, $$\mu\left(\bigcup_{k=1}^\infty E_k\right)=\sum_{k=1}^\infty\mu(E_k).$$
An informal way to think about it is to imagine cutting the normal distribution into segments one unit wide, extending without end in both directions. That would be an infinite number of disjoint sets over a continuous distribution.
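A quick numerical check of that picture, using the standard normal purely as a convenient choice: adding up the probabilities of the unit-wide segments recovers the total mass, which is exactly the step countable additivity licenses.

```python
# Numerical check of the picture above (standard normal chosen for convenience).
# Cut the real line into disjoint unit-wide segments [k, k+1) and add up their
# probabilities; countable additivity is what licenses this sum converging to 1.
from scipy.stats import norm

total = sum(norm.cdf(k + 1) - norm.cdf(k) for k in range(-40, 40))
print(total)    # approximately 1.0
```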
Bayesian methods are not countably additive; they are only finitely additive. You may cut such a set $n$ ways, but you may not take $n$ to the limit at infinity.
As counterintuitive as it may sound, when placing money at risk, the difference is gigantic.
I believe that Leonard Jimmie Savage created the following analogy.
Imagine that you had an urn with $n$ lottery tickets in it. As a bookie or market maker, you could make rational decisions about how to price each ticket in a sensible manner.
Now imagine an urn with all of the integers in it. How could you sensibly price the risk for any ticket?
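One way to write down the difficulty, as my own summary of the standard finite-versus-countable additivity example rather than Savage's wording: a uniform price works for an urn with $n$ tickets, $$P(\text{ticket }k)=\frac{1}{n},\qquad\sum_{k=1}^{n}P(\text{ticket }k)=1,$$ but a "uniform" assignment over all the integers must give each ticket probability zero, so $$\sum_{k=1}^{\infty}P(\text{ticket }k)=0\neq1=P\left(\bigcup_{k=1}^{\infty}\{\text{ticket }k\}\right),$$ which is admissible under finite additivity but impossible under countable additivity.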
Frequentist methods produce solutions that allow someone who is clever to string together a convex combination of contracts such that they cannot lose money. Indeed, such contracts are often self-funding or pay out amounts greater than the cost of funds.
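Here is a stripped-down numeric illustration of such a book. The prices are my own toy numbers and are not tied to any particular Frequentist procedure; the point is only what an incoherent set of prices allows a counterparty to do.

```python
# Toy Dutch-book check (invented prices; they sum to 1.2, which is incoherent).
# Each contract pays 1 unit if its outcome occurs; the outcomes are mutually
# exclusive and exhaustive. Selling one of each contract at the quoted prices
# collects 1.2 up front and pays out exactly 1.0 in every state: a sure profit.
prices = {"up": 0.50, "flat": 0.40, "down": 0.30}

premium_collected = sum(prices.values())      # 1.2 collected up front
for state in prices:                          # exactly one contract pays in any state
    profit = premium_collected - 1.0
    print(f"state = {state:5s}  guaranteed profit = {profit:.2f}")
```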
If your counterparty uses a Frequentist method for things such as asset allocation, and you know how to do it, you can construct a riskless portfolio of contracts. It is the mathematical equivalent of color blindness. The Frequentist is effectively made color blind by the assumption of countable additivity.
To use an analogy, imagine a device that guaranteed a payout when a light was blue and never paid when it was green. You are color blind, so you believe that chance is involved. The other party is not. They think you are insane or, at least, individually irrational.
However, using a Bayesian method is not a sufficient condition to produce a coherent result, that is, a result that cannot be gamed by a clever counterparty.
Some prior distributions and some utility functions can create incoherent pricing.
You are safe if you use your real subjective proper priors from information outside the data set and if you use your personal utility function. It is when you start building artificial utility functions or priors that the research shows you can be forced into taking losses.
My concern is that this is a minimax solution, and minimax solutions are generally not coherent. At a minimum, I could probably construct a statistical arbitrage case against you. On the other hand, it is unlikely that you would care. The use of this utility function implies that you are very conservative and afraid; you would willingly give up return for safety.
The open question is whether or not I could find a way to force you to lose money.