Yesterday I was given a data set $(a_1,\ldots,a_n)$ (i.e., $n$ i.i.d. realizations) and computed an empirical conditional probability of interest, $P(A_n|B_n)$, where $A_n, B_n$ are events defined in terms of the data.
Today, I received a new data point and my total data set is now $(a_1,\ldots,a_n,a_{n+1})$. I again want to compute the "updated" $P(A_{n+1}|B_{n+1})$ given this new data.
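For concreteness, here is a minimal sketch (in Python) of what I mean by computing the conditional probability directly from the definition; the indicator arrays `in_A` / `in_B` and the synthetic data are just placeholders standing in for my actual events and observations:

```python
import numpy as np

# Hypothetical illustration: each observation a_i is reduced to two indicators,
# in_A[i] = 1 if event A occurred for observation i, and in_B[i] = 1 if event B did.
# The names in_A / in_B and the synthetic data are placeholders, not my real data.
rng = np.random.default_rng(0)
in_A = rng.integers(0, 2, size=101)   # pretend n + 1 = 101 observations
in_B = rng.integers(0, 2, size=101)

def empirical_conditional(in_A, in_B):
    """P_hat(A | B) = #(A and B) / #(B): the plain definition of a conditional frequency."""
    n_B = in_B.sum()
    return np.nan if n_B == 0 else (in_A & in_B).sum() / n_B

# "Yesterday": first n points only; "today": all n + 1 points, recomputed from scratch.
p_yesterday = empirical_conditional(in_A[:-1], in_B[:-1])
p_today = empirical_conditional(in_A, in_B)
print(p_yesterday, p_today)
```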
My question is: how would "Bayesian updating" be used here? I could just recompute $P(A_{n+1}|B_{n+1})$ from the definition, but I'm interested in learning how the Bayesian updating technique applies. My best guess is
$$ P(A_{n+1}|B_{n+1}) = \frac{P(B_{n+1}|A_{n+1})\,P(A_n|B_n)}{P(B_{n+1})}, $$
i.e., using my previous posterior as my new prior, but this statement is not mathematically valid; in particular, the right-hand side is not necessarily in $[0,1]$. So, what is meant by "Bayesian updating," and why should I use it instead of just computing conditional probabilities from the definition?
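For comparison, the only concrete recipe I have seen go by the name "Bayesian updating" treats the unknown $\theta = P(A|B)$ as a parameter with a Beta prior, so that yesterday's posterior literally becomes today's prior (a conjugate Beta-Binomial model). Below is a minimal sketch of my understanding, continuing with the `in_A` / `in_B` arrays from the snippet above; the flat prior and the indicator encoding are my own assumptions, and I'm not sure this is what people actually mean:

```python
def beta_update(alpha, beta, a_new, b_new):
    """One conjugate Beta-Binomial update for theta = P(A | B).

    Only observations where B occurred inform theta: (A and B) counts as a
    success, (not-A and B) as a failure, and points with B absent leave the
    posterior unchanged.
    """
    if b_new:
        if a_new:
            alpha += 1.0   # success: A occurred given B
        else:
            beta += 1.0    # failure: A did not occur given B
    return alpha, beta

# Flat Beta(1, 1) prior (an arbitrary choice on my part), fold in the first n
# points, then absorb only the (n+1)-th point as today's single update.
alpha, beta = 1.0, 1.0
for a_i, b_i in zip(in_A[:-1], in_B[:-1]):
    alpha, beta = beta_update(alpha, beta, a_i, b_i)

alpha, beta = beta_update(alpha, beta, in_A[-1], in_B[-1])
posterior_mean = alpha / (alpha + beta)   # Bayesian point estimate of P(A | B)
print(posterior_mean)
```

Is something along these lines what is meant, or does "Bayesian updating" refer to something else entirely in this setting?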