
The problem: A fair coin is tossed twice. Let X be the number of heads and let Y be the indicator function of the event X = 1. Find P(X = x,Y = y) for all appropriate values of x and y.

I've been treating $S$, the sample space for the experiment, as

$S=\{ (T,H), (H,T), (H,H), (T,T) \}$ (writing each outcome as an ordered pair, since as sets $\{H,T\}$ and $\{T,H\}$ would be identical) and defining the DRV $X$ as:

$$ X(\omega) = \begin{cases} 0, & \omega = (T,T)\\ 1, & \omega \in \{(H,T), (T,H)\}\\ 2, & \omega = (H,H) \end{cases} $$

and the DRV $Y$ as:

$$ Y(\omega) = \begin{cases} 1, & \omega \in \{(H,T), (T,H)\}\\ 0, & \omega \in \{(H,H), (T,T)\} \end{cases} $$
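These definitions can be sanity-checked with a quick enumeration (a Python sketch; the encoding of outcomes as ordered pairs of characters is my own choice):

```python
from itertools import product

# Sample space: ordered pairs of toss results, e.g. ('H', 'T').
S = list(product("HT", repeat=2))

def X(w):
    """Number of heads in the outcome."""
    return w.count("H")

def Y(w):
    """Indicator of the event X = 1."""
    return int(X(w) == 1)

for w in S:
    print(w, X(w), Y(w))
# ('H', 'H') 2 0
# ('H', 'T') 1 1
# ('T', 'H') 1 1
# ('T', 'T') 0 0
```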

Is this in the right direction? I don't have a solution to verify against, so if y'all don't mind, maybe keep one hidden for me to check once I get there?

I'm truly not sure if I'm on the right track.

Thanks people!

Michael M
  • It looks fine to me. Although they might be hard to find, there are many worked examples of this type on our site. Here's one stab at a search that might bring up some of them: https://stats.stackexchange.com/search?q=%5Comega+%5Bself-study%5D. – whuber Jul 13 '17 at 20:58
  • yeah, a few searches yielded nothing really obvious since I'm pretty new to the topic. Thanks for the resources. I've bookmarked them. – TheMathochist Jul 13 '17 at 22:13

2 Answers


Yes, you're doing great! Now plug in the different combinations of $x$ and $y$ and count the outcomes. Hint: the probabilities should all sum to 1 ;)
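That counting can be sketched in a few lines of Python (a minimal check; the names and outcome encoding are mine, not from the thread):

```python
from collections import Counter
from itertools import product

# Enumerate the sample space as ordered pairs of tosses.
S = list(product("HT", repeat=2))

X = lambda w: w.count("H")    # number of heads
Y = lambda w: int(X(w) == 1)  # indicator of the event X = 1

# Fair coin: each outcome has probability 1/4, so counting outcomes suffices.
joint = Counter((X(w), Y(w)) for w in S)
pmf = {xy: count / len(S) for xy, count in joint.items()}

print(pmf)  # {(2, 0): 0.25, (1, 1): 0.5, (0, 0): 0.25}
```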

Good luck!

khaozavr
  • Given that the OP is being quite formal about this, "... and count" needs justification. – whuber Jul 13 '17 at 21:29
  • I'm pretty good at informal arguments, but it's holding me back in school. Trying to get my formalities down pat. :-) Much appreciated – TheMathochist Jul 13 '17 at 22:11
  • Okay, I'll expand on my hint then. Intuitively, "fair coin" means each of the four outcomes in the sample space must have probability $1/4$. That's the informal argument. Technically, though, all you know are these facts: (1) because a fair coin was used on the first toss, the chances of H and T each equal $1/2$: that is, $\Pr(\{HH,HT\})=\Pr(\{TH,TT\})=1/2$; (2) because a fair coin was used on the second toss, $\Pr(\{HH,TH\})=\Pr(\{HT,TT\})=1/2$; (3) the tosses are independent. From these facts alone (as well as the definition of "independent") you need to find the probability measure. – whuber Jul 13 '17 at 22:18
  • Ok. So correct me if I'm wrong: $P(X = 1, Y = 1) = 1/2$, $P(X = 2, Y = 0) = 1/4$, $P(X = 0, Y = 0) = 1/4$. – TheMathochist Jul 14 '17 at 01:54
  • Probability for me is like trying to get that tiny bit of eggshell out of a bowl of cracked eggs. – TheMathochist Jul 14 '17 at 11:20
  • @Mantissa001 you're correct! By "...and count" I meant counting the possible combinations of x and y. $P(X = 0, Y = 1)$ is not a possibility, which you correctly identified by not giving it any probability mass. In this case, as whuber already mentioned, every element of the space has the same probability, hence "count".

    Sorry for being unspecific earlier, I thought you just wanted to know whether you were on the right track - you were, and your formalities are top notch ;)

    – khaozavr Jul 14 '17 at 19:19
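Incidentally, the derivation whuber sketches in the comments above (fair marginals on each toss plus independence pinning down the atoms) can be checked numerically. A short sketch with hypothetical names:

```python
from itertools import product

# Facts (1) and (2): each toss is fair, so H and T each have chance 1/2.
p_first = {"H": 0.5, "T": 0.5}
p_second = {"H": 0.5, "T": 0.5}

# Fact (3): the tosses are independent, so each atom's probability
# is the product of its per-toss probabilities.
measure = {(a, b): p_first[a] * p_second[b] for a, b in product("HT", repeat=2)}

print(measure)  # each of the four atoms gets probability 0.25
```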

I don't know if I'm hitting exactly the point of your question, but I hope I'm helping you in some way.

I would address this problem using the definition of a probability measure. Following Kolmogorov's axioms, a probability measure $P$ on $\Omega$ with sigma-algebra $A$ is a function $P : A \to [0,1]$ such that:

1) $P(\emptyset) = 0$ (probability measure of the empty set);

2) $P(\bigcup_{i=1}^\infty E_{i}) = \sum_{i=1}^\infty P(E_{i})$ for any pairwise disjoint sets $E_{1}, E_{2}, \ldots \in A$ (countable additivity);

3) $P(\Omega) = 1$ (certain event).

In your case, the set $\Omega$ is the set of all possible outcomes of your experiment, so $\Omega := \{ (T,H), (H,T), (H,H), (T,T) \}$, and $A$ is the power set of $\Omega$. Then define the following probability measure: $P(\{\omega\}) = 1/\operatorname{Card}(\Omega) = 1/4$ for each $\omega \in \Omega$. This is the uniform distribution on the set, and we can use it because of the physical characteristics of the experiment (as pointed out in whuber's comments above).

Note that we haven't defined $P$ on all of the sets in $A$, which we would have to do in order to have a measure, but defining $P$ on these singleton sets is sufficient to induce a measure on the whole space: by countable additivity, the value of $P$ on any event is the sum of its singletons' masses. So now you can claim that there is a unique probability measure on $A$ consistent with this definition.

The last step is the following: consider the set $\{ (H,T), (T,H) \}$. It is the disjoint union of its building blocks $\{(H,T)\}$ and $\{(T,H)\}$, so by axiom 2) above, $P(\{ (H,T), (T,H) \}) = P(\{(H,T)\}) + P(\{(T,H)\}) = 1/2$. You now have a measure for each set you are interested in.
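This induced-measure construction can be mirrored in a few lines of Python (a sketch with hypothetical names, using exact fractions to keep the arithmetic honest):

```python
from fractions import Fraction
from itertools import product

# Masses of the singletons: the uniform measure on the four outcomes.
atoms = {w: Fraction(1, 4) for w in product("HT", repeat=2)}

def P(event):
    """Measure of an event: sum the singleton masses (finite additivity)."""
    return sum(atoms[w] for w in event)

# The event {(H,T), (T,H)} decomposes into disjoint singletons:
print(P({("H", "T"), ("T", "H")}))  # 1/2
print(P(set(atoms)))                # P(Omega) = 1
print(P(set()))                     # P(empty set) = 0
```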

EdoLu
  • At first I wanted to see if I was setting up the RVs correctly. But as I got deeper, I realized that, formally, I was lost. Intuition led me to the solution, but I couldn't easily formalize my argument. You have helped indeed. I just read about an induced measure and now I know. Also, you laid it out nicely for me to get a sense of how to justify the step of defining the probability measure for just this particular group of events $\omega$. (Was my use of $\omega$ above incorrect? You've used it as a label for what I would have called the sample space, containing all simple events.) – TheMathochist Jul 18 '17 at 19:13
  • Actually, you got it right and the labels you used were correct. Sets in math are usually named with capital letters; indeed, the sample space is usually labeled with a capital omega, $\Omega$, or with $S$ (as you did). "My" $\omega$, meanwhile, refers to the individual events that are elements of that set. So don't worry: your use is fine, and mine is not 100% correct. – EdoLu Jul 19 '17 at 20:27