In general, $S$ is NOT independent of $W$ given an observation of $T$. That is, $p(S|W,T)$ is not equal to $p(S|T)$.
Proof: To visualize the dependencies, here is the corresponding Bayesian network:
$$T \rightarrow W, \qquad T \rightarrow S, \qquad W \rightarrow S$$
i.e., $T$ points to both $W$ and $S$, and $W$ also points to $S$.
The dependencies you described give the factorization:
$$p(S,W,T)=p(S|W,T)p(W|T)p(T)$$
$$p(S|T) = \sum_W{p(S,W|T)} = \sum_W{p(S|W,T)p(W|T)}$$
As we can see, $p(S|T)$ is not equal to $p(S|W,T)$ in general; they coincide only if $p(W|T)$ is deterministic (i.e., $W$ has absolutely zero entropy given $T$), or if $p(S|W,T)$ happens not to depend on $W$ at all.
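For concreteness, here is a minimal numeric sketch of that argument; all the probability values are made up purely for illustration, and any non-degenerate choice of $p(W|T)$ would do:

```python
# Minimal sketch (made-up numbers): S and W are binary, and everything
# below is conditioned on one fixed value of T.
p_W_given_T = {"w0": 0.3, "w1": 0.7}        # p(W | T)
p_S1_given_WT = {"w0": 0.25, "w1": 0.80}    # p(S = s1 | W, T)

# Marginalize W out:  p(S = s1 | T) = sum_W p(S = s1 | W, T) * p(W | T)
p_S1_given_T = sum(p_S1_given_WT[w] * p_W_given_T[w] for w in p_W_given_T)

print(p_S1_given_T)          # 0.635
print(p_S1_given_WT["w0"])   # 0.25 -> differs from 0.635,
print(p_S1_given_WT["w1"])   # 0.80 -> so S is not independent of W given T
```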
Intuitive Example:
Let's say $T=\text{hot}$, $W=\text{heavy}$, and $p(W=\text{heavy}|T=\text{hot}) = 0.1$.
1) $p(S=\text{fast}|T=\text{hot})=\sum_W{p(S=\text{fast}|W,T=\text{hot})p(W|T=\text{hot})}$ (this assumes we have no idea whether $W=\text{light}$ or $W=\text{heavy}$, so we must consider both possibilities and take a weighted average of $p(S=\text{fast}|W=\text{heavy},T=\text{hot})$ and $p(S=\text{fast}|W=\text{light},T=\text{hot})$.)
2) $p(S=\text{fast}|W=\text{heavy},T=\text{hot})$ (here we know that $W=\text{heavy}$)
If you pretend that you do not know $W$, i.e., you use (1), your result is misled by a scenario that could have occurred but actually did not, namely $p(S=\text{fast}|W=\text{light},T=\text{hot})$.
However, knowing that $W=\text{heavy}$, you can rule out the 90%-likely alternative and drop the term $p(S=\text{fast}|W=\text{light},T=\text{hot})$ entirely.
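To put numbers on this, suppose (purely for illustration) that $p(S=\text{fast}|W=\text{heavy},T=\text{hot})=0.2$ and $p(S=\text{fast}|W=\text{light},T=\text{hot})=0.9$. Then (1) gives
$$p(S=\text{fast}|T=\text{hot}) = 0.1\cdot 0.2 + 0.9\cdot 0.9 = 0.83,$$
while (2) gives $0.2$: the marginal answer is dominated by the light-object scenario that, in fact, did not occur.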