The stated algorithm is right. Not sure it will be useful, but I'll try an alternative way of explaining it to see if it is insightful.
The BH-FDR procedure is based on the principle that multiple hypothesis testing leads to an elevated risk of incorrectly rejecting the null for a negligible effect.
The FDR procedure involves ordering the set of m p-values from lowest to highest, giving ranks 1 … m
Define q as the target false discovery rate (e.g. 0.05)
Iterating from 1 to m, let k be the rank of the current p-value (so k is initially 1), and compare that p-value against the scaled threshold kq/m
Find the largest k for which p(k) ≤ kq/m, and reject the null hypothesis for all p-values of rank 1 -> k (a code sketch follows below)
The assumption is that the lowest p-values are the least likely to be false positives/discoveries, hence we start at the lowest p-value. However, to be sure we are not being too lenient, we impose a more stringent bar for discovery, multiplying our threshold q by k/m, the fraction that the current test's rank represents in the sorted list. If m = 100, then for k = 1 we require p < 0.05 × 1/100 = 0.0005.
If our first test passes this stringent criterion then we take it to be a true discovery and reject the null. Its p-value is so far below our alpha of 0.05 that we assume its positive result is not due to chance, and we can set it aside from now on, confident it is a true discovery.
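To make this concrete, here is a minimal NumPy sketch of the step-up rule described above (the function name and the toy p-values are just for illustration):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean array: True where the null is rejected."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                      # indices that sort p ascending
    thresholds = q * np.arange(1, m + 1) / m   # k*q/m for ranks k = 1..m
    passing = np.nonzero(p[order] <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if passing.size:
        k = passing[-1] + 1                    # largest rank whose p-value passes
        reject[order[:k]] = True               # reject everything of rank <= k
    return reject

# With m = 100 and q = 0.05 the rank-1 threshold is 0.05 * 1/100 = 0.0005,
# exactly the stringent bar described above. A small m = 4 example:
print(benjamini_hochberg([0.0001, 0.0004, 0.03, 0.5]))  # [ True  True  True False]
```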
FDR is valid if the tests (i.e. the p-values) are independent of each other
This is correct and very important. If all tests are independent, then accepting the first has no bearing on any of the remaining tests: we are now effectively doing multiple hypothesis testing on m−1 tests, and so we can slacken the criterion. However, if the tests are correlated they are not independent, and the related p-values will tend to skew in the same direction. Accepting a test early in the ranking because it happened to have a strongly skewed outlier would mean that any subsequent correlated test is presented with a laxer threshold than the first test faced, increasing the risk of falsely accepting the correlated test.
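To see that mechanism numerically, here is a small simulation sketch (the exchangeable common-factor model, m = 100, and rho = 0.8 are assumptions of mine). Every hypothesis is a true null, so every rejection is a false discovery; under independence the false discoveries arrive as occasional isolated events, while under strong positive correlation they arrive less often but in clusters, because the correlated p-values skew together.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def bh_count(p, q=0.05):
    """Number of BH rejections: largest rank k with p_(k) <= k*q/m."""
    m = len(p)
    passing = np.nonzero(np.sort(p) <= q * np.arange(1, m + 1) / m)[0]
    return 0 if passing.size == 0 else passing[-1] + 1

m, trials, rho = 100, 5000, 0.8
hits = {"independent": [], "correlated": []}
for _ in range(trials):
    z_ind = rng.standard_normal(m)                  # independent null z-scores
    shared = rng.standard_normal()                  # common factor shared by all tests
    z_cor = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal(m)
    hits["independent"].append(bh_count(2 * norm.sf(np.abs(z_ind))))
    hits["correlated"].append(bh_count(2 * norm.sf(np.abs(z_cor))))

# Every hypothesis is a true null, so every rejection is a false discovery.
for name, counts in hits.items():
    c = np.array(counts)
    mean_when_any = c[c > 0].mean() if (c > 0).any() else 0.0
    print(f"{name:11s}: P(>=1 false discovery) = {np.mean(c > 0):.3f}, "
          f"mean count when any = {mean_when_any:.1f}")
```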
FDR is also valid in the case of positive regression dependency on each one from a subset (PRDS)
Define X as the vector of all m p-values
Define x as an observed value of one of its coordinates, X_i
X has PRDS if, for every coordinate X_i corresponding to a true null hypothesis, the probability that X falls in any increasing set is non-decreasing in x when we condition on X_i = x (a formal statement is below)
That is, FDR control remains valid so long as conditioning on any p-value for which you would INCORRECTLY reject the null never makes it less likely that you reject the null for any other p-value: positive dependence between the tests is allowed, negative dependence is not.
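For reference, the formal condition (from Benjamini and Yekutieli, 2001; the notation here is mine) is:

$$P\left(X \in D \mid X_i = x\right)\ \text{is non-decreasing in } x \quad \text{for every increasing set } D \text{ and every true-null index } i,$$

where a set $D$ is "increasing" if $y \in D$ and $y' \geq y$ (coordinate-wise) imply $y' \in D$.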
This is effectively a restatement of the BH procedure: by searching for the rank at which p(k) > kq/m you are testing a series of nested subsets (ranks k and below), and once you accept the null at some rank k, the decision to accept the null for every p-value of higher rank is dependent on that kth result.
*EDIT*
PRDS is akin to BH because PRDS considers one statistic at a time in order to evaluate whether it belongs to the subset of statistics that are true nulls. Each step is carried out independently, so once a statistic has been accepted as non-null it is excluded from the pool of potential null candidates. The only impact on higher-ranked p-values is that the threshold for rejecting the null is raised (made less stringent). Since the threshold only ever moves up, there is a positive dependency between rank and failure to reject the null.
What happens if you encounter a false positive during this process?
The threshold rises sooner than it otherwise would have: every subsequent p-value is bumped up one rank, so any later true positive is judged against a less stringent threshold and is therefore more likely to have its null rejected (see the toy example below).
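A toy numerical example of that rank bump (the p-values are invented for illustration): five tests, one of which is a true null. When that null happens to come out spuriously small, the borderline p-value of 0.028 is pushed from rank 2 (threshold 0.02, fails) to rank 3 (threshold 0.03, passes).

```python
import numpy as np

def bh_largest_k(p, q=0.05):
    """Largest rank k with p_(k) <= k*q/m; ranks 1..k are rejected."""
    m = len(p)
    passing = np.nonzero(np.sort(p) <= q * np.arange(1, m + 1) / m)[0]
    return 0 if passing.size == 0 else passing[-1] + 1

q = 0.05  # thresholds for m = 5: 0.01, 0.02, 0.03, 0.04, 0.05
# Scenario A: the true-null test behaves itself (p = 0.5).
print(bh_largest_k([0.001, 0.028, 0.5, 0.9, 0.95], q))    # 1: 0.028 at rank 2 fails 0.02
# Scenario B: the same null comes out spuriously small (p = 0.004).
print(bh_largest_k([0.001, 0.004, 0.028, 0.9, 0.95], q))  # 3: 0.028 at rank 3 passes 0.03
```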