A concept is not well defined (learnable) if you only have positive examples of it - it could be anything between the specific set of positive examples and the whole universe.
The problem, as illustrated by Peter's example, arises when there are no (or very few) examples on which the raters agree on "no"! Presumably this is also the situation Karien and whuber have.
It would of course be just as bad to have no (or few) examples on which the raters agree on "yes"!
With a symmetric agreement/disagreement pattern (the same number of disagreements each way), the Kappas of Cohen, Fleiss and Scott all agree, as do DeltaP, DeltaP', Informedness, Markedness and (Matthews) Correlation - measures focussed on chance-corrected prediction of one set of ratings from the other, which give the probability that such a prediction is informed.
Let's suppose the raters agree "yes" on A examples, disagree one way on B, the other way on C, and agree "no" on D:
|              | Rater 2: yes | Rater 2: no |
| ------------ | ------------ | ----------- |
| Rater 1: yes | A            | B           |
| Rater 1: no  | C            | D           |
The symmetry gives us B = C and the lack of negative agreement gives us D = 0.
Exploiting the symmetry and using the simpler (and I believe sounder) Informedness or DeltaP formulation, we have
Kappa = A/[A+C] - B/[B+D] = A/[A+C] - 1 (since D = 0).
This will be < 0 (below chance) except when B = C = 0, i.e. when both raters rate every example "yes", in which case Kappa = 0, indicating chance-level performance (there is no real work for the raters to do, as the examples appear to be all positive and all uncontroversial).
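As a quick numerical check of both points above (the measures coinciding under symmetry, and the formula Kappa = A/[A+C] - 1), here is a minimal Python sketch; the function name `agreement_measures` and the cell counts (A = 80, B = C = 10, D = 0) are chosen purely for illustration.

```python
import math

def agreement_measures(A, B, C, D):
    """Chance-corrected agreement measures for a 2x2 rating table.

    Rows are Rater 1 (yes, no), columns are Rater 2 (yes, no);
    A and D are the agreement cells, B and C the disagreements."""
    N = A + B + C + D
    po = (A + D) / N                                   # observed agreement
    # Cohen's Kappa: expected agreement from each rater's own marginals
    pe_cohen = ((A + B) * (A + C) + (C + D) * (B + D)) / N**2
    cohen = (po - pe_cohen) / (1 - pe_cohen)
    # Scott's Pi (= Fleiss' Kappa for two raters): expected agreement
    # computed from the mean of the two raters' marginals
    p_yes = (2 * A + B + C) / (2 * N)
    pe_scott = p_yes**2 + (1 - p_yes)**2
    scott = (po - pe_scott) / (1 - pe_scott)
    # Informedness: recall minus false-positive rate (one of the DeltaP pair)
    informedness = A / (A + C) - B / (B + D)
    # Markedness: the same computed on the transposed table (the other DeltaP variant)
    markedness = A / (A + B) - C / (C + D)
    # Matthews Correlation for the 2x2 table
    mcc = (A * D - B * C) / math.sqrt((A + B) * (C + D) * (A + C) * (B + D))
    return cohen, scott, informedness, markedness, mcc

# Symmetric disagreements (B = C) and no negative agreement (D = 0)
print(agreement_measures(80, 10, 10, 0))   # all five measures equal -0.111...
print(80 / (80 + 10) - 1)                  # -0.111..., matching A/[A+C] - 1
```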
Without the symmetry, the Kappas and Correlation are still negative, although DeltaP and DeltaP' (calculated on the transposed table) now differ, and the magnitude of the Correlation is their geometric mean. (Cohen's and Fleiss' Kappa can give weird results under extreme asymmetry too - see the reference below.)
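To illustrate the asymmetric case, here is a self-contained sketch with made-up counts A = 80, B = 15, C = 5, D = 0: Cohen's Kappa stays negative, Informedness and Markedness (the DeltaP pair) now differ, and the magnitude of the Matthews Correlation is the geometric mean of their magnitudes.

```python
import math

# Asymmetric disagreements (B != C), still no negative agreement (D = 0);
# the counts are purely illustrative.
A, B, C, D = 80, 15, 5, 0
N = A + B + C + D

po = (A + D) / N
pe = ((A + B) * (A + C) + (C + D) * (B + D)) / N**2
cohen = (po - pe) / (1 - pe)
print(cohen)                               # still negative: ~-0.081

informedness = A / (A + C) - B / (B + D)   # one DeltaP variant
markedness = A / (A + B) - C / (C + D)     # the other, on the transposed table
mcc = (A * D - B * C) / math.sqrt((A + B) * (C + D) * (A + C) * (B + D))
print(informedness, markedness)            # now differ: ~-0.059 vs ~-0.158
print(abs(mcc), math.sqrt(abs(informedness) * abs(markedness)))  # both ~0.096
```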
Reference
Powers, D. M. W. (2012). The Problem of Kappa. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL 2012). ACL.
http://aclweb.org/anthology-new/E/E12/E12-1035.pdf