Regarding #1:
You don't lose anything with regard to the algorithm itself. The algorithm that expands the key into the key schedule (KS) is deterministic and well known, so anyone who gets the key can easily compute the KS too. The cipher itself isn't made any less secure by communicating the KS instead of the key.
However, relying on the KS instead of the key makes things more practically difficult.
The scheme you describe isn't secure because the encryption isn't randomized. All symmetric-key encryption needs to be randomized. You are including a unique counter in the output, and some variations of that idea work for block ciphers, but it won't randomize the entire ciphertext for your stream cipher. For a stream cipher, the randomization comes from a unique nonce that is combined with the key to generate a unique keystream, which in turn produces a unique ciphertext.
You could make your scheme secure by mixing a fresh nonce into the key for each encryption. So exchanging the KS itself actually makes using good encryption harder in this case: you can't simply exchange a new nonce for each encryption, you have to exchange a whole new KS that incorporates the nonce. The nonce can be exchanged publicly and is short, but the KS must be kept secret and is much longer. So the question becomes: how are you securely exchanging the updated KSs? You might be able to securely agree upon the first key, but can you continuously agree upon new keys securely? Would you have to set all the KSs in advance when the original key is agreed upon? It gets messy.
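For illustration, one common way to realize the nonce idea (a hedged Python sketch; the helper name and the use of SHA-256 are my own assumptions, not part of your scheme) is to derive a fresh per-message key from the shared secret and a public nonce, so only the short nonce needs to travel with each message:

import hashlib
import os

def per_message_key(long_term_key: bytes, nonce: bytes) -> bytes:
    # Derive a fresh RC4 key by hashing the shared secret together with the
    # public, per-message nonce. Hashing (rather than plain concatenation)
    # avoids the related-key weaknesses that broke WEP's key||IV construction.
    return hashlib.sha256(long_term_key + nonce).digest()

# Sender: pick a fresh nonce per message and send it in the clear alongside
# the ciphertext; the receiver repeats the same derivation to get the same key.
nonce = os.urandom(16)
message_key = per_message_key(b"shared secret key", nonce)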
If you are trying to optimize encryption, the standard way to do it is to exchange the key and then cache the KS on every client that uses it. You would have to compute the KS once per client, but the RC4 key schedule is so cheap to compute that it shouldn't be a speed issue on any semi-modern platform. (Two loops of 256 iterations, with an average of about three memory lookups and a few additions per iteration. How bad could that be as a one-time cost?) This way the optimization is abstracted within each client and you still get normal key exchange.
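A minimal sketch of that caching approach (Python; the cache layout is just an illustration):

def rc4_ksa(key: bytes) -> list:
    """Standard RC4 key-scheduling algorithm: expand `key` into the 256-entry state."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

_ks_cache = {}

def cached_schedule(key: bytes) -> list:
    # Pay the (tiny) KSA cost once per key on each client and reuse the result,
    # instead of shipping the key schedule itself around.
    if key not in _ks_cache:
        _ks_cache[key] = rc4_ksa(key)
    return _ks_cache[key]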
So for your scheme, you shouldn't re-start the encryption for each message unless you want to include unique nonces for each reset. To avoid the nonces, you could just keep using the continued output stream of RC4 for all the messages (somehow ensuring that each client is synchronized as to how far into the stream they are). Either way would be better than the current scheme.
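Here is what the "one continuing stream" option might look like (a self-contained Python sketch; it assumes both ends process messages in the same order so their stream positions stay synchronized):

def rc4_stream(key: bytes):
    """Yield an endless RC4 keystream for `key` (KSA once, then PRGA forever)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key scheduling
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    while True:                               # keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

keystream = rc4_stream(b"shared secret key")

def encrypt_next(message: bytes) -> bytes:
    # Each message consumes the next chunk of the same keystream; both ends
    # must handle messages in the same order to stay in sync.
    return bytes(b ^ next(keystream) for b in message)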
I know most of that isn't directly related to your question, but even though your proposal isn't inherently more insecure, it does make fixing your insecure protocol more difficult.
Regarding #2:
Note that, basically, you replaced
j := (j + S[i] + key[i mod keylength]) mod 256
with
j := random(0..255)
Assume for a moment that keylength is 256. Now consider the case where key and random() both come from a high-quality entropy source; that is, key is effectively just a random 256-byte string of the same quality that 256 bytes of random() would produce. The addition of constants and the slight "self-modification" from S[i] won't improve the quality of key. So for a good 256-byte random key, your scheme hasn't changed anything. In fact, you can calculate the effective key that your new scheme would create:
for i from 0 to 255
    S[i] := i
endfor
j := 0
for i from 0 to 255
    temp := random(0..255)
    k[i] := (temp - j - S[i]) mod 256
    j := temp
    swap(S[i], S[j])
endfor
That is to say, the key k here would have produced the same KS as your scheme (if given the same values from random()). This calculated key is still as secure as the output from random(). So instead of pre-generating a key to use, you're just generating it on the fly. There's really no difference.
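As a quick sanity check of that equivalence (Python, purely illustrative): run the random-j variant while recording the derived key k, then confirm that the standard KSA fed with k rebuilds the identical state.

import os

def rc4_ksa(key: bytes) -> list:
    """Standard RC4 KSA, for comparison."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def random_j_ksa_with_derived_key():
    """The modified KSA (j drawn at random) plus the effective key it implies."""
    S = list(range(256))
    j = 0
    k = []
    for i in range(256):
        temp = os.urandom(1)[0]            # stand-in for random(0..255)
        k.append((temp - j - S[i]) % 256)  # effective key byte for this step
        j = temp
        S[i], S[j] = S[j], S[i]
    return S, bytes(k)

S_random, derived_key = random_j_ksa_with_derived_key()
assert rc4_ksa(derived_key) == S_random    # identical key schedule either way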
(If the feedback-style role of j and S[i] concerns you, remember that we assumed that random() was a good entropy source. At best these variables are adding a random value to the output of random(), but that output was already securely random so no quality is added or removed. At worst they are adding constant values, and again it doesn't change anything.)
So your second scheme is equivalent to just generating a 256-byte random key to begin with, and that's the best key RC4 can use. Given that your scheme doesn't improve on the maximal security bounds of RC4, it doesn't really add anything. You can select the quality of the KS by choosing an appropriately strong key. If you want maximum security, just generate a high-quality (256-byte) key. (Possibly via the call to random(), if you trust it to generate keys.)
Note:
Everything here assumes that you can distribute the initial key securely and that you trust the output of this random() function. It seems like you do assume this, but keep in mind that those are two very big assumptions. Ensure that your random() call is a cryptographically secure call, preferably to something like CryptGenRandom() on Windows or /dev/random on *nix.
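For example, in Python a maximum-length key can be pulled straight from the OS generator (a tiny illustration, not part of your protocol):

import secrets

# secrets is backed by the operating system's secure generator
# (CryptGenRandom on Windows, /dev/urandom-style sources on *nix),
# which is the kind of source you want for key material.
key = secrets.token_bytes(256)   # a full-length 256-byte RC4 key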
As for the swapping, I've fixed the pseudocode.
– Witness Protection ID 44583292 Jun 05 '12 at 18:28