If a text enciphered by substitution is in an artificial language with a flat frequency distribution of characters, and perhaps of 2-grams and 3-grams too, what would be the next easy-to-understand ways to decipher it?
I found the "hill climbing" concept: start with a random substitution key, compare the result against a dictionary, try some changes, and repeat (which feels similar to a genetic algorithm to me). But I would like to learn some other easy ideas.
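To make clear what I mean, here is a minimal hill-climbing sketch. It assumes a plain word list file (the name `english_words.txt` is just a placeholder), and it uses a very crude fitness score (count of decoded words found in the dictionary):

```python
import random
import string

# Hypothetical word list, one word per line.
with open("english_words.txt") as f:
    DICTIONARY = {w.strip().lower() for w in f if w.strip()}

def decode(ciphertext, key):
    """Apply a substitution key (cipher letter -> plain letter)."""
    return "".join(key.get(c, c) for c in ciphertext)

def score(plaintext):
    """Crude fitness: how many decoded words appear in the dictionary."""
    return sum(1 for w in plaintext.split() if w in DICTIONARY)

def hill_climb(ciphertext, iterations=10000):
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    random.shuffle(shuffled)
    key = dict(zip(letters, shuffled))        # random starting key
    best = score(decode(ciphertext, key))
    for _ in range(iterations):
        a, b = random.sample(letters, 2)
        key[a], key[b] = key[b], key[a]       # swap the plaintext letters of two cipher letters
        s = score(decode(ciphertext, key))
        if s >= best:
            best = s                          # keep an improving (or equal) swap
        else:
            key[a], key[b] = key[b], key[a]   # otherwise revert it
    return key, best
```

(A real fitness function would probably use n-gram statistics of the target language instead of whole-word dictionary hits, but the loop is the same.)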
PS: While looking through related questions, I read something about "word patterns", but that did not seem too easy to code.
Edit: I should add that I tried finding word patterns in the book "A pale blue dot" by Carl Sagan. Each of the patterns of the 202 longest unique words in the book matched at most 1 word in my dictionary (and although the dictionary has 80k words, only 70 of those patterns matched any word at all; Sagan seems to use infrequent long words).
Only the 203rd longest unique word in the book had a pattern with 2 possible dictionary words. So, because I'm using a long text like a book, I have plenty of unique long words from which to build a substitution table, and the method turns out to be easy to code after all.
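For concreteness, a minimal sketch of the pattern idea, assuming the same kind of plain word list (`english_words.txt` is a placeholder name, and the cipher word at the end is made up):

```python
from collections import defaultdict

def word_pattern(word):
    """Map a word to its letter-repetition pattern, e.g. 'tenet' -> '0.1.2.1.0'."""
    order = {}
    return ".".join(str(order.setdefault(c, len(order))) for c in word.lower())

def build_pattern_index(words):
    """Group dictionary words by their pattern."""
    index = defaultdict(list)
    for w in words:
        index[word_pattern(w)].append(w)
    return index

with open("english_words.txt") as f:
    index = build_pattern_index(w.strip().lower() for w in f if w.strip())

# A long ciphertext word whose pattern matches exactly one dictionary word
# pins down a substitution for every letter it contains.
candidates = index.get(word_pattern("XQJZJX"), [])
```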