You get better performance with unnest() and JOIN. Like this:
SELECT DISTINCT c.client_id
FROM unnest(string_to_array('Some people tell word1 ...', ' ')) AS t(word)
JOIN clients_words c USING (word);
The exact query depends on details of your requirements that are missing from the question. This version splits the string at space characters.
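For reference, a minimal sketch of the table layout this assumes; only client_id and word appear in the question, the sample data is made up:

CREATE TABLE clients_words (
  client_id int
, word      text
);

INSERT INTO clients_words (client_id, word) VALUES
  (1, 'word1')
, (2, 'word2');

With this sample data, the query above returns client_id 1, since 'word1' occurs in the input string.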
A more flexible tool would be regexp_split_to_table(), where you can use character classes or shorthands for your delimiter characters. Like:
regexp_split_to_table('Some people tell word1 to someone', '\s') AS t(word)
regexp_split_to_table('Some people tell word1 to someone', '\W') AS t(word)
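Either expression drops into the query above in place of the unnest() call. For example:

SELECT DISTINCT c.client_id
FROM regexp_split_to_table('Some people tell word1 to someone', '\W') AS t(word)
JOIN clients_words c USING (word);

Note that runs of consecutive delimiters produce empty strings with these patterns. They are harmless here (an empty string matches no word), or use '\s+' / '\W+' to suppress them.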
Of course the column clients_words.word needs to be indexed for performance:
CREATE INDEX clients_words_word_idx ON clients_words (word);
With that index in place, the query is very fast.
Ignore word boundaries
If you want to ignore word boundaries altogether, the whole matter becomes much more expensive. LIKE / ILIKE in combination with a trigram GIN index would come to mind. Details here:
PostgreSQL LIKE query performance variations
Or other pattern-matching techniques; see this related answer on dba.SE:
Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL
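In the usual direction, where the pattern is supplied in the query and matched against the indexed column, the setup would look like this; a minimal sketch, assuming the standard contrib extension pg_trgm is installed (the index name is made up):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX clients_words_word_trgm_idx ON clients_words
USING gin (word gin_trgm_ops);

-- The GIN trigram index can then support predicates like:
SELECT client_id FROM clients_words WHERE word ILIKE '%word1%';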
However, your case is backwards: the words to match are stored in the table, while the search string is supplied in the query, so the index is not going to help. You'd have to inspect every single row for a partial match, making queries very expensive. The superior approach is to reverse the operation as shown at the top: split the input string into words, then search for those words.
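To illustrate: the backwards predicate would have to look something like this sketch, where the LIKE pattern is built from the column value, which rules out any index scan:

SELECT DISTINCT client_id
FROM clients_words
WHERE 'Some people tell word1 to someone' LIKE '%' || word || '%';

Postgres has to evaluate the expression for every row, i.e. a sequential scan over the whole table.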