WARNING - looong post ahead
Ok, first off, a bit of information so you are not going down the wrong path.
You cannot generate a chrparams at runtime. Sure, you can create the xml file, but chrparams files cannot be hot-reloaded for a character that is already loaded. So this won't help you.
You can, however, create an fsq file at runtime and manually load and play it alongside your speech synthesizer. This is more direct - and since you know which file you want, there is no need to map it in a chrparams file first anyway.
Creating an fsq is not as simple as creating an IFacialSentence. There is no code in CryEngine that will do this for you based on a text string. The facial tools were all designed to work with audio tools and use a plugin like Annosoft to recognize the audio. You will have to manually write code to construct an fsq for your sentence by stringing together the phonemes for the words. This can be done (depending on your level of C++ skills it won't even be too hard), but you will have to write the code for it yourself - see the sketch below for the general idea.
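To give you a rough idea, here is a minimal sketch of that stringing-together step. Everything in it is a placeholder: the word-to-phoneme map, the fixed per-phoneme duration, and the TimedPhoneme struct are all made up for illustration, and the final serialization into the fsq XML (or into the engine's facial animation interfaces) is left as a comment, because the exact schema and calls depend on your engine version:

```cpp
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical: one phoneme with a start time and duration (seconds).
struct TimedPhoneme
{
    std::string phoneme;
    float startTime;
    float duration;
};

// Hypothetical toy lookup - in practice you would want a proper
// pronunciation dictionary (e.g. CMUdict) instead of this.
static const std::map<std::string, std::vector<std::string>> g_wordToPhonemes =
{
    { "hello", { "HH", "EH", "L", "OW" } },
    { "world", { "W", "ER", "L", "D" } },
};

// Strings the phonemes of a sentence together on a timeline.
// kPhonemeDuration is a made-up constant - real phonemes vary in length,
// which is exactly the timing problem discussed further down.
std::vector<TimedPhoneme> BuildPhonemeTrack(const std::string& sentence)
{
    const float kPhonemeDuration = 0.08f;  // assumption: ~80ms per phoneme
    const float kWordPause = 0.05f;        // assumption: short gap between words

    std::vector<TimedPhoneme> track;
    float cursor = 0.0f;

    std::istringstream words(sentence);
    std::string word;
    while (words >> word)
    {
        auto it = g_wordToPhonemes.find(word);
        if (it == g_wordToPhonemes.end())
            continue;  // unknown word - skip (or fall back per letter)

        for (const std::string& ph : it->second)
        {
            track.push_back({ ph, cursor, kPhonemeDuration });
            cursor += kPhonemeDuration;
        }
        cursor += kWordPause;
    }

    // From here you would serialize 'track' into the fsq XML format,
    // or push the entries through the engine's facial animation
    // interfaces - both are version-dependent, so omitted here.
    return track;
}
```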
Now, the question remains whether an fsq is the right way to go in the first place. I am not sure, to be honest - for multiple reasons:
One is timing. You might know what sentence your speech synthesizer will say, but you don't know the timings of when each word/vowel/phoneme is actually generated, right? (I have not worked with speech synthesizers before, so I might be wrong.) If that is the case, then the lip synch will certainly be off.
For this case I have experimented with generic talking animations. That means one or more (so it is not repetitive) looping fsqs that look convincingly like a talking character, played for the entire time that the speech is active. It worked surprisingly well; the only issue was that the mouth kept moving during pauses in a sentence. I always thought this could be improved by stopping the fsq, or overlaying it with a closed-mouth one, whenever the amplitude of the playing sound was close to 0 - but I never tried that out. (A rough sketch of that amplitude gating follows below.)
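If you want to try that, the gating itself is simple. Here is a minimal sketch, assuming you can tap the samples your synthesizer is currently playing: it computes the RMS amplitude of the most recent audio window and decides whether the talking loop should keep going. The threshold value and the two callbacks are assumptions - wire them up to your audio output and your facial animation code:

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Returns the RMS amplitude of the buffer (0 = silence).
float ComputeRms(const std::vector<float>& samples)
{
    if (samples.empty())
        return 0.0f;

    double sum = 0.0;
    for (float s : samples)
        sum += static_cast<double>(s) * s;
    return static_cast<float>(std::sqrt(sum / samples.size()));
}

// Hypothetical gate: call this every frame with the latest audio window.
// kSilenceThreshold is a made-up value - tune it against your levels, and
// consider a small hold time so brief dips don't make the mouth flicker.
void UpdateTalkingLoop(const std::vector<float>& recentSamples,
                       const std::function<void()>& playTalkingLoop,
                       const std::function<void()>& playClosedMouthLoop)
{
    const float kSilenceThreshold = 0.02f;  // assumption

    if (ComputeRms(recentSamples) > kSilenceThreshold)
        playTalkingLoop();       // speech audible - keep the generic loop going
    else
        playClosedMouthLoop();   // pause in the sentence - close the mouth
}
```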
Just to give you even more things to consider - a friend of mine has played around with live lip synching a bit and took a different approach. Instead of playing an fsq, he played the expressions for the individual phonemes directly on the face, the moment his code recognized them in the audio.
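The shape of that event-driven approach could look something like this. Note that this is purely illustrative: the recognizer callback and ApplyPhonemeExpression are placeholders, and the engine-side call that actually drives the face is stubbed out, since how you push an effector onto a CryEngine facial instance is version-dependent:

```cpp
#include <string>

// Placeholder for whatever drives one phoneme shape on the face.
// In CryEngine this would go through your character's facial instance -
// the exact call is version-dependent, so it is stubbed here.
void ApplyPhonemeExpression(const std::string& phoneme, float weight)
{
    // e.g. look up the effector for 'phoneme' in the effectors library,
    // push it onto the face with 'weight', and fade the previous one out.
}

// Hypothetical recognizer hookup: whenever the analysis code decides the
// audio currently contains a given phoneme, it fires this callback and
// the expression is applied immediately - no fsq involved.
class LivePhonemeDriver
{
public:
    void OnPhonemeRecognized(const std::string& phoneme, float confidence)
    {
        // Assumption: use recognition confidence as the blend weight, so
        // uncertain phonemes only nudge the face instead of snapping it.
        ApplyPhonemeExpression(phoneme, confidence);
        m_lastPhoneme = phoneme;
    }

private:
    std::string m_lastPhoneme;
};
```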
Another option: If you already know which sentences your speech synthesizer will say, you could always record those prior to shipping and have a lip synch tool create dedicated fsqs for these sentences. Even if you don't know all the sentences yet, maybe this is an option for those that you _do_ know.
I hope this helped you at least a little bit in finding the best way for you to go.