Not as good as something like Horizon's Aloy, but then again you have to remember they have to account for tons of variation across custom characters vs a single character like Aloy.
I've seen this mentioned a few times by a few folks, but I'm not sure it's accurate. I think that for major cutscenes the characters are hand-animated/facial-captured, but I'm fairly certain that regular quest conversations use automated lip syncing. I've seen folks bash it, but for me it's a clear step up from games like ME1/2/3 and Skyrim/Fallout.
Mouth shapes line up with sounds much better, and there's a lot of work that's gone into matching emphasis, enunciation and emotional state with head movements and subtle facial expressions. I also noticed that some characters have a noticeable muscle bias, meaning they speak out of one side of their mouths, or cock one eyebrow more than the other. Aloy herself tends to nod more to one side than the other.
It's a really, really neat system that I hope is built upon further. It's not as subtle or accomplished as hand animation/capture, but that approach isn't feasible at the scale required. Definitely a step up from what we've seen to date.
I should add I haven't seen ME:A (blackout), this relates specifically to Horizon.