No, it was more of a general question aimed at modern games currently in development, e.g. Destiny and its super tiny FOV.
Just from what I understand (someone can correct me / add more details):
Since the overall framebuffer typically stays the same size and is simply divided, the rendering load generally shifts to the front end of the graphics pipeline, i.e. geometry submission & visibility tests, which usually start on the CPU.
Usually all of that is done on a single CPU thread, so the work is serial and scales as you add more independent views. That makes LOD transitions* rather important, and you'll probably notice just how bloody awful 4-player splitscreen can look compared to even a 2p split, never mind the drop from 1p. Reach is a pretty good example (can't remember how Halo 4 did it, but I had the impression the framerate was awful anyway).
*Lower-poly models, or simply skipping objects entirely. The former helps pixel shading efficiency, since we don't want polygons shrinking down to the size of pixels. The latter just removes extra visibility calculations.
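To put rough shape on that serial front-end cost, here's a minimal sketch. Everything in it is made up for illustration (no real engine works this literally): the point is just that pixel work stays roughly constant while visibility tests and draw submission repeat once per view.

```python
# Hypothetical sketch: splitscreen multiplies CPU-side front-end work.
# The one framebuffer is just divided, so pixel shading cost is roughly
# flat, but each player view needs its own serial culling/submission pass.

def submit_views(objects, num_views):
    """Count per-object visibility tests across all views (serial cost)."""
    tests = 0
    for _ in range(num_views):   # one pass per player view, on one thread
        for _ in objects:        # every object tested against each frustum
            tests += 1
    return tests

scene = list(range(1000))
# 4-player split does four times the front-end work of single player:
assert submit_views(scene, 4) == 4 * submit_views(scene, 1)
```

So culling objects earlier (or drawing cheaper LODs) pays off four times over in a 4p split, which is why the LOD drop is so visible there.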
Anyways, Halo games seemingly already did LOD based on an object's on-screen pixel area (disappearing objects, lower-LOD models). On top of that, a smaller FOV would also mean fewer objects in your visual space, while possibly making things directly ahead appear larger. Shadows have a geometry setup cost too, and they're heavy on rendering anyway, so it's easy to just axe those first.
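Both effects can be sketched with basic perspective math. To be clear, everything here is illustrative: `projected_radius_px`, `pick_lod`, `in_view`, and all the thresholds are made-up approximations, not Halo's actual system.

```python
import math

def projected_radius_px(world_radius, distance, vfov_deg, screen_height_px):
    """Approximate on-screen radius (px) of a sphere at a given distance.

    Standard perspective projection: screen size scales with
    size / (distance * tan(fov/2)), so a narrower FOV makes the
    same object cover more pixels.
    """
    half = math.tan(math.radians(vfov_deg) / 2.0)
    return (world_radius / (distance * half)) * (screen_height_px / 2.0)

def pick_lod(area_px):
    """Map projected pixel area to a LOD index; None = skip drawing.

    Thresholds are arbitrary examples of pixel-area-driven LOD.
    """
    if area_px < 20:
        return None   # tiny on screen: don't draw at all
    if area_px < 400:
        return 2      # low-poly model
    if area_px < 4000:
        return 1
    return 0          # full detail

def in_view(angle_off_axis_deg, hfov_deg):
    """Crude horizontal frustum test: narrower FOV culls more side objects."""
    return abs(angle_off_axis_deg) <= hfov_deg / 2.0

# A 1 m-radius object 50 m ahead, 720p, at two vertical FOVs:
r_wide = projected_radius_px(1.0, 50.0, 90.0, 720)   # ~7.2 px
r_narrow = projected_radius_px(1.0, 50.0, 60.0, 720) # ~12.5 px
assert r_narrow > r_wide   # narrower FOV -> same object looks bigger

# An object 40 degrees off-axis survives a 90-degree FOV but not a 70-degree one:
assert in_view(40, 90) and not in_view(40, 70)
```

So the narrower FOV cuts both ways: side objects drop out of the frustum entirely (less front-end work), while the stuff you're aiming at projects larger and may bump up a LOD tier.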
---
tl;dr higher CPU load -> need to reduce on-screen geometry; a smaller FOV helps, and LOD management draws fewer objects on top of that.
---
As for why they would lower the FOV for CEA, who knows.