-what exactly do v-sync, anti-aliasing, and sub-HD refer to?
V-Sync: Monitors refresh at a given rate (60Hz, for instance, is 60 refreshes - new images - each second). Games can synchronise to the refresh rate, such that they carry out the game logic and render a view of the world once for every refresh of the monitor. If that happens to take too long, you might get one new scene for every two refreshes of the monitor (halving the framerate). Synchronised to the monitor, you can only have 60fps or 60 divided by a whole number (30, 20, 15 and so on).
With V-Sync turned off, the game makes no effort to synchronise its behaviour with the monitor refresh rate; it can quite happily be working on the logic for the next frame at the point the monitor gets around to displaying the previous one. The advantage of this is that the playable framerate is not constrained by the monitor refresh rate, so it can exceed 60fps or sit at any framerate below that level.
There are two advantages to having V-Sync on. First, with V-Sync off you'll get tearing: that's when the monitor displays a scene the game is halfway through drawing, so you'll clearly see an image where the top half is from one frame and the bottom is from the previous frame; you'll notice it quite often when turning. Second, there's one small possible benefit in that with V-Sync on you can code with the knowledge that the game logic will always have a fixed timestep, which opens up some potential for optimisations - albeit that's a fairly big assumption to lean on, and having code capable of dealing with variable timesteps is safer.
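To make that last point concrete, here's a rough sketch - purely my own illustration, not code from any particular engine, and all the names are made up - of a fixed-timestep update next to a variable-timestep one:

```cpp
#include <chrono>

struct World { float position = 0.0f; float velocity = 1.0f; };

// Fixed timestep: dt is a known constant (one 60Hz refresh), so the maths
// can be pre-baked around it.
void updateFixed(World& w) {
    constexpr float dt = 1.0f / 60.0f;
    w.position += w.velocity * dt;
}

// Variable timestep: dt is measured each frame, so every piece of game
// logic has to scale by it. Safer when a frame can take any length of time.
void updateVariable(World& w, float dt) {
    w.position += w.velocity * dt;
}

int main() {
    using clock = std::chrono::steady_clock;
    World w;
    auto previous = clock::now();
    for (int frame = 0; frame < 600; ++frame) {  // stand-in for the real game loop
        auto now = clock::now();
        float dt = std::chrono::duration<float>(now - previous).count();
        previous = now;
        updateVariable(w, dt);                   // or updateFixed(w) when locked to the refresh
        // presentFrame(w);                      // hypothetical render/swap call
    }
}
```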
Anti-aliasing: A framebuffer is a grid of pixels. If you try to draw a diagonal line on a grid, you're going to get a 'stepped' look to things ('jaggies'). One nice way to alleviate that is to fill in the steps with slightly lighter pixels to create a smoother look when viewed from a distance. One common way of achieving that is to internally draw to a much finer grid than the one you're actually going to display, and then take each on-screen pixel to be the average of all the finer-grid pixels at that point. Loosely speaking! There are quite a few nuances to the real algorithms that aren't reflected in my simplification there.
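If it helps, here's a tiny sketch of that 'draw fine, then average' idea - again just my own illustration of plain supersampling, with made-up names, and far cruder than what real hardware does:

```cpp
#include <cstdint>
#include <vector>

// Downsample a greyscale buffer rendered at (screenW * factor) x (screenH * factor)
// down to screen resolution by averaging each factor x factor block.
std::vector<uint8_t> downsample(const std::vector<uint8_t>& fine,
                                int screenW, int screenH, int factor) {
    const int fineW = screenW * factor;
    std::vector<uint8_t> screen(static_cast<size_t>(screenW) * screenH);
    for (int y = 0; y < screenH; ++y) {
        for (int x = 0; x < screenW; ++x) {
            int sum = 0;
            // Average the block of fine-grid pixels sitting under this on-screen pixel.
            for (int sy = 0; sy < factor; ++sy)
                for (int sx = 0; sx < factor; ++sx)
                    sum += fine[(y * factor + sy) * fineW + (x * factor + sx)];
            screen[y * screenW + x] = static_cast<uint8_t>(sum / (factor * factor));
        }
    }
    return screen;
}
```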
Sub-HD: 720p and 1080i are generally accepted as the HD resolutions; that's 720 lines drawn in a progressive fashion or 1080 lines drawn in an interlaced fashion. Some games stretch to 1080p. However, there are many games that don't actually meet those resolutions. There are a few reasons for this.
First, and simplest, is RAM. A larger framebuffer takes up more RAM, and if you're extremely tight on RAM, reducing the framebuffer size can alleviate that.
Another is that any piece of graphics hardware has a finite pixel fill rate: how fast it can actually paint a given polygon into the framebuffer. The larger the framebuffer, the more pixels the same polygon can overlap, and so the longer it takes to draw that polygon. If your scene requires painting too many pixels it's going to slow things down, and reducing the size of the framebuffer reduces the number of pixels to paint.
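As a back-of-the-envelope illustration (my own numbers, not taken from any real title, and assuming a 32-bit colour buffer), here's how pixel count and colour-buffer memory compare between 720p and one commonly seen sub-HD resolution:

```cpp
#include <cstdio>

int main() {
    struct Mode { const char* name; int w, h; };
    const Mode modes[] = { {"720p", 1280, 720}, {"sub-HD", 1152, 640} };
    const int bytesPerPixel = 4;  // assuming a 32-bit colour buffer

    for (const Mode& m : modes) {
        long long pixels = static_cast<long long>(m.w) * m.h;
        std::printf("%-7s %lld pixels, %.2f MB colour buffer\n",
                    m.name, pixels,
                    pixels * bytesPerPixel / (1024.0 * 1024.0));
    }
    // 720p:   921,600 pixels, ~3.5 MB
    // sub-HD: 737,280 pixels, ~2.8 MB -- about 20% fewer pixels to store and to fill.
}
```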
-what is netcoding? What determines whether it is "good" or "bad"?
The game runs on all the machines at the same time. Each machine distributes packets that inform the other machines of the actions of their player, and receives packets from the others to keep track of those players. Netcoding is largely handling distribution of those packets and determining what information needs to be distributed to make the game run effectively - too little and other players won't be rendered correctly, too much and the packets will be too bloated and slow to pass around.
One other nuance of netcoding - and the bit I hate - is the fact that packet loss needs to be handled. In general, you cannot guarantee that any packet you send will actually be received by the recipient, and you need to handle that safely. On a similar note, you can't guarantee that the packets which do arrive are received in *the right order*. Good network code plans around those limitations and allows for those failings of networking, but it requires a lot of care and attention.
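To give a flavour of what planning around those limitations can look like, here's a minimal sketch - my own illustration, not any engine's actual code, and it ignores things like sequence number wraparound - of stamping state packets with sequence numbers so that out-of-order packets get ignored and lost ones are simply overtaken by newer state:

```cpp
#include <cstdint>
#include <unordered_map>

struct PlayerState { float x = 0, y = 0; };

// A state update as it might arrive off the wire. The sender numbers its
// packets starting from 1 and increments for every send.
struct StatePacket {
    uint32_t    sequence;
    int         playerId;
    PlayerState state;
};

class RemotePlayers {
public:
    // Returns false if the packet arrived out of order (or duplicated) and was
    // ignored. Lost packets need no special handling here: the next, newer
    // packet simply overwrites whatever we last applied.
    bool onPacket(const StatePacket& p) {
        uint32_t& newest = newestSequence_[p.playerId];  // defaults to 0
        if (p.sequence <= newest)
            return false;              // stale: we've already applied something newer
        newest = p.sequence;
        states_[p.playerId] = p.state;
        return true;
    }

private:
    std::unordered_map<int, uint32_t>    newestSequence_;
    std::unordered_map<int, PlayerState> states_;
};
```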
-why are dedicated servers considered infinitely superior to p2p connectivity?
A dedicated server takes on all packet distribution responsibilities itself. The dedicated server's instance of the game world is the one that is 'right', and it has the responsibility to notify all its clients of the current state of that instance; they in turn feed back their movements and actions to it and it updates the state of the game world accordingly.
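In rough code terms - again my own sketch with stand-in names, not anyone's real server - the dedicated server's tick boils down to: gather everyone's inputs, advance the one authoritative world, and send snapshots back out:

```cpp
#include <vector>

struct Input      { int clientId; float moveX, moveY; };
struct WorldState { /* positions, scores, and so on */ };

// Stubs standing in for real networking and game logic.
struct Client {
    std::vector<Input> pollInputs() { return {}; }   // read this client's packets
    void sendSnapshot(const WorldState&) {}          // send them the latest world state
};
void applyInputs(WorldState&, const std::vector<Input>&) {}  // game simulation lives here

void serverTick(WorldState& world, std::vector<Client>& clients) {
    // 1. Gather every client's actions for this tick.
    std::vector<Input> inputs;
    for (Client& c : clients)
        for (const Input& in : c.pollInputs())
            inputs.push_back(in);

    // 2. Advance the single authoritative copy of the world.
    applyInputs(world, inputs);

    // 3. Tell every client what the world now looks like.
    for (Client& c : clients)
        c.sendSnapshot(world);
}
```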
In peer to peer, one player is regarded as the host and takes on that role; however, for one thing they're having to run the game as well as handle the hosting duties, and for another, if the host disconnects the other players have a problem. Some games are capable of migrating the 'host' role to another player (is that a requirement these days for console titles? I don't believe it was when I was developing), but that's a fairly complicated operation.
One other issue with peer to peer is that the host often has certain inherent advantages: because there's little to no communication lag between their actions and the game's response to them, they may seem to react quicker than anyone else in the game. I seem to recall hearing that Gears of War suffered from this?