To add to Beer Monkey's post: if all else is equal, there is more information in a native 4K image. The issue is whether it is useful information. By useful information I mean information that is perceivable by human vision at a normal viewing distance. The videos posted demonstrate that any increase in useful information is negligible.
A little background.
4K is not new technology. In the early '80s, when ILM were experimenting with digital effects work, they ran tests. They found that to capture all the detail from motion picture film they had to scan it at 4K resolution: that is, around 4000 discrete photosites across the width of the frame between the perforations. If 4K video is a resource hog today you can imagine how bad things were 30-odd years ago. But they found there was no significant perceivable difference when the film was scanned at 2K (around 2000 photosites across the frame, which is almost identical in spatial resolution to 1080p) and recorded back to film. That meant that at 2K all the useful information had been captured. So it became the standard. It was still a massive amount of data for the time, but it was manageable.
Nobody outside the small community doing this work knew or cared about this. It is only when companies have to sell something that these things get hyped. The easiest way to sell something new is to show its benefits. But many (most?) of those holding the purse-strings are not technically minded. The guys in the trenches know all this stuff anyway and they'll do their own tests. But they rarely get to make financial decisions. You have to find a way to condense all the information into a digestible chunk for the money-men to consume. Tell them there is better resolution and give them a nice easy way to measure it. It is basic psychology. Give them a bigger number and most people will be impressed.
When it trickles down and becomes a consumer item the same thing happens. 480i/576i had been the SD standard for decades; then, over the space of around 15 years, we (as average consumers) went from SD to ED to HD to Full HD to UHD. We are now in the bizarre position that the devices we have at home are "higher resolution" than the content being made. Think about that.
At the advent of home video you had a number of formats but I will stick with VHS and Laserdisc as those are the only two which had any real legs on the consumer market. VHS has about 240 lines of useful information per picture height and Laserdisc has around 400. Colour information is stored at much lower resolution (as low as 40 lines for VHS) than brightness information. This is because we are more sensitive to brightness (luminance) than we are to colour (chrominance). Both elements are bundled together and stored as a composite video signal which has to be converted back into its component parts to be viewed. How well this is done depends on how good the decoder in your equipment is.
This is obviously far less visual information than 35mm film. When projected on a screen a standard 35mm release print resolves between 700 and 1000 lines of perceivable information per picture height. Colour is, due to its nature, at full resolution. Obviously far better than home video of the time.
DVD arrived and offered 480p/576p video. It stored colour as component information rather than composite, meaning luminance and chrominance are stored separately. Colour is at half resolution, which is okay as, again, we are less sensitive to colour than we are to image brightness. Colour is also separated into, and stored as, two channels which can be cleanly converted back to the three primaries. For practical purposes this is a non-issue, as the difference between component and native RGB is almost imperceptible.
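If you want to see what that conversion back to the primaries looks like, here is a rough Python sketch of the standard BT.601 maths for turning one luma/chroma sample into R'G'B'. To keep it simple I have assumed full-range 8-bit values; a real DVD decoder works in the limited 16-235 range, so the real thing picks up some extra scaling, and the function name is just mine.

```python
def ycbcr_to_rgb(y, cb, cr):
    """Turn one full-range 8-bit Y'CbCr sample back into R'G'B' using the
    standard BT.601 coefficients. Illustration only: real DVD video uses the
    limited 16-235/16-240 range, so a proper decoder adds extra scaling."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clip = lambda v: max(0, min(255, round(v)))  # keep results in the 8-bit range
    return clip(r), clip(g), clip(b)

# A mid-grey sample with no colour difference decodes to equal primaries:
print(ycbcr_to_rgb(128, 128, 128))  # (128, 128, 128)
```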
It has the benefit of being digital which means, as long as the signal path is clean and your equipment is up to spec, you can be confident the signal out is the same as the signal encoded onto the disc. You could have no such confidence with analogue video. There is a reason NTSC was disparagingly said to stand for "never twice the same colour".
The move to digital required a bit depth for the video, which was set at 8 bits per channel, per pixel, using the limited "video" range, so slightly less than the full 8-bit range is actually used. This means for each primary channel there are 219 possible degrees of intensity, combining to around ten and a half million possible colours (full-range 8-bit video has 256 degrees of intensity per colour and is what most computer monitors use). It uses the colour gamut defined in Rec. 601 and MPEG-2 compression at a maximum data rate of 9.8 Mbit/s.
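The arithmetic behind those colour counts is just the number of levels cubed, one factor per primary. A quick sanity check, if you are curious:

```python
# The colour counts above are just (number of levels) cubed: one factor per primary.
video_range_levels = 219   # 8-bit "video" range, code values 16-235
full_range_levels = 256    # 8-bit full range, code values 0-255

print(f"{video_range_levels ** 3:,}")  # 10,503,459 -> around ten and a half million
print(f"{full_range_levels ** 3:,}")   # 16,777,216 -> roughly 16.7 million
```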
Much better than what came before but still a long way from competing with 35mm.
HD formats started to appear, but I'll only look at Blu-ray as it is the one which survived (which still irks some HD-DVD fans). It raised spatial resolution to 1080p, which is very nearly the same as full 2K digital cinema video (to the point they can pretty well be used interchangeably), but kept the 8-bit colour depth. It moved to a slightly wider colour gamut (Rec. 709), so has the potential for better colour fidelity. It offers better compression codecs, notably h.264, which is more efficient than MPEG-2 by a significant margin. That means you can store the same amount of video at the same quality in a smaller space. It has a bandwidth of up to 54 Mbit/s, which is far more than streaming sites typically use even for their 4K video.
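On the "1080p is very nearly 2K" point, the numbers bear it out. Assuming the full 2K DCI container of 2048 x 1080 (the flat and scope crops are slightly smaller), the pixel counts are only a few percent apart:

```python
# 1080p versus the full 2K DCI container: only a few percent more pixels.
full_hd = 1920 * 1080   # Blu-ray / Full HD frame
dci_2k = 2048 * 1080    # full 2K DCI container (flat/scope crops are smaller still)

print(f"{full_hd:,} vs {dci_2k:,}")        # 2,073,600 vs 2,211,840
print(f"{dci_2k / full_hd - 1:.1%} more")  # ~6.7% more pixels in the 2K container
```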
In the meantime the shift to digital projection in cinemas was well underway. The main difference between the 2K DCPs used to show films and Blu-ray video is not one of spatial resolution: it is in colour gamut, colour depth and compression.
DCPs use a colour gamut called P3, which is wider than Blu-ray's, and store colour at 10 or even 12 bits per channel. For comparison, 10-bit video offers 1024 degrees of intensity per colour as opposed to Blu-ray's 219. That is over a billion different possible colours. 12-bit has 4096 degrees, for a total of over 68 billion colours! Of course you are still limited by the constraints of the gamut you are working within, but you have a ridiculous level of precision.
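Same levels-cubed arithmetic as before, just at the higher bit depths, if you want to see where the billion and 68 billion figures come from:

```python
# Levels cubed again, this time at the DCP bit depths.
for label, levels in (("8-bit video range", 219),
                      ("10-bit", 1024),
                      ("12-bit", 4096)):
    print(f"{label}: {levels ** 3:,} colours")
# 8-bit video range: 10,503,459 colours
# 10-bit: 1,073,741,824 colours  -> "over a billion"
# 12-bit: 68,719,476,736 colours -> "over 68 billion"
```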
The biggest problem with a lower bit depth comes when you have subtle gradations of colour. Think of a sky at sunset, where it goes from a deep red at the horizon to a rich blue as you look upwards. If there aren't enough degrees of intensity available to accurately reproduce this you get colour banding, which is, to my mind, the ugliest video artefact. There are ways to minimise this, but that is outside the scope of this post. A higher bit depth can eliminate it altogether.
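If you want an intuition for where the banding comes from, here is a toy sketch: quantise the same smooth ramp at different bit depths and count how many distinct steps survive. It ignores gamma, dithering and everything else a real pipeline does, so treat it as an illustration only.

```python
# Quantise an ideal smooth ramp at a given bit depth and count the distinct
# steps that survive. Fewer steps over the same span means wider, more
# visible bands.
def distinct_steps(bit_depth, samples=4096):
    levels = 2 ** bit_depth
    ramp = [i / (samples - 1) for i in range(samples)]   # the ideal gradient
    stored = [round(v * (levels - 1)) for v in ramp]     # what the format keeps
    return len(set(stored))

print(distinct_steps(8))   # 256 steps across the whole ramp
print(distinct_steps(10))  # 1024 steps -> each band is a quarter of the width
```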
DCPs use JPEG 2000 compression, which encodes each frame as a separate still image, played back at a bit rate of up to 250 Mbit/s. It is not lossless, but it does not rely on temporal encoding, so there is less likelihood of motion artefacts being present than there is on consumer-grade video.
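To put that bit rate in perspective, here is a rough per-frame budget at 24 fps. Bear in mind Blu-ray's 54 Mbit/s is the whole stream, audio included, and its codec spreads the budget unevenly across frames, so it is not a perfectly fair comparison, just a rough sense of scale.

```python
# Rough bit budget per frame at 24 fps.
def mbit_per_frame(mbit_per_second, fps=24):
    return mbit_per_second / fps

print(f"{mbit_per_frame(250):.2f} Mbit per 2K DCP frame")    # ~10.42 Mbit
print(f"{mbit_per_frame(54):.2f} Mbit ceiling for Blu-ray")  # 2.25 Mbit, whole stream incl. audio
```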
Sad as I am to say, all things considered a DCP at 2K is at least as good as a 35mm print. In many ways it is better.
I'll skip 3D, not because it is not interesting but because it is a whole topic on its own.
So now we have 4K in the home when most cinemas still run 2K projectors. We have 10- or even 12-bit video in the home. We have a wide colour gamut in the home, with the intent to move to Rec. 2020, which is far wider than anything commonly seen, even in a commercial setting. We have HDR, which is, again, something most of the very best commercial screens do not have.
Apart from sheer scale and issues around compression, the latter of which is negligible with the h.265 codec used on UHD discs at the bandwidth provided (up to 128 Mbit/s), we have video on a home format which is on a par with commercial cinema on almost every level. We have higher spatial fidelity than most of the content produced (although that is a moot point really).
It is crazy.
Sound is another issue to look at, and I have purposely not touched on IMAX as that is its own thing apart from mainstream theatre. But I think this is enough for one post.