A lot of people have already told you yes, but I'd like to elaborate on why.
The way TVs work, to my knowledge even today, is that the colors of the pixels in each frame are sent one after another (in analog systems the signal for each line is actually continuous, but I digress). Machines like the NES generate this signal at basically the last possible moment, right as it is supposed to be sent, so the chip HAS TO generate 60 frames per second to display 60 per second. Now, some very complicated games updated the specification of these frames at a lower rate, but generally speaking that didn't grant you additional graphical power: you just got more time to work on the specification of a single frame, and the options for that specification were so basic that it was rarely deemed worth it.
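To make that concrete, here is a toy model in C of how such a chip behaves. All the names and table sizes are hypothetical illustrations (loosely NES-flavored), not any real chip's interface:

```c
#include <stdint.h>

#define VISIBLE_LINES 240

/* The frame "specification": tiny tables the CPU is allowed to edit. */
static uint8_t tile_map[32 * 30];     /* which background tile goes where */
static uint8_t sprite_table[64 * 4];  /* sprite positions and tile indices */

extern void emit_line(int line, const uint8_t *tiles, const uint8_t *sprites);
extern void run_game_logic(uint8_t *tiles, uint8_t *sprites);

void video_chip_model(void) {
    for (uint32_t field = 0;; field++) {
        /* The picture is regenerated from the tables at the moment each
         * line is sent, so the chip produces 60 fields per second whether
         * or not the game logic keeps up. */
        for (int line = 0; line < VISIBLE_LINES; line++)
            emit_line(line, tile_map, sprite_table);

        /* Vertical blanking: the only safe window for the CPU to rewrite
         * the tables. Updating them every other field halves the logic
         * rate to 30 fps, but the chip still redraws the same tables on
         * the skipped field, so no extra drawing capacity is gained. */
        if ((field & 1) == 0)
            run_game_logic(tile_map, sprite_table);
    }
}
```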
In general-purpose computers, a different hardware setup quickly became popular: you dedicated a whole bank of memory to drawing the scene pixel by pixel, and that bank was then scanned out to the monitor or TV. This used comparatively more memory and power, but it was easier to bend toward more... non-templated results. It started with the CPU manipulating that memory bank directly; eventually dedicated circuits sped it up, and eventually the memory banks multiplied to allow double buffering.

The important difference between a double frame buffer and a retro sprite-and-tile processor is that with a double buffer you can literally draw twice as much stuff by skipping frames in the output, regardless of whether a CPU, a blitter, or a GPU does the drawing. A sprite-and-tile unit, by contrast, doesn't remember what it drew; it restarts its work every frame (most of it actually restarts its work every line). So running at 30 fps on a sprite-and-tile unit usually means you're CPU bound, which, given the game design and programming culture of the time, was quite unusual.
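A minimal sketch of that frame-skipping pattern, assuming hypothetical platform helpers (wait_for_vsync, flip_to, draw_scene) rather than any specific API:

```c
#include <stdint.h>

#define WIDTH  320
#define HEIGHT 200

extern void wait_for_vsync(void);            /* blocks until the next display refresh */
extern void flip_to(const uint8_t *front);   /* point the video output at this buffer */
extern void draw_scene(uint8_t *back);       /* CPU, blitter, or GPU renders here */

static uint8_t buffers[2][WIDTH * HEIGHT];   /* two full frame buffers */

void main_loop(void) {
    int back = 0;
    for (;;) {
        /* The display keeps re-scanning the front buffer by itself, so the
         * renderer may spend up to TWO refresh periods on the back buffer:
         * double the drawing budget, at the cost of a 30 fps update rate. */
        draw_scene(buffers[back]);
        wait_for_vsync();        /* flip only on a refresh boundary to avoid tearing */
        flip_to(buffers[back]);
        back ^= 1;               /* the old front buffer becomes the new canvas */
    }
}
```

The key line is that the display keeps showing the front buffer on its own while draw_scene works, and that stored picture is exactly the memory a sprite-and-tile unit doesn't have.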