Ignoring the resolution discussion for now, as it is not all that relevant to what I wish to say.
Others have hinted at it, but it looks like it is emulation theory time.
Computers are made of parts, and the way those parts are put together is known as the architecture (although the term is used across computing with subtly different meanings).
Broadly speaking you have the CPU, any associated processing abilities, the graphics, the input methods, the extra hardware (link ports, networking and the like) and the RAM. They are all connected in various ways, have various abilities and have various constraints. Those constraints are often not quite what they appear to be, and if you fail to emulate them faithfully you get somewhat broken emulation; this is why you need the likes of bsnes rather than snes9x for some games, and why some of the handheld/console ports of emulators fall short. People are now taking it to the extreme and emulating things from the transistors up:
http://www.youtube.com/watch?v=fWqBmmPQP40 (but that is getting away from the point).
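To make the timing point concrete, here is a minimal C sketch (not taken from any real emulator; the cycle figures and function names are invented) of an accuracy-minded main loop that steps the CPU and the video hardware in lockstep, scanline by scanline:

[code]
/* Minimal sketch of why timing constraints matter, for a hypothetical
   console: the CPU and video chip are stepped in lockstep per scanline,
   so a game that changes video state mid-frame still behaves.  All the
   figures here are made up for illustration. */
#include <stdio.h>

#define CYCLES_PER_SCANLINE 228
#define SCANLINES_PER_FRAME 262

/* stand-in for executing one opcode; returns how many cycles it took */
static int cpu_step(void) { return 4; }

/* stand-in for drawing one scanline with the video state as it is NOW */
static void video_step(int line) { (void)line; }

int main(void)
{
    /* accurate loop: interleave CPU and video at scanline granularity */
    for (int line = 0; line < SCANLINES_PER_FRAME; line++) {
        int budget = CYCLES_PER_SCANLINE;
        while (budget > 0)
            budget -= cpu_step();
        video_step(line);
    }
    /* a faster but looser emulator might instead run a whole frame of
       CPU time and only then draw everything, which breaks any game
       relying on mid-frame changes to the video registers */
    printf("one frame emulated\n");
    return 0;
}
[/code]

Accuracy-focused emulators essentially push this interleaving down to finer and finer granularity, which is a large part of why they need so much more host power than the looser ones.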
On top of this, consider that computer graphics are incredibly costly in terms of resources to produce, so there are all sorts of tricks that can be done to make things work:
http://www.youtube.com/watch?v=00gAbgBu8R4 has some of the tricks starting around 3 minutes in, with the good stuff not long after 4 minutes in (watch the whole thing if you want, but aside from those tricks it is only tangentially related to what is being discussed here). On top of this, as Twiffles says, you can also have filtering happen where the original hardware was lacking it.
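As a rough idea of what bolted-on filtering amounts to, here is a greyscale bilinear sampler in C (the texture and sizes are made up, and a real emulator would do this per colour channel on the GPU rather than on the CPU like this):

[code]
/* Sketch of the sort of filtering an emulator can add to hardware that
   only ever did nearest-neighbour: sample a texture at a fractional
   coordinate by blending the four surrounding texels. */
#include <stdio.h>

#define TW 4
#define TH 4

static unsigned char tex[TH][TW] = {
    {   0,  64, 128, 255 },
    {  64, 128, 255,   0 },
    { 128, 255,   0,  64 },
    { 255,   0,  64, 128 },
};

static float bilinear(float x, float y)
{
    int x0 = (int)x, y0 = (int)y;
    int x1 = x0 + 1 < TW ? x0 + 1 : x0;
    int y1 = y0 + 1 < TH ? y0 + 1 : y0;
    float fx = x - x0, fy = y - y0;
    /* blend horizontally on the two rows, then vertically between them */
    float top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx;
    float bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx;
    return top * (1 - fy) + bot * fy;
}

int main(void)
{
    /* nearest-neighbour (truncating) would snap to texel (1,1) = 128;
       bilinear blends the neighbours into something smoother */
    printf("sample at (1.5, 1.5): nearest=%d bilinear=%.1f\n",
           tex[1][1], bilinear(1.5f, 1.5f));
    return 0;
}
[/code]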
Anyhow, the job of the emulator author (well, a drastically simplified version of it) is to map the joypad or whatever input device to the IO ports it would usually come in at in the game's memory, and to convert instructions that would usually run on the console CPU into ones that run on the host system (some more advanced emulation techniques attempt to figure out how the program was coded and adjust for that).
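Here is a toy C sketch of those two jobs for an entirely imaginary console (the opcode set and the joypad address are invented): host input gets written into a memory-mapped register for the game to poll, and guest code is run through a classic fetch-decode-execute interpreter:

[code]
/* Toy machine, purely for illustration: 64K of memory, one register,
   and a made-up joypad register at 0xFF00 that the "game" reads. */
#include <stdint.h>
#include <stdio.h>

#define JOYPAD_REG 0xFF00

static uint8_t  mem[0x10000];
static uint8_t  acc;   /* single accumulator register */
static uint16_t pc;    /* program counter */

/* the emulator front end calls this with the host keyboard/gamepad state */
static void set_joypad(uint8_t buttons) { mem[JOYPAD_REG] = buttons; }

/* interpreter loop: fetch an opcode, decode it, execute it, repeat */
static void run(int steps)
{
    while (steps--) {
        uint8_t op = mem[pc++];
        switch (op) {
        case 0x01: /* LOAD addr: read a byte from memory into acc */
            acc = mem[mem[pc] | (mem[pc + 1] << 8)];
            pc += 2;
            break;
        case 0x02: /* INC: bump the accumulator */
            acc++;
            break;
        default:   /* treat anything else as HALT */
            return;
        }
    }
}

int main(void)
{
    /* tiny "game": read the joypad register, increment it, halt */
    const uint8_t program[] = { 0x01, 0x00, 0xFF, 0x02, 0x00 };
    for (unsigned i = 0; i < sizeof program; i++) mem[i] = program[i];

    set_joypad(0x10);              /* pretend the host pressed a button */
    run(16);
    printf("acc = 0x%02X\n", acc); /* 0x11: joypad value plus one */
    return 0;
}
[/code]

A dynamic recompiler would instead translate runs of those opcodes into host machine code up front rather than interpreting them one at a time, which is where much of the speed in modern emulators comes from.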
The big one, though, comes with 3D. Most people who are not masochists will, when it all comes down to it, use the 3D hardware provided by the console, and that hardware is probably going to be close to the general mathematical methods by which 3D works, which means the work can be converted to run on the host machine, where all sorts of things can be done. This can range from improving textures, adding them where there were none, outright replacing textures, improving lighting (although this is not so common yet) and rendering extra frames using the midpoints of an animation (again not necessarily the best idea, but more often than not worth a punt). And with it all now being just numbers on a machine, the host can use its presumably far superior 3D processing abilities to clean up the graphics (anti-aliasing and such):
http://www.hamst3r.com/images/aacomparison.jpg and
http://www.bit-tech.net/hardware/2005/07/0...ing_filtering/1
before "passing it back" (in practice I imagine most emulator authors will opt to render it outside the emulation of the console hardware). Along the same lines you can also break spec and render things as fast as they need to be/can be rendered on the host system, rather than as fast as the original hardware would allow, which has all sorts of benefits.
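That break-spec speed idea can be sketched as a frame limiter you are free to switch off (POSIX timing calls; emulate_frame and present_frame are placeholders for the real work):

[code]
/* Sketch of pacing: the guest is held to its original 60 frames per
   second only when the limiter is on; with it off the host runs flat
   out, giving fast-forward, quicker load times and so on. */
#include <stdio.h>
#include <time.h>

static void emulate_frame(void) { /* run one frame of guest time */ }
static void present_frame(void) { /* hand the image to the host GPU */ }

static void main_loop(int limit_to_60fps, int frames)
{
    const long frame_ns = 1000000000L / 60;
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    while (frames--) {
        emulate_frame();
        present_frame();
        if (limit_to_60fps) {
            /* sleep until the original hardware would have finished */
            next.tv_nsec += frame_ns;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_sec++;
                next.tv_nsec -= 1000000000L;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }
}

int main(void)
{
    main_loop(1, 60);
    printf("ran one second of guest time\n");
    return 0;
}
[/code]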
2D works along similar lines, although it is usually combined with (up)scaling/interpolation, which has other things at play:
http://www.general-cathexis.com/interpolation/index.html
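For reference, the most basic upscale of the lot, nearest-neighbour 2x, looks something like this in C; the fancier filters that link discusses compute each new pixel from several source neighbours instead of just copying the closest one:

[code]
/* Nearest-neighbour 2x upscale: every source pixel becomes a 2x2 block,
   so nothing is blurred but edges stay blocky.  Tiny greyscale buffer
   purely for illustration. */
#include <stdio.h>

#define W 4
#define H 2

static void scale2x_nearest(const unsigned char src[H][W],
                            unsigned char dst[H * 2][W * 2])
{
    for (int y = 0; y < H * 2; y++)
        for (int x = 0; x < W * 2; x++)
            dst[y][x] = src[y / 2][x / 2]; /* pick the closest source pixel */
}

int main(void)
{
    const unsigned char src[H][W] = { { 1, 2, 3, 4 }, { 5, 6, 7, 8 } };
    unsigned char dst[H * 2][W * 2];

    scale2x_nearest(src, dst);
    for (int y = 0; y < H * 2; y++) {
        for (int x = 0; x < W * 2; x++)
            printf("%d ", dst[y][x]);
        printf("\n");
    }
    return 0;
}
[/code]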
How hard all this is to do can vary between games (or even versions thereof), SDK versions and consoles (thinking of the PC for a moment, it is not without reason that most PC games rely on Windows/DirectX), and it can come with serious speed penalties if you are not careful.