Linear editing and non-linear editing are a different matter entirely -- the distinction refers to the underlying ethos of the UI/approach to editing rather than anything to do with scaling. Equally, the vast majority of things you will be looking at for editing these days take the non-linear/NLE approach, and you will have to go out of your way to find linear stuff.
There are multiple approaches, all with their own upsides and downsides that vary with what you are doing. If you are going for accuracy then that limits you in many ways; if you want the shiniest video then that allows other things; if you fancy using particularly fancy emulators and their methods you have other options again. I will also note that what people want to see varies as well. The received wisdom is that people care less about what they see than what they hear -- if it sounds like you are recording down the bottom of a well, you can have native 4K HDR footage rendered on a supercomputer and nobody will want to watch it or engage with it.
We also have *spits* interlacing to contemplate here, but I will put that off for a minute. If you can use a progressive video mode then do so in 95% of cases (there are some where interlaced might technically be a higher resolution or a notable frame rate boost).
Accuracy wise you have one choice. It is called nearest neighbour (though different editors can call it different things) and will require your output to be an integer multiple (2x, 3x...) of your base resolution. Fortunately most common output resolutions are an integer multiple of the usual base vertical resolutions at least.
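For the curious, the whole technique fits in a few lines. A minimal sketch in Python, assuming a frame represented as a list of rows of pixel values (a toy representation for illustration; real tools obviously work on proper video frames, but the logic is the same):

```python
def nearest_neighbour(frame, factor=2):
    """Integer-factor nearest neighbour upscale.

    Each source pixel is repeated `factor` times horizontally, and
    each resulting row `factor` times vertically, so no new pixel
    values are ever invented -- hence the 'accurate' label.
    """
    out = []
    for row in frame:
        scaled_row = [pixel for pixel in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(scaled_row))
    return out

# A 2x2 frame becomes 4x4, every pixel duplicated exactly.
print(nearest_neighbour([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Note there is no blending anywhere in there, which is why it keeps pixel art crisp where bilinear would smear it.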
How you fill the borders is up to you. Some will go with basic black, some will have a banner, some will place a stretched copy of the video behind it, drop the saturation (or indeed overlay it over a single colour), blur it a bit, and call that a fancy professional effect. I am not overly fond of the latter but I will not dismiss it either.
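The plain-black option is just padding. A sketch using the same hypothetical list-of-rows frame representation:

```python
def letterbox(frame, out_w, out_h, fill=0):
    """Centre a frame on a larger canvas, filling the borders.

    `fill` is the border pixel value -- 0 for basic black; in a real
    editor you would composite a banner or a blurred copy instead.
    """
    h, w = len(frame), len(frame[0])
    top, left = (out_h - h) // 2, (out_w - w) // 2
    canvas = [[fill] * out_w for _ in range(out_h)]
    for y in range(h):
        for x in range(w):
            canvas[top + y][left + x] = frame[y][x]
    return canvas

print(letterbox([[7]], 3, 3))
# [[0, 0, 0], [0, 7, 0], [0, 0, 0]]
```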
After this it gets fun. There are many more approaches here:
1) If the emulator will dump your layers into separate video streams then that can be a thing, even if only to scale the UI elements (often still 2d affairs) separately from the background 3d elements (which might benefit from a slightly softened approach, or from filters that would make the 2d UI look horrible).
2) As noted above, some emulators will internally increase their 3d rendering resolution (polygons is polygons, and as they are rendered internally you can increase things there, maybe even do a subsurf) and/or do widescreen as part of that, and possibly even do texture replacement on top of that (though this will be an artistic choice).
2d can't be as easily scaled outside of specific use cases -- the 2d world is usually only what you are seeing on the screen with nothing beyond that, compared to 3d where the position of most things is known. There are some things that try it on anyway (usually variations on the theme of: take a savestate, move the player in all different directions, take shots of each, stitch them back together, and give the result back to the player to continue on -- works well until you encounter a wall, an NPC spawn, a random battle or similar, though some incorporate a walk-through-walls cheat as well). 2d however has long had filters -- SaI, Super SaI, Super Eagle and all that other jazz.
2d tile replacement is a thing however.
There are some game-specific filters designed for use with one game -- a lot of NES and SNES Mario things will be this, where filters are tweaked to work well with that game and maybe not others.
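To give a flavour of what that family of pixel art filters does internally, here is a sketch of one of the simplest, Scale2x (also known as EPX; SaI, Super Eagle and friends are fancier relatives of the same idea). Each pixel becomes a 2x2 block, and a corner of that block copies a neighbouring pixel when the neighbours agree, which turns staircase edges into smoother diagonals:

```python
def scale2x(img):
    """Scale2x/EPX upscale of a list-of-rows image (toy sketch)."""
    h, w = len(img), len(img[0])

    def px(y, x):  # clamp reads at the image edges
        return img[max(0, min(h - 1, y))][max(0, min(w - 1, x))]

    out = [[0] * (w * 2) for _ in range(h * 2)]
    for y in range(h):
        for x in range(w):
            p = img[y][x]
            a, b = px(y - 1, x), px(y, x + 1)  # above, right
            c, d = px(y, x - 1), px(y + 1, x)  # left, below
            # Each output corner copies a neighbour only when two
            # neighbours match and the opposite pair does not.
            out[2 * y][2 * x]         = c if (c == a and c != d and a != b) else p
            out[2 * y][2 * x + 1]     = b if (a == b and a != c and b != d) else p
            out[2 * y + 1][2 * x]     = c if (d == c and d != b and c != a) else p
            out[2 * y + 1][2 * x + 1] = b if (b == d and b != a and d != c) else p
    return out
```

On flat areas it degenerates to plain nearest neighbour; only around edges does it do anything clever, which is exactly why these filters suit clean pixel art and fall apart on noisy photographic textures.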
3) Many games these days have ports out on other systems. It can be worth looking here, and such things might even be available in hacked form on your base console to do something nice with (or indeed there might be a translation patch if, say, the Japanese version was the best for some reason -- usually PAL got shafted or needed some extra hacks to do things, though not always, and sometimes it had better bug fixes).
4) Conventional scaling approaches, which would include some of the more exotic ones.
Your basic filters are bilinear and lanczos, and they will probably be good for most things. Interlaced video can have some perks for itself here as you are scaling between two known points (see http://avisynth.nl/index.php/Nnedi3 for an older take, and one arguably bringing us into the smart scaling world).
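Bilinear is simple enough to sketch too -- each output pixel is a weighted average of the four nearest source pixels, which is exactly why it softens hard pixel art edges. A hedged, single-channel-only sketch (real implementations work per colour channel and are rather better optimised):

```python
def bilinear_scale(img, new_h, new_w):
    """Bilinear resize of a list-of-rows greyscale image (toy sketch)."""
    h, w = len(img), len(img[0])
    out = []
    for oy in range(new_h):
        # Map the output row back into source coordinates.
        sy = oy * (h - 1) / (new_h - 1) if new_h > 1 else 0
        y0 = int(sy); y1 = min(y0 + 1, h - 1); fy = sy - y0
        row = []
        for ox in range(new_w):
            sx = ox * (w - 1) / (new_w - 1) if new_w > 1 else 0
            x0 = int(sx); x1 = min(x0 + 1, w - 1); fx = sx - x0
            # Blend horizontally on both rows, then vertically.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# Stretching [0, 2] to three samples invents the in-between value 1.0.
print(bilinear_scale([[0, 2]], 1, 3))
```

That invented in-between value is the blending nearest neighbour refuses to do; lanczos is the same idea with a wider, sharper-edged weighting window.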
Truly smart scalers I rarely like to use for video stuff -- the seam carving approaches work fine for static images, but trying to constrain them for video is much harder. To that end you have some weaker ones, ones that may focus on edge detection or movement detection and adjust accordingly to try to keep things sharper. For game footage this is harder than for some live action or animated stuff. Rendering will probably also not be anything like real time, whereas nearest neighbour at 2x is just "take pixel, spread it to three sides to make a 2x2 block, repeat for all other pixels in the image" and can be done on a calculator. Not so bad if you are doing a 40 minute cut-up video, but trying to go fancy for a 16 hour let's play, or a livestream, gets harder.
On top of this we also have ideas of CRT emulation (scanlines, shadow masks, phosphor glow and the like) to contemplate.
To say nothing of trying to fix problems with the games themselves -- see all those early GBA games that were forcibly brightened to work with the unlit screen of the original GBA.