A 480 picture that's perfectly upscaled to 1080 is still only ever going to look about as good as a 480 picture. The problem with the Wii U is that its native upscaling is far from perfect, which is why it has been argued that using the TV's upscaler is superior. Regardless of how you're upscaling, the goal is only to get it to look about as good as a native 480 image.
You can find a really good list of upscaling techniques here. My guess is the Wii U's upscaling does bilinear interpolation (maybe nearest-neighbor interpolation, which would be worse). These are easy from a computational standpoint, but they are more likely to cause artifacts and jaggies than other methods. My understanding is most modern televisions use a combination of bilinear and bicubic, which looks much better.
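To make the difference concrete, here's a minimal sketch of both methods in Python with NumPy. The function names and the tiny 4x4 test image are my own toy examples, not anything pulled from the Wii U's actual scaler, and the coordinate mapping is corner-aligned rather than center-aligned to keep it short:

```python
import numpy as np

def upscale_nearest(img, factor):
    """Nearest-neighbor: copy the closest source pixel (cheap, blocky)."""
    h, w = img.shape
    ys = (np.arange(h * factor) / factor).astype(int)
    xs = (np.arange(w * factor) / factor).astype(int)
    return img[ys[:, None], xs[None, :]]

def upscale_bilinear(img, factor):
    """Bilinear: weighted average of the 4 surrounding source pixels."""
    h, w = img.shape
    # Map each output pixel back to a (fractional) source coordinate,
    # clamped so the 2x2 neighborhood stays inside the image.
    y = np.minimum(np.arange(h * factor) / factor, h - 1)
    x = np.minimum(np.arange(w * factor) / factor, w - 1)
    y0 = np.minimum(np.floor(y).astype(int), h - 2)
    x0 = np.minimum(np.floor(x).astype(int), w - 2)
    wy = (y - y0)[:, None]          # vertical blend weights in [0, 1]
    wx = (x - x0)[None, :]          # horizontal blend weights in [0, 1]
    tl = img[y0][:, x0]             # top-left neighbors
    tr = img[y0][:, x0 + 1]         # top-right
    bl = img[y0 + 1][:, x0]         # bottom-left
    br = img[y0 + 1][:, x0 + 1]     # bottom-right
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

# A hard diagonal edge: nearest-neighbor keeps the stairstep ("jaggies"),
# bilinear blends it into intermediate gray values.
src = np.triu(np.ones((4, 4)))
print(upscale_nearest(src, 2))
print(upscale_bilinear(src, 2).round(2))
```

On the toy image, the nearest-neighbor output is a pure staircase of 0s and 1s, while the bilinear output has 0.5-ish values along the edge, which is exactly the jaggies-vs-softness trade-off in play here.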
I personally cannot tell the difference when using the vWii, but I haven't spent any time comparing the two.
If your 4K TV can display the Wii U/vWii picture when you have the console set to 480, and the picture takes up the whole TV screen, then the TV is doing the upscaling. The questions are a.) Which upscaling algorithm(s) is the TV using, and b.) Are they better than what the Wii U does natively? The answers are probably a.) A mixture of bilinear and bicubic, and b.) It's probably at least a little bit better than what the Wii U does, but it might not be very noticeable.
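If you want to eyeball the difference yourself, a quick sketch using the Pillow library (pip install pillow, version 9.1 or newer for the Resampling enum) will do it. The screenshot filename is a placeholder for any 480 capture you have lying around:

```python
from PIL import Image

src = Image.open("screenshot_480.png")   # hypothetical 640x480 capture
target = (1440, 1080)                    # keeps the 4:3 shape at 1080 height

filters = {
    "nearest": Image.Resampling.NEAREST,    # blocky worst case
    "bilinear": Image.Resampling.BILINEAR,  # my guess at the Wii U's method
    "bicubic": Image.Resampling.BICUBIC,    # closer to what a decent TV does
}
for name, resample in filters.items():
    src.resize(target, resample=resample).save(f"upscaled_{name}.png")
```

Flipping between the three output files at full screen is a decent stand-in for comparing the console's scaler against the TV's, though your TV's actual algorithm is proprietary and may do extra sharpening on top.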