Just a word of caution not to get your hopes up about DLSS being much of a factor with the Switch 2.
DLSS requires a substantial amount of CPU grunt to pull off, independent of the GPU hardware. An ARM-based CPU is also going to deliver considerably less performance than even a laptop-class Intel or AMD CPU. Tremendously so.
In addition, DLSS makes heavy use of the Tensor Cores found in RTX GPUs. The scaled-down laptop variants of RTX cards already have significantly fewer Tensor Cores than the desktop variants. The desktop RTX 3050, for instance, has only 80 Tensor Cores, whereas the laptop variant of the 3050 has just 64. And the performance advantage of DLSS on such lower-end laptop hardware is significantly reduced in comparison to desktops.
Switch 2's GPU is rumored to contain 48 Tensor Cores, so it's even more stripped down than any of the lower-end laptop RTX cards. I wouldn't be surprised if DLSS ends up largely ignored by developers on this system due to not having a substantial benefit, at least for the games that target PS4-level tech. It may have more of a use in lower-end games that don't hog the resources.
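For what it's worth, those counts fall straight out of the SM counts, since every Ampere SM carries 4 Tensor Cores. A quick sketch (the 3050 figures are published specs; the T239 SM count is from the leak, not confirmed hardware):

```python
# Back-of-envelope: Ampere GPUs have 4 Tensor Cores per SM,
# so the Tensor Core count just tracks the SM count.
TENSOR_CORES_PER_SM = 4  # Ampere (3rd-gen Tensor Cores)

sm_counts = {
    "RTX 3050 desktop": 20,   # published spec
    "RTX 3050 laptop": 16,    # published spec
    "T239 (leaked)": 12,      # from the Nvidia leak, not confirmed
}

for gpu, sms in sm_counts.items():
    print(f"{gpu}: {sms} SMs -> {sms * TENSOR_CORES_PER_SM} Tensor Cores")
# RTX 3050 desktop: 20 SMs -> 80 Tensor Cores
# RTX 3050 laptop: 16 SMs -> 64 Tensor Cores
# T239 (leaked): 12 SMs -> 48 Tensor Cores
```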
Eh, the point of DLSS is to improve performance by reducing the load on the GPU: the game renders at a lower native resolution, and the tensor cores upscale the result. It doesn't require CPU power beyond general setup, same as any upscaler, such as those used in various Switch games like the XC series. But the result of a game no longer being GPU-bound is that the frame rate can be boosted, and that's where the CPU can take on a heavier load if the dev so chooses.

If a game is already CPU-bound, then DLSS won't help performance, just graphical fidelity: a dev can instead drop the resolution and add more detail into the lower-resolution render, like raytracing. It also comes with the benefit of effectively free anti-aliasing, something Nintendo's games have largely gone without for a long time.
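To put rough numbers on the GPU savings (illustrative resolutions only, nothing confirmed for Switch 2): upscaling 720p to 1440p means the GPU natively shades a quarter of the output pixels.

```python
# Illustrative only: how much native shading work DLSS-style upscaling
# removes, ignoring the (small) fixed cost of the upscale pass itself.
def pixels(w, h):
    return w * h

output = pixels(2560, 1440)    # target output resolution
internal = pixels(1280, 720)   # native render resolution

print(f"Native pixels shaded: {internal / output:.0%} of output")
# -> 25% of output: the GPU shades 1/4 the pixels per frame,
# which is why a previously GPU-bound game gains frame-rate headroom.
```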
As for the tensor cores, just because the higher-end RTX GPUs have more tensor cores than the laptop variants and what's in the T239 for Switch 2 doesn't mean DLSS is actually using them all at the same time. In fact, DLSS is on the low end of tensor core utilization, as noted by people who have tested it.
https://www.pcgamer.com/nvidias-dls...-those-ai-accelerating-tensor-cores-after-all
"Bluedot55 ran both DLSS and third party scalers on an Nvidia RTX 4090 and measured Tensor core utilisation. Looking at average Tensor core usage, the figures under DLSS were extremely low, less than 1%.
Initial investigations suggested even the peak utilisation registered in the 4-9% range, implying that while the Tensor cores were being used, they probably weren't actually essential. However, increasing the polling rate revealed that peak utilisation is in fact in excess of 90%, but only for brief periods measured in microseconds."
Microseconds, not milliseconds. Sure, that's on an RTX 4090, which has roughly 10.6x the number of SMs the T239 is leaked to have, and twice the clock frequency of what's assumed for a supposed docked mode. But even accounting for that, the utilization on Switch 2 would still be very much within limits, especially when it's not having to deal with the PC environment.

So why do these PC GPUs have so many tensor cores anyway? Well, PCs aren't just using them for games. There's a plethora of applications that can make use of them, particularly in the AI field, and those push the tensor cores far harder than DLSS ever will. The Switch 2 is just a gaming machine, so it doesn't need a lot of them if DLSS is their main purpose. As it is, Nvidia SMs have a set design, with a fixed number of CUDA cores, tensor cores, etc. Those numbers may change between architectures, but within an architecture, regardless of the power of the GPU, they stay consistent. Throwing on more SMs means more tensor cores which, for gaming, go underutilized.
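As a sanity check on that "within limits" claim, here's the back-of-envelope scaling. The T239 SM count and docked clock are leak/assumption-based, the burst length is just a stand-in for the "microseconds" figure Bluedot55 measured, and linear scaling by SMs and clock is a simplification:

```python
# Rough scaling of a DLSS tensor-core burst from an RTX 4090 to T239.
# All T239 numbers are leaked/assumed, not confirmed.
sm_ratio = 128 / 12        # 4090 SMs vs. leaked T239 SMs (~10.7x)
clock_ratio = 2.0          # ~2x clock assumed vs. docked mode, per the post

burst_us = 200             # stand-in for a microseconds-scale burst on a 4090
scaled_burst_us = burst_us * sm_ratio * clock_ratio  # ~21x longer on T239

frame_budget_us = 1_000_000 / 60   # 60 fps frame time: ~16,667 us
print(f"Scaled burst: ~{scaled_burst_us:,.0f} us "
      f"({scaled_burst_us / frame_budget_us:.0%} of a 60 fps frame)")
# ~4,267 us, i.e. about 26% of a 16.7 ms frame -- a real cost, but still
# inside the frame budget, and a docked Switch 2 would also be upscaling
# to a lower output resolution than a 4090 typically targets.
```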
So, DLSS doesn't require a hefty CPU, as it barely uses it, but the results could give devs reason to push the CPU harder for higher frame rates in what were originally GPU-bound scenarios. So what are we looking at in terms of Switch 2's CPU?
The most likely CPU is the Cortex-A78C, which ARM themselves have stated is designed for high-performance laptops. It's the only ARM CPU that fits the combination of the Nvidia leak and the Linux commits regarding the T239 in having 8 big cores on a single cluster. Compared to the Switch's A57s, per-core performance at the same clock frequency is just over 3x at minimum. It will have double the cores of what Switch has, and folks who have used Nvidia's power tools under the assumption of a TSMC 4N process node suggest a range of 1.8-2.0 GHz to stay within the realm of Switch's own CPU power budget. In total, in admittedly rough terms, that's a boost of up to 12x.

ARM is not a bad CPU architecture, at least not for the past decade. In fact, the A57s on a per-core basis have a higher IPC than the Jaguar CPUs used in the PS4/XB1 at the same clock frequency; it's just that with Switch having half the CPU cores and a lower clock, it didn't match up overall. The A78C likewise has a higher IPC, roughly 15-20% over the Zen 2 used in PS5/Series, but again, it won't be clocked as high. Having the same number of cores, though, means the gap between Switch 2 and PS5/Series is smaller than it was between Switch and PS4/XB1.
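Spelling out that "up to 12x": the 3x per-clock uplift and the 1.8-2.0 GHz range are the community estimates cited above, not confirmed specs; only Switch's 1.02 GHz A57 clock is a known figure.

```python
# Rough aggregate CPU uplift estimate for Switch 2 vs. Switch, using the
# community figures cited above (only the A57 clock is a known spec).
a57_clock_ghz = 1.02          # Switch CPU clock (known)
a78c_clock_ghz = 2.0          # upper end of the assumed 1.8-2.0 GHz range
per_clock_uplift = 3.0        # A78C vs. A57 per-core, per-clock (estimate)
core_ratio = 8 / 4            # 8 A78C cores vs. 4 A57 cores

total = per_clock_uplift * core_ratio * (a78c_clock_ghz / a57_clock_ghz)
print(f"Estimated aggregate uplift: ~{total:.1f}x")
# ~11.8x at 2.0 GHz (about 10.6x at 1.8 GHz) -- hence "up to 12x",
# setting aside that one core is likely reserved for the OS on both systems.
```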
edit:
DLSS being able to increase the frame rate by relieving GPU load is separate from frame generation, which is its own feature.