Discussion in 'GBAtemp & Scene News' started by Chary, Apr 14, 2017.
I don't know what to believe anymore.
My link was just for the launch week; this thread is for total NA sales in March, and that link is for total worldwide sales in March lol.
However, Maxwell is still 50% more efficient than GCN.
What good is that if you're dealing with a third of the compute performance? You can't conjure calculations out of a hat.
I know; however, it should be more like 500 GFLOPS in comparison to GCN, even with lower clocks.
That's not how efficiency works. I don't know why you're bringing GCN into the conversation when official Nvidia documentation lists the X1 at 1TFLOP FP16/512GFLOPS FP32 @ 1GHz and the shader-based math supports that. Those are the peak figures in the two categories; it doesn't go higher than that without overclocking.
The reason for the efficiency difference in real-life applications is that, to my knowledge, the baseline GCN used in the old PS4 cannot perform FP16 calculations - it doesn't support half-floats, so all calculations are treated as FP32 at the silicon level and take up the same amount of resources as full floats regardless of the intended precision. The X1 can, so lower-precision calculations are performed quicker. This was corrected in the PS4 Pro, which includes customised GCN borrowing features from Polaris and can perform two FP16 calculations in place of one FP32. If we're talking in those terms, the Pro is an 8TFLOP+ machine, which we both know is false, or at least misleading.
If the Switch is a 1TFLOP machine then we have to automatically treat the PS4 Pro as an 8.4TFLOP machine, except we both know that's ridiculous - in real life we mostly use full precision and half precision support only increases efficiency if half-floats are used.
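The shader-based math behind both figures is easy to check. As a sketch (peak = cores × 2 FLOPs per fused multiply-add per cycle × clock; the PS4 Pro numbers assume its commonly cited 2304 stream processors at 911 MHz, which are not stated in this thread):

```python
def peak_gflops(cores, clock_ghz, fp16_rate=1.0):
    """Theoretical peak throughput in GFLOPS.

    cores * 2 FLOPs per core per cycle (one FMA) * clock in GHz.
    fp16_rate=2.0 models double-rate packed FP16 (Tegra X1, PS4 Pro).
    """
    return cores * 2 * clock_ghz * fp16_rate

# Tegra X1: 256 CUDA cores @ 1 GHz
x1_fp32 = peak_gflops(256, 1.0)           # 512 GFLOPS FP32
x1_fp16 = peak_gflops(256, 1.0, 2.0)      # 1024 GFLOPS, i.e. ~1 TFLOP FP16

# PS4 Pro: 2304 stream processors @ 911 MHz (assumed figures)
pro_fp32 = peak_gflops(2304, 0.911)       # ~4198 GFLOPS, i.e. ~4.2 TFLOPS FP32
pro_fp16 = peak_gflops(2304, 0.911, 2.0)  # ~8.4 TFLOPS FP16
```

Which is exactly where the "8.4TFLOP" figure comes from - it's the same silicon counted at half precision, not extra raw performance.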
I brought GCN into this because both the PS4 and XBONE use GCN, so architectural differences should be accounted for too, not just raw numbers. That's why a 4TFLOP Pascal GPU still outperforms its 4.5TFLOP Polaris equivalent.
You're pulling a number out of your ass, though; you have no actual data to support it and no idea how much raw compute power is wasted on the Switch, since no actual benchmarks exist. We're talking about two custom machines, so we can only talk about silicon; the rest is pure conjecture. That kind of performance difference wouldn't be counted in TFLOPS anyways - the TFLOPS stay the same; it's how efficiently the math is used in rendering and physics that's the subject here. You're confusing raw performance with algorithms.
... I might be. The X1 is pretty off the shelf and there are quite a lot of benchmarks of it, and I've seen a few simulated PS4 benchmarks based on similarly specced PCs using comparable GPUs, but that's not the point.
Also, that's what I was talking about when I said efficiency.
Anyway, we've digressed enough, I think.
Sorry about that, the subject was beaten to death indeed. My point was that the gap is significant and it'll only grow wider as the generation goes on, which can be an issue. Just the fact that the X1 in the Switch is downclocked widens it past the point of any architectural gains, but that's neither here nor there - it can be optimised for if it succeeds as a platform. If games have to run at 720p 30FPS, that's just life - I was never a resolution buff anyways.
Dude, I'm not disagreeing with you. If this were a perfect world, the Switch would have A72s instead of the far inferior A57s, but it's not. I feel Nintendo rushed the Switch to market.
Is this enough to get through to the third parties' thick heads... I bet it's not. It has to be the power of the Scorpio or they won't touch it. They really have no excuse when a dev kit is $500.
I'm inclined to believe that, as the system has some glaring design mistakes that are blatantly obvious even to someone with no engineering background. For instance, if you want to market the device by stressing the set-top feature and the integrated kickstand, you probably don't want to put the charger port on the bottom of the system, where it's inaccessible, so the console can't be charged while propped up. It's a bunch of little things that seem bizarre to me.
I could bet my house that they'll release a revision in the vein of the Xbox One S (4K capabilities, additional charging port, etc.).
Is it really such an inconvenience, though? Granted, they could very well have placed it on the top and the vents on the bottom... So it'd be upside down while docked, not that it matters.
Or you know, just had two ports.
Why waste that much circuitry?
Try taking a class on economics and learn how supply and demand really works.
I wouldn't consider it a waste. That USB-C port could've been used for more than just charging.
Of course they will, Nintendo invented pointless revisions. Taking one look at the 3/DS line makes the future of the Switch blatantly obvious, provided the system is successfully adopted.
Yes, because the port placement makes it impossible to use. The vent is placed correctly, but the USB-C port is not - it should be on the top, right next to the vent, where it'd always be accessible, while the bottom should be occupied by a proprietary docking rail specifically designed for the Switch dock for proper seating, instead of the wobbly nonsense that was implemented in the final unit. The heatsink and vent placement matters because of physics - heat rises, so the sink and vent need to be as high as possible to keep heat from affecting the case and other components. It's placed correctly as it is and shouldn't be moved.
It's a couple traces and a port that costs less than a dollar.