Question about GPU

Discussion in 'Computer Hardware, Devices and Accessories' started by Noctosphere, Nov 19, 2015.

  1. Noctosphere
    OP

    How do you know which GPU is better than another? For example, between a GTX 760 and a GTX 770, is there an easy way to compare the two?
  2. Qtis

    Not many easy ways to compare unless you actually look at benchmarks and real-life performance between the cards in situations such as games. Many people and companies (such as PassMark) benchmark GPUs regularly, and many GPU/hardware reviews in the last few years have made a habit of running a couple of different benchmarking programs and a few games.

    (I'm not even discussing general "2GB vs 4GB of GDDR5" comparisons, since they rarely tell the whole picture.)
     
  3. RevPokemon

    And to be even more specific, it depends on what YOU are going to be playing/doing.
     
  4. Noctosphere
    OP

    Well, what I mean is, if I look at the specifications, what should tell me what makes one GPU better than another?
    CUDA cores?
    Base/boost clock?
    Texture fill rate?

    What?
     
  5. RevPokemon

    Because performance is heavily application-specific, you cannot quantitatively compare two video cards with different GPU architectures based on specs alone. Different GPU architectures scale differently with various specs such as memory speed, memory size, memory type, and bus width, and the only way to divine the scaling behavior is to look at benchmarks.

    The only exception to this is if you are comparing two cards which are "obviously" not within the same performance class: for example, comparing a 2 MB S3 ViRGE to a 4 GB Radeon R9 290. The specs are so incredibly disparate (by orders of magnitude) that it's not difficult to guess which card is most likely many generations newer and should have better performance.

    But for the two cards you list, you have to refer to benchmarks for the exact (or very similar) applications that you are going to run. Note that similarity is not just a class of applications (such as "games" or "bitcoin mining"), because some applications within the same class may be better optimized for a particular piece of hardware, or a given piece of hardware may have more mature drivers.

    That said, there are several specs you can compare if you're looking at two video cards based on the same GPU architecture. For example, see the Nvidia and AMD tables at Wikipedia.

    GPU architecture/code name

    GPU clock speeds

    • Core (MHz)
    • Shader (MHz)
    Number of shaders

    • Unified shaders
    • Texture mapping units
    • Render output units
    Memory

    • Size (MB or GB)
    • Bus type (DDR3, GDDR5, etc.)
    • Bus width (bits: e.g., 64-bit, 128-bit, 256-bit)
    • Frequency (MHz)
    Power consumption (TDP, in watts)

    There are many other specs that are derived by multiplying two or more of these basic hardware specs, such as memory bandwidth (GB/s) or processing power (GFLOPS).
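
    To make those derivations concrete, here's a quick Python sketch computing memory bandwidth and peak FP32 throughput for the two cards in question. The spec values are the GTX 760/770 reference-design figures as I remember them, so treat them as illustrative and double-check them before relying on this.

    # Deriving memory bandwidth and peak FP32 throughput from basic specs.
    # Spec values are reference GTX 760/770 figures quoted from memory;
    # illustrative only, not authoritative.
    cards = {
        "GTX 760": {"shaders": 1152, "core_mhz": 980,  "mem_mhz_eff": 6008, "bus_bits": 256},
        "GTX 770": {"shaders": 1536, "core_mhz": 1046, "mem_mhz_eff": 7010, "bus_bits": 256},
    }

    for name, s in cards.items():
        # Bandwidth: effective memory clock times bus width in bytes.
        bandwidth_gbs = s["mem_mhz_eff"] * (s["bus_bits"] / 8) / 1000
        # Peak FP32: 2 ops per shader per clock (fused multiply-add).
        gflops = 2 * s["shaders"] * s["core_mhz"] / 1000
        print(f"{name}: {bandwidth_gbs:.0f} GB/s, {gflops:.0f} GFLOPS")

    That prints roughly 192 GB/s and 2258 GFLOPS for the 760 versus 224 GB/s and 3213 GFLOPS for the 770. Those peak numbers only bound what the hardware could do; they say nothing about real-world frame rates.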

    Generally speaking, if you're looking for better performance, you want the newest architecture and the highest numbers for all other specs except TDP. (But sometimes one manufacturer's technology may lag behind that of a competitor.)

    Again, performance can be highly application-dependent, as you'll notice if you look at the results from any comprehensive benchmark suite for two different cards (especially ones with different architectures). While one application may benefit from larger memory, another might perform better with greater bandwidth (memory frequency, bus type, and bus width) or processing power (core frequency and number of shaders). Depending on your application, improving certain specs may not yield any performance gain whatsoever.

    As if divining computational performance weren't already difficult enough, most people also factor in cost, which may include not only the initial purchase price but also power consumption.
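
    For instance, here's a back-of-the-envelope Python sketch that folds electricity into the purchase price. Every input value is a made-up placeholder (the wattages just happen to match the 760/770 reference TDPs), so plug in your own numbers.

    # Rough total cost of ownership: purchase price plus electricity.
    def total_cost(price_usd, tdp_watts, hours_per_day, years, usd_per_kwh=0.12):
        kwh = tdp_watts / 1000 * hours_per_day * 365 * years
        return price_usd + kwh * usd_per_kwh

    # Hypothetical $250 card at 170 W vs. $330 card at 230 W,
    # gaming 3 hours a day for 3 years:
    print(round(total_cost(250, 170, 3, 3)))  # ~317
    print(round(total_cost(330, 230, 3, 3)))  # ~421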
     
  6. marcus134

    Nothing or everything, it doesn't really matter.

    Both cards are from the same generation (as denoted by the leading 7 in "7x0").
    However, the 770 is a higher-tier card and is more powerful than the 760 (because 7 is bigger than 6, and that's how Nvidia decided their numbering scheme works).
    Because they're both from the same generation, they're not meant to compete with each other but to each fill their own market segment, so the only reasons not to take the better one are its power requirements or its price.
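
    If it helps to see that decoding spelled out, here's a toy Python sketch. It's a simplification: it only covers "GTX xy0"-style names from roughly the 400 to 900 desktop series and ignores suffixes like Ti.

    import re

    # Toy decoder: in "GTX xy0", x is the generation and y is the
    # performance tier within that generation (simplified scheme).
    def decode(model):
        m = re.search(r"(\d)(\d)0$", model)
        if not m:
            raise ValueError(f"unrecognized model name: {model}")
        return int(m.group(1)), int(m.group(2))

    gen_760, tier_760 = decode("GTX 760")
    gen_770, tier_770 = decode("GTX 770")
    print(gen_760 == gen_770)   # True: same generation (700 series)
    print(tier_770 > tier_760)  # True: the 770 sits in a higher tier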

    If you want a direct comparison of both cards: http://www.tomshardware.com/charts/2015-vga-charts/compare,3667.html?prod[7243]=on&prod[7244]=on
    (Take note that those are reference designs, and overclocked cards are extremely common.)

    Also, as both of these cards are no longer available for sale, you might want to check Tom's Hardware's Best Graphics Card for the Money (October).

    For quicker reference, you may also check Tom's Hardware's GPU hierarchy chart (GTX 950 absent).