I agree with the score. Sure, the evo has more features - but it's really just building on the breakthroughs that the R4 made.
But I think if the R4 was re-reviewed it would (and should) get a slightly lower score, simply because there is now something better than it.
I also think that it would be a good idea to implement a more transparent scoring system. You could break down the score into sections, like hardware (durability etc.), DS game support, homebrew, download play, extra features (like cheats/card reader etc.), interface, reviewer's tilt and innovation. Or you could have fewer and just go with Hardware, Software, Features, Reviewer Tilt. Reviewer tilt would just be how the reviewer felt about the product: whether they felt it was worth the money, if it's innovative etc.
From those sections you would derive your final score. A support section could even be added, which starts with a base value of 5/10 (adjusted for the level of support expected) and increases or decreases depending on the actual product support and improvements.
A lot of other review sites have a system similar to this and it would stop people from complaining about bias because the components of the review score would be very clear.
For example: the evo and R4 might both get 9/10 for features and hardware; for software, the evo might get 10/10 and the R4 9/10; but for reviewer tilt/innovation, the evo gets 8/10 and the R4 10/10. Averaging those would give final scores of 9/10 for the evo and 9.25/10 for the R4. Scores would be rounded, of course, so the R4 would end up with a final rating of 9.2/10.
Later, scores would be adjusted with the support category. Because the evo just came out (and because the company is not just an upstart that nobody's heard of) it gets a 7.5/10, but the R4 would get a 10/10. Add these in to come up with the adjusted scores of evo 8.7 and R4 9.4.
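To show the arithmetic behind those numbers, here's a minimal sketch of the proposed system in Python. The section names and function names are just illustrative, not anything GBAtemp actually uses: the base score is a plain average of the sections, and the adjusted score folds the support score in as one more equally weighted section.

```python
def base_score(sections):
    """Average the per-section scores (each out of 10)."""
    return sum(sections.values()) / len(sections)

def adjusted_score(sections, support):
    """Fold the later support score in as one more equally weighted section."""
    return (sum(sections.values()) + support) / (len(sections) + 1)

# Hypothetical section scores from the example above.
evo = {"hardware": 9, "features": 9, "software": 10, "tilt": 8}
r4  = {"hardware": 9, "features": 9, "software": 9,  "tilt": 10}

print(base_score(evo))             # 9.0
print(base_score(r4))              # 9.25
print(adjusted_score(evo, 7.5))    # 8.7
print(adjusted_score(r4, 10))      # 9.4
```

Whether support should count the same as every other section is a design choice; you could just as easily give it a smaller weight so a slow launch doesn't drag down an otherwise great cart too much.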
The initial support score would change depending on the company and their reputation - but the good thing is that it is easy for people to see, and the reviewer could give a breakdown of why they gave each section the score they did.
Another modification would be that each section would have to be reviewed against what the reviewer feels is 100% perfect in that category, rather than by comparing it to the competition. So for example, if download play is only 50% working, it gets a 5/10. And if the user interface is really great, but has 1 or 2 minor flaws, it can only get a 9.9, not a 10.
Obviously, what the reviewer would class as 100% perfect would change depending on what the other cart makers are doing, but the reviewer would also need to think about what could be done in the future.
Again, even though a review is the reviewer's opinion and everyone is different, like I said above, the most important part of this is that the reader understands how the reviewer scored the product so they can make their own decisions. This is the main flaw I see with the GBAtemp system.
But remember, this site is always improving so I'm sure if the staff think there's a problem they'll try to fix it. This is just how I would do it and how I feel the score system would work best.
Once again, I agree with the score that shaun gave the evo. And if anyone else thinks like I do and feels that what I wrote above sounds good, please let me know.
P.S. Sorry for the loooong post!